

# Compute
<a name="compute-pattern-list"></a>

**Topics**
+ [Containers & microservices](containersandmicroservices-pattern-list.md)
+ [Serverless](serverless-pattern-list.md)
+ [Networking](networking-pattern-list.md)
+ [Content delivery](contentdelivery-pattern-list.md)

# Containers & microservices
<a name="containersandmicroservices-pattern-list"></a>

**Topics**
+ [Access an Amazon Neptune database from an Amazon EKS container](access-amazon-neptune-database-from-amazon-eks-container.md)
+ [Access container applications privately on Amazon ECS by using AWS PrivateLink and a Network Load Balancer](access-container-applications-privately-on-amazon-ecs-by-using-aws-privatelink-and-a-network-load-balancer.md)
+ [Access container applications privately on Amazon ECS by using AWS Fargate, AWS PrivateLink, and a Network Load Balancer](access-container-applications-privately-on-amazon-ecs-by-using-aws-fargate-aws-privatelink-and-a-network-load-balancer.md)
+ [Access container applications privately on Amazon EKS using AWS PrivateLink and a Network Load Balancer](access-container-applications-privately-on-amazon-eks-using-aws-privatelink-and-a-network-load-balancer.md)
+ [Automate backups for Amazon RDS for PostgreSQL DB instances by using AWS Batch](automate-backups-for-amazon-rds-for-postgresql-db-instances-by-using-aws-batch.md)
+ [Automate deployment of Node Termination Handler in Amazon EKS by using a CI/CD pipeline](automate-deployment-of-node-termination-handler-in-amazon-eks-by-using-a-ci-cd-pipeline.md)
+ [Automatically build and deploy a Java application to Amazon EKS using a CI/CD pipeline](automatically-build-and-deploy-a-java-application-to-amazon-eks-using-a-ci-cd-pipeline.md)
+ [Copy Amazon ECR container images across AWS accounts and AWS Regions](copy-ecr-container-images-across-accounts-regions.md)
+ [Create an Amazon ECS task definition and mount a file system on EC2 instances using Amazon EFS](create-an-amazon-ecs-task-definition-and-mount-a-file-system-on-ec2-instances-using-amazon-efs.md)
+ [Deploy Lambda functions with container images](deploy-lambda-functions-with-container-images.md)
+ [Deploy Java microservices on Amazon ECS using AWS Fargate](deploy-java-microservices-on-amazon-ecs-using-aws-fargate.md)
+ [Deploy Kubernetes resources and packages using Amazon EKS and a Helm chart repository in Amazon S3](deploy-kubernetes-resources-and-packages-using-amazon-eks-and-a-helm-chart-repository-in-amazon-s3.md)
+ [Deploy a CockroachDB cluster in Amazon EKS by using Terraform](deploy-cockroachdb-on-eks-using-terraform.md)
+ [Deploy a sample Java microservice on Amazon EKS and expose the microservice using an Application Load Balancer](deploy-a-sample-java-microservice-on-amazon-eks-and-expose-the-microservice-using-an-application-load-balancer.md)
+ [Deploy a gRPC-based application on an Amazon EKS cluster and access it with an Application Load Balancer](deploy-a-grpc-based-application-on-an-amazon-eks-cluster-and-access-it-with-an-application-load-balancer.md)
+ [Deploy containerized applications on AWS IoT Greengrass V2 running as a Docker container](deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container.md)
+ [Deploy containers by using Elastic Beanstalk](deploy-containers-by-using-elastic-beanstalk.md)
+ [Generate a static outbound IP address using a Lambda function, Amazon VPC, and a serverless architecture](generate-a-static-outbound-ip-address-using-a-lambda-function-amazon-vpc-and-a-serverless-architecture.md)
+ [Identify duplicate container images automatically when migrating to an Amazon ECR repository](identify-duplicate-container-images-automatically-when-migrating-to-ecr-repository.md)
+ [Install SSM Agent on Amazon EKS worker nodes by using Kubernetes DaemonSet](install-ssm-agent-on-amazon-eks-worker-nodes-by-using-kubernetes-daemonset.md)
+ [Install the SSM Agent and CloudWatch agent on Amazon EKS worker nodes using preBootstrapCommands](install-the-ssm-agent-and-cloudwatch-agent-on-amazon-eks-worker-nodes-using-prebootstrapcommands.md)
+ [Migrate NGINX Ingress Controllers when enabling Amazon EKS Auto Mode](migrate-nginx-ingress-controller-eks-auto-mode.md)
+ [Migrate your container workloads from Azure Red Hat OpenShift (ARO) to Red Hat OpenShift Service on AWS (ROSA)](migrate-container-workloads-from-aro-to-rosa.md)
+ [Run Amazon ECS tasks on Amazon WorkSpaces with Amazon ECS Anywhere](run-amazon-ecs-tasks-on-amazon-workspaces-with-amazon-ecs-anywhere.md)
+ [Run an ASP.NET Core web API Docker container on an Amazon EC2 Linux instance](run-an-asp-net-core-web-api-docker-container-on-an-amazon-ec2-linux-instance.md)
+ [Run stateful workloads with persistent data storage by using Amazon EFS on Amazon EKS with AWS Fargate](run-stateful-workloads-with-persistent-data-storage-by-using-amazon-efs-on-amazon-eks-with-aws-fargate.md)
+ [Set up event-driven auto scaling in Amazon EKS by using Amazon EKS Pod Identity and KEDA](event-driven-auto-scaling-with-eks-pod-identity-and-keda.md)
+ [Streamline PostgreSQL deployments on Amazon EKS by using PGO](streamline-postgresql-deployments-amazon-eks-pgo.md)
+ [Simplify application authentication with mutual TLS in Amazon ECS by using Application Load Balancer](simplify-application-authentication-with-mutual-tls-in-amazon-ecs.md)
+ [More patterns](containersandmicroservices-more-patterns-pattern-list.md)

# Access an Amazon Neptune database from an Amazon EKS container
<a name="access-amazon-neptune-database-from-amazon-eks-container"></a>

*Ramakrishnan Palaninathan, Amazon Web Services*

## Summary
<a name="access-amazon-neptune-database-from-amazon-eks-container-summary"></a>

This pattern establishes a connection between Amazon Neptune, which is a fully managed graph database, and Amazon Elastic Kubernetes Service (Amazon EKS), a container orchestration service, to access a Neptune database. Neptune DB clusters are confined within a virtual private cloud (VPC) on AWS. For this reason, accessing Neptune requires careful configuration of the VPC to enable connectivity.

Unlike Amazon Relational Database Service (Amazon RDS) for PostgreSQL, Neptune doesn't rely on typical database access credentials. Instead, it uses AWS Identity and Access Management (IAM) roles for authentication. Therefore, connecting to Neptune from Amazon EKS involves setting up an IAM role with the necessary permissions to access Neptune.

Furthermore, Neptune endpoints are accessible only within the VPC where the cluster resides. This means that you have to configure network settings to facilitate communication between Amazon EKS and Neptune. Depending on your specific requirements and networking preferences, there are [various approaches to configuring the VPC](https://docs.aws.amazon.com/neptune/latest/userguide/get-started-vpc.html) to enable seamless connectivity between Neptune and Amazon EKS. Each method offers distinct advantages and considerations, which provide flexibility in designing your database architecture to suit your application's needs.

## Prerequisites and limitations
<a name="access-amazon-neptune-database-from-amazon-eks-container-prereqs"></a>

**Prerequisites**
+ Install the latest version of **kubectl** (see [instructions](https://kubernetes.io/docs/tasks/tools/#kubectl)). To check your version, run: 

  ```
  kubectl version --client
  ```
+ Install the latest version of **eksctl** (see [instructions](https://eksctl.io/installation/)). To check your version, run: 

  ```
  eksctl info
  ```
+ Install the latest version of the AWS Command Line Interface (AWS CLI) version 2 (see [instructions](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html)). To check your version, run: 

  ```
  aws --version
  ```
+ Create a Neptune DB cluster (see [instructions](https://docs.aws.amazon.com/neptune/latest/userguide/get-started-cfn-create.html)). Make sure to establish communications between the cluster's VPC and Amazon EKS through [VPC peering](https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html), [AWS Transit Gateway](https://docs.aws.amazon.com/vpc/latest/tgw/tgw-getting-started.html), or another method. Also make sure that the status of the cluster is "available" and that it has an inbound rule on port 8182 for the security group.
+ Configure an IAM OpenID Connect (OIDC) provider on an existing Amazon EKS cluster (see [instructions](https://docs.aws.amazon.com/eks/latest/userguide/enable-iam-roles-for-service-accounts.html)).
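
To confirm the Neptune prerequisites from the AWS CLI, you can query the cluster's status and its attached security groups. The following is a sketch; the cluster identifier `db-neptune-1` and the Region are placeholders to replace with your own values.

```shell
# Placeholder values -- replace with your cluster identifier and Region.
CLUSTER_ID="db-neptune-1"
REGION="us-west-2"

# The cluster status should be "available".
aws neptune describe-db-clusters \
  --region "$REGION" \
  --db-cluster-identifier "$CLUSTER_ID" \
  --query "DBClusters[0].Status" --output text

# List the security groups attached to the cluster, then confirm that
# each one has an inbound rule that allows TCP traffic on port 8182.
aws neptune describe-db-clusters \
  --region "$REGION" \
  --db-cluster-identifier "$CLUSTER_ID" \
  --query "DBClusters[0].VpcSecurityGroups[].VpcSecurityGroupId" --output text
```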

**Product versions**
+ [Amazon EKS 1.27](https://docs.aws.amazon.com/eks/latest/userguide/kubernetes-versions.html)
+ [Amazon Neptune engine version 1.3.0.0 (2023-11-15)](https://docs.aws.amazon.com/neptune/latest/userguide/engine-releases-1.3.0.0.html)

## Architecture
<a name="access-amazon-neptune-database-from-amazon-eks-container-architecture"></a>

The following diagram shows the connection between Kubernetes pods in an Amazon EKS cluster and Neptune to provide access to a Neptune database.

![\[Connecting pods in a Kubernetes node with Amazon Neptune.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/2fcf9e00-1664-462a-825e-b0fdd962f478/images/86da67e5-340e-4b29-acc6-2da416ce57eb.png)


**Automation and scale**

You can use the Amazon EKS [Horizontal Pod Autoscaler](https://docs.aws.amazon.com/eks/latest/userguide/horizontal-pod-autoscaler.html) to scale this solution.

## Tools
<a name="access-amazon-neptune-database-from-amazon-eks-container-tools"></a>

**Services**
+ [Amazon Elastic Kubernetes Service (Amazon EKS)](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html) helps you run Kubernetes on AWS without needing to install or maintain your own Kubernetes control plane or nodes.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.
+ [Amazon Neptune](https://docs.aws.amazon.com/neptune/latest/userguide/intro.html) is a graph database service that helps you build and run applications that work with highly connected datasets.

## Best practices
<a name="access-amazon-neptune-database-from-amazon-eks-container-best-practices"></a>

For best practices, see [Identity and Access Management](https://aws.github.io/aws-eks-best-practices/security/docs/iam/) in the *Amazon EKS Best Practices Guides*.

## Epics
<a name="access-amazon-neptune-database-from-amazon-eks-container-epics"></a>

### Set environment variables
<a name="set-environment-variables"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Verify the cluster context. | Before you interact with your Amazon EKS cluster by using Helm or other command-line tools, you must define environment variables that encapsulate your cluster's details. These variables are used in subsequent commands to ensure that they target the correct cluster and resources. First, confirm that you are operating within the correct cluster context. This ensures that any subsequent commands are sent to the intended Kubernetes cluster. To verify the current context, run the following command.<pre>kubectl config current-context</pre> | AWS administrator, Cloud administrator | 
| Define the `CLUSTER_NAME` variable. | Define the `CLUSTER_NAME` environment variable for your Amazon EKS cluster. In the following command, replace the sample value `us-west-2` with the correct AWS Region for your cluster. Replace the sample value `eks-workshop` with your existing cluster name.<pre>export CLUSTER_NAME=$(aws eks describe-cluster --region us-west-2 --name eks-workshop --query "cluster.name" --output text)</pre> | AWS administrator, Cloud administrator | 
| Validate output. | To validate that the variable has been set properly, run the following command.<pre>echo $CLUSTER_NAME</pre>Verify that the output of this command matches the input you specified in the previous step. | AWS administrator, Cloud administrator | 
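
If you also plan to call the Neptune endpoint from the command line in later steps, it can be convenient to export it the same way. This is a sketch; the cluster identifier `db-neptune-1` and the Region are placeholders.

```shell
# Placeholder values -- replace with your Neptune cluster ID and Region.
export NEPTUNE_ENDPOINT=$(aws neptune describe-db-clusters \
  --region us-west-2 \
  --db-cluster-identifier db-neptune-1 \
  --query "DBClusters[0].Endpoint" --output text)

# Verify that the endpoint was captured.
echo $NEPTUNE_ENDPOINT
```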

### Create IAM role and associate it with Kubernetes
<a name="create-iam-role-and-associate-it-with-kubernetes"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a service account. | You use [IAM roles for service accounts](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html?sc_channel=el&sc_campaign=appswave&sc_content=eks-integrate-secrets-manager&sc_geo=mult&sc_country=mult&sc_outcome=acq) to map your Kubernetes service accounts to IAM roles, to enable fine-grained permissions management for your applications that run on Amazon EKS. You can use [eksctl](https://eksctl.io/) to create and associate an IAM role with a specific Kubernetes service account within your Amazon EKS cluster. The AWS managed policy `NeptuneFullAccess` allows read and write access to your specified Neptune cluster. You must have an [OIDC endpoint](https://docs.aws.amazon.com/eks/latest/userguide/enable-iam-roles-for-service-accounts.html?sc_channel=el&sc_campaign=appswave&sc_content=eks-integrate-secrets-manager&sc_geo=mult&sc_country=mult&sc_outcome=acq) associated with your cluster before you run these commands. Create a service account and associate it with the AWS managed policy named `NeptuneFullAccess`.<pre>eksctl create iamserviceaccount --name eks-neptune-sa --namespace default --cluster $CLUSTER_NAME --attach-policy-arn arn:aws:iam::aws:policy/NeptuneFullAccess --approve --override-existing-serviceaccounts</pre>where `eks-neptune-sa` is the name of the service account that you want to create. Upon completion, this command displays the following response:<pre>2024-02-07 01:12:39 [ℹ] created serviceaccount "default/eks-neptune-sa"</pre> | AWS administrator, Cloud administrator | 
| Verify that the account is set up properly. | Make sure that the `eks-neptune-sa` service account is set up correctly in the default namespace in your cluster.<pre>kubectl get sa eks-neptune-sa -o yaml</pre>The output should look like this:<pre>apiVersion: v1<br />kind: ServiceAccount<br />metadata:<br />  annotations:<br />    eks.amazonaws.com/role-arn: arn:aws:iam::123456789123:role/eksctl-eks-workshop-addon-iamserviceaccount-d-Role1-Q35yKgdQOlmM<br />  creationTimestamp: "2024-02-07T01:12:39Z"<br />  labels:<br />    app.kubernetes.io/managed-by: eksctl<br />  name: eks-neptune-sa<br />  namespace: default<br />  resourceVersion: "5174750"<br />  uid: cd6ba2f7-a0f5-40e1-a6f4-4081e0042316</pre> | AWS administrator, Cloud administrator | 
| Check connectivity. | Deploy a sample pod called `pod-util` and check connectivity with Neptune. Save the following manifest as `pod-util.yaml`.<pre>apiVersion: v1<br />kind: Pod<br />metadata:<br />  name: pod-util<br />  namespace: default<br />spec:<br />  serviceAccountName: eks-neptune-sa<br />  containers:<br />  - name: pod-util<br />    image: public.ecr.aws/patrickc/troubleshoot-util<br />    command:<br />      - sleep<br />      - "3600"<br />    imagePullPolicy: IfNotPresent</pre>Apply the manifest.<pre>kubectl apply -f pod-util.yaml</pre>Then open a shell in the pod and send a test query to the Neptune endpoint.<pre>kubectl exec --stdin --tty pod-util -- /bin/bash<br />bash-5.1# curl -X POST -d '{"gremlin":"g.V().limit(1)"}' https://db-neptune-1.cluster-xxxxxxxxxxxx.us-west-2.neptune.amazonaws.com:8182/gremlin<br />{"requestId":"a4964f2d-12b1-4ed3-8a14-eff511431a0e","status":{"message":"","code":200,"attributes":{"@type":"g:Map","@value":[]}},"result":{"data":{"@type":"g:List","@value":[]},"meta":{"@type":"g:Map","@value":[]}}}<br />bash-5.1# exit<br />exit</pre> | AWS administrator, Cloud administrator | 

### Validate connection activity
<a name="validate-connection-activity"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Enable IAM database authentication. | By default, IAM database authentication is disabled when you create a Neptune DB cluster. You can enable or disable IAM database authentication by using the AWS Management Console. Follow the steps in the AWS documentation to [enable IAM database authentication in Neptune](https://docs.aws.amazon.com/neptune/latest/userguide/iam-auth-enable.html). | AWS administrator, Cloud administrator | 
| Verify connections. | In this step, you interact with the `pod-util` container, which is already in running status, to install **awscurl** and verify the connection.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-amazon-neptune-database-from-amazon-eks-container.html) | AWS administrator, Cloud administrator | 

## Troubleshooting
<a name="access-amazon-neptune-database-from-amazon-eks-container-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| Can't access the Neptune database. | Review the IAM policy that's attached to the service account. Make sure that it allows the necessary actions (for example, `neptune-db:connect`) for the operations you want to run. | 

## Related resources
<a name="access-amazon-neptune-database-from-amazon-eks-container-resources"></a>
+ [Grant Kubernetes workloads access to AWS using Kubernetes Service Accounts](https://docs.aws.amazon.com/eks/latest/userguide/service-accounts.html) (Amazon EKS documentation)
+ [IAM roles for service accounts](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html) (Amazon EKS documentation)
+ [Creating a new Neptune DB cluster](https://docs.aws.amazon.com/neptune/latest/userguide/get-started-create-cluster.html) (Amazon Neptune documentation)

# Access container applications privately on Amazon ECS by using AWS PrivateLink and a Network Load Balancer
<a name="access-container-applications-privately-on-amazon-ecs-by-using-aws-privatelink-and-a-network-load-balancer"></a>

*Kirankumar Chandrashekar, Amazon Web Services*

## Summary
<a name="access-container-applications-privately-on-amazon-ecs-by-using-aws-privatelink-and-a-network-load-balancer-summary"></a>

This pattern describes how to privately host a Docker container application on Amazon Elastic Container Service (Amazon ECS) behind a Network Load Balancer, and access the application by using AWS PrivateLink. You can then use a private network to securely access services on the Amazon Web Services (AWS) Cloud. Amazon Relational Database Service (Amazon RDS) hosts the relational database for the application running on Amazon ECS with high availability (HA). Amazon Elastic File System (Amazon EFS) is used if the application requires persistent storage.

The Amazon ECS service running the Docker applications, with a Network Load Balancer at the front end, can be associated with a virtual private cloud (VPC) endpoint for access through AWS PrivateLink. This VPC endpoint service can then be shared with other VPCs by using their VPC endpoints.

You can also use [AWS Fargate](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/AWS_Fargate.html) instead of an Amazon EC2 Auto Scaling group. For more information, see [Access container applications privately on Amazon ECS by using AWS Fargate, AWS PrivateLink, and a Network Load Balancer](https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-container-applications-privately-on-amazon-ecs-by-using-aws-fargate-aws-privatelink-and-a-network-load-balancer.html?did=pg_card).

## Prerequisites and limitations
<a name="access-container-applications-privately-on-amazon-ecs-by-using-aws-privatelink-and-a-network-load-balancer-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ [AWS Command Line Interface (AWS CLI) version 2](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html), installed and configured on Linux, macOS, or Windows
+ [Docker](https://www.docker.com/), installed and configured on Linux, macOS, or Windows
+ An application running on Docker

## Architecture
<a name="access-container-applications-privately-on-amazon-ecs-by-using-aws-privatelink-and-a-network-load-balancer-architecture"></a>

![\[Using AWS PrivateLink to access a container app on Amazon ECS behind a Network Load Balancer.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/a316bf46-24db-4514-957d-abc60f8f6962/images/573951ed-74bb-4023-9d9c-43e77e4f8eda.png)



**Technology stack**
+ Amazon CloudWatch
+ Amazon Elastic Compute Cloud (Amazon EC2)
+ Amazon EC2 Auto Scaling
+ Amazon Elastic Container Registry (Amazon ECR)
+ Amazon ECS
+ Amazon RDS
+ Amazon Simple Storage Service (Amazon S3)
+ AWS Lambda
+ AWS PrivateLink
+ AWS Secrets Manager
+ Application Load Balancer
+ Network Load Balancer
+ VPC

**Automation and scale**
+ You can use [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) to create this pattern by using [infrastructure as code](https://docs.aws.amazon.com/whitepapers/latest/introduction-devops-aws/infrastructure-as-code.html).

## Tools
<a name="access-container-applications-privately-on-amazon-ecs-by-using-aws-privatelink-and-a-network-load-balancer-tools"></a>
+ [Amazon EC2](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html) – Amazon Elastic Compute Cloud (Amazon EC2) provides scalable computing capacity in the AWS Cloud.
+ [Amazon EC2 Auto Scaling](https://docs.aws.amazon.com/autoscaling/ec2/userguide/what-is-amazon-ec2-auto-scaling.html) – Amazon EC2 Auto Scaling helps you ensure that you have the correct number of Amazon EC2 instances available to handle the load for your application.
+ [Amazon ECS](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html) – Amazon Elastic Container Service (Amazon ECS) is a highly scalable, fast, container management service that makes it easy to run, stop, and manage containers on a cluster.
+ [Amazon ECR](https://docs.aws.amazon.com/AmazonECR/latest/userguide/what-is-ecr.html) – Amazon Elastic Container Registry (Amazon ECR) is a managed AWS container image registry service that is secure, scalable, and reliable.
+ [Amazon EFS](https://docs.aws.amazon.com/efs/latest/ug/whatisefs.html) – Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) – Lambda is a compute service for running code without provisioning or managing servers.
+ [Amazon RDS](https://docs.aws.amazon.com/rds/) – Amazon Relational Database Service (Amazon RDS) is a web service that makes it easier to set up, operate, and scale a relational database in the AWS Cloud.
+ [Amazon S3](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) – Amazon Simple Storage Service (Amazon S3) is storage for the internet. It is designed to make web-scale computing easier for developers.
+ [AWS Secrets Manager](https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html) – Secrets Manager helps you replace hardcoded credentials in your code, including passwords, by providing an API call to Secrets Manager to retrieve the secret programmatically.
+ [Amazon VPC](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html) – Amazon Virtual Private Cloud (Amazon VPC) helps you launch AWS resources into a virtual network that you've defined.
+ [Elastic Load Balancing](https://docs.aws.amazon.com/elasticloadbalancing/latest/userguide/what-is-load-balancing.html) – Elastic Load Balancing distributes incoming application or network traffic across multiple targets, such as Amazon EC2 instances, containers, and IP addresses, in multiple Availability Zones.
+ [Docker](https://www.docker.com/) – Docker helps developers to pack, ship, and run any application as a lightweight, portable, and self-sufficient container.

## Epics
<a name="access-container-applications-privately-on-amazon-ecs-by-using-aws-privatelink-and-a-network-load-balancer-epics"></a>

### Create networking components
<a name="create-networking-components"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a VPC. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-container-applications-privately-on-amazon-ecs-by-using-aws-privatelink-and-a-network-load-balancer.html) | Cloud administrator | 

### Create the load balancers
<a name="create-the-load-balancers"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a Network Load Balancer.  | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-container-applications-privately-on-amazon-ecs-by-using-aws-privatelink-and-a-network-load-balancer.html) | Cloud administrator | 
| Create an Application Load Balancer. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-container-applications-privately-on-amazon-ecs-by-using-aws-privatelink-and-a-network-load-balancer.html) | Cloud administrator | 

### Create an Amazon EFS file system
<a name="create-an-amazon-efs-file-system"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an Amazon EFS file system. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-container-applications-privately-on-amazon-ecs-by-using-aws-privatelink-and-a-network-load-balancer.html) | Cloud administrator | 
| Mount targets for the subnets. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-container-applications-privately-on-amazon-ecs-by-using-aws-privatelink-and-a-network-load-balancer.html) | Cloud administrator | 
| Verify that the subnets are mounted as targets.  | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-container-applications-privately-on-amazon-ecs-by-using-aws-privatelink-and-a-network-load-balancer.html) | Cloud administrator | 

### Create an S3 bucket
<a name="create-an-s3-bucket"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an S3 bucket.  | Open the Amazon S3 console and create an S3 bucket to store your application’s static assets, if required. | Cloud administrator | 

### Create a Secrets Manager secret
<a name="create-a-secrets-manager-secret"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an AWS KMS key to encrypt the Secrets Manager secret. | Open the AWS Key Management Service (AWS KMS) console and create a KMS key. | Cloud administrator | 
|  Create a Secrets Manager secret to store the Amazon RDS password. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-container-applications-privately-on-amazon-ecs-by-using-aws-privatelink-and-a-network-load-balancer.html) | Cloud Administrator  | 

### Create an Amazon RDS instance
<a name="create-an-amazon-rds-instance"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a DB subnet group.  | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-container-applications-privately-on-amazon-ecs-by-using-aws-privatelink-and-a-network-load-balancer.html) | Cloud administrator | 
| Create an Amazon RDS instance. | Create and configure an Amazon RDS instance within the private subnets. Make sure that **Multi-AZ** is turned on for HA. | Cloud administrator | 
| Load data to the Amazon RDS instance.  | Load the relational data required by your application into your Amazon RDS instance. This process will vary depending on your application's needs, as well as how your database schema is defined and designed. | Cloud administrator, DBA | 

### Create the Amazon ECS components
<a name="create-the-amazon-ecs-components"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an ECS cluster. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-container-applications-privately-on-amazon-ecs-by-using-aws-privatelink-and-a-network-load-balancer.html) | Cloud administrator | 
| Create the Docker images.  | Create the Docker images by following the instructions in the *Related resources* section. | Cloud administrator | 
| Create Amazon ECR repositories. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-container-applications-privately-on-amazon-ecs-by-using-aws-privatelink-and-a-network-load-balancer.html) | Cloud administrator, DevOps engineer | 
| Authenticate your Docker client for the Amazon ECR repository.  | To authenticate your Docker client for the Amazon ECR repository, run the `aws ecr get-login-password` command in the AWS CLI. | Cloud administrator | 
| Push the Docker images to the Amazon ECR repository.  | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-container-applications-privately-on-amazon-ecs-by-using-aws-privatelink-and-a-network-load-balancer.html) | Cloud administrator | 
| Create an Amazon ECS task definition.  | A task definition is required to run Docker containers in Amazon ECS. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-container-applications-privately-on-amazon-ecs-by-using-aws-privatelink-and-a-network-load-balancer.html) For help with setting up your task definition, see "Creating a task definition" in the *Related resources* section. Make sure you provide the Docker images that you pushed to Amazon ECR. | Cloud administrator | 
| Create an Amazon ECS service.  | Create an Amazon ECS service by using the ECS cluster you created earlier. Make sure you choose Amazon EC2 as the launch type, and choose the task definition created in the previous step, as well as the target group of the Application Load Balancer. | Cloud administrator | 
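
The authentication and push steps above can be sketched as follows; the account ID, Region, repository name, and image tag are placeholders to replace with your own values.

```shell
# Placeholder values -- replace with your account ID, Region, and repository.
ACCOUNT_ID="111122223333"
REGION="us-west-2"
REGISTRY="${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com"

# Authenticate the Docker client against the Amazon ECR registry.
aws ecr get-login-password --region "$REGION" \
  | docker login --username AWS --password-stdin "$REGISTRY"

# Tag the local image and push it to the repository.
docker tag my-app:latest "${REGISTRY}/my-app:latest"
docker push "${REGISTRY}/my-app:latest"
```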

### Create an Amazon EC2 Auto Scaling group
<a name="create-an-amazon-ec2-auto-scaling-group"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a launch configuration. | Open the Amazon EC2 console, and create a launch configuration. Make sure that the user data has the code to allow the EC2 instances to join the desired ECS cluster. For an example of the code required, see the *Related resources* section. | Cloud administrator | 
| Create an Amazon EC2 Auto Scaling group.  | Return to the Amazon EC2 console and under **Auto Scaling**, choose **Auto Scaling groups**. Set up an Amazon EC2 Auto Scaling group. Make sure you choose the private subnets and launch configuration that you created earlier. | Cloud administrator | 
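
The user data referred to above is typically a short script such as the following sketch; `my-ecs-cluster` is a placeholder for your cluster's name.

```shell
#!/bin/bash
# Placeholder cluster name -- replace with your ECS cluster's name.
# Setting ECS_CLUSTER in /etc/ecs/ecs.config makes the ECS container
# agent on this instance register with that cluster at boot.
echo "ECS_CLUSTER=my-ecs-cluster" >> /etc/ecs/ecs.config
```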

### Set up AWS PrivateLink
<a name="set-up-aws-privatelink"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Set up the AWS PrivateLink endpoint. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-container-applications-privately-on-amazon-ecs-by-using-aws-privatelink-and-a-network-load-balancer.html) For more information, see the *Related resources* section. | Cloud administrator | 
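
Creating the endpoint service from the AWS CLI can be sketched as follows; the Network Load Balancer ARN is a placeholder.

```shell
# Placeholder ARN -- replace with your Network Load Balancer's ARN.
NLB_ARN="arn:aws:elasticloadbalancing:us-west-2:111122223333:loadbalancer/net/my-nlb/1234567890abcdef"

# Create the VPC endpoint service backed by the Network Load Balancer.
# --acceptance-required means consumer connection requests must be
# approved before they become active.
aws ec2 create-vpc-endpoint-service-configuration \
  --network-load-balancer-arns "$NLB_ARN" \
  --acceptance-required
```

The response includes a `ServiceName` (for example, `com.amazonaws.vpce.<region>.vpce-svc-…`) that consumers use to create their VPC endpoints.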

### Create a VPC endpoint
<a name="create-a-vpc-endpoint"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a VPC endpoint. | Create a VPC endpoint for the AWS PrivateLink endpoint that you created earlier. The VPC endpoint's fully qualified domain name (FQDN) points to the AWS PrivateLink endpoint's FQDN. This creates an elastic network interface to the VPC endpoint service that can be reached through the DNS endpoints. | Cloud administrator | 
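
As a sketch, the consumer-side interface endpoint can also be created with the AWS CLI; every ID and the service name below are placeholders:

```shell
aws ec2 create-vpc-endpoint \
  --vpc-endpoint-type Interface \
  --vpc-id vpc-0123456789abcdef0 \
  --service-name com.amazonaws.vpce.us-east-1.vpce-svc-0123456789abcdef0 \
  --subnet-ids subnet-0123456789abcdef0 \
  --security-group-ids sg-0123456789abcdef0
```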

### Create the Lambda function
<a name="create-the-lambda-function"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the Lambda function. | On the AWS Lambda console, create a Lambda function that updates the Application Load Balancer's IP addresses as targets for the Network Load Balancer. For more information, see the [Using AWS Lambda to enable static IP addresses for Application Load Balancers](https://aws.amazon.com/blogs/networking-and-content-delivery/using-aws-lambda-to-enable-static-ip-addresses-for-application-load-balancers/) blog post. | App developer | 
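
Such a function periodically resolves the Application Load Balancer's DNS name to its current IP addresses and reconciles them with the Network Load Balancer's target group through the `elbv2` API. An illustrative sketch of the resolution step (the function name is hypothetical; the blog post above shows the full implementation):

```shell
resolve_ipv4() {
  # Print the unique IPv4 addresses behind a DNS name, one per line.
  # A scheduled job would diff this list against the registered targets
  # and register/deregister the difference.
  getent ahostsv4 "$1" | awk '{print $1}' | sort -u
}

resolve_ipv4 localhost
```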

## Related resources
<a name="access-container-applications-privately-on-amazon-ecs-by-using-aws-privatelink-and-a-network-load-balancer-resources"></a>

**Create the load balancers:**
+ [Use a Network Load Balancer for Amazon ECS](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/nlb.html)
+ [Create a Network Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/create-network-load-balancer.html)
+ [Use an Application Load Balancer for Amazon ECS](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/alb.html)
+ [Create an Application Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/create-application-load-balancer.html)

**Create an Amazon EFS file system:**
+ [Create an Amazon EFS file system](https://docs.aws.amazon.com/efs/latest/ug/creating-using-create-fs.html)
+ [Create mount targets in Amazon EFS](https://docs.aws.amazon.com/efs/latest/ug/accessing-fs.html)

**Create an S3 bucket:**
+ [Create an S3 bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/GetStartedWithS3.html#creating-bucket)

**Create a Secrets Manager secret:**
+ [Create keys in AWS KMS](https://docs.aws.amazon.com/kms/latest/developerguide/create-keys.html)
+ [Create a secret in AWS Secrets Manager](https://docs.aws.amazon.com/secretsmanager/latest/userguide/create_secret.html)

**Create an Amazon RDS instance:**
+ [Create an Amazon RDS DB instance](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_CreateDBInstance.html)

**Create the Amazon ECS components:**
+ [Create an Amazon ECS cluster ](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-ec2-cluster-console-v2.html)
+ [Create a Docker image](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-container-image.html)
+ [Create an Amazon ECR repository ](https://docs.aws.amazon.com/AmazonECR/latest/userguide/repository-create.html)
+ [Authenticate Docker with Amazon ECR repository](https://docs.aws.amazon.com/AmazonECR/latest/userguide/Registries.html#registry_auth)
+ [Push an image to an Amazon ECR repository](https://docs.aws.amazon.com/AmazonECR/latest/userguide/docker-push-ecr-image.html)
+ [Create Amazon ECS task definition](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definitions.html)
+ [Create an Amazon ECS service](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-service-console-v2.html)

**Create an Amazon EC2 Auto Scaling group:**
+ [Create a launch configuration](https://docs.aws.amazon.com/autoscaling/ec2/userguide/create-launch-config.html)
+ [Create an Auto Scaling group using a launch configuration](https://docs.aws.amazon.com/autoscaling/ec2/userguide/create-asg.html)
+ [Bootstrap container instances with Amazon EC2 user data](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/bootstrap_container_instance.html)

**Set up AWS PrivateLink:**
+ [VPC endpoint services (AWS PrivateLink)](https://docs.aws.amazon.com/vpc/latest/privatelink/privatelink-share-your-services.html)

**Create a VPC endpoint:**
+ [Interface VPC endpoints (AWS PrivateLink)](https://docs.aws.amazon.com/vpc/latest/privatelink/create-interface-endpoint.html)

**Create the Lambda function:**
+ [Create a Lambda function](https://docs.aws.amazon.com/lambda/latest/dg/getting-started.html)

**Other resources:**
+ [Using static IP addresses for Application Load Balancers](https://aws.amazon.com/blogs/networking-and-content-delivery/using-static-ip-addresses-for-application-load-balancers/)
+ [Securely accessing services over AWS PrivateLink](https://d1.awsstatic.com/whitepapers/aws-privatelink.pdf)

# Access container applications privately on Amazon ECS by using AWS Fargate, AWS PrivateLink, and a Network Load Balancer
<a name="access-container-applications-privately-on-amazon-ecs-by-using-aws-fargate-aws-privatelink-and-a-network-load-balancer"></a>

*Kirankumar Chandrashekar, Amazon Web Services*

## Summary
<a name="access-container-applications-privately-on-amazon-ecs-by-using-aws-fargate-aws-privatelink-and-a-network-load-balancer-summary"></a>

This pattern describes how to privately host a Docker container application on the Amazon Web Services (AWS) Cloud by using Amazon Elastic Container Service (Amazon ECS) with an AWS Fargate launch type, behind a Network Load Balancer, and access the application by using AWS PrivateLink. Amazon Relational Database Service (Amazon RDS) hosts the relational database for the application running on Amazon ECS with high availability (HA). You can use Amazon Elastic File System (Amazon EFS) if the application requires persistent storage.

This pattern uses a [Fargate launch type](https://docs.aws.amazon.com/AmazonECS/latest/userguide/launch_types.html) for the Amazon ECS service running the Docker applications, with a Network Load Balancer at the front end. It can then be associated with a virtual private cloud (VPC) endpoint for access through AWS PrivateLink. This VPC endpoint service can then be shared with other VPCs by using their VPC endpoints.

You can use Fargate with Amazon ECS to run containers without having to manage servers or clusters of Amazon Elastic Compute Cloud (Amazon EC2) instances. You can also use an Amazon EC2 Auto Scaling group instead of Fargate. For more information, see [Access container applications privately on Amazon ECS by using AWS PrivateLink and a Network Load Balancer](https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-container-applications-privately-on-amazon-ecs-by-using-aws-privatelink-and-a-network-load-balancer.html?did=pg_card&trk=pg_card).

## Prerequisites and limitations
<a name="access-container-applications-privately-on-amazon-ecs-by-using-aws-fargate-aws-privatelink-and-a-network-load-balancer-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ [AWS Command Line Interface (AWS CLI) version 2](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html), installed and configured on Linux, macOS, or Windows
+ [Docker](https://www.docker.com/), installed and configured on Linux, macOS, or Windows
+ An application running on Docker

## Architecture
<a name="access-container-applications-privately-on-amazon-ecs-by-using-aws-fargate-aws-privatelink-and-a-network-load-balancer-architecture"></a>

![\[Using PrivateLink to access a container app on Amazon ECS with an AWS Fargate launch type.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/31cca5e2-8d8b-45ec-b872-a06b0dd97007/images/57cc9995-45f4-4039-a0bf-2d2b3d6a05de.png)


**Technology stack**
+ Amazon CloudWatch
+ Amazon Elastic Container Registry (Amazon ECR)
+ Amazon ECS
+ Amazon EFS
+ Amazon RDS
+ Amazon Simple Storage Service (Amazon S3)
+ AWS Fargate
+ AWS PrivateLink
+ AWS Secrets Manager
+ Application Load Balancer
+ Network Load Balancer
+ VPC

**Automation and scale**
+ You can use [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) to create this pattern by using [Infrastructure as Code](https://docs.aws.amazon.com/whitepapers/latest/introduction-devops-aws/infrastructure-as-code.html).

## Tools
<a name="access-container-applications-privately-on-amazon-ecs-by-using-aws-fargate-aws-privatelink-and-a-network-load-balancer-tools"></a>

**AWS services**
+ [Amazon Elastic Container Registry (Amazon ECR)](https://docs.aws.amazon.com/AmazonECR/latest/userguide/what-is-ecr.html) is a managed AWS container image registry service that is secure, scalable, and reliable.
+ [Amazon Elastic Container Service (Amazon ECS)](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html) is a highly scalable, fast, container management service that makes it easy to run, stop, and manage containers on a cluster.
+ [Amazon Elastic File System (Amazon EFS)](https://docs.aws.amazon.com/efs/latest/ug/whatisefs.html) provides a simple, scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources.
+ [AWS Fargate](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/AWS_Fargate.html) is a technology that you can use with Amazon ECS to run containers without having to manage servers or clusters of Amazon EC2 instances.
+ [Amazon Relational Database Service (Amazon RDS)](https://docs.aws.amazon.com/rds/index.html) is a web service that makes it easier to set up, operate, and scale a relational database in the AWS Cloud.
+ [Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) is storage for the internet. It is designed to make web-scale computing easier for developers.
+ [AWS Secrets Manager](https://docs.aws.amazon.com/secretsmanager/) helps you replace hardcoded credentials in your code, including passwords, with an API call to Secrets Manager to retrieve the secret programmatically.
+ [Amazon Virtual Private Cloud (Amazon VPC)](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html) helps you launch AWS resources into a virtual network that you've defined.
+ [Elastic Load Balancing (ELB)](https://docs.aws.amazon.com/elasticloadbalancing/latest/userguide/what-is-load-balancing.html) distributes incoming application or network traffic across multiple targets, such as EC2 instances, containers, and IP addresses, in multiple Availability Zones.

**Other tools**
+ [Docker](https://www.docker.com/) helps developers to easily pack, ship, and run any application as a lightweight, portable, and self-sufficient container.

## Epics
<a name="access-container-applications-privately-on-amazon-ecs-by-using-aws-fargate-aws-privatelink-and-a-network-load-balancer-epics"></a>

### Create networking components
<a name="create-networking-components"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a VPC. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-container-applications-privately-on-amazon-ecs-by-using-aws-fargate-aws-privatelink-and-a-network-load-balancer.html) | Cloud administrator | 

### Create the load balancers
<a name="create-the-load-balancers"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a Network Load Balancer.  | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-container-applications-privately-on-amazon-ecs-by-using-aws-fargate-aws-privatelink-and-a-network-load-balancer.html) For help with this and other stories, see the *Related resources* section. | Cloud administrator | 
| Create an Application Load Balancer. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-container-applications-privately-on-amazon-ecs-by-using-aws-fargate-aws-privatelink-and-a-network-load-balancer.html) | Cloud administrator | 

### Create an Amazon EFS file system
<a name="create-an-amazon-efs-file-system"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an Amazon EFS file system. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-container-applications-privately-on-amazon-ecs-by-using-aws-fargate-aws-privatelink-and-a-network-load-balancer.html) | Cloud administrator | 
| Create mount targets for the subnets. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-container-applications-privately-on-amazon-ecs-by-using-aws-fargate-aws-privatelink-and-a-network-load-balancer.html) | Cloud administrator | 
| Verify that the subnets are mounted as targets.  | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-container-applications-privately-on-amazon-ecs-by-using-aws-fargate-aws-privatelink-and-a-network-load-balancer.html) | Cloud administrator | 

### Create an S3 bucket
<a name="create-an-s3-bucket"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an S3 bucket. | Open the Amazon S3 console and [create an S3 bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/GetStartedWithS3.html#creating-bucket) to store your application’s static assets, if required. | Cloud administrator | 

### Create a Secrets Manager secret
<a name="create-a-secrets-manager-secret"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
|  Create an AWS KMS key to encrypt the Secrets Manager secret. | Open the AWS Key Management Service (AWS KMS) console and create a KMS key. | Cloud administrator | 
|  Create a Secrets Manager secret to store the Amazon RDS password. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-container-applications-privately-on-amazon-ecs-by-using-aws-fargate-aws-privatelink-and-a-network-load-balancer.html) | Cloud administrator | 

### Create an Amazon RDS instance
<a name="create-an-amazon-rds-instance"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a DB subnet group.  | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-container-applications-privately-on-amazon-ecs-by-using-aws-fargate-aws-privatelink-and-a-network-load-balancer.html) | Cloud administrator | 
| Create an Amazon RDS instance. | Create and configure an Amazon RDS instance within the private subnets. Make sure that **Multi-AZ** is turned on for high availability (HA). | Cloud administrator | 
| Load data to the Amazon RDS instance.  | Load the relational data that your application requires into the Amazon RDS instance. This process varies depending on your application's needs and how your database schema is designed. | DBA | 

### Create the Amazon ECS components
<a name="create-the-amazon-ecs-components"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an ECS cluster. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-container-applications-privately-on-amazon-ecs-by-using-aws-fargate-aws-privatelink-and-a-network-load-balancer.html) | Cloud administrator | 
| Create the Docker images. | Create the Docker images by following the instructions in the [AWS documentation](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-container-image.html). | Cloud administrator | 
| Create an Amazon ECR repository. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-container-applications-privately-on-amazon-ecs-by-using-aws-fargate-aws-privatelink-and-a-network-load-balancer.html) | Cloud administrator, DevOps engineer | 
| Push the Docker images to the Amazon ECR repository.  | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-container-applications-privately-on-amazon-ecs-by-using-aws-fargate-aws-privatelink-and-a-network-load-balancer.html) | Cloud administrator | 
| Create an Amazon ECS task definition.  | A task definition is required to run Docker containers in Amazon ECS. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-container-applications-privately-on-amazon-ecs-by-using-aws-fargate-aws-privatelink-and-a-network-load-balancer.html) For help with setting up your task definition, see *Create Amazon ECS task definition* in the *Related resources* section. Make sure that you provide the Docker images that you pushed to Amazon ECR. | Cloud administrator | 
| Create an ECS service and choose Fargate as the launch type. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-container-applications-privately-on-amazon-ecs-by-using-aws-fargate-aws-privatelink-and-a-network-load-balancer.html) | Cloud administrator | 
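
For reference, a minimal Fargate task definition takes the following shape; the account ID, role, and image URI below are placeholders, and the real values come from your Amazon ECR repository and IAM setup:

```json
{
  "family": "sample-app",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "executionRoleArn": "arn:aws:iam::111122223333:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "sample-app",
      "image": "111122223333.dkr.ecr.us-east-1.amazonaws.com/sample-app:latest",
      "portMappings": [
        { "containerPort": 80, "protocol": "tcp" }
      ]
    }
  ]
}
```

Saved as a file, it can be registered with `aws ecs register-task-definition --cli-input-json file://task-def.json`.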

### Set up AWS PrivateLink
<a name="set-up-aws-privatelink"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Set up the AWS PrivateLink endpoint. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-container-applications-privately-on-amazon-ecs-by-using-aws-fargate-aws-privatelink-and-a-network-load-balancer.html) | Cloud administrator | 

### Create a VPC endpoint
<a name="create-a-vpc-endpoint"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a VPC endpoint. | [Create a VPC endpoint](https://docs.aws.amazon.com/vpc/latest/privatelink/create-interface-endpoint.html) for the AWS PrivateLink endpoint that you created earlier. The VPC endpoint's fully qualified domain name (FQDN) points to the AWS PrivateLink endpoint's FQDN. This creates an elastic network interface to the VPC endpoint service that can be reached through the Domain Name System (DNS) endpoints. | Cloud administrator | 

### Set the target
<a name="set-the-target"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Add the Application Load Balancer as a target. | To add the Application Load Balancer as a target for the Network Load Balancer, follow the instructions in the [AWS documentation](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/application-load-balancer-target.html). | App developer | 

## Related resources
<a name="access-container-applications-privately-on-amazon-ecs-by-using-aws-fargate-aws-privatelink-and-a-network-load-balancer-resources"></a>

**Create the load balancers:**
+ [Use a Network Load Balancer for Amazon ECS](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/nlb.html)
+ [Create a Network Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/create-network-load-balancer.html)
+ [Use an Application Load Balancer for Amazon ECS](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/alb.html)
+ [Create an Application Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/create-application-load-balancer.html)

**Create an Amazon EFS file system:**
+ [Create an Amazon EFS file system](https://docs.aws.amazon.com/efs/latest/ug/creating-using-create-fs.html)
+ [Create mount targets in Amazon EFS](https://docs.aws.amazon.com/efs/latest/ug/accessing-fs.html)

**Create a Secrets Manager secret:**
+ [Create keys in AWS KMS](https://docs.aws.amazon.com/kms/latest/developerguide/create-keys.html)
+ [Create a secret in AWS Secrets Manager ](https://docs.aws.amazon.com/secretsmanager/latest/userguide/create_secret.html)

**Create an Amazon RDS instance:**
+ [Create an Amazon RDS DB instance](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_CreateDBInstance.html)

**Create the Amazon ECS components**
+ [Create an Amazon ECR repository ](https://docs.aws.amazon.com/AmazonECR/latest/userguide/repository-create.html)
+ [Authenticate Docker with Amazon ECR repository](https://docs.aws.amazon.com/AmazonECR/latest/userguide/Registries.html#registry_auth)
+ [Push an image to an Amazon ECR repository](https://docs.aws.amazon.com/AmazonECR/latest/userguide/docker-push-ecr-image.html)
+ [Create Amazon ECS task definition ](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definitions.html)
+ [Create an Amazon ECS service](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-service-console-v2.html)

**Other resources:**
+ [Securely accessing services over AWS PrivateLink](https://d1.awsstatic.com/whitepapers/aws-privatelink.pdf)

# Access container applications privately on Amazon EKS using AWS PrivateLink and a Network Load Balancer
<a name="access-container-applications-privately-on-amazon-eks-using-aws-privatelink-and-a-network-load-balancer"></a>

*Kirankumar Chandrashekar, Amazon Web Services*

## Summary
<a name="access-container-applications-privately-on-amazon-eks-using-aws-privatelink-and-a-network-load-balancer-summary"></a>

This pattern describes how to privately host a Docker container application on Amazon Elastic Kubernetes Service (Amazon EKS) behind a Network Load Balancer, and access the application by using AWS PrivateLink. You can then use a private network to securely access services on the Amazon Web Services (AWS) Cloud. 

The Amazon EKS cluster running the Docker applications, with a Network Load Balancer at the front end, can be associated with a virtual private cloud (VPC) endpoint for access through AWS PrivateLink. This VPC endpoint service can then be shared with other VPCs by using their VPC endpoints.

The setup described by this pattern is a secure way to share application access among VPCs and AWS accounts. It requires no special connectivity or routing configurations, because the connection between the consumer and provider accounts is on the global AWS backbone and doesn’t traverse the public internet.

## Prerequisites and limitations
<a name="access-container-applications-privately-on-amazon-eks-using-aws-privatelink-and-a-network-load-balancer-prereqs"></a>

**Prerequisites**
+ [Docker](https://www.docker.com/), installed and configured on Linux, macOS, or Windows.
+ An application running on Docker.
+ An active AWS account.
+ [AWS Command Line Interface (AWS CLI) version 2](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html), installed and configured on Linux, macOS, or Windows.
+ An existing Amazon EKS cluster with tagged private subnets and configured to host applications. For more information, see [Subnet tagging](https://docs.aws.amazon.com/eks/latest/userguide/network_reqs.html#vpc-subnet-tagging) in the Amazon EKS documentation. 
+ Kubectl, installed and configured to access resources on your Amazon EKS cluster. For more information, see [Installing kubectl](https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html) in the Amazon EKS documentation. 

## Architecture
<a name="access-container-applications-privately-on-amazon-eks-using-aws-privatelink-and-a-network-load-balancer-architecture"></a>

![\[Use PrivateLink and a Network Load Balancer to access an application in an Amazon EKS container.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/ce977924-012c-4fb6-8e51-94d6e5c829a6/images/378456a3-f4d1-4a57-bb36-879c240cabfb.png)


**Technology stack**
+ Amazon EKS
+ AWS PrivateLink
+ Network Load Balancer

**Automation and scale**
+ Kubernetes manifests can be tracked and managed on a Git-based repository, and deployed by using continuous integration and continuous delivery (CI/CD) in AWS CodePipeline. 
+ You can use AWS CloudFormation to create this pattern by using infrastructure as code (IaC).

## Tools
<a name="access-container-applications-privately-on-amazon-eks-using-aws-privatelink-and-a-network-load-balancer-tools"></a>
+ [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) – AWS Command Line Interface (AWS CLI) is an open-source tool that enables you to interact with AWS services using commands in your command-line shell.
+ [Elastic Load Balancing](https://docs.aws.amazon.com/elasticloadbalancing/latest/userguide/what-is-load-balancing.html) – Elastic Load Balancing distributes incoming application or network traffic across multiple targets, such as Amazon Elastic Compute Cloud (Amazon EC2) instances, containers, and IP addresses, in one or more Availability Zones.
+ [Amazon EKS](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html) – Amazon Elastic Kubernetes Service (Amazon EKS) is a managed service that you can use to run Kubernetes on AWS without needing to install, operate, and maintain your own Kubernetes control plane or nodes.
+ [Amazon VPC](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html) – Amazon Virtual Private Cloud (Amazon VPC) helps you launch AWS resources into a virtual network that you've defined.
+ [Kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) – Kubectl is a command line utility for running commands against Kubernetes clusters.

## Epics
<a name="access-container-applications-privately-on-amazon-eks-using-aws-privatelink-and-a-network-load-balancer-epics"></a>

### Deploy the Kubernetes deployment and service manifest files
<a name="deploy-the-kubernetes-deployment-and-service-manifest-files"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
|  Create the Kubernetes deployment manifest file. | Create a deployment manifest file by modifying the following sample file according to your requirements.<pre>apiVersion: apps/v1<br />kind: Deployment<br />metadata:<br />  name: sample-app<br />spec:<br />  replicas: 3<br />  selector:<br />    matchLabels:<br />      app: nginx<br />  template:<br />    metadata:<br />      labels:<br />        app: nginx<br />    spec:<br />      containers:<br />        - name: nginx<br />          image: public.ecr.aws/z9d2n7e1/nginx:1.19.5<br />          ports:<br />            - name: http<br />              containerPort: 80</pre>This sample manifest deploys NGINX by using the public NGINX Docker image. For more information, see [How to use the official NGINX Docker image](https://www.docker.com/blog/how-to-use-the-official-nginx-docker-image/) in the Docker documentation. | DevOps engineer | 
| Deploy the Kubernetes deployment manifest file. | Run the following command to apply the deployment manifest file to your Amazon EKS cluster: `kubectl apply -f <your_deployment_file_name>` | DevOps engineer | 
|  Create the Kubernetes service manifest file.  | Create a service manifest file by modifying the following sample file according to your requirements.<pre>apiVersion: v1<br />kind: Service<br />metadata:<br />  name: sample-service<br />  annotations:<br />    service.beta.kubernetes.io/aws-load-balancer-type: nlb<br />    service.beta.kubernetes.io/aws-load-balancer-internal: "true"<br />spec:<br />  ports:<br />    - port: 80<br />      targetPort: 80<br />      protocol: TCP<br />  type: LoadBalancer<br />  selector:<br />    app: nginx</pre>Make sure that you include the following `annotations` to define an internal Network Load Balancer:<pre>service.beta.kubernetes.io/aws-load-balancer-type: nlb<br />service.beta.kubernetes.io/aws-load-balancer-internal: "true"</pre> | DevOps engineer | 
| Deploy the Kubernetes service manifest file. | Run the following command to apply the service manifest file to your Amazon EKS cluster: `kubectl apply -f <your_service_file_name>` | DevOps engineer | 

### Create the endpoints
<a name="create-the-endpoints"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Record the Network Load Balancer’s name.  | Run the following command to retrieve the name of the Network Load Balancer: `kubectl get svc sample-service -o wide` Record the Network Load Balancer’s name, which is required to create an AWS PrivateLink endpoint. | DevOps engineer | 
| Create an AWS PrivateLink endpoint. | Sign in to the AWS Management Console, open the Amazon VPC console, and then create an AWS PrivateLink endpoint. Associate this endpoint with the Network Load Balancer; this makes the application privately available to customers. For more information, see [VPC endpoint services (AWS PrivateLink)](https://docs.aws.amazon.com/vpc/latest/userguide/endpoint-service.html) in the Amazon VPC documentation. If the consumer account requires access to the application, the consumer account’s [AWS account ID](https://docs.aws.amazon.com/IAM/latest/UserGuide/console_account-alias.html) must be added to the allowed principals list for the AWS PrivateLink endpoint configuration. For more information, see [Adding and removing permissions for your endpoint service](https://docs.aws.amazon.com/vpc/latest/userguide/add-endpoint-service-permissions.html) in the Amazon VPC documentation. | Cloud administrator  | 
| Create a VPC endpoint. | On the Amazon VPC console, choose **Endpoint Services**, and then choose **Create Endpoint Service**. Create a VPC endpoint for the AWS PrivateLink endpoint. The VPC endpoint’s fully qualified domain name (FQDN) points to the FQDN for the AWS PrivateLink endpoint. This creates an elastic network interface to the VPC endpoint service that the DNS endpoints can access.  | Cloud administrator | 
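
The endpoint service can also be created with the AWS CLI; a sketch, where the Network Load Balancer ARN is a placeholder for the one you recorded earlier:

```shell
aws ec2 create-vpc-endpoint-service-configuration \
  --network-load-balancer-arns arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/net/sample-nlb/1234567890abcdef \
  --acceptance-required
```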

## Related resources
<a name="access-container-applications-privately-on-amazon-eks-using-aws-privatelink-and-a-network-load-balancer-resources"></a>
+ [Using the official NGINX Docker image](https://www.docker.com/blog/how-to-use-the-official-nginx-docker-image/)
+ [Network load balancing on Amazon EKS](https://docs.aws.amazon.com/eks/latest/userguide/load-balancing.html) 
+ [Creating VPC endpoint services (AWS PrivateLink)](https://docs.aws.amazon.com/vpc/latest/userguide/endpoint-service.html) 
+ [Adding and removing permissions for your endpoint service ](https://docs.aws.amazon.com/vpc/latest/userguide/add-endpoint-service-permissions.html)

# Automate backups for Amazon RDS for PostgreSQL DB instances by using AWS Batch
<a name="automate-backups-for-amazon-rds-for-postgresql-db-instances-by-using-aws-batch"></a>

*Kirankumar Chandrashekar, Amazon Web Services*

## Summary
<a name="automate-backups-for-amazon-rds-for-postgresql-db-instances-by-using-aws-batch-summary"></a>

Backing up your PostgreSQL databases is an important task. It can typically be completed with the [pg_dump utility](https://www.postgresql.org/docs/current/app-pgdump.html), which uses the COPY command by default to create a schema and data dump of a PostgreSQL database. However, this process can become repetitive if you require regular backups for multiple PostgreSQL databases. If your PostgreSQL databases are hosted in the cloud, you can also take advantage of the [automated backup](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithAutomatedBackups.html) feature provided by Amazon Relational Database Service (Amazon RDS) for PostgreSQL. This pattern describes how to automate regular backups for Amazon RDS for PostgreSQL DB instances by using the pg_dump utility.

Note: The instructions assume that you're using Amazon RDS. However, you can also use this approach for PostgreSQL databases that are hosted outside Amazon RDS. To take backups, the AWS Lambda function must be able to access your databases.

A time-based Amazon CloudWatch Events event initiates a Lambda function that searches for specific backup [tags applied to the metadata](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Tagging.html) of the PostgreSQL DB instances on Amazon RDS. If the PostgreSQL DB instances have the **bkp:AutomatedDBDump = Active** tag and other required backup tags, the Lambda function submits individual jobs for each database backup to AWS Batch. 

AWS Batch processes these jobs and uploads the backup data to an Amazon Simple Storage Service (Amazon S3) bucket. This pattern uses a Dockerfile and an entrypoint.sh file to build a Docker container image that is used to make backups in the AWS Batch job. After the backup process is complete, AWS Batch records the backup details to an inventory table on Amazon DynamoDB. As an additional safeguard, a CloudWatch Events event initiates an Amazon Simple Notification Service (Amazon SNS) notification if a job fails in AWS Batch. 
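The Lambda-to-Batch handoff comes down to one `aws batch submit-job` call per database. The following sketch only assembles that call as a string so that it can be reviewed before running; the queue name, job definition name, and environment variable are placeholders for illustration, not names defined by this pattern.

```
#!/bin/bash
# Hypothetical helper: assemble one AWS Batch submit-job call per database backup.
# Queue name, job definition name, and EXECUTE_COMMAND value are placeholders.
build_submit_job_cmd() {
  local db_instance="$1" dump_flags="$2"
  printf 'aws batch submit-job --job-name bkp-%s --job-queue pg-backup-queue --job-definition pg-dump-job --container-overrides environment=[{name=EXECUTE_COMMAND,value="%s"}]\n' \
    "$db_instance" "$dump_flags"
}

# Print the call for one DB instance so you can inspect it before running it:
build_submit_job_cmd mydb-instance "-d test"
```

In the actual pattern, the Lambda function would make the equivalent SubmitJob API call for each DB instance that carries the required backup tags.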

## Prerequisites and limitations
<a name="automate-backups-for-amazon-rds-for-postgresql-db-instances-by-using-aws-batch-prereqs"></a>

**Prerequisites**
+ An active AWS account.
+ An existing managed or unmanaged compute environment. For more information, see [Managed and unmanaged compute environments](https://docs.aws.amazon.com/batch/latest/userguide/compute_environments.html) in the AWS Batch documentation. 
+ [AWS Command Line Interface (CLI) version 2 Docker image](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2-docker.html), installed and configured.
+ Existing Amazon RDS for PostgreSQL DB instances.  
+ An existing S3 bucket. 
+ [Docker](https://www.docker.com/), installed and configured on Linux, macOS, or Windows.
+ Familiarity with coding in Lambda. 

## Architecture
<a name="automate-backups-for-amazon-rds-for-postgresql-db-instances-by-using-aws-batch-architecture"></a>

![\[Architecture to back up Amazon RDS for PostgreSQL DB instances by using the pg_dump utility.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/3283f739-980b-43d4-aca0-9d77a2ce3b85/images/352e2eab-1b7d-44ec-840a-a772a175e873.png)


 

**Technology stack**
+ Amazon CloudWatch Events
+ Amazon DynamoDB
+ Amazon Elastic Container Registry (Amazon ECR)
+ Amazon RDS
+ Amazon SNS
+ Amazon S3
+ AWS Batch
+ AWS Key Management Service (AWS KMS)
+ AWS Lambda
+ AWS Secrets Manager
+ Docker

## Tools
<a name="automate-backups-for-amazon-rds-for-postgresql-db-instances-by-using-aws-batch-tools"></a>
+ [Amazon CloudWatch Events](https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/WhatIsCloudWatchEvents.html) – CloudWatch Events delivers a near real-time stream of system events that describe changes in AWS resources.
+ [Amazon DynamoDB](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Introduction.html) – DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability.
+ [Amazon ECR](https://docs.aws.amazon.com/ecr/index.html) – Amazon Elastic Container Registry (Amazon ECR) is a managed AWS container image registry service that is secure, scalable, and reliable.
+ [Amazon RDS](https://docs.aws.amazon.com/rds/index.html) – Amazon Relational Database Service (Amazon RDS) is a web service that makes it easier to set up, operate, and scale a relational database in the AWS Cloud.
+ [Amazon SNS](https://docs.aws.amazon.com/sns/latest/dg/welcome.html) – Amazon Simple Notification Service (Amazon SNS) is a managed service that provides message delivery from publishers to subscribers.
+ [Amazon S3](https://docs.aws.amazon.com/s3/index.html) – Amazon Simple Storage Service (Amazon S3) is storage for the internet.
+ [AWS Batch](https://docs.aws.amazon.com/batch/index.html) – AWS Batch helps you run batch computing workloads on the AWS Cloud.
+ [AWS KMS](https://docs.aws.amazon.com/kms/index.html) – AWS Key Management Service (AWS KMS) is a managed service that makes it easy for you to create and control the encryption keys used to encrypt your data.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/index.html) – Lambda is a compute service that helps you run code without provisioning or managing servers.
+ [AWS Secrets Manager](https://docs.aws.amazon.com/secretsmanager/index.html) – Secrets Manager helps you replace hardcoded credentials in your code, including passwords, with an API call to Secrets Manager to retrieve the secret programmatically.
+ [Docker](https://www.docker.com/) – Docker helps developers easily pack, ship, and run any application as a lightweight, portable, and self-sufficient container.

Your PostgreSQL DB instances on Amazon RDS must have [tags applied to their metadata](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Tagging.html). The Lambda function searches for these tags to identify the DB instances that should be backed up. The following tags are typically used.


| Tag | Description | 
| --- | --- |
| bkp:AutomatedDBDump = Active | Identifies an Amazon RDS DB instance as a candidate for backups. | 
| bkp:AutomatedBackupSecret = <secret_name> | Identifies the Secrets Manager secret that contains the Amazon RDS login credentials. | 
| bkp:AutomatedDBDumpS3Bucket = <s3_bucket_name> | Identifies the S3 bucket to send backups to. | 
| bkp:AutomatedDBDumpFrequency and bkp:AutomatedDBDumpTime | Identify the frequency and times when databases should be backed up. | 
| bkp:pgdumpcommand = <pgdump_command> | Identifies the databases for which backups need to be taken. | 
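These tags can also be applied from the AWS CLI with `aws rds add-tags-to-resource`. The sketch below only prints the call for review; the account ID, Region, DB instance name, secret name, and bucket name are placeholders.

```
#!/bin/bash
# Sketch: tag a DB instance as a backup candidate. All names and IDs are placeholders.
DB_ARN="arn:aws:rds:us-east-1:111122223333:db:my-postgres-instance"
TAGS='Key=bkp:AutomatedDBDump,Value=Active Key=bkp:AutomatedBackupSecret,Value=my-rds-secret Key=bkp:AutomatedDBDumpS3Bucket,Value=my-backup-bucket'

# Print the call so it can be reviewed before it is run against your account:
printf 'aws rds add-tags-to-resource --resource-name %s --tags %s\n' "$DB_ARN" "$TAGS"
```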

## Epics
<a name="automate-backups-for-amazon-rds-for-postgresql-db-instances-by-using-aws-batch-epics"></a>

### Create an inventory table in DynamoDB
<a name="create-an-inventory-table-in-dynamodb"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a table in DynamoDB. | Sign in to the AWS Management Console, open the Amazon DynamoDB console, and create a table. For help with this and other stories, see the *Related resources* section. | Cloud administrator, Database administrator | 
| Confirm that the table was created.  | Run the `aws dynamodb describe-table --table-name <table-name> \| grep TableStatus` command. If the table exists, the command will return the `"TableStatus": "ACTIVE",` result. | Cloud administrator, Database administrator | 

### Create an SNS topic for failed job events in AWS Batch
<a name="create-an-sns-topic-for-failed-job-events-in-aws-batch"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an SNS topic. | Open the Amazon SNS console, choose **Topics**, and create an SNS topic with the name `JobFailedAlert`. Subscribe an active email address to the topic, and check your email inbox to confirm the SNS subscription email from AWS Notifications. | Cloud administrator | 
| Create a failed job event rule for AWS Batch.  | Open the Amazon CloudWatch console, choose **Events**, and then choose **Create rule**. Choose **Show advanced options**, and choose **Edit**. For **Build a pattern that selects events for processing by your targets**, replace the existing text with the "Failed job event" code from the *Additional information* section. This code defines a CloudWatch Events rule that initiates when AWS Batch has a `Failed` event. | Cloud administrator | 
| Add event rule target.  | In **Targets**, choose **Add targets**, and choose the `JobFailedAlert` SNS topic. Configure the remaining details and create the CloudWatch Events rule. | Cloud administrator | 
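The same rule and target can be created from the AWS CLI. The heredoc below reuses the "Failed job event" pattern from the *Additional information* section; the rule name and SNS topic ARN are placeholders, and the two `aws events` calls are printed for review rather than executed.

```
#!/bin/bash
# Write the failed-job event pattern to a file, then (sketch) create the rule
# and attach the SNS topic as a target. Rule name and topic ARN are placeholders.
cat > failed-job-event.json <<'EOF'
{
  "detail-type": ["Batch Job State Change"],
  "source": ["aws.batch"],
  "detail": { "status": ["FAILED"] }
}
EOF

# Review, then run these against your account:
echo 'aws events put-rule --name JobFailedAlertRule --event-pattern file://failed-job-event.json'
echo 'aws events put-targets --rule JobFailedAlertRule --targets Id=1,Arn=arn:aws:sns:us-east-1:111122223333:JobFailedAlert'
```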

### Build a Docker image and push it to an Amazon ECR repository
<a name="build-a-docker-image-and-push-it-to-an-amazon-ecr-repository"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an Amazon ECR repository. | Open the Amazon ECR console and choose the AWS Region in which you want to create your repository. Choose **Repositories**, and then choose **Create repository**. Configure the repository according to your requirements. | Cloud administrator | 
| Write a Dockerfile.  | Sign in to Docker and use the "Sample Dockerfile" and "Sample entrypoint.sh file" from the *Additional information* section to build a Dockerfile. | DevOps engineer | 
| Create a Docker image and push it to the Amazon ECR repository. | Build the Dockerfile into a Docker image and push it to the Amazon ECR repository. For help with this story, see the *Related resources* section. | DevOps engineer | 
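The build-and-push story usually reduces to four commands: authenticate Docker to Amazon ECR, build, tag, and push. The account ID, Region, and repository name below are placeholders; the commands are printed for review so the snippet can be inspected without AWS credentials.

```
#!/bin/bash
# Sketch: build the backup image and push it to Amazon ECR.
# Account ID, Region, and repository name are placeholders.
ACCOUNT_ID="111122223333"
REGION="us-east-1"
REPO="pg-dump-backup"
ECR_URI="${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/${REPO}"

# Review, then run against your account:
echo "aws ecr get-login-password --region ${REGION} | docker login --username AWS --password-stdin ${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com"
echo "docker build -t ${REPO} ."
echo "docker tag ${REPO}:latest ${ECR_URI}:latest"
echo "docker push ${ECR_URI}:latest"
```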

### Create the AWS Batch components
<a name="create-the-aws-batch-components"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an AWS Batch job definition. | Open the AWS Batch console and create a job definition that includes the Amazon ECR repository’s Uniform Resource Identifier (URI) as the property `Image`. | Cloud administrator | 
| Configure the AWS Batch job queue.  | On the AWS Batch console, choose **Job queues**, and then choose **Create queue**. Create a job queue that will store jobs until AWS Batch runs them on the resources within your compute environment. Important: Make sure you write logic for AWS Batch to record the backup details to the DynamoDB inventory table. | Cloud administrator | 

### Create and schedule a Lambda function
<a name="create-and-schedule-a-lambda-function"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a Lambda function to search for tags. | Create a Lambda function that searches for tags on your PostgreSQL DB instances and identifies backup candidates. Make sure your Lambda function can identify the `bkp:AutomatedDBDump = Active` tag and all other required tags. Important: The Lambda function must also be able to add jobs to the AWS Batch job queue. | DevOps engineer | 
| Create a time-based CloudWatch Events event.  | Open the Amazon CloudWatch console and create a CloudWatch Events event that uses a cron expression to run your Lambda function on a regular schedule. Important: All scheduled events use the UTC time zone. | Cloud administrator | 
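From the CLI, the schedule is a single `put-rule` call with a cron expression. The rule name, Lambda function ARN, and the 02:00 UTC daily schedule below are example values only; note that CloudWatch Events cron expressions have six fields and always use UTC.

```
#!/bin/bash
# Sketch: run the tag-scanning Lambda function daily at 02:00 UTC.
# Rule name and function ARN are placeholders.
CRON="cron(0 2 * * ? *)"
echo "aws events put-rule --name NightlyPgDumpTrigger --schedule-expression '${CRON}'"
echo "aws events put-targets --rule NightlyPgDumpTrigger --targets Id=1,Arn=arn:aws:lambda:us-east-1:111122223333:function:rds-backup-tag-scanner"
```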

### Test the backup automation
<a name="test-the-backup-automation"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an AWS KMS key. | Open the AWS KMS console and create a KMS key that can be used to encrypt the Amazon RDS credentials stored in AWS Secrets Manager. | Cloud administrator | 
| Create an AWS Secrets Manager secret. | Open the AWS Secrets Manager console and store your Amazon RDS for PostgreSQL database credentials as a secret. | Cloud administrator | 
| Add the required tags to the PostgreSQL DB instances. | Open the Amazon RDS console and add tags to the PostgreSQL DB instances that you want to automatically back up. You can use the tags from the table in the *Tools* section. If you require backups from multiple PostgreSQL databases within the same Amazon RDS instance, then use `-d test:-d test1` as the value for the `bkp:pgdumpcommand` tag. `test` and `test1` are database names. Make sure that there is no space after the colon (:). | Cloud administrator | 
| Verify the backup automation.  | To verify the backup automation, you can either invoke the Lambda function or wait for the backup schedule to begin. After the backup process is complete, check that the DynamoDB inventory table has a valid backup entry for each of your PostgreSQL DB instances. If valid entries exist, the backup automation process is successful. | Cloud administrator | 
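The colon-separated `bkp:pgdumpcommand` value is split inside `entrypoint.sh`. This snippet reproduces that parsing for the two-database example, which also shows why a space after the colon would break the resulting `pg_dump` arguments.

```
#!/bin/bash
# Reproduce the entrypoint.sh parsing of the bkp:pgdumpcommand tag value.
EXECUTE_COMMAND="-d test:-d test1"
IFS=':' read -r -a list <<< "$EXECUTE_COMMAND"
for command in "${list[@]}"; do
  echo "pg_dump will be run with: ${command}"
done
```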

## Related resources
<a name="automate-backups-for-amazon-rds-for-postgresql-db-instances-by-using-aws-batch-resources"></a>

**Create an inventory table in DynamoDB**
+ [Create an Amazon DynamoDB table ](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/getting-started-step-1.html)

 

**Create an SNS topic for failed job events in AWS Batch**
+ [Create an Amazon SNS topic](https://docs.aws.amazon.com/sns/latest/dg/sns-tutorial-create-topic.html)
+ [Send SNS alerts for failed job events in AWS Batch](https://docs.aws.amazon.com/batch/latest/userguide/batch_sns_tutorial.html)

 

**Build a Docker image and push it to an Amazon ECR repository**
+ [Create an Amazon ECR repository](https://docs.aws.amazon.com/AmazonECR/latest/userguide/repository-create.html)    
+ [Write a Dockerfile, create a Docker image, and push it to Amazon ECR](https://docs.aws.amazon.com/AmazonECR/latest/userguide/getting-started-cli.html)

 

**Create the AWS Batch components**
+ [Create an AWS Batch job definition](https://docs.aws.amazon.com/batch/latest/userguide/Batch_GetStarted.html#first-run-step-1)    
+ [Configure your compute environment and AWS Batch job queue ](https://docs.aws.amazon.com/batch/latest/userguide/Batch_GetStarted.html#first-run-step-2)   
+ [Create a job queue in AWS Batch](https://docs.aws.amazon.com/batch/latest/userguide/create-job-queue.html)

 

**Create a Lambda function**
+ [Create a Lambda function and write code](https://docs.aws.amazon.com/lambda/latest/dg/getting-started-create-function.html)
+ [Use Lambda with DynamoDB](https://docs.aws.amazon.com/lambda/latest/dg/with-ddb.html)

 

**Create a CloudWatch Events event**
+ [Create a time-based CloudWatch Events event ](https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/Create-CloudWatch-Events-Scheduled-Rule.html)   
+ [Use cron expressions in CloudWatch Events](https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/ScheduledEvents.html)

 

**Test the backup automation**
+ [Create an AWS KMS key](https://docs.aws.amazon.com/kms/latest/developerguide/create-keys.html)    
+ [Create a Secrets Manager secret](https://docs.aws.amazon.com/secretsmanager/latest/userguide/tutorials_basic.html)
+ [Add tags to an Amazon RDS instance](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Tagging.html)

## Additional information
<a name="automate-backups-for-amazon-rds-for-postgresql-db-instances-by-using-aws-batch-additional"></a>

**Failed job event:**

```
{
  "detail-type": [
    "Batch Job State Change"
  ],
  "source": [
    "aws.batch"
  ],
  "detail": {
    "status": [
      "FAILED"
    ]
  }
}
```

**Sample Dockerfile:**

```
FROM alpine:latest
RUN apk --update add py-pip postgresql-client jq bash && \
pip install awscli && \
rm -rf /var/cache/apk/*
ADD entrypoint.sh /usr/bin/
RUN chmod +x /usr/bin/entrypoint.sh
ENTRYPOINT ["entrypoint.sh"]
```

**Sample entrypoint.sh file:**

```
#!/bin/bash
# Exit on errors, and surface failures that occur inside pipelines
set -eo pipefail

DATETIME=$(date +"%Y-%m-%d_%H_%M")
FILENAME=RDS_PostGres_dump_${RDS_INSTANCE_NAME}
FILE=${FILENAME}_${DATETIME}

# Assume the target-account role from inside the container
aws configure --profile new-profile set role_arn arn:aws:iam::${TargetAccountId}:role/${TargetAccountRoleName}
aws configure --profile new-profile set credential_source EcsContainer

echo "Central account access provider IAM role is: "
aws sts get-caller-identity

echo "Target customer account access provider IAM role is: "
aws sts get-caller-identity --profile new-profile

securestring=$(aws secretsmanager get-secret-value --secret-id $SECRETID --output json --query 'SecretString' --region=$REGION --profile new-profile)

if [[ ${securestring} ]]; then
    echo "Successfully accessed Secrets Manager and got the credentials"
    # SecretString is returned as a JSON-encoded string; unwrap it, then extract the fields
    export PGPASSWORD=$(echo $securestring | jq -r . | jq -r '.DB_PASSWORD')
    PGSQL_USER=$(echo $securestring | jq -r . | jq -r '.DB_USERNAME')
    echo "Executing pg_dump for the PostgreSQL endpoint ${PGSQL_HOST}"
    # Example: pg_dump -h $PGSQL_HOST -U $PGSQL_USER -n dms_sample | gzip -9 -c | aws s3 cp - --region=$REGION --profile new-profile s3://$BUCKET/$FILE
    # Example EXECUTE_COMMAND value: "-n public:-n private"
    IFS=':' read -r -a list <<< "$EXECUTE_COMMAND"
    for command in "${list[@]}";
      do
        echo $command
        # Capture the pipeline's exit status immediately; the && / || guard keeps set -e from exiting first
        pg_dump -h $PGSQL_HOST -U $PGSQL_USER ${command} | gzip -9 -c | aws s3 cp - --region=$REGION --profile new-profile s3://${BUCKET}/${FILE}-${command}".sql.gz" && status=$? || status=$?
        if [[ $status -ne 0 ]]; then
            echo "Error occurred in database backup process. Exiting now..."
            exit 1
        else
            echo "PostgreSQL dump was successfully taken for the RDS endpoint ${PGSQL_HOST} and uploaded to s3://${BUCKET}/${FILE}-${command}.sql.gz"
            # Write the backup details to the inventory table in the central account
            echo "Writing to the DynamoDB inventory table"
            aws dynamodb put-item --table-name ${RDS_POSTGRES_DUMP_INVENTORY_TABLE} --region=$REGION --item '{ "accountId": { "S": "'"${TargetAccountId}"'" }, "dumpFileUrl": {"S": "'"s3://${BUCKET}/${FILE}-${command}.sql.gz"'" }, "DumpAvailableTime": {"S": "'"$(date +"%Y-%m-%d::%H::%M::%S") UTC"'"}}' && status=$? || status=$?
            if [[ $status -ne 0 ]]; then
                echo "Error occurred while putting the item to the DynamoDB inventory table. Exiting now..."
                exit 1
            else
                echo "Successfully wrote to the DynamoDB inventory table ${RDS_POSTGRES_DUMP_INVENTORY_TABLE}"
            fi
        fi
      done
else
    echo "Could not retrieve the secret from Secrets Manager. Exiting now..."
    exit 1
fi

exec "$@"
```

# Automate deployment of Node Termination Handler in Amazon EKS by using a CI/CD pipeline
<a name="automate-deployment-of-node-termination-handler-in-amazon-eks-by-using-a-ci-cd-pipeline"></a>

*Sandip Gangapadhyay, Sandeep Gawande, Viyoma Sachdeva, Pragtideep Singh, and John Vargas, Amazon Web Services*

## Summary
<a name="automate-deployment-of-node-termination-handler-in-amazon-eks-by-using-a-ci-cd-pipeline-summary"></a>

**Notice**: AWS CodeCommit is no longer available to new customers. Existing customers of AWS CodeCommit can continue to use the service as normal. [Learn more](https://aws.amazon.com/blogs/devops/how-to-migrate-your-aws-codecommit-repository-to-another-git-provider/)

On the Amazon Web Services (AWS) Cloud, you can use [AWS Node Termination Handler](https://github.com/aws/aws-node-termination-handler), an open-source project, to handle Amazon Elastic Compute Cloud (Amazon EC2) instance shutdown within Kubernetes gracefully. AWS Node Termination Handler helps to ensure that the Kubernetes control plane responds appropriately to events that can cause your EC2 instance to become unavailable. Such events include the following:
+ [EC2 instance scheduled maintenance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/monitoring-instances-status-check_sched.html)
+ [Amazon EC2 Spot Instance interruptions](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-interruptions.html)
+ [Auto Scaling group scale in](https://docs.aws.amazon.com/autoscaling/ec2/userguide/AutoScalingGroupLifecycle.html#as-lifecycle-scale-in)
+ [Auto Scaling group rebalancing](https://docs.aws.amazon.com/autoscaling/ec2/userguide/auto-scaling-benefits.html#AutoScalingBehavior.InstanceUsage) across Availability Zones
+ EC2 instance termination through the API or the AWS Management Console

If an event isn’t handled, your application code might not stop gracefully. It also might take longer to recover full availability, or it might accidentally schedule work to nodes that are going down. The `aws-node-termination-handler` (NTH) can operate in two different modes: Instance Metadata Service (IMDS) or Queue Processor. For more information about the two modes, see the [Readme file](https://github.com/aws/aws-node-termination-handler#readme).

This pattern uses AWS CodeCommit, and it automates the deployment of NTH by using Queue Processor through a continuous integration and continuous delivery (CI/CD) pipeline.

**Note**  
If you're using [EKS managed node groups](https://docs.aws.amazon.com/eks/latest/userguide/managed-node-groups.html), you don't need the `aws-node-termination-handler`.

## Prerequisites and limitations
<a name="automate-deployment-of-node-termination-handler-in-amazon-eks-by-using-a-ci-cd-pipeline-prereqs"></a>

**Prerequisites**
+ An active AWS account.
+ A web browser that is supported for use with the AWS Management Console. See the [list of supported browsers](https://aws.amazon.com/premiumsupport/knowledge-center/browsers-management-console/).
+ AWS Cloud Development Kit (AWS CDK) [installed](https://docs.aws.amazon.com/cdk/v2/guide/getting_started.html#getting_started_install).
+ `kubectl`, the Kubernetes command line tool, [installed](https://kubernetes.io/docs/tasks/tools/).
+ `eksctl`, the AWS Command Line Interface (AWS CLI) for Amazon Elastic Kubernetes Service (Amazon EKS), [installed](https://docs.aws.amazon.com/eks/latest/userguide/eksctl.html).
+ A running EKS cluster with version 1.20 or later.
+ A self-managed node group attached to the EKS cluster. To create an Amazon EKS cluster with a self-managed node group, run the following command.

  ```
  eksctl create cluster --managed=false --region <region> --name <cluster_name>
  ```

  For more information on `eksctl`, see the [eksctl documentation](https://eksctl.io/usage/creating-and-managing-clusters/).
+ AWS Identity and Access Management (IAM) OpenID Connect (OIDC) provider for your cluster. For more information, see [Creating an IAM OIDC provider for your cluster](https://docs.aws.amazon.com/eks/latest/userguide/enable-iam-roles-for-service-accounts.html).

**Limitations**
+ You must use an AWS Region that supports the Amazon EKS service.

**Product versions**
+ Kubernetes version 1.20 or later
+ `eksctl` version 0.107.0 or later
+ AWS CDK version 2.27.0 or later

## Architecture
<a name="automate-deployment-of-node-termination-handler-in-amazon-eks-by-using-a-ci-cd-pipeline-architecture"></a>

**Target technology stack**
+ A virtual private cloud (VPC)
+ An EKS cluster
+ Amazon Simple Queue Service (Amazon SQS)
+ IAM
+ Kubernetes

**Target architecture**

The following diagram shows a high-level view of the end-to-end steps when node termination starts.

![\[A VPC with an Auto Scaling group, an EKS cluster with Node Termination Handler, and an SQS queue.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/970dfb73-9526-4942-a974-e8eef6416596/images/9e0125ae-d55b-49dd-ae70-ccaedf03832a.png)


The workflow shown in the diagram consists of the following high-level steps:

1. The Auto Scaling group sends the EC2 instance-terminate lifecycle event to the SQS queue.

1. The NTH Pod monitors for new messages in the SQS queue.

1. The NTH Pod receives the new message and does the following:
   + Cordons the node so that no new pods are scheduled on it.
   + Drains the node so that the existing pods are evicted.
   + Sends a lifecycle hook signal to the Auto Scaling group so that the node can be terminated.
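The cordon and drain steps map to standard `kubectl` operations, and the final step is a `complete-lifecycle-action` call. This sketch prints the manual equivalents of what the NTH Pod does; the node name, Auto Scaling group name, hook name, and instance ID are all placeholders.

```
#!/bin/bash
# Manual equivalents of the NTH steps above. All names and IDs are placeholders.
NODE="ip-10-0-1-23.ec2.internal"
DRAIN_CMD="kubectl drain ${NODE} --ignore-daemonsets --delete-emptydir-data"

# Review, then run against your cluster and account:
echo "kubectl cordon ${NODE}"
echo "${DRAIN_CMD}"
echo "aws autoscaling complete-lifecycle-action --lifecycle-hook-name nth-hook --auto-scaling-group-name my-asg --lifecycle-action-result CONTINUE --instance-id i-0abc1234def567890"
```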

**Automation and scale**
+ Code is managed and deployed by AWS CDK, backed by AWS CloudFormation nested stacks.
+ The [Amazon EKS control plane](https://docs.aws.amazon.com/eks/latest/userguide/disaster-recovery-resiliency.html) runs across multiple Availability Zones to ensure high availability.
+ For [automatic scaling](https://docs.aws.amazon.com/eks/latest/userguide/autoscaling.html), Amazon EKS supports the Kubernetes [Cluster Autoscaler](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler) and [Karpenter](https://karpenter.sh/).

## Tools
<a name="automate-deployment-of-node-termination-handler-in-amazon-eks-by-using-a-ci-cd-pipeline-tools"></a>

**AWS services**
+ [AWS Cloud Development Kit (AWS CDK)](https://docs.aws.amazon.com/cdk/latest/guide/home.html) is a software development framework that helps you define and provision AWS Cloud infrastructure in code.
+ [AWS CodeBuild](https://docs.aws.amazon.com/codebuild/latest/userguide/welcome.html) is a fully managed build service that helps you compile source code, run unit tests, and produce artifacts that are ready to deploy.
+ [AWS CodeCommit](https://docs.aws.amazon.com/codecommit/latest/userguide/welcome.html) is a version control service that helps you privately store and manage Git repositories, without needing to manage your own source control system.
+ [AWS CodePipeline](https://docs.aws.amazon.com/codepipeline/latest/userguide/welcome.html) helps you quickly model and configure the different stages of a software release and automate the steps required to release software changes continuously.
+ [Amazon Elastic Kubernetes Service (Amazon EKS)](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html) helps you run Kubernetes on AWS without needing to install or maintain your own Kubernetes control plane or nodes.
+ [Amazon EC2 Auto Scaling](https://docs.aws.amazon.com/autoscaling/ec2/userguide/what-is-amazon-ec2-auto-scaling.html) helps you maintain application availability and allows you to automatically add or remove Amazon EC2 instances according to conditions you define.
+ [Amazon Simple Queue Service (Amazon SQS)](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/welcome.html) provides a secure, durable, and available hosted queue that helps you integrate and decouple distributed software systems and components.

**Other tools**
+ [kubectl](https://kubernetes.io/docs/reference/kubectl/kubectl/) is a Kubernetes command line tool for running commands against Kubernetes clusters. You can use kubectl to deploy applications, inspect and manage cluster resources, and view logs.

**Code**

The code for this pattern is available in the [deploy-nth-to-eks](https://github.com/aws-samples/deploy-nth-to-eks) repo on GitHub.com. The code repo contains the following files and folders.
+ `nth` folder – The Helm chart, values files, and the scripts to scan and deploy the AWS CloudFormation template for Node Termination Handler.
+ `config/config.json` – The configuration parameter file for the application. This file contains all the parameters needed to deploy the AWS CDK application.
+ `cdk` – AWS CDK source code.
+ `setup.sh` – The script used to deploy the AWS CDK application to create the required CI/CD pipeline and other required resources.
+ `uninstall.sh` – The script used to clean up the resources.

To use the example code, follow the instructions in the *Epics* section.

## Best practices
<a name="automate-deployment-of-node-termination-handler-in-amazon-eks-by-using-a-ci-cd-pipeline-best-practices"></a>

For best practices when automating AWS Node Termination Handler, see the following:
+ [EKS Best Practices Guides](https://aws.github.io/aws-eks-best-practices/)
+ [Node Termination Handler - Configuration](https://github.com/aws/aws-node-termination-handler/tree/main/config/helm/aws-node-termination-handler)

## Epics
<a name="automate-deployment-of-node-termination-handler-in-amazon-eks-by-using-a-ci-cd-pipeline-epics"></a>

### Set up your environment
<a name="set-up-your-environment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clone the repo. | To clone the repo by using SSH (Secure Shell), run the following command.<pre>git clone git@github.com:aws-samples/deploy-nth-to-eks.git</pre>To clone the repo by using HTTPS, run the following command.<pre>git clone https://github.com/aws-samples/deploy-nth-to-eks.git</pre>Cloning the repo creates a folder named `deploy-nth-to-eks`. Change to that directory.<pre>cd deploy-nth-to-eks</pre> | App developer, AWS DevOps, DevOps engineer | 
| Set the kubeconfig file. | Set your AWS credentials in your terminal and confirm that you have rights to assume the cluster role. You can use the following example code.<pre>aws eks update-kubeconfig --name <Cluster_Name> --region <region> --role-arn <Role_ARN></pre> | AWS DevOps, DevOps engineer, App developer | 

### Deploy the CI/CD pipeline
<a name="deploy-the-ci-cd-pipeline"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Set up the parameters. | In the `config/config.json` file, set up the following required parameters.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-deployment-of-node-termination-handler-in-amazon-eks-by-using-a-ci-cd-pipeline.html) | App developer, AWS DevOps, DevOps engineer | 
| Create the CI/CD pipeline to deploy NTH. | Run the setup.sh script.<pre>./setup.sh</pre>The script deploys the AWS CDK application, which creates the CodeCommit repo with example code, the pipeline, and the CodeBuild projects based on the user input parameters in the `config/config.json` file. The script prompts for your password because it installs npm packages by using the sudo command. | App developer, AWS DevOps, DevOps engineer | 
| Review the CI/CD pipeline. | Open the AWS Management Console, and review the following resources created in the stack.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-deployment-of-node-termination-handler-in-amazon-eks-by-using-a-ci-cd-pipeline.html) After the pipeline runs successfully, the Helm release `aws-node-termination-handler` is installed in the EKS cluster. Also, a Pod named `aws-node-termination-handler` is running in the `kube-system` namespace in the cluster. | App developer, AWS DevOps, DevOps engineer | 

### Test NTH deployment
<a name="test-nth-deployment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Simulate an Auto Scaling group scale-in event. | To simulate an automatic scaling scale-in event, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-deployment-of-node-termination-handler-in-amazon-eks-by-using-a-ci-cd-pipeline.html) |  | 
| Review the logs. | During the scale-in event, the NTH Pod will cordon and drain the corresponding worker node (the EC2 instance that will be terminated as part of the scale-in event). To check the logs, use the code in the *Additional information* section. | App developer, AWS DevOps, DevOps engineer | 
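One way to exercise both stories is to terminate a single instance through the Auto Scaling API and then tail the NTH logs while the node is cordoned and drained. The instance ID is a placeholder, and the label selector is an assumption about the Helm chart's default labels; the commands are printed for review rather than executed.

```
#!/bin/bash
# Sketch: trigger a scale-in by terminating one instance, then follow the NTH logs.
# The instance ID and label selector are placeholders/assumptions.
INSTANCE_ID="i-0abc1234def567890"
echo "aws autoscaling terminate-instance-in-auto-scaling-group --instance-id ${INSTANCE_ID} --should-decrement-desired-capacity"
echo "kubectl logs -n kube-system -l app.kubernetes.io/name=aws-node-termination-handler --follow"
```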

### Clean up
<a name="clean-up"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clean up all AWS resources. | To clean up the resources created by this pattern, run the following command.<pre>./uninstall.sh</pre>This command deletes the CloudFormation stack, which removes all the resources that this pattern created. | DevOps engineer | 

## Troubleshooting
<a name="automate-deployment-of-node-termination-handler-in-amazon-eks-by-using-a-ci-cd-pipeline-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| The npm registry isn’t set correctly. | During installation, the setup script runs `npm install` to download all the required packages. If you see a message that says "Cannot find module" during installation, the npm registry might not be set correctly. To see the current registry setting, run the following command.<pre>npm config get registry</pre>To set the registry to `https://registry.npmjs.org/`, run the following command.<pre>npm config set registry https://registry.npmjs.org</pre> | 
| Delay SQS message delivery. | As part of your troubleshooting, if you want to delay SQS message delivery to the NTH Pod, you can adjust the SQS delivery delay parameter. For more information, see [Amazon SQS delay queues](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-delay-queues.html). | 

## Related resources
<a name="automate-deployment-of-node-termination-handler-in-amazon-eks-by-using-a-ci-cd-pipeline-resources"></a>
+ [AWS Node Termination Handler source code](https://github.com/aws/aws-node-termination-handler)
+ [EC2 workshop](https://ec2spotworkshops.com/using_ec2_spot_instances_with_eks/070_selfmanagednodegroupswithspot/deployhandler.html)
+ [AWS CodePipeline](https://aws.amazon.com/codepipeline/)
+ [Amazon Elastic Kubernetes Service (Amazon EKS)](https://aws.amazon.com/eks/)
+ [AWS Cloud Development Kit](https://aws.amazon.com/cdk/)
+ [AWS CloudFormation](https://aws.amazon.com/cloudformation/)

## Additional information
<a name="automate-deployment-of-node-termination-handler-in-amazon-eks-by-using-a-ci-cd-pipeline-additional"></a>

1. Find the NTH Pod name.

```
kubectl get pods -n kube-system |grep aws-node-termination-handler
aws-node-termination-handler-65445555-kbqc7   1/1     Running   0          26m
```

2. Check the logs. An example log looks like the following. It shows that the node has been cordoned and drained before sending the Auto Scaling group lifecycle hook completion signal.

```
kubectl -n kube-system logs aws-node-termination-handler-65445555-kbqc7
2022/07/17 20:20:43 INF Adding new event to the event store event={"AutoScalingGroupName":"eksctl-my-cluster-target-nodegroup-ng-10d99c89-NodeGroup-ZME36IGAP7O1","Description":"ASG Lifecycle Termination event received. Instance will be interrupted at 2022-07-17 20:20:42.702 +0000 UTC \n","EndTime":"0001-01-01T00:00:00Z","EventID":"asg-lifecycle-term-33383831316538382d353564362d343332362d613931352d383430666165636334333564","InProgress":false,"InstanceID":"i-0409f2a9d3085b80e","IsManaged":true,"Kind":"SQS_TERMINATE","NodeLabels":null,"NodeName":"ip-192-168-75-60.us-east-2.compute.internal","NodeProcessed":false,"Pods":null,"ProviderID":"aws:///us-east-2c/i-0409f2a9d3085b80e","StartTime":"2022-07-17T20:20:42.702Z","State":""}
2022/07/17 20:20:44 INF Requesting instance drain event-id=asg-lifecycle-term-33383831316538382d353564362d343332362d613931352d383430666165636334333564 instance-id=i-0409f2a9d3085b80e kind=SQS_TERMINATE node-name=ip-192-168-75-60.us-east-2.compute.internal provider-id=aws:///us-east-2c/i-0409f2a9d3085b80e
2022/07/17 20:20:44 INF Pods on node node_name=ip-192-168-75-60.us-east-2.compute.internal pod_names=["aws-node-qchsw","aws-node-termination-handler-65445555-kbqc7","kube-proxy-mz5x5"]
2022/07/17 20:20:44 INF Draining the node
2022/07/17 20:20:44 ??? WARNING: ignoring DaemonSet-managed Pods: kube-system/aws-node-qchsw, kube-system/kube-proxy-mz5x5
2022/07/17 20:20:44 INF Node successfully cordoned and drained node_name=ip-192-168-75-60.us-east-2.compute.internal reason="ASG Lifecycle Termination event received. Instance will be interrupted at 2022-07-17 20:20:42.702 +0000 UTC \n"
2022/07/17 20:20:44 INF Completed ASG Lifecycle Hook (NTH-K8S-TERM-HOOK) for instance i-0409f2a9d3085b80e
```

# Automatically build and deploy a Java application to Amazon EKS using a CI/CD pipeline
<a name="automatically-build-and-deploy-a-java-application-to-amazon-eks-using-a-ci-cd-pipeline"></a>

*Mahesh Raghunandanan, Jomcy Pappachen, and James Radtke, Amazon Web Services*

## Summary
<a name="automatically-build-and-deploy-a-java-application-to-amazon-eks-using-a-ci-cd-pipeline-summary"></a>

This pattern describes how to create a continuous integration and continuous delivery (CI/CD) pipeline that automatically builds and deploys a Java application, with recommended DevSecOps practices, to an Amazon Elastic Kubernetes Service (Amazon EKS) cluster on the AWS Cloud. This pattern uses a greeting application that is developed with the Spring Boot Java framework and built with Apache Maven.

You can use this pattern’s approach to build the code for a Java application, package the application artifacts as a Docker image, security scan the image, and upload the image as a workload container on Amazon EKS. This pattern's approach is useful if you want to migrate from a tightly coupled monolithic architecture to a microservices architecture. The approach also helps you to monitor and manage the entire lifecycle of a Java application, which ensures a higher level of automation and helps avoid errors or bugs.

## Prerequisites and limitations
<a name="automatically-build-and-deploy-a-java-application-to-amazon-eks-using-a-ci-cd-pipeline-prereqs"></a>

**Prerequisites**
+ An active AWS account.
+ AWS Command Line Interface (AWS CLI) version 2, installed and configured. For more information about this, see [Installing or updating to the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html) in the AWS CLI documentation.

  AWS CLI version 2 must be configured with the same AWS Identity and Access Management (IAM) role that creates the Amazon EKS cluster, because only that role is authorized to add other IAM roles to the `aws-auth` `ConfigMap`. For information and steps to configure AWS CLI, see [Configuring settings](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html) in the AWS CLI documentation.
+ IAM roles and permissions with full access to AWS CloudFormation. For more information about this, see [Controlling access with IAM](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-iam-template.html) in the CloudFormation documentation.
+ An existing Amazon EKS cluster, with details of the IAM role name and the Amazon Resource Name (ARN) of the IAM role for worker nodes in the EKS cluster.
+ Kubernetes Cluster Autoscaler, installed and configured in your Amazon EKS cluster. For more information, see [Scale cluster compute with Karpenter and Cluster Autoscaler](https://docs.aws.amazon.com/eks/latest/userguide/cluster-autoscaler.html) in the Amazon EKS documentation. 
+ Access to code in the GitHub repository.

**Important**  
AWS Security Hub CSPM is enabled as part of the CloudFormation templates that are included in the code for this pattern. By default, after Security Hub CSPM is enabled, it comes with a 30-day free trial. After the trial, there is a cost associated with this AWS service. For more information about pricing, see [AWS Security Hub CSPM pricing](https://aws.amazon.com/security-hub/pricing/).

**Product versions**
+ Helm version 3.4.2 or later
+ Apache Maven version 3.6.3 or later
+ BridgeCrew Checkov version 2.2 or later
+ Aqua Security Trivy version 0.37 or later

## Architecture
<a name="automatically-build-and-deploy-a-java-application-to-amazon-eks-using-a-ci-cd-pipeline-architecture"></a>

**Technology stack**
+ AWS CodeBuild
+ AWS CodeCommit
+ Amazon CodeGuru
+ AWS CodePipeline
+ Amazon Elastic Container Registry (Amazon ECR)
+ Amazon EKS
+ Amazon EventBridge
+ AWS Security Hub CSPM
+ Amazon Simple Notification Service (Amazon SNS)

**Target architecture**

![\[Workflow for deploying a Java application to Amazon EKS.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/95a5b5c2-d7fb-41eb-9089-455318c0d585/images/4f5fd8c2-2b6d-4945-aa64-fcf317521711.png)


The diagram shows the following workflow:

1. The developer updates the Java application code in the base branch of the CodeCommit repository and creates a pull request (PR).

1. As soon as the PR is submitted, Amazon CodeGuru Reviewer automatically reviews the code, analyzes it based on best practices for Java, and gives recommendations to the developer.

1. After the PR is merged to the base branch, an Amazon EventBridge event is created.

1. The EventBridge event initiates the CodePipeline pipeline.

1. CodePipeline runs the CodeSecurity Scan stage (continuous security).

1. AWS CodeBuild starts the security scan process in which the Dockerfile and Kubernetes deployment Helm files are scanned by using Checkov, and application source code is scanned based on incremental code changes. The application source code scan is performed by the [CodeGuru Reviewer Command Line Interface (CLI) wrapper](https://github.com/aws/aws-codeguru-cli).
**Note**  
As of November 7, 2025, you can't create new repository associations in Amazon CodeGuru Reviewer. To learn about services with capabilities similar to CodeGuru Reviewer, see [Amazon CodeGuru Reviewer availability change](https://docs.aws.amazon.com/codeguru/latest/reviewer-ug/codeguru-reviewer-availability-change.html) in the CodeGuru Reviewer documentation. 

1. If the security scan stage is successful, the Build stage (continuous integration) is initiated.

1. In the Build stage, CodeBuild builds the artifact, packages the artifact to a Docker image, scans the image for security vulnerabilities by using Aqua Security Trivy, and stores the image in Amazon ECR.

1. The vulnerabilities detected from step 8 are uploaded to Security Hub CSPM for further analysis by developers or engineers. Security Hub CSPM provides an overview and recommendations for remediating the vulnerabilities.

1. Email notifications of sequential phases within the CodePipeline pipeline are sent through Amazon SNS.

1. After the continuous integration phases are complete, CodePipeline enters the Deploy stage (continuous delivery).

1. The Docker image is deployed to Amazon EKS as a container workload (pod) by using Helm charts.

1. The application pod is configured with Amazon CodeGuru Profiler agent, which sends the profiling data of the application (CPU, heap usage, and latency) to CodeGuru Profiler, which helps developers understand the behavior of the application.
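
The pattern's CloudFormation templates create the EventBridge rule in steps 3 and 4, so you don't define it yourself. As a hedged sketch of what such a rule looks like (the rule name and branch name are placeholders), a merge to the base branch can be matched with a CodeCommit repository state-change pattern:

```shell
# Illustrative only: the actual rule is created by the pattern's templates.
# The rule name and the branch name ("main") are placeholders.
cat > event-pattern.json <<'EOF'
{
  "source": ["aws.codecommit"],
  "detail-type": ["CodeCommit Repository State Change"],
  "detail": {
    "event": ["referenceUpdated"],
    "referenceName": ["main"]
  }
}
EOF

aws events put-rule \
  --name start-java-eks-pipeline \
  --event-pattern file://event-pattern.json
```

A target (the CodePipeline pipeline ARN plus an IAM role that allows `codepipeline:StartPipelineExecution`) would then be attached to the rule.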

## Tools
<a name="automatically-build-and-deploy-a-java-application-to-amazon-eks-using-a-ci-cd-pipeline-tools"></a>

**AWS services**
+ [CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) helps you set up AWS resources, provision them quickly and consistently, and manage them throughout their lifecycle across AWS accounts and Regions.
+  [AWS CodeBuild](https://docs.aws.amazon.com/codebuild/latest/userguide/welcome.html) is a fully managed build service that helps you compile source code, run unit tests, and produce artifacts that are ready to deploy.
+ [AWS CodeCommit](https://docs.aws.amazon.com/codecommit/latest/userguide/welcome.html) is a version control service that helps you privately store and manage Git repositories, without needing to manage your own source control system.
+ [Amazon CodeGuru Profiler](https://docs.aws.amazon.com/codeguru/latest/profiler-ug/what-is-codeguru-profiler.html) collects runtime performance data from your live applications, and provides recommendations that can help you fine-tune your application performance.
+ [AWS CodePipeline](https://docs.aws.amazon.com/codepipeline/latest/userguide/welcome.html) helps you quickly model and configure the different stages of a software release and automate the steps required to release software changes continuously.
+ [Amazon Elastic Container Registry (Amazon ECR)](https://docs.aws.amazon.com/AmazonECR/latest/userguide/what-is-ecr.html) is a managed container image registry service that’s secure, scalable, and reliable.
+ [Amazon Elastic Kubernetes Service (Amazon EKS)](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html) helps you run Kubernetes on AWS without needing to install or maintain your own Kubernetes control plane or nodes.
+ [Amazon EventBridge](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-what-is.html) is a serverless event bus service that helps you connect your applications with real-time data from a variety of sources, including AWS Lambda functions, HTTP invocation endpoints using API destinations, or event buses in other AWS accounts.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.
+ [AWS Security Hub CSPM](https://docs.aws.amazon.com/securityhub/latest/userguide/what-is-securityhub.html) provides a comprehensive view of your security state on AWS. It also helps you check your AWS environment against security industry standards and best practices.
+ [Amazon Simple Notification Service (Amazon SNS)](https://docs.aws.amazon.com/sns/latest/dg/welcome.html) helps you coordinate and manage the exchange of messages between publishers and clients, including web servers and email addresses.
+ [Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) is a cloud-based object storage service that helps you store, protect, and retrieve any amount of data.

**Other services**
+ [Helm](https://helm.sh/docs/) is an open-source package manager for Kubernetes.
+ [Apache Maven](https://maven.apache.org/) is a software project management and comprehension tool.
+ [BridgeCrew Checkov](https://www.checkov.io/1.Welcome/What%20is%20Checkov.html) is a static code analysis tool for scanning infrastructure as code (IaC) files for misconfigurations that might lead to security or compliance problems.
+ [Aqua Security Trivy](https://github.com/aquasecurity/trivy) is a comprehensive scanner for vulnerabilities in container images, file systems, and Git repositories, in addition to configuration issues.

**Code**

The code for this pattern is available in the GitHub [aws-codepipeline-devsecops-amazoneks](https://github.com/aws-samples/aws-codepipeline-devsecops-amazoneks) repository.

## Best practices
<a name="automatically-build-and-deploy-a-java-application-to-amazon-eks-using-a-ci-cd-pipeline-best-practices"></a>
+ This pattern follows [IAM security best practices](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html) to apply the principle of least privilege for IAM entities across all phases of the solution. If you want to extend the solution with additional AWS services or third-party tools, we recommend that you review the section on [applying least-privilege permissions](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#grant-least-privilege) in the IAM documentation.
+ If you have multiple Java applications, we recommend that you create separate CI/CD pipelines for each application.
+ If you have a monolith application, we recommend that you break the application into microservices where possible. Microservices are more flexible, they make it easier to deploy applications as containers, and they provide better visibility into the overall build and deployment of the application.

## Epics
<a name="automatically-build-and-deploy-a-java-application-to-amazon-eks-using-a-ci-cd-pipeline-epics"></a>

### Set up the environment
<a name="set-up-the-environment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clone the GitHub repository. | To clone the repository, run the following command.<pre>git clone https://github.com/aws-samples/aws-codepipeline-devsecops-amazoneks</pre> | App developer, DevOps engineer | 
| Create an S3 bucket and upload the code. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automatically-build-and-deploy-a-java-application-to-amazon-eks-using-a-ci-cd-pipeline.html) | AWS DevOps, Cloud administrator, DevOps engineer | 
| Create a CloudFormation stack. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automatically-build-and-deploy-a-java-application-to-amazon-eks-using-a-ci-cd-pipeline.html) | AWS DevOps, DevOps engineer | 
| Validate the CloudFormation stack deployment. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automatically-build-and-deploy-a-java-application-to-amazon-eks-using-a-ci-cd-pipeline.html) | AWS DevOps, DevOps engineer | 
| Delete the S3 bucket. | Empty and delete the S3 bucket that you created earlier. For more information, see [Deleting a bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/delete-bucket.html) in the Amazon S3 documentation. | AWS DevOps, DevOps engineer | 
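
The detailed steps are behind the linked pattern page. As a minimal sketch, assuming the bucket name, archive name, stack name, and template path below are placeholders (the actual values come from the pattern's instructions), the upload and stack creation look like this:

```shell
# Illustrative only: all names and paths below are placeholders.
BUCKET=my-devsecops-code-bucket-111122223333

# Create the S3 bucket and upload the cloned repository code as a zip archive.
aws s3 mb s3://"$BUCKET"
zip -r code.zip aws-codepipeline-devsecops-amazoneks
aws s3 cp code.zip s3://"$BUCKET"/code.zip

# Create the CloudFormation stack from the pattern's template.
aws cloudformation create-stack \
  --stack-name java-eks-cicd \
  --template-body file://cf_template.yaml \
  --capabilities CAPABILITY_NAMED_IAM
```

You can watch stack progress with `aws cloudformation describe-stacks --stack-name java-eks-cicd --query 'Stacks[0].StackStatus'`.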

### Configure the Helm charts
<a name="configure-the-helm-charts"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Configure the Helm charts of your Java application. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automatically-build-and-deploy-a-java-application-to-amazon-eks-using-a-ci-cd-pipeline.html) | DevOps engineer | 
| Validate Helm charts for syntax errors. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automatically-build-and-deploy-a-java-application-to-amazon-eks-using-a-ci-cd-pipeline.html) | DevOps engineer | 
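
The specific chart layout is described on the linked pattern page. As a generic sketch (the chart path below is a placeholder), Helm's built-in linting and local rendering catch most syntax errors before the pipeline runs:

```shell
# Illustrative only: replace the chart path with your application's chart.
CHART=./helm_charts/my-java-app

# Lint the chart for syntax and structural errors.
helm lint "$CHART"

# Render the templates locally to inspect the manifests that would be applied.
helm template "$CHART"
```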

### Set up the Java CI/CD pipeline
<a name="set-up-the-java-ci-cd-pipeline"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the CI/CD pipeline. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automatically-build-and-deploy-a-java-application-to-amazon-eks-using-a-ci-cd-pipeline.html) | AWS DevOps | 

### Activate integration between Security Hub CSPM and Aqua Security
<a name="activate-integration-between-ash-and-aqua-security"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Turn on Aqua Security integration. | This step is required for uploading the Docker image vulnerability findings reported by Trivy to Security Hub CSPM. Because CloudFormation doesn’t support Security Hub CSPM integrations, this process must be done manually.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automatically-build-and-deploy-a-java-application-to-amazon-eks-using-a-ci-cd-pipeline.html) | AWS administrator, DevOps engineer | 
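
As a hedged alternative to the console steps, assuming the Aqua Security product ARN follows the standard Security Hub product ARN format (verify the ARN on the Security Hub CSPM **Integrations** page), the integration can also be enabled from the AWS CLI:

```shell
# Illustrative only: replace the Region with the one where
# Security Hub CSPM is enabled, and verify the product ARN in the console.
REGION=us-east-1
PRODUCT_ARN="arn:aws:securityhub:${REGION}::product/aquasecurity/aquasecurity"

aws securityhub enable-import-findings-for-product \
  --product-arn "$PRODUCT_ARN"
```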

### Configure CodeBuild to run Helm or kubectl commands
<a name="configure-acb-to-run-helm-or-kubectl-commands"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Allow CodeBuild to run Helm or kubectl commands in the Amazon EKS cluster. | For CodeBuild to be authorized to run Helm or `kubectl` commands against the Amazon EKS cluster, you must add the relevant IAM roles to the `aws-auth` `ConfigMap`. In this case, add the ARN of the IAM role `EksCodeBuildkubeRoleARN`, which is the IAM role created for the CodeBuild service to access the Amazon EKS cluster and deploy workloads on it. This is a one-time activity. The following procedure must be completed before the deployment approval stage in CodePipeline.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automatically-build-and-deploy-a-java-application-to-amazon-eks-using-a-ci-cd-pipeline.html)After this procedure, the `aws-auth` `ConfigMap` is configured, and access is granted. | DevOps | 
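
The exact mapping steps are behind the linked pattern page. As a sketch, assuming you use eksctl (the cluster name, account ID, role name, and Kubernetes group below are placeholders; in practice, map the role to the least-privileged group that can deploy your workloads):

```shell
# Illustrative only: cluster name, role ARN, group, and username are placeholders.
eksctl create iamidentitymapping \
  --cluster my-eks-cluster \
  --arn arn:aws:iam::111122223333:role/EksCodeBuildkubeRole \
  --group system:masters \
  --username codebuild

# Confirm that the mapping appears in the aws-auth ConfigMap.
kubectl describe configmap aws-auth -n kube-system
```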

### Validate the CI/CD pipeline
<a name="validate-the-ci-cd-pipeline"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Verify that the CI/CD pipeline automatically initiates. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automatically-build-and-deploy-a-java-application-to-amazon-eks-using-a-ci-cd-pipeline.html)For more information about starting the pipeline by using CodePipeline, see [Start a pipeline in CodePipeline](https://docs.aws.amazon.com/codepipeline/latest/userguide/pipelines-about-starting.html), [Start a pipeline manually](https://docs.aws.amazon.com/codepipeline/latest/userguide/pipelines-rerun-manually.html), and [Start a pipeline on a schedule](https://docs.aws.amazon.com/codepipeline/latest/userguide/pipelines-trigger-source-schedule.html) in the CodePipeline documentation. | DevOps | 
| Approve the deployment. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automatically-build-and-deploy-a-java-application-to-amazon-eks-using-a-ci-cd-pipeline.html) | DevOps | 
| Validate application profiling. | After the deployment is complete and the application pod is deployed in Amazon EKS, the Amazon CodeGuru Profiler agent that is configured in the application sends profiling data for the application (CPU, heap summary, latency, and bottlenecks) to CodeGuru Profiler. For the initial deployment of an application, CodeGuru Profiler takes about 15 minutes to visualize the profiling data. | AWS DevOps | 

## Related resources
<a name="automatically-build-and-deploy-a-java-application-to-amazon-eks-using-a-ci-cd-pipeline-resources"></a>
+ [AWS CodePipeline documentation](https://docs.aws.amazon.com/codepipeline/index.html)
+ [Scanning images with Trivy in an AWS CodePipeline](https://aws.amazon.com/blogs/containers/scanning-images-with-trivy-in-an-aws-codepipeline/) (AWS blog post)
+ [Improving your Java applications using Amazon CodeGuru Profiler](https://aws.amazon.com/blogs/devops/improving-your-java-applications-using-amazon-codeguru-profiler) (AWS blog post)
+ [AWS Security Finding Format (ASFF) syntax](https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-findings-format-syntax.html)
+ [Amazon EventBridge event patterns](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-event-patterns.html)
+ [Helm upgrade](https://helm.sh/docs/helm/helm_upgrade/)

## Additional information
<a name="automatically-build-and-deploy-a-java-application-to-amazon-eks-using-a-ci-cd-pipeline-additional"></a>
+ CodeGuru Profiler should not be confused with the AWS X-Ray service in terms of functionality. We recommend that you use CodeGuru Profiler to identify the most expensive lines of code that might cause bottlenecks or security issues, and fix them before they become a potential risk. The X-Ray service is for application performance monitoring.
+ In this pattern, event rules are associated with the default event bus. If needed, you can extend the pattern to use a custom event bus.
+ This pattern uses CodeGuru Reviewer as a static application security testing (SAST) tool for application code. You can also use this pipeline for other tools, such as SonarQube or Checkmarx. You can add the scan setup instructions for any of these tools to `buildspec/buildspec_secscan.yaml` to replace the CodeGuru scan instructions.
**Note**  
As of November 7, 2025, you can't create new repository associations in Amazon CodeGuru Reviewer. To learn about services with capabilities similar to CodeGuru Reviewer, see [Amazon CodeGuru Reviewer availability change](https://docs.aws.amazon.com/codeguru/latest/reviewer-ug/codeguru-reviewer-availability-change.html) in the CodeGuru Reviewer documentation.

# Copy Amazon ECR container images across AWS accounts and AWS Regions
<a name="copy-ecr-container-images-across-accounts-regions"></a>

*Faisal Shahdad, Amazon Web Services*

## Summary
<a name="copy-ecr-container-images-across-accounts-regions-summary"></a>

This pattern shows you how to use a serverless approach to replicate tagged images from existing Amazon Elastic Container Registry (Amazon ECR) repositories to other AWS accounts and AWS Regions. The solution uses AWS Step Functions to manage the replication workflow and AWS Lambda functions to copy large container images.

Amazon ECR provides native [cross-Region](https://docs.aws.amazon.com/AmazonECR/latest/userguide/registry-settings-examples.html#registry-settings-examples-crr-single) and [cross-account](https://docs.aws.amazon.com/AmazonECR/latest/userguide/registry-settings-examples.html#registry-settings-examples-crossaccount) replication features that replicate container images across Regions and accounts. However, these features replicate images only from the moment replication is turned on. There is no built-in mechanism to replicate images that already exist to other Regions and accounts. 

This pattern helps artificial intelligence (AI) teams distribute containerized machine learning (ML) models, frameworks (for example, PyTorch, TensorFlow, and Hugging Face), and dependencies to other accounts and Regions. This can help you overcome service limits and optimize GPU compute resources. You can also selectively replicate Amazon ECR repositories from specific source accounts and Regions. For more information, see [Cross-Region replication in Amazon ECR has landed](https://aws.amazon.com/blogs/containers/cross-region-replication-in-amazon-ecr-has-landed/).

## Prerequisites and limitations
<a name="copy-ecr-container-images-across-accounts-regions-prereqs"></a>

**Prerequisites**
+ Two or more active AWS accounts (one source account and one destination account, minimally)
+ Appropriate AWS Identity and Access Management (IAM) permissions in all accounts
+ Docker for building the Lambda container image
+ AWS Command Line Interface (AWS CLI) configured for all accounts

**Limitations**
+ **Untagged image exclusion –** The solution copies only container images that have explicit tags. It skips untagged images that exist with `SHA256` digests.
+ **Lambda execution timeout constraints –** AWS Lambda is limited to a maximum 15-minute execution timeout, which may be insufficient to copy large container images or repositories.
+ **Manual container image management –** Changes to the `crane-app.py` Python code require rebuilding and redeploying the Lambda container image.
+ **Limited parallel processing capacity –** The `MaxConcurrency` state setting limits how many repositories you can copy at the same time. However, you can modify this setting in the source account’s AWS CloudFormation template. Note that higher concurrency values can cause you to exceed service rate limits and account-level Lambda execution quotas.
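
To see which images the solution would copy from a given repository (only the explicitly tagged ones), you can filter by tag status. This is a minimal sketch; the repository name and profile are placeholders:

```shell
# Illustrative only: list tagged images in a repository. Untagged,
# digest-only images are excluded, mirroring what the solution copies.
aws ecr list-images \
  --repository-name my-repo \
  --filter tagStatus=TAGGED \
  --profile source-account
```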

## Architecture
<a name="copy-ecr-container-images-across-accounts-regions-architecture"></a>

**Target stack**

The pattern has four main components:
+ **Source account infrastructure –** CloudFormation template that creates the orchestration components
+ **Destination account infrastructure –** CloudFormation template that creates cross-account access roles
+ **Lambda function –** Python-based function that uses Crane for efficient image copying
+ **Container image –** Docker container that packages the Lambda function with required tools

**Target architecture**

![\[alt text not found\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/787185e7-664b-4ed8-b30f-1d9507f13377/images/cc7d9823-3dc8-4090-a203-910b1ac4447c.png)


**Step Functions workflow**

The Step Functions state machine orchestrates the following steps, as shown in the diagram that follows:
+ `PopulateRepositoryList` – Scans the Amazon ECR repositories and populates an Amazon DynamoDB table
+ `GetRepositoryList` – Retrieves the unique repository list from DynamoDB
+ `DeduplicateRepositories` – Ensures that no repository is processed more than once
+ `CopyRepositories` – Copies the repositories in parallel
+ `NotifySuccess`/`NotifyFailure` – Sends Amazon Simple Notification Service (Amazon SNS) notifications based on the execution outcome

![\[alt text not found\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/787185e7-664b-4ed8-b30f-1d9507f13377/images/1b740084-ba2b-4956-aa12-ebbf52be5e7d.png)
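
After the source-account stack is deployed, a replication run can be started and monitored from the AWS CLI. This is a sketch; the state machine ARN below is a placeholder that you would take from the source account's stack outputs:

```shell
# Illustrative only: the state machine ARN and profile are placeholders.
EXECUTION_ARN=$(aws stepfunctions start-execution \
  --state-machine-arn arn:aws:states:us-east-1:111122223333:stateMachine:ecr-copy \
  --query executionArn --output text \
  --profile source-account)

# Poll the execution status (RUNNING, SUCCEEDED, or FAILED).
aws stepfunctions describe-execution \
  --execution-arn "$EXECUTION_ARN" \
  --query status \
  --profile source-account
```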


## Tools
<a name="copy-ecr-container-images-across-accounts-regions-tools"></a>

**Amazon tools**
+ [Amazon CloudWatch](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/WhatIsCloudWatch.html) helps you monitor the metrics of your AWS resources and the applications you run on AWS in real time.
+ [Amazon DynamoDB](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Introduction.html) is a fully managed NoSQL database service that provides fast, predictable, and scalable performance.
+ [Amazon Simple Notification Service (Amazon SNS)](https://docs.aws.amazon.com/sns/latest/dg/welcome.html) helps you coordinate and manage the exchange of messages between publishers and clients, including web servers and email addresses.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a compute service that helps you run code without needing to provision or manage servers. It runs your code only when needed and scales automatically, so you pay only for the compute time that you use.
+ [AWS Step Functions](https://docs.aws.amazon.com/step-functions/latest/dg/welcome.html) is a serverless orchestration service that helps you combine Lambda functions and other AWS services to build business-critical applications.

**Other tools**
+ [Crane](https://github.com/google/go-containerregistry/blob/main/cmd/crane/README.md) is a command-line tool for interacting with remote container images and registries. The solution's Lambda function uses Crane to copy images between registries without requiring a local Docker daemon.
+ [Docker](https://www.docker.com/) is a set of platform as a service (PaaS) products that use virtualization at the operating system level to deliver software in containers.

**Code repository**
+ The code for this pattern is available in the GitHub [sample-ecr-copy repository](https://github.com/aws-samples/sample-ecr-copy). You can use the CloudFormation template from the repository to create the underlying resources.

## Best practices
<a name="copy-ecr-container-images-across-accounts-regions-best-practices"></a>

Follow the principle of least privilege and grant the minimum permissions required to perform a task. For more information, see [Grant least privilege](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html#grant-least-priv) and [Security best practices](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html) in the IAM documentation.

## Epics
<a name="copy-ecr-container-images-across-accounts-regions-epics"></a>

### Prepare your environment
<a name="prepare-your-environment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Configure AWS CLI profiles. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/copy-ecr-container-images-across-accounts-regions.html) | DevOps engineer, Data engineer, ML engineer | 
| Gather required information. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/copy-ecr-container-images-across-accounts-regions.html) | DevOps engineer, Data engineer, ML engineer | 
| Clone the repository. | Clone the pattern’s repository to your local workstation:<pre>git clone https://github.com/aws-samples/sample-ecr-copy</pre> | DevOps engineer, Data engineer, ML engineer | 
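
The commands in the following epics reference two named AWS CLI profiles, `source-account` and `destination-account`. One way to sketch their definition is to add them to the AWS CLI config file (the Regions below are illustrative; credentials still need to be supplied separately, for example with `aws configure`):

```shell
# Define the two named profiles that this pattern's commands reference
mkdir -p "$HOME/.aws"
cat >> "$HOME/.aws/config" <<'EOF'
[profile source-account]
region = us-east-1

[profile destination-account]
region = us-east-2
EOF

# Confirm that both profiles are present
grep -c '^\[profile ' "$HOME/.aws/config"
```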

### Deploy infrastructure for the destination account
<a name="deploy-infrastructure-for-the-destination-account"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Validate the template. | Validate the CloudFormation template:<pre>aws cloudformation validate-template \<br />  --template-body file://"Destination Account cf_template.yml" \<br />  --profile destination-account</pre> | DevOps engineer, ML engineer, Data engineer | 
| Deploy the destination infrastructure. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/copy-ecr-container-images-across-accounts-regions.html) | Data engineer, ML engineer, DevOps engineer | 
| Verify the deployment. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/copy-ecr-container-images-across-accounts-regions.html) | DevOps engineer, ML engineer, Data engineer | 

### Build and deploy the Lambda container image
<a name="build-and-deploy-the-lam-container-image"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Prepare the container build. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/copy-ecr-container-images-across-accounts-regions.html) | Data engineer, ML engineer, DevOps engineer | 
| Build the container image. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/copy-ecr-container-images-across-accounts-regions.html) | Data engineer, ML engineer, DevOps engineer | 
| Create a repository and upload the image. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/copy-ecr-container-images-across-accounts-regions.html) | Data engineer, ML engineer, DevOps engineer | 
| Verify the image. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/copy-ecr-container-images-across-accounts-regions.html) | Data engineer, ML engineer, DevOps engineer | 

### Deploy the source account infrastructure
<a name="deploy-the-source-account-infrastructure"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Prepare deployment parameters. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/copy-ecr-container-images-across-accounts-regions.html) | Data engineer, DevOps engineer, ML engineer | 
| Validate the source template. | Validate the source CloudFormation template:<pre>aws cloudformation validate-template \<br />  --template-body file://"Source Account Cf template.yml" \<br />  --profile source-account</pre> | Data engineer, ML engineer, DevOps engineer | 
| Deploy the source infrastructure. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/copy-ecr-container-images-across-accounts-regions.html) | Data engineer, ML engineer, DevOps engineer | 
| Verify the deployment and collect outputs. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/copy-ecr-container-images-across-accounts-regions.html) | DevOps engineer, ML engineer, Data engineer | 
| Confirm your email subscription. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/copy-ecr-container-images-across-accounts-regions.html) | Data engineer, ML engineer, DevOps engineer | 

### Run and monitor the copy process
<a name="run-and-monitor-the-copy-process"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Run and monitor the copy process. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/copy-ecr-container-images-across-accounts-regions.html) | DevOps engineer, ML engineer, Data engineer | 
| Run the Step Functions state machine. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/copy-ecr-container-images-across-accounts-regions.html) | DevOps engineer, ML engineer, Data engineer | 
| Monitor progress. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/copy-ecr-container-images-across-accounts-regions.html) | DevOps engineer, ML engineer, Data engineer | 
| Check the results. | Wait for the process to complete (updated every 30 seconds):<pre>while true; do<br />  STATUS=$(aws stepfunctions describe-execution \<br />    --execution-arn $EXECUTION_ARN \<br />    --profile source-account \<br />    --region $SOURCE_REGION \<br />    --query 'status' \<br />    --output text)<br />  <br />  echo "Current status: $STATUS"<br />  <br />  if [[ "$STATUS" == "SUCCEEDED" || "$STATUS" == "FAILED" || "$STATUS" == "TIMED_OUT" || "$STATUS" == "ABORTED" ]]; then<br />    break<br />  fi<br />  <br />  sleep 30<br />done<br /><br />echo "Final execution status: $STATUS"</pre> | DevOps engineer, ML engineer, Data engineer | 
| Verify the images. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/copy-ecr-container-images-across-accounts-regions.html) | DevOps engineer, Data engineer, ML engineer | 

## Troubleshooting
<a name="copy-ecr-container-images-across-accounts-regions-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| The Step Functions state machine fails to run. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/copy-ecr-container-images-across-accounts-regions.html) | 

## Related resources
<a name="copy-ecr-container-images-across-accounts-regions-resources"></a>
+ [Crane documentation](https://github.com/google/go-containerregistry/blob/main/cmd/crane/doc/crane.md)
+ [What is Amazon Elastic Container Registry?](https://docs.aws.amazon.com/AmazonECR/latest/userguide/what-is-ecr.html)
+ [What is AWS Lambda?](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html)
+ [What is Step Functions?](https://docs.aws.amazon.com/step-functions/latest/dg/welcome.html)

## Additional information
<a name="copy-ecr-container-images-across-accounts-regions-additional"></a>

**Configuration parameters**


| Parameter | Description | Example | 
| --- | --- | --- | 
| `SourceAccountId` | Source AWS account ID | `111111111111` | 
| `DestinationAccountId` | Destination AWS account ID | `222222222222` | 
| `DestinationRegion` | Target AWS Region | `us-east-2` | 
| `SourceRegion` | Source AWS Region | `us-east-1` | 
| `NotificationEmail` | Email for notifications | `abc@xyz.com` | 
| `RepositoryList` | Repositories to copy | `repo1,repo2,repo3` | 
| `LambdaImageUri` | Lambda container image URI | `${ACCOUNT}.dkr.ecr.${REGION}.amazonaws.com/ecr-copy-lambda:latest` | 
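
For example, the `LambdaImageUri` value can be derived from the destination account ID and Region. The account ID and Region below are illustrative placeholders:

```shell
# Derive the Lambda container image URI from account and Region placeholders
ACCOUNT=222222222222
REGION=us-east-2
LAMBDA_IMAGE_URI="${ACCOUNT}.dkr.ecr.${REGION}.amazonaws.com/ecr-copy-lambda:latest"
echo "$LAMBDA_IMAGE_URI"
```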

# Create an Amazon ECS task definition and mount a file system on EC2 instances using Amazon EFS
<a name="create-an-amazon-ecs-task-definition-and-mount-a-file-system-on-ec2-instances-using-amazon-efs"></a>

*Durga Prasad Cheepuri, Amazon Web Services*

## Summary
<a name="create-an-amazon-ecs-task-definition-and-mount-a-file-system-on-ec2-instances-using-amazon-efs-summary"></a>

This pattern provides code samples and steps to create an Amazon Elastic Container Service (Amazon ECS) task definition that runs on Amazon Elastic Compute Cloud (Amazon EC2) instances in the Amazon Web Services (AWS) Cloud, while using Amazon Elastic File System (Amazon EFS) to mount a file system on those EC2 instances. Amazon ECS tasks that use Amazon EFS automatically mount the file systems that you specify in the task definition and make these file systems available to the task’s containers across all Availability Zones in an AWS Region.

To meet your persistent storage and shared storage requirements, you can use Amazon ECS and Amazon EFS together. For example, you can use Amazon EFS to store persistent user data and application data for your applications with active and standby ECS container pairs running in different Availability Zones for high availability. You can also use Amazon EFS to store shared data that can be accessed in parallel by ECS containers and distributed job workloads.

To use Amazon EFS with Amazon ECS, you can add one or more volume definitions to a task definition. A volume definition includes an Amazon EFS file system ID, access point ID, and a configuration for AWS Identity and Access Management (IAM) authorization or Transport Layer Security (TLS) encryption in transit. You can use container definitions within task definitions to specify the task definition volumes that get mounted when the container runs. When a task that uses an Amazon EFS file system runs, Amazon ECS ensures that the file system is mounted and available to the containers that need access to it.
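
As a sketch of the volume definition described above, the `volumes` section of a task definition might look like the following. The file system ID and access point ID are placeholders:

```
{
  "volumes": [
    {
      "name": "efs-data",
      "efsVolumeConfiguration": {
        "fileSystemId": "fs-12345678",
        "transitEncryption": "ENABLED",
        "authorizationConfig": {
          "accessPointId": "fsap-1234567890abcdef0",
          "iam": "ENABLED"
        }
      }
    }
  ]
}
```

Enabling `transitEncryption` is required when you use an access point or IAM authorization with the volume.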

## Prerequisites and limitations
<a name="create-an-amazon-ecs-task-definition-and-mount-a-file-system-on-ec2-instances-using-amazon-efs-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ A virtual private cloud (VPC) with a virtual private network (VPN) endpoint or a router
+ (Recommended) [Amazon ECS container agent 1.38.0 or later](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-agent-versions.html) for compatibility with Amazon EFS access points and IAM authorization features (For more information, see the AWS blog post [New for Amazon EFS – IAM Authorization and Access Points](https://aws.amazon.com/blogs/aws/new-for-amazon-efs-iam-authorization-and-access-points/).)

**Limitations**
+ Amazon ECS container agent versions earlier than 1.35.0 don’t support Amazon EFS file systems for tasks that use the EC2 launch type.

## Architecture
<a name="create-an-amazon-ecs-task-definition-and-mount-a-file-system-on-ec2-instances-using-amazon-efs-architecture"></a>

The following diagram shows an example of an application that uses Amazon ECS to create a task definition and mount an Amazon EFS file system on EC2 instances in ECS containers.

![\[Amazon ECS architecture with task definition, ECS service, containers, and EFS file system integration.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/090a3f03-a4c6-47e3-b1ae-b0eb5c5b269c/images/343e0f1d-44ee-4ec2-8392-aeddc0e48b83.png)


The diagram shows the following workflow:

1. Create an Amazon EFS file system.

1. Create a task definition with a container.

1. Configure the container instances to mount the Amazon EFS file system. The task definition references the volume mounts, so the container instance can use the Amazon EFS file system. ECS tasks have access to the same Amazon EFS file system, regardless of which container instance those tasks are created on.

1. Create an Amazon ECS service with three instances of the task definition.

**Technology stack**
+ Amazon EC2
+ Amazon ECS
+ Amazon EFS

## Tools
<a name="create-an-amazon-ecs-task-definition-and-mount-a-file-system-on-ec2-instances-using-amazon-efs-tools"></a>
+ [Amazon EC2](https://docs.aws.amazon.com/ec2/?id=docs_gateway) – Amazon Elastic Compute Cloud (Amazon EC2) provides scalable computing capacity in the AWS Cloud. You can use Amazon EC2 to launch as many or as few virtual servers as you need, and you can scale out or scale in.
+ [Amazon ECS](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html) – Amazon Elastic Container Service (Amazon ECS) is a highly scalable, fast container management service for running, stopping, and managing containers on a cluster. You can run your tasks and services on a serverless infrastructure that is managed by AWS Fargate. Alternatively, for more control over your infrastructure, you can run your tasks and services on a cluster of EC2 instances that you manage.
+ [Amazon EFS](https://docs.aws.amazon.com/efs/latest/ug/whatisefs.html) – Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources.
+ [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) – The AWS Command Line Interface (AWS CLI) is an open-source tool for interacting with AWS services through commands in your command-line shell. With minimal configuration, you can run AWS CLI commands that implement functionality equivalent to that provided by the browser-based AWS Management Console from a command prompt.

## Epics
<a name="create-an-amazon-ecs-task-definition-and-mount-a-file-system-on-ec2-instances-using-amazon-efs-epics"></a>

### Create an Amazon EFS file system
<a name="create-an-amazon-efs-file-system"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an Amazon EFS file system by using the AWS Management Console. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-an-amazon-ecs-task-definition-and-mount-a-file-system-on-ec2-instances-using-amazon-efs.html) | AWS DevOps | 

### Create an Amazon ECS task definition by using either the Amazon ECS console or the AWS CLI
<a name="create-an-amazon-ecs-task-definition-by-using-either-an-amazon-efs-file-system-or-the-aws-cli"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a task definition using an Amazon EFS file system. | Create a task definition by using the [new Amazon ECS console](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-task-definition.html) or [classic Amazon ECS console](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-task-definition-classic.html) with the following configurations:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-an-amazon-ecs-task-definition-and-mount-a-file-system-on-ec2-instances-using-amazon-efs.html) | AWS DevOps | 
| Create a task definition using the AWS CLI. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-an-amazon-ecs-task-definition-and-mount-a-file-system-on-ec2-instances-using-amazon-efs.html) | AWS DevOps | 
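
When you register the task definition with the AWS CLI, the container definition references the volume by name through `mountPoints`. The following is a partial sketch (family, names, image, and paths are placeholders) that you could pass to `aws ecs register-task-definition --cli-input-json file://taskdef.json`:

```
{
  "family": "efs-demo",
  "requiresCompatibilities": ["EC2"],
  "containerDefinitions": [
    {
      "name": "web",
      "image": "nginx",
      "memory": 512,
      "mountPoints": [
        {
          "sourceVolume": "efs-data",
          "containerPath": "/mnt/efs"
        }
      ]
    }
  ],
  "volumes": [
    {
      "name": "efs-data",
      "efsVolumeConfiguration": { "fileSystemId": "fs-12345678" }
    }
  ]
}
```

The `sourceVolume` value must match the `name` of a volume defined in the task definition's `volumes` section.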

## Related resources
<a name="create-an-amazon-ecs-task-definition-and-mount-a-file-system-on-ec2-instances-using-amazon-efs-resources"></a>
+ [Amazon ECS task definitions](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definitions.html)
+ [Amazon EFS volumes](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/efs-volumes.html)

## Attachments
<a name="attachments-090a3f03-a4c6-47e3-b1ae-b0eb5c5b269c"></a>

To access additional content that is associated with this document, unzip the following file: [attachment.zip](samples/p-attach/090a3f03-a4c6-47e3-b1ae-b0eb5c5b269c/attachments/attachment.zip)

# Deploy Lambda functions with container images
<a name="deploy-lambda-functions-with-container-images"></a>

*Ram Kandaswamy, Amazon Web Services*

## Summary
<a name="deploy-lambda-functions-with-container-images-summary"></a>

AWS Lambda supports container images as a deployment model. This pattern shows how to deploy Lambda functions through container images.

Lambda is a serverless, event-driven compute service that you can use to run code for virtually any type of application or backend service without provisioning or managing servers. With container image support for Lambda functions, you get the benefits of up to 10 GB of storage for your application artifact and the ability to use familiar container image development tools.

The example in this pattern uses Python as the underlying programming language, but you can use other languages, such as Java, Node.js, or Go. For the source, consider a Git-based system such as GitHub, GitLab, or Bitbucket, or use Amazon Simple Storage Service (Amazon S3).

## Prerequisites and limitations
<a name="deploy-lambda-functions-with-container-images-prereqs"></a>

**Prerequisites**
+ Amazon Elastic Container Registry (Amazon ECR) activated
+ Application code
+ Docker images with the runtime interface client and the latest version of Python
+ Working knowledge of Git

**Limitations**
+ The maximum supported image size is 10 GB.
+ The maximum runtime for a Lambda-based container deployment is 15 minutes.

## Architecture
<a name="deploy-lambda-functions-with-container-images-architecture"></a>

**Target architecture**

![\[Four-step process to create the Lambda function.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/e421cc58-d33e-493d-b0bb-c3ffe39c2eb9/images/7f36d3d8-d161-497a-b036-26d886a16c69.png)


 

1. You create a Git repository and commit the application code to the repository.

1. The AWS CodeBuild project is triggered by commit changes.

1. The CodeBuild project creates the Docker image and publishes the built image to Amazon ECR.

1. You create the Lambda function using the image in Amazon ECR.

**Automation and scale**

This pattern can be automated by using AWS CloudFormation, AWS Cloud Development Kit (AWS CDK), or API operations from an SDK. Lambda can automatically scale based on the number of requests, and you can tune it by using the concurrency parameters. For more information, see the [Lambda documentation](https://docs.aws.amazon.com/lambda/latest/dg/lambda-concurrency.html).

## Tools
<a name="deploy-lambda-functions-with-container-images-tools"></a>

**AWS services**
+ [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) helps you set up AWS resources, provision them quickly and consistently, and manage them throughout their lifecycle across AWS accounts and AWS Regions. This pattern uses [AWS CloudFormation Application Composer](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/app-composer-for-cloudformation.html), which helps you visually view and edit CloudFormation templates.
+ [AWS CodeBuild](https://docs.aws.amazon.com/codebuild/latest/userguide/welcome.html) is a fully managed build service that helps you compile source code, run unit tests, and produce artifacts that are ready to deploy.
+ [Amazon Elastic Container Registry (Amazon ECR)](https://docs.aws.amazon.com/AmazonECR/latest/userguide/what-is-ecr.html) is a managed container image registry service that’s secure, scalable, and reliable.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a compute service that helps you run code without needing to provision or manage servers. It runs your code only when needed and scales automatically, so you pay only for the compute time that you use.

**Other tools**
+ [Docker](https://www.docker.com/) is a set of platform as a service (PaaS) products that use virtualization at the operating-system level to deliver software in containers.
+ [GitHub](https://docs.github.com/en/repositories/creating-and-managing-repositories/quickstart-for-repositories), [GitLab](https://docs.gitlab.com/ee/user/get_started/get_started_projects.html), and [Bitbucket](https://support.atlassian.com/bitbucket-cloud/docs/tutorial-learn-bitbucket-with-git/) are commonly used Git-based source control systems for tracking source code changes.

## Best practices
<a name="deploy-lambda-functions-with-container-images-best-practices"></a>
+ Make your function as efficient and small as possible to avoid loading unnecessary files.
+ Place static layers higher up in your Dockerfile, and place layers that change more often lower down. This improves caching, which improves performance.
+ The image owner is responsible for updating and patching the image. Add that update cadence to your operational processes. For more information, see the [AWS Lambda documentation](https://docs.aws.amazon.com/lambda/latest/dg/best-practices.html#function-code).
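
To illustrate the layer-ordering point, a Dockerfile can copy the dependency manifest and install dependencies before copying the application code, so that code-only changes don't invalidate the cached dependency layer. This is a sketch that assumes a Python 3.12 Lambda base image:

```
FROM public.ecr.aws/lambda/python:3.12
# Dependency layer changes rarely: cache it by copying the manifest first
COPY requirements.txt ${LAMBDA_TASK_ROOT}
RUN pip3 install --user -r requirements.txt
# Application code changes often: keep it in a later layer
COPY app.py ${LAMBDA_TASK_ROOT}
CMD [ "app.lambda_handler" ]
```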

## Epics
<a name="deploy-lambda-functions-with-container-images-epics"></a>

### Create a project in CodeBuild
<a name="create-a-project-in-codebuild"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a Git repository. | Create a Git repository that will contain the application source code, the Dockerfile, and the `buildspec.yaml` file.  | Developer | 
| Create a CodeBuild project. | To use a CodeBuild project to create the custom Lambda image, do the following: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-lambda-functions-with-container-images.html) | Developer | 
| Edit the Dockerfile. | The Dockerfile should be located in the top-level directory where you're developing the application. The Python code should be in the `src` folder. When you create the image, use the [official Lambda supported images](https://gallery.ecr.aws/lambda?page=1). Otherwise, a bootstrap error will occur, making the packaging process more difficult. For details, see the [Additional information](#deploy-lambda-functions-with-container-images-additional) section. | Developer | 
| Create a repository in Amazon ECR. | Create a container repository in Amazon ECR. In the following example command, the name of the repository created is `cf-demo`:<pre>aws ecr create-repository --repository-name cf-demo</pre>The repository will be referenced in the `buildspec.yaml` file. | AWS administrator, Developer | 
| Push the image to Amazon ECR. | You can use CodeBuild to perform the image-build process. CodeBuild needs permission to interact with Amazon ECR and Amazon S3. As part of the process, the Docker image is built and pushed to the Amazon ECR registry. For details on the template and the code, see the [Additional information](#deploy-lambda-functions-with-container-images-additional) section. | Developer | 
| Verify that the image is in the repository. | To verify that the image is in the repository, on the Amazon ECR console, choose **Repositories**. The image should be listed, with tags and with the results of a vulnerability scan report if that feature was turned on in the Amazon ECR settings.  For more information, see the [AWS documentation](https://docs.aws.amazon.com/cli/latest/reference/ecr/put-registry-scanning-configuration.html). | Developer | 

### Create the Lambda function to run the image
<a name="create-the-lambda-function-to-run-the-image"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the Lambda function. | On the Lambda console, choose **Create function**, and then choose **Container image**. Enter the function name and the URI for the image that is in the Amazon ECR repository, and then choose **Create function**. For more information, see the [AWS Lambda documentation](https://docs.aws.amazon.com/lambda/latest/dg/API_CreateFunction.html). | App developer | 
| Test the Lambda function. | To invoke and test the function, choose **Test**. For more information, see the [AWS Lambda documentation](https://docs.aws.amazon.com/lambda/latest/dg/testing-functions.html). | App developer | 

## Troubleshooting
<a name="deploy-lambda-functions-with-container-images-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| Build is not succeeding. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-lambda-functions-with-container-images.html) | 

## Related resources
<a name="deploy-lambda-functions-with-container-images-resources"></a>
+ [Base images for Lambda](https://docs.aws.amazon.com/lambda/latest/dg/runtimes-images.html)
+ [Docker sample for CodeBuild](https://docs.aws.amazon.com/codebuild/latest/userguide/sample-docker.html)
+ [Pass temporary credentials](https://aws.amazon.com/premiumsupport/knowledge-center/codebuild-temporary-credentials-docker/)

## Additional information
<a name="deploy-lambda-functions-with-container-images-additional"></a>

**Edit the Dockerfile**

The following code shows the commands that you edit in the Dockerfile:

```
FROM public.ecr.aws/lambda/python:3.xx

# Copy function code
COPY app.py ${LAMBDA_TASK_ROOT} 
COPY requirements.txt  ${LAMBDA_TASK_ROOT} 

# install dependencies
RUN pip3 install --user -r requirements.txt

# Set the CMD to your handler (could also be done as a parameter override outside of the Dockerfile)
CMD [ "app.lambda_handler" ]
```

In the `FROM` command, use the appropriate value for the Python version that is supported by Lambda (for example, `3.12`). This will be the base image that is available in the public Amazon ECR image repository. 

The `COPY app.py ${LAMBDA_TASK_ROOT}` command copies the code to the task root directory, which the Lambda function will use. Because this command uses the environment variable, you don't have to hard-code the actual path. The function to run is passed as an argument to the `CMD [ "app.lambda_handler" ]` command.

The `COPY requirements.txt` command captures the dependencies necessary for the code. 

The `RUN pip3 install --user -r requirements.txt` command installs the dependencies to the local user directory. 
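
For completeness, a minimal `app.py` matching the `CMD [ "app.lambda_handler" ]` entry point might look like the following. This is a hypothetical handler, not the pattern's actual code; the snippet writes the file and then invokes the handler locally to show the wiring:

```shell
# Write a hypothetical minimal handler and invoke it locally
cat > app.py <<'EOF'
import json

def lambda_handler(event, context):
    # Lambda calls this entry point inside the container image
    return {"statusCode": 200, "body": json.dumps(event)}
EOF

python3 -c "import app; print(app.lambda_handler({'ok': True}, None)['statusCode'])"
```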

To build your image, run the following command.

```
docker build -t <image name> .
```

**Add the image in Amazon ECR**

In the following code, replace `aws_account_id` with the account number, and replace `us-east-1` if you are using a different Region. The `buildspec` file uses the CodeBuild build number to uniquely identify image versions as a tag value. You can change this to fit your requirements.

*The buildspec custom code*

```
phases:
  install:
    runtime-versions:
       python: 3.xx
  pre_build:
    commands:
      - python3 --version
      - pip3 install --upgrade pip
      - pip3 install --upgrade awscli
      - sudo docker info
  build:
    commands:
      - echo Build started on `date`
      - echo Building the Docker image...
      - ls
      - cd app
      - docker build -t cf-demo:$CODEBUILD_BUILD_NUMBER .
      - docker container ls
  post_build:
    commands:
      - echo Build completed on `date`
      - echo Pushing the Docker image...
      - aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin aws_account_id.dkr.ecr.us-east-1.amazonaws.com
      - docker tag cf-demo:$CODEBUILD_BUILD_NUMBER aws_account_id.dkr.ecr.us-east-1.amazonaws.com/cf-demo:$CODEBUILD_BUILD_NUMBER
      - docker push aws_account_id.dkr.ecr.us-east-1.amazonaws.com/cf-demo:$CODEBUILD_BUILD_NUMBER
```

# Deploy Java microservices on Amazon ECS using AWS Fargate
<a name="deploy-java-microservices-on-amazon-ecs-using-aws-fargate"></a>

*Vijay Thompson and Sandeep Bondugula, Amazon Web Services*

## Summary
<a name="deploy-java-microservices-on-amazon-ecs-using-aws-fargate-summary"></a>

This pattern provides guidance for deploying containerized Java microservices on Amazon Elastic Container Service (Amazon ECS) by using AWS Fargate. The pattern doesn't use Amazon Elastic Container Registry (Amazon ECR) for container management; instead, Docker images are pulled from Docker Hub. 

## Prerequisites and limitations
<a name="deploy-java-microservices-on-amazon-ecs-using-aws-fargate-prereqs"></a>

**Prerequisites**
+ An existing Java microservices application image on Docker Hub
+ A public Docker repository
+ An active AWS account
+ Familiarity with AWS services, including Amazon ECS and Fargate
+ Familiarity with Docker, Java, and the Spring Boot framework
+ An Amazon Relational Database Service (Amazon RDS) instance up and running (optional)
+ A virtual private cloud (VPC) if the application requires Amazon RDS (optional)

## Architecture
<a name="deploy-java-microservices-on-amazon-ecs-using-aws-fargate-architecture"></a>

**Source technology stack**
+ Java microservices (for example, implemented in Spring Boot) and deployed on Docker

**Source architecture**

![\[Source architecture for Java microservices deployed on Docker\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/65185957-2b8b-43a6-964c-95ce0a45ba17/images/0a946ca8-fe37-4ede-85cb-a80a1c36105d.png)


**Target technology stack**
+ An Amazon ECS cluster that hosts each microservice by using Fargate
+ A VPC network to host the Amazon ECS cluster and associated security groups 
+ A task definition for each microservice that spins up containers by using Fargate

**Target architecture**

![\[Target architecture on Java microservices on Amazon ECS\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/65185957-2b8b-43a6-964c-95ce0a45ba17/images/b21349ea-21fc-4688-b76a-1bde479858aa.png)


## Tools
<a name="deploy-java-microservices-on-amazon-ecs-using-aws-fargate-tools"></a>

**Tools**
+ [Amazon ECS](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html) eliminates the need to install and operate your own container orchestration software, manage and scale a cluster of virtual machines, or schedule containers on those virtual machines. 
+ [AWS Fargate](https://docs.aws.amazon.com/AmazonECS/latest/userguide/what-is-fargate.html) helps you run containers without needing to manage servers or Amazon Elastic Compute Cloud (Amazon EC2) instances. It’s used in conjunction with Amazon Elastic Container Service (Amazon ECS).
+ [Docker](https://www.docker.com/) is a software platform that allows you to build, test, and deploy applications quickly. Docker packages software into standardized units called *containers* that have everything the software needs to run, including libraries, system tools, code, and runtime. 

**Docker code**

The following Dockerfile specifies the Java Development Kit (JDK) version that is used, where the Java archive (JAR) file exists, the port number that is exposed, and the entry point for the application.

```
FROM openjdk:11
ADD target/Spring-docker.jar Spring-docker.jar
EXPOSE 8080
ENTRYPOINT ["java","-jar","Spring-docker.jar"]
```

## Epics
<a name="deploy-java-microservices-on-amazon-ecs-using-aws-fargate-epics"></a>

### Create new task definitions
<a name="create-new-task-definitions"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a task definition. | Running a Docker container in Amazon ECS requires a task definition. Open the Amazon ECS console at [https://console.aws.amazon.com/ecs/](https://console.aws.amazon.com/ecs/), choose **Task definitions**, and then create a new task definition. For more information, see the [Amazon ECS documentation](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-task-definition.html). | AWS systems administrator, App developer | 
| Choose launch type. | Choose **Fargate** as the launch type. | AWS systems administrator, App developer | 
| Configure the task. | Define a task name and configure the application with the appropriate amount of task memory and CPU. | AWS systems administrator, App developer | 
| Define the container. | Specify the container name. For the image, enter the Docker site name, the repository name, and the tag name of the Docker image (`docker.io/sample-repo/sample-application:sample-tag-name`). Set memory limits for the application, and set port mappings (`8080, 80`) for the allowed ports. | AWS systems administrator, App developer | 
| Create the task. | When the task and container configurations are in place, create the task. For detailed instructions, see the links in the *Related resources* section. | AWS systems administrator, App developer | 
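The console steps above can also be scripted. The following AWS CLI sketch registers a comparable Fargate task definition; the family name, container name, CPU and memory values, and execution role ARN are assumptions that you would replace with your own.

```shell
# Register a Fargate task definition (family, sizes, and role ARN are example values)
aws ecs register-task-definition \
  --family spring-docker-task \
  --requires-compatibilities FARGATE \
  --network-mode awsvpc \
  --cpu 512 \
  --memory 1024 \
  --execution-role-arn arn:aws:iam::111122223333:role/ecsTaskExecutionRole \
  --container-definitions '[{
    "name": "spring-docker",
    "image": "docker.io/sample-repo/sample-application:sample-tag-name",
    "portMappings": [{"containerPort": 8080, "protocol": "tcp"}],
    "essential": true
  }]'
```

Note that Fargate requires the `awsvpc` network mode and task-level CPU and memory settings.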

### Configure the cluster
<a name="configure-the-cluster"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create and configure a cluster. | Choose **Networking only** as the cluster type, configure the name, and then create the cluster or use an existing cluster if available. For more information, see the [Amazon ECS documentation](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create_cluster.html). | AWS systems administrator, App developer | 
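If you prefer the AWS CLI, the same cluster can be created with a single command; the cluster name is an example.

```shell
# Create an empty cluster for Fargate tasks (the name is an example)
aws ecs create-cluster --cluster-name sample-fargate-cluster
```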

### Configure the task
<a name="configure-task"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a task. | Inside the cluster, choose **Run new task**. | AWS systems administrator, App developer | 
| Choose launch type. | Choose **Fargate** as the launch type. | AWS systems administrator, App developer | 
| Choose task definition, revision, and platform version. | Choose the task that you want to run, the revision of the task definition, and the platform version. | AWS systems administrator, App developer | 
| Select the cluster. | Choose the cluster where you want to run the task. | AWS systems administrator, App developer | 
| Specify the number of tasks. | Configure the number of tasks that should run. If you're launching with two or more tasks, a load balancer is required to distribute the traffic among the tasks. | AWS systems administrator, App developer | 
| Specify the task group. | (Optional) Specify a task group name to identify a set of related tasks as a task group. | AWS systems administrator, App developer | 
| Configure the cluster VPC, subnets, and security groups. | Configure the cluster VPC and the subnets on which you want to deploy the application. Create or update security groups (HTTP, HTTPS, and port 8080) to provide access to inbound and outbound connections. | AWS systems administrator, App developer | 
| Configure public IP settings. | Enable or disable the public IP, depending on whether you want to use a public IP address for Fargate tasks. The default, recommended option is **Enabled**. | AWS systems administrator, App developer | 
| Review settings and create the task. | Review your settings, and then choose **Run Task**. | AWS systems administrator, App developer | 
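The tasks above map to a single AWS CLI call. In the following sketch, the cluster name, task definition family, subnet ID, and security group ID are placeholders for your own values.

```shell
# Run one Fargate task with networking and a public IP (IDs are placeholders)
aws ecs run-task \
  --cluster sample-fargate-cluster \
  --launch-type FARGATE \
  --task-definition spring-docker-task \
  --count 1 \
  --network-configuration 'awsvpcConfiguration={subnets=[subnet-0abc1234],securityGroups=[sg-0abc1234],assignPublicIp=ENABLED}'
```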

### Cut over
<a name="cut-over"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Copy the application URL. | When the task status has been updated to *Running*, select the task. In the Networking section, copy the public IP. | AWS systems administrator, App developer | 
| Test your application. | In your browser, enter the public IP to test the application. | AWS systems administrator, App developer | 
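You can also look up the public IP from the command line. Because a Fargate task's public IP is attached to its elastic network interface, the following sketch first resolves the interface ID from the task; the cluster name is an example.

```shell
# Get the first running task in the cluster
TASK_ARN=$(aws ecs list-tasks --cluster sample-fargate-cluster \
  --query 'taskArns[0]' --output text)

# Resolve the task's elastic network interface, then its public IP
ENI_ID=$(aws ecs describe-tasks --cluster sample-fargate-cluster --tasks "$TASK_ARN" \
  --query "tasks[0].attachments[0].details[?name=='networkInterfaceId'].value" --output text)
PUBLIC_IP=$(aws ec2 describe-network-interfaces --network-interface-ids "$ENI_ID" \
  --query 'NetworkInterfaces[0].Association.PublicIp' --output text)

# Test the application on the exposed port
curl "http://$PUBLIC_IP:8080/"
```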

## Related resources
<a name="deploy-java-microservices-on-amazon-ecs-using-aws-fargate-resources"></a>
+ [Docker Basics for Amazon ECS](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/docker-basics.html) (Amazon ECS documentation)
+ [Amazon ECS on AWS Fargate](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/AWS_Fargate.html) (Amazon ECS documentation)
+ [Creating a Task Definition](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-task-definition.html) (Amazon ECS documentation)
+ [Creating a Cluster](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create_cluster.html) (Amazon ECS documentation)
+ [Configuring Basic Service Parameters](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/basic-service-params.html) (Amazon ECS documentation)
+ [Configuring a Network](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-configure-network.html) (Amazon ECS documentation)
+ [Deploying Java Microservices on Amazon ECS](https://aws.amazon.com/blogs/compute/deploying-java-microservices-on-amazon-ec2-container-service/) (blog post)

# Deploy Kubernetes resources and packages using Amazon EKS and a Helm chart repository in Amazon S3
<a name="deploy-kubernetes-resources-and-packages-using-amazon-eks-and-a-helm-chart-repository-in-amazon-s3"></a>

*Sagar Panigrahi, Amazon Web Services*

## Summary
<a name="deploy-kubernetes-resources-and-packages-using-amazon-eks-and-a-helm-chart-repository-in-amazon-s3-summary"></a>

This pattern helps you to manage Kubernetes applications efficiently, regardless of their complexity. The pattern integrates Helm into your existing continuous integration and continuous delivery (CI/CD) pipelines to deploy applications into a Kubernetes cluster. Helm is a package manager for Kubernetes. Helm charts help you define, install, and upgrade complex Kubernetes applications. Charts can be versioned and stored in Helm repositories, which improves mean time to restore (MTTR) during outages. 

This pattern uses Amazon Elastic Kubernetes Service (Amazon EKS) for the Kubernetes cluster. It uses Amazon Simple Storage Service (Amazon S3) as a Helm chart repository, so that the charts can be centrally managed and accessed by developers across the organization.

## Prerequisites and limitations
<a name="deploy-kubernetes-resources-and-packages-using-amazon-eks-and-a-helm-chart-repository-in-amazon-s3-prereqs"></a>

**Prerequisites**
+ An active Amazon Web Services (AWS) account with a virtual private cloud (VPC)
+ An Amazon EKS cluster 
+ Worker nodes set up within the Amazon EKS cluster and ready to take workloads
+ Kubectl for configuring the Amazon EKS kubeconfig file for the target cluster in the client machine
+ AWS Identity and Access Management (IAM) access to create the S3 bucket
+ IAM (programmatic or role) access to Amazon S3 from the client machine
+ Source code management and a CI/CD pipeline

**Limitations**
+ There is no support at this time for upgrading, deleting, or managing custom resource definitions (CRDs).
+ If you are using a resource that refers to a CRD, the CRD must be installed separately (outside of the chart).

**Product versions**
+ Helm v3.6.3

## Architecture
<a name="deploy-kubernetes-resources-and-packages-using-amazon-eks-and-a-helm-chart-repository-in-amazon-s3-architecture"></a>

**Target technology stack**
+ Amazon EKS
+ Amazon VPC
+ Amazon S3
+ Source code management
+ Helm
+ Kubectl

**Target architecture**

![\[Client Helm and Kubectl deploy a Helm chart repo in Amazon S3 for Amazon EKS clusters.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/d3f993e6-4d96-4cb9-a075-c4debe431fd7/images/2f09f7bb-440a-4c4b-b29f-08d136d1ada4.png)


 

**Automation and scale**
+ AWS CloudFormation can be used to automate the infrastructure creation. For more information, see [Creating Amazon EKS resources with AWS CloudFormation](https://docs.aws.amazon.com/eks/latest/userguide/creating-resources-with-cloudformation.html) in the Amazon EKS documentation.
+ Helm can be incorporated into your existing CI/CD automation tooling to automate the packaging and versioning of Helm charts (out of scope for this pattern).
+ GitVersion or Jenkins build numbers can be used to automate the versioning of charts.

## Tools
<a name="deploy-kubernetes-resources-and-packages-using-amazon-eks-and-a-helm-chart-repository-in-amazon-s3-tools"></a>

**Tools**
+ [Amazon EKS](https://docs.aws.amazon.com/eks/latest/userguide/what-is-eks.html) – Amazon Elastic Kubernetes Service (Amazon EKS) is a managed service for running Kubernetes on AWS without needing to stand up or maintain your own Kubernetes control plane. Kubernetes is an open-source system for automating the deployment, scaling, and management of containerized applications.
+ [Helm](https://helm.sh/docs/) – Helm is a package manager for Kubernetes that helps you install and manage applications on your Kubernetes cluster.
+ [Amazon S3](https://docs.aws.amazon.com/AmazonS3/latest/gsg/GetStartedWithS3.html) – Amazon Simple Storage Service (Amazon S3) is storage for the internet. You can use Amazon S3 to store and retrieve any amount of data at any time, from anywhere on the web.
+ [Kubectl](https://kubernetes.io/docs/reference/kubectl/overview/) – Kubectl is a command line utility for running commands against Kubernetes clusters.

**Code**

The example code is attached.

## Epics
<a name="deploy-kubernetes-resources-and-packages-using-amazon-eks-and-a-helm-chart-repository-in-amazon-s3-epics"></a>

### Configure and initialize Helm
<a name="configure-and-initialize-helm"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Install the Helm client. | To download and install the Helm client on your local system, use the following command. <pre>sudo curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash</pre> | DevOps engineer | 
| Validate the Helm installation. | To validate that Helm is able to communicate with the Kubernetes API server within the Amazon EKS cluster, run `helm version`. | DevOps engineer | 

### Create and install a Helm chart in the Amazon EKS cluster
<a name="create-and-install-a-helm-chart-in-the-amazon-eks-cluster"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a Helm chart for NGINX. | To create a Helm chart named `my-nginx` on the client machine, run `helm create my-nginx`. | DevOps engineer | 
| Review the structure of the chart. | To review the structure of the chart, run the tree command `tree my-nginx/`. | DevOps engineer | 
| Deactivate service account creation in the chart. | In `values.yaml`, under the `serviceAccount` section, set the `create` key to `false`. This is turned off because there is no requirement to create a service account for this pattern. | DevOps engineer | 
| Validate (lint) the modified chart for syntactical errors. | To validate the chart for any syntactical error before installing it in the target cluster, run `helm lint my-nginx/`. | DevOps engineer | 
| Install the chart to deploy Kubernetes resources. | To run the Helm chart installation, use the following command. <pre>helm install my-nginx-release my-nginx/ --debug --namespace helm-space</pre>In Helm v3, the release name is passed as the first argument. The optional `--debug` flag outputs all debug messages during the installation. The `--namespace` flag specifies the namespace in which the resources that are part of this chart will be created. | DevOps engineer | 
| Review the resources in the Amazon EKS cluster. | To review the resources that were created as part of the Helm chart in the `helm-space` namespace, use the following command. <pre>kubectl get all -n helm-space</pre> | DevOps engineer | 

### Roll back to a previous version of a Kubernetes application
<a name="roll-back-to-a-previous-version-of-a-kubernetes-application"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Modify and upgrade the release. | To modify the chart, in `values.yaml`, change the `replicaCount` value to `2`. Then upgrade the already installed release by running the following command.<pre>helm upgrade my-nginx-release my-nginx/ --namespace helm-space</pre> | DevOps engineer | 
| Review the history of the Helm release. | To list all the revisions for a specific release that have been installed using Helm, run the following command. <pre>helm history my-nginx-release</pre> | DevOps engineer | 
| Review the details for a specific revision. | Before switching or rolling back to a working version, and for an additional layer of validation before installing a revision, view which values were passed to each of the revisions by using the following command.<pre>helm get --revision=2 my-nginx-release</pre> | DevOps engineer | 
| Roll back to a previous version. | To roll back to a previous revision, use the following command. <pre>helm rollback my-nginx-release 1 </pre>This example is rolling back to revision number 1. | DevOps engineer | 

### Initialize an S3 bucket as a Helm repository
<a name="initialize-an-s3-bucket-as-a-helm-repository"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an S3 bucket for Helm charts. | Create a unique S3 bucket. In the bucket, create a folder called `charts`. The example in this pattern uses `s3://my-helm-charts/charts` as the target chart repository. | Cloud administrator | 
| Install the Helm plugin for Amazon S3. | To install the helm-s3 plugin on your client machine, use the following command. <pre>helm plugin install https://github.com/hypnoglow/helm-s3.git --version 0.10.0</pre>Note: Helm V3 support is available with plugin version 0.9.0 and above. | DevOps engineer | 
| Initialize the Amazon S3 Helm repository. | To initialize the target folder as a Helm repository, use the following command. <pre>helm s3 init s3://my-helm-charts/charts</pre>The command creates an `index.yaml` file in the target to track all the chart information that is stored at that location. | DevOps engineer | 
| Add the Amazon S3 repository to Helm. | To add the repository in the client machine, use the following command.<pre>helm repo add my-helm-charts s3://my-helm-charts/charts </pre>This command adds an alias to the target repository in the Helm client machine. | DevOps engineer | 
| Review the repository list. | To view the list of repositories in the Helm client machine, run `helm repo list`. | DevOps engineer | 
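The tasks in this epic can be consolidated into the following shell sequence. The bucket name `my-helm-charts` is the example used throughout this pattern; S3 bucket names are globally unique, so substitute your own.

```shell
# Create the bucket that will hold the chart repository
aws s3 mb s3://my-helm-charts

# Initialize the charts prefix as a Helm repository (creates index.yaml)
helm s3 init s3://my-helm-charts/charts

# Register the repository with the local Helm client and verify
helm repo add my-helm-charts s3://my-helm-charts/charts
helm repo list
```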

### Package and store charts in the Amazon S3 Helm repository
<a name="package-and-store-charts-in-the-amazon-s3-helm-repository"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Package the chart. | To package the `my-nginx` chart that you created, run `helm package ./my-nginx/`. The command packages all the contents of the `my-nginx` chart folder into an archive file, which is named using the version number that is mentioned in the `Chart.yaml` file. | DevOps engineer | 
| Store the package in the Amazon S3 Helm repository. | To upload the package to the Helm repository in Amazon S3, run the following command, using the correct name of the `.tgz` file.<pre>helm s3 push ./my-nginx-0.1.0.tgz my-helm-charts</pre> | DevOps engineer | 
| Search for the Helm chart. | To confirm that the chart appears both locally and in the Helm repository in Amazon S3, run the following command.<pre>helm search repo my-nginx</pre> | DevOps engineer | 

### Modify, version, and package a chart
<a name="modify-version-and-package-a-chart"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Modify and package the chart. | In `values.yaml`, set the `replicaCount` value to `1`, and change the version in `Chart.yaml` to `0.1.1`. Then package the chart by running `helm package ./my-nginx/`. Ideally, the version is updated through automation, using tools such as GitVersion or Jenkins build numbers in a CI/CD pipeline. Automating the version number is out of scope for this pattern. | DevOps engineer | 
| Push the new version to the Helm repository in Amazon S3. | To push the new package with version of 0.1.1 to the `my-helm-charts` Helm repository in Amazon S3, run the following command.<pre>helm s3 push ./my-nginx-0.1.1.tgz my-helm-charts</pre> | DevOps engineer | 

### Search for and install a chart from the Amazon S3 Helm repository
<a name="search-for-and-install-a-chart-from-the-amazon-s3-helm-repository"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Search for all versions of the my-nginx chart. | To view all the available versions of a chart, run the following command with the `--versions` flag.<pre>helm search repo my-nginx --versions</pre>Without the flag, Helm by default displays the latest uploaded version of a chart. | DevOps engineer | 
| Install a chart from the Amazon S3 Helm repository. | The search results from the previous task show the multiple versions of the `my-nginx` chart. To install the new version (0.1.1) from the Amazon S3 Helm repository, use the following command.<pre>helm upgrade my-nginx-release my-helm-charts/my-nginx --version 0.1.1 --namespace helm-space</pre> | DevOps engineer | 

## Related resources
<a name="deploy-kubernetes-resources-and-packages-using-amazon-eks-and-a-helm-chart-repository-in-amazon-s3-resources"></a>
+ [HELM documentation](https://helm.sh/docs/)
+ [helm-s3 plugin (MIT License)](https://github.com/hypnoglow/helm-s3.git)
+ [HELM client binary](https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3)
+ [Amazon EKS documentation](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html)

## Attachments
<a name="attachments-d3f993e6-4d96-4cb9-a075-c4debe431fd7"></a>

To access additional content that is associated with this document, unzip the following file: [attachment.zip](samples/p-attach/d3f993e6-4d96-4cb9-a075-c4debe431fd7/attachments/attachment.zip)

# Deploy a CockroachDB cluster in Amazon EKS by using Terraform
<a name="deploy-cockroachdb-on-eks-using-terraform"></a>

*Sandip Gangapadhyay and Kalyan Senthilnathan, Amazon Web Services*

## Summary
<a name="deploy-cockroachdb-on-eks-using-terraform-summary"></a>

This pattern provides a HashiCorp Terraform module for deploying a multi-node [CockroachDB](https://www.cockroachlabs.com/docs/stable/) cluster on Amazon Elastic Kubernetes Service (Amazon EKS) by using the [CockroachDB operator](https://www.cockroachlabs.com/docs/v25.4/cockroachdb-operator-overview). CockroachDB is a distributed SQL database that provides automatic horizontal sharding, high availability, and consistent performance across geographically distributed clusters. This pattern uses Amazon EKS as the managed Kubernetes platform and implements [cert-manager](https://cert-manager.io/docs/) for TLS-secured node communication. It also uses a [Network Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/introduction.html) for traffic distribution and creates CockroachDB [StatefulSets](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/) with pods that automatically replicate data for fault tolerance and performance.

**Intended audience**

To implement this pattern, we recommend that you be familiar with the following:
+ HashiCorp Terraform concepts and infrastructure as code (IaC) practices
+ AWS services, particularly Amazon EKS
+ Kubernetes fundamentals, including StatefulSets, operators, and service configurations
+ Distributed SQL databases
+ Security concepts, such as TLS certificate management
+ DevOps practices, CI/CD workflows, and infrastructure automation

## Prerequisites and limitations
<a name="deploy-cockroachdb-on-eks-using-terraform-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ Permissions to deploy resources in an Amazon EKS cluster
+ An Amazon EKS cluster, version 1.23 or later, with nodes labeled `node=cockroachdb`
+ [Amazon Elastic Block Store Container Storage Interface (CSI) Driver](https://github.com/kubernetes-sigs/aws-ebs-csi-driver) version 1.19.0 or later, installed in the Amazon EKS cluster
+ Terraform CLI version 1.0.0 or later, [installed](https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli)
+ kubectl, [installed](https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html)
+ Git, [installed](https://git-scm.com/install/)
+ AWS Command Line Interface (AWS CLI) version 2.9.18 or later, [installed](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) and [configured](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html)
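The node-label prerequisite can be verified and applied with kubectl. In the following sketch, the node name is a placeholder; list your nodes first to find the real names.

```shell
# Check which nodes already carry the required label
kubectl get nodes -l node=cockroachdb

# Label a worker node for CockroachDB scheduling (the node name is an example)
kubectl label nodes ip-10-0-1-23.ec2.internal node=cockroachdb
```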

**Limitations**
+ The CockroachDB Kubernetes operator does not support multiple Kubernetes clusters for multi-Region deployments. For more limitations, see [Orchestrate CockroachDB Across Multiple Kubernetes Clusters](https://www.cockroachlabs.com/docs/stable/orchestrate-cockroachdb-with-kubernetes-multi-cluster.html#eks) (CockroachDB documentation) and [CockroachDB Kubernetes Operator](https://github.com/cockroachdb/cockroach-operator) (GitHub).
+ Automatic pruning of persistent volume claims (PVCs) is currently disabled by default. This means that after decommissioning and removing a node, the operator will not remove the persistent volume that was mounted to its pod. For more information, see [Automatic PVC pruning](https://www.cockroachlabs.com/docs/stable/scale-cockroachdb-kubernetes.html#automatic-pvc-pruning) in the CockroachDB documentation.

**Product versions**
+ CockroachDB version 22.2.2

## Architecture
<a name="deploy-cockroachdb-on-eks-using-terraform-architecture"></a>

**Target architecture**

The following diagram shows a highly available CockroachDB deployment across three AWS Availability Zones within a virtual private cloud (VPC). The CockroachDB pods are managed through Amazon EKS. The architecture illustrates how users access the database through a Network Load Balancer, which distributes traffic to the CockroachDB pods. The pods run on Amazon Elastic Compute Cloud (Amazon EC2) instances in each Availability Zone, which provides resilience and fault tolerance.

![\[A highly available CockroachDB deployment across three AWS Availability Zones within a VPC.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/e22d81ab-b85c-4709-8579-4c9cdb4afdb6/images/4b163abf-6fdc-4310-840c-bda621ab25dd.png)


**Resources created**

Deploying the Terraform module used in this pattern creates the following resources:

1. **Network Load Balancer** – This resource serves as the entry point for client requests and evenly distributes traffic across the CockroachDB instances.

1. **CockroachDB StatefulSet** – The StatefulSet defines the desired state of the CockroachDB deployment within the Amazon EKS cluster. It manages the ordered deployment, scaling, and updates of CockroachDB pods.

1. **CockroachDB pods** – These pods are instances of CockroachDB running as containers within Kubernetes pods. These pods store and manage the data across the distributed cluster.

1. **CockroachDB database** – This is the distributed database that is managed by CockroachDB, spanning multiple pods. It replicates data for high availability, fault tolerance, and performance.

## Tools
<a name="deploy-cockroachdb-on-eks-using-terraform-tools"></a>

**AWS services**
+ [AWS Command Line Interface (AWS CLI)](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) is an open source tool that helps you interact with AWS services through commands in your command-line shell.
+ [Amazon Elastic Kubernetes Service (Amazon EKS)](https://docs.aws.amazon.com/eks/latest/userguide/what-is-eks.html) helps you run Kubernetes on AWS without needing to install or maintain your own Kubernetes control plane or nodes.

**Other tools**
+ [HashiCorp Terraform](https://www.terraform.io/docs) is an infrastructure as code (IaC) tool that helps you use code to provision and manage cloud infrastructure and resources.
+ [kubectl](https://kubernetes.io/docs/tasks/tools/) is a command-line interface that helps you run commands against Kubernetes clusters.

**Code repository**

The code for this pattern is available in the GitHub [Deploy a CockroachDB cluster in Amazon EKS using Terraform](https://github.com/aws-samples/crdb-cluster-eks-terraform) repository. The code repository contains the following files and folders for Terraform:
+ `modules` folder – This folder contains the Terraform module for CockroachDB.
+ `main` folder – This folder contains the root module that calls the CockroachDB child module to create the CockroachDB database cluster.

## Best practices
<a name="deploy-cockroachdb-on-eks-using-terraform-best-practices"></a>
+ Do not scale down to fewer than three nodes. This is considered an anti-pattern on CockroachDB and can cause errors. For more information, see [Cluster Scaling](https://www.cockroachlabs.com/docs/stable/scale-cockroachdb-kubernetes.html) in the CockroachDB documentation.
+ Implement Amazon EKS autoscaling by using Karpenter or Cluster Autoscaler. This allows the CockroachDB cluster to scale horizontally by adding new nodes automatically. For more information, see [Scale cluster compute with Karpenter and Cluster Autoscaler](https://docs.aws.amazon.com/eks/latest/userguide/autoscaling.html) in the Amazon EKS documentation.
**Note**  
Due to the `podAntiAffinity` Kubernetes scheduling rule, only one CockroachDB pod can be scheduled on each Amazon EKS node.
+ For Amazon EKS security best practices, see [Best Practices for Security](https://docs.aws.amazon.com/eks/latest/best-practices/security.html) in the Amazon EKS documentation.
+ For SQL performance best practices for CockroachDB, see [SQL Performance Best Practices](https://www.cockroachlabs.com/docs/stable/performance-best-practices-overview.html) in the CockroachDB documentation.
+ For more information about setting up an Amazon Simple Storage Service (Amazon S3) remote backend for the Terraform state file, see [Amazon S3](https://developer.hashicorp.com/terraform/language/backend/s3) in the Terraform documentation.

## Epics
<a name="deploy-cockroachdb-on-eks-using-terraform-epics"></a>

### Set up your environment
<a name="set-up-your-environment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clone the code repository. | Enter the following command to clone the repository:<pre>git clone https://github.com/aws-samples/crdb-cluster-eks-terraform.git</pre> | DevOps engineer, Git | 
| Update the Terraform variables. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-cockroachdb-on-eks-using-terraform.html) | DevOps engineer, Terraform | 

### Deploy the resources
<a name="deploy-the-resources"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Deploy the infrastructure. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-cockroachdb-on-eks-using-terraform.html) | DevOps engineer, Terraform | 
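The deployment follows the standard Terraform workflow, run from the `main` folder of the cloned repository, roughly as follows.

```shell
cd crdb-cluster-eks-terraform/main

# Download providers and modules, preview the changes, then apply them
terraform init
terraform plan
terraform apply
```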

### Verify the deployment
<a name="verify-the-deployment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Verify resource creation. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-cockroachdb-on-eks-using-terraform.html) | DevOps engineer | 
| (Optional) Scale up or down. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-cockroachdb-on-eks-using-terraform.html) | DevOps engineer, Terraform | 

### Clean up
<a name="clean-up"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Delete the infrastructure. | Scaling nodes to `0` will reduce compute costs. However, you will still incur charges for the persistent Amazon EBS volumes that were created by this module. To eliminate storage costs, follow these steps to delete all volumes:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-cockroachdb-on-eks-using-terraform.html) | Terraform | 
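A typical teardown is sketched below, under the assumption that Terraform manages all of the module's resources. As the task notes, also check for leftover persistent volume claims so that you don't keep paying for Amazon EBS volumes.

```shell
# Destroy the Terraform-managed resources
cd crdb-cluster-eks-terraform/main
terraform destroy

# Verify that no CockroachDB persistent volume claims remain in any namespace
kubectl get pvc --all-namespaces | grep -i cockroach
```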

## Troubleshooting
<a name="deploy-cockroachdb-on-eks-using-terraform-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| Error validating provider credentials | When you run the Terraform `apply` or `destroy` command, you might encounter the following error:`Error: configuring Terraform AWS Provider: error validating provider  credentials: error calling sts:GetCallerIdentity: operation error STS: GetCallerIdentity, https response error StatusCode: 403, RequestID: 123456a9-fbc1-40ed-b8d8-513d0133ba7f, api error InvalidClientTokenId: The security token included in the request is invalid.`This error is caused by the expiration of the security token for the credentials used in your local machine’s configuration. For instructions on how to resolve the error, see [Set and view configuration settings](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html#cli-configure-files-methods) in the AWS CLI documentation. | 
| CockroachDB pods in pending state | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-cockroachdb-on-eks-using-terraform.html) | 

## Related resources
<a name="deploy-cockroachdb-on-eks-using-terraform-resources"></a>
+ [Deploy CockroachDB in a Single Kubernetes Cluster](https://www.cockroachlabs.com/docs/dev/deploy-cockroachdb-with-kubernetes.html) (CockroachDB documentation)
+ [Orchestrate CockroachDB Across Multiple Kubernetes Clusters](https://www.cockroachlabs.com/docs/dev/orchestrate-cockroachdb-with-kubernetes-multi-cluster.html) (CockroachDB documentation)
+ [AWS Provider](https://registry.terraform.io/providers/hashicorp/aws/latest/docs) (Terraform documentation)

## Attachments
<a name="attachments-e22d81ab-b85c-4709-8579-4c9cdb4afdb6"></a>

To access additional content that is associated with this document, unzip the following file: [attachment.zip](samples/p-attach/e22d81ab-b85c-4709-8579-4c9cdb4afdb6/attachments/attachment.zip)

# Deploy a sample Java microservice on Amazon EKS and expose the microservice using an Application Load Balancer
<a name="deploy-a-sample-java-microservice-on-amazon-eks-and-expose-the-microservice-using-an-application-load-balancer"></a>

*Vijay Thompson and Akkamahadevi Hiremath, Amazon Web Services*

## Summary
<a name="deploy-a-sample-java-microservice-on-amazon-eks-and-expose-the-microservice-using-an-application-load-balancer-summary"></a>

This pattern describes how to deploy a sample Java microservice as a containerized application on Amazon Elastic Kubernetes Service (Amazon EKS) by using the `eksctl` command line utility and Amazon Elastic Container Registry (Amazon ECR). You can use an Application Load Balancer to load balance the application traffic.

## Prerequisites and limitations
<a name="deploy-a-sample-java-microservice-on-amazon-eks-and-expose-the-microservice-using-an-application-load-balancer-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ The AWS Command Line Interface (AWS CLI) version 1.7 or later, installed and configured on macOS, Linux, or Windows
+ A running [Docker daemon](https://docs.docker.com/config/daemon/)
+ The `eksctl` command line utility, installed and configured on macOS, Linux, or Windows (For more information, see [Getting started with Amazon EKS – eksctl](https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html) in the Amazon EKS documentation.)
+ The `kubectl` command line utility, installed and configured on macOS, Linux, or Windows (For more information, see [Installing or updating kubectl](https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html) in the Amazon EKS documentation.)

**Limitations**
+ This pattern doesn’t cover the installation of an SSL certificate for the Application Load Balancer.

## Architecture
<a name="deploy-a-sample-java-microservice-on-amazon-eks-and-expose-the-microservice-using-an-application-load-balancer-architecture"></a>

**Target technology stack**
+ Amazon ECR
+ Amazon EKS
+ Elastic Load Balancing

**Target architecture**

The following diagram shows an architecture for containerizing a Java microservice on Amazon EKS.

![\[A Java microservice deployed as a containerized application on Amazon EKS.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/e1dd8ab0-9e1e-4d2b-b7af-89d3e583e57c/images/aaca4fd9-5aaa-4df5-aebd-02a2ed881c3b.png)


## Tools
<a name="deploy-a-sample-java-microservice-on-amazon-eks-and-expose-the-microservice-using-an-application-load-balancer-tools"></a>
+ [Amazon Elastic Container Registry (Amazon ECR)](https://docs.aws.amazon.com/AmazonECR/latest/userguide/what-is-ecr.html) is a managed container image registry service that’s secure, scalable, and reliable.
+ [Amazon Elastic Kubernetes Service (Amazon EKS)](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html) helps you run Kubernetes on AWS without needing to install or maintain your own Kubernetes control plane or nodes.
+ [AWS Command Line Interface (AWS CLI)](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) is an open-source tool that helps you interact with AWS services through commands in your command-line shell.
+ [Elastic Load Balancing](https://docs.aws.amazon.com/elasticloadbalancing/latest/userguide/what-is-load-balancing.html) automatically distributes your incoming traffic across multiple targets, such as Amazon Elastic Compute Cloud (Amazon EC2) instances, containers, and IP addresses, in one or more Availability Zones.
+ [eksctl](https://eksctl.io/) helps you create clusters on Amazon EKS.
+ [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) makes it possible to run commands against Kubernetes clusters.
+ [Docker](https://www.docker.com/) helps you build, test, and deliver applications in packages called containers.

## Epics
<a name="deploy-a-sample-java-microservice-on-amazon-eks-and-expose-the-microservice-using-an-application-load-balancer-epics"></a>

### Create an Amazon EKS cluster by using eksctl
<a name="create-an-amazon-eks-cluster-by-using-eksctl"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an Amazon EKS cluster.  | To create an Amazon EKS cluster that uses a single t2.small Amazon EC2 instance as a node, run the following command:<pre>eksctl create cluster --name <your-cluster-name> --version <version-number> --nodes=1 --node-type=t2.small</pre>The process can take 15 to 20 minutes. After the cluster is created, the appropriate Kubernetes configuration is added to your [kubeconfig](https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html) file. You can use the `kubeconfig` file with `kubectl` to deploy the application in later steps. | Developer, System Admin | 
| Verify the Amazon EKS cluster. | To verify that the cluster is created and that you can connect to it, run the `kubectl get nodes` command. | Developer, System Admin | 

### Create an Amazon ECR repository and push the Docker image
<a name="create-an-amazon-ecr-repository-and-push-the-docker-image"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an Amazon ECR repository. | Follow the instructions from [Creating a private repository](https://docs.aws.amazon.com/AmazonECR/latest/userguide/repository-create.html) in the Amazon ECR documentation. | Developer, System Admin | 
| Create a POM XML file. | Create a `pom.xml` file based on the *Example POM file* code in the [Additional information](#deploy-a-sample-java-microservice-on-amazon-eks-and-expose-the-microservice-using-an-application-load-balancer-additional) section of this pattern. | Developer, System Admin | 
| Create a source file. | Create a source file called `HelloWorld.java` in the `src/main/java/eksExample` path based on the following example:<pre>package eksExample;<br />import static spark.Spark.get;<br /><br />public class HelloWorld {<br />    public static void main(String[] args) {<br />        get("/", (req, res) -> {<br />            return "Hello World!";<br />        });<br />    }<br />}</pre>Be sure to use the following directory structure:<pre>├── Dockerfile<br />├── deployment.yaml<br />├── ingress.yaml<br />├── pom.xml<br />├── service.yaml<br />└── src<br />    └── main<br />        └── java<br />            └── eksExample<br />                └── HelloWorld.java</pre> | Developer, System Admin | 
| Create a Dockerfile. | Create a `Dockerfile` based on the *Example Dockerfile* code in the [Additional information](#deploy-a-sample-java-microservice-on-amazon-eks-and-expose-the-microservice-using-an-application-load-balancer-additional) section of this pattern. | Developer, System Admin | 
| Build and push the Docker image. | From the directory that contains your `Dockerfile`, run the following commands to build, tag, and push the image to Amazon ECR:<pre>aws ecr get-login-password --region <region>| docker login --username <username> --password-stdin <account_number>.dkr.ecr.<region>.amazonaws.com<br />docker buildx build --platform linux/amd64 -t hello-world-java:v1 .<br />docker tag hello-world-java:v1 <account_number>.dkr.ecr.<region>.amazonaws.com/<repository_name>:v1<br />docker push <account_number>.dkr.ecr.<region>.amazonaws.com/<repository_name>:v1</pre>Modify the AWS Region, account number, and repository details in the preceding commands. Be sure to note the image URL for later use. A macOS system with an M1 chip has a problem building an image that’s compatible with Amazon EKS running on an AMD64 platform. To resolve this issue, use [docker buildx](https://docs.docker.com/engine/reference/commandline/buildx/) to build a Docker image that works on Amazon EKS. | Developer, System Admin | 
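As a quick sanity check, the placeholders in the preceding commands combine into a single image URI. The following sketch shows the expected shape, using hypothetical account, Region, and repository values:

```shell
# Hypothetical values -- substitute your own account ID, AWS Region,
# and repository name.
ACCOUNT_ID="123456789012"
REGION="us-east-1"
REPO_NAME="hello-world-java"

# This is the URI shape that "docker tag" and "docker push" expect.
IMAGE_URI="${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/${REPO_NAME}:v1"
echo "${IMAGE_URI}"
```

The same URI goes into the `image:` field of the deployment file in the Additional information section.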

### Deploy the Java microservices
<a name="deploy-the-java-microservices"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a deployment file.  | Create a YAML file called `deployment.yaml` based on the *Example deployment file* code in the [Additional information](#deploy-a-sample-java-microservice-on-amazon-eks-and-expose-the-microservice-using-an-application-load-balancer-additional) section of this pattern. Use the image URL that you copied earlier as the path of the image file for the Amazon ECR repository. | Developer, System Admin | 
| Deploy the Java microservices on the Amazon EKS cluster.  | To create a deployment in your Amazon EKS cluster, run the `kubectl apply -f deployment.yaml` command. | Developer, System Admin | 
| Verify the status of the pods. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-a-sample-java-microservice-on-amazon-eks-and-expose-the-microservice-using-an-application-load-balancer.html) | Developer, System Admin | 
| Create a service. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-a-sample-java-microservice-on-amazon-eks-and-expose-the-microservice-using-an-application-load-balancer.html) | Developer, System Admin | 
| Install the AWS Load Balancer Controller add-on. | Follow the instructions from [Installing the AWS Load Balancer Controller add-on](https://docs.aws.amazon.com/eks/latest/userguide/aws-load-balancer-controller.html) in the Amazon EKS documentation. You must have the add-on installed to create an Application Load Balancer or Network Load Balancer for a Kubernetes service. | Developer, System Admin | 
| Create an ingress resource. | Create a YAML file called `ingress.yaml` based on the *Example ingress resource file* code in the [Additional information](#deploy-a-sample-java-microservice-on-amazon-eks-and-expose-the-microservice-using-an-application-load-balancer-additional) section of this pattern. | Developer, System Admin | 
| Create an Application Load Balancer. | To deploy the ingress resource and create an Application Load Balancer, run the `kubectl apply -f ingress.yaml` command. | Developer, System Admin | 
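Two rows in the preceding table defer to the AWS documentation website for details. As a rough sketch, verifying the pods and creating the service from the *Example service file* in the Additional information section might look like the following (the label and service names come from those example files):

```shell
# Check that both replicas from deployment.yaml are Running.
kubectl get pods -l app.kubernetes.io/name=java-microservice

# Create the NodePort service from the example service file.
kubectl apply -f service.yaml

# Confirm that the service exists and has pod endpoints behind it.
kubectl get service service-java-microservice
kubectl get endpoints service-java-microservice
```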

### Test the application
<a name="test-the-application"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Test and verify the application. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-a-sample-java-microservice-on-amazon-eks-and-expose-the-microservice-using-an-application-load-balancer.html) | Developer, System Admin | 
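The row above defers to the AWS documentation website for details. A minimal smoke test, assuming the ingress from `ingress.yaml` was created and the Application Load Balancer has finished provisioning, could be:

```shell
# Get the DNS name that the AWS Load Balancer Controller assigned
# to the ingress resource.
ALB_DNS=$(kubectl get ingress java-microservice-ingress \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')

# The sample application should answer with "Hello World!".
curl "http://${ALB_DNS}/"
```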

## Related resources
<a name="deploy-a-sample-java-microservice-on-amazon-eks-and-expose-the-microservice-using-an-application-load-balancer-resources"></a>
+ [Creating a private repository](https://docs.aws.amazon.com/AmazonECR/latest/userguide/repository-create.html) (Amazon ECR documentation)
+ [Pushing a Docker image](https://docs.aws.amazon.com/AmazonECR/latest/userguide/docker-push-ecr-image.html) (Amazon ECR documentation)
+ [Ingress Controllers](https://www.eksworkshop.com/beginner/130_exposing-service/ingress_controller_alb/) (Amazon EKS Workshop)
+ [Docker buildx](https://docs.docker.com/engine/reference/commandline/buildx/) (Docker docs)

## Additional information
<a name="deploy-a-sample-java-microservice-on-amazon-eks-and-expose-the-microservice-using-an-application-load-balancer-additional"></a>

**Example POM file**

```
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>


  <groupId>helloWorld</groupId>
  <artifactId>helloWorld</artifactId>
  <version>1.0-SNAPSHOT</version>


  <dependencies>
    <dependency>
      <groupId>com.sparkjava</groupId><artifactId>spark-core</artifactId><version>2.0.0</version>
    </dependency>
  </dependencies>
  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId><artifactId>maven-jar-plugin</artifactId><version>2.4</version>
        <configuration><finalName>eksExample</finalName><archive><manifest>
              <addClasspath>true</addClasspath><mainClass>eksExample.HelloWorld</mainClass><classpathPrefix>dependency-jars/</classpathPrefix>
            </manifest></archive>
        </configuration>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId><artifactId>maven-compiler-plugin</artifactId><version>3.1</version>
        <configuration><source>1.8</source><target>1.8</target></configuration>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId><artifactId>maven-assembly-plugin</artifactId>
        <executions>
          <execution>
            <goals><goal>attached</goal></goals><phase>package</phase>
            <configuration>
              <finalName>eksExample</finalName>
              <descriptorRefs><descriptorRef>jar-with-dependencies</descriptorRef></descriptorRefs>
              <archive><manifest><mainClass>eksExample.HelloWorld</mainClass></manifest></archive>
            </configuration>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</project>
```

**Example Dockerfile**

```
FROM bellsoft/liberica-openjdk-alpine-musl:17

RUN apk add maven
WORKDIR /code

# Prepare by downloading dependencies
ADD pom.xml /code/pom.xml
RUN ["mvn", "dependency:resolve"]
RUN ["mvn", "verify"]

# Adding source, compile and package into a fat jar
ADD src /code/src
RUN ["mvn", "package"]

EXPOSE 4567
CMD ["java", "-jar", "target/eksExample-jar-with-dependencies.jar"]
```

**Example deployment file**

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: microservice-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app.kubernetes.io/name: java-microservice
  template:
    metadata:
      labels:
        app.kubernetes.io/name: java-microservice
    spec:
      containers:
      - name: java-microservice-container
        image: <account_number>.dkr.ecr.<region>.amazonaws.com/<repository_name>:v1
        ports:
        - containerPort: 4567
```

**Example service file**

```
apiVersion: v1
kind: Service
metadata:
  name: "service-java-microservice"
spec:
  ports:
    - port: 80
      targetPort: 4567
      protocol: TCP
  type: NodePort
  selector:
    app.kubernetes.io/name: java-microservice
```

**Example ingress resource file**

```
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: "java-microservice-ingress"
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/load-balancer-name: apg2
    alb.ingress.kubernetes.io/target-type: ip
  labels:
    app: java-microservice
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: "service-java-microservice"
                port:
                  number: 80
```

# Deploy a gRPC-based application on an Amazon EKS cluster and access it with an Application Load Balancer
<a name="deploy-a-grpc-based-application-on-an-amazon-eks-cluster-and-access-it-with-an-application-load-balancer"></a>

*Kirankumar Chandrashekar and Huy Nguyen, Amazon Web Services*

## Summary
<a name="deploy-a-grpc-based-application-on-an-amazon-eks-cluster-and-access-it-with-an-application-load-balancer-summary"></a>

This pattern describes how to host a gRPC-based application on an Amazon Elastic Kubernetes Service (Amazon EKS) cluster and securely access it through an Application Load Balancer.

[gRPC](https://grpc.io/) is an open-source remote procedure call (RPC) framework that can run in any environment. You can use it for microservice integrations and client-server communications. For more information about gRPC, see the AWS blog post [Application Load Balancer support for end-to-end HTTP/2 and gRPC](https://aws.amazon.com/blogs/aws/new-application-load-balancer-support-for-end-to-end-http-2-and-grpc/).

This pattern shows you how to host a gRPC-based application that runs on Kubernetes pods on Amazon EKS. The gRPC client connects to an Application Load Balancer through the HTTP/2 protocol with an SSL/TLS encrypted connection. The Application Load Balancer forwards traffic to the gRPC application that runs on Amazon EKS pods. The number of gRPC pods can be automatically scaled based on traffic by using the [Kubernetes Horizontal Pod Autoscaler](https://docs.aws.amazon.com/eks/latest/userguide/horizontal-pod-autoscaler.html). The Application Load Balancer's target group performs health checks on the Amazon EKS nodes, evaluates if the target is healthy, and forwards traffic only to healthy nodes.
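The summary mentions scaling the gRPC pods with the Kubernetes Horizontal Pod Autoscaler. One hedged way to enable that for the `grpcserver` deployment used in this pattern (the deployment and namespace names come from the sample manifests; the CPU target and replica bounds are illustrative, and the cluster must have the Kubernetes Metrics Server installed) is:

```shell
# Create a Horizontal Pod Autoscaler for the sample gRPC deployment.
# Thresholds below are illustrative, not prescribed by this pattern.
kubectl autoscale deployment grpcserver \
  --namespace grpcserver \
  --cpu-percent=70 --min=1 --max=5

# Watch the autoscaler's current and target utilization.
kubectl get hpa --namespace grpcserver
```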

## Prerequisites and limitations
<a name="deploy-a-grpc-based-application-on-an-amazon-eks-cluster-and-access-it-with-an-application-load-balancer-prereqs"></a>

**Prerequisites**
+ An active AWS account.
+ [Docker](https://www.docker.com/), installed and configured on Linux, macOS, or Windows.
+ [AWS Command Line Interface (AWS CLI) version 2](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html), installed and configured on Linux, macOS, or Windows.
+ [eksctl](https://github.com/eksctl-io/eksctl#installation), installed and configured on Linux, macOS, or Windows.
+ `kubectl`, installed and configured to access resources on your Amazon EKS cluster. For more information, see [Installing or updating kubectl](https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html) in the Amazon EKS documentation. 
+ [gRPCurl](https://github.com/fullstorydev/grpcurl), installed and configured.
+ A new or existing Amazon EKS cluster. For more information, see [Getting started with Amazon EKS.](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html)
+ Your computer terminal configured to access the Amazon EKS cluster. For more information, see [Configure your computer to communicate with your cluster](https://docs.aws.amazon.com/eks/latest/userguide/getting-started-console.html#eks-configure-kubectl) in the Amazon EKS documentation.
+ [AWS Load Balancer Controller](https://docs.aws.amazon.com/eks/latest/userguide/aws-load-balancer-controller.html), provisioned in the Amazon EKS cluster.
+ An existing DNS host name with a valid SSL/TLS certificate. You can obtain a certificate for your domain by using AWS Certificate Manager (ACM) or by uploading an existing certificate to ACM. For more information about these two options, see [Requesting a public certificate](https://docs.aws.amazon.com/acm/latest/userguide/gs-acm-request-public.html) and [Importing certificates into AWS Certificate Manager](https://docs.aws.amazon.com/acm/latest/userguide/import-certificate.html) in the ACM documentation.
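For the certificate prerequisite, requesting a DNS-validated public certificate through the AWS CLI might look like the following sketch (`grpc.example.com` is a placeholder host name; substitute your own domain):

```shell
# Request a public certificate for the gRPC host name and
# validate ownership through DNS.
aws acm request-certificate \
  --domain-name grpc.example.com \
  --validation-method DNS

# Note the CertificateArn in the output. The ingress annotation
# alb.ingress.kubernetes.io/certificate-arn (shown in the
# Additional information section) expects that ARN.
```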

## Architecture
<a name="deploy-a-grpc-based-application-on-an-amazon-eks-cluster-and-access-it-with-an-application-load-balancer-architecture"></a>

The following diagram shows the architecture implemented by this pattern.

![\[Architecture for gRPC-based application on Amazon EKS\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/abf727c1-ff8b-43a7-923f-bce825d1b459/images/281936fa-bc43-4b4e-a343-ba1eab97df38.png)



The following diagram shows a workflow in which the Application Load Balancer receives SSL/TLS traffic from a gRPC client and terminates (offloads) the encrypted session. The load balancer forwards the traffic to the gRPC server in plaintext because the traffic stays within the virtual private cloud (VPC).

![\[Workflow for sending SSL/TLS traffic to a gRPC server\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/abf727c1-ff8b-43a7-923f-bce825d1b459/images/09e0c3f6-0c39-40b7-908f-8c4c693a5f02.png)


## Tools
<a name="deploy-a-grpc-based-application-on-an-amazon-eks-cluster-and-access-it-with-an-application-load-balancer-tools"></a>

**AWS services**
+ [AWS Command Line Interface (AWS CLI)](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) is an open-source tool that helps you interact with AWS services through commands in your command line shell.
+ [Elastic Load Balancing](https://docs.aws.amazon.com/elasticloadbalancing/latest/userguide/what-is-load-balancing.html) distributes incoming application or network traffic across multiple targets. For example, you can distribute traffic across Amazon Elastic Compute Cloud (Amazon EC2) instances, containers, and IP addresses in one or more Availability Zones.
+ [Amazon Elastic Container Registry (Amazon ECR)](https://docs.aws.amazon.com/AmazonECR/latest/userguide/what-is-ecr.html) is a managed container image registry service that’s secure, scalable, and reliable. 
+ [Amazon Elastic Kubernetes Service (Amazon EKS)](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html) helps you run Kubernetes on AWS without needing to install or maintain your own Kubernetes control plane or nodes.  

**Tools**
+ [eksctl](https://eksctl.io/) is a simple CLI tool for creating clusters on Amazon EKS.
+ [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) is a command line utility for running commands against Kubernetes clusters.
+ [AWS Load Balancer Controller](https://docs.aws.amazon.com/eks/latest/userguide/aws-load-balancer-controller.html) helps you manage AWS Elastic Load Balancers for a Kubernetes cluster.
+ [gRPCurl](https://github.com/fullstorydev/grpcurl) is a command line tool that helps you interact with gRPC services.

**Code repository**

The code for this pattern is available in the GitHub [grpc-traffic-on-alb-to-eks](https://github.com/aws-samples/grpc-traffic-on-alb-to-eks.git) repository.

## Epics
<a name="deploy-a-grpc-based-application-on-an-amazon-eks-cluster-and-access-it-with-an-application-load-balancer-epics"></a>

### Build and push the gRPC server’s Docker image to Amazon ECR
<a name="build-and-push-the-grpc-serverrsquor-s-docker-image-to-amazon-ecr"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an Amazon ECR repository. | Sign in to the AWS Management Console, open the [Amazon ECR console](https://console.aws.amazon.com/ecr/), and then create an Amazon ECR repository. For more information, see [Creating a repository](https://docs.aws.amazon.com/AmazonECR/latest/userguide/repository-create.html) in the Amazon ECR documentation. Make sure that you record the Amazon ECR repository’s URL. You can also create an Amazon ECR repository with the AWS CLI by running the following command:<pre>aws ecr create-repository --repository-name helloworld-grpc</pre> | Cloud administrator | 
| Build the Docker image.  | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-a-grpc-based-application-on-an-amazon-eks-cluster-and-access-it-with-an-application-load-balancer.html) | DevOps engineer | 
| Push the Docker image to Amazon ECR. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-a-grpc-based-application-on-an-amazon-eks-cluster-and-access-it-with-an-application-load-balancer.html) | DevOps engineer | 

### Deploy the Kubernetes manifests to the Amazon EKS cluster
<a name="deploy-the-kubernetes-manifests-to-the-amazon-eks-cluster"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Modify the values in the Kubernetes manifest file. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-a-grpc-based-application-on-an-amazon-eks-cluster-and-access-it-with-an-application-load-balancer.html) | DevOps engineer | 
| Deploy the Kubernetes manifest file.  | Deploy the `grpc-sample.yaml` file to the Amazon EKS cluster by running the following `kubectl` command: <pre>kubectl apply -f ./kubernetes/grpc-sample.yaml</pre> | DevOps engineer | 

### Create the DNS record for the Application Load Balancer's FQDN
<a name="create-the-dns-record-for-the-application-load-balancerapos-s-fqdn"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Record the FQDN for the Application Load Balancer. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-a-grpc-based-application-on-an-amazon-eks-cluster-and-access-it-with-an-application-load-balancer.html) | DevOps engineer | 

### Test the solution
<a name="test-the-solution"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Test the gRPC server.  | Use gRPCurl to test the endpoint by running the following command:<pre>grpcurl grpc.example.com:443 list</pre>The command lists the available services:<pre>grpc.reflection.v1alpha.ServerReflection<br />helloworld.helloworld</pre>Replace `grpc.example.com` with your DNS name. | DevOps engineer | 
| Test the gRPC server using a gRPC client.  | In the `helloworld_client_ssl.py` sample gRPC client, replace the `grpc.example.com` host name with the host name of your gRPC server. The following sample shows the gRPC server's response to the client's request:<pre>python ./app/helloworld_client_ssl.py<br />message: "Hello to gRPC server from Client"<br /><br />message: "Thanks for talking to gRPC server!! Welcome to hello world. Received message is \"Hello to gRPC server from Client\""<br />received: true</pre>This output shows that the client can talk to the server and that the connection is successful. | DevOps engineer | 

### Clean up
<a name="clean-up"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Remove the DNS record. | Remove the DNS record that you created earlier, which points to the Application Load Balancer's FQDN.  | Cloud administrator | 
| Remove the load balancer. | On the [Amazon EC2 console](https://console.aws.amazon.com/ec2/), choose **Load Balancers**, and then remove the load balancer that the Kubernetes controller created for your ingress resource. | Cloud administrator | 
| Delete the Amazon EKS cluster. | Delete the Amazon EKS cluster by using `eksctl`:<pre>eksctl delete cluster -f ./eks.yaml</pre> | AWS DevOps | 

## Related resources
<a name="deploy-a-grpc-based-application-on-an-amazon-eks-cluster-and-access-it-with-an-application-load-balancer-resources"></a>
+ [Network load balancing on Amazon EKS](https://docs.aws.amazon.com/eks/latest/userguide/load-balancing.html)
+ [Target groups for your Application Load Balancers](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-target-groups.html#target-group-protocol-version)

## Additional information
<a name="deploy-a-grpc-based-application-on-an-amazon-eks-cluster-and-access-it-with-an-application-load-balancer-additional"></a>

**Sample ingress resource:**

```
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
    alb.ingress.kubernetes.io/ssl-redirect: "443"
    alb.ingress.kubernetes.io/backend-protocol-version: "GRPC"
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:<AWS-Region>:<AccountId>:certificate/<certificate_ID>
  labels:
    app: grpcserver
    environment: dev
  name: grpcserver
  namespace: grpcserver
spec:
  ingressClassName: alb
  rules:
  - host: grpc.example.com # <----- replace this with the host name for which the SSL certificate is available in ACM
    http:
      paths:
      - backend:
          service:
            name: grpcserver
            port:
              number: 9000
        path: /
        pathType: Prefix
```

**Sample deployment resource:**

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grpcserver
  namespace: grpcserver
spec:
  selector:
    matchLabels:
      app: grpcserver
  replicas: 1
  template:
    metadata:
      labels:
        app: grpcserver
    spec:
      containers:
      - name: grpc-demo
        image: <your_aws_account_id>.dkr.ecr.us-east-1.amazonaws.com/helloworld-grpc:1.0   #<------- Change to the URI that the Docker image is pushed to
        imagePullPolicy: Always
        ports:
        - name: grpc-api
          containerPort: 9000
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
      restartPolicy: Always
```

**Sample output:**

```
NAME         CLASS    HOSTS            ADDRESS         PORTS   AGE
grpcserver   <none>   <DNS-HostName>   <ELB-address>   80      27d
```

# Deploy containerized applications on AWS IoT Greengrass V2 running as a Docker container
<a name="deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container"></a>

*Salih Bakir, Giuseppe Di Bella, and Gustav Svalander, Amazon Web Services*

## Summary
<a name="deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container-summary"></a>

AWS IoT Greengrass Version 2, when deployed as a Docker container, doesn't natively support running Docker application containers. This pattern shows you how to create a custom container image based on the latest version of AWS IoT Greengrass V2 that enables Docker-in-Docker (DinD) functionality. With DinD, you can run containerized applications within the AWS IoT Greengrass V2 environment.

You can deploy this pattern as a stand-alone solution or integrate it with container orchestration platforms like Amazon ECS Anywhere. In either deployment model, you maintain full AWS IoT Greengrass V2 functionality including AWS IoT SiteWise Edge processing capabilities, while enabling scalable container-based deployments. 
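The core idea of this pattern, a Greengrass V2 image extended with Docker-in-Docker, could be sketched as follows. This is an illustrative, untested outline: the `amazon/aws-iot-greengrass` base image tag and the `yum` package commands are assumptions, not the pattern's actual build file.

```shell
# Sketch of a custom image build (assumptions: the public
# amazon/aws-iot-greengrass base image and a yum-based OS).
cat > Dockerfile.dind <<'EOF'
FROM amazon/aws-iot-greengrass:latest

# Install the Docker Engine so that Greengrass components can
# launch nested application containers (DinD).
RUN yum install -y docker && yum clean all
EOF

docker build -f Dockerfile.dind -t greengrass-dind:latest .

# DinD requires the container to run with elevated privileges, e.g.:
# docker run --privileged greengrass-dind:latest
```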

## Prerequisites and limitations
<a name="deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container-prereqs"></a>

**Prerequisites**
+ An active AWS account.
+ For general AWS IoT Greengrass Version 2 prerequisites, see [Prerequisites](https://docs.aws.amazon.com/greengrass/v2/developerguide/getting-started-prerequisites.html) in the AWS IoT Greengrass Version 2 documentation. 
+ Docker Engine, installed and configured on Linux, macOS, or Windows.
+ Docker Compose (if you use the Docker Compose command line interface (CLI) to run Docker images).
+ A Linux operating system.
+ A hypervisor with a host server that supports virtualization.
+ System requirements:
  + 2 GB of RAM (minimum)
  + 5 GB of available disk space (minimum)
  + For AWS IoT SiteWise Edge, an x86_64 quad-core CPU with 16 GB of RAM and 50 GB of available disk space. For more information about AWS IoT SiteWise data processing, see [Data processing pack requirements](https://docs.aws.amazon.com/iot-sitewise/latest/userguide/configure-gateway-ggv2.html#w2aac17c19c13b7) in the AWS IoT SiteWise documentation.

**Product versions**
+ AWS IoT Greengrass Version 2 version 2.5.3 or later
+ Docker-in-Docker version 1.0.0 or later
+ Docker Compose version 1.22 or later
+ Docker Engine version 20.10.12 or later

**Limitations**
+ Some AWS services aren’t available in all AWS Regions. For Region availability, see [AWS Services by Region](https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/). For specific endpoints, see [Service endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/aws-service-information.html), and choose the link for the service.

## Architecture
<a name="deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container-architecture"></a>

**Target technology stack**
+ **Data sources** – IoT devices, sensors, or industrial equipment that generates data for processing
+ **AWS IoT Greengrass V2** – Running as a Docker container with DinD capabilities, deployed on edge infrastructure
+ **Containerized applications** – Custom applications running within the AWS IoT Greengrass V2 environment as nested Docker containers
+ **(Optional) Amazon ECS Anywhere** – Container orchestration that manages the AWS IoT Greengrass V2 container deployment
+ **Other AWS services** – AWS IoT Core, AWS IoT SiteWise, and other AWS services for data processing and management

**Target architecture**

The following diagram shows an example target deployment architecture that uses Amazon ECS Anywhere, which is a container management tool.

![\[Deployment architecture using Amazon ECS Anywhere.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/2ecf5354-40e0-4fd9-9798-086719059784/images/5ed2652e-9604-4809-8962-b167e1991658.png)


The diagram shows the following workflow:

**1: Container image storage** – Amazon ECR stores the AWS IoT Greengrass container images and any custom application containers needed for edge processing.

**2 and 3: Container deployment** – Amazon ECS Anywhere deploys the AWS IoT Greengrass container image from Amazon ECR to the edge location, managing the container lifecycle and deployment process.

**4: Component deployment** – The deployed AWS IoT Greengrass core automatically deploys its relevant components based on its configuration. Components include AWS IoT SiteWise Edge and other necessary edge processing components within the containerized environment.

**5: Data ingestion** – After it’s fully configured, AWS IoT Greengrass begins ingesting telemetry and sensor data from various IoT data sources at the edge location.

**6: Data processing and cloud integration** – The containerized AWS IoT Greengrass core processes data locally using its deployed components (including AWS IoT SiteWise Edge for industrial data). Then, it sends processed data to AWS Cloud services for further analysis and storage.
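
The containerized core in this workflow can be started with a single `docker run` command. The following is a minimal sketch; the image tag matches the build epic in this pattern, and the Region and container name are placeholder assumptions.

```bash
# Start the Greengrass V2 Docker-in-Docker container at the edge.
# --privileged is required for the nested Docker daemon, and --init
# ensures proper signal handling inside the container.
docker run -d --privileged --init \
  --name greengrass-dind \
  -e PROVISION=true \
  -e AWS_REGION=us-east-1 \
  x86_64/aws-iot-greengrass:latest
```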

## Tools
<a name="deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container-tools"></a>

**AWS services**
+ [Amazon ECS Anywhere](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/launch-type-external.html) helps you deploy, use, and manage Amazon ECS tasks and services on your own infrastructure.
+ [Amazon Elastic Compute Cloud (Amazon EC2)](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html) provides scalable computing capacity in the AWS Cloud. You can launch as many virtual servers as you need and quickly scale them up or down.
+ [Amazon Elastic Container Registry (Amazon ECR)](https://docs.aws.amazon.com/AmazonECR/latest/userguide/what-is-ecr.html) is a managed container image registry service that’s secure, scalable, and reliable.
+ [AWS IoT Greengrass](https://docs.aws.amazon.com/greengrass/v2/developerguide/what-is-iot-greengrass.html) is an open source Internet of Things (IoT) edge runtime and cloud service that helps you build, deploy, and manage IoT applications on your devices.
+ [AWS IoT SiteWise](https://docs.aws.amazon.com/iot-sitewise/latest/userguide/what-is-sitewise.html) helps you collect, model, analyze, and visualize data from industrial equipment at scale.

**Other tools**
+ [Docker](https://www.docker.com/) is a set of platform as a service (PaaS) products that use virtualization at the operating-system level to deliver software in containers.
+ [Docker Compose](https://docs.docker.com/compose/) is a tool for defining and running multi-container applications.
+ [Docker Engine](https://docs.docker.com/engine/) is an open source containerization technology for building and containerizing applications.

**Code repository**

The code for this pattern is available in the GitHub [AWS IoT Greengrass v2 Docker-in-Docker](https://github.com/aws-samples/aws-iot-greengrass-docker-in-docker) repository.

## Epics
<a name="deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container-epics"></a>

### Build the AWS IoT Greengrass V2 Docker-in-Docker image
<a name="build-the-gg2-docker-in-docker-image"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clone and navigate to the repository. | To clone the repository, use the following command: `git clone https://github.com/aws-samples/aws-iot-greengrass-v2-docker-in-docker.git` To navigate to the `docker` directory, use the following command: `cd aws-iot-greengrass-v2-docker-in-docker/docker` | DevOps engineer, AWS DevOps | 
| Build the Docker image. | To build the Docker image with the default (latest) version, run the following command: `docker build -t x86_64/aws-iot-greengrass:latest .` Or, to build the Docker image with a specific version, run the following command: `docker build --build-arg GREENGRASS_RELEASE_VERSION=2.12.0 -t x86_64/aws-iot-greengrass:2.12.0 .` To verify the build, run the following command: `docker images \| grep aws-iot-greengrass` | AWS DevOps, DevOps engineer, App developer | 
| (Optional) Push to Amazon ECR. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container.html) | App developer, AWS DevOps, DevOps engineer | 
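
The optional Amazon ECR push typically follows the standard authenticate, tag, and push sequence. The following is a sketch; the account ID (`111122223333`), Region, and repository name are placeholders, and the repository must already exist.

```bash
# Authenticate Docker to your private Amazon ECR registry.
aws ecr get-login-password --region us-east-1 | \
  docker login --username AWS --password-stdin 111122223333.dkr.ecr.us-east-1.amazonaws.com

# Tag the locally built image and push it to the repository.
docker tag x86_64/aws-iot-greengrass:latest \
  111122223333.dkr.ecr.us-east-1.amazonaws.com/aws-iot-greengrass:latest
docker push 111122223333.dkr.ecr.us-east-1.amazonaws.com/aws-iot-greengrass:latest
```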

### Configure AWS credentials
<a name="configure-aws-credentials"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Select authentication method. | Choose one of the following options:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container.html) | AWS administrator | 
| Configure authentication method. | For the authentication method you selected, use the following configuration guidance:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container.html) | AWS administrator | 
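
One common way to provide credentials to the container is through environment variables or a mounted credentials file. The following sketch shows both options; the key values are the standard AWS documentation placeholders, and long-lived access keys should be replaced with temporary credentials where possible.

```bash
# Option 1: pass credentials as environment variables (placeholders shown).
docker run -d --privileged --init \
  -e AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE \
  -e AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY \
  x86_64/aws-iot-greengrass:latest

# Option 2: mount an existing credentials file read-only into the container.
docker run -d --privileged --init \
  -v "$HOME/.aws:/root/.aws:ro" \
  x86_64/aws-iot-greengrass:latest
```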

### Run with Docker Compose
<a name="run-with-docker-compose"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Configure `docker-compose.yml`. | Update the `docker-compose.yml` file with environment variables as follows:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container.html)[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container.html)[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container.html) | DevOps engineer | 
| Start and verify container. | To start in the foreground, run the following command: `docker-compose up --build` Or, to start in the background, run the following command: `docker-compose up --build -d` To verify status, run the following command: `docker-compose ps` To monitor logs, run the following command: `docker-compose logs -f` | DevOps engineer | 
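
A minimal `docker-compose.yml` for this image might look like the following sketch. The service name, volume name, and environment values are assumptions; adjust them to your deployment.

```yaml
services:
  greengrass:
    image: x86_64/aws-iot-greengrass:latest
    init: true
    privileged: true
    environment:
      - PROVISION=true
      - AWS_REGION=us-east-1
    volumes:
      - greengrass-data:/greengrass/v2
volumes:
  greengrass-data:
```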

### Run with Docker CLI
<a name="run-with-docker-cli"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Run container with Docker CLI. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container.html)[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container.html)[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container.html)[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container.html)[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container.html) | DevOps engineer | 
| Verify container. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container.html)[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container.html) | DevOps engineer | 
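
Verification usually amounts to checking the container status, its logs, and the Greengrass core itself. The following commands are a sketch; the container name is a placeholder, and the `greengrass-cli` call assumes the Greengrass CLI component is deployed.

```bash
# Confirm the container is running.
docker ps --filter name=greengrass-dind

# Follow the container logs for provisioning and component activity.
docker logs -f greengrass-dind

# List deployed components from inside the container (requires the
# Greengrass CLI component).
docker exec greengrass-dind /greengrass/v2/bin/greengrass-cli component list
```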

### Manage containerized applications
<a name="manage-containerized-applications"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Deploy applications. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container.html)[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container.html)[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container.html) | App developer | 
| Access and test Docker-in-Docker. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container.html) | DevOps engineer | 
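
A quick way to confirm that the nested Docker daemon works is to run a throwaway container inside the Greengrass container. The container name is a placeholder.

```bash
# Run a nested container; success confirms Docker-in-Docker is functional.
docker exec -it greengrass-dind docker run --rm hello-world
```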

### (Optional) Integrate with Amazon ECS Anywhere
<a name="optional-integrate-with-ecs-anywhere"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Set up Amazon ECS cluster. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container.html)[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container.html)[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container.html) | AWS administrator | 
| Deploy Amazon ECS task. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container.html)[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container.html)[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container.html) | AWS administrator | 

### Stop and clean up
<a name="stop-and-cleanup"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Stop container. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container.html)[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container.html)[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container.html) | DevOps engineer | 

## Troubleshooting
<a name="deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| Container fails to start with permission errors. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container.html)`--privileged` grants extended privileges to the container. | 
| Provisioning fails with credential errors. | To verify credentials are configured correctly, use the following steps:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container.html)Make sure that IAM permissions include `iot:CreateThing`, `iot:CreatePolicy`, `iot:AttachPolicy`, `iam:CreateRole`, and `iam:AttachRolePolicy`. | 
| Cannot connect to Docker daemon inside container. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container.html) | 
| Container runs out of disk space. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container.html)Ensure minimum disk space: 5 GB for basic operations and 50 GB for AWS IoT SiteWise Edge. | 
| Build issues. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container.html) | 
| Network connectivity issues. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container.html)Verify that the firewall allows outbound HTTPS (443) and MQTT (8883) traffic. | 
| Greengrass components fail to deploy. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container.html)Check component-specific logs in the `/greengrass/v2/logs/` directory. | 
| Container exits immediately after starting. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container.html)Verify all required environment variables are set correctly if `PROVISION=true`. Make sure that the `--init` flag is used when starting the container. | 

## Related resources
<a name="deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container-resources"></a>

**AWS resources**
+ [Amazon Elastic Container Service](https://aws.amazon.com/ecs/)
+ [Configure edge data processing for AWS IoT SiteWise models and assets](https://docs.aws.amazon.com/iot-sitewise/latest/userguide/edge-processing.html)
+ [What is AWS IoT Greengrass](https://docs.aws.amazon.com/greengrass/v2/developerguide/what-is-iot-greengrass.html)

**Other resources**
+ [Docker documentation](https://docs.docker.com/)

## Additional information
<a name="deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container-additional"></a>
+ For AWS IoT SiteWise Edge data processing, Docker must be available within the AWS IoT Greengrass environment.
+ To run a nested container, you must run the AWS IoT Greengrass container with administrator-level credentials.

# Deploy containers by using Elastic Beanstalk
<a name="deploy-containers-by-using-elastic-beanstalk"></a>

*Thomas Scott and Jean-Baptiste Guillois, Amazon Web Services*

## Summary
<a name="deploy-containers-by-using-elastic-beanstalk-summary"></a>

AWS Elastic Beanstalk supports Docker as a platform, so containers can run in the environments that Elastic Beanstalk creates. This pattern shows how to deploy containers by using Elastic Beanstalk. The deployment in this pattern uses a web server environment based on the Docker platform.

To deploy and scale web applications and services with Elastic Beanstalk, you upload your code, and Elastic Beanstalk automatically handles the deployment, including capacity provisioning, load balancing, automatic scaling, and application health monitoring. You retain full control over the AWS resources that Elastic Beanstalk creates on your behalf. There is no additional charge for Elastic Beanstalk; you pay only for the AWS resources that store and run your applications.

This pattern includes instructions for deployment using the [AWS Elastic Beanstalk Command Line Interface (EB CLI)](https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/eb-cli3-install-advanced.html) and the AWS Management Console.
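
With the EB CLI, the end-to-end flow is typically three commands. The application name, environment name, and Region below are placeholder assumptions.

```bash
# Initialize the project for the Docker platform.
eb init cluster-sample-app --platform docker --region us-east-1

# Create the environment and deploy the current directory.
eb create cluster-sample-env

# Open the deployed application in a browser.
eb open
```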

**Use cases**

Use cases for Elastic Beanstalk include the following: 
+ Deploy a prototype environment to demo a frontend application. (This pattern uses a Dockerfile as the example.)
+ Deploy an API to handle API requests for a given domain.
+ Deploy an orchestration solution by using Docker Compose (`docker-compose.yml` is not used as the practical example in this pattern).
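
For the first use case, a Dockerfile can be as small as the following sketch. This is an illustrative example, not the sample repository's actual Dockerfile; it serves a static page with nginx.

```dockerfile
# Illustrative only: serve a static frontend with nginx.
FROM nginx:alpine
COPY index.html /usr/share/nginx/html/index.html
EXPOSE 80
```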

## Prerequisites and limitations
<a name="deploy-containers-by-using-elastic-beanstalk-prereqs"></a>

**Prerequisites**
+ An AWS account
+ AWS EB CLI locally installed
+ Docker installed on a local machine

**Limitations**
+ Docker Hub's free plan limits image pulls to 100 pulls per 6 hours per IP address.

## Architecture
<a name="deploy-containers-by-using-elastic-beanstalk-architecture"></a>

**Target technology stack**
+ Amazon Elastic Compute Cloud (Amazon EC2) instances
+ Security group
+ Application Load Balancer
+ Auto Scaling group

**Target architecture**

![\[Architecture for deploying containers with Elastic Beanstalk.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/dfabcdc2-747f-40e2-a603-08ea31ba71d3/images/1d17ff09-1aea-4c72-adb5-eaf741601428.png)


**Automation and scale**

AWS Elastic Beanstalk can automatically scale based on the number of requests made. AWS resources created for an environment include one Application Load Balancer, an Auto Scaling group, and one or more Amazon EC2 instances. 

The load balancer sits in front of the Amazon EC2 instances, which are part of the Auto Scaling group. Amazon EC2 Auto Scaling automatically starts additional Amazon EC2 instances to accommodate increasing load on your application. If the load on your application decreases, Amazon EC2 Auto Scaling stops instances, but it keeps at least one instance running.

**Automatic scaling triggers**

The Auto Scaling group in your Elastic Beanstalk environment uses two Amazon CloudWatch alarms to initiate scaling operations. The default triggers scale when the average outbound network traffic from each instance is higher than 6 MB or lower than 2 MB over a period of five minutes. To use Amazon EC2 Auto Scaling effectively, configure triggers that are appropriate for your application, instance type, and service requirements. You can scale based on several statistics including latency, disk I/O, CPU utilization, and request count. For more information, see [Auto Scaling triggers](https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/environments-cfg-autoscaling-triggers.html).
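
Trigger settings can be overridden in an `.ebextensions` configuration file. The following sketch switches the trigger from network traffic to average CPU utilization; the threshold values are illustrative assumptions.

```yaml
# .ebextensions/autoscaling.config
option_settings:
  aws:autoscaling:trigger:
    MeasureName: CPUUtilization
    Unit: Percent
    Statistic: Average
    UpperThreshold: 70
    LowerThreshold: 25
```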

## Tools
<a name="deploy-containers-by-using-elastic-beanstalk-tools"></a>

**AWS services**
+ [AWS Command Line Interface (AWS CLI)](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) is an open-source tool that helps you interact with AWS services through commands in your command-line shell.
+ [AWS EB Command Line Interface (EB CLI)](https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/eb-cli3-install.html) is a command-line client that you can use to create, configure, and manage Elastic Beanstalk environments.
+ [Elastic Load Balancing](https://docs.aws.amazon.com/elasticloadbalancing/latest/userguide/what-is-load-balancing.html) distributes incoming application or network traffic across multiple targets. For example, you can distribute traffic across Amazon Elastic Compute Cloud (Amazon EC2) instances, containers, and IP addresses in one or more Availability Zones.

**Other services**
+ [Docker](https://www.docker.com/) packages software into standardized units called containers that include libraries, system tools, code, and runtime.

**Code**

The code for this pattern is available in the GitHub [Cluster Sample Application](https://github.com/aws-samples/cluster-sample-app) repository.

## Epics
<a name="deploy-containers-by-using-elastic-beanstalk-epics"></a>

### Build with a Dockerfile
<a name="build-with-a-dockerfile"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clone the remote repository. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-containers-by-using-elastic-beanstalk.html) | App developer, AWS administrator, AWS DevOps | 
| Initialize the Elastic Beanstalk Docker project. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-containers-by-using-elastic-beanstalk.html) | App developer, AWS administrator, AWS DevOps | 
| Test the project locally. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-containers-by-using-elastic-beanstalk.html) | App developer, AWS administrator, AWS DevOps | 

### Deploy using EB CLI
<a name="deploy-using-eb-cli"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Run the deployment command. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-containers-by-using-elastic-beanstalk.html) | App developer, AWS administrator, AWS DevOps | 
| Access the deployed version. | After the deployment command has finished, access the project using the `eb open` command. | App developer, AWS administrator, AWS DevOps | 

### Deploy using the console
<a name="deploy-using-the-console"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Deploy the application by using the browser. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-containers-by-using-elastic-beanstalk.html) | App developer, AWS administrator, AWS DevOps | 
| Access the deployed version. | After deployment, access the deployed application, and choose the URL provided. | App developer, AWS administrator, AWS DevOps | 

## Related resources
<a name="deploy-containers-by-using-elastic-beanstalk-resources"></a>
+ [Web server environments](https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/concepts-webserver.html)
+ [Install the EB CLI on macOS](https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/eb-cli3-install-osx.html)
+ [Manually install the EB CLI](https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/eb-cli3-install-advanced.html)

## Additional information
<a name="deploy-containers-by-using-elastic-beanstalk-additional"></a>

**Advantages of using Elastic Beanstalk**
+ Automatic infrastructure provisioning
+ Automatic management of the underlying platform
+ Automatic patching and updates to support the application
+ Automatic scaling of the application
+ Ability to customize the number of nodes
+ Ability to access the infrastructure components if needed
+ Ease of deployment over other container deployment solutions

# Generate a static outbound IP address using a Lambda function, Amazon VPC, and a serverless architecture
<a name="generate-a-static-outbound-ip-address-using-a-lambda-function-amazon-vpc-and-a-serverless-architecture"></a>

*Thomas Scott, Amazon Web Services*

## Summary
<a name="generate-a-static-outbound-ip-address-using-a-lambda-function-amazon-vpc-and-a-serverless-architecture-summary"></a>

This pattern describes how to generate a static outbound IP address in the Amazon Web Services (AWS) Cloud by using a serverless architecture. Your organization can benefit from this approach if it needs to send files to a separate business entity by using Secure File Transfer Protocol (SFTP), because the receiving entity's firewall must allow traffic from a known IP address. 

The pattern’s approach helps you create an AWS Lambda function that uses an [Elastic IP address](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/elastic-ip-addresses-eip.html) as the outbound IP address. By following the steps in this pattern, you can create a Lambda function and a virtual private cloud (VPC) that routes outbound traffic through an internet gateway with a static IP address. To use the static IP address, you attach the Lambda function to the VPC and its subnets. 

## Prerequisites and limitations
<a name="generate-a-static-outbound-ip-address-using-a-lambda-function-amazon-vpc-and-a-serverless-architecture-prereqs"></a>

**Prerequisites **
+ An active AWS account. 
+ AWS Identity and Access Management (IAM) permissions to create and deploy a Lambda function, and to create a VPC and its subnets. For more information about this, see [Execution role and user permissions](https://docs.aws.amazon.com/lambda/latest/dg/configuration-vpc.html#vpc-permissions) in the AWS Lambda documentation.
+ If you plan to use infrastructure as code (IaC) to implement this pattern’s approach, you need an integrated development environment (IDE) such as AWS Cloud9. For more information about this, see [What is AWS Cloud9?](https://docs.aws.amazon.com/cloud9/latest/user-guide/welcome.html) in the AWS Cloud9 documentation.

## Architecture
<a name="generate-a-static-outbound-ip-address-using-a-lambda-function-amazon-vpc-and-a-serverless-architecture-architecture"></a>

The following diagram shows the serverless architecture for this pattern.

![\[AWS Cloud VPC architecture with two availability zones, public and private subnets, NAT gateways, and a Lambda function.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/eb1d0b05-df33-45ae-b27e-36090055b300/images/c15cc6da-ce4e-4ea0-9feb-de1c845d3ce8.png)


The diagram shows the following workflow:

1. Outbound traffic leaves `NAT gateway 1` in `Public subnet 1`.

1. Outbound traffic leaves `NAT gateway 2` in `Public subnet 2`.

1. The Lambda function can run in `Private subnet 1` or `Private subnet 2`.

1. `Private subnet 1` and `Private subnet 2` route traffic to the NAT gateways in the public subnets.

1. The NAT gateways send outbound traffic to the internet gateway from the public subnets.

1. Outbound data is transferred from the internet gateway to the external server.



**Technology stack**
+ Lambda
+ Amazon Virtual Private Cloud (Amazon VPC)


**Automation and scale**

You can ensure high availability (HA) by using two public and two private subnets in different Availability Zones. Even if one Availability Zone becomes unavailable, the pattern’s solution continues to work.

## Tools
<a name="generate-a-static-outbound-ip-address-using-a-lambda-function-amazon-vpc-and-a-serverless-architecture-tools"></a>
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) – AWS Lambda is a compute service that supports running code without provisioning or managing servers. Lambda runs your code only when needed and scales automatically, from a few requests per day to thousands per second. You pay only for the compute time that you consume—there is no charge when your code is not running.
+ [Amazon VPC](https://docs.aws.amazon.com/vpc/) – Amazon Virtual Private Cloud (Amazon VPC) provisions a logically isolated section of the AWS Cloud where you can launch AWS resources in a virtual network that you've defined. This virtual network closely resembles a traditional network that you'd operate in your own data center, with the benefits of using the scalable infrastructure of AWS.

## Epics
<a name="generate-a-static-outbound-ip-address-using-a-lambda-function-amazon-vpc-and-a-serverless-architecture-epics"></a>

### Create a new VPC
<a name="create-a-new-vpc"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a new VPC. | Sign in to the AWS Management Console, open the Amazon VPC console, and then create a VPC named `Lambda VPC` that has `10.0.0.0/25` as the IPv4 CIDR range. For more information about creating a VPC, see [Getting started with Amazon VPC](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-getting-started.html#getting-started-create-vpc) in the Amazon VPC documentation. | AWS administrator | 
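
The same VPC can also be created with the AWS CLI instead of the console. The following command is a sketch using the values from the task above.

```bash
# Create the VPC with the CIDR range from this epic and tag it by name.
aws ec2 create-vpc \
  --cidr-block 10.0.0.0/25 \
  --tag-specifications 'ResourceType=vpc,Tags=[{Key=Name,Value="Lambda VPC"}]'
```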

### Create two public subnets
<a name="create-two-public-subnets"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the first public subnet. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/generate-a-static-outbound-ip-address-using-a-lambda-function-amazon-vpc-and-a-serverless-architecture.html) | AWS administrator | 
| Create the second public subnet. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/generate-a-static-outbound-ip-address-using-a-lambda-function-amazon-vpc-and-a-serverless-architecture.html) | AWS administrator | 

### Create two private subnets
<a name="create-two-private-subnets"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the first private subnet. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/generate-a-static-outbound-ip-address-using-a-lambda-function-amazon-vpc-and-a-serverless-architecture.html) | AWS administrator | 
| Create the second private subnet. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/generate-a-static-outbound-ip-address-using-a-lambda-function-amazon-vpc-and-a-serverless-architecture.html) | AWS administrator | 

### Create two Elastic IP addresses for your NAT gateways
<a name="create-two-elastic-ip-addresses-for-your-nat-gateways"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
|  Create the first Elastic IP address. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/generate-a-static-outbound-ip-address-using-a-lambda-function-amazon-vpc-and-a-serverless-architecture.html)This Elastic IP address is used for your first NAT gateway.  | AWS administrator | 
| Create the second Elastic IP address. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/generate-a-static-outbound-ip-address-using-a-lambda-function-amazon-vpc-and-a-serverless-architecture.html)This Elastic IP address is used for your second NAT gateway. | AWS administrator | 

### Create an internet gateway
<a name="create-an-internet-gateway"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an internet gateway. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/generate-a-static-outbound-ip-address-using-a-lambda-function-amazon-vpc-and-a-serverless-architecture.html) | AWS administrator | 
| Attach the internet gateway to the VPC. | Select the internet gateway that you just created, and then choose **Actions, Attach to VPC**. | AWS administrator | 

### Create two NAT gateways
<a name="create-two-nat-gateways"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the first NAT gateway. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/generate-a-static-outbound-ip-address-using-a-lambda-function-amazon-vpc-and-a-serverless-architecture.html) | AWS administrator | 
| Create the second NAT gateway. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/generate-a-static-outbound-ip-address-using-a-lambda-function-amazon-vpc-and-a-serverless-architecture.html) | AWS administrator | 

### Create route tables for your public and private subnets
<a name="create-route-tables-for-your-public-and-private-subnets"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the route table for the public-one subnet. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/generate-a-static-outbound-ip-address-using-a-lambda-function-amazon-vpc-and-a-serverless-architecture.html) | AWS administrator | 
| Create the route table for the public-two subnet. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/generate-a-static-outbound-ip-address-using-a-lambda-function-amazon-vpc-and-a-serverless-architecture.html) | AWS administrator | 
| Create the route table for the private-one subnet. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/generate-a-static-outbound-ip-address-using-a-lambda-function-amazon-vpc-and-a-serverless-architecture.html) | AWS administrator | 
| Create the route table for the private-two subnet. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/generate-a-static-outbound-ip-address-using-a-lambda-function-amazon-vpc-and-a-serverless-architecture.html) | AWS administrator | 

### Create the Lambda function, add it to the VPC, and test the solution
<a name="create-the-lambda-function-add-it-to-the-vpc-and-test-the-solution"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a new Lambda function. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/generate-a-static-outbound-ip-address-using-a-lambda-function-amazon-vpc-and-a-serverless-architecture.html) | AWS administrator | 
| Add the Lambda function to your VPC. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/generate-a-static-outbound-ip-address-using-a-lambda-function-amazon-vpc-and-a-serverless-architecture.html) | AWS administrator | 
| Write code to call an external service. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/generate-a-static-outbound-ip-address-using-a-lambda-function-amazon-vpc-and-a-serverless-architecture.html) | AWS administrator | 

## Related resources
<a name="generate-a-static-outbound-ip-address-using-a-lambda-function-amazon-vpc-and-a-serverless-architecture-resources"></a>
+ [Configuring a Lambda function to access resources in a VPC](https://docs.aws.amazon.com/lambda/latest/dg/configuration-vpc.html)

# Identify duplicate container images automatically when migrating to an Amazon ECR repository
<a name="identify-duplicate-container-images-automatically-when-migrating-to-ecr-repository"></a>

*Rishabh Yadav and Rishi Singla, Amazon Web Services*

## Summary
<a name="identify-duplicate-container-images-automatically-when-migrating-to-ecr-repository-summary"></a>

This pattern provides an automated solution for identifying whether images that are stored in different container repositories are duplicates. This check is useful when you plan to migrate images from other container repositories to Amazon Elastic Container Registry (Amazon ECR).

For foundational information, this pattern also describes the components of a container image, such as the image digest, manifest, and tags. When you plan a migration to Amazon ECR, you might decide to synchronize your container images across container registries by comparing image digests. Before you migrate your container images, you should check whether these images already exist in the Amazon ECR repository to prevent duplication. However, it can be difficult to detect duplication by comparing image digests, and this can lead to issues in the initial migration phase. This pattern compares the digests of two similar images that are stored in different container registries and explains why the digests vary, to help you compare images accurately.

## Prerequisites and limitations
<a name="identify-duplicate-container-images-automatically-when-migrating-to-ecr-repository-prereqs"></a>
+ An active AWS account
+ Access to the [Amazon ECR public registry](https://gallery.ecr.aws/)
+ Familiarity with the following AWS services:
  + [AWS CodeCommit](https://aws.amazon.com/codecommit/)
  + [AWS CodePipeline](https://aws.amazon.com/codepipeline/)
  + [AWS CodeBuild](https://aws.amazon.com/codebuild/)
  + [AWS Identity and Access Management (IAM)](https://aws.amazon.com/iam/)
  + [Amazon Simple Storage Service (Amazon S3)](https://aws.amazon.com/s3/)
+ Configured CodeCommit credentials (see [instructions](https://docs.aws.amazon.com/codecommit/latest/userguide/setting-up-gc.html))

## Architecture
<a name="identify-duplicate-container-images-automatically-when-migrating-to-ecr-repository-architecture"></a>

**Container image components**

The following diagram illustrates some of the components of a container image. These components are described after the diagram.

![\[Manifest,configuration, file system layers, and digests.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/7db5020c-6f5b-4e91-b91a-5b8ae844be1b/images/71b99c67-a934-4f94-8af8-2a8431fb91f5.png)


**Terms and definitions**

The following terms are defined in the [Open Container Initiative (OCI) Image Specification](https://github.com/opencontainers/image-spec/blob/main/spec.md).
+ **Registry:** A service for image storage and management.
+ **Client:** A tool that communicates with registries and works with local images.
+ **Push:** The process for uploading images to a registry.
+ **Pull:** The process for downloading images from a registry.
+ **Blob:** The binary form of content that is stored by a registry and can be addressed by a digest.
+ **Index:** A construct that identifies multiple image manifests for different computer platforms (such as x86-64 or ARM 64-bit) or media types. For more information, see the [OCI Image Index Specification](https://github.com/opencontainers/image-spec/blob/main/image-index.md).
+ **Manifest:** A JSON document that defines an image or artifact that is uploaded through the manifest's endpoint. A manifest can reference other blobs in a repository by using descriptors. For more information, see the [OCI Image Manifest Specification](https://github.com/opencontainers/image-spec/blob/main/manifest.md).
+ **Filesystem layer:** System libraries and other dependencies for an image.
+ **Configuration:** A blob that contains artifact metadata and is referenced in the manifest. For more information, see the [OCI Image Configuration Specification](https://github.com/opencontainers/image-spec/blob/main/config.md).
+ **Object or artifact:** A conceptual content item that's stored as a blob and associated with an accompanying manifest with a configuration.
+ **Digest:** A unique identifier that's created from a cryptographic hash of the contents of a manifest. The image digest helps uniquely identify an immutable container image. When you pull an image by using its digest, you will download the same image every time on any operating system or architecture. For more information, see the [OCI Image Specification](https://github.com/opencontainers/image-spec/blob/main/descriptor.md#digests).
+ **Tag:** A human-readable manifest identifier. Compared with image digests, which are immutable, tags are dynamic. A tag that points to an image can change and move from one image to another, although the underlying image digest remains the same.
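To make the digest definition concrete, the following sketch hashes a hypothetical manifest file with `sha256sum`. The file contents are illustrative only, not a real OCI manifest; the point is that a digest is a hash of exact bytes.

```shell
# Hypothetical minimal manifest; a real manifest is produced by a client or registry.
printf '%s' '{"schemaVersion":2,"mediaType":"application/vnd.oci.image.manifest.v1+json"}' > manifest.json

# A digest is the algorithm name plus the hash of the exact bytes.
digest="sha256:$(sha256sum manifest.json | cut -d' ' -f1)"
echo "$digest"

# Changing even a single byte produces a completely different digest.
printf '%s' '{"schemaVersion":2,"mediaType":"application/vnd.oci.image.manifest.v1+json","x":1}' > manifest2.json
digest2="sha256:$(sha256sum manifest2.json | cut -d' ' -f1)"
```

This is why pulling an image by digest always retrieves byte-identical content, whereas a tag can be repointed to a different digest over time.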

**Target architecture**

The following diagram shows the high-level architecture of the solution that this pattern provides to identify duplicate container images by comparing images that are stored in Amazon ECR public and private repositories.

![\[Automatically detecting duplicates with CodePipeline and CodeBuild.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/7db5020c-6f5b-4e91-b91a-5b8ae844be1b/images/5ee62bc8-db8d-48a3-9e79-f3392b6e9bf7.png)


## Tools
<a name="identify-duplicate-container-images-automatically-when-migrating-to-ecr-repository-tools"></a>

**AWS services**
+ [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) helps you set up AWS resources, provision them quickly and consistently, and manage them throughout their lifecycle across AWS accounts and Regions.
+ [AWS CodeBuild](https://docs.aws.amazon.com/codebuild/latest/userguide/welcome.html) is a fully managed build service that helps you compile source code, run unit tests, and produce artifacts that are ready to deploy.
+ [AWS CodeCommit](https://docs.aws.amazon.com/codecommit/latest/userguide/welcome.html) is a version control service that helps you privately store and manage Git repositories, without needing to manage your own source control system.
+ [AWS CodePipeline](https://docs.aws.amazon.com/codepipeline/latest/userguide/welcome.html) helps you quickly model and configure the different stages of a software release and automate the steps required to release software changes continuously.
+ [Amazon Elastic Container Registry (Amazon ECR)](https://docs.aws.amazon.com/AmazonECR/latest/userguide/what-is-ecr.html) is a managed container image registry service that’s secure, scalable, and reliable.

**Code**

The code for this pattern is available in the GitHub repository [Automated solution to identify duplicate container images between repositories](https://github.com/aws-samples/automated-solution-to-identify-duplicate-container-images-between-repositories/).

## Best practices
<a name="identify-duplicate-container-images-automatically-when-migrating-to-ecr-repository-best-practices"></a>
+ [CloudFormation best practices](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/best-practices.html)
+ [AWS CodePipeline best practices](https://docs.aws.amazon.com/codepipeline/latest/userguide/best-practices.html)

## Epics
<a name="identify-duplicate-container-images-automatically-when-migrating-to-ecr-repository-epics"></a>

### Pull container images from Amazon ECR public and private repositories
<a name="pull-container-images-from-ecr-public-and-private-repositories"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Pull an image from the Amazon ECR public repository. | From the terminal, run the following command to pull the image `amazonlinux` from the Amazon ECR public repository.<pre>$~ % docker pull public.ecr.aws/amazonlinux/amazonlinux:2018.03 </pre>When the image has been pulled to your local machine, you’ll see the following pull digest, which represents the image index.<pre>2018.03: Pulling from amazonlinux/amazonlinux<br />4ddc0f8d367f: Pull complete <br /><br />Digest: sha256:f972d24199508c52de7ad37a298bda35d8a1bd7df158149b381c03f6c6e363b5<br /><br />Status: Downloaded newer image for public.ecr.aws/amazonlinux/amazonlinux:2018.03<br />public.ecr.aws/amazonlinux/amazonlinux:2018.03</pre> | App developer, AWS DevOps, AWS administrator | 
| Push the image to an Amazon ECR private repository. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/identify-duplicate-container-images-automatically-when-migrating-to-ecr-repository.html) | AWS administrator, AWS DevOps, App developer | 
| Pull the same image from the Amazon ECR private repository. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/identify-duplicate-container-images-automatically-when-migrating-to-ecr-repository.html) | App developer, AWS DevOps, AWS administrator | 

### Compare the image manifests
<a name="compare-the-image-manifests"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Find the manifest of the image stored in the Amazon ECR public repository. | From the terminal, run the following command to pull the manifest of the image `public.ecr.aws/amazonlinux/amazonlinux:2018.03` from the Amazon ECR public repository.<pre>$~ % docker manifest inspect public.ecr.aws/amazonlinux/amazonlinux:2018.03<br />{<br />   "schemaVersion": 2,<br />   "mediaType": "application/vnd.docker.distribution.manifest.list.v2+json",<br />   "manifests": [<br />      {<br />         "mediaType": "application/vnd.docker.distribution.manifest.v2+json",<br />         "size": 529,<br />         "digest": "sha256:52db9000073d93b9bdee6a7246a68c35a741aaade05a8f4febba0bf795cdac02",<br />         "platform": {<br />            "architecture": "amd64",<br />            "os": "linux"<br />         }<br />      }<br />   ]<br />}</pre> | AWS administrator, AWS DevOps, App developer | 
| Find the manifest of the image stored in the Amazon ECR private repository. | From the terminal, run the following command to pull the manifest of the image `<account-id>.dkr.ecr.us-east-1.amazonaws.com/test_ecr_repository:latest` from the Amazon ECR private repository.<pre>$~ % docker manifest inspect <account-id>.dkr.ecr.us-east-1.amazonaws.com/test_ecr_repository:latest                                          <br />{<br />	"schemaVersion": 2,<br />	"mediaType": "application/vnd.docker.distribution.manifest.v2+json",<br />	"config": {<br />		"mediaType": "application/vnd.docker.container.image.v1+json",<br />		"size": 1477,<br />		"digest": "sha256:f7cee5e1af28ad4e147589c474d399b12d9b551ef4c3e11e02d982fce5eebc68"<br />	},<br />	"layers": [<br />		{<br />			"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",<br />			"size": 62267075,<br />			"digest": "sha256:4ddc0f8d367f424871a060e2067749f32bd36a91085e714dcb159952f2d71453"<br />		}<br />	]<br />}</pre> | AWS DevOps, AWS systems administrator, App developer | 
| Compare the digest pulled by Docker with the manifest digest for the image in the Amazon ECR private repository. | You might also wonder why the digest provided by the **docker pull** command differs from the manifest digest for the image `<account-id>.dkr.ecr.us-east-1.amazonaws.com/test_ecr_repository:latest`. The digest used by **docker pull** is the digest of the image manifest, which is stored in a registry. This digest is the root of a hash chain, because the manifest contains the hash of the content that will be downloaded and imported into Docker. The image ID that Docker uses appears in this manifest as `config.digest` and represents the image configuration. You could say that the manifest is the envelope, and the image is the content of the envelope. The manifest digest is always different from the image ID. A specific manifest always produces the same image ID; however, because the manifest digest is the root of a hash chain, a given image ID is not guaranteed to always produce the same manifest digest. In most cases it does, although Docker cannot guarantee that. The possible difference in the manifest digest stems from Docker not storing the gzip-compressed blobs locally, so exporting layers might produce a different digest even though the uncompressed content remains the same. 
The image ID verifies that the uncompressed content is the same; that is, the image ID is a content-addressable identifier (`chainID`). To confirm this, you can compare the output of the **docker inspect** command for the Amazon ECR public and private repositories: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/identify-duplicate-container-images-automatically-when-migrating-to-ecr-repository.html) The results verify that both images have the same image ID digest and layer digest. ID: `f7cee5e1af28ad4e147589c474d399b12d9b551ef4c3e11e02d982fce5eebc68` Layers: `d5655967c2c4e8d68f8ec7cf753218938669e6c16ac1324303c073c736a2e2a2` Additionally, digests are computed over the bytes of the object that's managed locally (the local file is a tar of the container image layer) or of the blob that's pushed to the registry server. When you push a blob to a registry, the tar is compressed, and the digest is computed on the compressed tar file. Therefore, the difference in the **docker pull** digest value arises from compression that is applied at the registry (Amazon ECR private or public) level. This explanation is specific to the Docker client. You won't see this behavior with other clients such as **nerdctl** or **Finch**, because they don't automatically compress the image during push and pull operations. | AWS DevOps, AWS systems administrator, App developer | 
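The compression effect described above can be sketched with plain `gzip` and `sha256sum`. The file here is a stand-in for a layer tar, not a real image layer: the digest of the compressed bytes differs from the digest of the uncompressed bytes, but decompressing recovers content with the original digest.

```shell
# Stand-in for an image layer tar; contents are arbitrary for this sketch.
printf 'example layer bytes' > layer.tar
uncompressed=$(sha256sum layer.tar | cut -d' ' -f1)

# Compress the layer (-n omits the gzip header timestamp for reproducibility).
gzip -n -c layer.tar > layer.tar.gz
compressed=$(sha256sum layer.tar.gz | cut -d' ' -f1)

# The compressed digest differs from the uncompressed one...
echo "uncompressed: $uncompressed"
echo "compressed:   $compressed"

# ...but decompressing yields bytes with the original digest, which is what an
# identifier computed over uncompressed content (such as the image ID) relies on.
roundtrip=$(gunzip -c layer.tar.gz | sha256sum | cut -d' ' -f1)
```

This is a simplified illustration of why a registry-side digest of a compressed blob can differ from a locally computed digest even when the underlying content is identical.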

### Automatically identify duplicate images between Amazon ECR public and private repositories
<a name="automatically-identify-duplicate-images-between-ecr-public-and-private-repositories"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clone the repository. | Clone the GitHub repository for this pattern into a local folder:<pre>$ git clone https://github.com/aws-samples/automated-solution-to-identify-duplicate-container-images-between-repositories</pre> | AWS administrator, AWS DevOps | 
| Set up a CI/CD pipeline. | The GitHub repository includes a `.yaml` file that creates a CloudFormation stack to set up a pipeline in AWS CodePipeline.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/identify-duplicate-container-images-automatically-when-migrating-to-ecr-repository.html)The pipeline will be set up with two stages (CodeCommit and CodeBuild, as shown in the architecture diagram) to identify images in the private repository that also exist in the public repository. The pipeline is configured with the following resources:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/identify-duplicate-container-images-automatically-when-migrating-to-ecr-repository.html) | AWS administrator, AWS DevOps | 
| Populate the CodeCommit repository. | To populate the CodeCommit repository, perform these steps:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/identify-duplicate-container-images-automatically-when-migrating-to-ecr-repository.html) | AWS administrator, AWS DevOps | 
| Clean up. | To avoid incurring future charges, delete the resources by following these steps:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/identify-duplicate-container-images-automatically-when-migrating-to-ecr-repository.html) | AWS administrator | 
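The comparison that the pipeline's build stage performs can be sketched as a set intersection over digest inventories. The file names and digest values below are hypothetical; in the real solution, the inventories would be produced by querying each registry.

```shell
# Hypothetical digest inventories for the private and public repositories.
cat > private_digests.txt <<'EOF'
sha256:aaa
sha256:bbb
sha256:ccc
EOF
cat > public_digests.txt <<'EOF'
sha256:bbb
sha256:ddd
EOF

# A digest that appears in both inventories identifies a duplicate image.
duplicates=$(sort private_digests.txt public_digests.txt | uniq -d)
echo "$duplicates"    # prints sha256:bbb
```

Note that, as discussed in the previous epic, a plain digest comparison is reliable only when the digests were computed the same way on both sides; comparing image IDs (hashes over uncompressed content) avoids the compression caveat.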

## Troubleshooting
<a name="identify-duplicate-container-images-automatically-when-migrating-to-ecr-repository-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| When you try to push, pull, or otherwise interact with a CodeCommit repository from the terminal or command line, you are prompted to provide a user name and password, and you must supply the Git credentials for your IAM user. | The most common causes for this error are the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/identify-duplicate-container-images-automatically-when-migrating-to-ecr-repository.html)Depending on your operating system and local environment, you might need to install a credential manager, configure the credential manager that is included in your operating system, or customize your local environment to use credential storage. For example, if your computer is running macOS, you can use the Keychain Access utility to store your credentials. If your computer is running Windows, you can use the Git Credential Manager that is installed with Git for Windows. For more information, see [Setup for HTTPS users using Git credentials](https://docs.aws.amazon.com/codecommit/latest/userguide/setting-up-gc.html) in the CodeCommit documentation and [Credential Storage](https://git-scm.com/book/en/v2/Git-Tools-Credential-Storage) in the Git documentation. | 
| You encounter HTTP 403 or "no basic auth credentials" errors when you push an image to the Amazon ECR repository. | You might encounter these error messages from the **docker push** or **docker pull** command, even if you have successfully authenticated to Docker by using the **aws ecr get-login-password** command. Known causes are:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/identify-duplicate-container-images-automatically-when-migrating-to-ecr-repository.html) | 

## Related resources
<a name="identify-duplicate-container-images-automatically-when-migrating-to-ecr-repository-resources"></a>
+ [Automated solution to identify duplicate container images between repositories](https://github.com/aws-samples/automated-solution-to-identify-duplicate-container-images-between-repositories/) (GitHub repository)
+ [Amazon ECR public gallery](https://gallery.ecr.aws/)
+ [Private images in Amazon ECR](https://docs.aws.amazon.com/AmazonECR/latest/userguide/images.html) (Amazon ECR documentation)
+ [AWS::CodePipeline::Pipeline resource](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-codepipeline-pipeline.html) (CloudFormation documentation)
+ [OCI Image Format Specification](https://github.com/opencontainers/image-spec/blob/main/spec.md)

## Additional information
<a name="identify-duplicate-container-images-automatically-when-migrating-to-ecr-repository-additional"></a>

**Output of Docker inspection for image in Amazon ECR public repository**

```
[
    {
        "Id": "sha256:f7cee5e1af28ad4e147589c474d399b12d9b551ef4c3e11e02d982fce5eebc68",
        "RepoTags": [
            "<account-id>.dkr.ecr.us-east-1.amazonaws.com/test_ecr_repository:latest",
            "public.ecr.aws/amazonlinux/amazonlinux:2018.03"
        ],
        "RepoDigests": [
            "<account-id>.dkr.ecr.us-east-1.amazonaws.com/test_ecr_repository@sha256:52db9000073d93b9bdee6a7246a68c35a741aaade05a8f4febba0bf795cdac02",
            "public.ecr.aws/amazonlinux/amazonlinux@sha256:f972d24199508c52de7ad37a298bda35d8a1bd7df158149b381c03f6c6e363b5"
        ],
        "Parent": "",
        "Comment": "",
        "Created": "2023-02-23T06:20:11.575053226Z",
        "Container": "ec7f2fc7d2b6a382384061247ef603e7d647d65f5cd4fa397a3ccbba9278367c",
        "ContainerConfig": {
            "Hostname": "ec7f2fc7d2b6",
            "Domainname": "",
            "User": "",
            "AttachStdin": false,
            "AttachStdout": false,
            "AttachStderr": false,
            "Tty": false,
            "OpenStdin": false,
            "StdinOnce": false,
            "Env": [
                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
            ],
            "Cmd": [
                "/bin/sh",
                "-c",
                "#(nop) ",
                "CMD [\"/bin/bash\"]"
            ],
            "Image": "sha256:c1bced1b5a65681e1e0e52d0a6ad17aaf76606149492ca0bf519a466ecb21e51",
            "Volumes": null,
            "WorkingDir": "",
            "Entrypoint": null,
            "OnBuild": null,
            "Labels": {}
        },
        "DockerVersion": "20.10.17",
        "Author": "",
        "Config": {
            "Hostname": "",
            "Domainname": "",
            "User": "",
            "AttachStdin": false,
            "AttachStdout": false,
            "AttachStderr": false,
            "Tty": false,
            "OpenStdin": false,
            "StdinOnce": false,
            "Env": [
                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
            ],
            "Cmd": [
                "/bin/bash"
            ],
            "Image": "sha256:c1bced1b5a65681e1e0e52d0a6ad17aaf76606149492ca0bf519a466ecb21e51",
            "Volumes": null,
            "WorkingDir": "",
            "Entrypoint": null,
            "OnBuild": null,
            "Labels": null
        },
        "Architecture": "amd64",
        "Os": "linux",
        "Size": 167436755,
        "VirtualSize": 167436755,
        "GraphDriver": {
            "Data": {
                "MergedDir": "/var/lib/docker/overlay2/c2c2351a82b26cbdf7782507500e5adb5c2b3a2875bdbba79788a4b27cd6a913/merged",
                "UpperDir": "/var/lib/docker/overlay2/c2c2351a82b26cbdf7782507500e5adb5c2b3a2875bdbba79788a4b27cd6a913/diff",
                "WorkDir": "/var/lib/docker/overlay2/c2c2351a82b26cbdf7782507500e5adb5c2b3a2875bdbba79788a4b27cd6a913/work"
            },
            "Name": "overlay2"
        },
        "RootFS": {
            "Type": "layers",
            "Layers": [
                "sha256:d5655967c2c4e8d68f8ec7cf753218938669e6c16ac1324303c073c736a2e2a2"
            ]
        },
        "Metadata": {
            "LastTagTime": "2023-03-02T10:28:47.142155987Z"
        }
    }
]
```

**Output of Docker inspection for image in Amazon ECR private repository**

```
[
    {
        "Id": "sha256:f7cee5e1af28ad4e147589c474d399b12d9b551ef4c3e11e02d982fce5eebc68",
        "RepoTags": [
            "<account-id>.dkr.ecr.us-east-1.amazonaws.com/test_ecr_repository:latest",
            "public.ecr.aws/amazonlinux/amazonlinux:2018.03"
        ],
        "RepoDigests": [
            "<account-id>.dkr.ecr.us-east-1.amazonaws.com/test_ecr_repository@sha256:52db9000073d93b9bdee6a7246a68c35a741aaade05a8f4febba0bf795cdac02",
            "public.ecr.aws/amazonlinux/amazonlinux@sha256:f972d24199508c52de7ad37a298bda35d8a1bd7df158149b381c03f6c6e363b5"
        ],
        "Parent": "",
        "Comment": "",
        "Created": "2023-02-23T06:20:11.575053226Z",
        "Container": "ec7f2fc7d2b6a382384061247ef603e7d647d65f5cd4fa397a3ccbba9278367c",
        "ContainerConfig": {
            "Hostname": "ec7f2fc7d2b6",
            "Domainname": "",
            "User": "",
            "AttachStdin": false,
            "AttachStdout": false,
            "AttachStderr": false,
            "Tty": false,
            "OpenStdin": false,
            "StdinOnce": false,
            "Env": [
                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
            ],
            "Cmd": [
                "/bin/sh",
                "-c",
                "#(nop) ",
                "CMD [\"/bin/bash\"]"
            ],
            "Image": "sha256:c1bced1b5a65681e1e0e52d0a6ad17aaf76606149492ca0bf519a466ecb21e51",
            "Volumes": null,
            "WorkingDir": "",
            "Entrypoint": null,
            "OnBuild": null,
            "Labels": {}
        },
        "DockerVersion": "20.10.17",
        "Author": "",
        "Config": {
            "Hostname": "",
            "Domainname": "",
            "User": "",
            "AttachStdin": false,
            "AttachStdout": false,
            "AttachStderr": false,
            "Tty": false,
            "OpenStdin": false,
            "StdinOnce": false,
            "Env": [
                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
            ],
            "Cmd": [
                "/bin/bash"
            ],
            "Image": "sha256:c1bced1b5a65681e1e0e52d0a6ad17aaf76606149492ca0bf519a466ecb21e51",
            "Volumes": null,
            "WorkingDir": "",
            "Entrypoint": null,
            "OnBuild": null,
            "Labels": null
        },
        "Architecture": "amd64",
        "Os": "linux",
        "Size": 167436755,
        "VirtualSize": 167436755,
        "GraphDriver": {
            "Data": {
                "MergedDir": "/var/lib/docker/overlay2/c2c2351a82b26cbdf7782507500e5adb5c2b3a2875bdbba79788a4b27cd6a913/merged",
                "UpperDir": "/var/lib/docker/overlay2/c2c2351a82b26cbdf7782507500e5adb5c2b3a2875bdbba79788a4b27cd6a913/diff",
                "WorkDir": "/var/lib/docker/overlay2/c2c2351a82b26cbdf7782507500e5adb5c2b3a2875bdbba79788a4b27cd6a913/work"
            },
            "Name": "overlay2"
        },
        "RootFS": {
            "Type": "layers",
            "Layers": [
                "sha256:d5655967c2c4e8d68f8ec7cf753218938669e6c16ac1324303c073c736a2e2a2"
            ]
        },
        "Metadata": {
            "LastTagTime": "2023-03-02T10:28:47.142155987Z"
        }
    }
]
```

# Install SSM Agent on Amazon EKS worker nodes by using Kubernetes DaemonSet
<a name="install-ssm-agent-on-amazon-eks-worker-nodes-by-using-kubernetes-daemonset"></a>

*Mahendra Revanasiddappa, Amazon Web Services*

## Summary
<a name="install-ssm-agent-on-amazon-eks-worker-nodes-by-using-kubernetes-daemonset-summary"></a>

**Note, September 2021:** The latest Amazon EKS optimized AMIs install SSM Agent automatically. For more information, see the [release notes](https://github.com/awslabs/amazon-eks-ami/releases/tag/v20210621) for the June 2021 AMIs.

In Amazon Elastic Kubernetes Service (Amazon EKS), because of security guidelines, worker nodes don't have Secure Shell (SSH) key pairs attached to them. This pattern shows how you can use the Kubernetes DaemonSet resource type to install AWS Systems Manager Agent (SSM Agent) on all worker nodes, instead of installing it manually or replacing the Amazon Machine Image (AMI) for the nodes. DaemonSet uses a cron job on the worker node to schedule the installation of SSM Agent. You can also use this pattern to install other packages on worker nodes.

When you're troubleshooting issues in the cluster, installing SSM Agent on demand enables you to start a shell session with the worker node through Session Manager, to collect logs or examine the instance configuration, without SSH key pairs.

## Prerequisites and limitations
<a name="install-ssm-agent-on-amazon-eks-worker-nodes-by-using-kubernetes-daemonset-prereqs"></a>

**Prerequisites**
+ An existing Amazon EKS cluster with Amazon Elastic Compute Cloud (Amazon EC2) worker nodes.
+ Container instances should have the required permissions to communicate with the SSM service. The AWS Identity and Access Management (IAM) managed role **AmazonSSMManagedInstanceCore** provides the required permissions for SSM Agent to run on EC2 instances. For more information, see the [AWS Systems Manager documentation](https://docs.aws.amazon.com/systems-manager/latest/userguide/setup-instance-profile.html).

**Limitations**
+ This pattern isn't applicable to AWS Fargate, because DaemonSets aren't supported on the Fargate platform.
+ This pattern applies only to Linux-based worker nodes.
+ The DaemonSet pods run in privileged mode. If the Amazon EKS cluster has a webhook that blocks pods in privileged mode, the SSM Agent will not be installed.

## Architecture
<a name="install-ssm-agent-on-amazon-eks-worker-nodes-by-using-kubernetes-daemonset-architecture"></a>

The following diagram illustrates the architecture for this pattern.

![\[Using Kubernetes DaemonSet to install SSM Agent on Amazon EKS worker nodes.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/016d53f3-45c1-4913-b542-67124e1462b8/images/3a6dfd00-e54b-44d5-843a-4c26ce9826c9.png)


## Tools
<a name="install-ssm-agent-on-amazon-eks-worker-nodes-by-using-kubernetes-daemonset-tools"></a>

**Tools**
+ [kubectl](https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html) is a command-line utility that is used to interact with an Amazon EKS cluster. This pattern uses `kubectl` to deploy a DaemonSet on the Amazon EKS cluster, which will install SSM Agent on all worker nodes.
+ [Amazon EKS](https://docs.aws.amazon.com/eks/latest/userguide/what-is-eks.html) makes it easy for you to run Kubernetes on AWS without having to install, operate, and maintain your own Kubernetes control plane or nodes. Kubernetes is an open-source system for automating the deployment, scaling, and management of containerized applications.
+ [AWS Systems Manager Session Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager.html) lets you manage your EC2 instances, on-premises instances, and virtual machines (VMs) through an interactive, one-click, browser-based shell or through the AWS Command Line Interface (AWS CLI).

**Code**

Use the following code to create a DaemonSet configuration file that will install SSM Agent on the Amazon EKS cluster. Follow the instructions in the [Epics](#install-ssm-agent-on-amazon-eks-worker-nodes-by-using-kubernetes-daemonset-epics) section.

```
cat << EOF > ssm_daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    k8s-app: ssm-installer
  name: ssm-installer
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: ssm-installer
  template:
    metadata:
      labels:
        k8s-app: ssm-installer
    spec:
      containers:
      - name: sleeper
        image: busybox
        command: ['sh', '-c', 'echo I keep things running! && sleep 3600']
      initContainers:
      - image: amazonlinux
        imagePullPolicy: Always
        name: ssm
        command: ["/bin/bash"]
        args: ["-c","echo '* * * * * root yum install -y https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/linux_amd64/amazon-ssm-agent.rpm & rm -rf /etc/cron.d/ssmstart' > /etc/cron.d/ssmstart"]
        securityContext:
          allowPrivilegeEscalation: true
        volumeMounts:
        - mountPath: /etc/cron.d
          name: cronfile
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      volumes:
      - name: cronfile
        hostPath:
          path: /etc/cron.d
          type: Directory
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      terminationGracePeriodSeconds: 30
EOF
```
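
The key line in this manifest is the cron entry that the `init` container writes to the host. The following sketch reproduces it locally so you can see exactly what lands on the node; `/tmp/cron.d` stands in for the host's `/etc/cron.d` so the sketch can run anywhere:

```shell
# Reproduce the file that the init container writes to the host.
# The trailing `rm -rf /etc/cron.d/ssmstart` makes the cron job
# self-deleting, so the installation runs only once per pod cycle.
mkdir -p /tmp/cron.d
echo '* * * * * root yum install -y https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/linux_amd64/amazon-ssm-agent.rpm & rm -rf /etc/cron.d/ssmstart' > /tmp/cron.d/ssmstart
cat /tmp/cron.d/ssmstart
```

On a worker node, cron picks up the file within a minute, installs (or updates) SSM Agent, and then removes the schedule entry.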

## Epics
<a name="install-ssm-agent-on-amazon-eks-worker-nodes-by-using-kubernetes-daemonset-epics"></a>

### Set up kubectl
<a name="set-up-kubectl"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Install and configure kubectl to access the EKS cluster. | If `kubectl` isn't already installed and configured to access the Amazon EKS cluster, see [Installing kubectl](https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html) in the Amazon EKS documentation. | DevOps | 

### Deploy the DaemonSet
<a name="deploy-the-daemonset"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the DaemonSet configuration file. | Use the code in the [Code](#install-ssm-agent-on-amazon-eks-worker-nodes-by-using-kubernetes-daemonset-tools) section earlier in this pattern to create a DaemonSet configuration file called `ssm_daemonset.yaml`, which will be deployed to the Amazon EKS cluster. The pod launched by the DaemonSet has a main container and an `init` container. The main container runs a `sleep` command. The `init` container includes a `command` section that creates a cron job file at the path `/etc/cron.d/` to install SSM Agent. The cron job runs only once, and the file it creates is automatically deleted after the job is complete. When the `init` container has finished, the main container waits for 60 minutes before exiting. After 60 minutes, a new pod is launched, which installs SSM Agent if it's missing or updates SSM Agent to the latest version. If required, you can modify the `sleep` command to restart the pod once a day or to run more often. | DevOps | 
| Deploy the DaemonSet on the Amazon EKS cluster. | To deploy the DaemonSet configuration file you created in the previous step on the Amazon EKS cluster, use the following command:<pre>kubectl apply -f ssm_daemonset.yaml </pre>This command creates a DaemonSet to run the pods on worker nodes to install SSM Agent. | DevOps | 
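
As noted in the first task, you can change how often the installer pod recycles by adjusting the `sleep` value. For example, to restart the pod once a day instead of once an hour, change the sleeper container's command in `ssm_daemonset.yaml` to sleep for 86,400 seconds:

```
      containers:
      - name: sleeper
        image: busybox
        command: ['sh', '-c', 'echo I keep things running! && sleep 86400']
```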

## Related resources
<a name="install-ssm-agent-on-amazon-eks-worker-nodes-by-using-kubernetes-daemonset-resources"></a>
+ [Installing kubectl](https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html) (Amazon EKS documentation)
+ [Setting up Session Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-getting-started.html) (AWS Systems Manager documentation)

# Install the SSM Agent and CloudWatch agent on Amazon EKS worker nodes using preBootstrapCommands
<a name="install-the-ssm-agent-and-cloudwatch-agent-on-amazon-eks-worker-nodes-using-prebootstrapcommands"></a>

*Akkamahadevi Hiremath, Amazon Web Services*

## Summary
<a name="install-the-ssm-agent-and-cloudwatch-agent-on-amazon-eks-worker-nodes-using-prebootstrapcommands-summary"></a>

This pattern provides code samples and steps to install the AWS Systems Manager Agent (SSM Agent) and Amazon CloudWatch agent on Amazon Elastic Kubernetes Service (Amazon EKS) worker nodes in the Amazon Web Services (AWS) Cloud during Amazon EKS cluster creation. You can install the SSM Agent and CloudWatch agent by using the `preBootstrapCommands` property from the `eksctl` [config file schema](https://eksctl.io/usage/schema/) (Weaveworks documentation). Then, you can use the SSM Agent to connect to your worker nodes without using an Amazon Elastic Compute Cloud (Amazon EC2) key pair. Additionally, you can use the CloudWatch agent to monitor memory and disk utilization on your Amazon EKS worker nodes.

## Prerequisites and limitations
<a name="install-the-ssm-agent-and-cloudwatch-agent-on-amazon-eks-worker-nodes-using-prebootstrapcommands-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ The [eksctl command line utility](https://docs.aws.amazon.com/eks/latest/userguide/eksctl.html), installed and configured on macOS, Linux, or Windows
+ The [kubectl command line utility](https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html), installed and configured on macOS, Linux, or Windows

**Limitations**
+ We recommend that you avoid adding long-running scripts to the `preBootstrapCommands` property, because they delay the node from joining the Amazon EKS cluster during scaling activities. For lengthy setup steps, create a [custom Amazon Machine Image (AMI)](https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.customenv.html) instead.
+ This pattern applies to Amazon EC2 Linux instances only.

## Architecture
<a name="install-the-ssm-agent-and-cloudwatch-agent-on-amazon-eks-worker-nodes-using-prebootstrapcommands-architecture"></a>

**Technology stack**
+ Amazon CloudWatch
+ Amazon Elastic Kubernetes Service (Amazon EKS)
+ AWS Systems Manager Parameter Store

**Target architecture**

The following diagram shows an example of a user connecting to Amazon EKS worker nodes by using the SSM Agent, which was installed through the `preBootstrapCommands` property.

![\[User connecting to Amazon EKS worker nodes via Systems Manager, with SSM Agent and CloudWatch agent on each node.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/b37a3cdb-204f-4014-8317-3600a793dac7/images/9a5760af-23bb-4616-97b0-b401a9d080cf.png)


The diagram shows the following workflow:

1. The user creates an Amazon EKS cluster by using the `eksctl` configuration file with the `preBootstrapCommands` property, which installs the SSM Agent and CloudWatch agent.

1. Any new instances that join the cluster later because of scaling activities are created with the SSM Agent and CloudWatch agent preinstalled.

1. The user connects to the Amazon EC2 instances by using Session Manager (enabled by the SSM Agent) and then monitors memory and disk utilization by using the CloudWatch agent.

## Tools
<a name="install-the-ssm-agent-and-cloudwatch-agent-on-amazon-eks-worker-nodes-using-prebootstrapcommands-tools"></a>
+ [Amazon CloudWatch](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/WhatIsCloudWatch.html) helps you monitor the metrics of your AWS resources and the applications that you run on AWS in real time.
+ [Amazon Elastic Kubernetes Service (Amazon EKS)](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html) helps you run Kubernetes on AWS without needing to install or maintain your own Kubernetes control plane or nodes.
+ [AWS Systems Manager Parameter Store](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-parameter-store.html) provides secure, hierarchical storage for configuration data management and secrets management.
+ [AWS Systems Manager Session Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager.html) helps you manage your EC2 instances, on-premises instances, and virtual machines through an interactive, one-click, browser-based shell or through the AWS Command Line Interface (AWS CLI).
+ [eksctl](https://eksctl.io/usage/schema/) is a command-line utility for creating and managing Kubernetes clusters on Amazon EKS.
+ [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) is a command-line utility for communicating with the cluster API server.

## Epics
<a name="install-the-ssm-agent-and-cloudwatch-agent-on-amazon-eks-worker-nodes-using-prebootstrapcommands-epics"></a>

### Create an Amazon EKS cluster
<a name="create-an-amazon-eks-cluster"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Store the CloudWatch agent configuration file. | Store the CloudWatch agent configuration file in the [AWS Systems Manager Parameter Store](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-parameter-store.html) in the AWS Region where you want to create your Amazon EKS cluster. To do this, [create a parameter](https://docs.aws.amazon.com/systems-manager/latest/userguide/parameter-create-console.html) in Parameter Store and note its name (for example, `AmazonCloudwatch-linux`). For more information, see the *Example CloudWatch agent configuration file* code in the [Additional information](#install-the-ssm-agent-and-cloudwatch-agent-on-amazon-eks-worker-nodes-using-prebootstrapcommands-additional) section of this pattern. | DevOps engineer | 
| Create the eksctl configuration file and cluster.  | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/install-the-ssm-agent-and-cloudwatch-agent-on-amazon-eks-worker-nodes-using-prebootstrapcommands.html) | AWS DevOps | 

### Verify that the SSM Agent and CloudWatch agent work
<a name="verify-that-the-ssm-agent-and-cloudwatch-agent-work"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Test the SSM Agent. | Connect to your Amazon EKS cluster nodes by using any of the methods covered in [Start a session](https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-working-with-sessions-start.html) in the AWS Systems Manager documentation. | AWS DevOps | 
| Test the CloudWatch agent. | Use the CloudWatch console to validate the CloudWatch agent:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/install-the-ssm-agent-and-cloudwatch-agent-on-amazon-eks-worker-nodes-using-prebootstrapcommands.html) | AWS DevOps | 

## Related resources
<a name="install-the-ssm-agent-and-cloudwatch-agent-on-amazon-eks-worker-nodes-using-prebootstrapcommands-resources"></a>
+ [Installing and running the CloudWatch agent on your servers](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/install-CloudWatch-Agent-commandline-fleet.html) (Amazon CloudWatch documentation)
+ [Create a Systems Manager parameter (console)](https://docs.aws.amazon.com/systems-manager/latest/userguide/parameter-create-console.html) (AWS Systems Manager documentation)
+ [Create the CloudWatch agent configuration file](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/create-cloudwatch-agent-configuration-file.html) (Amazon CloudWatch documentation)
+ [Starting a session (AWS CLI)](https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-working-with-sessions-start.html#sessions-start-cli) (AWS Systems Manager documentation)
+ [Starting a session (Amazon EC2 console)](https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-working-with-sessions-start.html#start-ec2-console) (AWS Systems Manager documentation)

## Additional information
<a name="install-the-ssm-agent-and-cloudwatch-agent-on-amazon-eks-worker-nodes-using-prebootstrapcommands-additional"></a>

**Example CloudWatch agent configuration file**

In the following example, the CloudWatch agent is configured to monitor disk and memory utilization on Amazon Linux instances:

```
{
    "agent": {
        "metrics_collection_interval": 60,
        "run_as_user": "cwagent"
    },
    "metrics": {
        "append_dimensions": {
            "AutoScalingGroupName": "${aws:AutoScalingGroupName}",
            "ImageId": "${aws:ImageId}",
            "InstanceId": "${aws:InstanceId}",
            "InstanceType": "${aws:InstanceType}"
        },
        "metrics_collected": {
            "disk": {
                "measurement": [
                    "used_percent"
                ],
                "metrics_collection_interval": 60,
                "resources": [
                    "*"
                ]
            },
            "mem": {
                "measurement": [
                    "mem_used_percent"
                ],
                "metrics_collection_interval": 60
            }
        }
    }
}
```
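
Before creating the parameter, you can validate the configuration file locally and then publish it. In the following sketch, an abbreviated copy of the configuration stands in for the full file above, and the `put-parameter` call (which requires AWS credentials) is shown commented:

```shell
# Write an abbreviated stand-in for the agent configuration and check
# that it is well-formed JSON before publishing it.
cat > /tmp/cwagent-config.json << 'EOF'
{
    "agent": { "metrics_collection_interval": 60 },
    "metrics": {
        "metrics_collected": {
            "mem": { "measurement": ["mem_used_percent"] }
        }
    }
}
EOF
python3 -m json.tool /tmp/cwagent-config.json > /dev/null && echo "valid JSON"
# Publish under the name that the eksctl configuration file references:
# aws ssm put-parameter --name AmazonCloudwatch-linux --type String \
#     --value file:///tmp/cwagent-config.json
```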

**Example eksctl configuration file**

```
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: test
  region: us-east-2
  version: "1.24"
managedNodeGroups:
  - name: test
    minSize: 2
    maxSize: 4
    desiredCapacity: 2
    volumeSize: 20
    instanceType: t3.medium
    preBootstrapCommands:
    - sudo yum install amazon-ssm-agent -y
    - sudo systemctl enable amazon-ssm-agent
    - sudo systemctl start amazon-ssm-agent
    - sudo yum install amazon-cloudwatch-agent -y
    - sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m ec2 -s -c ssm:AmazonCloudwatch-linux
    iam:
      attachPolicyARNs:
        - arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
        - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
        - arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
        - arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy
        - arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore
```

**Additional code details**
+ In the last line of the `preBootstrapCommands` property, `AmazonCloudwatch-linux` is the name of the parameter created in AWS Systems Manager Parameter Store. The parameter must exist in Parameter Store in the same AWS Region where you create the Amazon EKS cluster. You can also specify a file path, but we recommend using Parameter Store for easier automation and reusability.
+ If you use `preBootstrapCommands` in the `eksctl` configuration file, you see two launch templates in the AWS Management Console. The first launch template includes the commands specified in `preBootstrapCommands`. The second template includes the commands specified in `preBootstrapCommands` and default Amazon EKS user data. This data is required to get the nodes to join the cluster. The node group’s Auto Scaling group uses this user data to spin up new instances.
+ If you use the `iam` attribute in the `eksctl` configuration file, you must list the default Amazon EKS policies along with any additional IAM policies that your nodes require. In the code snippet from the *Create the eksctl configuration file and cluster* step, `CloudWatchAgentServerPolicy` and `AmazonSSMManagedInstanceCore` are additional policies added to make sure that the CloudWatch agent and SSM Agent work as expected. The `AmazonEKSWorkerNodePolicy`, `AmazonEKS_CNI_Policy`, and `AmazonEC2ContainerRegistryReadOnly` policies are mandatory for the Amazon EKS cluster to function correctly.

# Migrate NGINX Ingress Controllers when enabling Amazon EKS Auto Mode
<a name="migrate-nginx-ingress-controller-eks-auto-mode"></a>

*Olawale Olaleye and Shamanth Devagari, Amazon Web Services*

## Summary
<a name="migrate-nginx-ingress-controller-eks-auto-mode-summary"></a>

[EKS Auto Mode](https://docs.aws.amazon.com/eks/latest/userguide/automode.html) for Amazon Elastic Kubernetes Service (Amazon EKS) can reduce the operational overhead of running your workloads on Kubernetes clusters. In this mode, AWS also sets up and manages the infrastructure on your behalf. When you enable EKS Auto Mode on an existing cluster, you must carefully plan the migration of [NGINX Ingress Controller](https://docs.nginx.com/nginx-ingress-controller/overview/about/) configurations, because Network Load Balancers can't be transferred directly.

You can use a blue/green deployment strategy to migrate an NGINX Ingress Controller instance when you enable EKS Auto Mode in an existing Amazon EKS cluster.

## Prerequisites and limitations
<a name="migrate-nginx-ingress-controller-eks-auto-mode-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ An [Amazon EKS cluster](https://docs.aws.amazon.com/eks/latest/userguide/create-cluster.html) running Kubernetes version 1.29 or later
+ Amazon EKS add-ons running [minimum versions](https://docs.aws.amazon.com/eks/latest/userguide/auto-enable-existing.html#auto-addons-required)
+ Latest version of [kubectl](https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html#kubectl-install-update)
+ An existing [NGINX Ingress Controller](https://kubernetes.github.io/ingress-nginx/deploy/#aws) instance
+ (Optional) A [hosted zone](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/hosted-zones-working-with.html) in Amazon Route 53 for DNS-based traffic shifting

## Architecture
<a name="migrate-nginx-ingress-controller-eks-auto-mode-architecture"></a>

A *blue/green deployment* is a deployment strategy in which you create two separate but identical environments. Blue/green deployments provide near-zero downtime release and rollback capabilities. The fundamental idea is to shift traffic between two identical environments that are running different versions of your application.

The following image shows the migration of Network Load Balancers from two different NGINX Ingress Controller instances when enabling EKS Auto Mode. You use a blue/green deployment to shift traffic between the two Network Load Balancers.

![\[Using a blue/green deployment strategy to migrate NGINX Ingress Controller instances.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/57e8c14f-cb50-4027-8ef6-ce8ea3f2db25/images/211a029a-90d8-4c92-8200-19e54062f936.png)


The original namespace is the *blue* namespace. This is where the original NGINX Ingress Controller service and instance run, before you enable EKS Auto Mode. The original service and instance connect to a Network Load Balancer that has a DNS name that is configured in Route 53. The [AWS Load Balancer Controller](https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.11/) deployed this Network Load Balancer in the target virtual private cloud (VPC).

The diagram shows the following workflow to set up an environment for a blue/green deployment:

1. Install and configure another NGINX Ingress Controller instance in a different namespace, a *green* namespace.

1. In Route 53, configure a DNS name for a new Network Load Balancer.
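
For the DNS-based traffic shifting, one common approach (a sketch, not part of this pattern's code) is a pair of weighted Route 53 records that split traffic between the blue and green Network Load Balancers. The record name, NLB DNS name, and hosted zone ID below are all placeholders:

```shell
# Weighted record for the green NLB; a matching "blue" record with a
# higher weight would point at the original NLB. Validate the change
# batch locally before submitting it.
cat > /tmp/green-weight.json << 'EOF'
{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "app.example.com",
      "Type": "CNAME",
      "SetIdentifier": "green",
      "Weight": 10,
      "TTL": 60,
      "ResourceRecords": [{ "Value": "green-nlb.elb.eu-west-1.amazonaws.com" }]
    }
  }]
}
EOF
python3 -m json.tool /tmp/green-weight.json > /dev/null && echo "valid JSON"
# aws route53 change-resource-record-sets \
#     --hosted-zone-id Z0000000000000 --change-batch file:///tmp/green-weight.json
```

Raising the green record's weight while lowering the blue record's weight shifts traffic gradually; setting the blue weight to 0 completes the cutover.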

## Tools
<a name="migrate-nginx-ingress-controller-eks-auto-mode-tools"></a>

**AWS services**
+ [Amazon Elastic Kubernetes Service (Amazon EKS)](https://docs.aws.amazon.com/eks/latest/userguide/what-is-eks.html) helps you run Kubernetes on AWS without needing to install or maintain your own Kubernetes control plane or nodes.
+ [Elastic Load Balancing](https://docs.aws.amazon.com/elasticloadbalancing/latest/userguide/what-is-load-balancing.html) distributes incoming application or network traffic across multiple targets. For example, you can distribute traffic across Amazon Elastic Compute Cloud (Amazon EC2) instances, containers, and IP addresses in one or more Availability Zones.
+ [Amazon Route 53](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/Welcome.html) is a highly available and scalable DNS web service.
+ [Amazon Virtual Private Cloud (Amazon VPC)](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html) helps you launch AWS resources into a virtual network that you’ve defined. This virtual network resembles a traditional network that you’d operate in your own data center, with the benefits of using the scalable infrastructure of AWS.

**Other tools**
+ [Helm](https://helm.sh/) is an open source package manager for Kubernetes that helps you install and manage applications on your Kubernetes cluster.
+ [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) is a command-line interface that helps you run commands against Kubernetes clusters.
+ [NGINX Ingress Controller](https://docs.nginx.com/nginx-ingress-controller/overview/about/) connects Kubernetes apps and services with request handling, auth, self-service custom resources, and debugging.

## Epics
<a name="migrate-nginx-ingress-controller-eks-auto-mode-epics"></a>

### Review the existing environment
<a name="review-the-existing-environment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Confirm that the original NGINX Ingress Controller instance is operational. | Enter the following command to verify that the resources in the `ingress-nginx` namespace are operational. If you have deployed NGINX Ingress Controller in another namespace, update the namespace name in this command.<pre>kubectl get all -n ingress-nginx</pre>In the output, confirm that NGINX Ingress Controller pods are in a running state. The following is an example output:<pre>NAME                                           READY   STATUS      RESTARTS      AGE<br />pod/ingress-nginx-admission-create-xqn9d       0/1     Completed   0             88m<br />pod/ingress-nginx-admission-patch-lhk4j        0/1     Completed   1             88m<br />pod/ingress-nginx-controller-68f68f859-xrz74   1/1     Running     2 (10m ago)   72m<br /><br />NAME                                         TYPE           CLUSTER-IP       EXTERNAL-IP                                                                     PORT(S)                      AGE<br />service/ingress-nginx-controller             LoadBalancer   10.100.67.255    k8s-ingressn-ingressn-abcdefg-12345.elb.eu-west-1.amazonaws.com   80:30330/TCP,443:31462/TCP   88m<br />service/ingress-nginx-controller-admission   ClusterIP      10.100.201.176   <none>                                                                          443/TCP                      88m<br /><br />NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE<br />deployment.apps/ingress-nginx-controller   1/1     1            1           88m<br /><br />NAME                                                 DESIRED   CURRENT   READY   AGE<br />replicaset.apps/ingress-nginx-controller-68f68f859   1         1         1       72m<br />replicaset.apps/ingress-nginx-controller-d8c96cf68   0         0         0       88m<br /><br />NAME                                       STATUS     COMPLETIONS   DURATION   AGE<br />job.batch/ingress-nginx-admission-create   Complete   1/1           4s         88m<br />job.batch/ingress-nginx-admission-patch    Complete   1/1           5s         88m</pre> | DevOps engineer | 

### Deploy a sample HTTPd workload to use the NGINX Ingress Controller
<a name="deploy-a-sample-httpd-workload-to-use-the-nginx-ingress-controller"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the Kubernetes resources. | Enter the following commands to create a sample Kubernetes deployment, service, and ingress:<pre>kubectl create deployment demo --image=httpd --port=80</pre><pre>kubectl expose deployment demo</pre><pre>kubectl create ingress demo --class=nginx \<br />  --rule nginxautomode.local.dev/=demo:80</pre> | DevOps engineer | 
| Review the deployed resources. | Enter the following command to view a list of the deployed resources:<pre>kubectl get all,ingress</pre>In the output, confirm that the sample HTTPd pod is in a running state. The following is an example output:<pre>NAME                        READY   STATUS    RESTARTS   AGE<br />pod/demo-7d94f8cb4f-q68wc   1/1     Running   0          59m<br /><br />NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE<br />service/demo         ClusterIP   10.100.78.155   <none>        80/TCP    59m<br />service/kubernetes   ClusterIP   10.100.0.1      <none>        443/TCP   117m<br /><br />NAME                   READY   UP-TO-DATE   AVAILABLE   AGE<br />deployment.apps/demo   1/1     1            1           59m<br /><br />NAME                              DESIRED   CURRENT   READY   AGE<br />replicaset.apps/demo-7d94f8cb4f   1         1         1       59m<br /><br />NAME                             CLASS   HOSTS                                  ADDRESS                                                                         PORTS   AGE<br />ingress.networking.k8s.io/demo   nginx   nginxautomode.local.dev                k8s-ingressn-ingressn-abcdefg-12345.elb.eu-west-1.amazonaws.com                 80      56m</pre> | DevOps engineer | 
| Confirm the service is reachable. | Enter the following command to confirm that the service is reachable through the DNS name of the Network Load Balancer:<pre>curl -H "Host: nginxautomode.local.dev" http://k8s-ingressn-ingressn-abcdefg-12345.elb.eu-west-1.amazonaws.com</pre>The following is the expected output:<pre><html><body><h1>It works!</h1></body></html></pre> | DevOps engineer | 
| (Optional) Create a DNS record. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-nginx-ingress-controller-eks-auto-mode.html) | DevOps engineer, AWS DevOps | 

### Enable EKS Auto Mode on the existing cluster
<a name="enable-eks-auto-mode-on-the-existing-cluster"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Enable EKS Auto Mode. | Follow the instructions in [Enable EKS Auto Mode on an existing cluster](https://docs.aws.amazon.com/eks/latest/userguide/auto-enable-existing.html) (Amazon EKS documentation). | AWS DevOps | 

### Install a new NGINX Ingress Controller
<a name="install-a-new-nginx-ingress-controller"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Configure a new NGINX Ingress Controller instance. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-nginx-ingress-controller-eks-auto-mode.html) | DevOps engineer | 
| Deploy the new NGINX Ingress Controller instance. | Enter the following command to apply the modified manifest file:<pre>kubectl apply -f deploy.yaml</pre> | DevOps engineer | 
| Confirm successful deployment. | Enter the following command to verify that the resources in the `ingress-nginx-v2` namespace are operational:<pre>kubectl get all -n ingress-nginx-v2</pre>In the output, confirm that NGINX Ingress Controller pods are in a running state. The following is an example output:<pre>NAME                                            READY   STATUS      RESTARTS   AGE<br />pod/ingress-nginx-admission-create-7shrj        0/1     Completed   0          24s<br />pod/ingress-nginx-admission-patch-vkxr5         0/1     Completed   1          24s<br />pod/ingress-nginx-controller-757bfcbc6d-4fw52   1/1     Running     0          24s<br /><br />NAME                                         TYPE           CLUSTER-IP       EXTERNAL-IP                                                                     PORT(S)                      AGE<br />service/ingress-nginx-controller             LoadBalancer   10.100.208.114   k8s-ingressn-ingressn-2e5e37fab6-848337cd9c9d520f.elb.eu-west-1.amazonaws.com   80:31469/TCP,443:30658/TCP   24s<br />service/ingress-nginx-controller-admission   ClusterIP      10.100.150.114   <none>                                                                          443/TCP                      24s<br /><br />NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE<br />deployment.apps/ingress-nginx-controller   1/1     1            1           24s<br /><br />NAME                                                  DESIRED   CURRENT   READY   AGE<br />replicaset.apps/ingress-nginx-controller-757bfcbc6d   1         1         1       24s<br /><br />NAME                                       STATUS     COMPLETIONS   DURATION   AGE<br />job.batch/ingress-nginx-admission-create   Complete   1/1           4s         24s<br />job.batch/ingress-nginx-admission-patch    Complete   1/1           5s         24s</pre> | DevOps engineer | 
| Create a new ingress for the sample HTTPd workload. | Enter the following command to create a new ingress for the existing sample HTTPd workload:<pre>kubectl create ingress demo-new --class=nginx-v2 \<br />  --rule nginxautomode.local.dev/=demo:80</pre> | DevOps engineer | 
| Confirm that the new ingress works. | Enter the following command to confirm that the new ingress works:<pre>curl -H "Host: nginxautomode.local.dev" k8s-ingressn-ingressn-2e5e37fab6-848337cd9c9d520f.elb.eu-west-1.amazonaws.com</pre>The following is the expected output:<pre><html><body><h1>It works!</h1></body></html></pre> | DevOps engineer | 

### Cut over
<a name="cut-over"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Cut over to the new namespace. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-nginx-ingress-controller-eks-auto-mode.html) | AWS DevOps, DevOps engineer | 
| Review the two ingresses. | Enter the following command to review the two ingresses that were created for the sample HTTPd workload:<pre>kubectl get ingress</pre>The following is an example output:<pre>NAME       CLASS      HOSTS                     ADDRESS                                                                         PORTS   AGE<br />demo       nginx      nginxautomode.local.dev   k8s-ingressn-ingressn-abcdefg-12345.elb.eu-west-1.amazonaws.com                 80      95m<br />demo-new   nginx-v2   nginxautomode.local.dev   k8s-ingressn-ingressn-2e5e37fab6-848337cd9c9d520f.elb.eu-west-1.amazonaws.com   80      33s</pre> | DevOps engineer | 
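After the cut-over has been verified, you can retire the old ingress and controller. The following is a minimal sketch, assuming the resource names used earlier in this pattern (`demo`, `demo-new`, and an old controller namespace named `ingress-nginx`); adapt it to your environment:

```shell
# Sketch only; resource and namespace names follow this pattern's examples.

# Confirm that the new ingress is serving traffic before deleting anything.
kubectl get ingress demo-new -o wide

# Remove the old ingress object.
kubectl delete ingress demo

# Remove the namespace that hosted the old controller deployment.
kubectl delete namespace ingress-nginx
```

Keep the old controller running until you have confirmed that all clients resolve to the new load balancer, because DNS caches can direct traffic to the old address for some time.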

## Related resources
<a name="migrate-nginx-ingress-controller-eks-auto-mode-resources"></a>
+ [Enable EKS Auto Mode on an existing cluster](https://docs.aws.amazon.com/eks/latest/userguide/auto-enable-existing.html) (Amazon EKS documentation)
+ [Troubleshoot load balancers created by the Kubernetes service controller in Amazon EKS](https://repost.aws/knowledge-center/eks-load-balancers-troubleshooting) (AWS re:Post Knowledge Center)
+ [NGINX Ingress Controller](https://docs.nginx.com/nginx-ingress-controller/) (NGINX documentation)

# Migrate your container workloads from Azure Red Hat OpenShift (ARO) to Red Hat OpenShift Service on AWS (ROSA)
<a name="migrate-container-workloads-from-aro-to-rosa"></a>

*Naveen Ramasamy, Srikanth Rangavajhala, and Gireesh Sreekantan, Amazon Web Services*

## Summary
<a name="migrate-container-workloads-from-aro-to-rosa-summary"></a>

This pattern provides step-by-step instructions for migrating container workloads from Azure Red Hat OpenShift (ARO) to [Red Hat OpenShift Service on AWS (ROSA)](https://aws.amazon.com/rosa/). ROSA is a managed Kubernetes service provided by Red Hat in collaboration with AWS. It helps you deploy, manage, and scale your containerized applications by using the Kubernetes platform, and benefits from both Red Hat's expertise in Kubernetes and the AWS Cloud infrastructure.

Migrating container workloads from ARO, from other clouds, or from on premises to ROSA involves transferring applications, configurations, and data from one platform to another. This pattern helps ensure a smooth transition while optimizing for AWS Cloud services, security, and cost efficiency. It covers two methods for migrating your workloads to ROSA clusters: CI/CD and Migration Toolkit for Containers (MTC).

The method you choose depends on the complexity and certainty of your migration process. If you have full control over your application's state and can guarantee a consistent setup through a pipeline, we recommend the CI/CD method. If your application's state involves uncertainties, unforeseen changes, or a complex ecosystem, we recommend MTC as a reliable and controlled path for migrating your application and its data to a new cluster. For a detailed comparison of the two methods, see the [Additional information](#migrate-container-workloads-from-aro-to-rosa-additional) section.

Benefits of migrating to ROSA:
+ ROSA seamlessly integrates with AWS as a native service. It is easily accessible through the AWS Management Console and billed through a single AWS account. It offers full compatibility with other AWS services and provides collaborative support from both AWS and Red Hat.
+ ROSA supports hybrid and multi-cloud deployments. It enables applications to run consistently across on-premises data centers and multiple cloud environments.
+ ROSA benefits from Red Hat's security focus, and provides features such as role-based access control (RBAC), image scanning, and vulnerability assessments to ensure a secure container environment.
+ ROSA is designed to scale applications easily and provides high availability options. It allows applications to grow as needed while maintaining reliability.
+ ROSA automates and simplifies the deployment of a Kubernetes cluster compared with manual setup and management methods. This accelerates the development and deployment process.
+ ROSA benefits from AWS Cloud services, and provides seamless integration with AWS offerings such as database services, storage solutions, and security services.

## Prerequisites and limitations
<a name="migrate-container-workloads-from-aro-to-rosa-prereqs"></a>

**Prerequisites**
+ An active AWS account.
+ Permissions configured for AWS services that ROSA relies on to deliver functionality. For more information, see [Prerequisites](https://docs.aws.amazon.com/rosa/latest/userguide/set-up.html) in the ROSA documentation.
+ ROSA enabled on the [ROSA console](https://console.aws.amazon.com/rosa). For instructions, see the [ROSA documentation](https://docs.aws.amazon.com/rosa/latest/userguide/set-up.html#enable-rosa).
+ The ROSA cluster installed and configured. For more information, see [Get started with ROSA](https://docs.aws.amazon.com/rosa/latest/userguide/getting-started.html) in the ROSA documentation. To understand the different methods for setting up a ROSA cluster, see the AWS Prescriptive Guidance guide [ROSA implementation strategies](https://docs.aws.amazon.com/prescriptive-guidance/latest/red-hat-openshift-on-aws-implementation/).
+ Network connectivity established from the on-premises network to AWS through [AWS Direct Connect](https://docs.aws.amazon.com/whitepapers/latest/aws-vpc-connectivity-options/aws-direct-connect.html) (preferred) or [AWS Virtual Private Network (Site-to-Site VPN)](https://docs.aws.amazon.com/vpc/latest/userguide/vpn-connections.html).
+ An Amazon Elastic Compute Cloud (Amazon EC2) instance or another virtual server to install tools such as the AWS CLI (`aws`), the OpenShift CLI (`oc`), the ROSA CLI (`rosa`), and Git.

Additional prerequisites for the CI/CD method:
+ Access to the on-premises Jenkins server with permissions to create a new pipeline, add stages, add OpenShift clusters, and perform builds.
+ Access to the Git repository where application source code is maintained, with permissions to create a new Git branch and perform commits to the new branch.

Additional prerequisites for the MTC method:
+ An Amazon Simple Storage Service (Amazon S3) bucket, which will be used as a replication repository.
+ Administrative access to the source ARO cluster. This is required to set up the MTC connection.

**Limitations**
+ Some AWS services aren’t available in all AWS Regions. For Region availability, see [AWS services by Region](https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/). For specific endpoints, see the [Service endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/aws-service-information.html) page, and choose the link for the service.

## Architecture
<a name="migrate-container-workloads-from-aro-to-rosa-architecture"></a>

ROSA provides three network deployment patterns: public, private, and [AWS PrivateLink](https://docs.aws.amazon.com/vpc/latest/privatelink/what-is-privatelink.html). PrivateLink enables Red Hat site reliability engineering (SRE) teams to manage the cluster by using a private subnet connected to the cluster’s PrivateLink endpoint in an existing VPC.

Choosing the PrivateLink option provides the most secure configuration. For that reason, we recommend it for sensitive workloads or strict compliance requirements. For information about the public and private network deployment options, see the [Red Hat OpenShift documentation](https://docs.openshift.com/rosa/architecture/rosa-architecture-models.html#rosa-hcp-architecture_rosa-architecture-models).

**Important**  
You can create a PrivateLink cluster only at installation time. You cannot change a cluster to use PrivateLink after installation.

The following diagram illustrates the PrivateLink architecture for a ROSA cluster that uses Direct Connect to connect to the on-premises and ARO environments.

![\[ROSA cluster that uses AWS Direct Connect and AWS PrivateLink.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/527cedfb-ec21-42be-bf21-d4e4e4f9db51/images/eff9b017-6fc7-4874-b610-849a42071ef4.png)


**AWS permissions to ROSA**

For AWS permissions to ROSA, we recommend that you use AWS Security Token Service (AWS STS) with short-lived, dynamic tokens. This method uses least-privilege predefined roles and policies to grant ROSA minimal permissions to operate in the AWS account, and supports ROSA installation, control plane, and compute functionality.
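As a concrete illustration of the STS-based setup, the following is a hedged sketch using the ROSA CLI; the cluster name is a placeholder, and the PrivateLink-related flags are shown only to indicate where that deployment choice is made:

```shell
# Sketch only; the cluster name is a placeholder.

# Create the account-wide STS roles and policies that ROSA requires.
rosa create account-roles --mode auto

# Create a cluster that uses AWS STS short-lived credentials.
# For the PrivateLink deployment pattern described earlier, you would also
# pass --private-link and --subnet-ids for your existing VPC subnets.
rosa create cluster --cluster-name my-rosa-cluster --sts --mode auto
```

Remember that PrivateLink can be chosen only at cluster creation time, so decide on the network deployment pattern before running `rosa create cluster`.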

**CI/CD pipeline redeployment**

CI/CD pipeline redeployment is the recommended method for users who have a mature CI/CD pipeline. When you choose this option, you can use any [DevOps deployment strategy](https://docs.aws.amazon.com/whitepapers/latest/introduction-devops-aws/deployment-strategies.html) to gradually shift your application load to deployments on ROSA.

**Note**  
This pattern assumes a common use case where you have an on-premises Git, JFrog Artifactory, and Jenkins pipeline. This approach requires that you establish network connectivity from your on-premises network to AWS through Direct Connect, and that you set up the ROSA cluster before you follow the instructions in the [Epics](#migrate-container-workloads-from-aro-to-rosa-epics) section. See the [Prerequisites](#migrate-container-workloads-from-aro-to-rosa-prereqs) section for details.

The following diagram shows the workflow for this method.

![\[Migrating containers from ARO to ROSA by using the CI/CD method.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/527cedfb-ec21-42be-bf21-d4e4e4f9db51/images/f658590e-fbd9-4297-a02c-0b516694d436.png)


**MTC method**

You can use the [Migration Toolkit for Containers (MTC)](https://docs.openshift.com/container-platform/4.13/migration_toolkit_for_containers/about-mtc.html) to migrate your containerized workloads between different Kubernetes environments, such as from ARO to ROSA. MTC simplifies the migration process by automating several key tasks and providing a comprehensive framework for managing the migration lifecycle.

The following diagram shows the workflow for this method.

![\[Migrating containers from ARO to ROSA by using the MTC method.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/527cedfb-ec21-42be-bf21-d4e4e4f9db51/images/979bbc7b-2e39-4dd1-b4f0-ea1032880a38.png)


## Tools
<a name="migrate-container-workloads-from-aro-to-rosa-tools"></a>

**AWS services**
+ [AWS DataSync](https://docs.aws.amazon.com/datasync/latest/userguide/what-is-datasync.html) is an online data transfer and discovery service that helps you move files or object data to, from, and between AWS storage services.
+ [AWS Direct Connect](https://docs.aws.amazon.com/directconnect/latest/UserGuide/Welcome.html) links your internal network to a Direct Connect location over a standard Ethernet fiber-optic cable. With this connection, you can create virtual interfaces directly to public AWS services while bypassing internet service providers in your network path.
+ [AWS PrivateLink](https://docs.aws.amazon.com/vpc/latest/privatelink/what-is-privatelink.html) helps you create unidirectional, private connections from your virtual private clouds (VPCs) to services outside of the VPC.
+ [Red Hat OpenShift Service on AWS (ROSA)](https://docs.aws.amazon.com/rosa/latest/userguide/what-is-rosa.html) is a managed service that helps Red Hat OpenShift users to build, scale, and manage containerized applications on AWS.
+ [AWS Security Token Service (AWS STS)](https://docs.aws.amazon.com/STS/latest/APIReference/welcome.html) helps you request temporary, limited-privilege credentials for users.

**Other tools**
+ [Migration Toolkit for Containers (MTC)](https://docs.openshift.com/container-platform/4.13/migration_toolkit_for_containers/about-mtc.html) provides a console and API for migrating containerized applications from ARO to ROSA.

## Best practices
<a name="migrate-container-workloads-from-aro-to-rosa-best-practices"></a>
+ For [resilience](https://docs.aws.amazon.com/ROSA/latest/userguide/disaster-recovery-resiliency.html) and if you have security compliance workloads, set up a Multi-AZ ROSA cluster that uses PrivateLink. For more information, see the [ROSA documentation](https://docs.aws.amazon.com/rosa/latest/userguide/getting-started-classic-private-link.html).
**Note**  
PrivateLink cannot be configured after installation.
+ The S3 bucket that you use as the replication repository should not be public. Use appropriate S3 bucket policies to restrict access.
+ If you choose the MTC method, use the **Stage** migration option to reduce the downtime window during cutover.
+ Review your service quotas before and after you provision the ROSA cluster. If necessary, request a quota increase according to your requirements. For more information, see the [Service Quotas documentation](https://docs.aws.amazon.com/servicequotas/latest/userguide/request-quota-increase.html).
+ Review the [ROSA security guidelines](https://docs.aws.amazon.com/ROSA/latest/userguide/security.html) and implement security best practices.
+ We recommend that you remove the default cluster administrator after installation. For more information, see the [Red Hat OpenShift documentation](https://docs.openshift.com/container-platform/4.13/post_installation_configuration/cluster-tasks.html).
+ Use machine pool automatic scaling to scale down unused worker nodes in the ROSA cluster to optimize costs. For more information, see the [ROSA Workshop](https://catalog.workshops.aws/aws-openshift-workshop/en-US/5-nodes-storage/3-autoscale-machine-pool).
+ Use the Red Hat Cost Management service for OpenShift Container Platform to better understand and track costs for clouds and containers. For more information, see the [ROSA Workshop](https://catalog.workshops.aws/aws-openshift-workshop/en-US/10-cost-management).
+ Monitor and audit ROSA cluster infrastructure services and applications by using AWS services. For more information, see the [ROSA Workshop](https://catalog.workshops.aws/aws-openshift-workshop/en-US/8-observability).
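The best practice of keeping the replication repository bucket private can be enforced with S3 Block Public Access. The following is a minimal sketch; the bucket name is a placeholder:

```shell
# Sketch only; replace my-mtc-replication-bucket with your bucket name.
# Block all forms of public access on the replication repository bucket.
aws s3api put-public-access-block \
  --bucket my-mtc-replication-bucket \
  --public-access-block-configuration \
    BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true
```

In addition to blocking public access, scope the bucket policy so that only the MTC service accounts on the source and target clusters can read and write objects.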

## Epics
<a name="migrate-container-workloads-from-aro-to-rosa-epics"></a>

### Option 1: Use a CI/CD pipeline
<a name="option-1-use-a-ci-cd-pipeline"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Add the new ROSA cluster to Jenkins. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-container-workloads-from-aro-to-rosa.html) | AWS administrator, AWS systems administrator, AWS DevOps | 
| Add the `oc` client to your Jenkins nodes. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-container-workloads-from-aro-to-rosa.html) | AWS administrator, AWS systems administrator, AWS DevOps | 
| Create a new Git branch. | Create a new branch in your Git repository for `rosa-dev`. This branch separates the code or configuration parameter changes for ROSA from your existing pipeline. | AWS DevOps | 
| Tag images for ROSA. | In your build stage, use a different tag to identify the images that are built from the ROSA pipeline. | AWS administrator, AWS systems administrator, AWS DevOps | 
| Create a pipeline. | Create a new Jenkins pipeline that's similar to your existing pipeline. For this pipeline, use the `rosa-dev` Git branch that you created earlier, and make sure to include the Git checkout, code scan, and build stages that are identical to your existing pipeline. | AWS administrator, AWS systems administrator, AWS DevOps | 
| Add a ROSA deployment stage. | In the new pipeline, add a stage to deploy to the ROSA cluster and reference the ROSA cluster that you added to the Jenkins global configuration. | AWS administrator, AWS DevOps, AWS systems administrator | 
| Start a new build. | In Jenkins, select your pipeline and choose **Build now**, or start a new build by committing a change to the `rosa-dev` branch in Git. | AWS administrator, AWS DevOps, AWS systems administrator | 
| Verify the deployment. | Use the **oc** command or the [ROSA console](https://console.aws.amazon.com/rosa) to verify that the application has been deployed on your target ROSA cluster. | AWS administrator, AWS DevOps, AWS systems administrator | 
| Copy data to the target cluster. | For stateful workloads, copy the data from the source cluster to the target cluster by using AWS DataSync or open source utilities such as **rsync**, or you can use the MTC method. For more information, see the [AWS DataSync documentation](https://docs.aws.amazon.com/datasync/latest/userguide/what-is-datasync.html). | AWS administrator, AWS DevOps, AWS systems administrator | 
| Test your application. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-container-workloads-from-aro-to-rosa.html) | AWS administrator, AWS DevOps, AWS systems administrator | 
| Cut over. | If your testing is successful, use the appropriate Amazon Route 53 policy to move the traffic from the ARO-hosted application to the ROSA-hosted application. When you complete this step, your application's workload will fully transition to the ROSA cluster. | AWS administrator, AWS systems administrator | 
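The ROSA deployment stage added in the pipeline typically wraps OpenShift CLI commands. The following is a hedged sketch of what that stage's shell step might run; the API URL, token variable, application name, and manifest directory (`k8s/`) are hypothetical placeholders:

```shell
# Sketch of the commands a Jenkins deployment stage might run.
# ROSA_API_URL, ROSA_TOKEN, my-app, and k8s/ are placeholders.

# Authenticate against the target ROSA cluster.
oc login "$ROSA_API_URL" --token "$ROSA_TOKEN"

# Switch to the application's project (namespace).
oc project my-app

# Apply the manifests built from the rosa-dev branch.
oc apply -f k8s/

# Block until the rollout completes so the pipeline can fail fast.
oc rollout status deployment/my-app --timeout=300s
```

In practice, the cluster credentials would come from the Jenkins global configuration or a credentials store rather than plain environment variables.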

### Option 2: Use MTC
<a name="option-2-use-mtc"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Install the MTC operator. | Install the MTC operator on both ARO and ROSA clusters:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-container-workloads-from-aro-to-rosa.html) | AWS administrator, AWS DevOps, AWS systems administrator | 
| Configure network traffic to the replication repository. | If you're using a proxy server, configure it to allow network traffic between the replication repository and the clusters. The replication repository is an intermediate storage object that MTC uses to migrate data. The source and target clusters must have network access to the replication repository during migration. | AWS administrator, AWS DevOps, AWS systems administrator | 
| Add the source cluster to MTC. | On the MTC web console, add the ARO source cluster. | AWS administrator, AWS DevOps, AWS systems administrator | 
| Add Amazon S3 as your replication repository. | On the MTC web console, add the Amazon S3 bucket (see [Prerequisites](#migrate-container-workloads-from-aro-to-rosa-prereqs)) as the replication repository. | AWS administrator, AWS DevOps, AWS systems administrator | 
| Create a migration plan. | On the MTC web console, create a migration plan and specify the data transfer type as **Copy**. This will instruct MTC to copy the data from the source (ARO) cluster to the S3 bucket, and from the bucket to the target (ROSA) cluster. | AWS administrator, AWS DevOps, AWS systems administrator | 
| Run the migration plan. | Run the migration plan by using the **Stage** or **Cutover** option:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-container-workloads-from-aro-to-rosa.html) | AWS administrator, AWS DevOps, AWS systems administrator | 
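Although the epic above uses the MTC web console, a migration plan can also be expressed as a custom resource. The following is a sketch of a `MigPlan`, assuming hypothetical cluster and repository object names (`aro-source-cluster`, `s3-replication-repo`, `my-app-namespace`); adjust them to match the objects you created:

```shell
# Sketch of an MTC MigPlan custom resource; all names are placeholders.
cat <<'EOF' | oc apply -f -
apiVersion: migration.openshift.io/v1alpha1
kind: MigPlan
metadata:
  name: aro-to-rosa-plan
  namespace: openshift-migration
spec:
  srcMigClusterRef:          # the ARO source cluster registered in MTC
    name: aro-source-cluster
    namespace: openshift-migration
  destMigClusterRef:         # the ROSA cluster where MTC is running
    name: host
    namespace: openshift-migration
  migStorageRef:             # the S3 replication repository
    name: s3-replication-repo
    namespace: openshift-migration
  namespaces:                # application namespaces to migrate
    - my-app-namespace
EOF
```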

## Troubleshooting
<a name="migrate-container-workloads-from-aro-to-rosa-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| Connectivity issues | When you migrate your container workloads from ARO to ROSA, you might encounter connectivity issues that must be resolved for a successful migration. Addressing the connectivity issues listed in this table requires careful planning, coordination with your network and security teams, and thorough testing. A gradual migration strategy that verifies connectivity at each step helps minimize disruptions and ensures a smooth transition from ARO to ROSA. | 
| Network configuration differences | ARO and ROSA might have variations in their network configurations, such as virtual network (VNet) settings, subnets, and network policies. For proper communication between services, make sure that the network settings align between the two platforms. | 
| Security group and firewall rules | ROSA and ARO might have different default security group and firewall settings. Make sure to adjust and update these rules to permit necessary traffic to maintain connectivity among containers and services during migration.  | 
| IP address and DNS changes | When you migrate workloads, IP addresses and DNS names might change. Reconfigure applications that rely on static IPs or specific DNS names.  | 
| External service access | If your application depends on external services such as databases or APIs, you might have to update their connection settings to make sure they can communicate with the new services from ROSA. | 
| Azure Private Link configuration | If you use Azure Private Link or private endpoint services in ARO, you will need to set up the equivalent functionality in ROSA to ensure private connectivity between resources. | 
| Site-to-Site VPN or Direct Connect setup  | If there are existing Site-to-Site VPN or Direct Connect connections between your on-premises network and ARO, you will need to establish similar connections with ROSA for uninterrupted communication with your local resources. | 
| Ingress and load balancer settings | Configurations for ingress controllers and load balancers might differ between ARO and ROSA. Reconfigure these settings to maintain external access to your services. | 
| Certificate and TLS handling | If your applications use SSL certificates or TLS, make sure that the certificates are valid and configured correctly in ROSA. | 
| Container registry access | If your containers are hosted in an external container registry, set up the proper authentication and access permissions for ROSA. | 
| Monitoring and logging | Update monitoring and logging configurations to reflect the new infrastructure on ROSA so you can continue to monitor the health and performance of your containers effectively. | 

## Related resources
<a name="migrate-container-workloads-from-aro-to-rosa-resources"></a>

**AWS references**
+ [What is Red Hat OpenShift Service on AWS?](https://docs.aws.amazon.com/ROSA/latest/userguide/what-is-rosa.html) (ROSA documentation)
+ [Get started with ROSA](https://docs.aws.amazon.com/ROSA/latest/userguide/getting-started.html) (ROSA documentation)
+ [Red Hat OpenShift Service on AWS implementation strategies](https://docs.aws.amazon.com/prescriptive-guidance/latest/red-hat-openshift-on-aws-implementation/) (AWS Prescriptive Guidance)
+ [Red Hat OpenShift Service on AWS Now GA](https://aws.amazon.com/blogs/aws/red-hat-openshift-service-on-aws-now-generally-availably/) (AWS blog post)
+ [ROSA Workshop](https://catalog.workshops.aws/aws-openshift-workshop/en-US/0-introduction)
+ [ROSA FAQ](https://aws.amazon.com/rosa/faqs/)
+ [ROSA Workshop FAQ](https://www.rosaworkshop.io/rosa/14-faq/)
+ [ROSA pricing](https://aws.amazon.com/rosa/pricing/)

**Red Hat OpenShift documentation**
+ [Installing a cluster quickly on AWS](https://docs.openshift.com/container-platform/4.13/installing/installing_aws/installing-aws-default.html)
+ [Installing a cluster on AWS in a restricted network](https://docs.openshift.com/container-platform/4.13/installing/installing_aws/installing-restricted-networks-aws-installer-provisioned.html)
+ [Installing a cluster on AWS into an existing VPC](https://docs.openshift.com/container-platform/4.13/installing/installing_aws/installing-aws-vpc.html)
+ [Installing a cluster on user-provisioned infrastructure in AWS by using CloudFormation templates](https://docs.openshift.com/container-platform/4.13/installing/installing_aws/installing-aws-user-infra.html)
+ [Installing a cluster on AWS in a restricted network with user-provisioned infrastructure](https://docs.openshift.com/container-platform/4.13/installing/installing_aws/installing-restricted-networks-aws.html)
+ [Installing a cluster on AWS with customizations](https://docs.openshift.com/container-platform/4.13/installing/installing_aws/installing-aws-customizations.html)
+ [Getting started with the OpenShift CLI](https://docs.openshift.com/container-platform/4.13/cli_reference/openshift_cli/getting-started-cli.html)

## Additional information
<a name="migrate-container-workloads-from-aro-to-rosa-additional"></a>

**Choosing between MTC and CI/CD pipeline redeployment options**

Migrating applications from one OpenShift cluster to another requires careful consideration. Ideally, you would want a smooth transition by using a CI/CD pipeline to redeploy the application and handle the migration of persistent volume data. However, in practice, a running application on a cluster is susceptible to unforeseen changes over time. These changes can cause the application to gradually deviate from its original deployment state. MTC offers a solution for scenarios where the exact contents of a namespace are uncertain and a seamless migration of all application components to a new cluster is important.

Making the right choice requires evaluating your specific scenario and weighing the benefits of each approach. By doing so, you can ensure a successful and seamless migration that aligns with your needs and priorities. Here are additional guidelines for choosing between the two options.

**CI/CD pipeline redeployment**

The CI/CD pipeline method is the recommended approach if your application can be confidently redeployed by using a pipeline. This ensures that the migration is controlled, predictable, and aligned with your existing deployment practices. When you choose this method, you can use [blue/green deployment](https://docs.aws.amazon.com/whitepapers/latest/overview-deployment-options/bluegreen-deployments.html) or canary deployment strategies to gradually shift the load to deployments on ROSA. For this scenario, this pattern assumes that Jenkins is orchestrating application deployments from the on-premises environment.

Advantages:
+ You do not need administrative access to the source ARO cluster, and you do not have to deploy any operators on the source or destination cluster.
+ This approach helps you switch traffic gradually by using a DevOps strategy.

Disadvantages:
+ It requires more effort to test the functionality of your application.
+ If your application contains persistent data, it requires additional steps to copy the data by using AWS DataSync or other tools.
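For the persistent-data step mentioned above, one lightweight option is `oc rsync`, which copies a directory between a pod and the local file system. The following sketch assumes hypothetical kubeconfig contexts (`aro`, `rosa`), pod names, and paths; for large volumes, AWS DataSync is usually the better fit:

```shell
# Sketch only; context names, pod names, and paths are placeholders.

# Copy data out of the source (ARO) pod to a local staging directory.
oc --context aro rsync my-app-pod-abc123:/data ./staging/

# Copy the staged data into the target (ROSA) pod.
oc --context rosa rsync ./staging/data my-app-pod-def456:/
```

Quiesce the application, or at least pause writes, before the final copy so that the source and target data do not diverge during cutover.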

**MTC migration**

In the real world, running applications can undergo unanticipated changes that cause them to drift away from the initial deployment. Choose the MTC option when you're unsure about the current state of your application on the source cluster. For example, if your application ecosystem spans various components, configurations, and data storage volumes, we recommend that you choose MTC to ensure a complete migration that covers the application and its entire environment.

Advantages:
+ MTC provides complete backup and restore of the workload.
+ It will copy the persistent data from source to target while migrating the workload.
+ It does not require access to the source code repository.

Disadvantages:
+ You need administrative privileges to install the MTC operator on the source and destination clusters.
+ The DevOps team requires training to use the MTC tool and perform migrations. 

# Run Amazon ECS tasks on Amazon WorkSpaces with Amazon ECS Anywhere
<a name="run-amazon-ecs-tasks-on-amazon-workspaces-with-amazon-ecs-anywhere"></a>

*Akash Kumar, Amazon Web Services*

## Summary
<a name="run-amazon-ecs-tasks-on-amazon-workspaces-with-amazon-ecs-anywhere-summary"></a>

Amazon Elastic Container Service (Amazon ECS) Anywhere supports the deployment of Amazon ECS tasks in any environment, including Amazon Web Services (AWS) managed infrastructure and customer managed infrastructure, while you use a fully managed control plane that runs in the cloud and is always up to date.

Enterprises often use Amazon WorkSpaces for developing container-based applications. Previously, this required Amazon Elastic Compute Cloud (Amazon EC2) or AWS Fargate with an Amazon ECS cluster to test and run ECS tasks. With Amazon ECS Anywhere, you can add Amazon WorkSpaces as external instances to an ECS cluster and run your tasks there. This reduces your development time, because you can test your container with an ECS cluster locally on Amazon WorkSpaces, and it saves the cost of using EC2 or Fargate instances to test your container applications.

This pattern shows how to deploy ECS tasks on Amazon WorkSpaces with Amazon ECS Anywhere. It sets up the ECS cluster, uses AWS Directory Service Simple AD to launch the WorkSpaces, and then runs an example ECS task that launches NGINX in the WorkSpaces.

## Prerequisites and limitations
<a name="run-amazon-ecs-tasks-on-amazon-workspaces-with-amazon-ecs-anywhere-prereqs"></a>
+ An active AWS account
+ AWS Command Line Interface (AWS CLI)
+ AWS credentials [configured on your machine](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html)

## Architecture
<a name="run-amazon-ecs-tasks-on-amazon-workspaces-with-amazon-ecs-anywhere-architecture"></a>

**Target technology stack**
+ A virtual private cloud (VPC)
+ An Amazon ECS cluster
+ Amazon WorkSpaces
+ AWS Directory Service with Simple AD

**Target architecture**

![\[ECS Anywhere sets up ECS cluster and uses Simple AD to launch WorkSpaces.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/da8b2249-3423-485c-9fef-6f902025e969/images/fd354d14-f29b-4b9e-8f1a-c3cb7ed4d6bf.png)



The architecture includes the following services and resources:
+ An ECS cluster with public and private subnets in a custom VPC
+ Simple AD in the VPC to provide user access to Amazon WorkSpaces
+ Amazon WorkSpaces provisioned in the VPC using Simple AD
+ AWS Systems Manager activated for adding Amazon WorkSpaces as managed instances
+ Amazon WorkSpaces added to Systems Manager and the ECS cluster by using the Amazon ECS agent and AWS Systems Manager Agent (SSM Agent)
+ An example ECS task to run in the WorkSpaces in the ECS cluster
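The registration flow that the architecture describes can be sketched with the ECS Anywhere install script. The cluster name, IAM role name, and Region below are placeholders; the activation ID and code come from the `create-activation` output:

```shell
# Sketch only; cluster name, role name, and Region are placeholders.

# Create the ECS cluster that the WorkSpace will join.
aws ecs create-cluster --cluster-name workspaces-cluster

# Create an SSM activation for the external instance (the WorkSpace).
# The named IAM role must trust SSM and allow ECS Anywhere registration.
aws ssm create-activation --iam-role ecsAnywhereRole

# On the WorkSpace, download and run the ECS Anywhere install script,
# using the activation ID and code returned by the previous command.
curl --proto "https" -o "/tmp/ecs-anywhere-install.sh" \
  "https://amazon-ecs-agent.s3.amazonaws.com/ecs-anywhere-install-latest.sh"
sudo bash /tmp/ecs-anywhere-install.sh \
  --region eu-west-1 --cluster workspaces-cluster \
  --activation-id <activation-id> --activation-code <activation-code>
```

After the script finishes, the WorkSpace appears as an external (`EXTERNAL` launch type) container instance in the cluster, and tasks can be placed on it.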

## Tools
<a name="run-amazon-ecs-tasks-on-amazon-workspaces-with-amazon-ecs-anywhere-tools"></a>
+ [AWS Directory Service Simple Active Directory (Simple AD)](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/directory_simple_ad.html) is a standalone managed directory powered by a Samba 4 Active Directory Compatible Server. Simple AD provides a subset of the features offered by AWS Managed Microsoft AD, including the ability to manage users and to securely connect to Amazon EC2 instances.
+ [Amazon Elastic Container Service (Amazon ECS)](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html) is a fast and scalable container management service that helps you run, stop, and manage containers on a cluster.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.
+ [AWS Systems Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/what-is-systems-manager.html) helps you manage your applications and infrastructure running in the AWS Cloud. It simplifies application and resource management, shortens the time to detect and resolve operational problems, and helps you manage your AWS resources securely at scale.
+ [Amazon WorkSpaces](https://docs.aws.amazon.com/workspaces/latest/adminguide/amazon-workspaces.html) helps you provision virtual, cloud-based Microsoft Windows or Amazon Linux desktops for your users, known as *WorkSpaces*. WorkSpaces eliminates the need to procure and deploy hardware or install complex software.

## Epics
<a name="run-amazon-ecs-tasks-on-amazon-workspaces-with-amazon-ecs-anywhere-epics"></a>

### Set up the ECS cluster
<a name="set-up-the-ecs-cluster"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create and configure the ECS cluster. | To create the ECS cluster, follow the instructions in the [AWS documentation](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create_cluster.html), including the following steps:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/run-amazon-ecs-tasks-on-amazon-workspaces-with-amazon-ecs-anywhere.html) | Cloud architect | 
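
If you prefer the AWS CLI to the console steps in the linked documentation, the cluster itself can be created with a single command. This is a sketch, assuming `$CLUSTER_NAME` is exported as shown in the Systems Manager epic later in this pattern:

```shell
# Create the ECS cluster. With ECS Anywhere, instances register later
# through the EXTERNAL launch type, so no capacity is needed at creation time.
aws ecs create-cluster --cluster-name "$CLUSTER_NAME"
```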

### Launch Amazon WorkSpaces
<a name="launch-amazon-workspaces"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Set up Simple AD and launch Amazon WorkSpaces. | To provision a Simple AD directory for your newly created VPC and launch Amazon WorkSpaces, follow the instructions in the [AWS documentation](https://docs.aws.amazon.com/workspaces/latest/adminguide/launch-workspace-simple-ad.html). | Cloud architect | 

### Set up AWS Systems Manager for a hybrid environment
<a name="set-up-aws-systems-manager-for-a-hybrid-environment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Download the attached scripts. | On your local machine, download the `ssm-trust-policy.json` and `ssm-activation.json` files that are in the *Attachments* section. | Cloud architect | 
| Add the IAM role. | Add environment variables based on your business requirements.<pre>export AWS_DEFAULT_REGION=${AWS_REGION_ID}<br />export ROLE_NAME=${ECS_TASK_ROLE}<br />export CLUSTER_NAME=${ECS_CLUSTER_NAME}<br />export SERVICE_NAME=${ECS_CLUSTER_SERVICE_NAME}</pre>Run the following command.<pre>aws iam create-role --role-name $ROLE_NAME --assume-role-policy-document file://ssm-trust-policy.json</pre> | Cloud architect | 
| Add the AmazonSSMManagedInstanceCore policy to the IAM role. | Run the following command.<pre>aws iam attach-role-policy --role-name $ROLE_NAME --policy-arn arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore</pre> | Cloud architect | 
| Add the AmazonEC2ContainerServiceforEC2Role policy to the IAM role. | Run the following command.<pre>aws iam attach-role-policy --role-name $ROLE_NAME --policy-arn arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceforEC2Role</pre> | Cloud architect | 
| Verify the IAM role. | To verify the IAM role, run the following command.<pre>aws iam list-attached-role-policies --role-name $ROLE_NAME</pre> | Cloud architect | 
| Activate Systems Manager. | Run the following command.<pre>aws ssm create-activation --iam-role $ROLE_NAME | tee ssm-activation.json</pre> | Cloud architect | 
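
The `create-activation` response captured in `ssm-activation.json` contains the `ActivationId` and `ActivationCode` values that the ECS Anywhere install script expects later as `$ACTIVATION_ID` and `$ACTIVATION_CODE`. A minimal sketch of extracting them with `sed` (the file contents and values below are illustrative, not real credentials):

```shell
# Illustrative contents of ssm-activation.json, as written by
# `aws ssm create-activation ... | tee ssm-activation.json`.
cat > ssm-activation.json <<'EOF'
{
    "ActivationId": "5743558d-563b-4b07-a770-EXAMPLE11111",
    "ActivationCode": "dBmJmZx9EXAMPLEcode"
}
EOF

# Extract the values the ECS Anywhere install script expects.
ACTIVATION_ID=$(sed -n 's/.*"ActivationId": *"\([^"]*\)".*/\1/p' ssm-activation.json)
ACTIVATION_CODE=$(sed -n 's/.*"ActivationCode": *"\([^"]*\)".*/\1/p' ssm-activation.json)
export ACTIVATION_ID ACTIVATION_CODE
echo "$ACTIVATION_ID"
```

If `jq` is installed, `jq -r '.ActivationId' ssm-activation.json` achieves the same result.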

### Add WorkSpaces to the ECS cluster
<a name="add-workspaces-to-the-ecs-cluster"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Connect to your WorkSpaces. | To connect to and set up your WorkSpaces, follow the instructions in the [AWS documentation](https://docs.aws.amazon.com/workspaces/latest/userguide/workspaces-user-getting-started.html). | App developer | 
| Download the ecs-anywhere install script. | At the command prompt, run the following command.<pre>curl -o "ecs-anywhere-install.sh" "https://amazon-ecs-agent-packages-preview.s3.us-east-1.amazonaws.com/ecs-anywhere-install.sh" && sudo chmod +x ecs-anywhere-install.sh</pre> | App developer | 
| Check the integrity of the shell script. | (Optional) Run the following command.<pre>curl -o "ecs-anywhere-install.sh.sha256" "https://amazon-ecs-agent-packages-preview.s3.us-east-1.amazonaws.com/ecs-anywhere-install.sh.sha256" && sha256sum -c ecs-anywhere-install.sh.sha256</pre> | App developer | 
| Add an EPEL repository on Amazon Linux. | To add an Extra Packages for Enterprise Linux (EPEL) repository, run the command `sudo amazon-linux-extras install epel -y`. | App developer | 
| Install Amazon ECS Anywhere. | To run the install script, use the following command.<pre>sudo ./ecs-anywhere-install.sh --cluster $CLUSTER_NAME --activation-id $ACTIVATION_ID --activation-code $ACTIVATION_CODE --region $AWS_REGION</pre> | App developer | 
| Check instance information from the ECS cluster. | To check the Systems Manager and ECS cluster instance information and validate that WorkSpaces were added on the cluster, run the following commands from your local machine.<pre>aws ssm describe-instance-information && aws ecs list-container-instances --cluster $CLUSTER_NAME</pre> | App developer | 

### Add an ECS task for the WorkSpaces
<a name="add-an-ecs-task-for-the-workspaces"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a task execution IAM role. | Download `task-execution-assume-role.json` and `external-task-definition.json` from the *Attachments* section. On your local machine, run the following command.<pre>aws iam --region $AWS_DEFAULT_REGION create-role --role-name $ECS_TASK_EXECUTION_ROLE --assume-role-policy-document file://task-execution-assume-role.json</pre> | Cloud architect | 
| Add the policy to the execution role. | Run the following command.<pre>aws iam --region $AWS_DEFAULT_REGION attach-role-policy --role-name $ECS_TASK_EXECUTION_ROLE --policy-arn arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy</pre> | Cloud architect | 
| Create a task role. | Run the following command.<pre>aws iam --region $AWS_DEFAULT_REGION create-role --role-name $ECS_TASK_EXECUTION_ROLE --assume-role-policy-document file://task-execution-assume-role.json</pre> | Cloud architect | 
| Register the task definition to the cluster. | On your local machine, run the following command.<pre>aws ecs register-task-definition --cli-input-json file://external-task-definition.json</pre> | Cloud architect | 
| Run the task. | On your local machine, run the following command.<pre>aws ecs run-task --cluster $CLUSTER_NAME --launch-type EXTERNAL --task-definition nginx</pre> | Cloud architect | 
| Validate the task running state. | To fetch the task ID, run the following command.<pre>export TEST_TASKID=$(aws ecs list-tasks --cluster $CLUSTER_NAME | jq -r '.taskArns[0]')</pre>With the task ID, run the following command.<pre>aws ecs describe-tasks --cluster $CLUSTER_NAME --tasks ${TEST_TASKID}</pre> | Cloud architect | 
| Verify the task on the WorkSpace. | To check that NGINX is running on the WorkSpace, run the command `curl http://localhost:8080`. | App developer | 
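
The `aws ecs list-tasks` call above returns full task ARNs, and `describe-tasks` accepts either a full ARN or just the trailing task ID. When only the bare ID is needed, it can be stripped with plain shell parameter expansion, no `jq` required. A small sketch with an illustrative ARN:

```shell
# An illustrative task ARN of the shape returned by `aws ecs list-tasks`
# (account ID, cluster name, and task ID below are made up).
TASK_ARN="arn:aws:ecs:us-east-1:111122223333:task/my-cluster/0f9de17a6465411c9bcdae94ab9d3d32"

# Strip everything up to and including the last "/" to get the bare task ID.
TASK_ID="${TASK_ARN##*/}"
echo "$TASK_ID"
```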

## Related resources
<a name="run-amazon-ecs-tasks-on-amazon-workspaces-with-amazon-ecs-anywhere-resources"></a>
+ [ECS clusters](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/clusters.html)
+ [Setting up a hybrid environment](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-managedinstances.html)
+ [Amazon WorkSpaces](https://docs.aws.amazon.com/workspaces/latest/adminguide/amazon-workspaces.html)
+ [Simple AD](https://docs.aws.amazon.com/workspaces/latest/adminguide/launch-workspace-simple-ad.html)

## Attachments
<a name="attachments-da8b2249-3423-485c-9fef-6f902025e969"></a>

To access additional content that is associated with this document, unzip the following file: [attachment.zip](samples/p-attach/da8b2249-3423-485c-9fef-6f902025e969/attachments/attachment.zip)

# Run an ASP.NET Core web API Docker container on an Amazon EC2 Linux instance
<a name="run-an-asp-net-core-web-api-docker-container-on-an-amazon-ec2-linux-instance"></a>

*Vijai Anand Ramalingam and Sreelaxmi Pai, Amazon Web Services*

## Summary
<a name="run-an-asp-net-core-web-api-docker-container-on-an-amazon-ec2-linux-instance-summary"></a>

This pattern is for people who are starting to containerize their applications on the Amazon Web Services (AWS) Cloud. When you begin containerizing applications in the cloud, there is usually no container orchestration platform set up yet. This pattern helps you quickly set up infrastructure on AWS to test your containerized applications without needing an elaborate container orchestration infrastructure.

The first step in the modernization journey is to transform the application. If it's a legacy .NET Framework application, you must first change the runtime to ASP.NET Core. Then do the following:
+ Create the Docker container image.
+ Run the Docker container using the built image.
+ Validate the application before deploying it on any container orchestration platform, such as Amazon Elastic Container Service (Amazon ECS) or Amazon Elastic Kubernetes Service (Amazon EKS).

This pattern covers the build, run, and validate aspects of modern application development on an Amazon Elastic Compute Cloud (Amazon EC2) Linux instance.

## Prerequisites and limitations
<a name="run-an-asp-net-core-web-api-docker-container-on-an-amazon-ec2-linux-instance-prereqs"></a>

**Prerequisites**
+ An active [Amazon Web Services (AWS) account](https://aws.amazon.com/account/)
+ An [AWS Identity and Access Management (IAM) role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html) with sufficient access to create AWS resources for this pattern 
+ [Visual Studio Community 2022](https://visualstudio.microsoft.com/downloads/) or later downloaded and installed
+ A .NET Framework project modernized to ASP.NET Core
+ A GitHub repository

**Product versions**
+ Visual Studio Community 2022 or later

## Architecture
<a name="run-an-asp-net-core-web-api-docker-container-on-an-amazon-ec2-linux-instance-architecture"></a>

**Target architecture**

This pattern uses an [AWS CloudFormation template](https://console.aws.amazon.com/cloudformation/home?region=us-east-2#/stacks/new?stackName=SSM-SSH-Demo&templateURL=https://aws-quickstart.s3.amazonaws.com/quickstart-examples/samples/session-manager-ssh/session-manager-example.yaml) to create the highly available architecture shown in the following diagram. An Amazon EC2 Linux instance is launched in a private subnet. AWS Systems Manager Session Manager is used to access the private Amazon EC2 Linux instance and to test the API running in the Docker container.

![\[A user accessing the Amazon EC2 Linux instance and testing the API running in the Docker container.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/512e61b2-10ba-43be-bbd8-2bdc597c3de3/images/9c5206f6-32b1-47be-9037-360c0bff713c.png)


1. Access to the Linux instance through Session Manager

## Tools
<a name="run-an-asp-net-core-web-api-docker-container-on-an-amazon-ec2-linux-instance-tools"></a>

**AWS services**
+ [AWS Command Line Interface](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) – AWS Command Line Interface (AWS CLI) is an open source tool for interacting with AWS services through commands in your command line shell. With minimal configuration, you can run AWS CLI commands that implement functionality equivalent to that provided by the browser-based AWS Management Console.
+ [AWS Management Console](https://docs.aws.amazon.com/awsconsolehelpdocs/latest/gsg/learn-whats-new.html) – The AWS Management Console is a web application that comprises a broad collection of service consoles for managing AWS resources. When you first sign in, you see the console home page. The home page provides access to each service console and offers a single place to access the information you need to perform your AWS-related tasks.
+ [AWS Systems Manager Session Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager.html) – Session Manager is a fully managed AWS Systems Manager capability. With Session Manager, you can manage your Amazon Elastic Compute Cloud (Amazon EC2) instances. Session Manager provides secure and auditable node management without the need to open inbound ports, maintain bastion hosts, or manage SSH keys.

**Other tools**
+ [Visual Studio 2022](https://visualstudio.microsoft.com/downloads/) – Visual Studio 2022 is an integrated development environment (IDE).
+ [Docker](https://www.docker.com/) – Docker is a set of platform as a service (PaaS) products that use virtualization at the operating-system level to deliver software in containers.

**Code**

```
FROM mcr.microsoft.com/dotnet/aspnet:5.0 AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443

FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
WORKDIR /src
COPY ["DemoNetCoreWebAPI/DemoNetCoreWebAPI.csproj", "DemoNetCoreWebAPI/"]
RUN dotnet restore "DemoNetCoreWebAPI/DemoNetCoreWebAPI.csproj"
COPY . .
WORKDIR "/src/DemoNetCoreWebAPI"
RUN dotnet build "DemoNetCoreWebAPI.csproj" -c Release -o /app/build

FROM build AS publish
RUN dotnet publish "DemoNetCoreWebAPI.csproj" -c Release -o /app/publish

FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "DemoNetCoreWebAPI.dll"]
```
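
The multi-stage Dockerfile above restores, builds, and publishes the project in the SDK image, then copies only the published output into the smaller ASP.NET runtime image. A hedged sketch of building and running it on the EC2 Linux instance — the image and container name `demo-webapi` is illustrative:

```shell
# Build the image from the directory that contains the Dockerfile.
docker build -t demo-webapi .

# Run the container in the background, mapping container port 80 to host port 80.
docker run -d -p 80:80 --name demo-webapi demo-webapi
```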

## Epics
<a name="run-an-asp-net-core-web-api-docker-container-on-an-amazon-ec2-linux-instance-epics"></a>

### Develop the ASP.NET Core web API
<a name="develop-the-asp-net-core-web-api"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an example ASP.NET Core web API using Visual Studio. | To create an example ASP.NET Core web API, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/run-an-asp-net-core-web-api-docker-container-on-an-amazon-ec2-linux-instance.html) | App developer | 
| Create a Dockerfile. | To create a Dockerfile, do one of the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/run-an-asp-net-core-web-api-docker-container-on-an-amazon-ec2-linux-instance.html)To push the changes to your GitHub repository, run the following command.<pre>git add --all<br />git commit -m "Dockerfile added"<br />git push</pre> | App developer | 

### Set up the Amazon EC2 Linux instance
<a name="set-up-the-amazon-ec2-linux-instance"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Set up the infrastructure. | Launch the [AWS CloudFormation template](https://console.aws.amazon.com/cloudformation/home?region=us-east-2#/stacks/new?stackName=SSM-SSH-Demo&templateURL=https://aws-quickstart.s3.amazonaws.com/quickstart-examples/samples/session-manager-ssh/session-manager-example.yaml) to create the infrastructure, which includes the following: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/run-an-asp-net-core-web-api-docker-container-on-an-amazon-ec2-linux-instance.html)To learn more about accessing a private Amazon EC2 instance using Session Manager without requiring a bastion host, see the [Toward a bastion-less world](https://aws.amazon.com/blogs/infrastructure-and-automation/toward-a-bastion-less-world/) blog post. | App developer, AWS administrator, AWS DevOps | 
| Log in to the Amazon EC2 Linux instance. | To connect to the Amazon EC2 Linux instance in the private subnet, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/run-an-asp-net-core-web-api-docker-container-on-an-amazon-ec2-linux-instance.html) | App developer | 
| Install and start Docker. | To install and start Docker in the Amazon EC2 Linux instance, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/run-an-asp-net-core-web-api-docker-container-on-an-amazon-ec2-linux-instance.html) | App developer, AWS administrator, AWS DevOps | 
| Install Git and clone the repository. | To install Git on the Amazon EC2 Linux instance and clone the repository from GitHub, do the following.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/run-an-asp-net-core-web-api-docker-container-on-an-amazon-ec2-linux-instance.html) | App developer, AWS administrator, AWS DevOps | 
| Build and run the Docker container. | To build the Docker image and run the container inside the Amazon EC2 Linux instance, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/run-an-asp-net-core-web-api-docker-container-on-an-amazon-ec2-linux-instance.html) | App developer, AWS administrator, AWS DevOps | 

### Test the web API
<a name="test-the-web-api"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Test the web API using the curl command. | To test the web API, run the following command.<pre>curl -X GET "http://localhost/WeatherForecast" -H  "accept: text/plain"</pre>Verify the API response. You can get the curl commands for each endpoint from Swagger when you are running it locally. | App developer | 

### Clean up resources
<a name="clean-up-resources"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Delete all resources. | Delete the stack to remove all the resources. This ensures that you aren’t charged for any services that you aren’t using. | AWS administrator, AWS DevOps | 

## Related resources
<a name="run-an-asp-net-core-web-api-docker-container-on-an-amazon-ec2-linux-instance-resources"></a>
+ [Connect to your Linux instance from Windows using PuTTY](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/putty.html)
+ [Create a web API with ASP.NET Core](https://docs.microsoft.com/en-us/aspnet/core/tutorials/first-web-api?view=aspnetcore-5.0&tabs=visual-studio)
+ [Toward a bastion-less world](https://aws.amazon.com/blogs/infrastructure-and-automation/toward-a-bastion-less-world/)

# Run stateful workloads with persistent data storage by using Amazon EFS on Amazon EKS with AWS Fargate
<a name="run-stateful-workloads-with-persistent-data-storage-by-using-amazon-efs-on-amazon-eks-with-aws-fargate"></a>

*Ricardo Morais, Rodrigo Bersa, and Lucio Pereira, Amazon Web Services*

## Summary
<a name="run-stateful-workloads-with-persistent-data-storage-by-using-amazon-efs-on-amazon-eks-with-aws-fargate-summary"></a>

This pattern provides guidance for enabling Amazon Elastic File System (Amazon EFS) as a storage device for containers that are running on Amazon Elastic Kubernetes Service (Amazon EKS) by using AWS Fargate to provision your compute resources.

The setup described in this pattern follows security best practices and provides encryption at rest and encryption in transit by default. To encrypt your Amazon EFS file system, the pattern uses an AWS Key Management Service (AWS KMS) key; you can also specify a key alias, which initiates the creation of a new KMS key.

You can follow the steps in this pattern to create a namespace and Fargate profile for a proof-of-concept (PoC) application, install the Amazon EFS Container Storage Interface (CSI) driver that is used to integrate the Kubernetes cluster with Amazon EFS, configure the storage class, and deploy the PoC application. These steps result in an Amazon EFS file system that is shared among multiple Kubernetes workloads, running over Fargate. The pattern is accompanied by scripts that automate these steps.

You can use this pattern if you want data persistence in your containerized applications and want to avoid data loss during scaling operations. For example:
+ **DevOps tools** – A common scenario is to develop a continuous integration and continuous delivery (CI/CD) strategy. In this case, you can use Amazon EFS as a shared file system to store configurations among different instances of the CI/CD tool or to store a cache (for example, an Apache Maven repository) for pipeline stages among different instances of the CI/CD tool.
+ **Web servers** – A common scenario is to use Apache as an HTTP web server. You can use Amazon EFS as a shared file system to store static files that are shared among different instances of the web server. In this example scenario, modifications are applied directly to the file system instead of static files being baked into a Docker image.

## Prerequisites and limitations
<a name="run-stateful-workloads-with-persistent-data-storage-by-using-amazon-efs-on-amazon-eks-with-aws-fargate-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ An existing Amazon EKS cluster with Kubernetes version 1.17 or later (tested up to version 1.27)
+ An existing Amazon EFS file system to bind a Kubernetes StorageClass and provision file systems dynamically
+ Cluster administration permissions
+ Context configured to point to the desired Amazon EKS cluster

**Limitations**
+ There are some limitations to consider when you’re using Amazon EKS with Fargate. For example, some Kubernetes constructs, such as DaemonSets and privileged containers, aren’t supported. For more information about Fargate limitations, see [AWS Fargate considerations](https://docs.aws.amazon.com/eks/latest/userguide/fargate.html#fargate-considerations) in the Amazon EKS documentation.
+ The code provided with this pattern supports workstations that are running Linux or macOS.

**Product versions**
+ AWS Command Line Interface (AWS CLI) version 2 or later
+ Amazon EFS CSI driver version 1.0 or later (tested up to version 2.4.8)
+ eksctl version 0.24.0 or later (tested up to version 0.158.0)
+ jq version 1.6 or later
+ kubectl version 1.17 or later (tested up to version 1.27)
+ Kubernetes version 1.17 or later (tested up to version 1.27)

## Architecture
<a name="run-stateful-workloads-with-persistent-data-storage-by-using-amazon-efs-on-amazon-eks-with-aws-fargate-architecture"></a>

![\[Architecture diagram of running stateful workloads with persistent data storage by using Amazon EFS\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/2487e285-269b-415b-a270-877f973e3aaf/images/ec8de63c-3307-4010-9e03-2bd7b9881fff.png)


The target architecture comprises the following infrastructure:
+ A virtual private cloud (VPC)
+ Two Availability Zones
+ A public subnet with a NAT gateway that provides internet access
+ A private subnet with an Amazon EKS cluster and Amazon EFS mount targets (also known as *mount points*)
+ Amazon EFS at the VPC level

The following is the environment infrastructure for the Amazon EKS cluster:
+ AWS Fargate profiles that accommodate the Kubernetes constructs at the namespace level
+ A Kubernetes namespace with:
  + Two application pods distributed across Availability Zones
  + One persistent volume claim (PVC) bound to a persistent volume (PV) at the cluster level
+ A cluster-wide PV that is bound to the PVC in the namespace and that points to the Amazon EFS mount targets in the private subnet, outside of the cluster

## Tools
<a name="run-stateful-workloads-with-persistent-data-storage-by-using-amazon-efs-on-amazon-eks-with-aws-fargate-tools"></a>

**AWS services**
+ [AWS Command Line Interface (AWS CLI)](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) is an open-source tool that you can use to interact with AWS services from the command line.
+ [Amazon Elastic File System (Amazon EFS)](https://docs.aws.amazon.com/efs/latest/ug/whatisefs.html) helps you create and configure shared file systems in the AWS Cloud. In this pattern, it provides a simple, scalable, fully managed, and shared file system for use with Amazon EKS.
+ [Amazon Elastic Kubernetes Service (Amazon EKS)](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html) helps you run Kubernetes on AWS without needing to install or operate your own clusters.
+ [AWS Fargate](https://docs.aws.amazon.com/eks/latest/userguide/fargate.html) is a serverless compute engine for Amazon EKS. It creates and manages compute resources for your Kubernetes applications.
+ [AWS Key Management Service (AWS KMS)](https://docs.aws.amazon.com/kms/latest/developerguide/overview.html) helps you create and control cryptographic keys to help protect your data.

**Other tools**
+ [Docker](https://www.docker.com/) is a set of platform as a service (PaaS) products that use virtualization at the operating-system level to deliver software in containers.
+ [eksctl](https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html) is a command-line utility for creating and managing Kubernetes clusters on Amazon EKS.
+ [kubectl](https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html) is a command-line interface that helps you run commands against Kubernetes clusters.
+ [jq](https://stedolan.github.io/jq/download/) is a command-line tool for parsing JSON.

**Code**

The code for this pattern is provided in the GitHub [Persistence Configuration with Amazon EFS on Amazon EKS using AWS Fargate](https://github.com/aws-samples/eks-efs-share-within-fargate) repo. The scripts are organized by epic, in the folders `epic01` through `epic06`, corresponding to the order in the [Epics](#run-stateful-workloads-with-persistent-data-storage-by-using-amazon-efs-on-amazon-eks-with-aws-fargate-epics) section in this pattern.

## Best practices
<a name="run-stateful-workloads-with-persistent-data-storage-by-using-amazon-efs-on-amazon-eks-with-aws-fargate-best-practices"></a>

The target architecture includes the following services and components, and it follows [AWS Well-Architected Framework](https://aws.amazon.com/architecture/well-architected/) best practices:
+ Amazon EFS, which provides a simple, scalable, fully managed elastic NFS file system. This is used as a shared file system among all replications of the PoC application that are running in pods, which are distributed in the private subnets of the chosen Amazon EKS cluster.
+ An Amazon EFS mount target for each private subnet. This provides redundancy per Availability Zone within the virtual private cloud (VPC) of the cluster.
+ Amazon EKS, which runs the Kubernetes workloads. You must provision an Amazon EKS cluster before you use this pattern, as described in the [Prerequisites](#run-stateful-workloads-with-persistent-data-storage-by-using-amazon-efs-on-amazon-eks-with-aws-fargate-prereqs) section.
+ AWS KMS, which provides encryption at rest for the content that’s stored in the Amazon EFS file system.
+ Fargate, which manages the compute resources for the containers so that you can focus on business requirements instead of infrastructure burden. The Fargate profile is created for all private subnets. It provides redundancy per Availability Zone within the virtual private cloud (VPC) of the cluster.
+ Kubernetes Pods, for validating that content can be shared, consumed, and written by different instances of an application.

## Epics
<a name="run-stateful-workloads-with-persistent-data-storage-by-using-amazon-efs-on-amazon-eks-with-aws-fargate-epics"></a>

### Provision an Amazon EKS cluster (optional)
<a name="provision-an-amazon-eks-cluster-optional"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an Amazon EKS cluster. | If you already have a cluster deployed, skip to the next epic. Create an Amazon EKS cluster in your existing AWS account. In the [GitHub directory](https://github.com/aws-samples/eks-efs-share-within-fargate/tree/master/bootstrap), use one of the patterns to deploy an Amazon EKS cluster by using Terraform or eksctl. For more information, see [Creating an Amazon EKS cluster](https://docs.aws.amazon.com/eks/latest/userguide/create-cluster.html) in the Amazon EKS documentation. The Terraform pattern also includes examples that show how to link Fargate profiles to your Amazon EKS cluster, create an Amazon EFS file system, and deploy the Amazon EFS CSI driver in your Amazon EKS cluster. | AWS administrator, Terraform or eksctl administrator, Kubernetes administrator | 
| Export environment variables. | Run the `env.sh` script. This provides the information required in the next steps.<pre>source ./scripts/env.sh<br />Inform the AWS Account ID:<br /><12-digit-account-id><br />Inform your AWS Region:<br /><aws-Region-code><br />Inform your Amazon EKS Cluster Name:<br /><amazon-eks-cluster-name><br />Inform the Amazon EFS Creation Token:<br /><self-generated-uuid></pre>If you haven't noted this information yet, you can retrieve it with the following CLI commands.<pre># ACCOUNT ID<br />aws sts get-caller-identity --query "Account" --output text</pre><pre># REGION CODE<br />aws configure get region</pre><pre># CLUSTER EKS NAME<br />aws eks list-clusters --query "clusters" --output text</pre><pre># GENERATE EFS TOKEN<br />uuidgen</pre> | AWS systems administrator | 

### Create a Kubernetes namespace and a linked Fargate profile
<a name="create-a-kubernetes-namespace-and-a-linked-fargate-profile"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a Kubernetes namespace and Fargate profile for application workloads. | Create a namespace for receiving the application workloads that interact with Amazon EFS. Run the `create-k8s-ns-and-linked-fargate-profile.sh` script. You can choose to use a custom namespace name or the default provided namespace `poc-efs-eks-fargate`.**With a custom application namespace name:**<pre>export APP_NAMESPACE=<CUSTOM_NAME><br />./scripts/epic01/create-k8s-ns-and-linked-fargate-profile.sh \<br />-c "$CLUSTER_NAME" -n "$APP_NAMESPACE"</pre>**Without a custom application namespace name:**<pre>./scripts/epic01/create-k8s-ns-and-linked-fargate-profile.sh \<br />    -c "$CLUSTER_NAME"</pre>where `$CLUSTER_NAME` is the name of your Amazon EKS cluster. The `-n <NAMESPACE>` parameter is optional; if it's not specified, a default namespace name is generated. | Kubernetes user with granted permissions | 

### Create an Amazon EFS file system
<a name="create-an-amazon-efs-file-system"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Generate a unique token. | Amazon EFS requires a creation token to ensure idempotent operation (calling the operation with the same creation token has no effect). To meet this requirement, you must generate a unique token through an available technique. For example, you can generate a universally unique identifier (UUID) to use as a creation token. | AWS systems administrator | 
| Create an Amazon EFS file system. | Create the file system for receiving the data files that are read and written by the application workloads. You can create an encrypted or non-encrypted file system. (As a best practice, the code for this pattern creates an encrypted system to enable encryption at rest by default.) You can use a unique, symmetric AWS KMS key to encrypt your file system. If a custom key is not specified, an AWS managed key is used.Use the create-efs.sh script to create an encrypted or non-encrypted Amazon EFS file system, after you generate a unique token for Amazon EFS.**With encryption at rest, without a KMS key:**<pre>./scripts/epic02/create-efs.sh \<br />    -c "$CLUSTER_NAME" \<br />    -t "$EFS_CREATION_TOKEN"</pre>where `$CLUSTER_NAME` is the name of your Amazon EKS cluster and `$EFS_CREATION_TOKEN` is a unique creation token for the file system.**With encryption at rest, with a KMS key:**<pre>./scripts/epic02/create-efs.sh \<br />    -c "$CLUSTER_NAME" \<br />    -t "$EFS_CREATION_TOKEN" \<br />    -k "$KMS_KEY_ALIAS"</pre>where `$CLUSTER_NAME` is the name of your Amazon EKS cluster, `$EFS_CREATION_TOKEN` is a unique creation token for the file system, and `$KMS_KEY_ALIAS` is the alias for the KMS key.**Without encryption:**<pre>./scripts/epic02/create-efs.sh -d \<br />    -c "$CLUSTER_NAME" \<br />    -t "$EFS_CREATION_TOKEN"</pre>where `$CLUSTER_NAME` is the name of your Amazon EKS cluster, `$EFS_CREATION_TOKEN` is a unique creation token for the file system, and `–d` disables encryption at rest. | AWS systems administrator | 
| Create a security group. | Create a security group to allow the Amazon EKS cluster to access the Amazon EFS file system. | AWS systems administrator | 
| Update the inbound rule for the security group. | Update the inbound rules of the security group to allow incoming traffic for the following settings:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/run-stateful-workloads-with-persistent-data-storage-by-using-amazon-efs-on-amazon-eks-with-aws-fargate.html) | AWS systems administrator | 
| Add a mount target for each private subnet. | For each private subnet of the Kubernetes cluster, create a mount target for the file system and the security group. | AWS systems administrator | 
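For example, the following minimal sketch generates a UUID to use as the creation token before you run the `create-efs.sh` script. (The `python3` fallback is an assumption about your environment; `uuidgen` ships with most Linux and macOS systems.)

```shell
# Generate a UUID to use as the Amazon EFS creation token.
# Fall back to Python's uuid module if uuidgen is not installed.
EFS_CREATION_TOKEN=$(uuidgen 2>/dev/null || python3 -c 'import uuid; print(uuid.uuid4())')
echo "EFS creation token: $EFS_CREATION_TOKEN"
```

You can then pass the token to the scripts in this pattern through the `-t` option.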

### Install Amazon EFS components into the Kubernetes cluster
<a name="install-amazon-efs-components-into-the-kubernetes-cluster"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Deploy the Amazon EFS CSI driver. | Deploy the Amazon EFS CSI driver into the cluster. The driver provisions storage according to persistent volume claims created by applications. Run the `create-k8s-efs-csi-sc.sh` script to deploy the Amazon EFS CSI driver and the storage class into the cluster.<pre>./scripts/epic03/create-k8s-efs-csi-sc.sh</pre>This script uses the `kubectl` utility, so make sure that your kubectl context has been configured and points to the desired Amazon EKS cluster. | Kubernetes user with granted permissions | 
| Deploy the storage class. | Deploy the storage class into the cluster for the Amazon EFS provisioner (efs.csi.aws.com). | Kubernetes user with granted permissions | 
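As a sketch of what the script deploys, a storage class for the Amazon EFS provisioner can look like the following. (The `efs-sc` name matches the example outputs later in this pattern; the exact manifest applied by the script may differ.)

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: efs-sc                  # storage class referenced by the persistent volume and claim
provisioner: efs.csi.aws.com    # Amazon EFS CSI driver provisioner
```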

### Install the PoC application into the Kubernetes cluster
<a name="install-the-poc-application-into-the-kubernetes-cluster"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Deploy the persistent volume. | Deploy the persistent volume, and link it to the created storage class and to the ID of the Amazon EFS file system. The application uses the persistent volume to read and write content. You can specify any size for the persistent volume in the storage field. Kubernetes requires this field, but because Amazon EFS is an elastic file system, it does not enforce any file system capacity. You can deploy the persistent volume with or without encryption. (As a best practice, the Amazon EFS CSI driver enables encryption in transit by default.) Run the `deploy-poc-app.sh` script to deploy the persistent volume, the persistent volume claim, and the two workloads.**With encryption in transit:**<pre>./scripts/epic04/deploy-poc-app.sh \<br />    -t "$EFS_CREATION_TOKEN"</pre>where `$EFS_CREATION_TOKEN` is the unique creation token for the file system.**Without encryption in transit:**<pre>./scripts/epic04/deploy-poc-app.sh -d \<br />    -t "$EFS_CREATION_TOKEN"</pre>where `$EFS_CREATION_TOKEN` is the unique creation token for the file system, and `-d` disables encryption in transit. | Kubernetes user with granted permissions | 
| Deploy the persistent volume claim requested by the application. | Deploy the persistent volume claim requested by the application, and link it to the storage class. Use the same access mode as the persistent volume you created previously. You can specify any size for the persistent volume claim in the storage field. Kubernetes requires this field, but because Amazon EFS is an elastic file system, it does not enforce any file system capacity. | Kubernetes user with granted permissions | 
| Deploy workload 1. | Deploy the pod that represents workload 1 of the application. This workload writes content to the file `/data/out1.txt`. | Kubernetes user with granted permissions | 
| Deploy workload 2. | Deploy the pod that represents workload 2 of the application. This workload writes content to the file `/data/out2.txt`. | Kubernetes user with granted permissions | 
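For reference, a persistent volume and claim that match the values shown in the example outputs (`poc-app-pv`, `poc-app-pvc`, namespace `poc-efs-eks-fargate`, 1Mi, `ReadWriteMany`, storage class `efs-sc`) can be sketched as follows. The file system ID in `volumeHandle` is a placeholder, and the manifests the script applies may differ.

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: poc-app-pv
spec:
  capacity:
    storage: 1Mi                     # required by Kubernetes; Amazon EFS does not enforce it
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-12345678        # placeholder: your Amazon EFS file system ID
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: poc-app-pvc
  namespace: poc-efs-eks-fargate
spec:
  accessModes:
    - ReadWriteMany                  # same access mode as the persistent volume
  storageClassName: efs-sc
  resources:
    requests:
      storage: 1Mi
```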

### Validate file system persistence, durability, and shareability
<a name="validate-file-system-persistence-durability-and-shareability"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Check the status of the `PersistentVolume`. | Enter the following command to check the status of the `PersistentVolume`.<pre>kubectl get pv</pre>For an example output, see the [Additional information](#run-stateful-workloads-with-persistent-data-storage-by-using-amazon-efs-on-amazon-eks-with-aws-fargate-additional) section. | Kubernetes user with granted permissions | 
| Check the status of the `PersistentVolumeClaim`. | Enter the following command to check the status of the `PersistentVolumeClaim`.<pre>kubectl -n poc-efs-eks-fargate get pvc</pre>For an example output, see the [Additional information](#run-stateful-workloads-with-persistent-data-storage-by-using-amazon-efs-on-amazon-eks-with-aws-fargate-additional) section. | Kubernetes user with granted permissions | 
| Validate that workload 1 can write to the file system. | Enter the following command to validate that workload 1 is writing to `/data/out1.txt`.<pre>kubectl exec -ti poc-app1 -n poc-efs-eks-fargate -- tail -f /data/out1.txt</pre>The results are similar to the following:<pre>...<br />Thu Sep  3 15:25:07 UTC 2023 - PoC APP 1<br />Thu Sep  3 15:25:12 UTC 2023 - PoC APP 1<br />Thu Sep  3 15:25:17 UTC 2023 - PoC APP 1<br />...</pre> | Kubernetes user with granted permissions | 
| Validate that workload 2 can write to the file system. | Enter the following command to validate that workload 2 is writing to `/data/out2.txt`.<pre>kubectl -n $APP_NAMESPACE exec -ti poc-app2 -- tail -f /data/out2.txt</pre>The results are similar to the following:<pre>...<br />Thu Sep  3 15:26:48 UTC 2023 - PoC APP 2<br />Thu Sep  3 15:26:53 UTC 2023 - PoC APP 2<br />Thu Sep  3 15:26:58 UTC 2023 - PoC APP 2<br />...</pre> | Kubernetes user with granted permissions | 
| Validate that workload 1 can read the file written by workload 2. | Enter the following command to validate that workload 1 can read the `/data/out2.txt` file written by workload 2.<pre>kubectl exec -ti poc-app1 -n poc-efs-eks-fargate -- tail -n 3 /data/out2.txt</pre>The results are similar to the following:<pre>...<br />Thu Sep  3 15:26:48 UTC 2023 - PoC APP 2<br />Thu Sep  3 15:26:53 UTC 2023 - PoC APP 2<br />Thu Sep  3 15:26:58 UTC 2023 - PoC APP 2<br />...</pre> | Kubernetes user with granted permissions | 
| Validate that workload 2 can read the file written by workload 1. | Enter the following command to validate that workload 2 can read the `/data/out1.txt` file written by workload 1.<pre>kubectl -n $APP_NAMESPACE exec -ti poc-app2 -- tail -n 3 /data/out1.txt</pre>The results are similar to the following:<pre>...<br />Thu Sep  3 15:29:22 UTC 2023 - PoC APP 1<br />Thu Sep  3 15:29:27 UTC 2023 - PoC APP 1<br />Thu Sep  3 15:29:32 UTC 2023 - PoC APP 1<br />...</pre> | Kubernetes user with granted permissions | 
| Validate that files are retained after you remove application components. | Next, you use a script to remove the application components (persistent volume, persistent volume claim, and pods), and validate that the files `/data/out1.txt` and `/data/out2.txt` are retained in the file system. Run the `validate-efs-content.sh` script by using the following command.<pre>./scripts/epic05/validate-efs-content.sh \<br />    -t "$EFS_CREATION_TOKEN"</pre>where `$EFS_CREATION_TOKEN` is the unique creation token for the file system. The results are similar to the following:<pre>pod/poc-app-validation created<br />Waiting for pod get Running state...<br />Waiting for pod get Running state...<br />Waiting for pod get Running state...<br />Results from execution of 'find /data' on validation process pod:<br />/data<br />/data/out2.txt<br />/data/out1.txt</pre> | Kubernetes user with granted permissions, System administrator | 

### Monitor operations
<a name="monitor-operations"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Monitor application logs. | As part of a day-two operation, ship the application logs to Amazon CloudWatch for monitoring. | AWS systems administrator, Kubernetes user with granted permissions | 
| Monitor Amazon EKS and Kubernetes containers with Container Insights. | As part of a day-two operation, monitor the Amazon EKS and Kubernetes systems by using Amazon CloudWatch Container Insights. This tool collects, aggregates, and summarizes metrics from containerized applications at different levels and dimensions. For more information, see the [Related resources](#run-stateful-workloads-with-persistent-data-storage-by-using-amazon-efs-on-amazon-eks-with-aws-fargate-resources) section. | AWS systems administrator, Kubernetes user with granted permissions | 
| Monitor Amazon EFS with CloudWatch. | As part of a day-two operation, monitor the file systems using Amazon CloudWatch, which collects and processes raw data from Amazon EFS into readable, near real-time metrics. For more information, see the [Related resources](#run-stateful-workloads-with-persistent-data-storage-by-using-amazon-efs-on-amazon-eks-with-aws-fargate-resources) section. | AWS systems administrator | 

### Clean up resources
<a name="clean-up-resources"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clean up all created resources for the pattern. | After you complete this pattern, clean up all resources to avoid incurring AWS charges. Run the `clean-up-resources.sh` script to remove all resources after you have finished using the PoC application. Complete one of the following options.**With encryption at rest, with a KMS key:**<pre>./scripts/epic06/clean-up-resources.sh \<br />    -c "$CLUSTER_NAME" \<br />    -t "$EFS_CREATION_TOKEN" \<br />    -k "$KMS_KEY_ALIAS"</pre>where `$CLUSTER_NAME` is the name of your Amazon EKS cluster, `$EFS_CREATION_TOKEN` is the creation token for the file system, and `$KMS_KEY_ALIAS` is the alias for the KMS key.**Without encryption at rest:**<pre>./scripts/epic06/clean-up-resources.sh \<br />    -c "$CLUSTER_NAME" \<br />    -t "$EFS_CREATION_TOKEN"</pre>where `$CLUSTER_NAME` is the name of your Amazon EKS cluster and `$EFS_CREATION_TOKEN` is the creation token for the file system. | Kubernetes user with granted permissions, System administrator | 

## Related resources
<a name="run-stateful-workloads-with-persistent-data-storage-by-using-amazon-efs-on-amazon-eks-with-aws-fargate-resources"></a>

**References**
+ [AWS Fargate for Amazon EKS now supports Amazon EFS](https://aws.amazon.com/blogs/aws/new-aws-fargate-for-amazon-eks-now-supports-amazon-efs/) (announcement)
+ [How to capture application logs when using Amazon EKS on AWS Fargate](https://aws.amazon.com/blogs/containers/how-to-capture-application-logs-when-using-amazon-eks-on-aws-fargate/) (blog post)
+ [Using Container Insights](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/ContainerInsights.html) (Amazon CloudWatch documentation)
+ [Setting Up Container Insights on Amazon EKS and Kubernetes](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/deploy-container-insights-EKS.html) (Amazon CloudWatch documentation)
+ [Amazon EKS and Kubernetes Container Insights metrics](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Container-Insights-metrics-EKS.html) (Amazon CloudWatch documentation)
+ [Monitoring Amazon EFS with Amazon CloudWatch](https://docs.aws.amazon.com/efs/latest/ug/monitoring-cloudwatch.html) (Amazon EFS documentation)

**GitHub tutorials and examples**
+ [Static provisioning](https://github.com/kubernetes-sigs/aws-efs-csi-driver/blob/master/examples/kubernetes/static_provisioning/README.md)
+ [Encryption in transit](https://github.com/kubernetes-sigs/aws-efs-csi-driver/blob/master/examples/kubernetes/encryption_in_transit/README.md)
+ [Accessing the file system from multiple pods](https://github.com/kubernetes-sigs/aws-efs-csi-driver/blob/master/examples/kubernetes/multiple_pods/README.md)
+ [Consuming Amazon EFS in StatefulSets](https://github.com/kubernetes-sigs/aws-efs-csi-driver/blob/master/examples/kubernetes/statefulset/README.md)
+ [Mounting subpaths](https://github.com/kubernetes-sigs/aws-efs-csi-driver/blob/master/examples/kubernetes/volume_path/README.md)
+ [Using Amazon EFS access points](https://github.com/kubernetes-sigs/aws-efs-csi-driver/blob/master/examples/kubernetes/access_points/README.md)
+ [Amazon EKS Blueprints for Terraform](https://aws-ia.github.io/terraform-aws-eks-blueprints/)

**Required tools**
+ [Installing the AWS CLI version 2](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html)
+ [Installing eksctl](https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html)
+ [Installing kubectl](https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html)
+ [Installing jq](https://stedolan.github.io/jq/download/)

## Additional information
<a name="run-stateful-workloads-with-persistent-data-storage-by-using-amazon-efs-on-amazon-eks-with-aws-fargate-additional"></a>

The following is an example output of the `kubectl get pv` command.

```
NAME         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                             STORAGECLASS   REASON   AGE
poc-app-pv   1Mi        RWX            Retain           Bound    poc-efs-eks-fargate/poc-app-pvc   efs-sc                  3m56s
```

The following is an example output of the `kubectl -n poc-efs-eks-fargate get pvc` command.

```
NAME          STATUS   VOLUME       CAPACITY   ACCESS MODES   STORAGECLASS   AGE
poc-app-pvc   Bound    poc-app-pv   1Mi        RWX            efs-sc         4m34s
```

# Set up event-driven auto scaling in Amazon EKS by using Amazon EKS Pod Identity and KEDA
<a name="event-driven-auto-scaling-with-eks-pod-identity-and-keda"></a>

*Dipen Desai, Abhay Diwan, Kamal Joshi, and Mahendra Revanasiddappa, Amazon Web Services*

## Summary
<a name="event-driven-auto-scaling-with-eks-pod-identity-and-keda-summary"></a>

Orchestration platforms, such as [Amazon Elastic Kubernetes Service (Amazon EKS)](https://docs.aws.amazon.com/eks/latest/userguide/what-is-eks.html), have streamlined the lifecycle management of container-based applications. This helps organizations focus on building, securing, operating, and maintaining container-based applications. As event-driven deployments become more common, organizations are more frequently scaling Kubernetes deployments based on various event sources. This method, combined with auto scaling, can result in significant cost savings by providing on-demand compute resources and efficient scaling that is tailored to application logic.

[KEDA](https://keda.sh/) is a Kubernetes-based event-driven autoscaler. KEDA helps you scale any container in Kubernetes based on the number of events that need to be processed. It is lightweight and integrates with any Kubernetes cluster. It also works with standard Kubernetes components, such as the [Horizontal Pod Autoscaler (HPA)](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/). KEDA also offers [TriggerAuthentication](https://keda.sh/docs/2.14/concepts/authentication/#re-use-credentials-and-delegate-auth-with-triggerauthentication), a feature that helps you delegate authentication by describing authentication parameters separately from the `ScaledObject` and the deployment containers.

AWS provides AWS Identity and Access Management (IAM) roles that support diverse Kubernetes deployment options, including Amazon EKS, Amazon EKS Anywhere, Red Hat OpenShift Service on AWS (ROSA), and self-managed Kubernetes clusters on Amazon Elastic Compute Cloud (Amazon EC2). These roles use IAM constructs, such as OpenID Connect (OIDC) identity providers and IAM trust policies, to operate across different environments without relying directly on Amazon EKS services or APIs. For more information, see [IAM roles for service accounts](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html) in the Amazon EKS documentation.

[Amazon EKS Pod Identity](https://docs.aws.amazon.com/eks/latest/userguide/pod-identities.html) simplifies the process for Kubernetes service accounts to assume IAM roles without requiring OIDC providers. It provides the ability to manage credentials for your applications. Instead of creating and distributing your AWS credentials to the containers or using the Amazon EC2 instance’s role, you associate an IAM role with a Kubernetes service account and configure your Pods to use the service account. This helps you use an IAM role across multiple clusters and simplifies policy management by enabling the reuse of permission policies across IAM roles.

By implementing KEDA with Amazon EKS Pod Identity, businesses can achieve efficient event-driven auto scaling and simplified credential management. Applications scale based on demand, which optimizes resource utilization and reduces costs.

This pattern helps you integrate Amazon EKS Pod Identity with KEDA. It showcases how you can use the `keda-operator` service account and delegate authentication with `TriggerAuthentication`. It also describes how to set up a trust relationship between an IAM role for the KEDA operator and an IAM role for the application. This trust relationship allows KEDA to monitor messages in the event queues and adjust scaling for the destination Kubernetes objects.

## Prerequisites and limitations
<a name="event-driven-auto-scaling-with-eks-pod-identity-and-keda-prereqs"></a>

**Prerequisites**
+ AWS Command Line Interface (AWS CLI) version 2.13.17 or later, [installed](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html)
+ Python version 3.11.5 or later, [installed](https://www.python.org/downloads/)
+ AWS SDK for Python (Boto3) version 1.34.135 or later, [installed](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/quickstart.html)
+ Helm version 3.12.3 or later, [installed](https://helm.sh/docs/intro/install/)
+ kubectl version 1.25.1 or later, [installed](https://kubernetes.io/docs/tasks/tools/)
+ Docker Engine version 26.1.1 or later, [installed](https://docs.docker.com/engine/install/)
+ An Amazon EKS cluster version 1.24 or later, [created](https://docs.aws.amazon.com/eks/latest/userguide/create-cluster.html)
+ Prerequisites for creating the Amazon EKS Pod Identity agent, [met](https://docs.aws.amazon.com/eks/latest/userguide/pod-id-agent-setup.html#pod-id-agent-add-on-create)

**Limitations**
+ You must establish a trust relationship between the `keda-operator` role and the `keda-identity` role. Instructions are provided in the [Epics](#event-driven-auto-scaling-with-eks-pod-identity-and-keda-epics) section of this pattern.
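For context, the trust policy on the `keda-identity` role that allows the `keda-operator` role to assume it can be sketched as follows. (The account ID is a placeholder; the Epics section contains the authoritative steps.)

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:role/keda-operator"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```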

## Architecture
<a name="event-driven-auto-scaling-with-eks-pod-identity-and-keda-architecture"></a>

In this pattern, you create the following AWS resources:
+ **Amazon Elastic Container Registry (Amazon ECR) repository** – In this pattern, this repository is named `keda-pod-identity-registry`. This private repository is used to store Docker images of the sample application.
+ **Amazon Simple Queue Service (Amazon SQS) queue** – In this pattern, this queue is named `event-messages-queue`. The queue acts as a message buffer that collects and stores incoming messages. KEDA monitors the queue metrics, such as message count or queue length, and it automatically scales the application based on these metrics.
+ **IAM role for the application** – In this pattern, this role is named `keda-identity`. The `keda-operator` role assumes this role. This role allows access to the Amazon SQS queue.
+ **IAM role for the KEDA operator** – In this pattern, this role is named `keda-operator`. The KEDA operator uses this role to make the required AWS API calls. This role has permissions to assume the `keda-identity` role. Because of the trust relationship between the `keda-operator` and the `keda-identity` roles, the `keda-operator` role has Amazon SQS permissions.

Through the `TriggerAuthentication` and `ScaledObject` Kubernetes custom resources, the operator uses the `keda-identity` role to connect with an Amazon SQS queue. Based on the queue size, KEDA automatically scales the application deployment. It adds 1 pod for every 5 unread messages in the queue. In the default configuration, if there are no unread messages in the Amazon SQS queue, the application scales down to 0 pods. The KEDA operator monitors the queue at an interval that you specify.
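The scaling behavior described above can be sketched with the following `TriggerAuthentication` and `ScaledObject` manifests. This is a minimal sketch: the deployment name, account ID, and AWS Region are placeholder assumptions, and the code repository contains the actual manifests.

```yaml
apiVersion: keda.sh/v1alpha1
kind: TriggerAuthentication
metadata:
  name: keda-trigger-auth
  namespace: security
spec:
  podIdentity:
    provider: aws                     # delegate authentication to Amazon EKS Pod Identity
    roleArn: arn:aws:iam::111122223333:role/keda-identity   # placeholder account ID
---
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: sqs-scaledobject
  namespace: security
spec:
  scaleTargetRef:
    name: my-app                      # placeholder: the deployment to scale
  minReplicaCount: 0                  # scale to zero when the queue is empty
  pollingInterval: 10                 # seconds between queue checks
  triggers:
    - type: aws-sqs-queue
      authenticationRef:
        name: keda-trigger-auth
      metadata:
        queueURL: https://sqs.us-east-1.amazonaws.com/111122223333/event-messages-queue
        queueLength: "5"              # add 1 pod for every 5 unread messages
        awsRegion: us-east-1          # placeholder Region
```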

 

The following image shows how you use Amazon EKS Pod Identity to provide the `keda-operator` role with secure access to the Amazon SQS queue.

![\[Using KEDA and Amazon EKS Pod Identity to automatically scale a Kubernetes-based application.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/56f7506d-e8d3-43e5-bec6-42267fedd0ae/images/05bdbd09-9eb8-4c0b-8c0d-efe38aecb683.png)


The diagram shows the following workflow:

1. You install the Amazon EKS Pod Identity agent in the Amazon EKS cluster.

1. You deploy the KEDA operator in the `keda` namespace in the Amazon EKS cluster.

1. You create the `keda-operator` and `keda-identity` IAM roles in the target AWS account.

1. You establish a trust relationship between the IAM roles.

1. You deploy the application in the `security` namespace.

1. The KEDA operator polls messages in an Amazon SQS queue.

1. KEDA initiates HPA, which automatically scales the application based on the queue size.

## Tools
<a name="event-driven-auto-scaling-with-eks-pod-identity-and-keda-tools"></a>

**AWS services**
+ [Amazon Elastic Container Registry (Amazon ECR)](https://docs.aws.amazon.com/AmazonECR/latest/userguide/what-is-ecr.html) is a managed container image registry service that’s secure, scalable, and reliable.
+ [Amazon Elastic Kubernetes Service (Amazon EKS)](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html) helps you run Kubernetes on AWS without needing to install or maintain your own Kubernetes control plane or nodes.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.
+ [Amazon Simple Queue Service (Amazon SQS)](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/welcome.html) provides a secure, durable, and available hosted queue that helps you integrate and decouple distributed software systems and components.

**Other tools**
+ [KEDA](https://keda.sh/) is a Kubernetes-based event-driven autoscaler.

**Code repository**

The code for this pattern is available in the GitHub [Event-driven auto scaling using EKS Pod Identity and KEDA](https://github.com/aws-samples/event-driven-autoscaling-using-podidentity-and-keda/tree/main) repository.

## Best practices
<a name="event-driven-auto-scaling-with-eks-pod-identity-and-keda-best-practices"></a>

We recommend that you adhere to the following best practices:
+ [Amazon EKS best practices](https://docs.aws.amazon.com/eks/latest/best-practices/introduction.html)
+ [Security best practices in IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html)
+ [Amazon SQS best practices](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-best-practices.html)

## Epics
<a name="event-driven-auto-scaling-with-eks-pod-identity-and-keda-epics"></a>

### Create AWS resources
<a name="create-aws-resources"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the IAM role for the KEDA operator. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/event-driven-auto-scaling-with-eks-pod-identity-and-keda.html) | AWS administrator | 
| Create the IAM role for the sample application. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/event-driven-auto-scaling-with-eks-pod-identity-and-keda.html) | AWS administrator | 
| Create an Amazon SQS queue. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/event-driven-auto-scaling-with-eks-pod-identity-and-keda.html) | General AWS | 
| Create an Amazon ECR repository. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/event-driven-auto-scaling-with-eks-pod-identity-and-keda.html) | General AWS | 

### Set up the Amazon EKS cluster
<a name="set-up-the-eks-cluster"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Deploy the Amazon EKS Pod Identity agent. | For the target Amazon EKS cluster, set up the Amazon EKS Pod Identity agent. Follow the instructions in [Set up the Amazon EKS Pod Identity Agent](https://docs.aws.amazon.com/eks/latest/userguide/pod-id-agent-setup.html#pod-id-agent-add-on-create) in the Amazon EKS documentation. | AWS DevOps | 
| Deploy KEDA. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/event-driven-auto-scaling-with-eks-pod-identity-and-keda.html) | DevOps engineer | 
| Assign the IAM role to the Kubernetes service account. | Follow the instructions in [Assign an IAM role to a Kubernetes service account](https://docs.aws.amazon.com/eks/latest/userguide/pod-id-association.html) in the Amazon EKS documentation. Use the following values:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/event-driven-auto-scaling-with-eks-pod-identity-and-keda.html) | AWS DevOps | 
| Create a namespace. | Enter the following command to create a `security` namespace in the target Amazon EKS cluster:<pre>kubectl create ns security</pre> | DevOps engineer | 
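As a sketch of the service-account association step, the AWS CLI `create-pod-identity-association` command can be used as follows. The cluster name and account ID are placeholders; the `keda-operator` service account and `keda` namespace match the KEDA deployment in this pattern.

```shell
# Associate the keda-operator IAM role with the keda-operator
# service account in the keda namespace (placeholder values).
aws eks create-pod-identity-association \
    --cluster-name my-eks-cluster \
    --namespace keda \
    --service-account keda-operator \
    --role-arn arn:aws:iam::111122223333:role/keda-operator
```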

### Deploy the sample application
<a name="deploy-the-sample-application"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clone the application files. | Enter the following command to clone the [Event-driven auto scaling using EKS Pod Identity and KEDA repository](https://github.com/aws-samples/event-driven-autoscaling-using-podidentity-and-keda/tree/main) from GitHub:<pre>git clone https://github.com/aws-samples/event-driven-autoscaling-using-podidentity-and-keda.git</pre> | DevOps engineer | 
| Build the Docker image. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/event-driven-auto-scaling-with-eks-pod-identity-and-keda.html) | DevOps engineer | 
| Push the Docker image to Amazon ECR. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/event-driven-auto-scaling-with-eks-pod-identity-and-keda.html)You can find push commands by navigating to the Amazon ECR repository page and then choosing **View push commands**. | DevOps engineer | 
| Deploy the sample application. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/event-driven-auto-scaling-with-eks-pod-identity-and-keda.html) | DevOps engineer | 
| Assign the IAM role to the application service account. | Do one of the following to associate the `keda-identity` IAM role with the service account for the sample application:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/event-driven-auto-scaling-with-eks-pod-identity-and-keda.html) | DevOps engineer | 
| Deploy `ScaledObject` and `TriggerAuthentication`. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/event-driven-auto-scaling-with-eks-pod-identity-and-keda.html) | DevOps engineer | 

### Test auto scaling
<a name="test-auto-scaling"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Send messages to the Amazon SQS queue. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/event-driven-auto-scaling-with-eks-pod-identity-and-keda.html) | DevOps engineer | 
| Monitor the application pods. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/event-driven-auto-scaling-with-eks-pod-identity-and-keda.html) | DevOps engineer | 

## Troubleshooting
<a name="event-driven-auto-scaling-with-eks-pod-identity-and-keda-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| The KEDA operator cannot scale the application. | Enter the following command to check the logs of the KEDA operator:<pre>kubectl logs -n keda -l app=keda-operator -c keda-operator</pre> If there is an `HTTP 403` response code, then the application and the KEDA scaler do not have sufficient permissions to access the Amazon SQS queue. Complete the following steps:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/event-driven-auto-scaling-with-eks-pod-identity-and-keda.html)If there is an `Assume-Role` error, then an [Amazon EKS node IAM role](https://docs.aws.amazon.com/eks/latest/userguide/create-node-role.html) is unable to assume the IAM role that is defined for `TriggerAuthentication`. Complete the following steps:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/event-driven-auto-scaling-with-eks-pod-identity-and-keda.html) | 

## Related resources
<a name="event-driven-auto-scaling-with-eks-pod-identity-and-keda-resources"></a>
+ [Set up the Amazon EKS Pod Identity Agent](https://docs.aws.amazon.com/eks/latest/userguide/pod-id-agent-setup.html) (Amazon EKS documentation)
+ [Deploying KEDA](https://keda.sh/docs/2.14/deploy/) (KEDA documentation)
+ [ScaledObject specification](https://keda.sh/docs/2.16/reference/scaledobject-spec/) (KEDA documentation)
+ [Authentication with TriggerAuthentication](https://keda.sh/docs/2.14/concepts/authentication/) (KEDA documentation)

# Streamline PostgreSQL deployments on Amazon EKS by using PGO
<a name="streamline-postgresql-deployments-amazon-eks-pgo"></a>

*Shalaka Dengale, Amazon Web Services*

## Summary
<a name="streamline-postgresql-deployments-amazon-eks-pgo-summary"></a>

This pattern integrates the Postgres Operator from Crunchy Data (PGO) with Amazon Elastic Kubernetes Service (Amazon EKS) to streamline PostgreSQL deployments in cloud-native environments. PGO provides automation and scalability for managing PostgreSQL databases in Kubernetes. When you combine PGO with Amazon EKS, it forms a robust platform for deploying, managing, and scaling PostgreSQL databases efficiently.

This integration provides the following key benefits:
+ Automated deployment: Simplifies PostgreSQL cluster deployment and management.
+ Custom resource definitions (CRDs): Uses Kubernetes primitives for PostgreSQL management.
+ High availability: Supports automatic failover and synchronous replication.
+ Automated backups and restores: Streamlines backup and restore processes.
+ Horizontal scaling: Enables dynamic scaling of PostgreSQL clusters.
+ Version upgrades: Facilitates rolling upgrades with minimal downtime.
+ Security: Enforces encryption, access controls, and authentication mechanisms.

## Prerequisites and limitations
<a name="streamline-postgresql-deployments-amazon-eks-pgo-prereqs"></a>

**Prerequisites**
+ An active AWS account.
+ [AWS Command Line Interface (AWS CLI) version 2](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html), installed and configured on Linux, macOS, or Windows.
+ [AWS CLI configuration](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-quickstart.html) set up to connect to AWS resources from the command line.
+ [eksctl](https://github.com/eksctl-io/eksctl#installation), installed and configured on Linux, macOS, or Windows.
+ `kubectl`, installed and configured to access resources on your Amazon EKS cluster. For more information, see [Set up kubectl and eksctl](https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html) in the Amazon EKS documentation. 
+ Your computer terminal configured to access the Amazon EKS cluster. For more information, see [Configure your computer to communicate with your cluster](https://docs.aws.amazon.com/eks/latest/userguide/getting-started-console.html#eks-configure-kubectl) in the Amazon EKS documentation.

**Product versions**
+ Kubernetes versions 1.21–1.24 or later (see the [PGO documentation](https://access.crunchydata.com/documentation/postgres-operator/5.2.5/)).
+ PostgreSQL version 10 or later. This pattern uses PostgreSQL version 16.

**Limitations**
+ Some AWS services aren’t available in all AWS Regions. For Region availability, see [AWS services by Region](https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/). For specific endpoints, see the [Service endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/aws-service-information.html) page, and choose the link for the service.

## Architecture
<a name="streamline-postgresql-deployments-amazon-eks-pgo-architecture"></a>

**Target technology stack**
+ Amazon EKS
+ Amazon Virtual Private Cloud (Amazon VPC)
+ Amazon Elastic Compute Cloud (Amazon EC2)

**Target architecture**

![\[Architecture for using PGO with three Availability Zones and two replicas, PgBouncer, and PGO operator.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/4c164012-7527-4ebe-b6a7-c129600328d6/images/26a5572b-405b-4634-b96a-91254c3ea2c1.png)


This pattern builds an architecture that contains an Amazon EKS cluster with three nodes. Each node runs on an EC2 instance in the backend. This PostgreSQL setup follows a primary-replica architecture, which is particularly effective for read-heavy use cases. The architecture includes the following components:
+ **Primary database container (pg-primary)** hosts the main PostgreSQL instance where all write operations are directed.
+ **Secondary replica containers (pg-replica)** host the PostgreSQL instances that replicate the data from the primary database and handle read operations.
+ **PgBouncer** is a lightweight connection pooler for PostgreSQL databases that's included with PGO. It sits between the client and the PostgreSQL server, and acts as an intermediary for database connections.
+ **PGO** automates the deployment and management of PostgreSQL clusters in this Kubernetes environment.
+ **Patroni** is an open-source tool that manages and automates high availability configurations for PostgreSQL. It's included with PGO. When you use Patroni with PGO in Kubernetes, it plays a crucial role in ensuring the resilience and fault tolerance of a PostgreSQL cluster. For more information, see the [Patroni documentation](https://patroni.readthedocs.io/en/latest/).

The workflow includes these steps:
+ **Deploy the PGO operator**. You deploy the PGO operator on your Kubernetes cluster that runs on Amazon EKS. This can be done by using Kubernetes manifests or Helm charts. This pattern uses Kubernetes manifests.
+ **Define PostgreSQL instances**. When the operator is running, you create custom resources (CRs) to specify the desired state of PostgreSQL instances. This includes configurations such as storage, replication, and high availability settings.
+ **Operator management**. You interact with the operator through Kubernetes API objects such as CRs to create, update, or delete PostgreSQL instances.
+ **Monitoring and maintenance**. You can monitor the health and performance of the PostgreSQL instances running on Amazon EKS. Operators often provide metrics and logging for monitoring purposes. You can perform routine maintenance tasks such as upgrades and patching as necessary. For more information, see [Monitor your cluster performance and view logs](https://docs.aws.amazon.com/eks/latest/userguide/eks-observe.html) in the Amazon EKS documentation.
+ **Scaling and backup**. You can use the features provided by the operator to scale PostgreSQL instances and manage backups.

This pattern doesn't cover monitoring, maintenance, and backup operations.
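To illustrate the "Define PostgreSQL instances" step above, a minimal `PostgresCluster` custom resource might look like the following sketch. Field names follow the PGO v5 CRD; the cluster name, storage sizes, and namespace are placeholder assumptions, not values prescribed by this pattern.

```yaml
# Hypothetical minimal PostgresCluster CR (PGO v5 API). Names and sizes are illustrative.
apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PostgresCluster
metadata:
  name: hippo                    # placeholder cluster name
  namespace: postgres-operator
spec:
  postgresVersion: 16
  instances:
    - name: pg-1
      replicas: 3                # one primary plus two replicas
      dataVolumeClaimSpec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi        # placeholder size
  backups:
    pgbackrest:
      repos:
        - name: repo1
          volume:
            volumeClaimSpec:
              accessModes: ["ReadWriteOnce"]
              resources:
                requests:
                  storage: 10Gi  # placeholder size
```

You apply a manifest like this with `kubectl apply -f`, and PGO reconciles the cluster to the declared state.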

**Automation and scale**
+ You can use CloudFormation to automate the infrastructure creation. For more information, see [Create Amazon EKS resources with CloudFormation](https://docs.aws.amazon.com/eks/latest/userguide/creating-resources-with-cloudformation.html) in the Amazon EKS documentation.
+ You can use GitVersion or Jenkins build numbers to automate the deployment of database instances.

## Tools
<a name="streamline-postgresql-deployments-amazon-eks-pgo-tools"></a>

**AWS services**
+ [Amazon Elastic Kubernetes Service (Amazon EKS)](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html) helps you run Kubernetes on AWS without needing to install or maintain your own Kubernetes control plane or nodes.  
+ [AWS Command Line Interface (AWS CLI)](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) is an open-source tool that helps you interact with AWS services through commands in your command line shell.

**Other tools**
+ [eksctl](https://eksctl.io/) is a simple command line tool for creating clusters on Amazon EKS.
+ [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) is a command line utility for running commands against Kubernetes clusters.
+ [PGO](https://github.com/CrunchyData/postgres-operator) automates and scales the management of PostgreSQL databases in Kubernetes.

## Best practices
<a name="streamline-postgresql-deployments-amazon-eks-pgo-best-practices"></a>

Follow these best practices to ensure a smooth and efficient deployment:
+ **Secure your EKS cluster**. Implement security best practices for your EKS cluster, such as using AWS Identity and Access Management (IAM) roles for service accounts (IRSA), network policies, and VPC security groups. Limit access to the EKS cluster API server, and encrypt communications between nodes and the API server by using TLS.
+ **Ensure version compatibility** between PGO and Kubernetes running on Amazon EKS. Some PGO features might require specific Kubernetes versions or introduce compatibility limitations. For more information, see [Components and Compatibility](https://access.crunchydata.com/documentation/postgres-operator/5.2.5/references/components/) in the PGO documentation.
+ **Plan resource allocation** for your PGO deployment, including CPU, memory, and storage. Consider the resource requirements of both PGO and the PostgreSQL instances it manages. Monitor resource usage and scale resources as needed.
+ **Design for high availability**. Design your PGO deployment for high availability to minimize downtime and ensure reliability. Deploy multiple replicas of PGO across multiple Availability Zones for fault tolerance.
+ **Implement backup and restore procedures** for your PostgreSQL databases that PGO manages. Use features provided by PGO or third-party backup solutions that are compatible with Kubernetes and Amazon EKS.
+ **Set up monitoring and logging** for your PGO deployment to track performance, health, and events. Use tools such as Prometheus for monitoring metrics and Grafana for visualization. Configure logging to capture PGO logs for troubleshooting and auditing.
+ **Configure networking** properly to allow communications between PGO, PostgreSQL instances, and other services in your Kubernetes cluster. Use Amazon VPC networking features and Kubernetes networking plugins such as Calico or [Amazon VPC CNI](https://github.com/aws/amazon-vpc-cni-k8s) for network policy enforcement and traffic isolation.
+ **Choose appropriate storage options** for your PostgreSQL databases, considering factors such as performance, durability, and scalability. Use Amazon Elastic Block Store (Amazon EBS) volumes or AWS managed storage services for persistent storage. For more information, see [Store Kubernetes volumes with Amazon EBS](https://docs.aws.amazon.com/eks/latest/userguide/ebs-csi.html) in the Amazon EKS documentation.
+ **Use infrastructure as code (IaC) tools** such as CloudFormation to automate the deployment and configuration of PGO on Amazon EKS. Define infrastructure components—including the EKS cluster, networking, and PGO resources—as code for consistency, repeatability, and version control.
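As one example of the storage guidance above, a gp3-backed `StorageClass` for the Amazon EBS CSI driver can be declared as follows. This is a sketch; the class name and parameters are assumptions that you should tune for your workload.

```yaml
# Hypothetical StorageClass backed by the Amazon EBS CSI driver (gp3 volumes).
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-gp3                            # placeholder name
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
  encrypted: "true"
volumeBindingMode: WaitForFirstConsumer    # delay binding until a pod is scheduled
reclaimPolicy: Delete
```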

## Epics
<a name="streamline-postgresql-deployments-amazon-eks-pgo-epics"></a>

### Create an IAM role
<a name="create-an-iam-role"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an IAM role. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/streamline-postgresql-deployments-amazon-eks-pgo.html) | AWS administrator | 

### Create an Amazon EKS cluster
<a name="create-an-eks-cluster"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an Amazon EKS cluster. | If you've already deployed a cluster, skip this step. Otherwise, deploy an Amazon EKS cluster in your current AWS account by using `eksctl`, Terraform, or CloudFormation. This pattern uses `eksctl` for cluster deployment. This pattern uses Amazon EC2 as a node group for Amazon EKS. If you want to use AWS Fargate, see the `managedNodeGroups` configuration in the [eksctl documentation](https://eksctl.io/usage/schema/#managedNodeGroups).[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/streamline-postgresql-deployments-amazon-eks-pgo.html) | AWS administrator, Terraform or eksctl administrator, Kubernetes administrator | 
| Validate the status of the cluster. | Run the following command to see the current status of nodes in the cluster:<pre>kubectl get nodes</pre>If you encounter errors, see the [troubleshooting section](https://docs.aws.amazon.com/eks/latest/userguide/troubleshooting.html) of the Amazon EKS documentation. | AWS administrator, Terraform or eksctl administrator, Kubernetes administrator | 
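If you create the cluster with `eksctl`, a cluster configuration file along the following lines can be passed to `eksctl create cluster -f`. This is a sketch: the cluster name, Region, and node sizing are placeholder assumptions, not values from this pattern.

```yaml
# Hypothetical eksctl cluster configuration (eksctl.io/v1alpha5 schema).
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: pgo-demo-cluster     # placeholder cluster name
  region: us-west-2          # placeholder Region
managedNodeGroups:
  - name: pgo-nodes
    instanceType: t3.large   # placeholder instance type
    desiredCapacity: 3       # three nodes, matching the target architecture
    volumeSize: 50           # placeholder node volume size (GiB)
```

For example: `eksctl create cluster -f cluster.yaml`.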

### Create an OIDC identity provider
<a name="create-an-oidc-identity-provider"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Enable the IAM OIDC provider. | As a prerequisite for the Amazon EBS Container Storage Interface (CSI) driver, you must have an existing IAM OpenID Connect (OIDC) provider for your cluster. Enable the IAM OIDC provider by using the following command:<pre>eksctl utils associate-iam-oidc-provider --region={region} --cluster={YourClusterNameHere} --approve</pre>For more information about this step, see the [Amazon EKS documentation](https://docs.aws.amazon.com/eks/latest/userguide/ebs-csi.html). | AWS administrator | 
| Create an IAM role for the Amazon EBS CSI driver. | Use the following `eksctl` command to create the IAM role for the CSI driver:<pre>eksctl create iamserviceaccount \<br />  --region {RegionName} \<br />  --name ebs-csi-controller-sa \<br />  --namespace kube-system \<br />  --cluster {YourClusterNameHere} \<br />  --attach-policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy \<br />  --approve \<br />  --role-only \<br />  --role-name AmazonEKS_EBS_CSI_DriverRole</pre>If you use encrypted Amazon EBS drives, you have to configure the policy further. For instructions, see the [Amazon EBS CSI driver documentation](https://github.com/kubernetes-sigs/aws-ebs-csi-driver/blob/master/docs/install.md#installation-1). | AWS administrator | 
| Add the Amazon EBS CSI driver. | Use the following `eksctl` command to add the Amazon EBS CSI driver:<pre>eksctl create addon \<br />  --name aws-ebs-csi-driver \<br />  --cluster <YourClusterName> \<br />  --service-account-role-arn arn:aws:iam::$(aws sts get-caller-identity \<br />  --query Account \<br />  --output text):role/AmazonEKS_EBS_CSI_DriverRole \<br />  --force</pre> | AWS administrator | 

### Install PGO
<a name="install-pgo"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clone the PGO repository. | Clone the GitHub repository for PGO:<pre>git clone https://github.com/CrunchyData/postgres-operator-examples.git </pre> | AWS DevOps | 
| Provide the role details for service account creation. | To grant the Amazon EKS cluster access to the required AWS resources, specify the Amazon Resource Name (ARN) of the OIDC role that you created earlier in the `service_account.yaml` file that is located in [GitHub](https://github.com/CrunchyData/postgres-operator/blob/main/config/rbac/cluster/service_account.yaml).<pre>cd postgres-operator-examples</pre><pre>---<br />metadata:<br />  annotations:<br />    eks.amazonaws.com/role-arn: arn:aws:iam::<accountId>:role/<role_name> # Update the OIDC role ARN created earlier</pre> | AWS administrator, Kubernetes administrator | 
| Create the namespace and PGO prerequisites. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/streamline-postgresql-deployments-amazon-eks-pgo.html) | Kubernetes administrator | 
| Verify the creation of pods. | Verify that the namespace and default configuration were created:<pre>kubectl get pods -n postgres-operator</pre> | AWS administrator, Kubernetes administrator | 
| Verify PVCs. | Use the following command to verify persistent volume claims (PVCs):<pre>kubectl describe pvc -n postgres-operator</pre> | AWS administrator, Kubernetes administrator | 

### Create and deploy an operator
<a name="create-and-deploy-an-operator"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an operator. | Revise the contents of the file located at `/kustomize/postgres/postgres.yaml` to match the following:<pre>spec:<br />  instances:<br />    - name: pg-1<br />      replicas: 3<br />  patroni:<br />    dynamicConfiguration:<br />      postgresql:<br />        pg_hba:<br />          - "host all all 0.0.0.0/0 trust" # this line enables logical replication with programmatic access<br />          - "host all postgres 127.0.0.1/32 md5"<br />      synchronous_mode: true<br />  users:<br />  - name: replicator<br />    databases:<br />      - testdb<br />    options: "REPLICATION"</pre>These updates do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/streamline-postgresql-deployments-amazon-eks-pgo.html) | AWS administrator, DBA, Kubernetes administrator | 
| Deploy the operator. | Deploy the PGO operator to enable the streamlined management and operation of PostgreSQL databases in Kubernetes environments:<pre>kubectl apply -k kustomize/postgres</pre> | AWS administrator, DBA, Kubernetes administrator | 
| Verify the deployment. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/streamline-postgresql-deployments-amazon-eks-pgo.html)From the command output, note the primary replica (`primary_pod_name`) and read replica (`read_pod_name`). You will use these in the next steps. | AWS administrator, DBA, Kubernetes administrator | 

### Verify streaming replication
<a name="verify-streaming-replication"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Write data to the primary replica. | Use the following commands to connect to the PostgreSQL primary replica and write data to the database:<pre>kubectl exec -it <primary_pod_name> bash -n postgres-operator</pre><pre>psql</pre><pre>CREATE TABLE customers (firstname text, customer_id serial, date_created timestamp);<br />\dt</pre> | AWS administrator, Kubernetes administrator | 
| Confirm that the read replica has the same data. | Connect to the PostgreSQL read replica and check whether the streaming replication is working correctly:<pre>kubectl exec -it {read_pod_name} bash -n postgres-operator</pre><pre>psql</pre><pre>\dt</pre>The read replica should have the table that you created in the primary replica in the previous step. | AWS administrator, Kubernetes administrator | 

## Troubleshooting
<a name="streamline-postgresql-deployments-amazon-eks-pgo-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| The pod doesn’t start. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/streamline-postgresql-deployments-amazon-eks-pgo.html) | 
| Replicas are significantly behind the primary database. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/streamline-postgresql-deployments-amazon-eks-pgo.html) | 
| You don’t have visibility into the performance and health of the PostgreSQL cluster. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/streamline-postgresql-deployments-amazon-eks-pgo.html) | 
| Replication doesn’t work. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/streamline-postgresql-deployments-amazon-eks-pgo.html) | 

## Related resources
<a name="streamline-postgresql-deployments-amazon-eks-pgo-resources"></a>
+ [Amazon Elastic Kubernetes Service](https://docs.aws.amazon.com/whitepapers/latest/overview-deployment-options/amazon-elastic-kubernetes-service.html) (*Overview of Deployment Options on AWS* whitepaper)
+  [CloudFormation](https://docs.aws.amazon.com/whitepapers/latest/overview-deployment-options/aws-cloudformation.html) (*Overview of Deployment Options on AWS* whitepaper)
+ [Get started with Amazon EKS – eksctl](https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html) (*Amazon EKS User Guide*)
+ [Set up kubectl and eksctl](https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html) (*Amazon EKS User Guide*)
+ [Create a role for OpenID Connect federation](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-idp_oidc.html) (*IAM User Guide*)
+ [Configuring settings for the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html) (*AWS CLI User Guide*)
+ [Crunchy Postgres for Kubernetes documentation](https://access.crunchydata.com/documentation/postgres-operator/latest)
+ [Crunch & Learn: Crunchy Postgres for Kubernetes 5.0](https://www.youtube-nocookie.com/embed/IIf9WZO3K50) (video)

# Simplify application authentication with mutual TLS in Amazon ECS by using Application Load Balancer
<a name="simplify-application-authentication-with-mutual-tls-in-amazon-ecs"></a>

*Olawale Olaleye and Shamanth Devagari, Amazon Web Services*

## Summary
<a name="simplify-application-authentication-with-mutual-tls-in-amazon-ecs-summary"></a>

This pattern helps you to simplify your application authentication and offload security burdens with mutual TLS in Amazon Elastic Container Service (Amazon ECS) by using [Application Load Balancer (ALB)](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/mutual-authentication.html). With ALB, you can authenticate X.509 client certificates from AWS Private Certificate Authority. This powerful combination helps to achieve secure communication between your services, reducing the need for complex authentication mechanisms within your applications. In addition, the pattern uses Amazon Elastic Container Registry (Amazon ECR) to store container images.

The example in this pattern uses Docker images from a public gallery to create the sample workloads initially. Subsequently, new Docker images are built and stored in Amazon ECR. For the source, consider a Git-based system such as GitHub, GitLab, or Bitbucket, or use Amazon Simple Storage Service (Amazon S3). Consider using AWS CodeBuild to build the subsequent Docker images.

## Prerequisites and limitations
<a name="simplify-application-authentication-with-mutual-tls-in-amazon-ecs-prereqs"></a>

**Prerequisites**
+ An active AWS account with access to deploy AWS CloudFormation stacks. Make sure that you have AWS Identity and Access Management (IAM) [user or role permissions](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/control-access-with-iam.html) to deploy CloudFormation.
+ AWS Command Line Interface (AWS CLI) [installed](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html). [Configure](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html) your AWS credentials on your local machine or in your environment by either using the AWS CLI or by setting the environment variables in the `~/.aws/credentials` file.
+ OpenSSL [installed](https://www.openssl.org/).
+ Docker [installed](https://www.docker.com/get-started/).
+ Familiarity with the AWS services described in [Tools](#simplify-application-authentication-with-mutual-tls-in-amazon-ecs-tools).
+ Knowledge of Docker and NGINX.

**Limitations**
+ Mutual TLS for Application Load Balancer only supports X.509v3 client certificates. X.509v1 client certificates are not supported.
+ The CloudFormation template that is provided in this pattern’s code repository doesn’t include provisioning a CodeBuild project as part of the stack.
+ Some AWS services aren’t available in all AWS Regions. For Region availability, see [AWS Services by Region](https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/). For specific endpoints, see [Service endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/aws-service-information.html), and choose the link for the service.

**Product versions**
+ Docker version 27.3.1 or later
+ AWS CLI version 2.14.5 or later

## Architecture
<a name="simplify-application-authentication-with-mutual-tls-in-amazon-ecs-architecture"></a>

The following diagram shows the architecture components for this pattern.

![\[Workflow to authenticate with mutual TLS using Application Load Balancer.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/a343fa4e-097f-416b-9c83-01a28eb57dc3/images/e1371297-b987-4487-9b13-8120933c921f.png)


 The diagram shows the following workflow:

1. Create a Git repository, and commit the application code to the repository.

1. Create a private certificate authority (CA) in AWS Private CA.

1. Create a CodeBuild project. The CodeBuild project is triggered by commit changes; it builds the Docker image and publishes the built image to Amazon ECR.

1. Copy the certificate chain and certificate body from the CA, and upload the certificate bundle to Amazon S3.

1. Create a trust store with the CA bundle that you uploaded to Amazon S3. Associate the trust store with the mutual TLS listeners on the Application Load Balancer (ALB).

1. Use the private CA to issue client certificates for the container workloads. Also create a private TLS certificate using AWS Private CA.

1. Import the private TLS certificate into AWS Certificate Manager (ACM), and use it with the ALB.

1. The container workload in `ServiceTwo` uses the issued client certificate to authenticate with the ALB when it communicates with the container workload in `ServiceOne`.

1. The container workload in `ServiceOne` uses the issued client certificate to authenticate with the ALB when it communicates with the container workload in `ServiceTwo`.
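Steps 4 and 5 of the workflow can be expressed in CloudFormation roughly as follows. This is a sketch: the resource names, S3 bucket, and key are placeholders, and the load balancer, ACM certificate, and target group are assumed to be defined elsewhere in the stack.

```yaml
# Hypothetical CloudFormation fragment: a trust store plus a mutual TLS HTTPS listener.
Resources:
  ClientCaTrustStore:
    Type: AWS::ElasticLoadBalancingV2::TrustStore
    Properties:
      Name: client-ca-trust-store                    # placeholder name
      CaCertificatesBundleS3Bucket: my-ca-bundle-bucket   # placeholder bucket
      CaCertificatesBundleS3Key: ca-bundle.pem            # placeholder object key

  MutualTlsListener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      LoadBalancerArn: !Ref ApplicationLoadBalancer       # assumed to exist in the stack
      Port: 443
      Protocol: HTTPS
      Certificates:
        - CertificateArn: !Ref TlsCertificate             # ACM certificate (assumed)
      MutualAuthentication:
        Mode: verify                 # reject clients that lack a valid certificate
        TrustStoreArn: !Ref ClientCaTrustStore
      DefaultActions:
        - Type: forward
          TargetGroupArn: !Ref ServiceTargetGroup         # assumed to exist in the stack
```

With `Mode: verify`, the ALB validates client certificates against the CA bundle in the trust store before forwarding traffic.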

**Automation and scale**

This pattern can be fully automated by using CloudFormation, AWS Cloud Development Kit (AWS CDK), or API operations from an SDK to provision the AWS resources.

You can use AWS CodePipeline to implement a continuous integration and continuous deployment (CI/CD) pipeline that uses CodeBuild to automate the container image build process and deploy new releases to the Amazon ECS cluster services.
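A CodeBuild `buildspec.yaml` for such a pipeline might look like the following sketch. The repository name and environment variables (`IMAGE_REPO_NAME`, `AWS_ACCOUNT_ID`) are assumptions; adapt them to your account and project configuration.

```yaml
# Hypothetical buildspec.yaml that builds a Docker image and pushes it to Amazon ECR.
version: 0.2
env:
  variables:
    IMAGE_REPO_NAME: my-service    # placeholder ECR repository name
phases:
  pre_build:
    commands:
      # Authenticate Docker to the account's ECR registry.
      - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com
  build:
    commands:
      # Tag the image with the resolved source commit for traceability.
      - docker build -t $IMAGE_REPO_NAME:$CODEBUILD_RESOLVED_SOURCE_VERSION .
      - docker tag $IMAGE_REPO_NAME:$CODEBUILD_RESOLVED_SOURCE_VERSION $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$CODEBUILD_RESOLVED_SOURCE_VERSION
  post_build:
    commands:
      - docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$CODEBUILD_RESOLVED_SOURCE_VERSION
```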

## Tools
<a name="simplify-application-authentication-with-mutual-tls-in-amazon-ecs-tools"></a>

**AWS services**
+ [AWS Certificate Manager (ACM)](https://docs.aws.amazon.com/acm/latest/userguide/acm-overview.html) helps you create, store, and renew public and private SSL/TLS X.509 certificates and keys that protect your AWS websites and applications.
+ [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) helps you set up AWS resources, provision them quickly and consistently, and manage them throughout their lifecycle across AWS accounts and AWS Regions.
+ [AWS CodeBuild](https://docs.aws.amazon.com/codebuild/latest/userguide/welcome.html) is a fully managed build service that helps you compile source code, run unit tests, and produce artifacts that are ready to deploy.
+ [Amazon Elastic Container Registry (Amazon ECR)](https://docs.aws.amazon.com/AmazonECR/latest/userguide/what-is-ecr.html) is a managed container image registry service that’s secure, scalable, and reliable.
+ [Amazon Elastic Container Service (Amazon ECS)](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html) is a highly scalable, fast container management service for running, stopping, and managing containers on a cluster. You can run your tasks and services on a serverless infrastructure that is managed by AWS Fargate. Alternatively, for more control over your infrastructure, you can run your tasks and services on a cluster of Amazon Elastic Compute Cloud (Amazon EC2) instances that you manage.
+ [Amazon ECS Exec](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-exec.html) allows you to directly interact with containers without needing to first interact with the host container operating system, open inbound ports, or manage SSH keys. You can use ECS Exec to run commands in, or get a shell to, a container running on an Amazon EC2 instance or on AWS Fargate.
+ [Elastic Load Balancing (ELB)](https://docs.aws.amazon.com/elasticloadbalancing/latest/userguide/what-is-load-balancing.html) distributes incoming application or network traffic across multiple targets. For example, you can distribute traffic across Amazon EC2 instances, containers, and IP addresses, in one or more Availability Zones. ELB monitors the health of its registered targets, and routes traffic only to the healthy targets. ELB scales your load balancer as your incoming traffic changes over time. It can automatically scale to the majority of workloads.
+ [AWS Fargate](https://docs.aws.amazon.com/AmazonECS/latest/userguide/what-is-fargate.html) helps you run containers without needing to manage servers or Amazon EC2 instances. Fargate is compatible with both Amazon ECS and Amazon Elastic Kubernetes Service (Amazon EKS). You can run your Amazon ECS tasks and services with the Fargate launch type or a Fargate capacity provider. To do so, package your application in containers, specify the CPU and memory requirements, define networking and IAM policies, and launch the application. Each Fargate task has its own isolation boundary and doesn’t share the underlying kernel, CPU resources, memory resources, or elastic network interface with another task.
+ [AWS Private Certificate Authority](https://docs.aws.amazon.com/privateca/latest/userguide/PcaWelcome.html) enables creation of private certificate authority (CA) hierarchies, including root and subordinate CAs, without the investment and maintenance costs of operating an on-premises CA.

**Other tools**
+ [Docker](https://www.docker.com/) is a set of platform as a service (PaaS) products that use virtualization at the operating-system level to deliver software in containers.
+ [GitHub](https://docs.github.com/en/repositories/creating-and-managing-repositories/quickstart-for-repositories), [GitLab](https://docs.gitlab.com/ee/user/get_started/get_started_projects.html), and [Bitbucket](https://support.atlassian.com/bitbucket-cloud/docs/tutorial-learn-bitbucket-with-git/) are commonly used Git-based source control systems for tracking source code changes.
+ [NGINX Open Source](https://nginx.org/en/docs/?_ga=2.187509224.1322712425.1699399865-405102969.1699399865) is an open source load balancer, content cache, and web server. This pattern uses it as a web server.
+ [OpenSSL](https://www.openssl.org/) is an open source library that provides services that are used by the OpenSSL implementations of TLS and CMS. 

**Code repository**

The code for this pattern is available in the GitHub [mTLS-with-Application-Load-Balancer-in-Amazon-ECS](https://github.com/aws-samples/mTLS-with-Application-Load-Balancer-in-Amazon-ECS) repository.

## Best practices
<a name="simplify-application-authentication-with-mutual-tls-in-amazon-ecs-best-practices"></a>
+ Use Amazon ECS Exec to run commands or get a shell to a container running on Fargate. You can also use ECS Exec to help collect diagnostic information for debugging.
+ Use security groups and network access control lists (ACLs) to control inbound and outbound traffic between the services. Fargate tasks receive an IP address from the configured subnet in your virtual private cloud (VPC).
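As a sketch of the security group guidance above, the following CloudFormation fragment restricts the Fargate tasks so that they accept traffic only from the ALB. The resource names, VPC reference, CIDR range, and container port are placeholder assumptions.

```yaml
# Hypothetical security group pair: the ALB accepts HTTPS; tasks accept traffic only from the ALB.
Resources:
  AlbSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Allow HTTPS to the ALB
      VpcId: !Ref Vpc                      # assumed VPC resource
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 443
          ToPort: 443
          CidrIp: 10.0.0.0/16              # placeholder client range

  TaskSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Allow traffic from the ALB only
      VpcId: !Ref Vpc
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 80                     # placeholder container port
          ToPort: 80
          SourceSecurityGroupId: !Ref AlbSecurityGroup
```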

## Epics
<a name="simplify-application-authentication-with-mutual-tls-in-amazon-ecs-epics"></a>

### Create the repository
<a name="create-the-repository"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Download the source code. | To download this pattern’s source code, fork or clone the GitHub [mTLS-with-Application-Load-Balancer-in-Amazon-ECS](https://github.com/aws-samples/mTLS-with-Application-Load-Balancer-in-Amazon-ECS) repository. | DevOps engineer | 
| Create a Git repository. | To create a Git repository to contain the Dockerfile and the `buildspec.yaml` files, use the following steps:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/simplify-application-authentication-with-mutual-tls-in-amazon-ecs.html)`git clone https://github.com/aws-samples/mTLS-with-Application-Load-Balancer-in-Amazon-ECS.git` | DevOps engineer | 

### Create CA and generate certificates
<a name="create-ca-and-generate-certificates"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a private CA in AWS Private CA. | To create a private certificate authority (CA), run the following commands in your terminal. Replace the values in the example variables with your own values. <pre>export AWS_DEFAULT_REGION="us-west-2"<br />export SERVICES_DOMAIN="www.example.com"<br /><br />export ROOT_CA_ARN=`aws acm-pca create-certificate-authority \<br />    --certificate-authority-type ROOT \<br />    --certificate-authority-configuration \<br />    "KeyAlgorithm=RSA_2048,<br />    SigningAlgorithm=SHA256WITHRSA,<br />    Subject={<br />        Country=US,<br />        State=WA,<br />        Locality=Seattle,<br />        Organization=Build on AWS,<br />        OrganizationalUnit=mTLS Amazon ECS and ALB Example,<br />        CommonName=${SERVICES_DOMAIN}}" \<br />        --query CertificateAuthorityArn --output text`</pre>For more details, see [Create a private CA in AWS Private CA](https://docs.aws.amazon.com/privateca/latest/userguide/create-CA.html) in the AWS documentation. | DevOps engineer, AWS DevOps | 
| Create and install your private CA certificate. | To create and install a certificate for your private root CA, run the following commands in your terminal:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/simplify-application-authentication-with-mutual-tls-in-amazon-ecs.html) | AWS DevOps, DevOps engineer | 
| Request a managed certificate. | To request a private certificate in AWS Certificate Manager to use with your private ALB, use the following command:<pre>export TLS_CERTIFICATE_ARN=`aws acm request-certificate \<br />    --domain-name "*.${SERVICES_DOMAIN}" \<br />    --certificate-authority-arn ${ROOT_CA_ARN} \<br />    --query CertificateArn --output text`</pre> | DevOps engineer, AWS DevOps | 
| Use the private CA to issue a client certificate. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/simplify-application-authentication-with-mutual-tls-in-amazon-ecs.html)`openssl req -out client_csr1.pem -new -newkey rsa:2048 -nodes -keyout client_private-key1.pem``openssl req -out client_csr2.pem -new -newkey rsa:2048 -nodes -keyout client_private-key2.pem`These commands return the CSRs and the private keys for the two services. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/simplify-application-authentication-with-mutual-tls-in-amazon-ecs.html)<pre>SERVICE_ONE_CERT_ARN=`aws acm-pca issue-certificate \<br />    --certificate-authority-arn ${ROOT_CA_ARN} \<br />    --csr fileb://client_csr1.pem \<br />    --signing-algorithm "SHA256WITHRSA" \<br />    --validity Value=5,Type="YEARS" --query CertificateArn --output text` <br /><br />echo "SERVICE_ONE_CERT_ARN: ${SERVICE_ONE_CERT_ARN}"<br /><br />aws acm-pca get-certificate \<br />    --certificate-authority-arn ${ROOT_CA_ARN} \<br />    --certificate-arn ${SERVICE_ONE_CERT_ARN} \<br />     | jq -r '.Certificate' > client_cert1.cert<br /><br />SERVICE_TWO_CERT_ARN=`aws acm-pca issue-certificate \<br />    --certificate-authority-arn ${ROOT_CA_ARN} \<br />    --csr fileb://client_csr2.pem \<br />    --signing-algorithm "SHA256WITHRSA" \<br />    --validity Value=5,Type="YEARS" --query CertificateArn --output text` <br /><br />echo "SERVICE_TWO_CERT_ARN: ${SERVICE_TWO_CERT_ARN}"<br /><br />aws acm-pca get-certificate \<br />    --certificate-authority-arn ${ROOT_CA_ARN} \<br />    --certificate-arn ${SERVICE_TWO_CERT_ARN} \<br />     | jq -r '.Certificate' > client_cert2.cert</pre>For more information, see [Issue private end-entity certificates](https://docs.aws.amazon.com/privateca/latest/userguide/PcaIssueCert.html) in the AWS documentation. | DevOps engineer, AWS DevOps | 

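The issuance steps above can be sanity-checked offline. The following self-contained sketch uses OpenSSL to create a throwaway CA and client certificate (the `demo-*` file names are illustrative, not this pattern's files) and then runs `openssl verify` — the same check you can run against the CA certificate and the client certificates issued by AWS Private CA:

```shell
# Illustrative sketch: create a throwaway CA and client certificate,
# then verify the chain. Run the same "openssl verify" against your
# CA certificate and the client certificates issued by AWS Private CA.
set -e
# throwaway root CA (demo only)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -keyout demo-ca.key -out demo-ca.pem -subj "/CN=demo-ca"
# client key and CSR, as in the epic above
openssl req -newkey rsa:2048 -nodes \
    -keyout demo-client.key -out demo-client.csr -subj "/CN=demo-client"
# sign the CSR with the demo CA (AWS Private CA performs this step in the pattern)
openssl x509 -req -days 1 -in demo-client.csr \
    -CA demo-ca.pem -CAkey demo-ca.key -CAcreateserial -out demo-client.pem
# confirm the client certificate chains to the CA
openssl verify -CAfile demo-ca.pem demo-client.pem
```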
### Provision AWS services
<a name="provision-aws-services"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Provision AWS services with the CloudFormation template. | To provision the virtual private cloud (VPC), Amazon ECS cluster, Amazon ECS services, Application Load Balancer, and Amazon Elastic Container Registry (Amazon ECR), use the CloudFormation template. | DevOps engineer | 
| Get variables. | Verify that you have an Amazon ECS cluster with two services running. To retrieve the resource details and store them as variables, use the following commands:<pre><br />export LoadBalancerDNS=$(aws cloudformation describe-stacks --stack-name ecs-mtls \<br />--output text \<br />--query 'Stacks[0].Outputs[?OutputKey==`LoadBalancerDNS`].OutputValue')<br /><br />export ECRRepositoryUri=$(aws cloudformation describe-stacks --stack-name ecs-mtls \<br />--output text \<br />--query 'Stacks[0].Outputs[?OutputKey==`ECRRepositoryUri`].OutputValue')<br /><br />export ECRRepositoryServiceOneUri=$(aws cloudformation describe-stacks --stack-name ecs-mtls \<br />--output text \<br />--query 'Stacks[0].Outputs[?OutputKey==`ECRRepositoryServiceOneUri`].OutputValue')<br /><br />export ECRRepositoryServiceTwoUri=$(aws cloudformation describe-stacks --stack-name ecs-mtls \<br />--output text \<br />--query 'Stacks[0].Outputs[?OutputKey==`ECRRepositoryServiceTwoUri`].OutputValue')<br /><br />export ClusterName=$(aws cloudformation describe-stacks --stack-name ecs-mtls \<br />--output text \<br />--query 'Stacks[0].Outputs[?OutputKey==`ClusterName`].OutputValue')<br /><br />export BucketName=$(aws cloudformation describe-stacks --stack-name ecs-mtls \<br />--output text \<br />--query 'Stacks[0].Outputs[?OutputKey==`BucketName`].OutputValue')<br /><br />export Service1ListenerArn=$(aws cloudformation describe-stacks --stack-name ecs-mtls \<br />--output text \<br />--query 'Stacks[0].Outputs[?OutputKey==`Service1ListenerArn`].OutputValue')<br /><br />export Service2ListenerArn=$(aws cloudformation describe-stacks --stack-name ecs-mtls \<br />--output text \<br />--query 'Stacks[0].Outputs[?OutputKey==`Service2ListenerArn`].OutputValue')</pre> | DevOps engineer | 
| Create a CodeBuild project. | To use a CodeBuild project to create the Docker images for your Amazon ECS services, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/simplify-application-authentication-with-mutual-tls-in-amazon-ecs.html)For more details, see [Create a build project in AWS CodeBuild](https://docs.aws.amazon.com/codebuild/latest/userguide/create-project.html) in the AWS documentation. | AWS DevOps, DevOps engineer | 
| Build the Docker images. | You can use CodeBuild to perform the image build process. CodeBuild needs permissions to interact with Amazon ECR and to work with Amazon S3. As part of the process, the Docker image is built and pushed to the Amazon ECR registry. For details about the template and the code, see [Additional information](#simplify-application-authentication-with-mutual-tls-in-amazon-ecs-additional). (Optional) To build locally for test purposes, use the following commands:<pre># login to ECR<br />aws ecr get-login-password | docker login --username AWS --password-stdin $ECRRepositoryUri<br /><br /># build image for service one<br />cd /service1<br />aws s3 cp s3://$BucketName/serviceone/ service1/ --recursive<br />docker build -t $ECRRepositoryServiceOneUri .<br />docker push $ECRRepositoryServiceOneUri<br /><br /># build image for service two<br />cd ../service2<br />aws s3 cp s3://$BucketName/servicetwo/ service2/ --recursive<br />docker build -t $ECRRepositoryServiceTwoUri .<br />docker push $ECRRepositoryServiceTwoUri</pre> | DevOps engineer | 

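Before you continue, it can help to confirm that every stack output resolved. The following sketch checks the variables set by the export commands above and reports any that came back empty:

```shell
# Sanity check: list any CloudFormation stack output that did not
# resolve into an environment variable before you continue.
missing=""
for var in LoadBalancerDNS ECRRepositoryUri ECRRepositoryServiceOneUri \
           ECRRepositoryServiceTwoUri ClusterName BucketName \
           Service1ListenerArn Service2ListenerArn; do
    val=$(eval echo "\$${var}")          # portable indirect lookup
    if [ -z "$val" ]; then
        missing="$missing $var"
    fi
done
if [ -n "$missing" ]; then
    echo "Unresolved stack outputs:$missing"
else
    echo "All stack outputs resolved"
fi
```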
### Enable mutual TLS
<a name="enable-mutual-tls"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Upload the CA certificate to Amazon S3. | To upload the CA certificate to the Amazon S3 bucket, use the following example command:`aws s3 cp ca-cert.pem s3://$BucketName/acm-trust-store/ ` | AWS DevOps, DevOps engineer | 
| Create the trust store. | To create the trust store, use the following example command:<pre>TrustStoreArn=`aws elbv2 create-trust-store --name acm-pca-trust-certs \<br />    --ca-certificates-bundle-s3-bucket $BucketName \<br />    --ca-certificates-bundle-s3-key acm-trust-store/ca-cert.pem --query 'TrustStores[].TrustStoreArn' --output text`</pre> | AWS DevOps, DevOps engineer | 
| Upload client certificates. | To upload client certificates to Amazon S3 for Docker images, use the following example command:<pre># for service one<br />aws s3 cp client_cert1.cert s3://$BucketName/serviceone/<br />aws s3 cp client_private-key1.pem s3://$BucketName/serviceone/<br /><br /># for service two<br />aws s3 cp client_cert2.cert s3://$BucketName/servicetwo/<br />aws s3 cp client_private-key2.pem s3://$BucketName/servicetwo/</pre> | AWS DevOps, DevOps engineer | 
| Modify the listener. | To enable mutual TLS on the ALB, modify the HTTPS listeners by using the following commands:<pre>aws elbv2 modify-listener \<br />    --listener-arn $Service1ListenerArn \<br />    --certificates CertificateArn=$TLS_CERTIFICATE_ARN \<br />    --ssl-policy ELBSecurityPolicy-2016-08 \<br />    --protocol HTTPS \<br />    --port 8080 \<br />    --mutual-authentication Mode=verify,TrustStoreArn=$TrustStoreArn,IgnoreClientCertificateExpiry=false<br /><br />aws elbv2 modify-listener \<br />    --listener-arn $Service2ListenerArn \<br />    --certificates CertificateArn=$TLS_CERTIFICATE_ARN \<br />    --ssl-policy ELBSecurityPolicy-2016-08 \<br />    --protocol HTTPS \<br />    --port 8090 \<br />    --mutual-authentication Mode=verify,TrustStoreArn=$TrustStoreArn,IgnoreClientCertificateExpiry=false<br /></pre>For more information, see [Configuring mutual TLS on an Application Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/configuring-mtls-with-elb.html) in the AWS documentation. | AWS DevOps, DevOps engineer | 

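After modifying the listeners, you can confirm that the mutual TLS configuration took effect. The following sketch wraps the check in a shell function (the function name is illustrative); call it with `$Service1ListenerArn` or `$Service2ListenerArn`:

```shell
# Sketch: confirm that mutual TLS is active on a listener. The function
# name is illustrative; pass $Service1ListenerArn or $Service2ListenerArn.
verify_mtls_listener() {
    aws elbv2 describe-listeners \
        --listener-arns "$1" \
        --query 'Listeners[0].MutualAuthentication' \
        --output json
    # expected output shows "Mode": "verify" and your trust store ARN
}
```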
### Update the services
<a name="update-the-services"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Update the Amazon ECS task definition. | To update the Amazon ECS task definition, modify the `image` parameter in a new revision. Update the task definitions with the new Docker image URIs that you built in the previous steps. To get the values for the respective services, run `echo $ECRRepositoryServiceOneUri` or `echo $ECRRepositoryServiceTwoUri`.<pre><br />    "containerDefinitions": [<br />        {<br />            "name": "nginx",<br />            "image": "public.ecr.aws/nginx/nginx:latest",   # <----- change to the new URI<br />            "cpu": 0,</pre>For more information, see [Updating an Amazon ECS task definition using the console](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/update-task-definition-console-v2.html) in the AWS documentation.  | AWS DevOps, DevOps engineer | 
| Update the Amazon ECS service. | Update the service with the latest task definition. This task definition is the blueprint for the newly built Docker images, and it contains the client certificate that’s required for the mutual TLS authentication.  To update the service, use the following procedure:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/simplify-application-authentication-with-mutual-tls-in-amazon-ecs.html)Repeat the steps for the other service. | AWS administrator, AWS DevOps, DevOps engineer | 

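The console procedure above can also be approximated with the AWS CLI. The following is a sketch, not the pattern's prescribed method — the function and argument names are illustrative:

```shell
# Sketch of the console steps as CLI calls; function and argument
# names are illustrative, not part of the pattern's source code.
deploy_revision() {
    cluster="$1"; service="$2"; taskdef="$3"   # taskdef e.g. "my-taskdef:2"
    aws ecs update-service \
        --cluster "$cluster" \
        --service "$service" \
        --task-definition "$taskdef" \
        --force-new-deployment
    # block until the tasks from the new revision are running and stable
    aws ecs wait services-stable --cluster "$cluster" --services "$service"
}
```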
### Access the application
<a name="access-the-application"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Copy the task ID. | Use the Amazon ECS console to view the task. When the task status has been updated to **Running**, select the task. In the **Task** section, copy the task ID. | AWS administrator, AWS DevOps | 
| Test your application. | To test your application, use ECS Exec to access the tasks.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/simplify-application-authentication-with-mutual-tls-in-amazon-ecs.html) | AWS administrator, AWS DevOps | 

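As a sketch of the test step, ECS Exec access and an in-task mTLS request might look like the following. The function name and task ID argument are illustrative; the container name `nginx` matches the task definition shown earlier, and the certificate paths come from the Dockerfiles in the Additional information section:

```shell
# Sketch: open a shell in a running task with ECS Exec. Pass the task ID
# that you copied from the console; the container name "nginx" matches
# the task definition in this pattern.
open_task_shell() {
    aws ecs execute-command \
        --cluster "$ClusterName" \
        --task "$1" \
        --container nginx \
        --interactive \
        --command "/bin/sh"
}
# Inside the task, an mTLS request to the other service might look like:
#   curl --cert /usr/local/share/ca-certificates/client_cert1.cert \
#        --key /usr/local/share/ca-certificates/client_private-key1.pem \
#        https://<LoadBalancerDNS>:8090
```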
## Related resources
<a name="simplify-application-authentication-with-mutual-tls-in-amazon-ecs-resources"></a>

**Amazon ECS documentation**
+ [Creating an Amazon ECS task definition using the console](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-task-definition.html)
+ [Creating a container image for use on Amazon ECS](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-container-image.html)
+ [Amazon ECS clusters](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/clusters.html)
+ [Amazon ECS for AWS Fargate](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-container-image.html#create-container-image-next-steps)
+ [Amazon ECS networking best practices](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/networking-best-practices.html)
+ [Amazon ECS service definition parameters](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service_definition_parameters.html)

**Other AWS resources**
+ [How do I use AWS private CA to configure mTLS on the Application Load Balancer?](https://repost.aws/knowledge-center/elb-alb-configure-private-ca-mtls) (AWS re:Post)

## Additional information
<a name="simplify-application-authentication-with-mutual-tls-in-amazon-ecs-additional"></a>

**Editing the Dockerfile**

The following code shows the commands that you edit in the Dockerfile for service 1:

```
FROM public.ecr.aws/nginx/nginx:latest
WORKDIR /usr/share/nginx/html
RUN echo "Returning response from Service 1: Ok" > /usr/share/nginx/html/index.html
ADD client_cert1.cert client_private-key1.pem /usr/local/share/ca-certificates/
RUN chmod -R 400 /usr/local/share/ca-certificates/
```

The following code shows the commands that you edit in the Dockerfile for service 2:

```
FROM public.ecr.aws/nginx/nginx:latest
WORKDIR /usr/share/nginx/html
RUN echo "Returning response from Service 2: Ok" > /usr/share/nginx/html/index.html
ADD client_cert2.cert client_private-key2.pem /usr/local/share/ca-certificates/
RUN chmod -R 400 /usr/local/share/ca-certificates/
```

If you’re building the Docker images with CodeBuild, the `buildspec` file uses the CodeBuild build number to uniquely identify image versions as a tag value. You can change the `buildspec` file to fit your requirements, as shown in the following custom `buildspec` code:

```
version: 0.2

phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $ECR_REPOSITORY_URI
      - COMMIT_HASH=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7)
      - IMAGE_TAG=${COMMIT_HASH:=latest}
  build:
    commands:
        # change the S3 path depending on the service
      - aws s3 cp s3://$YOUR_S3_BUCKET_NAME/serviceone/ $CODEBUILD_SRC_DIR/ --recursive
      - echo Build started on `date`
      - echo Building the Docker image...
      - docker build -t $ECR_REPOSITORY_URI:latest .
      - docker tag $ECR_REPOSITORY_URI:latest $ECR_REPOSITORY_URI:$IMAGE_TAG
  post_build:
    commands:
      - echo Build completed on `date`
      - echo Pushing the Docker images...
      - docker push $ECR_REPOSITORY_URI:latest
      - docker push $ECR_REPOSITORY_URI:$IMAGE_TAG
      - echo Writing image definitions file...
      # for ECS deployment reference
      - printf '[{"name":"%s","imageUri":"%s"}]' $CONTAINER_NAME $ECR_REPOSITORY_URI:$IMAGE_TAG > imagedefinitions.json   

artifacts:
  files:
    - imagedefinitions.json
```

# More patterns
<a name="containersandmicroservices-more-patterns-pattern-list"></a>

**Topics**
+ [Automate deletion of AWS CloudFormation stacks and associated resources](automate-deletion-cloudformation-stacks-associated-resources.md)
+ [Automate dynamic pipeline management for deploying hotfix solutions in Gitflow environments by using AWS Service Catalog and AWS CodePipeline](automate-dynamic-pipeline-management-for-deploying-hotfix-solutions.md)
+ [Automatically build CI/CD pipelines and Amazon ECS clusters for microservices using AWS CDK](automatically-build-ci-cd-pipelines-and-amazon-ecs-clusters-for-microservices-using-aws-cdk.md)
+ [Build and push Docker images to Amazon ECR using GitHub Actions and Terraform](build-and-push-docker-images-to-amazon-ecr-using-github-actions-and-terraform.md)
+ [Containerize mainframe workloads that have been modernized by Blu Age](containerize-mainframe-workloads-that-have-been-modernized-by-blu-age.md)
+ [Create a custom log parser for Amazon ECS using a Firelens log router](create-a-custom-log-parser-for-amazon-ecs-using-a-firelens-log-router.md)
+ [Deploy agentic systems on Amazon Bedrock with the CrewAI framework by using Terraform](deploy-agentic-systems-on-amazon-bedrock-with-the-crewai-framework.md)
+ [Deploy an environment for containerized Blu Age applications by using Terraform](deploy-an-environment-for-containerized-blu-age-applications-by-using-terraform.md)
+ [Deploy preprocessing logic into an ML model in a single endpoint using an inference pipeline in Amazon SageMaker](deploy-preprocessing-logic-into-an-ml-model-in-a-single-endpoint-using-an-inference-pipeline-in-amazon-sagemaker.md)
+ [Deploy workloads from Azure DevOps pipelines to private Amazon EKS clusters](deploy-workloads-from-azure-devops-pipelines-to-private-amazon-eks-clusters.md)
+ [Implement AI-powered Kubernetes diagnostics and troubleshooting with K8sGPT and Amazon Bedrock integration](implement-ai-powered-kubernetes-diagnostics-and-troubleshooting-with-k8sgpt-and-amazon-bedrock-integration.md)
+ [Manage blue/green deployments of microservices to multiple accounts and Regions by using AWS code services and AWS KMS multi-Region keys](manage-blue-green-deployments-of-microservices-to-multiple-accounts-and-regions-by-using-aws-code-services-and-aws-kms-multi-region-keys.md)
+ [Manage on-premises container applications by setting up Amazon ECS Anywhere with the AWS CDK](manage-on-premises-container-applications-by-setting-up-amazon-ecs-anywhere-with-the-aws-cdk.md)
+ [Migrate from Oracle WebLogic to Apache Tomcat (TomEE) on Amazon ECS](migrate-from-oracle-weblogic-to-apache-tomcat-tomee-on-amazon-ecs.md)
+ [Modernize ASP.NET Web Forms applications on AWS](modernize-asp-net-web-forms-applications-on-aws.md)
+ [Monitor Amazon ECR repositories for wildcard permissions using AWS CloudFormation and AWS Config](monitor-amazon-ecr-repositories-for-wildcard-permissions-using-aws-cloudformation-and-aws-config.md)
+ [Monitor application activity by using CloudWatch Logs Insights](monitor-application-activity-by-using-cloudwatch-logs-insights.md)
+ [Set up a CI/CD pipeline for hybrid workloads on Amazon ECS Anywhere by using AWS CDK and GitLab](set-up-a-ci-cd-pipeline-for-hybrid-workloads-on-amazon-ecs-anywhere-by-using-aws-cdk-and-gitlab.md)
+ [Set up end-to-end encryption for applications on Amazon EKS using cert-manager and Let's Encrypt](set-up-end-to-end-encryption-for-applications-on-amazon-eks-using-cert-manager-and-let-s-encrypt.md)
+ [Simplify Amazon EKS multi-tenant application deployment by using Flux](simplify-amazon-eks-multi-tenant-application-deployment-by-using-flux.md)
+ [Streamline machine learning workflows from local development to scalable experiments by using SageMaker AI and Hydra](streamline-machine-learning-workflows-by-using-amazon-sagemaker.md)
+ [Structure a Python project in hexagonal architecture using AWS Lambda](structure-a-python-project-in-hexagonal-architecture-using-aws-lambda.md)
+ [Test AWS infrastructure by using LocalStack and Terraform Tests](test-aws-infra-localstack-terraform.md)
+ [Coordinate resource dependency and task execution by using the AWS Fargate WaitCondition hook construct](use-the-aws-fargate-waitcondition-hook-construct.md)
+ [Use Amazon Bedrock agents to automate creation of access entry controls in Amazon EKS through text-based prompts](using-amazon-bedrock-agents-to-automate-creation-of-access-entry-controls-in-amazon-eks.md)

# Serverless
<a name="serverless-pattern-list"></a>

**Topics**
+ [Build a serverless React Native mobile app by using AWS Amplify](build-a-serverless-react-native-mobile-app-by-using-aws-amplify.md)
+ [Manage tenants across multiple SaaS products on a single control plane](manage-tenants-across-multiple-saas-products-on-a-single-control-plane.md)
+ [Consolidate Amazon S3 presigned URL generation and object downloads by using an endpoint associated with static IP addresses](consolidate-amazon-s3-presigned-url-generation-and-object-downloads-by-using-an-endpoint-associated-with-static-ip-addresses.md)
+ [Create a cross-account Amazon EventBridge connection in an organization](create-cross-account-amazon-eventbridge-connection-organization.md)
+ [Deliver DynamoDB records to Amazon S3 using Kinesis Data Streams and Firehose with AWS CDK](deliver-dynamodb-records-to-amazon-s3-using-kinesis-data-streams-and-amazon-data-firehose-with-aws-cdk.md)
+ [Implement path-based API versioning by using custom domains in Amazon API Gateway](implement-path-based-api-versioning-by-using-custom-domains.md)
+ [Import the psycopg2 library to AWS Lambda to interact with your PostgreSQL database](import-psycopg2-library-lambda.md)
+ [Integrate Amazon API Gateway with Amazon SQS to handle asynchronous REST APIs](integrate-amazon-api-gateway-with-amazon-sqs-to-handle-asynchronous-rest-apis.md)
+ [Process events asynchronously with Amazon API Gateway and AWS Lambda](process-events-asynchronously-with-amazon-api-gateway-and-aws-lambda.md)
+ [Process events asynchronously with Amazon API Gateway and Amazon DynamoDB Streams](processing-events-asynchronously-with-amazon-api-gateway-and-amazon-dynamodb-streams.md)
+ [Process events asynchronously with Amazon API Gateway, Amazon SQS, and AWS Fargate](process-events-asynchronously-with-amazon-api-gateway-amazon-sqs-and-aws-fargate.md)
+ [Run AWS Systems Manager Automation tasks synchronously from AWS Step Functions](run-aws-systems-manager-automation-tasks-synchronously-from-aws-step-functions.md)
+ [Run parallel reads of S3 objects by using Python in an AWS Lambda function](run-parallel-reads-of-s3-objects-by-using-python-in-an-aws-lambda-function.md)
+ [Send telemetry data from AWS Lambda to OpenSearch for real-time analytics and visualization](send-telemetry-data-from-lambda-to-opensearch-for-analytics-visualization.md)
+ [Set up a serverless cell router for a cell-based architecture](serverless-cell-router-architecture.md)
+ [Set up private access to an Amazon S3 bucket through a VPC endpoint](set-up-private-access-to-an-amazon-s3-bucket-through-a-vpc-endpoint.md)
+ [Troubleshoot states in AWS Step Functions by using Amazon Bedrock](troubleshooting-states-in-aws-step-functions.md)
+ [More patterns](serverless-more-patterns-pattern-list.md)

# Build a serverless React Native mobile app by using AWS Amplify
<a name="build-a-serverless-react-native-mobile-app-by-using-aws-amplify"></a>

*Deekshitulu Pentakota, Amazon Web Services*

## Summary
<a name="build-a-serverless-react-native-mobile-app-by-using-aws-amplify-summary"></a>

This pattern shows how to create a serverless backend for a React Native mobile app by using AWS Amplify and the following AWS services:
+ AWS AppSync
+ Amazon Cognito
+ Amazon DynamoDB

After you configure and deploy the app’s backend by using Amplify, Amazon Cognito authenticates app users and authorizes them to access the app. AWS AppSync then interacts with the frontend app and with a backend DynamoDB table to create and fetch data.

**Note**  
This pattern uses a simple "ToDoList" app as an example, but you can use a similar procedure to create any React Native mobile app.

## Prerequisites and limitations
<a name="build-a-serverless-react-native-mobile-app-by-using-aws-amplify-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ [Amplify Command Line Interface (Amplify CLI)](https://docs.amplify.aws/cli/start/install/), installed and configured
+ Xcode (any version)
+ A code editor, such as Visual Studio Code
+ Familiarity with Amplify
+ Familiarity with Amazon Cognito
+ Familiarity with AWS AppSync
+ Familiarity with DynamoDB
+ Familiarity with Node.js
+ Familiarity with npm
+ Familiarity with React and React Native
+ Familiarity with JavaScript and ECMAScript 6 (ES6)
+ Familiarity with GraphQL

## Architecture
<a name="build-a-serverless-react-native-mobile-app-by-using-aws-amplify-architecture"></a>

The following diagram shows an example architecture for running a React Native mobile app’s backend in the AWS Cloud:

![\[Workflow for running a React Native mobile app with AWS services.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/c95e0150-5762-4c90-946c-efa3a22913e4/images/5beff5f9-9d14-49dc-a046-b74e5bfbd13f.png)


The diagram shows the following architecture:

1. Amazon Cognito authenticates app users and authorizes them to access the app.

1. To create and fetch data, AWS AppSync uses a GraphQL API to interact with the frontend app and a backend DynamoDB table.

## Tools
<a name="build-a-serverless-react-native-mobile-app-by-using-aws-amplify-tools"></a>

**AWS services**
+ [AWS Amplify](https://docs.aws.amazon.com/amplify/latest/userguide/welcome.html) is a set of purpose-built tools and features that helps frontend web and mobile developers quickly build full-stack applications on AWS.
+ [AWS AppSync](https://docs.aws.amazon.com/appsync/latest/devguide/what-is-appsync.html) provides a scalable GraphQL interface that helps application developers combine data from multiple sources, including Amazon DynamoDB, AWS Lambda, and HTTP APIs.
+ [Amazon Cognito](https://docs.aws.amazon.com/cognito/latest/developerguide/what-is-amazon-cognito.html) provides authentication, authorization, and user management for web and mobile apps.
+ [Amazon DynamoDB](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Introduction.html) is a fully managed NoSQL database service that provides fast, predictable, and scalable performance.

**Code**

The code for the sample application that’s used in this pattern is available in the GitHub [aws-amplify-react-native-ios-todo-app](https://github.com/aws-samples/aws-amplify-react-native-ios-todo-app) repository. To use the sample files, follow the instructions in the **Epics** section of this pattern.

## Epics
<a name="build-a-serverless-react-native-mobile-app-by-using-aws-amplify-epics"></a>

### Create and run your React Native app
<a name="create-and-run-your-react-native-app"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Set up a React Native development environment.  | For instructions, see [Setting up the development environment](https://reactnative.dev/docs/next/environment-setup) in the React Native documentation. | App developer | 
| Create and run the ToDoList React Native mobile app in iOS Simulator. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/build-a-serverless-react-native-mobile-app-by-using-aws-amplify.html) | App developer | 

### Initialize a new backend environment for the app
<a name="initialize-a-new-backend-environment-for-the-app"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the backend services needed to support the app in Amplify.  | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/build-a-serverless-react-native-mobile-app-by-using-aws-amplify.html)For the ToDoList app setup used in this pattern, apply the following example configuration.**Example React Native Amplify app configuration settings**<pre>? Name: ToDoListAmplify<br /><br />? Environment: dev<br /><br />? Default editor: Visual Studio Code<br /><br />? App type: javascript<br /><br />? Javascript framework: react-native<br /><br />? Source Directory Path: src<br /><br />? Distribution Directory Path: /<br /><br />? Build Command: npm run-script build<br /><br />? Start Command: npm run-script start<br /><br />? Select the authentication method you want to use: AWS profile<br /><br />? Please choose the profile you want to use: default</pre>For more information, see [Create a new Amplify backend](https://docs.amplify.aws/lib/project-setup/create-application/q/platform/js/#create-a-new-amplify-backend) in the Amplify Dev Center documentation.The `amplify init` command provisions the following resources by using [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html): [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/build-a-serverless-react-native-mobile-app-by-using-aws-amplify.html) | App developer | 

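For orientation, the Amplify CLI commands used across this pattern's epics run in the following order. Each command prompts interactively for the settings shown in the tasks, so the sequence is wrapped in an illustrative function for reference rather than run as an unattended script:

```shell
# Reference sequence of the Amplify CLI commands used across this
# pattern's epics. Each command prompts interactively, so this function
# is an illustrative summary, not an unattended script.
amplify_backend_walkthrough() {
    amplify init        # provision the backend environment (this epic)
    amplify add auth    # add the Amazon Cognito authentication service
    amplify push        # deploy the auth resources to the AWS Cloud
    amplify add api     # add the AWS AppSync GraphQL API and DynamoDB table
    amplify push        # deploy the API resources
    amplify console     # open the Amplify console to inspect the project
}
```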
### Add Amazon Cognito authentication to your Amplify React Native app
<a name="add-amazon-cognito-authentication-to-your-amplify-react-native-app"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an Amazon Cognito authentication service. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/build-a-serverless-react-native-mobile-app-by-using-aws-amplify.html)For the ToDoList app setup used in this pattern, apply the following example configuration.**Example authentication service configuration settings**<pre>? Do you want to use the default authentication and security configuration? \ <br />Default configuration<br /> <br />? How do you want users to be able to sign in? \ <br />Username <br /><br />? Do you want to configure advanced settings? \ <br />No, I am done</pre>The `amplify add auth` command creates the necessary folders, files, and dependency files in a local folder (**amplify**) within the project’s root directory. For the ToDoList app setup used in this pattern, the **aws-exports.js** is created for this purpose. | App developer | 
| Deploy the Amazon Cognito service to the AWS Cloud. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/build-a-serverless-react-native-mobile-app-by-using-aws-amplify.html)To see the deployed services in your project, go to the Amplify console by running the following command:`amplify console` | App developer | 
| Install the required Amplify libraries for React Native and the CocoaPods dependencies for iOS. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/build-a-serverless-react-native-mobile-app-by-using-aws-amplify.html) | App developer | 
| Import and configure the Amplify service. | In the app’s entry point file (for example,** App.js**), import and load the Amplify service’s configuration file by entering the following lines of code:<pre>import Amplify from 'aws-amplify'<br />import config from './src/aws-exports'<br />Amplify.configure(config)</pre>If you receive an error after importing the Amplify service in the app’s entry point file, stop the app. Then, open XCode and select the **ToDoListAmplify.xcworkspace** from the project’s iOS folder and run the app. | App developer | 
| Update your app's entry point file to use the withAuthenticator higher-order component (HOC). | The `withAuthenticator` HOC provides sign-in, sign-up, and forgot-password workflows in your app by using only a few lines of code. For more information, see [Option 1: Use pre-built UI components](https://docs.amplify.aws/lib/auth/getting-started/q/platform/js/#option-1-use-pre-built-ui-components) in the Amplify Dev Center and [Higher-order components](https://reactjs.org/docs/higher-order-components.html) in the React documentation.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/build-a-serverless-react-native-mobile-app-by-using-aws-amplify.html)**withAuthenticator HOC code example**<pre>import Amplify from 'aws-amplify'<br />import config from './src/aws-exports'<br />Amplify.configure(config)<br />import { withAuthenticator } from 'aws-amplify-react-native';<br /><br /><br />const App = () => {<br />  return null;<br />};<br /><br /><br />export default withAuthenticator(App);</pre>In iOS Simulator, the app shows the login screen that’s provided by the Amazon Cognito service. | App developer | 
| Test the authentication service setup. | In iOS Simulator, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/build-a-serverless-react-native-mobile-app-by-using-aws-amplify.html)You can also open the [Amazon Cognito console](https://console.aws.amazon.com/cognito/) and check whether a new user has been created in the user pool. | App developer | 

### Connect an AWS AppSync API and DynamoDB database to the app
<a name="connect-an-aws-appsync-api-and-dynamodb-database-to-the-app"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an AWS AppSync API and DynamoDB database. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/build-a-serverless-react-native-mobile-app-by-using-aws-amplify.html)For the ToDoList app setup used in this pattern, apply the following example configuration.**Example API and database configuration settings**<pre>? Please select from one of the below mentioned services: \ <br />GraphQL<br /><br />? Provide API name: todolistamplify<br /><br />? Choose the default authorization type for the API \ <br />Amazon Cognito User Pool<br /><br />? Do you want to use the default authentication and security configuration? \ <br />Default configuration<br /><br />? How do you want users to be able to sign in? \ <br />Username<br /><br />? Do you want to configure advanced settings? \ <br />No, I am done.<br /><br />? Do you want to configure advanced settings for the GraphQL API \ <br />No, I am done.<br /><br />? Do you have an annotated GraphQL schema? \ <br />No<br /><br />? Choose a schema template: \ <br />Single object with fields (e.g., "Todo" with ID, name, description)<br /><br />? Do you want to edit the schema now? \ <br />Yes</pre>**Example GraphQL schema**<pre>type Todo @model {<br />  id: ID!<br />  name: String!<br />  description: String<br />}</pre> | App developer | 
| Deploy the AWS AppSync API. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/build-a-serverless-react-native-mobile-app-by-using-aws-amplify.html)For the ToDoList app setup used in this pattern, apply the following example configuration.**Example AWS AppSync API configuration settings**The following configuration creates the GraphQL API in AWS AppSync and a **Todo** table in DynamoDB.<pre>? Are you sure you want to continue? Yes<br />? Do you want to generate code for your newly created GraphQL API Yes<br />? Choose the code generation language target javascript<br />? Enter the file name pattern of graphql queries, mutations and subscriptions src/graphql/**/*.js<br />? Do you want to generate/update all possible GraphQL operations - \ <br />queries, mutations and subscriptions Yes<br />? Enter maximum statement depth \<br />[increase from default if your schema is deeply nested] 2</pre> | App developer | 
| Connect the app's frontend to the AWS AppSync API. | To use the example ToDoList app provided in this pattern, copy the code from the **App.js** file in the [aws-amplify-react-native-ios-todo-app](https://github.com/aws-samples/aws-amplify-react-native-ios-todo-app) GitHub repository. Then, integrate the example code into your local environment. The example code provided in the repository’s **App.js** file does the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/build-a-serverless-react-native-mobile-app-by-using-aws-amplify.html) | App developer | 

## Related resources
<a name="build-a-serverless-react-native-mobile-app-by-using-aws-amplify-resources"></a>
+ [AWS Amplify](https://aws.amazon.com/amplify/)
+ [Amazon Cognito](https://aws.amazon.com/cognito/)
+ [AWS AppSync](https://aws.amazon.com/appsync/)
+ [Amazon DynamoDB](https://aws.amazon.com/dynamodb/)
+ [React](https://reactjs.org/) (React documentation) 

# Manage tenants across multiple SaaS products on a single control plane
<a name="manage-tenants-across-multiple-saas-products-on-a-single-control-plane"></a>

*Ramanna Avancha, Kishan Kavala, Anusha Mandava, and Jenifer Pascal, Amazon Web Services*

## Summary
<a name="manage-tenants-across-multiple-saas-products-on-a-single-control-plane-summary"></a>

This pattern shows how to manage tenant lifecycles across multiple software as a service (SaaS) products on a single control plane in the AWS Cloud. The reference architecture provided can help organizations reduce the implementation of redundant, shared features across their individual SaaS products and provide governance efficiencies at scale.

Large enterprises can have multiple SaaS products across various business units. These products often need to be provisioned for use by external tenants at different subscription levels. Without a common tenant solution, IT administrators must spend time managing undifferentiated features across multiple SaaS APIs, instead of focusing on core product feature development.

The common tenant solution provided in this pattern can help centralize the management of many of an organization's shared SaaS product features, including the following:
+ Security
+ Tenant provisioning
+ Tenant data storage
+ Tenant communications
+ Product management
+ Metrics logging and monitoring

## Prerequisites and limitations
<a name="manage-tenants-across-multiple-saas-products-on-a-single-control-plane-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ Knowledge of Amazon Cognito or a third-party identity provider (IdP)
+ Knowledge of Amazon API Gateway
+ Knowledge of AWS Lambda
+ Knowledge of Amazon DynamoDB
+ Knowledge of AWS Identity and Access Management (IAM)
+ Knowledge of AWS Step Functions
+ Knowledge of AWS CloudTrail and Amazon CloudWatch
+ Knowledge of Python libraries and code
+ Knowledge of SaaS APIs, including the different types of users (organizations, tenants, administrators, and application users), subscription models, and tenant isolation models
+ Knowledge of your organization's multi-product SaaS requirements and multi-tenant subscriptions

**Limitations**
+ Integrations between the common tenant solution and individual SaaS products aren’t covered in this pattern.
+ This pattern deploys the Amazon Cognito service in a single AWS Region only.

## Architecture
<a name="manage-tenants-across-multiple-saas-products-on-a-single-control-plane-architecture"></a>

**Target technology stack**
+ Amazon API Gateway
+ Amazon Cognito
+ AWS CloudTrail
+ Amazon CloudWatch
+ Amazon DynamoDB
+ IAM
+ AWS Lambda
+ Amazon Simple Storage Service (Amazon S3)
+ Amazon Simple Notification Service (Amazon SNS)
+ AWS Step Functions

**Target architecture**

The following diagram shows an example workflow for managing tenant lifecycles across multiple SaaS products on a single control plane in the AWS Cloud.

![\[Workflow for managing tenant lifecycles on a single control plane.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/4306bc76-22a7-45ca-a107-43df6c6f7ac8/images/700faf4d-c28f-4814-96aa-2d895cdcb518.png)


The diagram shows the following workflow:

1. An AWS user initiates tenant provisioning, product provisioning, or administration-related actions by making a call to an API Gateway endpoint.

1. The user is authenticated by an access token that’s retrieved from an Amazon Cognito user pool, or another IdP.

1. Individual provisioning or administration tasks are run by Lambda functions that are integrated with API Gateway API endpoints.

1. Administration APIs for the common tenant solution (for tenants, products, and users) gather all of the required input parameters, headers, and tokens. Then, the administration APIs invoke the associated Lambda functions.

1. IAM permissions for both the administration APIs and the Lambda functions are validated by the IAM service.

1. Lambda functions store and retrieve data from the catalogs (for tenants, products, and users) in DynamoDB and Amazon S3.

1. After permissions are validated, an AWS Step Functions workflow is invoked to perform a specific task. The example in the diagram shows a tenant provisioning workflow.

1. Individual AWS Step Functions workflow tasks are run in a predetermined workflow (state machine).

1. Any essential data that’s needed to run the Lambda function associated with each workflow task is retrieved from either DynamoDB or Amazon S3. Other AWS resources might need to be provisioned by using an AWS CloudFormation template.

1. If needed, the workflow sends a request to provision additional AWS resources for a specific SaaS product to that product’s AWS account.

1. When the request succeeds or fails, the workflow publishes the status update as a message to an Amazon SNS topic.

1. The AWS user’s notification endpoint is subscribed to the Step Functions workflow’s Amazon SNS topic.

1. Amazon SNS then sends the workflow status update back to the AWS user.

1. Logs of each AWS service’s actions, including an audit trail of API calls, are sent to CloudWatch. Specific rules and alarms can be configured in CloudWatch for each use case.

1. Logs are archived in Amazon S3 buckets for auditing purposes.
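Steps 1–7 of the workflow above can be sketched as an API-invoked Lambda function that gathers the request parameters and starts the provisioning state machine. This is an illustrative sketch, not code from the pattern; the state machine ARN, the field names, and the `build_execution_input` helper are assumptions.

```python
import json
import uuid


def build_execution_input(tenant_id: str, product_id: str, tier: str) -> dict:
    """Assemble the input document for the provisioning state machine (illustrative fields)."""
    return {
        "tenantId": tenant_id,
        "productId": product_id,
        "subscriptionTier": tier,
        "requestId": str(uuid.uuid4()),  # unique name for the execution
    }


def handler(event, context):
    """API Gateway proxy handler that starts the tenant-provisioning workflow."""
    import boto3  # imported lazily so the sketch can be inspected without the AWS SDK

    body = json.loads(event.get("body") or "{}")
    execution_input = build_execution_input(
        body["tenantId"], body["productId"], body.get("tier", "standard")
    )
    sfn = boto3.client("stepfunctions")
    response = sfn.start_execution(
        # Placeholder ARN; substitute the state machine created for your solution.
        stateMachineArn="arn:aws:states:us-east-1:111122223333:stateMachine:TenantProvisioning",
        name=execution_input["requestId"],
        input=json.dumps(execution_input),
    )
    # 202 Accepted: provisioning continues asynchronously; status arrives via Amazon SNS.
    return {"statusCode": 202, "body": json.dumps({"executionArn": response["executionArn"]})}
```

Returning `202 Accepted` reflects the asynchronous design above: the caller later receives the success or failure status through the Amazon SNS subscription rather than in the API response.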

**Automation and scale**

This pattern uses a CloudFormation template to help automate the deployment of the common tenant solution. The template can also help you quickly scale the associated resources up or down.

For more information, see [Working with AWS CloudFormation templates](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-guide.html) in the *AWS CloudFormation User Guide*.

## Tools
<a name="manage-tenants-across-multiple-saas-products-on-a-single-control-plane-tools"></a>

**AWS services**
+ [Amazon API Gateway](https://docs.aws.amazon.com/apigateway/latest/developerguide/welcome.html) helps you create, publish, maintain, monitor, and secure REST, HTTP, and WebSocket APIs at any scale.
+ [Amazon Cognito](https://docs.aws.amazon.com/cognito/latest/developerguide/what-is-amazon-cognito.html) provides authentication, authorization, and user management for web and mobile apps.
+ [AWS CloudTrail](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-user-guide.html) helps you audit the governance, compliance, and operational risk of your AWS account.
+ [Amazon CloudWatch](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/WhatIsCloudWatch.html) helps you monitor the metrics of your AWS resources and the applications you run on AWS in real time.
+ [Amazon DynamoDB](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Introduction.html) is a fully managed NoSQL database service that provides fast, predictable, and scalable performance.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a compute service that helps you run code without needing to provision or manage servers. It runs your code only when needed and scales automatically, so you pay only for the compute time that you use.
+ [Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) is a cloud-based object storage service that helps you store, protect, and retrieve any amount of data.
+ [Amazon Simple Notification Service (Amazon SNS)](https://docs.aws.amazon.com/sns/latest/dg/welcome.html) helps you coordinate and manage the exchange of messages between publishers and clients, including web servers and email addresses.
+ [AWS Step Functions](https://docs.aws.amazon.com/step-functions/latest/dg/welcome.html) is a serverless orchestration service that helps you combine AWS Lambda functions and other AWS services to build business-critical applications.

## Best practices
<a name="manage-tenants-across-multiple-saas-products-on-a-single-control-plane-best-practices"></a>

The solution in this pattern uses a single control plane to manage the onboarding of multiple tenants and to provision access to multiple SaaS products. The control plane helps administrative users manage four other feature-specific planes:
+ Security plane
+ Workflow plane
+ Communication plane
+ Logging and monitoring plane

## Epics
<a name="manage-tenants-across-multiple-saas-products-on-a-single-control-plane-epics"></a>

### Configure the security plane
<a name="configure-the-security-plane"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Establish the requirements for your multi-tenant SaaS platform. | Establish detailed requirements for the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/manage-tenants-across-multiple-saas-products-on-a-single-control-plane.html) | Cloud architect, AWS systems administrator | 
| Set up the Amazon Cognito service. | Follow the instructions in [Getting started with Amazon Cognito](https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-getting-started.html) in the *Amazon Cognito Developer Guide*. | Cloud architect | 
| Configure the required IAM policies. | Create the required IAM policies for your use case. Then, map the policies to IAM roles in Amazon Cognito. For more information, see [Managing access using policies](https://docs.aws.amazon.com/cognito/latest/developerguide/security-iam.html#security_iam_access-manage) and [Role-based access control](https://docs.aws.amazon.com/cognito/latest/developerguide/role-based-access-control.html) in the *Amazon Cognito Developer Guide*. | Cloud administrator, Cloud architect, AWS IAM security | 
| Configure the required API permissions.  | Set up API Gateway access permissions by using IAM roles and policies, and Lambda authorizers. For instructions, see the following sections of the *Amazon API Gateway Developer Guide*:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/manage-tenants-across-multiple-saas-products-on-a-single-control-plane.html) | Cloud administrator, Cloud architect | 
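A Lambda authorizer for the administration APIs might look like the following sketch. The `principalId`/`policyDocument` shape is the response format API Gateway expects from a Lambda authorizer; the token check here is only a placeholder, and a real deployment would validate the Amazon Cognito JWT against the user pool.

```python
def build_policy(principal_id: str, effect: str, method_arn: str) -> dict:
    """Return the response shape that API Gateway expects from a Lambda authorizer."""
    return {
        "principalId": principal_id,
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Action": "execute-api:Invoke",
                    "Effect": effect,  # "Allow" or "Deny"
                    "Resource": method_arn,
                }
            ],
        },
    }


def handler(event, context):
    """REQUEST-type authorizer sketch: allow callers that present a bearer token."""
    token = (event.get("headers") or {}).get("authorization", "")
    # Placeholder check; substitute real JWT verification against the user pool.
    effect = "Allow" if token.startswith("Bearer ") else "Deny"
    return build_policy("user", effect, event["methodArn"])
```
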

### Configure the data plane
<a name="configure-the-data-plane"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the required data catalogs. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/manage-tenants-across-multiple-saas-products-on-a-single-control-plane.html)For more information, see [Setting up DynamoDB](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/SettingUp.html) in the *Amazon DynamoDB Developer Guide*. | DBA | 
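A minimal sketch of the catalog tables follows, assuming a generic `pk`/`sk` key schema and on-demand billing; the table names and key design are illustrative, not prescribed by the pattern.

```python
def catalog_table_spec(table_name: str) -> dict:
    """Key schema shared by the catalog tables (tenants, products, and users)."""
    return {
        "TableName": table_name,
        "AttributeDefinitions": [
            {"AttributeName": "pk", "AttributeType": "S"},
            {"AttributeName": "sk", "AttributeType": "S"},
        ],
        "KeySchema": [
            {"AttributeName": "pk", "KeyType": "HASH"},   # partition key
            {"AttributeName": "sk", "KeyType": "RANGE"},  # sort key
        ],
        "BillingMode": "PAY_PER_REQUEST",  # on-demand capacity
    }


def create_catalog_tables():
    """Create one table per catalog (hypothetical names)."""
    import boto3  # imported lazily so the sketch loads without the AWS SDK

    dynamodb = boto3.client("dynamodb")
    for name in ("TenantCatalog", "ProductCatalog", "UserCatalog"):
        dynamodb.create_table(**catalog_table_spec(name))
```

A generic partition/sort key pair keeps one schema reusable across all three catalogs; item types are then distinguished by key prefixes rather than separate table designs.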

### Configure the control plane
<a name="configure-the-control-plane"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create Lambda functions and API Gateway APIs to run required control plane tasks. | Create separate Lambda functions and API Gateway APIs to add, delete, and manage the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/manage-tenants-across-multiple-saas-products-on-a-single-control-plane.html)For more information, see [Using AWS Lambda with Amazon API Gateway](https://docs.aws.amazon.com/lambda/latest/dg/services-apigateway.html) in the *AWS Lambda Developer Guide*. | App developer | 
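The control-plane task above might be sketched as a single proxy-integration handler that routes administration requests to actions; the routes and action names are hypothetical.

```python
import json


def route(method: str, path: str) -> str:
    """Map an API Gateway proxy event to a control-plane action (illustrative routes)."""
    actions = {
        ("POST", "/tenants"): "create_tenant",
        ("DELETE", "/tenants"): "delete_tenant",
        ("GET", "/tenants"): "list_tenants",
    }
    return actions.get((method, path), "unsupported")


def handler(event, context):
    """Dispatch an administration API call to the matching action."""
    action = route(event["httpMethod"], event["resource"])
    if action == "unsupported":
        return {"statusCode": 405, "body": json.dumps({"error": "unsupported"})}
    # Each action would read or write the DynamoDB catalogs, or start a
    # Step Functions workflow, before returning a real result.
    return {"statusCode": 200, "body": json.dumps({"action": action})}
```
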

### Configure the workflow plane
<a name="configure-the-workflow-plane"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Identify the tasks that AWS Step Functions workflows must run. | Identify and document the detailed AWS Step Functions workflow requirements for the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/manage-tenants-across-multiple-saas-products-on-a-single-control-plane.html)Make sure that key stakeholders approve the requirements. | App owner | 
| Create the required AWS Step Functions workflows. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/manage-tenants-across-multiple-saas-products-on-a-single-control-plane.html) | App developer, Build lead | 
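One way to express a provisioning workflow like the one above is as an Amazon States Language (ASL) definition assembled in Python; the state names and the Lambda ARN placeholders are illustrative.

```python
import json


def provisioning_state_machine(lambda_arns: dict) -> str:
    """Build a minimal ASL definition: validate, provision, then publish status."""
    definition = {
        "StartAt": "ValidateRequest",
        "States": {
            "ValidateRequest": {
                "Type": "Task",
                "Resource": lambda_arns["validate"],
                "Next": "ProvisionResources",
            },
            "ProvisionResources": {
                "Type": "Task",
                "Resource": lambda_arns["provision"],
                "Next": "PublishStatus",
            },
            "PublishStatus": {
                # Final task: publish success or failure to the Amazon SNS topic.
                "Type": "Task",
                "Resource": lambda_arns["notify"],
                "End": True,
            },
        },
    }
    return json.dumps(definition)
```

The resulting JSON string is what you would pass as the `definition` when creating the state machine (for example, through the `create_state_machine` API or a CloudFormation template).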

### Configure the communication plane
<a name="configure-the-communication-plane"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create Amazon SNS topics. | Create Amazon SNS topics to receive notifications about the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/manage-tenants-across-multiple-saas-products-on-a-single-control-plane.html)For more information, see [Creating an SNS topic](https://docs.aws.amazon.com/sns/latest/dg/sns-create-topic.html) in the *Amazon SNS Developer Guide*. | App owner, Cloud architect | 
| Subscribe endpoints to each Amazon SNS topic. | To receive messages published to an Amazon SNS topic, you must subscribe an endpoint to each topic. For more information, see [Subscribing to an Amazon SNS topic](https://docs.aws.amazon.com/sns/latest/dg/sns-create-subscribe-endpoint-to-topic.html) in the *Amazon SNS Developer Guide*. | App developer, Cloud architect | 
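The two tasks above can be sketched together with boto3, assuming email endpoints and hypothetical topic names.

```python
def notification_topics() -> list:
    """Topic names for workflow status notifications (illustrative names)."""
    return ["tenant-provisioning-status", "product-provisioning-status"]


def create_and_subscribe(email: str):
    """Create each status topic and subscribe an email endpoint to it."""
    import boto3  # imported lazily so the sketch loads without the AWS SDK

    sns = boto3.client("sns")
    for name in notification_topics():
        topic = sns.create_topic(Name=name)  # idempotent for an existing name
        # The endpoint must confirm the subscription before it receives messages.
        sns.subscribe(TopicArn=topic["TopicArn"], Protocol="email", Endpoint=email)
```
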

### Configure the logging and monitoring plane
<a name="configure-the-logging-and-monitoring-plane"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Activate logging for each component of the common tenant solution. | Activate logging at the component level for each resource in the common tenant solution that you created. For instructions, see the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/manage-tenants-across-multiple-saas-products-on-a-single-control-plane.html)You can consolidate logs for each resource into a centralized logging account by using IAM policies. For more information, see [Centralized logging and multiple-account security guardrails](https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/centralized-logging-and-multiple-account-security-guardrails.html). | App developer, AWS systems administrator, Cloud administrator | 

### Provision and deploy the common tenant solution
<a name="provision-and-deploy-the-common-tenant-solution"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create CloudFormation templates. | Automate the deployment and maintenance of the full common tenant solution and all its components by using CloudFormation templates. For more information, see [Working with AWS CloudFormation templates](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-guide.html) in the *AWS CloudFormation User Guide*. | App developer, DevOps engineer, CloudFormation developer | 

## Related resources
<a name="manage-tenants-across-multiple-saas-products-on-a-single-control-plane-resources"></a>
+ [Control access to a REST API using Amazon Cognito user pools as authorizer](https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-integrate-with-cognito.html) (*Amazon API Gateway Developer Guide*)
+ [Use API Gateway Lambda authorizers](https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-use-lambda-authorizer.html) (*Amazon API Gateway Developer Guide*)
+ [Amazon Cognito user pools](https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-identity-pools.html) (*Amazon Cognito Developer Guide*)
+ [Cross-account cross-Region CloudWatch console](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Cross-Account-Cross-Region.html) (*Amazon CloudWatch User Guide*)

# Consolidate Amazon S3 presigned URL generation and object downloads by using an endpoint associated with static IP addresses
<a name="consolidate-amazon-s3-presigned-url-generation-and-object-downloads-by-using-an-endpoint-associated-with-static-ip-addresses"></a>

*Song Jin, Eunhye Jo, and Jun Soung Lee, Amazon Web Services*

## Summary
<a name="consolidate-amazon-s3-presigned-url-generation-and-object-downloads-by-using-an-endpoint-associated-with-static-ip-addresses-summary"></a>

This pattern simplifies access to Amazon Simple Storage Service (Amazon S3) by creating secure, custom presigned URLs for object downloads. The solution provides a single endpoint with a unique domain and static IP addresses. It's tailored for customers who require consolidation of both API and Amazon S3 endpoints under a unified domain with static IP addresses. The use case involves users following an IP and domain allowlist firewall policy, limiting API access to specific domains and IP addresses. 

The architecture employs key AWS services, including AWS Global Accelerator, Amazon API Gateway, AWS Lambda, Application Load Balancer, AWS PrivateLink, and Amazon S3. This design centralizes the API for generating presigned URLs and the Amazon S3 endpoint under a single domain, linked to an accelerator with two static IP addresses. Consequently, users can effortlessly request presigned URLs and download Amazon S3 objects through a unified domain endpoint with static IP addresses. 

This architecture is especially beneficial for customers with strict policies or compliance requirements, such as those in the public, medical, and finance sectors.

## Prerequisites and limitations
<a name="consolidate-amazon-s3-presigned-url-generation-and-object-downloads-by-using-an-endpoint-associated-with-static-ip-addresses-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ A public hosted zone for your custom domain name
+ A domain imported in AWS Certificate Manager (ACM) in the AWS Region of your choice

**Limitations**
+ The Amazon S3 bucket name must match the domain name of the endpoint. This requirement ensures that the Amazon S3 endpoint can be served through the single API endpoint.
+ The custom domain name used in API Gateway should align with the domain name of the single API endpoint.
+ Some AWS services aren’t available in all AWS Regions. For Region availability, see [AWS Services by Region](https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/). For specific endpoints, see [Service endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/aws-service-information.html), and choose the link for the service.

## Architecture
<a name="consolidate-amazon-s3-presigned-url-generation-and-object-downloads-by-using-an-endpoint-associated-with-static-ip-addresses-architecture"></a>

The following diagram shows the target architecture and workflow for this pattern.

![\[Components and workflow for presigned URL generation and object download.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/e19ebcb5-2138-481e-952e-3cfee9ad9e97/images/effd197c-d4d7-4990-8b66-3eb1c64aab4c.png)


The diagram illustrates the following concept and workflow:

1. A user initiates a request to generate a presigned URL by using the custom endpoint served through AWS Global Accelerator, using the custom domain name and associated IP addresses.

1. A Lambda function generates the presigned URL, pointing to the custom endpoint. It responds with a 301 redirect that contains the generated presigned URL. Through the redirected presigned URL, the user downloads the object automatically by using the custom endpoint served through Global Accelerator.

The components of the overall architecture for presigned URL generation and object download workflow are as follows:
+ Provisioning of static IP addresses by Global Accelerator.
+ Registration of the accelerator’s alias as an A record into the Amazon Route 53 public hosted zone with the custom domain name.
+ Creation of an Amazon S3 bucket with a bucket name that matches the registered custom domain name.
+ Creation of VPC endpoints for API Gateway and the Amazon S3 service.
+ Configuration of an internal-facing Application Load Balancer to connect to Global Accelerator.
+ Assignment of a custom domain name for API Gateway with an ACM certificate attached.
+ Deployment of a private API Gateway integrated with a Lambda function.
+ Attachment of an AWS Identity and Access Management (IAM) role with [GetObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObject.html) permissions to the Lambda function.
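Steps 1 and 2 of the workflow above can be sketched as a Lambda handler that generates the presigned URL and returns the 301 redirect. This is a sketch under stated assumptions: the bucket name, the five-minute expiry, and the use of `endpoint_url` to sign against the custom domain (so the signed host matches the domain the user downloads from) are illustrative choices to verify against the pattern's repositories.

```python
def redirect_response(presigned_url: str) -> dict:
    """Build the 301 redirect that sends the caller to the presigned URL (step 2)."""
    return {
        "statusCode": 301,
        "headers": {"Location": presigned_url, "Cache-Control": "no-store"},
        "body": "",
    }


def handler(event, context):
    """Generate a presigned GET URL for the requested key and redirect to it."""
    import boto3  # imported lazily so the sketch loads without the AWS SDK

    bucket = "files.example.com"  # placeholder; must match the custom domain (see Limitations)
    key = event["queryStringParameters"]["key"]
    # Signing against the custom endpoint keeps the SigV4 signature consistent
    # with the unified domain that serves the download.
    s3 = boto3.client("s3", endpoint_url=f"https://{bucket}")
    url = s3.generate_presigned_url(
        "get_object", Params={"Bucket": bucket, "Key": key}, ExpiresIn=300
    )
    return redirect_response(url)
```
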

## Tools
<a name="consolidate-amazon-s3-presigned-url-generation-and-object-downloads-by-using-an-endpoint-associated-with-static-ip-addresses-tools"></a>

**AWS services**
+ [Amazon API Gateway](https://docs.aws.amazon.com/apigateway/latest/developerguide/welcome.html) helps you create, publish, maintain, monitor, and secure REST, HTTP, and WebSocket APIs at any scale.
+ [Application Load Balancers](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/) distribute incoming application traffic across multiple targets, such as Amazon Elastic Compute Cloud (Amazon EC2) instances, in multiple Availability Zones.
+ [AWS Certificate Manager (ACM)](https://docs.aws.amazon.com/acm/latest/userguide/acm-overview.html) helps you create, store, and renew public and private SSL/TLS X.509 certificates and keys that protect your AWS websites and applications.
+ [AWS Cloud Development Kit (AWS CDK)](https://docs.aws.amazon.com/cdk/latest/guide/home.html) is a software development framework that helps you define and provision AWS Cloud infrastructure in code.
+ [AWS Global Accelerator](https://docs.aws.amazon.com/global-accelerator/latest/dg/what-is-global-accelerator.html) is a global service that supports endpoints in multiple AWS Regions. You can create accelerators that direct traffic to optimal endpoints over the AWS global network. This improves the availability and performance of your internet applications that are used by a global audience.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a compute service that helps you run code without needing to provision or manage servers. It runs your code only when needed and scales automatically, so you pay only for the compute time that you use.
+ [AWS PrivateLink](https://docs.aws.amazon.com/vpc/latest/privatelink/what-is-privatelink.html) helps you create unidirectional, private connections from your virtual private clouds (VPCs) to services outside of the VPC.
+ [Amazon Route 53](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/Welcome.html) is a highly available and scalable DNS web service.
+ [Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) is a cloud-based object storage service that helps you store, protect, and retrieve any amount of data.

**Other tools**
+ [Terraform](https://www.terraform.io/) is an infrastructure as code (IaC) tool from HashiCorp that helps you create and manage cloud and on-premises resources.

**Code repository**

You can deploy this pattern by using either the AWS CDK or Terraform based on your preference. The [Epics](#consolidate-amazon-s3-presigned-url-generation-and-object-downloads-by-using-an-endpoint-associated-with-static-ip-addresses-epics) section contains instructions for both deployment methods. The code for this pattern is available in the following GitHub repositories:
+ **AWS CDK** – [s3-presignedurl-staticips-endpoint-with-cdk](https://github.com/aws-samples/s3-presignedurl-staticips-endpoint-with-cdk)
+ **Terraform** – [s3-presignedurl-staticips-endpoint-with-terraform](https://github.com/aws-samples/s3-presignedurl-staticips-endpoint-with-terraform)

## Best practices
<a name="consolidate-amazon-s3-presigned-url-generation-and-object-downloads-by-using-an-endpoint-associated-with-static-ip-addresses-best-practices"></a>
+ To enhance security in the production environment, it’s crucial to implement authorization mechanisms, such as [Amazon Cognito](https://docs.aws.amazon.com/cognito/latest/developerguide/what-is-amazon-cognito.html), to restrict access to the `PresignedUrl` generation API.
+ Follow the principle of least privilege and grant the minimum permissions required to perform a task. For more information, see [Grant least privilege](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html#grant-least-priv) and [Security best practices](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html) in the IAM documentation.

## Epics
<a name="consolidate-amazon-s3-presigned-url-generation-and-object-downloads-by-using-an-endpoint-associated-with-static-ip-addresses-epics"></a>

### Prepare the environment
<a name="prepare-the-environment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Decide on a domain name. | Decide on a public domain name for the unified Amazon S3 endpoint. The domain name is also used as the Amazon S3 bucket name. | AWS administrator, Network administrator | 
| Create a public hosted zone. | [Create a public hosted zone](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/CreatingHostedZone.html) in Amazon Route 53. Its domain name must match the domain name that’s used in API Gateway. | AWS administrator, Network administrator | 
| Prepare an SSL certificate. | Use AWS Certificate Manager (ACM) to [request](https://docs.aws.amazon.com/acm/latest/userguide/acm-public-certificates.html) or [import](https://docs.aws.amazon.com/acm/latest/userguide/import-certificate.html) an SSL certificate for your web application domain. | AWS administrator, Network administrator | 

### Deploy the pattern with Terraform
<a name="deploy-the-pattern-with-terraform"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Set up the Terraform development environment. | To set up the development environment, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/consolidate-amazon-s3-presigned-url-generation-and-object-downloads-by-using-an-endpoint-associated-with-static-ip-addresses.html) | AWS administrator, Cloud administrator | 
| Modify the `.tfvars` and `provider.tf` files. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/consolidate-amazon-s3-presigned-url-generation-and-object-downloads-by-using-an-endpoint-associated-with-static-ip-addresses.html)**Note the following:**[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/consolidate-amazon-s3-presigned-url-generation-and-object-downloads-by-using-an-endpoint-associated-with-static-ip-addresses.html) | AWS administrator, Cloud administrator | 
| Provision network resources. | To provision network resources, run the following commands:<pre>cd ./2.vpc_alb_ga<br />terraform init<br />terraform plan --var-file=apg.tfvars<br />terraform apply --var-file=apg.tfvars</pre>During the `apply` command’s execution, type **yes** when prompted. | AWS administrator, Cloud administrator | 
| Provision API Gateway, Amazon S3, and Lambda. | To provision the API Gateway, Amazon S3, and Lambda resources, use the following commands:<pre>cd ./2.apigw_s3_lambda<br />terraform init<br />terraform plan --var-file=apg.tfvars<br />terraform apply --var-file=apg.tfvars</pre> | AWS administrator, Cloud administrator | 

### Deploy the pattern with AWS CDK
<a name="deploy-the-pattern-with-cdk"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Set up the AWS CDK development environment. | To set up the development environment, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/consolidate-amazon-s3-presigned-url-generation-and-object-downloads-by-using-an-endpoint-associated-with-static-ip-addresses.html) | AWS administrator, Cloud administrator | 
| Configure domain settings in the `config/index.ts` file. | Edit the `options` constant as follows:<pre>export const options = {<br />    certificateArn: '{arn of the acm which created before}',<br />    dnsAttr: {<br />        zoneName: '{public hosted zone name}',<br />        hostedZoneId: '{hosted zone Id}',<br />    },<br />    domainNamePrefix: '{Prefix for the domain}',<br />    presignPath: 'presign',<br />    objectsPath: 'objects',<br />};</pre>In the code, replace each placeholder with your own information:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/consolidate-amazon-s3-presigned-url-generation-and-object-downloads-by-using-an-endpoint-associated-with-static-ip-addresses.html) | AWS administrator, Cloud administrator | 
| Deploy the stacks. | To deploy two stacks, one for the virtual private cloud (VPC) and another for the application, use the following commands:<pre>$ npm install <br />$ cdk synth <br />$ cdk deploy --all</pre> | AWS administrator, Cloud administrator | 

### Test the pattern
<a name="test-the-pattern"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Verify the IP addresses of the endpoint. | To verify that the domain for this pattern has static IP addresses, use the following command:<pre>nslookup ${s3-bucket-prefix}.${domain}</pre> | Network administrator | 
| Upload a test file that you can later download. | Upload the test file to the `objects` folder in the Amazon S3 bucket. | AWS administrator, Cloud administrator | 
| Invoke the API to generate a presigned URL. | To generate a presigned URL, call the URL from a browser or API client (for example, [Postman](https://www.postman.com/product/what-is-postman/)) using the following format:<pre>https://${s3-bucket-prefix}.${domain}/presign/objects/${uploaded-filename}</pre>Replace the `${s3-bucket-prefix}` and `${domain}` placeholders with the values that you set in previous steps. | App owner | 
| Check the result. | You should receive a 301 (Moved Permanently) redirect status code. The response contains the presigned URL, which should automatically initiate the download of your test file. | Test engineer | 
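The 301 redirect described above can be sketched as an API Gateway Lambda proxy response. This is a minimal illustration, not the pattern's actual handler; the URL value is a placeholder, and in the real function the presigned URL would be produced by the AWS SDK for the requested object key.

```python
# Minimal sketch of a Lambda proxy response that redirects the caller to
# a presigned URL. The URL below is a placeholder; in the actual pattern,
# it would come from the AWS SDK (for example, a generate-presigned-URL call).
def build_redirect_response(presigned_url: str) -> dict:
    """Return a 301 (Moved Permanently) proxy response pointing at the URL."""
    return {
        "statusCode": 301,
        "headers": {"Location": presigned_url},
        "body": "",
    }

response = build_redirect_response(
    "https://example-bucket.example.com/objects/test.txt?X-Amz-Signature=placeholder"
)
```

A browser or API client that follows redirects then downloads the object directly from the presigned URL in the `Location` header.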

### Clean up with Terraform
<a name="clean-up-with-terraform"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Destroy API Gateway, Amazon S3, and Lambda resources. | To delete resources, use the following commands:<pre>cd ./2.apigw_s3_lambda<br />terraform init<br />terraform plan --destroy --var-file=apg.tfvars<br />terraform destroy --var-file=apg.tfvars<br /></pre> | AWS administrator, Cloud administrator | 
| Destroy network resources. | To delete network resources, use the following commands:<pre>cd ./1.vpc_alb_ga<br />terraform init<br />terraform plan --destroy --var-file=apg.tfvars<br />terraform destroy --var-file=apg.tfvars<br /></pre> | AWS administrator, Cloud administrator | 

### Clean up with AWS CDK
<a name="clean-up-with-cdk"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Destroy the stacks. | To destroy both the VPC and application stacks, use the following command:<pre>$ cdk destroy --all</pre> | AWS administrator, Cloud administrator | 
| Empty and delete the Amazon S3 buckets. | [Empty](https://docs.aws.amazon.com/AmazonS3/latest/userguide/empty-bucket.html) and [delete](https://docs.aws.amazon.com/AmazonS3/latest/userguide/delete-bucket.html) the object Amazon S3 bucket and the logs Amazon S3 bucket, which are not deleted by default. The Amazon S3 bucket names are `${s3-bucket-prefix}.${domain}` and `${s3-bucket-prefix}.${domain}-logs`. If you prefer to use the [AWS Command Line Interface (AWS CLI)](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) to delete the buckets, use the following commands:<pre>$ aws s3 rm s3://${s3-bucket-prefix}.${domain} --recursive<br />$ aws s3 rb s3://${s3-bucket-prefix}.${domain} --force<br />$ aws s3 rm s3://${s3-bucket-prefix}.${domain}-logs --recursive<br />$ aws s3 rb s3://${s3-bucket-prefix}.${domain}-logs --force</pre>Replace `${s3-bucket-prefix}` and `${domain}` with the values that you set in previous steps. | AWS administrator, Cloud administrator | 

## Related resources
<a name="consolidate-amazon-s3-presigned-url-generation-and-object-downloads-by-using-an-endpoint-associated-with-static-ip-addresses-resources"></a>

**AWS Blogs**
+ [Accessing an Amazon API Gateway via static IP addresses provided by AWS Global Accelerator](https://aws.amazon.com/blogs/networking-and-content-delivery/accessing-an-aws-api-gateway-via-static-ip-addresses-provided-by-aws-global-accelerator/) 
+ [Generate a presigned URL in modular AWS SDK for JavaScript](https://aws.amazon.com/blogs/developer/generate-presigned-url-modular-aws-sdk-javascript/) 
+ [Hosting Internal HTTPS Static Websites with ALB, S3, and PrivateLink](https://aws.amazon.com/blogs/networking-and-content-delivery/hosting-internal-https-static-websites-with-alb-s3-and-privatelink/) 

# Create a cross-account Amazon EventBridge connection in an organization
<a name="create-cross-account-amazon-eventbridge-connection-organization"></a>

*Sam Wilson and Robert Stone, Amazon Web Services*

## Summary
<a name="create-cross-account-amazon-eventbridge-connection-organization-summary"></a>

Large distributed systems use Amazon EventBridge to communicate changes in state between various Amazon Web Services (AWS) accounts in an AWS Organizations organization. However, EventBridge can generally target only endpoints or consumers in the same AWS account; the one exception is an event bus in a different account, which is a valid target. To consume events from an event bus in another account, the events must be pushed from the source account's event bus to the destination account’s event bus. To avoid challenges when managing critical events across applications in different AWS accounts, use the recommended approach presented in this pattern.

This pattern illustrates how to implement an event-driven architecture with EventBridge that involves multiple AWS accounts in an AWS Organizations organization. The pattern uses AWS Cloud Development Kit (AWS CDK) Toolkit and AWS CloudFormation.

EventBridge offers a serverless event bus that helps you receive, filter, transform, route, and deliver events. A critical component of event-driven architectures, EventBridge supports separation between producers of messages and consumers of those messages. In a single account, this is straightforward. A multi-account structure requires additional considerations for events on the event bus in one account to be consumed in other accounts within the same organization.

For information about account-specific considerations for producers and consumers, see the [Additional information](#create-cross-account-amazon-eventbridge-connection-organization-additional) section.

## Prerequisites and limitations
<a name="create-cross-account-amazon-eventbridge-connection-organization-prereqs"></a>

**Prerequisites**
+ An AWS Organizations organization with at least two associated AWS accounts
+ An AWS Identity and Access Management (IAM) role in both AWS accounts that allows you to provision infrastructure in both AWS accounts by using AWS CloudFormation
+ Git [installed locally](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git)
+ AWS Command Line Interface (AWS CLI) [installed locally](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html)
+ AWS CDK [installed locally](https://docs.aws.amazon.com/cdk/latest/guide/cli.html) and [bootstrapped](https://docs.aws.amazon.com/cdk/v2/guide/bootstrapping.html#bootstrapping-howto) in both AWS accounts

**Product versions**

This pattern has been built and tested by using the following tools and versions:
+ AWS CDK Toolkit 2.126.0
+ Node.js 18.19.0
+ npm 10.2.3
+ Python 3.12

This pattern should work with any version of AWS CDK v2 or npm. Node.js versions 13.0.0 through 13.6.0 are not compatible with AWS CDK.

## Architecture
<a name="create-cross-account-amazon-eventbridge-connection-organization-architecture"></a>

**Target architecture**

The following diagram shows the architecture workflow for pushing an event from one account and consuming it in another account.

![\[The three-step process for connecting the Source producer account and Destination consumer account.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/34a5f3ae-511d-4636-999f-c73396770117/images/ccc4878a-6281-4a77-a483-4e6f299d7807.png)


The workflow contains the following steps:

1. The Producer AWS Lambda function in the Source account puts an event on the account’s EventBridge event bus.

1. The cross-account EventBridge rule routes the event to an EventBridge event bus in the Destination account.

1. The EventBridge event bus in the Destination account has a target Lambda rule that invokes the Consumer Lambda function.

A best practice is to use a [Dead Letter Queue (DLQ)](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-dead-letter-queues.html) for handling failed invocations of the Consumer Lambda function. However, the DLQ was omitted from this solution for clarity. To learn more about how to implement a DLQ in your workflows and improve your workflows’ ability to recover from failures, see the [Implementing AWS Lambda error handling patterns](https://aws.amazon.com/blogs/compute/implementing-aws-lambda-error-handling-patterns/) blog post.
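Step 1 of the workflow boils down to a `PutEvents` call whose `Source` and `DetailType` match the cross-account rule. The following sketch builds such an entry; the event bus name and payload are assumptions for illustration, and the commented-out boto3 call shows where the Producer Lambda function would send it.

```python
import json

# Build a PutEvents entry whose Source and DetailType match the
# cross-account rule defined in cdk.json ("Producer" / "TestType").
# The bus name and payload are illustrative assumptions.
def build_entry(bus_name: str, payload: dict) -> dict:
    return {
        "EventBusName": bus_name,   # the Source account's event bus
        "Source": "Producer",
        "DetailType": "TestType",
        "Detail": json.dumps(payload),
    }

entry = build_entry("CrossAccount", {"message": "hello from the Source account"})

# Inside the Producer Lambda function, the entry would be sent with boto3:
#   boto3.client("events").put_events(Entries=[entry])
```

If the entry's `Source` or `DetailType` doesn't match the rule's event pattern, the event is never routed to the Destination account.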

**Automation and scale**

AWS CDK automatically provisions the required architecture. EventBridge can scale to thousands of records per second depending on the AWS Region. For more information, see the [Amazon EventBridge quotas documentation](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-quota.html).

## Tools
<a name="create-cross-account-amazon-eventbridge-connection-organization-tools"></a>

**AWS services**
+ [AWS Cloud Development Kit (AWS CDK)](https://docs.aws.amazon.com/cdk/v2/guide/home.html) is a software development framework that helps you define and provision AWS Cloud infrastructure in code. This pattern uses the [AWS CDK Toolkit](https://docs.aws.amazon.com/cdk/latest/guide/cli.html), a command line cloud development kit that helps you interact with your AWS CDK app.
+ [Amazon EventBridge](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-what-is.html) is a serverless event bus service that helps you connect your applications with real-time data from a variety of sources, such as AWS Lambda functions, HTTP invocation endpoints (using API destinations), or event buses in other AWS accounts.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a compute service that helps you run code without needing to provision or manage servers. It runs your code only when needed and scales automatically, so you pay only for the compute time that you use.
+ [AWS Organizations](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_introduction.html) is an account management service that helps you consolidate multiple AWS accounts into an organization that you create and centrally manage.

**Other tools**
+ [Node.js](https://nodejs.org/en/docs/) is an event-driven JavaScript runtime environment designed for building scalable network applications.
+ [npm](https://docs.npmjs.com/about-npm) is a software registry that runs in a Node.js environment and is used to share or borrow packages and manage deployment of private packages.
+ [Python](https://www.python.org/) is a general-purpose computer programming language.

**Code repository**

The code for this pattern is available in the GitHub [cross-account-eventbridge-in-organization](https://github.com/aws-samples/aws-cdk-examples/tree/main/python/cross-account-eventbridge-in-organization) repository.

## Best practices
<a name="create-cross-account-amazon-eventbridge-connection-organization-best-practices"></a>

For best practices when working with EventBridge, see the following resources:
+ [Best practices for Amazon EventBridge event patterns](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-patterns-best-practices.html)
+ [Best practices when defining rules in Amazon EventBridge](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-rules-best-practices.html)

## Epics
<a name="create-cross-account-amazon-eventbridge-connection-organization-epics"></a>

### Prepare your local AWS CDK deployment environment
<a name="prepare-your-local-cdk-deployment-environment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Configure local credentials for the Source account and Destination account. | Review [Setting up new configuration and credentials](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-quickstart.html#getting-started-quickstart-new), and use the authentication and credential method that makes the most sense for your environment. Be sure to configure the AWS CLI for both Source account and Destination account authentication. These instructions assume that you have configured two AWS profiles locally: `sourceAccount` and `destinationAccount`. | App developer | 
| Bootstrap both AWS accounts. | To bootstrap the accounts, run the following commands:<pre>cdk bootstrap --profile sourceAccount<br />cdk bootstrap --profile destinationAccount</pre> | App developer | 
| Clone the pattern code. | To clone the repository, run the following command:<pre>git clone git@github.com:aws-samples/aws-cdk-examples.git</pre>Then, change the directory to the newly cloned project folder:<pre>cd aws-cdk-examples/python/cross-account-eventbridge-in-organization</pre> | App developer | 

### Deploy ProducerStack to the Source account
<a name="deploy-producerstack-to-the-source-account"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Modify `cdk.json` with your AWS Organizations and account details. | In the root folder of the project, make the following changes to `cdk.json`:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-cross-account-amazon-eventbridge-connection-organization.html) | App developer | 
| Deploy the ProducerStack resources. | Run the following command from the project’s root directory:<pre>cdk deploy ProducerStack --profile sourceAccount</pre>When prompted, accept the new IAM roles and other security-related permissions created through AWS CloudFormation. | App developer | 
| Verify that ProducerStack resources are deployed. | To verify the resources, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-cross-account-amazon-eventbridge-connection-organization.html) | App developer | 

### Deploy ConsumerStack to the Destination account
<a name="deploy-consumerstack-to-the-destination-account"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Deploy the ConsumerStack resources. | Run the following command from the project’s root directory:<pre>cdk deploy ConsumerStack --profile destinationAccount</pre>When prompted, accept the new IAM roles and other security-related permissions created through CloudFormation. | App developer | 
| Verify that ConsumerStack resources are deployed. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-cross-account-amazon-eventbridge-connection-organization.html) | App developer | 

### Produce and consume events
<a name="produce-and-consume-events"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Invoke the Producer Lambda function. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-cross-account-amazon-eventbridge-connection-organization.html) | App developer | 
| Verify that the event was received. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-cross-account-amazon-eventbridge-connection-organization.html) | App developer | 

### Cleanup
<a name="cleanup"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Destroy the ConsumerStack resources. | If you are using this pattern as a test, clean up the deployed resources to avoid incurring additional costs. Run the following command from the project’s root directory:<pre>cdk destroy ConsumerStack --profile destinationAccount</pre>You will be prompted to confirm deletion of the stack. | App developer | 
| Destroy the ProducerStack resources. | Run the following command from the project’s root directory:<pre>cdk destroy ProducerStack --profile sourceAccount</pre>You will be prompted to confirm deletion of the stack. | App developer | 

## Troubleshooting
<a name="create-cross-account-amazon-eventbridge-connection-organization-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| No event was received in the Destination account. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-cross-account-amazon-eventbridge-connection-organization.html) | 
| Invoking a Lambda function from the console returns the following error: `User: arn:aws:iam::123456789012:user/XXXXX is not authorized to perform: lambda:Invoke` | Contact your AWS account administrator to receive the appropriate `lambda:Invoke` action permissions on the `ProducerStack-ProducerLambdaXXXX` Lambda function. | 

## Related resources
<a name="create-cross-account-amazon-eventbridge-connection-organization-resources"></a>

**References**
+ [AWS Organizations User Guide](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_introduction.html)
+ [Amazon EventBridge event patterns](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-event-patterns.html)
+ [Rules in Amazon EventBridge](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-rules.html)

**Tutorials and videos**
+ [Tutorial: Creating and configuring an organization](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_tutorials_basic.html)
+ [AWS re:Invent 2023 - Advanced event-driven patterns with Amazon EventBridge (COM301-R)](https://www.youtube.com/watch?v=6X4lSPkn4ps)

## Additional information
<a name="create-cross-account-amazon-eventbridge-connection-organization-additional"></a>

**Producer rule**

In the Source account, an EventBridge event bus is created to accept messages from producers (as shown in the *Architecture* section). A rule with accompanying IAM permissions is created on this event bus. The rules target the EventBridge event bus in the Destination account based on the following `cdk.json` structure:

```
"rules": [
  {
    "id": "CrossAccount",
    "sources": ["Producer"],
    "detail_types": ["TestType"],
    "targets": [
      {
        "id": "ConsumerEventBus",
        "arn": "arn:aws:events:us-east-2:012345678901:event-bus/CrossAccount"
      }
    ]
  }
]
```

For each consuming event bus, the event pattern and the target event bus must be included.

*Event Pattern*

[Event patterns](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-event-patterns.html) filter which events this rule will apply to. For purposes of this example, the event sources and the record `detail_types` identify which events to transmit from the Source account’s event bus to the Destination account’s event bus.
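As a rough illustration of how this filtering works (EventBridge's real matcher supports many more operators, such as prefix and anything-but matching), a pattern with `source` and `detail-type` lists matches an event only if each field's value appears in the corresponding list:

```python
def matches(pattern: dict, event: dict) -> bool:
    """Simplified event-pattern match: for every field in the pattern,
    the event's value must appear in the pattern's list of allowed values.
    Real EventBridge patterns support prefix, numeric, and other operators."""
    return all(event.get(field) in allowed for field, allowed in pattern.items())

# Pattern equivalent to the rule above: source "Producer", detail-type "TestType".
pattern = {"source": ["Producer"], "detail-type": ["TestType"]}

assert matches(pattern, {"source": "Producer", "detail-type": "TestType"})
assert not matches(pattern, {"source": "Other", "detail-type": "TestType"})
```

Events that fail the match are simply not forwarded, which keeps unrelated traffic off the Destination account's event bus.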

*Target event bus*

This rule targets an event bus that exists in another account. The full `arn` (Amazon Resource Name) is needed to uniquely identify the target event bus, and the `id` is the [logical ID](https://docs.aws.amazon.com/cdk/v2/guide/identifiers.html#identifiers_logical_ids) used by AWS CloudFormation. The target event bus need not actually exist at the time of target rule creation.

**Destination account-specific considerations**

In the Destination account, an EventBridge event bus is created to receive messages from the Source account’s event bus. To allow events to be published from the Source account, you must create a [resource-based policy](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-use-resource-based.html):

```
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "AllowOrgToPutEvents",
    "Effect": "Allow",
    "Principal": "*",
    "Action": "events:PutEvents",
    "Resource": "arn:aws:events:us-east-2:012345678901:event-bus/CrossAccount",
    "Condition": {
      "StringEquals": {
        "aws:PrincipalOrgID": "o-XXXXXXXXX"
      }
    }
  }]
}
```

This policy grants the `events:PutEvents` permission to any principal (`"*"`), but the `aws:PrincipalOrgID` condition key, set to the organization ID, restricts it so that only accounts in the same organization can publish events to this event bus.

**Event pattern**

You can modify the included event pattern to meet your use case:

```
rule = events.Rule(
    self,
    self.id + 'Rule' + rule_definition['id'],
    event_bus=event_bus,
    event_pattern=events.EventPattern(
        source=rule_definition['sources'],
        detail_type=rule_definition['detail_types'],
    )
)
```

To reduce unnecessary processing, the event pattern should specify that only events to be processed by the Destination account are transmitted to the Destination account’s event bus.

*Resource-based policy*

This example uses the organization ID to control which accounts are allowed to put events on the Destination account’s event bus. Consider using a more restrictive policy, such as specifying the Source account.
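For example, a resource-based policy scoped to a single Source account could replace the organization-wide condition. The account ID and Region below are placeholders for illustration:

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "AllowSourceAccountToPutEvents",
    "Effect": "Allow",
    "Principal": { "AWS": "arn:aws:iam::111111111111:root" },
    "Action": "events:PutEvents",
    "Resource": "arn:aws:events:us-east-2:012345678901:event-bus/CrossAccount"
  }]
}
```

With this form, the `Principal` element itself limits access, so no `Condition` block is needed.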

*EventBridge quotas*

Keep in mind the following [quotas](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-quota.html):
+ 300 rules per event bus is the default quota. This can be expanded if necessary, but it should fit most use cases.
+ Five targets per rule is the maximum allowed. We recommend that application architects use a distinct rule for each Destination account to support fine-grained control over the event pattern.

# Deliver DynamoDB records to Amazon S3 using Kinesis Data Streams and Firehose with AWS CDK
<a name="deliver-dynamodb-records-to-amazon-s3-using-kinesis-data-streams-and-amazon-data-firehose-with-aws-cdk"></a>

*Shashank Shrivastava and Daniel Matuki da Cunha, Amazon Web Services*

## Summary
<a name="deliver-dynamodb-records-to-amazon-s3-using-kinesis-data-streams-and-amazon-data-firehose-with-aws-cdk-summary"></a>

This pattern provides sample code and an application for delivering records from Amazon DynamoDB to Amazon Simple Storage Service (Amazon S3) by using Amazon Kinesis Data Streams and Amazon Data Firehose. The pattern’s approach uses [AWS Cloud Development Kit (AWS CDK) L3 constructs](https://docs.aws.amazon.com/cdk/latest/guide/getting_started.html) and includes an example of how to perform data transformation with AWS Lambda before data is delivered to the target S3 bucket on the Amazon Web Services (AWS) Cloud.

Kinesis Data Streams records item-level modifications in DynamoDB tables and replicates them to the required Kinesis data stream. Your applications can access the Kinesis data stream and view the item-level changes in near-real time. Kinesis Data Streams also provides access to other Amazon Kinesis services, such as Firehose and Amazon Managed Service for Apache Flink. This means that you can build applications that provide real-time dashboards, generate alerts, implement dynamic pricing and advertising, and perform sophisticated data analysis.

You can use this pattern for your data integration use cases. For example, transportation vehicles or industrial equipment can send high volumes of data to a DynamoDB table. This data can then be transformed and stored in a data lake hosted in Amazon S3. You can then query and process the data and predict any potential defects by using serverless services such as Amazon Athena, Amazon Redshift Spectrum, Amazon Rekognition, and AWS Glue.

## Prerequisites and limitations
<a name="deliver-dynamodb-records-to-amazon-s3-using-kinesis-data-streams-and-amazon-data-firehose-with-aws-cdk-prereqs"></a>

*Prerequisites*
+ An active AWS account.
+ AWS Command Line Interface (AWS CLI), installed and configured. For more information, see [Getting started with the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html) in the AWS CLI documentation.
+ Node.js (18.x) and npm, installed and configured. For more information, see [Downloading and installing Node.js and npm](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) in the `npm` documentation.
+ aws-cdk (2.x), installed and configured. For more information, see [Getting started with the AWS CDK](https://docs.aws.amazon.com/cdk/v2/guide/getting_started.html) in the AWS CDK documentation.
+ The GitHub [aws-dynamodb-kinesisfirehose-s3-ingestion](https://github.com/aws-samples/aws-dynamodb-kinesisfirehose-s3-ingestion/) repository, cloned and configured on your local machine.
+ Existing sample data for the DynamoDB table. The data must use the following format: `{"SourceDataId": {"S": "123"},"MessageData":{"S": "Hello World"}}`

## Architecture
<a name="deliver-dynamodb-records-to-amazon-s3-using-kinesis-data-streams-and-amazon-data-firehose-with-aws-cdk-architecture"></a>

The following diagram shows an example workflow for delivering records from DynamoDB to Amazon S3 by using Kinesis Data Streams and Firehose.

![\[An example workflow for delivering records from DynamoDB to Amazon S3 using Kinesis Data Streams and Firehose.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/e2a9c412-312e-4900-9774-19a281c578e4/images/6e6df998-e6c2-4eaf-b263-ace752194689.png)


The diagram shows the following workflow:

1. Data is ingested using Amazon API Gateway as a proxy for DynamoDB. You can also use any other source to ingest data into DynamoDB. 

1. Item-level changes are generated in near-real time in Kinesis Data Streams for delivery to Amazon S3.

1. Kinesis Data Streams sends the records to Firehose for transformation and delivery. 

1. A Lambda function converts the records from a DynamoDB record format to JSON format, which contains only the record item attribute names and values.
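The transformation in step 4 can be sketched as follows. This is a simplified stand-in for the pattern's Lambda transformer, handling only the string (`S`) and number (`N`) attribute types from the sample data format shown in the prerequisites:

```python
import json

def flatten_dynamodb_item(item: dict) -> dict:
    """Convert a DynamoDB-typed attribute map, e.g. {"S": "123"}, into
    plain attribute-name/value pairs. Only the S and N types are handled
    in this sketch; the real transformer would cover all attribute types."""
    def value(typed: dict):
        if "S" in typed:
            return typed["S"]
        if "N" in typed:
            return typed["N"]
        raise ValueError(f"unhandled attribute type: {typed}")
    return {name: value(typed) for name, typed in item.items()}

record = {"SourceDataId": {"S": "123"}, "MessageData": {"S": "Hello World"}}
flat = flatten_dynamodb_item(record)
# json.dumps(flat) is the shape Firehose would deliver to the S3 bucket
```

The output contains only the attribute names and values, which is much easier to query with services such as Amazon Athena than the DynamoDB-typed format.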

## Tools
<a name="deliver-dynamodb-records-to-amazon-s3-using-kinesis-data-streams-and-amazon-data-firehose-with-aws-cdk-tools"></a>

*AWS services*
+ [AWS Cloud Development Kit (AWS CDK)](https://docs.aws.amazon.com/cdk/latest/guide/home.html) is a software development framework that helps you define and provision AWS Cloud infrastructure in code.
+ [AWS CDK Toolkit](https://docs.aws.amazon.com/cdk/latest/guide/cli.html) is a command line cloud development kit that helps you interact with your AWS CDK app.
+ [AWS Command Line Interface (AWS CLI)](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) is an open-source tool that helps you interact with AWS services through commands in your command-line shell.
+ [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) helps you set up AWS resources, provision them quickly and consistently, and manage them throughout their lifecycle across AWS accounts and AWS Regions.

*Code repository*

The code for this pattern is available in the GitHub [aws-dynamodb-kinesisfirehose-s3-ingestion](https://github.com/aws-samples/aws-dynamodb-kinesisfirehose-s3-ingestion/) repository.

## Epics
<a name="deliver-dynamodb-records-to-amazon-s3-using-kinesis-data-streams-and-amazon-data-firehose-with-aws-cdk-epics"></a>

### Set up and configure the sample code
<a name="set-up-and-configure-the-sample-code"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Install the dependencies. | On your local machine, install the dependencies from the `package.json` files in the `pattern/aws-dynamodb-kinesisstreams-s3` and `sample-application` directories by running the following commands:<pre>cd <project_root>/pattern/aws-dynamodb-kinesisstreams-s3 </pre><pre>npm install && npm run build</pre><pre>cd <project_root>/sample-application/</pre><pre>npm install && npm run build</pre>  | App developer, General AWS | 
| Generate the CloudFormation template. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deliver-dynamodb-records-to-amazon-s3-using-kinesis-data-streams-and-amazon-data-firehose-with-aws-cdk.html) | App developer, General AWS, AWS DevOps | 

### Deploy the resources
<a name="deploy-the-resources"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Check and deploy the resources. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deliver-dynamodb-records-to-amazon-s3-using-kinesis-data-streams-and-amazon-data-firehose-with-aws-cdk.html) | App developer, General AWS, AWS DevOps | 

### Ingest data into the DynamoDB table to test the solution
<a name="ingest-data-into-the-dynamodb-table-to-test-the-solution"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Ingest your sample data into the DynamoDB table. | Send a request to your DynamoDB table by running the following command in the AWS CLI:`aws dynamodb put-item --table-name <your_table_name> --item '{"<table_partition_key>": {"S": "<partition_key_ID>"},"MessageData":{"S": "<data>"}}'`For example:`aws dynamodb put-item --table-name SourceData_table --item '{"SourceDataId": {"S": "123"},"MessageData":{"S": "Hello World"}}'`By default, the `put-item` command doesn't return any value as output if the operation succeeds. If the operation fails, it returns an error. The data is stored in DynamoDB and then sent to Kinesis Data Streams and Firehose. You can use different approaches to add data to a DynamoDB table. For more information, see [Load data into tables](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/SampleData.LoadData.html) in the DynamoDB documentation. | App developer | 
| Verify that a new object is created in the S3 bucket. | Sign in to the AWS Management Console and monitor the S3 bucket to verify that a new object was created with the data that you sent. For more information, see [GetObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObject.html) in the Amazon S3 documentation. | App developer, General AWS | 

### Clean up resources
<a name="clean-up-resources"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clean up resources.  | Run the `cdk destroy` command to delete all the resources used by this pattern. | App developer, General AWS | 

## Related resources
<a name="deliver-dynamodb-records-to-amazon-s3-using-kinesis-data-streams-and-amazon-data-firehose-with-aws-cdk-resources"></a>
+ [s3-static-site-stack.ts](https://github.com/awslabs/aws-solutions-constructs/blob/main/source/use_cases/aws-s3-static-website/lib/s3-static-site-stack.ts#L25) (GitHub repository)
+ [aws-apigateway-dynamodb module](https://github.com/awslabs/aws-solutions-constructs/tree/main/source/patterns/%40aws-solutions-constructs/aws-apigateway-dynamodb) (GitHub repository)
+ [aws-kinesisstreams-kinesisfirehose-s3 module](https://github.com/awslabs/aws-solutions-constructs/tree/main/source/patterns/%40aws-solutions-constructs/aws-kinesisstreams-kinesisfirehose-s3) (GitHub repository)
+ [Change data capture for DynamoDB Streams](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.html) (DynamoDB documentation)
+ [Using Kinesis Data Streams to capture changes to DynamoDB](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/kds.html) (DynamoDB documentation)

# Implement path-based API versioning by using custom domains in Amazon API Gateway
<a name="implement-path-based-api-versioning-by-using-custom-domains"></a>

*Corey Schnedl, Marcelo Barbosa, Mario Lopez Martinez, Anbazhagan Ponnuswamy, Gaurav Samudra, and Abhilash Vinod, Amazon Web Services*

## Summary
<a name="implement-path-based-api-versioning-by-using-custom-domains-summary"></a>

This pattern demonstrates how you can use the [API mappings](https://docs.aws.amazon.com/apigateway/latest/developerguide/rest-api-mappings.html) feature of [custom domains](https://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-custom-domains.html) to implement a path-based API versioning solution for Amazon API Gateway.

Amazon API Gateway is a fully managed service that you can use to create, publish, maintain, monitor, and secure APIs at any scale. By using the service’s custom domain feature, you can create custom domain names that provide simpler, more intuitive URLs for your API users. You can use API mappings to connect API stages to a custom domain name. After you create a domain name and configure DNS records, you use API mappings to send traffic to your APIs through your custom domain name.

After an API becomes publicly available, consumers depend on it. As a public API evolves, its service contract also evolves to reflect new features and capabilities. However, it’s unwise to change or remove existing features, because breaking changes can cause consumers’ applications to fail at runtime. API versioning is important for preserving backward compatibility and avoiding broken contracts.

You need a clear API versioning strategy to help consumers adopt new versions. Versioning APIs by using path-based URLs is the most straightforward and most commonly used approach. In this type of versioning, versions are explicitly defined as part of API URIs. The following example URLs show how a consumer can use the URI to specify an API version for their request:

`https://api.example.com/api/v1/orders`

`https://api.example.com/api/v2/orders`

`https://api.example.com/api/vX/orders`
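
API Gateway performs this routing for you through API mappings rather than in your code, but the convention is easy to sketch. The following Python helper is illustrative only (it is not part of the pattern) and shows how a version segment is read from a path:

```python
import re
from typing import Optional

def api_version(path: str) -> Optional[str]:
    """Extract the version segment (v1, v2, ...) from a path-based API URL path."""
    match = re.search(r"/api/(v\d+)(/|$)", path)
    return match.group(1) if match else None

# api_version("/api/v1/orders") -> "v1"
# api_version("/api/orders")    -> None (no version segment)
```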

This pattern uses the AWS Cloud Development Kit (AWS CDK) to build, deploy, and test a sample implementation of a scalable path-based versioning solution for your API. AWS CDK is an open source software development framework to model and provision your cloud application resources using familiar programming languages.

## Prerequisites and limitations
<a name="implement-path-based-api-versioning-by-using-custom-domains-prereqs"></a>

**Prerequisites**
+ An active AWS account.
+ Ownership of a domain is required to use this pattern’s sample repository and to use Amazon API Gateway custom domain functionality. You can use Amazon Route 53 to create and manage your domains for your organization. For information about how to register or transfer a domain with Route 53, see [Registering new domains](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/domain-register-update.html) in the Route 53 documentation.
+ Before setting up a custom domain name for an API, you must have an [SSL/TLS certificate](https://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-specify-certificate-for-custom-domain-name.html) ready in AWS Certificate Manager.
+ You must create or update your DNS provider's resource record to map to your API endpoint. Without such a mapping, API requests bound for the custom domain name can’t reach API Gateway.

**Limitations**
+ A custom domain name must be unique within an AWS Region across all AWS accounts.
+ To configure API mappings with multiple levels, you must use a Regional custom domain name and use the TLS 1.2 security policy.
+ In an API mapping, the custom domain name and mapped APIs must be in the same AWS account.
+ API mappings must contain only letters, numbers, and the following characters: `$-_.+!*'()/`
+ The maximum length for the path in an API mapping is 300 characters.
+ You can have 200 API mappings with multiple levels for each domain name.
+ You can only map HTTP APIs to a Regional custom domain name with the TLS 1.2 security policy.
+ You can't map WebSocket APIs to the same custom domain name as an HTTP API or REST API.
+ Some AWS services aren’t available in all AWS Regions. For Region availability, see [AWS Services by Region](https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/). For specific endpoints, see [Service endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/aws-service-information.html), and choose the link for the service.

**Product versions**
+ This sample implementation uses [AWS CDK in TypeScript](https://docs.aws.amazon.com/cdk/v2/guide/work-with-cdk-typescript.html) version 2.149.0.

## Architecture
<a name="implement-path-based-api-versioning-by-using-custom-domains-architecture"></a>

The following diagram shows the architecture workflow.

![\[Workflow using API mappings and custom domains to implement a path-based API versioning solution.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/e1b32d2b-410f-4ace-967e-f0b8aaf0304c/images/fa9f04f1-efa6-4fb1-a541-ae3da4076b00.png)


The diagram illustrates the following:

1. The API user sends a request to Amazon API Gateway with a custom domain name.

1. API Gateway dynamically routes the user’s request to an appropriate instance and stage of API Gateway, based on the path indicated in the URL of the request. The following table shows an example of how the different URL-based paths can be routed to specific stages for different instances of API Gateway.    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/implement-path-based-api-versioning-by-using-custom-domains.html)

1. The destination API Gateway instance processes the request and returns the result to the user.

**Automation and scale**

We recommend that you use separate AWS CloudFormation stacks for each version of your API. With this approach, you get complete isolation between the backend APIs that the custom domain API mapping feature routes to. An advantage of this approach is that different versions of your API can be deployed or removed independently without the risk of modifying another API. This approach increases resilience through the isolation of CloudFormation stacks. It also gives you different backend options for your API, such as AWS Lambda, AWS Fargate, HTTP endpoints, and AWS service actions.

You can use Git branching strategies, such as [Gitflow](https://docs.aws.amazon.com/prescriptive-guidance/latest/choosing-git-branch-approach/gitflow-branching-strategy.html), in combination with isolated CloudFormation stacks to manage the source code that’s deployed to different versions of the API. By using this option, you can maintain different versions of your API without the need to duplicate the source code for new versions. With Gitflow, you can add tags to commits within your git repository as releases are performed. As a result, you have a complete snapshot of the source code related to a specific release. As updates need to be performed, you can check out the code from a specific release, make updates, and then deploy the updated source code to the CloudFormation stack that aligns with the corresponding major version. This approach reduces the risk of breaking another API version because each version of the API has isolated source code and is deployed to separate CloudFormation stacks.

## Tools
<a name="implement-path-based-api-versioning-by-using-custom-domains-tools"></a>

**AWS services**
+ [Amazon API Gateway](https://docs.aws.amazon.com/apigateway/latest/developerguide/welcome.html) helps you create, publish, maintain, monitor, and secure REST, HTTP, and WebSocket APIs at any scale.
+ [AWS Certificate Manager (ACM)](https://docs.aws.amazon.com/acm/latest/userguide/acm-overview.html) helps you create, store, and renew public and private SSL/TLS X.509 certificates and keys that protect your AWS websites and applications.
+ [AWS Cloud Development Kit (AWS CDK)](https://docs.aws.amazon.com/cdk/v2/guide/home.html) is an open-source software development framework for defining your cloud infrastructure in code and provisioning it through CloudFormation. This pattern’s sample implementation uses the [AWS CDK in TypeScript](https://docs.aws.amazon.com/cdk/v2/guide/work-with-cdk-typescript.html). Working with the AWS CDK in TypeScript uses familiar tools, including the Microsoft TypeScript compiler (`tsc`), [Node.js](https://nodejs.org/), and the node package manager (`npm`). If you prefer, you can use [Yarn](https://yarnpkg.com/), although the examples in this pattern use `npm`. The modules that comprise the [AWS Construct Library](https://docs.aws.amazon.com/cdk/v2/guide/libraries.html#libraries-construct) are distributed through the `npm` repository, [npmjs.org](https://docs.npmjs.com/).
+ [CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) helps you set up AWS resources, provision them quickly and consistently, and manage them throughout their lifecycle across AWS accounts and AWS Regions.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a compute service that helps you run code without needing to provision or manage servers. It runs your code only when needed and scales automatically, so you pay only for the compute time that you use.
+ [Amazon Route 53](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/Welcome.html) is a highly available and scalable DNS web service.
+ [AWS WAF](https://docs.aws.amazon.com/waf/latest/developerguide/what-is-aws-waf.html) is a web application firewall that helps you monitor HTTP and HTTPS requests that are forwarded to your protected web application resources.

**Other tools**
+ [Bruno](https://www.usebruno.com/) is an open source, git-friendly API testing client.
+ [cdk-nag](https://github.com/cdklabs/cdk-nag) is an open source utility that checks AWS CDK applications for best practices by using rule packs.

**Code repository**

The code for this pattern is available in the GitHub [path-based-versioning-with-api-gateway](https://github.com/aws-samples/path-based-versioning-with-api-gateway) repository.

## Best practices
<a name="implement-path-based-api-versioning-by-using-custom-domains-best-practices"></a>
+ Use a robust continuous integration and continuous delivery (CI/CD) pipeline to automate the testing and deployment of your CloudFormation stacks that are built with the AWS CDK. For more information related to this recommendation, see the [AWS Well-Architected DevOps Guidance](https://docs.aws.amazon.com/wellarchitected/latest/devops-guidance/devops-guidance.html).
+ AWS WAF is a managed firewall that integrates easily with services such as Amazon API Gateway. Although AWS WAF isn’t a necessary component for this versioning pattern to work, we recommend including AWS WAF with API Gateway as a security best practice.
+ Encourage API consumers to upgrade regularly to the latest version of your API so that older versions of your API can be deprecated and removed efficiently.
+ Before using this approach in a production setting, implement a firewall and authorization strategy for your API.
+ Implement access to the management of AWS resources of your AWS account by using the [least-privilege access model](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#grant-least-privilege).
+ To enforce best practices and security recommendations for applications built with the AWS CDK, we recommend that you use the [cdk-nag utility](https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/check-aws-cdk-applications-or-cloudformation-templates-for-best-practices-by-using-cdk-nag-rule-packs.html). 

## Epics
<a name="implement-path-based-api-versioning-by-using-custom-domains-epics"></a>

### Prepare your local environment
<a name="prepare-your-local-environment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clone the repository. | To clone the sample application repository, run the following command:<pre>git clone https://github.com/aws-samples/path-based-versioning-with-api-gateway</pre> | App developer | 
| Navigate to the cloned repository. | To navigate to the cloned repository folder location, run the following command: <pre>cd path-based-versioning-with-api-gateway</pre> | App developer | 
| Install the required dependencies. | To install the required dependencies, run the following command:<pre>npm install </pre> | App developer | 

### Deploy the CloudFormation routing stack
<a name="deploy-the-cfnshort-routing-stack"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Initiate deployment of the routing stack. | To initiate the deployment of the CloudFormation routing stack `CustomDomainRouterStack`, run the following command, replacing `example.com` with the name of the domain that you own:<pre>npx cdk deploy CustomDomainRouterStack --parameters PrerequisiteDomainName=example.com</pre>The stack deployment will not succeed until the following domain DNS validation task is performed successfully. | App developer | 

### Verify domain ownership
<a name="verify-domain-ownership"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Verify ownership of your domain. | The certificate will remain in a **Pending validation** status until you prove ownership of the associated domain. To prove ownership, add CNAME records to the hosted zone that is associated with the domain. For more information, see [DNS validation](https://docs.aws.amazon.com/acm/latest/userguide/dns-validation.html) in the AWS Certificate Manager documentation. Adding the appropriate records enables the `CustomDomainRouterStack` deployment to succeed. | App developer, AWS systems administrator, Network administrator | 
| Create an alias record to point to your API Gateway custom domain. | After the certificate is issued and validated successfully, [create a DNS record](https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-regional-api-custom-domain-create.html#apigateway-regional-api-custom-domain-dns-record) that points to your Amazon API Gateway custom domain URL. The custom domain URL is uniquely generated by the provisioning of the custom domain and is specified as a CloudFormation output parameter. Following is an [example of the record](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resource-record-sets-values-basic.html): **Routing policy**: Simple routing; **Record name**: `demo.api-gateway-custom-domain-versioning.example.com`; **Alias**: Yes; **Record type**: A (a DNS record of type "A" that points to an AWS resource); **Value**: `d-xxxxxxxxxx.execute-api.xx-xxxx-x.amazonaws.com`; **TTL (seconds)**: 300 | App developer, AWS systems administrator, Network administrator | 

### Deploy CloudFormation stacks and invoke the API
<a name="deploy-cfnshort-stacks-and-invoke-the-api"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Deploy the `ApiStackV1` stack. | To deploy the `ApiStackV1` stack, use the following command:<pre>npm run deploy-v1</pre>The following CDK code adds API mapping:<pre>var apiMapping = new CfnApiMapping(this, "ApiMapping", {<br />      apiId: this.lambdaRestApi.restApiId,<br />      domainName: props.customDomainName.domainName,<br />      stage: "api",<br />      apiMappingKey: "api/v1",<br />    });</pre> | App developer | 
| Deploy the `ApiStackV2` stack. | To deploy the `ApiStackV2` stack, use the following command:<pre>npm run deploy-v2</pre> | App developer | 
| Invoke the API. | To invoke the API and test the API endpoints by using Bruno, see the instructions in [Additional information](#implement-path-based-api-versioning-by-using-custom-domains-additional). | App developer | 
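
The `CfnApiMapping` construct shown in the table above ties a stage of one API instance to a path under the custom domain. Outside the AWS CDK, an equivalent mapping can be created through the API Gateway V2 API. The following boto3 sketch uses placeholder domain, API, and stage names; it requires boto3 and configured AWS credentials:

```python
def mapping_params(domain_name: str, api_id: str, stage: str, version: str) -> dict:
    """Parameters equivalent to the CfnApiMapping construct (values are placeholders)."""
    return {
        "DomainName": domain_name,
        "ApiId": api_id,
        "Stage": stage,
        "ApiMappingKey": f"api/{version}",  # requests to /api/<version>/... route to this API
    }

def create_mapping(params: dict) -> None:
    import boto3  # assumed available, with AWS credentials and Region configured
    boto3.client("apigatewayv2").create_api_mapping(**params)

# Example with placeholder values:
# create_mapping(mapping_params("api.example.com", "abc123", "api", "v1"))
```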

### Clean up resources
<a name="clean-up-resources"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clean up resources. | To destroy the resources associated with this sample application, use the following command:<pre>npx cdk destroy --all</pre>Make sure that you clean up any Route 53 DNS records that were added manually for the domain ownership verification process. | App developer | 

## Troubleshooting
<a name="implement-path-based-api-versioning-by-using-custom-domains-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| The deployment of `CustomDomainRouterStack` times out because the certificate can’t be validated. | Make sure that you added the proper DNS validation CNAME records as described in the earlier task. Your new certificate might continue to display a status of **Pending validation** for up to 30 minutes after adding the DNS validation records. For more information, see [DNS validation](https://docs.aws.amazon.com/acm/latest/userguide/dns-validation.html) in the AWS Certificate Manager documentation. | 

## Related resources
<a name="implement-path-based-api-versioning-by-using-custom-domains-resources"></a>
+ [Implementing header-based API Gateway versioning with Amazon CloudFront](https://aws.amazon.com/blogs/compute/implementing-header-based-api-gateway-versioning-with-amazon-cloudfront/) – This AWS Compute Blog post offers a header-based versioning strategy as an alternative to the path-based versioning strategy outlined in this pattern.
+ [AWS CDK Workshop](https://cdkworkshop.com/20-typescript.html) – This introductory workshop focuses on building and deploying applications on AWS by using the AWS Cloud Development Kit (AWS CDK). This workshop supports Go, Python, and TypeScript. 

## Additional information
<a name="implement-path-based-api-versioning-by-using-custom-domains-additional"></a>

**Testing your API with Bruno**

We recommend that you use [Bruno](https://www.usebruno.com/), an open source API testing tool, to verify that the path-based routing is working properly for the sample application. This pattern provides a sample collection to facilitate testing your sample API.

To invoke and test your API, use the following steps:

1. [Install Bruno.](https://www.usebruno.com/downloads)

1. Open Bruno.

1. In this pattern’s [code repository](https://github.com/aws-samples/path-based-versioning-with-api-gateway), select **Bruno/Sample-API-Gateway-Custom-Domain-Versioning** and open the collection.

1. To see the **Environments** dropdown in the top right of the user interface (UI), select any request in the collection.

1. In the **Environments** dropdown, select **Configure**.

1. Replace the `REPLACE_ME_WITH_YOUR_DOMAIN` value with your custom domain.

1. Choose **Save**, and then close the **Configuration** section.

1. For **Sandbox Environment**, verify that the **Active** option is selected.

1. Invoke your API by using the **->** button for the selected request.

1. Note how validation (passing in non-number values) is handled in V1 compared to V2.

To see screenshots of an example API invocation and a comparison of V1 and V2 validation, see **Testing your sample API** in the `README.md` file in this pattern’s [code repository](https://github.com/aws-samples/path-based-versioning-with-api-gateway).

# Import the psycopg2 library to AWS Lambda to interact with your PostgreSQL database
<a name="import-psycopg2-library-lambda"></a>

*Louis Hourcade, Amazon Web Services*

## Summary
<a name="import-psycopg2-library-lambda-summary"></a>

[Psycopg](https://www.psycopg.org/docs/) is a PostgreSQL database adapter for Python. Developers use the `psycopg2` library to write Python applications that interact with PostgreSQL databases.

On Amazon Web Services (AWS), developers also use [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) to run code for applications or backend services. Lambda is a serverless, event-driven compute service that runs code without the need to provision or manage servers.

By default, when you create a new function that uses a [Python runtime that’s supported by Lambda](https://docs.aws.amazon.com/lambda/latest/dg/lambda-python.html), the Lambda runtime environment is created from a [base image for Lambda](https://github.com/aws/aws-lambda-base-images) provided by AWS. Libraries, such as `pandas` or `psycopg2`, aren't included in the base image. To use a library, you need to bundle it in a custom package and attach it to Lambda.

There are multiple ways to bundle and attach a library, including the following:
+ Deploy your Lambda function from a [.zip file archive](https://docs.aws.amazon.com/lambda/latest/dg/configuration-function-zip.html).
+ Deploy your Lambda function from a custom container image.
+ Create a [Lambda layer](https://docs.aws.amazon.com/lambda/latest/dg/chapter-layers.html#lambda-layer-versions), and attach it to your Lambda function.

This pattern demonstrates the first two options.

With a .zip deployment package, adding the `pandas` library to your Lambda function is relatively straightforward. Create a folder on your Linux machine, add the Lambda script together with the `pandas` library and the library's dependencies to the folder, zip the folder, and provide it as a source for your Lambda function.

Although using a .zip deployment package is a common practice, that approach doesn't work for the `psycopg2` library. This pattern first shows the error that you get if you use a .zip deployment package to add the `psycopg2` library to your Lambda function. The pattern then shows how to deploy Lambda from a Dockerfile and edit the Lambda image to make the `psycopg2` library work.
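
The repo's script imports the libraries at module level, so a missing native dependency fails the whole function at load time. To observe the failure without crashing, the check can be moved into the handler. This is an illustrative variant, not the pattern's actual code:

```python
def handler(event, context):
    """Report which libraries imported successfully instead of failing at module load."""
    status = {}
    for lib in ("pandas", "psycopg2"):
        try:
            __import__(lib)
            status[lib] = "imported"
        except ImportError as exc:
            # With a .zip package, psycopg2 typically fails here because the
            # compiled PostgreSQL libraries aren't present in the base image.
            status[lib] = f"import failed: {exc}"
    return {"Status": status}
```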

For information about the three resources that the pattern deploys, see the [Additional information](#import-psycopg2-library-lambda-additional) section.

## Prerequisites and limitations
<a name="import-psycopg2-library-lambda-prereqs"></a>

**Prerequisites**
+ An active AWS account with sufficient permissions to deploy the AWS resources used by this pattern
+ AWS Cloud Development Kit (AWS CDK) installed globally by running `npm install -g aws-cdk`
+ A Git client
+ Python
+ Docker

**Limitations**
+ Some AWS services aren’t available in all AWS Regions. For Region availability, see [AWS services by Region](https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/). For specific endpoints, see the [Service endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/aws-service-information.html) page, and choose the link for the service.

**Product versions**
+ Python runtime version that’s [supported by Lambda](https://docs.aws.amazon.com/lambda/latest/dg/lambda-python.html)
+ Psycopg2 version 2.9.3
+ Pandas version 1.5.2

## Architecture
<a name="import-psycopg2-library-lambda-architecture"></a>

**Solution overview**

To illustrate the challenges that you might face when using the `psycopg2` library in Lambda, the pattern deploys two Lambda functions:
+ One Lambda function with the Python runtime created from a .zip file. The `psycopg2` and `pandas` libraries are installed in this .zip deployment package by using [pip](https://pypi.org/project/pip/).
+ One Lambda function with the Python runtime created from a Dockerfile. The Dockerfile installs the `psycopg2` and `pandas` libraries into the Lambda container image.

The first Lambda function installs the `pandas` library and its dependencies in a .zip file, and Lambda can use that library.

The second Lambda function demonstrates that by building a container image for your Lambda function, you can use the `pandas` and `psycopg2` libraries in Lambda.

## Tools
<a name="import-psycopg2-library-lambda-tools"></a>

**AWS services**
+ [AWS Cloud Development Kit (AWS CDK)](https://docs.aws.amazon.com/cdk/v2/guide/home.html) is a software development framework that helps you define and provision AWS Cloud infrastructure in code.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a compute service that helps you run code without needing to provision or manage servers. It runs your code only when needed and scales automatically, so you pay only for the compute time that you use.

**Other tools**
+ [Docker](https://www.docker.com/) is a set of platform as a service (PaaS) products that use virtualization at the operating-system level to deliver software in containers.
+ [pandas](https://pandas.pydata.org/) is a Python-based open source tool for data analysis and manipulation.
+ [Psycopg](https://www.psycopg.org/docs/) is a PostgreSQL database adapter for the Python language that is designed for multithreaded applications. This pattern uses Psycopg 2.
+ [Python](https://www.python.org/) is a general-purpose computer programming language.

**Code repository**

The code for this pattern is available in the [import-psycopg2-in-lambda-to-interact-with-postgres-database](https://github.com/aws-samples/import-psycopg2-in-lambda-to-interact-with-postgres-database) repository on GitHub.

## Best practices
<a name="import-psycopg2-library-lambda-best-practices"></a>

This pattern provides you with a working example of using the AWS CDK to create a Lambda function from a Dockerfile. If you reuse this code in your application, make sure that the deployed resources meet all of your security requirements. Use tools such as [Checkov](https://www.checkov.io/), which scans cloud infrastructure configurations to find misconfigurations before the infrastructure is deployed.

## Epics
<a name="import-psycopg2-library-lambda-epics"></a>

### Clone the repository and configure the deployment
<a name="clone-the-repository-and-configure-the-deployment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clone the repository. | To clone the GitHub repository on your local machine, run the following commands:<pre>git clone https://github.com/aws-samples/import-psycopg2-in-lambda-to-interact-with-postgres-database.git<br />cd import-psycopg2-in-lambda-to-interact-with-postgres-database</pre> | General AWS | 
| Configure your deployment. | Edit the `app.py` file with information about your AWS account:<pre>aws_acccount = "AWS_ACCOUNT_ID"<br />region = "AWS_REGION"<br /># Select the CPU architecture you are using to build the image (ARM or X86)<br />architecture = "ARM"</pre> | General AWS | 

### Bootstrap your AWS account and deploy the application
<a name="bootstrap-your-aws-account-and-deploy-the-application"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Bootstrap your AWS account. | If you haven't already [bootstrapped your AWS environment](https://docs.aws.amazon.com/cdk/v2/guide/bootstrapping.html), run the following commands with the AWS credentials of your AWS account:<pre>cdk bootstrap aws://<tooling-account-id>/<aws-region></pre> | General AWS | 
| Deploy the code. | To deploy the AWS CDK application, run the following command:<pre>cdk deploy AWSLambdaPyscopg2</pre> | General AWS | 

### Test the Lambda functions from the AWS Management Console
<a name="test-the-lambda-functions-from-the-aws-management-console"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Test the Lambda function created from the .zip file. | To test the Lambda function that was created from the .zip file, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/import-psycopg2-library-lambda.html)Because Lambda doesn't find the required PostgreSQL libraries in the default image, it can't use the `psycopg2` library. | General AWS | 
| Test the Lambda function created from the Dockerfile. | To use the `psycopg2` library within your Lambda function, you must build a custom Lambda container image. To test the Lambda function that was created from the Dockerfile, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/import-psycopg2-library-lambda.html)The following code shows the Dockerfile that the AWS CDK template creates:<pre># Start from lambda Python3.13 image<br />FROM public.ecr.aws/lambda/python:3.13<br /><br /># Copy the lambda code, together with its requirements<br />COPY lambda/requirements.txt ${LAMBDA_TASK_ROOT}<br />COPY lambda/lambda_code.py ${LAMBDA_TASK_ROOT}<br /><br /># Install postgresql-devel in your image<br />RUN yum install -y gcc postgresql-devel<br /><br /># install the requirements for the Lambda code<br />RUN pip3 install -r requirements.txt --target "${LAMBDA_TASK_ROOT}"<br /><br /># Command can be overwritten by providing a different command in the template directly.<br />CMD ["lambda_code.handler"]</pre>The Dockerfile starts from the AWS-provided Lambda image for the Python runtime and installs [postgresql-devel](https://yum-info.contradodigital.com/view-package/updates/postgresql-devel/), which contains the libraries needed to compile applications that interact directly with the PostgreSQL server. The Dockerfile also installs the `pandas` and `psycopg2` libraries, which are listed in the `requirements.txt` file. | General AWS | 

## Related resources
<a name="import-psycopg2-library-lambda-resources"></a>
+ [AWS CDK documentation](https://docs.aws.amazon.com/cdk/v2/guide/home.html)
+ [AWS Lambda documentation](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html)

## Additional information
<a name="import-psycopg2-library-lambda-additional"></a>

In this pattern, the AWS CDK template provides an AWS stack with three resources:
+ An [AWS Identity and Access Management (IAM) role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html) for the Lambda functions.
+ A Lambda function with a Python runtime. The function is deployed from the `Constructs/lambda/lambda_deploy.zip` deployment package.
+ A Lambda function with a Python runtime. The function is deployed from the Dockerfile under the `Constructs` folder.

The script for both Lambda functions checks whether the `pandas` and `psycopg2` libraries are successfully imported:

```
import pandas
print("pandas successfully imported")

import psycopg2
print("psycopg2 successfully imported")

def handler(event, context):
    """Function that checks whether psycopg2 and pandas were successfully imported"""
    return {"Status": "psycopg2 and pandas successfully imported"}
```
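
A slightly more defensive variant of this check (an illustrative sketch, not part of the pattern's repository) reports which libraries are missing instead of failing at import time:

```python
import importlib.util

def check_imports(modules):
    """Map each module name to whether it can be resolved in this runtime."""
    return {name: importlib.util.find_spec(name) is not None for name in modules}

def handler(event, context):
    """Lambda handler that reports which required libraries are available."""
    status = check_imports(["pandas", "psycopg2"])
    missing = sorted(name for name, ok in status.items() if not ok)
    if missing:
        return {"Status": f"missing libraries: {', '.join(missing)}"}
    return {"Status": "psycopg2 and pandas successfully imported"}
```

Because `importlib.util.find_spec` returns `None` for an unresolvable top-level module rather than raising, the handler can report a partial deployment instead of crashing on import.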

The `lambda_deploy.zip` deployment package is built with the `Constructs/lambda/build.sh` bash script. This script creates a folder, copies the Lambda script, installs the `pandas` and `psycopg2` libraries, and generates the .zip file. To generate the .zip file yourself, run this bash script and redeploy the AWS CDK stack.
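
The same staging-and-zip steps can be sketched in Python (an illustrative rendering of what a script like `build.sh` does, not the script itself; the folder layout and file names are assumptions):

```python
import pathlib
import shutil
import subprocess
import tempfile

def build_deployment_package(lambda_dir: str, zip_path: str, install_deps: bool = True) -> str:
    """Stage the Lambda code (plus its dependencies) in a build folder and zip it."""
    build_dir = pathlib.Path(tempfile.mkdtemp(prefix="lambda-build-"))
    # Copy the Lambda script(s) into the staging folder.
    for src in pathlib.Path(lambda_dir).glob("*.py"):
        shutil.copy(src, build_dir)
    if install_deps:
        # Install the libraries from requirements.txt next to the code,
        # as the build script does before zipping.
        subprocess.run(
            ["pip3", "install", "-r", str(pathlib.Path(lambda_dir) / "requirements.txt"),
             "--target", str(build_dir)],
            check=True,
        )
    # shutil.make_archive appends the .zip suffix itself.
    return shutil.make_archive(zip_path.removesuffix(".zip"), "zip", build_dir)
```

The resulting archive is what the AWS CDK stack deploys as the .zip-based Lambda function.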

The Dockerfile starts with the AWS-provided base image for Lambda with a Python runtime. The Dockerfile installs the `pandas` and `psycopg2` libraries on top of the default image.

# Integrate Amazon API Gateway with Amazon SQS to handle asynchronous REST APIs
<a name="integrate-amazon-api-gateway-with-amazon-sqs-to-handle-asynchronous-rest-apis"></a>

*Natalia Colantonio Favero and Gustavo Martim, Amazon Web Services*

## Summary
<a name="integrate-amazon-api-gateway-with-amazon-sqs-to-handle-asynchronous-rest-apis-summary"></a>

When you deploy REST APIs, sometimes you need to expose a message queue that client applications can publish to. For example, you might face latency issues and delayed responses from third-party APIs, or you might want to avoid the response time of database queries or avoid scaling the server to handle a large number of concurrent API calls. In these scenarios, the client applications that publish to the queue only need to know that the API received the data, not what happens after the data is received.

This pattern creates a REST API endpoint by using [Amazon API Gateway](https://aws.amazon.com/api-gateway/) to send a message to [Amazon Simple Queue Service (Amazon SQS)](https://aws.amazon.com/sqs/). It creates an easy-to-implement integration between the two services that avoids direct access to the SQS queue.

## Prerequisites and limitations
<a name="integrate-amazon-api-gateway-with-amazon-sqs-to-handle-asynchronous-rest-apis-prereqs"></a>
+ An [active AWS account](https://portal.aws.amazon.com/billing/signup/iam)

## Architecture
<a name="integrate-amazon-api-gateway-with-amazon-sqs-to-handle-asynchronous-rest-apis-architecture"></a>

![\[Architecture for integrating API Gateway with Amazon SQS\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/70984dee-e49f-4446-9d52-49ce826c3909/images/737ba0b2-da8f-4478-8c54-0a4835fd69f9.png)


The diagram illustrates these steps:

1. Send a POST request to the REST API endpoint by using a tool such as Postman, another API, or other technologies.

1. API Gateway posts the message, received in the request body, to the queue.

1. Amazon SQS receives the message and returns a success or failure code to API Gateway.

## Tools
<a name="integrate-amazon-api-gateway-with-amazon-sqs-to-handle-asynchronous-rest-apis-tools"></a>
+ [Amazon API Gateway](https://docs.aws.amazon.com/apigateway/latest/developerguide/welcome.html) helps you create, publish, maintain, monitor, and secure REST, HTTP, and WebSocket APIs at any scale.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.
+ [Amazon Simple Queue Service (Amazon SQS)](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/welcome.html) provides a secure, durable, and available hosted queue that helps you integrate and decouple distributed software systems and components.   

## Epics
<a name="integrate-amazon-api-gateway-with-amazon-sqs-to-handle-asynchronous-rest-apis-epics"></a>

### Create an SQS queue
<a name="create-an-sqs-queue"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a queue. | To create an SQS queue that receives the messages from the REST API:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/integrate-amazon-api-gateway-with-amazon-sqs-to-handle-asynchronous-rest-apis.html) | App developer | 

### Provide access to Amazon SQS
<a name="provide-access-to-sqs"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an IAM role. | This IAM role gives API Gateway resources full access to Amazon SQS.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/integrate-amazon-api-gateway-with-amazon-sqs-to-handle-asynchronous-rest-apis.html) | App developer, AWS administrator | 

### Create a REST API
<a name="create-a-rest-api"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a REST API. | This is the REST API that HTTP requests are sent to.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/integrate-amazon-api-gateway-with-amazon-sqs-to-handle-asynchronous-rest-apis.html) | App developer | 
| Connect API Gateway to Amazon SQS. | This step enables the message to flow from inside the HTTP request’s body to Amazon SQS.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/integrate-amazon-api-gateway-with-amazon-sqs-to-handle-asynchronous-rest-apis.html) | App developer | 

### Test the REST API
<a name="test-the-rest-api"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Test the REST API. | Run a test to check for missing configuration:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/integrate-amazon-api-gateway-with-amazon-sqs-to-handle-asynchronous-rest-apis.html) | App developer | 
| Change the API integration to forward the request properly to Amazon SQS. | Complete the configuration to fix the integration error:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/integrate-amazon-api-gateway-with-amazon-sqs-to-handle-asynchronous-rest-apis.html) | App developer | 
| Test and validate the message in Amazon SQS. | Run a test to confirm that the test completed successfully:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/integrate-amazon-api-gateway-with-amazon-sqs-to-handle-asynchronous-rest-apis.html) | App developer | 
| Test API Gateway with a special character. | Run a test that includes special characters (such as `&`) that aren't accepted in a message:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/integrate-amazon-api-gateway-with-amazon-sqs-to-handle-asynchronous-rest-apis.html) This test fails because special characters aren't supported by default in the message body. In the next step, you'll configure API Gateway to support special characters. For more information about content type conversions, see the [API Gateway documentation](https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-payload-encodings-workflow.html). | App developer | 
| Change the API configuration to support special characters. | Adjust the configuration to accept special characters in the message:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/integrate-amazon-api-gateway-with-amazon-sqs-to-handle-asynchronous-rest-apis.html)The new message should include the special character. | App developer | 
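
In Python terms, the configuration change behaves roughly like the following sketch (the exact mapping template is in the linked steps; API Gateway's `$util.urlEncode` is approximated here with `urllib.parse.quote`):

```python
from urllib.parse import quote

def sqs_integration_body(message: str) -> str:
    """Build the form-encoded body that the integration sends to SQS,
    URL-encoding the message roughly the way $util.urlEncode does."""
    return f"Action=SendMessage&MessageBody={quote(message, safe='')}"

# Without encoding, a literal "&" in the message would start a new form
# parameter ("Action=SendMessage&MessageBody=a&b"), truncating the message.
```

With encoding, `a&b` becomes `MessageBody=a%26b`, so Amazon SQS receives the full message.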

### Deploy the REST API
<a name="deploy-the-rest-api"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Deploy the API. |  To deploy the REST API:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/integrate-amazon-api-gateway-with-amazon-sqs-to-handle-asynchronous-rest-apis.html) | App developer | 
| Test with an external tool. | Run a test with an external tool to confirm that the message is received successfully:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/integrate-amazon-api-gateway-with-amazon-sqs-to-handle-asynchronous-rest-apis.html) | App developer | 

### Clean up
<a name="clean-up"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Delete the API. | On the [API Gateway console](https://console.aws.amazon.com/apigateway/), choose the API you created, and then choose **Delete**. | App developer | 
| Delete the IAM role. | On the [IAM console](https://console.aws.amazon.com/iam/), in the **Roles** pane, select **AWSGatewayRoleForSQS**, and then choose **Delete**. | App developer | 
| Delete the SQS queue. | On the [Amazon SQS console](https://console.aws.amazon.com/sqs/), in the **Queues** pane, choose the SQS queue you created, and then choose **Delete**. | App developer | 

## Related resources
<a name="integrate-amazon-api-gateway-with-amazon-sqs-to-handle-asynchronous-rest-apis-resources"></a>
+ [SQS-SendMessage](https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api-develop-integrations-aws-services-reference.html#SQS-SendMessage) (API Gateway documentation)
+ [Content type conversions in API Gateway](https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-payload-encodings-workflow.html) (API Gateway documentation)
+ [$util variables](https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-mapping-template-reference.html#util-template-reference) (API Gateway documentation)
+ [How do I integrate an API Gateway REST API with Amazon SQS and resolve common errors?](https://repost.aws/knowledge-center/api-gateway-rest-api-sqs-errors) (AWS re:Post article)

# Process events asynchronously with Amazon API Gateway and AWS Lambda
<a name="process-events-asynchronously-with-amazon-api-gateway-and-aws-lambda"></a>

*Andrea Meroni, Mariem Kthiri, Nadim Majed, and Michael Wallner, Amazon Web Services*

## Summary
<a name="process-events-asynchronously-with-amazon-api-gateway-and-aws-lambda-summary"></a>

[Amazon API Gateway](https://docs.aws.amazon.com/apigateway/latest/developerguide/welcome.html) is a fully managed service that developers can use to create, publish, maintain, monitor, and secure APIs at any scale. It handles the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls.

An important service quota of API Gateway is the integration timeout. The timeout is the maximum time in which a backend service must return a response before the REST API returns an error. The hard limit of 29 seconds is generally acceptable for synchronous workloads. However, that limit represents a challenge for those developers who want to use API Gateway with asynchronous workloads.

This pattern shows an example architecture to process events asynchronously using API Gateway and AWS Lambda. The architecture supports running processing jobs of duration up to 15 minutes, and it uses a basic REST API as the interface.

[Projen](https://pypi.org/project/projen/) is used to set up the local development environment and to deploy the example architecture to a target AWS account, in combination with the [AWS Cloud Development Kit (AWS CDK) Toolkit](https://docs.aws.amazon.com/cdk/v2/guide/cli.html), [Docker](https://docs.docker.com/get-docker/), and [Node.js](https://nodejs.org/en/download/). Projen automatically sets up a [Python](https://www.python.org/downloads/) virtual environment with [pre-commit](https://pre-commit.com/) and the tools that are used for code quality assurance, security scanning, and unit testing. For more information, see the [Tools](#process-events-asynchronously-with-amazon-api-gateway-and-aws-lambda-tools) section.

## Prerequisites and limitations
<a name="process-events-asynchronously-with-amazon-api-gateway-and-aws-lambda-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ The following tools installed on your workstation:
  + [AWS Cloud Development Kit (AWS CDK) Toolkit](https://docs.aws.amazon.com/cdk/v2/guide/cli.html) version 2.85.0
  + [Docker](https://docs.docker.com/get-docker/) version 20.10.21
  + [Node.js](https://nodejs.org/en/download/) version 18.13.0
  + [Projen](https://pypi.org/project/projen/) version 0.71.111
  + [Python](https://www.python.org/downloads/) version 3.9.16

**Limitations**
+ The maximum runtime of a job is limited by the maximum runtime for Lambda functions (15 minutes).
+ The maximum number of concurrent job requests is limited by the reserved concurrency of the Lambda function.

## Architecture
<a name="process-events-asynchronously-with-amazon-api-gateway-and-aws-lambda-architecture"></a>

The following diagram shows the interaction of the jobs API with the event-processing and error-handling Lambda functions, with events stored in an Amazon EventBridge event archive.

![\[AWS Cloud architecture showing user interaction with jobs API, Lambda functions, and EventBridge.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/e027130c-44c1-41ab-bbe9-f196a49bd9ac/images/3c437b65-48e3-477d-aeea-6ff938cc3285.png)


A typical workflow includes the following steps:

1. You authenticate against AWS Identity and Access Management (IAM) and obtain security credentials.

1. You send an HTTP `POST` request to the `/jobs` jobs API endpoint, specifying the job parameters in the request body.

1. The jobs API, which is an API Gateway REST API, returns an HTTP response that contains the job identifier.

1. The jobs API asynchronously invokes the event-processing Lambda function.

1. The event-processing function processes the event, and then it puts the job results in the `jobs` Amazon DynamoDB table.

1. You send an HTTP `GET` request to the `/jobs/{jobId}` jobs API endpoint, with the job identifier from step 3 as `{jobId}`.

1. The jobs API queries the `jobs` DynamoDB table to retrieve the job results.

1. The jobs API returns an HTTP response that contains the job results.

1. If the event processing fails, the event-processing function sends the event to the error-handling function.

1. The error-handling function puts the job parameters in the `jobs` DynamoDB table.

1. You can retrieve the job parameters by sending an HTTP `GET` request to the `/jobs/{jobId}` jobs API endpoint.

1. If the error handling fails, the error-handling function sends the event to an EventBridge event archive.

   You can replay the archived events by using EventBridge.
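
Client-side, steps 3 and 6 through 8 amount to a submit-then-poll loop. The following sketch assumes a caller-supplied `fetch_job` callable that performs the signed `GET /jobs/{jobId}` request and returns the decoded JSON body (or `None` until results exist); the `results` response field is an assumption, not the repository's exact schema:

```python
import time

def wait_for_job(job_id, fetch_job, poll_interval=2.0, timeout=900.0):
    """Poll the jobs API (via fetch_job) until results appear or the timeout
    elapses. The 900-second default mirrors the 15-minute Lambda job limit."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        body = fetch_job(job_id)
        if body and "results" in body:
            return body["results"]
        time.sleep(poll_interval)
    raise TimeoutError(f"job {job_id} did not finish within {timeout} seconds")
```

Injecting `fetch_job` keeps the polling logic independent of the HTTP client and of AWS Signature Version 4 signing.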

## Tools
<a name="process-events-asynchronously-with-amazon-api-gateway-and-aws-lambda-tools"></a>

**AWS services**
+ [AWS Cloud Development Kit (AWS CDK)](https://docs.aws.amazon.com/cdk/latest/guide/home.html) is a software development framework that helps you define and provision AWS Cloud infrastructure in code.
+ [AWS Command Line Interface (AWS CLI)](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) is an open source tool that helps you interact with AWS services through commands in your command-line shell.
+ [Amazon DynamoDB](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Introduction.html) is a fully managed NoSQL database service that provides fast, predictable, and scalable performance.
+ [Amazon EventBridge](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-what-is.html) is a serverless event bus service that helps you connect your applications with real-time data from a variety of sources, such as Lambda functions, HTTP invocation endpoints using API destinations, or event buses in other AWS accounts.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a compute service that helps you run code without needing to provision or manage servers. It runs your code only when needed and scales automatically, so you pay only for the compute time that you use.

**Other tools**
+ [autopep8](https://github.com/hhatto/autopep8) automatically formats Python code based on the Python Enhancement Proposal (PEP) 8 style guide.
+ [Bandit](https://bandit.readthedocs.io/en/latest/) scans Python code to find common security issues.
+ [Commitizen](https://commitizen-tools.github.io/commitizen/) is a Git commit checker and `CHANGELOG` generator.
+ [cfn-lint](https://github.com/aws-cloudformation/cfn-lint) is an AWS CloudFormation linter.
+ [Checkov](https://github.com/bridgecrewio/checkov) is a static code-analysis tool that checks infrastructure as code (IaC) for security and compliance misconfigurations.
+ [jq](https://stedolan.github.io/jq/download/) is a command-line tool for parsing JSON.
+ [Postman](https://www.postman.com/) is an API platform.
+ [pre-commit](https://pre-commit.com/) is a Git hooks manager.
+ [Projen](https://github.com/projen/projen) is a project generator.
+ [pytest](https://docs.pytest.org/en/7.2.x/index.html) is a Python framework for writing small, readable tests.

**Code repository**

The code for this example architecture is in the GitHub [Asynchronous Event Processing with API Gateway and Lambda](https://github.com/aws-samples/asynchronous-event-processing-api-gateway-lambda-cdk) repository.

## Best practices
<a name="process-events-asynchronously-with-amazon-api-gateway-and-aws-lambda-best-practices"></a>
+ This example architecture doesn't include monitoring of the deployed infrastructure. If your use case requires monitoring, evaluate adding [CDK Monitoring Constructs](https://constructs.dev/packages/cdk-monitoring-constructs) or another monitoring solution.
+ This example architecture uses [IAM permissions](https://docs.aws.amazon.com/apigateway/latest/developerguide/permissions.html) to control the access to the jobs API. Anyone authorized to assume the `JobsAPIInvokeRole` will be able to invoke the jobs API. As such, the access control mechanism is binary. If your use case requires a more complex authorization model, evaluate using a different [access control mechanism](https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-control-access-to-api.html).
+ When a user sends an HTTP `POST` request to the `/jobs` jobs API endpoint, the input data is validated at two different levels:
  + Amazon API Gateway is in charge of the first [request validation](https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-method-request-validation.html).
  + The event-processing function performs the second validation.

    No validation is performed when the user does an HTTP `GET` request to the `/jobs/{jobId}` jobs API endpoint. If your use case requires additional input validation and an increased level of security, evaluate [using AWS WAF to protect your API](https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-control-access-aws-waf.html).
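
As a hedged illustration of that second validation level, the event-processing function might apply a check like the following (the `seconds` parameter and its bounds are hypothetical; the real schema is in the pattern's repository):

```python
def validate_job_parameters(body) -> list:
    """Second-level validation of the job parameters, returning a list of
    problems (empty if the body is valid)."""
    if not isinstance(body, dict):
        return ["request body must be a JSON object"]
    errors = []
    seconds = body.get("seconds")
    # 900 seconds matches the 15-minute Lambda runtime limit noted above.
    if not isinstance(seconds, int) or not 1 <= seconds <= 900:
        errors.append("'seconds' must be an integer between 1 and 900")
    return errors
```

Returning a list of problems (rather than raising on the first one) lets the function report all validation failures in a single error response.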

## Epics
<a name="process-events-asynchronously-with-amazon-api-gateway-and-aws-lambda-epics"></a>

### Set up the environment
<a name="set-up-the-environment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clone the repository. | To clone the repository locally, run the following command:<pre>git clone https://github.com/aws-samples/asynchronous-event-processing-api-gateway-lambda-cdk.git</pre> | DevOps engineer | 
| Set up the project. | Change the directory to the repository root and set up the Python virtual environment and all the tools by using [Projen](https://github.com/projen/projen):<pre>cd asynchronous-event-processing-api-gateway-lambda-cdk<br />npx projen</pre> | DevOps engineer | 
| Install pre-commit hooks. | To install pre-commit hooks, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/process-events-asynchronously-with-amazon-api-gateway-and-aws-lambda.html) | DevOps engineer | 

### Deploy the example architecture
<a name="deploy-the-example-architecture"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Bootstrap AWS CDK. | To bootstrap AWS CDK in your AWS account, run the following command:<pre>AWS_PROFILE=$YOUR_AWS_PROFILE npx projen bootstrap</pre> | AWS DevOps | 
| Deploy the example architecture. | To deploy the example architecture in your AWS account, run the following command:<pre>AWS_PROFILE=$YOUR_AWS_PROFILE npx projen deploy</pre> | AWS DevOps | 

### Test the architecture
<a name="test-the-architecture"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Install test prerequisites. | Install the [AWS Command Line Interface (AWS CLI)](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html), [Postman](https://www.postman.com/downloads/), and [jq](https://jqlang.github.io/jq/) on your workstation. Using [Postman](https://www.postman.com/downloads/) to test this example architecture is suggested but not mandatory. If you choose an alternative API testing tool, make sure that it supports [AWS Signature Version 4 authentication](https://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-authenticating-requests.html), and refer to the exposed API endpoints, which you can inspect by [exporting the REST API](https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-export-api.html). | DevOps engineer | 
| Assume the `JobsAPIInvokeRole`. | [Assume](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/sts/assume-role.html) the `JobsAPIInvokeRole` that was printed as output from the deploy command:<pre>CREDENTIALS=$(AWS_PROFILE=$YOUR_AWS_PROFILE aws sts assume-role \<br />--no-cli-pager \<br />--role-arn $JOBS_API_INVOKE_ROLE_ARN \<br />--role-session-name JobsAPIInvoke)<br />export AWS_ACCESS_KEY_ID=$(echo $CREDENTIALS | jq -r '.Credentials.AccessKeyId')<br />export AWS_SECRET_ACCESS_KEY=$(echo $CREDENTIALS | jq -r '.Credentials.SecretAccessKey')<br />export AWS_SESSION_TOKEN=$(echo $CREDENTIALS | jq -r '.Credentials.SessionToken')</pre> | AWS DevOps | 
| Configure Postman. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/process-events-asynchronously-with-amazon-api-gateway-and-aws-lambda.html) | AWS DevOps | 
| Test the example architecture. | To test the example architecture, [send requests](https://learning.postman.com/docs/sending-requests/requests/#next-steps) to the jobs API. For more information, see the [Postman documentation](https://learning.postman.com/docs/getting-started/first-steps/sending-the-first-request/#send-an-api-request). | DevOps engineer | 

## Troubleshooting
<a name="process-events-asynchronously-with-amazon-api-gateway-and-aws-lambda-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| Destruction and subsequent redeployment of the example architecture fails because the [Amazon CloudWatch Logs log group](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html) `/aws/apigateway/JobsAPIAccessLogs` already exists. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/process-events-asynchronously-with-amazon-api-gateway-and-aws-lambda.html) | 

## Related resources
<a name="process-events-asynchronously-with-amazon-api-gateway-and-aws-lambda-resources"></a>
+ [API Gateway mapping template and access logging variable reference](https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-mapping-template-reference.html)
+ [Set up asynchronous invocation of the backend Lambda function](https://docs.aws.amazon.com/apigateway/latest/developerguide/set-up-lambda-integration-async.html)

# Process events asynchronously with Amazon API Gateway and Amazon DynamoDB Streams
<a name="processing-events-asynchronously-with-amazon-api-gateway-and-amazon-dynamodb-streams"></a>

*Andrea Meroni, Mariem Kthiri, Nadim Majed, Alessandro Trisolini, and Michael Wallner, Amazon Web Services*

## Summary
<a name="processing-events-asynchronously-with-amazon-api-gateway-and-amazon-dynamodb-streams-summary"></a>

[Amazon API Gateway](https://docs.aws.amazon.com/apigateway/latest/developerguide/welcome.html) is a fully managed service that developers can use to create, publish, maintain, monitor, and secure APIs at any scale. It handles the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls.

An important service quota of API Gateway is the integration timeout. The timeout is the maximum time in which a backend service must return a response before the REST API returns an error. The hard limit of 29 seconds is generally acceptable for synchronous workloads. However, that limit represents a challenge for those developers who want to use API Gateway with asynchronous workloads.

This pattern shows an example architecture for processing events asynchronously using API Gateway, Amazon DynamoDB Streams, and AWS Lambda. The architecture supports running parallel processing jobs with the same input parameters, and it uses a basic REST API as the interface. In this example, using Lambda as the backend limits the duration of jobs to 15 minutes. You can avoid this limit by using an alternative service to process incoming events (for example, AWS Fargate).

[Projen](https://pypi.org/project/projen/) is used to set up the local development environment and to deploy the example architecture to a target AWS account, in combination with the [AWS Cloud Development Kit (AWS CDK) Toolkit](https://docs.aws.amazon.com/cdk/v2/guide/cli.html), [Docker](https://docs.docker.com/get-docker/) and [Node.js](https://nodejs.org/en/download/). Projen automatically sets up a [Python](https://www.python.org/downloads/) virtual environment with [pre-commit](https://pre-commit.com/) and the tools that are used for code quality assurance, security scanning, and unit testing. For more information, see the [Tools](#processing-events-asynchronously-with-amazon-api-gateway-and-amazon-dynamodb-streams-tools) section.

## Prerequisites and limitations
<a name="processing-events-asynchronously-with-amazon-api-gateway-and-amazon-dynamodb-streams-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ The following tools installed on your workstation:
  + [AWS Cloud Development Kit (AWS CDK) Toolkit](https://docs.aws.amazon.com/cdk/v2/guide/cli.html) version 2.85.0 or later
  + [Docker](https://docs.docker.com/get-docker/) version 20.10.21 or later
  + [Node.js](https://nodejs.org/en/download/) version 18 or later
  + [Projen](https://pypi.org/project/projen/) version 0.71.111 or later
  + [Python](https://www.python.org/downloads/) version 3.9.16 or later

**Limitations**
+ To avoid throttling, the advised maximum number of readers per DynamoDB Streams shard is two.
+ The maximum runtime of a job is limited by the maximum runtime for Lambda functions (15 minutes).
+ The maximum number of concurrent job requests is limited by the reserved concurrency of the Lambda functions.

## Architecture
<a name="processing-events-asynchronously-with-amazon-api-gateway-and-amazon-dynamodb-streams-architecture"></a>

The following diagram shows the interaction of the jobs API with DynamoDB Streams and the event-processing and error-handling Lambda functions, with events stored in an Amazon EventBridge event archive.

![\[Diagram of architecture and process, with steps listed after the diagram.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/68a46501-16e5-48e4-99c6-fc67a8b4133a/images/29fe6982-ad81-4099-9c65-08b17c96e78f.png)


A typical workflow includes the following steps:

1. You authenticate against AWS Identity and Access Management (IAM) and obtain security credentials.

1. You send an HTTP `POST` request to the `/jobs` jobs API endpoint, specifying the job parameters in the request body.

1. The jobs API returns an HTTP response that contains the job identifier.

1. The jobs API puts the job parameters in the `jobs_table` Amazon DynamoDB table.

1. The DynamoDB stream of the `jobs_table` table invokes the event-processing Lambda functions.

1. The event-processing Lambda functions process the event and then put the job results in the `jobs_table` DynamoDB table. To help ensure consistent results, the event-processing functions implement an [optimistic locking](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DynamoDBMapper.OptimisticLocking.html) mechanism.

1. You send an HTTP `GET` request to the `/jobs/{jobId}` jobs API endpoint, with the job identifier from step 3 as `{jobId}`.

1. The jobs API queries the `jobs_table` DynamoDB table to retrieve the job results.

1. The jobs API returns an HTTP response that contains the job results.

1. If the event processing fails, the event-processing function's event source mapping sends the event to the error-handling Amazon Simple Notification Service (Amazon SNS) topic.

1. The error-handling SNS topic asynchronously pushes the event to the error-handling function.

1. The error-handling function puts the job parameters in the `jobs_table` DynamoDB table.

   You can retrieve the job parameters by sending an HTTP `GET` request to the `/jobs/{jobId}` jobs API endpoint.

1. If the error handling fails, the error-handling function sends the event to an Amazon EventBridge archive.

   You can replay the archived events by using EventBridge.
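
Step 6's optimistic locking can be sketched as a conditional DynamoDB write. The following is an illustrative builder, not the repository's code; the `jobId`, `results`, and `version` attribute names are assumptions. The returned dict is intended for a `boto3` `Table.update_item(**kwargs)` call:

```python
def optimistic_update_kwargs(job_id: str, results: dict, expected_version: int) -> dict:
    """Build update_item kwargs for a version-checked (optimistic-locking)
    write to the jobs table. The write succeeds only if no other reader has
    bumped the version attribute since this job record was read."""
    return {
        "Key": {"jobId": job_id},
        "UpdateExpression": "SET #r = :r, #v = :new",
        "ConditionExpression": "#v = :expected",
        # Placeholders keep the expressions safe from DynamoDB reserved words.
        "ExpressionAttributeNames": {"#r": "results", "#v": "version"},
        "ExpressionAttributeValues": {
            ":r": results,
            ":expected": expected_version,
            ":new": expected_version + 1,
        },
    }
```

If a concurrent writer wins the race, DynamoDB rejects the update with a `ConditionalCheckFailedException`, and the event-processing function can re-read the item and retry.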

## Tools
<a name="processing-events-asynchronously-with-amazon-api-gateway-and-amazon-dynamodb-streams-tools"></a>

**AWS services**
+ [AWS Cloud Development Kit (AWS CDK)](https://docs.aws.amazon.com/cdk/v2/guide/home.html) is a software development framework that helps you define and provision AWS Cloud infrastructure in code.
+ [Amazon DynamoDB](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Introduction.html) is a fully managed NoSQL database service that provides fast, predictable, and scalable performance.
+ [Amazon EventBridge](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-what-is.html) is a serverless event bus service that helps you connect your applications with real-time data from a variety of sources, such as AWS Lambda functions, HTTP invocation endpoints using API destinations, or event buses in other AWS accounts.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a compute service that helps you run code without needing to provision or manage servers. It runs your code only when needed and scales automatically, so you pay only for the compute time that you use.
+ [Amazon Simple Notification Service (Amazon SNS)](https://docs.aws.amazon.com/sns/latest/dg/welcome.html) helps you coordinate and manage the exchange of messages between publishers and clients, including web servers and email addresses.

**Other tools**
+ [autopep8](https://github.com/hhatto/autopep8) automatically formats Python code based on the Python Enhancement Proposal (PEP) 8 style guide.
+ [Bandit](https://bandit.readthedocs.io/en/latest/) scans Python code to find common security issues.
+ [Commitizen](https://commitizen-tools.github.io/commitizen/) is a Git commit checker and `CHANGELOG` generator.
+ [cfn-lint](https://github.com/aws-cloudformation/cfn-lint) is an AWS CloudFormation linter.
+ [Checkov](https://github.com/bridgecrewio/checkov) is a static code-analysis tool that checks infrastructure as code (IaC) for security and compliance misconfigurations.
+ [jq](https://stedolan.github.io/jq/download/) is a command-line tool for parsing JSON.
+ [Postman](https://www.postman.com/) is an API platform.
+ [pre-commit](https://pre-commit.com/) is a Git hooks manager.
+ [Projen](https://github.com/projen/projen) is a project generator.
+ [pytest](https://docs.pytest.org/en/7.2.x/index.html) is a Python framework for writing small, readable tests.

**Code repository**

The code for this example architecture is available in the GitHub [Asynchronous Processing with API Gateway and DynamoDB Streams](https://github.com/aws-samples/asynchronous-event-processing-api-gateway-dynamodb-streams-cdk) repository.

## Best practices
<a name="processing-events-asynchronously-with-amazon-api-gateway-and-amazon-dynamodb-streams-best-practices"></a>
+ This example architecture doesn't include monitoring of the deployed infrastructure. If your use case requires monitoring, evaluate adding [CDK Monitoring Constructs](https://constructs.dev/packages/cdk-monitoring-constructs) or another monitoring solution.
+ This example architecture uses [IAM permissions](https://docs.aws.amazon.com/apigateway/latest/developerguide/permissions.html) to control the access to the jobs API. Anyone authorized to assume the `JobsAPIInvokeRole` will be able to invoke the jobs API. As such, the access control mechanism is binary. If your use case requires a more complex authorization model, evaluate using a different [access control mechanism](https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-control-access-to-api.html).
+ When a user sends an HTTP `POST` request to the `/jobs` jobs API endpoint, the input data is validated at two different levels:
  + API Gateway is in charge of the first [request validation](https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-method-request-validation.html).
  + The event-processing function performs the second validation.

    No validation is performed when the user sends an HTTP `GET` request to the `/jobs/{jobId}` jobs API endpoint. If your use case requires additional input validation and an increased level of security, evaluate [using AWS WAF to protect your API](https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-control-access-aws-waf.html).
+ To avoid throttling, the [DynamoDB Streams documentation](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.html#Streams.Processing) discourages users from reading with more than two consumers from the same stream’s shard. To scale out the number of consumers, we recommend using [Amazon Kinesis Data Streams](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/kds.html).
+ This example uses [optimistic locking](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DynamoDBMapper.OptimisticLocking.html) to ensure consistent updates of items in the `jobs_table` DynamoDB table. Depending on your use case requirements, you might need to implement a stricter locking mechanism, such as pessimistic locking.
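
The optimistic-locking practice above can be illustrated with a DynamoDB conditional update. This sketch is illustrative rather than the repository's actual implementation; the key and attribute names (`id`, `version`, `results`) are assumptions.

```python
def build_versioned_update(job_id, results, expected_version):
    """Conditional-update parameters implementing optimistic locking:
    the write succeeds only if the stored version still matches the one
    the caller read (pure helper; attribute names are hypothetical)."""
    return {
        "Key": {"id": job_id},
        "UpdateExpression": "SET results = :r, version = :new",
        "ConditionExpression": "version = :old",
        "ExpressionAttributeValues": {
            ":r": results,
            ":old": expected_version,
            ":new": expected_version + 1,
        },
    }

def update_job(job_id, results, expected_version, table_name="jobs_table"):
    # Raises ConditionalCheckFailedException if another writer updated
    # the item first; boto3 is imported lazily to keep the helper pure.
    import boto3
    table = boto3.resource("dynamodb").Table(table_name)
    return table.update_item(**build_versioned_update(job_id, results, expected_version))
```

On a `ConditionalCheckFailedException`, the caller re-reads the item and retries with the fresh version number, which is what makes the scheme optimistic.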

## Epics
<a name="processing-events-asynchronously-with-amazon-api-gateway-and-amazon-dynamodb-streams-epics"></a>

### Set up the environment
<a name="set-up-the-environment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clone the repository. | To clone the repository locally, run the following command:<pre>git clone https://github.com/aws-samples/asynchronous-event-processing-api-gateway-dynamodb-streams-cdk.git</pre> | DevOps engineer | 
| Set up the project. | Change the directory to the repository root, and set up the Python virtual environment and all the tools by using [Projen](https://github.com/projen/projen):<pre>cd asynchronous-event-processing-api-gateway-dynamodb-streams-cdk<br />npx projen</pre> | DevOps engineer | 
| Install pre-commit hooks. | To install pre-commit hooks, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/processing-events-asynchronously-with-amazon-api-gateway-and-amazon-dynamodb-streams.html) | DevOps engineer | 

### Deploy the example architecture
<a name="deploy-the-example-architecture"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Bootstrap AWS CDK. | To bootstrap [AWS CDK](https://aws.amazon.com/cdk/) in your AWS account, run the following command:<pre>AWS_PROFILE=$YOUR_AWS_PROFILE npx projen bootstrap</pre> | AWS DevOps | 
| Deploy the example architecture. | To deploy the example architecture in your AWS account, run the following command:<pre>AWS_PROFILE=$YOUR_AWS_PROFILE npx projen deploy</pre> | AWS DevOps | 

### Test the architecture
<a name="test-the-architecture"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Install test prerequisites. | Install the [AWS Command Line Interface (AWS CLI)](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html), [Postman](https://www.postman.com/downloads/), and [jq](https://jqlang.github.io/jq/) on your workstation. Using [Postman](https://www.postman.com/downloads/) to test this example architecture is suggested but not mandatory. If you choose an alternative API testing tool, make sure that it supports [AWS Signature Version 4 authentication](https://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-authenticating-requests.html), and refer to the exposed API endpoints, which you can inspect by [exporting the REST API](https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-export-api.html). | DevOps engineer | 
| Assume the `JobsAPIInvokeRole`. | [Assume](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/sts/assume-role.html) the `JobsAPIInvokeRole` that was printed as output from the `deploy` command:<pre>CREDENTIALS=$(AWS_PROFILE=$YOUR_AWS_PROFILE aws sts assume-role \<br />--no-cli-pager \<br />--role-arn $JOBS_API_INVOKE_ROLE_ARN \<br />--role-session-name JobsAPIInvoke)<br />export AWS_ACCESS_KEY_ID=$(echo $CREDENTIALS | jq -r '.Credentials.AccessKeyId')<br />export AWS_SECRET_ACCESS_KEY=$(echo $CREDENTIALS | jq -r '.Credentials.SecretAccessKey')<br />export AWS_SESSION_TOKEN=$(echo $CREDENTIALS | jq -r '.Credentials.SessionToken')</pre> | AWS DevOps | 
| Configure Postman. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/processing-events-asynchronously-with-amazon-api-gateway-and-amazon-dynamodb-streams.html) | AWS DevOps | 
| Test the example architecture. | To test the example architecture, send requests to the jobs API. For more information, see the [Postman documentation](https://learning.postman.com/docs/getting-started/first-steps/sending-the-first-request/#send-an-api-request). | DevOps engineer | 
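
If you prefer scripting the test instead of using Postman, a Signature Version 4-signed request can be built with `botocore`. This sketch assumes the credentials from the assume-role step are exported in the environment and that `botocore` and `requests` are installed; the API ID and stage are placeholders.

```python
def jobs_endpoint(api_id, region, stage, job_id):
    """Build the /jobs/{jobId} URL for a deployed REST API (pure helper;
    api_id and stage are hypothetical placeholders)."""
    return f"https://{api_id}.execute-api.{region}.amazonaws.com/{stage}/jobs/{job_id}"

def signed_get(url, region):
    # Sign the request with SigV4 for the execute-api service, which is
    # what API Gateway IAM authorization verifies. Lazy imports keep the
    # pure helper above usable without the SDK installed.
    import boto3
    import requests
    from botocore.auth import SigV4Auth
    from botocore.awsrequest import AWSRequest

    credentials = boto3.Session().get_credentials()
    request = AWSRequest(method="GET", url=url)
    SigV4Auth(credentials, "execute-api", region).add_auth(request)
    return requests.get(url, headers=dict(request.headers))

# Hypothetical usage:
# response = signed_get(jobs_endpoint("abc123", "eu-west-1", "prod", "some-job-id"), "eu-west-1")
# print(response.json())
```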

## Troubleshooting
<a name="processing-events-asynchronously-with-amazon-api-gateway-and-amazon-dynamodb-streams-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| Destruction and subsequent redeployment of the example architecture fails because the [Amazon CloudWatch Logs log group](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html) `/aws/apigateway/JobsAPIAccessLogs` already exists. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/processing-events-asynchronously-with-amazon-api-gateway-and-amazon-dynamodb-streams.html) | 

## Related resources
<a name="processing-events-asynchronously-with-amazon-api-gateway-and-amazon-dynamodb-streams-resources"></a>
+ [API Gateway mapping template and access logging variable reference](https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-mapping-template-reference.html)
+ [Change data capture for DynamoDB Streams](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.html)
+ [Optimistic locking with version number](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DynamoDBMapper.OptimisticLocking.html)
+ [Using Kinesis Data Streams to capture changes to DynamoDB](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/kds.html)

# Process events asynchronously with Amazon API Gateway, Amazon SQS, and AWS Fargate
<a name="process-events-asynchronously-with-amazon-api-gateway-amazon-sqs-and-aws-fargate"></a>

*Andrea Meroni, Mariem Kthiri, Nadim Majed, Alessandro Trisolini, and Michael Wallner, Amazon Web Services*

## Summary
<a name="process-events-asynchronously-with-amazon-api-gateway-amazon-sqs-and-aws-fargate-summary"></a>

[Amazon API Gateway](https://docs.aws.amazon.com/apigateway/latest/developerguide/welcome.html) is a fully managed service that developers can use to create, publish, maintain, monitor, and secure APIs at any scale. It handles the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls.

An important service quota of API Gateway is the integration timeout, which is the maximum time that a backend service has to return a response before the REST API returns an error. The hard limit of 29 seconds is generally acceptable for synchronous workloads, but it represents a challenge for developers who want to use API Gateway with asynchronous workloads.

This pattern shows an example architecture for processing events asynchronously by using API Gateway, Amazon Simple Queue Service (Amazon SQS), and AWS Fargate. The architecture supports running processing jobs without duration restrictions, and it uses a basic REST API as the interface.

[Projen](https://pypi.org/project/projen/) is used to set up the local development environment and to deploy the example architecture to a target AWS account, in combination with the [AWS Cloud Development Kit (AWS CDK)](https://docs.aws.amazon.com/cdk/v2/guide/cli.html), [Docker](https://docs.docker.com/get-docker/), and [Node.js](https://nodejs.org/en/download/). Projen automatically sets up a [Python](https://www.python.org/downloads/) virtual environment with [pre-commit](https://pre-commit.com/) and the tools that are used for code quality assurance, security scanning, and unit testing. For more information, see the [Tools](#process-events-asynchronously-with-amazon-api-gateway-amazon-sqs-and-aws-fargate-tools) section.

## Prerequisites and limitations
<a name="process-events-asynchronously-with-amazon-api-gateway-amazon-sqs-and-aws-fargate-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ The following tools installed on your workstation:
  + [AWS Cloud Development Kit (AWS CDK) Toolkit](https://docs.aws.amazon.com/cdk/v2/guide/cli.html) version 2.85.0 or later
  + [Docker](https://docs.docker.com/get-docker/) version 20.10.21 or later
  + [Node.js](https://nodejs.org/en/download/) version 18 or later
  + [Projen](https://pypi.org/project/projen/) version 0.71.111 or later
  + [Python](https://www.python.org/downloads/) version 3.9.16 or later

**Limitations**
+ Concurrent jobs are limited to 500 tasks per minute, which is the maximum number of tasks that Fargate can provision.

## Architecture
<a name="process-events-asynchronously-with-amazon-api-gateway-amazon-sqs-and-aws-fargate-architecture"></a>

The following diagram shows the interaction of the jobs API with the `jobs` Amazon DynamoDB table, the event-processing Fargate service, and the error-handling AWS Lambda function. Events are stored in an Amazon EventBridge event archive.

![\[Architecture diagram with description following the diagram.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/8a03149c-8f34-4593-84d5-accc1800a0a2/images/5e1071aa-4fbc-495c-bc22-8e62a32a136b.png)


A typical workflow includes the following steps:

1. You authenticate against AWS Identity and Access Management (IAM) and obtain security credentials.

1. You send an HTTP `POST` request to the `/jobs` jobs API endpoint, specifying the job parameters in the request body.

1. The jobs API, which is an API Gateway REST API, returns to you an HTTP response that contains the job identifier.

1. The jobs API sends a message to the SQS queue.

1. Fargate pulls the message from the SQS queue, processes the event, and then puts the job results in the `jobs` DynamoDB table.

1. You send an HTTP `GET` request to the `/jobs/{jobId}` jobs API endpoint, with the job identifier from step 3 as `{jobId}`.

1. The jobs API queries the `jobs` DynamoDB table to retrieve the job results.

1. The jobs API returns an HTTP response that contains the job results.

1. If the event processing fails, the SQS queue sends the event to the dead-letter queue (DLQ).

1. An EventBridge event initiates the error-handling function.

1. The error-handling function puts the job parameters in the `jobs` DynamoDB table.

1. You can retrieve the job parameters by sending an HTTP `GET` request to the `/jobs/{jobId}` jobs API endpoint.

1. If the error handling fails, the error-handling function sends the event to an EventBridge archive.

   You can replay the archived events by using EventBridge.
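
Steps 5 and 9 can be sketched from the container's side as an SQS polling loop. This is a hedged sketch, not the repository's actual worker: the message shape, table key, and `process` function are hypothetical.

```python
import json

def parse_job_message(body):
    """Extract the job identifier and parameters from an SQS message body
    (pure helper; the payload shape is an assumption)."""
    payload = json.loads(body)
    return payload["jobId"], payload.get("parameters", {})

def worker_loop(queue_url, table_name="jobs"):
    # Long-poll the queue, process each job, persist the results, and
    # only then delete the message. An unhandled failure leaves the
    # message in place, so SQS eventually moves it to the dead-letter
    # queue (step 9). boto3 is imported lazily to keep the parser pure.
    import boto3
    sqs = boto3.client("sqs")
    table = boto3.resource("dynamodb").Table(table_name)
    while True:
        response = sqs.receive_message(
            QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=20
        )
        for message in response.get("Messages", []):
            job_id, params = parse_job_message(message["Body"])
            results = process(params)  # hypothetical long-running job logic
            table.put_item(Item={"id": job_id, "results": results})
            sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])
```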

## Tools
<a name="process-events-asynchronously-with-amazon-api-gateway-amazon-sqs-and-aws-fargate-tools"></a>

**AWS services**
+ [AWS Cloud Development Kit (AWS CDK)](https://docs.aws.amazon.com/cdk/v2/guide/home.html) is a software development framework that helps you define and provision AWS Cloud infrastructure in code.
+ [Amazon DynamoDB](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Introduction.html) is a fully managed NoSQL database service that provides fast, predictable, and scalable performance.
+ [AWS Fargate](https://docs.aws.amazon.com/AmazonECS/latest/userguide/what-is-fargate.html) helps you run containers without needing to manage servers or Amazon Elastic Compute Cloud (Amazon EC2) instances. It’s used in conjunction with Amazon Elastic Container Service (Amazon ECS).
+ [Amazon EventBridge](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-what-is.html) is a serverless event bus service that helps you connect your applications with real-time data from a variety of sources, such as Lambda functions, HTTP invocation endpoints using API destinations, or event buses in other AWS accounts.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a compute service that helps you run code without needing to provision or manage servers. It runs your code only when needed and scales automatically, so you pay only for the compute time that you use.
+ [Amazon Simple Queue Service (Amazon SQS)](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/welcome.html) provides a secure, durable, and available hosted queue that helps you integrate and decouple distributed software systems and components.

**Other tools**
+ [autopep8](https://github.com/hhatto/autopep8) automatically formats Python code based on the Python Enhancement Proposal (PEP) 8 style guide.
+ [Bandit](https://bandit.readthedocs.io/en/latest/) scans Python code to find common security issues.
+ [Commitizen](https://commitizen-tools.github.io/commitizen/) is a Git commit checker and `CHANGELOG` generator.
+ [cfn-lint](https://github.com/aws-cloudformation/cfn-lint) is an AWS CloudFormation linter.
+ [Checkov](https://github.com/bridgecrewio/checkov) is a static code-analysis tool that checks infrastructure as code (IaC) for security and compliance misconfigurations.
+ [jq](https://stedolan.github.io/jq/download/) is a command-line tool for parsing JSON.
+ [Postman](https://www.postman.com/) is an API platform.
+ [pre-commit](https://pre-commit.com/) is a Git hooks manager.
+ [Projen](https://github.com/projen/projen) is a project generator.
+ [pytest](https://docs.pytest.org/en/7.2.x/index.html) is a Python framework for writing small, readable tests.

**Code repository**

The code for this example architecture is available in the GitHub [Asynchronous Processing with API Gateway and SQS](https://github.com/aws-samples/asynchronous-event-processing-api-gateway-sqs-cdk) repository.

## Best practices
<a name="process-events-asynchronously-with-amazon-api-gateway-amazon-sqs-and-aws-fargate-best-practices"></a>
+ This example architecture doesn't include monitoring of the deployed infrastructure. If your use case requires monitoring, evaluate adding [CDK Monitoring Constructs](https://constructs.dev/packages/cdk-monitoring-constructs) or another monitoring solution.
+ This example architecture uses [IAM permissions](https://docs.aws.amazon.com/apigateway/latest/developerguide/permissions.html) to control the access to the jobs API. Anyone authorized to assume the `JobsAPIInvokeRole` will be able to invoke the jobs API. As such, the access control mechanism is binary. If your use case requires a more complex authorization model, evaluate using a different [access control mechanism](https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-control-access-to-api.html).
+ When a user sends an HTTP `POST` request to the `/jobs` jobs API endpoint, the input data is validated at two different levels:
  + API Gateway is in charge of the first [request validation](https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-method-request-validation.html).
  + The event-processing function performs the second validation.

    No validation is performed when the user sends an HTTP `GET` request to the `/jobs/{jobId}` jobs API endpoint. If your use case requires additional input validation and an increased level of security, evaluate [using AWS WAF to protect your API](https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-control-access-aws-waf.html).
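
The second-level validation can be sketched as a plain function inside the event-processing service. The field names and bounds here are hypothetical; the repository defines its own rules.

```python
def validate_job_params(payload):
    """Second-level validation of the POST /jobs request body.
    Illustrative rules only: the `seconds` field and its bounds
    are assumptions, not the repository's actual schema."""
    if not isinstance(payload, dict):
        return ["payload must be a JSON object"]
    errors = []
    seconds = payload.get("seconds")
    if isinstance(seconds, bool) or not isinstance(seconds, int) or not 0 <= seconds <= 900:
        errors.append("seconds must be an integer between 0 and 900")
    unknown = set(payload) - {"seconds"}
    if unknown:
        errors.append(f"unknown fields: {sorted(unknown)}")
    return errors
```

Returning a list of errors (rather than raising on the first one) lets the API report every problem with the request in a single response.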

## Epics
<a name="process-events-asynchronously-with-amazon-api-gateway-amazon-sqs-and-aws-fargate-epics"></a>

### Set up the environment
<a name="set-up-the-environment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clone the repository. | To clone the repository locally, run the following command:<pre>git clone https://github.com/aws-samples/asynchronous-event-processing-api-gateway-sqs-cdk.git</pre> | DevOps engineer | 
| Set up the project. | Change the directory to the repository root, and set up the Python virtual environment and all the tools by using [Projen](https://github.com/projen/projen):<pre>cd asynchronous-event-processing-api-gateway-sqs-cdk<br />npx projen</pre> | DevOps engineer | 
| Install pre-commit hooks. | To install pre-commit hooks, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/process-events-asynchronously-with-amazon-api-gateway-amazon-sqs-and-aws-fargate.html) | DevOps engineer | 

### Deploy the example architecture
<a name="deploy-the-example-architecture"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Bootstrap AWS CDK. | To bootstrap [AWS CDK](https://aws.amazon.com/cdk/) in your AWS account, run the following command:<pre>AWS_PROFILE=$YOUR_AWS_PROFILE npx projen bootstrap</pre> | AWS DevOps | 
| Deploy the example architecture. | To deploy the example architecture in your AWS account, run the following command:<pre>AWS_PROFILE=$YOUR_AWS_PROFILE npx projen deploy</pre> | AWS DevOps | 

### Test the architecture
<a name="test-the-architecture"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Install test prerequisites. | Install the [AWS Command Line Interface (AWS CLI)](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html), [Postman](https://www.postman.com/downloads/), and [jq](https://jqlang.github.io/jq/) on your workstation. Using [Postman](https://www.postman.com/downloads/) to test this example architecture is suggested but not mandatory. If you choose an alternative API testing tool, make sure that it supports [AWS Signature Version 4 authentication](https://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-authenticating-requests.html), and refer to the exposed API endpoints, which you can inspect by [exporting the REST API](https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-export-api.html). | DevOps engineer | 
| Assume the `JobsAPIInvokeRole`. | [Assume](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/sts/assume-role.html) the `JobsAPIInvokeRole` that was printed as output from the `deploy` command:<pre>CREDENTIALS=$(AWS_PROFILE=$YOUR_AWS_PROFILE aws sts assume-role \<br />--no-cli-pager \<br />--role-arn $JOBS_API_INVOKE_ROLE_ARN \<br />--role-session-name JobsAPIInvoke)<br />export AWS_ACCESS_KEY_ID=$(echo $CREDENTIALS | jq -r '.Credentials.AccessKeyId')<br />export AWS_SECRET_ACCESS_KEY=$(echo $CREDENTIALS | jq -r '.Credentials.SecretAccessKey')<br />export AWS_SESSION_TOKEN=$(echo $CREDENTIALS | jq -r '.Credentials.SessionToken')</pre> | AWS DevOps | 
| Configure Postman. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/process-events-asynchronously-with-amazon-api-gateway-amazon-sqs-and-aws-fargate.html) | AWS DevOps | 
| Test the example architecture. | To test the example architecture, send requests to the jobs API. For more information, see the [Postman documentation](https://learning.postman.com/docs/getting-started/first-steps/sending-the-first-request/#send-an-api-request). | DevOps engineer | 

## Troubleshooting
<a name="process-events-asynchronously-with-amazon-api-gateway-amazon-sqs-and-aws-fargate-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| Destruction and subsequent redeployment of the example architecture fails because the [Amazon CloudWatch Logs log group](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html) `/aws/apigateway/JobsAPIAccessLogs` already exists. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/process-events-asynchronously-with-amazon-api-gateway-amazon-sqs-and-aws-fargate.html) | 
| Destruction and subsequent redeployment of the example architecture fails because the [CloudWatch Logs log group](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html) `/aws/ecs/EventProcessingServiceLogs` already exists. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/process-events-asynchronously-with-amazon-api-gateway-amazon-sqs-and-aws-fargate.html) | 

## Related resources
<a name="process-events-asynchronously-with-amazon-api-gateway-amazon-sqs-and-aws-fargate-resources"></a>
+ [API Gateway mapping template and access logging variable reference](https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-mapping-template-reference.html)
+ [How do I integrate an API Gateway REST API with Amazon SQS and resolve common errors?](https://aws.amazon.com/premiumsupport/knowledge-center/api-gateway-rest-api-sqs-errors/)

# Run AWS Systems Manager Automation tasks synchronously from AWS Step Functions
<a name="run-aws-systems-manager-automation-tasks-synchronously-from-aws-step-functions"></a>

*Elie El khoury, Amazon Web Services*

## Summary
<a name="run-aws-systems-manager-automation-tasks-synchronously-from-aws-step-functions-summary"></a>

This pattern explains how to integrate AWS Step Functions with AWS Systems Manager. It uses AWS SDK service integrations to call the Systems Manager **startAutomationExecution** API with a task token from a state machine workflow, and pauses until the token returns with a success or failure call. To demonstrate the integration, this pattern implements an Automation document (runbook) wrapper around the `AWS-RunShellScript` or `AWS-RunPowerShellScript` document, and uses `.waitForTaskToken` to synchronously call `AWS-RunShellScript` or `AWS-RunPowerShellScript`. For more information about AWS SDK service integrations in Step Functions, see the [AWS Step Functions Developer Guide](https://docs.aws.amazon.com/step-functions/latest/dg/supported-services-awssdk.html).

Step Functions is a low-code, visual workflow service that you can use to build distributed applications, automate IT and business processes, and build data and machine learning pipelines by using AWS services. Workflows manage failures, retries, parallelization, service integrations, and observability so you can focus on higher-value business logic.

Automation, a capability of AWS Systems Manager, simplifies common maintenance, deployment, and remediation tasks for AWS services such as Amazon Elastic Compute Cloud (Amazon EC2), Amazon Relational Database Service (Amazon RDS), Amazon Redshift, and Amazon Simple Storage Service (Amazon S3). Automation gives you granular control over the concurrency of your automations. For example, you can specify how many resources to target concurrently, and how many errors can occur before an automation is stopped.

For implementation details, including runbook steps, parameters, and examples, see the [Additional information](#run-aws-systems-manager-automation-tasks-synchronously-from-aws-step-functions-additional) section.
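
The callback half of the integration can be sketched with the Step Functions SDK: the runbook's final step calls `send_task_success` or `send_task_failure` with the task token it received, which releases the paused `.waitForTaskToken` state. The output payload shape below is an assumption, not the runbook's actual schema.

```python
import json

def build_success_output(execution_id, status="Success"):
    """Serialize the payload returned to the waiting state machine
    (pure helper; the field names are hypothetical)."""
    return json.dumps({"automationExecutionId": execution_id, "status": status})

def report_result(task_token, execution_id, error=None):
    # Called from the runbook's final step: completes the paused
    # .waitForTaskToken state with either success or failure. boto3 is
    # imported lazily to keep the helper above usable without the SDK.
    import boto3
    sfn = boto3.client("stepfunctions")
    if error is None:
        sfn.send_task_success(taskToken=task_token, output=build_success_output(execution_id))
    else:
        sfn.send_task_failure(taskToken=task_token, error="AutomationFailed", cause=error)
```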

## Prerequisites and limitations
<a name="run-aws-systems-manager-automation-tasks-synchronously-from-aws-step-functions-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ AWS Identity and Access Management (IAM) permissions to access Step Functions and Systems Manager
+ An EC2 instance with Systems Manager Agent (SSM Agent) [installed](https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-install-ssm-agent.html) on the instance
+ [An IAM instance profile for Systems Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/setup-instance-profile.html) attached to the instance where you plan to run the runbook
+ A Step Functions role that has the following IAM permissions (which follow the principle of least privilege):

```
{
    "Effect": "Allow",
    "Action": "ssm:StartAutomationExecution",
    "Resource": "*"
}
```

**Product versions**
+ SSM document schema version 0.3 or later
+ SSM Agent version 2.3.672.0 or later

## Architecture
<a name="run-aws-systems-manager-automation-tasks-synchronously-from-aws-step-functions-architecture"></a>

**Target technology stack**
+ AWS Step Functions
+ AWS Systems Manager Automation

**Target architecture**

![\[Architecture for running Systems Manager automation tasks synchronously from Step Functions\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/47c19e4f-d68d-4f91-bb68-202098757529/images/2d248aae-d858-4565-8af2-593cde0da780.png)


**Automation and scale**
+ This pattern provides an AWS CloudFormation template that you can use to deploy the runbooks on multiple instances. (See the GitHub [Step Functions and Systems Manager implementation](https://github.com/aws-samples/amazon-stepfunctions-ssm-waitfortasktoken) repository.)

## Tools
<a name="run-aws-systems-manager-automation-tasks-synchronously-from-aws-step-functions-tools"></a>

**AWS services**
+ [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) helps you set up AWS resources, provision them quickly and consistently, and manage them throughout their lifecycle across AWS accounts and Regions.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.
+ [AWS Step Functions](https://docs.aws.amazon.com/step-functions/latest/dg/welcome.html) is a serverless orchestration service that helps you combine AWS Lambda functions and other AWS services to build business-critical applications.
+ [AWS Systems Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/what-is-systems-manager.html) helps you manage your applications and infrastructure running in the AWS Cloud. It simplifies application and resource management, shortens the time to detect and resolve operational problems, and helps you manage your AWS resources securely at scale.

**Code**

The code for this pattern is available in the GitHub [Step Functions and Systems Manager implementation](https://github.com/aws-samples/amazon-stepfunctions-ssm-waitfortasktoken) repository. 

## Epics
<a name="run-aws-systems-manager-automation-tasks-synchronously-from-aws-step-functions-epics"></a>

### Create runbooks
<a name="create-runbooks"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Download the CloudFormation template. | Download the `ssm-automation-documents.cfn.json` template from the `cloudformation` folder of the GitHub repository. | AWS DevOps | 
| Create runbooks. | Sign in to the AWS Management Console, open the [CloudFormation console](https://console.aws.amazon.com/cloudformation/), and deploy the template. For more information about deploying CloudFormation templates, see [Creating a stack on the CloudFormation console](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-console-create-stack.html) in the CloudFormation documentation. The CloudFormation template deploys three resources:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/run-aws-systems-manager-automation-tasks-synchronously-from-aws-step-functions.html) | AWS DevOps | 

### Create a sample state machine
<a name="create-a-sample-state-machine"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a test state machine.  | Follow the instructions in the [AWS Step Functions Developer Guide](https://docs.aws.amazon.com/step-functions/latest/dg/getting-started-with-sfn.html) to create and run a state machine. For the definition, use the following code. Make sure to update the `InstanceIds` value with the ID of a valid Systems Manager-enabled instance in your account.<pre>{<br />  "Comment": "A description of my state machine",<br />  "StartAt": "StartAutomationWaitForCallBack",<br />  "States": {<br />    "StartAutomationWaitForCallBack": {<br />      "Type": "Task",<br />      "Resource": "arn:aws:states:::aws-sdk:ssm:startAutomationExecution.waitForTaskToken",<br />      "Parameters": {<br />        "DocumentName": "SfnRunCommandByInstanceIds",<br />        "Parameters": {<br />          "InstanceIds": [<br />            "i-1234567890abcdef0"<br />          ],<br />          "taskToken.$": "States.Array($$.Task.Token)",<br />          "workingDirectory": [<br />            "/home/ssm-user/"<br />          ],<br />          "Commands": [<br />            "echo \"This is a test running automation waitForTaskToken\" >> automation.log",<br />            "sleep 100"<br />          ],<br />          "executionTimeout": [<br />              "10800"<br />          ],<br />          "deliveryTimeout": [<br />              "30"<br />          ],<br />          "shell": [<br />              "Shell"<br />          ]<br />            }<br />      },<br />      "End": true<br />    }<br />  }<br />}</pre>This code calls the runbook to run two commands that demonstrate the `waitForTaskToken` call to Systems Manager Automation.The `shell` parameter value (`Shell` or `PowerShell`) determines whether the Automation document runs `AWS-RunShellScript` or `AWS-RunPowerShellScript`.The task writes "This is a test running automation waitForTaskToken" into the `/home/ssm-user/automation.log` file, and then sleeps for 100 seconds before it responds with 
the task token and releases the next task in the workflow. If you want to call the `SfnRunCommandByTargets` runbook instead, replace the `Parameters` section of the previous code with the following:<pre>"Parameters": {<br />          "Targets": [<br />            {<br />              "Key": "InstanceIds",<br />              "Values": [<br />                "i-02573cafcfEXAMPLE",<br />                "i-0471e04240EXAMPLE"<br />              ]<br />            }<br />          ],</pre> | AWS DevOps | 
| Update the IAM role for the state machine. | The previous step automatically creates a dedicated IAM role for the state machine. However, it doesn’t grant permissions to call the runbook. Update the role by adding the following permissions:<pre>{<br />      "Effect": "Allow",<br />      "Action": "ssm:StartAutomationExecution",<br />      "Resource": "*"<br /> }</pre> | AWS DevOps | 
| Validate the synchronous calls. | Run the state machine to validate the synchronous call between Step Functions and Systems Manager Automation. For sample output, see the [Additional information](#run-aws-systems-manager-automation-tasks-synchronously-from-aws-step-functions-additional) section.  | AWS DevOps | 

## Related resources
<a name="run-aws-systems-manager-automation-tasks-synchronously-from-aws-step-functions-resources"></a>
+ [Getting started with AWS Step Functions](https://docs.aws.amazon.com/step-functions/latest/dg/getting-started-with-sfn.html) (*AWS Step Functions Developer Guide*)
+ [Wait for a callback with the task token](https://docs.aws.amazon.com/step-functions/latest/dg/connect-to-resource.html#connect-wait-token) (*AWS Step Functions Developer Guide*, service integration patterns)
+ [send\_task\_success](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/stepfunctions/client/send_task_success.html) and [send\_task\_failure](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/stepfunctions/client/send_task_failure.html) API calls (Boto3 documentation)
+ [AWS Systems Manager Automation](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-automation.html) (*AWS Systems Manager User Guide*)

## Additional information
<a name="run-aws-systems-manager-automation-tasks-synchronously-from-aws-step-functions-additional"></a>

**Implementation details**

This pattern provides a CloudFormation template that deploys two Systems Manager runbooks:
+ `SfnRunCommandByInstanceIds` runs the `AWS-RunShellScript` or `AWS-RunPowerShellScript` command by using instance IDs.
+ `SfnRunCommandByTargets` runs the `AWS-RunShellScript` or `AWS-RunPowerShellScript` command by using targets.

Each runbook implements four steps to achieve a synchronous call when using the `.waitForTaskToken` option in Step Functions.


| Step | Action | Description | 
| --- |--- |--- |
| **1** | `Branch` | Checks the `shell` parameter value (`Shell` or `PowerShell`) to decide whether to run `AWS-RunShellScript` for Linux or `AWS-RunPowerShellScript` for Windows. | 
| **2** | `RunCommand_Shell` or `RunCommand_PowerShell` | Takes several inputs and runs the `RunShellScript` or `RunPowerShellScript` command. For more information, check the **Details** tab for the `RunCommand_Shell` or `RunCommand_PowerShell` Automation document on the Systems Manager console. | 
| **3** | `SendTaskFailure` | Runs when step 2 is aborted or canceled. It calls the Step Functions [send\_task\_failure](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/stepfunctions/client/send_task_failure.html) API, which accepts three parameters as input: the token passed by the state machine, the failure error, and a description of the cause of the failure. | 
| **4** | `SendTaskSuccess` | Runs when step 2 is successful. It calls the Step Functions [send\_task\_success](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/stepfunctions/client/send_task_success.html) API, which accepts the token passed by the state machine as input. | 
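
Steps 3 and 4 call the Step Functions callback APIs. As a rough illustration only (not the runbook's actual code), the arguments for those calls can be assembled like this; the error code and output document shown are assumptions:

```python
import json

# Illustrative sketch of the callback payloads used in steps 3 and 4.
# The error string and the output document are placeholders, not the
# runbook's exact values.

def success_payload(task_token, output=None):
    """Arguments for stepfunctions send_task_success (step 4)."""
    return {
        "taskToken": task_token,
        "output": json.dumps(output or {"status": "Success"}),
    }

def failure_payload(task_token, error="CommandFailed",
                    cause="Run Command step aborted or canceled"):
    """Arguments for stepfunctions send_task_failure (step 3)."""
    return {"taskToken": task_token, "error": error, "cause": cause}

# Usage (requires boto3 and AWS credentials):
#   boto3.client("stepfunctions").send_task_success(**success_payload(token))
```

Either call releases the waiting `StartAutomationWaitForCallBack` task, which is what makes the Automation execution behave synchronously from the state machine's point of view.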

**Runbook parameters**

`SfnRunCommandByInstanceIds` runbook:


| Parameter name | Type | Optional or required | Description | 
| --- |--- |--- |--- |
| `shell` | String | Required | The instance's shell, which determines whether to run `AWS-RunShellScript` for Linux or `AWS-RunPowerShellScript` for Windows. | 
| `deliveryTimeout` | Integer | Optional | The time, in seconds, to wait for a command to deliver to the SSM Agent on an instance. This parameter has a minimum value of 30 (0.5 minute) and a maximum value of 2592000 (720 hours). | 
| `executionTimeout` | String | Optional | The time, in seconds, for a command to complete before it is considered to have failed. The default value is 3600 (1 hour). The maximum value is 172800 (48 hours). | 
| `workingDirectory` | String | Optional | The path to the working directory on your instance. | 
| `Commands` | StringList | Required | The shell script or command to run. | 
| `InstanceIds` | StringList | Required | The IDs of the instances where you want to run the command. | 
| `taskToken` | String | Required | The task token to use for callback responses. | 

`SfnRunCommandByTargets` runbook:


| Name | Type | Optional or required | Description | 
| --- |--- |--- |--- |
| `shell` | String | Required | The instance's shell, which determines whether to run `AWS-RunShellScript` for Linux or `AWS-RunPowerShellScript` for Windows. | 
| `deliveryTimeout` | Integer | Optional | The time, in seconds, to wait for a command to deliver to the SSM Agent on an instance. This parameter has a minimum value of 30 (0.5 minute) and a maximum value of 2592000 (720 hours). | 
| `executionTimeout` | Integer | Optional | The time, in seconds, for a command to complete before it is considered to have failed. The default value is 3600 (1 hour). The maximum value is 172800 (48 hours). | 
| `workingDirectory` | String | Optional | The path to the working directory on your instance. | 
| `Commands` | StringList | Required | The shell script or command to run. | 
| `Targets` | MapList | Required | An array of search criteria that identifies instances by using key-value pairs that you specify. For example: `[{"Key":"InstanceIds","Values":["i-02573cafcfEXAMPLE","i-0471e04240EXAMPLE"]}]` | 
| `taskToken` | String | Required | The task token to use for callback responses. | 

**Sample output**

The following table provides sample output from the state machine execution. It shows that the total run time is over 100 seconds between step 5 (`TaskSubmitted`) and step 6 (`TaskSucceeded`). This demonstrates that the state machine waited for the `sleep 100` command to finish before moving to the next task in the workflow.


| ID | Type | Step | Resource | Elapsed Time (ms) | Timestamp | 
| --- |--- |--- |--- |--- |--- |
| **1** | `ExecutionStarted` |  | - | 0 | Mar 11, 2022 02:50:34.303 PM | 
| **2** | `TaskStateEntered` | `StartAutomationWaitForCallBack` | - | 40 | Mar 11, 2022 02:50:34.343 PM | 
| **3** | `TaskScheduled` | `StartAutomationWaitForCallBack` | - | 40 | Mar 11, 2022 02:50:34.343 PM | 
| **4** | `TaskStarted` | `StartAutomationWaitForCallBack` | - | 154 | Mar 11, 2022 02:50:34.457 PM | 
| **5** | `TaskSubmitted` | `StartAutomationWaitForCallBack` | - | 657 | Mar 11, 2022 02:50:34.960 PM | 
| **6** | `TaskSucceeded` | `StartAutomationWaitForCallBack` | - | 103835 | Mar 11, 2022 02:52:18.138 PM | 
| **7** | `TaskStateExited` | `StartAutomationWaitForCallBack` | - | 103860 | Mar 11, 2022 02:52:18.163 PM | 
| **8** | `ExecutionSucceeded` |  | - | 103897 | Mar 11, 2022 02:52:18.200 PM | 

# Run parallel reads of S3 objects by using Python in an AWS Lambda function
<a name="run-parallel-reads-of-s3-objects-by-using-python-in-an-aws-lambda-function"></a>

*Eduardo Bortoluzzi, Amazon Web Services*

## Summary
<a name="run-parallel-reads-of-s3-objects-by-using-python-in-an-aws-lambda-function-summary"></a>

You can use this pattern to retrieve and summarize a list of documents from Amazon Simple Storage Service (Amazon S3) buckets in real time. The pattern provides example code that reads objects in parallel from S3 buckets on Amazon Web Services (AWS), and it showcases how to efficiently run I/O-bound tasks with AWS Lambda functions by using Python.

A financial company used this pattern in an interactive solution to manually approve or reject correlated financial transactions in real time. The financial transaction documents were stored in an S3 bucket related to the market. An operator selected a list of documents from the S3 bucket, analyzed the total value of the transactions that the solution calculated, and decided to approve or reject the selected batch.

I/O-bound tasks benefit from multiple threads. In this example code, [concurrent.futures.ThreadPoolExecutor](https://docs.python.org/3.13/library/concurrent.futures.html#concurrent.futures.ThreadPoolExecutor) is used with a maximum of 30 simultaneous threads, even though Lambda functions support up to 1,024 threads (one of which is your main process). The pool is capped because too many threads introduce latency through context switching and contention for compute resources. You also need to increase the maximum pool connections in `botocore` so that all threads can download S3 objects simultaneously.
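
The approach can be sketched as follows. The `read_all` helper and the fetch callable are illustrative names; in the real Lambda function the fetch callable wraps an S3 `get_object` call made with a client whose `max_pool_connections` is raised to match the worker count:

```python
# Minimal sketch of the parallel-read approach. The fetch callable stands in
# for an S3 GetObject call; for real S3 reads you would also raise botocore's
# connection pool limit to match the worker count, for example:
#   import boto3, botocore.config
#   s3 = boto3.client("s3",
#                     config=botocore.config.Config(max_pool_connections=30))
from concurrent.futures import ThreadPoolExecutor

MAX_WORKERS = 30  # well below Lambda's 1,024-thread limit

def read_all(fetch, keys, max_workers=MAX_WORKERS):
    """Run the I/O-bound fetch(key) calls in parallel and collect results."""
    # ThreadPoolExecutor reuses idle worker threads (Python 3.8+) and
    # preserves the input order in the results.
    with ThreadPoolExecutor(max_workers=max_workers) as executor:
        return list(executor.map(fetch, keys))
```

Because `executor.map` preserves input order, the results line up with the requested keys even though the downloads complete out of order.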

The example code uses one 8.3 KB object, with JSON data, in an S3 bucket. The object is read multiple times. After the Lambda function reads the object, the JSON data is decoded to a Python object. In December 2024, the result after running this example was 1,000 reads processed in 2.3 seconds and 10,000 reads processed in 27 seconds using a Lambda function configured with 2,304 MB of memory. AWS Lambda supports memory configurations from 128 MB to 10,240 MB (10 GB), though increasing the Lambda memory beyond 2,304 MB didn't help to decrease the time to run this particular I/O-bound task.

The [AWS Lambda Power Tuning](https://github.com/alexcasalboni/aws-lambda-power-tuning) tool was used to test different Lambda memory configurations and verify the best performance-to-cost ratio for the task. For test results, see the [Additional information](#run-parallel-reads-of-s3-objects-by-using-python-in-an-aws-lambda-function-additional) section.

## Prerequisites and limitations
<a name="run-parallel-reads-of-s3-objects-by-using-python-in-an-aws-lambda-function-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ Proficiency with Python development

**Limitations**
+ A Lambda function can have at most [1,024 execution processes or threads](https://docs.aws.amazon.com/lambda/latest/dg/gettingstarted-limits.html#function-configuration-deployment-and-execution).
+ New AWS accounts have a Lambda memory limit of 3,008 MB. Adjust the AWS Lambda Power Tuning tool accordingly. For more information, see the [Troubleshooting](#run-parallel-reads-of-s3-objects-by-using-python-in-an-aws-lambda-function-troubleshooting) section.
+ Amazon S3 has a limit of [5,500 GET/HEAD requests per second per partitioned prefix](https://docs.aws.amazon.com/AmazonS3/latest/userguide/optimizing-performance.html).

**Product versions**
+ Python 3.9 or later
+ AWS Cloud Development Kit (AWS CDK) v2
+ AWS Command Line Interface (AWS CLI) version 2
+ AWS Lambda Power Tuning 4.3.6 (optional)

## Architecture
<a name="run-parallel-reads-of-s3-objects-by-using-python-in-an-aws-lambda-function-architecture"></a>

**Target technology stack**
+ AWS Lambda
+ Amazon S3
+ AWS Step Functions (if AWS Lambda Power Tuning is deployed)

**Target architecture**

The following diagram shows a Lambda function that reads objects from an S3 bucket in parallel. The diagram also shows a Step Functions workflow for the AWS Lambda Power Tuning tool, which fine-tunes the Lambda function memory to achieve a good balance between cost and performance.

![\[Diagram showing Lambda function, S3 bucket, and AWS Step Functions.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/b46e9b16-9842-4291-adfa-3ef012b89aec/images/828696e2-6df7-4536-9205-951c99449f4e.png)


**Automation and scale**

Lambda functions scale quickly when required. To avoid 503 Slow Down errors from Amazon S3 during high demand, we recommend limiting how far the function scales.

## Tools
<a name="run-parallel-reads-of-s3-objects-by-using-python-in-an-aws-lambda-function-tools"></a>

**AWS services**
+ [AWS Cloud Development Kit (AWS CDK) v2](https://docs.aws.amazon.com/cdk/v2/guide/getting_started.html) is a software development framework that helps you define and provision AWS Cloud infrastructure in code. The example infrastructure was created to be deployed with AWS CDK.
+ [AWS Command Line Interface (AWS CLI)](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) is an open source tool that helps you interact with AWS services through commands in your command-line shell. In this pattern, AWS CLI version 2 is used to upload an example JSON file.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a compute service that helps you run code without needing to provision or manage servers. It runs your code only when needed and scales automatically, so you pay only for the compute time that you use.
+ [Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) is a cloud-based object storage service that helps you store, protect, and retrieve any amount of data.
+ [AWS Step Functions](https://docs.aws.amazon.com/step-functions/latest/dg/welcome.html) is a serverless orchestration service that helps you combine AWS Lambda functions and other AWS services to build business-critical applications.

**Other tools**
+ [Python](https://www.python.org/) is a general-purpose computer programming language. The [reuse of idle worker threads](https://docs.python.org/3.8/library/concurrent.futures.html#concurrent.futures.ThreadPoolExecutor) was introduced in Python version 3.8, and the Lambda function code in this pattern was created for Python version 3.9 and later.

**Code repository**

The code for this pattern is available in the [aws-lambda-parallel-download](https://github.com/aws-samples/aws-lambda-parallel-download) GitHub repository.

## Best practices
<a name="run-parallel-reads-of-s3-objects-by-using-python-in-an-aws-lambda-function-best-practices"></a>
+ This AWS CDK construct relies on your AWS account's user permissions to deploy the infrastructure. If you plan to use AWS CDK Pipelines or cross-account deployments, see [Stack synthesizers](https://docs.aws.amazon.com/cdk/v2/guide/bootstrapping.html#bootstrapping-synthesizers).
+ This example application doesn't have access logs enabled on the S3 bucket. It's a best practice to enable access logs in production code.

## Epics
<a name="run-parallel-reads-of-s3-objects-by-using-python-in-an-aws-lambda-function-epics"></a>

### Prepare the development environment
<a name="prepare-the-development-environment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Check the installed Python version. | This code has been tested specifically on Python 3.9 and Python 3.13, and it should work on all versions between these releases. To check your Python version, run `python3 -V` in your terminal, and install a newer version if needed. To verify that the required modules are installed, run `python3 -c "import pip, venv"`. If there is no error message, the modules are properly installed and you're ready to run this example.  | Cloud architect | 
| Install AWS CDK. | To install the AWS CDK if it isn't already installed, follow the instructions at [Getting started with the AWS CDK](https://docs.aws.amazon.com/cdk/v2/guide/getting_started.html). To confirm that the installed AWS CDK version is 2.0 or later, run `cdk --version`. | Cloud architect | 
| Bootstrap your environment. | To bootstrap your environment, if it hasn’t already been done, follow the instructions at [Bootstrap your environment for use with the AWS CDK](https://docs.aws.amazon.com/cdk/v2/guide/bootstrapping-env.html). | Cloud architect | 

### Clone the example repository
<a name="clone-the-example-repository"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clone the repository. | To clone the latest version of the repository, run the following command:<pre>git clone --depth 1 --branch v1.2.0 \<br />git@github.com:aws-samples/aws-lambda-parallel-download.git</pre> | Cloud architect | 
| Change the working directory to the cloned repository. | Run the following command:<pre>cd aws-lambda-parallel-download</pre> | Cloud architect | 
| Create the Python virtual environment. | To create a Python virtual environment, run the following command:<pre>python3 -m venv .venv</pre> | Cloud architect | 
| Activate the virtual environment. | To activate the virtual environment, run the following command:<pre>source .venv/bin/activate</pre> | Cloud architect | 
| Install the dependencies. | To install the Python dependencies, run the `pip` command:<pre>pip install -r requirements.txt</pre> | Cloud architect | 
| Browse the code. | (Optional) The example code that downloads an object from the S3 bucket is at `resources/parallel.py`.The infrastructure code is in the `parallel_download` folder. | Cloud architect | 

### Deploy and test the app
<a name="deploy-and-test-the-app"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Deploy the app. | Run `cdk deploy`. Write down the AWS CDK outputs:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/run-parallel-reads-of-s3-objects-by-using-python-in-an-aws-lambda-function.html) | Cloud architect | 
| Upload an example JSON file. | The repository contains an example JSON file of about 9 KB. To upload the file to the S3 bucket of the created stack, run the following command:<pre>aws s3 cp sample.json s3://<ParallelDownloadStack.SampleS3BucketName></pre>Replace `<ParallelDownloadStack.SampleS3BucketName>` with the corresponding value from the AWS CDK output. | Cloud architect | 
| Run the app. | To run the app, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/run-parallel-reads-of-s3-objects-by-using-python-in-an-aws-lambda-function.html) | Cloud architect | 
| Add the number of downloads. | (Optional) To run 1,500 get object calls, use the following JSON in **Event JSON** of the `Test` parameter:<pre>{"repeat": 1500, "objectKey": "sample.json"}</pre> | Cloud architect | 

### Optional: Run AWS Lambda Power Tuning
<a name="optional-run-lamlong-power-tuning"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Run the AWS Lambda Power Tuning tool. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/run-parallel-reads-of-s3-objects-by-using-python-in-an-aws-lambda-function.html) At the end of the run, the result is on the **Execution input and output** tab. | Cloud architect | 
| View the AWS Lambda Power Tuning results in a graph. | On the **Execution input and output** tab, copy the `visualization` property link, and paste it in a new browser tab. | Cloud architect | 

### Clean up
<a name="clean-up"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Remove the objects from the S3 bucket. | Before you destroy the deployed resources, remove all the objects from the S3 bucket:<pre>aws s3 rm s3://<ParallelDownloadStack.SampleS3BucketName> \<br />--recursive</pre>Remember to replace `<ParallelDownloadStack.SampleS3BucketName>` with the value from the AWS CDK outputs. | Cloud architect | 
| Destroy the resources. | To destroy all the resources that were created for this pilot, run the following command:<pre>cdk destroy</pre> | Cloud architect | 

## Troubleshooting
<a name="run-parallel-reads-of-s3-objects-by-using-python-in-an-aws-lambda-function-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| `'MemorySize' value failed to satisfy constraint: Member must have value less than or equal to 3008` | For new accounts, you might not be able to configure more than 3,008 MB in your Lambda functions. To test using AWS Lambda Power Tuning, add the following property to the input JSON when you start the Step Functions execution:<pre>"powerValues": [<br />    512,<br />    1024,<br />    1536,<br />    2048,<br />    2560,<br />    3008<br />  ]</pre> | 

## Related resources
<a name="run-parallel-reads-of-s3-objects-by-using-python-in-an-aws-lambda-function-resources"></a>
+ [Python – concurrent.futures.ThreadPoolExecutor](https://docs.python.org/3/library/concurrent.futures.html#concurrent.futures.ThreadPoolExecutor)
+ [Lambda quotas – Function configuration, deployment, and execution](https://docs.aws.amazon.com/lambda/latest/dg/gettingstarted-limits.html#function-configuration-deployment-and-execution)
+ [Working with the AWS CDK in Python](https://docs.aws.amazon.com/cdk/v2/guide/work-with-cdk-python.html)
+ [Profiling functions with AWS Lambda Power Tuning](https://docs.aws.amazon.com/lambda/latest/operatorguide/profile-functions.html)

## Additional information
<a name="run-parallel-reads-of-s3-objects-by-using-python-in-an-aws-lambda-function-additional"></a>

**Code**

The following code snippet performs the parallel I/O processing:

```
with ThreadPoolExecutor(max_workers=MAX_WORKERS) as executor:
  for result in executor.map(a_function, (the_arguments)):
    ...
```

The `ThreadPoolExecutor` reuses the threads when they become available.

**Testing and results**

These tests were conducted in December 2024.

The first test processed 2,500 object reads, with the following result.

![\[Invocation time falling and invocation cost rising as memory increases.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/b46e9b16-9842-4291-adfa-3ef012b89aec/images/f6743412-1e52-4c4c-a51c-ac0f75b3b998.png)


Starting at 3,009 MB, processing time stayed almost the same as memory increased, but cost rose with each memory increase.

Another test investigated the range between 1,536 MB and 3,072 MB of memory, using values that were multiples of 256 MB and processing 10,000 object reads, with the following results.

![\[Decreased difference between invocation time falling and invocation cost rising.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/b46e9b16-9842-4291-adfa-3ef012b89aec/images/c75d4443-74d8-4b93-9b4d-b2640869381e.png)


The best performance-to-cost ratio was with the 2,304 MB memory Lambda configuration.

For comparison, a sequential process of 2,500 object reads took 47 seconds. The parallel process using the 2,304 MB Lambda configuration took 7 seconds, which is 85 percent less.

![\[Chart showing the decrease in time when switching from sequential to parallel processing.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/b46e9b16-9842-4291-adfa-3ef012b89aec/images/f3dcc44d-ac20-4b75-897d-1d71f0d59781.png)


# Send telemetry data from AWS Lambda to OpenSearch for real-time analytics and visualization
<a name="send-telemetry-data-from-lambda-to-opensearch-for-analytics-visualization"></a>

*Tabby Ward, Guy Bachar, and David Kilzer, Amazon Web Services*

## Summary
<a name="send-telemetry-data-from-lambda-to-opensearch-for-analytics-visualization-summary"></a>

Modern applications are becoming increasingly distributed and event-driven, which reinforces the need for real-time monitoring and observability. AWS Lambda is a serverless computing service that plays a crucial role in building scalable and event-driven architectures. However, monitoring and troubleshooting Lambda functions can be challenging if you rely solely on Amazon CloudWatch Logs, which can introduce latency and has limited retention periods.

To address this challenge, AWS introduced the Lambda Telemetry API, which enables Lambda functions to send telemetry data directly to third-party monitoring and observability tools. This API supports real-time streaming of logs, metrics, and traces, and provides a comprehensive and timely view of the performance and health of your Lambda functions.

This pattern explains how to integrate the Lambda Telemetry API with [OpenSearch](https://opensearch.org/docs/latest/), which is an open-source, distributed search and analytics engine. OpenSearch offers a powerful and scalable platform for ingesting, storing, and analyzing large volumes of data, which makes it an ideal choice for Lambda telemetry data. Specifically, this pattern demonstrates how to send logs from a Lambda function that's written in Python directly to an OpenSearch cluster by using a Lambda extension that's provided by AWS. This solution is flexible and customizable, so you can create your own Lambda extension or alter the sample source code to change the output format as desired.

The pattern explains how to set up and configure the Lambda Telemetry API integration with OpenSearch, and includes best practices for security, cost optimization, and scalability. The objective is to help you gain deeper insights into your Lambda functions and enhance the overall observability of your serverless applications.


**Note**: This pattern focuses on integrating the Lambda Telemetry API with managed OpenSearch. However, the principles and techniques discussed are also applicable to self-managed OpenSearch and Elasticsearch.

## Prerequisites and limitations
<a name="send-telemetry-data-from-lambda-to-opensearch-for-analytics-visualization-prereqs"></a>

Before you begin the integration process, make sure that you have the following prerequisites in place:

**AWS account**: An active AWS account with appropriate permissions to create and manage the following AWS resources:
+ AWS Lambda
+ AWS Identity and Access Management (IAM)
+ Amazon OpenSearch Service (if you're using a managed OpenSearch cluster)

**OpenSearch cluster**:
+ You can use an existing self-managed OpenSearch cluster or a managed service such as OpenSearch Service.
+ If you're using OpenSearch Service, set up your OpenSearch cluster by following the instructions in [Getting started with Amazon OpenSearch Service](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/gsg.html) in the OpenSearch Service documentation.
+ Make sure that the OpenSearch cluster is accessible from your Lambda function and is configured with the necessary security settings, such as access policies, encryption, and authentication.
+ Configure the OpenSearch cluster with the necessary index mappings and settings to ingest the Lambda telemetry data. For more information, see [Loading streaming data into Amazon OpenSearch Service](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/integrations.html) in the OpenSearch Service documentation.

**Network connectivity**:
+ Ensure that your Lambda function has the necessary network connectivity to access the OpenSearch cluster. For guidance on how to configure virtual private cloud (VPC) settings, see [Launching your Amazon OpenSearch Service domains within a VPC](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/vpc.html) in the OpenSearch Service documentation.

**IAM roles and policies**:
+ Create an IAM role with the necessary permissions for your Lambda function to access the OpenSearch cluster and retrieve the credentials stored in AWS Secrets Manager.
+ Attach the appropriate IAM policies to the role, such as the `AWSLambdaBasicExecutionRole` policy and any additional permissions required to interact with OpenSearch.
+ Verify that the IAM permissions granted to your Lambda function allow it to write data to the OpenSearch cluster. For information about managing IAM permissions, see [Defining Lambda function permissions with an execution role](https://docs.aws.amazon.com/lambda/latest/dg/lambda-intro-execution-role.html) in the Lambda documentation.
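
As an illustration only (your domain ARN and action list will differ), a minimal execution-role statement that lets the function write to one OpenSearch Service domain could be built like this; the helper name and the ARN are placeholders:

```python
import json

# Hypothetical helper that builds one IAM policy statement granting write
# access to a single OpenSearch Service domain. The domain ARN used below is
# a placeholder; es:ESHttpPost and es:ESHttpPut cover index and bulk writes.
def opensearch_write_statement(domain_arn):
    return {
        "Effect": "Allow",
        "Action": ["es:ESHttpPost", "es:ESHttpPut"],
        "Resource": f"{domain_arn}/*",
    }

policy = {
    "Version": "2012-10-17",
    "Statement": [
        opensearch_write_statement(
            "arn:aws:es:us-east-1:111122223333:domain/my-telemetry-domain"
        )
    ],
}
print(json.dumps(policy, indent=2))
```

Attach a statement like this alongside `AWSLambdaBasicExecutionRole` and, if you store credentials in Secrets Manager, a `secretsmanager:GetSecretValue` permission scoped to the specific secret.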

**Programming language knowledge**:
+ You will need basic knowledge of Python (or the programming language of your choice) to understand and modify the sample code for the Lambda function and the Lambda extension.

**Development environment**:
+ Set up a local development environment with the necessary tools and dependencies for building and deploying Lambda functions and extensions. 

**AWS CLI or AWS Management Console**:
+ Install and configure the [AWS Command Line Interface (AWS CLI)](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) or use the AWS Management Console with appropriate credentials to interact with the required AWS services.

**Monitoring and logging**:
+ Become familiar with monitoring and logging best practices on AWS, including services such as Amazon CloudWatch and AWS CloudTrail for monitoring and auditing purposes.
+ Check CloudWatch Logs for your Lambda function to identify any errors or exceptions related to the Lambda Telemetry API integration. For troubleshooting guidance, see the [Lambda Telemetry API documentation](https://docs.aws.amazon.com/lambda/latest/dg/telemetry-api.html).

## Architecture
<a name="send-telemetry-data-from-lambda-to-opensearch-for-analytics-visualization-architecture"></a>

This pattern uses OpenSearch Service to store logs and telemetry data that are generated by Lambda functions. This approach enables you to quickly stream logs directly to your OpenSearch cluster, which reduces the latency and costs associated with using CloudWatch Logs as an intermediary.


**Note**: Your Lambda extension code can push telemetry to OpenSearch Service, either by directly using the OpenSearch API or by using an [OpenSearch client library](https://opensearch.org/docs/latest/clients/index/). The Lambda extension can use the bulk operations supported by the OpenSearch API to batch telemetry events together and send them to OpenSearch Service in a single request.
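
For example, the batching described in the note can be done by building an NDJSON body for the OpenSearch `_bulk` API. This is a sketch, not the extension's actual code, and the index name is an assumption:

```python
import json

def bulk_body(events, index="lambda-telemetry"):
    """Build an OpenSearch _bulk request body (NDJSON) from telemetry events.

    The _bulk format alternates an action/metadata line with a source line
    for each document. The resulting string is POSTed to the cluster's
    /_bulk endpoint (or passed to an OpenSearch client's bulk helper).
    The index name here is illustrative.
    """
    lines = []
    for event in events:
        lines.append(json.dumps({"index": {"_index": index}}))
        lines.append(json.dumps(event))
    # The _bulk API requires a trailing newline after the last line.
    return "\n".join(lines) + "\n"
```

Sending one bulk request per buffered batch, instead of one request per telemetry event, keeps the extension's network overhead low and stays within the cluster's request limits.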

The following workflow diagram illustrates the log workflow for Lambda functions when you use an OpenSearch cluster as the endpoint.

![\[Workflow for sending telemetry data to an OpenSearch cluster.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/57fe8796-9f36-46cf-8304-f506242b9f04/images/283ccdcd-a0e1-40a2-a95a-3bd046bfa8ca.png)


The architecture includes these components:
+ Lambda function: The serverless function that generates logs and telemetry data during execution.
+ Lambda extension: A Python-based extension that uses the Lambda Telemetry API to integrate directly with the OpenSearch cluster. This extension runs alongside the Lambda function in the same execution environment.
+ Lambda Telemetry API: The API that enables Lambda extensions to send telemetry data, including logs, metrics, and traces, directly to third-party monitoring and observability tools.
+ Amazon OpenSearch Service cluster: A managed OpenSearch cluster that's hosted on AWS. This cluster is responsible for ingesting, storing, and indexing the log data streamed from the Lambda function through the Lambda extension.

The workflow consists of these steps:

1. The Lambda function is invoked and generates logs and telemetry data during its execution.

1. The Lambda extension runs alongside the function to capture the logs and telemetry data by using the Lambda Telemetry API.

1. The Lambda extension establishes a secure connection with the OpenSearch Service cluster and streams the log data in real time.

1. The OpenSearch Service cluster ingests, indexes, and stores the log data to make it available for search, analysis, and visualization through the use of tools such as Kibana or other compatible applications.
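To make the workflow concrete, here is a minimal sketch of how an extension registers itself and subscribes to the Telemetry API, based on the documented Extensions API (`2020-01-01`) and Telemetry API (`2022-07-01`) endpoints. The extension name, listener URL, and buffering values are illustrative, not taken from the sample repository.

```python
import json
import os
import urllib.request

def build_subscription_request(listener_url):
    """Body for a Telemetry API subscription (schema version 2022-07-01)."""
    return {
        "schemaVersion": "2022-07-01",
        "types": ["platform", "function"],  # omit "extension" logs
        "buffering": {"maxItems": 1000, "maxBytes": 262144, "timeoutMs": 100},
        "destination": {"protocol": "HTTP", "URI": listener_url},
    }

def register_and_subscribe(listener_url):
    """Register the extension, then subscribe it to the Telemetry API.

    Runs only inside a Lambda execution environment, where
    AWS_LAMBDA_RUNTIME_API points at the local runtime API endpoint.
    """
    api = os.environ["AWS_LAMBDA_RUNTIME_API"]
    register = urllib.request.Request(
        f"http://{api}/2020-01-01/extension/register",
        data=json.dumps({"events": ["INVOKE", "SHUTDOWN"]}).encode(),
        headers={"Lambda-Extension-Name": "telemetry-opensearch-extension"},
        method="POST",
    )
    with urllib.request.urlopen(register) as resp:
        extension_id = resp.headers["Lambda-Extension-Identifier"]

    subscribe = urllib.request.Request(
        f"http://{api}/2022-07-01/telemetry",
        data=json.dumps(build_subscription_request(listener_url)).encode(),
        headers={
            "Lambda-Extension-Identifier": extension_id,
            "Content-Type": "application/json",
        },
        method="PUT",
    )
    urllib.request.urlopen(subscribe).close()
    return extension_id
```

After subscribing, the extension runs a local HTTP listener at `listener_url` that receives batched telemetry events and forwards them to the OpenSearch cluster.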

By bypassing CloudWatch Logs and sending log data directly to the OpenSearch cluster, this solution provides several benefits:
+ Real-time log streaming and analysis, enabling faster troubleshooting and improved observability.
+ Reduced latency and potential retention limitations associated with CloudWatch Logs.
+ Flexibility to customize the Lambda extension or create your own extension for specific output formats or additional processing.
+ Integration with the search, analytics, and visualization capabilities of OpenSearch Service for log analysis and monitoring.

The [Epics](#send-telemetry-data-from-lambda-to-opensearch-for-analytics-visualization-epics) section provides step-by-step instructions for setting up the Lambda extension, configuring the Lambda function, and integrating with the OpenSearch Service cluster. For security considerations, cost optimization strategies, and tips for monitoring and troubleshooting the solution, see the [Best practices](#send-telemetry-data-from-lambda-to-opensearch-for-analytics-visualization-best-practices) section.

## Tools
<a name="send-telemetry-data-from-lambda-to-opensearch-for-analytics-visualization-tools"></a>

**AWS services**
+ [AWS Lambda](https://aws.amazon.com/lambda/) is a compute service that lets you run code without provisioning or managing servers. Lambda runs your code only when needed and scales automatically, from a few requests per day to thousands per second.
+ [Amazon OpenSearch Service](https://aws.amazon.com/opensearch-service/) is a fully managed service provided by AWS that makes it easy to deploy, operate, and scale OpenSearch clusters in the cloud.
+ [Lambda extensions](https://docs.aws.amazon.com/lambda/latest/dg/lambda-extensions.html) extend the functionality of your Lambda functions by running custom code alongside them. You can use Lambda extensions to integrate Lambda with various monitoring, observability, security, and governance tools.
+ [AWS Lambda Telemetry API](https://docs.aws.amazon.com/lambda/latest/dg/telemetry-api.html) enables you to use extensions to capture enhanced monitoring and observability data directly from Lambda and send it to a destination of your choice.
+ [CloudFormation](https://aws.amazon.com/cloudformation/) helps you model and set up your AWS resources so that you can spend less time managing those resources and more time focusing on your applications.

**Code repositories**
+ [AWS Lambda Extensions](https://github.com/aws-samples/aws-lambda-extensions) includes demos and sample projects from AWS and AWS Partners to help you get started with building your own extensions.
+ [Example Lambda Telemetry Integrations for OpenSearch](https://github.com/aws-samples/aws-lambda-extensions/tree/main/python-example-telemetry-opensearch-extension) provides a sample Lambda extension that demonstrates how to send logs from a Lambda function to an OpenSearch cluster.

**Other tools**
+ [OpenSearch](https://opensearch.org/faq/) is an open-source distributed search and analytics engine that provides a powerful platform for ingesting, storing, and analyzing large volumes of data.
+ Kibana is an open-source data visualization and exploration tool that you can use with OpenSearch. Note that the implementation of visualization and analytics is beyond the scope of this pattern. For more information, see the [Kibana documentation](https://www.elastic.co/guide/en/kibana/current/index.html) and other resources.

## Best practices
<a name="send-telemetry-data-from-lambda-to-opensearch-for-analytics-visualization-best-practices"></a>

When you integrate the Lambda Telemetry API with OpenSearch, consider the following best practices.

**Security and access control**
+ **Secure communication**: Encrypt all communications between your Lambda functions and the OpenSearch cluster by using HTTPS. Configure the necessary SSL/TLS settings in your Lambda extension and OpenSearch configuration.
+ **IAM permissions**:
  + Extensions run in the same execution environment as the Lambda function, so they inherit the same level of access to resources such as the file system, networking, and environment variables.
  + Grant the minimum necessary IAM permissions to your Lambda functions to access the Lambda Telemetry API and write data to the OpenSearch cluster. Use the [principle of least privilege](https://docs.aws.amazon.com/lambda/latest/operatorguide/least-privilege.html) to limit the scope of permissions.
+ **OpenSearch access control**: Implement fine-grained access control in your OpenSearch cluster to restrict access to sensitive data. Use the built-in security features, such as user authentication, role-based access control, and index-level permissions, in OpenSearch.
+ **Trusted extensions**: Install extensions only from trusted sources. Use infrastructure as code (IaC) tools such as CloudFormation to simplify the process of attaching the same extension configuration, including IAM permissions, to multiple Lambda functions. IaC tools also provide an audit record of the extensions and versions used previously.
+ **Sensitive data handling**: When building extensions, avoid logging sensitive data. Sanitize payloads and metadata before logging or persisting them for audit purposes.
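The sensitive-data-handling practice above can be sketched as a small sanitization step the extension runs before persisting a record. The key list and the redaction marker are illustrative choices, not part of the sample extension.

```python
SENSITIVE_KEYS = {"password", "secret", "token", "authorization", "api_key"}

def redact(payload):
    """Return a copy of a telemetry payload with sensitive values masked.

    Recurses through nested dicts and lists so values are redacted at
    any depth before the record is logged or persisted.
    """
    if isinstance(payload, dict):
        return {
            k: "***REDACTED***" if k.lower() in SENSITIVE_KEYS else redact(v)
            for k, v in payload.items()
        }
    if isinstance(payload, list):
        return [redact(item) for item in payload]
    return payload

safe = redact({"user": "alice", "password": "hunter2",
               "headers": {"Authorization": "Bearer abc"}})
# Both secret values are now replaced with "***REDACTED***".
```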

**Cost optimization**
+ **Monitoring and alerting**: Set up monitoring and alerting mechanisms to track the volume of data being sent to OpenSearch from your Lambda functions. This will help you identify and address any potential cost overruns.
+ **Data retention**: Carefully consider the appropriate data retention period for your Lambda telemetry data in OpenSearch. Longer retention periods can increase storage costs, so balance your observability needs with cost optimization.
+ **Compression and indexing**: Enable data compression and optimize your OpenSearch indexing strategy to reduce the storage footprint of your Lambda telemetry data.
+ **Reduced reliance on CloudWatch**: By integrating the Lambda Telemetry API directly with OpenSearch, you can potentially reduce your reliance on CloudWatch Logs, which can result in cost savings. This is because the Lambda Telemetry API enables you to send logs directly to OpenSearch, which bypasses the need to store and process the data in CloudWatch.
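For the compression and indexing practice above, one approach is an index template that applies `best_compression` (which trades some CPU for a smaller storage footprint) and explicit mappings to every telemetry index. The index pattern, shard counts, and field names below are assumptions for illustration.

```python
import json

# Hypothetical index template for the indexes the Lambda extension
# writes to; PUT this body to _index_template/lambda-telemetry on the
# cluster (for example, with curl or an OpenSearch client).
template = {
    "index_patterns": ["lambda-telemetry-*"],
    "template": {
        "settings": {
            "index.codec": "best_compression",
            "index.number_of_shards": 1,
            "index.number_of_replicas": 1,
        },
        "mappings": {
            "properties": {
                "time": {"type": "date"},
                "type": {"type": "keyword"},
                "record": {"type": "text"},
            }
        },
    },
}

body = json.dumps(template)
```

Explicit `keyword` and `date` mappings also avoid costly dynamic-mapping growth as telemetry volume increases.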

**Scalability and reliability**
+ **Asynchronous processing**: Use asynchronous processing patterns, such as Amazon Simple Queue Service (Amazon SQS) or Amazon Kinesis, to decouple the Lambda function execution from the OpenSearch data ingestion. This helps maintain the responsiveness of your Lambda functions and improves the overall reliability of the system.
+ **OpenSearch cluster scaling**: Monitor the performance and resource utilization of your OpenSearch cluster, and scale it up or down as needed to handle the increasing volume of Lambda telemetry data.
+ **Failover and disaster recovery**: Implement a robust disaster recovery strategy for your OpenSearch cluster, including regular backups and the ability to quickly restore data in the event of a failure.

**Observability and monitoring**
+ **Dashboards and visualizations**: Use Kibana or other dashboard tools to create custom dashboards and visualizations that provide insights into the performance and health of your Lambda functions based on the telemetry data in OpenSearch.
+ **Alerting and notifications**: Set up alerts and notifications to proactively monitor for anomalies, errors, or performance issues in your Lambda functions. Integrate these alerts and notifications with your existing incident management processes.
+ **Tracing and correlation**: Ensure that your Lambda telemetry data includes relevant tracing information, such as request IDs or correlation IDs, to enable end-to-end observability and troubleshooting across your distributed serverless applications.

By following these best practices, you can ensure that your integration of the Lambda Telemetry API with OpenSearch is secure, cost-effective, and scalable, and provides comprehensive observability for your serverless applications.

## Epics
<a name="send-telemetry-data-from-lambda-to-opensearch-for-analytics-visualization-epics"></a>

### Build and deploy the Lambda extension layer
<a name="build-and-deploy-the-lam-extension-layer"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Download the source code. | Download the sample extensions from the [AWS Lambda Extensions](https://github.com/aws-samples/aws-lambda-extensions) repository. | App developer, Cloud architect | 
| Navigate to the `python-example-telemetry-opensearch-extension` folder. | The [AWS Lambda Extensions](https://github.com/aws-samples/aws-lambda-extensions) repository that you downloaded contains numerous examples for several use cases and language runtimes. Navigate to the [python-example-telemetry-opensearch-extension](https://github.com/aws-samples/aws-lambda-extensions/tree/main/python-example-telemetry-opensearch-extension) folder to use the Python OpenSearch extension, which sends logs to OpenSearch. | App developer, Cloud architect | 
| Add permissions to execute the extension endpoint. | Run the following command to make the extension endpoint executable:<pre>chmod +x python-example-telemetry-opensearch-extension/extension.py</pre> | App developer, Cloud architect | 
| Install the extension dependencies locally. | Run the following command to install local dependencies for the Python code:<pre>pip3 install -r python-example-telemetry-opensearch-extension/requirements.txt -t ./python-example-telemetry-opensearch-extension/</pre>These dependencies will be mounted along with the extension code. | App developer, Cloud architect | 
| Create a .zip package for the extension to deploy it as a layer. | The extension .zip file should contain a root directory called `extensions/`, where the extension executable is located, and another root directory called `python-example-telemetry-opensearch-extension/`, where the core logic of the extension and its dependencies are located. Create the .zip package for the extension:<pre>chmod +x extensions/python-example-telemetry-opensearch-extension<br />zip -r extension.zip extensions python-example-telemetry-opensearch-extension</pre> | App developer, Cloud architect | 
| Deploy the extension as a Lambda layer. | Publish the layer by using your extension .zip file and the following command:<pre>aws lambda publish-layer-version \<br />--layer-name "python-example-telemetry-opensearch-extension" \<br />--zip-file "fileb://extension.zip"</pre> | App developer, Cloud architect | 

### Integrate the extension into your function
<a name="integrate-the-extension-into-your-function"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Add the layer to your function. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/send-telemetry-data-from-lambda-to-opensearch-for-analytics-visualization.html)For more information about adding a layer to your Lambda function, see the [Lambda documentation](https://docs.aws.amazon.com/lambda/latest/dg/adding-layers.html). | App developer, Cloud architect | 
| Set the environment variables for the function. | On the function page, choose the **Configuration** tab and add the following environment variables to your function:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/send-telemetry-data-from-lambda-to-opensearch-for-analytics-visualization.html) | App developer, Cloud architect | 

### Add logging statements and test your function
<a name="add-logging-statements-and-test-your-function"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Add logging statements to your function. | Add logging statements to your function by using one of the [built-in logging mechanisms](https://docs.aws.amazon.com/lambda/latest/dg/python-logging.html) or your logging module of choice. Here are examples of logging messages in Python:<pre>print("Your Log Message Here")<br />logger = logging.getLogger(__name__)<br /><br />logger.info("Test Info Log.")<br />logger.error("Test Error Log.")</pre> | App developer, Cloud architect | 
| Test your function. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/send-telemetry-data-from-lambda-to-opensearch-for-analytics-visualization.html)You should see **Executing function: succeeded** if everything works properly. | App developer, Cloud architect | 

### View your logs in OpenSearch
<a name="view-your-logs-in-opensearch"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Query your indexes. | In OpenSearch, run the following command to query your indexes:<pre>SELECT * FROM index-name</pre>Your logs should be displayed in the query results. | Cloud architect | 

## Troubleshooting
<a name="send-telemetry-data-from-lambda-to-opensearch-for-analytics-visualization-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| Connectivity issues | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/send-telemetry-data-from-lambda-to-opensearch-for-analytics-visualization.html) | 
| Data ingestion errors | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/send-telemetry-data-from-lambda-to-opensearch-for-analytics-visualization.html) | 

## Related resources
<a name="send-telemetry-data-from-lambda-to-opensearch-for-analytics-visualization-resources"></a>
+ [Example Lambda Telemetry Integrations for OpenSearch](https://github.com/aws-samples/aws-lambda-extensions/tree/main/python-example-telemetry-opensearch-extension) (GitHub repository)
+ [Augment Lambda functions using Lambda extensions](https://docs.aws.amazon.com/lambda/latest/dg/lambda-extensions.html) (Lambda documentation)
+ [Lambda Telemetry API](https://docs.aws.amazon.com/lambda/latest/dg/telemetry-api.html) (Lambda documentation)
+ [Introducing the AWS Lambda Telemetry API](https://aws.amazon.com/blogs/compute/introducing-the-aws-lambda-telemetry-api/) (AWS blog post)
+ [Integrating the AWS Lambda Telemetry API with Prometheus and OpenSearch](https://aws.amazon.com/blogs/opensource/integrating-the-aws-lambda-telemetry-api-with-prometheus-and-opensearch) (AWS blog post)

## Additional information
<a name="send-telemetry-data-from-lambda-to-opensearch-for-analytics-visualization-additional"></a>

**Altering the log structure**

The extension sends logs as a nested document to OpenSearch by default. This allows you to perform nested queries to retrieve individual column values.

If the default log output doesn't meet your specific needs, you can customize it by modifying the source code of the Lambda extension that’s provided by AWS. AWS encourages customers to adapt the output to suit their business requirements. To change the log output, locate the `dispatch_to_opensearch` function in the `telemetry_dispatcher.py` file within the extension's source code and make the necessary alterations.
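As one example of such an alteration, you could flatten the nested document into dotted top-level keys before dispatch. The `flatten` helper below is illustrative and is not part of the AWS-provided extension code.

```python
def flatten(doc, parent_key="", sep="."):
    """Flatten a nested telemetry document into dotted top-level keys,
    e.g. {"record": {"status": "ok"}} -> {"record.status": "ok"}."""
    items = {}
    for key, value in doc.items():
        new_key = f"{parent_key}{sep}{key}" if parent_key else key
        if isinstance(value, dict):
            items.update(flatten(value, new_key, sep=sep))
        else:
            items[new_key] = value
    return items
```

A transformation like this could be applied inside the dispatch logic so that each document is indexed with flat fields, which simplifies some queries at the cost of losing nested-query support.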

# Set up a serverless cell router for a cell-based architecture
<a name="serverless-cell-router-architecture"></a>

*Mian Tariq and Ioannis Lioupras, Amazon Web Services*

## Summary
<a name="serverless-cell-router-architecture-summary"></a>

As the entry point to a global cell-based application, the cell router is responsible for efficiently assigning users to the appropriate cells and providing the endpoints to those users. The cell router handles functions such as storing user-to-cell mappings, monitoring cell capacity, and requesting new cells when needed. It's important to maintain cell-router functionality during potential disruptions.

The cell-router design framework in this pattern focuses on resilience, scalability, and overall performance optimization. The pattern uses static routing, where clients cache endpoints upon initial login and communicate directly with cells. This decoupling enhances system resilience by helping to ensure uninterrupted functionality of the cell-based application during a cell-router impairment.

This pattern uses an AWS CloudFormation template to deploy the architecture. For details about what the template deploys, or to deploy the same configuration by using the AWS Management Console, see the [Additional information](#serverless-cell-router-architecture-additional) section.

**Important**  
The demonstration, the code, and the CloudFormation template presented in this pattern are intended for explanatory purposes only. The material provided is solely for the purpose of illustrating the design pattern and aiding in comprehension. The demo and code are not production-ready and should not be used for any live production activities. Any attempt to use the code or demo in a production environment is strongly discouraged and is at your own risk. We recommend consulting with appropriate professionals and performing thorough testing before implementing this pattern or any of its components in a production setting.

## Prerequisites and limitations
<a name="serverless-cell-router-architecture-prereqs"></a>

**Prerequisites**
+ An active Amazon Web Services (AWS) account
+ The latest version of the [AWS Command Line Interface (AWS CLI)](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html)
+ [AWS credentials](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html) with the necessary permissions to create the CloudFormation stack, AWS Lambda functions, and related resources

**Product versions**
+ Python 3.12

## Architecture
<a name="serverless-cell-router-architecture-architecture"></a>

The following diagram shows a high-level design of the cell router.

![\[The five-step process of the cell router.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/fd2fbf9d-9ae4-4c27-bc32-cf117350137a/images/feb90b51-dd91-483b-b5a3-b0a5359686e3.png)


The diagram steps through the following workflow:

1. The user contacts Amazon API Gateway, which serves as the front door for the cell-router API endpoints.

1. Amazon Cognito handles the authentication and authorization.

1. The AWS Step Functions workflow consists of the following components:
   + **Orchestrator** ‒ The `Orchestrator` uses AWS Step Functions to create a workflow, or state machine. The workflow is triggered by the cell router API. The `Orchestrator` executes Lambda functions based on the resource path.
   + **Dispatcher** ‒ The `Dispatcher` Lambda function identifies and assigns one static cell per registered new user. The function searches for the cell with the least number of users, assigns it to the user, and returns the endpoints.
   + **Mapper** ‒ The `Mapper` operation handles the user-to-cell mappings within the `RoutingDB` Amazon DynamoDB database that was created by the CloudFormation template. When triggered, the `Mapper` function provides the already assigned users with their endpoints.
   + **Scaler** ‒ The `Scaler` function keeps track of the cell occupancy and available capacity. When needed, the `Scaler` function can send a request through Amazon Simple Queue Service (Amazon SQS) to the Provision and Deploy layer to request new cells.
   + **Validator** ‒ The `Validator` function validates the cell endpoints and detects any potential issues.

1. The `RoutingDB` stores cell information and attributes (API endpoints, AWS Region, state, metrics).

1. When the available capacity of cells exceeds a threshold, the cell router requests provisioning and deployment services through Amazon SQS to create new cells.

When new cells are created, `RoutingDB` gets updated from the Provision and Deploy layer. However, that process is beyond the scope of this pattern. For an overview of cell-based architecture design premises and details about the cell-router design used in this pattern, see the [Additional information](#serverless-cell-router-architecture-additional) section.
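The `Dispatcher` and `Scaler` behavior described above can be sketched as follows. This is a simplified, in-memory illustration under assumed field names (`users`, `capacity`, a 0.8 occupancy threshold); the actual functions read and write the `RoutingDB` DynamoDB table and send scaling requests through Amazon SQS.

```python
def assign_cell(cells, user_id, threshold=0.8):
    """Assign a user to the least-loaded cell (Dispatcher-style) and
    flag when that cell crosses the capacity threshold so a
    Scaler-style request for new cells can be queued."""
    cell = min(cells, key=lambda c: c["users"] / c["capacity"])
    cell["users"] += 1
    needs_scaling = cell["users"] / cell["capacity"] >= threshold
    return {
        "userId": user_id,
        "cellId": cell["cellId"],
        "endpoints": cell["endpoints"],
        "requestNewCell": needs_scaling,
    }

cells = [
    {"cellId": "cell-0001", "users": 90, "capacity": 100,
     "endpoints": ["https://cell1.example.com/"]},
    {"cellId": "cell-0002", "users": 40, "capacity": 100,
     "endpoints": ["https://cell2.example.com/"]},
]
result = assign_cell(cells, "user-42")
# The least-loaded cell (cell-0002) is chosen for the new user.
```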

## Tools
<a name="serverless-cell-router-architecture-tools"></a>

**AWS services**
+ [Amazon API Gateway](https://docs.aws.amazon.com/apigateway/latest/developerguide/welcome.html) helps you create, publish, maintain, monitor, and secure REST, HTTP, and WebSocket APIs at any scale.
+ [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) helps you set up AWS resources, provision them quickly and consistently, and manage them throughout their lifecycle across AWS accounts and AWS Regions.
+ [Amazon Cognito](https://docs.aws.amazon.com/cognito/latest/developerguide/what-is-amazon-cognito.html) provides authentication, authorization, and user management for web and mobile apps.
+ [Amazon DynamoDB](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Introduction.html) is a fully managed NoSQL database service that provides fast, predictable, and scalable performance.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a compute service that helps you run code without needing to provision or manage servers. It runs your code only when needed and scales automatically, so you pay only for the compute time that you use.
+ [Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) is a cloud-based object storage service that helps you store, protect, and retrieve any amount of data.
+ [Amazon Simple Queue Service (Amazon SQS)](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/welcome.html) provides a secure, durable, and available hosted queue that helps you integrate and decouple distributed software systems and components.
+ [AWS Step Functions](https://docs.aws.amazon.com/step-functions/latest/dg/welcome.html) is a serverless orchestration service that helps you combine Lambda functions and other AWS services to build business-critical applications.

**Other tools**
+ [Python](https://www.python.org/) is a general-purpose computer programming language.

**Code repository**

The code for this pattern is available in the GitHub [Serverless-Cell-Router](https://github.com/aws-samples/Serverless-Cell-Router/) repository. 

## Best practices
<a name="serverless-cell-router-architecture-best-practices"></a>

For best practices when building cell-based architectures, see the following AWS Well-Architected guidance:
+ [Reducing the Scope of Impact with Cell-Based Architecture](https://docs.aws.amazon.com/wellarchitected/latest/reducing-scope-of-impact-with-cell-based-architecture/reducing-scope-of-impact-with-cell-based-architecture.html)
+ [AWS Well-Architected Framework Reliability Pillar: REL10-BP04 Use bulkhead architectures to limit scope of impact](https://docs.aws.amazon.com/wellarchitected/latest/reliability-pillar/rel_fault_isolation_use_bulkhead.html)

## Epics
<a name="serverless-cell-router-architecture-epics"></a>

### Prepare source files
<a name="prepare-source-files"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clone the example code repository. | To clone the Serverless-Cell-Router GitHub repository to your computer, use the following command:<pre>git clone https://github.com/aws-samples/Serverless-Cell-Router/</pre> | Developer | 
| Set up AWS CLI temporary credentials. | Configure the AWS CLI with credentials for your AWS account. This walkthrough uses temporary credentials provided by the AWS IAM Identity Center **Command line or programmatic access** option. This sets the `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, and `AWS_SESSION_TOKEN` AWS environment variables with the appropriate credentials for use with the AWS CLI. | Developer | 
| Create an S3 bucket. | Create an S3 bucket that will be used to store and access the Serverless-Cell-Router Lambda functions for deployment by the CloudFormation template. To create the S3 bucket, use the following command: <pre>aws s3api create-bucket --bucket <bucket name> --region eu-central-1 --create-bucket-configuration LocationConstraint=eu-central-1</pre> | Developer | 
| Create .zip files. | Create one .zip file for each Lambda function located in the [Functions](https://github.com/aws-samples/Serverless-Cell-Router/tree/main/Functions) directory. These .zip files will be used to deploy the Lambda functions. On a Mac, use the following `zip` commands:<pre>zip -j mapper-scr.zip Functions/Mapper.py<br />zip -j dispatcher-scr.zip Functions/Dispatcher.py<br />zip -j scaler-scr.zip Functions/Scaler.py<br />zip -j validator-scr.zip Functions/Validator.py<br />zip -j dynamodbDummyData-scr.zip Functions/DynamodbDummyData.py</pre> | Developer | 
| Copy the .zip files to the S3 bucket. | To copy all the Lambda function .zip files to the S3 bucket, use the following commands:<pre>aws s3 cp mapper-scr.zip s3://<bucket name><br />aws s3 cp dispatcher-scr.zip s3://<bucket name><br />aws s3 cp scaler-scr.zip s3://<bucket name><br />aws s3 cp validator-scr.zip s3://<bucket name><br />aws s3 cp dynamodbDummyData-scr.zip s3://<bucket name></pre> | Developer | 

### Create the CloudFormation stack
<a name="create-the-cfn-stack"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Deploy the CloudFormation template. | To deploy the CloudFormation template, run the following AWS CLI command:<pre>aws cloudformation create-stack --stack-name serverless-cell-router \<br />--template-body file://Serverless-Cell-Router-Stack-v10.yaml \<br />--capabilities CAPABILITY_IAM \<br />--parameters ParameterKey=LambdaFunctionMapperS3KeyParameterSCR,ParameterValue=mapper-scr.zip \<br />ParameterKey=LambdaFunctionDispatcherS3KeyParameterSCR,ParameterValue=dispatcher-scr.zip \<br />ParameterKey=LambdaFunctionScalerS3KeyParameterSCR,ParameterValue=scaler-scr.zip \<br />ParameterKey=LambdaFunctionAddDynamoDBDummyItemsS3KeyParameterSCR,ParameterValue=dynamodbDummyData-scr.zip \<br />ParameterKey=LambdaFunctionsS3BucketParameterSCR,ParameterValue=<S3 bucket storing lambda zip files> \<br />ParameterKey=CognitoDomain,ParameterValue=<Cognito Domain Name> \<br />--region <enter your aws region id, e.g. "eu-central-1"></pre> | Developer | 
| Check progress. | Sign in to the AWS Management Console, open the CloudFormation console at [https://console.aws.amazon.com/cloudformation/](https://console.aws.amazon.com/cloudformation/), and check the progress of stack creation. When the status is `CREATE_COMPLETE`, the stack has been deployed successfully. | Developer | 

### Assess and verify
<a name="assess-and-verify"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Assign cells to the user. | To initiate the `Orchestrator`, run the following curl command:<pre>curl -X POST \<br />-H "Authorization: Bearer {User id_token}" \<br />https://xxxxxx.execute-api.eu-central-1.amazonaws.com/Cell_Router_Development/cells</pre>The `Orchestrator` triggers the execution of the `Dispatcher` function. The `Dispatcher`, in turn, verifies the existence of the user. If the user is found, the `Dispatcher` returns the associated cell ID and endpoint URLs. If the user isn't found, the `Dispatcher` allocates a cell to the user and sends the cell ID to the `Scaler` function for assessment of the assigned cell's residual capacity. The `Scaler` function's response is the following:`"cellID : cell-0002 , endPoint_1 : https://xxxxx.execute-api.eu-north-1.amazonaws.com/ , endPoint_2 : https://xxxxxxx.execute-api.eu-central-1.amazonaws.com/"` | Developer | 
| Retrieve user cells. | To use the `Orchestrator` to execute the `Mapper` function, run the following command:<pre>curl -X POST \<br />-H "Authorization: Bearer {User id_token}" \<br />https://xxxxxxxxx.execute-api.eu-central-1.amazonaws.com/Cell_Router_Development/mapper</pre>The `Orchestrator` searches for the cell assigned to the user and returns the cell ID and URLs in the following response:`"cellID : cell-0002 , endPoint_1 : https://xxxxx.execute-api.eu-north-1.amazonaws.com/ , endPoint_2 : https://xxxxxxx.execute-api.eu-central-1.amazonaws.com/"` | Developer | 

### Clean up
<a name="clean-up"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clean up the resources. | To avoid incurring additional charges in your account, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/serverless-cell-router-architecture.html) | App developer | 

## Related resources
<a name="serverless-cell-router-architecture-resources"></a>

**References**
+ [Static stability using Availability Zones](https://aws.amazon.com/builders-library/static-stability-using-availability-zones/)
+ [AWS Fault Isolation Boundaries: Static stability](https://docs.aws.amazon.com/whitepapers/latest/aws-fault-isolation-boundaries/static-stability.html)

**Video**

[Physalia: Cell-based Architecture to Provide Higher Availability on Amazon EBS](https://www.youtube.com/watch?v=6IknqRZMFic) 





## Additional information
<a name="serverless-cell-router-architecture-additional"></a>

**Cell-based architecture design premises**

Although this pattern focuses on the cell router, it's important to understand the whole environment. The environment is structured into three discrete layers:
+ The Routing layer, or Thin layer, which contains the cell router
+ The Cell layer, comprising various cells
+ The Provision and Deploy Layer, which provisions cells and deploys the application

Each layer sustains functionality even in the event of impairments affecting other layers. AWS accounts serve as a fault isolation boundary.

The following diagram shows the layers at a high level. The Cell layer and the Provision and Deploy layer are outside the scope of this pattern.

![\[The Routing layer, the Cell layer with multiple cell accounts, and the Provision and Deploy layer.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/fd2fbf9d-9ae4-4c27-bc32-cf117350137a/images/137ac34d-43c3-42b6-95de-a365ff611ce8.png)


For more information about cell-based architecture, see [Reducing the Scope of Impact with Cell-Based Architecture: Cell routing](https://docs.aws.amazon.com/wellarchitected/latest/reducing-scope-of-impact-with-cell-based-architecture/cell-routing.html).

**Cell-router design pattern**

The cell router is a shared component across cells. To mitigate potential impacts, it's important for the Routing layer to use a simple, horizontally scalable design that's as thin as possible. Serving as the system’s entry point, the Routing layer consists of only the components that are required to efficiently assign users to the appropriate cells. Components within this layer don't engage in the management or creation of cells.

This pattern uses static routing, which means that the client caches the endpoints at the initial login and subsequently establishes direct communication with the cell. Periodic interactions between the client and the cell router are initiated to confirm the current status or retrieve any updates. This intentional decoupling enables uninterrupted operations for existing users in the event of cell-router downtime, and it provides continued functionality and resilience within the system.
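The client side of this static routing can be sketched as follows. This is an illustration only, with assumed names: `CellRouterClient` and `fetch_endpoints` are hypothetical and stand in for the client's call to the cell router API.

```python
import time

class CellRouterClient:
    """Illustrative sketch of client-side static routing: cell endpoints are
    cached with a TTL, so the client keeps working even if the cell router
    is temporarily unavailable within the TTL window."""

    def __init__(self, fetch_endpoints, ttl_seconds=3600):
        self._fetch = fetch_endpoints   # callable that asks the cell router for this user's cell URLs
        self._ttl = ttl_seconds
        self._cached = None
        self._expires_at = 0.0

    def endpoints(self):
        # Contact the cell router only when the cached entry has expired;
        # otherwise the client communicates directly with its assigned cell.
        if self._cached is None or time.time() >= self._expires_at:
            self._cached = self._fetch()
            self._expires_at = time.time() + self._ttl
        return self._cached
```

A client built this way periodically re-validates its endpoints but does not depend on the Routing layer for every request.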

In this pattern, the cell router supports the following functionalities:
+ Retrieving cell data from the cell database in the Provision and Deploy layer and storing or updating the local database.
+ Assigning a cell to each new registered user of the application by using the cell assignment algorithm.
+ Storing the user-to-cells mapping in the local database.
+ Checking the capacity of the cells during user assignment and raising an event to the vending machine in the Provision and Deploy layer to create cells. The cell creation criteria algorithm provides this functionality.
+ Responding to requests from newly registered users by providing the URLs of the static cells. These URLs are cached on the client with a time to live (TTL).
+ Responding to requests from existing users that supply an invalid URL by providing a new or updated URL.

To further understand the demonstration cell router that is set up by the CloudFormation template, review the following components and steps:

1. Set up and configure the Amazon Cognito user pool.

1. Set up and configure the API Gateway API for the cell router.

1. Create a DynamoDB table.

1. Create and configure an SQS queue.

1. Implement the `Orchestrator`.

1. Implement the Lambda functions: `Dispatcher`, `Scaler`, `Mapper`, `Validator`.

1. Assess and verify.

This pattern presupposes that the Provision and Deploy layer is already established; its implementation details are beyond the scope of this pattern.

Because these components are set up and configured by a CloudFormation template, the following steps are presented at a descriptive and high level. The assumption is that you have the required AWS skills to complete the setup and configuration.

*1. Set up and configure the Amazon Cognito user pool*

Sign in to the AWS Management Console, and open the Amazon Cognito console at [https://console.aws.amazon.com/cognito/](https://console.aws.amazon.com/cognito/). Set up and configure an Amazon Cognito user pool named `CellRouterPool`, with app integration, hosted UI, and the necessary permissions.

*2. Set up and configure the API Gateway API for the cell router*

Open the API Gateway console at [https://console.aws.amazon.com/apigateway/](https://console.aws.amazon.com/apigateway/). Set up and configure an API named `CellRouter`, using an Amazon Cognito authorizer integrated with the Amazon Cognito user pool `CellRouterPool`. Implement the following elements:
+ `CellRouter` API resources, including `POST` methods
+ Integration with the Step Functions workflow implemented in step 5
+ Authorization through the Amazon Cognito authorizer
+ Integration request and response mappings
+ Allocation of necessary permissions

*3. Create a DynamoDB table*

Open the DynamoDB console at [https://console.aws.amazon.com/dynamodb/](https://console.aws.amazon.com/dynamodb/), and create a standard DynamoDB table called `tbl_router` with the following configuration:
+ **Partition key** ‒ `marketId`
+ **Sort key** ‒ `cellId`
+ **Capacity mode** ‒ Provisioned
+ **Point-in-time recovery (PITR)** ‒ Off

On the **Indexes** tab, create a global secondary index called `marketId-currentCapacity-index`. The `Scaler` Lambda function will use the index to conduct efficient searches for the cell with the lowest number of assigned users.

Create the table structure with the following attributes:
+ `marketId` ‒ Europe
+ `cellId` ‒ cell-0002
+ `currentCapacity` ‒ 2
+ `endPoint_1` ‒ <your endpoint for the first Region>
+ `endPoint_2` ‒ <your endpoint for the second Region>
+ `IsHealthy` ‒ True
+ `maxCapacity` ‒ 10
+ `regionCode_1` ‒ `eu-north-1`
+ `regionCode_2` ‒ `eu-central-1`
+ `userIds` ‒ <your email address>
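As an illustration, an item with this structure could be assembled with a helper like the following sketch. The helper name `build_cell_item` and the endpoint URLs are hypothetical, not part of the pattern's code; the `put_item` call in the main guard shows how the sample item would be written to `tbl_router`.

```python
def build_cell_item(market_id, cell_id, endpoints, region_codes,
                    current_capacity=0, max_capacity=10, user_ids=""):
    """Assemble a tbl_router item matching the attribute list above."""
    return {
        "marketId": market_id,          # partition key
        "cellId": cell_id,              # sort key
        "currentCapacity": current_capacity,
        "endPoint_1": endpoints[0],
        "endPoint_2": endpoints[1],
        "IsHealthy": True,
        "maxCapacity": max_capacity,
        "regionCode_1": region_codes[0],
        "regionCode_2": region_codes[1],
        "userIds": user_ids,
    }

if __name__ == "__main__":
    # Writes the sample item; the endpoint URLs are placeholders.
    import boto3
    table = boto3.resource("dynamodb").Table("tbl_router")
    table.put_item(Item=build_cell_item(
        "Europe", "cell-0002",
        ["https://cell.eu-north-1.example.com", "https://cell.eu-central-1.example.com"],
        ["eu-north-1", "eu-central-1"],
        current_capacity=2, user_ids="user@example.com"))
```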

*4. Create and configure an SQS queue*

Open the Amazon SQS console at [https://console.aws.amazon.com/sqs/](https://console.aws.amazon.com/sqs/), and create a standard SQS queue called `CellProvisioning` configured with **Amazon SQS key** encryption.

*5. Implement the Orchestrator*

Develop a Step Functions workflow to serve as the `Orchestrator` for the router. The workflow is callable through the cell router API and runs the designated Lambda functions based on the resource path. Integrate the workflow with the `CellRouter` API in API Gateway, and configure the necessary permissions to invoke the Lambda functions.

The following diagram shows the workflow. The Choice state invokes one of the Lambda functions. If the Lambda function succeeds, the workflow ends. If the Lambda function fails, the Fail state is entered.

![\[A diagram of the workflow with the four functions and ending in a fail state.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/fd2fbf9d-9ae4-4c27-bc32-cf117350137a/images/cfe8d029-6f30-49a1-aaad-cad503bdcbae.png)
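This workflow can be sketched in Amazon States Language (ASL) as follows. This is an illustration only, showing two of the four functions; the state names, resource paths, and ARNs are placeholders, and the demonstration's actual workflow is created by the CloudFormation template.

```json
{
  "Comment": "Illustrative Orchestrator sketch: a Choice state routes by resource path",
  "StartAt": "RouteByResourcePath",
  "States": {
    "RouteByResourcePath": {
      "Type": "Choice",
      "Choices": [
        { "Variable": "$.resourcePath", "StringEquals": "/dispatch", "Next": "Dispatcher" },
        { "Variable": "$.resourcePath", "StringEquals": "/validate", "Next": "Validator" }
      ],
      "Default": "RequestFailed"
    },
    "Dispatcher": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:eu-north-1:111122223333:function:Dispatcher",
      "Catch": [{ "ErrorEquals": ["States.ALL"], "Next": "RequestFailed" }],
      "End": true
    },
    "Validator": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:eu-north-1:111122223333:function:Validator",
      "Catch": [{ "ErrorEquals": ["States.ALL"], "Next": "RequestFailed" }],
      "End": true
    },
    "RequestFailed": { "Type": "Fail", "Error": "RouterError" }
  }
}
```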


*6. Implement the Lambda functions*

Implement the `Dispatcher`, `Mapper`, `Scaler`, and `Validator` functions. When you set up and configure each function in the demonstration, define a role for the function and assign the necessary permissions for performing required operations on the DynamoDB table `tbl_router`. Additionally, integrate each function into the workflow `Orchestrator`.

*Dispatcher function*

The `Dispatcher` function is responsible for identifying and assigning a single static cell for each new registered user. When a new user registers with the global application, the request goes to the `Dispatcher` function. The function processes the request by using predefined evaluation criteria such as the following:

1. **Region** ‒ Select the cell in the market where the user is located. For example, if the user is accessing the global application from Europe, select a cell that uses AWS Regions in Europe.

1. **Proximity or latency** ‒ Select the cell closest to the user. For example, if the user is accessing the application from Holland, the function considers a cell that uses Frankfurt and Ireland. The decision regarding which cell is closest is based on metrics such as latency between the user's location and the cell Regions. For this example pattern, the information is statically fed from the Provision and Deploy layer.

1. **Health** ‒ The `Dispatcher` function checks whether the selected cell is healthy based on the provided cell state (Healthy = true or false).

1. **Capacity** ‒ The user distribution is based on *least number of users in a cell* logic, so the user is assigned to the cell that has the least number of users.

**Note**  
These criteria are presented to explain this example pattern only. For a real-life cell-router implementation, you can define more refined and use case‒based criteria.

The `Orchestrator` invokes the `Dispatcher` function to assign users to cells. In this demonstration function, the market value is a static parameter defined as `europe`.

The `Dispatcher` function assesses whether a cell is already assigned to the user. If the cell is already assigned, the `Dispatcher` function returns the cell's endpoints. If no cell is assigned to the user, the function searches for the cell with the least number of users, assigns it to the user, and returns the endpoints. The efficiency of the cell search query is optimized by using the global secondary index.
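The assign-or-reuse logic can be sketched as follows. This is a simplified in-memory illustration only: `pick_cell` is a hypothetical helper, and in the pattern the least-loaded lookup is a DynamoDB query served by the `marketId-currentCapacity-index` global secondary index.

```python
def pick_cell(cells, user_id):
    """Sketch of the Dispatcher's core logic: reuse an existing assignment,
    otherwise assign the healthy cell with the fewest users."""
    # 1. Reuse an existing assignment if the user is already mapped to a cell.
    for cell in cells:
        if user_id in cell.get("userIds", []):
            return cell
    # 2. Otherwise choose the healthy, non-full cell with the lowest currentCapacity.
    candidates = [c for c in cells
                  if c.get("IsHealthy") and c["currentCapacity"] < c["maxCapacity"]]
    if not candidates:
        return None  # no capacity left: the Scaler path requests a new cell
    cell = min(candidates, key=lambda c: c["currentCapacity"])
    cell["userIds"].append(user_id)
    cell["currentCapacity"] += 1
    return cell
```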

*Mapper function*

The `Mapper` function oversees the storage and maintenance of user-to-cell mappings in the database. A single cell is allocated to each registered user. Each cell has two distinct URLs, one for each AWS Region. Serving as API endpoints hosted on API Gateway, these URLs function as inbound points to the global application.

When the `Mapper` function receives a request from the client application, it runs a query on the DynamoDB table `tbl_router` to retrieve the user-to-cell mapping that is associated with the provided email ID. If it finds an assigned cell, the `Mapper` function promptly provides the cell's two URLs. The `Mapper` function also actively monitors alterations to the cell URLs, and it initiates notifications or updates to user settings.

*Scaler function*

The `Scaler` function manages the residual capacity of the cell. For each new user-registration request, the `Scaler` function assesses the available capacity of the cell that the `Dispatcher` function assigned to the user. If the cell has reached its predetermined limit according to the specified evaluation criteria, the function initiates a request through an Amazon SQS queue to the Provision and Deploy layer, soliciting the provisioning and deployment of new cells. The scaling of cells can be executed based on a set of evaluation criteria such as the following:

1. **Maximum users** ‒ Each cell can have a maximum of 500 users.

1. **Buffer capacity** ‒ The buffer capacity of each cell is 20 percent, which means that each cell can be assigned to 400 users at any time. The remaining 20 percent buffer capacity is reserved for future use cases and handling of unexpected scenarios (for example, when cell creation and provisioning services are unavailable).

1. **Cell creation** ‒ As soon as an existing cell reaches 70 percent of capacity, a request is triggered to create an additional cell.

**Note**  
These criteria are presented to explain this example pattern only. For a real-life cell-router implementation, you can define more refined and use case‒based criteria.

The demonstration `Scaler` code is executed by the `Orchestrator` after the `Dispatcher` successfully assigns a cell to the newly registered user. The `Scaler`, upon receipt of the cell ID from the `Dispatcher`, evaluates whether the designated cell has adequate capacity to accommodate additional users, based on predefined evaluation criteria. If the cell's capacity is insufficient, the `Scaler` function dispatches a message to the Amazon SQS service. This message is retrieved by the service within the Provision and Deploy layer, initiating the provisioning of a new cell.
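The capacity evaluation and SQS hand-off can be sketched as follows. The thresholds match the example criteria above; the function names, queue URL, and message shape are assumptions, not the demonstration's exact code.

```python
import json

MAX_USERS = 500          # example criterion: hard cap per cell
BUFFER = 0.20            # 20 percent reserved, so 500 * (1 - 0.20) = 400 assignable users
SCALE_THRESHOLD = 0.70   # request a new cell at 70 percent of capacity

def needs_new_cell(current_users, max_users=MAX_USERS):
    """Sketch of the Scaler's evaluation: trigger cell creation at 70% capacity."""
    return current_users >= max_users * SCALE_THRESHOLD

def request_cell(sqs_client, queue_url, market_id):
    # Hypothetical message shape; the service in the Provision and Deploy
    # layer consumes this message and provisions a new cell.
    sqs_client.send_message(
        QueueUrl=queue_url,
        MessageBody=json.dumps({"action": "provision_cell", "marketId": market_id}),
    )
```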

*Validator function*

The `Validator` function identifies and resolves issues pertaining to cell access. When a user signs in to the global application, the application retrieves the cell's URLs from the user profile settings and routes user requests to one of the two assigned Regions within the cell. If the URLs are inaccessible, the application can dispatch a validate URL request to the cell router. The cell-router `Orchestrator` invokes the `Validator`. The `Validator` initiates the validation process. Validation might include, among other checks, the following:
+ Cross-referencing cell URLs in the request with URLs stored in the database to identify and process potential updates
+ Running a deep health check (for example, an `HTTP GET` request to the cell's endpoint)
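These checks can be sketched as follows. `validate_cell` and the response shapes are hypothetical illustrations, not the demo code.

```python
import urllib.request

def validate_cell(stored_urls, request_urls, deep_check=False):
    """Sketch of the Validator's checks: URL cross-reference, then an
    optional deep health check against the cell endpoints."""
    # Cross-reference the client's cached URLs against the database records.
    if set(request_urls) != set(stored_urls):
        return {"status": "stale", "urls": stored_urls}  # client should refresh its cache
    if deep_check:
        # Deep health check: an HTTP GET against each cell endpoint.
        for url in stored_urls:
            try:
                with urllib.request.urlopen(url, timeout=5) as resp:
                    if resp.status >= 500:
                        return {"status": "unhealthy", "urls": stored_urls}
            except OSError:
                return {"status": "unhealthy", "urls": stored_urls}
    return {"status": "valid", "urls": stored_urls}
```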

Finally, the `Validator` function responds to client application requests with the validation status and any required remediation steps.

The `Validator` is designed to enhance user experience. Consider a scenario where certain users encounter difficulty accessing the global application because an incident causes cells to be temporarily unavailable. Instead of presenting generic errors, the `Validator` function can provide instructive remediation steps. These steps might include the following actions:
+ Inform users about the incident.
+ Provide an approximate wait time before service availability.
+ Provide a support contact number for obtaining additional information.

The demo code for the `Validator` function verifies that the user-supplied cell URLs in the request match the records stored in the `tbl_router` table. The `Validator` function also checks whether the cells are healthy.

# Set up private access to an Amazon S3 bucket through a VPC endpoint
<a name="set-up-private-access-to-an-amazon-s3-bucket-through-a-vpc-endpoint"></a>

*Martin Maritsch, Nicolas Jacob Baer, Gabriel Rodriguez Garcia, Shukhrat Khodjaev, Mohan Gowda Purushothama, and Joaquin Rinaudo, Amazon Web Services*

## Summary
<a name="set-up-private-access-to-an-amazon-s3-bucket-through-a-vpc-endpoint-summary"></a>

In Amazon Simple Storage Service (Amazon S3), presigned URLs enable you to share files of arbitrary size with target users. By default, Amazon S3 presigned URLs are accessible from the internet within an expiration time window, which makes them convenient to use. However, corporate environments often require access to Amazon S3 presigned URLs to be limited to a private network only.

This pattern presents a serverless solution for securely interacting with S3 objects by using presigned URLs from a private network without internet traversal. In the architecture, users access an Application Load Balancer through an internal domain name. Traffic is routed internally through Amazon API Gateway and a virtual private cloud (VPC) endpoint for the S3 bucket. The AWS Lambda function generates presigned URLs for file downloads through the private VPC endpoint, which helps enhance security and privacy for sensitive data.

## Prerequisites and limitations
<a name="set-up-private-access-to-an-amazon-s3-bucket-through-a-vpc-endpoint-prereqs"></a>

**Prerequisites**
+ A VPC that includes a subnet deployed in an AWS account that is connected to the corporate network (for example, through AWS Direct Connect).

**Limitations**
+ The S3 bucket must have the same name as the domain, so we recommend that you check the [Amazon S3 bucket naming rules](https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucketnamingrules.html).
+ This sample architecture doesn't include monitoring features for the deployed infrastructure. If your use case requires monitoring, consider adding [AWS monitoring services](https://docs.aws.amazon.com/prescriptive-guidance/latest/implementing-logging-monitoring-cloudwatch/welcome.html).
+ This sample architecture doesn't include input validation. If your use case requires input validation and an increased level of security, consider [using AWS WAF to protect your API](https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-control-access-aws-waf.html).
+ This sample architecture doesn't include access logging with the Application Load Balancer. If your use case requires access logging, consider enabling [load balancer access logs](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-access-logs.html).

**Versions**
+ Python version 3.11 or later
+ Terraform version 1.6 or later

## Architecture
<a name="set-up-private-access-to-an-amazon-s3-bucket-through-a-vpc-endpoint-architecture"></a>

**Target technology stack**

The following AWS services are used in the target technology stack:
+ **Amazon S3** is the core storage service used for uploading, downloading, and storing files securely.
+ **Amazon API Gateway** exposes resources and endpoints for interacting with the S3 bucket. This service plays a role in generating presigned URLs for downloading or uploading data.
+ **AWS Lambda** generates presigned URLs for downloading files from Amazon S3. The Lambda function is called by API Gateway.
+ **Amazon VPC** deploys resources within a VPC to provide network isolation. The VPC includes subnets and routing tables to control traffic flow.
+ **Application Load Balancer** routes incoming traffic either to API Gateway or to the VPC endpoint of the S3 bucket. It allows users from the corporate network to access resources internally.
+ **VPC endpoint for Amazon S3** enables direct, private communication between resources in the VPC and Amazon S3 without traversing the public internet.
+ **AWS Identity and Access Management (IAM)** controls access to AWS resources. Permissions are set up to ensure secure interactions with the API and other services.

**Target architecture**

![\[Setting up private access to an S3 bucket through a VPC endpoint\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/683ca6a1-789c-4444-bcbf-e4e80d253df3/images/1ca7ee17-d346-4eb9-bf61-ccf42528a401.png)


The diagram illustrates the following:

1. Users from the corporate network can access the Application Load Balancer through an internal domain name. We assume that a connection exists between the corporate network and the intranet subnet in the AWS account (for example, through a Direct Connect connection).

1. The Application Load Balancer routes incoming traffic either to API Gateway to generate presigned URLs to download or upload data to Amazon S3, or to the VPC endpoint of the S3 bucket. In both scenarios, requests are routed internally and do not need to traverse the internet.

1. API Gateway exposes resources and endpoints to interact with the S3 bucket. In this example, we provide an endpoint to download files from the S3 bucket, but this could be extended to provide upload functionality as well.

1. The Lambda function generates the presigned URL to download a file from Amazon S3 by using the domain name of the Application Load Balancer instead of the public Amazon S3 domain.

1. The user receives the presigned URL and uses it to download the file from Amazon S3 by using the Application Load Balancer. The load balancer includes a default route to send traffic that's not intended for the API toward the VPC endpoint of the S3 bucket.

1. The VPC endpoint routes the presigned URL with the custom domain name to the S3 bucket. The S3 bucket must have the same name as the domain.

**Automation and scale**

This pattern uses Terraform to deploy the infrastructure from the code repository into an AWS account.

## Tools
<a name="set-up-private-access-to-an-amazon-s3-bucket-through-a-vpc-endpoint-tools"></a>

**Tools**
+ [Python](https://www.python.org/) is a general-purpose computer programming language.
+ [Terraform](https://www.terraform.io/) is an infrastructure as code (IaC) tool from HashiCorp that helps you create and manage cloud and on-premises resources.
+ [AWS Command Line Interface (AWS CLI)](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) is an open source tool that helps you interact with AWS services through commands in your command-line shell.

**Code repository**

The code for this pattern is available in a GitHub repository at [https://github.com/aws-samples/private-s3-vpce](https://github.com/aws-samples/private-s3-vpce).

## Best practices
<a name="set-up-private-access-to-an-amazon-s3-bucket-through-a-vpc-endpoint-best-practices"></a>

The sample architecture for this pattern uses [IAM permissions](https://docs.aws.amazon.com/apigateway/latest/developerguide/permissions.html) to control access to the API. Anyone who has valid IAM credentials can call the API. If your use case requires a more complex authorization model, you might want to [use a different access control mechanism](https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-control-access-to-api.html).

## Epics
<a name="set-up-private-access-to-an-amazon-s3-bucket-through-a-vpc-endpoint-epics"></a>

### Deploy the solution in an AWS account
<a name="deploy-the-solution-in-an-aws-account"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Obtain AWS credentials. | Review your AWS credentials and your access to your account. For instructions, see [Configuration and credential file settings](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html) in the AWS CLI documentation. | AWS DevOps, General AWS | 
| Clone the repository. | Clone the GitHub repository provided with this pattern:<pre>git clone https://github.com/aws-samples/private-s3-vpce</pre> | AWS DevOps, General AWS | 
| Configure variables. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-private-access-to-an-amazon-s3-bucket-through-a-vpc-endpoint.html) | AWS DevOps, General AWS | 
| Deploy solution. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-private-access-to-an-amazon-s3-bucket-through-a-vpc-endpoint.html) | AWS DevOps, General AWS | 

### Test the solution
<a name="test-the-solution"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a test file. | Upload a file to Amazon S3 to create a test scenario for the file download. You can use the [Amazon S3 console](https://console.aws.amazon.com/s3/) or the following AWS CLI command:<pre>aws s3 cp /path/to/testfile s3://your-bucket-name/testfile</pre> | AWS DevOps, General AWS | 
| Test presigned URL functionality. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-private-access-to-an-amazon-s3-bucket-through-a-vpc-endpoint.html) | AWS DevOps, General AWS | 
| Clean up. | Make sure to remove the resources when they are no longer required:<pre>terraform destroy</pre> | AWS DevOps, General AWS | 

## Troubleshooting
<a name="set-up-private-access-to-an-amazon-s3-bucket-through-a-vpc-endpoint-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| S3 object key names with special characters such as number signs (#) break URL parameters and lead to errors. | Encode URL parameters properly, and make sure that the S3 object key name follows [Amazon S3 guidelines](https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-keys.html). | 

## Related resources
<a name="set-up-private-access-to-an-amazon-s3-bucket-through-a-vpc-endpoint-resources"></a>

Amazon S3:
+ [Sharing objects with presigned URLs](https://docs.aws.amazon.com/AmazonS3/latest/userguide/ShareObjectPreSignedURL.html)
+ [Controlling access from VPC endpoints with bucket policies](https://docs.aws.amazon.com/AmazonS3/latest/userguide/example-bucket-policies-vpc-endpoint.html)

Amazon API Gateway:
+ [Use VPC endpoint policies for private APIs in API Gateway](https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-vpc-endpoint-policies.html)

Application Load Balancer:
+ [Hosting Internal HTTPS Static Websites with ALB, S3, and PrivateLink](https://aws.amazon.com/blogs/networking-and-content-delivery/hosting-internal-https-static-websites-with-alb-s3-and-privatelink/) (AWS blog post)

# Troubleshoot states in AWS Step Functions by using Amazon Bedrock
<a name="troubleshooting-states-in-aws-step-functions"></a>

*Aniket Kurzadkar and Sangam Kushwaha, Amazon Web Services*

## Summary
<a name="troubleshooting-states-in-aws-step-functions-summary"></a>

AWS Step Functions error handling capabilities can help you see an error that occurs during a state in a [workflow](https://docs.aws.amazon.com/step-functions/latest/dg/concepts-statemachines.html), but it can still be a challenge to find the root cause of an error and debug it. This pattern addresses that challenge and shows how Amazon Bedrock can help you resolve errors that occur during states in Step Functions. 

Step Functions provides workflow orchestration, making it easier for developers to automate processes. Step Functions also provides error handling functionality that provides the following benefits:
+ Developers can create more resilient applications that don't fail completely when something goes wrong.
+ Workflows can include conditional logic to handle different types of errors differently.
+ The system can automatically retry failed operations, perhaps with exponential backoff.
+ Alternative execution paths can be defined for error scenarios, allowing the workflow to adapt and continue processing.

When an error occurs in a Step Functions workflow, this pattern shows how the error message and context can be sent to a foundation model (FM) like Claude 3 that’s supported by Step Functions. The FM can analyze the error, categorize it, and suggest potential remediation steps.
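A minimal sketch of this approach in Amazon States Language follows, with placeholder state names and ARNs: a `Catch` on a task routes any error, along with its `Error` and `Cause`, to the error-handling Lambda function before the Amazon SNS notification.

```json
{
  "Comment": "Sketch only: state and function names are placeholders",
  "StartAt": "ProcessData",
  "States": {
    "ProcessData": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:111122223333:function:ProcessData",
      "Catch": [
        {
          "ErrorEquals": ["States.ALL"],
          "ResultPath": "$.error",
          "Next": "AnalyzeErrorWithBedrock"
        }
      ],
      "Next": "NotifySuccess"
    },
    "AnalyzeErrorWithBedrock": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:111122223333:function:ErrorHandler",
      "Next": "NotifyFailure"
    },
    "NotifySuccess": {
      "Type": "Task",
      "Resource": "arn:aws:states:::sns:publish",
      "Parameters": {
        "TopicArn": "arn:aws:sns:us-east-1:111122223333:WorkflowNotifications",
        "Message": "Workflow succeeded"
      },
      "End": true
    },
    "NotifyFailure": {
      "Type": "Task",
      "Resource": "arn:aws:states:::sns:publish",
      "Parameters": {
        "TopicArn": "arn:aws:sns:us-east-1:111122223333:WorkflowNotifications",
        "Message.$": "$.error.Cause"
      },
      "End": true
    }
  }
}
```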

## Prerequisites and limitations
<a name="troubleshooting-states-in-aws-step-functions-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ Basic understanding of [AWS Step Functions and workflows](https://docs.aws.amazon.com/step-functions/latest/dg/concepts-statemachines.html)
+ Amazon Bedrock [API connectivity](https://docs.aws.amazon.com/bedrock/latest/userguide/getting-started-api.html)

**Limitations**
+ You can use this pattern’s approach for various AWS services. However, the results might vary according to the prompt created by AWS Lambda that’s subsequently evaluated by Amazon Bedrock.
+ Some AWS services aren’t available in all AWS Regions. For Region availability, see [AWS services by Region](https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/). For specific endpoints, see [Service endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/aws-service-information.html), and choose the link for the service.

## Architecture
<a name="troubleshooting-states-in-aws-step-functions-architecture"></a>

The following diagram shows the workflow and architecture components for this pattern.

![\[Workflow for error handling and notification using Step Functions, Amazon Bedrock, and Amazon SNS.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/78f86c74-c9de-4562-adcc-105b87a77a54/images/d8eda499-ea1d-45e5-8a36-e04a44ad5c4b.png)


The diagram shows the automated workflow for error handling and notification in a Step Functions state machine:

1. The developer starts a state machine’s execution.

1. The Step Functions state machine begins processing its states. There are two possible outcomes:
   + (a) If all states execute successfully, the workflow proceeds directly to Amazon SNS for an email success notification.
   + (b) If any state fails, the workflow moves to the error handling Lambda function.

1. In case of an error, the following occurs:
   + (a) The Lambda function (error handler) is triggered. The Lambda function extracts the error message from the event data that the Step Functions state machine passed to it. Then the Lambda function prepares a prompt based on this error message and sends the prompt to Amazon Bedrock. The prompt requests solutions and suggestions related to the specific error encountered.
   + (b) Amazon Bedrock, which hosts the generative AI model, processes the input prompt. (This pattern uses the Anthropic Claude 3 foundation model (FM), which is one of many FMs that Amazon Bedrock supports.) The AI model analyzes the error context. Then the model generates a response that can include explanations of why the error occurred, potential solutions to resolve the error, and suggestions to avoid making the same mistakes in the future.

     Amazon Bedrock returns its AI-generated response to the Lambda function. The Lambda function processes the response, potentially formatting it or extracting key information. Then the Lambda function sends the response to the state machine output.

1. After error handling or successful execution, the workflow concludes by triggering Amazon SNS to send an email notification.
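The prompt preparation in step 3(a) might look like the following sketch. The event shape and the prompt wording are assumptions, not the pattern's exact code; the resulting string is what the Lambda function sends to Amazon Bedrock.

```python
def build_prompt(event):
    """Sketch of the error handler's prompt preparation. Step Functions
    passes the Catch result (Error and Cause) to the Lambda function."""
    error = event.get("error", {})
    return (
        "An AWS Step Functions state failed.\n"
        f"Error: {error.get('Error', 'Unknown')}\n"
        f"Cause: {error.get('Cause', 'No cause provided')}\n"
        "Explain why this error occurred, suggest potential solutions, "
        "and list steps to avoid it in the future."
    )
```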

## Tools
<a name="troubleshooting-states-in-aws-step-functions-tools"></a>

**AWS services**
+ [Amazon Bedrock](https://docs.aws.amazon.com/bedrock/latest/userguide/what-is-bedrock.html) is a fully managed service that makes high-performing foundation models (FMs) from leading AI startups and Amazon available for your use through a unified API.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a compute service that helps you run code without needing to provision or manage servers. It runs your code only when needed and scales automatically, so you pay only for the compute time that you use.
+ [Amazon Simple Notification Service (Amazon SNS)](https://docs.aws.amazon.com/sns/latest/dg/welcome.html) helps you coordinate and manage the exchange of messages between publishers and clients, including web servers and email addresses.
+ [AWS Step Functions](https://docs.aws.amazon.com/step-functions/latest/dg/welcome.html) is a serverless orchestration service that helps you combine AWS Lambda functions and other AWS services to build business-critical applications.

## Best practices
<a name="troubleshooting-states-in-aws-step-functions-best-practices"></a>
+ Prompts that are sent to a generative AI model can include sensitive details from your workflow. As a best practice, conceal any private information that might lead to data leakage.
+ Although generative AI can provide valuable insights, critical error-handling decisions should still involve human oversight, especially in production environments.

## Epics
<a name="troubleshooting-states-in-aws-step-functions-epics"></a>

### Create a state machine for your workflow
<a name="create-a-state-machine-for-your-workflow"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a state machine. | To create a state machine that’s appropriate for your workflow, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/troubleshooting-states-in-aws-step-functions.html) | AWS DevOps | 

### Create a Lambda function
<a name="create-a-lam-function"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a Lambda function.  | To create a Lambda function, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/troubleshooting-states-in-aws-step-functions.html) | AWS DevOps | 
| Set up the required logic in the Lambda code. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/troubleshooting-states-in-aws-step-functions.html)<pre>client = boto3.client(<br />        service_name="bedrock-runtime", region_name="selected-region"<br />    )<br /><br />    # Invoke Claude 3 with the text prompt<br />    model_id = "your-model-id" # Select your Model ID, Based on the Model Id, Change the body format<br /><br />    try:<br />        response = client.invoke_model(<br />            modelId=model_id,<br />            body=json.dumps(<br />                {<br />                    "anthropic_version": "bedrock-2023-05-31",<br />                    "max_tokens": 1024,<br />                    "messages": [<br />                        {<br />                            "role": "user",<br />                            "content": [{"type": "text", "text": prompt}],<br />                        }<br />                    ],<br />                }<br />            ),<br />        )<br /></pre>[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/troubleshooting-states-in-aws-step-functions.html) | AWS DevOps | 

### Integrate Step Functions with Lambda
<a name="integrate-sfn-with-lam"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Set up Lambda to handle errors in Step Functions. | To set up Step Functions to handle errors without disrupting the workflow, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/troubleshooting-states-in-aws-step-functions.html) | AWS DevOps | 

## Troubleshooting
<a name="troubleshooting-states-in-aws-step-functions-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| Lambda cannot access the Amazon Bedrock API (Not authorized to perform) | This error occurs when the Lambda role doesn’t have permission to access the Amazon Bedrock API. To resolve this issue, add the `AmazonBedrockFullAccess` policy for the Lambda role. For more information, see [AmazonBedrockFullAccess](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonBedrockFullAccess.html) in the *AWS Managed Policy Reference Guide*. | 
| Lambda timeout error | Depending on the prompt, it might take more than 30 seconds to generate a response and send it back. To resolve this issue, increase the function's timeout setting. For more information, see [Configure Lambda function timeout](https://docs.aws.amazon.com/lambda/latest/dg/configuration-timeout.html) in the *AWS Lambda Developer Guide*. | 

## Related resources
<a name="troubleshooting-states-in-aws-step-functions-resources"></a>
+ [Amazon Bedrock](https://docs.aws.amazon.com/bedrock/latest/userguide/what-is-bedrock.html)
+ [Amazon Bedrock API access](https://docs.aws.amazon.com/bedrock/latest/userguide/getting-started-api.html)
+ [Create your first Lambda function](https://docs.aws.amazon.com/lambda/latest/dg/getting-started.html)
+ [Developing workflows with Step Functions](https://docs.aws.amazon.com/step-functions/latest/dg/developing-workflows.html#development-run-debug)
+ [AWS Step Functions](https://docs.aws.amazon.com/step-functions/latest/dg/welcome.html) 

# More patterns
<a name="serverless-more-patterns-pattern-list"></a>

**Topics**
+ [Access, query, and join Amazon DynamoDB tables using Athena](access-query-and-join-amazon-dynamodb-tables-using-athena.md)
+ [Automate Amazon CodeGuru reviews for AWS CDK Python applications by using GitHub Actions](automate-amazon-codeguru-reviews-for-aws-cdk-python-applications.md)
+ [Automate AWS resource assessment](automate-aws-resource-assessment.md)
+ [Automate deployment of nested applications using AWS SAM](automate-deployment-of-nested-applications-using-aws-sam.md)
+ [Automate AWS Supply Chain data lakes deployment in a multi-repository setup](automate-the-deployment-of-aws-supply-chain-data-lakes.md)
+ [Automate the replication of Amazon RDS instances across AWS accounts](automate-the-replication-of-amazon-rds-instances-across-aws-accounts.md)
+ [Automate the setup of inter-Region peering with AWS Transit Gateway](automate-the-setup-of-inter-region-peering-with-aws-transit-gateway.md)
+ [Automatically archive items to Amazon S3 using DynamoDB TTL](automatically-archive-items-to-amazon-s3-using-dynamodb-ttl.md)
+ [Automatically detect changes and initiate different CodePipeline pipelines for a monorepo in CodeCommit](automatically-detect-changes-and-initiate-different-codepipeline-pipelines-for-a-monorepo-in-codecommit.md)
+ [Build a multi-tenant serverless architecture in Amazon OpenSearch Service](build-a-multi-tenant-serverless-architecture-in-amazon-opensearch-service.md)
+ [Build an advanced mainframe file viewer in the AWS Cloud](build-an-advanced-mainframe-file-viewer-in-the-aws-cloud.md)
+ [Calculate value at risk (VaR) by using AWS services](calculate-value-at-risk-var-by-using-aws-services.md)
+ [Copy AWS Service Catalog products across different AWS accounts and AWS Regions](copy-aws-service-catalog-products-across-different-aws-accounts-and-aws-regions.md)
+ [Create dynamic CI pipelines for Java and Python projects automatically](create-dynamic-ci-pipelines-for-java-and-python-projects-automatically.md)
+ [Decompose monoliths into microservices by using CQRS and event sourcing](decompose-monoliths-into-microservices-by-using-cqrs-and-event-sourcing.md)
+ [Deploy a React-based single-page application to Amazon S3 and CloudFront](deploy-a-react-based-single-page-application-to-amazon-s3-and-cloudfront.md)
+ [Deploy an Amazon API Gateway API on an internal website using private endpoints and an Application Load Balancer](deploy-an-amazon-api-gateway-api-on-an-internal-website-using-private-endpoints-and-an-application-load-balancer.md)
+ [Deploy and manage a serverless data lake on the AWS Cloud by using infrastructure as code](deploy-and-manage-a-serverless-data-lake-on-the-aws-cloud-by-using-infrastructure-as-code.md)
+ [Deploy a RAG use case on AWS by using Terraform and Amazon Bedrock](deploy-rag-use-case-on-aws.md)
+ [Develop a fully automated chat-based assistant by using Amazon Bedrock agents and knowledge bases](develop-a-fully-automated-chat-based-assistant-by-using-amazon-bedrock-agents-and-knowledge-bases.md)
+ [Develop advanced generative AI chat-based assistants by using RAG and ReAct prompting](develop-advanced-generative-ai-chat-based-assistants-by-using-rag-and-react-prompting.md)
+ [Dynamically generate an IAM policy with IAM Access Analyzer by using Step Functions](dynamically-generate-an-iam-policy-with-iam-access-analyzer-by-using-step-functions.md)
+ [Embed Amazon Quick Sight visual components into web applications by using Amazon Cognito and IaC automation](embed-quick-sight-visual-components-into-web-apps-cognito-iac.md)
+ [Ensure Amazon EMR logging to Amazon S3 is enabled at launch](ensure-amazon-emr-logging-to-amazon-s3-is-enabled-at-launch.md)
+ [Estimate the cost of a DynamoDB table for on-demand capacity](estimate-the-cost-of-a-dynamodb-table-for-on-demand-capacity.md)
+ [Generate personalized and re-ranked recommendations using Amazon Personalize](generate-personalized-and-re-ranked-recommendations-using-amazon-personalize.md)
+ [Generate test data using an AWS Glue job and Python](generate-test-data-using-an-aws-glue-job-and-python.md)
+ [Implement SHA1 hashing for PII data when migrating from SQL Server to PostgreSQL](implement-sha1-hashing-for-pii-data-when-migrating-from-sql-server-to-postgresql.md)
+ [Implement the serverless saga pattern by using AWS Step Functions](implement-the-serverless-saga-pattern-by-using-aws-step-functions.md)
+ [Improve operational performance by enabling Amazon DevOps Guru across multiple AWS Regions, accounts, and OUs with the AWS CDK](improve-operational-performance-by-enabling-amazon-devops-guru-across-multiple-aws-regions-accounts-and-ous-with-the-aws-cdk.md)
+ [Launch a CodeBuild project across AWS accounts using Step Functions and a Lambda proxy function](launch-a-codebuild-project-across-aws-accounts-using-step-functions-and-a-lambda-proxy-function.md)
+ [Migrate Apache Cassandra workloads to Amazon Keyspaces by using AWS Glue](migrate-apache-cassandra-workloads-to-amazon-keyspaces-by-using-aws-glue.md)
+ [Monitor use of a shared Amazon Machine Image across multiple AWS accounts](monitor-use-of-a-shared-amazon-machine-image-across-multiple-aws-accounts.md)
+ [Optimize multi-account serverless deployments by using the AWS CDK and GitHub Actions workflows](optimize-multi-account-serverless-deployments.md)
+ [Orchestrate an ETL pipeline with validation, transformation, and partitioning using AWS Step Functions](orchestrate-an-etl-pipeline-with-validation-transformation-and-partitioning-using-aws-step-functions.md)
+ [Query Amazon DynamoDB tables with SQL by using Amazon Athena](query-amazon-dynamodb-tables-sql-amazon-athena.md)
+ [Send custom attributes to Amazon Cognito and inject them into tokens](send-custom-attributes-cognito.md)
+ [Serve static content in an Amazon S3 bucket through a VPC by using Amazon CloudFront](serve-static-content-in-an-amazon-s3-bucket-through-a-vpc-by-using-amazon-cloudfront.md)
+ [Streamline Amazon Lex bot development and deployment by using an automated workflow](streamline-amazon-lex-bot-development-and-deployment-using-an-automated-workflow.md)
+ [Structure a Python project in hexagonal architecture using AWS Lambda](structure-a-python-project-in-hexagonal-architecture-using-aws-lambda.md)
+ [Translate natural language into query DSL for OpenSearch and Elasticsearch queries](translate-natural-language-query-dsl-opensearch-elasticsearch.md)
+ [Unload data from an Amazon Redshift cluster across accounts to Amazon S3](unload-data-from-amazon-redshift-cross-accounts-to-amazon-s3.md)
+ [Coordinate resource dependency and task execution by using the AWS Fargate WaitCondition hook construct](use-the-aws-fargate-waitcondition-hook-construct.md)
+ [Use Amazon Bedrock agents to automate creation of access entry controls in Amazon EKS through text-based prompts](using-amazon-bedrock-agents-to-automate-creation-of-access-entry-controls-in-amazon-eks.md)

# Networking
<a name="networking-pattern-list"></a>

**Topics**
+ [Automate the setup of inter-Region peering with AWS Transit Gateway](automate-the-setup-of-inter-region-peering-with-aws-transit-gateway.md)
+ [Centralize network connectivity using AWS Transit Gateway](centralize-network-connectivity-using-aws-transit-gateway.md)
+ [Configure HTTPS encryption for Oracle JD Edwards EnterpriseOne on Oracle WebLogic by using an Application Load Balancer](configure-https-encryption-for-oracle-jd-edwards-enterpriseone-on-oracle-weblogic-by-using-an-application-load-balancer.md)
+ [Connect to Application Migration Service data and control planes over a private network](connect-to-application-migration-service-data-and-control-planes-over-a-private-network.md)
+ [Create Infoblox objects using AWS CloudFormation custom resources and Amazon SNS](create-infoblox-objects-using-aws-cloudformation-custom-resources-and-amazon-sns.md)
+ [Create a hierarchical, multi-Region IPAM architecture on AWS by using Terraform](multi-region-ipam-architecture.md)
+ [Customize Amazon CloudWatch alerts for AWS Network Firewall](customize-amazon-cloudwatch-alerts-for-aws-network-firewall.md)
+ [Deploy resources in an AWS Wavelength Zone by using Terraform](deploy-resources-wavelength-zone-using-terraform.md)
+ [Migrate DNS records in bulk to an Amazon Route 53 private hosted zone](migrate-dns-records-in-bulk-to-an-amazon-route-53-private-hosted-zone.md)
+ [Modify HTTP headers when you migrate from F5 to an Application Load Balancer on AWS](modify-http-headers-when-you-migrate-from-f5-to-an-application-load-balancer-on-aws.md)
+ [Create a report of Network Access Analyzer findings for inbound internet access in multiple AWS accounts](create-a-report-of-network-access-analyzer-findings-for-inbound-internet-access-in-multiple-aws-accounts.md)
+ [Set up DNS resolution for hybrid networks in a multi-account AWS environment](set-up-dns-resolution-for-hybrid-networks-in-a-multi-account-aws-environment.md)
+ [Verify that ELB load balancers require TLS termination](verify-that-elb-load-balancers-require-tls-termination.md)
+ [View AWS Network Firewall logs and metrics by using Splunk](view-aws-network-firewall-logs-and-metrics-by-using-splunk.md)
+ [More patterns](networking-more-patterns-pattern-list.md)

# Automate the setup of inter-Region peering with AWS Transit Gateway
<a name="automate-the-setup-of-inter-region-peering-with-aws-transit-gateway"></a>

*Ram Kandaswamy, Amazon Web Services*

## Summary
<a name="automate-the-setup-of-inter-region-peering-with-aws-transit-gateway-summary"></a>

[AWS Transit Gateway](https://docs.aws.amazon.com/vpc/latest/tgw/what-is-transit-gateway.html) connects virtual private clouds (VPCs) and on-premises networks through a central hub. Transit Gateway traffic doesn't traverse the public internet, which reduces threat vectors, such as common exploits and distributed denial of service (DDoS) attacks.

If you need to communicate between two or more AWS Regions, you can use inter-Region Transit Gateway peering to establish peering connections between transit gateways in different Regions. However, manually configuring inter-Region peering with Transit Gateway can be complex and time-consuming. This pattern provides guidance for using infrastructure as code (IaC) to set up peering. You can use this approach if you have to repeatedly configure several Regions and AWS accounts for a multi-Region organization setup.

This pattern sets up an [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) stack that includes an AWS Step Functions [workflow](https://docs.aws.amazon.com/step-functions/latest/dg/concepts-statemachines.html), AWS Lambda [functions](https://docs.aws.amazon.com/lambda/latest/dg/concepts-basics.html#gettingstarted-concepts-function), AWS Identity and Access Management (IAM) [roles](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html), and [log groups](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Working-with-log-groups-and-streams.html) in Amazon CloudWatch Logs. You then run the Step Functions workflow to create the inter-Region peering connection for your transit gateways.

## Prerequisites and limitations
<a name="automate-the-setup-of-inter-region-peering-with-aws-transit-gateway-prereqs"></a>

**Prerequisites**
+ An active AWS account.
+ An IDE that has code-generation capability, such as [Kiro](https://kiro.dev/#what-is-kiro).
+ An Amazon Simple Storage Service (Amazon S3) bucket and permissions to upload objects to it.
+ Transit gateways created in the requesting and accepting Regions.
+ VPCs created in the requesting and accepting Regions. Tag the VPCs with an `addToTransitGateway` key with a value of `true`.
+ [Security groups](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-security-groups.html) configured for your VPCs according to your requirements.
+ [Network access control lists (ACLs)](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-network-acls.html) configured for your VPCs according to your requirements.
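
As a quick sanity check for the tagging prerequisite, the following hedged boto3 sketch lists the VPCs that carry the `addToTransitGateway=true` tag. The EC2 client is passed in (create it with `boto3.client("ec2", region_name=...)`); the helper names are illustrative:

```python
def tag_filter(key: str = "addToTransitGateway", value: str = "true") -> list:
    """Build an EC2 describe filter that matches a single tag key/value pair."""
    return [{"Name": f"tag:{key}", "Values": [value]}]


def vpcs_to_attach(ec2_client) -> list:
    """Return the IDs of VPCs tagged for transit gateway attachment."""
    response = ec2_client.describe_vpcs(Filters=tag_filter())
    return [vpc["VpcId"] for vpc in response["Vpcs"]]
```

Run this in both the requesting and accepting Regions to confirm the tags are in place before deploying the stack.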

**Limitations**
+ Only some AWS Regions support inter-Region peering. For a full list of Regions that support inter-Region peering, see the [AWS Transit Gateway FAQs](https://aws.amazon.com/transit-gateway/faqs/).

## Architecture
<a name="automate-the-setup-of-inter-region-peering-with-aws-transit-gateway-architecture"></a>

The agentic AI development approach described in this pattern involves the following steps:

1. **Define the automation prompt** – Kiro receives a natural language prompt that details the peering requirements.

1. **Generate automation script** – Kiro generates the CloudFormation and Lambda scripts based on the provided prompt.

1. **Deploy the stack** – Kiro uses CloudFormation to deploy the required resources.

1. **Set up peering** – Kiro runs the Step Functions workflow, which calls Lambda functions to create peering connections and modify route tables.

The following diagram shows the Step Functions workflow:

![\[Step Functions workflow to call Lambda function to modify route tables for transit gateway peering.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/b678bb87-c7b9-4f7b-b26e-eaac650e5d1b/images/2f235f47-5d68-492c-b954-7dc170939cae.png)


 

The workflow contains the following steps:

1. The Step Functions workflow calls the Lambda function for the Transit Gateway peering. 

1. The workflow waits for one minute.

1. The workflow retrieves the peering status and sends it to the condition block. The block is responsible for the looping. 

1. If the success condition is not met, the workflow returns to the wait stage. 

1. If the success condition is met, a Lambda function modifies the route tables. 

1. The Step Functions workflow ends.
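
The wait-and-poll loop in steps 2–4 maps naturally to `Wait`, `Task`, and `Choice` states in the state machine definition. The following is a minimal sketch of that loop as a Python dict; the state names and the `available` status value are illustrative, not taken from this pattern's generated template:

```python
# Hypothetical fragment of the state machine: wait, poll the peering
# status, then either proceed or loop back to the timer.
polling_states = {
    "WaitOneMinute": {
        "Type": "Wait",
        "Seconds": 60,
        "Next": "GetPeeringStatus",
    },
    "GetPeeringStatus": {
        "Type": "Task",
        "Resource": "arn:aws:states:::lambda:invoke",
        "Next": "PeeringAvailable?",
    },
    "PeeringAvailable?": {
        "Type": "Choice",
        "Choices": [
            {
                # Success condition: the attachment is ready, so modify routes.
                "Variable": "$.Payload.status",
                "StringEquals": "available",
                "Next": "ModifyRouteTables",
            }
        ],
        # Not ready yet: loop back to the timer stage.
        "Default": "WaitOneMinute",
    },
}
```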

## Tools
<a name="automate-the-setup-of-inter-region-peering-with-aws-transit-gateway-tools"></a>
+ [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) helps you set up AWS resources, provision them quickly and consistently, and manage them throughout their lifecycle across AWS accounts and AWS Regions. 
+ [Amazon CloudWatch Logs](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html) helps you centralize the logs from all your systems, applications, and AWS services so you can monitor them and archive them securely.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.
+ [Kiro](https://kiro.dev/#what-is-kiro) is an agentic AI development tool that helps you build production-ready applications through spec-driven development. 
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a compute service that helps you run code without needing to provision or manage servers. It runs your code only when needed and scales automatically, so you pay only for the compute time that you use.
+ [AWS Step Functions](https://docs.aws.amazon.com/step-functions/latest/dg/welcome.html) is a serverless orchestration service that helps you combine AWS Lambda functions and other AWS services to build business-critical applications.  

## Epics
<a name="automate-the-setup-of-inter-region-peering-with-aws-transit-gateway-epics"></a>

### Generate Lambda and Step Functions code
<a name="generate-lam-and-sfn-code"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Fill prompt placeholders with specific details. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-the-setup-of-inter-region-peering-with-aws-transit-gateway.html)Alternatively, you can provide this as an inline prompt that references the same variables without attaching the file for context. | General AWS, Network administrator | 
| Create a Lambda function that creates the peering attachments. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-the-setup-of-inter-region-peering-with-aws-transit-gateway.html) | General AWS, Network administrator, Prompt engineering | 
| Create a Lambda function that polls the peering attachment status. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-the-setup-of-inter-region-peering-with-aws-transit-gateway.html) | General AWS, Network administrator, Prompt engineering | 
| Create a Lambda function that adds static routes to both Regions. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-the-setup-of-inter-region-peering-with-aws-transit-gateway.html) | General AWS, Network administrator | 
| Create the CloudFormation template. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-the-setup-of-inter-region-peering-with-aws-transit-gateway.html) | AWS DevOps, General AWS, Prompt engineering | 
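
The core of the first two Lambda functions reduces to two EC2 API calls: request the peering attachment, then poll its state. A hedged sketch follows; the EC2 client is passed in and all IDs are placeholders, so this is an illustration of the calls involved, not the generated code:

```python
def request_peering(ec2_client, tgw_id, peer_tgw_id, peer_account_id, peer_region):
    """Request an inter-Region peering attachment from the requesting Region."""
    response = ec2_client.create_transit_gateway_peering_attachment(
        TransitGatewayId=tgw_id,
        PeerTransitGatewayId=peer_tgw_id,
        PeerAccountId=peer_account_id,
        PeerRegion=peer_region,
    )
    return response["TransitGatewayPeeringAttachment"]["TransitGatewayAttachmentId"]


def peering_state(ec2_client, attachment_id):
    """Return the current state of a peering attachment (used by the poller)."""
    response = ec2_client.describe_transit_gateway_peering_attachments(
        TransitGatewayAttachmentIds=[attachment_id]
    )
    return response["TransitGatewayPeeringAttachments"][0]["State"]
```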

### Deploy the AWS resources
<a name="deploy-the-aws-resources"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Deploy the CloudFormation stack by using prompts. | Enter the following prompt:<pre>Using the outputs from Prompts 1-4, package and deploy the full stack. Steps:<br /><br />1. For each of the three Python files from Prompts 1-3, create a zip named after the file (e.g. peer-transit-gateway.zip that contains peer-transit-gateway.py).<br />2. Upload all three zips to S3_BUCKET.<br />3. Deploy the CloudFormation template from Prompt 4 to ACTIVE_REGION with S3BucketName=S3_BUCKET and CAPABILITY_NAMED_IAM.<br />4. Initiate the Step Function from the deployed stack.<br /><br />Zip file names must match the S3Key values in the template exactly.</pre> | AWS DevOps, Cloud administrator, General AWS, Prompt engineering | 
| Validate deployment. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-the-setup-of-inter-region-peering-with-aws-transit-gateway.html) | General AWS | 

## Related resources
<a name="automate-the-setup-of-inter-region-peering-with-aws-transit-gateway-resources"></a>
+ [Starting state machine executions in Step Functions](https://docs.aws.amazon.com/step-functions/latest/dg/concepts-state-machine-executions.html)
+ [Transit Gateway peering attachments](https://docs.aws.amazon.com/vpc/latest/tgw/tgw-peering.html)
+ [Interconnecting VPCs across AWS Regions using AWS Transit Gateway](https://www.youtube.com/watch?v=cj1rQqLxXU8) (video)

# Centralize network connectivity using AWS Transit Gateway
<a name="centralize-network-connectivity-using-aws-transit-gateway"></a>

*Mydhili Palagummi and Nikhil Marrapu, Amazon Web Services*

## Summary
<a name="centralize-network-connectivity-using-aws-transit-gateway-summary"></a>

This pattern describes the simplest configuration in which AWS Transit Gateway can be used to connect an on-premises network to virtual private clouds (VPCs) in multiple AWS accounts within an AWS Region. Using this setup, you can establish a hybrid network that connects multiple VPC networks in a Region and an on-premises network. This is accomplished by using a transit gateway and a virtual private network (VPN) connection to the on-premises network. 

## Prerequisites and limitations
<a name="centralize-network-connectivity-using-aws-transit-gateway-prereqs"></a>

**Prerequisites**
+ An account for hosting network services, managed as a member account of an organization in AWS Organizations
+ VPCs in multiple AWS accounts, without overlapping Classless Inter-Domain Routing (CIDR) blocks

**Limitations**

This pattern does not support the isolation of traffic between certain VPCs or the on-premises network. All the networks attached to the transit gateway will be able to reach each other. To isolate traffic, you need to use custom route tables on the transit gateway. This pattern only connects the VPCs and on-premises network by using a single default transit gateway route table, which is the simplest configuration.

## Architecture
<a name="centralize-network-connectivity-using-aws-transit-gateway-architecture"></a>

**Target technology stack**
+ AWS Transit Gateway
+ AWS Site-to-Site VPN
+ VPC
+ AWS Resource Access Manager (AWS RAM)

 

**Target architecture**

![\[AWS Transit Gateway connects on-premises network to VPCs in multiple AWS accounts within a Region.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/e23f5faf-e75e-42a3-80e3-142516a2db4e/images/1ecf7e04-bbf8-4304-88c8-6aceb7271d1e.jpeg)


## Tools
<a name="centralize-network-connectivity-using-aws-transit-gateway-tools"></a>

**AWS services**
+ [AWS Resource Access Manager (AWS RAM)](https://docs.aws.amazon.com/ram/latest/userguide/what-is.html) helps you securely share your resources across your AWS accounts, organizational units, or your entire organization from AWS Organizations.
+ [AWS Transit Gateway](https://docs.aws.amazon.com/vpc/latest/tgw/what-is-transit-gateway.html) is a central hub that connects virtual private clouds (VPCs) and on-premises networks.

## Epics
<a name="centralize-network-connectivity-using-aws-transit-gateway-epics"></a>

### Create a transit gateway in the network services account
<a name="create-a-transit-gateway-in-the-network-services-account"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a transit gateway. | In the AWS account where you want to host network services, create a transit gateway in the target AWS Region. For instructions, see [Create a transit gateway](https://docs.aws.amazon.com/vpc/latest/tgw/tgw-transit-gateways.html#create-tgw). Note the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/centralize-network-connectivity-using-aws-transit-gateway.html) | Network administrator | 

### Connect the transit gateway to your on-premises network
<a name="connect-the-transit-gateway-to-your-on-premises-network"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Set up a customer gateway device for the VPN connection. | The customer gateway device resides on the on-premises side of the Site-to-Site VPN connection between the transit gateway and your on-premises network. For more information, see [Your customer gateway device](https://docs.aws.amazon.com/vpn/latest/s2svpn/your-cgw.html) in the AWS Site-to-Site VPN documentation. Identify or launch a supported on-premises customer device and note its public IP address. VPN configuration is completed later in this epic.  | Network administrator | 
| In the network services account, create a VPN attachment to the transit gateway. | To set up a connection, create a VPN attachment for the transit gateway. For instructions, see [Transit gateway VPN attachments](https://docs.aws.amazon.com/vpc/latest/tgw/tgw-vpn-attachments.html). | Network administrator | 
| Configure the VPN on the customer gateway device in your on-premises network.  | Download the configuration file for the Site-to-Site VPN connection associated with the transit gateway and configure VPN settings on the customer gateway device. For instructions, see [Download the configuration file](https://docs.aws.amazon.com/vpn/latest/s2svpn/SetUpVPNConnections.html#vpn-download-config). | Network administrator | 

### Share the transit gateway in the network services account to other AWS accounts or your organization
<a name="share-the-transit-gateway-in-the-network-services-account-to-other-aws-accounts-or-your-organization"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| In the AWS Organizations management account, turn on sharing. | To share the transit gateway with your organization or with certain organizational units, turn on sharing in AWS Organizations. Otherwise, you would need to share the transit gateway with each account individually. For instructions, see [Enable resource sharing within AWS Organizations](https://docs.aws.amazon.com/ram/latest/userguide/getting-started-sharing.html#getting-started-sharing-orgs). | AWS systems administrator | 
| Create the transit gateway resource share in the network services account. | To allow VPCs in other AWS accounts within your organization to connect to the transit gateway, in the network services account, use the AWS RAM console to share the transit gateway resource. For instructions, see [Create a resource share](https://docs.aws.amazon.com/ram/latest/userguide/getting-started-sharing.html#getting-started-sharing-create). | AWS systems administrator | 
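
The resource share from the second task can also be scripted. The following hedged boto3 sketch shows the single AWS RAM call involved; note that the RAM client uses lowerCamelCase parameter names, and the ARNs here are placeholders. The principal can be an organization ARN, an OU ARN, or an account ID:

```python
def share_transit_gateway(ram_client, share_name, tgw_arn, principal):
    """Share a transit gateway with an organization, OU, or account via AWS RAM."""
    response = ram_client.create_resource_share(
        name=share_name,
        resourceArns=[tgw_arn],
        principals=[principal],  # e.g., an organization ARN or an account ID
        allowExternalPrincipals=False,  # keep sharing inside the organization
    )
    return response["resourceShare"]["resourceShareArn"]
```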

### Connect VPCs to the transit gateway
<a name="connect-vpcs-to-the-transit-gateway"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create VPC attachments in individual accounts.  | In the accounts to which the transit gateway has been shared, create transit gateway VPC attachments. For instructions, see [Create a transit gateway attachment to a VPC](https://docs.aws.amazon.com/vpc/latest/tgw/tgw-vpc-attachments.html#create-vpc-attachment). | Network administrator | 
| Accept the VPC attachment requests. | In the network services account, accept the transit gateway VPC attachment requests. For instructions, see [Accept a shared attachment](https://docs.aws.amazon.com/vpc/latest/tgw/tgw-transit-gateways.html#tgw-accept-shared-attachment). | Network administrator | 
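
The two tasks above, creating the attachment in a member account and accepting it in the network services account, correspond to one EC2 call on each side. A hedged sketch, assuming a client per account and placeholder IDs:

```python
def attach_vpc(member_ec2, tgw_id, vpc_id, subnet_ids):
    """Member account: request a VPC attachment to the shared transit gateway."""
    response = member_ec2.create_transit_gateway_vpc_attachment(
        TransitGatewayId=tgw_id, VpcId=vpc_id, SubnetIds=subnet_ids
    )
    return response["TransitGatewayVpcAttachment"]["TransitGatewayAttachmentId"]


def accept_attachment(network_ec2, attachment_id):
    """Network services account: accept the pending attachment request."""
    response = network_ec2.accept_transit_gateway_vpc_attachment(
        TransitGatewayAttachmentId=attachment_id
    )
    return response["TransitGatewayVpcAttachment"]["State"]
```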

### Configure routing
<a name="configure-routing"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Configure routes in individual account VPCs. | In each individual account VPC, add routes to the on-premises network and to other VPC networks, using the transit gateway as the target. For instructions, see [Add and remove routes from a route table](https://docs.aws.amazon.com/vpc/latest/userguide/WorkWithRouteTables.html#AddRemoveRoutes). | Network administrator | 
| Configure routes in the transit gateway route table. | Routes from VPCs and the VPN connection should be propagated and should appear in the transit gateway default route table. If needed, create any static routes (one example is static routes for the static VPN connection) in the transit gateway default route table. For instructions, see [Create a static route](https://docs.aws.amazon.com/vpc/latest/tgw/tgw-route-tables.html#tgw-create-static-route). | Network administrator | 
| Add security group and network access control list (ACL) rules. | For the EC2 instances and other resources in the VPC, ensure that the security group rules and the network ACL rules allow traffic between VPCs as well as the on-premises network. For instructions, see [Control traffic to resources using security groups](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html#AddRemoveRules) and [Add and delete rules from an ACL](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-network-acls.html#Rules). | Network administrator | 
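
Adding the VPC-side route from the first task above is a single EC2 call. A hedged sketch, with the route table ID, destination CIDR, and transit gateway ID as placeholders:

```python
def route_via_transit_gateway(ec2_client, route_table_id, dest_cidr, tgw_id):
    """Add a VPC route that sends dest_cidr traffic to the transit gateway."""
    response = ec2_client.create_route(
        RouteTableId=route_table_id,
        DestinationCidrBlock=dest_cidr,  # e.g., the on-premises CIDR block
        TransitGatewayId=tgw_id,
    )
    return response["Return"]  # True if the route was created
```

Repeat the call in each account's VPC route tables for the on-premises CIDR and for each peer VPC CIDR that should be reachable.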

### Test connectivity
<a name="test-connectivity"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Test connectivity between VPCs. | Ensure that network ACL rules and security group rules allow Internet Control Message Protocol (ICMP) traffic, and then ping from instances in one VPC to instances in another VPC that is also connected to the transit gateway. | Network administrator | 
| Test connectivity between VPCs and the on-premises network. | Ensure that network ACL rules, security group rules, and any firewalls allow ICMP traffic, and then ping between the on-premises network and the EC2 instances in the VPCs. Network communication must be initiated from the on-premises network first to bring the VPN connection to `UP` status. | Network administrator | 

## Related resources
<a name="centralize-network-connectivity-using-aws-transit-gateway-resources"></a>
+ [Building a scalable and secure multi-VPC AWS network infrastructure](https://d1.awsstatic.com/whitepapers/building-a-scalable-and-secure-multi-vpc-aws-network-infrastructure.pdf) (AWS whitepaper)
+ [Working with shared resources](https://docs.aws.amazon.com/ram/latest/userguide/working-with.html) (AWS RAM documentation)
+ [Working with transit gateways](https://docs.aws.amazon.com/vpc/latest/tgw/working-with-transit-gateways.html) (AWS Transit Gateway documentation)

# Configure HTTPS encryption for Oracle JD Edwards EnterpriseOne on Oracle WebLogic by using an Application Load Balancer
<a name="configure-https-encryption-for-oracle-jd-edwards-enterpriseone-on-oracle-weblogic-by-using-an-application-load-balancer"></a>

*Thanigaivel Thirumalai, Amazon Web Services*

## Summary
<a name="configure-https-encryption-for-oracle-jd-edwards-enterpriseone-on-oracle-weblogic-by-using-an-application-load-balancer-summary"></a>

This pattern explains how to configure HTTPS encryption for SSL offloading in Oracle JD Edwards EnterpriseOne on Oracle WebLogic workloads. This approach encrypts traffic between the user’s browser and a load balancer to remove the encryption burden from the EnterpriseOne servers.

Many users scale the EnterpriseOne Java Virtual Machine (JVM) tier horizontally by using an [Application Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html). The load balancer serves as the single point of contact for clients and distributes incoming traffic across multiple JVMs. Optionally, the load balancer can distribute the traffic across multiple Availability Zones to increase the availability of EnterpriseOne.

The process described in this pattern configures encryption between the browser and the load balancer instead of encrypting the traffic between the load balancer and the EnterpriseOne JVMs. This approach is referred to as *SSL offloading*. Offloading the SSL decryption process from the EnterpriseOne web or application server to the Application Load Balancer reduces the burden on the application side. After SSL termination at the load balancer, the unencrypted traffic is routed to the application on AWS.
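As a sketch of the SSL-offloading configuration, the following snippet builds the parameters that could be passed to the boto3 `elbv2.create_listener` call; all ARNs are placeholders you would replace with your own resources.

```python
# Placeholders throughout; the returned dict could be passed to the boto3
# elbv2.create_listener call.

def build_https_listener_params(load_balancer_arn, certificate_arn, target_group_arn):
    """HTTPS listener that terminates TLS (SSL offloading) and forwards to HTTP targets."""
    return {
        "LoadBalancerArn": load_balancer_arn,
        "Protocol": "HTTPS",   # TLS terminates at the load balancer
        "Port": 443,
        "Certificates": [{"CertificateArn": certificate_arn}],  # e.g. an ACM certificate
        "DefaultActions": [{"Type": "forward", "TargetGroupArn": target_group_arn}],
    }
```

Because the listener terminates TLS, the target group that points at the EnterpriseOne JVMs can use plain HTTP, which is the offloading described above.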

[Oracle JD Edwards EnterpriseOne](https://www.oracle.com/applications/jd-edwards-enterpriseone/) is an enterprise resource planning (ERP) solution for organizations that manufacture, construct, distribute, service, or manage products or physical assets. JD Edwards EnterpriseOne supports various hardware, operating systems, and database platforms.

## Prerequisites and limitations
<a name="configure-https-encryption-for-oracle-jd-edwards-enterpriseone-on-oracle-weblogic-by-using-an-application-load-balancer-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ An AWS Identity and Access Management (IAM) role that has permissions to make AWS service calls and manage AWS resources
+ An SSL certificate

**Product versions**
+ This pattern was tested with Oracle WebLogic 12c, but you can also use other versions.

## Architecture
<a name="configure-https-encryption-for-oracle-jd-edwards-enterpriseone-on-oracle-weblogic-by-using-an-application-load-balancer-architecture"></a>

There are multiple approaches to perform SSL offloading. This pattern uses an Application Load Balancer and Oracle HTTP Server (OHS), as illustrated in the following diagram.

![\[SSL offloading with a load balancer and OHS\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/c62b976b-31e4-42ca-b7e8-13f7c9d9a187/images/2ae2d0eb-b9f3-41f8-ad86-9af3aade7072.png)


The following diagram shows the JD Edwards EnterpriseOne, Application Load Balancer, and Java Application Server (JAS) JVM layout.

![\[EnterpriseOne, load balancer, and JAS JVM layout\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/c62b976b-31e4-42ca-b7e8-13f7c9d9a187/images/72ea35b0-2907-48b3-aeb7-0c5d9a3b831b.png)


## Tools
<a name="configure-https-encryption-for-oracle-jd-edwards-enterpriseone-on-oracle-weblogic-by-using-an-application-load-balancer-tools"></a>

**AWS services**
+ [Application Load Balancers](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/) distribute incoming application traffic across multiple targets, such as Amazon Elastic Compute Cloud (Amazon EC2) instances, in multiple Availability Zones.
+ [AWS Certificate Manager (ACM)](https://docs.aws.amazon.com/acm/latest/userguide/acm-overview.html) helps you create, store, and renew public and private SSL/TLS X.509 certificates and keys that protect your AWS websites and applications.
+ [Amazon Route 53](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/Welcome.html) is a highly available and scalable DNS web service.

## Best practices
<a name="configure-https-encryption-for-oracle-jd-edwards-enterpriseone-on-oracle-weblogic-by-using-an-application-load-balancer-best-practices"></a>
+ For ACM best practices, see the [ACM documentation](https://docs.aws.amazon.com/acm/latest/userguide/acm-bestpractices.html).

## Epics
<a name="configure-https-encryption-for-oracle-jd-edwards-enterpriseone-on-oracle-weblogic-by-using-an-application-load-balancer-epics"></a>

### Set up WebLogic and OHS
<a name="set-up-weblogic-and-ohs"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Install and configure Oracle components. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/configure-https-encryption-for-oracle-jd-edwards-enterpriseone-on-oracle-weblogic-by-using-an-application-load-balancer.html) | JDE CNC, WebLogic administrator | 
| Enable the WebLogic plugin at the domain level. | The WebLogic plugin is required for load balancing. To enable the plugin:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/configure-https-encryption-for-oracle-jd-edwards-enterpriseone-on-oracle-weblogic-by-using-an-application-load-balancer.html) | JDE CNC, WebLogic administrator | 
| Edit the configuration file. | The `mod_wl_ohs.conf` file configures proxy requests from OHS to WebLogic.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/configure-https-encryption-for-oracle-jd-edwards-enterpriseone-on-oracle-weblogic-by-using-an-application-load-balancer.html)<pre><VirtualHost *:8000><br /><Location /jde><br />WLSRequest On<br />SetHandler weblogic-handler<br />WebLogicHost localhost<br />WebLogicPort 8000<br />WLProxySSL On<br />WLProxySSLPassThrough On<br /></Location><br /></VirtualHost></pre> | JDE CNC, WebLogic administrator | 
| Start OHS by using the Enterprise Manager. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/configure-https-encryption-for-oracle-jd-edwards-enterpriseone-on-oracle-weblogic-by-using-an-application-load-balancer.html) | JDE CNC, WebLogic administrator | 

### Configure the Application Load Balancer
<a name="configure-the-application-load-balancer"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Set up a target group. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/configure-https-encryption-for-oracle-jd-edwards-enterpriseone-on-oracle-weblogic-by-using-an-application-load-balancer.html)For detailed instructions, see the [Elastic Load Balancing documentation](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/create-target-group.html). | AWS administrator | 
| Set up the load balancer. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/configure-https-encryption-for-oracle-jd-edwards-enterpriseone-on-oracle-weblogic-by-using-an-application-load-balancer.html) | AWS administrator | 
| Add a Route 53 (DNS) record. | (Optional) You can add an Amazon Route 53 DNS record for the subdomain. This record would point to your Application Load Balancer. For instructions, see the [Route 53 documentation](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resource-record-sets-creating.html). | AWS administrator | 
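The optional Route 53 step can be sketched as follows; the hosted zone ID, record name, and load balancer values are hypothetical, and the change batch could be passed to the boto3 `route53.change_resource_record_sets` call.

```python
# All values hypothetical; replace with your own hosted zone and load balancer.

def build_alias_record_change(hosted_zone_id, record_name, alb_dns_name, alb_hosted_zone_id):
    """Alias A record pointing a subdomain at the Application Load Balancer."""
    return {
        "HostedZoneId": hosted_zone_id,  # your Route 53 hosted zone
        "ChangeBatch": {
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": record_name,
                    "Type": "A",
                    "AliasTarget": {
                        # Canonical hosted zone ID of the load balancer itself,
                        # not of your Route 53 zone.
                        "HostedZoneId": alb_hosted_zone_id,
                        "DNSName": alb_dns_name,
                        "EvaluateTargetHealth": False,
                    },
                },
            }]
        },
    }
```

An alias record is used instead of a CNAME so that the zone apex could also point at the load balancer if needed.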

## Troubleshooting
<a name="configure-https-encryption-for-oracle-jd-edwards-enterpriseone-on-oracle-weblogic-by-using-an-application-load-balancer-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| HTTP server doesn’t appear. | If **HTTP Server** doesn’t appear in the **Target Navigation** list on the Enterprise Manager console, follow these steps:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/configure-https-encryption-for-oracle-jd-edwards-enterpriseone-on-oracle-weblogic-by-using-an-application-load-balancer.html) When the instance has been created and the changes have been activated, the HTTP server appears in the **Target Navigation** panel. | 

## Related resources
<a name="configure-https-encryption-for-oracle-jd-edwards-enterpriseone-on-oracle-weblogic-by-using-an-application-load-balancer-resources"></a>

**AWS documentation**
+ [Application Load Balancers](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html)
+ [Working with public hosted zones](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/AboutHZWorkingWith.html)
+ [Working with private hosted zones](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/hosted-zones-private.html)

**Oracle documentation:**
+ [Overview of Oracle WebLogic Server Proxy Plug-In](https://docs.oracle.com/middleware/1221/webtier/develop-plugin/overview.htm#PLGWL391)
+ [Installing WebLogic Server using the Infrastructure Installer](https://www.oracle.com/webfolder/technetwork/tutorials/obe/fmw/wls/12c/12_2_1/02-01-004-InstallWLSInfrastructure/installweblogicinfrastructure.html)
+ [Installing and Configuring Oracle HTTP Server ](https://docs.oracle.com/middleware/1221/core/install-ohs/toc.htm)

# Connect to Application Migration Service data and control planes over a private network
<a name="connect-to-application-migration-service-data-and-control-planes-over-a-private-network"></a>

*Dipin Jain and Mike Kuznetsov, Amazon Web Services*

## Summary
<a name="connect-to-application-migration-service-data-and-control-planes-over-a-private-network-summary"></a>

This pattern explains how you can connect to an AWS Application Migration Service data plane and control plane on a private, secured network by using interface VPC endpoints.

Application Migration Service is a highly automated lift-and-shift (rehost) solution that simplifies, expedites, and reduces the cost of migrating applications to AWS. It enables companies to rehost a large number of physical, virtual, or cloud servers without compatibility issues, performance disruption, or long cutover windows. Application Migration Service is available from the AWS Management Console, which enables seamless integration with other AWS services, such as AWS CloudTrail, Amazon CloudWatch, and AWS Identity and Access Management (IAM).

You can connect from a source data center to a data plane—that is, to a subnet that serves as a staging area for data replication in the destination VPC—over a private connection by using Site-to-Site VPN services, AWS Direct Connect, or VPC peering in Application Migration Service. You can also use [interface VPC endpoints](https://docs.aws.amazon.com/vpc/latest/privatelink/vpce-interface.html) powered by AWS PrivateLink to connect to an Application Migration Service control plane over a private network. 

## Prerequisites and limitations
<a name="connect-to-application-migration-service-data-and-control-planes-over-a-private-network-prereqs"></a>

**Prerequisites**
+ **Staging area subnet** – Before you set up Application Migration Service, create a subnet to be used as a staging area for data replicated from your source servers to AWS (that is, a data plane). You must specify this subnet in the [Replication Settings template](https://docs.aws.amazon.com/mgn/latest/ug/template-vs-server.html) when you first access the Application Migration Service console. You can override this subnet for specific source servers in the Replication Settings template. Although you can use an existing subnet in your AWS account, we recommend that you create a new, dedicated subnet for this purpose.
+ **Network requirements** – The replication servers that are launched by Application Migration Service in your staging area subnet have to be able to send data to the Application Migration Service API endpoint at `https://mgn.<region>.amazonaws.com/`, where `<region>` is the code for the AWS Region you are replicating to (for example, `https://mgn.us-east-1.amazonaws.com`). Amazon Simple Storage Service (Amazon S3) service URLs are required for downloading Application Migration Service software.
  + The AWS Replication Agent installer should have access to the Amazon S3 bucket URL of the AWS Region you are using with Application Migration Service.
  + The staging area subnet should have access to Amazon S3.
  + The source servers on which the AWS Replication Agent is installed must be able to send data to the replication servers in the staging area subnet and to the Application Migration Service API endpoint at `https://mgn.<region>.amazonaws.com/`.

The following table lists the required ports.


| Source | Destination | Port | For more information, see | 
| --- | --- | --- | --- | 
| Source data center | Amazon S3 service URLs | 443 (TCP) | [Communication over TCP port 443](https://docs.aws.amazon.com/mgn/latest/ug/Network-Requirements.html#TCP-443) | 
| Source data center | AWS Region-specific console address for Application Migration Service | 443 (TCP) | [Communication between the source servers and Application Migration Service over TCP port 443](https://docs.aws.amazon.com/mgn/latest/ug/Network-Requirements.html#Source-Manager-TCP-443) | 
| Source data center | Staging area subnet | 1500 (TCP) | [Communication between the source servers and the staging area subnet over TCP port 1500](https://docs.aws.amazon.com/mgn/latest/ug/Network-Requirements.html#Communication-TCP-1500) | 
| Staging area subnet | AWS Region-specific console address for Application Migration Service | 443 (TCP) | [Communication between the staging area subnet and Application Migration Service over TCP port 443](https://docs.aws.amazon.com/mgn/latest/ug/Network-Requirements.html#Communication-TCP-443-Staging) | 
| Staging area subnet | Amazon S3 service URLs | 443 (TCP) | [Communication over TCP port 443](https://docs.aws.amazon.com/mgn/latest/ug/Network-Requirements.html#TCP-443) | 
| Staging area subnet | Amazon Elastic Compute Cloud (Amazon EC2) endpoint of the subnet’s AWS Region | 443 (TCP) | [Communication over TCP port 443](https://docs.aws.amazon.com/mgn/latest/ug/Network-Requirements.html#TCP-443) | 
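The endpoint URL pattern from the prerequisites and the port matrix above can be captured in a small helper for firewall reviews. This is a sketch only: the helper names and the abbreviated destination labels are mine, not part of the service.

```python
# Sketch: encodes the connectivity matrix above so outbound firewall rules can
# be checked programmatically. Destination labels are abbreviated descriptions,
# not real hostnames.

def mgn_api_endpoint(region: str) -> str:
    """Regional Application Migration Service API endpoint, per the prerequisites."""
    return f"https://mgn.{region}.amazonaws.com"

# (source, destination, TCP port) rows from the table above.
REQUIRED_FLOWS = [
    ("source data center", "Amazon S3 service URLs", 443),
    ("source data center", "Application Migration Service Regional endpoint", 443),
    ("source data center", "staging area subnet", 1500),
    ("staging area subnet", "Application Migration Service Regional endpoint", 443),
    ("staging area subnet", "Amazon S3 service URLs", 443),
    ("staging area subnet", "Amazon EC2 Regional endpoint", 443),
]

def ports_from(source: str) -> list:
    """Distinct outbound TCP ports a given source must be allowed to reach."""
    return sorted({port for src, _dst, port in REQUIRED_FLOWS if src == source})
```

For example, `ports_from("source data center")` returns `[443, 1500]`, matching the three source data center rows in the table.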

**Limitations**

Application Migration Service isn’t currently available in all AWS Regions, and it doesn’t support all operating systems.
+ [Supported AWS Regions](https://docs.aws.amazon.com/mgn/latest/ug/supported-regions.html)
+ [Supported operating systems](https://docs.aws.amazon.com/mgn/latest/ug/Supported-Operating-Systems.html)

## Architecture
<a name="connect-to-application-migration-service-data-and-control-planes-over-a-private-network-architecture"></a>

The following diagram illustrates the network architecture for a typical migration. For more information about this architecture, see the [Application Migration Service documentation](https://docs.aws.amazon.com/mgn/latest/ug/Network-Settings-Video.html) and the [Application Migration Service service architecture and network architecture video](https://youtu.be/ao8geVzmmRo).

![\[Network architecture for Application Migration Service for a typical migration\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/21346c0f-0643-4f4f-b21f-fdfe24fc6a8f/images/546598b2-8026-4849-a441-eaa2bc2bf6bb.png)


The following detailed view shows the configuration of interface VPC endpoints in the staging area VPC to connect Amazon S3 and Application Migration Service.

![\[Network architecture for Application Migration Service for a typical migration - detailed view\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/21346c0f-0643-4f4f-b21f-fdfe24fc6a8f/images/bd0dfd42-4ab0-466f-b696-804dedcf4513.png)


## Tools
<a name="connect-to-application-migration-service-data-and-control-planes-over-a-private-network-tools"></a>
+ [AWS Application Migration Service](https://docs.aws.amazon.com/mgn/latest/ug/what-is-application-migration-service.html) simplifies, expedites, and reduces the cost of rehosting applications on AWS.
+ [Interface VPC endpoints](https://docs.aws.amazon.com/vpc/latest/privatelink/vpce-interface.html) enable you to connect to services that are powered by AWS PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Instances in your VPC do not require public IP addresses to communicate with resources in the service. Traffic between your VPC and the other service does not leave the Amazon network.

## Epics
<a name="connect-to-application-migration-service-data-and-control-planes-over-a-private-network-epics"></a>

### Create endpoints for Application Migration Service, Amazon EC2, and Amazon S3
<a name="create-endpoints-for-mgn-ec2-and-s3"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Configure the interface endpoint for Application Migration Service. | The source data center and staging area VPC connect privately to the Application Migration Service control plane through the interface endpoint that you create in the target staging area VPC. To create the endpoint:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/connect-to-application-migration-service-data-and-control-planes-over-a-private-network.html) For more information, see [Access an AWS service using an interface VPC endpoint](https://docs.aws.amazon.com/vpc/latest/privatelink/vpce-interface.html) in the Amazon VPC documentation. | Migration lead | 
| Configure the interface endpoint for Amazon EC2. | The staging area VPC connects privately to the Amazon EC2 API through the interface endpoint that you create in the target staging area VPC. To create the endpoint, follow the instructions provided in the previous story.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/connect-to-application-migration-service-data-and-control-planes-over-a-private-network.html) | Migration lead | 
| Configure the interface endpoint for Amazon S3. | The source data center and staging area VPC connect privately to the Amazon S3 API through the interface endpoint that you create in the target staging area VPC. To create the endpoint, follow the instructions provided in the first story.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/connect-to-application-migration-service-data-and-control-planes-over-a-private-network.html)You use an interface endpoint because gateway endpoint connections cannot be extended out of a VPC. (For details, see the [AWS PrivateLink documentation](https://docs.aws.amazon.com/vpc/latest/privatelink/vpce-gateway.html).) | Migration lead | 
| Configure the Amazon S3 gateway endpoint. | During the configuration phase, the replication server has to connect to an S3 bucket to download the replication server’s software updates. However, Amazon S3 interface endpoints do not support private DNS names, and there is no way to supply an Amazon S3 endpoint DNS name to a replication server. To mitigate this issue, you create an Amazon S3 gateway endpoint in the VPC that the staging area subnet belongs to, and update the staging subnet’s route tables with the relevant routes. For more information, see [Create a gateway endpoint](https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-s3.html#create-gateway-endpoint-s3) in the AWS PrivateLink documentation. | Cloud administrator | 
| Configure on-premises DNS to resolve private DNS names for endpoints. | The interface endpoints for Application Migration Service and Amazon EC2 have private DNS names that can be resolved in the VPC. However, you also need to configure on-premises servers to resolve private DNS names for these interface endpoints. There are multiple ways to configure these servers. In this pattern, we tested this functionality by forwarding on-premises DNS queries to the Amazon Route 53 Resolver inbound endpoint in the staging area VPC. For more information, see [Resolving DNS queries between VPCs and your network](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resolver-overview-DSN-queries-to-vpc.html) in the Route 53 documentation. | Migration engineer | 
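As an illustration of the interface endpoint tasks above, the following sketch builds the parameters that could be passed to the boto3 `ec2.create_vpc_endpoint` call. The VPC, subnet, and security group IDs are placeholders, and the private DNS setting follows this pattern's use of the S3 interface endpoint without private DNS names.

```python
# Sketch, not a definitive implementation: parameter builder for the boto3
# ec2.create_vpc_endpoint call. All IDs are placeholders.

def build_interface_endpoint_params(region, service, vpc_id, subnet_ids, security_group_ids):
    """Kwargs for ec2.create_vpc_endpoint for an interface endpoint."""
    return {
        "VpcEndpointType": "Interface",
        "ServiceName": f"com.amazonaws.{region}.{service}",  # e.g. mgn, ec2, s3
        "VpcId": vpc_id,
        "SubnetIds": subnet_ids,
        "SecurityGroupIds": security_group_ids,
        # In this pattern, private DNS is enabled for the MGN and EC2 endpoints
        # so their default DNS names resolve to the endpoint; the S3 interface
        # endpoint is created without it.
        "PrivateDnsEnabled": service != "s3",
    }

# Example usage (placeholders):
# ec2 = boto3.client("ec2", region_name="us-east-1")
# ec2.create_vpc_endpoint(**build_interface_endpoint_params(
#     "us-east-1", "mgn", "vpc-0abc", ["subnet-0abc"], ["sg-0abc"]))
```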

### Connect to the Application Migration Service control plane over a private link
<a name="connect-to-the-mgn-control-plane-over-a-private-link"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Install AWS Replication Agent by using AWS PrivateLink. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/connect-to-application-migration-service-data-and-control-planes-over-a-private-network.html)Here’s an example for Linux:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/connect-to-application-migration-service-data-and-control-planes-over-a-private-network.html)After you have established your connection with Application Migration Service and installed the AWS Replication Agent, follow the instructions in the [Application Migration Service documentation](https://docs.aws.amazon.com/mgn/latest/ug/migration-workflow-gs.html) to migrate your source servers to your target VPC and subnet. | Migration engineer | 

## Related resources
<a name="connect-to-application-migration-service-data-and-control-planes-over-a-private-network-resources"></a>

**Application Migration Service documentation**
+ [Concepts](https://docs.aws.amazon.com/mgn/latest/ug/CloudEndure-Concepts.html)
+ [Migration workflow ](https://docs.aws.amazon.com/mgn/latest/ug/migration-workflow-gs.html)
+ [Quick start guide](https://docs.aws.amazon.com/mgn/latest/ug/quick-start-guide-gs.html)
+ [FAQ](https://docs.aws.amazon.com/mgn/latest/ug/FAQ.html)
+ [Troubleshooting](https://docs.aws.amazon.com/mgn/latest/ug/troubleshooting.html)

**Additional resources**
+ [Rehosting your applications in a multi-account architecture on AWS by using VPC interface endpoints](https://docs.aws.amazon.com/prescriptive-guidance/latest/rehost-multi-account-architecture-interface-endpoints/) (AWS Prescriptive Guidance guide)
+ [AWS Application Migration Service – A Technical Introduction](https://www.aws.training/Details/eLearning?id=71732) (AWS Training and Certification walkthrough)
+ [AWS Application Migration Service architecture and network architecture](https://youtu.be/ao8geVzmmRo) (video)

## Additional information
<a name="connect-to-application-migration-service-data-and-control-planes-over-a-private-network-additional"></a>

**Troubleshooting AWS Replication Agent installations on Linux servers**

If you get a **gcc** error on an Amazon Linux server, configure the package repository and use the following command:

```
sudo yum groupinstall "Development Tools"
```

# Create Infoblox objects using AWS CloudFormation custom resources and Amazon SNS
<a name="create-infoblox-objects-using-aws-cloudformation-custom-resources-and-amazon-sns"></a>

*Tim Sutton, Amazon Web Services*

## Summary
<a name="create-infoblox-objects-using-aws-cloudformation-custom-resources-and-amazon-sns-summary"></a>

**Notice**: AWS Cloud9 is no longer available to new customers. Existing customers of AWS Cloud9 can continue to use the service as normal. [Learn more](https://aws.amazon.com/blogs/devops/how-to-migrate-from-aws-cloud9-to-aws-ide-toolkits-or-aws-cloudshell/)

Infoblox Domain Name System (DNS), Dynamic Host Configuration Protocol (DHCP), and IP address management ([Infoblox DDI](https://www.infoblox.com/products/ddi/)) enables you to centralize and efficiently control a complex hybrid environment. With Infoblox DDI, you can discover and record all network assets in one authoritative IP address management (IPAM) database, in addition to managing DNS on premises and on the Amazon Web Services (AWS) Cloud by using the same appliances.

This pattern describes how to use an AWS CloudFormation custom resource to create Infoblox objects (for example, DNS records or IPAM objects) by calling the Infoblox WAPI API. For more information about the Infoblox WAPI, see the [WAPI documentation](https://www.infoblox.com/wp-content/uploads/infoblox-deployment-infoblox-rest-api.pdf) in the Infoblox documentation.
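A minimal sketch of such a WAPI call, assuming a hypothetical grid master host and the WAPI version used by this pattern (2.7); in the pattern itself, Lambda functions make these calls on your behalf.

```python
import json

# Hypothetical grid master host; the /wapi/v2.7 base path matches this
# pattern's product version.
WAPI_BASE = "https://gridmaster.example.com/wapi/v2.7"

def a_record_request(name, ipv4addr):
    """URL and JSON body for creating a DNS A record (the record:a WAPI object)."""
    return f"{WAPI_BASE}/record:a", json.dumps({"name": name, "ipv4addr": ipv4addr})

# With the requests library, this could be sent as, for example:
#   url, body = a_record_request("arecordtest.company.com", "10.0.0.1")
#   requests.post(url, data=body, auth=(user, password),
#                 headers={"Content-Type": "application/json"})
```

The credentials would come from the admin user in the prerequisites; in the pattern they are stored in AWS Secrets Manager rather than hardcoded.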

By using this pattern’s approach, you can obtain a unified view of DNS records and IPAM configurations for your AWS and on-premises environments, in addition to removing manual processes that create records and provision your networks. You can use this pattern’s approach for the following use cases:
+ Adding an A record after creating an Amazon Elastic Compute Cloud (Amazon EC2) instance 
+ Adding a CNAME record after creating an Application Load Balancer
+ Adding a network object after creating a virtual private cloud (VPC)
+ Providing the next network range and using that range to create subnets

You can also extend this pattern and use other Infoblox device features such as adding different DNS record types or configuring Infoblox vDiscovery. 

The pattern uses a hub-and-spoke design in which the hub requires connectivity to the Infoblox appliance on the AWS Cloud or on premises and uses AWS Lambda to call the Infoblox API. The spoke is in the same or a different account in the same organization in AWS Organizations, and calls the Lambda function by using an AWS CloudFormation custom resource.

## Prerequisites and limitations
<a name="create-infoblox-objects-using-aws-cloudformation-custom-resources-and-amazon-sns-prereqs"></a>

**Prerequisites**
+ An existing Infoblox appliance or grid, installed on the AWS Cloud, on premises, or both, and configured with an admin user that can administer IPAM and DNS actions. For more information about this, see [About admin accounts](https://docs.infoblox.com/display/nios86/About+Admin+Accounts) in the Infoblox documentation. 
+ An existing authoritative DNS zone on the Infoblox appliance to which you want to add records. For more information, see [Configuring authoritative zones](https://docs.infoblox.com/display/nios86/Configuring+Authoritative+Zones) in the Infoblox documentation.
+ Two active AWS accounts in AWS Organizations. One account is the hub account and the other account is the spoke account.
+ The hub and spoke accounts must be in the same AWS Region. 
+ The hub account’s VPC must connect to the Infoblox appliance; for example, by using AWS Transit Gateway or VPC peering.
+ [AWS Serverless Application Model (AWS SAM),](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/what-is-sam.html) locally installed and configured with AWS Cloud9 or AWS CloudShell.
+ The `Infoblox-Hub.zip` and `ClientTest.yaml` files (attached), downloaded to the local environment that contains AWS SAM.

**Limitations**
+ The AWS CloudFormation custom resource’s service token must be from the same Region where the stack is created. We recommend that you use a hub account in each Region, instead of creating an Amazon Simple Notification Service (Amazon SNS) topic in one Region and calling the Lambda function in another Region.

**Product versions**
+ Infoblox WAPI version 2.7

## Architecture
<a name="create-infoblox-objects-using-aws-cloudformation-custom-resources-and-amazon-sns-architecture"></a>

The following diagram shows this pattern’s workflow.

![\[Creating Infoblox objects using AWS CloudFormation custom resources and Amazon SNS.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/8d609d3f-6f5e-4084-849f-ca191db8055e/images/3594a064-e103-4211-84b7-da67c41ebb15.png)


The diagram shows the following components for this pattern’s solution:

1. AWS CloudFormation custom resources enable you to write custom provisioning logic in templates that AWS CloudFormation runs when you create, update, or delete stacks. When you create a stack, AWS CloudFormation sends a `create` request to the SNS topic that you specify as the custom resource’s service token.

1. The Amazon SNS notification from the AWS CloudFormation custom resource is encrypted through a specific AWS Key Management Service (AWS KMS) key and access is restricted to accounts in your organization in Organizations. The SNS topic initiates the Lambda resource that calls the Infoblox WAPI API.

1. Amazon SNS invokes the following Lambda functions, which take the Infoblox WAPI URL and the AWS Secrets Manager Amazon Resource Names (ARNs) of the user name and password as environment variables: 
   + `dnsapi.lambda_handler` – Receives the `DNSName`, `DNSType`, and `DNSValue` values from the AWS CloudFormation custom resource and uses these to create DNS A records and CNAME records.
   + `ipaddr.lambda_handler` – Receives the `VPCCIDR`, `Type`, `SubnetPrefix`, and `Network Name` values from the AWS CloudFormation custom resource and uses these to add the network data into the Infoblox IPAM database or provide the custom resource with the next available network that can be used to create new subnets.
   + `describeprefixes.lambda_handler` – Calls the `describe_managed_prefix_lists` AWS API by using the `"com.amazonaws."+Region+".s3"` filter to retrieve the required prefix list ID.
**Important**  
These Lambda functions are written in Python and are similar to each other but call different APIs.

1. You can deploy the Infoblox grid as a physical, virtual, or cloud-based network appliance. It can be deployed on premises as a virtual appliance by using a range of hypervisors, including VMware ESXi, Microsoft Hyper-V, Linux KVM, and Xen. You can also deploy the Infoblox grid on the AWS Cloud with an Amazon Machine Image (AMI).

1. The diagram shows a hybrid solution for the Infoblox grid that provides DNS and IPAM to resources on the AWS Cloud and on premises.
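The custom resource contract in steps 1–3 can be sketched as a minimal handler. The function and reference names here are hypothetical, but the response fields are the ones CloudFormation expects a custom resource to PUT back to the pre-signed `ResponseURL`.

```python
import json
import urllib.request

def build_cfn_response(request, status, data=None, reason=""):
    """Body that a custom-resource handler must PUT back to the ResponseURL."""
    return json.dumps({
        "Status": status,                      # "SUCCESS" or "FAILED"
        "Reason": reason or "See CloudWatch Logs",
        "PhysicalResourceId": request.get("PhysicalResourceId", "InfobloxObject"),
        "StackId": request["StackId"],
        "RequestId": request["RequestId"],
        "LogicalResourceId": request["LogicalResourceId"],
        "Data": data or {},                    # e.g. {"infobloxref": "<WAPI reference>"}
    })

def handler(event, context):
    # The CloudFormation request arrives wrapped in the SNS message body.
    request = json.loads(event["Records"][0]["Sns"]["Message"])
    # A real handler would call the Infoblox WAPI here, then report the result.
    body = build_cfn_response(request, "SUCCESS", {"infobloxref": "placeholder"})
    put = urllib.request.Request(
        request["ResponseURL"], data=body.encode("utf-8"), method="PUT",
        headers={"Content-Type": ""})  # the pre-signed S3 URL expects no content type
    urllib.request.urlopen(put)
```

Returning `Data` values is what lets templates reference `infobloxref` with `Fn::GetAtt`, as the spoke examples below do.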

**Technology stack**
+ AWS CloudFormation
+ IAM
+ AWS KMS
+ AWS Lambda
+ AWS SAM
+ AWS Secrets Manager
+ Amazon SNS
+ Amazon VPC 

## Tools
<a name="create-infoblox-objects-using-aws-cloudformation-custom-resources-and-amazon-sns-tools"></a>
+ [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) helps you set up AWS resources, provision them quickly and consistently, and manage them throughout their lifecycle across AWS accounts and Regions.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.
+ [AWS Key Management Service (AWS KMS)](https://docs.aws.amazon.com/kms/latest/developerguide/overview.html) helps you create and control cryptographic keys to help protect your data.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a compute service that helps you run code without needing to provision or manage servers. It runs your code only when needed and scales automatically, so you pay only for the compute time that you use.
+ [AWS Organizations](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_introduction.html) is an account management service that helps you consolidate multiple AWS accounts into an organization that you create and centrally manage.
+ [AWS Secrets Manager](https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html) helps you replace hardcoded credentials in your code, including passwords, with an API call to Secrets Manager to retrieve the secret programmatically.
+ [AWS Serverless Application Model (AWS SAM)](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/what-is-sam.html) is an open-source framework that helps you build serverless applications in the AWS Cloud.
+ [Amazon Simple Notification Service (Amazon SNS)](https://docs.aws.amazon.com/sns/latest/dg/welcome.html) helps you coordinate and manage the exchange of messages between publishers and clients, including web servers and email addresses.
+ [Amazon Virtual Private Cloud (Amazon VPC)](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html) helps you launch AWS resources into a virtual network that you’ve defined. This virtual network resembles a traditional network that you’d operate in your own data center, with the benefits of using the scalable infrastructure of AWS.

**Code**

You can use the `ClientTest.yaml` sample AWS CloudFormation template (attached) to test the Infoblox hub. You can customize the AWS CloudFormation template to include the custom resources from the following table.


| Custom resource | Details | 
| --- | --- | 
| Create an A record using the Infoblox spoke custom resource | **Return values**: `infobloxref` – Infoblox references**Example resource**:<pre>ARECORDCustomResource:<br /><br />  Type: "Custom::InfobloxAPI"<br /><br />  Properties:<br /><br />    ServiceToken: !Sub arn:aws:sns:${AWS::Region}:${HubAccountID}:RunInfobloxDNSFunction<br /><br />    DNSName: 'arecordtest.company.com'<br /><br />    DNSType: 'ARecord'<br /><br />    DNSValue: '10.0.0.1'</pre> | 
| Create a CNAME record using the Infoblox spoke custom resource | **Return values**: `infobloxref` – Infoblox references**Example resource**:<pre>CNAMECustomResource:<br /><br />  Type: "Custom::InfobloxAPI"<br /><br />  Properties:<br /><br />    ServiceToken: !Sub arn:aws:sns:${AWS::Region}:${HubAccountID}:RunInfobloxDNSFunction<br /><br />    DNSName: 'cnametest.company.com'<br /><br />    DNSType: 'cname'<br /><br />    DNSValue: 'aws.amazon.com'</pre> | 
| Create a network object using the Infoblox spoke custom resource | **Return values**: `infobloxref` – Infoblox references`network` – Network range (the same as `VPCCIDR`)**Example resource**:<pre>VPCCustomResource:<br /><br />  Type: 'Custom::InfobloxAPI'<br /><br />  Properties:<br /><br />    ServiceToken: !Sub arn:aws:sns:${AWS::Region}:${HubAccountID}:RunInfobloxNextSubnetFunction<br /><br />    VPCCIDR: !Ref VpcCIDR<br /><br />    Type: VPC<br /><br />    NetworkName: My-VPC</pre> | 
| Retrieve the next available subnet using the Infoblox spoke custom resource | **Return values**: `infobloxref` – Infoblox references`network` – The subnet's network range**Example resource**:<pre>Subnet1CustomResource:<br /><br />  Type: 'Custom::InfobloxAPI'<br /><br />  DependsOn: VPCCustomResource<br /><br />  Properties:<br /><br />    ServiceToken: !Sub arn:aws:sns:${AWS::Region}:${HubAccountID}:RunInfobloxNextSubnetFunction<br /><br />    VPCCIDR: !Ref VpcCIDR<br /><br />    Type: Subnet<br /><br />    SubnetPrefix: !Ref SubnetPrefix<br /><br />    NetworkName: My-Subnet</pre> | 

## Epics
<a name="create-infoblox-objects-using-aws-cloudformation-custom-resources-and-amazon-sns-epics"></a>

### Create and configure the hub account’s VPC
<a name="create-and-configure-the-hub-accountrsquor-s-vpc"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a VPC with a connection to the Infoblox appliance. | Sign in to the AWS Management Console for your hub account and create a VPC by following the steps in the [Amazon VPC on the AWS Cloud Quick Start reference deployment](https://aws-quickstart.github.io/quickstart-aws-vpc/) from AWS Quick Starts. The VPC must have HTTPS connectivity to the Infoblox appliance, and we recommend that you use a private subnet for this connection. | Network administrator, Systems administrator | 
| (Optional) Create the VPC endpoints for private subnets.  | VPC endpoints provide connectivity to public services for your private subnets. The following endpoints are required:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-infoblox-objects-using-aws-cloudformation-custom-resources-and-amazon-sns.html)For more information about creating endpoints for private subnets, see [VPC endpoints](https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints.html) in the Amazon VPC documentation. | Network administrator, Systems administrator | 

### Deploy the Infoblox hub
<a name="deploy-the-infoblox-hub"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Build the AWS SAM template. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-infoblox-objects-using-aws-cloudformation-custom-resources-and-amazon-sns.html) | Developer, System administrator | 
| Deploy the AWS SAM template. | The `sam deploy` command takes the required parameters and saves them into the `samconfig.toml` file, stores the AWS CloudFormation template and Lambda functions in an S3 bucket, and then deploys the AWS CloudFormation template into your hub account.  The following sample code shows how to deploy the AWS SAM template:<pre>$ sam deploy --guided<br /><br />Configuring SAM deploy<br />======================<br />        Looking for config file [samconfig.toml] :  Found<br />        Reading default arguments  :  Success<br />        Setting default arguments for 'sam deploy'<br />        =========================================<br />        Stack Name [Infoblox-Hub]:<br />        AWS Region [eu-west-1]:<br />        Parameter InfobloxUsername:<br />        Parameter InfobloxPassword:<br />        Parameter InfobloxIPAddress [xxx.xxx.xx.xxx]:<br />        Parameter AWSOrganisationID [o-xxxxxxxxx]:<br />        Parameter VPCID [vpc-xxxxxxxxx]:<br />        Parameter VPCCIDR [xxx.xxx.xxx.xxx/16]:<br />        Parameter VPCSubnetID1 [subnet-xxx]:<br />        Parameter VPCSubnetID2 [subnet-xxx]:<br />        Parameter VPCSubnetID3 [subnet-xxx]:<br />        Parameter VPCSubnetID4 []: <br />        #Shows you resources changes to be deployed and require a 'Y' to initiate deploy<br />        Confirm changes before deploy [Y/n]: y<br />        #SAM needs permission to be able to create roles to connect to the resources in your template<br />Allow SAM CLI IAM role creation [Y/n]: n<br />Capabilities [['CAPABILITY_NAMED_IAM']]:<br />        Save arguments to configuration file [Y/n]: y<br />        SAM configuration file [samconfig.toml]:<br />        SAM configuration environment [default]: </pre>You must use the `--guided` option each time because the Infoblox sign-in credentials are not stored in the `samconfig.toml` file. | Developer, System administrator | 

## Related resources
<a name="create-infoblox-objects-using-aws-cloudformation-custom-resources-and-amazon-sns-resources"></a>
+ [Getting started with WAPIs using Postman](https://blogs.infoblox.com/community/getting-started-with-wapis-using-postman/) (Infoblox Blog)
+ [Provisioning vNIOS for AWS Using the BYOL Model](https://docs.infoblox.com/display/NAIG/Provisioning+vNIOS+for+AWS+Using+the+BYOL+Model) (Infoblox documentation)
+ [quickstart-aws-vpc](https://github.com/aws-quickstart/quickstart-aws-vpc) (GitHub repo)
+ [describe\_managed\_prefix\_lists](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/ec2.html#EC2.Client.describe_managed_prefix_lists) (AWS SDK for Python documentation)

## Attachments
<a name="attachments-8d609d3f-6f5e-4084-849f-ca191db8055e"></a>

To access additional content that is associated with this document, unzip the following file: [attachment.zip](samples/p-attach/8d609d3f-6f5e-4084-849f-ca191db8055e/attachments/attachment.zip)

# Create a hierarchical, multi-Region IPAM architecture on AWS by using Terraform
<a name="multi-region-ipam-architecture"></a>

*Donny Schreiber, Amazon Web Services*

## Summary
<a name="multi-region-ipam-architecture-summary"></a>

*IP address management (IPAM)* is a critical component of network management, and it becomes increasingly complex as organizations scale their cloud infrastructure. Without proper IPAM, organizations risk IP address conflicts, wasted address space, and complex troubleshooting that can lead to outages and application downtime. This pattern demonstrates how to implement a comprehensive IPAM solution for AWS enterprise environments by using HashiCorp Terraform. It helps organizations to create a hierarchical, multi-Region IPAM architecture that facilitates centralized IP address management across all AWS accounts in an [AWS organization](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_getting-started_concepts.html#organization-structure).

This pattern helps you implement [Amazon VPC IP Address Manager](https://docs.aws.amazon.com/vpc/latest/ipam/what-it-is-ipam.html) with a sophisticated four-tier pool hierarchy: top-level pool, Regional pools, business unit pools, and environment-specific pools. This structure supports proper IP address governance while enabling delegation of IP management to appropriate teams within the organization. The solution uses AWS Resource Access Manager (AWS RAM) to seamlessly share IP Address Manager pools across the organization. AWS RAM centralizes and standardizes IPAM specifications, which teams can build upon across all managed accounts.

This pattern can help you achieve the following:
+ Automate IP address allocation across AWS Regions, business units, and environments.
+ Enforce organizational network policies through programmatic validation.
+ Scale network infrastructure efficiently as business requirements evolve.
+ Reduce operational overhead through centralized management of IP address spaces.
+ Accelerate cloud-native workload deployments with self-service CIDR range allocation.
+ Prevent address conflicts through policy-based controls and validation.

## Prerequisites and limitations
<a name="multi-region-ipam-architecture-prereqs"></a>

**Prerequisites**
+ One or more AWS accounts, managed as an organization in AWS Organizations.
+ A network hub or network management account that will serve as the IP Address Manager delegated administrator.
+ AWS Command Line Interface (AWS CLI), [installed](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) and [configured](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html).
+ Terraform version 1.5.0 or later, [installed](https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli).
+ AWS Provider for Terraform, [configured](https://registry.terraform.io/providers/hashicorp/aws/latest/docs).
+ Permissions to manage [IP Address Manager](https://docs.aws.amazon.com/vpc/latest/ipam/iam-ipam.html), [AWS RAM](https://docs.aws.amazon.com/ram/latest/userguide/security-iam.html), and [virtual private clouds (VPCs)](https://docs.aws.amazon.com/vpc/latest/userguide/security-iam.html), configured in AWS Identity and Access Management (IAM).

**Limitations**
+ IP Address Manager is subject to [service quotas](https://docs.aws.amazon.com/vpc/latest/ipam/quotas-ipam.html). The default service quota for pools is 50 per scope. Running this deployment for 6 Regions, 2 business units, and 4 environments would create 67 pools. Therefore, a quota increase might be necessary.
+ Modifying or deleting IP Address Manager pools after resources have been allocated can cause dependency issues. You must [release the allocation](https://docs.aws.amazon.com/vpc/latest/ipam/release-alloc-ipam.html) before you can delete the pool.
+ In IP Address Manager, [resource monitoring](https://docs.aws.amazon.com/vpc/latest/ipam/monitor-cidr-compliance-ipam.html) can experience a slight delay in reflecting resource changes. This delay can be approximately 20 minutes.
+ IP Address Manager cannot automatically enforce IP address uniqueness across different scopes.
+ Custom tags must adhere to [AWS tagging best practices](https://docs.aws.amazon.com/whitepapers/latest/tagging-best-practices/tagging-best-practices.html). For example, each key must be unique and cannot begin with `aws:`.
+ There are [considerations and limitations](https://docs.aws.amazon.com/vpc/latest/ipam/enable-integ-ipam-outside-org-considerations.html) when integrating IP Address Manager with accounts outside of your organization.
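The pool-count arithmetic in the quota limitation above follows directly from the four-tier hierarchy: one top-level pool, one pool per Region, one per Region and business unit, and one per Region, business unit, and environment. A quick sketch of the formula:

```python
# Total IPAM pools created by the four-tier hierarchy:
# 1 top-level pool + one per Region + one per (Region, BU)
# + one per (Region, BU, environment).
def total_pools(regions: int, business_units: int, environments: int) -> int:
    return 1 + regions + regions * business_units + regions * business_units * environments

# The example from the limitations: 6 Regions, 2 business units, 4 environments.
print(total_pools(6, 2, 4))  # 67 -- above the default quota of 50 pools per scope
```

Use this to estimate whether your planned topology will require a quota increase before you deploy.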

## Architecture
<a name="multi-region-ipam-architecture-architecture"></a>

**Target architecture**

*IP Address Manager configuration and pool hierarchy*

The following diagram shows the logical constructs of the target architecture. A *scope* is the highest-level container in IP Address Manager. Each scope represents the IP address space for a single network. The *pools* are collections of contiguous IP address ranges (or CIDR ranges) within the scope. Pools help you organize your IP addresses according to your routing and security needs. This diagram shows four hierarchical levels of pools: a top-level pool, Regional pools, business unit pools, and environment pools.

![\[A private scope and four levels of pools in a single AWS Region in a Network account.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/780e344e-37f7-4b70-8d7c-94ec67a29305/images/1e23b2a7-a274-4a19-9097-61d8a31dfbf8.png)


This solution establishes a clear hierarchy of IP Address Manager pools:

1. The top-level pool encompasses the entire organizational IP address space, such as `10.176.0.0/12`.

1. The Regional pools are for Region-specific allocations, such as `10.176.0.0/15` for `us-east-1`.

1. The business unit pools are domain-specific allocations within each AWS Region. For example, the finance business unit in the `us-east-1` Region might have `10.176.0.0/16`.

1. The environment pools are purpose-specific allocations for different environments. For example, the finance business unit in the `us-east-1` Region might have `10.176.0.0/18` for a production environment.
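The nesting of the example CIDR ranges above can be verified with Python's standard `ipaddress` module. The addresses here are the illustrative values from the list, not required values:

```python
import ipaddress

# Illustrative CIDR ranges from the hierarchy above.
top_level = ipaddress.ip_network("10.176.0.0/12")  # entire organizational space
regional = ipaddress.ip_network("10.176.0.0/15")   # us-east-1
bu = ipaddress.ip_network("10.176.0.0/16")         # finance, us-east-1
env = ipaddress.ip_network("10.176.0.0/18")        # finance production, us-east-1

# Each level must nest inside its parent.
assert env.subnet_of(bu)
assert bu.subnet_of(regional)
assert regional.subnet_of(top_level)
```

Each level narrows the prefix, so a parent pool can hold multiple child pools: a /15 Regional pool holds two /16 business unit pools, and each /16 holds four /18 environment pools.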

This deployment topology distributes IP Address Manager resources geographically while maintaining centralized control. The following are its features:
+ IP Address Manager is deployed in a single primary AWS Region.
+ Additional Regions are registered as [operating regions](https://docs.aws.amazon.com/vpc/latest/ipam/mod-ipam-region.html), where IP Address Manager can manage resources.
+ Each operating region receives a dedicated address pool from the top-level pool.
+ Resources in all operating regions are centrally managed through IP Address Manager in the primary Region.
+ Each Regional pool has a locale property tied to its Region to help you properly allocate resources.

*Advanced CIDR range validation*

This solution is designed to prevent deployment of invalid configurations. When you deploy the pools through Terraform, the plan phase validates the following:
+ All environment CIDR ranges are contained within their parent business unit CIDR ranges.
+ All business unit CIDR ranges are contained within their parent Regional CIDR ranges.
+ All Regional CIDR ranges are contained within the top-level CIDR range.
+ CIDR ranges within the same hierarchy level do not overlap.
+ Environments are properly mapped to their respective business units.
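A minimal sketch of the containment and overlap checks, assuming a parent CIDR range and its child CIDR ranges as plain strings. (In the repository, the validation is expressed in Terraform; this Python version only illustrates the logic.)

```python
import ipaddress
from itertools import combinations

def validate_hierarchy(parent_cidr: str, child_cidrs: list[str]) -> list[str]:
    """Return a list of violations: children outside the parent,
    or siblings that overlap one another."""
    parent = ipaddress.ip_network(parent_cidr)
    children = [ipaddress.ip_network(c) for c in child_cidrs]
    errors = []
    for child in children:
        if not child.subnet_of(parent):
            errors.append(f"{child} is not contained in {parent}")
    for a, b in combinations(children, 2):
        if a.overlaps(b):
            errors.append(f"{a} overlaps sibling {b}")
    return errors

# Two Regional pools inside the top-level pool -- no violations expected.
print(validate_hierarchy("10.176.0.0/12", ["10.176.0.0/15", "10.178.0.0/15"]))  # []
```

Running the same checks at each level of the hierarchy (top-level to Regional, Regional to business unit, business unit to environment) reproduces the plan-phase validation described above.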

*CIDR range allocation*

The following diagram shows an example of how developers or administrators can create new VPCs and allocate IP addresses from the pool levels.

![\[A private scope and four levels of pools in a single AWS Region in a Network account.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/780e344e-37f7-4b70-8d7c-94ec67a29305/images/7c3de2e3-e71b-4fc0-abcd-7e88cfab5c87.png)


The diagram shows the following workflow:

1. Through the AWS Management Console, the AWS CLI, or through infrastructure as code (IaC), a developer or administrator requests the next available CIDR range in the `AY3` environment pool.

1. IP Address Manager allocates the next available CIDR range in that pool to the `AY3-4` VPC. This CIDR range can no longer be used.
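The "next available CIDR range" semantics in the workflow above can be pictured with a small simulation. This is not the IP Address Manager API, just an illustration of the allocation behavior using the standard `ipaddress` module:

```python
import ipaddress

def next_available_cidr(pool_cidr: str, allocated: list[str], netmask: int) -> str:
    """Return the first /netmask block in the pool that does not
    overlap any existing allocation (mimics IPAM allocation)."""
    pool = ipaddress.ip_network(pool_cidr)
    taken = [ipaddress.ip_network(a) for a in allocated]
    for candidate in pool.subnets(new_prefix=netmask):
        if not any(candidate.overlaps(t) for t in taken):
            return str(candidate)
    raise ValueError("pool exhausted")

# An environment pool with one /24 VPC already allocated;
# the next request receives the following free /24 block.
print(next_available_cidr("10.176.0.0/18", ["10.176.0.0/24"], 24))  # 10.176.1.0/24
```

Once a range is handed out, it is excluded from future allocations, which is why the workflow notes that the allocated CIDR range "can no longer be used."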

**Automation and scale**

This solution is designed for scalability as follows:
+ **Regional expansion** – Add new Regions by extending the Terraform configuration with additional Regional pool entries.
+ **Business unit growth** – Support new business units by adding them to the BU configuration map.
+ **Environment flexibility** – Configure different environment types, such as development or production, based on organizational needs.
+ **Multi-account support** – Share pools across all accounts in your organization through AWS RAM.
+ **Automated VPC provisioning** – Integrate with VPC provisioning workflows to automate CIDR range allocation.

The hierarchical structure also allows for different scales of delegation and control, such as the following:
+ Network administrators might manage the top-level and Regional pools.
+ Business unit IT teams might have delegated control of their respective pools.
+ Application teams might consume IP addresses from their designated environment pools.

**Note**  
You can also integrate this solution with [AWS Control Tower Account Factory for Terraform (AFT)](https://docs.aws.amazon.com/controltower/latest/userguide/aft-overview.html). For more information, see *Integration with AFT* in the [Additional information](#multi-region-ipam-architecture-additional) section of this pattern.

## Tools
<a name="multi-region-ipam-architecture-tools"></a>

**AWS services**
+ [Amazon CloudWatch](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/WhatIsCloudWatch.html) helps you monitor the metrics of your AWS resources and the applications you run on AWS in real time.
+ [AWS Command Line Interface (AWS CLI)](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) is an open source tool that helps you interact with AWS services through commands in your command-line shell.
+ [AWS Organizations](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_introduction.html) is an account management service that helps you consolidate multiple AWS accounts into an organization that you create and centrally manage.
+ [AWS Resource Access Manager (AWS RAM)](https://docs.aws.amazon.com/ram/latest/userguide/what-is.html) helps you securely share your resources across AWS accounts to reduce operational overhead and provide visibility and auditability.
+ [Amazon Virtual Private Cloud (Amazon VPC)](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html) helps you launch AWS resources into a virtual network that you’ve defined. This virtual network resembles a traditional network that you’d operate in your own data center, with the benefits of using the scalable infrastructure of AWS. [IP Address Manager](https://docs.aws.amazon.com/vpc/latest/ipam/what-it-is-ipam.html) is a feature of Amazon VPC. It helps you plan, track, and monitor IP addresses for your AWS workloads.

**Other tools**
+ [HashiCorp Terraform](https://www.terraform.io/docs) is an infrastructure as code (IaC) tool that helps you use code to provision and manage cloud infrastructure and resources.

**Code repository**

The code for this pattern is available in the [Sample Terraform Implementation for Hierarchical IPAM on AWS](https://github.com/aws-samples/sample-amazon-vpc-ipam-terraform) repository on GitHub. The repository structure includes:
+ **Root module** – Deployment orchestration and input variables.
+ **IPAM module** – Core implementation of the architecture described in this pattern.
+ **Tags module** – Standardized tagging for all resources.

## Best practices
<a name="multi-region-ipam-architecture-best-practices"></a>

Consider the following best practices for network planning:
+ **Plan first** – Thoroughly plan your IP address space before deployment. For more information, see [Plan for IP address provisioning](https://docs.aws.amazon.com/vpc/latest/ipam/planning-ipam.html).
+ **Avoid overlapping CIDR ranges** – Make sure that CIDR ranges at each level do not overlap.
+ **Reserve buffer space** – Always allocate larger CIDR ranges than immediately needed to accommodate growth.
+ **Document IP address allocation** – Maintain documentation of your IP address allocation strategy.

Consider the following deployment best practices:
+ **Start with non-production** – Deploy in non-production environments first.
+ **Use Terraform state management** – Implement remote state storage and locking. For more information, see [State storage and locking](https://developer.hashicorp.com/terraform/language/state/backends) in the Terraform documentation.
+ **Implement version control** – Version control all Terraform code.
+ **Implement CI/CD integration** – Use continuous integration and continuous delivery (CI/CD) pipelines for repeatable deployments.

Consider the following operational best practices:
+ **Enable auto-import** – Configure an IP Address Manager pool to automatically discover and import existing resources. Follow the instructions in [Edit an IPAM pool](https://docs.aws.amazon.com/vpc/latest/ipam/mod-pool-ipam.html) to turn on auto-import.
+ **Monitor IP address utilization** – Set up alarms for IP address utilization thresholds. For more information, see [Monitor IPAM with Amazon CloudWatch](https://docs.aws.amazon.com/vpc/latest/ipam/cloudwatch-ipam.html).
+ **Audit regularly** – Periodically audit IP address usage and compliance. For more information, see [Tracking IP address usage in IPAM](https://docs.aws.amazon.com/vpc/latest/ipam/tracking-ip-addresses-ipam.html).
+ **Clean up unused allocations** – Release IP address allocations when resources are decommissioned. For more information, see [Deprovision CIDRs from a pool](https://docs.aws.amazon.com/vpc/latest/ipam/depro-pool-cidr-ipam.html).

Consider the following security best practices:
+ **Implement least privilege** – Use IAM roles with the minimum required permissions. For more information, see [Security best practices in IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html) and [Identity and access management in IPAM](https://docs.aws.amazon.com/vpc/latest/ipam/iam-ipam.html).
+ **Use service control policies** – Implement service control policies (SCPs) to enforce IP Address Manager usage in your organization. For more information, see [Enforce IPAM use for VPC creation with SCPs](https://docs.aws.amazon.com/vpc/latest/ipam/scp-ipam.html).
+ **Control resource sharing** – Carefully manage the scope of IP Address Manager resource sharing in AWS RAM. For more information, see [Share an IPAM pool using AWS RAM](https://docs.aws.amazon.com/vpc/latest/ipam/share-pool-ipam.html).
+ **Enforce tagging** – Implement mandatory tagging for all resources related to IP Address Manager. For more information, see *Tagging strategy* in the [Additional information](#multi-region-ipam-architecture-additional) section.

## Epics
<a name="multi-region-ipam-architecture-epics"></a>

### Set up a delegated administrator account for IP Address Manager
<a name="set-up-a-delegated-administrator-account-for-ip-address-manager"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Enable AWS Organizations features. | Make sure that AWS Organizations has all features enabled. For instructions, see [Enabling all features for an organization with AWS Organizations](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_org_support-all-features.html) in the AWS Organizations documentation. | AWS administrator | 
| Enable resource sharing in AWS RAM. | Using the AWS CLI, enter the following command to enable AWS RAM resource sharing for your organization:<pre>aws ram enable-sharing-with-aws-organization</pre>For more information, see [Enable resource sharing within AWS Organizations](https://docs.aws.amazon.com/ram/latest/userguide/getting-started-sharing.html#getting-started-sharing-orgs) in the AWS RAM documentation. | AWS administrator | 
| Designate an administrator for IP Address Manager. | From the organization’s management account, using the AWS CLI, enter the following command, where `123456789012` is the ID of the account that will administer IP Address Manager:<pre>aws ec2 enable-ipam-organization-admin-account \<br />    --delegated-admin-account-id 123456789012</pre>Typically, a network or network hub account is used as the delegated administrator for IP Address Manager.For more information, see [Integrate IPAM with accounts in an AWS Organization](https://docs.aws.amazon.com/vpc/latest/ipam/enable-integ-ipam.html) in the IP Address Manager documentation. | AWS administrator | 

### Deploy the infrastructure
<a name="deploy-the-infrastructure"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Define the network architecture. | Define and document your network architecture, including the CIDR ranges for Regions, business units, and environments. For more information, see [Plan for IP address provisioning](https://docs.aws.amazon.com/vpc/latest/ipam/planning-ipam.html) in the IP Address Manager documentation. | Network engineer | 
| Clone the repository. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/multi-region-ipam-architecture.html) | DevOps engineer | 
| Configure the variables. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/multi-region-ipam-architecture.html) | Network engineer, Terraform | 
| Deploy the IP Address Manager resources. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/multi-region-ipam-architecture.html) | Terraform | 
| Validate the deployment. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/multi-region-ipam-architecture.html) | General AWS, Network engineer | 

### Create VPCs and set up monitoring
<a name="create-vpcs-and-set-up-monitoring"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a VPC. | Follow the steps in [Create a VPC](https://docs.aws.amazon.com/vpc/latest/userguide/create-vpc.html) in the Amazon VPC documentation. When you reach the step to choose a CIDR range for the VPC, allocate the next available CIDR range from one of your Regional, business unit, or environment pools. | General AWS, Network administrator, Network engineer | 
| Validate the CIDR range allocation. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/multi-region-ipam-architecture.html) | General AWS, Network administrator, Network engineer | 
| Monitor IP Address Manager. | Configure monitoring and alarms related to the allocation of IP Address Manager resources. For more information and instructions, see [Monitor IPAM with Amazon CloudWatch](https://docs.aws.amazon.com/vpc/latest/ipam/cloudwatch-ipam.html) and [Monitor CIDR usage by resource](https://docs.aws.amazon.com/vpc/latest/ipam/monitor-cidr-compliance-ipam.html) in the IP Address Manager documentation. | General AWS | 
| Enforce use of IP Address Manager. | Create a service control policy (SCP) in AWS Organizations that requires members in your organization to use IP Address Manager when they create a VPC. For instructions, see [Enforce IPAM use for VPC creation with SCPs](https://docs.aws.amazon.com/vpc/latest/ipam/scp-ipam.html) in the IP Address Manager documentation. | General AWS, AWS administrator | 

## Troubleshooting
<a name="multi-region-ipam-architecture-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| Terraform fails with IP Address Manager resource not found | Make sure that the IP Address Manager administrator account is properly delegated and that your AWS Provider is authenticated to that account. | 
| CIDR range allocation fails | Check that the requested CIDR range fits within the available range of the IP Address Manager pool and doesn't overlap with existing allocations. | 
| AWS RAM sharing issues | Verify that resource sharing is enabled for your organization. Verify that the correct principal, the organization Amazon Resource Name (ARN), is used in the AWS RAM share. | 
| Pool hierarchy validation errors | Make sure that the child pool CIDR ranges are properly contained within their parent pool CIDR ranges and don't overlap with sibling pools. | 
| IP Address Manager quota limit exceeded | Request a quota increase for IP Address Manager pools. For more information, see [Requesting a quota increase](https://docs.aws.amazon.com/servicequotas/latest/userguide/request-quota-increase.html) in the *Service Quotas User Guide*. | 

## Related resources
<a name="multi-region-ipam-architecture-resources"></a>

**AWS service documentation**
+ [Amazon VPC IP Address Manager documentation](https://docs.aws.amazon.com/vpc/latest/ipam/what-it-is-ipam.html)
+ [AWS Resource Access Manager documentation](https://docs.aws.amazon.com/ram/latest/userguide/what-is.html)
+ [AWS Organizations documentation](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_introduction.html)

**AWS blog posts**
+ [Managing IP pools across VPCs and Regions using Amazon VPC IP Address Manager](https://aws.amazon.com/blogs/networking-and-content-delivery/managing-ip-pools-across-vpcs-and-regions-using-amazon-vpc-ip-address-manager/)
+ [Network address management and auditing at scale with Amazon VPC IP Address Manager](https://aws.amazon.com/blogs/aws/network-address-management-and-auditing-at-scale-with-amazon-vpc-ip-address-manager/)

**Videos and tutorials**
+ [AWS re:Invent 2022: Best practices for Amazon VPC design and IPAM (NET310)](https://www.youtube.com/watch?v=XrEHsy_8RYs)
+ [AWS re:Invent 2022: Advanced VPC design and new capabilities (NET401)](https://www.youtube.com/watch?v=tbXTVpwx87o)

## Additional information
<a name="multi-region-ipam-architecture-additional"></a>

**Integration with AFT**

You can integrate this solution with AWS Control Tower Account Factory for Terraform (AFT) to make sure that newly provisioned accounts automatically receive proper network configurations. By deploying this IPAM solution in your network hub account, new accounts created through AFT can reference the shared IP Address Manager pools when you create VPCs.

The following code sample demonstrates AFT integration in an account customization by using AWS Systems Manager Parameter Store:

```
# Get the IP Address Manager pool ID from Parameter Store
data "aws_ssm_parameter" "dev_ipam_pool_id" {
  name = "/org/network/ipam/finance/dev/pool-id"
}

# Create a VPC using the IP Address Manager pool
resource "aws_vpc" "this" {
  ipv4_ipam_pool_id   = data.aws_ssm_parameter.dev_ipam_pool_id.value
  ipv4_netmask_length = 24
  
  tags = {
    Name = "aft-account-vpc"
  }
}
```

**Tagging strategy**

The solution implements a comprehensive tagging strategy to facilitate resource management. The following code sample demonstrates how it is used:

```
# Example tag configuration
module "tags" {
  source = "./modules/tags"
  
  # Required tags
  product_name  = "enterprise-network"
  feature_name  = "ipam"
  org_id        = "finance"
  business_unit = "network-operations"
  owner         = "network-team"
  environment   = "prod"
  repo          = "https://github.com/myorg/ipam-terraform"
  branch        = "main"
  cost_center   = "123456"
  dr_tier       = "tier1"
  
  # Optional tags
  optional_tags = {
    "project"    = "network-modernization"
    "stack_role" = "infrastructure"
  }
}
```

These tags are automatically applied to all IP Address Manager resources. This facilitates consistent governance, cost allocation, and resource management.

# Customize Amazon CloudWatch alerts for AWS Network Firewall
<a name="customize-amazon-cloudwatch-alerts-for-aws-network-firewall"></a>

*Jason Owens, Amazon Web Services*

## Summary
<a name="customize-amazon-cloudwatch-alerts-for-aws-network-firewall-summary"></a>

This pattern helps you customize the Amazon CloudWatch alerts that AWS Network Firewall generates. You can use predefined rules or create custom rules that determine the message, metadata, and severity of the alerts. You can then act on these alerts or automate responses through other AWS services, such as Amazon EventBridge.

In this pattern, you generate Suricata-compatible firewall rules. [Suricata](https://suricata.io/) is an open-source threat detection engine. You first create simple rules and test them to confirm that the CloudWatch alerts are generated and logged. After the rules pass that test, you modify them to define custom messages, metadata, and severities, and then you test once more to confirm the updates.
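For example, a rule of the following shape raises an alert with a custom message and classification. The message, signature ID, and matched domain here are illustrative, not part of this pattern's rule set:

```
alert http $HOME_NET any -> $EXTERNAL_NET any (msg:"Outbound HTTP request to example.com"; http.host; content:"example.com"; classtype:misc-activity; sid:1000001; rev:1;)
```

The `classtype` keyword maps to a priority through the Suricata classification configuration, and a `priority` keyword, if present, overrides that mapping.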

## Prerequisites and limitations
<a name="customize-amazon-cloudwatch-alerts-for-aws-network-firewall-prereqs"></a>

**Prerequisites**
+ An active AWS account.
+ AWS Command Line Interface (AWS CLI) installed and configured on your Linux, macOS, or Windows workstation. For more information, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).
+ AWS Network Firewall installed and configured to use CloudWatch Logs. For more information, see [Logging network traffic from AWS Network Firewall](https://docs.aws.amazon.com/network-firewall/latest/developerguide/firewall-logging.html).
+ An Amazon Elastic Compute Cloud (Amazon EC2) instance in a private subnet of a virtual private cloud (VPC) that is protected by Network Firewall.

**Product versions**
+ For version 1 of AWS CLI, use 1.18.180 or later. For version 2 of AWS CLI, use 2.1.2 or later.
+ The classification.config file from Suricata version 5.0.2. For a copy of this configuration file, see the [Additional information](#customize-amazon-cloudwatch-alerts-for-aws-network-firewall-additional) section.

## Architecture
<a name="customize-amazon-cloudwatch-alerts-for-aws-network-firewall-architecture"></a>

![\[An EC2 instance request generates alert in Network Firewall, which forwards alert to CloudWatch\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/da6087a9-e942-4cfe-85e3-3b08de6f3ba5/images/778d85cd-bc87-4ed0-a161-d35eb5daa694.png)


The architecture diagram shows the following workflow:

1. An Amazon EC2 instance in a private subnet makes a request by using either [curl](https://curl.se/) or [Wget](https://www.gnu.org/software/wget/).

1. Network Firewall processes the traffic and generates an alert.

1. Network Firewall sends the logged alerts to CloudWatch Logs.
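After the alerts arrive in CloudWatch Logs, you can review them with a CloudWatch Logs Insights query such as the following sketch, which assumes the default JSON structure of Network Firewall alert logs:

```
fields @timestamp, event.alert.signature, event.alert.severity, event.src_ip
| filter event.event_type = "alert"
| sort @timestamp desc
| limit 20
```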

## Tools
<a name="customize-amazon-cloudwatch-alerts-for-aws-network-firewall-tools"></a>

**AWS services**
+ [Amazon CloudWatch](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/WhatIsCloudWatch.html) helps you monitor the metrics of your AWS resources and the applications you run on AWS in real time.
+ [Amazon CloudWatch Logs](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html) helps you centralize the logs from all your systems, applications, and AWS services so you can monitor them and archive them securely.
+ [AWS Command Line Interface (AWS CLI)](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) is an open-source tool that helps you interact with AWS services through commands in your command-line shell.
+ [AWS Network Firewall](https://docs.aws.amazon.com/network-firewall/latest/developerguide/what-is-aws-network-firewall.html) is a stateful, managed network firewall and intrusion detection and prevention service for virtual private clouds (VPCs) in the AWS Cloud.

**Other tools**
+ [curl](https://curl.se/) is an open-source command line tool and library.
+ [GNU Wget](https://www.gnu.org/software/wget/) is a free command line tool.

## Epics
<a name="customize-amazon-cloudwatch-alerts-for-aws-network-firewall-epics"></a>

### Create the firewall rules and rule group
<a name="create-the-firewall-rules-and-rule-group"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create rules. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/customize-amazon-cloudwatch-alerts-for-aws-network-firewall.html) | AWS systems administrator, Network administrator | 
| Create the rule group. | In the AWS CLI, enter the following command. This creates the rule group.<pre>❯ aws network-firewall create-rule-group \<br />        --rule-group-name custom --type STATEFUL \<br />        --capacity 10 --rules file://custom.rules \<br />        --tags Key=environment,Value=development</pre>The following is an example output. Make note of the `RuleGroupArn`, which you need in a later step.<pre>{<br />    "UpdateToken": "4f998d72-973c-490a-bed2-fc3460547e23",<br />    "RuleGroupResponse": {<br />        "RuleGroupArn": "arn:aws:network-firewall:us-east-2:1234567890:stateful-rulegroup/custom",<br />        "RuleGroupName": "custom",<br />        "RuleGroupId": "238a8259-9eaf-48bb-90af-5e690cf8c48b",<br />        "Type": "STATEFUL",<br />        "Capacity": 10,<br />        "RuleGroupStatus": "ACTIVE",<br />        "Tags": [<br />            {<br />                "Key": "environment",<br />                "Value": "development"<br />            }<br />        ]<br />    }<br />}</pre> | AWS systems administrator | 

### Update the firewall policy
<a name="update-the-firewall-policy"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Get the ARN of the firewall policy. | In the AWS CLI, enter the following command. This returns the Amazon Resource Name (ARN) of the firewall policy. Record the ARN for use later in this pattern.<pre>❯ aws network-firewall describe-firewall \<br />    --firewall-name aws-network-firewall-anfw \<br />    --query 'Firewall.FirewallPolicyArn'</pre>The following is an example ARN that is returned by this command.<pre>"arn:aws:network-firewall:us-east-2:1234567890:firewall-policy/firewall-policy-anfw"</pre> | AWS systems administrator | 
| Update the firewall policy. | In a text editor, copy and paste the following code. Replace `<RuleGroupArn>` with the value you recorded in the previous epic. Save the file as `firewall-policy-anfw.json`.<pre>{<br />    "StatelessDefaultActions": [<br />        "aws:forward_to_sfe"<br />    ],<br />    "StatelessFragmentDefaultActions": [<br />        "aws:forward_to_sfe"<br />    ],<br />    "StatefulRuleGroupReferences": [<br />        {<br />            "ResourceArn": "<RuleGroupArn>"<br />        }<br />    ]<br />}</pre>Enter the following command in the AWS CLI. This command requires an [update token](https://docs.aws.amazon.com/cli/latest/reference/network-firewall/update-firewall-policy.html) to add the new rules. The token is used to confirm that the policy hasn't changed since you last retrieved it.<pre>UPDATETOKEN=(`aws network-firewall describe-firewall-policy \<br />              --firewall-policy-name firewall-policy-anfw \<br />              --output text --query UpdateToken`)<br /> <br /> aws network-firewall update-firewall-policy \<br /> --update-token $UPDATETOKEN \<br /> --firewall-policy-name firewall-policy-anfw \<br /> --firewall-policy file://firewall-policy-anfw.json</pre> | AWS systems administrator | 
| Confirm the policy updates. | (Optional) If you would like to confirm the rules were added and view the policy format, enter the following command in the AWS CLI.<pre>❯ aws network-firewall describe-firewall-policy \<br />  --firewall-policy-name firewall-policy-anfw \<br />  --query FirewallPolicy</pre>The following is an example output.<pre>{<br />    "StatelessDefaultActions": [<br />        "aws:forward_to_sfe"<br />    ],<br />    "StatelessFragmentDefaultActions": [<br />        "aws:forward_to_sfe"<br />    ],<br />    "StatefulRuleGroupReferences": [<br />        {<br />            "ResourceArn": "arn:aws:network-firewall:us-east-2:1234567890:stateful-rulegroup/custom"<br />        }<br />    ]<br />}</pre> | AWS systems administrator | 

### Test alert functionality
<a name="test-alert-functionality"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Generate alerts for testing. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/customize-amazon-cloudwatch-alerts-for-aws-network-firewall.html) | AWS systems administrator | 
| Validate that the alerts are logged. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/customize-amazon-cloudwatch-alerts-for-aws-network-firewall.html) | AWS systems administrator | 

### Update the firewall rules and rule group
<a name="update-the-firewall-rules-and-rule-group"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Update the firewall rules. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/customize-amazon-cloudwatch-alerts-for-aws-network-firewall.html) | AWS systems administrator | 
| Update the rule group. | In the AWS CLI, run the following commands. Use the ARN of your rule group. These commands get an update token and update the rule group with the rule changes.<pre>❯ UPDATETOKEN=(`aws network-firewall \<br />                describe-rule-group \<br />--rule-group-arn arn:aws:network-firewall:us-east-2:1234567890:stateful-rulegroup/custom \<br />--output text --query UpdateToken`)</pre><pre> ❯ aws network-firewall update-rule-group \<br />  --rule-group-arn arn:aws:network-firewall:us-east-2:1234567890:stateful-rulegroup/custom \<br />--rules file://custom.rules \<br />--update-token $UPDATETOKEN</pre>The following is an example output.<pre>{<br />    "UpdateToken": "7536939f-6a1d-414c-96d1-bb28110996ed",<br />    "RuleGroupResponse": {<br />        "RuleGroupArn": "arn:aws:network-firewall:us-east-2:1234567890:stateful-rulegroup/custom",<br />        "RuleGroupName": "custom",<br />        "RuleGroupId": "238a8259-9eaf-48bb-90af-5e690cf8c48b",<br />        "Type": "STATEFUL",<br />        "Capacity": 10,<br />        "RuleGroupStatus": "ACTIVE",<br />        "Tags": [<br />            {<br />                "Key": "environment",<br />                "Value": "development"<br />            }<br />        ]<br />    }<br />}</pre> | AWS systems administrator | 

### Test the updated alert functionality
<a name="test-the-updated-alert-functionality"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Generate an alert for testing. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/customize-amazon-cloudwatch-alerts-for-aws-network-firewall.html) | AWS systems administrator | 
| Validate the alert changed. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/customize-amazon-cloudwatch-alerts-for-aws-network-firewall.html) | AWS systems administrator | 

## Related resources
<a name="customize-amazon-cloudwatch-alerts-for-aws-network-firewall-resources"></a>

**References**
+ [Send alerts from AWS Network Firewall to a Slack channel](https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/send-alerts-from-aws-network-firewall-to-a-slack-channel.html) (AWS Prescriptive Guidance)
+ [Scaling threat prevention on AWS with Suricata](https://aws.amazon.com/blogs/opensource/scaling-threat-prevention-on-aws-with-suricata/) (AWS blog post)
+ [Deployment models for AWS Network Firewall](https://aws.amazon.com/blogs/networking-and-content-delivery/deployment-models-for-aws-network-firewall/) (AWS blog post)
+ [Suricata meta keywords](https://suricata.readthedocs.io/en/suricata-6.0.1/rules/meta.html) (Suricata documentation)

**Tutorials and videos**
+ [AWS Network Firewall workshop](https://networkfirewall.workshop.aws/)

## Additional information
<a name="customize-amazon-cloudwatch-alerts-for-aws-network-firewall-additional"></a>

The following is the classification configuration file from Suricata 5.0.2. These classifications are used when creating the firewall rules.

```
# config classification:shortname,short description,priority
 
config classification: not-suspicious,Not Suspicious Traffic,3
config classification: unknown,Unknown Traffic,3
config classification: bad-unknown,Potentially Bad Traffic, 2
config classification: attempted-recon,Attempted Information Leak,2
config classification: successful-recon-limited,Information Leak,2
config classification: successful-recon-largescale,Large Scale Information Leak,2
config classification: attempted-dos,Attempted Denial of Service,2
config classification: successful-dos,Denial of Service,2
config classification: attempted-user,Attempted User Privilege Gain,1
config classification: unsuccessful-user,Unsuccessful User Privilege Gain,1
config classification: successful-user,Successful User Privilege Gain,1
config classification: attempted-admin,Attempted Administrator Privilege Gain,1
config classification: successful-admin,Successful Administrator Privilege Gain,1
 
# NEW CLASSIFICATIONS
config classification: rpc-portmap-decode,Decode of an RPC Query,2
config classification: shellcode-detect,Executable code was detected,1
config classification: string-detect,A suspicious string was detected,3
config classification: suspicious-filename-detect,A suspicious filename was detected,2
config classification: suspicious-login,An attempted login using a suspicious username was detected,2
config classification: system-call-detect,A system call was detected,2
config classification: tcp-connection,A TCP connection was detected,4
config classification: trojan-activity,A Network Trojan was detected, 1
config classification: unusual-client-port-connection,A client was using an unusual port,2
config classification: network-scan,Detection of a Network Scan,3
config classification: denial-of-service,Detection of a Denial of Service Attack,2
config classification: non-standard-protocol,Detection of a non-standard protocol or event,2
config classification: protocol-command-decode,Generic Protocol Command Decode,3
config classification: web-application-activity,access to a potentially vulnerable web application,2
config classification: web-application-attack,Web Application Attack,1
config classification: misc-activity,Misc activity,3
config classification: misc-attack,Misc Attack,2
config classification: icmp-event,Generic ICMP event,3
config classification: inappropriate-content,Inappropriate Content was Detected,1
config classification: policy-violation,Potential Corporate Privacy Violation,1
config classification: default-login-attempt,Attempt to login by a default username and password,2
 
# Update
config classification: targeted-activity,Targeted Malicious Activity was Detected,1
config classification: exploit-kit,Exploit Kit Activity Detected,1
config classification: external-ip-check,Device Retrieving External IP Address Detected,2
config classification: domain-c2,Domain Observed Used for C2 Detected,1
config classification: pup-activity,Possibly Unwanted Program Detected,2
config classification: credential-theft,Successful Credential Theft Detected,1
config classification: social-engineering,Possible Social Engineering Attempted,2
config classification: coin-mining,Crypto Currency Mining Activity Detected,2
config classification: command-and-control,Malware Command and Control Activity Detected,1
```
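Before referencing one of these classifications in a rule's `classtype` keyword, you can sanity-check which short names and priorities the file defines. The following Python sketch parses classification lines of this format:

```python
import re

def parse_classifications(text):
    """Parse Suricata classification.config lines into
    (shortname, description, priority) tuples, tolerating
    the stray spaces that appear before some priorities."""
    pattern = re.compile(
        r"^config classification:\s*([^,]+),([^,]+),\s*(\d+)", re.M)
    return [(name.strip(), desc.strip(), int(prio))
            for name, desc, prio in pattern.findall(text)]

sample = """\
config classification: not-suspicious,Not Suspicious Traffic,3
config classification: trojan-activity,A Network Trojan was detected, 1
"""
classes = parse_classifications(sample)
```

Priority 1 is the most severe; Network Firewall surfaces the classification and priority in the alert log entry.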

# Deploy resources in an AWS Wavelength Zone by using Terraform
<a name="deploy-resources-wavelength-zone-using-terraform"></a>

*Zahoor Chaudhrey and Luca Iannario, Amazon Web Services*

## Summary
<a name="deploy-resources-wavelength-zone-using-terraform-summary"></a>

[AWS Wavelength](https://docs.aws.amazon.com/wavelength/latest/developerguide/what-is-wavelength.html) helps you build infrastructure that is optimized for Multi-Access Edge Computing (MEC) applications. *Wavelength Zones* are AWS infrastructure deployments that embed AWS compute and storage services within communications service providers’ (CSP) 5G networks. Application traffic from 5G devices reaches application servers running in Wavelength Zones without leaving the telecommunications network. The following components facilitate network connectivity through Wavelength:
+ **Virtual private clouds (VPCs)** – VPCs in an AWS account can extend to span multiple Availability Zones, including Wavelength Zones. Amazon Elastic Compute Cloud (Amazon EC2) instances and related services appear as part of your Regional VPC. VPCs are created and managed in [Amazon Virtual Private Cloud (Amazon VPC)](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html).
+ **Carrier gateway** – A carrier gateway enables connectivity from the subnet in the Wavelength Zone to the CSP network, the internet, or the AWS Region through the CSP’s network. The carrier gateway serves two purposes. It allows inbound traffic from a CSP network in a specific location, and it allows outbound traffic to the telecommunications network and the internet.

This pattern and its associated Terraform code help you launch resources, such as Amazon EC2 instances, Amazon Elastic Block Store (Amazon EBS) volumes, VPCs, subnets, and a carrier gateway, in a Wavelength Zone.
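At a high level, the Terraform resources involved look like the following sketch. The CIDR ranges, Wavelength Zone name, and resource names are illustrative; the repository code is the authoritative version:

```
# Subnet placed in the opted-in Wavelength Zone
resource "aws_subnet" "wavelength" {
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.0.1.0/24"
  availability_zone = "us-east-1-wl1-bos-wlz-1"
}

# Carrier gateway attached to the VPC
resource "aws_ec2_carrier_gateway" "this" {
  vpc_id = aws_vpc.main.id
}

# Route table that sends outbound traffic through the carrier gateway
resource "aws_route_table" "wavelength" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block         = "0.0.0.0/0"
    carrier_gateway_id = aws_ec2_carrier_gateway.this.id
  }
}
```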

## Prerequisites and limitations
<a name="deploy-resources-wavelength-zone-using-terraform-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ An integrated development environment (IDE)
+ [Opt in](https://docs.aws.amazon.com/wavelength/latest/developerguide/get-started-wavelength.html#enable-zone-group) to the target Wavelength Zone
+ AWS Command Line Interface (AWS CLI), [installed](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) and [configured](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html)
+ Terraform version 1.8.4 or later, [installed](https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli) (Terraform documentation)
+ Terraform AWS Provider version 5.32.1 or later, [configured](https://hashicorp.github.io/terraform-provider-aws/) (Terraform documentation)
+ Git, [installed](https://github.com/git-guides/install-git) (GitHub)
+ [Permissions](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html) to create Amazon VPC, Wavelength, and Amazon EC2 resources

**Limitations**

Not all AWS Regions support Wavelength Zones. For more information, see [Available Wavelength Zones](https://docs.aws.amazon.com/wavelength/latest/developerguide/available-wavelength-zones.html) in the Wavelength documentation.

## Architecture
<a name="deploy-resources-wavelength-zone-using-terraform-architecture"></a>

The following diagram shows how you can create a subnet and AWS resources in a Wavelength Zone. VPCs that contain a subnet in a Wavelength Zone can connect to a carrier gateway. A carrier gateway allows you to connect to the following resources:
+ 4G/LTE and 5G devices on the telecommunication carrier's network.
+ Fixed wireless access for select Wavelength Zone partners. For more information, see [Multi-access AWS Wavelength](https://docs.aws.amazon.com/wavelength/latest/developerguide/multi-access.html).
+ Outbound traffic to public internet resources.

![\[A carrier gateway connects AWS resources in the Wavelength Zone to the CSP network.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/8c507de1-208c-4563-bb58-52388ab2fa6d/images/a4cc0699-0cbc-4f15-ab14-3ae569ced7f4.png)


## Tools
<a name="deploy-resources-wavelength-zone-using-terraform-tools"></a>

**AWS services**
+ [Amazon Virtual Private Cloud (Amazon VPC)](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html) helps you launch AWS resources into a virtual network that you’ve defined. This virtual network resembles a traditional network that you’d operate in your own data center, with the benefits of using the scalable infrastructure of AWS.
+ [AWS Wavelength](https://docs.aws.amazon.com/wavelength/latest/developerguide/what-is-wavelength.html) extends AWS Cloud infrastructure to telecommunication providers’ 5G networks. This helps you build applications that deliver ultra-low latencies to mobile devices and end users.

**Other tools**
+ [Terraform](https://www.terraform.io/) is an infrastructure as code (IaC) tool from HashiCorp that helps you create and manage cloud and on-premises resources.

**Code repository**

The code for this pattern is available in the GitHub [Creating AWS Wavelength Infrastructure using Terraform](https://github.com/aws-samples/terraform-wavelength-infrastructure) repository. The Terraform code deploys the following infrastructure and resources:
+ A VPC
+ A Wavelength Zone
+ A public subnet in the Wavelength Zone
+ A carrier gateway in the Wavelength Zone
+ An Amazon EC2 instance in the Wavelength Zone

## Best practices
<a name="deploy-resources-wavelength-zone-using-terraform-best-practices"></a>
+ Before deploying, confirm that you're using the latest versions of Terraform and the AWS CLI.
+ Use a continuous integration and continuous delivery (CI/CD) pipeline to deploy IaC. For more information, see [Best practices for managing Terraform State files in AWS CI/CD Pipeline](https://aws.amazon.com/blogs/devops/best-practices-for-managing-terraform-state-files-in-aws-ci-cd-pipeline/) on AWS Blogs.

## Epics
<a name="deploy-resources-wavelength-zone-using-terraform-epics"></a>

### Provision the infrastructure
<a name="provision-the-infrastructure"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clone the repository. | Enter the following command to clone the [Creating AWS Wavelength Infrastructure using Terraform](https://github.com/aws-samples/terraform-wavelength-infrastructure) repository to your environment.`git clone git@github.com:aws-samples/terraform-wavelength-infrastructure.git` | DevOps engineer | 
| Update the variables. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-resources-wavelength-zone-using-terraform.html) | DevOps engineer, Terraform | 
| Initialize the configuration. | Enter the following command to initialize the working directory.<pre>terraform init</pre> | DevOps engineer, Terraform | 
| Preview the Terraform plan. | Enter the following command to compare the target state against the current state of your AWS environment. This command generates a preview of the resources that will be configured.<pre>terraform plan</pre> | DevOps engineer, Terraform | 
| Verify and deploy. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-resources-wavelength-zone-using-terraform.html) | DevOps engineer, Terraform | 

### Validate and clean up
<a name="validate-and-clean-up"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Verify the infrastructure deployment. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-resources-wavelength-zone-using-terraform.html) | AWS DevOps, DevOps engineer | 
| (Optional) Clean up the infrastructure. | If you need to delete all of the resources that were provisioned by Terraform, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-resources-wavelength-zone-using-terraform.html) | DevOps engineer, Terraform | 

## Troubleshooting
<a name="deploy-resources-wavelength-zone-using-terraform-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| Connectivity to Amazon EC2 instances in the AWS Region. | See [Troubleshoot connecting to your Linux instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/TroubleshootingInstancesConnecting.html) or [Troubleshoot connecting to your Windows instance](https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/troubleshooting-windows-instances.html). | 
| Connectivity to Amazon EC2 instances in the Wavelength Zone. | See [Troubleshoot SSH or RDP connectivity to my EC2 instances launched in a Wavelength Zone](https://repost.aws/knowledge-center/ec2-wavelength-zone-connection-errors). | 
| Capacity in the Wavelength Zone. | See [Quotas and considerations for Wavelength Zones](https://docs.aws.amazon.com/wavelength/latest/developerguide/wavelength-quotas.html). | 
| Mobile or carrier connectivity from the carrier network to the AWS Region. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-resources-wavelength-zone-using-terraform.html) | 

## Related resources
<a name="deploy-resources-wavelength-zone-using-terraform-resources"></a>
+ [What is AWS Wavelength?](https://docs.aws.amazon.com/wavelength/latest/developerguide/what-is-wavelength.html)
+ [How AWS Wavelength works](https://docs.aws.amazon.com/wavelength/latest/developerguide/how-wavelengths-work.html)
+ [Resilience in AWS Wavelength](https://docs.aws.amazon.com/wavelength/latest/developerguide/disaster-recovery-resiliency.html)

# Migrate DNS records in bulk to an Amazon Route 53 private hosted zone
<a name="migrate-dns-records-in-bulk-to-an-amazon-route-53-private-hosted-zone"></a>

*Ram Kandaswamy, Amazon Web Services*

## Summary
<a name="migrate-dns-records-in-bulk-to-an-amazon-route-53-private-hosted-zone-summary"></a>

Network engineers and cloud administrators need an efficient and simple way to add Domain Name System (DNS) records to private hosted zones in Amazon Route 53. Manually copying entries from a Microsoft Excel worksheet into the appropriate locations in the Route 53 console is tedious and error prone. This pattern describes an automated approach that reduces the time and effort required to add multiple records. It also provides a repeatable set of steps for creating multiple hosted zones.

This pattern uses Amazon Simple Storage Service (Amazon S3) to store records. To work with data efficiently, the pattern uses the JSON format because of its simplicity and because it maps directly to the Python dictionary (`dict`) data type.

**Note**  
If you can generate a zone file from your system, consider using the [Route 53 import feature](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resource-record-sets-creating-import.html) instead.

## Prerequisites and limitations
<a name="migrate-dns-records-in-bulk-to-an-amazon-route-53-private-hosted-zone-prereqs"></a>

**Prerequisites**
+ An Excel worksheet that contains private hosted zone records
+ Familiarity with different types of DNS records such as A record, Name Authority Pointer (NAPTR) record, and SRV record (see [Supported DNS record types](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/ResourceRecordTypes.html))
+ Familiarity with the Python language and its libraries

**Limitations**
+ The pattern doesn’t provide extensive coverage for all use case scenarios. For example, the [change_resource_record_sets](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/route53.html#Route53.Client.change_resource_record_sets) call doesn’t use all the available properties of the API.
+ In the Excel worksheet, the value in each row is assumed to be unique. Multiple values for each fully qualified domain name (FQDN) are expected to appear in the same row. If that is not true, you should modify the code provided in this pattern to perform the necessary concatenation.
+ The pattern uses the AWS SDK for Python (Boto3) to call the Route 53 service directly. You can enhance the code to use an AWS CloudFormation wrapper for the `create_stack` and `update_stack` commands, and use the JSON values to populate template resources.
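If your export instead spreads multiple values for one FQDN across several rows, a grouping step can merge them before the record sets are created. The following Python sketch assumes record keys that match the worksheet columns used in this pattern:

```python
from collections import defaultdict

def merge_values(records):
    """Group records by (FQDN, record type) and collect their values,
    because Route 53 expects one record set that lists every value."""
    grouped = defaultdict(list)
    ttls = {}
    for rec in records:
        key = (rec["FqdnName"], rec["RecordType"])
        grouped[key].append(str(rec["Value"]))
        ttls[key] = rec["TTL"]
    return [{"FqdnName": fqdn, "RecordType": rtype,
             "Values": values, "TTL": ttls[(fqdn, rtype)]}
            for (fqdn, rtype), values in grouped.items()]

rows = [
    {"FqdnName": "something.example.org", "RecordType": "A",
     "Value": "1.1.1.1", "TTL": 900},
    {"FqdnName": "something.example.org", "RecordType": "A",
     "Value": "1.1.1.2", "TTL": 900},
]
merged = merge_values(rows)
```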

## Architecture
<a name="migrate-dns-records-in-bulk-to-an-amazon-route-53-private-hosted-zone-architecture"></a>

**Technology stack**
+ Route 53 private hosted zones for routing traffic
+ Amazon S3 for storing the output JSON file

![\[Workflow for migrating DNS records in bulk to a Route 53 private hosted zone.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/a81c29ea-f0c5-4d4a-ba87-93111a0f1ee9/images/2ada844b-4147-4f9f-8883-d22605aa42d8.png)


The workflow consists of these steps, as illustrated in the previous diagram and discussed in the *Epics* section:

1. Upload an Excel worksheet that has the record set information to an S3 bucket.

1. Create and run a Python script that converts the Excel data to JSON format.

1. Read the records from the S3 bucket and clean the data.

1. Create record sets in your private hosted zone.
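The last step ultimately produces a `ChangeBatch` payload for the Route 53 `change_resource_record_sets` API. The following Python sketch shows the shape of that payload built from one cleaned JSON record; the field names are assumed to match the worksheet columns used later in this pattern:

```python
def to_change(record):
    """Build one UPSERT change for change_resource_record_sets
    from a cleaned JSON record."""
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": record["FqdnName"],
            "Type": record["RecordType"],
            "TTL": int(record["TTL"]),
            "ResourceRecords": [{"Value": str(record["Value"])}],
        },
    }

record = {"FqdnName": "something.example.org", "RecordType": "A",
          "Value": "1.1.1.1", "TTL": 900}
change_batch = {"Changes": [to_change(record)]}
# Pass change_batch to route53_client.change_resource_record_sets(
#     HostedZoneId=..., ChangeBatch=change_batch)
```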

## Tools
<a name="migrate-dns-records-in-bulk-to-an-amazon-route-53-private-hosted-zone-tools"></a>
+ [Route 53](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/Welcome.html) – Amazon Route 53 is a highly available and scalable DNS web service that handles domain registration, DNS routing, and health checking.
+ [Amazon S3](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) – Amazon Simple Storage Service (Amazon S3) is an object storage service. You can use Amazon S3 to store and retrieve any amount of data at any time, from anywhere on the web.

## Epics
<a name="migrate-dns-records-in-bulk-to-an-amazon-route-53-private-hosted-zone-epics"></a>

### Prepare data for automation
<a name="prepare-data-for-automation"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an Excel file for your records. | Use the records you exported from your current system to create an Excel worksheet that has the required columns for a record, such as fully qualified domain name (FQDN), record type, Time to Live (TTL), and value. For NAPTR and SRV records, the value is a combination of multiple properties, so use Excel's `concat` method to combine these properties.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-dns-records-in-bulk-to-an-amazon-route-53-private-hosted-zone.html) | Data engineer, Excel skills | 
| Verify the working environment. | In your IDE, create a Python file to convert the Excel input worksheet to JSON format. (Instead of an IDE, you can also use an Amazon SageMaker notebook to work with Python code.) Verify that you're running Python version 3.7 or later.<pre>python3 --version</pre>Install the **pandas** package.<pre>pip3 install pandas --user</pre> | General AWS | 
| Convert the Excel worksheet data to JSON. | Create a Python file that contains the following code to convert from Excel to JSON.<pre>import pandas as pd<br />data = pd.read_excel('./Book1.xls')<br />data.to_json(path_or_buf='my.json', orient='records')</pre>where `Book1.xls` is the name of the Excel file and `my.json` is the name of the output JSON file. | Data engineer, Python skills | 
| Upload the JSON file to an S3 bucket. | Upload the `my.json` file to an S3 bucket. For more information, see [Creating a bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/create-bucket-overview.html) in the Amazon S3 documentation. | App developer | 

**Example worksheet row**

| FqdnName | RecordType | Value | TTL | 
| --- | --- | --- | --- | 
| something.example.org | A | 1.1.1.1 | 900 | 
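After the conversion step, `my.json` holds one JSON object per worksheet row. A minimal sketch of that shape, using the sample row above (the keys match the worksheet column names; `rows_to_json` mimics the compact output of pandas' `to_json(orient='records')`):

```python
import json

# Sample row from the worksheet above; the keys match the column headers.
sample_rows = [
    {
        "FqdnName": "something.example.org",
        "RecordType": "A",
        "Value": "1.1.1.1",
        "TTL": 900,
    }
]

def rows_to_json(rows):
    """Produce the same compact layout as pandas' to_json(orient='records')."""
    return json.dumps(rows, separators=(",", ":"))

print(rows_to_json(sample_rows))
```

The loader script in the next epic iterates over exactly this array of objects.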

### Insert records
<a name="insert-records"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a private hosted zone. | Use the [create_hosted_zone](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/route53.html#Route53.Client.create_hosted_zone) API and the following Python sample code to create a private hosted zone. Replace the parameters `hostedZoneName`, `vpcRegion`, and `vpcId` with your own values.<pre>import boto3<br />import random<br />hostedZoneName = "xxx"<br />vpcRegion = "us-east-1"<br />vpcId = "vpc-xxxx"<br />route53_client = boto3.client('route53')<br />response = route53_client.create_hosted_zone(<br />    Name=hostedZoneName,<br />    VPC={<br />        'VPCRegion': vpcRegion,<br />        'VPCId': vpcId<br />    },<br />    CallerReference=str(random.random()*100000),<br />    HostedZoneConfig={<br />        'Comment': "private hosted zone created by automation",<br />        'PrivateZone': True<br />    }<br />)<br />print(response)</pre>You can also use an infrastructure as code (IaC) tool such as AWS CloudFormation to replace these steps with a template that creates a stack with the appropriate resources and properties. | Cloud architect, Network administrator, Python skills | 
| Retrieve details as a dictionary from Amazon S3. | Use the following code to read the JSON file from the S3 bucket. Replace `bucket_name` with the name of your bucket.<pre>import json<br />import boto3<br />s3_client = boto3.client('s3')<br />fileobj = s3_client.get_object(<br />    Bucket=bucket_name,<br />    Key='my.json'<br />)<br />filedata = fileobj['Body'].read()<br />contents = filedata.decode('utf-8')<br />json_content = json.loads(contents)<br />print(json_content)</pre>where `json_content` contains the records as a list of Python dictionaries. | App developer, Python skills | 
| Clean data values for spaces and Unicode characters. | As a safety measure to ensure the correctness of the data, use the following code to perform a strip operation on the values in `json_content`. This code removes the space characters at the front and end of each string. It also uses the `replace` method to remove hard (non-breaking) spaces (the `\xa0` characters). The cleaned value is wrapped in a list because the Route 53 API expects `ResourceRecords` to be a list.<pre>import unicodedata<br />for item in json_content:<br />    fqn_name = unicodedata.normalize("NFKD", item["FqdnName"].replace("u'", "'").replace('\xa0', '').strip())<br />    rec_type = item["RecordType"].replace('\xa0', '').strip()<br />    res_rec = [{<br />        'Value': item["Value"].replace('\xa0', '').strip()<br />    }]</pre> | App developer, Python skills | 
| Insert records. | Use the following code as part of the previous `for` loop.<pre>change_response = route53_client.change_resource_record_sets(<br />    HostedZoneId="xxxxxxxx",<br />    ChangeBatch={<br />        'Comment': 'Created by automation',<br />        'Changes': [<br />            {<br />                'Action': 'UPSERT',<br />                'ResourceRecordSet': {<br />                    'Name': fqn_name,<br />                    'Type': rec_type,<br />                    'TTL': item["TTL"],<br />                    'ResourceRecords': res_rec<br />                }<br />            }<br />        ]<br />    }<br />)</pre>where `xxxxxxxx` is the hosted zone ID from the first step of this epic. | App developer, Python skills | 
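The read, clean, and upsert steps in this epic can be assembled into one script. The following is a sketch under the same assumptions (the bucket name and hosted zone ID are placeholders); `build_change` isolates the cleaning logic so it can be exercised without AWS access:

```python
import json
import unicodedata

def build_change(item):
    """Normalize one record from my.json and shape it for change_resource_record_sets.

    Strips leading/trailing whitespace and non-breaking spaces (\xa0),
    as in the cleaning step above.
    """
    fqdn = unicodedata.normalize(
        "NFKD", item["FqdnName"].replace("u'", "'").replace("\xa0", "").strip()
    )
    rec_type = item["RecordType"].replace("\xa0", "").strip()
    value = item["Value"].replace("\xa0", "").strip()
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": fqdn,
            "Type": rec_type,
            "TTL": item["TTL"],
            "ResourceRecords": [{"Value": value}],
        },
    }

if __name__ == "__main__":
    # Placeholders: replace the bucket name and hosted zone ID with your values.
    import boto3

    s3 = boto3.client("s3")
    route53 = boto3.client("route53")
    obj = s3.get_object(Bucket="bucket_name", Key="my.json")
    for item in json.loads(obj["Body"].read().decode("utf-8")):
        route53.change_resource_record_sets(
            HostedZoneId="hosted_zone_id",
            ChangeBatch={
                "Comment": "Created by automation",
                "Changes": [build_change(item)],
            },
        )
```

Because `build_change` is a pure function, you can unit test it against sample rows before running the script against your hosted zone.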

## Related resources
<a name="migrate-dns-records-in-bulk-to-an-amazon-route-53-private-hosted-zone-resources"></a>

**References**
+ [Creating records by importing a zone file](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resource-record-sets-creating-import.html) (Amazon Route 53 documentation)
+ [create_hosted_zone method](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/route53.html#Route53.Client.create_hosted_zone) (Boto3 documentation)
+ [change_resource_record_sets method](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/route53.html#Route53.Client.change_resource_record_sets) (Boto3 documentation)

**Tutorials and videos**
+ [The Python Tutorial](https://docs.python.org/3/tutorial/) (Python documentation)
+ [DNS design using Amazon Route 53](https://www.youtube.com/watch?v=2y_RBjDkRgY) (YouTube video, *AWS Online Tech Talks*)

# Modify HTTP headers when you migrate from F5 to an Application Load Balancer on AWS
<a name="modify-http-headers-when-you-migrate-from-f5-to-an-application-load-balancer-on-aws"></a>

*Sachin Trivedi, Amazon Web Services*

## Summary
<a name="modify-http-headers-when-you-migrate-from-f5-to-an-application-load-balancer-on-aws-summary"></a>

When you migrate an application that uses an F5 load balancer to Amazon Web Services (AWS) and want to use an Application Load Balancer on AWS, migrating the F5 rules for header modifications is a common problem. An Application Load Balancer doesn’t support header modifications, but you can use Amazon CloudFront as a content delivery network (CDN) and Lambda@Edge to modify headers.

This pattern describes the required integrations and provides sample code for header modification by using Amazon CloudFront and Lambda@Edge.

## Prerequisites and limitations
<a name="modify-http-headers-when-you-migrate-from-f5-to-an-application-load-balancer-on-aws-prereqs"></a>

**Prerequisites**
+ An on-premises application that uses an F5 load balancer with a configuration that replaces the HTTP header value by using `if`/`else` logic. For more information about this configuration, see [HTTP::header](https://clouddocs.f5.com/api/irules/HTTP__header.html) in the F5 product documentation.

**Limitations**
+ This pattern applies to F5 load balancer header customization. For other third-party load balancers, see the load balancer documentation for support information.
+ The Lambda functions that you use for Lambda@Edge must be in the US East (N. Virginia) Region.

## Architecture
<a name="modify-http-headers-when-you-migrate-from-f5-to-an-application-load-balancer-on-aws-architecture"></a>

The following diagram shows the architecture on AWS, including the integration flow between the CDN and other AWS components.

![\[Architecture for header modification by using Amazon CloudFront and Lambda@Edge\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/00abbe3c-2453-4291-9b24-b488dced4868/images/4ee9a19e-6da2-4c5a-a8bc-19d3918a166e.png)


## Tools
<a name="modify-http-headers-when-you-migrate-from-f5-to-an-application-load-balancer-on-aws-tools"></a>

**AWS services**
+ [Application Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html) ─ An Application Load Balancer is a fully managed AWS load-balancing service that operates at the seventh layer (the application layer) of the Open Systems Interconnection (OSI) model. It distributes traffic across multiple targets and supports advanced request routing based on HTTP headers and methods, query strings, and host-based or path-based rules.
+ [Amazon CloudFront](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Introduction.html) – Amazon CloudFront is a web service that speeds up the distribution of your static and dynamic web content, such as .html, .css, .js, and image files, to your users. CloudFront delivers your content through a worldwide network of data centers called edge locations for lower latency and improved performance.
+ [Lambda@Edge](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/lambda-at-the-edge.html) ─ Lambda@Edge is an extension of AWS Lambda that lets you run functions to customize the content that CloudFront delivers. You can author functions in the US East (N. Virginia) Region, and then associate the function with a CloudFront distribution to automatically replicate your code around the world, without provisioning or managing servers. This reduces latency and improves the user experience.

**Code**

The following sample code provides a blueprint for modifying CloudFront response headers. Follow the instructions in the *Epics* section to deploy the code.

```
exports.handler = async (event, context) => {
    const response = event.Records[0].cf.response;
    const headers = response.headers;


    const headerNameSrc = 'content-security-policy';
    const headerNameValue = '*.xyz.com';


    if (headers[headerNameSrc.toLowerCase()]) {
        headers[headerNameSrc.toLowerCase()] = [{
            key: headerNameSrc,
            value: headerNameValue,
        }];
        console.log(`Response header "${headerNameSrc}" was set to ` +
                    `"${headers[headerNameSrc.toLowerCase()][0].value}"`);
    }
    else {
            headers[headerNameSrc.toLowerCase()] = [{
            key: headerNameSrc,
            value: headerNameValue,
            }];
    }
    return response;
};
```
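Lambda@Edge also supports Python runtimes. A Python sketch of the same upsert logic follows; the header name and sample value are taken from the Node.js blueprint above:

```python
def handler(event, context):
    """Set (or overwrite) a response header before CloudFront returns it.

    Equivalent to the Node.js blueprint above; content-security-policy and
    *.xyz.com are the sample header name and value from that blueprint.
    """
    response = event["Records"][0]["cf"]["response"]
    headers = response["headers"]

    header_name = "content-security-policy"
    header_value = "*.xyz.com"

    # CloudFront presents header keys in lowercase in the event payload,
    # so write the entry under the lowercased name.
    headers[header_name.lower()] = [{"key": header_name, "value": header_value}]
    return response
```

Whether or not the header already exists in the response, the function leaves it set to the configured value, which is the same behavior as the `if`/`else` branches in the Node.js version.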

## Epics
<a name="modify-http-headers-when-you-migrate-from-f5-to-an-application-load-balancer-on-aws-epics"></a>

### Create a CDN distribution
<a name="create-a-cdn-distribution"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a CloudFront web distribution.  | In this step, you create a CloudFront distribution to tell CloudFront where you want content to be delivered from, and the details about how to track and manage content delivery.To create a distribution by using the console, sign in to the AWS Management Console, open the [CloudFront console](https://console.aws.amazon.com/cloudfront/v3/home), and then follow the steps in the [CloudFront documentation](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/distribution-web-creating-console.html). | Cloud administrator | 

### Create and deploy the Lambda@Edge function
<a name="create-and-deploy-the-lambda-edge-function"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create and deploy a Lambda@Edge function. | You can create a Lambda@Edge function by using a blueprint for modifying CloudFront response headers. (Other blueprints are available for different use cases; for more information, see [Lambda@Edge example functions](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/lambda-examples.html) in the CloudFront documentation.) To create a Lambda@Edge function:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modify-http-headers-when-you-migrate-from-f5-to-an-application-load-balancer-on-aws.html) | AWS administrator | 
| Deploy the Lambda@Edge function. | Follow the instructions in [step 4](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/lambda-edge-how-it-works-tutorial.html#lambda-edge-how-it-works-tutorial-add-trigger) of the *Tutorial: Creating a simple Lambda@Edge function* in the Amazon CloudFront documentation to configure the CloudFront trigger and deploy the function. | AWS administrator | 

## Related resources
<a name="modify-http-headers-when-you-migrate-from-f5-to-an-application-load-balancer-on-aws-resources"></a>

**CloudFront documentation**
+ [Request and response behavior for custom origins](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/RequestAndResponseBehaviorCustomOrigin.html) 
+ [Working with distributions](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/distribution-working-with.html) 
+ [Lambda@Edge example functions](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/lambda-examples.html) 
+ [Customizing at the edge with Lambda@Edge](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/lambda-at-the-edge.html)
+ [Tutorial: Creating a simple Lambda@Edge function](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/lambda-edge-how-it-works-tutorial.html)

# Create a report of Network Access Analyzer findings for inbound internet access in multiple AWS accounts
<a name="create-a-report-of-network-access-analyzer-findings-for-inbound-internet-access-in-multiple-aws-accounts"></a>

*Mike Virgilio, Amazon Web Services*

## Summary
<a name="create-a-report-of-network-access-analyzer-findings-for-inbound-internet-access-in-multiple-aws-accounts-summary"></a>

Unintentional inbound internet access to AWS resources can pose risks to an organization’s data perimeter. [Network Access Analyzer](https://docs.aws.amazon.com/vpc/latest/network-access-analyzer/what-is-network-access-analyzer.html) is an Amazon Virtual Private Cloud (Amazon VPC) feature that helps you identify unintended network access to your resources on Amazon Web Services (AWS). You can use Network Access Analyzer to specify your network access requirements and to identify potential network paths that do not meet those requirements. You can use this feature to do the following:

1. Identify AWS resources that are accessible to the internet through internet gateways.

1. Validate that your virtual private clouds (VPCs) are appropriately segmented, such as isolating production and development environments and separating transactional workloads.

Network Access Analyzer analyzes end-to-end network reachability conditions and not just a single component. To determine whether a resource is internet accessible, Network Access Analyzer evaluates the internet gateway, VPC route tables, network access control lists (ACLs), public IP addresses on elastic network interfaces, and security groups. If any of these components prevent internet access, Network Access Analyzer doesn’t generate a finding. For example, if an Amazon Elastic Compute Cloud (Amazon EC2) instance has an open security group that allows traffic from `0.0.0.0/0` but the instance is in a private subnet that isn’t routable from any internet gateway, then Network Access Analyzer wouldn’t generate a finding. This provides high-fidelity results so that you can identify resources that are truly accessible from the internet.

When you run Network Access Analyzer, you use [Network Access Scopes](https://docs.aws.amazon.com/vpc/latest/network-access-analyzer/what-is-network-access-analyzer.html#concepts) to specify your network access requirements. This solution identifies network paths between an internet gateway and an elastic network interface. In this pattern, you deploy the solution in a centralized AWS account in your organization, managed by AWS Organizations, and it analyzes all of the accounts, in any AWS Region, in the organization.

This solution was designed with the following in mind:
+ The AWS CloudFormation templates reduce the effort required to deploy the AWS resources in this pattern.
+ You can adjust the parameters in the CloudFormation templates and **naa-script.sh** script at the time of deployment to customize them for your environment.
+ Bash scripting automatically provisions and analyzes the Network Access Scopes for multiple accounts, in parallel.
+ A Python script processes the findings, extracts the data, and then consolidates the results. You can choose to review the consolidated report of Network Access Analyzer findings in CSV format or in AWS Security Hub CSPM. An example of the CSV report is available in the [Additional information](#create-a-report-of-network-access-analyzer-findings-for-inbound-internet-access-in-multiple-aws-accounts-additional) section of this pattern.
+ You can remediate findings, or you can exclude them from future analyses by adding them to the **naa-exclusions.csv** file.

## Prerequisites and limitations
<a name="create-a-report-of-network-access-analyzer-findings-for-inbound-internet-access-in-multiple-aws-accounts-prereqs"></a>

**Prerequisites**
+ An AWS account for hosting security services and tools, managed as a member account of an organization in AWS Organizations. In this pattern, this account is referred to as the security account.
+ In the security account, you must have a private subnet with outbound internet access. For instructions, see [Create a subnet](https://docs.aws.amazon.com/vpc/latest/userguide/create-subnets.html) in the Amazon VPC documentation. You can establish internet access by using a [NAT gateway](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html) or an [interface VPC endpoint](https://docs.aws.amazon.com/vpc/latest/privatelink/create-interface-endpoint.html).
+ Access to the AWS Organizations management account or an account that has delegated administrator permissions for CloudFormation. For instructions, see [Register a delegated administrator](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-orgs-delegated-admin.html) in the CloudFormation documentation.
+ Enable trusted access between AWS Organizations and CloudFormation. For instructions, see [Enable trusted access with AWS Organizations](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-orgs-enable-trusted-access.html) in the CloudFormation documentation.
+ If you’re uploading the findings to Security Hub CSPM, Security Hub CSPM must be enabled in the account and AWS Region where the Amazon EC2 instance is provisioned. For more information, see [Setting up AWS Security Hub CSPM](https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-settingup.html).

**Limitations**
+ Cross-account network paths are not currently analyzed due to limitations of the Network Access Analyzer feature.
+ The target AWS accounts must be managed as an organization in AWS Organizations. If you are not using AWS Organizations, you can update the **naa-execrole.yaml** CloudFormation template and the **naa-script.sh** script for your environment. Instead, you provide a list of AWS account IDs and Regions where you want to run the script.
+ The CloudFormation template is designed to deploy the Amazon EC2 instance in a private subnet that has outbound internet access. The AWS Systems Manager Agent (SSM Agent) requires outbound access to reach the Systems Manager service endpoint, and you need outbound access to clone the code repository and install dependencies. If you want to use a public subnet, you must modify the **naa-resources.yaml** template to associate an [Elastic IP address](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/elastic-ip-addresses-eip.html) with the Amazon EC2 instance.

## Architecture
<a name="create-a-report-of-network-access-analyzer-findings-for-inbound-internet-access-in-multiple-aws-accounts-architecture"></a>

**Target architecture**

*Option 1: Access findings in an Amazon S3 bucket*

![\[Architecture diagram of accessing the Network Access Analyzer findings report in an Amazon S3 bucket\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/eda6abba-632a-4e3d-92b9-31848fa6dead/images/d0b08437-e5b0-47a1-abdd-040c67b5da8f.png)


The diagram shows the following process:

1. If you’re manually running the solution, the user authenticates to the Amazon EC2 instance by using Session Manager and then runs the **naa-script.sh** script. This shell script performs steps 2–7.

   If you’re automatically running the solution, the **naa-script.sh** script starts automatically on the schedule you defined in the cron expression. This shell script performs steps 2–7. For more information, see *Automation and scale* at the end of this section.

1. The Amazon EC2 instance downloads the latest **naa-exclusions.csv** file from the Amazon S3 bucket. This file is used later in the process when the Python script processes the exclusions.

1. The Amazon EC2 instance assumes the `NAAEC2Role` AWS Identity and Access Management (IAM) role, which grants permissions to access the Amazon S3 bucket and to assume the `NAAExecRole` IAM roles in the other accounts in the organization.

1. The Amazon EC2 instance assumes the `NAAExecRole` IAM role in the organization’s management account and generates a list of the accounts in the organization.

1. The Amazon EC2 instance assumes the `NAAExecRole` IAM role in the organization’s member accounts (called *workload accounts* in the architecture diagram) and performs a security assessment in each account. The findings are stored as JSON files on the Amazon EC2 instance.

1. The Amazon EC2 instance uses a Python script to process the JSON files, extract the data fields, and create a CSV report.

1. The Amazon EC2 instance uploads the CSV file to the Amazon S3 bucket.

1. An Amazon EventBridge rule detects the file upload and uses an Amazon SNS topic to send an email that notifies the user that the report is complete.

1. The user downloads the CSV file from the Amazon S3 bucket. The user imports the results into the Excel template and reviews the results.
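The cross-account portion of this flow (steps 3–5) can be sketched with boto3 and AWS STS. The role name below comes from this pattern's CloudFormation templates; the account ID and session name are illustrative placeholders:

```python
def exec_role_arn(account_id, role_name="NAAExecRole"):
    """ARN of the execution role that naa-execrole.yaml deploys in each account."""
    return f"arn:aws:iam::{account_id}:role/{role_name}"

def session_for(account_id):
    """Assume NAAExecRole in a member account, as in steps 3-5 of the diagram."""
    import boto3  # imported here so the pure helper above has no AWS dependency

    creds = boto3.client("sts").assume_role(
        RoleArn=exec_role_arn(account_id),
        RoleSessionName="naa-analysis",
    )["Credentials"]
    return boto3.Session(
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
```

A session returned by `session_for` can then create Regional `ec2` clients to provision and run Network Access Scopes in that member account, which is what the **naa-script.sh** script automates in parallel.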

*Option 2: Access findings in AWS Security Hub CSPM*

![\[Architecture diagram of accessing the Network Access Analyzer findings through AWS Security Hub\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/eda6abba-632a-4e3d-92b9-31848fa6dead/images/9cb4f059-dfb6-4a33-9f8d-159fe5df0d64.png)


The diagram shows the following process:

1. If you’re manually running the solution, the user authenticates to the Amazon EC2 instance by using Session Manager and then runs the **naa-script.sh** script. This shell script performs steps 2–7.

   If you’re automatically running the solution, the **naa-script.sh** script starts automatically on the schedule you defined in the cron expression. This shell script performs steps 2–7. For more information, see *Automation and scale* at the end of this section.

1. The Amazon EC2 instance downloads the latest **naa-exclusions.csv** file from the Amazon S3 bucket. This file is used later in the process when the Python script processes the exclusions.

1. The Amazon EC2 instance assumes the `NAAEC2Role` IAM role, which grants permissions to access the Amazon S3 bucket and to assume the `NAAExecRole` IAM roles in the other accounts in the organization.

1. The Amazon EC2 instance assumes the `NAAExecRole` IAM role in the organization’s management account and generates a list of the accounts in the organization.

1. The Amazon EC2 instance assumes the `NAAExecRole` IAM role in the organization’s member accounts (called *workload accounts* in the architecture diagram) and performs a security assessment in each account. The findings are stored as JSON files on the Amazon EC2 instance.

1. The Amazon EC2 instance uses a Python script to process the JSON files and extract the data fields for import into Security Hub CSPM.

1. The Amazon EC2 instance imports the Network Access Analyzer findings to Security Hub CSPM.

1. An Amazon EventBridge rule detects the import and uses an Amazon SNS topic to send an email that notifies the user that the process is complete.

1. The user views the findings in Security Hub CSPM.

**Automation and scale**

You can schedule this solution to run the **naa-script.sh** script automatically on a custom schedule. To set a custom schedule, in the **naa-resources.yaml** CloudFormation template, modify the `CronScheduleExpression` parameter. For example, the default value of `0 0 * * 0` runs the solution at midnight every Sunday. A value of `0 0 1 * *` would run the solution at midnight on the first day of every month. For more information about using cron expressions, see [Cron and rate expressions](https://docs.aws.amazon.com/systems-manager/latest/userguide/reference-cron-and-rate-expressions.html) in the Systems Manager documentation.

If you want to adjust the schedule after the `NAA-Resources` stack has been deployed, you can manually edit the cron schedule in `/etc/cron.d/naa-schedule`.
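An entry in `/etc/cron.d` uses the system crontab format, which adds a user field between the schedule and the command. The following is a hypothetical sketch of a weekly entry; the actual contents of `naa-schedule`, including the user account, are generated during deployment and may differ:

```shell
# /etc/cron.d/naa-schedule (hypothetical): run the analysis at midnight every Sunday.
# minute  hour  day-of-month  month  day-of-week  user      command
0 0 * * 0  ec2-user  /usr/local/naa/naa-script.sh
```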

## Tools
<a name="create-a-report-of-network-access-analyzer-findings-for-inbound-internet-access-in-multiple-aws-accounts-tools"></a>

**AWS services**
+ [Amazon Elastic Compute Cloud (Amazon EC2)](https://docs.aws.amazon.com/ec2/) provides scalable computing capacity in the AWS Cloud. You can launch as many virtual servers as you need and quickly scale them up or down.
+ [Amazon EventBridge](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-what-is.html) is a serverless event bus service that helps you connect your applications with real-time data from a variety of sources and route that data to targets such as AWS Lambda functions, HTTP invocation endpoints using API destinations, or event buses in other AWS accounts.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.
+ [AWS Organizations](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_introduction.html) is an account management service that helps you consolidate multiple AWS accounts into an organization that you create and centrally manage.
+ [AWS Security Hub CSPM](https://docs.aws.amazon.com/securityhub/latest/userguide/what-is-securityhub.html) provides a comprehensive view of your security state in AWS. It also helps you check your AWS environment against security industry standards and best practices.
+ [Amazon Simple Notification Service (Amazon SNS)](https://docs.aws.amazon.com/sns/latest/dg/welcome.html) helps you coordinate and manage the exchange of messages between publishers and clients, including web servers and email addresses.
+ [Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) is a cloud-based object storage service that helps you store, protect, and retrieve any amount of data.
+ [AWS Systems Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/what-is-systems-manager.html) helps you manage your applications and infrastructure running in the AWS Cloud. It simplifies application and resource management, shortens the time to detect and resolve operational problems, and helps you manage your AWS resources securely at scale. This pattern uses Session Manager, a capability of Systems Manager.

**Code repository**

The code for this pattern is available in the GitHub [Network Access Analyzer Multi-Account Analysis](https://github.com/aws-samples/network-access-analyzer-multi-account-analysis) repository. The code repository contains the following files:
+ **naa-script.sh** – This bash script is used to start a Network Access Analyzer analysis of multiple AWS accounts, in parallel. As defined in the **naa-resources.yaml** CloudFormation template, this script is automatically deployed to the `/usr/local/naa` folder on the Amazon EC2 instance.
+ **naa-resources.yaml** – You use this CloudFormation template to create a stack in the security account in the organization. This template deploys all of the required resources for this account in order to support the solution. This stack must be deployed before the **naa-execrole.yaml** template.
**Note**  
If this stack is deleted and redeployed, you must rebuild the `NAAExecRole` stack set in order to rebuild the cross-account dependencies between the IAM roles.
+ **naa-execrole.yaml** – You use this CloudFormation template to create a stack set that deploys the `NAAExecRole` IAM role in all accounts in the organization, including the management account.
+ **naa-processfindings.py** – The **naa-script.sh** script automatically calls this Python script to process the Network Access Analyzer JSON outputs, exclude any known-good resources in the **naa-exclusions.csv** file, and then either generate a CSV file of the consolidated results or import the results into Security Hub CSPM.

## Epics
<a name="create-a-report-of-network-access-analyzer-findings-for-inbound-internet-access-in-multiple-aws-accounts-epics"></a>

### Prepare for deployment
<a name="prepare-for-deployment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clone the code repository. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-a-report-of-network-access-analyzer-findings-for-inbound-internet-access-in-multiple-aws-accounts.html) | AWS DevOps | 
| Review the templates. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-a-report-of-network-access-analyzer-findings-for-inbound-internet-access-in-multiple-aws-accounts.html) | AWS DevOps | 

### Create the CloudFormation stacks
<a name="create-the-cfnshort-stacks"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Provision resources in the security account. | Using the **naa-resources.yaml** template, you create a CloudFormation stack that deploys all of the required resources in the security account. For instructions, see [Creating a stack](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-console-create-stack.html) in the CloudFormation documentation. Note the following when deploying this template:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-a-report-of-network-access-analyzer-findings-for-inbound-internet-access-in-multiple-aws-accounts.html)[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-a-report-of-network-access-analyzer-findings-for-inbound-internet-access-in-multiple-aws-accounts.html) | AWS DevOps | 
| Provision the IAM role in the member accounts. | In the AWS Organizations management account or an account with delegated administrator permissions for CloudFormation, use the **naa-execrole.yaml** template to create a CloudFormation stack set. The stack set deploys the `NAAExecRole` IAM role in all member accounts in the organization. For instructions, see [Create a stack set with service-managed permissions](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-getting-started-create.html#stacksets-orgs-associate-stackset-with-org) in the CloudFormation documentation. Note the following when deploying this template:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-a-report-of-network-access-analyzer-findings-for-inbound-internet-access-in-multiple-aws-accounts.html) | AWS DevOps | 
| Provision the IAM role in the management account. | Using the **naa-execrole.yaml** template, you create a CloudFormation stack that deploys the `NAAExecRole` IAM role in the management account of the organization. The stack set you created previously doesn’t deploy the IAM role in the management account. For instructions, see [Creating a stack](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-console-create-stack.html) in the CloudFormation documentation. Note the following when deploying this template:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-a-report-of-network-access-analyzer-findings-for-inbound-internet-access-in-multiple-aws-accounts.html) | AWS DevOps | 

### Perform the analysis
<a name="perform-the-analysis"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Customize the shell script. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-a-report-of-network-access-analyzer-findings-for-inbound-internet-access-in-multiple-aws-accounts.html) | AWS DevOps | 
| Analyze the target accounts. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-a-report-of-network-access-analyzer-findings-for-inbound-internet-access-in-multiple-aws-accounts.html) | AWS DevOps | 
| Option 1 – Retrieve the results from the Amazon S3 bucket. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-a-report-of-network-access-analyzer-findings-for-inbound-internet-access-in-multiple-aws-accounts.html) | AWS DevOps | 
| Option 2 – Review the results in Security Hub CSPM. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-a-report-of-network-access-analyzer-findings-for-inbound-internet-access-in-multiple-aws-accounts.html) | AWS DevOps | 

### Remediate and exclude findings
<a name="remediate-and-exclude-findings"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Remediate findings. | Remediate any findings that you want to address. For more information and best practices about how to create a perimeter around your AWS identities, resources, and networks, see [Building a data perimeter on AWS](https://docs.aws.amazon.com/whitepapers/latest/building-a-data-perimeter-on-aws/building-a-data-perimeter-on-aws.html) (AWS Whitepaper). | AWS DevOps | 
| Exclude resources with known-good network paths. | If Network Access Analyzer generates findings for resources that should be accessible from the internet, then you can add these resources to an exclusion list. The next time Network Access Analyzer runs, it won’t generate a finding for that resource.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-a-report-of-network-access-analyzer-findings-for-inbound-internet-access-in-multiple-aws-accounts.html) | AWS DevOps | 

### (Optional) Update the naa-script.sh script
<a name="optional-update-the-naa-script-sh-script"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Update the naa-script.sh script. | If you want to update the **naa-script.sh** script to the latest version in the repo, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-a-report-of-network-access-analyzer-findings-for-inbound-internet-access-in-multiple-aws-accounts.html) | AWS DevOps | 

### (Optional) Clean up
<a name="optional-clean-up"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Delete all deployed resources. | You can leave the resources deployed in the accounts. If you want to deprovision all resources, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-a-report-of-network-access-analyzer-findings-for-inbound-internet-access-in-multiple-aws-accounts.html) | AWS DevOps | 

## Troubleshooting
<a name="create-a-report-of-network-access-analyzer-findings-for-inbound-internet-access-in-multiple-aws-accounts-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| Unable to connect to the Amazon EC2 instance by using Session Manager. | The SSM Agent must be able to communicate with the Systems Manager endpoint. Do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-a-report-of-network-access-analyzer-findings-for-inbound-internet-access-in-multiple-aws-accounts.html) | 
| When deploying the stack set, the CloudFormation console prompts you to `Enable trusted access with AWS Organizations to use service-managed permissions`. | This indicates that trusted access has not been enabled between AWS Organizations and CloudFormation. Trusted access is required to deploy the service-managed stack set. Choose the button to enable trusted access. For more information, see [Enable trusted access](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-orgs-enable-trusted-access.html) in the CloudFormation documentation. | 

## Related resources
<a name="create-a-report-of-network-access-analyzer-findings-for-inbound-internet-access-in-multiple-aws-accounts-resources"></a>
+ [New – Amazon VPC Network Access Analyzer](https://aws.amazon.com/blogs/aws/new-amazon-vpc-network-access-analyzer/) (AWS blog post)
+ [AWS re:Inforce 2022 - Validate effective network access controls on AWS (NIS202)](https://youtu.be/aN2P2zeQek0) (video)
+ [Demo - Organization-wide Internet Ingress Data Path Analysis Using Network Access Analyzer](https://youtu.be/1IFNZWy4iy0) (video)

## Additional information
<a name="create-a-report-of-network-access-analyzer-findings-for-inbound-internet-access-in-multiple-aws-accounts-additional"></a>

**Example console output**

The following sample shows the console output when the script generates the list of target accounts and then analyzes them.

```
[root@ip-10-10-43-82 naa]# ./naa-script.sh
download: s3://naa-<account ID>-us-east-1/naa-exclusions.csv to ./naa-exclusions.csv

AWS Management Account: <Management account ID>

AWS Accounts being processed...
<Account ID 1> <Account ID 2> <Account ID 3>

Assessing AWS Account: <Account ID 1>, using Role: NAAExecRole
Assessing AWS Account: <Account ID 2>, using Role: NAAExecRole
Assessing AWS Account: <Account ID 3>, using Role: NAAExecRole
Processing account: <Account ID 1> / Region: us-east-1
Account: <Account ID 1> / Region: us-east-1 – Detecting Network Analyzer scope...
Processing account: <Account ID 2> / Region: us-east-1
Account: <Account ID 2> / Region: us-east-1 – Detecting Network Analyzer scope...
Processing account: <Account ID 3> / Region: us-east-1
Account: <Account ID 3> / Region: us-east-1 – Detecting Network Analyzer scope...
Account: <Account ID 1> / Region: us-east-1 – Network Access Analyzer scope detected.
Account: <Account ID 1> / Region: us-east-1 – Continuing analyses with Scope ID. Accounts with many resources may take up to one hour
Account: <Account ID 2> / Region: us-east-1 – Network Access Analyzer scope detected.
Account: <Account ID 2> / Region: us-east-1 – Continuing analyses with Scope ID. Accounts with many resources may take up to one hour
Account: <Account ID 3> / Region: us-east-1 – Network Access Analyzer scope detected.
Account: <Account ID 3> / Region: us-east-1 – Continuing analyses with Scope ID. Accounts with many resources may take up to one hour
```

**CSV report examples**

The following images are examples of the CSV output.

![\[Example 1 of the CSV report generated by this solution.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/eda6abba-632a-4e3d-92b9-31848fa6dead/images/55e02e61-054e-4da6-aaae-c9a8b6f4f272.png)


![\[Example 2 of the CSV report generated by this solution.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/eda6abba-632a-4e3d-92b9-31848fa6dead/images/95f980ad-92c1-4392-92d4-9c742755aab2.png)


# Set up DNS resolution for hybrid networks in a multi-account AWS environment
<a name="set-up-dns-resolution-for-hybrid-networks-in-a-multi-account-aws-environment"></a>

*Anvesh Koganti, Amazon Web Services*

## Summary
<a name="set-up-dns-resolution-for-hybrid-networks-in-a-multi-account-aws-environment-summary"></a>

This pattern provides a comprehensive solution for setting up DNS resolution in hybrid network environments that include multiple Amazon Web Services (AWS) accounts. It enables bidirectional DNS resolution between on-premises networks and the AWS environment through Amazon Route 53 Resolver endpoints. The pattern presents two solutions to enable DNS resolution in a [multi-account, centralized architecture](https://docs.aws.amazon.com/whitepapers/latest/hybrid-cloud-dns-options-for-vpc/scaling-dns-management-across-multiple-accounts-and-vpcs.html#multi-account-centralized):
+ *Basic setup* doesn't use Route 53 Profiles. It helps optimize costs for small to medium deployments of lower complexity.
+ *Enhanced setup* uses Route 53 Profiles to simplify operations. It is best for larger or more complex DNS deployments.

**Note**  
Review the *Limitations* section for service limitations and quotas before implementation. Consider factors such as management overhead, costs, operational complexity, and team expertise when you make your decision.

## Prerequisites and limitations
<a name="set-up-dns-resolution-for-hybrid-networks-in-a-multi-account-aws-environment-prereqs"></a>

**Prerequisites**
+ An AWS multi-account environment with Amazon Virtual Private Cloud (Amazon VPC) deployed across Shared Services and workload accounts (preferably set up through [AWS Control Tower by following AWS best practices](https://docs.aws.amazon.com/controltower/latest/userguide/aws-multi-account-landing-zone.html) for account structure).
+ Existing hybrid connectivity (AWS Direct Connect or AWS Site-to-Site VPN) between your on-premises network and the AWS environment.
+ Amazon VPC peering, AWS Transit Gateway, or AWS Cloud WAN for Layer 3 network connectivity between VPCs. (This connectivity is required for application traffic. It is not required for DNS resolution to work. DNS resolution operates independently of network connectivity between the VPCs.)
+ DNS servers running in the on-premises environment.

**Limitations**
+ Route 53 Resolver endpoints, rules, and Profiles are Regional constructs and might require replication in multiple AWS Regions for global organizations.
+ For a comprehensive list of service quotas for Route 53 Resolver, private hosted zones, and Profiles, see [Quotas](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/DNSLimitations.html) in the Route 53 documentation.

## Architecture
<a name="set-up-dns-resolution-for-hybrid-networks-in-a-multi-account-aws-environment-architecture"></a>

**Target technology stack**
+ Route 53 outbound and inbound endpoints
+ Route 53 Resolver rules for conditional forwarding
+ AWS Resource Access Manager (AWS RAM)
+ Route 53 private hosted zone

**Target architecture**

**Outbound and inbound endpoints**

The following diagram shows the DNS resolution flow from AWS to on premises. This is the connectivity setup for outbound resolutions where the domain is hosted on premises. Here is a high-level overview of the process involved in setting this up. For details, see the [Epics](#set-up-dns-resolution-for-hybrid-networks-in-a-multi-account-aws-environment-epics) section.

1. Deploy outbound Route 53 Resolver endpoints in the Shared Services VPC.

1. Create Route 53 Resolver rules (forwarding rules) in the Shared Services account for domains that are hosted on premises.

1. Share and associate the rules with VPCs in other accounts that host resources that need to resolve on-premises hosted domains. This can be done in different ways depending on your use case, as described later in this section.

![\[Inbound and outbound endpoints in an AWS to on premises DNS resolution flow.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/01e700cd-be8c-4a5d-bc89-b901a260d045/images/d69d4cad-5e2c-4481-9370-2708e8a4f8c1.png)


After you set up connectivity, the steps involved in the outbound resolution are as follows:

1. The Amazon Elastic Compute Cloud (Amazon EC2) instance sends a DNS resolution request for `db.onprem.example.com` to the VPC's Route 53 Resolver at the VPC+2 address (the base of the VPC's primary CIDR block plus two).

1. Route 53 Resolver checks the Resolver rules and forwards the request to the on-premises DNS server IPs by using the outbound endpoint.

1. The outbound endpoint forwards the request to the on-premises DNS IPs. The traffic goes over the established hybrid network connectivity between the Shared Services VPC and the on-premises data center.

1. The on-premises DNS server responds back to the outbound endpoint, which then forwards the response back to the VPC's Route 53 Resolver. The Resolver returns the response to the EC2 instance.
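The VPC+2 address referenced in step 1 is derived directly from the VPC's primary CIDR block. The following Python sketch illustrates the arithmetic; the function name is ours for illustration, not an AWS API:

```python
import ipaddress

def resolver_address(vpc_cidr: str) -> str:
    """Return the Route 53 Resolver (Amazon-provided DNS) address for a
    VPC: the base of the primary CIDR block plus two."""
    network = ipaddress.ip_network(vpc_cidr)
    return str(network.network_address + 2)

print(resolver_address("10.0.0.0/16"))  # 10.0.0.2
```

For example, instances in a VPC with the CIDR block `10.0.0.0/16` send their DNS queries to `10.0.0.2`.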

The next diagram shows the DNS resolution flow from the on-premises environment to AWS. This is the connectivity setup for inbound resolutions where the domain is hosted on AWS. Here is a high-level overview of the process involved in setting this up. For details, see the [Epics](#set-up-dns-resolution-for-hybrid-networks-in-a-multi-account-aws-environment-epics) section.

1. Deploy inbound Resolver endpoints in the Shared Services VPC.

1. Create private hosted zones in the Shared Services account (centralized approach).

1. Associate the private hosted zones with the Shared Services VPC. Share and associate these zones with cross-account VPCs for VPC-to-VPC DNS resolution. This can be done in different ways depending on your use case, as described later in this section.

![\[Inbound and outbound endpoints in an on premises to AWS DNS resolution flow.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/01e700cd-be8c-4a5d-bc89-b901a260d045/images/a6f5348c-2041-453e-8939-2b4ee0b7ebd8.png)


After you set up connectivity, the steps involved in the inbound resolution are as follows:

1. The on-premises resource sends a DNS resolution request for `ec2.prod.aws.example.com` to the on-premises DNS server.

1. The on-premises DNS server forwards the request to the inbound Resolver endpoint in the Shared Services VPC over the hybrid network connection.

1. The inbound Resolver endpoint looks up the request in the associated private hosted zone with the help of the VPC Route 53 Resolver and gets the appropriate IP address.

1. These IP addresses are sent back to the on-premises DNS server, which returns the response to the on-premises resource.

This configuration enables on-premises resources to resolve AWS private domain names by routing queries through the inbound endpoints to the appropriate private hosted zone. In this architecture, private hosted zones are centralized in a Shared Services VPC, which allows for central DNS management by a single team. These zones can be associated with many VPCs to address the VPC-to-VPC DNS resolution use case. Alternatively, you might want to delegate DNS domain ownership and management to each AWS account. In that case, each account manages its own private hosted zones and associates each zone with the central Shared Services VPC for a unified resolution with the on-premises environment. This decentralized approach is outside the scope of this pattern. For more information, see [Scaling DNS management across multiple accounts and VPCs](https://docs.aws.amazon.com/whitepapers/latest/hybrid-cloud-dns-options-for-vpc/scaling-dns-management-across-multiple-accounts-and-vpcs.html) in the *Hybrid Cloud DNS Options for Amazon VPC* whitepaper.

When you establish the fundamental DNS resolution flows by using Resolver endpoints, you need to determine how to manage the sharing and association of Resolver rules and private hosted zones across your AWS accounts. You can approach this in two ways: through self-managed sharing by using AWS RAM to share Resolver rules and direct private hosted zone associations, as detailed in the *Basic setup* section, or through Route 53 Profiles, as discussed in the *Enhanced setup* section. The choice depends on your organization's DNS management preferences and operational requirements. The following architecture diagrams illustrate a scaled environment that includes multiple VPCs across different accounts, which represents a typical enterprise deployment.

**Basic setup**

In the basic setup, hybrid DNS resolution in a multi-account AWS environment relies on AWS RAM to share Resolver forwarding rules and on direct private hosted zone associations to manage DNS queries between on-premises and AWS resources. This method uses centralized Route 53 Resolver endpoints in a Shared Services VPC that's connected to your on-premises network to handle both inbound and outbound DNS resolution efficiently.
+ For outbound resolution, Resolver forwarding rules are created in the Shared Services account and then shared with other AWS accounts by using AWS RAM. This sharing is limited to accounts within the same Region. The target accounts can then associate these rules with their VPCs and enable the resources in those VPCs to resolve on-premises domain names.
+ For inbound resolution, private hosted zones are created in the Shared Services account and associated with the Shared Services VPC. These zones can then be associated with VPCs in other accounts by using the Route 53 API, AWS SDKs, or the AWS Command Line Interface (AWS CLI). The resources in associated VPCs can then resolve DNS records defined in the private hosted zones, which creates a unified DNS view across your AWS environment.
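The cross-account private hosted zone association in the inbound case is a two-step API operation. The following AWS CLI sketch shows the sequence; the hosted zone ID and VPC ID are placeholders that you would replace with your own values:

```bash
# Run in the Shared Services account (zone owner): authorize the
# member account's VPC to be associated with the private hosted zone.
aws route53 create-vpc-association-authorization \
  --hosted-zone-id Z0123456789EXAMPLE \
  --vpc VPCRegion=us-east-1,VPCId=vpc-0abc1234def567890

# Run in the member account (VPC owner): complete the association.
aws route53 associate-vpc-with-hosted-zone \
  --hosted-zone-id Z0123456789EXAMPLE \
  --vpc VPCRegion=us-east-1,VPCId=vpc-0abc1234def567890
```

After the association succeeds, you can optionally delete the authorization; the association itself remains in place.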

The following diagram shows DNS resolution flows in this basic setup.

![\[Using basic setup for hybrid DNS resolution in a multi-account AWS environment.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/01e700cd-be8c-4a5d-bc89-b901a260d045/images/258e4bcd-e9c6-43b5-bab8-856ca22206b9.png)


This setup works well when you work with DNS infrastructure on a limited scale. However, it can become challenging to manage as your environment grows. The operational overhead of individually sharing private hosted zones and Resolver rules and associating them with VPCs increases significantly with scale. Additionally, service quotas, such as the limit of 300 VPC associations per private hosted zone, can become constraining factors in large-scale deployments. The enhanced setup addresses these challenges.

**Enhanced setup**

Route 53 Profiles offer a streamlined solution for managing DNS resolution in hybrid networks across multiple AWS accounts. Instead of managing private hosted zones and Resolver rules individually, you can group DNS configurations into a single container that can be easily shared and applied across multiple VPCs and accounts in a Region. This setup maintains the centralized Resolver endpoint architecture in a Shared Services VPC while significantly simplifying the management of DNS configurations.

The following diagram shows DNS resolution flows in an enhanced setup.

![\[Using advanced setup with Route 53 Profiles for hybrid DNS resolution in a multi-account AWS environment.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/01e700cd-be8c-4a5d-bc89-b901a260d045/images/55b9681d-ddb4-4a55-b4ec-fc9afa9870fa.png)


Route 53 Profiles let you package private hosted zone associations, Resolver forwarding rules, and DNS Firewall rules into a single, shareable unit. You can create Profiles in the Shared Services account and share them with member accounts by using AWS RAM. When a Profile is shared and applied to target VPCs, the service automatically handles all necessary associations and configurations. This significantly reduces the operational overhead of DNS management and provides excellent scalability for growing environments.

**Automation and scale**

Use infrastructure as code (IaC) tools such as CloudFormation or Terraform to automatically provision and manage Route 53 Resolver endpoints, rules, private hosted zones, and Profiles. Integrate DNS configuration with continuous integration and continuous delivery (CI/CD) pipelines for consistency, repeatability, and rapid updates.
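As a starting point, the following CloudFormation fragment sketches an outbound Resolver endpoint and a forwarding rule for an on-premises domain. The subnet IDs, security group ID, and on-premises resolver IPs are placeholders for your own values:

```yaml
Resources:
  OutboundEndpoint:
    Type: AWS::Route53Resolver::ResolverEndpoint
    Properties:
      Direction: OUTBOUND
      IpAddresses:
        # Use subnets in at least two Availability Zones for redundancy.
        - SubnetId: subnet-0aaa1111bbb22222c
        - SubnetId: subnet-0ddd3333eee44444f
      SecurityGroupIds:
        - sg-0123456789abcdef0

  OnPremForwardingRule:
    Type: AWS::Route53Resolver::ResolverRule
    Properties:
      RuleType: FORWARD
      DomainName: onprem.example.com
      ResolverEndpointId: !GetAtt OutboundEndpoint.ResolverEndpointId
      TargetIps:
        # On-premises DNS server IPs (placeholders).
        - Ip: 10.200.0.10
          Port: "53"
        - Ip: 10.200.0.11
          Port: "53"
```

You would then share `OnPremForwardingRule` through AWS RAM (or include it in a Route 53 Profile) so that member accounts can associate it with their VPCs.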

## Tools
<a name="set-up-dns-resolution-for-hybrid-networks-in-a-multi-account-aws-environment-tools"></a>

**AWS services**
+ [AWS Resource Access Manager (AWS RAM)](https://docs.aws.amazon.com/ram/latest/userguide/what-is.html) helps you securely share your resources across AWS accounts to reduce operational overhead and provide visibility and auditability.
+ [Amazon Route 53 Resolver](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resolver.html) responds recursively to DNS queries from AWS resources and is available by default in all VPCs. You can create Resolver endpoints and conditional forwarding rules to resolve DNS namespaces between your on-premises data center and your VPCs.
+ [Amazon Route 53 private hosted zone](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/hosted-zones-private.html) is a container that holds information about how you want Route 53 to respond to DNS queries for a domain and its subdomains.
+ [Amazon Route 53 Profiles](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/profiles.html) allow you to apply and manage DNS-related Route 53 configurations across many VPCs and in different AWS accounts in a simplified manner.

## Best practices
<a name="set-up-dns-resolution-for-hybrid-networks-in-a-multi-account-aws-environment-best-practices"></a>

This section provides some of the best practices for optimizing Route 53 Resolver. These represent a subset of Route 53 best practices. For a comprehensive list, see [Best practices for Amazon Route 53](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/best-practices.html).

**Avoid loop configurations with Resolver endpoints**
+ Design your DNS architecture to prevent recursive routing by carefully planning VPC associations. When a VPC hosts an inbound endpoint, avoid associating it with Resolver rules that could create circular references.
+ Use AWS RAM strategically when you share DNS resources across accounts to maintain clean routing paths.

For more information, see [Avoid loop configurations with Resolver endpoints](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/best-practices-resolver-endpoints.html) in the Route 53 documentation.

**Scale Resolver endpoints**
+ For environments that require a high number of queries per second (QPS), be aware that each elastic network interface (ENI) in an endpoint is limited to 10,000 QPS. You can add more ENIs to an endpoint to scale DNS QPS.
+ Amazon CloudWatch provides `InboundQueryVolume` and `OutboundQueryVolume` metrics (see the [CloudWatch documentation](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/monitoring-resolver-with-cloudwatch.html)). We recommend that you set up monitoring rules that alert you if the threshold exceeds a certain value (for example, 80 percent of 10,000 QPS).
+ Configure stateful security group rules for Resolver endpoints to prevent connection tracking limits from causing DNS query throttling during high-volume traffic. To learn more about how connection tracking works in security groups, see [Amazon EC2 security group connection tracking](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/security-group-connection-tracking.html) in the Amazon EC2 documentation.

For more information, see [Resolver endpoint scaling](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/best-practices-resolver-endpoint-scaling.html) in the Route 53 documentation.
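To implement the monitoring recommendation above, you can alarm on query volume per endpoint. At 80 percent of 10,000 QPS, a 5-minute (300-second) period corresponds to a threshold of 2,400,000 queries. The following AWS CLI sketch uses placeholder endpoint, account, and SNS topic values:

```bash
# Alarm when the outbound endpoint sustains more than ~8,000 QPS
# (2,400,000 queries per 300-second period). Placeholder IDs/ARNs.
aws cloudwatch put-metric-alarm \
  --alarm-name resolver-outbound-query-volume-high \
  --namespace AWS/Route53Resolver \
  --metric-name OutboundQueryVolume \
  --dimensions Name=EndpointId,Value=rslvr-out-0123456789abcdef0 \
  --statistic Sum \
  --period 300 \
  --evaluation-periods 1 \
  --threshold 2400000 \
  --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:sns:us-east-1:111122223333:dns-alerts
```

Create a matching alarm on `InboundQueryVolume` for the inbound endpoint.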

**Provide high availability for Resolver endpoints**
+ Create inbound endpoints with IP addresses in at least two Availability Zones for redundancy.
+ Provision additional network interfaces to ensure availability during maintenance or traffic surges.

For more information, see [High availability for Resolver endpoints](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/best-practices-resolver-endpoint-high-availability.html) in the Route 53 documentation.

## Epics
<a name="set-up-dns-resolution-for-hybrid-networks-in-a-multi-account-aws-environment-epics"></a>

### Deploy Route 53 Resolver endpoints
<a name="deploy-r53r-endpoints"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Deploy an inbound endpoint. | Route 53 Resolver uses the inbound endpoint to receive DNS queries from on-premises DNS resolvers. For instructions, see [Forwarding inbound DNS queries to your VPCs](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resolver-forwarding-inbound-queries.html) in the Route 53 documentation. Make a note of the inbound endpoint IP address. | AWS administrator, Cloud administrator | 
| Deploy an outbound endpoint. | Route 53 Resolver uses the outbound endpoint to send DNS queries to on-premises DNS resolvers. For instructions, see [Forwarding outbound DNS queries to your network](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resolver-forwarding-outbound-queries.html) in the Route 53 documentation. Make a note of the outbound endpoint ID. | AWS administrator, Cloud administrator | 

### Configure and share Route 53 private hosted zones
<a name="configure-and-share-r53-private-hosted-zones"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a private hosted zone for a domain that’s hosted on AWS. | This zone holds the DNS records for resources in an AWS-hosted domain (for example, `prod.aws.example.com`) that should be resolved from the on-premises environment. For instructions, see [Creating a private hosted zone](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/hosted-zone-private-creating.html) in the Route 53 documentation. When you create a private hosted zone, you must associate it with a VPC that's owned by the same account. Select the Shared Services VPC for this purpose. | AWS administrator, Cloud administrator | 
| Basic setup: Associate the private hosted zone with VPCs in other accounts. | If you're using basic setup (see the [Architecture](#set-up-dns-resolution-for-hybrid-networks-in-a-multi-account-aws-environment-architecture) section): To enable resources in the member account VPCs to resolve DNS records in this private hosted zone, you must associate your VPCs with the hosted zone. You must authorize the association and then make the association programmatically. For instructions, see [Associating an Amazon VPC and a private hosted zone that you created with different AWS accounts](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/hosted-zone-private-associate-vpcs-different-accounts.html) in the Route 53 documentation. | AWS administrator, Cloud administrator | 
| Enhanced setup: Configure and share Route 53 Profiles. | If you're using enhanced setup (see the [Architecture](#set-up-dns-resolution-for-hybrid-networks-in-a-multi-account-aws-environment-architecture) section): [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-dns-resolution-for-hybrid-networks-in-a-multi-account-aws-environment.html) Depending on your organization's structure and DNS requirements, you might need to create and manage multiple Profiles for different accounts or workloads. | AWS administrator, Cloud administrator | 

### Configure and share Route 53 Resolver forwarding rules
<a name="configure-and-share-r53r-forwarding-rules"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a forwarding rule for a domain that’s hosted on premises. | This rule will instruct Route 53 Resolver to forward any DNS queries for on-premises domains (such as `onprem.example.com`) to on-premises DNS resolvers. To create this rule, you need the IP addresses of the on-premises DNS resolvers and the outbound endpoint ID. For instructions, see [Creating forwarding rules](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resolver-rules-managing-creating-rules.html) in the Route 53 documentation. | AWS administrator, Cloud administrator | 
| Basic setup: Share and associate the forwarding rule with your VPCs in other accounts. | If you're using basic setup: For the forwarding rule to take effect, you must share and associate the rule with your VPCs in other accounts. Route 53 Resolver then takes the rule into consideration when it resolves a domain. For instructions, see [Sharing Resolver rules with other AWS accounts and using shared rules](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resolver-rules-managing-sharing.html) and [Associating forwarding rules with a VPC](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resolver-rules-managing-associating-rules.html) in the Route 53 documentation. | AWS administrator, Cloud administrator | 
| Enhanced setup: Configure and share Route 53 Profiles. | If you're using enhanced setup: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-dns-resolution-for-hybrid-networks-in-a-multi-account-aws-environment.html) Depending on your organization's structure and DNS requirements, you might need to create and manage multiple Profiles for different accounts or workloads. | AWS administrator, Cloud administrator | 

### Configure on-premises DNS resolvers for AWS integration
<a name="configure-on-premises-dns-resolvers-for-aws-integration"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
|  Configure conditional forwarding in the on-premises DNS resolvers. | For DNS queries to be sent to AWS from the on-premises environment for resolution, you must configure conditional forwarding in the on-premises DNS resolvers to point to the inbound endpoint IP address. This instructs the DNS resolvers to forward all DNS queries for the AWS-hosted domain (for example, for `prod.aws.example.com`) to the inbound endpoint IP address for resolution by Route 53 Resolver.  | Network administrator | 
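The exact configuration depends on your on-premises DNS server. As one example, a BIND `named.conf` forward zone for the AWS-hosted domain might look like the following sketch (the forwarder IPs are placeholder inbound endpoint addresses; Windows DNS, Unbound, and other resolvers have equivalent conditional-forwarder settings):

```
// named.conf fragment: forward queries for the AWS-hosted domain
// to the inbound Resolver endpoint IPs (placeholder addresses).
zone "prod.aws.example.com" {
    type forward;
    forward only;
    forwarders { 10.0.1.10; 10.0.2.10; };
};
```

List an inbound endpoint IP from each Availability Zone so that resolution survives the loss of a single endpoint ENI.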

### Verify end-to-end DNS resolution in a hybrid environment
<a name="verify-end-to-end-dns-resolution-in-a-hybrid-environment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Test DNS resolution from AWS to the on-premises environment. | From an instance in a VPC that has the forwarding rule associated with it, perform a DNS query for an on-premises hosted domain (for example, for `db.onprem.example.com`). | Network administrator | 
| Test DNS resolution from the on-premises environment to AWS. | From an on-premises server, perform DNS resolution for an AWS-hosted domain (for example, for `ec2.prod.aws.example.com`). | Network administrator | 
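The verification steps above can be run with standard DNS tooling. The following commands are a hedged sketch; the domain names match the examples in this pattern, and the endpoint IP is a placeholder:

```bash
# From an EC2 instance in a VPC associated with the forwarding rule:
dig +short db.onprem.example.com

# From an on-premises server (should route through the conditional
# forwarder to the inbound endpoint):
dig +short ec2.prod.aws.example.com

# Query an inbound endpoint IP directly (placeholder address) to
# isolate the endpoint from the on-premises forwarder configuration:
dig +short ec2.prod.aws.example.com @10.0.1.10
```

If the direct query against the endpoint succeeds but the on-premises query fails, the problem is likely in the conditional forwarder configuration or the hybrid network path rather than in Route 53.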

## Related resources
<a name="set-up-dns-resolution-for-hybrid-networks-in-a-multi-account-aws-environment-resources"></a>
+ [Hybrid Cloud DNS Options for Amazon VPC](https://docs.aws.amazon.com/whitepapers/latest/hybrid-cloud-dns-options-for-vpc/hybrid-cloud-dns-options-for-vpc.html) (AWS whitepaper)
+ [Working with private hosted zones](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/hosted-zones-private.html) (Route 53 documentation)
+ [Getting started with Route 53 Resolver](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resolver-getting-started.html) (Route 53 documentation)
+ [Simplify DNS management in a multi-account environment with Route 53 Resolver](https://aws.amazon.com/blogs/security/simplify-dns-management-in-a-multiaccount-environment-with-route-53-resolver/) (AWS blog post)
+ [Unify DNS management using Amazon Route 53 Profiles with multiple VPCs and AWS accounts](https://aws.amazon.com/blogs/aws/unify-dns-management-using-amazon-route-53-profiles-with-multiple-vpcs-and-aws-accounts/) (AWS blog post)
+ [Migrating your multi-account DNS environment to Amazon Route 53 Profiles](https://aws.amazon.com/blogs/networking-and-content-delivery/migrating-your-multi-account-dns-environment-to-amazon-route-53-profiles/) (AWS blog post)
+ [Using Amazon Route 53 Profiles for scalable multi-account AWS environments](https://aws.amazon.com/blogs/networking-and-content-delivery/using-amazon-route-53-profiles-for-scalable-multi-account-aws-environments/) (AWS blog post)

 

# Verify that ELB load balancers require TLS termination
<a name="verify-that-elb-load-balancers-require-tls-termination"></a>

*Priyanka Chaudhary, Amazon Web Services*

## Summary
<a name="verify-that-elb-load-balancers-require-tls-termination-summary"></a>

On the Amazon Web Services (AWS) Cloud, Elastic Load Balancing (ELB) automatically distributes incoming application traffic across multiple targets, such as Amazon Elastic Compute Cloud (Amazon EC2) instances, containers, IP addresses, and AWS Lambda functions. The load balancers use listeners to define the ports and protocols that the load balancer uses to accept traffic from users. Application Load Balancers make routing decisions at the application layer and use the HTTP/HTTPS protocols. Classic Load Balancers make routing decisions at either the transport layer, by using TCP or Secure Sockets Layer (SSL) protocols, or at the application layer, by using HTTP/HTTPS.

This pattern provides a security control that monitors multiple event types for Application Load Balancers and Classic Load Balancers. When the function is invoked, AWS Lambda inspects the event and verifies that the load balancer is compliant.

An Amazon CloudWatch Events event is initiated on the following API calls: [CreateLoadBalancer](https://docs.aws.amazon.com/elasticloadbalancing/2012-06-01/APIReference/API_CreateLoadBalancer.html), [CreateLoadBalancerListeners](https://docs.aws.amazon.com/elasticloadbalancing/2012-06-01/APIReference/API_CreateLoadBalancerListeners.html), [DeleteLoadBalancerListeners](https://docs.aws.amazon.com/elasticloadbalancing/2012-06-01/APIReference/API_DeleteLoadBalancerListeners.html), [CreateLoadBalancerPolicy](https://docs.aws.amazon.com/elasticloadbalancing/2012-06-01/APIReference/API_CreateLoadBalancerPolicy.html), [SetLoadBalancerPoliciesOfListener](https://docs.aws.amazon.com/elasticloadbalancing/2012-06-01/APIReference/API_SetLoadBalancerPoliciesOfListener.html), [CreateListener](https://docs.aws.amazon.com/elasticloadbalancing/latest/APIReference/API_CreateListener.html), [DeleteListener](https://docs.aws.amazon.com/elasticloadbalancing/latest/APIReference/API_DeleteListener.html), and [ModifyListener](https://docs.aws.amazon.com/elasticloadbalancing/latest/APIReference/API_ModifyListener.html). When the event detects one of these API calls, it invokes AWS Lambda, which runs a Python script. The script evaluates whether the listener contains an SSL certificate and whether the applied policy uses Transport Layer Security (TLS). If the SSL policy is anything other than TLS, the function sends an Amazon Simple Notification Service (Amazon SNS) notification to the user with the relevant information. 
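The core of this compliance check can be sketched as follows. This is a minimal illustration, not the attached Lambda code: the dictionary keys and helper names are assumptions, and the real function extracts listener details from the CloudTrail event delivered by CloudWatch Events.

```python
# Minimal sketch of the TLS-policy check this pattern's Lambda performs.
# Assumption: listener details have already been extracted from the event
# into simple dictionaries; key names here are illustrative only.

def listener_is_compliant(listener):
    """Return True if the listener terminates TLS.

    A listener is treated as compliant when it carries an SSL/TLS
    certificate and its negotiation policy name indicates TLS
    (for example, ELBSecurityPolicy-TLS-1-2-2017-01).
    """
    has_certificate = bool(listener.get("certificate_arn"))
    uses_tls = "TLS" in listener.get("ssl_policy", "")
    return has_certificate and uses_tls

def evaluate_listeners(listeners):
    """Return the listeners that violate the TLS requirement.

    In the real control, each violation would produce an Amazon SNS
    notification with the relevant details.
    """
    return [l for l in listeners if not listener_is_compliant(l)]

# Example: one compliant listener and one violation.
listeners = [
    {"port": 443, "certificate_arn": "arn:aws:acm:example",
     "ssl_policy": "ELBSecurityPolicy-TLS-1-2-2017-01"},
    {"port": 443, "certificate_arn": "arn:aws:acm:example",
     "ssl_policy": "ELBSecurityPolicy-2016-08"},
]
violations = evaluate_listeners(listeners)
```

In the deployed control, the violation list would drive the SNS notification instead of being returned to a caller.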

## Prerequisites and limitations
<a name="verify-that-elb-load-balancers-require-tls-termination-prereqs"></a>

**Prerequisites**
+ An active AWS account

**Limitations**
+ This security control does not check existing load balancers unless an update is made to the load balancer listeners.
+ This security control is regional. You must deploy it in each AWS Region you want to monitor.

## Architecture
<a name="verify-that-elb-load-balancers-require-tls-termination-architecture"></a>

**Target architecture**

![\[Ensuring that load balancers require TLS termination.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/da99cda2-ac34-4791-a2bd-d37264d8d3d9/images/af92b3c8-32bb-45eb-a2a8-d8276fb3e824.png)


**Automation and scale**
+ If you are using [AWS Organizations](https://aws.amazon.com/organizations/), you can use [AWS CloudFormation StackSets](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/what-is-cfnstacksets.html) to deploy this template in multiple accounts that you want to monitor.

## Tools
<a name="verify-that-elb-load-balancers-require-tls-termination-tools"></a>

**AWS services**
+ [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) – AWS CloudFormation helps you model and set up your AWS resources, provision them quickly and consistently, and manage them throughout their lifecycle. You can use a template to describe your resources and their dependencies, and launch and configure them together as a stack, instead of managing resources individually.
+ [Amazon CloudWatch Events](https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/WhatIsCloudWatchEvents.html) – Amazon CloudWatch Events delivers a near real-time stream of system events that describe changes in AWS resources.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) – AWS Lambda is a compute service that supports running code without provisioning or managing servers.
+ [Amazon S3](https://docs.aws.amazon.com/AmazonS3/latest/dev/Welcome.html) – Amazon Simple Storage Service (Amazon S3) is a highly scalable object storage service that can be used for a wide range of storage solutions, including websites, mobile applications, backups, and data lakes.
+ [Amazon SNS](https://docs.aws.amazon.com/sns/latest/dg/welcome.html) – Amazon Simple Notification Service (Amazon SNS) coordinates and manages the delivery or sending of messages between publishers and clients, including web servers and email addresses. Subscribers receive all messages published to the topics to which they subscribe, and all subscribers to a topic receive the same messages.

**Code**

This pattern includes the following attachments:
+ `ELBRequirestlstermination.zip` – The Lambda code for the security control.
+ `ELBRequirestlstermination.yml` – The CloudFormation template that sets up the event and Lambda function.

## Epics
<a name="verify-that-elb-load-balancers-require-tls-termination-epics"></a>

### Set up the S3 bucket
<a name="set-up-the-s3-bucket"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Define the S3 bucket. | On the [Amazon S3 console](https://console.aws.amazon.com/s3/), choose or create an S3 bucket to host the Lambda code .zip file. This S3 bucket must be in the same AWS Region as the load balancer that you want to evaluate. An S3 bucket name is globally unique, and the namespace is shared by all AWS accounts. The S3 bucket name cannot include leading slashes. | Cloud architect | 
| Upload the Lambda code. | Upload the Lambda code (`ELBRequirestlstermination.zip` file) that's provided in the *Attachments* section to the S3 bucket. | Cloud architect | 

### Deploy the CloudFormation template
<a name="deploy-the-cloudformation-template"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Launch the AWS CloudFormation template. | Open the [AWS CloudFormation console](https://console.aws.amazon.com/cloudformation/) in the same AWS Region as your S3 bucket and deploy the attached template `ELBRequirestlstermination.yml`. For more information about deploying AWS CloudFormation templates, see [Creating a stack on the AWS CloudFormation console](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-console-create-stack.html) in the CloudFormation documentation. | Cloud architect | 
| Complete the parameters in the template. | When you launch the template, you'll be prompted for the following information: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/verify-that-elb-load-balancers-require-tls-termination.html) | Cloud architect | 

### Confirm the subscription
<a name="confirm-the-subscription"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Confirm the subscription. | When the CloudFormation template deploys successfully, it sends a subscription email to the email address you provided. You must confirm this email subscription to start receiving violation notifications. | Cloud architect | 

## Related resources
<a name="verify-that-elb-load-balancers-require-tls-termination-resources"></a>
+ [Creating a stack on the AWS CloudFormation console](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-console-create-stack.html) (AWS CloudFormation documentation)
+ [What is AWS Lambda?](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) (AWS Lambda documentation)
+ [What is a Classic Load Balancer?](https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/introduction.html) (ELB documentation)
+ [What is an Application Load Balancer?](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html) (ELB documentation)

## Attachments
<a name="attachments-da99cda2-ac34-4791-a2bd-d37264d8d3d9"></a>

To access additional content that is associated with this document, unzip the following file: [attachment.zip](samples/p-attach/da99cda2-ac34-4791-a2bd-d37264d8d3d9/attachments/attachment.zip)

# View AWS Network Firewall logs and metrics by using Splunk
<a name="view-aws-network-firewall-logs-and-metrics-by-using-splunk"></a>

*Ivo Pinto, Amazon Web Services*

## Summary
<a name="view-aws-network-firewall-logs-and-metrics-by-using-splunk-summary"></a>

Many organizations use [Splunk Enterprise](https://www.splunk.com/en_us/products/splunk-enterprise.html) as a centralized aggregation and visualization tool for logs and metrics from different sources. This pattern helps you configure Splunk to fetch [AWS Network Firewall](https://docs.aws.amazon.com/network-firewall/latest/developerguide/what-is-aws-network-firewall.html) logs and metrics from [Amazon CloudWatch Logs](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html) by using the Splunk Add-On for AWS. 

To achieve this, you create a read-only AWS Identity and Access Management (IAM) role. Splunk Add-On for AWS uses this role to access CloudWatch. You configure the Splunk Add-On for AWS to fetch metrics and logs from CloudWatch. Finally, you create visualizations in Splunk from the retrieved log data and metrics.

## Prerequisites and limitations
<a name="view-aws-network-firewall-logs-and-metrics-by-using-splunk-prereqs"></a>

**Prerequisites**
+ A [Splunk](https://www.splunk.com/) account
+ A Splunk Enterprise instance, version 8.2.2 or later 
+ An active AWS account
+ Network Firewall, [set up](https://docs.aws.amazon.com/network-firewall/latest/developerguide/getting-started.html) and [configured](https://docs.aws.amazon.com/network-firewall/latest/developerguide/logging-cw-logs.html) to send logs to CloudWatch Logs

**Limitations**
+ Splunk Enterprise must be deployed as a cluster of Amazon Elastic Compute Cloud (Amazon EC2) instances in the AWS Cloud.
+ Collecting data by using an automatically discovered IAM role for Amazon EC2 is not supported in the AWS China Regions.

## Architecture
<a name="view-aws-network-firewall-logs-and-metrics-by-using-splunk-architecture"></a>

![\[AWS Network Firewall and Splunk logging architecture\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/c6ce254a-841f-4bed-8f9f-b35e99f22e56/images/3dd420e9-70af-4a42-b24d-c54872c55e0b.png)


The diagram illustrates the following:

1. Network Firewall publishes logs to CloudWatch Logs.

1. Splunk Enterprise retrieves metrics and logs from CloudWatch.

To populate example metrics and logs in this architecture, a workload generates traffic that passes through the Network Firewall endpoint on its way to the internet. This routing is achieved by using [route tables](https://docs.aws.amazon.com/network-firewall/latest/developerguide/vpc-config.html#vpc-config-route-tables). Although this pattern uses a single Amazon EC2 instance as the workload, it applies to any architecture in which Network Firewall is configured to send logs to CloudWatch Logs.
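As a hedged illustration of that routing, the workload subnet can send internet-bound traffic to the firewall endpoint while the firewall subnet routes to an internet gateway; the CIDR ranges and resource IDs below are placeholders, and your own route table layout may differ.

```
# Illustrative route table entries (placeholder IDs and CIDRs)

Workload subnet route table:
  10.0.0.0/16  -> local
  0.0.0.0/0    -> vpce-0123456789abcdef0   # Network Firewall endpoint

Firewall subnet route table:
  10.0.0.0/16  -> local
  0.0.0.0/0    -> igw-0123456789abcdef0    # internet gateway
```

Return traffic typically requires a corresponding route on the internet gateway back to the firewall endpoint; see the Network Firewall route table documentation linked above for the supported configurations.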

This architecture also uses a Splunk Enterprise instance in another virtual private cloud (VPC). However, the Splunk instance can be in another location, such as in the same VPC as the workload, as long as it can reach the CloudWatch APIs.

## Tools
<a name="view-aws-network-firewall-logs-and-metrics-by-using-splunk-tools"></a>

**AWS services**
+ [Amazon CloudWatch Logs](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html) helps you centralize the logs from all your systems, applications, and AWS services so you can monitor them and archive them securely.
+ [Amazon Elastic Compute Cloud (Amazon EC2)](https://docs.aws.amazon.com/ec2/) provides scalable computing capacity in the AWS Cloud. You can launch as many virtual servers as you need and quickly scale them up or down.
+ [AWS Network Firewall](https://docs.aws.amazon.com/network-firewall/latest/developerguide/what-is-aws-network-firewall.html) is a stateful, managed, network firewall and intrusion detection and prevention service for VPCs in the AWS Cloud.

**Other tools**
+ [Splunk](https://www.splunk.com/) helps you monitor, visualize, and analyze log data.

## Epics
<a name="view-aws-network-firewall-logs-and-metrics-by-using-splunk-epics"></a>

### Create an IAM role
<a name="create-an-iam-role"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the IAM policy. | Follow the instructions in [Creating policies using the JSON editor](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create-console.html#access_policies_create-json-editor) to create the IAM policy that grants read-only access to the CloudWatch Logs data and CloudWatch metrics. Paste the following policy into the JSON editor.<pre>{<br />    "Statement": [<br />        {<br />            "Action": [<br />                "cloudwatch:List*",<br />                "cloudwatch:Get*",<br />                "network-firewall:List*",<br />                "logs:Describe*",<br />                "logs:Get*",<br />                "logs:List*",<br />                "logs:StartQuery",<br />                "logs:StopQuery",<br />                "logs:TestMetricFilter",<br />                "logs:FilterLogEvents",<br />                "network-firewall:Describe*"<br />            ],<br />            "Effect": "Allow",<br />            "Resource": "*"<br />        }<br />    ],<br />    "Version": "2012-10-17"<br />}</pre> | AWS administrator | 
| Create a new IAM role. | Follow the instructions in [Creating a role to delegate permissions to an AWS service](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-service.html) to create the IAM role that the Splunk Add-On for AWS uses to access CloudWatch. For **Permissions policies**, choose the policy that you created previously. | AWS administrator | 
| Assign the IAM role to the EC2 instances in the Splunk cluster. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/view-aws-network-firewall-logs-and-metrics-by-using-splunk.html) | AWS administrator | 

### Install the Splunk Add-On for AWS
<a name="install-the-splunk-add-on-for-aws"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Install the add-on. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/view-aws-network-firewall-logs-and-metrics-by-using-splunk.html) | Splunk administrator | 
| Configure the AWS credentials. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/view-aws-network-firewall-logs-and-metrics-by-using-splunk.html) For more information, see [Find an IAM role within your Splunk platform instance](https://splunk.github.io/splunk-add-on-for-amazon-web-services/#Find_an_IAM_role_within_your_Splunk_platform_instance) in the Splunk documentation. | Splunk administrator | 

### Configure Splunk access to CloudWatch
<a name="configure-splunk-access-to-cloudwatch"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Configure the retrieval of Network Firewall logs from CloudWatch Logs. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/view-aws-network-firewall-logs-and-metrics-by-using-splunk.html) By default, Splunk fetches the log data every 10 minutes. This interval is configurable under **Advanced Settings**. For more information, see [Configure a CloudWatch Logs input using Splunk Web](https://splunk.github.io/splunk-add-on-for-amazon-web-services/#Configure_a_CloudWatch_Logs_input_using_Splunk_Web) in the Splunk documentation. | Splunk administrator | 
| Configure the retrieval of Network Firewall metrics from CloudWatch. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/view-aws-network-firewall-logs-and-metrics-by-using-splunk.html) By default, Splunk fetches the metric data every 5 minutes. This interval is configurable under **Advanced Settings**. For more information, see [Configure a CloudWatch input using Splunk Web](https://splunk.github.io/splunk-add-on-for-amazon-web-services/#Configure_a_CloudWatch_input_using_Splunk_Web) in the Splunk documentation. | Splunk administrator | 

### Create Splunk visualizations by using queries
<a name="create-splunk-visualizations-by-using-queries"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| View the top source IP addresses. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/view-aws-network-firewall-logs-and-metrics-by-using-splunk.html) | Splunk administrator | 
| View packet statistics. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/view-aws-network-firewall-logs-and-metrics-by-using-splunk.html) | Splunk administrator | 
| View the most-used source ports. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/view-aws-network-firewall-logs-and-metrics-by-using-splunk.html) | Splunk administrator | 
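As a sketch of the kind of search behind these visualizations, a Splunk query for the top source IP addresses might look like the following. The sourcetype and field names depend on how your CloudWatch Logs input is configured, so treat them as assumptions to adapt.

```
sourcetype="aws:cloudwatchlogs"
| spath input=message
| top limit=10 event.src_ip
```

Similar `top` or `timechart` searches over the destination port and packet fields can back the port-usage and packet-statistics panels.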

## Related resources
<a name="view-aws-network-firewall-logs-and-metrics-by-using-splunk-resources"></a>

**AWS documentation**
+ [Creating a role to delegate permissions to an AWS service](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-service.html) (IAM documentation)
+ [Creating IAM policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create-console.html#access_policies_create-start) (IAM documentation)
+ [Logging and monitoring in AWS Network Firewall](https://docs.aws.amazon.com/network-firewall/latest/developerguide/logging-monitoring.html) (Network Firewall documentation)
+ [Route table configurations for AWS Network Firewall](https://docs.aws.amazon.com/network-firewall/latest/developerguide/route-tables.html) (Network Firewall documentation)

**AWS blog posts**
+ [AWS Network Firewall deployment models](https://aws.amazon.com/pt/blogs/networking-and-content-delivery/deployment-models-for-aws-network-firewall/)

**AWS Marketplace**
+ [Splunk Enterprise Amazon Machine Image (AMI)](https://aws.amazon.com/marketplace/pp/prodview-l6oos72bsyaks)

# More patterns
<a name="networking-more-patterns-pattern-list"></a>

**Topics**
+ [Access a bastion host by using Session Manager and Amazon EC2 Instance Connect](access-a-bastion-host-by-using-session-manager-and-amazon-ec2-instance-connect.md)
+ [Access container applications privately on Amazon ECS by using AWS Fargate, AWS PrivateLink, and a Network Load Balancer](access-container-applications-privately-on-amazon-ecs-by-using-aws-fargate-aws-privatelink-and-a-network-load-balancer.md)
+ [Access container applications privately on Amazon ECS by using AWS PrivateLink and a Network Load Balancer](access-container-applications-privately-on-amazon-ecs-by-using-aws-privatelink-and-a-network-load-balancer.md)
+ [Centralize DNS resolution by using AWS Managed Microsoft AD and on-premises Microsoft Active Directory](centralize-dns-resolution-by-using-aws-managed-microsoft-ad-and-on-premises-microsoft-active-directory.md)
+ [Create a portal for micro-frontends by using AWS Amplify, Angular, and Module Federation](create-amplify-micro-frontend-portal.md)
+ [Deploy an Amazon API Gateway API on an internal website using private endpoints and an Application Load Balancer](deploy-an-amazon-api-gateway-api-on-an-internal-website-using-private-endpoints-and-an-application-load-balancer.md)
+ [Deploy detective attribute-based access controls for public subnets by using AWS Config](deploy-detective-attribute-based-access-controls-for-public-subnets-by-using-aws-config.md)
+ [Deploy preventative attribute-based access controls for public subnets](deploy-preventative-attribute-based-access-controls-for-public-subnets.md)
+ [Enable encrypted connections for PostgreSQL DB instances in Amazon RDS](enable-encrypted-connections-for-postgresql-db-instances-in-amazon-rds.md)
+ [Extend VRFs to AWS by using AWS Transit Gateway Connect](extend-vrfs-to-aws-by-using-aws-transit-gateway-connect.md)
+ [Migrate an F5 BIG-IP workload to F5 BIG-IP VE on the AWS Cloud](migrate-an-f5-big-ip-workload-to-f5-big-ip-ve-on-the-aws-cloud.md)
+ [Migrate NGINX Ingress Controllers when enabling Amazon EKS Auto Mode](migrate-nginx-ingress-controller-eks-auto-mode.md)
+ [Preserve routable IP space in multi-account VPC designs for non-workload subnets](preserve-routable-ip-space-in-multi-account-vpc-designs-for-non-workload-subnets.md)
+ [Prevent internet access at the account level by using a service control policy](prevent-internet-access-at-the-account-level-by-using-a-service-control-policy.md)
+ [Send alerts from AWS Network Firewall to a Slack channel](send-alerts-from-aws-network-firewall-to-a-slack-channel.md)
+ [Serve static content in an Amazon S3 bucket through a VPC by using Amazon CloudFront](serve-static-content-in-an-amazon-s3-bucket-through-a-vpc-by-using-amazon-cloudfront.md)
+ [Set up disaster recovery for Oracle JD Edwards EnterpriseOne with AWS Elastic Disaster Recovery](set-up-disaster-recovery-for-oracle-jd-edwards-enterpriseone-with-aws-elastic-disaster-recovery.md)
+ [Use BMC Discovery queries to extract migration data for migration planning](use-bmc-discovery-queries-to-extract-migration-data-for-migration-planning.md)
+ [Use Network Firewall to capture the DNS domain names from the Server Name Indication for outbound traffic](use-network-firewall-to-capture-the-dns-domain-names-from-the-server-name-indication-sni-for-outbound-traffic.md)

# Content delivery
<a name="contentdelivery-pattern-list"></a>

**Topics**
+ [Send AWS WAF logs to Splunk by using AWS Firewall Manager and Amazon Data Firehose](send-aws-waf-logs-to-splunk-by-using-aws-firewall-manager-and-amazon-data-firehose.md)
+ [Serve static content in an Amazon S3 bucket through a VPC by using Amazon CloudFront](serve-static-content-in-an-amazon-s3-bucket-through-a-vpc-by-using-amazon-cloudfront.md)
+ [More patterns](contentdelivery-more-patterns-pattern-list.md)

# Send AWS WAF logs to Splunk by using AWS Firewall Manager and Amazon Data Firehose
<a name="send-aws-waf-logs-to-splunk-by-using-aws-firewall-manager-and-amazon-data-firehose"></a>

*Michael Friedenthal, Aman Kaur Gandhi, and JJ Johnson, Amazon Web Services*

## Summary
<a name="send-aws-waf-logs-to-splunk-by-using-aws-firewall-manager-and-amazon-data-firehose-summary"></a>

Historically, there were two ways to move data into Splunk: a push or a pull architecture. A *pull architecture* offers data delivery guarantees through retries, but it requires dedicated resources in Splunk that poll for data, and because of the polling it usually is not real time. A *push architecture* typically has lower latency, is more scalable, and reduces operational complexity and costs. However, it doesn't guarantee delivery and typically requires agents.

Splunk integration with Amazon Data Firehose delivers real-time streaming data to Splunk through an HTTP event collector (HEC). This integration combines the advantages of push and pull architectures: it guarantees data delivery through retries, operates in near real time, and has low latency and low complexity. The HEC quickly and efficiently sends data over HTTP or HTTPS directly to Splunk. HECs are token-based, which eliminates the need to hardcode credentials in an application or in supporting files.
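To make the HEC mechanics concrete, the following sketch builds the token-based request that a sender issues to Splunk's event collector endpoint. Firehose constructs an equivalent request for you; the URL, token, and event fields here are placeholders.

```python
import json

# Sketch of an HTTP Event Collector (HEC) request.
# The endpoint URL and token are placeholders, not real values.
HEC_URL = "https://splunk.example.com:8088/services/collector/event"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"  # placeholder token

def build_hec_request(event, sourcetype="aws:firewall", time_epoch=None):
    """Return the headers and JSON body for a single HEC event."""
    headers = {
        # HEC authenticates with a token, not hardcoded user credentials.
        "Authorization": f"Splunk {HEC_TOKEN}",
        "Content-Type": "application/json",
    }
    body = {"event": event, "sourcetype": sourcetype}
    if time_epoch is not None:
        body["time"] = time_epoch
    return headers, json.dumps(body)

# Example: a single firewall log event, as a dictionary.
headers, payload = build_hec_request({"action": "ALLOW", "src_ip": "10.0.0.5"})
```

A sender would POST `payload` with `headers` to `HEC_URL`; with Firehose, the delivery stream handles batching, retries, and the backup to Amazon S3.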

In an AWS Firewall Manager policy, you can configure logging for all of the AWS WAF web ACL traffic in all of your accounts, and you can then use a Firehose delivery stream to send that log data to Splunk for monitoring, visualization, and analysis. This solution provides the following benefits:
+ Central management and logging for AWS WAF web ACL traffic in all of your accounts
+ Splunk integration with a single AWS account
+ Scalability
+ Near real-time delivery of log data
+ Cost optimization through a serverless solution, so you don't pay for unused resources

## Prerequisites and limitations
<a name="send-aws-waf-logs-to-splunk-by-using-aws-firewall-manager-and-amazon-data-firehose-prereqs"></a>

**Prerequisites**
+ An active AWS account that is part of an organization in AWS Organizations.
+ You must have the following permissions to enable logging with Firehose:
  + `iam:CreateServiceLinkedRole`
  + `firehose:ListDeliveryStreams`
  + `wafv2:PutLoggingConfiguration`
+ AWS WAF and its web ACLs must be configured. For instructions, see [Getting started with AWS WAF](https://docs.aws.amazon.com/waf/latest/developerguide/getting-started.html).
+ AWS Firewall Manager must be set up. For instructions, see [AWS Firewall Manager prerequisites](https://docs.aws.amazon.com/waf/latest/developerguide/fms-prereq.html).
+ The Firewall Manager security policies for AWS WAF must be configured. For instructions, see [Getting started with AWS Firewall Manager AWS WAF policies](https://docs.aws.amazon.com/waf/latest/developerguide/getting-started-fms.html).
+ Splunk must be set up with a public HTTP endpoint that can be reached by Firehose.

**Limitations**
+ The AWS accounts must be managed in a single organization in AWS Organizations.
+ The web ACL must be in the same Region as the delivery stream. If you are capturing logs for Amazon CloudFront, create the Firehose delivery stream in the US East (N. Virginia) Region, `us-east-1`.
+ The Splunk add-on for Firehose is available for paid Splunk Cloud deployments, distributed Splunk Enterprise deployments, and single-instance Splunk Enterprise deployments. This add-on is not supported for free trial Splunk Cloud deployments.

## Architecture
<a name="send-aws-waf-logs-to-splunk-by-using-aws-firewall-manager-and-amazon-data-firehose-architecture"></a>

**Target technology stack**
+ Firewall Manager
+ Firehose
+ Amazon Simple Storage Service (Amazon S3)
+ AWS WAF
+ Splunk

**Target architecture**

The following image shows how you can use Firewall Manager to centrally log all AWS WAF data and send it to Splunk through Firehose.

![\[Architecture diagram showing sending AWS WAF log data to Splunk through Amazon Data Firehose\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/3dfeaae0-985a-42b8-91c4-ece081f0b51b/images/669169b1-caa4-419b-9988-19806ded54eb.png)


1. The AWS WAF web ACLs send firewall log data to Firewall Manager.

1. Firewall Manager sends the log data to Firehose.

1. The Firehose delivery stream forwards the log data to Splunk and to an S3 bucket. The S3 bucket acts as a backup in the event of an error with the Firehose delivery stream.

**Automation and scale**

This solution is designed to scale to all AWS WAF web ACLs within the organization. You can configure all web ACLs to use the same Firehose delivery stream, but you can also set up and use multiple delivery streams if you prefer.

## Tools
<a name="send-aws-waf-logs-to-splunk-by-using-aws-firewall-manager-and-amazon-data-firehose-tools"></a>

**AWS services**
+ [AWS Firewall Manager](https://docs.aws.amazon.com/waf/latest/developerguide/fms-chapter.html) is a security management service that helps you to centrally configure and manage firewall rules across your accounts and applications in AWS Organizations.
+ [Amazon Data Firehose](https://docs.aws.amazon.com/firehose/latest/dev/what-is-this-service.html) helps you deliver real-time [streaming data](http://aws.amazon.com/streaming-data/) to other AWS services, custom HTTP endpoints, and HTTP endpoints owned by supported third-party service providers, such as Splunk.
+ [Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) is a cloud-based object storage service that helps you store, protect, and retrieve any amount of data.
+ [AWS WAF](https://docs.aws.amazon.com/waf/latest/developerguide/what-is-aws-waf.html) is a web application firewall that helps you monitor HTTP and HTTPS requests that are forwarded to your protected web application resources.

**Other tools**
+ [Splunk](https://docs.splunk.com/Documentation) helps you monitor, visualize, and analyze log data.

## Epics
<a name="send-aws-waf-logs-to-splunk-by-using-aws-firewall-manager-and-amazon-data-firehose-epics"></a>

### Configure Splunk
<a name="configure-splunk"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Install the Splunk App for AWS. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/send-aws-waf-logs-to-splunk-by-using-aws-firewall-manager-and-amazon-data-firehose.html) | Security administrator, Splunk administrator | 
| Install the add-on for AWS WAF. | Repeat the previous instructions to install the **AWS Web Application Firewall Add-on** for Splunk. | Security administrator, Splunk administrator | 
| Install and configure the Splunk add-on for Firehose. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/send-aws-waf-logs-to-splunk-by-using-aws-firewall-manager-and-amazon-data-firehose.html) | Security administrator, Splunk administrator | 

### Create the Firehose delivery stream
<a name="create-the-akf-delivery-stream"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Grant Firehose access to a Splunk destination. | Configure the access policy that permits Firehose to access a Splunk destination and back up the log data to an S3 bucket. For more information, see [Grant Firehose access to a Splunk destination](https://docs.aws.amazon.com/firehose/latest/dev/controlling-access.html#using-iam-splunk). | Security administrator | 
| Create a Firehose delivery stream. | In the same account where you manage the web ACLs for AWS WAF, create a delivery stream in Firehose. You must specify an IAM role when you create the delivery stream; Firehose assumes that role to access the specified S3 bucket. For instructions, see [Creating a delivery stream](https://docs.aws.amazon.com/firehose/latest/dev/basic-create.html). Note the following: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/send-aws-waf-logs-to-splunk-by-using-aws-firewall-manager-and-amazon-data-firehose.html) Repeat this process for each token that you configured in the HTTP event collector. | Security administrator | 
| Test the delivery stream. | Test the delivery stream to validate that it is properly configured. For instructions, see [Test using Splunk as the destination](https://docs.aws.amazon.com/firehose/latest/dev/test-drive-firehose.html#test-drive-destination-splunk) in the Firehose documentation. | Security administrator | 
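
The delivery stream can also be created with the AWS CLI, for example `aws firehose create-delivery-stream --cli-input-json file://stream.json`. The following input JSON is a minimal sketch, not a complete configuration; all bracketed values are placeholders for your environment. Note that AWS WAF requires delivery stream names to begin with `aws-waf-logs-`.

```json
{
  "DeliveryStreamName": "aws-waf-logs-splunk",
  "DeliveryStreamType": "DirectPut",
  "SplunkDestinationConfiguration": {
    "HECEndpoint": "https://<your-splunk-host>:8088",
    "HECEndpointType": "Raw",
    "HECToken": "<your-hec-token>",
    "S3BackupMode": "FailedEventsOnly",
    "S3Configuration": {
      "RoleARN": "arn:aws:iam::<account-id>:role/<firehose-role>",
      "BucketARN": "arn:aws:s3:::<backup-bucket>"
    }
  }
}
```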

### Configure Firewall Manager to log data
<a name="configure-firewall-manager-to-log-data"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Configure the Firewall Manager policies. | Configure the Firewall Manager policies to enable logging and to forward logs to the correct Firehose delivery stream. For more information and instructions, see [Configuring logging for an AWS WAF policy](https://docs.aws.amazon.com/waf/latest/developerguide/waf-policies.html#waf-policies-logging-config). | Security administrator | 
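
Firewall Manager applies the logging configuration across the web ACLs that it manages. For reference, the equivalent configuration for a single web ACL (usable with `aws wafv2 put-logging-configuration --cli-input-json file://logging.json`) looks like the following sketch; both ARNs are placeholders:

```json
{
  "LoggingConfiguration": {
    "ResourceArn": "arn:aws:wafv2:us-east-1:<account-id>:regional/webacl/<web-acl-name>/<web-acl-id>",
    "LogDestinationConfigs": [
      "arn:aws:firehose:us-east-1:<account-id>:deliverystream/aws-waf-logs-splunk"
    ]
  }
}
```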

## Related resources
<a name="send-aws-waf-logs-to-splunk-by-using-aws-firewall-manager-and-amazon-data-firehose-resources"></a>

**AWS resources**
+ [Logging web ACL traffic](https://docs.aws.amazon.com/waf/latest/developerguide/logging.html) (AWS WAF documentation)
+ [Configuring logging for an AWS WAF policy](https://docs.aws.amazon.com/waf/latest/developerguide/waf-policies.html#waf-policies-logging-config) (AWS WAF documentation)
+ [Tutorial: Sending VPC Flow Logs to Splunk Using Amazon Data Firehose](https://docs.aws.amazon.com/firehose/latest/dev/vpc-splunk-tutorial.html) (Firehose documentation)
+ [How do I push VPC flow logs to Splunk using Amazon Data Firehose?](https://aws.amazon.com/premiumsupport/knowledge-center/push-flow-logs-splunk-firehose/) (AWS Knowledge Center)
+ [Power data ingestion into Splunk using Amazon Data Firehose](https://aws.amazon.com/blogs/big-data/power-data-ingestion-into-splunk-using-amazon-kinesis-data-firehose/) (AWS blog post)

**Splunk documentation**
+ [Splunk Add-on for Amazon Data Firehose](https://docs.splunk.com/Documentation/AddOns/released/Firehose/About)

# Serve static content in an Amazon S3 bucket through a VPC by using Amazon CloudFront
<a name="serve-static-content-in-an-amazon-s3-bucket-through-a-vpc-by-using-amazon-cloudfront"></a>

*Angel Emmanuel Hernandez Cebrian, Amazon Web Services*

## Summary
<a name="serve-static-content-in-an-amazon-s3-bucket-through-a-vpc-by-using-amazon-cloudfront-summary"></a>

When you serve static content that is hosted on Amazon Web Services (AWS), the recommended approach is to use an Amazon Simple Storage Service (Amazon S3) bucket as the origin and use Amazon CloudFront to distribute the content. This solution has two primary benefits: the convenience of caching static content at edge locations, and the ability to define [web access control lists](https://docs.aws.amazon.com/waf/latest/developerguide/web-acl.html) (web ACLs) for the CloudFront distribution, which helps you secure requests to the content with minimal configuration and administrative overhead.

However, there is a common architectural limitation to the standard, recommended approach. In some environments, you want virtual firewall appliances deployed in a virtual private cloud (VPC) to inspect all content, including static content. The standard approach doesn’t route traffic through the VPC for inspection. This pattern provides an alternative architectural solution. You still use a CloudFront distribution to serve static content in an S3 bucket, but the traffic is routed through the VPC by using an Application Load Balancer. An AWS Lambda function then retrieves and returns the content from the S3 bucket.

## Prerequisites and limitations
<a name="serve-static-content-in-an-amazon-s3-bucket-through-a-vpc-by-using-amazon-cloudfront-prereqs"></a>

**Prerequisites**
+ An active AWS account.
+ Static website content hosted in an S3 bucket.

**Limitations**
+ The resources in this pattern must be in a single AWS Region, but they can be provisioned in different AWS accounts.
+ Limits apply to the maximum request and response size that the Lambda function can receive and send, respectively. For more information, see *Limits* in [Lambda functions as targets](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/lambda-functions.html) (Elastic Load Balancing documentation).
+ It's important to find a good balance between performance, scalability, security, and cost-effectiveness when using this approach. Despite the high scalability of Lambda, if the number of concurrent Lambda invocations exceeds the maximum quota, some requests are throttled. For more information, see [Lambda quotas](https://docs.aws.amazon.com/lambda/latest/dg/gettingstarted-limits.html) (Lambda documentation). You also need to consider pricing when using Lambda. To minimize Lambda invocations, make sure that you properly define the cache for the CloudFront distribution. For more information, see [Optimizing caching and availability](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/ConfiguringCaching.html) (CloudFront documentation).
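
One way to reduce Lambda invocations is to have the function return a `Cache-Control` header in its response so that CloudFront can serve repeat requests from its cache. The following is a minimal sketch; the function name and the one-day `max-age` value are illustrative, not part of the pattern's reference code.

```javascript
// Sketch: build an ALB-style Lambda response that includes a Cache-Control
// header, so that CloudFront can cache the object and avoid invoking the
// Lambda function on every request. Tune max-age for your content.
function cacheableResponse(body, contentType) {
    return {
        statusCode: 200,
        headers: {
            'Content-Type': contentType,
            'Cache-Control': 'public, max-age=86400'
        },
        body: body,
        isBase64Encoded: false
    };
}
```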

## Architecture
<a name="serve-static-content-in-an-amazon-s3-bucket-through-a-vpc-by-using-amazon-cloudfront-architecture"></a>

**Target technology stack**
+ CloudFront
+ Amazon Virtual Private Cloud (Amazon VPC)
+ Application Load Balancer
+ Lambda
+ Amazon S3

**Target architecture**

The following image shows the suggested architecture when you need to use CloudFront to serve static content from an S3 bucket through a VPC.

![\[Traffic flow through Application Load Balancers in the VPC to the Lambda function.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/e0dd6928-4fe0-47ab-954f-9de5563349d8/images/b42c7dd9-4a72-4998-bf88-195c8f90ed3e.png)


1. The client requests the URL of the CloudFront distribution to get a particular website file from the S3 bucket.

1. CloudFront sends the request to AWS WAF. AWS WAF filters the request by using the web ACLs applied to the CloudFront distribution. If the request is determined to be valid, the flow continues. If the request is determined to be invalid, the client receives a 403 error.

1. CloudFront checks its internal cache. If there is a valid cached entry matching the incoming request, the associated response is sent back to the client. If not, the flow continues.

1. CloudFront forwards the request to the URL of the specified Application Load Balancer.

1. The Application Load Balancer has a listener associated with a target group based on a Lambda function. The Application Load Balancer invokes the Lambda function.

1. The Lambda function connects to the S3 bucket, performs a `GetObject` operation, and returns the content as the response.
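
The request-to-object mapping and the encoding choice that the Lambda function performs in the last step can be sketched as follows. The helper names are illustrative; the full function appears in the [Additional information](#serve-static-content-in-an-amazon-s3-bucket-through-a-vpc-by-using-amazon-cloudfront-additional) section.

```javascript
// Sketch of the request-to-object mapping performed by the Lambda function.
// The Application Load Balancer passes the request path in event.path;
// an empty path maps to the default document, index.html.
function pathToKey(path) {
    var key = path.replace(/^\//, ''); // strip the leading slash
    return key === '' ? 'index.html' : key;
}

// Binary content (images) must be base64-encoded in the ALB response;
// text assets are returned as UTF-8.
function encodingFor(contentType) {
    return contentType.indexOf('image/') > -1 ? 'base64' : 'utf8';
}
```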

**Automation and scale**

To automate the deployment of static content in this approach, create CI/CD pipelines that update the S3 buckets hosting the website content.

The Lambda function scales automatically to handle concurrent requests, within the quotas and limitations of the service. For more information, see [Lambda function scaling](https://docs.aws.amazon.com/lambda/latest/dg/invocation-scaling.html) and [Lambda quotas](https://docs.aws.amazon.com/lambda/latest/dg/gettingstarted-limits.html) (Lambda documentation). AWS automatically scales the other services and features in this pattern, such as CloudFront and the Application Load Balancer.

## Tools
<a name="serve-static-content-in-an-amazon-s3-bucket-through-a-vpc-by-using-amazon-cloudfront-tools"></a>
+ [Amazon CloudFront](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Introduction.html) speeds up distribution of your web content by delivering it through a worldwide network of data centers, which lowers latency and improves performance.
+ [Elastic Load Balancing (ELB)](https://docs.aws.amazon.com/elasticloadbalancing/latest/userguide/what-is-load-balancing.html) distributes incoming application or network traffic across multiple targets. In this pattern, you use an [Application Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html) provisioned through Elastic Load Balancing to direct traffic to the Lambda function.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a compute service that helps you run code without needing to provision or manage servers. It runs your code only when needed and scales automatically, so you pay only for the compute time that you use.
+ [Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) is a cloud-based object storage service that helps you store, protect, and retrieve any amount of data.
+ [Amazon Virtual Private Cloud (Amazon VPC)](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html) helps you launch AWS resources into a virtual network that you’ve defined. This virtual network resembles a traditional network that you’d operate in your own data center, with the benefits of using the scalable infrastructure of AWS.

## Epics
<a name="serve-static-content-in-an-amazon-s3-bucket-through-a-vpc-by-using-amazon-cloudfront-epics"></a>

### Use CloudFront to serve static content from Amazon S3 through a VPC
<a name="use-cloudfront-to-serve-static-content-from-amazon-s3-through-a-vpc"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a VPC. | Create a VPC for hosting the resources deployed in this pattern, such as the Application Load Balancer and the Lambda function. For instructions, see [Create a VPC](https://docs.aws.amazon.com/vpc/latest/userguide/working-with-vpcs.html#Create-VPC) (Amazon VPC documentation). | Cloud architect | 
| Create an AWS WAF web ACL. | Create an AWS WAF web ACL. Later in this pattern, you apply this web ACL to the CloudFront distribution. For instructions, see [Creating a web ACL](https://docs.aws.amazon.com/waf/latest/developerguide/web-acl-creating.html) (AWS WAF documentation). | Cloud architect | 
| Create the Lambda function. | Create the Lambda function that serves the static content hosted in the S3 bucket as a website. Use the code provided in the [Additional information](#serve-static-content-in-an-amazon-s3-bucket-through-a-vpc-by-using-amazon-cloudfront-additional) section of this pattern. Customize the code to identify your target S3 bucket. | General AWS | 
| Upload the Lambda function. | Enter the following command to upload the Lambda function code to Lambda as a .zip file archive (replace the function name placeholder with the name of your function).<pre>aws lambda update-function-code \<br />--function-name &lt;function-name&gt; \<br />--zip-file fileb://lambda-alb-s3-website.zip</pre> | General AWS | 
| Create an Application Load Balancer. | Create an internet-facing Application Load Balancer that points to the Lambda function. For instructions, see [Create a target group for the Lambda function](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/lambda-functions.html#register-lambda-function) (Elastic Load Balancing documentation). For a high-availability configuration, attach the Application Load Balancer to subnets in at least two different Availability Zones. | Cloud architect | 
| Create a CloudFront distribution. | Create a CloudFront distribution that points to the Application Load Balancer that you created. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/serve-static-content-in-an-amazon-s3-bucket-through-a-vpc-by-using-amazon-cloudfront.html) | Cloud architect | 

## Related resources
<a name="serve-static-content-in-an-amazon-s3-bucket-through-a-vpc-by-using-amazon-cloudfront-resources"></a>

**AWS documentation**
+ [Optimizing caching and availability](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/ConfiguringCaching.html) (CloudFront documentation)
+ [Lambda functions as targets](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/lambda-functions.html) (Elastic Load Balancing documentation)
+ [Lambda quotas](https://docs.aws.amazon.com/lambda/latest/dg/gettingstarted-limits.html) (Lambda documentation)

**AWS service websites**
+ [Application Load Balancer](https://aws.amazon.com/es/elasticloadbalancing/application-load-balancer/)
+ [Lambda](https://aws.amazon.com/en/lambda/)
+ [CloudFront](https://aws.amazon.com/en/cloudfront/)
+ [Amazon S3](https://aws.amazon.com/en/s3/)
+ [AWS WAF](https://aws.amazon.com/en/waf/)
+ [Amazon VPC](https://aws.amazon.com/en/vpc/)

## Additional information
<a name="serve-static-content-in-an-amazon-s3-bucket-through-a-vpc-by-using-amazon-cloudfront-additional"></a>

**Code**

The following example Lambda function is written in Node.js. This Lambda function acts as a web server that performs a `GetObject` operation on the S3 bucket that contains the website resources.

```
/**
 * This is an AWS Lambda function created for demonstration purposes.
 * It retrieves static assets from a defined Amazon S3 bucket.
 * To make the content available through a URL, use an Application Load Balancer with a Lambda integration.
 *
 * Set the S3_BUCKET environment variable in the Lambda function definition.
 */

var AWS = require('aws-sdk');

exports.handler = function(event, context, callback) {

    var bucket = process.env.S3_BUCKET;
    var key = event.path.replace('/', '');

    if (key == '') {
        key = 'index.html';
    }

    // Fetch the requested object from S3
    var s3 = new AWS.S3();
    return s3.getObject({Bucket: bucket, Key: key},
        function(err, data) {

            if (err) {
                // Report the error to the Application Load Balancer
                // instead of leaving the request hanging.
                return callback(err);
            }

            // Binary content (images) must be base64-encoded in the response
            var isBase64Encoded = false;
            var encoding = 'utf8';

            if (data.ContentType.indexOf('image/') > -1) {
                isBase64Encoded = true;
                encoding = 'base64';
            }

            var resp = {
                statusCode: 200,
                headers: {
                    'Content-Type': data.ContentType,
                },
                body: Buffer.from(data.Body).toString(encoding),
                isBase64Encoded: isBase64Encoded
            };

            callback(null, resp);
        }
    );
};
```

# More patterns
<a name="contentdelivery-more-patterns-pattern-list"></a>

**Topics**
+ [Check an Amazon CloudFront distribution for access logging, HTTPS, and TLS version](check-an-amazon-cloudfront-distribution-for-access-logging-https-and-tls-version.md)
+ [Deploy a gRPC-based application on an Amazon EKS cluster and access it with an Application Load Balancer](deploy-a-grpc-based-application-on-an-amazon-eks-cluster-and-access-it-with-an-application-load-balancer.md)
+ [Deploy preventative attribute-based access controls for public subnets](deploy-preventative-attribute-based-access-controls-for-public-subnets.md)
+ [Deploy resources in an AWS Wavelength Zone by using Terraform](deploy-resources-wavelength-zone-using-terraform.md)
+ [Deploy the Security Automations for AWS WAF solution by using Terraform](deploy-the-security-automations-for-aws-waf-solution-by-using-terraform.md)
+ [Set up a serverless cell router for a cell-based architecture](serverless-cell-router-architecture.md)
+ [Use Amazon Q Developer as a coding assistant to increase your productivity](use-q-developer-as-coding-assistant-to-increase-productivity.md)
+ [View AWS Network Firewall logs and metrics by using Splunk](view-aws-network-firewall-logs-and-metrics-by-using-splunk.md)