

# Developer tools
<a name="developer-tools-pattern-list"></a>

**Topics**
+ [DevOps](devops-pattern-list.md)
+ [Infrastructure](infrastructure-pattern-list.md)
+ [Web & mobile apps](websitesandwebapps-pattern-list.md)

# DevOps
<a name="devops-pattern-list"></a>

**Topics**
+ [Accelerate MLOps with Backstage and self-service Amazon SageMaker AI templates](accelerate-mlops-with-backstage-and-sagemaker-templates.md)
+ [Automate AWS infrastructure operations by using Amazon Bedrock](automate-aws-infrastructure-operations-by-using-amazon-bedrock.md)
+ [Automate CloudFront updates when load balancer endpoints change by using Terraform](automate-cloudfront-updates-when-load-balancer-endpoints-change.md)
+ [Automate Amazon CodeGuru reviews for AWS CDK Python applications by using GitHub Actions](automate-amazon-codeguru-reviews-for-aws-cdk-python-applications.md)
+ [Automate AWS Supply Chain data lakes deployment in a multi-repository setup](automate-the-deployment-of-aws-supply-chain-data-lakes.md)
+ [Automate AWS resource assessment](automate-aws-resource-assessment.md)
+ [Install SAP systems automatically by using open-source tools](install-sap-systems-automatically-by-using-open-source-tools.md)
+ [Automate AWS Service Catalog portfolio and product deployment by using AWS CDK](automate-aws-service-catalog-portfolio-and-product-deployment-by-using-aws-cdk.md)
+ [Automate dynamic pipeline management for deploying hotfix solutions in Gitflow environments by using AWS Service Catalog and AWS CodePipeline](automate-dynamic-pipeline-management-for-deploying-hotfix-solutions.md)
+ [Automate deletion of AWS CloudFormation stacks and associated resources](automate-deletion-cloudformation-stacks-associated-resources.md)
+ [Automate ingestion and visualization of Amazon MWAA custom metrics on Amazon Managed Grafana by using Terraform](automate-ingestion-and-visualization-of-amazon-mwaa-custom-metrics.md)
+ [Automatically attach an AWS managed policy for Systems Manager to EC2 instance profiles using Cloud Custodian and AWS CDK](automatically-attach-an-aws-managed-policy-for-systems-manager-to-ec2-instance-profiles-using-cloud-custodian-and-aws-cdk.md)
+ [Automatically build CI/CD pipelines and Amazon ECS clusters for microservices using AWS CDK](automatically-build-ci-cd-pipelines-and-amazon-ecs-clusters-for-microservices-using-aws-cdk.md)
+ [Build and push Docker images to Amazon ECR using GitHub Actions and Terraform](build-and-push-docker-images-to-amazon-ecr-using-github-actions-and-terraform.md)
+ [Build and test iOS apps with AWS CodeCommit, AWS CodePipeline, and AWS Device Farm](build-and-test-ios-apps-with-aws-codecommit-aws-codepipeline-and-aws-device-farm.md)
+ [Configure mutual TLS authentication for applications running on Amazon EKS](configure-mutual-tls-authentication-for-applications-running-on-amazon-eks.md)
+ [Automate the creation of Amazon WorkSpaces Applications resources using AWS CloudFormation](automate-the-creation-of-appstream-2-0-resources-using-aws-cloudformation.md)
+ [Create a custom log parser for Amazon ECS using a Firelens log router](create-a-custom-log-parser-for-amazon-ecs-using-a-firelens-log-router.md)
+ [Create an API-driven resource orchestration framework using GitHub Actions and Terragrunt](create-an-api-driven-resource-orchestration-framework-using-github-actions-and-terragrunt.md)
+ [Create automated pull requests for Terraform-managed AWS infrastructure by using GitHub Actions](create-automated-pull-requests-for-terraform-managed-aws-infrastructure.md)
+ [Create dynamic CI pipelines for Java and Python projects automatically](create-dynamic-ci-pipelines-for-java-and-python-projects-automatically.md)
+ [Deploy CloudWatch Synthetics canaries by using Terraform](deploy-cloudwatch-synthetics-canaries-by-using-terraform.md)
+ [Deploy a ChatOps solution to manage SAST scan results by using Amazon Q Developer in chat applications custom actions and CloudFormation](deploy-chatops-solution-to-manage-sast-scan-results.md)
+ [Deploy agentic systems on Amazon Bedrock with the CrewAI framework by using Terraform](deploy-agentic-systems-on-amazon-bedrock-with-the-crewai-framework.md)
+ [Deploy an AWS Glue job with an AWS CodePipeline CI/CD pipeline](deploy-an-aws-glue-job-with-an-aws-codepipeline-ci-cd-pipeline.md)
+ [Deploy code in multiple AWS Regions using AWS CodePipeline, AWS CodeCommit, and AWS CodeBuild](deploy-code-in-multiple-aws-regions-using-aws-codepipeline-aws-codecommit-and-aws-codebuild.md)
+ [Deploy workloads from Azure DevOps pipelines to private Amazon EKS clusters](deploy-workloads-from-azure-devops-pipelines-to-private-amazon-eks-clusters.md)
+ [Execute Amazon Redshift SQL queries by using Terraform](execute-redshift-sql-queries-using-terraform.md)
+ [Export tags for a list of Amazon EC2 instances to a CSV file](export-tags-for-a-list-of-amazon-ec2-instances-to-a-csv-file.md)
+ [Export AWS Backup reports from across an organization in AWS Organizations as a CSV file](export-aws-backup-reports-from-across-an-organization-in-aws-organizations-as-a-csv-file.md)
+ [Generate an AWS CloudFormation template containing AWS Config managed rules using Troposphere](generate-an-aws-cloudformation-template-containing-aws-config-managed-rules-using-troposphere.md)
+ [Give SageMaker notebook instances temporary access to a CodeCommit repository in another AWS account](give-sagemaker-notebook-instances-temporary-access-to-a-codecommit-repository-in-another-aws-account.md)
+ [Implement a GitHub Flow branching strategy for multi-account DevOps environments](implement-a-github-flow-branching-strategy-for-multi-account-devops-environments.md)
+ [Implement a Gitflow branching strategy for multi-account DevOps environments](implement-a-gitflow-branching-strategy-for-multi-account-devops-environments.md)
+ [Implement a Trunk branching strategy for multi-account DevOps environments](implement-a-trunk-branching-strategy-for-multi-account-devops-environments.md)
+ [Implement centralized custom Checkov scanning to enforce policy before deploying AWS infrastructure](centralized-custom-checkov-scanning.md)
+ [Implement AI-powered Kubernetes diagnostics and troubleshooting with K8sGPT and Amazon Bedrock integration](implement-ai-powered-kubernetes-diagnostics-and-troubleshooting-with-k8sgpt-and-amazon-bedrock-integration.md)
+ [Automatically detect changes and initiate different CodePipeline pipelines for a monorepo in CodeCommit](automatically-detect-changes-and-initiate-different-codepipeline-pipelines-for-a-monorepo-in-codecommit.md)
+ [Integrate a Bitbucket repository with AWS Amplify using AWS CloudFormation](integrate-a-bitbucket-repository-with-aws-amplify-using-aws-cloudformation.md)
+ [Launch a CodeBuild project across AWS accounts using Step Functions and a Lambda proxy function](launch-a-codebuild-project-across-aws-accounts-using-step-functions-and-a-lambda-proxy-function.md)
+ [Manage Multi-AZ failover for EMR clusters by using Application Recovery Controller](multi-az-failover-spark-emr-clusters-arc.md)
+ [Manage blue/green deployments of microservices to multiple accounts and Regions by using AWS code services and AWS KMS multi-Region keys](manage-blue-green-deployments-of-microservices-to-multiple-accounts-and-regions-by-using-aws-code-services-and-aws-kms-multi-region-keys.md)
+ [Monitor Amazon ECR repositories for wildcard permissions using AWS CloudFormation and AWS Config](monitor-amazon-ecr-repositories-for-wildcard-permissions-using-aws-cloudformation-and-aws-config.md)
+ [Optimize multi-account serverless deployments by using the AWS CDK and GitHub Actions workflows](optimize-multi-account-serverless-deployments.md)
+ [Provision AWS Service Catalog products based on AWS CloudFormation templates by using GitHub Actions](provision-aws-service-catalog-products-using-github-actions.md)
+ [Provision least-privilege IAM roles by deploying a role vending machine solution](provision-least-privilege-iam-roles-by-deploying-a-role-vending-machine-solution.md)
+ [Publish Amazon CloudWatch metrics to a CSV file](publish-amazon-cloudwatch-metrics-to-a-csv-file.md)
+ [Remove Amazon EC2 entries across AWS accounts from AWS Managed Microsoft AD by using AWS Lambda automation](remove-amazon-ec2-entries-across-aws-accounts-from-aws-managed-microsoft-ad.md)
+ [Remove Amazon EC2 entries in the same AWS account from AWS Managed Microsoft AD by using AWS Lambda automation](remove-amazon-ec2-entries-in-the-same-aws-account-from-aws-managed-microsoft-ad.md)
+ [Run unit tests for Python ETL jobs in AWS Glue using the pytest framework](run-unit-tests-for-python-etl-jobs-in-aws-glue-using-the-pytest-framework.md)
+ [Set up a CI/CD pipeline by using AWS CodePipeline and AWS CDK](set-up-a-ci-cd-pipeline-by-using-aws-codepipeline-and-aws-cdk.md)
+ [Set up centralized logging at enterprise scale by using Terraform](set-up-centralized-logging-at-enterprise-scale-by-using-terraform.md)
+ [Set up end-to-end encryption for applications on Amazon EKS using cert-manager and Let's Encrypt](set-up-end-to-end-encryption-for-applications-on-amazon-eks-using-cert-manager-and-let-s-encrypt.md)
+ [Simplify Amazon EKS multi-tenant application deployment by using Flux](simplify-amazon-eks-multi-tenant-application-deployment-by-using-flux.md)
+ [Streamline Amazon Lex bot development and deployment by using an automated workflow](streamline-amazon-lex-bot-development-and-deployment-using-an-automated-workflow.md)
+ [Coordinate resource dependency and task execution by using the AWS Fargate WaitCondition hook construct](use-the-aws-fargate-waitcondition-hook-construct.md)
+ [Use third-party Git source repositories in AWS CodePipeline](use-third-party-git-source-repositories-in-aws-codepipeline.md)
+ [Create a CI/CD pipeline to validate Terraform configurations by using AWS CodePipeline](create-a-ci-cd-pipeline-to-validate-terraform-configurations-by-using-aws-codepipeline.md)
+ [More patterns](devops-more-patterns-pattern-list.md)

# Accelerate MLOps with Backstage and self-service Amazon SageMaker AI templates
<a name="accelerate-mlops-with-backstage-and-sagemaker-templates"></a>

*Ashish Bhatt, Shashank Hirematt, and Shivanshu Suryakar, Amazon Web Services*

## Summary
<a name="accelerate-mlops-with-backstage-and-sagemaker-templates-summary"></a>

Organizations that use machine learning operations (MLOps) systems face significant challenges in scaling, standardizing, and securing their ML infrastructure. This pattern introduces a transformative approach that combines [Backstage](https://backstage.io/), an open source developer portal, with [Amazon SageMaker AI](https://aws.amazon.com/sagemaker/) and hardened infrastructure as code (IaC) modules to improve how your data science teams can develop, deploy, and manage ML workflows.

The IaC modules for this pattern are provided in the GitHub [AWS AIOps modules](https://github.com/awslabs/aiops-modules/tree/main/modules/sagemaker) repository. These modules offer pre-built templates for setting up ML infrastructure and creating consistent ML environments. However, data scientists often struggle to use these templates directly because they require infrastructure expertise. Adding a developer portal such as Backstage creates a user-friendly way for data scientists to deploy standardized ML environments without needing to understand the underlying infrastructure details.

By using Backstage as a self-service platform and integrating preconfigured SageMaker AI templates, you can:
+ Accelerate time to value for your ML initiatives.
+ Help enforce consistent security and governance.
+ Provide data scientists with standardized, compliant environments.
+ Reduce operational overhead and infrastructure complexity.

This pattern addresses the critical challenges of MLOps and provides a scalable, repeatable framework that enables innovation while maintaining organizational standards.

**Target audience**

This pattern is intended for a broad audience involved in ML, cloud architecture, and platform engineering within an organization. This includes:
+ **ML engineers** who want to standardize and automate ML workflow deployments.
+ **Data scientists** who want self-service access to preconfigured and compliant ML environments.
+ **Platform engineers** who are responsible for building and maintaining internal developer platforms and shared infrastructure.
+ **Cloud architects** who design scalable, secure, and cost-effective cloud solutions for MLOps.
+ **DevOps engineers** who are interested in extending continuous integration and continuous delivery (CI/CD) practices to ML infrastructure provisioning and workflows.
+ **Technical leads and managers** who oversee ML initiatives and want to improve team productivity, governance, and time to market.

For more information about MLOps challenges, SageMaker AI MLOps modules, and how the solution provided by this pattern can address the needs of your ML teams, see the [Additional information](#accelerate-mlops-with-backstage-and-sagemaker-templates-additional) section.

## Prerequisites and limitations
<a name="accelerate-mlops-with-backstage-and-sagemaker-templates-prereqs"></a>

**Prerequisites**
+ AWS Identity and Access Management (IAM) [roles and permissions](https://github.com/aws-samples/sample-aiops-idp-backstage/blob/main/SETUP.md#prerequisites) for provisioning resources into your AWS account
+ An understanding of [Amazon SageMaker Studio](https://docs.aws.amazon.com/sagemaker/latest/dg/studio-updated.html), [SageMaker Projects](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-projects-whatis.html), [SageMaker Pipelines](https://docs.aws.amazon.com/sagemaker/latest/dg/pipelines-overview.html), and [SageMaker Model Registry](https://docs.aws.amazon.com/sagemaker/latest/dg/model-registry.html) concepts
+ An understanding of IaC principles and experience with tools such as the [AWS Cloud Development Kit (AWS CDK)](https://aws.amazon.com/cdk/)

**Limitations**
+ **Limited template coverage**. Currently, the solution supports only SageMaker AI-related AIOps modules from the broader [AIOps solution](https://github.com/awslabs/aiops-modules). Other modules, such as Ray on Amazon Elastic Kubernetes Service (Amazon EKS), MLflow, Apache Airflow, and fine-tuning for Amazon Bedrock, are not yet available as Backstage templates.
+ **Non-configurable default settings**. Templates use fixed default configurations from the AIOps SageMaker modules with no customization. You cannot modify instance types, storage sizes, networking configurations, or security policies through the Backstage interface, which limits flexibility for specific use cases.
+ **AWS-only support**. The platform is designed exclusively for AWS deployments and doesn't support multicloud scenarios. Organizations that use cloud services outside the AWS Cloud cannot use these templates for their ML infrastructure needs.
+ **Manual credential management**. You must manually provide your AWS credentials for each deployment. This solution doesn’t provide integration with corporate identity providers, AWS IAM Identity Center, or automated credential rotation.
+ **Limited lifecycle management**. The templates lack comprehensive resource lifecycle management features such as automated cleanup policies, cost optimization recommendations, and infrastructure drift detection. You must manually manage and monitor deployed resources after creation.

## Architecture
<a name="accelerate-mlops-with-backstage-and-sagemaker-templates-architecture"></a>

The following diagram shows the solution architecture for a unified developer portal that standardizes and accelerates ML infrastructure deployment with SageMaker AI across environments.

![Architecture for unified developer portal with Backstage, CNOE, GitHub Actions, and Seed-Farmer.](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/c16160cf-d637-423e-93a7-485ffbb28646/images/233adab3-83cf-42f3-a1de-72d0b8ade5ae.png)


In this architecture:

1. [AWS application modernization blueprints](https://github.com/aws-samples/appmod-blueprints.git) provision the infrastructure setup with an [Amazon EKS](https://docs.aws.amazon.com/eks/latest/userguide/what-is-eks.html) cluster as a base for the [Cloud Native Operational Excellence (CNOE)](https://cnoe.io/) framework. This comprehensive solution addresses complex cloud-native infrastructure management challenges by providing a scalable internal developer platform (IDP). The blueprints offer a structured approach to setting up a robust, flexible infrastructure that can adapt to your evolving organizational needs.

1. The CNOE open source framework consolidates DevOps tools and solves ecosystem fragmentation through a unified platform engineering approach. By bringing together disparate tools and technologies, it simplifies the complex landscape of cloud-native development, so your teams can focus on innovation instead of toolchain management. The framework provides a standardized methodology for selecting, integrating, and managing development tools.

1. With CNOE, Backstage is deployed as an out-of-the-box solution within the Amazon EKS cluster. Backstage is bundled with robust authentication through [Keycloak](https://www.keycloak.org/) and comprehensive deployment workflows through [Argo CD](https://argo-cd.readthedocs.io/en/stable/). This integrated platform creates a centralized environment for managing development processes and provides a single place for teams to access, deploy, and monitor their infrastructure and applications across multiple environments.

1. A GitHub repository contains preconfigured AIOps software templates that cover the entire SageMaker AI lifecycle. These templates address critical ML infrastructure needs, including SageMaker Studio provisioning, model training, inference pipelines, and model monitoring. These templates help you accelerate your ML initiatives and ensure consistency across different projects and teams.

1. [GitHub Actions](https://github.com/features/actions) implements an automated workflow that dynamically triggers resource provisioning through the [Seed-Farmer](https://github.com/awslabs/seed-farmer) utility. This approach integrates the Backstage catalog with the AIOps modules repository and creates a streamlined infrastructure deployment process. The automation reduces manual intervention, minimizes human error, and ensures rapid, consistent infrastructure creation across different environments.

1. The [AWS CDK](https://docs.aws.amazon.com/cdk/v2/guide/home.html) helps you define and provision infrastructure as code, and ensures repeatable, secure, and compliant resource deployment across specified AWS accounts. This approach provides maximum governance with minimal manual intervention, so you can create standardized infrastructure templates that can be easily replicated, version-controlled, and audited.
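
To make step 6 more concrete, the following is a minimal AWS CDK (Python) sketch of the kind of IaC module this architecture deploys. The stack, domain name, and network parameters shown here are illustrative assumptions; the actual SageMaker AI modules in the AIOps repository define their own constructs and defaults.

```python
# Minimal AWS CDK (Python) sketch of an IaC module similar to what step 6 describes.
# The stack name, domain name, and network values are illustrative only; the AIOps
# modules define their own constructs and defaults.
from aws_cdk import App, Stack, aws_sagemaker as sagemaker
from constructs import Construct


class SageMakerDomainStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, *,
                 vpc_id: str, subnet_ids: list[str], execution_role_arn: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # A SageMaker Studio domain provisioned from version-controlled code,
        # so every team gets the same baseline configuration.
        sagemaker.CfnDomain(
            self, "StudioDomain",
            domain_name="ml-team-studio",            # hypothetical name
            auth_mode="IAM",
            vpc_id=vpc_id,
            subnet_ids=subnet_ids,
            default_user_settings=sagemaker.CfnDomain.UserSettingsProperty(
                execution_role=execution_role_arn,
            ),
        )


app = App()
SageMakerDomainStack(
    app, "SageMakerDomainStack",
    vpc_id="vpc-0123456789abcdef0",                  # placeholder values
    subnet_ids=["subnet-0123456789abcdef0"],
    execution_role_arn="arn:aws:iam::111122223333:role/SageMakerExecutionRole",
)
app.synth()
```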

## Tools
<a name="accelerate-mlops-with-backstage-and-sagemaker-templates-tools"></a>

**AWS services**
+ [AWS Cloud Development Kit (AWS CDK)](https://docs.aws.amazon.com/cdk/v2/guide/home.html) is a software development framework that helps you define and provision AWS Cloud infrastructure in code.
+ [Amazon Elastic Kubernetes Service (Amazon EKS)](https://docs.aws.amazon.com/eks/latest/userguide/what-is-eks.html) helps you run Kubernetes on AWS without needing to install or maintain your own Kubernetes control plane or nodes.
+ [Amazon SageMaker AI](https://docs.aws.amazon.com/sagemaker/) is a managed ML service that helps you build and train ML models and then deploy them into a production-ready hosted environment.

**Other tools**
+ [Backstage](https://backstage.io/) is an open source framework that helps you build internal developer portals.
+ [GitHub Actions](https://github.com/features/actions) is a CI/CD platform that automates software development workflows, including tasks such as building, testing, and deploying code.

**Code repositories**

This pattern uses code and templates from the following GitHub repositories:
+ [AIOps internal developer platform (IDP) with Backstage](https://github.com/aws-samples/sample-aiops-idp-backstage/) repository
+ SageMaker AI-related modules from the [AWS AIOps modules](https://github.com/awslabs/aiops-modules) repository
+ [Modern engineering on AWS](https://github.com/aws-samples/appmod-blueprints) repository

**Implementation**

This implementation uses a production-grade deployment pattern for Backstage from the [Modern engineering on AWS](https://github.com/aws-samples/appmod-blueprints) repository. This approach significantly simplifies the setup process while incorporating AWS best practices for security and scalability.

The [Epics](#accelerate-mlops-with-backstage-and-sagemaker-templates-epics) section of this pattern outlines the implementation approach. For detailed, step-by-step deployment instructions, see the comprehensive [deployment guide](https://github.com/aws-samples/sample-aiops-idp-backstage/blob/main/SETUP.md) available in the [AIOps internal developer platform (IDP) with Backstage](https://github.com/aws-samples/sample-aiops-idp-backstage/) repository. The implementation includes:
+ Initial Backstage platform deployment
+ Integration of SageMaker software templates with Backstage
+ Consuming and maintaining Backstage templates

The deployment guide also includes guidance for ongoing maintenance, troubleshooting, and platform scaling.

## Best practices
<a name="accelerate-mlops-with-backstage-and-sagemaker-templates-best-practices"></a>

Follow these best practices to help ensure security, governance, and operational excellence in your MLOps infrastructure implementations.

**Template management**
+ Never make breaking changes to live templates.
+ Always test updates thoroughly before production deployment.
+ Maintain clear and well-documented template versions.

**Security**
+ Pin GitHub Actions to specific commit SHA (secure hash algorithm) values to help prevent supply chain attacks.
+ Use least-privilege IAM roles with granular permissions.
+ Store sensitive credentials in [GitHub Secrets](https://docs.github.com/en/actions/concepts/security/secrets) and [AWS Secrets Manager](https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html).
+ Never hardcode credentials in templates.

**Governance and tracking**
+ Implement comprehensive resource tagging standards (a tagging sketch follows this list).
+ Enable precise cost tracking and compliance monitoring.
+ Maintain clear audit trails for infrastructure changes.
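
As a hedged illustration of the tagging bullet above, the following AWS CDK (Python) sketch applies a standard set of tags to every resource in an app so that cost tracking and compliance reports can filter on them. The tag keys and values are examples only, not an organizational standard.

```python
# Hypothetical illustration of enforcing a tagging standard from code.
# Tag keys and values are examples, not an organizational standard.
from aws_cdk import App, Tags

app = App()
# ... stacks are added to the app here ...

for key, value in {
    "CostCenter": "ml-platform",
    "Environment": "dev",
    "Owner": "data-science-team",
}.items():
    Tags.of(app).add(key, value)   # propagates to all taggable resources in the app

app.synth()
```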

This guide provides a strong foundation for implementing these best practices by using Backstage, SageMaker AI, and IaC modules.

## Epics
<a name="accelerate-mlops-with-backstage-and-sagemaker-templates-epics"></a>

### Set up your ML environment
<a name="set-up-your-ml-environment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Deploy Backstage. | This step uses the blueprints in the [Modern engineering on AWS](https://github.com/aws-samples/appmod-blueprints) repository to build a robust, scalable infrastructure that integrates multiple AWS services to create a centralized IDP for ML workflows. Follow the instructions in the [Backstage deployment section](https://github.com/aws-samples/sample-aiops-idp-backstage/blob/main/SETUP.md#backstage-deployment) of the deployment guide to clone the repository, install dependencies, bootstrap the AWS CDK, configure environment variables, and deploy the Backstage platform. The infrastructure uses Amazon EKS as a container orchestration platform for deploying IDP components. The Amazon EKS architecture includes secure networking configurations to establish strict network isolation and control access patterns. The platform integrates with authentication mechanisms to help secure user access across services and environments. | Platform engineer | 
| Set up your SageMaker AI templates. | This step uses the scripts in the GitHub [AIOps internal developer platform (IDP) with Backstage](https://github.com/aws-samples/sample-aiops-idp-backstage/) repository. Follow the instructions in the [SageMaker template setup](https://github.com/aws-samples/sample-aiops-idp-backstage/blob/main/SETUP.md#sagemaker-template-setup) section of the deployment guide to clone the repository, set up prerequisites, and run the setup script. This process creates a repository that contains the SageMaker AI templates that are required for integration with Backstage. | Platform engineer | 
| Integrate the SageMaker AI templates with Backstage. | Follow the instructions in the [SageMaker templates integration](https://github.com/aws-samples/sample-aiops-idp-backstage/blob/main/SETUP.md#sagemaker-templates-integration) section of the deployment guide to register your SageMaker AI templates. This step integrates the AIOps modules (SageMaker AI templates from the last step) into your Backstage deployment so you can self-service your ML infrastructure needs. | Platform engineer | 
| Use the SageMaker AI templates from Backstage. | Follow the instructions in the [Using SageMaker templates](https://github.com/aws-samples/sample-aiops-idp-backstage/blob/main/SETUP.md#using-sagemaker-templates) section of the deployment guide to access the Backstage portal and create the ML environment in SageMaker Studio. In the Backstage portal, you can select from available SageMaker AI templates, including options for SageMaker Studio environments, SageMaker notebooks, custom SageMaker project templates, and model deployment pipelines. After you provide configuration parameters, the platform creates dedicated repositories automatically and provisions AWS resources through GitHub Actions and Seed-Farmer. You can monitor progress through GitHub Actions logs and the Backstage component catalog. | Data scientist, Data engineer, Developer | 

### Manage templates for governance and compliance
<a name="manage-templates-for-governance-and-compliance"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Update SageMaker AI templates. | To update a SageMaker AI template in Backstage, follow these steps. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/accelerate-mlops-with-backstage-and-sagemaker-templates.html) | Platform engineer | 
| Create and manage multiple versions of a template. | For breaking changes or upgrades, you might want to create multiple versions of a SageMaker AI template. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/accelerate-mlops-with-backstage-and-sagemaker-templates.html) | Platform engineer | 

### Extend your ML environment
<a name="extend-your-ml-environment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Expand template coverage beyond SageMaker AI. | The current solution implements only SageMaker AI-related AIOps templates. You can extend the ML environment by adding [AIOps modules](https://github.com/awslabs/aiops-modules) and integrating custom software templates for additional AWS services and applications. You can create these by using the template designer interface in Backstage, by implementing custom scaffolder actions, or by maintaining template repositories with standard metadata. The platform supports template versioning, cross-team sharing, and validation workflows for consistency. For more information, see the [Backstage documentation](https://backstage.io/docs/overview/what-is-backstage/). You can also implement template inheritance patterns to create specialized versions of base templates. This extensibility enables you to manage diverse AWS resources and applications beyond SageMaker AI while preserving the simplified developer experience and maintaining your organization’s standards. | Platform engineer | 
| Use dynamic parameter injection. | The current templates use default configurations without customization, and run the Seed-Farmer CLI to deploy resources with default variables. You can extend the default configuration by using dynamic parameter injection for module-specific configurations. | Platform engineer | 
| Enhance security and compliance. | To enhance security in the creation of AWS resources, you can enable role-based access control (RBAC) integration with single sign-on (SSO), SAML, OpenID Connect (OIDC), and policy as code enforcement. | Platform engineer | 
| Add automated resource cleanup. | You can enable features for automated cleanup policies, and also add infrastructure drift detection and remediation. | Platform engineer | 

### Clean up resources
<a name="clean-up-resources"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Remove the Backstage infrastructure and SageMaker AI resources. | When you’ve finished using your ML environment, follow the instructions in the [Cleanup and resource management](https://github.com/aws-samples/sample-aiops-idp-backstage/blob/main/SETUP.md#cleanup-and-resource-management) section of the deployment guide to remove the Backstage infrastructure and to delete the SageMaker AI resources in your ML environment. | Platform engineer | 

## Troubleshooting
<a name="accelerate-mlops-with-backstage-and-sagemaker-templates-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| AWS CDK bootstrap failures |  Verify AWS credentials and Region configuration. | 
| Amazon EKS cluster access issues | Check **kubectl** configuration and IAM permissions. | 
| Application Load Balancer connectivity issues | Make sure that security groups allow inbound traffic on port 80/443. | 
| GitHub integration issues | Verify GitHub token permissions and organization access. | 
| SageMaker AI deployment failures | Check [AWS service quotas](https://docs.aws.amazon.com/general/latest/gr/sagemaker.html#limits_sagemaker) and IAM permissions. | 

## Related resources
<a name="accelerate-mlops-with-backstage-and-sagemaker-templates-resources"></a>
+ [Platform engineering](https://docs.aws.amazon.com/prescriptive-guidance/latest/aws-caf-platform-perspective/platform-eng.html) (in the guide *AWS Cloud Adoption Framework: Platform perspective*)
+ [Amazon SageMaker AI documentation](https://docs.aws.amazon.com/sagemaker/)
+ [Backstage Software Templates](https://backstage.io/docs/features/software-templates/) (Backstage website)
+ [AIOps modules repository](https://github.com/awslabs/aiops-modules) (collection of reusable IaC modules for ML)
+ [AIOps internal developer platform (IDP) with Backstage](https://github.com/aws-samples/sample-aiops-idp-backstage/) repository
+ [Modern engineering on AWS](https://github.com/aws-samples/appmod-blueprints) repository
+ [Cloud Native Operational Excellence (CNOE) website](https://cnoe.io/)

## Additional information
<a name="accelerate-mlops-with-backstage-and-sagemaker-templates-additional"></a>

**Business challenges**

Organizations that embark on or scale their MLOps initiatives frequently encounter these business and technical challenges:
+ **Inconsistent environments**. The lack of standardized development and deployment environments makes collaboration difficult and increases deployment risks.
+ **Manual provisioning overhead**. Manually setting up an ML infrastructure with SageMaker Studio, Amazon Simple Storage Service (Amazon S3) buckets, IAM roles, and CI/CD pipelines is time-consuming and error-prone, and diverts data scientists from their core task of model development.
+ **Lack of discoverability and reuse**. The lack of a centralized catalog makes it difficult to find existing ML models, datasets, and pipelines. This leads to redundant work and missed opportunities for reuse.
+ **Complex governance and compliance**. Ensuring that ML projects adhere to organizational security policies, data privacy regulations, and compliance standards such as the Health Insurance Portability and Accountability Act (HIPAA) and the General Data Protection Regulation (GDPR) can be challenging without automated guardrails.
+ **Slow time to value**. The cumulative effect of these challenges results in protracted ML project lifecycles and delays the realization of business value from ML investments.
+ **Security risks**. Inconsistent configurations and manual processes can introduce security vulnerabilities that make it difficult to enforce least privilege and network isolation.

These issues prolong development cycles, increase operational overhead, and introduce security risks. The iterative nature of ML requires repeatable workflows and efficient collaboration.

Gartner predicts that by 2026, 80% of software engineering organizations will have platform teams. (See [Platform Engineering Empowers Developers to be Better, Faster, Happier](https://www.gartner.com/en/experts/top-tech-trends-unpacked-series/platform-engineering-empowers-developers) on the Gartner website.) This prediction highlights how an IDP can accelerate software delivery. As an IDP, Backstage helps restore order to complex infrastructure so that teams can deliver high-quality code rapidly and safely. Integrating Backstage with hardened AIOps modules helps you shift from reactive troubleshooting to proactive prevention.

**MLOps SageMaker modules**

The [AIOps modules](https://github.com/awslabs/aiops-modules) in the GitHub repository used for this pattern provide a valuable foundation for standardizing MLOps on AWS through reusable and hardened IaC. These modules encapsulate best practices for provisioning SageMaker projects, pipelines, and associated networking and storage resources, with the goal of reducing complexity and accelerating the setup of ML environments. You can use these templates for various MLOps use cases to establish consistent and secure deployment patterns that foster a more governed and efficient approach to ML workflows.

Using the AIOps modules directly often requires platform teams to deploy and manage these IaC templates, which can present challenges for data scientists who want self-service access. Discovering and understanding the available templates, configuring the necessary parameters, and triggering their deployment might require navigating AWS service consoles or directly interacting with IaC tools. This can create friction, increase cognitive load for data scientists who prefer to focus on ML tasks, and potentially lead to inconsistent parameterization or deviations from organizational standards if these templates aren’t managed through a centralized and user-friendly interface. Integrating these powerful AIOps modules with an IDP such as Backstage helps address these challenges by providing a streamlined, self-service experience, enhanced discoverability, and stronger governance controls for using these standardized MLOps building blocks.

**Backstage as IDP**

An internal developer platform (IDP) is a self-service layer built by platform teams to simplify and standardize how developers build, deploy, and manage applications. It abstracts infrastructure complexity and provides developers with easy access to tools, environments, and services through a unified interface.

The primary goal of an IDP is to enhance developer experience and productivity by:
+ Enabling self-service for tasks such as service creation and deployment.
+ Promoting consistency and compliance through standard templates.
+ Integrating tools across the development lifecycle (CI/CD, monitoring, and documentation).

Backstage is an open source developer portal that was created by Spotify and is now part of the Cloud Native Computing Foundation (CNCF). It helps organizations build their own IDP by providing a centralized, extensible platform to manage software components, tools, and documentation. With Backstage, developers can:
+ Discover and manage all internal services through a software catalog.
+ Create new projects by using predefined templates through the scaffolder plugin.
+ Access integrated tooling such as CI/CD pipelines, Kubernetes dashboards, and monitoring systems from one location.
+ Maintain consistent, markdown-based documentation through TechDocs.

**FAQ**

**What's the difference between using this Backstage template versus deploying SageMaker Studio manually through the SageMaker console?**

The Backstage template provides several advantages over manual AWS console deployment, including standardized configurations that follow organizational best practices, automated IaC deployment using Seed-Farmer and the AWS CDK, built-in security policies and compliance measures, and integration with your organization's developer workflows through GitHub. The template also creates reproducible deployments with version control, which make it easier to replicate environments across different stages (development, staging, production) and maintain consistency across teams. Additionally, the template includes automated cleanup capabilities and integrates with your organization's identity management system through Backstage. Manual deployment through the console requires deep AWS expertise and doesn’t provide version control or the same level of standardization and governance that the template offers. For these reasons, console deployments are more suitable for one-off experiments than production ML environments.

**What is Seed-Farmer and why does this solution use it?**

Seed-Farmer is an AWS deployment orchestration tool that manages infrastructure modules by using the AWS CDK. This pattern uses Seed-Farmer because it provides standardized, reusable infrastructure components that are specifically designed for AI/ML workloads, handles complex dependencies between AWS services automatically, and ensures consistent deployments across different environments.

**Do I need to install the AWS CLI to use these templates?**

No, you don't have to install the AWS CLI on your computer. The templates run entirely through GitHub Actions in the cloud. You provide your AWS credentials (access key, secret key, and session token) through the Backstage interface, and the deployment happens automatically in the GitHub Actions environment.

**How long does it take to deploy a SageMaker Studio environment?**

A typical SageMaker Studio deployment takes 15-25 minutes to complete. This includes AWS CDK bootstrapping (2-3 minutes), Seed-Farmer toolchain setup (3-5 minutes), and resource creation (10-15 minutes). The exact time depends on your AWS Region and the complexity of your networking setup.

**Can I deploy multiple SageMaker environments in the same AWS account?**

Yes, you can. Each deployment creates resources with unique names based on the component name you provide in the template. However, be aware of AWS service quotas: Each account can have a limited number of SageMaker domains per Region, so [check your quotas](https://docs.aws.amazon.com/general/latest/gr/sagemaker.html#limits_sagemaker) before you create multiple environments.
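
For example, the following Boto3 sketch (an illustrative assumption, not part of the pattern's templates) lists the SageMaker domains that already exist in the current Region so that you can compare the count against your quota before deploying another environment.

```python
# A small Boto3 sketch (assumption for illustration; not part of the pattern's templates)
# that lists existing SageMaker domains before you create another one.
import boto3

sagemaker = boto3.client("sagemaker")

# List the Studio domains that already exist in this account and Region.
response = sagemaker.list_domains()
domains = response.get("Domains", [])

print(f"Existing SageMaker domains in this Region: {len(domains)}")
for domain in domains:
    print(f"  {domain['DomainName']} ({domain['Status']})")
```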

# Automate AWS infrastructure operations by using Amazon Bedrock
<a name="automate-aws-infrastructure-operations-by-using-amazon-bedrock"></a>

*Ishwar Chauthaiwale and Anand Bukkapatnam Tirumala, Amazon Web Services*

## Summary
<a name="automate-aws-infrastructure-operations-by-using-amazon-bedrock-summary"></a>

In cloud-native solutions, automating common infrastructure operations plays a vital role in maintaining efficient, secure, and cost-effective environments. Manually handling operations is time-consuming and prone to human error. Additionally, team members with varying levels of AWS expertise need to perform these tasks while ensuring compliance with security protocols. This pattern demonstrates how to use Amazon Bedrock to automate common AWS infrastructure operations through natural language processing (NLP).

This pattern can help organizations develop reusable, modular, and secure code for deploying generative AI-based infrastructure across multiple environments. Through its focus on infrastructure as code (IaC) and automation, it delivers key DevOps benefits, including version control, consistent deployments, reduced errors, faster provisioning, and improved collaboration.

The pattern implements a secure architecture that enables teams to manage operations related to key AWS services, including:
+ Amazon Simple Storage Service (Amazon S3) bucket versioning management
+ Amazon Relational Database Service (Amazon RDS) snapshot creation
+ Amazon Elastic Compute Cloud (Amazon EC2) instance management

The architecture employs Amazon Virtual Private Cloud (Amazon VPC) endpoints and private networking for secure communication, with AWS Lambda functions operating as task executors within private subnets. Amazon S3 provides data management, and the solution implements comprehensive AWS Identity and Access Management (IAM) roles and permissions to ensure proper access controls. This solution doesn’t include a chat history feature, and the chat isn’t stored.

## Prerequisites and limitations
<a name="automate-aws-infrastructure-operations-by-using-amazon-bedrock-prereqs"></a>
+ An active AWS account.
+ Proper access control measures in place to help secure and control access. Examples of access control include using AWS Systems Manager, foundation model access, an IAM role for deployment, service-based roles, disabling public access to Amazon S3 buckets, and setting up a dead-letter queue.
+ An AWS Key Management Service (AWS KMS) [customer managed key](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#customer-cmk).
+ AWS Command Line Interface (AWS CLI) version 2 or later, [installed](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html) and [configured](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html) on the deployment environment.
+ Terraform AWS Provider version 4 or later [installed](https://registry.terraform.io/providers/-/aws/latest/docs/guides/version-4-upgrade) and configured.
+ Terraform version 1.5.7 or later [installed](https://developer.hashicorp.com/terraform/install) and configured.
+ A review of [Define OpenAPI schemas for your agent's action groups in Amazon Bedrock](https://docs.aws.amazon.com/bedrock/latest/userguide/agents-api-schema.html) to help protect against unauthorized access and maintain data integrity.
+ [Access enabled](https://docs.aws.amazon.com/bedrock/latest/userguide/model-access-modify.html) in your AWS account for the required Amazon Titan Text Embeddings v2 and either the Claude 3.5 Sonnet or Claude 3 Haiku [foundation models](https://docs.aws.amazon.com/bedrock/latest/userguide/models-supported.html). To avoid deployment failure, confirm that your target deployment AWS Region [supports the required models](https://docs.aws.amazon.com/bedrock/latest/userguide/models-regions.html).
+ A configured virtual private cloud (VPC) that follows the [AWS Well-Architected Framework](https://docs.aws.amazon.com/wellarchitected/latest/framework/sec-design.html) best practices.
+ Completed review of the [Amazon Responsible AI policy](https://aws.amazon.com/ai/responsible-ai/policy/).

**Product versions**
+ Amazon Titan Text Embeddings v2
+ Anthropic Claude 3.5 Sonnet or Claude 3 Haiku
+ Terraform AWS Provider version 4 or later
+ Terraform version 1.5.7 or later

## Architecture
<a name="automate-aws-infrastructure-operations-by-using-amazon-bedrock-architecture"></a>

The following diagram shows the workflow and architecture components for this pattern.

![Workflow to automate common AWS infrastructure operations by using Amazon Bedrock.](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/924e503f-bfc5-4452-abdf-d72a58d4d36f/images/bd56ad29-b435-4543-8ee8-dc4e1d38df18.png)


The solution architecture consists of multiple layers that work together to process natural language requests and execute corresponding AWS operations:

1. The user makes operations requests through the Amazon Bedrock chat console.

1. The chatbot uses Amazon Bedrock Knowledge Bases for request processing. It implements the Amazon Titan Text Embeddings v2 model for natural language processing.

1. If the user prompt includes an action request, the Amazon Bedrock action group uses either the Anthropic Claude 3 Haiku or the Claude 3.5 Sonnet model (depending on your choice) for execution logic and defines operations through an OpenAPI schema.

1. The action group reaches the Amazon VPC [endpoints](https://docs.aws.amazon.com/whitepapers/latest/aws-privatelink/what-are-vpc-endpoints.html) using AWS PrivateLink for secure service communication.

1. The AWS Lambda function is reached through Amazon VPC endpoints for Amazon Bedrock services.

1. The Lambda functions are the primary execution engine. Based on the request, a Lambda function calls the appropriate API to perform actions on the AWS services. The Lambda function also handles operation routing and execution.

1. The AWS services receive the API request from the Lambda function and perform the corresponding operations.

1. The Lambda function constructs an output payload in a format that Amazon Bedrock understands.

1. This payload is sent to Amazon Bedrock by using PrivateLink for secure service communication. The large language model (LLM) that Amazon Bedrock uses interprets this payload and converts it into a human-readable format.

1. The output is then shown to the user on the Amazon Bedrock chat console.

The solution enables the following primary operations:
+ Amazon S3 – Enable bucket versioning for version control.
+ Amazon RDS – Create database snapshots for backup.
+ Amazon EC2 – List instances and control the start and stop of instances.
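
As an illustration of steps 6 through 9 and the first operation in the preceding list, the following is a simplified sketch of an action group Lambda handler that enables S3 bucket versioning and returns its result to the agent. The `apiPath` value, parameter name, and response shape are assumptions for illustration; the OpenAPI schema and Lambda code in the pattern's repository are the authoritative definitions.

```python
# Simplified sketch of an action-group Lambda handler like the one described in steps 6-9.
# It enables versioning on an S3 bucket and returns a payload in the general shape that
# Amazon Bedrock agents expect from OpenAPI-based action groups. The parameter name is
# hypothetical; the repository's OpenAPI schema defines the real contract.
import json
import boto3

s3 = boto3.client("s3")


def lambda_handler(event, context):
    # Parameters extracted from the agent request, as defined in the OpenAPI schema.
    params = {p["name"]: p["value"] for p in event.get("parameters", [])}
    bucket_name = params.get("bucketName")   # hypothetical parameter name

    # Perform the requested infrastructure operation.
    s3.put_bucket_versioning(
        Bucket=bucket_name,
        VersioningConfiguration={"Status": "Enabled"},
    )
    result = {"bucket": bucket_name, "versioning": "Enabled"}

    # Return the result so that Amazon Bedrock can convert it into a natural language answer.
    return {
        "messageVersion": "1.0",
        "response": {
            "actionGroup": event["actionGroup"],
            "apiPath": event["apiPath"],
            "httpMethod": event["httpMethod"],
            "httpStatusCode": 200,
            "responseBody": {"application/json": {"body": json.dumps(result)}},
        },
    }
```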

## Tools
<a name="automate-aws-infrastructure-operations-by-using-amazon-bedrock-tools"></a>

**AWS services**
+ [Amazon Bedrock](https://docs.aws.amazon.com/bedrock/latest/userguide/what-is-bedrock.html) is a fully managed service that makes high-performing foundation models (FMs) from leading AI startups and Amazon available for your use through a unified API.
+ [AWS Command Line Interface (AWS CLI)](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) is an open source tool that helps you interact with AWS services through commands in your command-line shell.
+ [Amazon Elastic Compute Cloud (Amazon EC2)](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html) provides scalable computing capacity in the AWS Cloud. You can launch as many virtual servers as you need and quickly scale them up or down.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a compute service that helps you run code without needing to provision or manage servers. It runs your code only when needed and scales automatically, so you pay only for the compute time that you use.
+ [Amazon OpenSearch Serverless](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/serverless-overview.html) is an on-demand serverless configuration for Amazon OpenSearch Service.
+ [AWS PrivateLink](https://docs.aws.amazon.com/vpc/latest/privatelink/what-is-privatelink.html) helps you create unidirectional, private connections from your virtual private clouds (VPCs) to services outside of the VPC.
+ [Amazon Relational Database Service (Amazon RDS)](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Welcome.html) helps you set up, operate, and scale a relational database in the AWS Cloud.
+ [Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) is a cloud-based object storage service that helps you store, protect, and retrieve any amount of data.
+ [AWS Systems Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/what-is-systems-manager.html) helps you manage your applications and infrastructure running in the AWS Cloud. It simplifies application and resource management, shortens the time to detect and resolve operational problems, and helps you manage your AWS resources securely at scale.
+ [Amazon Virtual Private Cloud (Amazon VPC)](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html) helps you launch AWS resources into a virtual network that you’ve defined. This virtual network resembles a traditional network that you’d operate in your own data center, with the benefits of using the scalable infrastructure of AWS.

**Other tools**
+ [Git](https://git-scm.com/docs) is an open source, distributed version control system.
+ [Terraform](https://www.terraform.io/) is an infrastructure as code (IaC) tool from HashiCorp that helps you create and manage cloud and on-premises resources.

**Code repository**

The code for this pattern is available in the GitHub [aws-samples/infra-ops-orchestrator](https://github.com/aws-samples/infra-ops-orchestrator) repository.

## Best practices
<a name="automate-aws-infrastructure-operations-by-using-amazon-bedrock-best-practices"></a>
+ Monitor Lambda execution logs regularly. For more information, see [Monitoring and troubleshooting Lambda functions](https://docs.aws.amazon.com/lambda/latest/dg/lambda-monitoring.html). For best practices, see [Best practices for working with AWS Lambda functions](https://docs.aws.amazon.com/lambda/latest/dg/best-practices.html).
+ Review security configurations periodically to ensure compliance with your organization's requirements. For more information, see [Security best practices](https://docs.aws.amazon.com/wellarchitected/latest/framework/sec-bp.html).
+ Follow the principle of least privilege and grant the minimum permissions required to perform a task. For more information, see [Grant least privilege](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html#grant-least-priv) and [Security best practices](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html) in the IAM documentation.

## Epics
<a name="automate-aws-infrastructure-operations-by-using-amazon-bedrock-epics"></a>

### Deploy the solution
<a name="deploy-the-solution"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clone the repository. | To clone the repository on your local machine, run the following command:<pre>git clone "git@github.com:aws-samples/infra-ops-orchestrator.git"<br />cd infra-ops-orchestrator</pre> | AWS DevOps, DevOps engineer | 
| Edit the environment variables. | Edit the `terraform.tfvars` file in the root directory of the cloned repository. Review the placeholders that are indicated by `[XXXXX]`, and update them according to your environment. | AWS DevOps, DevOps engineer | 
| Create the infrastructure. | To create the infrastructure, run the following commands:<pre>terraform init</pre><pre>terraform plan</pre>Review the execution plan carefully. If the planned changes are acceptable, then run the following command:<pre>terraform apply --auto-approve</pre> | AWS DevOps, DevOps engineer | 

### Access the solution
<a name="access-the-solution"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Access the solution. | After successful deployment, follow these steps to use the chat-based interface: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-aws-infrastructure-operations-by-using-amazon-bedrock.html) | AWS DevOps, DevOps engineer | 

### Clean up resources
<a name="clean-up-resources"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Delete the created resources. | To delete all infrastructure created by this pattern, run the following command:<pre>terraform plan -destroy </pre>Review the destruction plan carefully. If the planned deletions are acceptable, then run the following command:<pre>terraform destroy</pre>Note: This command will permanently delete all resources created by this pattern. The command will prompt for confirmation before removing any resources. | AWS DevOps, DevOps engineer | 

## Troubleshooting
<a name="automate-aws-infrastructure-operations-by-using-amazon-bedrock-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| Agent behavior | For information about this issue, see [Test and troubleshoot agent behavior](https://docs.aws.amazon.com/bedrock/latest/userguide/agents-test.html) in the Amazon Bedrock documentation. | 
| Lambda network issues | For information about these issues, see [Troubleshoot networking issues in Lambda ](https://docs.aws.amazon.com/lambda/latest/dg/troubleshooting-networking.html)in the Lambda documentation. | 
| IAM permissions | For information about these issues, see [Troubleshoot IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/troubleshoot.html) in the IAM documentation. | 

## Related resources
<a name="automate-aws-infrastructure-operations-by-using-amazon-bedrock-resources"></a>
+ [Creating a DB snapshot for a Single-AZ DB instance for Amazon RDS](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_CreateSnapshot.html)
+ [Define OpenAPI schemas for your agent's action groups in Amazon Bedrock](https://docs.aws.amazon.com/bedrock/latest/userguide/agents-api-schema.html)
+ [Enabling versioning on buckets](https://docs.aws.amazon.com/AmazonS3/latest/userguide/manage-versioning-examples.html)
+ [How Amazon Bedrock Agents works](https://docs.aws.amazon.com/bedrock/latest/userguide/agents-how.html)
+ [Retrieve data and generate AI responses with Amazon Bedrock Knowledge Bases](https://docs.aws.amazon.com/bedrock/latest/userguide/knowledge-base.html)
+ [Securely Access Services Over AWS PrivateLink](https://docs.aws.amazon.com/whitepapers/latest/aws-privatelink/aws-privatelink.html)
+ [Stop and start Amazon EC2 instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Stop_Start.html)
+ [Use action groups to define actions for your agent to perform](https://docs.aws.amazon.com/bedrock/latest/userguide/agents-action-create.html)

# Automate CloudFront updates when load balancer endpoints change by using Terraform
<a name="automate-cloudfront-updates-when-load-balancer-endpoints-change"></a>

*Tamilselvan P, Mohan Annam, and Naveen Suthar, Amazon Web Services*

## Summary
<a name="automate-cloudfront-updates-when-load-balancer-endpoints-change-summary"></a>

When users of Amazon Elastic Kubernetes Service (Amazon EKS) delete and reinstall their ingress configuration through Helm charts, a new Application Load Balancer (ALB) is created. This creates a problem because Amazon CloudFront continues to reference the old ALB’s DNS record. As a result, traffic that is sent to the stale endpoint can no longer reach the services behind it. (For more details about this problematic workflow, see [Additional information](#automate-cloudfront-updates-when-load-balancer-endpoints-change-additional).)

To solve this issue, this pattern describes using a custom AWS Lambda function that was developed with Python. This Lambda function automatically detects when a new ALB is created through Amazon EventBridge rules. Using the AWS SDK for Python (Boto3), the function then updates the CloudFront configuration with the new ALB’s DNS address, ensuring that traffic is routed to the correct endpoint.
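
The following Boto3 sketch outlines the core of that update logic. The distribution ID, the origin-matching rule, and the way the DNS name is read from the EventBridge event are assumptions for illustration; the Lambda function in the pattern's repository is the authoritative implementation.

```python
# Minimal Boto3 sketch of the update logic described above. The distribution ID,
# the origin-matching rule, and the event field used for the new DNS name are assumptions.
import boto3

cloudfront = boto3.client("cloudfront")

DISTRIBUTION_ID = "E1234567890ABC"  # placeholder


def lambda_handler(event, context):
    # The CreateLoadBalancer API call recorded by CloudTrail carries the new ALB DNS name
    # (field path assumed for illustration).
    new_dns_name = event["detail"]["responseElements"]["loadBalancers"][0]["dNSName"]

    # Read the current distribution configuration together with its ETag.
    config_response = cloudfront.get_distribution_config(Id=DISTRIBUTION_ID)
    config = config_response["DistributionConfig"]
    etag = config_response["ETag"]

    # Point every ALB origin at the new DNS name.
    for origin in config["Origins"]["Items"]:
        if ".elb." in origin["DomainName"]:
            origin["DomainName"] = new_dns_name

    # Submit the updated configuration; IfMatch must carry the ETag from the read.
    cloudfront.update_distribution(
        Id=DISTRIBUTION_ID,
        IfMatch=etag,
        DistributionConfig=config,
    )
    return {"updatedOrigin": new_dns_name}
```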

This automated solution maintains service continuity without additional routing or latency. The process helps to ensure that CloudFront always references the correct ALB DNS endpoint, even when the underlying infrastructure changes.

## Prerequisites and limitations
<a name="automate-cloudfront-updates-when-load-balancer-endpoints-change-prereqs"></a>

**Prerequisites**
+ An active AWS account.
+ A sample web application for testing and validation that is deployed on Amazon EKS by using Helm. For more information, see [Deploy applications with Helm on Amazon EKS](https://docs.aws.amazon.com/eks/latest/userguide/helm.html) in the Amazon EKS documentation.
+ CloudFront configured to route calls to an ALB that is created by a Helm [ingress controller](https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/). For more information, see [Install AWS Load Balancer Controller with Helm](https://docs.aws.amazon.com/eks/latest/userguide/lbc-helm.html) in the Amazon EKS documentation and [Restrict access to Application Load Balancers](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/restrict-access-to-load-balancer.html) in the CloudFront documentation.
+ Terraform [installed](https://developer.hashicorp.com/terraform/install?product_intent=terraform) and configured in a local workspace.

**Limitations**
+ Some AWS services aren’t available in all AWS Regions. For Region availability, see [AWS Services by Region](https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/). For specific endpoints, see [Service endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/aws-service-information.html), and choose the link for the service.

**Product versions**
+ Terraform version 1.0.0 or later
+ Terraform [AWS Provider](https://registry.terraform.io/providers/hashicorp/aws/latest/docs) version 4.20 or later

## Architecture
<a name="automate-cloudfront-updates-when-load-balancer-endpoints-change-architecture"></a>

The following diagram shows the workflow and architecture components for this pattern.

![\[Workflow to update CloudFront with new ALB DNS address detected through EventBridge rule.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/03c30b18-4dd7-4dd4-b960-5a5cc58cec63/images/28854767-0902-4398-80af-b19141dd94e4.png)


This solution performs the following steps:

1. The Amazon EKS ingress controller creates a new Application Load Balancer (ALB) whenever there is a Helm restart or deployment.

1. EventBridge looks for ALB creation events.

1. The ALB creation event triggers the Lambda function.

1. The Lambda function, which runs on Python 3.9, uses the AWS SDK for Python (Boto3) to call AWS services. The function updates the CloudFront origin entry with the latest load balancer DNS name, which it receives from the load balancer creation event. (A minimal sketch of such a handler follows this list.)

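The following is a minimal sketch of such a handler, written with the AWS SDK for Python (Boto3). It assumes that the EventBridge rule matches the Elastic Load Balancing `CreateLoadBalancer` API call recorded by AWS CloudTrail, that the event carries the new ALB's DNS name in `responseElements`, and that the CloudFront distribution ID is passed in through a hypothetical `DISTRIBUTION_ID` environment variable. For the complete implementation, see this pattern's code repository.

```python
import os

import boto3

cloudfront = boto3.client("cloudfront")

# Hypothetical environment variable; the actual implementation is in the
# pattern's code repository.
DISTRIBUTION_ID = os.environ["DISTRIBUTION_ID"]


def handler(event, context):
    # Assumed event shape: a CloudTrail-based EventBridge event for the
    # elasticloadbalancing CreateLoadBalancer API call.
    new_dns_name = event["detail"]["responseElements"]["loadBalancers"][0]["dNSName"]

    # Read the current distribution configuration and its ETag.
    response = cloudfront.get_distribution_config(Id=DISTRIBUTION_ID)
    config = response["DistributionConfig"]
    etag = response["ETag"]

    # Point the origin that references the old ALB at the new DNS name.
    # (A production handler would match only the intended ALB origin.)
    for origin in config["Origins"]["Items"]:
        if origin["DomainName"].endswith(".elb.amazonaws.com"):
            origin["DomainName"] = new_dns_name

    # Push the updated configuration back to CloudFront.
    cloudfront.update_distribution(
        Id=DISTRIBUTION_ID,
        IfMatch=etag,
        DistributionConfig=config,
    )
    return {"updated_origin": new_dns_name}
```
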
## Tools
<a name="automate-cloudfront-updates-when-load-balancer-endpoints-change-tools"></a>

**AWS services**
+ [Amazon CloudFront](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Introduction.html) speeds up distribution of your web content by delivering it through a worldwide network of data centers, which lowers latency and improves performance.
+ [Amazon Elastic Kubernetes Service (Amazon EKS)](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html) helps you run Kubernetes on AWS without needing to install or maintain your own Kubernetes control plane or nodes.
+ [Amazon EventBridge](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-what-is.html) is a serverless event bus service that helps you connect your applications with real-time data from a variety of sources. For example, AWS Lambda functions, HTTP invocation endpoints using API destinations, or event buses in other AWS accounts.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a compute service that helps you run code without needing to provision or manage servers. It runs your code only when needed and scales automatically, so you pay only for the compute time that you use.
+ [AWS SDK for Python (Boto3)](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/quickstart.html) is a software development kit that helps you integrate your Python application, library, or script with AWS services.

**Other tools**
+ [Python](https://www.python.org/) is a general-purpose computer programming language.
+ [Terraform](https://www.terraform.io/) is an infrastructure as code (IaC) tool from HashiCorp that helps you create and manage cloud and on-premises resources.

**Code repository**

The code for this pattern is available in the GitHub [aws-cloudfront-automation-terraform-samples](https://github.com/aws-samples/aws-cloudfront-automation-terraform-samples) repository.

## Epics
<a name="automate-cloudfront-updates-when-load-balancer-endpoints-change-epics"></a>

### Set up local workstation
<a name="set-up-local-workstation"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Set up and configure the Git CLI. | To install and configure the Git command line interface (CLI) in your local workstation, follow the [Getting Started – Installing Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) instructions in the Git documentation. | DevOps engineer | 
| Create the project folder and add the files. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-cloudfront-updates-when-load-balancer-endpoints-change.html) | DevOps engineer | 

### Provision the target architecture using the Terraform configuration
<a name="provision-the-target-architecture-using-the-terraform-configuration"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Deploy the solution. | To deploy resources in the target AWS account, use the following steps:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-cloudfront-updates-when-load-balancer-endpoints-change.html) | DevOps engineer | 

### Verify the deployment
<a name="verify-the-deployment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Validate the deployment. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-cloudfront-updates-when-load-balancer-endpoints-change.html) | DevOps engineer | 

### Clean up infrastructure
<a name="clean-up-infrastructure"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clean up the infrastructure. | To clean up the infrastructure that you created earlier, use the following steps:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-cloudfront-updates-when-load-balancer-endpoints-change.html) | DevOps engineer | 

## Troubleshooting
<a name="automate-cloudfront-updates-when-load-balancer-endpoints-change-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| Error validating provider credentials | When you run the Terraform `apply` or `destroy` commands from your local machine, you might encounter an error similar to the following:<pre>Error: configuring Terraform AWS Provider: error validating provider <br />credentials: error calling sts:GetCallerIdentity: operation error STS: <br />GetCallerIdentity, https response error StatusCode: 403, RequestID: <br />123456a9-fbc1-40ed-b8d8-513d0133ba7f, api error InvalidClientTokenId: <br />The security token included in the request is invalid.</pre>This error is caused by the expiration of the security token for the credentials used in your local machine’s configuration. To resolve the error, see [Set and view configuration settings](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html#cli-configure-files-methods) in the AWS Command Line Interface (AWS CLI) documentation. A quick way to confirm that your credentials are still valid is shown after this table. | 

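To confirm whether the credentials in your local configuration are still valid before you rerun Terraform, you can call the same `sts:GetCallerIdentity` operation yourself. The following is a minimal sketch that uses the AWS SDK for Python (Boto3); the helper name `credentials_are_valid` is illustrative.

```python
import boto3
from botocore.exceptions import BotoCoreError, ClientError


def credentials_are_valid() -> bool:
    """Return True if the current AWS credentials can call sts:GetCallerIdentity."""
    try:
        identity = boto3.client("sts").get_caller_identity()
        print(f"Credentials are valid for account {identity['Account']}")
        return True
    except (BotoCoreError, ClientError) as error:
        # An InvalidClientTokenId or ExpiredToken error means that the credentials
        # must be refreshed before you run terraform apply or destroy.
        print(f"Credential check failed: {error}")
        return False


if __name__ == "__main__":
    credentials_are_valid()
```
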
## Related resources
<a name="automate-cloudfront-updates-when-load-balancer-endpoints-change-resources"></a>

**AWS resources**
+ [Restrict access to Application Load Balancers](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/restrict-access-to-load-balancer.html)
+ [Route internet traffic with AWS Load Balancer Controller](https://docs.aws.amazon.com/eks/latest/userguide/aws-load-balancer-controller.html)

**Terraform documentation**
+ [AWS Provider](https://registry.terraform.io/providers/hashicorp/aws/latest/docs)
+ [Install Terraform](https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli)
+ [Remote State](https://developer.hashicorp.com/terraform/language/state/remote)

## Additional information
<a name="automate-cloudfront-updates-when-load-balancer-endpoints-change-additional"></a>

**Problematic workflow**

![\[Workflow that produces out-of-date ALB DNS entry in CloudFront.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/03c30b18-4dd7-4dd4-b960-5a5cc58cec63/images/bb3c2c93-c749-435d-9b1d-2bbf6f0cf085.png)


The diagram shows the following workflow:

1. When the user accesses the application, the call goes to CloudFront.

1. CloudFront routes the calls to the respective Application Load Balancer (ALB).

1. The ALB’s targets are the IP addresses of the application pods. From there, the ALB returns the expected results to the user.

However, this workflow demonstrates a problem. The application deployments happen through Helm charts. Whenever there is a deployment, or if someone restarts Helm, the respective ingress is re-created, so the load balancer controller re-creates the ALB. Each re-creation gives the ALB a different DNS name, which leaves CloudFront with a stale entry in its origin settings. Because of this stale entry, the application is unreachable, and users experience downtime.

**Alternative solution**

Another possible solution is to create an [external DNS](https://github.com/kubernetes-sigs/external-dns) record for the ALB and then point it to the Amazon Route 53 private hosted zone endpoint in CloudFront. However, this approach adds another hop in the application flow, which might increase application latency. The Lambda function solution in this pattern doesn’t disrupt the existing flow.

# Automate Amazon CodeGuru reviews for AWS CDK Python applications by using GitHub Actions
<a name="automate-amazon-codeguru-reviews-for-aws-cdk-python-applications"></a>

*Vanitha Dontireddy and Sarat Chandra Pothula, Amazon Web Services*

## Summary
<a name="automate-amazon-codeguru-reviews-for-aws-cdk-python-applications-summary"></a>

Note: As of November 7, 2025, you can't create new repository associations in Amazon CodeGuru Reviewer. To learn about services with capabilities similar to CodeGuru Reviewer, see [Amazon CodeGuru Reviewer availability change](https://docs.aws.amazon.com/codeguru/latest/reviewer-ug/codeguru-reviewer-availability-change.html) in the CodeGuru Reviewer documentation.

This pattern showcases the integration of Amazon CodeGuru automated code reviews for AWS Cloud Development Kit (AWS CDK) Python applications, orchestrated through GitHub Actions. The solution deploys a serverless architecture defined in AWS CDK Python. By automating expert code analysis within the development pipeline, this approach can do the following for AWS CDK Python projects:
+ Enhance code quality.
+ Streamline workflows.
+ Maximize the benefits of serverless computing.

## Prerequisites and limitations
<a name="automate-amazon-codeguru-reviews-for-aws-cdk-python-applications-prereqs"></a>

**Prerequisites**
+ An active AWS account.
+ AWS Command Line Interface (AWS CLI) version 2.9.11 or later, [installed](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html) and [configured](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html).
+ An active GitHub account and a GitHub repository that grants GitHub Actions read and write workflow permissions and permission to create pull requests (PRs), so that the PR workflow operates correctly.
+ An OpenID Connect (OIDC) role in GitHub Actions to deploy the solution in the AWS account. To create the role, use the [AWS CDK construct](https://github.com/aws-samples/github-actions-oidc-cdk-construct).

**Limitations**
+ Amazon CodeGuru Profiler [supports applications](https://docs.aws.amazon.com/codeguru/latest/profiler-ug/what-is-codeguru-profiler.html#what-is-language-support) written in all Java virtual machine (JVM) languages (such as Scala and Kotlin) and runtimes, and in Python 3.6 or later.
+ Amazon CodeGuru Reviewer [supports associations](https://docs.aws.amazon.com/codeguru/latest/reviewer-ug/working-with-repositories.html) with Java and Python code repositories only from the following source providers: AWS CodeCommit, Bitbucket, GitHub, GitHub Enterprise Cloud, and GitHub Enterprise Server. In addition, Amazon Simple Storage Service (Amazon S3) repositories are only supported through GitHub Actions.
+ There isn’t an automated way to print the findings during the continuous integration and continuous deployment (CI/CD) pipeline. Instead, this pattern uses GitHub Actions as an alternative method to handle and display the findings.
+ Some AWS services aren’t available in all AWS Regions. For Region availability, see [AWS services by Region](https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/). For specific endpoints, see [Service endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/aws-service-information.html), and choose the link for the service.

## Architecture
<a name="automate-amazon-codeguru-reviews-for-aws-cdk-python-applications-architecture"></a>

The following diagram shows the architecture for this solution.

![\[Workflow to integrate CodeGuru code review for AWS CDK Python applications using GitHub Actions.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/c5395e3e-ff2a-41cf-bd64-c73cc928b60b/images/18f880a2-9bc3-4d71-a598-bb83b68ee383.png)


As shown in the diagram, when a developer creates a pull request (PR) for review, GitHub Actions triggers the following steps:

1. IAM role assumption – The pipeline uses the IAM role that’s specified in GitHub Secrets to perform deployment tasks.

1. Code analysis
   + CodeGuru Reviewer analyzes the code stored in the Amazon S3 bucket. It identifies defects and provides recommendations for fixes and optimizations.
   + CodeGuru Security scans for policy violations and vulnerabilities.

1. Findings review
   + The pipeline prints a link to the findings dashboard in the console output.
   + If critical findings are detected, the pipeline fails immediately.
   + For high, normal, or low severity findings, the pipeline continues to the next step.

1. PR approval
   + A reviewer must manually approve the PR.
   + If the PR is denied, the pipeline fails and halts further deployment steps.

1. CDK deployment – Upon PR approval, the CDK deployment process begins. It sets up the following AWS services and resources:
   + CodeGuru Profiler
   + AWS Lambda function
   + Amazon Simple Queue Service (Amazon SQS) queue

1. Profiling data generation – To generate sufficient profiling data for CodeGuru Profiler, the pipeline invokes the Lambda function multiple times by periodically sending messages to the Amazon SQS queue. (A minimal sketch of this step follows this list.)

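The following is a minimal sketch of how that last step could drive traffic by sending periodic messages to the queue, so that the Lambda function is invoked enough times for CodeGuru Profiler to collect profiling data. The queue URL, message count, and interval shown here are illustrative assumptions; see this pattern's code repository for the actual workflow implementation.

```python
import json
import time

import boto3

sqs = boto3.client("sqs")

# Illustrative values; the real queue URL comes from the CDK deployment outputs.
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/example-profiling-queue"
MESSAGE_COUNT = 30
INTERVAL_SECONDS = 10


def generate_profiling_traffic():
    """Send messages to the SQS queue so that the Lambda consumer is invoked repeatedly."""
    for iteration in range(MESSAGE_COUNT):
        sqs.send_message(
            QueueUrl=QUEUE_URL,
            MessageBody=json.dumps({"iteration": iteration}),
        )
        time.sleep(INTERVAL_SECONDS)


if __name__ == "__main__":
    generate_profiling_traffic()
```
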
## Tools
<a name="automate-amazon-codeguru-reviews-for-aws-cdk-python-applications-tools"></a>

**AWS services**
+ [AWS Cloud Development Kit (AWS CDK)](https://docs.aws.amazon.com/cdk/latest/guide/home.html) is a software development framework that helps you define and provision AWS Cloud infrastructure in code.
+ [CDK Toolkit](https://docs.aws.amazon.com/cdk/latest/guide/cli.html) is the command line tool that helps you interact with your AWS CDK app.
+ [Amazon CodeGuru Profiler](https://docs.aws.amazon.com/codeguru/latest/profiler-ug/what-is-codeguru-profiler.html) collects runtime performance data from your live applications, and provides recommendations that can help you fine-tune your application performance.
+ [Amazon CodeGuru Reviewer](https://docs.aws.amazon.com/codeguru/latest/reviewer-ug/welcome.html) uses program analysis and machine learning to detect potential defects that are difficult for developers to find. Then, it offers suggestions for improving your Java and Python code.
+ Amazon CodeGuru Security is a static application security tool that uses machine learning to detect security policy violations and vulnerabilities. It provides suggestions for addressing security risks and generates metrics so you can track the security posture of your applications.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a compute service that helps you run code without needing to provision or manage servers. It runs your code only when needed and scales automatically, so you pay only for the compute time that you use.
+ [Amazon Simple Queue Service (Amazon SQS)](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/welcome.html) provides a secure, durable, and available hosted queue that helps you integrate and decouple distributed software systems and components.
+ [Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) is a cloud-based object storage service that helps you store, protect, and retrieve any amount of data.

**Other tools**
+ [GitHub Actions](https://docs.github.com/en/actions/writing-workflows/quickstart) is a continuous integration and continuous delivery (CI/CD) platform that’s tightly integrated with GitHub repositories. You can use GitHub Actions to automate your build, test, and deployment pipeline.

**Code repository**

The code for this pattern is available in the GitHub [amazon-codeguru-suite-cdk-python](https://github.com/aws-samples/amazon-codeguru-suite-cdk-python) repository.

## Best practices
<a name="automate-amazon-codeguru-reviews-for-aws-cdk-python-applications-best-practices"></a>
+ Adhere to the [Best practices for developing and deploying cloud infrastructure with the AWS CDK](https://docs.aws.amazon.com/cdk/v2/guide/best-practices.html).
+ Follow [Security best practices in IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html) when using AWS services in GitHub Actions workflows, including:
  + Do not store credentials in your repository code.
  + [Assume an IAM role](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#bp-workloads-use-roles) to receive temporary credentials, and use temporary credentials when possible.
  + [Grant least privilege](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#grant-least-privilege) to the IAM role used in GitHub Actions workflows. Grant only the permissions that are required to perform the actions in your GitHub Actions workflows. 
  + [Monitor the activity](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#remove-credentials) of the IAM role that’s used in GitHub Actions workflows.
  + Periodically rotate any long-term credentials that you use.

## Epics
<a name="automate-amazon-codeguru-reviews-for-aws-cdk-python-applications-epics"></a>

### Set up your environment
<a name="set-up-your-environment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Set up AWS credentials. | To export the variables that define the AWS account and AWS Region where you’re deploying the stack, run the following commands:<pre>export CDK_DEFAULT_ACCOUNT=<12-digit AWS account number></pre><pre>export CDK_DEFAULT_REGION=<AWS Region></pre>The AWS credentials for the AWS CDK are provided through environment variables. | AWS DevOps, DevOps engineer | 
| Clone the repository. | To clone the repository on your local machine, run the following command:<pre>git clone https://github.com/aws-samples/amazon-codeguru-suite-cdk-python.git</pre> | AWS DevOps, DevOps engineer | 
| Install the CDK Toolkit. | To confirm that the CDK Toolkit is installed and to check the version, run the following command: <pre>cdk --version</pre>If the CDK Toolkit version is earlier than 2.27.0, enter the following command to update it to version 2.27.0:<pre>npm install -g aws-cdk@2.27.0</pre>If the CDK Toolkit is *not* installed, run the following command to install it:<pre>npm install -g aws-cdk@2.27.0 --force</pre> | AWS DevOps, DevOps engineer | 
| Install the required dependencies. | To install the required project dependencies, run the following command:<pre>python -m pip install --upgrade pip<br />pip install -r requirements.txt</pre> | AWS DevOps, DevOps engineer | 
| Bootstrap the CDK environment. | To [bootstrap](https://docs.aws.amazon.com/cdk/v2/guide/bootstrapping.html) an AWS CDK environment, run the following commands:<pre>npm install<br />npm run cdk bootstrap "aws://${ACCOUNT_NUMBER}/${AWS_REGION}"</pre>After you successfully bootstrap the environment, the following output should be displayed:<pre>⏳  Bootstrapping environment aws://{account}/{region}...<br />✅  Environment aws://{account}/{region} bootstrapped</pre> | AWS DevOps, DevOps engineer | 

### Deploy the CDK app
<a name="deploy-the-cdk-app"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Synthesize the AWS CDK app. | To synthesize an AWS CDK app, run the following command:<pre>cdk synth</pre>For more information about this command, see [cdk synthesize](https://docs.aws.amazon.com/cdk/v2/guide/ref-cli-cmd-synth.html) in the AWS CDK documentation. | AWS DevOps, DevOps engineer | 
| Deploy the resources. | To deploy the resources, run the following command:<pre>cdk deploy --require-approval never</pre>The `--require-approval never` flag means that the CDK will approve and execute all changes automatically. This includes changes that the CDK would normally flag as needing manual review (such as IAM policy changes or removal of resources). Make sure that your CDK code and CI/CD pipeline are well-tested and secure before you use the `--require-approval never` flag in production environments. | AWS DevOps, DevOps engineer | 

### Create GitHub secrets and personal access token
<a name="create-github-secrets-and-personal-access-token"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the required secrets in GitHub. | To allow GitHub Actions workflows to access AWS resources securely without exposing sensitive information in your repository's code, create secrets. To create the secrets in GitHub for `ROLE_TO_ASSUME`, `CodeGuruReviewArtifactBucketName`, and `AWS_ACCOUNT_ID`, follow the instructions in [Creating secrets for a repository](https://docs.github.com/en/actions/security-for-github-actions/security-guides/using-secrets-in-github-actions#creating-secrets-for-a-repository) in the GitHub Actions documentation. Following is more information about the variables:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-amazon-codeguru-reviews-for-aws-cdk-python-applications.html) | AWS DevOps, DevOps engineer | 
| Create a GitHub personal access token. | To set up a secure way for your GitHub Actions workflows to authenticate and interact with GitHub, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-amazon-codeguru-reviews-for-aws-cdk-python-applications.html) | AWS DevOps, DevOps engineer | 

### Clean up
<a name="clean-up"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clean up resources. | To clean up your AWS CDK Python app, run the following command:<pre>cdk destroy --all</pre> | DevOps engineer | 

## Troubleshooting
<a name="automate-amazon-codeguru-reviews-for-aws-cdk-python-applications-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| Findings can't be printed directly in the pipeline. | There isn’t an automated way to print the findings during the CI/CD pipeline run. Instead, this pattern uses GitHub Actions to handle the findings and prints a link to the findings dashboard in the console output. | 

## Related resources
<a name="automate-amazon-codeguru-reviews-for-aws-cdk-python-applications-resources"></a>

**AWS resources**
+ [AWS Cloud Development Kit](https://aws.amazon.com/cdk/)
+ [Amazon CodeGuru Documentation](https://docs.aws.amazon.com/codeguru/)
+ [Amazon S3](https://aws.amazon.com/s3/)
+ [AWS Identity and Access Management](https://aws.amazon.com/iam/)
+ [Amazon Simple Queue Service](https://aws.amazon.com/sqs/)
+ [What is AWS Lambda?](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html)

**GitHub documentation**
+ [Configuring OpenID Connect in Amazon Web Services](https://docs.github.com/en/actions/security-for-github-actions/security-hardening-your-deployments/configuring-openid-connect-in-amazon-web-services)
+ [GitHub Actions](https://github.com/features/actions)
+ [Reusing workflows](https://docs.github.com/en/actions/using-workflows/reusing-workflows)
+ [Triggering a workflow](https://docs.github.com/en/actions/using-workflows/triggering-a-workflow) 

# Automate AWS Supply Chain data lakes deployment in a multi-repository setup
<a name="automate-the-deployment-of-aws-supply-chain-data-lakes"></a>

*Keshav Ganesh, Amazon Web Services*

## Summary
<a name="automate-the-deployment-of-aws-supply-chain-data-lakes-summary"></a>

This pattern provides an automated approach for deploying and managing AWS Supply Chain data lakes using a multi-repository continuous integration and continuous deployment (CI/CD) pipeline. It demonstrates two deployment methods: automated deployment using GitHub Actions workflows, or manual deployment using Terraform directly. Both approaches use Terraform for infrastructure as code (IaC), with the automated method adding GitHub Actions and JFrog Artifactory for enhanced CI/CD capabilities.

The solution leverages AWS Supply Chain, AWS Lambda, and Amazon Simple Storage Service (Amazon S3) to establish the data lake infrastructure, while using either deployment method to automate configuration and resource creation. This automation eliminates manual configuration steps and ensures consistent deployments across environments. In addition, AWS Supply Chain eliminates the need for deep expertise in extract, transform, and load (ETL) and can provide insights and analytics powered by Amazon Quick Sight.

By implementing this pattern, organizations can reduce deployment time, maintain infrastructure as code, and manage supply chain data lakes through a version-controlled, automated process. The multi-repository approach provides fine-grained access control and supports independent deployment of different components. Teams can choose the deployment method that best fits their existing tools and processes.

## Prerequisites and limitations
<a name="automate-the-deployment-of-aws-supply-chain-data-lakes-prereqs"></a>

**Prerequisites**

Ensure the following are installed on your local machine:
+ [AWS Command Line Interface (AWS CLI)](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) version 2
+ [Git](https://docs.github.com/en/get-started/git-basics/set-up-git)
+ [Python](https://www.python.org/downloads/) v3.13
+ [Terraform](https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli) v1.12 or later

Ensure the following are in place before deployment:
+ An active AWS account.
+ A [virtual private cloud (VPC)](https://docs.aws.amazon.com/vpc/latest/userguide/create-vpc.html) with two [private subnets](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-example-private-subnets-nat.html) in your AWS account in the AWS Region of your choice.
+ Sufficient permissions to the following services for the AWS Identity and Access Management (IAM) role that is used for deployment:
  + AWS Supply Chain – Full access is preferred for deploying its components, such as datasets and integration flows, and for accessing the service from the AWS Management Console.
  + Amazon CloudWatch Logs – For creating and managing CloudWatch log groups.
  + Amazon Elastic Compute Cloud (Amazon EC2) – For Amazon EC2 security groups and Amazon Virtual Private Cloud (Amazon VPC) endpoints.
  + Amazon EventBridge – For use by AWS Supply Chain.
  + IAM – For creating AWS Lambda service roles.
  + AWS Key Management Service (AWS KMS) – For access to the AWS KMS keys used for the Amazon S3 artifacts bucket and the Amazon S3 AWS Supply Chain staging bucket.
  + AWS Lambda – For creating the Lambda functions that deploy the AWS Supply Chain components.
  + Amazon S3 – For access to the Amazon S3 artifacts bucket, server access logging bucket, and AWS Supply Chain staging bucket. If you’re using manual deployment, permissions for the Amazon S3 Terraform artifacts bucket are also required.
  + Amazon VPC – For creating and managing a VPC.

If you prefer to use GitHub Actions workflows for deployment, do the following:
+ Set up [OpenID Connect (OIDC)](https://docs.github.com/en/actions/how-tos/secure-your-work/security-harden-deployments/oidc-in-aws#configuring-the-role-and-trust-policy) for the IAM role with the permissions mentioned earlier.
+ Create an IAM role with similar permissions to access the AWS Management Console. For more information, see [Create a role to give permissions to an IAM user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user.html) in the IAM documentation.

If you prefer to do a manual deployment, do the following:
+ Create an IAM user to assume the IAM role with the permissions mentioned earlier. For more information, see [Create a role to give permissions to an IAM user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user.html) in the IAM documentation.
+ [Assume the role](https://docs.aws.amazon.com/cli/v1/userguide/cli-configure-role.html) in your local terminal.

If you prefer to use GitHub Actions workflows for deployment, set up the following:
+ A [JFrog Artifactory account](https://jfrog.com/artifactory/) to get the host name, login username, and login access token.
+ A [JFrog project key and repository](https://docs.jfrog.com/projects/docs/create-a-project) for storing artifacts.

**Limitations**
+ The AWS Supply Chain instance doesn’t support complex data transformation techniques.
+ AWS Supply Chain is most suited for supply chain domains because it provides built-in analytics and insights. For any other domain, AWS Supply Chain can be used as a data store as part of the data lake architecture.
+ Lambda functions used in this solution might need to be enhanced to handle API retries and memory management in a production scale deployment.
+ Some AWS services aren’t available in all AWS Regions. For Region availability, see [AWS Services by Region](https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/). For specific endpoints, see [Service endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/aws-service-information.html), and choose the link for the service.

## Architecture
<a name="automate-the-deployment-of-aws-supply-chain-data-lakes-architecture"></a>

You can deploy this solution either by using automated GitHub Actions workflows or manually using Terraform.

**Automated deployment with GitHub Actions**

The following diagram shows the automated deployment option that uses GitHub Actions workflows. JFrog Artifactory is used for artifacts management. It stores resource information and outputs for use in a multi-repository deployment.

![\[Automated deployment option that uses GitHub Actions workflows and JFrog.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/2f0b78b0-a174-4703-b533-d66b3fb005e0/images/d454a5c5-ed51-421c-a87f-ff74cfcb30be.png)


**Manual deployment with Terraform**

The following diagram shows the manual deployment option through Terraform. Instead of JFrog Artifactory, Amazon S3 is used for artifacts management.

![\[Manual deployment option using Terraform and Amazon S3.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/2f0b78b0-a174-4703-b533-d66b3fb005e0/images/1130e728-44d5-4ae7-9586-1e497f54352a.png)


**Deployment workflow**

The diagrams show the following workflow:

1. Deploy AWS Supply Chain service datasets infrastructure and databases using one of the following deployment methods:
   + **Automated deployment** – Uses GitHub Actions workflows to orchestrate all deployment steps and uses JFrog Artifactory for artifacts management.
   + **Manual deployment** – Executes Terraform commands directly for each deployment step and uses Amazon S3 for artifacts management.

1. Create the supporting AWS resources that are required for AWS Supply Chain service operation:
   + Amazon VPC endpoints and security groups
   + AWS KMS keys
   + CloudWatch Logs log groups

1. Create and deploy the following infrastructure resources:
   + Lambda functions that manage (create, update, and delete) the AWS Supply Chain service instance, namespaces, and datasets.
   + An AWS Supply Chain staging Amazon S3 bucket for data ingestion.

1. Deploy the Lambda function that manages integration flows between the staging bucket and AWS Supply Chain datasets. After deployment is complete, the remaining workflow steps manage data ingestion and analysis.

1. Configure source data ingestion to the AWS Supply Chain staging Amazon S3 bucket. (A minimal upload sketch follows this list.)

1. After data is added to the AWS Supply Chain staging Amazon S3 bucket, the service automatically triggers the integration flow to the AWS Supply Chain datasets.

1. AWS Supply Chain integrates with Amazon Quick Sight to produce analytics dashboards based on the ingested data.

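As a minimal illustration of the data ingestion step, the following sketch uploads a CSV file to the staging bucket with the AWS SDK for Python (Boto3). The bucket name, local file name, and object key are illustrative assumptions; the expected prefixes and file names are defined by the datasets that this pattern's repository creates.

```python
import boto3

s3 = boto3.client("s3")

# Illustrative names; use the staging bucket and key layout created by the deployment.
STAGING_BUCKET = "example-asc-staging-bucket"
LOCAL_FILE = "outbound_order_line.csv"
OBJECT_KEY = "outbound_order_line/outbound_order_line.csv"


def ingest_sample_file():
    """Upload a sample CSV file to the AWS Supply Chain staging bucket.

    Adding the object triggers the deployed integration flow, which loads the
    data into the corresponding AWS Supply Chain dataset.
    """
    s3.upload_file(LOCAL_FILE, STAGING_BUCKET, OBJECT_KEY)
    print(f"Uploaded {LOCAL_FILE} to s3://{STAGING_BUCKET}/{OBJECT_KEY}")


if __name__ == "__main__":
    ingest_sample_file()
```
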
## Tools
<a name="automate-the-deployment-of-aws-supply-chain-data-lakes-tools"></a>

**AWS services**
+ [Amazon CloudWatch Logs](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html) helps you centralize the logs from all your systems, applications, and AWS services so you can monitor them and archive them securely.
+ [AWS Command Line Interface (AWS CLI)](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) is an open source tool that helps you interact with AWS services through commands in your command-line shell.
+ [Amazon Elastic Compute Cloud (Amazon EC2)](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html) provides scalable computing capacity in the AWS Cloud. You can launch as many virtual servers as you need and quickly scale them up or down.
+ [Amazon EventBridge](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-what-is.html) is a serverless event bus service that helps you connect your applications with real-time data from a variety of sources. For example, AWS Lambda functions, HTTP invocation endpoints using API destinations, or event buses in other AWS accounts.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.
+ [AWS IAM Identity Center](https://docs.aws.amazon.com/singlesignon/latest/userguide/what-is.html) helps you centrally manage single sign-on (SSO) access to all of your AWS accounts and cloud applications.
+ [AWS Key Management Service (AWS KMS)](https://docs.aws.amazon.com/kms/latest/developerguide/overview.html) helps you create and control cryptographic keys to help protect your data.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a compute service that helps you run code without needing to provision or manage servers. It runs your code only when needed and scales automatically, so you pay only for the compute time that you use.
+ [Amazon Q](https://docs.aws.amazon.com/aws-supply-chain/latest/userguide/qinasc.html) in AWS Supply Chain is an interactive generative AI assistant that helps you operate your supply chain more efficiently by analyzing the data in your AWS Supply Chain data lake.
+ [Amazon Quick Sight](https://docs.aws.amazon.com/quicksight/latest/user/welcome.html) is a cloud-scale business intelligence (BI) service that helps you visualize, analyze, and report your data in a single dashboard.
+ [Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) is a cloud-based object storage service that helps you store, protect, and retrieve any amount of data.
+ [AWS Supply Chain](https://docs.aws.amazon.com/aws-supply-chain/latest/adminguide/getting-started.html) is a cloud-based managed application that organizations can use as a data store for supply chain domains and to generate insights and perform analysis on the ingested data.
+ [Amazon Virtual Private Cloud (Amazon VPC)](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html) helps you launch AWS resources into a virtual network that you’ve defined. This virtual network resembles a traditional network that you’d operate in your own data center, with the benefits of using the scalable infrastructure of AWS. An [Amazon VPC endpoint](https://docs.aws.amazon.com/vpc/latest/privatelink/create-interface-endpoint.html) is a virtual device that helps you privately connect your VPC to supported AWS services without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection.

**Other tools**
+ [GitHub Actions](https://docs.github.com/en/actions) is a continuous integration and continuous delivery (CI/CD) platform that’s tightly integrated with GitHub repositories. You can use GitHub Actions to automate your build, test, and deployment pipeline.
+ [HashiCorp Terraform](https://www.terraform.io/) is an infrastructure as code (IaC) tool that helps you create and manage cloud and on-premises resources.
+ [JFrog Artifactory](https://jfrog.com/help/r/jfrog-artifactory-documentation/jfrog-artifactory) provides end-to-end automation and management of binaries and artifacts through the application delivery process.
+ [Python](https://www.python.org/) is a general-purpose computer programming language. This pattern uses Python for the Lambda function code that interacts with AWS Supply Chain.

## Best practices
<a name="automate-the-deployment-of-aws-supply-chain-data-lakes-best-practices"></a>
+ Maintain the highest possible security when implementing this pattern. As stated in [Prerequisites](#automate-the-deployment-of-aws-supply-chain-data-lakes-prereqs), make sure a [virtual private cloud (VPC)](https://docs.aws.amazon.com/vpc/latest/userguide/create-vpc.html) with two [private subnets](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-example-private-subnets-nat.html) is in your AWS account in the AWS Region of your choice.
+ Use AWS KMS [customer managed keys](https://docs.aws.amazon.com/kms/latest/cryptographic-details/basic-concepts.html) wherever possible, and grant limited access permissions to them.
+ To set up IAM roles with the least access required for ingesting data for this pattern, see [Secure Data Ingestion from Source Systems to Amazon S3](https://github.com/aws-samples/sample-automate-aws-supply-chain-deployment/tree/main?tab=readme-ov-file#secure-data-ingestion-from-source-systems-to-amazon-s3) in this pattern’s repository.

## Epics
<a name="automate-the-deployment-of-aws-supply-chain-data-lakes-epics"></a>

### (Both options) Set up local workstation
<a name="both-options-set-up-local-workstation"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clone the repository. | To clone this pattern’s repository, run the following commands on your local workstation:<pre>git clone https://github.com/aws-samples/sample-automate-aws-supply-chain-deployment.git<br />cd ASC-Deployment</pre> | AWS DevOps | 
| (Automated option) Verify prerequisites for deployment. | Make sure that the [Prerequisites](#automate-the-deployment-of-aws-supply-chain-data-lakes-prereqs) are complete for the automated deployment. | App owner | 
| (Manual option) Prepare for deployment of AWS Supply Chain datasets. | To go to the `terraform-deployment` directory of `ASC-Datasets`, run the following command:<pre>cd ASC-Datasets/terraform-deployment</pre>To assume the role ARN that was created in the [Prerequisites](#automate-the-deployment-of-aws-supply-chain-data-lakes-prereqs), run the following command:<pre>aws sts assume-role --role-arn <enter AWS user role ARN> --role-session-name <your-session-name></pre>To configure and export the environment variables, run the following commands:<pre># Export Environment variables<br />export REGION=<Enter deployment region><br />export REPO_NAME=<Enter Current ASC Datasets dir name><br />export PROJECT_NAME="asc-deployment-poc"<br />export ACCOUNT_ID=<Enter deployment Account ID><br />export ENVIRONMENT="dev"<br />export LAMBDA_LAYER_TEMP_DIR_TERRAFORM="layerOutput"<br />export LAMBDA_FUNCTION_TEMP_DIR_TERRAFORM="lambdaOutput"<br />export AWS_USER_ROLE=<Enter user role ARN for AWS Console access and deployment><br />export S3_TERRAFORM_ARTIFACTS_BUCKET_NAME="$PROJECT_NAME-$ACCOUNT_ID-$REGION-terraform-artifacts-$ENVIRONMENT"</pre> | AWS DevOps | 
| (Manual option) Prepare for managing AWS Supply Chain integration flows in deployment. | To go to the `terraform-deployment` directory of `ASC-Integration-Flows`, run the following command:<pre>cd ASC-Integration-Flows/terraform-deployment</pre>To assume the role ARN that was created earlier, run the following command:<pre>aws sts assume-role --role-arn <enter AWS user role ARN> --role-session-name <your-session-name></pre>To configure and export the environment variables, run the following commands:<pre># Export Environment variables<br />export REGION=<Enter deployment region><br />export REPO_NAME=<Enter Current ASC Integration Flows dir name><br />export ASC_DATASET_VARS_REPO=<Enter Current ASC Datasets dir name>  #Must be the same directory name used for ASC Datasets deployment<br />export PROJECT_NAME="asc-deployment-poc"<br />export ACCOUNT_ID=<Enter deployment Account ID><br />export ENVIRONMENT="dev"<br />export LAMBDA_LAYER_TEMP_DIR_TERRAFORM="layerOutput"<br />export LAMBDA_FUNCTION_TEMP_DIR_TERRAFORM="lambdaOutput"<br />export S3_TERRAFORM_ARTIFACTS_BUCKET_NAME="$PROJECT_NAME-$ACCOUNT_ID-$REGION-terraform-artifacts-$ENVIRONMENT"</pre> | App owner | 

### (Automated option) Deploy AWS Supply Chain datasets using GitHub Actions workflows
<a name="automated-option-deploy-supplychain-datasets-using-github-actions-workflows"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Copy the `ASC-Datasets` directory. | To copy the `ASC-Datasets` directory to a new location, use the following steps:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-the-deployment-of-aws-supply-chain-data-lakes.html) | AWS DevOps | 
| Set up the `ASC-Datasets` directory. | To set up `ASC-Datasets` as a standalone repository in your organization, run the following commands:<pre>git init<br />git add .<br />git commit -m "Initial commit: ASC-Datasets standalone repository"<br />git remote add origin <INSERT_ASC_DATASETS_GITHUB_URL><br />git branch -M dev</pre> | AWS DevOps | 
| Configure the branch name in the .github workflow file. | Set up the branch name in the [deployment](https://github.com/aws-samples/sample-automate-aws-supply-chain-deployment/blob/main/ASC-Datasets/.github/workflows/asc-datasets.yml) workflow file as shown in the following example:<pre>   on:<br />     workflow_dispatch:<br />     push:<br />       branches:<br />         - dev     #Change to any other branch preferred for deployment</pre> | App owner | 
| Set up GitHub environments and configure environment values. | To set up GitHub environments in your GitHub organization, use the instructions in [Setup GitHub environments](https://github.com/aws-samples/sample-automate-aws-supply-chain-deployment/tree/main/ASC-Datasets#setup-github-environments) in this pattern’s repository.To configure [environment values](https://github.com/aws-samples/sample-automate-aws-supply-chain-deployment/tree/main/ASC-Datasets#setup-environment-values-in-the-workflow-files) in the workflow files, use the instructions in [Setup environment values in the workflow files](https://github.com/aws-samples/sample-automate-aws-supply-chain-deployment/tree/main/ASC-Datasets#setup-environment-values-in-the-workflow-files) in this pattern’s repository. | App owner | 
| Trigger the workflow. | To push your changes to your GitHub organization and trigger the deployment workflow, run the following command:<pre>git push -u origin dev</pre> | AWS DevOps | 

### (Automated option) Deploy AWS Supply Chain integration flows using GitHub Actions workflows
<a name="automated-option-deploy-supplychain-integration-flows-using-github-actions-workflows"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Copy the `ASC-Integration-Flows` directory. | To copy the `ASC-Integration-Flows` directory to a new location, use the following steps:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-the-deployment-of-aws-supply-chain-data-lakes.html) | AWS DevOps | 
| Set up the `ASC-Integration-Flows` directory. | To set up the `ASC-Integration-Flows` directory as a standalone repository in your organization, run the following commands:<pre>git init<br />git add .<br />git commit -m "Initial commit: ASC-Integration-Flows standalone repository"<br />git remote add origin <INSERT_ASC_Integration_Flows_GITHUB_URL><br />git branch -M dev</pre> | AWS DevOps | 
| Configure the branch name in the .github workflow file. | Set up the branch name in the [deployment](https://github.com/aws-samples/sample-automate-aws-supply-chain-deployment/blob/main/ASC-Integration-Flows/.github/workflows/asc-integration-flows.yml) workflow file as shown in the following example:<pre>   on:<br />     workflow_dispatch:<br />     push:<br />       branches:<br />         - dev     #Change to any other branch preferred for deployment</pre> | App owner | 
| Set up GitHub environments and configure environment values. | To set up GitHub environments in your GitHub organization, use the instructions in [Setup GitHub environments](https://github.com/aws-samples/sample-automate-aws-supply-chain-deployment/tree/main/ASC-Integration-Flows#setup-github-environments) in this pattern’s repository.To configure [environment values](https://github.com/aws-samples/sample-automate-aws-supply-chain-deployment/tree/main/ASC-Integration-Flows#setup-github-environments) in the workflow files, use the instructions in [Setup environment values in the workflow files](https://github.com/aws-samples/sample-automate-aws-supply-chain-deployment/tree/main/ASC-Integration-Flows#setup-environment-values-in-the-workflow-files) in this pattern’s repository. | App owner | 
| Trigger the workflow. | To push your changes to your GitHub organization and trigger the deployment workflow, run the following command:<pre>git push -u origin dev</pre> | AWS DevOps | 

### (Manual option) Deploy AWS Supply Chain datasets using Terraform
<a name="manual-option-deploy-supplychain-datasets-using-terraform"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Navigate to the `terraform-deployment` directory. | To go to the `terraform-deployment` directory of `ASC-Datasets`, run the following command:<pre>cd ASC-Datasets/terraform-deployment</pre> | AWS DevOps | 
| Set up the Terraform state Amazon S3 bucket. | To set up the Terraform state Amazon S3 bucket, use the following script:<pre># Setup terraform bucket<br />chmod +x ../scripts/setup-terraform.sh<br />../scripts/setup-terraform.sh</pre> | AWS DevOps | 
| Set up the Terraform artifacts Amazon S3 bucket. | To set up the Terraform artifacts Amazon S3 bucket, use the following script:<pre># Setup terraform artifacts bucket<br />chmod +x ../scripts/setup-terraform-artifacts-bucket.sh<br />../scripts/setup-terraform-artifacts-bucket.sh</pre> | AWS DevOps | 
| Set up the Terraform backend and providers configuration. | To set up the Terraform backend and providers configuration, use the following script:<pre># Setup terraform backend and providers config if they don't exist<br />chmod +x ../scripts/generate-terraform-config.sh<br />../scripts/generate-terraform-config.sh</pre> | AWS DevOps | 
| Generate a deployment plan. | To generate a deployment plan, run the following commands:<pre># Run terraform init and validate<br />terraform init<br />terraform validate<br /><br /># Run terraform plan<br />terraform plan \<br />-var-file="tfInputs/$ENVIRONMENT.tfvars" \<br />-var="project_name=$PROJECT_NAME" \<br />-var="environment=$ENVIRONMENT" \<br />-var="user_role=$AWS_USER_ROLE" \<br />-var="lambda_temp_dir=$LAMBDA_FUNCTION_TEMP_DIR_TERRAFORM" \<br />-var="layer_temp_dir=$LAMBDA_LAYER_TEMP_DIR_TERRAFORM" \<br />-parallelism=40 \<br />-out='tfplan.out'</pre> | AWS DevOps | 
| Deploy the configurations. | To deploy the configurations, run the following command:<pre># Run terraform apply<br />terraform apply tfplan.out</pre> | AWS DevOps | 
| Update other configurations and store outputs. | To update AWS KMS key policies and store the applied configurations outputs in the Terraform artifacts Amazon S3 bucket, run the following commands:<pre># Update AWS Supply Chain KMS Key policy with the service's requirements<br />chmod +x ../scripts/update-asc-kms-policy.sh<br />../scripts/update-asc-kms-policy.sh<br /></pre><pre># Update AWS KMS Keys' policy with IAM roles<br />chmod +x ../scripts/update-kms-policy.sh<br />../scripts/update-kms-policy.sh<br /></pre><pre># Create terraform outputs file to be used as input variables<br />terraform output -json > raw_output.json<br />jq -r 'to_entries | map(<br />  if .value.type == "string" then<br />      "\(.key) = \"\(.value.value)\""<br />  else<br />      "\(.key) = \(.value.value | tojson)"<br />  end<br />) | .[]' raw_output.json > $REPO_NAME-outputs.tfvars<br /></pre><pre># Upload reformed outputs file to Amazon S3 terraform artifacts bucket (For retrieval from other repositories)<br />aws s3 cp $REPO_NAME-outputs.tfvars s3://$S3_TERRAFORM_ARTIFACTS_BUCKET_NAME/$REPO_NAME-outputs.tfvars<br />rm -f raw_output.json<br />rm -f $REPO_NAME-outputs.tfvars<br /></pre> | AWS DevOps | 

### (Manual option) Deploy AWS Supply Chain service integration flows using Terraform
<a name="manual-option-deploy-supplychain-service-integration-flows-using-terraform"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Navigate to the `terraform-deployment` directory. | To go to the `terraform-deployment` directory of `ASC-Integration-Flows`, run the following command:<pre>cd ASC-Integration-Flows/terraform-deployment</pre> | AWS DevOps | 
| Set up the Terraform backend and providers configuration. | To set up the Terraform backend and provider configurations, use the following script:<pre># Setup terraform backend and providers config if they don't exist<br />chmod +x ../scripts/generate-terraform-config.sh<br />../scripts/generate-terraform-config.sh</pre> | AWS DevOps | 
| Generate a deployment plan. | To generate a deployment plan, run the following commands. These commands initialize your Terraform environment, merge configuration variables from `ASC-Datasets` with your existing Terraform configurations, and generate a deployment plan.<pre># Run terraform init and validate<br />terraform init<br />terraform validate<br /></pre><pre># Download and merge ASC DATASET tfvars<br />chmod +x ../scripts/download-vars-through-s3.sh<br />../scripts/download-vars-through-s3.sh $ASC_DATASET_VARS_REPO<br /></pre><pre># Run terraform plan<br />terraform plan \<br />-var-file="tfInputs/$ENVIRONMENT.tfvars" \<br />-var="project_name=$PROJECT_NAME" \<br />-var="environment=$ENVIRONMENT" \<br />-var="lambda_temp_dir=$LAMBDA_FUNCTION_TEMP_DIR_TERRAFORM" \<br />-var="layer_temp_dir=$LAMBDA_LAYER_TEMP_DIR_TERRAFORM" \<br />-parallelism=40 \<br />-out='tfplan.out'</pre> | AWS DevOps | 
| Deploy the configurations. | To deploy the configurations, run the following command:<pre># Run terraform apply<br />terraform apply tfplan.out</pre> | AWS DevOps | 
| Update other configurations. | To update AWS KMS key policies and store the applied configurations outputs in the Terraform artifacts Amazon S3 bucket, run the following commands:<pre># Update AWS KMS Keys' policy with IAM roles<br />chmod +x ../scripts/update-kms-policy-through-s3.sh<br />../scripts/update-kms-policy-through-s3.sh $ASC_DATASET_VARS_REPO<br /></pre><pre># Create terraform outputs file to be used as input variables<br />terraform output -json > raw_output.json<br />jq -r 'to_entries | map(<br />  if .value.type == "string" then<br />      "\(.key) = \"\(.value.value)\""<br />  else<br />      "\(.key) = \(.value.value | tojson)"<br />  end<br />) | .[]' raw_output.json > $REPO_NAME-outputs.tfvars<br /></pre><pre># Upload reformed outputs file to Amazon S3 terraform artifacts bucket (For retrieval from other repositories)<br />aws s3 cp $REPO_NAME-outputs.tfvars s3://$S3_TERRAFORM_ARTIFACTS_BUCKET_NAME/$REPO_NAME-outputs.tfvars<br />rm -f raw_output.json<br />rm -f $REPO_NAME-outputs.tfvars<br /></pre> | AWS DevOps | 

### (Both options) Ingest data
<a name="both-options-ingest-data"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Upload sample CSV files. | To upload sample CSV files for the datasets, use the following steps:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-the-deployment-of-aws-supply-chain-data-lakes.html) | Data engineer | 

### (Both options) Set up AWS Supply Chain access
<a name="both-options-set-up-supplychain-access"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Set up AWS Supply Chain access. | To set up AWS Supply Chain access from the AWS Management Console, use the following steps:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-the-deployment-of-aws-supply-chain-data-lakes.html) | App owner | 

### (Automated option) Clean up all resources using GitHub Actions workflows
<a name="automated-option-clean-up-all-resources-using-github-actions-workflows"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Trigger the destroy workflow for integration flows resources. | Trigger the [destroy workflow](https://github.com/aws-samples/sample-automate-aws-supply-chain-deployment/blob/main/ASC-Integration-Flows/.github/workflows/destroy-workflow.yml) of `ASC-Integration-Flows` from your deployment branch in your GitHub organization. | AWS DevOps | 
| Trigger the destroy workflow for datasets resources. | Trigger the [destroy workflow](https://github.com/aws-samples/sample-automate-aws-supply-chain-deployment/blob/main/ASC-Datasets/.github/workflows/destroy-workflow.yml) of `ASC-Datasets` from your deployment branch in your GitHub organization. | AWS DevOps | 

### (Manual option) Clean up resources of AWS Supply Chain integration flows using Terraform
<a name="manual-option-clean-up-resources-of-supplychain-integration-flows-using-terraform"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Navigate to the `terraform-deployment` directory. | To go to the `terraform-deployment` directory of `ASC-Integration-Flows`, run the following command:<pre>cd ASC-Integration-Flows/terraform-deployment</pre> | AWS DevOps | 
| Set up the Terraform backend and providers configuration. | To set up the Terraform backend and providers configuration, use the following script:<pre># Setup terraform backend and providers config if they don't exist<br />chmod +x ../scripts/generate-terraform-config.sh<br />../scripts/generate-terraform-config.sh</pre> | AWS DevOps | 
| Generate infrastructure destruction plan. | To prepare for the controlled destruction of your AWS infrastructure by generating a detailed teardown plan, run the following commands. The process initializes Terraform, incorporates AWS Supply Chain dataset configurations, and creates a destruction plan that you can review before you execute it.<pre># Run terraform init and validate<br />terraform init<br />terraform validate<br /></pre><pre># Download and merge ASC DATASET tfvars<br />chmod +x ../scripts/download-vars-through-s3.sh<br />../scripts/download-vars-through-s3.sh $ASC_DATASET_VARS_REPO<br /></pre><pre># Run terraform plan<br />terraform plan -destroy \<br />-var-file="tfInputs/$ENVIRONMENT.tfvars" \<br />-var="project_name=$PROJECT_NAME" \<br />-var="environment=$ENVIRONMENT" \<br />-var="lambda_temp_dir=$LAMBDA_FUNCTION_TEMP_DIR_TERRAFORM" \<br />-var="layer_temp_dir=$LAMBDA_LAYER_TEMP_DIR_TERRAFORM" \<br />-parallelism=40 \<br />-out='tfplan.out'</pre> | AWS DevOps | 
| Execute infrastructure destruction plan. | To execute the planned destruction of your infrastructure, run the following command:<pre># Run terraform apply<br />terraform apply tfplan.out</pre> | AWS DevOps | 
| Remove Terraform outputs from Amazon S3 bucket. | To remove the outputs file that was uploaded during the deployment of `ASC-Integration-Flows`, run the following command:<pre># Delete the outputs file<br />aws s3 rm s3://$S3_TERRAFORM_ARTIFACTS_BUCKET_NAME/$REPO_NAME-outputs.tfvars</pre> | AWS DevOps | 

### (Manual option) Clean up resources of AWS Supply Chain service datasets using Terraform
<a name="manual-option-clean-up-resources-of-supplychain-service-datasets-using-terraform"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Navigate to the `terraform-deployment` directory. | To go to the `terraform-deployment` directory of `ASC-Datasets`, run the following command:<pre>cd ASC-Datasets/terraform-deployment</pre> | AWS DevOps | 
| Set up the Terraform backend and providers configuration. | To set up the Terraform backend and providers configuration, use the following script:<pre># Setup terraform backend and providers config if they don't exist<br />chmod +x ../scripts/generate-terraform-config.sh<br />../scripts/generate-terraform-config.sh</pre> | AWS DevOps | 
| Generate infrastructure destruction plan. | To create a plan for destroying AWS Supply Chain dataset resources, run the following commands:<pre># Run terraform init and validate<br />terraform init<br />terraform validate<br /><br /># Run terraform plan<br />terraform plan -destroy \<br />-var-file="tfInputs/$ENVIRONMENT.tfvars" \<br />-var="project_name=$PROJECT_NAME" \<br />-var="environment=$ENVIRONMENT" \<br />-var="user_role=$AWS_USER_ROLE" \<br />-var="lambda_temp_dir=$LAMBDA_FUNCTION_TEMP_DIR_TERRAFORM" \<br />-var="layer_temp_dir=$LAMBDA_LAYER_TEMP_DIR_TERRAFORM" \<br />-parallelism=40 \<br />-out='tfplan.out'</pre> | AWS DevOps | 
| Empty Amazon S3 buckets. | To empty all Amazon S3 buckets (except the server access logging bucket, which is configured for `force-destroy`), use the following script:<pre># Delete S3 buckets excluding server access logging bucket<br />chmod +x ../scripts/empty-s3-buckets.sh<br />../scripts/empty-s3-buckets.sh tfplan.out</pre> | AWS DevOps | 
| Execute infrastructure destruction plan. | To execute the planned destruction of your AWS Supply Chain dataset infrastructure using the generated plan, run the following command:<pre># Run terraform apply<br />terraform apply tfplan.out</pre> | AWS DevOps | 
| Remove Terraform outputs from the Amazon S3 Terraform artifacts bucket. | To complete the cleanup process, remove the outputs file that was uploaded during the deployment of `ASC-Datasets` by running the following command:<pre># Delete the outputs file<br />aws s3 rm s3://$S3_TERRAFORM_ARTIFACTS_BUCKET_NAME/$REPO_NAME-outputs.tfvars</pre> | AWS DevOps | 

## Troubleshooting
<a name="automate-the-deployment-of-aws-supply-chain-data-lakes-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| An AWS Supply Chain dataset or integration flow did not deploy correctly because of AWS Supply Chain internal errors or insufficient IAM permissions for the service role. | First, clean up all resources. Then, redeploy the AWS Supply Chain [dataset resources](https://github.com/aws-samples/sample-automate-aws-supply-chain-deployment/blob/main/ASC-Datasets/README.md) and then redeploy the AWS Supply Chain [integration flow resources](https://github.com/aws-samples/sample-automate-aws-supply-chain-deployment/blob/main/ASC-Integration-Flows/README.md). | 
| The AWS Supply Chain integration flow doesn’t fetch the new data files uploaded for the AWS Supply Chain datasets. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-the-deployment-of-aws-supply-chain-data-lakes.html) | 

## Related resources
<a name="automate-the-deployment-of-aws-supply-chain-data-lakes-resources"></a>

**AWS documentation**
+ [AWS Supply Chain](https://docs.aws.amazon.com/aws-supply-chain/latest/adminguide/getting-started.html)

**Other resources**
+ [Understanding GitHub Actions workflows](https://docs.github.com/en/actions/get-started/understand-github-actions) (GitHub documentation)

## Additional information
<a name="automate-the-deployment-of-aws-supply-chain-data-lakes-additional"></a>

This solution can be replicated for additional datasets, and the ingested data can be queried for further analysis through prebuilt dashboards provided with AWS Supply Chain or through a custom integration with Amazon Quick Sight. In addition, you can use Amazon Q to ask questions related to your AWS Supply Chain instance.

**Analyze data with AWS Supply Chain Analytics**

For instructions to set up AWS Supply Chain Analytics, see [Setting AWS Supply Chain Analytics](https://docs.aws.amazon.com/aws-supply-chain/latest/userguide/setting_analytics.html) in the AWS Supply Chain documentation.

This pattern demonstrated the creation of **Calendar** and **Outbound Order Line** datasets. To create an analysis that uses these datasets, use the following steps:

1. To analyze the datasets, use the **Seasonality Analysis** dashboard. To add the dashboard, follow the steps in [Prebuilt dashboards](https://docs.aws.amazon.com/aws-supply-chain/latest/userguide/prebuilt_dashboards.html) in the AWS Supply Chain documentation.

1. Choose the dashboard to see its analysis that is based on sample CSV files for Calendar data and Outbound Order Line data.

The dashboard provides insights on demand over the years based on the ingested data for the datasets. You can further specify the ProductID, CustomerID, years, and other parameters for analysis.

**Use Amazon Q to ask questions related to your AWS Supply Chain instance**

[Amazon Q in AWS Supply Chain](https://docs.aws.amazon.com/aws-supply-chain/latest/userguide/qinasc.html) is an interactive generative AI assistant that helps you operate your supply chain more efficiently. Amazon Q can do the following:
+ Analyze the data in your AWS Supply Chain data lake.
+ Provide operational and financial insights.
+ Answer your immediate supply chain questions.

For more information about using Amazon Q, see [Enabling Amazon Q in AWS Supply Chain](https://docs.aws.amazon.com/aws-supply-chain/latest/userguide/enabling_QinASC.html) and [Using Amazon Q in AWS Supply Chain](https://docs.aws.amazon.com/aws-supply-chain/latest/userguide/using_QinASC.html) in the AWS Supply Chain documentation.

# Automate AWS resource assessment
<a name="automate-aws-resource-assessment"></a>

*Naveen Suthar, Arun Bagal, Manish Garg, and Sandeep Gawande, Amazon Web Services*

## Summary
<a name="automate-aws-resource-assessment-summary"></a>

This pattern describes an automated approach for setting up resource assessment capabilities by using the [AWS Cloud Development Kit (AWS CDK)](https://docs.aws.amazon.com/cdk/v2/guide/home.html). By using this pattern, operations teams gather resource auditing details in an automated manner and view the details of all resources deployed in an AWS account on a single dashboard. This is helpful in the following use cases:
+ Identifying infrastructure as code (IaC) tools and isolating resources created by different IaC solutions such as [HashiCorp Terraform](https://www.terraform.io/), [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html), AWS CDK, and [AWS Command Line Interface (AWS CLI)](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html)
+ Fetching resource-auditing information

This solution will also help the leadership team obtain insights about the resources and activities in an AWS account from a single dashboard. 


**Note**  
[Amazon Quick Sight](https://docs.aws.amazon.com/quicksight/latest/user/welcome.html) is a paid service. Before using it to analyze data and create a dashboard, review the [Amazon Quick Sight pricing](https://aws.amazon.com/quicksight/pricing/).

## Prerequisites and limitations
<a name="automate-aws-resource-assessment-prereqs"></a>

**Prerequisites**
+ An active AWS account.
+ AWS Identity and Access Management (IAM) roles and permissions with access to provision resources
+ An [Amazon Quick Sight account](https://docs.aws.amazon.com/quicksight/latest/user/signing-up.html) created with access to [Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) and [Amazon Athena](https://docs.aws.amazon.com/athena/latest/ug/what-is.html)
+ AWS CDK version 2.55.1 or later installed 
+ [Python](https://www.python.org/downloads/release/python-390/) version 3.9 or later installed

**Limitations**
+ This solution is deployed to a single AWS account.
+ The solution will not track the events that happened before its deployment unless AWS CloudTrail was already set up and storing data in an S3 bucket.

**Product versions**
+ AWS CDK version 2.55.1 or later
+ Python version 3.9 or later

## Architecture
<a name="automate-aws-resource-assessment-architecture"></a>

**Target technology stack**
+ Amazon Athena
+ AWS CloudTrail
+ AWS Glue
+ AWS Lambda
+ Amazon Quick Sight
+ Amazon S3

**Target architecture**

The AWS CDK code will deploy all the resources that are required to set up resource-assessment capabilities in an AWS account. The following diagram shows the process of sending CloudTrail logs to AWS Glue, Amazon Athena, and Quick Sight.

![\[AWS resource assessment with AWS Glue, Amazon Athena, and Amazon QuickSight in a six-step process.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/a504774e-db7a-4c36-a22c-ce56d252fb58/images/8f2b549d-33a8-4cbf-86fd-33244716b668.png)


1. CloudTrail sends logs to an S3 bucket for storage.

1. An event notification invokes a Lambda function that processes the logs and generates filtered data.

1. The filtered data is stored in another S3 bucket.

1. An AWS Glue crawler is set up on the filtered data that is in the S3 bucket to create a schema in the AWS Glue Data Catalog table.

1. The filtered data is ready to be queried by Amazon Athena.

1. The queried data is accessed by Quick Sight for visualization.
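Steps 2 and 3 in the preceding workflow are handled by the Lambda function that processes each new CloudTrail log object and writes filtered records to the second S3 bucket. The following Python sketch illustrates that idea only; it is not the code from the pattern's repository, and the bucket name, output key format, and userAgent-based classification of IaC tools are assumptions made for the example.

```
import gzip
import json
import os

import boto3

s3 = boto3.client("s3")

# Assumed environment variable that names the bucket for filtered data.
FILTERED_BUCKET = os.environ.get("FILTERED_BUCKET", "example-filtered-logs-bucket")


def classify_iac_tool(user_agent: str) -> str:
    """Best-effort guess of the IaC tool from the CloudTrail userAgent field."""
    ua = user_agent.lower()
    if "cloudformation" in ua:
        return "CloudFormation"
    if "terraform" in ua:
        return "Terraform"
    if "aws-cli" in ua:
        return "AWS CLI"
    return "Other"


def handler(event, context):
    """Invoked by the S3 event notification for each new CloudTrail log object."""
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        # CloudTrail delivers gzipped JSON files with a top-level "Records" array.
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        trail_records = json.loads(gzip.decompress(body))["Records"]

        filtered = [
            {
                "eventTime": ev.get("eventTime"),
                "eventSource": ev.get("eventSource"),
                "eventName": ev.get("eventName"),
                "iacTool": classify_iac_tool(ev.get("userAgent", "")),
            }
            for ev in trail_records
        ]

        # Write newline-delimited JSON so that the AWS Glue crawler can infer a schema.
        s3.put_object(
            Bucket=FILTERED_BUCKET,
            Key=key.replace(".json.gz", "-filtered.json"),
            Body="\n".join(json.dumps(item) for item in filtered).encode("utf-8"),
        )
```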

**Automation and scale**
+ This solution can be scaled from one AWS account to multiple AWS accounts if there is an organization-wide CloudTrail trail in AWS Organizations. By deploying CloudTrail at the organizational level, you can also use this solution to fetch resource-auditing details for all the required resources.
+ This pattern uses AWS serverless resources to deploy the solution.

## Tools
<a name="automate-aws-resource-assessment-tools"></a>

**AWS services**
+ [Amazon Athena](https://docs.aws.amazon.com/athena/latest/ug/what-is.html) is an interactive query service that helps you analyze data directly in Amazon S3 by using standard SQL.
+ [AWS Cloud Development Kit (AWS CDK)](https://docs.aws.amazon.com/cdk/latest/guide/home.html) is a software development framework that helps you define and provision AWS Cloud infrastructure in code.
+ [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) helps you set up AWS resources, provision them quickly and consistently, and manage them throughout their lifecycle across AWS accounts and AWS Regions.
+ [AWS CloudTrail](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-user-guide.html) helps you audit the governance, compliance, and operational risk of your AWS account.
+ [AWS Glue](https://docs.aws.amazon.com/glue/latest/dg/what-is-glue.html) is a fully managed extract, transform, and load (ETL) service. It helps you reliably categorize, clean, enrich, and move data between data stores and data streams. This pattern uses an AWS Glue crawler and an AWS Glue Data Catalog table.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a compute service that helps you run code without needing to provision or manage servers. It runs your code only when needed and scales automatically, so you pay only for the compute time that you use.
+ [Amazon Quick Sight](https://docs.aws.amazon.com/quicksight/latest/user/welcome.html) is a cloud-scale business intelligence (BI) service that helps you visualize, analyze, and report your data in a single dashboard.
+ [Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) is a cloud-based object storage service that helps you store, protect, and retrieve any amount of data.

**Code repository**

The code for this pattern is available in the GitHub [infrastructure-assessment-iac-automation](https://github.com/aws-samples/infrastructure-assessment-iac-automation) repository.

The code repository contains the following files and folders:
+ `lib` folder – The AWS CDK construct Python files used to create AWS resources
+ `src/lambda_code` – The Python code that is run in the Lambda function
+ `requirements.txt` – The list of all Python dependencies that must be installed
+ `cdk.json` – The input file to provide values required to spin up resources

## Best practices
<a name="automate-aws-resource-assessment-best-practices"></a>

Set up monitoring and alerting for the Lambda function. For more information, see [Monitoring and troubleshooting Lambda functions](https://docs.aws.amazon.com/lambda/latest/dg/lambda-monitoring.html). For general best practices when working with Lambda functions, see the [AWS documentation](https://docs.aws.amazon.com/lambda/latest/dg/best-practices.html).

## Epics
<a name="automate-aws-resource-assessment-epics"></a>

### Set up your environment
<a name="set-up-your-environment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clone the repo on your local machine. | To clone the repository, run the command `git clone https://github.com/aws-samples/infrastructure-assessment-iac-automation.git`. | AWS DevOps, DevOps engineer | 
| Set up the Python virtual environment and install required dependencies. | To set up the Python virtual environment, run the following commands.<pre>cd infrastructure-assessment-iac-automation<br />python3 -m venv .venv<br />source .venv/bin/activate</pre>To set up the required dependencies, run the command `pip install -r requirements.txt`. | AWS DevOps, DevOps engineer | 
| Set up the AWS CDK environment and synthesize the AWS CDK code. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-aws-resource-assessment.html) | AWS DevOps, DevOps engineer | 

### Set up AWS credentials on your local machine
<a name="set-up-aws-credentials-on-your-local-machine"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Export variables for the account and Region where the stack will be deployed. | To provide AWS credentials for AWS CDK by using environment variables, run the following commands.<pre>export CDK_DEFAULT_ACCOUNT=<12 Digit AWS Account Number><br />export CDK_DEFAULT_REGION=<region></pre> | AWS DevOps, DevOps engineer | 
| Set up the AWS CLI profile. | To set up the AWS CLI profile for the account, follow the instructions in the [AWS documentation](https://docs.aws.amazon.com/toolkit-for-visual-studio/latest/user-guide/keys-profiles-credentials.html). | AWS DevOps, DevOps engineer | 

### Configure and deploy the resource-assessment tool
<a name="configure-and-deploy-the-resource-assessment-tool"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Deploy resources in the account. | To deploy resources in the AWS account by using AWS CDK, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-aws-resource-assessment.html) | AWS DevOps | 
| Run the AWS Glue crawler and create the Data Catalog table. | An [AWS Glue crawler](https://docs.aws.amazon.com/glue/latest/dg/add-crawler.html) is used to keep the data schema dynamic. The solution creates and updates partitions in the [AWS Glue Data Catalog table](https://docs.aws.amazon.com/athena/latest/ug/querying-glue-catalog.html) by running the crawler periodically as defined by the AWS Glue crawler scheduler. After the data is available in the output S3 bucket, use the following steps to run the AWS Glue crawler and create the Data Catalog table schema for testing:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-aws-resource-assessment.html)The AWS CDK code configures the AWS Glue crawler to run at a particular time, but you can also run it on demand. | AWS DevOps, DevOps engineer | 
| Deploy the Quick Sight construct. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-aws-resource-assessment.html) | AWS DevOps, DevOps engineer | 
| Create the Quick Sight dashboard. | To create the example Quick Sight dashboard and analysis, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-aws-resource-assessment.html)For more information, see [Starting an analysis in Amazon Quick Sight](https://docs.aws.amazon.com/quicksight/latest/user/creating-an-analysis.html) and [Visual types in Amazon Quick Sight](https://docs.aws.amazon.com/quicksight/latest/user/working-with-visual-types.html). | AWS DevOps, DevOps engineer | 

### Clean up all AWS resources in the solution
<a name="clean-up-all-aws-resources-in-the-solution"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Remove the AWS resources. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-aws-resource-assessment.html) | AWS DevOps, DevOps engineer | 

### Set up additional features on top of the AWS resource-assessment tool automation
<a name="set-up-additional-features-on-top-of-the-aws-resource-assessment-tool-automation"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Monitor and clean up manually created resources. | (Optional) If your organization has compliance requirements to create resources using IaC tools, you can achieve compliance by using AWS resource-assessment tool automation to fetch manually provisioned resources. You can also use the tool to import the resources to an IaC tool or to re-create them. To monitor manually provisioned resources, perform the following high-level tasks:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-aws-resource-assessment.html) | AWS DevOps, DevOps engineer | 

## Troubleshooting
<a name="automate-aws-resource-assessment-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| AWS CDK returns errors. | For help with AWS CDK issues, see [Troubleshooting common AWS CDK issues](https://docs.aws.amazon.com/cdk/v2/guide/troubleshooting.html). | 

## Related resources
<a name="automate-aws-resource-assessment-resources"></a>
+ [Building Lambda functions with Python](https://docs.aws.amazon.com/lambda/latest/dg/lambda-python.html)
+ [Get started with AWS CDK](https://docs.aws.amazon.com/cdk/v2/guide/getting_started.html)
+ [Working with AWS CDK in Python](https://docs.aws.amazon.com/cdk/v2/guide/work-with-cdk-python.html)
+ [Creating a CloudTrail log trail](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-create-and-update-a-trail.html)
+ [Get Started with Amazon Quick Sight](https://aws.amazon.com/quicksight/getting-started/)

## Additional information
<a name="automate-aws-resource-assessment-additional"></a>

**Multiple accounts**

To set up the AWS CLI credential for multiple accounts, use AWS profiles. For more information, see the *Configure multiple profiles* section in [Set up the AWS CLI](https://aws.amazon.com/getting-started/guides/setup-environment/module-three/).

**AWS CDK commands**

When working with AWS CDK, keep in mind the following useful commands:
+ Lists all stacks in the app

  ```
  cdk ls
  ```
+ Emits the synthesized AWS CloudFormation template

  ```
  cdk synth
  ```
+ Deploys the stack to your default AWS account and Region

  ```
  cdk deploy
  ```
+ Compares the deployed stack with the current state

  ```
  cdk diff
  ```
+ Opens the AWS CDK documentation

  ```
  cdk docs
  ```

# Install SAP systems automatically by using open-source tools
<a name="install-sap-systems-automatically-by-using-open-source-tools"></a>

*Guilherme Sesterheim, Amazon Web Services*

## Summary
<a name="install-sap-systems-automatically-by-using-open-source-tools-summary"></a>

This pattern shows how to automate SAP systems installation by using open-source tools to create the following resources:
+ An SAP S/4HANA 1909 database
+ An SAP ABAP Central Services (ASCS) instance
+ An SAP Primary Application Server (PAS) instance

HashiCorp Terraform creates the SAP system’s infrastructure and Ansible configures the operating system (OS) and installs SAP applications. Jenkins runs the installation.

This setup turns SAP systems installation into a repeatable process, which can help increase deployment efficiency and quality.

**Note**  
The example code provided in this pattern works for both high-availability (HA) systems and non-HA systems.

## Prerequisites and limitations
<a name="install-sap-systems-automatically-by-using-open-source-tools-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ An Amazon Simple Storage Service (Amazon S3) bucket that contains all of your SAP media files
+ An AWS Identity and Access Management (IAM) principal with an [access key and secret key](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html), and that has the following permissions:
  + **Read only permissions:** Amazon Route 53, AWS Key Management Service (AWS KMS)
  + **Read and write permissions:** Amazon S3, Amazon Elastic Compute Cloud (Amazon EC2), Amazon Elastic File System (Amazon EFS), IAM, Amazon CloudWatch, Amazon DynamoDB
+ A Route 53 [private hosted zone](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/hosted-zones-private.html)
+ A subscription to the [Red Hat Enterprise Linux for SAP with HA and Update Services 8.2](https://aws.amazon.com/marketplace/pp/prodview-5grz5a5thx7c2) Amazon Machine Image (AMI) in AWS Marketplace
+ An [AWS KMS customer managed key](https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingKMSEncryption.html#aws-managed-customer-managed-keys)
+ A [Secure Shell (SSH) key pair](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html)
+ An [Amazon EC2 security group](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-security-groups.html) that allows SSH connection on port 22 from the hostname where you install Jenkins (the hostname is most likely **localhost**)
+ [Vagrant](https://www.vagrantup.com/) by HashiCorp installed and configured
+ [VirtualBox](https://www.virtualbox.org/) by Oracle installed and configured
+ Familiarity with Git, Terraform, Ansible, and Jenkins

**Limitations**
+ Only SAP S/4HANA 1909 is fully tested for this specific scenario. The example Ansible code in this pattern requires modification if you use another version of SAP HANA.
+ The example procedure in this pattern works for macOS and Linux operating systems. Some of the commands can be run only in Unix-based terminals. However, you can achieve a similar result by using different commands and a Windows OS.

**Product versions**
+ SAP S/4HANA 1909
+ Red Hat Enterprise Linux (RHEL) 8.2 or later

## Architecture
<a name="install-sap-systems-automatically-by-using-open-source-tools-architecture"></a>

The following diagram shows an example workflow that uses open-source tools to automate SAP systems installation in an AWS account:

![\[Example workflow uses open-source tools to automate SAP systems installation in an AWS account.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/aaf11dac-38cc-4e89-be86-51d4409cf238/images/d7902f9d-f1be-461f-b69b-cf3c663c8f2f.png)


The diagram shows the following workflow:

1. Jenkins orchestrates running the SAP system installation by running Terraform and Ansible code.

1. Terraform code builds the SAP system’s infrastructure.

1. Ansible code configures the OS and installs SAP applications.

1. An SAP S/4HANA 1909 database, an ASCS instance, and a PAS instance, including all defined prerequisites, are installed on an Amazon EC2 instance.

**Note**  
The example setup in this pattern automatically creates an Amazon S3 bucket in your AWS account to store the Terraform state file.

**Technology stack**
+ Terraform
+ Ansible
+ Jenkins
+ An SAP S/4HANA 1909 database
+ An SAP ASCS instance
+ An SAP PAS instance
+ Amazon EC2 

## Tools
<a name="install-sap-systems-automatically-by-using-open-source-tools-tools"></a>

**AWS services**
+ [Amazon Elastic Compute Cloud (Amazon EC2)](https://docs.aws.amazon.com/ec2/?id=docs_gateway) provides scalable computing capacity in the AWS Cloud. You can launch as many virtual servers as you need, and quickly scale them up or down.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.
+ [AWS Key Management Service (AWS KMS) ](https://docs.aws.amazon.com/kms/latest/developerguide/overview.html)helps you create and control cryptographic keys to protect your data.
+ [Amazon Virtual Private Cloud (Amazon VPC)](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html) helps you launch AWS resources into a virtual network that you’ve defined. This virtual network resembles a traditional network that you’d operate in your own data center, with the benefits of using the scalable infrastructure of AWS.

**Other tools**
+ [HashiCorp Terraform](https://www.terraform.io/docs) is a command-line interface application that helps you use code to provision and manage cloud infrastructure and resources.
+ [Ansible](https://www.ansible.com/) is an open-source configuration as code (CaC) tool that helps automate applications, configurations, and IT infrastructure.
+ [Jenkins](https://www.jenkins.io/) is an open-source automation server that enables developers to build, test, and deploy their software.

**Code**

The code for this pattern is available in the GitHub [aws-install-sap-with-jenkins-ansible](https://github.com/aws-samples/aws-install-sap-with-jenkins-ansible) repository.

## Epics
<a name="install-sap-systems-automatically-by-using-open-source-tools-epics"></a>

### Configure the prerequisites
<a name="configure-the-prerequisites"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Add your SAP media files to an Amazon S3 bucket. | [Create an Amazon S3 bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/create-bucket-overview.html) that contains all of your SAP media files.Make sure that you follow the AWS Launch Wizard’s folder hierarchy for **S/4HANA** in the [Launch Wizard documentation](https://docs.aws.amazon.com/launchwizard/latest/userguide/launch-wizard-sap-software-install-details.html). | Cloud administrator | 
| Install VirtualBox. | Install and configure [VirtualBox](https://www.virtualbox.org/) by Oracle. | DevOps engineer | 
| Install Vagrant. | Install and configure [Vagrant](https://www.vagrantup.com/) by HashiCorp. | DevOps engineer | 
| Configure your AWS account. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/install-sap-systems-automatically-by-using-open-source-tools.html) | General AWS | 

### Build and run your SAP installation
<a name="build-and-run-your-sap-installation"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clone the code repository from GitHub. | Clone the [aws-install-sap-with-jenkins-ansible](https://github.com/aws-samples/aws-install-sap-with-jenkins-ansible) repository on GitHub. | DevOps engineer | 
| Start the Jenkins service. | Open the Linux terminal. Then, navigate to the local folder that contains the cloned code repository folder and run the following command:<pre>sudo vagrant up</pre>The Jenkins startup takes about 20 minutes. The command returns a **Service is up and running** message when successful. | DevOps engineer | 
| Open Jenkins in a web browser and log in. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/install-sap-systems-automatically-by-using-open-source-tools.html) | DevOps engineer | 
| Configure your SAP system installation parameters. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/install-sap-systems-automatically-by-using-open-source-tools.html)[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/install-sap-systems-automatically-by-using-open-source-tools.html)You can configure the other nonrequired parameters as needed, based on your use case. For example, you can change the SAP system ID (SID) of the instances, default password, names, and tags for your SAP system. All of the required variables have **(Required)** at the beginning of their names. | AWS systems administrator, DevOps engineer | 
| Run your SAP system installation. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/install-sap-systems-automatically-by-using-open-source-tools.html)For information on the pipeline steps, see the **Understanding the pipeline steps** section of [Automating SAP installation with open-source tools](https://aws.amazon.com/blogs/awsforsap/automating-sap-installation-with-open-source-tools/) on the AWS Blog.If an error occurs, move your cursor over the red error box that appears and choose **Logs**. The logs for the pipeline step that failed appear. Most errors occur because of incorrect parameter settings. | DevOps engineer, AWS systems administrator | 

## Related resources
<a name="install-sap-systems-automatically-by-using-open-source-tools-resources"></a>
+ [DevOps for SAP – SAP Installation: From 2 Months to 2 Hours](https://videos.itrevolution.com/watch/707351918/) (DevOps Enterprise Summit Video Library)

# Automate AWS Service Catalog portfolio and product deployment by using AWS CDK
<a name="automate-aws-service-catalog-portfolio-and-product-deployment-by-using-aws-cdk"></a>

*Sandeep Gawande, Viyoma Sachdeva, and RAJNEESH TYAGI, Amazon Web Services*

## Summary
<a name="automate-aws-service-catalog-portfolio-and-product-deployment-by-using-aws-cdk-summary"></a>

AWS Service Catalog helps you centrally manage catalogs of IT services, or *products*, that are approved for use in your organization’s AWS environment. A collection of products is called a *portfolio*, and a portfolio also contains configuration information. With AWS Service Catalog, you can create a customized portfolio for each type of user in your organization and then grant access to the appropriate portfolio. Those users can then quickly deploy any product they need from within the portfolio.

If you have a complex networking infrastructure, such as multi-Region and multi-account architectures, it is recommended that you create and manage Service Catalog portfolios in a single, central account. This pattern describes how to use AWS Cloud Development Kit (AWS CDK) to automate the creation of Service Catalog portfolios in a central account, grant end users access to them, and then, optionally, provision products in one or more target AWS accounts. This ready-to-use solution creates the Service Catalog portfolios in the source account. It also, optionally, provisions products in target accounts by using AWS CloudFormation StackSets and helps you configure TagOptions for the products:
+ **AWS CloudFormation StackSets** – You can use StackSets to launch Service Catalog products across multiple AWS Regions and accounts. In this solution, you have the option to automatically provision products when you deploy this solution. For more information, see [Using AWS CloudFormation StackSets](https://docs.aws.amazon.com/servicecatalog/latest/adminguide/using-stacksets.html) (Service Catalog documentation) and [StackSets concepts](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-concepts.html) (CloudFormation documentation).
+ **TagOption library** – You can manage tags on provisioned products by using the TagOption library. A *TagOption* is a key-value pair managed in AWS Service Catalog. It is not an AWS tag, but it serves as a template for creating an AWS tag based on the TagOption. For more information, see [TagOption library](https://docs.aws.amazon.com/servicecatalog/latest/adminguide/tagoptions.html) (Service Catalog documentation).

## Prerequisites and limitations
<a name="automate-aws-service-catalog-portfolio-and-product-deployment-by-using-aws-cdk-prereqs"></a>

**Prerequisites**
+ An active AWS account that you want to use as the source account for administering Service Catalog portfolios.
+ If you are using this solution to provision products in one or more target accounts, the target account must already exist and be active.
+ AWS Identity and Access Management (IAM) permissions to access AWS Service Catalog, AWS CloudFormation, and AWS IAM.

**Product versions**
+ AWS CDK version 2.27.0

## Architecture
<a name="automate-aws-service-catalog-portfolio-and-product-deployment-by-using-aws-cdk-architecture"></a>

**Target technology stack**
+ Service Catalog portfolios in a centralized AWS account
+ Service Catalog products deployed in target accounts

**Target architecture**

![\[AWS CDK creating Service Catalog portfolios and provisioning products in the target account.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/e8f217a7-aec4-4c85-8f6b-f91995506be0/images/1f027b82-14c3-485a-909b-1544e974b90a.png)


1. In the portfolio (or *source*) account, you update the **config.json** file with the AWS account, AWS Region, IAM role, portfolio, and product information for your use case.

1. You deploy the AWS CDK application.

1. The AWS CDK application assumes the deployment IAM role and creates the Service Catalog portfolios and products defined in the **config.json** file.

   If you configured StackSets to deploy products in a target account, the process continues. If you didn’t configure StackSets to provision any products, then the process is complete.

1. The AWS CDK application assumes the **StackSet administrator** role and deploys the AWS CloudFormation stack set you defined in the **config.json** file.

1. In the target account, StackSets assumes the **StackSet execution** role and provisions the products.

## Tools
<a name="automate-aws-service-catalog-portfolio-and-product-deployment-by-using-aws-cdk-tools"></a>

**AWS services**
+ [AWS Cloud Development Kit (AWS CDK)](https://docs.aws.amazon.com/cdk/latest/guide/home.html) is a software development framework that helps you define and provision AWS Cloud infrastructure in code.
+ [AWS CDK Toolkit](https://docs.aws.amazon.com/cdk/latest/guide/cli.html) is a command line cloud development kit that helps you interact with your AWS CDK app.
+ [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) helps you set up AWS resources, provision them quickly and consistently, and manage them throughout their lifecycle across AWS accounts and Regions.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.
+ [AWS Service Catalog](https://docs.aws.amazon.com/servicecatalog/latest/adminguide/introduction.html) helps you centrally manage catalogs of IT services that are approved for AWS. End users can quickly deploy only the approved IT services they need, following the constraints set by your organization.

**Code repository**

The code for this pattern is available on GitHub, in the [aws-cdk-servicecatalog-automation](https://github.com/aws-samples/aws-cdk-servicecatalog-automation.git) repository. The code repository contains the following files and folders:
+ **cdk-sevicecatalog-app** – This folder contains the AWS CDK application for this solution.
+ **config** – This folder contains the **config.json** file and the CloudFormation template for deploying the products in the Service Catalog portfolio.
+ **config/config.json** – This file contains all of the configuration information. You update this file to customize this solution for your use case.
+ **config/templates** – This folder contains the CloudFormation templates for the Service Catalog products.
+ **setup.sh** – This script deploys the solution.
+ **uninstall.sh** – This script deletes the stack and all of the AWS resources created when deploying this solution.

To use the sample code, follow the instructions in the [Epics](#automate-aws-service-catalog-portfolio-and-product-deployment-by-using-aws-cdk-epics) section.
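As an illustration of what the AWS CDK application creates from the **config.json** file, the following minimal AWS CDK Python sketch defines one portfolio, one product backed by a CloudFormation template, and a TagOption. It is not the repository's code; the role name, template path, and tag values are placeholders that loosely mirror the sample config file in the [Additional information](#automate-aws-service-catalog-portfolio-and-product-deployment-by-using-aws-cdk-additional) section.

```
import aws_cdk as cdk
from aws_cdk import aws_iam as iam
from aws_cdk import aws_servicecatalog as servicecatalog
from constructs import Construct


class PortfolioStack(cdk.Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Portfolio definition; the names mirror the sample config.json values.
        portfolio = servicecatalog.Portfolio(
            self,
            "Ec2Portfolio",
            display_name="EC2 Product Portfolio",
            provider_name="User1",
        )

        # A product version backed by a local CloudFormation template.
        product = servicecatalog.CloudFormationProduct(
            self,
            "Ec2Product",
            product_name="Ec2",
            owner="owner1",
            product_versions=[
                servicecatalog.CloudFormationProductVersion(
                    product_version_name="v1",
                    cloud_formation_template=servicecatalog.CloudFormationTemplate.from_asset(
                        "config/templates/template1.json"  # placeholder path
                    ),
                )
            ],
        )
        portfolio.add_product(product)

        # Grant an existing end-user IAM role access to the portfolio.
        end_user_role = iam.Role.from_role_name(self, "EndUserRole", "ExampleEndUserRole")
        portfolio.give_access_to_role(end_user_role)

        # TagOptions serve as templates for tags on provisioned products.
        tag_options = servicecatalog.TagOptions(
            self,
            "TagOptions",
            allowed_values_for_tags={"Environment": ["dev", "stage", "prod"]},
        )
        portfolio.associate_tag_options(tag_options)


app = cdk.App()
PortfolioStack(app, "ServiceCatalogPortfolioStack")
app.synth()
```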

## Best practices
<a name="automate-aws-service-catalog-portfolio-and-product-deployment-by-using-aws-cdk-best-practices"></a>
+ IAM roles used to deploy this solution should adhere to the [principle of least-privilege](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#grant-least-privilege) (IAM documentation).
+ Adhere to the [Best practices for developing cloud applications with AWS CDK](https://aws.amazon.com/blogs/devops/best-practices-for-developing-cloud-applications-with-aws-cdk/) (AWS blog post).
+ Adhere to the [AWS CloudFormation best practices](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/best-practices.html) (CloudFormation documentation).

## Epics
<a name="automate-aws-service-catalog-portfolio-and-product-deployment-by-using-aws-cdk-epics"></a>

### Set up your environment
<a name="set-up-your-environment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Install the AWS CDK Toolkit. | Make sure you have AWS CDK Toolkit installed. Enter the following command to confirm whether it is installed and check the version. <pre>cdk --version</pre>If AWS CDK Toolkit is not installed, then enter the following command to install it.<pre>npm install -g aws-cdk@2.27.0</pre>If AWS CDK Toolkit version is earlier than 2.27.0, then enter the following command to update it to version 2.27.0.<pre>npm install -g aws-cdk@2.27.0 --force</pre> | AWS DevOps, DevOps engineer | 
| Clone the repository. | Enter the following command. In *Clone the repository* in the [Additional information](#automate-aws-service-catalog-portfolio-and-product-deployment-by-using-aws-cdk-additional) section, you can copy the full command containing the URL for the repository. This clones the [aws-cdk-servicecatalog-automation](https://github.com/aws-samples/aws-cdk-servicecatalog-automation) repository from GitHub.<pre>git clone <repository-URL>.git</pre>This creates an `aws-cdk-servicecatalog-automation` folder in the target directory. Enter the following command to navigate into this folder.<pre>cd aws-cdk-servicecatalog-automation</pre> | AWS DevOps, DevOps engineer | 
| Set up AWS credentials. | Enter the following commands. These export the following variables, which define the AWS account and Region where you are deploying the stack.<pre>export CDK_DEFAULT_ACCOUNT=<12-digit AWS account number></pre><pre>export CDK_DEFAULT_REGION=<AWS Region></pre>AWS credentials for AWS CDK are provided through environment variables. | AWS DevOps, DevOps engineer | 
| Configure permissions for end user IAM roles. | If you are going to use IAM roles to grant access to the portfolio and the products in it, the roles must have permissions to be assumed by the **servicecatalog.amazonaws.com** service principal. For instructions about how to grant these permissions, see [Enabling trusted access with Service Catalog](https://docs.aws.amazon.com/organizations/latest/userguide/services-that-can-integrate-servicecatalog.html#integrate-enable-ta-servicecatalog) (AWS Organizations documentation). | AWS DevOps, DevOps engineer | 
| Configure IAM roles required by StackSets. | If you are using StackSets to automatically provision products in target accounts, you need to configure the IAM roles that administer and run the stack set.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-aws-service-catalog-portfolio-and-product-deployment-by-using-aws-cdk.html) | AWS DevOps, DevOps engineer | 

### Customize and deploy the solution
<a name="customize-and-deploy-the-solution"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the CloudFormation templates. | In the `config/templates` folder, create CloudFormation templates for any products that you want to include in your portfolios. For more information, see [Working with AWS CloudFormation templates](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-guide.html) (CloudFormation documentation). | App developer, AWS DevOps, DevOps engineer | 
| Customize the config file. | In the `config` folder, open the **config.json** file and define the parameters as appropriate for your use case. The following parameters are required unless otherwise noted:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-aws-service-catalog-portfolio-and-product-deployment-by-using-aws-cdk.html)For an example of a completed config file, see *Sample config file* in the [Additional information](#automate-aws-service-catalog-portfolio-and-product-deployment-by-using-aws-cdk-additional) section. | App developer, DevOps engineer, AWS DevOps | 
| Deploy the solution. | Enter the following command. This deploys the AWS CDK app and provisions the Service Catalog portfolios and products as specified in the **config.json** file.<pre>sh +x setup.sh</pre> | App developer, DevOps engineer, AWS DevOps | 
| Verify the deployment. | Verify successful deployment by doing the following: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-aws-service-catalog-portfolio-and-product-deployment-by-using-aws-cdk.html) | General AWS | 
| (Optional) Update the portfolios and products. | If you want to use this solution to update the portfolios or products or to provision new products:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-aws-service-catalog-portfolio-and-product-deployment-by-using-aws-cdk.html)For example, you can add additional portfolios or provision more resources. The AWS CDK app implements only the changes. If there are no changes to previously deployed portfolios or products, the redeployment doesn’t affect them. | App developer, DevOps engineer, General AWS | 

### Clean up the solution
<a name="clean-up-the-solution"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| (Optional) Remove AWS resources deployed by this solution. | If you want to delete a provisioned product, follow the instructions in [Deleting provisioned products](https://docs.aws.amazon.com/servicecatalog/latest/userguide/enduser-delete.html) (Service Catalog documentation).If you want to delete all the resources created by this solution, enter the following command.<pre>sh uninstall.sh</pre> | AWS DevOps, DevOps engineer, App developer | 

## Related resources
<a name="automate-aws-service-catalog-portfolio-and-product-deployment-by-using-aws-cdk-resources"></a>
+ [AWS Service Catalog Construct Library](https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_servicecatalog-readme.html) (AWS API Reference)
+ [StackSets concepts](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-concepts.html) (CloudFormation documentation)
+ [AWS Service Catalog](https://aws.amazon.com/servicecatalog) (AWS marketing)
+ [Using Service Catalog with the AWS CDK](https://catalog.us-east-1.prod.workshops.aws/workshops/d40750d7-a330-49be-9945-cde864610de9/en-US/4-builders-devs/sc-cdk) (AWS workshop)

## Additional information
<a name="automate-aws-service-catalog-portfolio-and-product-deployment-by-using-aws-cdk-additional"></a>

**Clone the repository**

Enter the following command to clone the repository from GitHub.

```
git clone https://github.com/aws-samples/aws-cdk-servicecatalog-automation.git
```

**Sample config file**

The following is a sample **config.json** file with example values.

```
{
    "portfolios": [
        {
            "displayName": "EC2 Product Portfolio",
            "providerName": "User1",
            "description": "Test1",
            "roles": [
                "<Names of IAM roles that can access the products>"
            ],
            "users": [
                "<Names of IAM users who can access the products>"
            ],
            "groups": [
                "<Names of IAM user groups that can access the products>"
            ]
        },
        {
            "displayName": "Autoscaling Product Portfolio",
            "providerName": "User2",
            "description": "Test2",
            "roles": [
                "<Name of IAM role>"
            ]
        }
    ],
    "tagOption": [
        {
            "key": "Group",
            "value": [
                "finance",
                "engineering",
                "marketing",
                "research"
            ]
        },
        {
            "key": "CostCenter",
            "value": [
                "01",
                "02",
                "03",
                "04"
            ]
        },
        {
            "key": "Environment",
            "value": [
                "dev",
                "prod",
                "stage"
            ]
        }
    ],
    "products": [
        {
            "portfolioName": "EC2 Product Profile",
            "productName": "Ec2",
            "owner": "owner1",
            "productVersionName": "v1",
            "templatePath": "../../config/templates/template1.json"
        },
        {
            "portfolioName": "Autoscaling Product Profile",
            "productName": "autoscaling",
            "owner": "owner1",
            "productVersionName": "v1",
            "templatePath": "../../config/templates/template2.json",
            "deployWithStackSets": {
                "accounts": [
                    "012345678901",
                ],
                "regions": [
                    "us-west-2"
                ],
                "stackSetAdministrationRoleName": "AWSCloudFormationStackSetAdministrationRole",
                "stackSetExecutionRoleName": "AWSCloudFormationStackSetExecutionRole"
            }
        }
    ]
}
```

# Automate dynamic pipeline management for deploying hotfix solutions in Gitflow environments by using AWS Service Catalog and AWS CodePipeline
<a name="automate-dynamic-pipeline-management-for-deploying-hotfix-solutions"></a>

*Balaji Vedagiri, Faisal Shahdad, Shanmugam Shanker, and Vivek Thangamuthu, Amazon Web Services*

## Summary
<a name="automate-dynamic-pipeline-management-for-deploying-hotfix-solutions-summary"></a>

**Note**  
AWS CodeCommit is no longer available to new customers. Existing customers of AWS CodeCommit can continue to use the service as normal. [Learn more](https://aws.amazon.com/blogs/devops/how-to-migrate-your-aws-codecommit-repository-to-another-git-provider)

This pattern addresses a scenario of managing a dynamic hotfix pipeline that’s dedicated solely to deploying hotfix solutions to a production environment securely. The solution is implemented and managed by using an AWS Service Catalog portfolio and product. An Amazon EventBridge rule is used for event automation. Restrictions are enforced by using Service Catalog portfolio constraints and AWS Identity and Access Management (IAM) roles for developers. Only an AWS Lambda function, which is invoked by the EventBridge rule, is allowed to launch the Service Catalog product. This pattern is designed for environments with a specific Gitflow setup, which is described in [Additional information](#automate-dynamic-pipeline-management-for-deploying-hotfix-solutions-additional).

Typically, a hotfix is deployed to address critical or security issues reported in a live environment, such as Production. Hotfixes should be deployed directly to Staging and Production environments only. The Staging and Production pipelines are used extensively for regular development requests. These pipelines can’t be used to deploy hotfixes because there are ongoing features in quality assurance that can’t be promoted to Production. To release hotfixes, this pattern describes a dynamic, short-lived pipeline with the following security features:
+ **Automatic creation** – A hotfix pipeline is automatically created whenever a hotfix branch is created in an AWS CodeCommit repository. 
+ **Access restrictions** – Developers don’t have access to create this pipeline outside of the hotfix process. 
+ **Controlled stage** – The pipeline has a controlled stage with a special access token, ensuring that a pull request (PR) can only be created once. 
+ **Approval stages** – Approval stages are included in the pipeline to get necessary approvals from relevant stakeholders. 
+ **Automatic deletion** – The hotfix pipeline is automatically deleted whenever a `hotfix` branch is deleted in the CodeCommit repository after it’s merged with a PR. 

## Prerequisites and limitations
<a name="automate-dynamic-pipeline-management-for-deploying-hotfix-solutions-prereqs"></a>

**Prerequisites**
+ Three active AWS accounts are required as follows:
  + Tools account - For continuous integration and continuous delivery (CI/CD) setup.
  + Stage account - For user acceptance testing.
  + Production account - For a business end user.
  + (Optional) Add an AWS account to act as a QA account. This account is required if you want both a main pipeline setup, including QA, and a hotfix pipeline solution for testing.
+ An AWS CloudFormation stack with an optional condition to deploy in the QA account using the main pipeline, if needed. The pattern can still be tested without the main pipeline setup by creating and deleting a `hotfix` branch.
+ An Amazon Simple Storage Service (Amazon S3) bucket to store the CloudFormation templates that are used to create Service Catalog products.
+ Create PR approval rules for the CodeCommit repository in accordance with the compliance requirements (after creating the repository).
+ Restrict the IAM permissions of developers and team leads to deny the execution of the [prcreation-lambda](https://github.com/aws-samples/dynamic_hotfix_codepipeline/blob/main/pre-requisites/lambdasetup.yaml#L55) Lambda function because it should be invoked only from the pipeline.

**Limitations**
+ The CloudFormation provider is used in the deploy stage, and the application is deployed using a CloudFormation change set. If you want to use a different deployment option, modify the CodePipeline stack as required.
+ This pattern uses AWS CodeBuild and other configuration files to deploy a sample microservice. If you have a different workload type (for example, serverless workloads), you must update all relevant configurations.
+ This pattern deploys the application in a single AWS Region (for example, US East (N. Virginia) us-east-1) across AWS accounts. To deploy across multiple Regions, change the Region reference in commands and stacks.
+ Some AWS services aren’t available in all AWS Regions. For Region availability, see [AWS services by Region](https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/). For specific endpoints, see [Service endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/aws-service-information.html), and choose the link for the service.

## Architecture
<a name="automate-dynamic-pipeline-management-for-deploying-hotfix-solutions-architecture"></a>

The diagrams in this section provide workflows for a create lifecycle event and for a delete lifecycle event.

![\[Workflow to create a lifecycle event.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/64311acc-8c0f-4734-aa1b-74345d86c752/images/3939f77c-4221-4c23-a3a1-3e8a294b2b32.png)


The preceding diagram for creating a lifecycle event shows the following:

1. The developer creates a `hotfix-*` branch in the CodeCommit repository to develop a hotfix-related solution.

1. The `hotfix-*` branch creation event is captured through the EventBridge rule. The event details include the repository name and branch name.

1. The EventBridge rule invokes the AWS Lambda function `hotfix-lambda-function`. The EventBridge rule passes the event information to the Lambda function as input.

1. The Lambda function processes the input to retrieve the repository name and branch name. It launches the Service Catalog product with values retrieved from the processed input.

1. The Service Catalog product includes a pipeline setup that deploys the solution to the Stage and Production environments. The pipeline includes source, build, and deploy stages, plus a manual approval stage to promote the deployment to the Production environment.

1. The source stage retrieves the code from the repository and `hotfix-*` branch that was created in the first step. The code is passed to the build stage through an Amazon S3 bucket for artifacts. In the build stage, a container image is created that includes the hotfix that is developed in the `hotfix-*` branch and pushed into Amazon Elastic Container Registry (Amazon ECR).

1. The deploy stage to the stage environment updates Amazon Elastic Container Service (Amazon ECS) with the latest container image that includes the hotfix. The hotfix is deployed by creating and executing a CloudFormation change set.

1. The `prcreation-lambda` Lambda function is invoked after successful deployment in the Stage environment. This Lambda function creates a PR from the `hotfix-*` branch to the `develop` and `main` branches of the repository. The Lambda function ensures that the fix developed in the `hotfix-*` branch is backmerged and included in subsequent deployments.

1. A manual approval stage helps to ensure that the necessary stakeholders review the fix and give approval to deploy in Production.

1. The deploy stage to the production environment updates Amazon ECS with the latest container image that includes the hotfix. The hotfix is deployed by creating and executing a CloudFormation change set.

![\[Workflow to delete a lifecycle event.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/64311acc-8c0f-4734-aa1b-74345d86c752/images/192aa897-bd9b-4a9f-804e-340371612b3b.png)


The preceding diagram for deleting a lifecycle event shows the following:

1. The developer deletes the `hotfix-*` branch after successful deployment of the hotfix to the production environment.

1. The `hotfix-*` branch deletion event is captured through an EventBridge rule. The event details include the repository name and branch name. (An example event pattern that matches both creation and deletion events is shown after this workflow.)

1. The EventBridge rule invokes the Lambda function. The EventBridge rule passes the event information to the Lambda function as input.

1. The Lambda function processes the input to retrieve the repository name and branch name. The Lambda function determines the respective Service Catalog product from the passed input and then terminates the product.

1. The Service Catalog provisioned product termination deletes the pipeline and relevant resources that were created earlier in that product.
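
This pattern creates the EventBridge rules for you. As a rough illustration only, a rule that matches CodeCommit `hotfix-*` branch creation and deletion events could be defined as follows; the rule name is a placeholder, and the pattern itself may use separate rules for the create and delete workflows.

```
import json

import boto3

events = boto3.client("events")

# Match CodeCommit repository state changes for branches whose names
# start with "hotfix-", on both branch creation and branch deletion.
event_pattern = {
    "source": ["aws.codecommit"],
    "detail-type": ["CodeCommit Repository State Change"],
    "detail": {
        "event": ["referenceCreated", "referenceDeleted"],
        "referenceType": ["branch"],
        "referenceName": [{"prefix": "hotfix-"}],
    },
}

events.put_rule(
    Name="hotfix-branch-lifecycle",  # hypothetical rule name
    EventPattern=json.dumps(event_pattern),
    State="ENABLED",
)
```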

**Automation and scale**
+ The pattern includes an EventBridge rule and a Lambda function, which can handle multiple hotfix branch creation requests in parallel. The Lambda function provisions the Service Catalog product for the matching event rule.
+ The pipeline setup is handled by using the Service Catalog product, which provides version control capabilities. The solution also scales automatically to handle multiple hotfix developments for the same application in parallel.
+ The [prcreation-lambda](https://github.com/aws-samples/dynamic_hotfix_codepipeline/blob/main/pre-requisites/lambdasetup.yaml#L55) function ensures that hotfix changes are also merged back into the `main` and `develop` branches through automatic pull request creation. This keeps all long-lived branches up to date with the latest fixes, maintains consistency across branches, and helps prevent code regressions.
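
As an illustration of that backmerge behavior (not the repository's exact code), the pull requests can be opened with calls like the following sketch. The repository name is a placeholder, and the branch name matches the `hotfix-check1` example used later in this pattern.

```
import boto3

codecommit = boto3.client("codecommit")

# Open one PR per long-lived branch so that the hotfix is backmerged.
for destination in ("develop", "main"):
    response = codecommit.create_pull_request(
        title=f"Backmerge hotfix-check1 into {destination}",
        description="Automated backmerge of the hotfix branch.",
        targets=[
            {
                "repositoryName": "myapplication-repository",  # placeholder
                "sourceReference": "hotfix-check1",
                "destinationReference": destination,
            }
        ],
    )
    print(response["pullRequest"]["pullRequestId"])
```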

## Tools
<a name="automate-dynamic-pipeline-management-for-deploying-hotfix-solutions-tools"></a>

**AWS services**
+ [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) helps you set up AWS resources, provision them quickly and consistently, and manage them throughout their lifecycle across AWS accounts and AWS Regions.
+ [AWS CodeBuild](https://docs.aws.amazon.com/codebuild/latest/userguide/welcome.html) is a fully managed build service that helps you compile source code, run unit tests, and produce artifacts that are ready to deploy.
+ [AWS CodeCommit](https://docs.aws.amazon.com/codecommit/latest/userguide/welcome.html) is a version control service that helps you privately store and manage Git repositories, without needing to manage your own source control system. AWS CodeCommit is no longer available to new customers. Existing customers of AWS CodeCommit can continue to use the service as normal. For more information, see [How to migrate your AWS CodeCommit repository to another Git provider](https://aws.amazon.com/blogs/devops/how-to-migrate-your-aws-codecommit-repository-to-another-git-provider/).
+ [AWS CodePipeline](https://docs.aws.amazon.com/codepipeline/latest/userguide/welcome.html) helps you quickly model and configure the different stages of a software release and automate the steps required to release software changes continuously.
+ [Amazon Elastic Container Registry (Amazon ECR)](https://docs.aws.amazon.com/AmazonECR/latest/userguide/what-is-ecr.html) is a managed container image registry service that’s secure, scalable, and reliable.
+ [Amazon Elastic Container Service (Amazon ECS)](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html) is a fast and scalable container management service that helps you run, stop, and manage containers on a cluster.
+ [AWS Key Management Service (AWS KMS)](https://docs.aws.amazon.com/kms/latest/developerguide/overview.html) helps you create and control cryptographic keys to help protect your data.
+ [AWS Service Catalog](https://docs.aws.amazon.com/servicecatalog/latest/adminguide/introduction.html) helps you centrally manage catalogs of IT services that are approved for AWS. End users can quickly deploy only the approved IT services they need, following the constraints set by your organization.
+ [Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) is a cloud-based object storage service that helps you store, protect, and retrieve any amount of data.

**Other tools**
+ [CloudFormation Linter (cfn-lint)](https://github.com/aws-cloudformation/cfn-lint) is a linter that checks CloudFormation YAML or JSON templates against the [CloudFormation resource specification](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-resource-specification.html). It also performs other checks, such as checking for valid values for resource properties and adherence to best practices.
+ [cfn-nag](https://github.com/stelligent/cfn_nag) is an open source tool that identifies potential security issues in CloudFormation templates by searching for patterns.
+ [Docker](https://www.docker.com/) is a set of platform as a service (PaaS) products that use virtualization at the operating-system level to deliver software in containers. This pattern uses Docker to build and test container images locally.
+ [Git](https://git-scm.com/docs) is an open-source, distributed version control system.

**Code repository**

The code for this pattern is available in the GitHub [dynamic\_hotfix\_codepipeline](https://github.com/aws-samples/dynamic_hotfix_codepipeline) repository.

## Best practices
<a name="automate-dynamic-pipeline-management-for-deploying-hotfix-solutions-best-practices"></a>

Review and adjust IAM roles and service control policies (SCP) in your environment to ensure that they restrict access appropriately. This is crucial to prevent any actions that could override the security measures included in this pattern. Follow the principle of least privilege and grant the minimum permissions required to perform a task. For more information, see [Grant least privilege](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html#grant-least-priv) and [Security best practices](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html) in the IAM documentation.

## Epics
<a name="automate-dynamic-pipeline-management-for-deploying-hotfix-solutions-epics"></a>

### Set up the work environment
<a name="set-up-the-work-environment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clone the repository. | To clone the sample [repository](https://github.com/aws-samples/dynamic_hotfix_codepipeline) into a new directory in your work location, run the following command:<pre>git clone git@github.com:aws-samples/dynamic_hotfix_codepipeline.git</pre> | AWS DevOps | 
| Export environment variables for CloudFormation stack deployment. | Define the following environment variables that will be used as input to the CloudFormation stacks later in this pattern.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-dynamic-pipeline-management-for-deploying-hotfix-solutions.html)<pre>export BucketStartName=<BucketName></pre>[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-dynamic-pipeline-management-for-deploying-hotfix-solutions.html)<pre>export ProdAccount=<prodaccountnumber><br />export StageAccount=<stage/preprodaccountnumber><br />export QAAccount=<qaccountnumber><br />export ToolsAccount=<toolsaccountnumber><br />export DepRegion=<region></pre> | AWS DevOps | 

### Set up prerequisites required in AWS accounts
<a name="set-up-prerequisites-required-in-aws-accounts"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create resources required for CI/CD in the tools account. | To deploy the CloudFormation stack in the tools account, use the following commands. (Remove the `QAAccount` parameter if you’re not using the QA account for setup.)<pre>#InToolsAccount<br />aws cloudformation deploy \<br />    --template-file pre-requisites/pre-reqs.yaml \<br />    --stack-name prereqs \<br />    --parameter-overrides BucketStartName=${BucketStartName} \<br />    ApplicationName=${ApplicationName} ProdAccount=${ProdAccount} \<br />    StageAccount=${StageAccount} ToolsAccount=${ToolsAccount} \<br />    QAAccount=${QAAccount} \<br />    --capabilities CAPABILITY_IAM CAPABILITY_NAMED_IAM --region ${DepRegion}</pre>Make a note of the CodeCommit repository and Amazon ECR repository that the preceding stack created. These values are necessary to set up the pipeline for the `main` branch in upcoming steps. | AWS DevOps | 
| Create resources required for CI/CD in the workload accounts. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-dynamic-pipeline-management-for-deploying-hotfix-solutions.html) | AWS DevOps | 
| Update the S3 artifact bucket policy to allow access for workload accounts. | To update the `prereqs` CloudFormation stack in the tools account and add all required permissions for the Stage and Production workload accounts, use the following commands. (Remove the `QAAccount` parameter if you’re not using it for setup.)<pre>#InToolsAccount<br />aws cloudformation deploy \<br />    --template-file pre-requisites/pre-reqs.yaml \<br />    --stack-name prereqs \<br />    --parameter-overrides BucketStartName=${BucketStartName} \<br />    ApplicationName=${ApplicationName} ProdAccount=${ProdAccount} \<br />    StageAccount=${StageAccount} ToolsAccount=${ToolsAccount} \<br />    QAAccount=${QAAccount} PutPolicy=true \<br />    --capabilities CAPABILITY_IAM CAPABILITY_NAMED_IAM --region ${DepRegion}</pre> | AWS DevOps | 

### Set up Lambda function and Service Catalog resources in tools account
<a name="set-up-lam-function-and-sc-resources-in-tools-account"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Set up the Service Catalog portfolio and products. | To set up the Service Catalog portfolio and products, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-dynamic-pipeline-management-for-deploying-hotfix-solutions.html) | AWS DevOps | 
| Set up Lambda functions. | This solution uses the following Lambda functions to manage hotfix workflows:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-dynamic-pipeline-management-for-deploying-hotfix-solutions.html)To enable the Lambda functions to provision and terminate Service Catalog products when `hotfix` branches are created or deleted through the associated EventBridge rule, use the following steps:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-dynamic-pipeline-management-for-deploying-hotfix-solutions.html) | AWS DevOps | 

### Create pipeline for main branch and deploy application in workload accounts
<a name="create-pipeline-for-main-branch-and-deploy-application-in-workload-accounts"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Set up pipeline for `main` branch. | To set up the pipeline for the main branch, run the following command in the tools account. Replace the parameters for `MainProductId` and `MainProductArtifactId` with values from `servicecatalogsetup` stack outputs.<pre>#InToolsAccount<br />aws servicecatalog provision-product \<br />    --product-id <MainProductId> \<br />    --provisioning-artifact-id <MainProductArtifactId> \<br />    --provisioned-product-name "${ApplicationName}-main-pipeline" \<br />    --provisioning-parameters Key=CodeCommitRepoName,Value="${ApplicationName}-repository" Key=ECRRepository,Value="${ApplicationName}-app" \<br />    --region=${DepRegion}</pre> | AWS DevOps | 
| Deploy application using the `main` branch. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-dynamic-pipeline-management-for-deploying-hotfix-solutions.html) | AWS DevOps | 

### Create the pipeline for a hotfix-\* branch and deploy the hotfix
<a name="create-the-pipeline-for-a-hotfix--branch-and-deploy-the-hotfix"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a `hotfix-*` branch and commit changes. | To create a pipeline for the `hotfix-*` branch and deploy the hotfix to the workload accounts, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-dynamic-pipeline-management-for-deploying-hotfix-solutions.html) | AWS DevOps | 
| Delete the `hotfix-check1` branch. | To delete the `hotfix-check1` branch created earlier, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-dynamic-pipeline-management-for-deploying-hotfix-solutions.html) | AWS DevOps | 

### Clean up resources
<a name="clean-up-resources"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clean up the deployed resources. | To clean up the resources that were deployed earlier, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-dynamic-pipeline-management-for-deploying-hotfix-solutions.html)<pre>##In Tools Account##<br />aws cloudformation delete-stack --stack-name servicecatalogsetup --region ${DepRegion}<br />aws cloudformation delete-stack --stack-name prlambdasetup --region ${DepRegion}<br />aws cloudformation delete-stack --stack-name prereqs --region ${DepRegion}</pre><pre>##In Workload Accounts##<br />aws cloudformation delete-stack --stack-name inframainstack --region ${DepRegion}</pre>For more information, see [Deleting provisioned products](https://docs.aws.amazon.com/servicecatalog/latest/userguide/enduser-delete.html) in the Service Catalog documentation. | AWS DevOps | 

## Troubleshooting
<a name="automate-dynamic-pipeline-management-for-deploying-hotfix-solutions-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| Changes that you committed to the CodeCommit repository aren’t getting deployed. | Check the CodeBuild logs for errors in the Docker build action. For more information, see the [CodeBuild documentation](https://docs.aws.amazon.com/codebuild/latest/userguide/troubleshooting.html). | 
| The Service Catalog product isn't being provisioned. | Review the related CloudFormation stacks for failed events. For more information, see the [CloudFormation documentation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/troubleshooting.html). | 

## Related resources
<a name="automate-dynamic-pipeline-management-for-deploying-hotfix-solutions-resources"></a>
+ [Basic Git commands](https://docs.aws.amazon.com/codecommit/latest/userguide/how-to-basic-git.html)
+ [Configure an IAM policy to limit pushes and merges to a branch](https://docs.aws.amazon.com/codecommit/latest/userguide/how-to-conditional-branch.html#how-to-conditional-branch-create-policy)
+ [Connect to an AWS CodeCommit repository](https://docs.aws.amazon.com/codecommit/latest/userguide/how-to-connect.html)
+ [Granting access to users](https://docs.aws.amazon.com/servicecatalog/latest/adminguide/catalogs_portfolios_users.html)
+ [Pushing a Docker image to an Amazon ECR private repository](https://docs.aws.amazon.com/AmazonECR/latest/userguide/docker-push-ecr-image.html)
+ [Troubleshooting AWS CodeBuild](https://docs.aws.amazon.com/codebuild/latest/userguide/troubleshooting.html)
+ [What is AWS CodePipeline?](https://docs.aws.amazon.com/codepipeline/latest/userguide/welcome.html)

## Additional information
<a name="automate-dynamic-pipeline-management-for-deploying-hotfix-solutions-additional"></a>

This pattern is designed for environments that have adopted a Gitflow setup for the development workflow in the CI/CD process. The pipelines follow a deployment cycle that starts in development and moves through the quality assurance (QA), stage, and production environments. The CI/CD setup includes two Git branches with promotional deployments to environments as follows:
+ The `develop` branch deploys to the development environment.
+ The `main` branch deploys to the QA, stage, and production environments.

In this setup, it’s a challenge to apply a hotfix or a security patch faster than the usual deployment cycle while the active development of new features is ongoing. A dedicated process is necessary to address hotfix or security requests, ensuring that live environments remain properly functioning and secure.

However, you might not need a dedicated deployment process in the following cases:
+ The CI/CD process is well equipped with automated testing, such as functional and end-to-end tests, which eliminates the need for manual testing and prevents delays in deployments to production. If automated testing isn’t well integrated into the CI/CD process, pushing a small fix to the production environment can become complex and cumbersome for developers, because new features might be waiting in the QA environment for approval and sign-off. In that case, a hotfix or security fix can’t be pushed into production in a straightforward manner at the same time.
+ Development teams continuously deploy new features into the production environment and integrate hotfixes or security patches into the scheduled deployment of each new feature. In other words, the next feature update to the production environment comprises two components: the addition of a new feature and the inclusion of the hotfix or security patch. If the deployment cycle isn’t continuous, there can be multiple new features already awaiting approval in the QA environment, and managing different versions and ensuring that the correct changes are reapplied can become complex and error-prone.

**Note**  
If you’re using [version 2](https://docs.aws.amazon.com/codepipeline/latest/userguide/pipeline-types.html#:~:text=V2%20type%20pipelines%20have%20the%20same%20structure) of AWS CodePipeline with proper triggers set up on the `hotfix` branch, you still require a dedicated process to address unscheduled requests. In version 2, you can set up triggers for either push or pull requests. The execution will either be queued or executed immediately, depending on the previous state of the pipeline. However, with a dedicated pipeline, the fixes are applied immediately to the production environment, ensuring that urgent issues are resolved without delay.

# Automate deletion of AWS CloudFormation stacks and associated resources
<a name="automate-deletion-cloudformation-stacks-associated-resources"></a>

*Sandeep Singh and James Jacob, Amazon Web Services*

## Summary
<a name="automate-deletion-cloudformation-stacks-associated-resources-summary"></a>

[AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) is a widely used service for managing cloud infrastructure as code (IaC). When you use CloudFormation, you manage related resources as a single unit called a *stack*. You create, update, and delete a collection of resources by creating, updating, and deleting stacks.

Sometimes, you no longer need the resources in a CloudFormation stack. Depending on the resources and their configurations, it can be complicated to delete a stack and its associated resources. In real-world production systems, deletions sometimes fail or take a long time due to conflicting conditions or restrictions that CloudFormation cannot override. It can require careful planning and execution to make sure that all resources are properly deleted in an efficient and consistent manner. This pattern describes how to set up a framework that helps you manage the deletion of CloudFormation stacks that involve the following complexities:
+ **Resources with delete protection** – Some resources might have delete protection enabled. Common examples are [Amazon DynamoDB](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Introduction.html) tables and [Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) buckets. Delete protection prevents automated deletion, such as deletion through CloudFormation. If you want to delete these resources, you must manually or programmatically override or temporarily disable the delete protection. You should carefully consider the implication of deleting these resources before proceeding.
+ **Resources with retention policies** – Certain resources, such as AWS Key Management Service (AWS KMS) keys and Amazon S3 buckets, might have retention policies that specify how long they should be retained after deletion is requested. You should account for these policies in the cleanup strategy to maintain compliance with organizational policies and regulatory requirements.
+ **Delayed deletion of Lambda functions that are attached to a VPC** – Deleting an [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) function that is attached to a virtual private cloud (VPC) can take 5–40 minutes, depending on the multiple interconnected dependencies involved in the process. If you detach the function from the VPC before deleting the stack, you can reduce this delay to under 1 minute (see the sketch after this list).
+ **Resources not directly created by CloudFormation** – In certain application designs, resources might be created outside of the original CloudFormation stack, either by the application itself or by resources provisioned through the stack. The following are two examples:
  + CloudFormation might provision an [Amazon Elastic Compute Cloud (Amazon EC2)](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html) instance that runs a user data script. Then, this script might create an [AWS Systems Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/what-is-systems-manager.html) parameter to store application-related data. This parameter is not managed through CloudFormation.
  + CloudFormation might provision a Lambda function that automatically generates an [Amazon CloudWatch Logs](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html) group for storing logs. This log group is not managed through CloudFormation.

  Even though these resources aren't directly managed by CloudFormation, they often need to be cleaned up when the stack is deleted. If left unmanaged, they can become orphaned and lead to unnecessary resource consumption.
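
For example, the VPC detachment mentioned in the preceding list comes down to clearing the function's VPC configuration before you delete the stack. The following is a minimal sketch that assumes a placeholder function name; it isn't necessarily how this pattern's code implements the step.

```
import boto3

lambda_client = boto3.client("lambda")

FUNCTION_NAME = "sampleforcleanup-my-function"  # placeholder name

# Detach the function from its VPC by clearing the VPC configuration,
# which avoids the long network-interface cleanup during stack deletion.
lambda_client.update_function_configuration(
    FunctionName=FUNCTION_NAME,
    VpcConfig={"SubnetIds": [], "SecurityGroupIds": []},
)

# Wait until the configuration update finishes before deleting the stack.
waiter = lambda_client.get_waiter("function_updated_v2")
waiter.wait(FunctionName=FUNCTION_NAME)
```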

Although these guardrails can cause complexity, they are intentional and critical. Allowing CloudFormation to override all constraints and indiscriminately delete resources could lead to detrimental and unforeseen consequences in many scenarios. However, as a DevOps or cloud engineer who is responsible for managing the environment, there are times when overriding these constraints might be necessary, particularly in development, testing, or staging environments.

**Targeted business outcomes**

By implementing this framework, you can achieve the following benefits:
+ **Cost management** – Regular and efficient cleanup of temporary environments, such as end-to-end or user-acceptance testing environments, helps prevent resources from running longer than necessary. This can significantly reduce costs.
+ **Security** – Automated cleanup of outdated or unused resources reduces the attack surface and helps maintain a secure AWS environment.
+ **Operational efficiency** – Regular and automated cleanup can provide the following operational benefits:
  + Automated scripts that remove old log groups or empty Amazon S3 buckets can improve operational efficiency by keeping the environment clean and manageable.
  + Quickly deleting and recreating stacks supports rapid iteration for design and implementation, which can lead to a more robust and resilient architecture.
  + Regularly deleting and rebuilding environments can help you identify and fix potential issues. This can help you make sure that the infrastructure can withstand real-world scenarios.

## Prerequisites and limitations
<a name="automate-deletion-cloudformation-stacks-associated-resources-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ Python version 3.6 or later, [installed](https://www.python.org/downloads/)
+ AWS Command Line Interface (AWS CLI), [installed](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) and [configured](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html)

**Limitations**
+ A naming convention is used to identify the resources that should be deleted. The sample code in this pattern uses a prefix for the resource name, but you can define your own naming convention. Resources that do not use this naming convention will not be identified or subsequently deleted.

## Architecture
<a name="automate-deletion-cloudformation-stacks-associated-resources-architecture"></a>

The following diagram shows how this framework identifies the target CloudFormation stack and the additional resources associated with it.

![\[The phases that discover, process, and delete CloudFormation stacks and their associated resources.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/ab7c3b56-3476-41a3-8ece-68915605a546/images/a7fceb1c-d624-47b3-957d-f910ef2f44d7.png)


The diagram shows the following workflow:

1. **Gather resources** – The automation framework uses a naming convention to return all relevant CloudFormation stacks, Amazon Elastic Container Registry (Amazon ECR) repositories, DynamoDB tables, and Amazon S3 buckets.
**Note**  
The functions for this stage use [paginators](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/paginators.html), a feature in Boto3 that abstracts the process of iterating over a truncated API result set. This makes sure that all resources are processed. To further optimize performance, consider applying [server-side](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/paginators.html#filtering-results) filtering or consider using JMESPath to perform [client-side](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/paginators.html#filtering-results-with-jmespath) filtering.

1. **Pre-processing** – The automation framework identifies and addresses the service constraints that must be overridden to allow CloudFormation to delete the resources. For example, it changes the `DeletionProtectionEnabled` setting for DynamoDB tables to `False` (example calls are shown after this list). In the command-line interface, for each resource, you receive a prompt asking if you want to override the constraint.

1. **Delete stack** – The automation framework deletes the CloudFormation stack. In the command-line interface, you receive a prompt asking if you want to delete the stack.

1. **Post-processing** – The automation framework deletes any related resources that were not directly provisioned through CloudFormation as part of the stack. Examples of these resource types include Systems Manager parameters and CloudWatch log groups. Separate functions gather these resources, pre-process them, and then delete them. In the command-line interface, for each resource, you receive a prompt asking if you want to delete the resource.
**Note**  
The functions for this stage use [paginators](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/paginators.html), a feature in Boto3 that abstracts the process of iterating over a truncated API result set. This makes sure that all resources are processed. To further optimize performance, consider applying [server-side](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/paginators.html#filtering-results) filtering or consider using JMESPath to perform [client-side](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/paginators.html#filtering-results-with-jmespath) filtering.
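
Stripped of the confirmation prompts, the pre-processing and post-processing steps amount to calls like the following sketch, which reuses the sample resource names from this pattern rather than the repository's exact code.

```
import boto3

dynamodb = boto3.client("dynamodb")
ssm = boto3.client("ssm")

# Pre-processing: turn off deletion protection so that CloudFormation
# can delete the table.
dynamodb.update_table(
    TableName="sampleforcleanup-MyDynamoDBTable",
    DeletionProtectionEnabled=False,
)

# Post-processing: delete a Systems Manager parameter that was created
# outside of CloudFormation.
ssm.delete_parameter(Name="/sampleforcleanup/database/password")
```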

**Automation and scale**

If your CloudFormation stack includes other resources that are not included in the sample code, or if the stack has a constraint that has not been addressed in this pattern, then you can adapt the automation framework for your use case. Follow the same methodology of gathering resources, pre-processing, deleting the stack, and then post-processing.
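
For example, a gather function for CloudWatch log groups that follow the naming convention could use a Boto3 paginator with server-side prefix filtering and a JMESPath expression for client-side filtering, as described in the notes above. This is an illustrative sketch, not code from the repository, and it assumes the log groups belong to Lambda functions that use the cleanup prefix.

```
import boto3

def gather_log_groups(prefix, region="us-east-1"):
    """Return log group names that match the cleanup naming convention."""
    logs = boto3.client("logs", region_name=region)
    paginator = logs.get_paginator("describe_log_groups")
    # Server-side filtering by prefix; the paginator iterates over all pages.
    pages = paginator.paginate(logGroupNamePrefix=f"/aws/lambda/{prefix}")
    # Client-side JMESPath filtering across the full, untruncated result set.
    return list(pages.search("logGroups[].logGroupName"))

if __name__ == "__main__":
    for name in gather_log_groups("sampleforcleanup"):
        print(name)
```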

## Tools
<a name="automate-deletion-cloudformation-stacks-associated-resources-tools"></a>

**AWS services**
+ [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) helps you set up AWS resources, provision them quickly and consistently, and manage them throughout their lifecycle across AWS accounts and AWS Regions.
+ [CloudFormation Command Line Interface (CFN-CLI)](https://docs.aws.amazon.com/cloudformation-cli/latest/userguide/what-is-cloudformation-cli.html) is an open source tool that helps you develop and test AWS and third-party extensions and then register them for use in CloudFormation.
+ [AWS SDK for Python (Boto3)](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/quickstart.html) is a software development kit that helps you integrate your Python application, library, or script with AWS services.

**Other tools**
+ [Click](https://click.palletsprojects.com/en/stable/) is a Python tool that helps you create command line interfaces.
+ [Poetry](https://python-poetry.org/docs/) is a tool for dependency management and packaging in Python.
+ [Pyenv](https://github.com/pyenv/pyenv) is a tool that helps you manage and switch between versions of Python.
+ [Python](https://www.python.org/) is a general-purpose computer programming language.

**Code repository**

The code for this pattern is available in the GitHub [cloudformation-stack-cleanup](https://github.com/aws-samples/cloudformation-stack-cleanup/) repository.

## Best practices
<a name="automate-deletion-cloudformation-stacks-associated-resources-best-practices"></a>
+ **Tag resources for easy identification** – Implement a [tagging strategy](https://aws.amazon.com/solutions/guidance/tagging-on-aws/) to identify resources that are created for different environments and purposes. Tags can simplify the cleanup process by helping you filter resources based on their tags.
+ **Set up resource life cycles** – Define resource life cycles to automatically delete resources after a certain period. This practice helps you make sure that temporary environments do not become permanent cost liabilities.

## Epics
<a name="automate-deletion-cloudformation-stacks-associated-resources-epics"></a>

### Install tools
<a name="install-tools"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clone the repository. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-deletion-cloudformation-stacks-associated-resources.html) | DevOps engineer | 
| Install Poetry. | Follow the [instructions](https://python-poetry.org/docs/) (Poetry documentation) to install Poetry in the target virtual environment. | DevOps engineer | 
| Install dependencies. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-deletion-cloudformation-stacks-associated-resources.html) | DevOps engineer | 
| (Optional) Install Pyenv. | Follow the [instructions](https://github.com/pyenv/pyenv#installation) (GitHub) to install Pyenv. | DevOps engineer | 

### (Optional) Customize the framework
<a name="optional-customize-the-framework"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create functions that gather, pre-process, and delete the target resources. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-deletion-cloudformation-stacks-associated-resources.html) | DevOps engineer, Python | 

### Create sample resources
<a name="create-sample-resources"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a CloudFormation stack. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-deletion-cloudformation-stacks-associated-resources.html) | AWS DevOps | 
| Create a Systems Manager parameter. | Enter the following command to create a Systems Manager parameter that isn't provisioned through CloudFormation:<pre>aws ssm put-parameter \<br />  --name "/sampleforcleanup/database/password" \<br />  --value "your_db_password" \<br />  --type "SecureString" \<br />  --description "Database password for my app" \<br />  --tier "Standard" \<br />  --region "us-east-1"</pre> | AWS DevOps | 
| Create an Amazon S3 bucket. | Enter the following command to create an Amazon S3 bucket that isn't provisioned through CloudFormation. (In the us-east-1 Region, omit the `--create-bucket-configuration` option; specify it only when you create the bucket in another Region.)<pre>aws s3api create-bucket \<br />  --bucket samplesorcleanup-unmanagedbucket-<UniqueIdentifier> \<br />  --region us-east-1</pre> | AWS DevOps | 

### Delete the sample resources
<a name="delete-the-sample-resources"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Delete the CloudFormation stack. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-deletion-cloudformation-stacks-associated-resources.html) | AWS DevOps | 
| Validate resource deletion. | In the output, confirm that all of the sample resources have been deleted. For a sample output, see the [Additional resources](#automate-deletion-cloudformation-stacks-associated-resources-additional) section of this pattern. | AWS DevOps | 

## Related resources
<a name="automate-deletion-cloudformation-stacks-associated-resources-resources"></a>
+ [Delete a stack](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-console-delete-stack.html) (CloudFormation documentation)
+ [Troubleshooting CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/troubleshooting.html) (CloudFormation documentation)
+ [Giving Lambda functions access to resources in an Amazon VPC](https://docs.aws.amazon.com/lambda/latest/dg/configuration-vpc.html) (Lambda documentation)
+ [How do I delete an AWS CloudFormation stack that's stuck in DELETE\_FAILED status?](https://repost.aws/knowledge-center/cloudformation-stack-delete-failed) (AWS Knowledge Center)

## Additional information
<a name="automate-deletion-cloudformation-stacks-associated-resources-additional"></a>

The following is a sample output from the `cfncli` command:

```
cfncli --region us-east-1 dev cleanup-env --prefix-list sampleforcleanup
https://sts.us-east-1.amazonaws.com
Cleaning up: ['sampleforcleanup'] in xxxxxxxxxx:us-east-1
Do you want to proceed? [Y/n]: Y
No S3 buckets
No ECR repositories
No Lambda functions in VPC
The following DynamoDB tables will have their deletion protection removed:
sampleforcleanup-MyDynamoDBTable
Do you want to proceed with removing deletion protection from these tables? [Y/n]: Y
Deletion protection disabled for DynamoDB table 'sampleforcleanup-MyDynamoDBTable'.
The following CloudFormation stacks will be deleted:
sampleforcleanup-Stack
Do you want to proceed with deleting these CloudFormation stacks? [Y/n]: Y
Initiated deletion of CloudFormation stack: `sampleforcleanup-Stack`
Waiting for stack `sampleforcleanup-Stack` to be deleted...
CloudFormation stack `sampleforcleanup-Stack` deleted successfully.
The following ssm_params will be deleted:
/sampleforcleanup/database/password
Do you want to proceed with deleting these ssm_params? [Y/n]: Y
Deleted SSM Parameter: /sampleforcleanup/database/password
Cleaned up: ['sampleforcleanup']
```

# Automate ingestion and visualization of Amazon MWAA custom metrics on Amazon Managed Grafana by using Terraform
<a name="automate-ingestion-and-visualization-of-amazon-mwaa-custom-metrics"></a>

*Faisal Abdullah and Satya Vajrapu, Amazon Web Services*

## Summary
<a name="automate-ingestion-and-visualization-of-amazon-mwaa-custom-metrics-summary"></a>

This pattern discusses how to use Amazon Managed Grafana to visualize and monitor custom metrics that are generated by Amazon Managed Workflows for Apache Airflow (Amazon MWAA). Amazon MWAA serves as the orchestrator for workflows, employing Directed Acyclic Graphs (DAGs) that are scripted in Python. The pattern centers on the monitoring of custom metrics, including the total number of DAGs that ran within the last hour, the count of passed and failed DAGs each hour, and the average duration of these runs. It shows how Amazon Managed Grafana integrates with Amazon MWAA to provide comprehensive monitoring of, and insights into, workflow orchestration in this environment.

## Prerequisites and limitations
<a name="automate-ingestion-and-visualization-of-amazon-mwaa-custom-metrics-prereqs"></a>

**Prerequisites**
+ An active AWS account with the necessary user permissions to create and manage the following AWS services:
  + AWS Identity and Access Management (IAM) roles and policies
  + AWS Lambda
  + Amazon Managed Grafana
  + Amazon Managed Workflows for Apache Airflow (Amazon MWAA)
  + Amazon Simple Storage Service (Amazon S3)
  + Amazon Timestream
+ Access to a shell environment, which can be a terminal on your local machine or [AWS CloudShell](https://docs.aws.amazon.com/cloudshell/latest/userguide/welcome.html).
+ A shell environment with Git installed and the latest version of the AWS Command Line Interface (AWS CLI) installed and configured. For more information, see [Installing or updating to the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) in the AWS CLI documentation.
+ Terraform installed, with a version that satisfies `required_version = ">= 1.6.1, < 2.0.0"`. You can use [tfswitch](https://tfswitch.warrensbox.com/) to switch between different versions of Terraform.
+ An identity source configured in AWS IAM Identity Center for your AWS account. For more information, see [Confirm your identity sources in IAM Identity Center](https://docs.aws.amazon.com/singlesignon/latest/userguide/prereq-identity-sources.html) in the IAM Identity Center documentation. You can choose the default IAM Identity Center directory, Active Directory, or an external identity provider (IdP) such as Okta. For more information, see [Related resources](#automate-ingestion-and-visualization-of-amazon-mwaa-custom-metrics-resources).

**Limitations**
+ Some AWS services aren’t available in all AWS Regions. For Region availability, see [AWS services by Region](https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/). For specific endpoints, see [Service endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/aws-service-information.html), and choose the link for the service.

**Product versions**
+ Terraform `required_version = ">= 1.6.1, < 2.0.0"`
+ Amazon Managed Grafana version 9.4 or later. This pattern was tested on version 9.4.

## Architecture
<a name="automate-ingestion-and-visualization-of-amazon-mwaa-custom-metrics-architecture"></a>

The following architecture diagram highlights the AWS services used in the solution.

![\[Workflow to automate the ingestion of Amazon MWAA custom metrics.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/3458d0a9-aee1-428a-bf2f-c357bb531c64/images/b43ed8d2-94ac-4438-913b-81c7eba8f3e0.png)


The preceding diagram steps through the following workflow:

1. Custom metrics within Amazon MWAA originate from DAGs that run within the environment. The metrics are uploaded to an Amazon S3 bucket in CSV format. The following DAGs use the database querying capabilities of Amazon MWAA:
   + `run-example-dag` – This DAG contains sample Python code that defines one or more tasks. It runs every 7 minutes and prints the date. After printing the date, the DAG includes a task to sleep, or pause, execution for a specific duration.
   + `other-sample-dag` – This DAG runs every 10 minutes and prints the date. After printing the date, the DAG includes a task to sleep, or pause, execution for a specific duration.
   + `data-extract` – This DAG runs every hour and queries the Amazon MWAA database and collects metrics. After the metrics are collected, this DAG writes them to an Amazon S3 bucket for further processing and analysis.

1. To streamline data processing, Lambda functions are triggered by Amazon S3 events and load the metrics into Timestream. (A simplified sketch of such a function is shown after this workflow.)

1. Timestream, which stores all the custom metrics from Amazon MWAA, is integrated as a data source in Amazon Managed Grafana.

1. Users can query the data and construct custom dashboards to visualize key performance indicators and gain insights into the orchestration of workflows within Amazon MWAA.
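
The Lambda code that performs step 2 is in the `src` folder of the repository. The following is a simplified, self-contained sketch of the idea only: the function is triggered by an S3 event, parses the CSV object, and writes records to Timestream. The database, table, and CSV column names here are assumptions, not the repository's actual values.

```
import csv
import io
import time

import boto3

s3 = boto3.client("s3")
timestream = boto3.client("timestream-write")

def handler(event, context):
    """Triggered by an S3 event; loads a metrics CSV file into Timestream."""
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]

    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")
    now_ms = str(int(time.time() * 1000))

    # Assumed CSV layout: dag_id,metric_name,metric_value
    records = []
    for row in csv.DictReader(io.StringIO(body)):
        records.append(
            {
                "Dimensions": [{"Name": "dag_id", "Value": row["dag_id"]}],
                "MeasureName": row["metric_name"],
                "MeasureValue": row["metric_value"],
                "MeasureValueType": "DOUBLE",
                "Time": now_ms,
            }
        )

    if records:
        timestream.write_records(
            DatabaseName="mwaa_metrics",  # placeholder database name
            TableName="dag_metrics",      # placeholder table name
            Records=records,
        )
```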

## Tools
<a name="automate-ingestion-and-visualization-of-amazon-mwaa-custom-metrics-tools"></a>

**AWS services**
+ [AWS IAM Identity Center](https://docs.aws.amazon.com/singlesignon/latest/userguide/what-is.html) helps you centrally manage single sign-on (SSO) access to all of your AWS accounts and cloud applications.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a compute service that helps you run code without needing to provision or manage servers. It runs your code only when needed and scales automatically, so you pay only for the compute time that you use. In this pattern, AWS Lambda runs the Python code in response to Amazon S3 events and manages the compute resources automatically.
+ [Amazon Managed Grafana](https://docs.aws.amazon.com/grafana/latest/userguide/what-is-Amazon-Managed-Service-Grafana.html) is a fully managed data visualization service that you can use to query, correlate, visualize, and alert on your metrics, logs, and traces. This pattern uses Amazon Managed Grafana to create a dashboard for metrics visualization and alerts.
+ [Amazon Managed Workflows for Apache Airflow (Amazon MWAA)](https://docs.aws.amazon.com/mwaa/latest/userguide/what-is-mwaa.html) is a managed orchestration service for Apache Airflow that you can use to set up and operate data pipelines in the cloud at scale. [Apache Airflow](https://airflow.apache.org/) is an open source tool used to programmatically author, schedule, and monitor sequences of processes and tasks referred to as workflows. In this pattern, sample DAGs and a metrics extractor DAG are deployed in Amazon MWAA.
+ [Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) is a cloud-based object storage service that helps you store, protect, and retrieve any amount of data. In this pattern, Amazon S3 is used to store DAGs, scripts, and custom metrics in CSV format.
+ [Amazon Timestream for LiveAnalytics](https://docs.aws.amazon.com/timestream/latest/developerguide/what-is-timestream.html) is a fast, scalable, fully managed, purpose-built time series database that makes it easy to store and analyze trillions of time series data points per day. Timestream for LiveAnalytics also integrates with commonly used services for data collection, visualization, and machine learning. In this pattern, it’s used to ingest the generated Amazon MWAA custom metrics.

**Other tools**
+ [HashiCorp Terraform](https://www.terraform.io/docs) is an infrastructure as code (IaC) tool that helps you use code to provision and manage cloud infrastructure and resources. This pattern uses a Terraform module to automate the provisioning of infrastructure in AWS.

**Code repository**

The code for this pattern is available on GitHub in the [visualize-amazon-mwaa-custom-metrics-grafana](https://github.com/aws-samples/visualize-amazon-mwaa-custom-metrics-grafana) repository. The `stacks/Infra` folder contains the following:
+ Terraform configuration files for all AWS resources
+ Grafana dashboard .json file in the `grafana` folder
+ Amazon Managed Workflows for Apache Airflow DAGs in the `mwaa/dags` folder
+ Lambda code to parse the .csv file and store metrics in the Timestream database in the `src` folder
+ IAM policy .json files in the `templates` folder

## Best practices
<a name="automate-ingestion-and-visualization-of-amazon-mwaa-custom-metrics-best-practices"></a>

Terraform must store state about your managed infrastructure and configuration so that it can map real-world resources to your configuration. By default, Terraform stores state locally in a file named `terraform.tfstate`. It's crucial to ensure the safety and integrity of your Terraform state file because it maintains the current state of your infrastructure. For more information, see [Remote State](https://developer.hashicorp.com/terraform/language/state/remote) in the Terraform documentation. 

## Epics
<a name="automate-ingestion-and-visualization-of-amazon-mwaa-custom-metrics-epics"></a>

### Deploy the infrastructure using Terraform
<a name="deploy-the-infrastructure-using-terraform"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Deploy the infrastructure. | To deploy the solution infrastructure, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-ingestion-and-visualization-of-amazon-mwaa-custom-metrics.html) | AWS DevOps | 

### Validate the deployed infrastructure resources
<a name="validate-the-deployed-infrastructure-resources"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Validate the Amazon MWAA environment. | To validate the Amazon MWAA environment, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-ingestion-and-visualization-of-amazon-mwaa-custom-metrics.html) | AWS DevOps, Data engineer | 
| Verify the DAG schedules. | To view each DAG schedule, go to the **Schedule** tab in the **Airflow UI**. Each of the following DAGs has a pre-configured schedule, which runs in the Amazon MWAA environment and generates custom metrics: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-ingestion-and-visualization-of-amazon-mwaa-custom-metrics.html)You can also see the successful runs of each DAG under the **Runs** column. | Data engineer, AWS DevOps | 

### Configure the Amazon Managed Grafana environment
<a name="configure-the-gra-environment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Configure access to the Amazon Managed Grafana workspace. | The Terraform scripts created the required Amazon Managed Grafana workspace, dashboards, and metrics page. To configure access so that you can view them, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-ingestion-and-visualization-of-amazon-mwaa-custom-metrics.html) | AWS DevOps | 
| Install the Amazon Timestream plugin. | Amazon MWAA custom metrics are loaded into the Timestream database. You use the Timestream plugin to visualize the metrics with Amazon Managed Grafana dashboards. To install the Timestream plugin, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-ingestion-and-visualization-of-amazon-mwaa-custom-metrics.html)For more information, see [Extend your workspace with plugins](https://docs.aws.amazon.com/grafana/latest/userguide/grafana-plugins.html#manage-plugins) in the Amazon Managed Grafana documentation. | AWS DevOps, DevOps engineer | 

### Visualize the custom metrics in the Amazon Managed Grafana dashboard
<a name="visualize-the-custom-metrics-in-the-gra-dashboard"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| View the Amazon Managed Grafana dashboard. | To view the metrics that were ingested into the Amazon Managed Grafana workspace, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-ingestion-and-visualization-of-amazon-mwaa-custom-metrics.html)The dashboard metrics page shows the following information:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-ingestion-and-visualization-of-amazon-mwaa-custom-metrics.html) | AWS DevOps | 
| Customize the Amazon Managed Grafana dashboard. | To customize the dashboards for future enhancements, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-ingestion-and-visualization-of-amazon-mwaa-custom-metrics.html)Alternatively, the source code for this dashboard is available in the `dashboard.json` file in the `stacks/infra/grafana` folder in the [GitHub repository](https://github.com/aws-samples/visualize-amazon-mwaa-custom-metrics-grafana/blob/main/stacks/infra/grafana/dashboard.json). | AWS DevOps | 

### Clean up AWS resources
<a name="clean-up-aws-resources"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Pause the Amazon MWAA DAG runs. | To pause the DAG runs, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-ingestion-and-visualization-of-amazon-mwaa-custom-metrics.html) | AWS DevOps, Data engineer | 
| Delete the objects in the Amazon S3 buckets. | To delete the Amazon S3 buckets **mwaa-events-bucket-\*** and **mwaa-metrics-bucket-\***, follow the instructions for using the Amazon S3 console in [Deleting a bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/delete-bucket.html) in the Amazon S3 documentation. | AWS DevOps | 
| Destroy the resources created by Terraform. | To destroy the resources created by Terraform and the associated local Terraform state file, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-ingestion-and-visualization-of-amazon-mwaa-custom-metrics.html) | AWS DevOps | 

## Troubleshooting
<a name="automate-ingestion-and-visualization-of-amazon-mwaa-custom-metrics-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| `null_resource.plugin_mgmt (local-exec): aws: error: argument operation: Invalid choice, valid choices are:` | Upgrade your AWS CLI to the [latest version](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html). | 
| Loading data sources error - `Fetch error: 404 Not Found Instantiating…` | The error is intermittent. Wait a few minutes, and then refresh your data sources to view the listed Timestream data source.  | 

## Related resources
<a name="automate-ingestion-and-visualization-of-amazon-mwaa-custom-metrics-resources"></a>

**AWS documentation**
+ [Amazon Managed Grafana for dashboarding and visualization](https://docs.aws.amazon.com/prescriptive-guidance/latest/implementing-logging-monitoring-cloudwatch/amg-dashboarding-visualization.html)
+ [Configure Amazon Managed Grafana to use Okta](https://docs.aws.amazon.com/grafana/latest/userguide/AMG-SAML-providers-okta.html)
+ [Use AWS IAM Identity Center with your Amazon Managed Grafana workspace](https://docs.aws.amazon.com/grafana/latest/userguide/authentication-in-AMG-SSO.html)
+ [Working with DAGs on Amazon MWAA](https://docs.aws.amazon.com/mwaa/latest/userguide/working-dags.html)

**AWS videos**
+ Configure IAM Identity Center with Amazon Managed Grafana for authentication, as shown in the following [video](https://www.youtube.com/watch?v=XX2Xcz-Ps9U).
+ If IAM Identity Center isn’t available, you can also integrate the Amazon Managed Grafana authentication by using an external Identity provider (IdP) such as Okta, as shown in the following [video](https://www.youtube.com/watch?v=Z4JHxl2xpOg).

## Additional information
<a name="automate-ingestion-and-visualization-of-amazon-mwaa-custom-metrics-additional"></a>

You can create a comprehensive monitoring and alerting solution for your Amazon MWAA environment, enabling proactive management and rapid response to potential issues or anomalies. Amazon Managed Grafana includes the following capabilities:

**Alerting** – You can configure alerts in Amazon Managed Grafana based on predefined thresholds or conditions. Set up email notifications to alert relevant stakeholders when certain metrics exceed or fall below specified thresholds. For more information, see [Grafana alerting](https://docs.aws.amazon.com/grafana/latest/userguide/alerts-overview.html) in the Amazon Managed Grafana documentation.

**Integration** – You can integrate Amazon Managed Grafana with various third-party tools such as OpsGenie, PagerDuty, or Slack for enhanced notification capabilities. For example, you can set up webhooks or integrate with APIs to trigger incidents and notifications in these platforms based on alerts generated in Amazon Managed Grafana. In addition, this pattern provides a [GitHub repository](https://github.com/aws-samples/visualize-amazon-mwaa-custom-metrics-grafana) to create AWS resources. You can further integrate this code with your infrastructure deployment workflows.

# Automatically attach an AWS managed policy for Systems Manager to EC2 instance profiles using Cloud Custodian and AWS CDK
<a name="automatically-attach-an-aws-managed-policy-for-systems-manager-to-ec2-instance-profiles-using-cloud-custodian-and-aws-cdk"></a>

*Ali Asfour and Aaron Lennon, Amazon Web Services*

## Summary
<a name="automatically-attach-an-aws-managed-policy-for-systems-manager-to-ec2-instance-profiles-using-cloud-custodian-and-aws-cdk-summary"></a>

You can integrate Amazon Elastic Compute Cloud (Amazon EC2) instances with AWS Systems Manager to automate operational tasks and provide more visibility and control. To integrate with Systems Manager, EC2 instances must have an installed [AWS Systems Manager Agent (SSM Agent)](https://docs.aws.amazon.com/systems-manager/latest/userguide/ssm-agent.html) and an `AmazonSSMManagedInstanceCore` AWS Identity and Access Management (IAM) policy attached to their instance profiles. 

However, if you want to ensure that all EC2 instance profiles have the `AmazonSSMManagedInstanceCore` policy attached, you can face challenges updating new EC2 instances that don’t have instance profiles or EC2 instances that have an instance profile but don’t have the `AmazonSSMManagedInstanceCore` policy. It can also be difficult to add this policy across multiple Amazon Web Services (AWS) accounts and AWS Regions.

This pattern helps solve these challenges by deploying three [Cloud Custodian](https://cloudcustodian.io/) policies in your AWS accounts:
+ The first Cloud Custodian policy checks for existing EC2 instances that have an instance profile but don't have the `AmazonSSMManagedInstanceCore` policy. The `AmazonSSMManagedInstanceCore` policy is then attached. 
+ The second Cloud Custodian policy checks for existing EC2 instances without an instance profile and adds a default instance profile that has the `AmazonSSMManagedInstanceCore` policy attached.
+ The third Cloud Custodian policy creates [AWS Lambda functions](https://cloudcustodian.io/docs/aws/lambda.html) in your accounts to monitor the creation of EC2 instances and instance profiles. This ensures that the `AmazonSSMManagedInstanceCore` policy is automatically attached when an EC2 instance is created.
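
For context, the remediation that the first two policies automate boils down to attaching the `AmazonSSMManagedInstanceCore` managed policy to the IAM role behind each instance profile. The following AWS CLI sketch shows that single step for one role; the role name is a placeholder, and the Cloud Custodian policies discover and update the real roles for you:

```
# Illustrative only: attach the Systems Manager core policy to the role that
# backs an instance profile (the role name is a placeholder).
aws iam attach-role-policy \
  --role-name my-ec2-instance-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore

# Confirm that the policy is now attached.
aws iam list-attached-role-policies --role-name my-ec2-instance-role
```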

This pattern uses [AWS DevOps](https://aws.amazon.com/devops/) tools to achieve a continuous, at-scale deployment of the Cloud Custodian policies to a multi-account environment, without provisioning a separate compute environment. 

## Prerequisites and limitations
<a name="automatically-attach-an-aws-managed-policy-for-systems-manager-to-ec2-instance-profiles-using-cloud-custodian-and-aws-cdk-prereqs"></a>

**Prerequisites**
+ Two or more active AWS accounts. One account is the *security account* and the others are *member accounts*.
+ Permissions to provision AWS resources in the security account. This pattern uses [administrator permissions](https://docs.aws.amazon.com/singlesignon/latest/userguide/getting-started.html), but you should grant permissions according to your organization’s requirements and policies.
+ Ability to assume an IAM role from the security account to member accounts and create the required IAM roles. For more information about this, see [Delegate access across AWS accounts using IAM roles](https://docs.aws.amazon.com/IAM/latest/UserGuide/tutorial_cross-account-with-roles.html) in the IAM documentation.
+ AWS Command Line Interface (AWS CLI), installed and configured. For testing purposes, you can configure the AWS CLI by using the `aws configure` command or by setting environment variables.
**Important**  
This approach isn't recommended for production environments, and we recommend that you grant this account least privilege access only. For more information about this, see [Grant least privilege](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#grant-least-privilege) in the IAM documentation.
+ The `devops-cdk-cloudcustodian.zip` file (attached), downloaded to your local computer.
+ Familiarity with Python.
+ The required tools (Node.js, AWS Cloud Development Kit (AWS CDK), and Git), installed and configured. You can use the `install-prerequisites.sh` file in the `devops-cdk-cloudcustodian.zip` file to install these tools. Make sure you run this file with root privileges.

**Limitations**
+ Although this pattern can be used in a production environment, make sure that all IAM roles and policies meet your organization’s requirements and policies. 

**Package versions**
+ Cloud Custodian version 0.9 or later
+ TypeScript version 3.9.7 or later
+ Node.js version 14.15.4 or later
+ `npm` version 7.6.1 or later
+ AWS CDK version 1.96.0 or later

## Architecture
<a name="automatically-attach-an-aws-managed-policy-for-systems-manager-to-ec2-instance-profiles-using-cloud-custodian-and-aws-cdk-architecture"></a>

![\[AWS CodePipeline workflow with CodeCommit, CodeBuild, and deployment to member accounts.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/169a7bde-789e-4ebd-b4ca-80eb28ac9927/images/8ec0b6b4-d4b0-42e5-833d-24d1e6098fd9.png)


 

The diagram shows the following workflow:

1. Cloud Custodian policies are pushed to an AWS CodeCommit repository in the security account. An Amazon CloudWatch Events rule automatically initiates the AWS CodePipeline pipeline.

1. The pipeline fetches the most recent code from CodeCommit and sends it to the continuous integration part of the continuous integration and continuous delivery (CI/CD) pipeline handled by AWS CodeBuild.

1. CodeBuild performs the complete DevSecOps actions, including policy syntax validation on the Cloud Custodian policies, and runs these policies in `--dryrun` mode to check which resources are identified.

1. If there are no errors, the next task alerts an administrator to review the changes and approve the deployment into the member accounts.
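
If you want to reproduce the validation and `--dryrun` checks from step 3 locally before pushing changes, a minimal sketch with Cloud Custodian installed looks like the following (the policy file name and output directory are placeholders):

```
# Check the policy syntax (the policy file name is a placeholder).
custodian validate policies/ec2-ssm-policies.yml

# Run the policies in dry-run mode to see which resources would be affected,
# writing the results to a local output directory.
custodian run --dryrun -s dryrun-output policies/ec2-ssm-policies.yml
```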

**Technology stack**
+ AWS CDK
+ CodeBuild
+ CodeCommit
+ CodePipeline
+ IAM
+ Cloud Custodian 

**Automation and scale**

The AWS CDK pipelines module provisions a CI/CD pipeline that uses CodePipeline to orchestrate the building and testing of source code with CodeBuild, in addition to the deployment of AWS resources with AWS CloudFormation stacks. You can use this pattern for all member accounts and Regions in your organization. You can also extend the `Roles creation` stack to deploy other IAM roles in your member accounts. 

## Tools
<a name="automatically-attach-an-aws-managed-policy-for-systems-manager-to-ec2-instance-profiles-using-cloud-custodian-and-aws-cdk-tools"></a>
+ [AWS Cloud Development Kit (AWS CDK)](https://docs.aws.amazon.com/cdk/latest/guide/home.html) is a software development framework for defining cloud infrastructure in code and provisioning it through AWS CloudFormation.
+ [AWS Command Line Interface (AWS CLI)](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) is an open-source tool that enables you to interact with AWS services using commands in your command-line shell.
+ [AWS CodeBuild ](https://docs.aws.amazon.com/codebuild/latest/userguide/welcome.html)is a fully managed build service in the cloud.
+ [AWS CodeCommit](https://docs.aws.amazon.com/codecommit/latest/userguide/welcome.html) is a version control service that you can use to privately store and manage assets.
+ [AWS CodePipeline](https://docs.aws.amazon.com/codepipeline/latest/userguide/welcome.html) is a continuous delivery service you can use to model, visualize, and automate the steps required to release your software.
+ [AWS Identity and Access Management  ](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html)is a web service that helps you securely control access to AWS resources.
+ [Cloud Custodian](https://cloudcustodian.io/) is a tool that unifies the dozens of tools and scripts most organizations use for managing their public cloud accounts into one open-source tool.
+ [Node.js ](https://nodejs.org/en/)is a JavaScript runtime built on Google Chrome's V8 JavaScript engine.

**Code **

For a detailed list of modules, account functions, files, and deployment commands used in this pattern, see the `README` file in the `devops-cdk-cloudcustodian.zip` file (attached).

## Epics
<a name="automatically-attach-an-aws-managed-policy-for-systems-manager-to-ec2-instance-profiles-using-cloud-custodian-and-aws-cdk-epics"></a>

### Set up the pipeline with AWS CDK
<a name="set-up-the-pipeline-with-aws-cdk"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Set up the CodeCommit repository. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automatically-attach-an-aws-managed-policy-for-systems-manager-to-ec2-instance-profiles-using-cloud-custodian-and-aws-cdk.html) For more information about this, see [Creating a CodeCommit repository](https://docs.aws.amazon.com/codecommit/latest/userguide/how-to-create-repository.html) in the AWS CodeCommit documentation. | Developer | 
| Install the required tools. | Use the `install-prerequisites.sh` file to install all the required tools on Amazon Linux. This doesn’t include AWS CLI because it comes pre-installed.For more information about this, see the [Prerequisites](https://docs.aws.amazon.com/cdk/latest/guide/getting_started.html#getting_started_prerequisites) section of [Getting started with the AWS CDK](https://docs.aws.amazon.com/cdk/latest/guide/getting_started.html) in the AWS CDK documentation. | Developer | 
| Install the required AWS CDK packages. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automatically-attach-an-aws-managed-policy-for-systems-manager-to-ec2-instance-profiles-using-cloud-custodian-and-aws-cdk.html)The following packages are required by AWS CDK and are included in the `requirements.txt` file:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automatically-attach-an-aws-managed-policy-for-systems-manager-to-ec2-instance-profiles-using-cloud-custodian-and-aws-cdk.html) | Developer | 

### Configure your environment
<a name="configure-your-environment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Update the required variables. | Open the `vars.py` file in the root folder of your CodeCommit repository and update the following variables:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automatically-attach-an-aws-managed-policy-for-systems-manager-to-ec2-instance-profiles-using-cloud-custodian-and-aws-cdk.html) | Developer | 
| Update the account.yml file with the member account information. | To run the [c7n-org Cloud Custodian](https://cloudcustodian.io/docs/tools/c7n-org.html) tool against multiple accounts, you must place the `accounts.yml` config file in the root of the repository. The following is a sample Cloud Custodian config file for AWS:<pre>accounts:<br />- account_id: '123123123123'<br />  name: account-1<br />  regions:<br />  - us-east-1<br />  - us-west-2<br />  role: arn:aws:iam::123123123123:role/CloudCustodian<br />  vars:<br />    charge_code: xyz<br />  tags:<br />  - type:prod<br />  - division:some division<br />  - partition:us<br />  - scope:pci</pre> | Developer | 

### Bootstrap the AWS accounts
<a name="bootstrap-the-aws-accounts"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Bootstrap the security account. | Bootstrap the `deploy_account` with the `cloudcustodian_stack` application by running the following command:<pre>cdk bootstrap -a 'python3 cloudcustodian/cloudcustodian_stack.py'</pre> | Developer | 
| Option 1 - Automatically bootstrap the member accounts. | If the `cdk_bootstrap_member_accounts` variable is set to `True` in the `vars.py` file, the accounts specified in the `member_accounts` variable are automatically bootstrapped by the pipeline. If required, you can update `cdk_bootstrap_role` with an IAM role that you can assume from the security account and that has the required permissions to bootstrap the AWS CDK. New accounts added to the `member_accounts` variable are automatically bootstrapped by the pipeline so that the required roles can be deployed. | Developer | 
| Option 2 - Manually bootstrap the member accounts.  | Although we don’t recommend using this approach, you can set the value of `cdk_bootstrap_member_accounts` to `False` and perform this step manually by running the following command:<pre>$ cdk bootstrap -a 'python3 cloudcustodian/member_account_roles_stack.py' \<br /><br />--trust {security_account_id} \<br /><br />--context assume-role-credentials:writeIamRoleName={role_name} \<br /><br />--context assume-role-credentials:readIamRoleName={role_name} \<br /><br />--mode=ForWriting \<br /><br />--context bootstrap=true \<br /><br />--cloudformation-execution-policies arn:aws:iam::aws:policy/AdministratorAccess</pre>Make sure that you replace `{security_account_id}` with the ID of the security account, and `{role_name}` with the name of an IAM role that you can assume from the security account and that has the required permissions to bootstrap the AWS CDK. You can also use other approaches to bootstrap the member accounts, for example, with AWS CloudFormation. For more information about this, see [Bootstrapping](https://docs.aws.amazon.com/cdk/latest/guide/bootstrapping.html) in the AWS CDK documentation. | Developer | 

### Deploy the AWS CDK stacks
<a name="deploy-the-aws-cdk-stacks"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the IAM roles in the member accounts. | Run the following command to deploy the `member_account_roles_stack` stack and create the IAM roles in the member accounts:<pre>cdk deploy --all -a 'python3 cloudcustodian/member_account_roles_stack.py' --require-approval never</pre> | Developer | 
| Deploy the Cloud Custodian pipeline stack. | Run the following command to create the Cloud Custodian `cloudcustodian_stack.py` pipeline that is deployed into the security account:<pre>cdk deploy -a 'python3 cloudcustodian/cloudcustodian_stack.py'</pre> | Developer | 

## Related resources
<a name="automatically-attach-an-aws-managed-policy-for-systems-manager-to-ec2-instance-profiles-using-cloud-custodian-and-aws-cdk-resources"></a>
+ [Getting started with the AWS CDK](https://docs.aws.amazon.com/cdk/latest/guide/getting_started.html)

## Attachments
<a name="attachments-169a7bde-789e-4ebd-b4ca-80eb28ac9927"></a>

To access additional content that is associated with this document, unzip the following file: [attachment.zip](samples/p-attach/169a7bde-789e-4ebd-b4ca-80eb28ac9927/attachments/attachment.zip)

# Automatically build CI/CD pipelines and Amazon ECS clusters for microservices using AWS CDK
<a name="automatically-build-ci-cd-pipelines-and-amazon-ecs-clusters-for-microservices-using-aws-cdk"></a>

*Varsha Raju, Amazon Web Services*

## Summary
<a name="automatically-build-ci-cd-pipelines-and-amazon-ecs-clusters-for-microservices-using-aws-cdk-summary"></a>

This pattern describes how to automatically create the continuous integration and continuous delivery (CI/CD) pipelines and underlying infrastructure for building and deploying microservices on Amazon Elastic Container Service (Amazon ECS). You can use this approach if you want to set up proof-of-concept CI/CD pipelines to show your organization the benefits of CI/CD, microservices, and DevOps. You can also use this approach to create initial CI/CD pipelines that you can then customize or change according to your organization’s requirements. 

The pattern’s approach creates a production environment and non-production environment that each have a virtual private cloud (VPC) and an Amazon ECS cluster configured to run in two Availability Zones. These environments are shared by all your microservices and you then create a CI/CD pipeline for each microservice. These CI/CD pipelines pull changes from a source repository in AWS CodeCommit, automatically build the changes, and then deploy them into your production and non-production environments. When a pipeline successfully completes all of its stages, you can use URLs to access the microservice in the production and non-production environments.
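
As a quick check before you browse those URLs, you can confirm that a pipeline has completed all of its stages from the command line. The following sketch assumes the AWS CLI is configured and uses a hypothetical pipeline name:

```
# Show each stage of the pipeline and the status of its latest execution
# (the pipeline name is hypothetical).
aws codepipeline get-pipeline-state --name myservice1-pipeline \
  --query 'stageStates[].{stage:stageName,status:latestExecution.status}' \
  --output table
```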

## Prerequisites and limitations
<a name="automatically-build-ci-cd-pipelines-and-amazon-ecs-clusters-for-microservices-using-aws-cdk-prereqs"></a>

**Prerequisites **
+ An active Amazon Web Services (AWS) account.
+ An existing Amazon Simple Storage Service (Amazon S3) bucket that contains the `starter-code.zip` file (attached).
+ AWS Cloud Development Kit (AWS CDK), installed and configured in your account. For more information about this, see [Getting started with the AWS CDK](https://docs.aws.amazon.com/cdk/latest/guide/getting_started.html) in the AWS CDK documentation.
+ Python 3 and `pip`, installed and configured. For more information about this, see the [Python documentation](https://www.python.org/).
+ Familiarity with AWS CDK, AWS CodePipeline, AWS CodeBuild, CodeCommit, Amazon Elastic Container Registry (Amazon ECR), Amazon ECS, and AWS Fargate.
+ Familiarity with Docker.
+ An understanding of CI/CD and DevOps.

**Limitations**
+ General AWS account limits apply. For more information about this, see [AWS service quotas](https://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html) in the AWS General Reference documentation.

**Product versions**
+ The code was tested using Node.js version 16.13.0 and AWS CDK version 1.132.0.

## Architecture
<a name="automatically-build-ci-cd-pipelines-and-amazon-ecs-clusters-for-microservices-using-aws-cdk-architecture"></a>

![\[AWS Cloud architecture diagram showing CI/CD pipeline and deployment to production and non-production VPCs.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/05ac2cad-408e-433f-8150-0a2b71f63cfd/images/6fa3dbef-88de-4b3f-ae41-dfa90256a058.png)


The diagram shows the following workflow:

1. An application developer commits code to a CodeCommit repository.

1. A pipeline is initiated.

1. CodeBuild builds and pushes the Docker image to an Amazon ECR repository.

1. CodePipeline deploys a new image to an existing Fargate service in a non-production Amazon ECS cluster.

1. Amazon ECS pulls the image from the Amazon ECR repository into a non-production Fargate service.

1. Testing is performed using a non-production URL.

1. The release manager approves the production deployment.

1. CodePipeline deploys the new image to an existing Fargate service in a production Amazon ECS cluster.

1. Amazon ECS pulls the image from the Amazon ECR repository into the production Fargate service.

1. Production users access your feature by using a production URL.

**Technology stack  **
+ AWS CDK
+ CodeBuild
+ CodeCommit 
+ CodePipeline
+ Amazon ECR 
+ Amazon ECS 
+ Amazon VPC

**Automation and scale**

You can use this pattern’s approach to create pipelines for microservices deployed in a shared AWS CloudFormation stack. The automation can create more than one Amazon ECS cluster in each VPC and also create pipelines for microservices deployed in a shared Amazon ECS cluster. However, this requires that you provide new resource information as inputs to the pipeline stack.

## Tools
<a name="automatically-build-ci-cd-pipelines-and-amazon-ecs-clusters-for-microservices-using-aws-cdk-tools"></a>
+ [AWS CDK](https://docs.aws.amazon.com/cdk/latest/guide/home.html) – AWS Cloud Development Kit (AWS CDK) is a software development framework for defining cloud infrastructure in code and provisioning it through AWS CloudFormation.
+ [AWS CodeBuild ](https://docs.aws.amazon.com/codebuild/latest/userguide/welcome.html)– AWS CodeBuild is a fully managed build service in the cloud. CodeBuild compiles your source code, runs unit tests, and produces artifacts that are ready to deploy.
+ [AWS CodeCommit](https://docs.aws.amazon.com/codecommit/latest/userguide/welcome.html) – AWS CodeCommit is a version control service that enables you to privately store and manage Git repositories in the AWS Cloud. CodeCommit eliminates the need for you to manage your own source control system or worry about scaling its infrastructure.
+ [AWS CodePipeline](https://docs.aws.amazon.com/codepipeline/latest/userguide/welcome.html)  – AWS CodePipeline is a continuous delivery service you can use to model, visualize, and automate the steps required to release your software. You can quickly model and configure the different stages of a software release process. CodePipeline automates the steps required to release your software changes continuously.
+ [Amazon ECS](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html) – Amazon Elastic Container Service (Amazon ECS) is a highly scalable, fast container management service that is used for running, stopping, and managing containers on a cluster. You can run your tasks and services on a serverless infrastructure that is managed by AWS Fargate. Alternatively, for more control over your infrastructure, you can run your tasks and services on a cluster of Amazon Elastic Compute Cloud (Amazon EC2) instances that you manage.
+ [Docker](https://www.docker.com/) – Docker helps developers to pack, ship, and run any application as a lightweight, portable, and self-sufficient container.

**Code **

The code for this pattern is available in the `cicdstarter.zip` and `starter-code.zip` files (attached).

## Epics
<a name="automatically-build-ci-cd-pipelines-and-amazon-ecs-clusters-for-microservices-using-aws-cdk-epics"></a>

### Set up your environment
<a name="set-up-your-environment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Set up the working directory for AWS CDK.  | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automatically-build-ci-cd-pipelines-and-amazon-ecs-clusters-for-microservices-using-aws-cdk.html) | AWS DevOps, Cloud infrastructure | 

### Create the shared infrastructure
<a name="create-the-shared-infrastructure"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the shared infrastructure. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automatically-build-ci-cd-pipelines-and-amazon-ecs-clusters-for-microservices-using-aws-cdk.html) | AWS DevOps, Cloud infrastructure | 
| Monitor the AWS CloudFormation stack. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automatically-build-ci-cd-pipelines-and-amazon-ecs-clusters-for-microservices-using-aws-cdk.html) | AWS DevOps, Cloud infrastructure | 
| Test the AWS CloudFormation stack. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automatically-build-ci-cd-pipelines-and-amazon-ecs-clusters-for-microservices-using-aws-cdk.html)Make sure that you record the IDs for the two VPCs and the security group IDs for the default security groups in both VPCs. | AWS DevOps, Cloud infrastructure | 

### Create a CI/CD pipeline for a microservice
<a name="create-a-ci-cd-pipeline-for-a-microservice"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the infrastructure for the microservice. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automatically-build-ci-cd-pipelines-and-amazon-ecs-clusters-for-microservices-using-aws-cdk.html)You can also provide the values for both commands by using the `cdk.json` file in the directory. | AWS DevOps, Cloud infrastructure | 
| Monitor the AWS CloudFormation stack. | Open the AWS CloudFormation console and monitor the progress of the `myservice1-cicd-stack` stack. Eventually, the status changes to `CREATE_COMPLETE`. | AWS DevOps, Cloud infrastructure | 
| Test the AWS CloudFormation stack. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automatically-build-ci-cd-pipelines-and-amazon-ecs-clusters-for-microservices-using-aws-cdk.html) |  | 
| Use the pipeline. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automatically-build-ci-cd-pipelines-and-amazon-ecs-clusters-for-microservices-using-aws-cdk.html) | AWS DevOps, Cloud infrastructure | 
| Repeat this epic for each microservice. | Repeat the tasks in this epic to create a CI/CD pipeline for each of your microservices. | AWS DevOps, Cloud infrastructure | 

## Related resources
<a name="automatically-build-ci-cd-pipelines-and-amazon-ecs-clusters-for-microservices-using-aws-cdk-resources"></a>
+ [Using Python with AWS CDK](https://docs.aws.amazon.com/cdk/latest/guide/work-with-cdk-python.html) 
+ [AWS CDK Python reference](https://docs.aws.amazon.com/cdk/api/latest/python/index.html)
+ [Creating an AWS Fargate service using the AWS CDK](https://docs.aws.amazon.com/cdk/latest/guide/ecs_example.html)

## Additional information
<a name="automatically-build-ci-cd-pipelines-and-amazon-ecs-clusters-for-microservices-using-aws-cdk-additional"></a>

**`cdk synth` command**

```
cdk synth --context aws_account=<aws_account_number> --context aws_region=<aws_region> --context vpc_nonprod_id=<id_of_non_production_VPC> --context vpc_prod_id=<id_of_production_VPC> --context ecssg_nonprod_id=<default_security_group_id_of_non-production_VPC> --context ecssg_prod_id=<default_security_group_id_of_production_VPC> --context code_commit_s3_bucket_for_code=<S3_bucket_name> --context code_commit_s3_object_key_for_code=<Object_key_of_starter_code> --context microservice_name=<name_of_microservice>
```

**`cdk deploy` command**

```
cdk deploy --context aws_account=<aws_account_number> --context aws_region=<aws_region> --context vpc_nonprod_id=<id_of_non_production_VPC> --context vpc_prod_id=<id_of_production_VPC> --context ecssg_nonprod_id=<default_security_group_id_of_non-production_VPC> --context ecssg_prod_id=<default_security_group_id_of_production_VPC> --context code_commit_s3_bucket_for_code=<S3_bucket_name> --context code_commit_s3_object_key_for_code=<Object_key_of_starter_code> --context microservice_name=<name_of_microservice>
```

## Attachments
<a name="attachments-05ac2cad-408e-433f-8150-0a2b71f63cfd"></a>

To access additional content that is associated with this document, unzip the following file: [attachment.zip](samples/p-attach/05ac2cad-408e-433f-8150-0a2b71f63cfd/attachments/attachment.zip)

# Build and push Docker images to Amazon ECR using GitHub Actions and Terraform
<a name="build-and-push-docker-images-to-amazon-ecr-using-github-actions-and-terraform"></a>

*Ruchika Modi, Amazon Web Services*

## Summary
<a name="build-and-push-docker-images-to-amazon-ecr-using-github-actions-and-terraform-summary"></a>

This pattern explains how you can create reusable GitHub workflows to build your Dockerfile and push the resulting image to Amazon Elastic Container Registry (Amazon ECR). The pattern automates the build process of your Dockerfiles by using Terraform and GitHub Actions. This minimizes the possibility of human error and substantially reduces deployment time.

A GitHub push action to the main branch of your GitHub repository initiates the deployment of resources. The workflow creates a unique Amazon ECR repository based on the combination of the GitHub organization and repository name. It then pushes the Dockerfile image to the Amazon ECR repository.
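
To make the workflow's behavior concrete, the steps it automates are roughly equivalent to the following manual AWS CLI and Docker commands. The account ID, Region, and the org-plus-repository naming of the Amazon ECR repository are assumptions used only for illustration:

```
# Placeholder values for illustration only.
AWS_ACCOUNT_ID=123456789012
AWS_REGION=us-east-1
REPO_NAME=my-org-my-repo

# Create the Amazon ECR repository if it doesn't already exist.
aws ecr describe-repositories --repository-names "$REPO_NAME" --region "$AWS_REGION" \
  || aws ecr create-repository --repository-name "$REPO_NAME" --region "$AWS_REGION"

# Authenticate Docker to the registry.
aws ecr get-login-password --region "$AWS_REGION" \
  | docker login --username AWS --password-stdin "$AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com"

# Build, tag, and push the image described by the Dockerfile.
docker build -t "$REPO_NAME:latest" .
docker tag "$REPO_NAME:latest" "$AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$REPO_NAME:latest"
docker push "$AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$REPO_NAME:latest"
```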

## Prerequisites and limitations
<a name="build-and-push-docker-images-to-amazon-ecr-using-github-actions-and-terraform-prereqs"></a>

**Prerequisites **
+ An active AWS account.
+ An active GitHub account.
+ A [GitHub repository](https://docs.github.com/en/get-started/quickstart/create-a-repo).
+ Terraform version 1 or later [installed and configured](https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli).
+ An Amazon Simple Storage Service (Amazon S3) bucket for the [Terraform backend](https://developer.hashicorp.com/terraform/language/settings/backends/s3).
+ An [Amazon DynamoDB](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Introduction.html) table for Terraform state locking and consistency. The table must have a partition key named `LockID` with a type of `String`. If this isn't configured, state locking will be disabled. (A minimal creation sketch follows this list.)
+ An AWS Identity and Access Management (IAM) role that has permissions to set up the Amazon S3 backend for Terraform. For configuration instructions, see the [Terraform documentation](https://developer.hashicorp.com/terraform/language/settings/backends/s3#assume-role-configuration).
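
If you still need to create the state-locking table mentioned above, the following AWS CLI sketch creates a minimal one; the table name is a placeholder, and only the `LockID` partition key is required:

```
# Create a DynamoDB table for Terraform state locking (the table name is a placeholder).
aws dynamodb create-table \
  --table-name terraform-state-lock \
  --attribute-definitions AttributeName=LockID,AttributeType=S \
  --key-schema AttributeName=LockID,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST
```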

**Limitations **

This reusable code has been tested only with GitHub Actions.

## Architecture
<a name="build-and-push-docker-images-to-amazon-ecr-using-github-actions-and-terraform-architecture"></a>

**Target technology stack**
+ Amazon ECR repository
+ GitHub Actions
+ Terraform

**Target architecture**

![\[Workflow to create reusable GitHub workflows to build Dockerfile and push image to Amazon ECR.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/c39c110e-cbe5-459e-a0aa-de27e884fb10/images/298e0e16-3054-49b7-8695-db510e0df2df.png)


The diagram illustrates the following:

1. A user adds a Dockerfile and Terraform templates to the GitHub repository.

2. These additions initiate a GitHub Actions workflow.

3. The workflow checks whether an Amazon ECR repository exists. If not, it creates the repository based on the GitHub organization and repository name.

4. The workflow builds the Dockerfile and pushes the image to the Amazon ECR repository.

## Tools
<a name="build-and-push-docker-images-to-amazon-ecr-using-github-actions-and-terraform-tools"></a>

**Amazon services**
+ [Amazon Elastic Container Registry (Amazon ECR)](https://docs.aws.amazon.com/AmazonECR/latest/userguide/what-is-ecr.html) is a managed container registry service that’s secure, scalable, and reliable.

**Other tools**
+ [GitHub Actions](https://docs.github.com/en/actions) is integrated into the GitHub platform to help you create, share, and run workflows within your GitHub repositories. You can use GitHub Actions to automate tasks such as building, testing, and deploying your code.
+ [Terraform](https://developer.hashicorp.com/terraform/intro) is an infrastructure as code (IaC) tool from HashiCorp that helps you create and manage cloud and on-premises infrastructure.

**Code repository**

The code for this pattern is available in the GitHub [Docker ECR Actions Workflow](https://github.com/aws-samples/docker-ecr-actions-workflow) repository.
+ GitHub Actions workflow files are saved in the `/.github/workflows/` folder of this repository. The workflow for this solution is in the [workflow.yaml](https://github.com/aws-samples/docker-ecr-actions-workflow/blob/main/.github/workflows/workflow.yaml) file.
+ The `e2e-test` folder provides a sample Dockerfile for reference and testing.

## Best practices
<a name="build-and-push-docker-images-to-amazon-ecr-using-github-actions-and-terraform-best-practices"></a>
+ For best practices for writing Dockerfiles, see the [Docker documentation](https://docs.docker.com/develop/develop-images/dockerfile_best-practices/).
+ Use a [VPC endpoint for Amazon ECR](https://docs.aws.amazon.com/vpc/latest/privatelink/create-interface-endpoint.html). VPC endpoints are powered by AWS PrivateLink, a technology that enables you to privately access Amazon ECR APIs through private IP addresses. For Amazon ECS tasks that use the Fargate launch type, the VPC endpoint enables the task to pull private images from Amazon ECR without assigning a public IP address to the task.
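
As a rough sketch of that best practice, the following AWS CLI command creates an interface endpoint for the Amazon ECR Docker registry API. All IDs and the Region are placeholders, and Fargate tasks typically also need the `ecr.api` interface endpoint and an Amazon S3 gateway endpoint:

```
# Create an interface VPC endpoint for the Amazon ECR Docker registry
# (VPC, subnet, and security group IDs, and the Region, are placeholders).
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0123456789abcdef0 \
  --vpc-endpoint-type Interface \
  --service-name com.amazonaws.us-east-1.ecr.dkr \
  --subnet-ids subnet-0123456789abcdef0 \
  --security-group-ids sg-0123456789abcdef0 \
  --private-dns-enabled
```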

## Epics
<a name="build-and-push-docker-images-to-amazon-ecr-using-github-actions-and-terraform-epics"></a>

### Set up the OIDC provider and GitHub repository
<a name="set-up-the-oidc-provider-and-github-repository"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Configure OpenID Connect. | Create an OpenID Connect (OIDC) provider. You will use the provider in the trust policy for the IAM role used in this action. For instructions, see [Configuring OpenID Connect in Amazon Web Services](https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/configuring-openid-connect-in-amazon-web-services) in the GitHub documentation. | AWS administrator, AWS DevOps, General AWS | 
| Clone the GitHub repository. | Clone the GitHub [Docker ECR Actions Workflow](https://github.com/aws-samples/docker-ecr-actions-workflow) repository into your local folder:<pre>$ git clone https://github.com/aws-samples/docker-ecr-actions-workflow</pre> | DevOps engineer | 

### Customize the GitHub reusable workflow and deploy the Docker image
<a name="customize-the-github-reusable-workflow-and-deploy-the-docker-image"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Customize the event that initiates the Docker workflow. | The workflow for this solution is in [workflow.yaml](https://github.com/aws-samples/docker-ecr-actions-workflow/blob/main/.github/workflows/workflow.yaml). This script is currently configured to deploy resources when it receives the `workflow_dispatch` event. You can customize this configuration by changing the event to `workflow_call` and calling the workflow from another parent workflow. | DevOps engineer | 
| Customize the workflow. | The [workflow.yaml](https://github.com/aws-samples/docker-ecr-actions-workflow/blob/main/.github/workflows/workflow.yaml) file is configured to create a dynamic, reusable GitHub workflow. You can edit this file to customize the default configuration, or you can pass the input values from the GitHub Actions console if you're using the `workflow_dispatch` event to initiate deployment manually.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/build-and-push-docker-images-to-amazon-ecr-using-github-actions-and-terraform.html) | DevOps engineer | 
| Deploy the Terraform templates. | The workflow automatically deploys the Terraform templates that create the Amazon ECR repository, based on the GitHub event you configured. These templates are available as `.tf` files at the [root of the Github repository](https://github.com/aws-samples/docker-ecr-actions-workflow/tree/main). | AWS DevOps, DevOps engineer | 

## Troubleshooting
<a name="build-and-push-docker-images-to-amazon-ecr-using-github-actions-and-terraform-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| Issues or errors when you configure Amazon S3 and DynamoDB as the Terraform remote backend. | Follow the instructions in the [Terraform documentation](https://developer.hashicorp.com/terraform/language/settings/backends/s3) to set up the required permissions on the Amazon S3 and DynamoDB resources for the remote backend configuration. | 
| Unable to run or start the workflow with the `workflow_dispatch` event. | A workflow that uses the `workflow_dispatch` trigger can be started manually only if the workflow file also exists on the main (default) branch. Make sure that the workflow is configured on the main branch as well. | 

## Related resources
<a name="build-and-push-docker-images-to-amazon-ecr-using-github-actions-and-terraform-resources"></a>
+ [Reusing workflows](https://docs.github.com/en/actions/using-workflows/reusing-workflows) (GitHub documentation)
+ [Triggering a workflow](https://docs.github.com/en/actions/using-workflows/triggering-a-workflow) (GitHub documentation)

# Build and test iOS apps with AWS CodeCommit, AWS CodePipeline, and AWS Device Farm
<a name="build-and-test-ios-apps-with-aws-codecommit-aws-codepipeline-and-aws-device-farm"></a>

*Abdullahi Olaoye, Amazon Web Services*

## Summary
<a name="build-and-test-ios-apps-with-aws-codecommit-aws-codepipeline-and-aws-device-farm-summary"></a>

This pattern outlines the steps for creating a continuous integration and continuous delivery (CI/CD) pipeline that uses AWS CodePipeline to build and test iOS applications on real devices on AWS. The pattern uses AWS CodeCommit to store the application code, the Jenkins open-source tool to build the iOS application, and AWS Device Farm to test the built application on real devices. These three phases are orchestrated together in a pipeline by using AWS CodePipeline.

This pattern is based on the post [Building and testing iOS and iPadOS apps with AWS DevOps and mobile services](https://aws.amazon.com/blogs/devops/building-and-testing-ios-and-ipados-apps-with-aws-devops-and-mobile-services/) on the AWS DevOps blog. For detailed instructions, see the blog post.

## Prerequisites and limitations
<a name="build-and-test-ios-apps-with-aws-codecommit-aws-codepipeline-and-aws-device-farm-prereqs"></a>

**Prerequisites **
+ An active AWS account
+ An Apple developer account
+ Build server (macOS)
+ [Xcode](https://developer.apple.com/xcode/) version 11.3 (installed and set up on the build server)
+ AWS Command Line Interface (AWS CLI) [installed](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv1.html) and [configured ](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html)on the workstation
+ Basic knowledge of [Git](https://git-scm.com/docs)

**Limitations **
+ The application build server must be running macOS. 
+ The build server must have a public IP address, so CodePipeline can connect to it remotely to initiate builds.

## Architecture
<a name="build-and-test-ios-apps-with-aws-codecommit-aws-codepipeline-and-aws-device-farm-architecture"></a>

**Source technology stack  **
+ An on-premises iOS application build process that involves using a simulator or manual test on physical devices

**Target technology stack  **
+ An AWS CodeCommit repository for storing application source code
+ A Jenkins server for application builds using Xcode
+ An AWS Device Farm device pool for testing applications on real devices

**Target architecture **

When a user commits changes to the source repository, the pipeline (AWS CodePipeline) fetches the code from the source repository, initiates a Jenkins build, and passes the application code to Jenkins. After the build, the pipeline retrieves the build artifact and starts an AWS Device Farm job to test the application against a device pool.

 

![\[CI/CD pipeline uses AWS CodePipeline to build and test iOS applications on real devices.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/06fbd82f-4aed-441c-818c-5f89f56af78e/images/0ae3d7b6-b40c-44ef-9580-8c8266c3d841.png)


## Tools
<a name="build-and-test-ios-apps-with-aws-codecommit-aws-codepipeline-and-aws-device-farm-tools"></a>
+ [AWS CodePipeline](https://docs.aws.amazon.com/codepipeline/latest/userguide/welcome.html) is a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates. CodePipeline automates the build, test, and deploy phases of your release process every time there is a code change, based on the release model you define.
+ [AWS CodeCommit](https://docs.aws.amazon.com/codecommit/latest/userguide/welcome.html) is a fully managed source control service that hosts secure Git-based repositories. It makes it easy for teams to collaborate on code in a secure and highly scalable ecosystem. CodeCommit eliminates the need to operate your own source control system or worry about scaling its infrastructure.
+ [AWS Device Farm](https://docs.aws.amazon.com/devicefarm/latest/developerguide/welcome.html) is an application testing service that lets you improve the quality of your web and mobile apps by testing them across an extensive range of desktop browsers and real mobile devices, without having to provision and manage any testing infrastructure.
+ [Jenkins](https://www.jenkins.io/) is an open-source automation server that enables developers to build, test, and deploy their software.

## Epics
<a name="build-and-test-ios-apps-with-aws-codecommit-aws-codepipeline-and-aws-device-farm-epics"></a>

### Set up the build environment
<a name="set-up-the-build-environment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Install Jenkins on the build server that's running macOS. | Jenkins will be used for building the application, so you must first install it on the build server. To get detailed instructions for this and subsequent tasks, see the AWS blog post [Building and testing iOS and iPadOS apps with AWS DevOps and mobile services ](https://aws.amazon.com/blogs/devops/building-and-testing-ios-and-ipados-apps-with-aws-devops-and-mobile-services/)and other resources in the [Related resources](#build-and-test-ios-apps-with-aws-codecommit-aws-codepipeline-and-aws-device-farm-resources) section at the end of this pattern. | DevOps | 
| Configure Jenkins. | Follow the on-screen instructions to configure Jenkins. | DevOps | 
| Install the AWS CodePipeline plugin for Jenkins. | This plugin must be installed on the Jenkins server in order for Jenkins to interact with the AWS CodePipeline service. | DevOps | 
| Create a Jenkins freestyle project. | In Jenkins, create a freestyle project. Configure the project to specify triggers and other build configuration options. | DevOps | 

### Configure AWS Device Farm
<a name="configure-aws-device-farm"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a Device Farm project. | Open the AWS Device Farm console. Create a project and a device pool for testing. For instructions, see the blog post. | Developer | 

### Configure the source repository
<a name="configure-the-source-repository"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a CodeCommit repository. | Create a repository where the source code will be stored. | DevOps | 
| Commit your application code to the repository. | Connect to the CodeCommit repository you created. Push the code from your local machine to the repository. | DevOps | 

### Configure the pipeline
<a name="configure-the-pipeline"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a pipeline in AWS CodePipeline. | Open the AWS CodePipeline console, and create a pipeline. The pipeline orchestrates all the phases of the CI/CD process. For instructions, see the AWS blog post [Building and testing iOS and iPadOS apps with AWS DevOps and mobile services](https://aws.amazon.com/blogs/devops/building-and-testing-ios-and-ipados-apps-with-aws-devops-and-mobile-services/). | DevOps | 
| Add a test stage to the pipeline. | To add a test stage and integrate it with AWS Device Farm, edit the pipeline. | DevOps | 
| Initiate the pipeline. | To start the pipeline and the CI/CD process, choose **Release change**. | DevOps | 

### View application test results
<a name="view-application-test-results"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Review test results. | In the AWS Device Farm console, select the project you created, and review the results of the tests. The console will show the details of each test. | Developer | 

## Related resources
<a name="build-and-test-ios-apps-with-aws-codecommit-aws-codepipeline-and-aws-device-farm-resources"></a>

**Step-by-step instructions for this pattern**
+ [Building and testing iOS and iPadOS apps with AWS DevOps and mobile services](https://aws.amazon.com/blogs/devops/building-and-testing-ios-and-ipados-apps-with-aws-devops-and-mobile-services/) (AWS DevOps blog post)

**Configure AWS Device Farm**
+ [AWS Device Farm console](https://console.aws.amazon.com/devicefarm)

**Configure the source repository**
+ [Create an AWS CodeCommit repository](https://docs.aws.amazon.com/codecommit/latest/userguide/how-to-create-repository.html)
+ [Connect to an AWS CodeCommit repository](https://docs.aws.amazon.com/codecommit/latest/userguide/how-to-connect.html)

**Configure the pipeline**
+ [AWS CodePipeline console](https://console.aws.amazon.com/codesuite/codepipeline/home)

**Additional resources**
+ [AWS CodePipeline documentation](https://docs.aws.amazon.com/codepipeline/latest/userguide/welcome.html)
+ [AWS CodeCommit documentation](https://docs.aws.amazon.com/codecommit/latest/userguide/welcome.html)
+ [AWS Device Farm documentation](https://docs.aws.amazon.com/devicefarm/latest/developerguide/welcome.html)
+ [Jenkins documentation](https://www.jenkins.io/doc/)
+ [Jenkins installation on macOS](https://www.jenkins.io/download/weekly/macos/)
+ [AWS CodePipeline plugin for Jenkins](https://plugins.jenkins.io/aws-codepipeline/)
+ [Xcode installation](https://developer.apple.com/xcode/)
+ AWS CLI [Installation](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv1.html) and [configuration](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html)
+ [Git documentation](https://git-scm.com/docs)

# Configure mutual TLS authentication for applications running on Amazon EKS
<a name="configure-mutual-tls-authentication-for-applications-running-on-amazon-eks"></a>

*Mahendra Revanasiddappa, Amazon Web Services*

## Summary
<a name="configure-mutual-tls-authentication-for-applications-running-on-amazon-eks-summary"></a>

Certificate-based mutual Transport Layer Security (TLS) is an optional TLS component that provides two-way peer authentication between servers and clients. With mutual TLS, clients must provide an X.509 certificate during the session negotiation process. The server uses this certificate to identify and authenticate the client.

Mutual TLS is a common requirement for Internet of Things (IoT) applications and can be used for business-to-business applications or standards such as [Open Banking](https://docs.aws.amazon.com/wellarchitected/latest/financial-services-industry-lens/open-banking.html).

This pattern describes how to configure mutual TLS for applications running on an Amazon Elastic Kubernetes Service (Amazon EKS) cluster by using an NGINX ingress controller. You can enable built-in mutual TLS features for the NGINX ingress controller by annotating the ingress resource. For more information about mutual TLS annotations on NGINX controllers, see [Client certificate authentication](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#client-certificate-authentication) in the Kubernetes documentation.

**Important**  
This pattern uses self-signed certificates. We recommend that you use this pattern only with test clusters, and not in production environments. If you want to use this pattern in a production environment, you can use [AWS Private Certificate Authority (AWS Private CA)](https://docs.aws.amazon.com/privateca/latest/userguide/PcaWelcome.html) or your existing public key infrastructure (PKI) standard to issue private certificates.

## Prerequisites and limitations
<a name="configure-mutual-tls-authentication-for-applications-running-on-amazon-eks-prereqs"></a>

**Prerequisites **
+ An active Amazon Web Services (AWS) account.
+ An existing Amazon EKS cluster.
+ AWS Command Line Interface (AWS CLI) version 1.7 or later, installed and configured on macOS, Linux, or Windows.
+ The kubectl command line utility, installed and configured to access the Amazon EKS cluster. For more information about this, see [Installing kubectl](https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html) in the Amazon EKS documentation.
+ An existing Domain Name System (DNS) name to test the application.

**Limitations **
+ This pattern uses self-signed certificates. We recommend that you use this pattern only with test clusters, and not in production environments.

## Architecture
<a name="configure-mutual-tls-authentication-for-applications-running-on-amazon-eks-architecture"></a>

![\[Configuring mutual TLS authentication for applications running on Amazon EKS\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/ae2761e3-7ed2-4c2a-ba54-a4ddce8a1e7e/images/cefc60f9-2f29-4052-b7ae-df4eb6395e1c.png)


**Technology stack**
+ Amazon EKS
+ Amazon Route 53
+ Kubectl

## Tools
<a name="configure-mutual-tls-authentication-for-applications-running-on-amazon-eks-tools"></a>
+ [Amazon Elastic Kubernetes Service (Amazon EKS)](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html) helps you run Kubernetes on AWS without needing to install or maintain your own Kubernetes control plane or nodes.
+ [Amazon Route 53](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/Welcome.html) is a highly available and scalable DNS web service.
+ [Kubectl](https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html) is a command line utility that you use to interact with an Amazon EKS cluster.

## Epics
<a name="configure-mutual-tls-authentication-for-applications-running-on-amazon-eks-epics"></a>

### Generate the self-signed certificates
<a name="generate-the-self-signed-certificates"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
|  Generate the CA key and certificate. | Generate the certificate authority (CA) key and certificate by running the following command.<pre>openssl req -x509 -sha256 -newkey rsa:4096 -keyout ca.key -out ca.crt -days 356 -nodes -subj '/CN=Test Cert Authority'</pre> | DevOps engineer | 
| Generate the server key and certificate, and sign with the CA certificate. | Generate the server key and certificate, and sign with the CA certificate by running the following command.<pre>openssl req -new -newkey rsa:4096 -keyout server.key -out server.csr -nodes -subj '/CN=<your_domain_name>' && openssl x509 -req -sha256 -days 365 -in server.csr -CA ca.crt -CAkey ca.key -set_serial 01 -out server.crt</pre>Make sure that you replace `<your_domain_name>` with your existing domain name. | DevOps engineer | 
|  Generate the client key and certificate, and sign with the CA certificate. | Generate the client key and certificate, and sign with the CA certificate by running the following command.<pre>openssl req -new -newkey rsa:4096 -keyout client.key -out client.csr -nodes -subj '/CN=Test' && openssl x509 -req -sha256 -days 365 -in client.csr -CA ca.crt -CAkey ca.key -set_serial 02 -out client.crt</pre> | DevOps engineer | 

### Deploy the NGINX ingress controller
<a name="deploy-the-nginx-ingress-controller"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Deploy the NGINX ingress controller in your Amazon EKS cluster. | Deploy the NGINX ingress controller by using the following command.<pre>kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.7.0/deploy/static/provider/aws/deploy.yaml</pre> | DevOps engineer | 
|  Verify that the NGINX ingress controller service is running. | Verify that the NGINX ingress controller service is running by using the following command.<pre>kubectl get svc -n ingress-nginx</pre>Make sure that the external address field of the service shows the Network Load Balancer’s domain name. | DevOps engineer | 

### Create a namespace in the Amazon EKS cluster to test mutual TLS
<a name="create-a-namespace-in-the-amazon-eks-cluster-to-test-mutual-tls"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a namespace in the Amazon EKS cluster.  | Create a namespace called `mtls` in your Amazon EKS cluster by running the following command. <pre>kubectl create ns mtls</pre>The sample application for testing mutual TLS is deployed into this namespace. | DevOps engineer | 

### Create the deployment and service for the sample application
<a name="create-the-deployment-and-service-for-the-sample-application"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the Kubernetes deployment and service in the mtls namespace. | Create a file named `mtls.yaml`. Paste the following code into the file. <pre>kind: Deployment<br />apiVersion: apps/v1<br />metadata:<br />  name: mtls-app<br />  labels:<br />    app: mtls<br />spec:<br />  replicas: 1<br />  selector:<br />    matchLabels:<br />      app: mtls<br />  template:<br />    metadata:<br />      labels:<br />        app: mtls<br />    spec:<br />      containers:<br />      - name: mtls-app<br />        image: hashicorp/http-echo<br />        args:<br />          - "-text=mTLS is working"<br /><br /><br />---<br /><br />kind: Service<br />apiVersion: v1<br />metadata:<br />  name: mtls-service<br />spec:<br />  selector:<br />    app: mtls<br />  ports:<br />    - port: 5678 # Default port for image</pre> Create the Kubernetes deployment and service in the `mtls` namespace by running the following command.<pre>kubectl create -f mtls.yaml -n mtls</pre> | DevOps engineer | 
| Verify that the Kubernetes deployment is created. | Run the following command to verify that the deployment is created and has one pod in available status.<pre>kubectl get deploy -n mtls</pre> | DevOps engineer | 
| Verify that the Kubernetes service is created. | Verify that the Kubernetes service is created by running the following command.<pre>kubectl get service -n mtls</pre> | DevOps engineer | 

### Create a secret in the mtls namespace
<a name="create-a-secret-in-the-mtls-namespace"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a secret for the ingress resource. | Run the following command to create a secret for the NGINX ingress controller by using the certificates that you created earlier.<pre>kubectl create secret generic mtls-certs --from-file=tls.crt=server.crt --from-file=tls.key=server.key --from-file=ca.crt=ca.crt -n mtls </pre>Your secret has a server certificate for the client to identify the server and a CA certificate for the server to verify the client certificates. | DevOps engineer | 

### Create the ingress resource in the mtls namespace
<a name="create-the-ingress-resource-in-the-mtls-namespace"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the ingress resource in the mtls namespace. | Create a file named `ingress.yaml`. Paste the following code into the file (replace `<your_domain_name>` with your existing domain name).<pre>apiVersion: networking.k8s.io/v1<br />kind: Ingress<br />metadata:<br />  annotations:<br />    nginx.ingress.kubernetes.io/auth-tls-verify-client: "on"<br />    nginx.ingress.kubernetes.io/auth-tls-secret: mtls/mtls-certs<br />  name: mtls-ingress<br />spec:<br />  ingressClassName: nginx<br />  rules:<br />  - host: "*.<your_domain_name>"<br />    http:<br />      paths:<br />      - path: /<br />        pathType: Prefix<br />        backend:<br />          service:<br />            name: mtls-service<br />            port:<br />              number: 5678<br />  tls:<br />  - hosts:<br />    - "*.<your_domain_name>"<br />    secretName: mtls-certs</pre>Create the ingress resource in the `mtls` namespace  by running the following command.<pre>kubectl create -f ingress.yaml -n mtls</pre>This means that the NGINX ingress controller can route traffic to your sample application. | DevOps engineer | 
| Verify that the ingress resource is created. | Verify that the ingress resource is created by running the following command.<pre>kubectl get ing -n mtls</pre>Make sure that the address of the ingress resource shows the load balancer created for the NGINX ingress controller. | DevOps engineer | 

### Configure DNS to point the hostname to the load balancer
<a name="configure-dns-to-point-the-hostname-to-the-load-balancer"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a CNAME record that points to the load balancer for the NGINX ingress controller. | Sign in to the AWS Management Console, open the Amazon Route 53 console, and create a Canonical Name (CNAME) record that points `mtls.<your_domain_name>` to the load balancer for the NGINX ingress controller. For more information, see [Creating records by using the Route 53 console](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resource-record-sets-creating.html) in the Route 53 documentation. An example AWS CLI command follows this table. | DevOps engineer | 
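
If you prefer the AWS CLI, the following is a minimal sketch of the same record change. The hosted zone ID and the load balancer DNS name shown here are placeholders; replace them with your own values.

```
# Point mtls.<your_domain_name> at the load balancer for the NGINX ingress controller.
aws route53 change-resource-record-sets \
  --hosted-zone-id Z0123456789EXAMPLE \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "mtls.<your_domain_name>",
        "Type": "CNAME",
        "TTL": 300,
        "ResourceRecords": [{"Value": "a1b2c3d4-nginx-ingress.elb.us-east-1.amazonaws.com"}]
      }
    }]
  }'
```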

### Test the application
<a name="test-the-application"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Test mutual TLS setup without certificates. | Run the following command.<pre>curl -k https://mtls.<your_domain_name> </pre>You should receive the "400 No required SSL certificate was sent" error response. | DevOps engineer | 
| Test mutual TLS setup with certificates. | Run the following command.<pre>curl -k https://mtls.<your_domain_name> --cert client.crt --key client.key</pre>You should receive the "mTLS is working" response. | DevOps engineer | 

## Related resources
<a name="configure-mutual-tls-authentication-for-applications-running-on-amazon-eks-resources"></a>
+ [Creating records by using the Amazon Route 53 console](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resource-record-sets-creating.html)
+ [Using a Network Load Balancer with the NGINX ingress controller on Amazon EKS](https://aws.amazon.com/blogs/opensource/network-load-balancer-nginx-ingress-controller-eks/)
+ [Client Certificate Authentication](https://kubernetes.github.io/ingress-nginx/examples/auth/client-certs/)

# Automate the creation of Amazon WorkSpaces Applications resources using AWS CloudFormation
<a name="automate-the-creation-of-appstream-2-0-resources-using-aws-cloudformation"></a>

*Ram Kandaswamy, Amazon Web Services*

## Summary
<a name="automate-the-creation-of-appstream-2-0-resources-using-aws-cloudformation-summary"></a>

This pattern provides code samples and steps to automate the creation of [Amazon WorkSpaces Applications](https://aws.amazon.com/workspaces/applications/) resources in the AWS Cloud by using an [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) template. The pattern shows you how to use a CloudFormation stack to automate the creation of your WorkSpaces Applications resources, including an image builder, image, fleet instance, and stack. You can then stream your applications to end users on an HTML5-compliant browser by using either the desktop or application delivery mode.

## Prerequisites and limitations
<a name="automate-the-creation-of-appstream-2-0-resources-using-aws-cloudformation-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ Acceptance of the WorkSpaces Applications terms and conditions
+ Basic knowledge of WorkSpaces Applications resources, such as [fleets and stacks](https://docs.aws.amazon.com/appstream2/latest/developerguide/managing-stacks-fleets.html) and [image builders](https://docs.aws.amazon.com/appstream2/latest/developerguide/managing-image-builders.html)

**Limitations**
+ You can’t modify the [AWS Identity and Access Management](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) (IAM) role associated with a WorkSpaces Applications instance after that instance is created.
+ You can’t modify properties (such as the [subnet](https://docs.aws.amazon.com/vpc/latest/userguide/configure-subnets.html#subnet-basics) or [security group](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-security-groups.html)) on the WorkSpaces Applications image builder instance after that image builder is created.

## Architecture
<a name="automate-the-creation-of-appstream-2-0-resources-using-aws-cloudformation-architecture"></a>

The following diagram shows you how to automate the creation of WorkSpaces Applications resources by using a CloudFormation template.

![\[Workflow for automatically creating WorkSpaces Applications resources.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/4f0205f5-5b91-4832-9f0f-2135ae866226/images/cb578939-d9af-4f60-93c9-286881df4c3a.png)


The diagram shows the following workflow:

1. You create a CloudFormation template based on the YAML code in the [Additional information](#automate-the-creation-of-appstream-2-0-resources-using-aws-cloudformation-additional) section of this pattern.

1. The CloudFormation template creates a CloudFormation test stack.

   1. (Optional) You create an image builder instance by using WorkSpaces Applications.

   1. (Optional) You create a Windows image by using your custom software.

1. The CloudFormation stack creates a WorkSpaces Applications fleet instance and stack.

1. You deploy your WorkSpaces Applications resources to end users on an HTML5-compliant browser.

## Tools
<a name="automate-the-creation-of-appstream-2-0-resources-using-aws-cloudformation-tools"></a>
+ [Amazon WorkSpaces Applications](https://docs.aws.amazon.com/appstream2/latest/developerguide/what-is-appstream.html) is a fully managed application streaming service that provides you with instant access to your desktop applications from anywhere. WorkSpaces Applications manages the AWS resources required to host and run your applications, scales automatically, and provides access to your users on demand.
+ [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) helps you model and set up your AWS resources, provision them quickly and consistently, and manage them throughout their lifecycle. You can use a template to describe your resources and their dependencies, and launch and configure them together as a stack, instead of managing resources individually. You can manage and provision stacks across multiple AWS accounts and AWS Regions.

## Best practices
<a name="automate-the-creation-of-appstream-2-0-resources-using-aws-cloudformation-best-practices"></a>
+ **Configure network access for image builders correctly** – Launch image builders in virtual private cloud (VPC) subnets that have internet access, for example through a NAT gateway that permits outbound-only connections.

  Test network connectivity to required resources (such as application servers, databases, and licensing servers) before creating images. Verify that VPC route tables allow connections to all necessary network resources. For more information, see [Internet access](https://docs.aws.amazon.com/appstream2/latest/developerguide/internet-access.html) in the WorkSpaces Applications documentation.
+ **Monitor fleet capacity against service quotas proactively** – WorkSpaces Applications instance type and size quotas are per AWS account, per AWS Region. If you have multiple fleets in the same Region that use the same instance type and size, the total number of instances in all fleets in that Region must be less than or equal to the applicable quota. For more information, see [Troubleshooting Fleets](https://docs.aws.amazon.com/appstream2/latest/developerguide/troubleshooting-fleets.html) in the WorkSpaces Applications documentation.
+ **Test applications in Image Builder Test mode before fleet deployment** – Always validate applications in Image Builder Test mode before creating images and deploying to fleets. Test mode simulates the limited permissions that end users have on fleet instances. For more information, see [Troubleshooting Image Builders](https://docs.aws.amazon.com/appstream2/latest/developerguide/troubleshooting-image-builder.html#troubleshooting-07) in the WorkSpaces Applications documentation.

## Epics
<a name="automate-the-creation-of-appstream-2-0-resources-using-aws-cloudformation-epics"></a>

### (Optional) Create a WorkSpaces Applications image
<a name="optional-create-a-aas2-image"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Install custom software and create an image. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-the-creation-of-appstream-2-0-resources-using-aws-cloudformation.html) Consider using the Windows AppLocker feature to further lock down the image. | AWS DevOps, Cloud architect | 

### Deploy the CloudFormation template
<a name="deploy-the-cfn-template"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Update the CloudFormation template. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-the-creation-of-appstream-2-0-resources-using-aws-cloudformation.html) | AWS systems administrator, Cloud administrator, Cloud architect, General AWS, AWS administrator | 
| Create a CloudFormation stack by using the template. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-the-creation-of-appstream-2-0-resources-using-aws-cloudformation.html) | App owner, AWS systems administrator, Windows Engineer | 

## Troubleshooting
<a name="automate-the-creation-of-appstream-2-0-resources-using-aws-cloudformation-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| Various issues | For more information, see [Troubleshooting](https://docs.aws.amazon.com/appstream2/latest/developerguide/troubleshooting.html) in the WorkSpaces Applications documentation. | 

## Related resources
<a name="automate-the-creation-of-appstream-2-0-resources-using-aws-cloudformation-resources"></a>

**References**
+ [Get started with Amazon WorkSpaces Applications: Set up with sample applications](https://docs.aws.amazon.com/appstream2/latest/developerguide/getting-started.html)
+ [Create an Amazon WorkSpaces Applications fleet and stack](https://docs.aws.amazon.com/appstream2/latest/developerguide/set-up-stacks-fleets.html)

**Tutorials and videos**
+ [Amazon WorkSpaces Applications User Workflow](https://www.youtube.com/watch?v=hVGQ87-Uhrc)
+ [How to Migrate a Legacy Windows Forms App to Amazon WorkSpaces Applications](https://www.youtube.com/watch?v=CIImtS2iVbg)
+ [AWS re:Invent 2018: Securely Deliver Desktop Applications with Amazon WorkSpaces Applications (BAP201)](https://www.youtube.com/watch?v=xNIyc_inOhM)

## Additional information
<a name="automate-the-creation-of-appstream-2-0-resources-using-aws-cloudformation-additional"></a>

The following code is an example of a CloudFormation template that you can use to automatically create WorkSpaces Applications resources.

```
AWSTemplateFormatVersion: 2010-09-09
Parameters:
  SubnetIds:
    Type: 'List<AWS::EC2::Subnet::Id>'
  testSecurityGroup:
    Type: 'AWS::EC2::SecurityGroup::Id'
  ImageName:
    Type: String
Resources:
  
  AppStreamFleet:
    Type: 'AWS::AppStream::Fleet'
    Properties:
      ComputeCapacity:
        DesiredInstances: 5
      InstanceType: stream.standard.medium
      Name: appstream-test-fleet
      DisconnectTimeoutInSeconds: 1200
      FleetType: ON_DEMAND
      IdleDisconnectTimeoutInSeconds: 1200
      ImageName: !Ref ImageName
      MaxUserDurationInSeconds: 345600
      VpcConfig:
        SecurityGroupIds:
          - !Ref testSecurityGroup
        SubnetIds: !Ref SubnetIds
  AppStreamStack:
    Type: 'AWS::AppStream::Stack'
    Properties:
      Description: AppStream stack for test
      DisplayName: AppStream test Stack
      Name: appstream-test-stack
      StorageConnectors:
        - ConnectorType: HOMEFOLDERS
      UserSettings:
        - Action: CLIPBOARD_COPY_FROM_LOCAL_DEVICE
          Permission: ENABLED
        - Action: CLIPBOARD_COPY_TO_LOCAL_DEVICE
          Permission: ENABLED
        - Action: FILE_DOWNLOAD
          Permission: ENABLED
        - Action: PRINTING_TO_LOCAL_DEVICE
          Permission: ENABLED
  AppStreamFleetAssociation:
    Type: 'AWS::AppStream::StackFleetAssociation'
    Properties:
      FleetName: appstream-test-fleet
      StackName: appstream-test-stack
    DependsOn:
      - AppStreamFleet
      - AppStreamStack
```
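
To deploy this template from the command line, you can use the AWS CLI. The following is a minimal sketch that assumes the template is saved as `appstream-resources.yaml`; the stack name, subnet IDs, security group ID, and image name are placeholders for your own values, and how list parameter values are quoted can vary by shell.

```
aws cloudformation deploy \
  --stack-name appstream-test-resources \
  --template-file appstream-resources.yaml \
  --parameter-overrides \
      "SubnetIds=subnet-0123456789abcdef0,subnet-0fedcba9876543210" \
      "testSecurityGroup=sg-0123456789abcdef0" \
      "ImageName=my-appstream-image"
```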

# Create a custom log parser for Amazon ECS using a Firelens log router
<a name="create-a-custom-log-parser-for-amazon-ecs-using-a-firelens-log-router"></a>

*Varun Sharma, Amazon Web Services*

## Summary
<a name="create-a-custom-log-parser-for-amazon-ecs-using-a-firelens-log-router-summary"></a>

Firelens is a log router for Amazon Elastic Container Service (Amazon ECS) and AWS Fargate. You can use Firelens to route container logs from Amazon ECS to Amazon CloudWatch and other destinations (for example, [Splunk](https://www.splunk.com/) or [Sumo Logic](https://www.sumologic.com/)). Firelens works with [Fluentd](https://www.fluentd.org/) or [Fluent Bit](https://fluentbit.io/) as the logging agent, which means that you can use [Amazon ECS task definition parameters](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html) to route logs.

By choosing to parse logs at the source level, you can analyze your logging data and perform queries to more efficiently and effectively respond to operational issues. Because different applications have different logging patterns, you need to use a custom parser that structures the logs and makes searching easier at your end destination.

This pattern uses a Firelens log router with a custom parser to push logs to CloudWatch from a sample Spring Boot application running on Amazon ECS. You can then use Amazon CloudWatch Logs Insights to filter the logs based on custom fields that are generated by the custom parser. 

## Prerequisites and limitations
<a name="create-a-custom-log-parser-for-amazon-ecs-using-a-firelens-log-router-prereqs"></a>

**Prerequisites**
+ An active Amazon Web Services (AWS) account.
+ AWS Command Line Interface (AWS CLI), installed and configured on your local machine.
+ Docker, installed and configured on your local machine.
+ An existing Spring Boot-based containerized application on Amazon Elastic Container Registry (Amazon ECR). 

## Architecture
<a name="create-a-custom-log-parser-for-amazon-ecs-using-a-firelens-log-router-architecture"></a>

![\[Using a Firelens log router to push logs to CloudWatch from an application running on Amazon ECS.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/e82b4992-c4e0-4af5-b87e-cb0b1c1ed8c9/images/ef60e087-965a-40e9-9f80-35edbda2befe.png)


**Technology stack**
+ CloudWatch
+ Amazon ECR
+ Amazon ECS
+ Fargate
+ Docker
+ Fluent Bit

## Tools
<a name="create-a-custom-log-parser-for-amazon-ecs-using-a-firelens-log-router-tools"></a>
+ [Amazon ECR](https://docs.aws.amazon.com/AmazonECR/latest/userguide/what-is-ecr.html) – Amazon Elastic Container Registry (Amazon ECR) is an AWS managed container image registry service that is secure, scalable, and reliable.
+ [Amazon ECS](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html) – Amazon Elastic Container Service (Amazon ECS) is a highly scalable, fast container management service that makes it easy to run, stop, and manage containers on a cluster.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) – IAM is a web service for securely controlling access to AWS services.
+ [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) – AWS Command Line Interface (AWS CLI) is an open-source tool that enables you to interact with AWS services using commands in your command-line shell.
+ [Docker](https://www.docker.com/) – Docker is an open platform for developing, shipping, and running applications.

**Code**

The following files are attached to this pattern:
+ `customFluentBit.zip` – Contains the files to add the custom parsing and configurations.
+ `firelens_policy.json` – Contains the policy document to create an IAM policy.
+ `Task.json` – Contains a sample task definition for Amazon ECS.

## Epics
<a name="create-a-custom-log-parser-for-amazon-ecs-using-a-firelens-log-router-epics"></a>

### Create a custom Fluent Bit image
<a name="create-a-custom-fluent-bit-image"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an Amazon ECR repository. | Sign in to the AWS Management Console, open the Amazon ECR console, and create a repository called `fluentbit_custom`. For more information about this, see [Creating a repository](https://docs.aws.amazon.com/AmazonECR/latest/userguide/repository-create.html) in the Amazon ECR documentation. | Systems administrator, Developer | 
| Unzip the customFluentBit.zip package. |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-a-custom-log-parser-for-amazon-ecs-using-a-firelens-log-router.html) |  | 
| Create the custom Docker image. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-a-custom-log-parser-for-amazon-ecs-using-a-firelens-log-router.html) For more information about this, see [Pushing a Docker image](https://docs.aws.amazon.com/AmazonECR/latest/userguide/docker-push-ecr-image.html) in the Amazon ECR documentation. Example build and push commands follow this table. | Systems administrator, Developer | 
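
As a sketch of the build and push step, the following commands assume an AWS account ID of `111122223333`, the `us-east-1` Region, and that you run them from the unzipped `customFluentBit` directory that contains the Dockerfile; adjust these values for your environment.

```
# Authenticate Docker to your Amazon ECR registry.
aws ecr get-login-password --region us-east-1 | \
  docker login --username AWS --password-stdin 111122223333.dkr.ecr.us-east-1.amazonaws.com

# Build the custom Fluent Bit image and tag it for the fluentbit_custom repository.
docker build -t fluentbit_custom .
docker tag fluentbit_custom:latest 111122223333.dkr.ecr.us-east-1.amazonaws.com/fluentbit_custom:latest

# Push the image to Amazon ECR.
docker push 111122223333.dkr.ecr.us-east-1.amazonaws.com/fluentbit_custom:latest
```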

### Set up the Amazon ECS cluster
<a name="set-up-the-amazon-ecs-cluster"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an Amazon ECS cluster. | Create an Amazon ECS cluster by following the instructions from the *Networking only template* section of [Creating a cluster](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create_cluster.html) in the Amazon ECS documentation. Make sure that you choose **Create VPC** to create a new virtual private cloud (VPC) for your Amazon ECS cluster. | Systems administrator, Developer | 

### Set up the Amazon ECS task
<a name="set-up-the-amazon-ecs-task"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Set up the Amazon ECS task execution IAM role. | Create an Amazon ECS task execution IAM role by using the `AmazonECSTaskExecutionRolePolicy` managed policy. For more information about this, see [Amazon ECS task execution IAM role](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_execution_IAM_role.html) in the Amazon ECS documentation. Make sure that you record the IAM role’s Amazon Resource Name (ARN). | Systems administrator, Developer | 
|  Attach the IAM policy to the Amazon ECS task execution IAM role. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-a-custom-log-parser-for-amazon-ecs-using-a-firelens-log-router.html) | Systems administrator, Developer | 
| Set up the Amazon ECS task definition. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-a-custom-log-parser-for-amazon-ecs-using-a-firelens-log-router.html) For more information about this, see [Creating a task definition](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-task-definition.html) in the Amazon ECS documentation. Example AWS CLI commands for these tasks follow this table. | Systems administrator, Developer | 
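
The following is a hedged AWS CLI sketch of these tasks. It assumes a role named `ecsTaskExecutionRole-firelens`, a trust policy file named `trust-policy.json` that allows `ecs-tasks.amazonaws.com` to assume the role, and that the `firelens_policy.json` and `Task.json` files from this pattern's attachments are in the current directory and already updated with your image URIs and role ARN.

```
# Create the task execution role with a trust policy for Amazon ECS tasks.
aws iam create-role \
  --role-name ecsTaskExecutionRole-firelens \
  --assume-role-policy-document file://trust-policy.json

# Attach the managed execution role policy and add the pattern's custom policy.
aws iam attach-role-policy \
  --role-name ecsTaskExecutionRole-firelens \
  --policy-arn arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy
aws iam put-role-policy \
  --role-name ecsTaskExecutionRole-firelens \
  --policy-name firelens-custom-policy \
  --policy-document file://firelens_policy.json

# Register the task definition.
aws ecs register-task-definition --cli-input-json file://Task.json
```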

### Run the Amazon ECS task
<a name="run-the-amazon-ecs-task"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Run the Amazon ECS task. | On the Amazon ECS console, choose **Clusters**, choose the cluster that you created earlier, and then run the standalone task. For more information about this, see [Run a standalone task](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs_run_task.html) in the Amazon ECS documentation. An example CLI command follows this table. | Systems administrator, Developer | 
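
If you prefer the AWS CLI, the following is a minimal sketch of running the task on Fargate. The cluster name, task definition family, subnet ID, and security group ID are placeholders for the resources that you created earlier.

```
aws ecs run-task \
  --cluster firelens-demo-cluster \
  --launch-type FARGATE \
  --task-definition firelens-demo-task \
  --network-configuration 'awsvpcConfiguration={subnets=[subnet-0123456789abcdef0],securityGroups=[sg-0123456789abcdef0],assignPublicIp=ENABLED}'
```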

### Verify the CloudWatch logs
<a name="verify-the-cloudwatch-logs"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Verify the logs. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-a-custom-log-parser-for-amazon-ecs-using-a-firelens-log-router.html) An example CloudWatch Logs Insights query follows this table. | Systems administrator, Developer | 
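
To confirm that the custom parser is producing structured fields, you can query the log group with CloudWatch Logs Insights. The following is a sketch only; the log group name comes from the `logConfiguration` in your task definition, and the field names (such as `level`) depend on the parser that your custom Fluent Bit image defines. The `date` commands use GNU syntax.

```
# Start a Logs Insights query over the last 15 minutes, filtering on a parsed field.
aws logs start-query \
  --log-group-name /ecs/firelens-demo \
  --start-time $(date -d '15 minutes ago' +%s) \
  --end-time $(date +%s) \
  --query-string 'fields @timestamp, level, @message | filter level = "ERROR" | sort @timestamp desc | limit 20'

# Retrieve the results by using the query ID returned by the previous command.
aws logs get-query-results --query-id <query-id>
```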

## Related resources
<a name="create-a-custom-log-parser-for-amazon-ecs-using-a-firelens-log-router-resources"></a>
+ [Docker basics for Amazon ECS](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/docker-basics.html) 
+ [Amazon ECS on AWS Fargate](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/AWS_Fargate.html) 
+ [Configuring basic service parameters](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/basic-service-params.html) 

## Attachments
<a name="attachments-e82b4992-c4e0-4af5-b87e-cb0b1c1ed8c9"></a>

To access additional content that is associated with this document, unzip the following file: [attachment.zip](samples/p-attach/e82b4992-c4e0-4af5-b87e-cb0b1c1ed8c9/attachments/attachment.zip)

# Create an API-driven resource orchestration framework using GitHub Actions and Terragrunt
<a name="create-an-api-driven-resource-orchestration-framework-using-github-actions-and-terragrunt"></a>

*Tamilselvan P, Abhigyan Dandriyal, Sandeep Gawande, and Akash Kumar, Amazon Web Services*

## Summary
<a name="create-an-api-driven-resource-orchestration-framework-using-github-actions-and-terragrunt-summary"></a>

This pattern uses GitHub Actions workflows to automate resource provisioning through standardized JSON payloads, eliminating the need for manual configuration. This automated pipeline manages the complete deployment lifecycle and can integrate seamlessly with various frontend systems, from custom UI components to ServiceNow. The solution’s flexibility allows users to interact with the system through their preferred interfaces while maintaining standardized processes.

The configurable pipeline architecture can be adapted to meet different organizational requirements. The example implementation focuses on Amazon Virtual Private Cloud (Amazon VPC) and Amazon Simple Storage Service (Amazon S3) provisioning. The pattern effectively addresses common cloud resource management challenges by standardizing requests across the organization and providing consistent integration points. This approach makes it easier for teams to request and manage resources while ensuring standardization.

## Prerequisites and limitations
<a name="create-an-api-driven-resource-orchestration-framework-using-github-actions-and-terragrunt-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ An active GitHub account with access to the configured repository

**Limitations**
+ New resources require manual addition of `terragrunt.hcl` files to the repository configuration.
+ Some AWS services aren’t available in all AWS Regions. For Region availability, see [AWS Services by Region](https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/). For specific endpoints, see [Service endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/aws-service-information.html), and choose the link for the service.

## Architecture
<a name="create-an-api-driven-resource-orchestration-framework-using-github-actions-and-terragrunt-architecture"></a>

The following diagram shows the components and workflow of this pattern.

![\[Workflow to automate resource provisioning with GitHub Actions and Terraform.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/bff5d70e-e8f1-454a-94bc-60e8cc16e69f/images/d4a768c8-4e11-493c-85ed-f4bf7e76ce60.png)


The architecture diagram shows the following actions:

1. The user submits a JSON payload to GitHub Actions, triggering the automation pipeline.

1. The GitHub Actions pipeline retrieves the required resources code from the Terragrunt and Terraform repositories, based on the payload specifications.

1. The pipeline assumes the appropriate AWS Identity and Access Management (IAM) role using the specified AWS account ID. Then, the pipeline deploys the resources to the target AWS account and manages Terraform state using the account-specific Amazon S3 bucket and Amazon DynamoDB table.

Each AWS account contains IAM roles for secure access, an Amazon S3 bucket for Terraform state storage, and a DynamoDB table for state locking. This design enables controlled, automated resource deployment across AWS accounts. The deployment process maintains proper state management and access control through dedicated Amazon S3 buckets and IAM roles in each account.

## Tools
<a name="create-an-api-driven-resource-orchestration-framework-using-github-actions-and-terragrunt-tools"></a>

**AWS services**
+ [Amazon DynamoDB](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Introduction.html) is a fully managed NoSQL database service that provides fast, predictable, and scalable performance.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.
+ [Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) is a cloud-based object storage service that helps you store, protect, and retrieve any amount of data.
+ [Amazon Virtual Private Cloud (Amazon VPC)](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html) helps you launch AWS resources into a virtual network that you’ve defined. This virtual network resembles a traditional network that you’d operate in your own data center, with the benefits of using the scalable infrastructure of AWS.

**Other tools**
+ [GitHub Actions](https://docs.github.com/en/actions) is a continuous integration and continuous delivery (CI/CD) platform that’s tightly integrated with GitHub repositories. You can use GitHub Actions to automate your build, test, and deployment pipeline.
+ [Terraform](https://www.terraform.io/) is an infrastructure as code (IaC) tool from HashiCorp that helps you create and manage cloud and on-premises resources.
+ [Terragrunt](https://terragrunt.gruntwork.io/docs/getting-started/overview/) is an orchestration tool that extends both OpenTofu and Terraform capabilities. It manages how generic infrastructure patterns are applied, making it easier to scale and maintain large infrastructure estates.

**Code repository**

The code for this pattern is available in the GitHub [sample-aws-orchestration-pipeline-terraform](https://github.com/aws-samples/sample-aws-orchestration-pipeline-terraform) repository.

## Best practices
<a name="create-an-api-driven-resource-orchestration-framework-using-github-actions-and-terragrunt-best-practices"></a>
+ Store AWS credentials and sensitive data using GitHub repository secrets for secure access.
+ Configure the OpenID Connect (OIDC) provider for GitHub Actions to assume the IAM role, avoiding static credentials. An example command for creating the provider follows this list.
+ Follow the principle of least privilege and grant the minimum permissions required to perform a task. For more information, see [Grant least privilege](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html#grant-least-priv) and [Security best practices](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html) in the IAM documentation.
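
As a sketch of the OIDC setup, the following AWS CLI command creates the GitHub Actions OIDC identity provider in the target account. The thumbprint shown is a placeholder that you should replace with the current value published by GitHub, and the trust policy of the deployment IAM role must then reference this provider and restrict the allowed repository.

```
aws iam create-open-id-connect-provider \
  --url https://token.actions.githubusercontent.com \
  --client-id-list sts.amazonaws.com \
  --thumbprint-list 0000000000000000000000000000000000000000
```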

## Epics
<a name="create-an-api-driven-resource-orchestration-framework-using-github-actions-and-terragrunt-epics"></a>

### Create and configure repository
<a name="create-and-configure-repository"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Initialize the GitHub repository. | To initialize the GitHub repository, use the following steps:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-an-api-driven-resource-orchestration-framework-using-github-actions-and-terragrunt.html) A hedged command sketch follows this table. | DevOps engineer | 
| Configure the IAM roles and permissions. | To configure the IAM roles and permissions, use the following steps:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-an-api-driven-resource-orchestration-framework-using-github-actions-and-terragrunt.html) | DevOps engineer | 
| Set up GitHub secrets and variables. | For instructions about how to set up repository secrets and variables in the GitHub repository, see [Creating configuration variables for a repository](https://docs.github.com/en/actions/how-tos/write-workflows/choose-what-workflows-do/use-variables#creating-configuration-variables-for-a-repository) in the GitHub documentation. Configure the following variables:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-an-api-driven-resource-orchestration-framework-using-github-actions-and-terragrunt.html) | DevOps engineer | 
| Create the repository structure. | To create the repository structure, use the following steps:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-an-api-driven-resource-orchestration-framework-using-github-actions-and-terragrunt.html) | DevOps engineer | 
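
As a starting point, you can seed your repository from the pattern's sample code. The following sketch assumes that you have already created an empty GitHub repository named `orchestration-pipeline` under `your-org` and that the default branch is `main`; replace these placeholders with your own values.

```
# Clone the sample orchestration repository and push it to your own repository.
git clone https://github.com/aws-samples/sample-aws-orchestration-pipeline-terraform.git
cd sample-aws-orchestration-pipeline-terraform
git remote set-url origin https://github.com/your-org/orchestration-pipeline.git
git push -u origin main
```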

### Trigger the pipeline and validate results
<a name="trigger-the-pipeline-and-validate-results"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Execute the pipeline using curl.  | To execute the pipeline by using [curl](https://curl.se/), use the following steps:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-an-api-driven-resource-orchestration-framework-using-github-actions-and-terragrunt.html)For more information about the pipeline execution process, see [Additional information](#create-an-api-driven-resource-orchestration-framework-using-github-actions-and-terragrunt-additional). | DevOps engineer | 
| Validate the results of the pipeline execution. | To validate the results, use the following steps:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-an-api-driven-resource-orchestration-framework-using-github-actions-and-terragrunt.html) You can also cross-verify the created resources by using the `output.json` file that is created in the repository, in the same resource folder as the `terragrunt.hcl` file. | DevOps engineer | 

### Clean up resources
<a name="clean-up-resources"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Submit a cleanup request. | To delete resources that are no longer required, use the following steps:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-an-api-driven-resource-orchestration-framework-using-github-actions-and-terragrunt.html) | DevOps engineer | 

## Related resources
<a name="create-an-api-driven-resource-orchestration-framework-using-github-actions-and-terragrunt-resources"></a>

**AWS Blogs**
+ [Use IAM roles to connect GitHub Actions to actions in AWS](https://aws.amazon.com/blogs/security/use-iam-roles-to-connect-github-actions-to-actions-in-aws/)

**AWS service documentation**
+ [IAM role creation](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create.html)
+ [Monitoring CloudTrail log files with Amazon CloudWatch Logs](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/monitor-cloudtrail-log-files-with-cloudwatch-logs.html)
+ [Security best practices for Amazon S3](https://docs.aws.amazon.com/AmazonS3/latest/userguide/security-best-practices.html)

**GitHub resources**
+ [Create a repository dispatch event](https://docs.github.com/en/rest/repos/repos?apiVersion=2022-11-28#create-a-repository-dispatch-event)
+ [Creating webhooks](https://docs.github.com/en/webhooks/using-webhooks/creating-webhooks#payload)
+ [Implement strong access controls on the GitHub repository](https://docs.github.com/en/get-started/learning-about-github/access-permissions-on-github)
+ [Regularly audit repository access](https://docs.github.com/en/organizations/keeping-your-organization-secure/managing-security-settings-for-your-organization)
+ [Security checks in the CI/CD pipeline](https://github.com/marketplace/actions/checkov-github-action)
+ [Use multi-factor authentication for GitHub accounts](https://docs.github.com/en/authentication/securing-your-account-with-two-factor-authentication-2fa/configuring-two-factor-authentication)

## Additional information
<a name="create-an-api-driven-resource-orchestration-framework-using-github-actions-and-terragrunt-additional"></a>

**Pipeline execution process**

Following are the steps of the pipeline execution:

1. **Validates JSON payload format** - Ensures that the incoming JSON configuration is properly structured and contains all required parameters

1. **Assumes specified IAM role** - Authenticates and assumes the required IAM role for AWS operations

1. **Downloads required Terraform and Terragrunt code** - Retrieves the specified version of resource code and dependencies

1. **Executes resource deployment** - Applies the configuration to deploy or update AWS resources in the target environment

**Sample payload used for VPC creation**

Following is example code for Terraform backend state bucket creation:

```
state_bucket_name = "${local.payload.ApplicationName}-${local.payload.EnvironmentId}-tfstate"
```

```
lock_table_name = "${local.payload.ApplicationName}-${local.payload.EnvironmentId}-tfstate-lock"
```

Following is an example payload for creating a VPC with Amazon VPC, where `vpc_cidr` defines the [CIDR block](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-cidr-blocks.html) specifications for the VPC. The Terraform state bucket is mapped to a variable defined in the Terraform files. The `ref` parameter contains the branch name of the code to execute.

```
{
    "ref": "main",
    "inputs": {
        "RequestParameters": {
            "RequestId": "1111111",
            "RequestType": "create",
            "ResourceType": "vpc",
            "AccountId": "1234567890",
            "AccountAlias": "account-alias",
            "RegionId": "us-west-2",
            "ApplicationName": "myapp",
            "DivisionName": "division-name",
            "EnvironmentId": "dev",
            "Suffix": "poc"
        },
        "ResourceParameters": [
            {
                "VPC": {
                    "vpc_cidr": "10.0.0.0/16"
                }
            }
        ]
    }
}
```

`RequestParameters` are used to track the request status in the pipeline, and the `tfstate` file is created based on this information. The following parameters contain metadata and control information:
+ `RequestId` – Unique identifier for the request
+ `RequestType` – Type of operation (create, update, or delete)
+ `ResourceType` – Type of resource to be provisioned
+ `AccountId` – Target AWS account for deployment
+ `AccountAlias` – Friendly name for the AWS account
+ `RegionId` – AWS Region for resource deployment
+ `ApplicationName` – Name of the application
+ `DivisionName` – Organization division
+ `EnvironmentId` – Environment (for example, dev and prod)
+ `Suffix` – Additional identifier for the resources

`ResourceParameters` contain resource-specific configuration that maps to variables defined in the Terraform files. Any custom variables that need to be passed to the Terraform modules should be included in `ResourceParameters`. The parameter `vpc_cidr` is mandatory for Amazon VPC.
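
The following curl sketch shows one way to submit a payload like the preceding example to GitHub Actions. It assumes that the orchestration workflow is exposed through the workflow dispatch API in a file named `orchestration.yml`, that the `GITHUB_TOKEN` environment variable holds a token with permission to run workflows, and that the payload is saved as `payload.json`. The exact endpoint and payload wrapping depend on how the workflow in your repository is configured.

```
curl -X POST \
  -H "Accept: application/vnd.github+json" \
  -H "Authorization: Bearer ${GITHUB_TOKEN}" \
  https://api.github.com/repos/your-org/orchestration-pipeline/actions/workflows/orchestration.yml/dispatches \
  -d @payload.json
```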

# Create automated pull requests for Terraform-managed AWS infrastructure by using GitHub Actions
<a name="create-automated-pull-requests-for-terraform-managed-aws-infrastructure"></a>

*Matt Padgett, Ashish Bhatt, Ashwin Divakaran, Sandip Gangapadhyay, and Prafful Gupta, Amazon Web Services*

## Summary
<a name="create-automated-pull-requests-for-terraform-managed-aws-infrastructure-summary"></a>

This pattern presents an automation utility that’s designed to eliminate the manual, repetitive work involved in managing changes across multiple Terraform repositories. Many organizations use Terraform repositories to manage their infrastructure as code (IaC), often with hundreds of separate repositories representing different environments, services, or teams. Managing these repositories at scale presents a significant operational challenge. Routine tasks such as updating a parameter, upgrading module versions, or applying configuration changes often require creating and managing pull requests (PRs) across many repositories multiple times a day.

Even for simple changes, this repetitive and manual process is time-consuming and error-prone. Engineers must consistently apply the same change across all targeted repositories and craft meaningful PR titles and descriptions. In addition, they often must interact with external tools such as Jira to fetch or include issue tracking references. These tasks, while necessary, are undifferentiated heavy lifting that consumes valuable engineering time and reduces overall efficiency. The lack of automation in this workflow creates friction, slows down delivery, and increases the cognitive burden on teams tasked with maintaining large-scale Terraform infrastructures.

**Solution overview**

To address this challenge, this pattern offers a utility that’s entirely configuration-driven, allowing users to define their desired changes in a structured configuration file. This file specifies the target repositories, modules, parameters, and values using a clearly defined schema.

Once configured, the utility performs the following automated steps:

1. **Reads the user-defined configuration** to determine the scope and nature of changes

1. **Creates a new branch** in each target repository with the required updates applied

1. **Generates a PR** for each change, ensuring consistency across all repositories

1. **Sends Slack notifications** (optional) to alert stakeholders with direct links to the created PRs

By automating these repetitive tasks, the utility significantly reduces the time, effort, and risk associated with managing large-scale infrastructure updates. It enables teams to focus on higher-value engineering work while helping to ensure that changes are applied consistently and can be traced across all repositories.

## Prerequisites and limitations
<a name="create-automated-pull-requests-for-terraform-managed-aws-infrastructure-prereqs"></a>

**Prerequisites**
+ An active AWS account.
+ Python version 3.8 or later.
+ A GitHub personal access token (PAT). For more information, see [Creating a personal access token (classic)](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens#creating-a-personal-access-token-classic) in the GitHub documentation.
+ The GitHub PAT must have access to your target repositories so that the utility can perform operations such as creating branches and pull requests. For more information, see this pattern’s GitHub [code repository](https://github.com/aws-samples/sample-terraform-pr-automation-utility?tab=readme-ov-file#repository-access-verification).

**Limitations**
+ **Configuration complexity** presents the primary challenge. The automation's effectiveness is constrained by the capabilities of its configuration file. Although the system handles standard changes efficiently, complex infrastructure modifications might require manual intervention, and certain edge cases remain beyond the scope of automated handling.
+ **Security and access** present significant considerations, particularly in managing GitHub access tokens and API rate limits. Organizations must carefully balance the need for automation with secure credential storage and management, ensuring proper access controls while maintaining operational efficiency.
+ **Validation constraints** pose another notable limitation because the automated system has limited ability to validate business logic and environment-specific requirements. Complex dependencies and cross-service interactions often necessitate human oversight because automated validation can’t fully capture all contextual nuances and business rules.
+ **Scale and performance** issues emerge when dealing with large-scale infrastructure changes. The system must operate within GitHub API limits while managing numerous repositories simultaneously. Resource-intensive operations across extensive infrastructure can create performance bottlenecks that require careful management.
+ **Integration boundaries** restrict the system's flexibility because it's primarily designed to work with specific tools such as GitHub and Slack. Organizations that use different tools might need custom solutions, and the workflow customization options of this pattern are limited to supported integration points.

## Architecture
<a name="create-automated-pull-requests-for-terraform-managed-aws-infrastructure-architecture"></a>

The following diagram shows the workflow and components for this solution.

![\[Workflow to create automated pull requests using GitHub Actions.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/e211359a-03b1-4e69-b152-eb7c09bdb01a/images/6cee0660-5b44-4abe-970c-c0a3c830a9aa.png)


The workflow consists of the following steps:

1. The developer triggers GitHub Actions by specifying the Terraform repository.

1. The automation utility reads the defined configurations.

1. The automation utility also pulls the provided Terraform repository.

1. The automation utility creates a new branch and makes updates to Terraform templates locally.

1. The automation utility pushes the new branch to the repository and creates a new PR. 

1. The automation utility uses Slack notifications that include PR links to notify developers and enables Terraform templates for AWS Cloud deployment.

## Tools
<a name="create-automated-pull-requests-for-terraform-managed-aws-infrastructure-tools"></a>
+ [GitHub](https://docs.github.com/) is a developer platform that developers can use to create, store, manage, and share their code.
+ [GitHub Actions](https://docs.github.com/en/actions) is a continuous integration and continuous delivery (CI/CD) platform that’s tightly integrated with GitHub repositories. You can use GitHub Actions to automate your build, test, and deployment pipeline.
+ [HashiCorp Terraform](https://www.terraform.io/) is an infrastructure as code (IaC) tool that helps you create and manage cloud and on-premises resources.
+ [Slack](https://slack.com/help/articles/115004071768-What-is-Slack-), a Salesforce offering, is an AI-powered conversational platform that provides chat and video collaboration, automates processes with no code, and supports information sharing.

**Code repository**

The code for this pattern is available in the GitHub [Automated Terraform Infrastructure Update Workflow using GitHub Actions](https://github.com/aws-samples/sample-terraform-pr-automation-utility?tab=readme-ov-file) repository.

## Best practices
<a name="create-automated-pull-requests-for-terraform-managed-aws-infrastructure-best-practices"></a>
+ Effective **change management** is crucial for successful implementation. Organizations should adopt a gradual rollout strategy for large-scale changes. Maintain consistent branch naming conventions and PR descriptions and ensure comprehensive documentation of all changes.
+ **Security controls** must be rigorously implemented, focusing on least-privilege access principles and secure credential management. Enable branch protection rules to prevent unauthorized changes. Conduct regular security audits to maintain system integrity.
+ A robust **testing protocol** should include automated `terraform plan` execution in continuous integration and continuous deployment (CI/CD) pipelines. The protocol should also include pre-commit validation checks, and dedicated review environments for critical changes. This multi-layered testing approach helps catch issues early and ensures infrastructure stability.
+ **Monitoring strategy** needs to encompass comprehensive alerting mechanisms, detailed success/failure metrics tracking, and automated retry mechanisms for failed operations. This strategy helps to ensure operational visibility and enables quick response to any issues that arise.
+ **Configuration standards** should emphasize version control for all configurations, maintaining modularity for reusability and scalability. Clear documentation of schema and examples helps teams understand and use the automation system effectively.

## Epics
<a name="create-automated-pull-requests-for-terraform-managed-aws-infrastructure-epics"></a>

### Installation and setup
<a name="installation-and-setup"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Set up the repository. | To set up the repository, run the following commands:<pre># Clone the automation tool repository<br />git clone https://github.com/aws-samples/sample-terraform-pr-automation-utility<br />cd sample-terraform-pr-automation-utility<br /><br /># Copy example configuration<br />cp config.example.yaml config.yaml<br /></pre> | AWS DevOps | 
| Install dependencies. | To install and verify the Python dependencies, run the following commands:<pre># Install Python dependencies<br />pip3 install -r requirements.txt<br /><br /># Verify installation<br />python3 -c "import github; import hcl2; import yaml; import requests; print('All packages installed successfully')"<br /></pre> | AWS DevOps | 
| Configure the GitHub token. | To configure the GitHub token and then verify that it works, run the following commands:<pre># Set GitHub token environment variable<br />export GITHUB_TOKEN="your_github_token_here"<br /><br /># Verify token works<br />curl -H "Authorization: token $GITHUB_TOKEN" https://api.github.com/user<br /></pre> | AWS DevOps | 

### Set up configuration file for Terraform changes
<a name="set-up-configuration-file-for-terraform-changes"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Set up the `config.yaml` file. | To define your target repositories and desired changes, edit the `config.yaml` file as follows:<pre>repositories:<br />  - owner: "your-org"<br />    repo: "your-terraform-repo"<br />    files:<br />      - path: "variables.tf"<br />        changes:<br />          variables:<br />            - app_version:<br />                default:<br />                  update:<br />                    - from: ["1.0.0"]<br />                      to: "1.1.0"<br /><br />settings:<br />  pr_title_template: "Infrastructure Update - {{timestamp}}"<br />  slack:<br />    username: "Terraform Bot"<br />    icon_emoji: ":terraform:"<br />    notify_on_success: true<br />    notify_on_error: true<br />    notify_batch_summary: true<br /></pre> | AWS DevOps | 

### Test and validate
<a name="test-and-validate"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Do pre-flight testing. | Always test your configuration before running it on production repositories. Use the following commands:<pre># 1. Test configuration syntax<br />python3 -c "from main import get_config_content; get_config_content()"<br /><br /># 2. Run in dry-run mode first<br />DRY_RUN=true python3 main.py<br /><br /># 3. Test with minimal configuration<br /># Use a simple config.yaml with just one repository and one change</pre> | AWS DevOps | 
| Verify repository access. | To verify that the GitHub token can access the repository, run the following command:<pre># Test GitHub token access<br />curl -H "Authorization: token $GITHUB_TOKEN" \<br />  https://api.github.com/repos/owner/repo-name<br /><br /># Should return repository information, not 404</pre> | AWS DevOps | 

### Run the automation utility
<a name="run-the-automation-utility"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Run the automation utility by using the GitHub Actions UI. | To run the automation utility using the GitHub Actions UI, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-automated-pull-requests-for-terraform-managed-aws-infrastructure.html) | AWS DevOps | 
| (Alternative) Run the automation utility from the command line. | If you prefer, you can run the automation utility from the command line instead of by using the GitHub Actions UI. Use the following command:<pre># Run actual automation<br />python3 main.py</pre> | AWS DevOps | 

### Validate PRs and changes
<a name="validate-prs-and-changes"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Review the created PRs and changes. | To monitor the results of the GitHub workflow execution, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-automated-pull-requests-for-terraform-managed-aws-infrastructure.html) | AWS DevOps | 

### Clean up
<a name="clean-up"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| (Optional) Clean up PRs. | Close abandoned or unnecessary PRs. A hedged GitHub CLI sketch follows this table. | AWS DevOps | 
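
If you use the GitHub CLI, the following is a minimal sketch for finding and closing PRs that the utility created. The repository name is a placeholder, the search term assumes the default PR title template shown earlier, and the PR number is illustrative.

```
# List open PRs created by the automation, then close one that is no longer needed.
gh pr list --repo your-org/your-terraform-repo --state open --search "Infrastructure Update"
gh pr close 42 --repo your-org/your-terraform-repo --comment "Superseded; closing automated PR."
```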

## Related resources
<a name="create-automated-pull-requests-for-terraform-managed-aws-infrastructure-resources"></a>

**AWS Prescriptive Guidance**
+ [Using Terraform as an IaC tool for the AWS Cloud](https://docs.aws.amazon.com/prescriptive-guidance/latest/choose-iac-tool/terraform.html)

**GitHub documentation**
+ [About pull requests](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/about-pull-requests)
+ [Managing your personal access tokens](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens)
+ [Understanding GitHub Actions](https://docs.github.com/en/actions/get-started/understand-github-actions)
+ [Quickstart for GitHub Actions](https://docs.github.com/en/actions/get-started/quickstart)

# Create dynamic CI pipelines for Java and Python projects automatically
<a name="create-dynamic-ci-pipelines-for-java-and-python-projects-automatically"></a>

*Aromal Raj Jayarajan, Vijesh Vijayakumaran Nair, MAHESH RAGHUNANDANAN, and Amarnath Reddy, Amazon Web Services*

## Summary
<a name="create-dynamic-ci-pipelines-for-java-and-python-projects-automatically-summary"></a>

This pattern shows how to create dynamic continuous integration (CI) pipelines for Java and Python projects automatically by using AWS developer tools.

As technology stacks diversify and development activities increase, it can become difficult to create and maintain CI pipelines that are consistent across an organization. By automating the process in AWS Step Functions, you can make sure that your CI pipelines are consistent in their usage and approach.

To automate the creation of dynamic CI pipelines, this pattern uses the following variable inputs:
+ Programming language (Java or Python only)
+ Pipeline name
+ Required pipeline stages

**Note**  
Step Functions orchestrates pipeline creation by using multiple AWS services. For more information about the AWS services used in this solution, see the **Tools** section of this pattern.

## Prerequisites and limitations
<a name="create-dynamic-ci-pipelines-for-java-and-python-projects-automatically-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ An Amazon S3 bucket in the same AWS Region where this solution is deployed
+ An AWS Identity and Access Management (IAM) [principal](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_principal.html) that has the AWS CloudFormation permissions required to create the resources needed for this solution

**Limitations**
+ This pattern supports Java and Python projects only.
+ The IAM roles provisioned in this pattern follow the principle of least privilege. The IAM roles’ permissions must be updated based on the specific resources that your CI pipeline needs to create.

## Architecture
<a name="create-dynamic-ci-pipelines-for-java-and-python-projects-automatically-architecture"></a>

**Target technology stack**
+ AWS CloudFormation
+ AWS CodeBuild
+ AWS CodeCommit
+ AWS CodePipeline
+ IAM
+ Amazon Simple Storage Service (Amazon S3)
+ AWS Systems Manager
+ AWS Step Functions
+ AWS Lambda
+ Amazon DynamoDB

**Target architecture**

The following diagram shows an example workflow for creating dynamic CI pipelines for Java and Python projects automatically by using AWS developer tools.

![\[Workflow to create dynamic CI pipelines for Java and Python projects automatically using AWS tools.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/bef2ccb8-68b3-4c0f-9ee7-4b93e9422d9c/images/b5ed003f-cf16-4130-8bfb-2bc2cb9a0d33.png)


The diagram shows the following workflow:

1. An AWS user provides the input parameters for CI pipeline creation in JSON format. This input starts a Step Functions workflow (*state machine*) that creates a CI pipeline by using AWS developer tools. An example command for starting the workflow follows this list.

1. A Lambda function reads a folder named **input-reference**, which is stored in an Amazon S3 bucket, and then generates a **buildspec.yml** file. This generated file defines the CI pipeline stages and is stored back in the same Amazon S3 bucket that stores the parameter references.

1. Step Functions checks the CI pipeline creation workflow’s dependencies for any changes, and updates the dependencies stack as needed.

1. Step Functions creates the CI pipeline resources in a CloudFormation stack, including a CodeCommit repository, CodeBuild project, and a CodePipeline pipeline.

1. The CloudFormation stack copies the sample source code for the selected technology stack (Java or Python) and the **buildspec.yml** file to the CodeCommit repository.

1. CI pipeline runtime details are stored in a DynamoDB table.
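
As a sketch of step 1, the following command starts the Step Functions state machine with an example input. The state machine ARN and the input field names shown here (`language`, `pipelineName`, `stages`) are hypothetical placeholders; the actual input schema is defined by the templates in this pattern's code repository.

```
aws stepfunctions start-execution \
  --state-machine-arn arn:aws:states:us-east-1:111122223333:stateMachine:ci-pipeline-creator \
  --input '{"language": "java", "pipelineName": "demo-service-ci", "stages": ["build", "unit-test", "package"]}'
```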

**Automation and scale**
+ This pattern is for use in a single development environment only. Configuration changes are required for use across multiple development environments.
+ To add support for more than one CloudFormation stack, you can create additional CloudFormation templates. For more information, see [Getting started with AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/GettingStarted.html) in the CloudFormation documentation.

## Tools
<a name="create-dynamic-ci-pipelines-for-java-and-python-projects-automatically-tools"></a>

**Tools**
+ [AWS Step Functions](https://docs.aws.amazon.com/step-functions/latest/dg/welcome.html) is a serverless orchestration service that helps you combine AWS Lambda functions and other AWS services to build business-critical applications.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a compute service that helps you run code without needing to provision or manage servers. It runs your code only when needed and scales automatically, so you pay only for the compute time that you use.
+ [AWS CodeBuild](https://docs.aws.amazon.com/codebuild/latest/userguide/welcome.html) is a fully managed build service that helps you compile source code, run unit tests, and produce artifacts that are ready to deploy.
+ [AWS CodeCommit](https://docs.aws.amazon.com/codecommit/latest/userguide/welcome.html) is a version control service that helps you privately store and manage Git repositories, without needing to manage your own source control system.
+ [AWS CodePipeline](https://docs.aws.amazon.com/codepipeline/latest/userguide/welcome.html) helps you quickly model and configure the different stages of a software release and automate the steps required to release software changes continuously.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.
+ [AWS Key Management Service (AWS KMS)](https://docs.aws.amazon.com/kms/latest/developerguide/overview.html) helps you create and control cryptographic keys to help protect your data.
+ [Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) is a cloud-based object storage service that helps you store, protect, and retrieve any amount of data.
+ [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) helps you set up AWS resources, provision them quickly and consistently, and manage them throughout their lifecycle across AWS accounts and Regions.
+ [Amazon DynamoDB](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Introduction.html) is a fully managed NoSQL database service that provides fast, predictable, and scalable performance.
+ [AWS Systems Manager Parameter Store](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-parameter-store.html) provides secure, hierarchical storage for configuration data management and secrets management.

**Code**

The code for this pattern is available in the GitHub [automated-ci-pipeline-creation](https://github.com/aws-samples/automated-ci-pipeline-creation) repository. The repository contains the CloudFormation templates required to create the target architecture outlined in this pattern.

## Best practices
<a name="create-dynamic-ci-pipelines-for-java-and-python-projects-automatically-best-practices"></a>
+ Don’t enter credentials (*secrets*) such as tokens or passwords directly into CloudFormation templates or Step Functions action configurations. If you do, the information will be displayed in the DynamoDB logs. Instead, use AWS Secrets Manager to set up and store secrets. Then, reference the secrets stored in Secrets Manager within the CloudFormation templates and Step Functions action configurations as needed. For more information, see [What is AWS Secrets Manager](https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html) in the Secrets Manager documentation.
+ Configure server-side encryption for CodePipeline artifacts stored in Amazon S3. For more information, see [Configure server-side encryption for artifacts stored in Amazon S3 for CodePipeline](https://docs.aws.amazon.com/codepipeline/latest/userguide/S3-artifact-encryption.html) in the CodePipeline documentation.
+ Apply least-privilege permissions when configuring IAM roles. For more information, see [Apply least-privilege permissions](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#grant-least-privilege) in the IAM documentation.
+ Make sure that your Amazon S3 bucket is not publicly accessible. For more information, see [Configuring block public access setting for your S3 buckets](https://docs.aws.amazon.com/AmazonS3/latest/userguide/configuring-block-public-access-bucket.html) in the Amazon S3 documentation.
+ Make sure that you activate versioning for your Amazon S3 bucket. For more information, see [Using versioning in S3 buckets](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Versioning.html) in the Amazon S3 documentation.
+ Use IAM Access Analyzer when configuring IAM policies. The tool provides actionable recommendations to help you author secure and functional IAM policies. For more information, see [Using AWS Identity and Access Management Access Analyzer](https://docs.aws.amazon.com/IAM/latest/UserGuide/what-is-access-analyzer.html) in the IAM documentation.
+ When possible, define specific access conditions when configuring IAM policies.
+ Activate Amazon CloudWatch logging for monitoring and auditing purposes. For more information, see [What is Amazon CloudWatch Logs?](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html) in the CloudWatch documentation.

## Epics
<a name="create-dynamic-ci-pipelines-for-java-and-python-projects-automatically-epics"></a>

### Configure the prerequisites
<a name="configure-the-prerequisites"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an Amazon S3 bucket. | Create an Amazon S3 bucket (or use an existing bucket) to store the required CloudFormation templates, source code, and input files for the solution. For more information, see [Step 1: Create your first S3 bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/creating-bucket.html) in the Amazon S3 documentation. The Amazon S3 bucket must be in the same AWS Region that you’re deploying the solution to. | AWS DevOps | 
| Clone the GitHub repository. | Clone the GitHub [automated-ci-pipeline-creation](https://github.com/aws-samples/automated-ci-pipeline-creation) repository by running the following command in a terminal window:<pre>git clone https://github.com/aws-samples/automated-ci-pipeline-creation.git</pre>For more information, see [Cloning a repository](https://docs.github.com/en/repositories/creating-and-managing-repositories/cloning-a-repository) in the GitHub documentation. | AWS DevOps | 
| Upload the Solution Templates folder from the cloned GitHub repository to your Amazon S3 bucket. | Copy the contents from the cloned **Solution-Templates** folder and upload them into the Amazon S3 bucket that you created. For more information, see [Uploading objects](https://docs.aws.amazon.com/AmazonS3/latest/userguide/upload-objects.html) in the Amazon S3 documentation. Make sure that you upload the contents of the **Solution-Templates** folder only, and upload the files at the Amazon S3 bucket’s root level only. | AWS DevOps | 

### Deploy the solution
<a name="deploy-the-solution"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a CloudFormation stack to deploy the solution by using the template.yml file in the cloned GitHub repository. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-dynamic-ci-pipelines-for-java-and-python-projects-automatically.html) While your stack is being created, it’s listed on the **Stacks** page with a status of **CREATE_IN_PROGRESS**. Make sure that you wait for the stack’s status to change to **CREATE_COMPLETE** before completing the remaining steps in this pattern. | AWS administrator, AWS DevOps | 

### Test the setup
<a name="test-the-setup"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Run the step function that you created. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-dynamic-ci-pipelines-for-java-and-python-projects-automatically.html) **JSON input format**<pre>{<br />  "details": {<br />    "tech_stack": "Technology stack (python or java)",<br />    "project_name": "Name of the project that you want to create",<br />    "pre_build": "Whether to include this stage in the buildspec.yml file (yes or no)",<br />    "build": "Whether to include this stage in the buildspec.yml file (yes or no)",<br />    "post_build": "Whether to include this stage in the buildspec.yml file (yes or no)",<br />    "reports": "Whether to include this section in the buildspec.yml file (yes or no)"<br />  }<br />}</pre>**Java JSON input example**<pre>{<br />  "details": {<br />    "tech_stack": "java",<br />    "project_name": "pipeline-java-pjt",<br />    "pre_build": "yes",<br />    "build": "yes",<br />    "post_build": "yes",<br />    "reports": "yes"<br />  }<br />}</pre>**Python JSON input example**<pre>{<br />  "details": {<br />    "tech_stack": "python",<br />    "project_name": "pipeline-python-pjt",<br />    "pre_build": "yes",<br />    "build": "yes",<br />    "post_build": "yes",<br />    "reports": "yes"<br />  }<br />}</pre> | AWS administrator, AWS DevOps | 
| Confirm that the CodeCommit repository for the CI pipeline was created. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-dynamic-ci-pipelines-for-java-and-python-projects-automatically.html) | AWS DevOps | 
| Check the CodeBuild project resources. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-dynamic-ci-pipelines-for-java-and-python-projects-automatically.html) | AWS DevOps | 
| Validate the CodePipeline stages. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-dynamic-ci-pipelines-for-java-and-python-projects-automatically.html) | AWS DevOps | 
| Confirm that the CI pipeline ran successfully. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-dynamic-ci-pipelines-for-java-and-python-projects-automatically.html) | AWS DevOps | 

### Clean up your resources
<a name="clean-up-your-resources"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Delete the resources stack in CloudFormation. | Delete the CI pipeline’s resources stack in CloudFormation. For more information, see [Deleting a stack on the AWS CloudFormation console](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-console-delete-stack.html) in the CloudFormation documentation. Make sure that you delete the stack named **<project_name>-stack**. | AWS DevOps | 
| Delete the CI pipeline’s dependencies in Amazon S3 and CloudFormation. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-dynamic-ci-pipelines-for-java-and-python-projects-automatically.html) Make sure that you delete the stack named **pipeline-creation-dependencies-stack**. | AWS DevOps | 
| Delete the Amazon S3 template bucket. | Delete the Amazon S3 bucket that you created in the **Configure the prerequisites** section of this pattern, which stores the templates for this solution. For more information, see [Deleting a bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/delete-bucket.html) in the Amazon S3 documentation. | AWS DevOps | 

## Related resources
<a name="create-dynamic-ci-pipelines-for-java-and-python-projects-automatically-resources"></a>
+ [Creating a Step Functions state machine that uses Lambda](https://docs.aws.amazon.com/step-functions/latest/dg/tutorial-creating-lambda-state-machine.html) (AWS Step Functions documentation)
+ [AWS Step Functions Workflow Studio](https://docs.aws.amazon.com/step-functions/latest/dg/workflow-studio.html) (AWS Step Functions documentation)
+ [DevOps and AWS](https://aws.amazon.com/devops/)
+ [How does AWS CloudFormation work?](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-whatis-howdoesitwork.html) (AWS CloudFormation documentation)
+ [Complete CI/CD with AWS CodeCommit, AWS CodeBuild, AWS CodeDeploy, and AWS CodePipeline](https://aws.amazon.com/blogs/devops/complete-ci-cd-with-aws-codecommit-aws-codebuild-aws-codedeploy-and-aws-codepipeline/) (AWS blog post)
+ [IAM and AWS STS quotas, name requirements, and character limits](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_iam-quotas.html) (IAM documentation)

# Deploy CloudWatch Synthetics canaries by using Terraform
<a name="deploy-cloudwatch-synthetics-canaries-by-using-terraform"></a>

*Dhrubajyoti Mukherjee and Jean-Francois Landreau, Amazon Web Services*

## Summary
<a name="deploy-cloudwatch-synthetics-canaries-by-using-terraform-summary"></a>

It’s important to validate the health of a system from a customer perspective and confirm that customers are able to connect. This is more difficult when the customers don’t constantly call the endpoint. [Amazon CloudWatch Synthetics](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch_Synthetics_Canaries.html) supports the creation of canaries, which can test both public and private endpoints. By using canaries, you can know the status of a system even if it isn’t in use. These canaries are either Node.js Puppeteer scripts or Python Selenium scripts.

This pattern describes how to use HashiCorp Terraform to deploy canaries that test private endpoints. It embeds a Puppeteer script that tests whether a URL returns `200-OK`. The Terraform script can then be integrated with the script that deploys the private endpoint. You can also modify the solution to monitor public endpoints.

## Prerequisites and limitations
<a name="deploy-cloudwatch-synthetics-canaries-by-using-terraform-prereqs"></a>

**Prerequisites**
+ An active Amazon Web Services (AWS) account with a virtual private cloud (VPC) and private subnets
+ The URL of the endpoint that can be reached from the private subnets
+ Terraform installed in the deployment environment

**Limitations**

The current solution works for the following CloudWatch Synthetics runtime versions:
+ syn-nodejs-puppeteer-3.4
+ syn-nodejs-puppeteer-3.5
+ syn-nodejs-puppeteer-3.6
+ syn-nodejs-puppeteer-3.7

As new runtime versions are released, you might need to update the current solution. You will also need to modify the solution to keep up with security updates.

**Product versions**
+ Terraform 1.3.0

## Architecture
<a name="deploy-cloudwatch-synthetics-canaries-by-using-terraform-architecture"></a>

Amazon CloudWatch Synthetics is based on CloudWatch, Lambda, and Amazon Simple Storage Service (Amazon S3). Amazon CloudWatch offers a wizard to create the canaries and a dashboard that displays the status of the canary runs. The Lambda function runs the script. Amazon S3 stores the logs and screenshots from the canary runs.

This pattern simulates a private endpoint through an Amazon Elastic Compute Cloud (Amazon EC2) instance deployed in the targeted subnets. The Lambda function requires elastic network interfaces in the VPC where the private endpoint is deployed.

![\[Description follows the diagram.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/73ed0103-ec45-4653-bb29-f402a88f0c64/images/39aaed0f-f259-4f2a-98fb-8e3a340d0b02.png)


The diagram shows the following:

1. The Synthetics canary initiates the canary Lambda function.

1. The canary Lambda function connects to the elastic network interface.

1. The canary Lambda function monitors the status of the endpoint.

1. The Synthetics canary pushes run data to the S3 bucket and CloudWatch metrics.

1. A CloudWatch alarm is initiated based on the metrics (a sketch of such an alarm follows this list).

1. The CloudWatch alarm initiates the Amazon Simple Notification Service (Amazon SNS) topic.
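
The Terraform module in this pattern provisions the alarm and the SNS action for you. Purely to illustrate steps 4–6, the following boto3 sketch creates an alarm on the SuccessPercent metric that CloudWatch Synthetics publishes for a canary and points the alarm at an existing SNS topic. The canary name, threshold, and topic ARN are placeholder assumptions.

```
import boto3

cloudwatch = boto3.client("cloudwatch")

# Placeholder values; the Terraform module derives these from its input variables.
CANARY_NAME = "my-canary"
ALERT_TOPIC_ARN = "arn:aws:sns:eu-central-1:111111111111:yyyyy"

cloudwatch.put_metric_alarm(
    AlarmName=f"{CANARY_NAME}-success-rate",
    # CloudWatch Synthetics publishes per-canary metrics in this namespace.
    Namespace="CloudWatchSynthetics",
    MetricName="SuccessPercent",
    Dimensions=[{"Name": "CanaryName", "Value": CANARY_NAME}],
    Statistic="Average",
    Period=300,                    # seconds; align with the canary run frequency
    EvaluationPeriods=1,
    Threshold=90,
    ComparisonOperator="LessThanThreshold",
    TreatMissingData="breaching",  # a canary that stops reporting counts as unhealthy
    AlarmActions=[ALERT_TOPIC_ARN],
)
```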

## Tools
<a name="deploy-cloudwatch-synthetics-canaries-by-using-terraform-tools"></a>

**AWS services**
+ [Amazon CloudWatch](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/WhatIsCloudWatch.html) helps you monitor the metrics of your AWS resources and the applications you run on AWS in real time.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a compute service that helps you run code without needing to provision or manage servers. It runs your code only when needed and scales automatically, so you pay only for the compute time that you use.
+ [Amazon Simple Notification Service (Amazon SNS)](https://docs.aws.amazon.com/sns/latest/dg/welcome.html) helps you coordinate and manage the exchange of messages between publishers and clients, including web servers and email addresses.
+ [Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) is a cloud-based object storage service that helps you store, protect, and retrieve any amount of data.
+ [Amazon Virtual Private Cloud (Amazon VPC)](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html) helps you launch AWS resources into a virtual network that you’ve defined. This virtual network resembles a traditional network that you’d operate in your own data center, with the benefits of using the scalable infrastructure of AWS. This pattern uses VPC endpoints and elastic network interfaces.

**Other services**
+ [HashiCorp Terraform](https://www.terraform.io/docs) is an open-source infrastructure as code (IaC) tool that helps you use code to provision and manage cloud infrastructure and resources. This pattern uses Terraform to deploy the infrastructure.
+ [Puppeteer](https://pptr.dev/) is a Node.js library. The CloudWatch Synthetics runtime uses the Puppeteer framework.

**Code**

The solution is available in the GitHub [cloudwatch-synthetics-canary-terraform](https://github.com/aws-samples/cloudwatch-synthetics-canary-terraform) repository. For more information, see the *Additional information* section.

## Epics
<a name="deploy-cloudwatch-synthetics-canaries-by-using-terraform-epics"></a>

### Implement the solution for monitoring a private URL
<a name="implement-the-solution-for-monitoring-a-private-url"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Gather requirements for monitoring the private URL. | Gather the full URL definition: domain, parameters, and headers. To communicate privately to Amazon S3 and Amazon CloudWatch, use VPC endpoints. Note how the VPC and subnets are accessible to the endpoint. Consider the frequency of canary runs. | Cloud architect, Network administrator | 
| Modify the existing solution to monitor the private URL. | Modify the `terraform.tfvars` file:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-cloudwatch-synthetics-canaries-by-using-terraform.html) | Cloud architect | 
| Deploy and operate the solution. | To deploy the solution, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-cloudwatch-synthetics-canaries-by-using-terraform.html) | Cloud architect, DevOps engineer | 

## Troubleshooting
<a name="deploy-cloudwatch-synthetics-canaries-by-using-terraform-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| Deletion of the provisioned resources gets stuck. | Manually delete the canary Lambda function, corresponding elastic network interface, and security group, in that order. | 

## Related resources
<a name="deploy-cloudwatch-synthetics-canaries-by-using-terraform-resources"></a>
+ [Using synthetic monitoring](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch_Synthetics_Canaries.html)
+ [Monitor API Gateway endpoints with Amazon CloudWatch Synthetics](https://aws.amazon.com/blogs/mt/monitor-api-gateway-endpoints-with-amazon-cloudwatch-synthetics/) (blog post)

## Additional information
<a name="deploy-cloudwatch-synthetics-canaries-by-using-terraform-additional"></a>

**Repository artifacts**

The repository artifacts are in the following structure.

```
.
├── README.md
├── main.tf
├── modules
│   ├── canary
│   └── canary-infra
├── terraform.tfvars
├── tf.plan
└── variable.tf
```

The `main.tf` file contains the core module, and it deploys two submodules:
+ `canary-infra` deploys the infrastructure required for the canaries.
+ `canary` deploys the canaries.

The input parameters for the solution are located in the `terraform.tfvars` file. You can use the following code example to create one canary.

```
module "canary" {
    source = "./modules/canary"
    name   = var.name
    runtime_version = var.runtime_version
    take_screenshot = var.take_screenshot
    api_hostname = var.api_hostname
    api_path = var.api_path
    reports-bucket = module.canary_infra.reports-bucket
    role = module.canary_infra.role
    security_group_id = module.canary_infra.security_group_id
    subnet_ids = var.subnet_ids
    frequency = var.frequency
    alert_sns_topic = var.alert_sns_topic
}
```

The corresponding `terraform.tfvars` file follows.

```
name   = "my-canary"
runtime_version = "syn-nodejs-puppeteer-3.7"
take_screenshot = false
api_hostname = "mydomain.internal"
api_path = "/path?param=value"
vpc_id = "vpc_id"
subnet_ids = ["subnet_id1"]
frequency = 5
alert_sns_topic = "arn:aws:sns:eu-central-1:111111111111:yyyyy"
```

**Cleaning up the solution**

If you are testing this in a development environment, you can clean up the solution to avoid accruing costs.

1. On the AWS Management Console, navigate to the Amazon S3 console. Empty the Amazon S3 bucket that the solution created (a sketch for doing this programmatically follows these steps). Make sure to take a backup of the data, if required.

1. In your development environment, from the `cloudwatch-synthetics-canary-terraform` directory, run the `destroy` command.

   ```
   terraform destroy
   ```
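
Because the reports bucket can contain object versions, it must be emptied of all versions and delete markers before `terraform destroy` can remove it. The following minimal boto3 sketch performs step 1 programmatically; the bucket name is a placeholder.

```
import boto3

# Placeholder; replace with the reports bucket that the solution created.
bucket = boto3.resource("s3").Bucket("my-canary-reports-bucket")

# Remove every object version and delete marker, then any remaining objects,
# so that terraform destroy can delete the bucket. Back up the data first if needed.
bucket.object_versions.all().delete()
bucket.objects.all().delete()
```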

# Deploy a ChatOps solution to manage SAST scan results by using Amazon Q Developer in chat applications custom actions and CloudFormation
<a name="deploy-chatops-solution-to-manage-sast-scan-results"></a>

*Anand Bukkapatnam Tirumala, Amazon Web Services*

## Summary
<a name="deploy-chatops-solution-to-manage-sast-scan-results-summary"></a>

This pattern presents a comprehensive solution that uses Amazon Q Developer in chat applications to streamline the management of static application security testing (SAST) scan failures reported through SonarQube. This innovative approach integrates custom actions and notifications into a conversational interface, enabling efficient collaboration and decision-making processes within development teams.

In today's fast-paced software development environment, managing SAST scan results efficiently is crucial for maintaining code quality and security. However, many organizations face the following significant challenges:
+ Delayed awareness of critical vulnerabilities because of inefficient notification systems
+ Slow decision-making processes caused by disconnected approval workflows
+ Lack of immediate, actionable responses to SAST scan failures
+ Fragmented communication and collaboration around security findings
+ Time-consuming and error-prone manual infrastructure setup for security tooling

These issues often lead to increased security risks, delayed releases, and reduced team productivity. Addressing these challenges effectively requires a solution that can streamline SAST result management, enhance team collaboration, and automate infrastructure provisioning.

Key features of the solution include:
+ **Customized notifications** – Real-time alerts and notifications are delivered directly to team chat channels, ensuring prompt awareness and action on SAST scan vulnerabilities or failures.
+ **Conversational approvals** – Stakeholders can initiate and complete approval workflows for SAST scan results seamlessly within the chat interface, accelerating decision-making processes.
+ **Custom actions** – Teams can define and execute custom actions based on SAST scan outcomes, such as automatically triggering email messages for quality gate failures, enhancing responsiveness to security issues.
+ **Centralized collaboration** – All SAST scan-related discussions, decisions, and actions are kept within a unified chat environment, fostering improved collaboration and knowledge-sharing among team members.
+ **Infrastructure as code (IaC)** – The entire solution is wrapped with AWS CloudFormation templates, enabling faster and more reliable infrastructure provisioning while reducing manual setup errors.

## Prerequisites and limitations
<a name="deploy-chatops-solution-to-manage-sast-scan-results-prereqs"></a>

**Prerequisites**
+ An active AWS account.
+ An AWS Identity and Access Management (IAM) role with permissions to create and manage resources associated with the AWS services listed in [Tools](#deploy-chatops-solution-to-manage-sast-scan-results-tools).
+ A Slack workspace.
+ Amazon Q Developer in chat applications added to the required Slack workspace as a plugin. For more information, see [Add apps to your Slack workspace](https://slack.com/intl/en-in/help/articles/202035138-Add-apps-to-your-Slack-workspace) in the Slack documentation. Keep a note of the Slack workspace ID as shown on the AWS Management Console after successful registration.
+ A configured Amazon Q Developer in chat applications client, with the workspace ID readily available for input in the CloudFormation console. For instructions, see [Configure a Slack client](https://docs.aws.amazon.com/chatbot/latest/adminguide/slack-setup.html#slack-client-setup) in the *Amazon Q Developer in chat applications Administrator Guide*.
+ A source email account that is created and verified in Amazon Simple Email Service (Amazon SES) to send out approval email messages. For setup instructions, see [Creating and verifying email identities](https://docs.aws.amazon.com/ses/latest/dg/creating-identities.html#verify-email-addresses-procedure) in the *Amazon Simple Email Service Developer Guide*.
+ A destination email address for receiving approval notifications. This address can be a shared inbox or a specific team distribution list.
+ An operational SonarQube instance that’s accessible from your AWS account. For more information, see the [SonarQube installation instructions](https://docs.sonarsource.com/sonarqube/latest/setup-and-upgrade/install-the-server/introduction/).
+ A SonarQube [user token](https://docs.sonarsource.com/sonarqube-server/latest/user-guide/managing-tokens/) with permissions to trigger and create projects through the pipeline.

**Limitations**
+ The creation of custom action buttons is a manual process in this solution. 
+ Some AWS services aren’t available in all AWS Regions. For Region availability, see [AWS services by Region](https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/). For specific endpoints, see [Service endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/aws-service-information.html), and choose the link for the service.

## Architecture
<a name="deploy-chatops-solution-to-manage-sast-scan-results-architecture"></a>

The following diagram shows the workflow and architecture components for this pattern.

![\[Workflow to deploy automated code quality assurance for release management using Amazon Q Developer.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/198312ed-e379-49a7-b706-8e79e2142f21/images/a977924c-957e-4f91-99d6-ed790e343ea6.png)


The diagram shows the automated code quality assurance workflow:

1. Code preparation and upload:
   + The developer compresses the codebase into a .zip file.
   + The developer manually uploads the .zip file to a designated Amazon Simple Storage Service (Amazon S3) bucket.

1. Amazon S3 event trigger and AWS Step Functions orchestration:
   + The Amazon S3 upload event triggers a Step Functions workflow.
   + Step Functions orchestrates a SAST scan using SonarQube.
   + The workflow monitors the AWS CodeBuild job status to determine next actions. If CodeBuild succeeds (quality gate pass), the workflow terminates. If CodeBuild fails, an AWS Lambda function is invoked for diagnostics. For more details, see **AWS Step Functions logic** later in this section.

1. AWS CodeBuild execution:
   + The CodeBuild job executes a SonarQube scan on the uploaded codebase.
   + Scan artifacts are stored in a separate Amazon S3 bucket for auditing and analysis.

1. Failure analysis (Lambda function):
   + On CodeBuild failure, the `CheckBuildStatus` Lambda function is triggered.
   + On CodeBuild success, the process is terminated and no further action is needed.

1. The Lambda function analyzes the failure cause (quality gate failure or other issues):
   + The `CheckBuildStatus` function creates a custom payload with detailed failure information.
   + The `CheckBuildStatus` function publishes the custom payload to an Amazon Simple Notification Service (Amazon SNS) topic (a sketch of this step follows the workflow).

1. Notification system:
   + Amazon SNS forwards the payload to Amazon Q Developer in chat applications for Slack integration.

1. Slack integration:
   + Amazon Q Developer in chat applications posts a notification in the designated Slack channel.

1. Approval process:
   + Approvers review the failure details in the Slack notification.
   + Approvers can initiate approval using the **Approve** button in Slack.

1. Approval handler:
   + An Approval Lambda function processes the approval action from Slack.
   + The Approval function publishes the custom message to Amazon SES.

1. Message generated:
   + The Approval function generates a custom message for developer notification.

1. Developer notification:
   + Amazon SES sends an email message to the developer with next steps or required actions.

This workflow combines manual code upload with automated quality checks, provides immediate feedback through Slack, and allows for human intervention when necessary, ensuring a robust and flexible code review process.
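
As a rough sketch of steps 5 and 6, the `CheckBuildStatus` Lambda function could assemble a payload that summarizes the failure and publish it to the SNS topic that Amazon Q Developer in chat applications forwards to Slack. The payload fields and the environment variable name below are illustrative assumptions; the actual format is defined in the chatops-slack repository.

```
import json
import os

import boto3

sns = boto3.client("sns")

# Assumption: the SNS topic ARN is passed to the function as an environment variable.
TOPIC_ARN = os.environ["SNS_TOPIC_ARN"]


def handler(event, context):
    """Summarize a failed CodeBuild run and notify the Slack channel through SNS."""
    build = event.get("build", {})

    # Illustrative payload only; the real schema is defined in the repository.
    payload = {
        "project": build.get("project_name", "unknown"),
        "status": "QUALITY_GATE_FAILED",
        "details_url": build.get("logs_url", ""),
        "next_steps": "Triage the vulnerabilities, or approve the build from Slack.",
    }

    sns.publish(
        TopicArn=TOPIC_ARN,
        Subject="Application has failed in code security scan",
        Message=json.dumps(payload),
    )
    return payload
```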

**AWS Step Functions logic**

As shown in the previous architecture diagram, if the SonarQube quality gate fails, the workflow goes to the `CheckBuildStatus` Lambda function. The `CheckBuildStatus` function triggers a notification on the Slack channel. Each notification includes information with suggested next steps. Following are the types of notifications:
+ **Application has failed in code security scan** – The user receives this notification when the uploaded code did not pass the SonarQube security scan. The user can choose **APPROVE** to accept the build. However, the notification advises the user to beware of potential poor code quality and security risks. The notification includes the following details:
  + Next steps: Error: Quality gate status: FAILED – View details at the provided URL.
  + Triage the vulnerabilities as mentioned in the document at the provided URL.
  + CodeBuild details are available at the location at the provided URL.
+ **Application scan pipeline has failed because of some other reason** – The user receives this notification when the pipeline failed for some reason other than failing the code security scan. The notification includes the following details:
  + For next steps, go to the link provided for further troubleshooting.

To see screenshots of the notifications as they appear in a Slack channel, go to the [assets folder](https://github.com/aws-samples/chatops-slack/tree/main/assets) in the GitHub chatops-slack repository.

The following diagram shows an example of the Step Functions step status after the quality gate fails.

![\[Workflow of AWS Step Functions step status after quality gate pass fails.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/198312ed-e379-49a7-b706-8e79e2142f21/images/40b7ebf0-2518-4413-9717-0bfb7559adde.png)


## Tools
<a name="deploy-chatops-solution-to-manage-sast-scan-results-tools"></a>

**AWS services**
+ [Amazon Q Developer in chat applications](https://docs.aws.amazon.com/chatbot/latest/adminguide/what-is.html) enables you to use Amazon Chime, Microsoft Teams, and Slack chat channels to monitor and respond to operational events in your AWS applications. *End of support notice:* On February 20, 2026, AWS will end support for the Amazon Chime service. After February 20, 2026, you will no longer be able to access the Amazon Chime console or Amazon Chime application resources. For more information, visit the [blog post](https://aws.amazon.com/blogs/messaging-and-targeting/update-on-support-for-amazon-chime/). This does not impact the availability of the [Amazon Chime SDK service](https://aws.amazon.com/chime/chime-sdk/).
+ [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) helps you set up AWS resources, provision them quickly and consistently, and manage them throughout their lifecycle across AWS accounts and AWS Regions.
+ [AWS CodeBuild](https://docs.aws.amazon.com/codebuild/latest/userguide/welcome.html) is a fully managed build service that helps you compile source code, run unit tests, and produce artifacts that are ready to deploy.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.
+ [AWS Key Management Service (AWS KMS)](https://docs.aws.amazon.com/kms/latest/developerguide/overview.html) helps you create and control cryptographic keys to help protect your data.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a compute service that helps you run code without needing to provision or manage servers. It runs your code only when needed and scales automatically, so you pay only for the compute time that you use.
+ [AWS Secrets Manager](https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html) helps you replace hardcoded credentials in your code, including passwords, with an API call to Secrets Manager to retrieve the secret programmatically.
+ [Amazon Simple Email Service (Amazon SES)](https://docs.aws.amazon.com/ses/latest/dg/Welcome.html) helps you send and receive email messages by using your own email addresses and domains.
+ [Amazon Simple Notification Service (Amazon SNS)](https://docs.aws.amazon.com/sns/latest/dg/welcome.html) helps you coordinate and manage the exchange of messages between publishers and clients, including web servers and email addresses.
+ [Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) is a cloud-based object storage service that helps you store, protect, and retrieve any amount of data.
+ [AWS Step Functions](https://docs.aws.amazon.com/step-functions/latest/dg/welcome.html) is a serverless orchestration service that helps you combine AWS Lambda functions and other AWS services to build business-critical applications.

**Other tools**
+ [Slack](https://slack.com/help/articles/115004071768-What-is-Slack-), a Salesforce offering, is an AI-powered conversational platform that provides chat and video collaboration, automates processes with no code, and supports information sharing.
+ [SonarQube](https://docs.sonarsource.com/sonarqube/latest/user-guide/user-account/generating-and-using-tokens/) is an on-premises analysis tool designed to detect coding issues in over 30 languages, frameworks, and IaC platforms.

**Code repository**

The code for this pattern is available in the GitHub [chatops-slack](https://github.com/aws-samples/chatops-slack) repository.

## Best practices
<a name="deploy-chatops-solution-to-manage-sast-scan-results-best-practices"></a>
+ **CloudFormation stack management** – If you encounter any failures during CloudFormation stack execution, we recommend that you delete the failed stack. Then, re-create it with the correct parameter values. This approach supports a clean deployment and helps avoid potential conflicts or partial implementations.
+ **Shared inbox email configuration** – When you configure the `SharedInboxEmail` parameter, use a common distribution list that’s accessible to all relevant developers. This approach promotes transparency and helps important notifications reach the relevant team members.
+ **Production approval workflow** – For production environments, restrict access to the Slack channel that’s used for build approvals. Only designated approvers should be members of this channel. This practice maintains a clear chain of responsibility and enhances security by limiting who can approve critical changes.
+ **IAM permissions** – Follow the principle of least privilege and grant the minimum permissions required to perform a task. For more information, see [Grant least privilege](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html#grant-least-priv) and [Security best practices](https://docs.aws.amazon.com/IAM/latest/UserGuide/IAMBestPracticesAndUseCases.html) in the IAM documentation.

## Epics
<a name="deploy-chatops-solution-to-manage-sast-scan-results-epics"></a>

### Perform initial setup
<a name="perform-initial-setup"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clone the repository. | To clone the [chatops-slack](https://github.com/aws-samples/chatops-slack) repository for this pattern, use the following command: `git clone "git@github.com:aws-samples/chatops-slack.git"` | AWS DevOps, Build lead, DevOps engineer, Cloud administrator | 
| Create the .zip files that contain Lambda code. | Create the .zip files for the AWS Lambda function code for the `CheckBuildStatus` and `ApprovalEmail` functionality. To create `notification.zip` and `approval.zip`, use the following commands.<pre>cd chatops-slack/src</pre><pre>chmod -R 775 *</pre><pre>zip -r approval.zip approval</pre><pre>zip -r notification.zip notification</pre> | AWS DevOps, Build lead, DevOps engineer, Cloud administrator | 

### Deploy the pre-requisite.yml stack file
<a name="deploy-the-pre-requisite-yml-stack-file"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Execute the `pre-requisite.yml` stack file. | The `pre-requisite.yml` CloudFormation stack file deploys the initial resources that are required before you execute the `app-security.yml` stack file. To execute the `pre-requisite.yml` file, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-chatops-solution-to-manage-sast-scan-results.html) | AWS administrator, AWS DevOps, Build lead, DevOps engineer | 
| Upload the .zip files to the Amazon S3 bucket. | Upload the `notification.zip` and `approval.zip` files that you created earlier to the Amazon S3 bucket named `S3LambdaBucket`. The `app-security.yml` CloudFormation stack file uses `S3LambdaBucket` to provision the Lambda function. | AWS DevOps, Build lead, DevOps engineer, AWS systems administrator | 

### Execute the app-security.yml stack file
<a name="execute-the-app-security-yml-stack-file"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Execute the `app-security.yml` stack file. | The `app-security.yml` stack file deploys the remaining infrastructure for the notification and the approval system. To execute the `app-security.yml` file, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-chatops-solution-to-manage-sast-scan-results.html) | AWS DevOps, AWS systems administrator, DevOps engineer, Build lead | 
| Test the notification setup. | To test the notification setup, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-chatops-solution-to-manage-sast-scan-results.html) After the test message is delivered successfully, you should see a notification on the Slack channel. For more information, see [Test notifications from AWS services to Slack](https://docs.aws.amazon.com/chatbot/latest/adminguide/slack-setup.html#test-notifications-slack) in the *Amazon Q Developer in chat applications Administrator Guide*. | AWS DevOps, AWS systems administrator, DevOps engineer, Build lead | 

### Set up approval flow
<a name="set-up-approval-flow"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Configure custom Lambda action. | To set up the custom AWS Lambda action, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-chatops-solution-to-manage-sast-scan-results.html) | AWS administrator, AWS DevOps, Build lead, DevOps engineer, Slack Admin | 
| Validate approval flow. | To validate that the approval flow works as expected, choose the **Approve** button in Slack. Slackbot should send a notification on the message thread with the confirmation string **Approval Email sent successfully**. | AWS administrator, AWS DevOps, DevOps engineer, Slack Admin | 

## Troubleshooting
<a name="deploy-chatops-solution-to-manage-sast-scan-results-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| Slack misconfigurations | For information about troubleshooting issues related to Slack misconfigurations, see Troubleshooting Amazon Q Developer in the *Amazon Q Developer in chat applications Administrator Guide*. | 
| Scan failed because of some other reason | This error means that the CodeBuild task has failed. To troubleshoot the issue, go to the link that’s in the message. The failure of the CodeBuild task might have the following possible causes:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-chatops-solution-to-manage-sast-scan-results.html) | 

## Related resources
<a name="deploy-chatops-solution-to-manage-sast-scan-results-resources"></a>

**AWS documentation**
+ [Configure a Slack client](https://docs.aws.amazon.com/chatbot/latest/adminguide/slack-setup.html#slack-client-setup)
+ [Creating a custom action](https://docs.aws.amazon.com/chatbot/latest/adminguide/custom-actions.html#creating-custom-actions)
+ [Creating an email address identity](https://docs.aws.amazon.com/ses/latest/dg/creating-identities.html#verify-email-addresses-procedure)
+ [Tutorial: Get started with Slack](https://docs.aws.amazon.com/chatbot/latest/adminguide/slack-setup.html)

**Other resources**
+ [Add apps to your Slack workspace](https://slack.com/intl/en-in/help/articles/202035138-Add-apps-to-your-Slack-workspace) (Slack documentation)
+ [Generating and using tokens](https://docs.sonarsource.com/sonarqube/latest/user-guide/user-account/generating-and-using-tokens/) (SonarQube documentation)
+ [Introduction to the server installation](https://docs.sonarsource.com/sonarqube/latest/setup-and-upgrade/install-the-server/introduction/) (SonarQube documentation)

## Additional information
<a name="deploy-chatops-solution-to-manage-sast-scan-results-additional"></a>

This solution emphasizes Amazon Q Developer in chat applications custom actions for release management purposes. However, you can reuse the solution by modifying the Lambda code for your specific use case and build on top of it.

**Parameters of CloudFormation stack files**

The following table shows the parameters and their descriptions for the CloudFormation stack file `pre-requisite.yml`.


| **Key** | **Description** | 
| --- | --- | 
| `StackName` | The name of the CloudFormation stack. | 
| `S3LambdaBucket` | The name of the Amazon S3 bucket where you upload the Lambda code. The name must be globally unique. | 
| `SonarToken` | The SonarQube user token as described in [Prerequisites](#deploy-chatops-solution-to-manage-sast-scan-results-prereqs). | 

The following table shows the parameters and their descriptions for the CloudFormation stack file `app-security.yml`.


| **Key** | **Description** | 
| --- | --- | 
| `CKMSKeyArn` | The AWS KMS key Amazon Resource Name (ARN) that is used in IAM roles and Lambda functions created in this stack. | 
| `CKMSKeyId` | The AWS KMS key ID that is used in the Amazon SNS topic created in this stack. | 
| `EnvironmentType` | The name of the client environment for deployment of the application scan pipeline. Select the environment name from the dropdown list of allowed values. | 
| `S3LambdaBucket` | The name of the Amazon S3 bucket that contains the `approval.zip` and `notification.zip` files. | 
| `SESEmail` | The name of the registered email identity in Amazon SES as described in [Prerequisites](#deploy-chatops-solution-to-manage-sast-scan-results-prereqs). This identity is the source email address. | 
| `SharedInboxMail` | The destination email address to which the scan notifications are sent. | 
| `SlackChannelId` | The channel ID of the Slack channel where you want the notifications sent. To find the channel ID, right-click the channel name in **Channel Details** on the Slack app. The channel ID is at the bottom. | 
| `SlackWorkspaceId` | The Slack workspace ID as described in [Prerequisites](#deploy-chatops-solution-to-manage-sast-scan-results-prereqs). To find the Slack workspace ID, sign in to the AWS Management Console, open the Amazon Q Developer in chat applications console, and choose **Configured clients**, **Slack**, **WorkspaceID**. | 
| `StackName` | The name of the CloudFormation stack. | 
| `SonarFileDirectory` | The directory that contains the `sonar.project.<env>.properties` file. | 
| `SonarFileName` | The name of the `sonar.project.<env>.properties` file. | 
| `SourceCodeZip` | The name of the .zip file that contains the `sonar.project.<env>.properties` file and the source code. | 

# Deploy agentic systems on Amazon Bedrock with the CrewAI framework by using Terraform
<a name="deploy-agentic-systems-on-amazon-bedrock-with-the-crewai-framework"></a>

*Vanitha Dontireddy, Amazon Web Services*

## Summary
<a name="deploy-agentic-systems-on-amazon-bedrock-with-the-crewai-framework-summary"></a>

This pattern demonstrates how to implement scalable multi-agent AI systems by using the [CrewAI](https://www.crewai.com/) framework integrated with [Amazon Bedrock](https://aws.amazon.com/bedrock/?nc1=h_ls) and [Terraform](https://registry.terraform.io/). The solution enables organizations to create, deploy, and manage sophisticated AI agent workflows through infrastructure as code (IaC). In this pattern, CrewAI multi-agent orchestration capabilities combine with Amazon Bedrock foundation models and Terraform infrastructure automation. As a result, teams can build production-ready AI systems that tackle complex tasks with minimal human oversight. The pattern implements enterprise-grade security, scalability, and operational best practices. 

## Prerequisites and limitations
<a name="deploy-agentic-systems-on-amazon-bedrock-with-the-crewai-framework-prereqs"></a>

**Prerequisites**
+ An active AWS account with appropriate permissions to [access Amazon Bedrock foundation models](https://docs.aws.amazon.com/bedrock/latest/userguide/model-access.html)
+ Terraform version 1.5 or later [installed](https://developer.hashicorp.com/terraform/install)
+ Python version 3.9 or later [installed](https://www.python.org/downloads/)
+ CrewAI framework [installed](https://docs.crewai.com/installation)

**Limitations**
+ Agent interactions are limited by model context windows.
+ Terraform state management considerations for large-scale deployments apply to this pattern.
+ Some AWS services aren’t available in all AWS Regions. For Region availability, see [AWS Services by Region](https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/). For specific endpoints, see [Service endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/aws-service-information.html), and choose the link for the service.

## Architecture
<a name="deploy-agentic-systems-on-amazon-bedrock-with-the-crewai-framework-architecture"></a>

In this pattern, the following interactions occur:
+ Amazon Bedrock provides the foundation for agent intelligence through its suite of foundation models (FMs). It enables natural language processing (NLP), reasoning, and decision-making capabilities for the AI agents while maintaining high availability and scalability.
+ The CrewAI framework serves as the core orchestration layer for creating and managing AI agents. It handles agent communication protocols, task delegation, and workflow management while integrating with Amazon Bedrock.
+ Terraform manages the entire infrastructure stack through code, including compute resources, networking, security groups, and AWS Identity and Access Management (IAM) roles. It ensures consistent, version-controlled deployments across environments. The Terraform deployment creates the following:
  + AWS Lambda function to run the CrewAI application
  + Amazon Simple Storage Service (Amazon S3) buckets for code and reports
  + IAM roles with appropriate permissions
  + Amazon CloudWatch logging
  + Scheduled execution by Amazon EventBridge

The following diagram illustrates the architecture for deploying CrewAI multi-agent systems by using Amazon Bedrock and Terraform.

![\[Workflow to deploy CrewAI multi-agent systems using Terraform and Amazon Bedrock.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/b46069e9-4c38-405f-b0f0-310eabb06b06/images/b3296b17-e388-46ba-8d71-2ec7ce3ed3e0.png)


The diagram shows the following workflow:

1. The user clones the repository.

1. The user runs the command `terraform apply` to deploy the AWS resources.

1. Amazon Bedrock model configuration includes specifying the foundation model (FM) to use for configuring the CrewAI agents.

1. An EventBridge rule is established to trigger the Lambda function according to the defined schedule.

1. When triggered (either by schedule or manually; a sketch of a manual invocation follows this workflow), the Lambda function initializes and assumes the IAM role with permissions to access AWS services and Amazon Bedrock.

1. The CrewAI framework loads agent configurations from YAML files and creates specialized AI agents (the *AWS infrastructure security audit* crew). The Lambda function sequentially executes these agents to scan AWS resources, analyze security vulnerabilities, and generate comprehensive audit reports.

1. CloudWatch Logs captures detailed execution information from the Lambda function with a 365-day retention period and AWS Key Management Service (AWS KMS) encryption for compliance requirements. The logs provide visibility into agent activities, error tracking, and performance metrics, enabling effective monitoring and troubleshooting of the security audit process.

1. The security audit report is automatically generated and stored in the designated Amazon S3 bucket. The automated setup helps maintain consistent security monitoring with minimal operational overhead.

After the initial deployment, the workflow provides ongoing security auditing and reporting for your AWS infrastructure without manual intervention.
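
To exercise the deployed function outside the EventBridge schedule (step 5), you can invoke it manually. The following boto3 sketch shows one way to do that; the function name is a placeholder, so use the name from the Terraform outputs.

```
import json

import boto3

lambda_client = boto3.client("lambda")

# Placeholder function name; take the real name from the Terraform outputs.
response = lambda_client.invoke(
    FunctionName="aws-security-auditor-crew",
    InvocationType="RequestResponse",
    Payload=json.dumps({}).encode(),
)

print(json.loads(response["Payload"].read()))
```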

**Overview of AI agents**

This pattern creates multiple AI agents, each with unique roles, goals, and tools:
+ The **security analyst agent** collects and analyzes AWS resource information.
+ The **penetration tester agent** identifies vulnerabilities in AWS resources.
+ The **compliance expert agent** checks configurations against compliance standards.
+ The **report writer agent** compiles findings into comprehensive reports.

These agents collaborate on a series of tasks, leveraging their collective skills to perform security audits and generate comprehensive reports. (The `config/agents.yaml` file outlines the capabilities and configurations of each agent in this crew.)
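
The agents and tasks themselves are defined in the repository's YAML configuration. Conceptually, wiring a crew to an Amazon Bedrock model in CrewAI looks roughly like the following sketch; the model identifier, roles, and task text are assumptions for illustration, and the class names reflect recent CrewAI releases.

```
from crewai import LLM, Agent, Crew, Task

# Assumption: use any Amazon Bedrock model ID that your account has access to.
bedrock_llm = LLM(model="bedrock/anthropic.claude-3-5-sonnet-20240620-v1:0")

security_analyst = Agent(
    role="Security analyst",
    goal="Collect and analyze AWS resource configurations",
    backstory="Reviews EC2, S3, IAM, VPC, RDS, and Lambda settings for weaknesses.",
    llm=bedrock_llm,
)

report_writer = Agent(
    role="Report writer",
    goal="Compile findings into a comprehensive audit report",
    backstory="Organizes issues by service, severity, and compliance impact.",
    llm=bedrock_llm,
)

audit_task = Task(
    description="Analyze the collected AWS resource data for security issues.",
    expected_output="A list of findings with severity and remediation steps.",
    agent=security_analyst,
)

report_task = Task(
    description="Summarize all findings into a markdown security audit report.",
    expected_output="A markdown report suitable for upload to Amazon S3.",
    agent=report_writer,
)

# Run the agents sequentially, as the Lambda function in this pattern does.
crew = Crew(agents=[security_analyst, report_writer], tasks=[audit_task, report_task])
result = crew.kickoff()
print(result)
```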

Security analysis processing consists of the following actions:

1. The security analyst agent examines the collected data about AWS resources such as the following (one such collection helper is sketched after this list):
   + Amazon Elastic Compute Cloud (Amazon EC2) instances and security groups
   + Amazon S3 buckets and configurations
   + IAM roles, policies, and permissions
   + Virtual private cloud (VPC) configurations and network settings
   + Amazon RDS databases and security settings
   + Lambda functions and configurations
   + Other AWS services within audit scope

1. The penetration tester agent identifies potential vulnerabilities.

1. The agents collaborate through the CrewAI framework to share findings.
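
Agent tools typically wrap plain boto3 calls. As one hypothetical example of the kind of data-collection helper that the security analyst agent could rely on, the following sketch lists security groups that allow inbound traffic from anywhere; the actual tools are defined in the repository.

```
import boto3


def find_open_security_groups():
    """Return security groups with ingress rules open to 0.0.0.0/0."""
    ec2 = boto3.client("ec2")
    findings = []

    paginator = ec2.get_paginator("describe_security_groups")
    for page in paginator.paginate():
        for group in page["SecurityGroups"]:
            for permission in group.get("IpPermissions", []):
                open_to_world = any(
                    ip_range.get("CidrIp") == "0.0.0.0/0"
                    for ip_range in permission.get("IpRanges", [])
                )
                if open_to_world:
                    findings.append(
                        {
                            "group_id": group["GroupId"],
                            "from_port": permission.get("FromPort"),
                            "to_port": permission.get("ToPort"),
                        }
                    )
    return findings


if __name__ == "__main__":
    print(find_open_security_groups())
```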

Report generation consists of the following actions:

1. The report writer agent compiles findings from all other agents.

1. Security issues are organized by service, severity, and compliance impact.

1. Remediation recommendations are generated for each identified issue.

1. A comprehensive security audit report is created in markdown format and uploaded to the designated Amazon S3 bucket. Historical reports are preserved for compliance tracking and security posture improvement.

Logging and monitoring activities include:
+ CloudWatch logs capture execution details and any errors.
+ Lambda execution metrics are recorded for monitoring.

**Note**  
The code for `aws-security-auditor-crew` is sourced from the GitHub [3P-Agentic-Frameworks](https://github.com/aws-samples/3P-Agentic-Frameworks/blob/main/crewai/aws-security-auditor-crew/README.md) repository, available in the AWS Samples collection.

**Availability and scale**

You can expand the available agents to more than the four core agents. To scale with additional specialized agents, consider the following new agent types:
+ A *threat intelligence specialist* agent can do the following:
  + Monitor external threat feeds and correlate them with internal findings
  + Provide context on emerging threats relevant to your infrastructure
  + Prioritize vulnerabilities based on active exploitation in the wild
+ *Compliance framework* agents can focus on specific regulatory areas such as the following:
  + Payment Card Industry Data Security Standard (PCI DSS) compliance agent
  + Health Insurance Portability and Accountability Act of 1996 (HIPAA) compliance agent
  + System and Organization Controls 2 (SOC 2) compliance agent
  + General Data Protection Regulation (GDPR) compliance agent

By thoughtfully expanding the available agents, this solution can provide deeper, more specialized security insights while maintaining scalability across large AWS environments. For more information about an implementation approach, tool development, and scaling considerations, see [Additional information](#deploy-agentic-systems-on-amazon-bedrock-with-the-crewai-framework-additional).

## Tools
<a name="deploy-agentic-systems-on-amazon-bedrock-with-the-crewai-framework-tools"></a>

**AWS services**
+ [Amazon Bedrock](https://docs.aws.amazon.com/bedrock/latest/userguide/what-is-bedrock.html) is a fully managed AI service that makes high-performing foundation models (FMs) available for use through a unified API.
+ [Amazon CloudWatch Logs](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html) helps you centralize the logs from all your systems, applications, and AWS services so you can monitor them and archive them securely.
+ [Amazon EventBridge](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-what-is.html) is a serverless event bus service that helps you connect your applications with real-time data from a variety of sources, such as AWS Lambda functions, HTTP invocation endpoints using API destinations, or event buses in other AWS accounts. In this pattern, it’s used for scheduling and orchestrating agent workflows.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a compute service that helps you run code without needing to provision or manage servers. It runs your code only when needed and scales automatically, so you pay only for the compute time that you use.
+ [AWS SDK for Python (Boto3)](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/quickstart.html) is a software development kit that helps you integrate your Python application, library, or script with AWS services.
+ [Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) is a cloud-based object storage service that helps you store, protect, and retrieve any amount of data. In this pattern, it provides object storage for agent artifacts and state management.

**Other tools**
+ [CrewAI](https://www.crewai.com/open-source) is an open source Python-based framework for building multi-agent AI systems.
+ [Terraform](https://www.terraform.io/) is an infrastructure as code (IaC) tool from HashiCorp that helps you create and manage cloud and on-premises resources.

**Code repository**

The code for this pattern is available in the GitHub [deploy-crewai-agents-terraform](https://github.com/aws-samples/deploy-crewai-agents-terraform.git) repository.

## Best practices
<a name="deploy-agentic-systems-on-amazon-bedrock-with-the-crewai-framework-best-practices"></a>
+ Implement proper state management for Terraform by using an Amazon S3 backend with Amazon DynamoDB locking. For more information, see [Backend best practices](https://docs.aws.amazon.com/prescriptive-guidance/latest/terraform-aws-provider-best-practices/backend.html) in *Best practices for using the Terraform AWS Provider*.
+ Use workspaces to separate development, staging, and production environments.
+ Follow the principle of least privilege and grant the minimum permissions required to perform a task. For more information, see [Grant least privilege](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html#grant-least-priv) and [Security best practices](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html) in the IAM documentation.
+ Enable detailed logging and monitoring through CloudWatch Logs.
+ Implement retry mechanisms and error handling for agent operations.
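
As one way to apply the last two practices, the following sketch enables the AWS SDK's adaptive retry mode for Amazon Bedrock calls. The model ID and request body are placeholders, not the pattern's actual configuration:

```
import json

import boto3
from botocore.config import Config

# Adaptive mode adds client-side rate limiting on top of exponential backoff.
retry_config = Config(retries={"max_attempts": 5, "mode": "adaptive"})
bedrock = boto3.client("bedrock-runtime", config=retry_config)


def invoke_model(prompt: str) -> dict:
    try:
        response = bedrock.invoke_model(
            modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # Placeholder model ID
            body=json.dumps({
                "anthropic_version": "bedrock-2023-05-31",
                "max_tokens": 1024,
                "messages": [{"role": "user", "content": prompt}],
            }),
        )
        return json.loads(response["body"].read())
    except bedrock.exceptions.ThrottlingException:
        # Surface throttling to the caller only after the SDK's retries are exhausted.
        raise
```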

## Epics
<a name="deploy-agentic-systems-on-amazon-bedrock-with-the-crewai-framework-epics"></a>

### Deploy CrewAI framework
<a name="deploy-crewai-framework"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clone the repository. | To clone this pattern’s repository on your local machine, run the following command:<pre>git clone "git@github.com:aws-samples/deploy-crewai-agents-terraform.git"<br />cd deploy-crewai-agents-terraform</pre> | DevOps engineer | 
| Edit the environment variables. | To edit the environment variables, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-agentic-systems-on-amazon-bedrock-with-the-crewai-framework.html) | DevOps engineer | 
| Create the infrastructure. | To create the infrastructure, run the following commands:<pre>cd terraform</pre><pre>terraform init</pre><pre>terraform plan</pre>Review the execution plan carefully. If the planned changes are acceptable, then run the following command:<pre>terraform apply --auto-approve</pre> | DevOps engineer | 

### Access CrewAI agents
<a name="access-crewai-agents"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Access the agents. | The agents in the AWS Infrastructure Security Audit and Reporting crew are deployed as a Lambda function. To access the agents, use the following steps:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-agentic-systems-on-amazon-bedrock-with-the-crewai-framework.html) | DevOps engineer | 
| (Optional) Configure manual execution of the agents. | The agents are configured to run automatically on a daily schedule (midnight UTC). However, you can trigger them manually by using the following steps:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-agentic-systems-on-amazon-bedrock-with-the-crewai-framework.html)For more details, see [Testing Lambda functions in the console](https://docs.aws.amazon.com/lambda/latest/dg/testing-functions.html) in the Lambda documentation. | DevOps engineer | 
| Access agent logs for debugging. | The CrewAI agents are running in a Lambda environment with the necessary permissions to perform security audits and store reports in Amazon S3. The output is a markdown report that provides a comprehensive security analysis of your AWS infrastructure.To assist with detailed debugging of agent behavior, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-agentic-systems-on-amazon-bedrock-with-the-crewai-framework.html) | DevOps engineer | 
| View results of agent execution. | To view the results of an agent execution, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-agentic-systems-on-amazon-bedrock-with-the-crewai-framework.html)Reports are stored with timestamp-based filenames as follows: `security-audit-report-YYYY-MM-DD-HH-MM-SS.md` | DevOps engineer | 
| Monitor agent execution. | To monitor the agents' execution through CloudWatch logs, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-agentic-systems-on-amazon-bedrock-with-the-crewai-framework.html) | DevOps engineer | 
|  Customize agent behavior. | To modify the agents or their tasks, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-agentic-systems-on-amazon-bedrock-with-the-crewai-framework.html)<pre>cd terraform </pre><pre>terraform apply</pre> | DevOps engineer | 
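
If you prefer to script these steps instead of using the console, the following boto3 sketch triggers the agents and retrieves the newest report. The function and bucket names are assumptions:

```
import boto3

FUNCTION_NAME = "security-auditor-crew"   # Assumed Lambda function name
REPORT_BUCKET = "security-audit-reports"  # Assumed S3 bucket name


def trigger_agents() -> None:
    """Invoke the Lambda function asynchronously, mirroring the scheduled run."""
    boto3.client("lambda").invoke(FunctionName=FUNCTION_NAME, InvocationType="Event")


def latest_report() -> str:
    """Download the most recently written audit report from Amazon S3."""
    s3 = boto3.client("s3")
    objects = s3.list_objects_v2(Bucket=REPORT_BUCKET).get("Contents", [])
    if not objects:
        raise RuntimeError("No reports found yet.")
    newest = max(objects, key=lambda obj: obj["LastModified"])
    body = s3.get_object(Bucket=REPORT_BUCKET, Key=newest["Key"])["Body"]
    return body.read().decode("utf-8")
```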

### Clean up resources
<a name="clean-up-resources"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Delete the created resources. | To preview the resources that will be deleted, run the following command:<pre>terraform plan -destroy</pre>Review the destruction plan carefully. If the planned deletions are acceptable, then run the following command:<pre>terraform destroy</pre>This command permanently deletes all resources created by this pattern and prompts for confirmation before removing them. | DevOps engineer | 

## Troubleshooting
<a name="deploy-agentic-systems-on-amazon-bedrock-with-the-crewai-framework-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| Agent behavior | For information about this issue, see [Test and troubleshoot agent behavior](https://docs.aws.amazon.com/bedrock/latest/userguide/agents-test.html) in the Amazon Bedrock documentation. | 
| Lambda network issues | For information about these issues, see [Troubleshoot networking issues in Lambda](https://docs.aws.amazon.com/lambda/latest/dg/troubleshooting-networking.html) in the Lambda documentation. | 
| IAM permissions | For information about these issues, see [Troubleshoot IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/troubleshoot.html) in the IAM documentation. | 

## Related resources
<a name="deploy-agentic-systems-on-amazon-bedrock-with-the-crewai-framework-resources"></a>

**AWS Blogs**
+ [Build agentic systems with CrewAI and Amazon Bedrock](https://aws.amazon.com/blogs/machine-learning/build-agentic-systems-with-crewai-and-amazon-bedrock/)

**AWS documentation**
+ [Amazon Bedrock documentation](https://docs.aws.amazon.com/bedrock/)
+ [How Amazon Bedrock Agents works](https://docs.aws.amazon.com/bedrock/latest/userguide/agents-how.html)
+ [AWS Well-Architected Framework](https://docs.aws.amazon.com/wellarchitected/latest/framework/welcome.html)

**Other resources**
+ [CrewAI documentation](https://docs.crewai.com/introduction)
+ [Terraform AWS Provider documentation](https://registry.terraform.io/providers/hashicorp/aws/latest/docs)

## Additional information
<a name="deploy-agentic-systems-on-amazon-bedrock-with-the-crewai-framework-additional"></a>

This section contains information about an implementation approach, tool development, and scaling considerations related to the earlier discussion in [Automation and scale](#deploy-agentic-systems-on-amazon-bedrock-with-the-crewai-framework-architecture).

**Implementation approach**

Consider the following approach to adding agents:

1. Agent configuration:
   + Add new agent definitions to the `config/agents.yaml` file.
   + Define specialized backstories, goals, and tools for each agent.
   + Configure memory and analysis capabilities based on agent specialty.

1. Task orchestration:
   + Update the `config/tasks.yaml` file to include new agent-specific tasks.
   + Create dependencies between tasks to help ensure proper information flow.
   + Implement parallel task execution where appropriate.

**Technical implementation**

Following is an addition to the `agents.yaml` file for a proposed Threat Intelligence Specialist agent:

```
# Example new agent configuration in agents.yaml
threat_intelligence_agent:
  name: "Threat Intelligence Specialist"
  role: "Cybersecurity Threat Intelligence Analyst"
  goal: "Correlate AWS security findings with external threat intelligence"
  backstory: "Expert in threat intelligence with experience in identifying emerging threats and attack patterns relevant to cloud infrastructure."
  verbose: true
  allow_delegation: true
  tools:
    - "ThreatIntelligenceTool"
    - "AWSResourceAnalyzer"
```

**Tool development**

With the CrewAI framework, you can take the following actions to enhance your security audit crew's effectiveness:
+ Create custom tools for new agents.
+ Integrate with external APIs for threat intelligence.
+ Develop specialized analyzers for different AWS services.
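
As a sketch of what a custom tool might look like, the following example flags security groups that are open to the internet. The class and tool names are illustrative, and the `BaseTool` import path can vary across CrewAI versions:

```
import boto3
from crewai.tools import BaseTool  # In older releases: from crewai_tools import BaseTool


class OpenSecurityGroupTool(BaseTool):
    """Illustrative analyzer that flags security groups open to the world."""

    name: str = "OpenSecurityGroupAnalyzer"
    description: str = "Lists EC2 security groups that allow ingress from 0.0.0.0/0."

    def _run(self) -> str:
        ec2 = boto3.client("ec2")
        findings = []
        for group in ec2.describe_security_groups()["SecurityGroups"]:
            for rule in group.get("IpPermissions", []):
                if any(r.get("CidrIp") == "0.0.0.0/0" for r in rule.get("IpRanges", [])):
                    findings.append(group["GroupId"])
                    break
        return f"Open security groups: {sorted(set(findings))}"
```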

**Scaling considerations**

When expanding your AWS Infrastructure Security Audit and Reporting system to handle larger environments or more comprehensive audits, address the following scaling factors:
+ **Computational resources**
  + Increase Lambda memory allocation to handle additional agents.
  + Consider splitting agent workloads across multiple Lambda functions.
+ **Cost management**
  + Monitor Amazon Bedrock API usage as agent count increases.
  + Implement selective agent activation based on audit scope.
+ **Collaboration efficiency**
  + Optimize information sharing between agents.
  + Implement hierarchical agent structures for complex environments.
+ **Knowledge base enhancement**
  + Provide agents with specialized knowledge bases for their domains.
  + Regularly update agent knowledge with new security best practices.
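
For instance, selective agent activation might be sketched as follows. The agent and task definitions are placeholders rather than the pattern's actual configuration:

```
from crewai import Agent, Crew, Task


def build_crew(scope: set[str]) -> Crew:
    """Activate only the agents needed for the requested audit scope."""
    agents, tasks = [], []
    if "iam" in scope:
        iam_agent = Agent(
            role="IAM Auditor",
            goal="Find over-permissive IAM policies",
            backstory="Placeholder backstory for an IAM specialist.",
        )
        agents.append(iam_agent)
        tasks.append(Task(
            description="Audit IAM policies for least-privilege violations.",
            expected_output="A markdown list of risky policies.",
            agent=iam_agent,
        ))
    # Add further scopes (network, threat intelligence, compliance) the same way.
    return Crew(agents=agents, tasks=tasks)


# Usage: build_crew({"iam"}).kickoff()
```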

# Deploy an AWS Glue job with an AWS CodePipeline CI/CD pipeline
<a name="deploy-an-aws-glue-job-with-an-aws-codepipeline-ci-cd-pipeline"></a>

*Bruno Klein and Luis Henrique Massao Yamada, Amazon Web Services*

## Summary
<a name="deploy-an-aws-glue-job-with-an-aws-codepipeline-ci-cd-pipeline-summary"></a>

This pattern demonstrates how you can integrate AWS CodeCommit and AWS CodePipeline with AWS Glue, and use AWS Lambda to launch jobs as soon as a developer pushes their changes to a remote AWS CodeCommit repository. 

When a developer submits a change to an extract, transform, and load (ETL) repository and pushes the changes to AWS CodeCommit, a new pipeline is invoked. The pipeline initiates a Lambda function that launches an AWS Glue job with these changes. The AWS Glue job performs the ETL task.

This solution is helpful in the situation where businesses, developers, and data engineers want to launch jobs as soon as changes are committed and pushed to the target repositories. It helps achieve a higher level of automation and reproducibility, therefore avoiding errors during the job launch and lifecycle.

## Prerequisites and limitations
<a name="deploy-an-aws-glue-job-with-an-aws-codepipeline-ci-cd-pipeline-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ [Git](https://git-scm.com/) installed on the local machine
+ [AWS Cloud Development Kit (AWS CDK)](https://docs.aws.amazon.com/cdk/latest/guide/home.html) installed on the local machine
+ [Python](https://www.python.org/) installed on the local machine
+ The code in the *Attachments* section

**Limitations**
+ The pipeline finishes as soon as the AWS Glue job is successfully launched; it does not wait for the job to complete.
+ The code provided in the attachment is intended for demo purposes only.

## Architecture
<a name="deploy-an-aws-glue-job-with-an-aws-codepipeline-ci-cd-pipeline-architecture"></a>

**Target technology stack**
+ AWS Glue
+ AWS Lambda
+ AWS CodePipeline
+ AWS CodeCommit

**Target architecture**

![\[Using Lambda to launch a Glue job as soon as a developer pushes changes to a CodeCommit repo.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/99a67388-5939-4267-8324-b6ca8bfa7962/images/917c9041-b94d-4e95-a3c4-9a1115ead228.png)


 

The process consists of these steps:

1. The developer or data engineer makes a modification in the ETL code, commits, and pushes the change to AWS CodeCommit.

1. The push initiates the pipeline.

1. The pipeline initiates a Lambda function, which calls `codecommit:GetFile` on the repository and uploads the file to Amazon Simple Storage Service (Amazon S3).

1. The Lambda function launches a new AWS Glue job with the ETL code.

1. The Lambda function finishes the pipeline.
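
A condensed sketch of what such a Lambda function might do is shown below. The repository, bucket, and job names are assumptions, not the attachment's exact code:

```
import boto3

codecommit = boto3.client("codecommit")
s3 = boto3.client("s3")
glue = boto3.client("glue")
codepipeline = boto3.client("codepipeline")


def handler(event, context):
    job_id = event["CodePipeline.job"]["id"]  # Provided by the pipeline's Invoke action
    try:
        # 1. Fetch the ETL script from the repository.
        file = codecommit.get_file(repositoryName="etl-repo", filePath="etl.py")
        # 2. Stage it in Amazon S3 where AWS Glue can read it.
        s3.put_object(Bucket="etl-artifacts-bucket", Key="etl.py",
                      Body=file["fileContent"])
        # 3. Launch the AWS Glue job with the updated script.
        glue.start_job_run(JobName="etl-job")
        # 4. Report success so the pipeline can finish.
        codepipeline.put_job_success_result(jobId=job_id)
    except Exception as exc:
        codepipeline.put_job_failure_result(
            jobId=job_id,
            failureDetails={"type": "JobFailed", "message": str(exc)},
        )
```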

**Automation and scale**

The sample attachment demonstrates how you can integrate AWS Glue with AWS CodePipeline. It provides a baseline example that you can customize or extend for your own use. For details, see the *Epics* section.

## Tools
<a name="deploy-an-aws-glue-job-with-an-aws-codepipeline-ci-cd-pipeline-tools"></a>
+ [AWS CodePipeline](https://aws.amazon.com/codepipeline/) – AWS CodePipeline is a fully managed [continuous delivery](https://aws.amazon.com/devops/continuous-delivery/) service that helps you automate your release pipelines for fast and reliable application and infrastructure updates.
+ [AWS CodeCommit](https://aws.amazon.com/codecommit/) – AWS CodeCommit is a fully managed [source control](https://aws.amazon.com/devops/source-control/) service that hosts secure, Git-based repositories.
+ [AWS Lambda](https://aws.amazon.com/lambda/) – AWS Lambda is a serverless compute service that lets you run code without provisioning or managing servers.
+ [AWS Glue](https://aws.amazon.com/glue) – AWS Glue is a serverless data integration service that makes it easy to discover, prepare, and combine data for analytics, machine learning, and application development.
+ [Git client](https://git-scm.com/downloads) – You can use the Git command line, a Git GUI tool, or a desktop client to check out the required artifacts from GitHub. 
+ [AWS CDK](https://aws.amazon.com/cdk/) – The AWS CDK is an open source software development framework that helps you define your cloud application resources by using familiar programming languages.

## Epics
<a name="deploy-an-aws-glue-job-with-an-aws-codepipeline-ci-cd-pipeline-epics"></a>

### Deploy the sample code
<a name="deploy-the-sample-code"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Configure the AWS CLI. | Configure the AWS Command Line Interface (AWS CLI) to target and authenticate with your current AWS account. For instructions, see the [AWS CLI documentation](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html). | Developer, DevOps engineer | 
| Extract the sample project files. | Extract the files from the attachment to create a folder that contains the sample project files. | Developer, DevOps engineer | 
| Deploy the sample code. | After you extract the files, run the following commands from the extract location to create a baseline example:<pre>cdk bootstrap<br />cdk deploy<br />git init<br />git remote add origin <code-commit-repository-url><br />git stage .<br />git commit -m "adds sample code"<br />git push --set-upstream origin main</pre>After the last command, you can monitor the status of the pipeline and the AWS Glue job. | Developer, DevOps engineer | 
| Customize the code. | Customize the code for the etl.py file in accordance with your business requirements. You can revise the ETL code, modify the pipeline stages, or extend the solution. | Data engineer | 

## Related resources
<a name="deploy-an-aws-glue-job-with-an-aws-codepipeline-ci-cd-pipeline-resources"></a>
+ [Getting started with the AWS CDK](https://docs.aws.amazon.com/cdk/latest/guide/getting_started.html)
+ [Adding jobs in AWS Glue](https://docs.aws.amazon.com/glue/latest/dg/add-job.html)
+ [Source action integrations in CodePipeline](https://docs.aws.amazon.com/codepipeline/latest/userguide/integrations-action-type.html#integrations-source)
+ [Invoke an AWS Lambda function in a pipeline in CodePipeline](https://docs.aws.amazon.com/codepipeline/latest/userguide/actions-invoke-lambda-function.html)
+ [AWS Glue programming](https://docs.aws.amazon.com/glue/latest/dg/aws-glue-programming.html)
+ [AWS CodeCommit GetFile API](https://docs.aws.amazon.com/codecommit/latest/APIReference/API_GetFile.html)

## Attachments
<a name="attachments-99a67388-5939-4267-8324-b6ca8bfa7962"></a>

To access additional content that is associated with this document, unzip the following file: [attachment.zip](samples/p-attach/99a67388-5939-4267-8324-b6ca8bfa7962/attachments/attachment.zip)

# Deploy code in multiple AWS Regions using AWS CodePipeline, AWS CodeCommit, and AWS CodeBuild
<a name="deploy-code-in-multiple-aws-regions-using-aws-codepipeline-aws-codecommit-and-aws-codebuild"></a>

*Anand Krishna Varanasi, Amazon Web Services*

## Summary
<a name="deploy-code-in-multiple-aws-regions-using-aws-codepipeline-aws-codecommit-and-aws-codebuild-summary"></a>

This pattern demonstrates how to build infrastructure or architecture across multiple Amazon Web Services (AWS) Regions by using AWS CloudFormation. It includes continuous integration (CI)/continuous deployment (CD) across multiple AWS Regions for faster deployments. The steps in this pattern have been tested for the creation of an AWS CodePipeline job that deploys to three AWS Regions as an example. You can change the number of Regions based on your use case.

## Prerequisites and limitations
<a name="deploy-code-in-multiple-aws-regions-using-aws-codepipeline-aws-codecommit-and-aws-codebuild-prereqs"></a>

**Prerequisites**
+ An active AWS account.
+ Two AWS Identity and Access Management (IAM) roles, one for AWS CodeBuild and one for AWS CloudFormation, with proper policies for performing the CI tasks of testing, bundling, and packaging the artifacts, and for deploying to multiple AWS Regions in parallel:
  + A CodeBuild role with the *AmazonS3FullAccess* and *CloudWatchFullAccess* policies. These policies give CodeBuild access to watch events of AWS CodeCommit through Amazon CloudWatch and to use Amazon Simple Storage Service (Amazon S3) as an artifact store.
  + An AWS CloudFormation role with the following policies, which give AWS CloudFormation, in the final Deploy stage, the ability to create or update AWS Lambda functions, push or watch Amazon CloudWatch logs, and create and update change sets. 
    + *AWSLambdaFullAccess*
    + *AWSCodeDeployFullAccess*
    + *CloudWatchFullAccess*
    + *AWSCloudFormationFullAccess*
    + *AWSCodePipelineFullAccess*

**Note**  
Cross-check the policies created by CodePipeline to verify that CodeBuild and AWS CloudFormation have proper permissions in the CI and CD phases.

## Architecture
<a name="deploy-code-in-multiple-aws-regions-using-aws-codepipeline-aws-codecommit-and-aws-codebuild-architecture"></a>

![\[An AWS CodePipeline job that deploys to three AWS Regions.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/d44c393c-7243-4d4e-8b84-88a8503af98f/images/5c27fc35-5e62-4292-8b18-a7bc7faf2631.png)


This pattern's multiple-Region architecture and workflow comprise the following steps.

1. You send your code to a CodeCommit repository.

1. Upon receiving any code update or commit, CodeCommit invokes a CloudWatch event, which in turn starts a CodePipeline job.

1. CodePipeline engages the CI process, which is handled by CodeBuild. The following tasks are performed:
   + Testing of the AWS CloudFormation templates (optional)
   + Packaging of the AWS CloudFormation templates for each Region included in the deployment. For example, this pattern deploys in parallel to three AWS Regions, so CodeBuild packages the AWS CloudFormation templates into three S3 buckets, one in each specified Region. The S3 buckets are used by CodeBuild as artifact repositories only.

1. CodeBuild packages the artifacts as input for the next Deploy phase, which runs in parallel in the three AWS Regions. If you specify a different number of Regions, CodePipeline deploys to those Regions instead.
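
Conceptually, each Deploy stage performs the equivalent of the following boto3 sketch. The Region list, stack name, and template URLs are assumptions:

```
import boto3

REGIONS = ["us-east-1", "eu-west-1", "ap-south-1"]  # Example target Regions


def deploy_all(template_urls: dict[str, str], stack_name: str = "multi-region-app"):
    """Create or update the same stack in every target Region."""
    for region in REGIONS:
        cfn = boto3.client("cloudformation", region_name=region)
        kwargs = {
            "StackName": stack_name,
            # S3 URL of the packaged template in that Region's artifact bucket.
            "TemplateURL": template_urls[region],
            "Capabilities": ["CAPABILITY_IAM", "CAPABILITY_NAMED_IAM",
                             "CAPABILITY_AUTO_EXPAND"],
        }
        try:
            cfn.create_stack(**kwargs)
        except cfn.exceptions.AlreadyExistsException:
            cfn.update_stack(**kwargs)
```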

## Tools
<a name="deploy-code-in-multiple-aws-regions-using-aws-codepipeline-aws-codecommit-and-aws-codebuild-tools"></a>

**Tools**
+ [AWS CodePipeline](https://docs.aws.amazon.com/codepipeline/latest/userguide/welcome.html) – CodePipeline is a continuous delivery service you can use to model, visualize, and automate the steps required to release your software changes continuously.
+ [AWS CodeBuild](https://docs.aws.amazon.com/codebuild/latest/userguide/welcome.html) – CodeBuild is a fully managed build service that compiles your source code, runs unit tests, and produces artifacts that are ready to deploy.
+ [AWS CodeCommit](https://docs.aws.amazon.com/codecommit/latest/userguide/welcome.html) – CodeCommit is a version control service hosted by Amazon Web Services that you can use to privately store and manage assets (such as source code and binary files) in the cloud.
+ [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) – AWS CloudFormation is a service that helps you model and set up your Amazon Web Services resources so that you can spend less time managing those resources and more time focusing on your applications that run in AWS.
+ [AWS Identity and Access Management](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) – AWS Identity and Access Management (IAM) is a web service that helps you securely control access to AWS resources.
+ [Amazon S3](https://docs.aws.amazon.com/AmazonS3/latest/dev/Welcome.html) – Amazon Simple Storage Service (Amazon S3) is storage for the internet. It is designed to make web-scale computing easier for developers.

**Code**

The following sample code is for the `BuildSpec.yaml` file (Build phase).

```
---
version: 0.2
phases:
  install:
    runtime-versions:
      python: 3.8
    commands:
      - echo "********BUILD PHASE - PYTHON SETUP**********"
  pre_build:
    commands:
      - echo "********BUILD PHASE - DEPENDENCY SETUP**********"
      - "npm install --silent --no-progress"
      - echo "********BUILD PHASE - DEPENDENCY SETUP DONE**********"
  build:
    commands:
      - echo "********BUILD PHASE - CF PACKAGING**********"
      - "aws cloudformation package --template-file sam-template.yaml --s3-bucket $S3_FIRST_REGION --output-template-file packaged-first-region.yaml --region $FIRST_REGION"
      - "aws cloudformation package --template-file sam-template.yaml --s3-bucket $S3_SECOND_REGION --output-template-file packaged-second-region.yaml --region $SECOND_REGION"
      - "aws cloudformation package --template-file sam-template.yaml --s3-bucket $S3_THIRD_REGION --output-template-file packaged-third-region.yaml --region $THIRD_REGION"
  post_build:
    commands:
      - echo "********BUILD PHASE - PACKAGING COMPLETION**********"
artifacts:
  files:
    - packaged-first-region.yaml
    - packaged-second-region.yaml
    - packaged-third-region.yaml
  discard-paths: true
```

## Epics
<a name="deploy-code-in-multiple-aws-regions-using-aws-codepipeline-aws-codecommit-and-aws-codebuild-epics"></a>

### Prepare the code and the CodeCommit repository
<a name="prepare-the-code-and-the-codecommit-repository"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Select the primary AWS Region for the deployment. | Sign in to your AWS account and choose the primary Region for the deployment. The CodeCommit repository will be in the primary Region. | DevOps | 
| Create the CodeCommit repository. | Create the CodeCommit repository in the primary Region that you selected. | DevOps | 
| Push the code into the CodeCommit repository. | In the *Attachments* section, download the code for this example, and then push it into the repository. Generally, the code includes AWS CloudFormation or AWS SAM templates, Lambda code, and the CodeBuild `buildspec.yaml` file as input to the pipeline. | DevOps | 
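
If you prefer to create the repository programmatically instead of through the console, a minimal boto3 sketch follows (the repository name is an assumption):

```
import boto3


def create_pipeline_repository(region: str) -> str:
    """Create the CodeCommit repository in the primary Region and return its clone URL."""
    codecommit = boto3.client("codecommit", region_name=region)
    response = codecommit.create_repository(
        repositoryName="multi-region-pipeline-repo",
        repositoryDescription="Source for the multi-Region CodePipeline example.",
    )
    return response["repositoryMetadata"]["cloneUrlHttp"]
```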

### Source phase: Create the pipeline
<a name="source-phase-create-the-pipeline"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the CodePipeline job. | On the CodePipeline console, choose **Create pipeline**. | DevOps | 
| Name the CodePipeline job and choose the service role setting. | Enter a name for the job, and keep the default service role setting so that CodePipeline creates the role with the necessary policies attached. | DevOps | 
| Specify the location for the artifact store. | Under **Advanced settings**, keep the default option so that CodePipeline creates an S3 bucket to use for code artifact storage. If you use an existing S3 bucket instead, the bucket must be in the primary Region that you specified in the first epic. | DevOps | 
| Specify the encryption key. | Keep the default option, **Default AWS Managed Key**, or choose to use your own AWS Key Management Service (AWS KMS) customer managed key. | DevOps | 
| Specify the source provider. | Under **Source provider**, choose **AWS CodeCommit**. | DevOps | 
| Specify the repository. | Choose the CodeCommit repository that you created in the first epic. If you placed the code in a branch, choose the branch. | DevOps | 
| Specify how code changes are detected. | Keep the default, **Amazon CloudWatch Events**, as the change trigger for CodeCommit to start the CodePipeline job. | DevOps | 

### Build phase: Configure the pipeline
<a name="build-phase-configure-the-pipeline"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Specify the build provider. | For the build provider, choose **AWS CodeBuild**. | DevOps | 
| Specify the AWS Region. | Choose the primary Region, which you specified in the first epic. | DevOps | 

### Build phase: Create and configure the project
<a name="build-phase-create-and-configure-the-project"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the project. | Choose **Create project**, and enter a name for the project. | DevOps | 
| Specify the environment image. | For this pattern demonstration, use the default CodeBuild managed image. You also have the option to use a custom Docker image if you have one. | DevOps | 
| Specify the operating system. | Choose either Amazon Linux 2 or Ubuntu. Note that Amazon Linux 2 is nearing end of support. For more information, see the [Amazon Linux 2 FAQs](http://aws.amazon.com/amazon-linux-2/faqs/). | DevOps | 
| Specify the service role. | Choose the role you created for CodeBuild before you started to create the CodePipeline job. (See the *Prerequisites* section.) | DevOps | 
| Set additional options. | For **Timeout** and **Queued timeout**, keep the default values. For **Certificate**, keep the default setting unless you have a custom certificate that you want to use. | DevOps | 
| Create the environment variables. | For each AWS Region that you want to deploy to, create environment variables by providing the S3 bucket name and the Region name (for example, us-east-1). | DevOps | 
| Provide the buildspec file name, if it is not buildspec.yml. | Keep this field blank if the file name is the default, `buildspec.yml`. If you renamed the buildspec file, enter the name here. Make sure it matches the name of the file that is in the CodeCommit repository. | DevOps | 
| Specify logging. | To see logs in Amazon CloudWatch Logs, keep the default setting. Or you can define specific group or stream names. | DevOps | 

### Skip the Deploy phase
<a name="skip-the-deploy-phase"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Skip the deploy phase and complete the creation of the pipeline. | When you set up the pipeline, CodePipeline allows you to create only one stage in the Deploy phase. To deploy to multiple AWS Regions, skip this phase. After the pipeline is created, you can add multiple Deploy phase stages. | DevOps | 

### Deploy phase: Configure the pipeline for deployment to the first Region
<a name="deploy-phase-configure-the-pipeline-for-deployment-to-the-first-region"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Add a stage to the Deploy phase. | Edit the pipeline and choose **Add stage** in the Deploy phase. This first stage is for the primary Region. | DevOps | 
| Provide an action name for the stage. | Enter a unique name that reflects the first (primary) stage and Region. For example, enter **primary-<region>-deploy**. | DevOps | 
| Specify the action provider. | For **Action provider**, choose AWS CloudFormation. | DevOps | 
| Configure the Region for the first stage. | Choose the first (primary) Region, the same Region where CodePipeline and CodeBuild are set up. This is the primary Region where you want to deploy the stack. | DevOps | 
| Specify the input artifact. | Choose **BuildArtifact**. This is the output of the build phase. | DevOps | 
| Specify the action to take. | For **Action mode**, choose **Create or update a stack**. | DevOps | 
| Enter a name for the CloudFormation stack. | For **Stack name**, enter a unique name for the AWS CloudFormation stack that will be created or updated in the first Region. | DevOps | 
| Specify the template for the first Region. | Select the Region-specific package name that was packaged by CodeBuild and dumped into the S3 bucket for the first (primary) Region. | DevOps | 
| Specify the capabilities. | Capabilities are required if the stack template includes IAM resources or if you create a stack directly from a template that contains macros. For this pattern, use `CAPABILITY_IAM`, `CAPABILITY_NAMED_IAM`, and `CAPABILITY_AUTO_EXPAND`. | DevOps | 

### Deploy phase: Configure the pipeline for deployment to the second Region
<a name="deploy-phase-configure-the-pipeline-for-deployment-to-the-second-region"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Add the second stage to the Deploy phase. | To add a stage for the second Region, edit the pipeline and choose **Add stage** in the Deploy phase. Important: The process of creating the second stage is the same as that of the first stage, except for the following values. | DevOps | 
| Provide an action name for the second stage. | Enter a unique name that reflects the second stage and the second Region. | DevOps | 
| Configure the Region for the second stage. | Choose the second Region where you want to deploy the stack. | DevOps | 
| Specify the template for the second Region. | Select the Region-specific package name that was packaged by CodeBuild and dumped into the S3 bucket for the second Region. | DevOps | 

### Deploy phase: Configure the pipeline for deployment to the third Region
<a name="deploy-phase-configure-the-pipeline-for-deployment-to-the-third-region"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Add the third stage to the Deploy phase. | To add a stage for the third Region, edit the pipeline and choose **Add stage** in the Deploy phase. Important: The process of creating the third stage is the same as that of the previous two stages, except for the following values. | DevOps | 
| Provide an action name for the third stage. | Enter a unique name that reflects the third stage and the third Region. | DevOps | 
| Configure the Region for the third stage. | Choose the third Region where you want to deploy the stack. | DevOps | 
| Specify the template for the third Region. | Select the Region-specific package name that was packaged by CodeBuild and dumped into the S3 bucket for the third Region. | DevOps | 

### Clean up the deployment
<a name="clean-up-the-deployment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Delete the AWS resources. | To clean up the deployment, delete the CloudFormation stacks in each Region. Then delete the CodeCommit, CodeBuild, and CodePipeline resources from the primary Region. | DevOps | 

## Related resources
<a name="deploy-code-in-multiple-aws-regions-using-aws-codepipeline-aws-codecommit-and-aws-codebuild-resources"></a>
+ [What is AWS CodePipeline?](https://docs.aws.amazon.com/codepipeline/latest/userguide/welcome.html)
+ [AWS Serverless Application Model](https://aws.amazon.com/serverless/sam/)
+ [AWS CloudFormation](https://aws.amazon.com/cloudformation/)
+ [AWS CloudFormation architecture structure reference for AWS CodePipeline](https://docs.aws.amazon.com/codepipeline/latest/userguide/action-reference-CloudFormation.html)

## Attachments
<a name="attachments-d44c393c-7243-4d4e-8b84-88a8503af98f"></a>

To access additional content that is associated with this document, unzip the following file: [attachment.zip](samples/p-attach/d44c393c-7243-4d4e-8b84-88a8503af98f/attachments/attachment.zip)

# Deploy workloads from Azure DevOps pipelines to private Amazon EKS clusters
<a name="deploy-workloads-from-azure-devops-pipelines-to-private-amazon-eks-clusters"></a>

*Mahendra Revanasiddappa, Amazon Web Services*

## Summary
<a name="deploy-workloads-from-azure-devops-pipelines-to-private-amazon-eks-clusters-summary"></a>

This pattern demonstrates how to implement continuous integration and continuous delivery (CI/CD) from Azure DevOps pipelines to private Amazon Elastic Kubernetes Service (Amazon EKS) clusters. It addresses a critical challenge faced by organizations that are enhancing their security posture by transitioning to private API server endpoints for their Amazon EKS clusters.

A public endpoint exposes the Kubernetes API server directly to the internet, creating a larger attack surface that malicious actors could potentially target. By switching to a private endpoint, access to the cluster's control plane is restricted to within the customer's virtual private cloud (VPC).

Although transitioning an Amazon EKS cluster to a private API endpoint significantly enhances security, it introduces connectivity challenges for external CI/CD platforms like Azure DevOps. The private endpoint is only accessible from within the cluster's VPC or peered networks. Therefore, standard Microsoft-hosted Azure DevOps agents, operating outside the AWS private network, can’t reach the Kubernetes API server directly. This breaks typical deployment workflows that rely on tools like kubectl or Helm running on these agents because they fail to establish a connection to the cluster.

To overcome this problem, this pattern showcases an efficient approach by using self-hosted Azure DevOps agents within private Amazon EKS clusters. This solution offers superior cost optimization, operational efficiency, and scalability while preserving security requirements. This approach particularly benefits enterprises seeking to streamline their multi-cloud DevOps processes without compromising on performance or security.

## Prerequisites and limitations
<a name="deploy-workloads-from-azure-devops-pipelines-to-private-amazon-eks-clusters-prereqs"></a>

**Prerequisites**
+ An active AWS account.
+ AWS Command Line Interface (AWS CLI) version 2.13.17 or later, [installed](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).
+ kubectl version 1.25.1 or later, [installed](https://kubernetes.io/docs/tasks/tools/).
+ A private Amazon EKS cluster version 1.24 or later, [created](https://docs.aws.amazon.com/eks/latest/userguide/create-cluster.html) with permissions to create namespaces, secrets, and deployments.
+ Worker nodes in the Amazon EKS cluster with outbound connectivity to the internet so that the Azure DevOps agent running on them can connect to the Azure DevOps agent pool.
+ A GitHub account, [created](https://github.com/signup).
+ An Azure DevOps project with access to configure service connections, which are authenticated connections between Azure Pipelines and external or remote services, [created](https://learn.microsoft.com/en-us/azure/devops/user-guide/sign-up-invite-teammates?view=azure-devops&tabs=microsoft-account).
+ The AWS Toolkit for Azure DevOps version 1.15 or later installed for the Azure DevOps project described in the previous point. For installation instructions, see [AWS Toolkit for Azure DevOps](https://marketplace.visualstudio.com/items?itemName=AmazonWebServices.aws-vsts-tools) in Visual Studio Marketplace.

**Limitations**
+ Some AWS services aren’t available in all AWS Regions. For Region availability, see [AWS Services by Region](https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/). For specific endpoints, see [Service endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/aws-service-information.html), and choose the link for the service.

## Architecture
<a name="deploy-workloads-from-azure-devops-pipelines-to-private-amazon-eks-clusters-architecture"></a>

This pattern creates the following:
+ **Amazon ECR repository** - The Amazon Elastic Container Registry (Amazon ECR) repository stores the Docker image with the Azure DevOps agent and the sample app that is deployed.
+ **Azure DevOps agent pool** - An Azure DevOps self-hosted agent pool registers the agent running on the private Amazon EKS cluster.
+ **IAM role** - An AWS Identity and Access Management (IAM) role for the Azure service connection to provide required access to the agent that’s running on a private Amazon EKS cluster.
+ **Azure DevOps service connection** - A service connection in an Azure DevOps account to use the IAM role that provides the required access for the pipeline jobs to access AWS services.

The following diagram shows the architecture of deploying a self-hosted Azure DevOps agent on a private Amazon EKS cluster and deploying a sample application on the same cluster.

![\[Deployment of self-hosted Azure DevOps agent and sample application on private Amazon EKS cluster.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/a965834f-a1e2-4679-bd8c-15eed4f57b55/images/ee22bd3e-311c-46e0-8024-9b7e7752080a.png)


The diagram shows the following workflow:

1. Deploy a self-hosted Azure DevOps agent as a deployment inside an Amazon EKS cluster.

1. An Azure DevOps agent connects to the agent pool on an Azure DevOps account using a personal access token (PAT) for authentication.

1. Azure Pipelines configures a pipeline to deploy by using code from a GitHub repository.

1. The pipeline runs on the agent from the agent pool that was configured in the pipeline configuration. The Azure DevOps agent gets the job information of the pipeline by constantly polling the Azure DevOps account.

1. The Azure DevOps agent builds a Docker image as part of the pipeline job and pushes the image to the Amazon ECR repository.

1. The Azure DevOps agent deploys the sample application on a private Amazon EKS cluster in a namespace called `webapp`. 

## Tools
<a name="deploy-workloads-from-azure-devops-pipelines-to-private-amazon-eks-clusters-tools"></a>

**Tools**
+ [Amazon Elastic Container Registry (Amazon ECR)](https://docs.aws.amazon.com/AmazonECR/latest/userguide/what-is-ecr.html) is a managed container image registry service that’s secure, scalable, and reliable.
+ [Amazon Elastic Kubernetes Service (Amazon EKS)](https://docs.aws.amazon.com/eks/latest/userguide/what-is-eks.html) helps you run Kubernetes on AWS without needing to install or maintain your own Kubernetes control plane or nodes.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.

**Other tools**
+ [Docker](https://www.docker.com/) is a set of platform as a service (PaaS) products that use virtualization at the operating-system level to deliver software in containers.
+ [kubectl](https://kubernetes.io/docs/tasks/tools/) is a command-line interface that helps you run commands against Kubernetes clusters.

**Code repository**
+ The code for this pattern is available in the GitHub [deploy-kubernetes-resources-to-amazon-eks-using-azure-devops](https://github.com/aws-samples/deploy-kubernetes-resources-to-amazon-eks-using-azure-devops) repository.

## Best practices
<a name="deploy-workloads-from-azure-devops-pipelines-to-private-amazon-eks-clusters-best-practices"></a>
+ For Amazon EKS, see the [Amazon EKS Best Practices Guide](https://docs.aws.amazon.com/eks/latest/best-practices/introduction.html).
+ Follow the principle of least privilege and grant the minimum permissions required to perform a task. For more information, see [Grant least privilege](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html#grant-least-priv) and [Security best practices](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html) in the IAM documentation.

## Epics
<a name="deploy-workloads-from-azure-devops-pipelines-to-private-amazon-eks-clusters-epics"></a>

### Create a service connection
<a name="create-a-service-connection"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Find the Azure DevOps organization GUID. | Sign in to your Azure DevOps account, and then use the following URL to find the organization GUID: `https://dev.azure.com/{DevOps_Org_ID}/_apis/projectCollections?api-version=6.0` In the URL, replace `{DevOps_Org_ID}` with your Azure DevOps organization ID. | AWS DevOps | 
| Configure an IdP in the AWS account. | To configure an Identity provider (IdP) in the AWS account for an Azure service connection, use the following steps:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-workloads-from-azure-devops-pipelines-to-private-amazon-eks-clusters.html)For more details, see [How to federate into AWS from Azure DevOps using OpenID Connect](https://aws.amazon.com/blogs/modernizing-with-aws/how-to-federate-into-aws-from-azure-devops-using-openid-connect/). | AWS DevOps | 
| Create an IAM policy in the AWS account. | To create an IAM policy to provide the required permissions to the IAM role used by the Azure DevOps pipeline, use the following steps:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-workloads-from-azure-devops-pipelines-to-private-amazon-eks-clusters.html) | AWS DevOps | 
| Create an IAM role in the AWS account. | To configure an IAM role in the AWS account for the Azure service connection, use the following steps:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-workloads-from-azure-devops-pipelines-to-private-amazon-eks-clusters.html)<pre>{<br />  "Version": "2012-10-17",		 	 	 <br />  "Statement": [<br />    {<br />      "Effect": "Allow",<br />      "Principal": {<br />        "Federated": "arn:aws:iam::{account_id}:oidc-provider/vstoken.dev.azure.com/{OrganizationGUID}"<br />      },<br />      "Action": "sts:AssumeRoleWithWebIdentity",<br />      "Condition": {<br />        "StringEquals": {<br />          "vstoken.dev.azure.com/{OrganizationGUID}:aud": "api://AzureADTokenExchange",<br />          "vstoken.dev.azure.com/{OrganizationGUID}:sub": "sc://{OrganizationName}/{ProjectName}/{ServiceConnectionName}"<br />        }<br />      }<br />    }<br />  ]<br />}</pre>In the policy, provide your information for the following placeholders:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-workloads-from-azure-devops-pipelines-to-private-amazon-eks-clusters.html) | AWS DevOps | 
| Create a service connection in the Azure DevOps account. | To configure an Azure service connection, use the following steps:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-workloads-from-azure-devops-pipelines-to-private-amazon-eks-clusters.html)For more details, see [Create a service connection](https://learn.microsoft.com/en-us/azure/devops/pipelines/library/service-endpoints?view=azure-devops#create-a-service-connection) in the Microsoft documentation. | AWS DevOps | 
| Add the IAM role to the Amazon EKS configuration file. | The IAM role must have the necessary permissions to perform the required operations on the Amazon EKS cluster. Because it’s a pipeline role, the IAM role must be able to manage almost all types of resources on the cluster. Therefore, the `system:masters` group permission is appropriate for this role. To add the required configuration to the `aws-auth` ConfigMap within Kubernetes, use the following code:<pre>- groups:<br />  - system:masters<br />  rolearn: arn:aws:iam::{account_id}:role/ADO-role<br />  username: ADO-role</pre>Replace `{account_id}` with your AWS account ID. For more information, see [How Amazon EKS works with IAM](https://docs.aws.amazon.com/eks/latest/userguide/security-iam-service-with-iam.html#security-iam-service-with-iam-roles) in the Amazon EKS documentation. | AWS DevOps | 

### Create an agent pool
<a name="create-an-agent-pool"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a self-hosted agent pool. | To configure a self-hosted agent pool in the Azure DevOps account, use the following steps:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-workloads-from-azure-devops-pipelines-to-private-amazon-eks-clusters.html)For more details, see [Create and manage agent pools](https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/pools-queues?view=azure-devops&tabs=yaml%2Cbrowser) in the Microsoft documentation. | AWS DevOps | 

### Build Azure DevOps agent image and push to Amazon ECR
<a name="build-azure-devops-agent-image-and-push-to-ecr"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an Amazon ECR repository. | The Docker images that are used to deploy the Azure DevOps agent and sample application (`webapp`) on the private Amazon EKS cluster must be stored in an Amazon ECR repository. To create an Amazon ECR repository, use the following steps:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-workloads-from-azure-devops-pipelines-to-private-amazon-eks-clusters.html)For more details, see [Creating an Amazon ECR private repository to store images](https://docs.aws.amazon.com/AmazonECR/latest/userguide/repository-create.html) in the Amazon ECR documentation. | AWS DevOps | 
| Create a Dockerfile to build the Azure DevOps agent. | Create a Dockerfile to build the Docker image that has the Azure DevOps agent installed. Store the following content in a file named `Dockerfile`:<pre><br />FROM ubuntu:22.04 <br />ENV TARGETARCH="linux-x64"<br />RUN apt update && apt upgrade -y && apt install -y curl git jq libicu70 unzip wget<br /><br />RUN curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"<br />RUN unzip awscliv2.zip<br />RUN ./aws/install<br />RUN rm -rf aws awscliv2.zip<br /><br />RUN curl -sSL https://get.docker.com/ | sh<br /><br />RUN curl -sL https://aka.ms/InstallAzureCLIDeb | bash<br />RUN mkdir -p azp <br />WORKDIR /azp/<br /><br />COPY ./start.sh ./ <br />RUN chmod +x ./start.sh<br /><br />RUN useradd -m -d /home/agent agent <br />RUN chown -R agent:agent /azp /home/agent<br />RUN groupadd -f docker <br />RUN usermod -aG docker agent<br />USER agent<br /><br />ENTRYPOINT [ "./start.sh" ]</pre> | AWS DevOps | 
| Create script for the Azure DevOps agent. | To create the `start.sh` script, use the following steps:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-workloads-from-azure-devops-pipelines-to-private-amazon-eks-clusters.html) | AWS DevOps | 
| Build a Docker image with the Azure DevOps agent.  | To create a Docker image to install the Azure DevOps agent, use the Dockerfile that you created earlier to build the image. In the same directory where the Dockerfile is stored, run the following commands:<pre>aws ecr get-login-password --region region | docker login --username AWS --password-stdin aws_account_id.dkr.ecr.region.amazonaws.com<br /><br />docker build --platform linux/amd64 -t ado-agent:latest .<br /><br />docker tag ado-agent:latest aws_account_id.dkr.ecr.region.amazonaws.com/webapp:latest<br /><br />docker push aws_account_id.dkr.ecr.region.amazonaws.com/webapp:latest</pre>Replace `aws_account_id` and `region` with your AWS account ID and AWS Region. | AWS DevOps | 
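
If you want to script the repository setup from the first task in this epic, a boto3 sketch follows. It uses the `webapp` repository name that the sample pipeline expects; the scan-on-push setting is an optional addition:

```
import boto3


def create_webapp_repository(region: str) -> str:
    """Create the Amazon ECR repository that stores the agent and webapp images."""
    ecr = boto3.client("ecr", region_name=region)
    response = ecr.create_repository(
        repositoryName="webapp",
        imageScanningConfiguration={"scanOnPush": True},  # Optional hardening
    )
    return response["repository"]["repositoryUri"]
```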

### Deploy the Azure DevOps agent to a private Amazon EKS cluster
<a name="deploy-the-azure-devops-agent-to-a-private-eks-cluster"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Generate an Azure personal access token. | The agent running on the private Amazon EKS cluster requires a personal access token (PAT) so that it can authenticate with the Azure DevOps account. To generate a PAT, use the following steps:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-workloads-from-azure-devops-pipelines-to-private-amazon-eks-clusters.html)[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-workloads-from-azure-devops-pipelines-to-private-amazon-eks-clusters.html)<pre>apiVersion: v1<br />kind: Secret<br />metadata:<br />  name: azdevops-pat<br />  namespace: default<br />type: Opaque<br />stringData:<br />  AZP_TOKEN: <PAT Token></pre>[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-workloads-from-azure-devops-pipelines-to-private-amazon-eks-clusters.html)<pre>kubectl create -f ado-secret.yaml</pre>For more details, see [Register an agent using a personal access token (PAT)](https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/personal-access-token-agent-registration?view=azure-devops) in the Microsoft documentation. | AWS DevOps | 
| Use the Kubernetes manifest file for agent deployment. | To deploy the Azure DevOps agent on the private Amazon EKS cluster, copy the following manifest file and store the file as `agent-deployment.yaml`: <pre>apiVersion: apps/v1<br />kind: Deployment<br />metadata:<br />  name: azure-pipelines-agent-eks<br />  labels:<br />    app: azure-pipelines-agent<br />spec:<br />  replicas: 1<br />  selector:<br />    matchLabels:<br />      app: azure-pipelines-agent<br />  template:<br />    metadata:<br />      labels:<br />        app: azure-pipelines-agent<br />    spec:<br />      containers:<br />      - name: docker<br />        image: docker:dind<br />        securityContext: <br />          privileged: true<br />        volumeMounts:<br />        - name: shared-workspace<br />          mountPath: /workspace<br />        - name: dind-storage<br />          mountPath: /var/lib/docker<br />        env:<br />        - name: DOCKER_TLS_CERTDIR<br />          value: ""<br />      - name: azure-pipelines-agent<br />        image: aws_account_id.dkr.ecr.region.amazonaws.com/webapp:latest<br />        env:<br />        - name: AZP_URL<br />          value: "<Azure account URL>"<br />        - name: AZP_POOL<br />          value: "eks-agent"<br />        - name: AZP_TOKEN<br />          valueFrom:<br />            secretKeyRef:<br />              name: azdevops-pat<br />              key: AZP_TOKEN<br />        - name: AZP_AGENT_NAME<br />          valueFrom:<br />            fieldRef:<br />              fieldPath: metadata.name<br />        - name: DOCKER_HOST<br />          value: tcp://localhost:2375<br />        volumeMounts:<br />        - mountPath: /workspace<br />          name: shared-workspace<br />      volumes:<br />      - name: dind-storage<br />        emptyDir: {}<br />      - name: shared-workspace<br />        emptyDir: {}</pre>Replace `aws_account_id` and `<Azure account URL>` with your AWS account ID and Azure DevOps account URL. | AWS DevOps | 
| Deploy the agent on the private Amazon EKS cluster. | To deploy the Azure DevOps agent on the private Amazon EKS cluster, use the following command:<pre>kubectl create -f agent-deployment.yaml</pre> | AWS DevOps | 
| Verify the agent is running. | To verify that the Azure DevOps agent is running, use the following command:<pre>kubectl get deploy azure-pipelines-agent-eks<br /></pre>The expected output should be similar to the following:<pre><br />NAME                        READY   UP-TO-DATE   AVAILABLE   AGE<br />azure-pipelines-agent-eks   1/1     1            1           58s</pre>Make sure that the `READY` column shows `1/1`. | AWS DevOps | 
| Verify the agent is registered with the Azure DevOps agent pool. | To verify that the agent is deployed on the private Amazon EKS cluster and is registered with the agent pool `eks-agent`, use the following steps:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-workloads-from-azure-devops-pipelines-to-private-amazon-eks-clusters.html)You should see one agent listed with a **Status** of **Online**, and the name of the agent should start with **azure-pipelines-agent-eks-**. | AWS DevOps | 

### Deploy sample application
<a name="deploy-sample-application"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Fork the sample application repository to your GitHub account.  | Fork the following AWS Samples repository to your GitHub account:[https://github.com/aws-samples/deploy-kubernetes-resources-to-amazon-eks-using-azure-devops](https://github.com/aws-samples/deploy-kubernetes-resources-to-amazon-eks-using-azure-devops) | AWS DevOps | 
| Create a pipeline. | To create a pipeline in your Azure DevOps account, use the following steps:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-workloads-from-azure-devops-pipelines-to-private-amazon-eks-clusters.html)<pre>pool:<br />  name: eks-agent<br />#pool: self-hosted # If you are running self-hosted Azure DevOps agents<br /><br />stages:<br /># Referring to the pipeline template; input parameters that are not specified will be added with defaults<br />- template: ./pipeline_templates/main_template.yaml<br />  parameters:<br />    serviceConnectionName: aws-sc<br />    awsRegion: <your region><br />    awsEKSClusterName: <name of your EKS cluster><br />    projectName: webapp<br /></pre>[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-workloads-from-azure-devops-pipelines-to-private-amazon-eks-clusters.html) | AWS DevOps | 
| Verify that the sample application deployed. | After the pipeline completes, verify the successful deployment of the sample application by checking both the Amazon ECR repository and the Amazon EKS cluster. To verify artifacts in the Amazon ECR repository, use the following steps:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-workloads-from-azure-devops-pipelines-to-private-amazon-eks-clusters.html) [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-workloads-from-azure-devops-pipelines-to-private-amazon-eks-clusters.html) For example, `20250501.1-image` and `20250501.1-helm`. To verify deployment on the private Amazon EKS cluster in the namespace `webapp`, use the following command:<pre>kubectl get deploy -n webapp </pre>The expected output is as follows:<pre><br />NAME     READY   UP-TO-DATE   AVAILABLE<br />webapp   1/1     1            1           </pre>Note: If this is your first pipeline run, you might need to authorize the service connection and agent pool. Look for permission requests in the Azure DevOps pipeline interface, and approve them to proceed. | AWS DevOps | 

## Troubleshooting
<a name="deploy-workloads-from-azure-devops-pipelines-to-private-amazon-eks-clusters-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| Pipeline fails when Amazon ECR repository name doesn’t match `webapp` | The sample application expects the Amazon ECR repository name to match the `projectName: webapp` parameter in `azure_pipeline.yml`. To resolve this issue, rename your Amazon ECR repository to `webapp`, or update the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-workloads-from-azure-devops-pipelines-to-private-amazon-eks-clusters.html) | 
| Error: Kubernetes cluster unreachable: the server has asked for the client to provide credentials | If you encounter this error in the "Pull and Deploy Helm Chart" step in your Azure pipeline, the root cause is typically an incorrect IAM role configuration in your Amazon EKS cluster’s `aws-auth` ConfigMap. To resolve this issue, check the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-workloads-from-azure-devops-pipelines-to-private-amazon-eks-clusters.html) | 

## Related resources
<a name="deploy-workloads-from-azure-devops-pipelines-to-private-amazon-eks-clusters-resources"></a>

**AWS Blogs**
+ [How to federate into AWS from Azure DevOps using OpenID Connect](https://aws.amazon.com/blogs/modernizing-with-aws/how-to-federate-into-aws-from-azure-devops-using-openid-connect/)

**AWS services documentation**
+ [Amazon ECR](https://docs.aws.amazon.com/AmazonECR/latest/userguide/what-is-ecr.html)
+ [Amazon EKS](https://docs.aws.amazon.com/eks/latest/userguide/what-is-eks.html)

**Microsoft documentation**
+ [What is Azure DevOps?](https://learn.microsoft.com/en-us/azure/devops/user-guide/what-is-azure-devops?view=azure-devops)
+ [What is Azure Pipelines?](https://learn.microsoft.com/en-us/azure/devops/pipelines/get-started/what-is-azure-pipelines?view=azure-devops)

# Execute Amazon Redshift SQL queries by using Terraform
<a name="execute-redshift-sql-queries-using-terraform"></a>

*Sylvia Qi and Aditya Ambati, Amazon Web Services*

## Summary
<a name="execute-redshift-sql-queries-using-terraform-summary"></a>

Using infrastructure as code (IaC) for the deployment and management of Amazon Redshift is a prevalent practice within DevOps. IaC facilitates the deployment and configuration of various Amazon Redshift resources, such as clusters, snapshots, and parameter groups. However, IaC doesn’t extend to the management of database resources like tables, schemas, views, and stored procedures. These database elements are managed through SQL queries and are not directly supported by IaC tools. Although solutions and tools exist for managing these resources, you might prefer not to introduce additional tools into your technology stack.

This pattern outlines a methodology that uses Terraform to deploy Amazon Redshift database resources, including tables, schemas, views, and stored procedures. The pattern distinguishes between two types of SQL queries:
+ **Nonrepeatable queries** – These queries are executed once during the initial Amazon Redshift deployment to establish the essential database components. 
+ **Repeatable queries** – These queries are idempotent and can be rerun without impacting the database. The solution uses Terraform to monitor changes in repeatable queries and apply them accordingly.

For more details, see *Solution walkthrough* in [Additional information](#execute-redshift-sql-queries-using-terraform-additional).

## Prerequisites and limitations
<a name="execute-redshift-sql-queries-using-terraform-prereqs"></a>

**Prerequisites**

You must have an active AWS account and install the following on your deployment machine:
+ [AWS Command Line Interface](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) (AWS CLI)
+ An [AWS CLI profile](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html) configured with Amazon Redshift read/write permissions
+ [Terraform](https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli) version 1.6.2 or later
+ [Python3](https://www.python.org/downloads/)
+ [Boto3](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/quickstart.html)

**Limitations**
+ This solution supports a single Amazon Redshift database because Terraform only allows for the creation of one database during cluster creation.
+ This pattern doesn’t include tests to validate changes to the repeatable queries before applying them. We recommend that you incorporate such tests for enhanced reliability.
+ To illustrate the solution, this pattern provides a sample `redshift.tf` file that uses a local Terraform state file. However, for production environments, we strongly recommend that you employ a remote state file with a locking mechanism for enhanced stability and collaboration.
+ Some AWS services aren’t available in all AWS Regions. For Region availability, see [AWS services by Region](https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/). For specific endpoints, see [Service endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/aws-service-information.html), and choose the link for the service.

**Product versions**

This solution is developed and tested on [Amazon Redshift patch 179](https://docs.aws.amazon.com/redshift/latest/mgmt/cluster-versions.html#cluster-version-179).

**Code repository**

The code for this pattern is available in the GitHub [amazon-redshift-sql-deploy-terraform](https://github.com/aws-samples/amazon-redshift-sql-deploy-terraform) repository.

## Architecture
<a name="execute-redshift-sql-queries-using-terraform-architecture"></a>

The following diagram illustrates how Terraform manages the Amazon Redshift database resources by handling both nonrepeatable and repeatable SQL queries.

![\[Process for Terraform to manage Amazon Redshift database resources using SQL queries.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/0f4467ac-761b-4b6b-a32f-e18a2ca2245d/images/3b6ff9e8-e3d1-48ed-9fa1-4b14f7d3d65b.png)


The diagram shows the following steps:

1. Terraform applies nonrepeatable SQL queries during the initial Amazon Redshift cluster deployment.

1. The developer commits changes to the repeatable SQL queries.

1. Terraform monitors changes in the repeatable SQL queries.

1. Terraform applies repeatable SQL queries to the Amazon Redshift database.

The solution provided by this pattern is built on the [Terraform module for Amazon Redshift](https://registry.terraform.io/modules/terraform-aws-modules/redshift/aws/latest). The Terraform module provisions an Amazon Redshift cluster and database. To enhance the module, we used `terraform_data` resources, which invoke a custom Python script that executes SQL queries by using the Amazon Redshift [ExecuteStatement](https://docs.aws.amazon.com/redshift-data/latest/APIReference/API_ExecuteStatement.html) API operation. As a result, the module can do the following:
+ Deploy any number of database resources by using SQL queries after the database is provisioned.
+ Monitor continuously for changes in the repeatable SQL queries and apply those changes using Terraform.

For more details, see *Solution walkthrough* in [Additional information](#execute-redshift-sql-queries-using-terraform-additional).

## Tools
<a name="execute-redshift-sql-queries-using-terraform-tools"></a>

**AWS services**
+ [Amazon Redshift](https://docs.aws.amazon.com/redshift/latest/mgmt/welcome.html) is a fully managed petabyte-scale data warehouse service in the AWS Cloud.

**Other tools**
+ [Terraform](https://www.terraform.io/) is an infrastructure as code (IaC) tool from HashiCorp that helps you create and manage cloud and on-premises resources.
+ [Python](https://www.python.org/) is a general-purpose programming language that’s used in this pattern to execute SQL queries. 

## Best practices
<a name="execute-redshift-sql-queries-using-terraform-best-practices"></a>
+ [Amazon Redshift best practices](https://docs.aws.amazon.com/redshift/latest/dg/best-practices.html)
+ [Using the Amazon Redshift Data API to interact with Amazon Redshift clusters](https://aws.amazon.com/blogs/big-data/using-the-amazon-redshift-data-api-to-interact-with-amazon-redshift-clusters/)

## Epics
<a name="execute-redshift-sql-queries-using-terraform-epics"></a>

### Deploy the solution using Terraform
<a name="deploy-the-solution-using-terraform"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| **Clone the repository.** | To clone the Git repository containing the Terraform code for provisioning an Amazon Redshift cluster, use the following command.<pre>git clone https://github.com/aws-samples/amazon-redshift-sql-deploy-terraform.git</pre> | DevOps engineer | 
| **Update the Terraform variables.** | To customize the Amazon Redshift cluster deployment according to your specific requirements, update the following parameters in the `terraform.tfvars` file.<pre>region                    = "<AWS_REGION>"<br />cluster_identifier        = "<REDSHIFT_CLUSTER_IDENTIFIER>"<br />node_type                 = "<REDSHIFT_NODE_TYPE>"<br />number_of_nodes           = "<REDSHIFT_NODE_COUNT>"<br />database_name             = "<REDSHIFT_DB_NAME>"<br />subnet_ids                = "<REDSHIFT_SUBNET_IDS>"<br />vpc_security_group_ids    = "<REDSHIFT_SECURITY_GROUP_IDS>"<br />run_nonrepeatable_queries = true<br />run_repeatable_queries    = true<br />sql_path_bootstrap        = "<BOOTSTRAP_SQLS_PATH>"<br />sql_path_nonrepeatable    = "<NON-REPEATABLE_SQLS_PATH>"<br />sql_path_repeatable       = "<REPEATABLE_SQLS_PATH>"<br />sql_path_finalize         = "<FINALIZE_SQLS_PATH>"<br />create_random_password    = false<br />master_username           = "<REDSHIFT_MASTER_USERNAME>"</pre> | DevOps engineer | 
| Deploy the resources using Terraform. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/execute-redshift-sql-queries-using-terraform.html) | DevOps engineer | 
| (Optional) Execute additional SQL queries. | The sample repository provides several SQL queries for demo purposes. To execute your own SQL queries, add them to the following folders: `/bootstrap`, `/nonrepeatable`, `/repeatable`, and `/finalize`. |  | 

### Monitor the execution of SQL statements
<a name="monitor-the-execution-of-sql-statements"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Monitor the deployment of SQL statements. | You can monitor the results of the SQL statements that are executed against the Amazon Redshift cluster. For examples of output that show a failed and a successful SQL execution, see *Example SQL statements* in [Additional information](#execute-redshift-sql-queries-using-terraform-additional).  | DBA, DevOps engineer | 
| Clean up resources. | To delete all the resources deployed by Terraform, run the following command.<pre>terraform destroy</pre> | DevOps engineer | 

### Validate the results
<a name="validate-the-results"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Validate the data in the Amazon Redshift cluster. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/execute-redshift-sql-queries-using-terraform.html) | DBA, AWS DevOps | 

## Related resources
<a name="execute-redshift-sql-queries-using-terraform-resources"></a>

**AWS documentation**
+ [Amazon Redshift provisioned clusters](https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-clusters.html)
+ [Troubleshooting issues for Amazon Redshift Data API](https://docs.aws.amazon.com/redshift/latest/mgmt/data-api-troubleshooting.html)

**Other resources**
+ [Command: apply](https://developer.hashicorp.com/terraform/cli/commands/apply) (Terraform documentation)

## Additional information
<a name="execute-redshift-sql-queries-using-terraform-additional"></a>

**Solution walkthrough**

To use the solution, you must organize your Amazon Redshift SQL queries in a specific way. All SQL queries must be stored in files with a `.sql` extension.

In the code example provided with this pattern, the SQL queries are organized in the following folder structure. You can modify the code (`sql-queries.tf` and `sql-queries.py`) to work with any structure that fits your unique use case.

```
/bootstrap
     |- Any # of files
     |- Any # of sub-folders
/nonrepeatable
     |- Any # of files
     |- Any # of sub-folders
/repeatable
     /udf
          |- Any # of files
          |- Any # of sub-folders
     /table
          |- Any # of files
          |- Any # of sub-folders
     /view
          |- Any # of files
          |- Any # of sub-folders
     /stored-procedure
          |- Any # of files
          |- Any # of sub-folders
/finalize
     |- Any # of files
     |- Any # of sub-folders
```

Given the preceding folder structure, during Amazon Redshift cluster deployment, Terraform executes the queries in the following order:

1. `/bootstrap`

1. `/nonrepeatable`

1. `/repeatable`

1. `/finalize`

The `/repeatable` folder contains four subfolders: `/udf`, `/table`, `/view`, and `/stored-procedure`. These subfolders indicate the order in which Terraform executes the SQL queries.

The Python script that executes the SQL queries is `sql-queries.py`. First, the script reads all the files and subfolders of a specific source directory, for example, the `sql_path_bootstrap` parameter. Then the script executes the queries by calling the Amazon Redshift [ExecuteStatement](https://docs.aws.amazon.com/redshift-data/latest/APIReference/API_ExecuteStatement.html) API operation. You might have one or more SQL queries in a file. The following code snippet shows the Python function that executes SQL statements stored in a file against an Amazon Redshift cluster.

```
def execute_sql_statement(filename, cluster_id, db_name, secret_arn, aws_region):
    """Execute SQL statements in a file"""
    redshift_client = boto3.client(
        'redshift-data', region_name=aws_region)
    contents = get_contents_from_file(filename),
    response = redshift_client.execute_statement(
        Sql=contents[0],
        ClusterIdentifier=cluster_id,
        Database=db_name,
        WithEvent=True,
        StatementName=filename,
        SecretArn=secret_arn
    )
    ...
```

The Terraform script `sql-queries.tf` creates the [terraform\_data](https://developer.hashicorp.com/terraform/language/resources/terraform-data) resources that invoke the `sql-queries.py` script. There is a `terraform_data` resource for each of the four folders: `/bootstrap`, `/nonrepeatable`, `/repeatable`, and `/finalize`. The following code snippet shows the `terraform_data` resource that executes the SQL queries in the `/bootstrap` folder.

```
locals {
  program               = "${path.module}/sql-queries.py"
  redshift_cluster_name = try(aws_redshift_cluster.this[0].id, null)
}

resource "terraform_data" "run_bootstrap_queries" {
  count      = var.create && var.run_nonrepeatable_queries && (var.sql_path_bootstrap != "") && (var.snapshot_identifier == null) ? 1 : 0
  depends_on = [aws_redshift_cluster.this[0]]

  provisioner "local-exec" {
    command = "python3 ${local.program} ${var.sql_path_bootstrap} ${local.redshift_cluster_name} ${var.database_name} ${var.redshift_secret_arn} ${local.aws_region}"
  }
}
```

You can control whether to run these queries by using the following variables. If you don’t want to run queries in `sql_path_bootstrap`, `sql_path_nonrepeatable`, `sql_path_repeatable`, or `sql_path_finalize`, set their values to `""`.

```
  run_nonrepeatable_queries = true
  run_repeatable_queries    = true
  sql_path_bootstrap        = "src/redshift/bootstrap"
  sql_path_nonrepeatable    = "src/redshift/nonrepeatable"
  sql_path_repeatable       = "src/redshift/repeatable"
  sql_path_finalize         = "src/redshift/finalize"
```

When you run `terraform apply`, Terraform considers the `terraform_data` resource to be created after the script completes, regardless of the script’s results. If some SQL queries fail and you want to rerun them, you can manually remove the resource from the Terraform state and run `terraform apply` again. For example, the following command removes the `run_bootstrap_queries` resource from the Terraform state.

`terraform state rm module.redshift.terraform_data.run_bootstrap_queries[0]`

The following code example shows how the `run_repeatable_queries` resource monitors changes in the `repeatable` folder by using the [sha256 hash](https://developer.hashicorp.com/terraform/language/functions/sha256). If any file within the folder is updated, Terraform marks the entire directory for an update. Then, Terraform runs the queries in the directory again during the next `terraform apply`.

```
resource "terraform_data" "run_repeatable_queries" {
  count      = var.create_redshift && var.run_repeatable_queries && (var.sql_path_repeatable != "") ? 1 : 0
  depends_on = [terraform_data.run_nonrepeatable_queries]

  # Continuously monitor and apply changes in the repeatable folder
  triggers_replace = {
    dir_sha256 = sha256(join("", [for f in fileset("${var.sql_path_repeatable}", "**") : filesha256("${var.sql_path_repeatable}/${f}")]))
  }

  provisioner "local-exec" {
    command = "python3 ${local.sql_queries} ${var.sql_path_repeatable} ${local.redshift_cluster_name} ${var.database_name} ${var.redshift_secret_arn}"
  }
}
```

To refine the code, you can implement a mechanism to detect and apply changes only to the files that have been updated within the `repeatable` folder, rather than applying changes to all files indiscriminately.
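
One way to implement that refinement is to keep a per-file hash manifest and rerun only the files whose hashes changed since the previous run, instead of hashing the whole directory. The following is a minimal Python sketch of that idea. The manifest file name and the folder path are assumptions for illustration, and the script only prints the changed files rather than calling the pattern's SQL execution helper.

```
import hashlib
import json
from pathlib import Path

MANIFEST = Path(".sql-hashes.json")  # hypothetical manifest of previously applied file hashes


def file_sha256(path: Path) -> str:
    """Return the SHA-256 hex digest of a single SQL file."""
    return hashlib.sha256(path.read_bytes()).hexdigest()


def changed_sql_files(sql_dir: str) -> list:
    """Compare current hashes against the stored manifest and return only the updated files."""
    previous = json.loads(MANIFEST.read_text()) if MANIFEST.exists() else {}
    current = {str(p): file_sha256(p) for p in sorted(Path(sql_dir).rglob("*.sql"))}
    changed = [Path(name) for name, digest in current.items() if previous.get(name) != digest]
    MANIFEST.write_text(json.dumps(current, indent=2))  # persist hashes for the next run
    return changed


if __name__ == "__main__":
    for sql_file in changed_sql_files("src/redshift/repeatable"):
        print(f"Would rerun: {sql_file}")  # call your SQL execution helper here instead
```

A check like this could be invoked from the `local-exec` provisioner so that unchanged repeatable queries are skipped on subsequent runs.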

**Example SQL statements**

The following output shows a failed SQL execution, along with an error message.

```
module.redshift.terraform_data.run_nonrepeatable_queries[0] (local-exec): Executing: ["/bin/sh" "-c" "python3 modules/redshift/sql-queries.py src/redshift/nonrepeatable testcluster-1 db1 arn:aws:secretsmanager:us-east-1:XXXXXXXXXXXX:secret:/redshift/master_user/password-8RapGH us-east-1"]
module.redshift.terraform_data.run_nonrepeatable_queries[0] (local-exec): -------------------------------------------------------------------
module.redshift.terraform_data.run_nonrepeatable_queries[0] (local-exec): src/redshift/nonrepeatable/table/admin/admin.application_family.sql
module.redshift.terraform_data.run_nonrepeatable_queries[0] (local-exec): -------------------------------------------------------------------
module.redshift.terraform_data.run_nonrepeatable_queries[0] (local-exec): Status: FAILED
module.redshift.terraform_data.run_nonrepeatable_queries[0] (local-exec): SQL execution failed.
module.redshift.terraform_data.run_nonrepeatable_queries[0] (local-exec): Error message: ERROR: syntax error at or near ")"
module.redshift.terraform_data.run_nonrepeatable_queries[0] (local-exec):   Position: 244
module.redshift.terraform_data.run_nonrepeatable_queries[0]: Creation complete after 3s [id=ee50ba6c-11ae-5b64-7e2f-86fd8caa8b76]
```

The following output shows a successful SQL execution.

```
module.redshift.terraform_data.run_bootstrap_queries[0]: Provisioning with 'local-exec'...
module.redshift.terraform_data.run_bootstrap_queries[0] (local-exec): Executing: ["/bin/sh" "-c" "python3 modules/redshift/sql-queries.py src/redshift/bootstrap testcluster-1 db1 arn:aws:secretsmanager:us-east-1:XXXXXXXXXXXX:secret:/redshift/master_user/password-8RapGH us-east-1"]
module.redshift.terraform_data.run_bootstrap_queries[0] (local-exec): -------------------------------------------------------------------
module.redshift.terraform_data.run_bootstrap_queries[0] (local-exec): src/redshift/bootstrap/db.sql
module.redshift.terraform_data.run_bootstrap_queries[0] (local-exec): -------------------------------------------------------------------
module.redshift.terraform_data.run_bootstrap_queries[0] (local-exec): Status: FINISHED
module.redshift.terraform_data.run_bootstrap_queries[0] (local-exec): SQL execution successful.
module.redshift.terraform_data.run_bootstrap_queries[0]: Creation complete after 2s [id=d565ef6d-be86-8afd-8e90-111e5ea4a1be]
```

# Export tags for a list of Amazon EC2 instances to a CSV file
<a name="export-tags-for-a-list-of-amazon-ec2-instances-to-a-csv-file"></a>

*Sida Ju and Pac Joonhyun, Amazon Web Services*

## Summary
<a name="export-tags-for-a-list-of-amazon-ec2-instances-to-a-csv-file-summary"></a>

This pattern shows how to programmatically export tags for a list of Amazon Elastic Compute Cloud (Amazon EC2) instances to a CSV file.

By using the example Python script provided, you can reduce how long it takes to review and categorize your Amazon EC2 instances by specific tags. For example, you could use the script to quickly identify and categorize a list of instances that your security team has flagged for software updates.
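
The repository contains the complete script, but the core approach is simple: describe the instances, then write each instance's tags as CSV rows. The following minimal Boto3 sketch illustrates that idea. The function name, the hardcoded instance ID, and the output file name are illustrative assumptions and do not reflect the actual `search_instances.py` implementation, which also accepts private and public IPv4 addresses.

```
import csv

import boto3


def export_instance_tags(instance_ids, region, output_file="tags.csv"):
    """Look up the given EC2 instances and write their tags to a CSV file."""
    ec2 = boto3.client("ec2", region_name=region)
    reservations = ec2.describe_instances(InstanceIds=instance_ids)["Reservations"]

    with open(output_file, "w", newline="") as handle:
        writer = csv.writer(handle)
        writer.writerow(["InstanceId", "TagKey", "TagValue"])
        for reservation in reservations:
            for instance in reservation["Instances"]:
                for tag in instance.get("Tags", []):
                    writer.writerow([instance["InstanceId"], tag["Key"], tag["Value"]])


if __name__ == "__main__":
    # Example usage with a placeholder instance ID and Region.
    export_instance_tags(["i-0123456789abcdef0"], region="us-east-1")
```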

## Prerequisites and limitations
<a name="export-tags-for-a-list-of-amazon-ec2-instances-to-a-csv-file-prereqs"></a>

**Prerequisites**
+ Python 3 installed and configured
+ AWS Command Line Interface (AWS CLI) installed and configured

**Limitations**

The example Python script provided in this pattern can search Amazon EC2 instances based on the following attributes only:
+ Instance IDs
+ Private IPv4 addresses
+ Public IPv4 addresses

## Tools
<a name="export-tags-for-a-list-of-amazon-ec2-instances-to-a-csv-file-tools"></a>

**AWS services**
+ [AWS Command Line Interface (AWS CLI)](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) is an open-source tool that helps you interact with AWS services through commands in your command-line shell.

**Other tools**
+ [Python](https://www.python.org/) is a general-purpose computer programming language.
+ [virtualenv](https://virtualenv.pypa.io/en/latest/) helps you create isolated Python environments.

**Code repository**

The example Python script for this pattern is available in the GitHub [search-ec2-instances-export-tags](https://github.com/aws-samples/search-ec2-instances-export-tags) repository.

## Epics
<a name="export-tags-for-a-list-of-amazon-ec2-instances-to-a-csv-file-epics"></a>

### Install and configure the prerequisites
<a name="install-and-configure-the-prerequisites"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clone the GitHub repository. | If you receive errors when running AWS CLI commands, [make sure that you’re using the most recent AWS CLI version](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-troubleshooting.html). Clone the GitHub [search-ec2-instances-export-tags](https://github.com/aws-samples/search-ec2-instances-export-tags) repository by running the following Git command in a terminal window:<pre>git clone https://github.com/aws-samples/search-ec2-instances-export-tags.git</pre> | DevOps engineer | 
| Install and activate virtualenv. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/export-tags-for-a-list-of-amazon-ec2-instances-to-a-csv-file.html)For more information, see the [virtualenv documentation](https://virtualenv.pypa.io/en/latest/how-to/install.html). | DevOps engineer | 
| Install dependencies. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/export-tags-for-a-list-of-amazon-ec2-instances-to-a-csv-file.html) | DevOps engineer | 
| Configure an AWS named profile. | If you haven’t already, configure an AWS named profile that includes the required credentials to run the script. To create a named profile, run the [aws configure](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html#cli-configure-files-methods) command.For more information, see [Using named profiles](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html#cli-configure-files-using-profiles) in the AWS CLI documentation. | DevOps engineer | 

### Configure and run the Python script
<a name="configure-and-run-the-python-script"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the input file. | Create an input file that contains a list of the Amazon EC2 instances that you want the script to search and export tags for. You can list instance IDs, private IPv4 addresses, or public IPv4 addresses. Make sure that each Amazon EC2 instance is listed on its own line in the input file. **Input file example**<pre>i-0547c351bdfe85b9f<br />54.157.194.156<br />172.31.85.33<br />54.165.198.144<br />i-0b6223b5914111a4b<br />172.31.85.44<br />54.165.198.145<br />172.31.80.219<br />172.31.94.199</pre> | DevOps engineer | 
| Run the Python script. | Run the script by running the following command in the terminal:<pre>python search_instances.py -i INPUTFILE -o OUTPUTFILE -r REGION [-p PROFILE]</pre>Replace `INPUTFILE` with the name of your input file. Replace `OUTPUTFILE` with the name you want to give the CSV output file. Replace `REGION` with the AWS Region that your Amazon EC2 resources are in. If you’re using an AWS named profile, replace `PROFILE` with the named profile that you’re using. To get a list of supported parameters and their description, run the following command:<pre>python search_instances.py -h</pre>For more information and to see an output file example, see the `README.md` file in the GitHub [search-ec2-instances-export-tags](https://github.com/aws-samples/search-ec2-instances-export-tags) repository. | DevOps engineer | 

## Related resources
<a name="export-tags-for-a-list-of-amazon-ec2-instances-to-a-csv-file-resources"></a>
+ [Configuring the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html) (AWS CLI documentation)

# Export AWS Backup reports from across an organization in AWS Organizations as a CSV file
<a name="export-aws-backup-reports-from-across-an-organization-in-aws-organizations-as-a-csv-file"></a>

*Aromal Raj Jayarajan and Purushotham G K, Amazon Web Services*

## Summary
<a name="export-aws-backup-reports-from-across-an-organization-in-aws-organizations-as-a-csv-file-summary"></a>

This pattern shows how to export AWS Backup job reports from across an organization in AWS Organizations as a CSV file. The solution uses AWS Lambda and Amazon EventBridge to categorize AWS Backup job reports based on their status, which can help when configuring status-based automations.

AWS Backup helps organizations centrally manage and automate data protection across AWS services, in the cloud, and on premises. However, for AWS Backup jobs configured within AWS Organizations, consolidated reporting is available only in the AWS Management Console of each organization’s management account. Bringing this reporting outside of the management account can reduce the effort required for auditing and increase the scope for automations, notifications, and alerting.

## Prerequisites and limitations
<a name="export-aws-backup-reports-from-across-an-organization-in-aws-organizations-as-a-csv-file-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ An active [organization](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_tutorials_basic.html) in AWS Organizations that includes at least a management account and a member account
+ AWS Backup configured at the organization level in AWS Organizations (for more information, see [Automate centralized backup at scale across AWS services using AWS Backup](https://aws.amazon.com/blogs/storage/automate-centralized-backup-at-scale-across-aws-services-using-aws-backup/) on the AWS Blog)
+ [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git), installed and configured on your local machine

**Limitations**

The solution provided in this pattern identifies AWS resources that are configured for AWS Backup jobs only. The report can’t identify AWS resources that aren’t configured for backup through AWS Backup.

## Architecture
<a name="export-aws-backup-reports-from-across-an-organization-in-aws-organizations-as-a-csv-file-architecture"></a>

**Target technology stack**
+ AWS Backup
+ AWS CloudFormation
+ Amazon EventBridge
+ AWS Lambda
+ AWS Security Token Service (AWS STS)
+ Amazon Simple Storage Service (Amazon S3)
+ AWS Identity and Access Management (IAM)

**Target architecture**

The following diagram shows an example workflow for exporting AWS Backup job reports from across an organization in AWS Organizations as a CSV file.

![\[Using EventBridge, Lambda, AWS STS, and IAM to export AWS Backup job reports from across an organization in CSV format.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/74955aad-cc6d-488b-aa34-ae43f50fec60/images/5c39c79f-e731-4ad0-b404-51ebe0976420.png)


The diagram shows the following workflow:

1. A scheduled EventBridge event rule invokes a Lambda function in the member (reporting) AWS account.

1. The Lambda function then uses AWS STS to assume an IAM role that has the permissions required to connect to the management account.

1. The Lambda function then does the following (a minimal code sketch follows this list):
   + Requests the consolidated AWS Backup jobs report from the AWS Backup service
   + Categorizes the results based on AWS Backup job status
   + Converts the response to a CSV file
   + Uploads the results to an Amazon S3 bucket in the reporting account within folders that are labeled based on their creation date
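
The following Python sketch outlines what a Lambda function performing steps 2 and 3 might look like. It is a minimal illustration, not the code from the repository: the role ARN, bucket name, and report columns are placeholder assumptions, and pagination of the `ListBackupJobs` results is omitted for brevity.

```
import csv
import io
from datetime import datetime

import boto3

MANAGEMENT_ROLE_ARN = "arn:aws:iam::<management-account-id>:role/backup-report-role"  # placeholder
REPORT_BUCKET = "backup-reports-bucket"  # placeholder


def lambda_handler(event, context):
    """Assume the management-account role, pull backup jobs, and upload a CSV report."""
    # 1. Assume the cross-account role in the management account.
    creds = boto3.client("sts").assume_role(
        RoleArn=MANAGEMENT_ROLE_ARN, RoleSessionName="backup-reporting"
    )["Credentials"]

    backup = boto3.client(
        "backup",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )

    # 2. Request the backup jobs and collect the fields used to categorize them by status.
    jobs = backup.list_backup_jobs()["BackupJobs"]
    rows = [[job["BackupJobId"], job["ResourceArn"], job["State"]] for job in jobs]

    # 3. Convert the response to CSV.
    buffer = io.StringIO()
    writer = csv.writer(buffer)
    writer.writerow(["BackupJobId", "ResourceArn", "State"])
    writer.writerows(rows)

    # 4. Upload the report to a date-based prefix in the reporting bucket.
    key = f"{datetime.utcnow():%Y-%m-%d}/backup-report.csv"
    boto3.client("s3").put_object(Bucket=REPORT_BUCKET, Key=key, Body=buffer.getvalue())
    return {"jobs": len(rows), "s3_key": key}
```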

## Tools
<a name="export-aws-backup-reports-from-across-an-organization-in-aws-organizations-as-a-csv-file-tools"></a>

**Tools**
+ [AWS Backup](https://docs.aws.amazon.com/aws-backup/latest/devguide/whatisbackup.html) is a fully managed service that helps you centralize and automate data protection across AWS services, in the cloud, and on premises.
+ [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) helps you set up AWS resources, provision them quickly and consistently, and manage them throughout their lifecycle across AWS accounts and Regions.
+ [Amazon EventBridge](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-what-is.html) is a serverless event bus service that helps you connect your applications with real-time data from a variety of sources, such as AWS Lambda functions, HTTP invocation endpoints using API destinations, or event buses in other AWS accounts.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a compute service that helps you run code without needing to provision or manage servers. It runs your code only when needed and scales automatically, so you pay only for the compute time that you use.
+ [Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) is a cloud-based object storage service that helps you store, protect, and retrieve any amount of data.

**Code**

The code for this pattern is available in the GitHub [aws-backup-report-generator](https://github.com/aws-samples/aws-backup-report-generator) repository.

## Best practices
<a name="export-aws-backup-reports-from-across-an-organization-in-aws-organizations-as-a-csv-file-best-practices"></a>
+ [Security best practices for Amazon S3](https://docs.aws.amazon.com/AmazonS3/latest/userguide/security-best-practices.html) (*Amazon S3 User Guide*)
+ [Best practices for working with AWS Lambda functions](https://docs.aws.amazon.com/lambda/latest/dg/best-practices.html) (*AWS Lambda Developer Guide*)
+ [Best practices for the management account](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_best-practices_mgmt-acct.html) (*AWS Organizations User Guide*)

## Epics
<a name="export-aws-backup-reports-from-across-an-organization-in-aws-organizations-as-a-csv-file-epics"></a>

### Deploy the solution components
<a name="deploy-the-solution-components"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clone the GitHub repository. | Clone the GitHub [aws-backup-report-generator](https://github.com/aws-samples/aws-backup-report-generator) repository by running the following command in a terminal window:<pre>git clone https://github.com/aws-samples/aws-backup-report-generator.git</pre>For more information, see [Cloning a repository](https://docs.github.com/en/repositories/creating-and-managing-repositories/cloning-a-repository) in the GitHub Docs. | AWS DevOps, DevOps engineer | 
| Deploy the solution components in the member (reporting) AWS account. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/export-aws-backup-reports-from-across-an-organization-in-aws-organizations-as-a-csv-file.html) | DevOps engineer, AWS DevOps | 

### Test the solution
<a name="test-the-solution"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Make sure that the EventBridge rule runs prior to testing. | Make sure that the EventBridge rule runs by waiting at least 24 hours, or by increasing the report frequency in the CloudFormation template’s **template-reporting.yml** file. **To increase the report frequency**[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/export-aws-backup-reports-from-across-an-organization-in-aws-organizations-as-a-csv-file.html) | AWS DevOps, DevOps engineer | 
| Check the Amazon S3 bucket for the generated report. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/export-aws-backup-reports-from-across-an-organization-in-aws-organizations-as-a-csv-file.html) | AWS DevOps, DevOps engineer | 

### Clean up your resources
<a name="clean-up-your-resources"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Delete the solution components from the member (reporting) account. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/export-aws-backup-reports-from-across-an-organization-in-aws-organizations-as-a-csv-file.html) | AWS DevOps, DevOps engineer | 
| Delete the solution components from the management account. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/export-aws-backup-reports-from-across-an-organization-in-aws-organizations-as-a-csv-file.html) | AWS DevOps, DevOps engineer | 

## Related resources
<a name="export-aws-backup-reports-from-across-an-organization-in-aws-organizations-as-a-csv-file-resources"></a>
+ [Tutorial: Using AWS Lambda with scheduled events ](https://docs.aws.amazon.com/lambda/latest/dg/services-cloudwatchevents-tutorial.html)(AWS Lambda documentation)
+ [Creating scheduled events to run AWS Lambda functions](https://docs.aws.amazon.com/sdk-for-javascript/v3/developer-guide/scheduled-events-invoking-lambda-example.html) (AWS SDK for JavaScript documentation)
+ [IAM tutorial: Delegate access across AWS accounts using IAM roles](https://docs.aws.amazon.com/IAM/latest/UserGuide/tutorial_cross-account-with-roles.html) (IAM documentation)
+ [AWS Organizations terminology and concepts](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_getting-started_concepts.html) (AWS Organizations documentation)
+ [Creating report plans using the AWS Backup console](https://docs.aws.amazon.com/aws-backup/latest/devguide/create-report-plan-console.html) (AWS Backup documentation)
+ [Create an audit report](https://docs.aws.amazon.com/aws-backup/latest/devguide/create-audit-report.html) (AWS Backup documentation)
+ [Creating on-demand reports](https://docs.aws.amazon.com/aws-backup/latest/devguide/create-on-demand-reports.html) (AWS Backup documentation)
+ [What is AWS Backup?](https://docs.aws.amazon.com/aws-backup/latest/devguide/whatisbackup.html) (AWS Backup documentation)
+ [Automate centralized backup at scale across AWS services using AWS Backup](https://aws.amazon.com/blogs/storage/automate-centralized-backup-at-scale-across-aws-services-using-aws-backup/) (AWS blog post)

# Generate an AWS CloudFormation template containing AWS Config managed rules using Troposphere
<a name="generate-an-aws-cloudformation-template-containing-aws-config-managed-rules-using-troposphere"></a>

*Lucas Nation and Freddie Wilson, Amazon Web Services*

## Summary
<a name="generate-an-aws-cloudformation-template-containing-aws-config-managed-rules-using-troposphere-summary"></a>

Many organizations use [AWS Config managed rules](https://docs.aws.amazon.com/config/latest/developerguide/evaluate-config_use-managed-rules.html) to evaluate the compliance of their Amazon Web Services (AWS) resources against common best practices. However, these rules can be time consuming to maintain. This pattern helps you use [Troposphere](https://troposphere.readthedocs.io/en/latest/quick_start.html), a Python library, to generate and manage AWS Config managed rules.

The pattern helps you manage your AWS Config managed rules by using a Python script to convert a Microsoft Excel spreadsheet containing AWS managed rules into an AWS CloudFormation template. Troposphere acts as the infrastructure as code (IaC) layer, which means that you can update the Excel spreadsheet with managed rules instead of editing a JSON or YAML-formatted file. You then use the template to launch an AWS CloudFormation stack that creates and updates the managed rules in your AWS account.

The AWS CloudFormation template defines each AWS Config managed rule by using the Excel spreadsheet and helps you avoid manually creating individual rules in the AWS Management Console. The script defaults each managed rule's parameters to an empty dictionary, and the scope's `ComplianceResourceTypes` defaults to the value from the `THE_RULE_IDENTIFIER.template` file. For more information about the rule identifier, see [Creating AWS Config managed rules with AWS CloudFormation templates](https://docs.aws.amazon.com/config/latest/developerguide/aws-config-managed-rules-cloudformation-templates.html) in the AWS Config documentation.
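
To make the conversion step concrete, here is a hedged Python sketch of how a script might read the spreadsheet with pandas and emit AWS Config rule resources with Troposphere. The column names (`RuleName`, `Status`), the logical ID derivation, and the source identifier mapping are assumptions for illustration; they do not necessarily match the attached script or the sample spreadsheet.

```
import pandas as pd
from troposphere import Template
from troposphere.config import ConfigRule, Scope, Source


def build_template(spreadsheet="excel_config_rules.xlsx"):
    """Create a CloudFormation template with one AWS Config managed rule per implemented row."""
    rules = pd.read_excel(spreadsheet)
    template = Template()

    # Only rows labeled "Implemented" become AWS::Config::ConfigRule resources.
    for _, row in rules[rules["Status"] == "Implemented"].iterrows():
        rule_name = row["RuleName"]  # for example, "access-keys-rotated"
        logical_id = "".join(part.capitalize() for part in rule_name.split("-"))
        template.add_resource(
            ConfigRule(
                logical_id,
                ConfigRuleName=rule_name,
                Source=Source(Owner="AWS", SourceIdentifier=rule_name.upper().replace("-", "_")),
                # The attached script instead pulls ComplianceResourceTypes from the rule's template file.
                Scope=Scope(ComplianceResourceTypes=[]),
            )
        )
    return template


if __name__ == "__main__":
    print(build_template().to_json())
```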

## Prerequisites and limitations
<a name="generate-an-aws-cloudformation-template-containing-aws-config-managed-rules-using-troposphere-prereqs"></a>

**Prerequisites**
+ An active AWS account.
+ Familiarity with using AWS CloudFormation templates to create AWS Config managed rules. For more information about this, see [Creating AWS Config managed rules with AWS CloudFormation templates](https://docs.aws.amazon.com/config/latest/developerguide/aws-config-managed-rules-cloudformation-templates.html) in the AWS Config documentation.  
+ Python 3, installed and configured. For more information about this, see the [Python documentation](https://www.python.org/).
+ An existing integrated development environment (IDE).  
+ Identify your organizational units (OUs) in a column in the sample `excel_config_rules.xlsx` Excel spreadsheet (attached).

## Epics
<a name="generate-an-aws-cloudformation-template-containing-aws-config-managed-rules-using-troposphere-epics"></a>

### Customize and configure the AWS Config managed rules
<a name="customize-and-configure-the-aws-config-managed-rules"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Update the sample Excel spreadsheet. | Download the sample `excel_config_rules.xlsx` Excel spreadsheet (attached) and label as `Implemented` the AWS Config managed rules that you want to use. Rules marked as `Implemented` will be added to the AWS CloudFormation template. | Developer | 
| (Optional) Update the config\_rules\_params.json file with AWS Config rule parameters. | Some AWS Config managed rules require parameters, which you pass to the Python script as a JSON file by using the `--param-file` option. For example, the `access-keys-rotated` managed rule uses the following `maxAccessKeyAge` parameter:<pre>{<br />         "access-keys-rotated": {<br />             "InputParameters": {<br />                 "maxAccessKeyAge": 90<br />             }<br />         }<br />     }</pre>In this sample parameter, `maxAccessKeyAge` is set to 90 days. The script reads the parameter file and adds any `InputParameters` that it finds. | Developer | 
| (Optional) Update the config\_rules\_params.json file with AWS Config ComplianceResourceTypes. | By default, the Python script retrieves the `ComplianceResourceTypes` from AWS defined templates. If you want to override the scope of a specific AWS Config managed rule, then you need to pass it to the Python script as a JSON file using the `--param-file` option. For example, the following sample code shows how the `ComplianceResourceTypes` for `ec2-volume-inuse-check` is set to the `["AWS::EC2::Volume"]` list:<pre>{<br />         "ec2-volume-inuse-check": {<br />             "Scope": {<br />                 "ComplianceResourceTypes": [<br />                     "AWS::EC2::Volume"<br />                 ]<br />             }<br />         }<br />     }</pre> | Developer | 

### Run the Python script
<a name="run-the-python-script"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Install the pip packages from the requirements.txt file.  | Download the `requirements.txt` file (attached) and run the following command in your IDE to install the Python packages:`pip3 install -r requirements.txt` | Developer | 
| Run the Python script.  | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/generate-an-aws-cloudformation-template-containing-aws-config-managed-rules-using-troposphere.html)You can also add the following optional parameters:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/generate-an-aws-cloudformation-template-containing-aws-config-managed-rules-using-troposphere.html) | Developer | 

### Deploy the AWS Config managed rules
<a name="deploy-the-aws-config-managed-rules"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Launch the AWS CloudFormation stack. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/generate-an-aws-cloudformation-template-containing-aws-config-managed-rules-using-troposphere.html) | Developer | 

## Attachments
<a name="attachments-07c1cfff-fc9e-4a1f-bd36-48f025808bd8"></a>

To access additional content that is associated with this document, unzip the following file: [attachment.zip](samples/p-attach/07c1cfff-fc9e-4a1f-bd36-48f025808bd8/attachments/attachment.zip)

# Give SageMaker notebook instances temporary access to a CodeCommit repository in another AWS account
<a name="give-sagemaker-notebook-instances-temporary-access-to-a-codecommit-repository-in-another-aws-account"></a>

*Helge Aufderheide, Amazon Web Services*

## Summary
<a name="give-sagemaker-notebook-instances-temporary-access-to-a-codecommit-repository-in-another-aws-account-summary"></a>

This pattern shows how to grant Amazon SageMaker notebook instances and users temporary access to an AWS CodeCommit repository that’s in another AWS account. This pattern also shows how you can grant granular permissions for specific actions each entity can perform on each repository.

Organizations often store CodeCommit repositories in a different AWS account than the account that hosts their development environment. This multi-account setup helps control access to the repositories and reduces the risk of their accidental deletion. To grant these cross-account permissions, it’s a best practice to use AWS Identity and Access Management (IAM) roles. Then, predefined IAM identities in each AWS account can temporarily assume the roles to create a controlled chain of trust across the accounts.

**Note**  
You can apply a similar procedure to grant other IAM identities cross-account access to a CodeCommit repository. For more information, see [Configure cross-account access to an AWS CodeCommit repository using roles](https://docs.aws.amazon.com/codecommit/latest/userguide/cross-account.html) in the *AWS CodeCommit User Guide*.

## Prerequisites and limitations
<a name="give-sagemaker-notebook-instances-temporary-access-to-a-codecommit-repository-in-another-aws-account-prereqs"></a>

**Prerequisites**
+ An active AWS account with a CodeCommit repository (*account A*)
+ A second active AWS account with a SageMaker notebook instance (*account B*)
+ An AWS user with sufficient permissions to create and modify IAM roles in account A
+ A second AWS user with sufficient permissions to create and modify IAM roles in account B

## Architecture
<a name="give-sagemaker-notebook-instances-temporary-access-to-a-codecommit-repository-in-another-aws-account-architecture"></a>

The following diagram shows an example workflow for granting a SageMaker notebook instance and users in one AWS account cross-account access to a CodeCommit repository:

![\[Workflow for cross-account access to CodeCommit\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/54d0fdb3-6d25-4433-9f67-c87846633d61/images/97a799af-ce88-4495-a61c-d0cd22493ce2.png)


The diagram shows the following workflow:

1. The AWS user role and SageMaker notebook instance role in account B assume a [named profile](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html#cli-configure-files-using-profiles).

1. The named profile’s permissions policy specifies a CodeCommit access role in account A that the profile then assumes.

1. The CodeCommit access role’s trust policy in account A allows the named profile in account B to assume the CodeCommit access role, as illustrated in the sketch after this list.

1. The CodeCommit repository’s IAM permissions policy in account A allows the CodeCommit access role to access the CodeCommit repository.
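
As a rough illustration of steps 3 and 4, the following Boto3 sketch creates a CodeCommit access role in account A whose trust policy allows a specific role in account B to assume it. The role name, the placeholder account B role ARN, and the use of the `AWSCodeCommitPowerUser` managed policy are assumptions for a quick test; for production, write a least-privilege policy as described in the Best practices and Additional information sections.

```
import json

import boto3

ACCOUNT_B_ROLE_ARN = "arn:aws:iam::<account-b-id>:role/<sagemaker-or-user-role>"  # placeholder

iam = boto3.client("iam")  # run with credentials for account A

# Trust policy: only the named role in account B may assume this access role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": ACCOUNT_B_ROLE_ARN},
        "Action": "sts:AssumeRole",
    }],
}

role = iam.create_role(
    RoleName="codecommit-cross-account-access",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
    Description="Temporary cross-account access to CodeCommit repositories",
)

# For a quick test only; replace with a least-privilege policy scoped to specific repositories.
iam.attach_role_policy(
    RoleName="codecommit-cross-account-access",
    PolicyArn="arn:aws:iam::aws:policy/AWSCodeCommitPowerUser",
)
print(role["Role"]["Arn"])
```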

**Technology stack**
+ CodeCommit
+ Git
+ IAM
+ pip
+ SageMaker

## Tools
<a name="give-sagemaker-notebook-instances-temporary-access-to-a-codecommit-repository-in-another-aws-account-tools"></a>
+ [AWS CodeCommit](https://docs.aws.amazon.com/codecommit/latest/userguide/welcome.html) is a version control service that helps you privately store and manage Git repositories, without needing to manage your own source control system.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.
+ [Git](https://git-scm.com/) is a distributed version-control system for tracking changes in source code during software development.
+ [git-remote-codecommit](https://docs.aws.amazon.com/codecommit/latest/userguide/setting-up-git-remote-codecommit.html) is a utility that helps you push and pull code from CodeCommit repositories by extending Git.
+ [pip](https://pypi.org/project/pip/) is the package installer for Python. You can use pip to install packages from the Python Package Index and other indexes.

## Best practices
<a name="give-sagemaker-notebook-instances-temporary-access-to-a-codecommit-repository-in-another-aws-account-best-practices"></a>

When you set permissions with IAM policies, make sure that you grant only the permissions required to perform a task. For more information, see [Apply least-privilege permissions](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#grant-least-privilege) in the IAM documentation.

When implementing this pattern, make sure that you do the following:
+ Confirm that IAM principals have only the permissions required to perform specific, needed actions within each repository. For example, it’s recommended to allow approved IAM principals to push and merge changes to specific repository branches, but only to request merges to protected branches. 
+ Confirm that IAM principals are assigned different IAM roles based on their respective roles and responsibilities for each project. For example, a developer has different access permissions than a release manager or an AWS administrator. 

## Epics
<a name="give-sagemaker-notebook-instances-temporary-access-to-a-codecommit-repository-in-another-aws-account-epics"></a>

### Configure the IAM roles
<a name="configure-the-iam-roles"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Configure the CodeCommit access role and permissions policy. | To automate the manual setup process documented in this epic, you can use an [AWS CloudFormation template](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-guide.html). In the account that contains the CodeCommit repository (*account A*), do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/give-sagemaker-notebook-instances-temporary-access-to-a-codecommit-repository-in-another-aws-account.html) Before moving this setup into your production environment, it’s a best practice to write your own IAM policy that applies [least-privilege permissions](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#grant-least-privilege). For more information, see the **Additional information** section of this pattern. | General AWS, AWS DevOps | 
| Grant the SageMaker notebook instance's role in account B permissions to assume the CodeCommit access role in account A. | In the account that contains the SageMaker notebook instance’s IAM role (*account B*), do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/give-sagemaker-notebook-instances-temporary-access-to-a-codecommit-repository-in-another-aws-account.html) To view your repository’s Amazon Resource Name (ARN), see [View CodeCommit repository details](https://docs.aws.amazon.com/codecommit/latest/userguide/how-to-view-repository-details.html) in the *AWS CodeCommit User Guide*. | General AWS, AWS DevOps | 

### Set up your SageMaker notebook instance in account B
<a name="set-up-your-sagemaker-notebook-instance-in-account-b"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Set up a user profile on the SageMaker notebook instance to assume the role in account A.  | [Make sure that you have the latest version of the AWS Command Line Interface (AWS CLI) installed](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html). In the account that contains the SageMaker notebook instance (*account B*), do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/give-sagemaker-notebook-instances-temporary-access-to-a-codecommit-repository-in-another-aws-account.html)<pre>------.aws/config--------------<br />[profile remoterepouser]<br />role_arn = arn:aws:iam::<ID of Account A>:role/<rolename><br />role_session_name = remoteaccesssession<br />region = eu-west-1<br />credential_source  = Ec2InstanceMetadata<br />----------------------------------</pre> | General AWS, AWS DevOps | 
| Install the git-remote-codecommit utility. | Follow the instructions in [Step 2: Install git-remote-codecommit](https://docs.aws.amazon.com/codecommit/latest/userguide/setting-up-git-remote-codecommit.html#setting-up-git-remote-codecommit-install) in the *AWS CodeCommit User Guide*. | Data scientist | 

### Access the repository
<a name="access-the-repository"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Access the CodeCommit repository by using Git commands or SageMaker. | **To use Git** IAM principals that assume the SageMaker notebook instance’s role in account B can now run Git commands to access the CodeCommit repository in account A. For example, users can run commands such as `git clone`, `git pull`, and `git push`. For instructions, see [Connect to an AWS CodeCommit repository](https://docs.aws.amazon.com/codecommit/latest/userguide/how-to-connect.html) in the *AWS CodeCommit User Guide*. For information about how to use Git with CodeCommit, see [Getting started with AWS CodeCommit](https://docs.aws.amazon.com/codecommit/latest/userguide/getting-started-cc.html) in the *AWS CodeCommit User Guide*. **To use SageMaker** To use Git from the SageMaker console, you must allow Git to retrieve credentials from your CodeCommit repository. For instructions, see [Associate a CodeCommit repository in a different AWS account with a notebook instance](https://docs.aws.amazon.com/sagemaker/latest/dg/nbi-git-cross.html) in the SageMaker documentation. | Git, bash console | 

## Related resources
<a name="give-sagemaker-notebook-instances-temporary-access-to-a-codecommit-repository-in-another-aws-account-resources"></a>
+ [Configure cross-account access to an AWS CodeCommit repository using roles](https://docs.aws.amazon.com/codecommit/latest/userguide/cross-account.html) (AWS CodeCommit documentation)
+ [IAM tutorial: Delegate access across AWS accounts using IAM roles](https://docs.aws.amazon.com/IAM/latest/UserGuide/tutorial_cross-account-with-roles.html) (IAM documentation)

## Additional information
<a name="give-sagemaker-notebook-instances-temporary-access-to-a-codecommit-repository-in-another-aws-account-additional"></a>

**Restricting CodeCommit permissions to specific actions**

To restrict the actions that an IAM principal can take in the CodeCommit repository, modify the actions that are allowed in the CodeCommit access policy.

For more information about CodeCommit API operations, see [CodeCommit permissions reference](https://docs.aws.amazon.com/codecommit/latest/userguide/auth-and-access-control-permissions-reference.html) in the *AWS CodeCommit User Guide*.

**Note**  
You can also edit the [AWSCodeCommitPowerUser](https://docs.aws.amazon.com/codecommit/latest/userguide/security-iam-awsmanpol.html#managed-policies-poweruser) AWS managed policy to fit your use case.

**Restricting CodeCommit permissions to specific repositories**

To create a multitenant environment where multiple code repositories are each accessible only to specific users, do the following:

1. Create multiple CodeCommit access roles in account A. Then, configure each access role’s trust policy to allow specific users in account B to assume the role.

1. Restrict which code repositories each role can access by adding a **"Resource"** element to each CodeCommit access role’s policy.

**Example "Resource" condition that restricts an IAM principal’s access to a specific CodeCommit repository**

```
"Resource" : [<REPOSITORY_ARN>,<REPOSITORY_ARN> ]
```

**Note**  
To help identify and differentiate multiple code repositories in the same AWS account, you can assign different prefixes to the repositories’ names. For example, you can name code repositories with prefixes that align to different developer groups, such as **myproject-subproject1-repo1** and **myproject-subproject2-repo1**. Then, you can create an IAM role for each developer group based on their assigned prefixes. For example, you could create a role named **myproject-subproject1-repoaccess** and grant it access to all of the code repositories that include the prefix **myproject-subproject1**.

**Example "Resource" condition that refers to a code repository ARN that includes a specific prefix**

```
"Resource" : arn:aws:codecommit:<region>:<account-id>:myproject-subproject1-*
```
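
For example, you might create the prefixed repositories with the AWS CLI so that the names line up with the role naming convention described in the preceding note. The repository names are placeholders.

```
# Create repositories whose names carry the subproject prefix that the access roles are scoped to
aws codecommit create-repository --repository-name myproject-subproject1-repo1
aws codecommit create-repository --repository-name myproject-subproject2-repo1
```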

# Implement a GitHub Flow branching strategy for multi-account DevOps environments
<a name="implement-a-github-flow-branching-strategy-for-multi-account-devops-environments"></a>

*Mike Stephens and Abhilash Vinod, Amazon Web Services*

## Summary
<a name="implement-a-github-flow-branching-strategy-for-multi-account-devops-environments-summary"></a>

When managing a source code repository, different branching strategies affect the software development and release processes that development teams use. Examples of common branching strategies include Trunk, GitHub Flow, and Gitflow. These strategies use different branches, and the activities performed in each environment are different. Organizations that are implementing DevOps processes would benefit from a visual guide to help them understand the differences between these branching strategies. Using this visual in your organization helps development teams align their work and follow organizational standards. This pattern provides this visual and describes the process of implementing a GitHub Flow branching strategy in your organization.

This pattern is part of a documentation series about choosing and implementing DevOps branching strategies for organizations with multiple AWS accounts. This series is designed to help you apply the correct strategy and best practices from the outset, to streamline your experience in the cloud. GitHub Flow is just one possible branching strategy that your organization can use. This documentation series also covers [Trunk](https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/implement-a-trunk-branching-strategy-for-multi-account-devops-environments.html) and [Gitflow](https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/implement-a-gitflow-branching-strategy-for-multi-account-devops-environments.html) branching models. If you haven't done so already, we recommend that you review [Choosing a Git branching strategy for multi-account DevOps environments](https://docs.aws.amazon.com/prescriptive-guidance/latest/choosing-git-branch-approach/) prior to implementing the guidance in this pattern. Please use due diligence to choose the right branching strategy for your organization.

This guide provides a diagram that shows how an organization might implement the GitHub Flow strategy. We recommend that you review the [AWS Well-Architected DevOps Guidance](https://docs.aws.amazon.com/wellarchitected/latest/devops-guidance/devops-guidance.html) for best practices. This pattern includes recommended tasks, steps, and restrictions for each step in the DevOps process.

## Prerequisites and limitations
<a name="implement-a-github-flow-branching-strategy-for-multi-account-devops-environments-prereqs"></a>

**Prerequisites**
+ Git, [installed](https://git-scm.com/downloads). This is used as a source code repository tool.
+ Draw.io, [installed](https://github.com/jgraph/drawio-desktop/releases). This application is used to view and edit the diagram.

## Architecture
<a name="implement-a-github-flow-branching-strategy-for-multi-account-devops-environments-architecture"></a>

**Target architecture**

The following diagram can be used like a [Punnett square](https://en.wikipedia.org/wiki/Punnett_square) (Wikipedia). You line up the branches on the vertical axis with the AWS environments on the horizontal axis to determine what actions to perform in each scenario. The numbers indicate the sequence of the actions in the workflow. This example takes you from a `feature` branch through deployment in production.

![\[Punnett square of the GitHub Flow activities in each branch and environment.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/780a5bce-3cd2-4092-8537-b7a77c3d6b8d/images/8a2a774a-cd85-466e-838e-a9a1f3b58a63.png)


For more information about the AWS accounts, environments, and branches in a GitHub Flow approach, see [Choosing a Git branching strategy for multi-account DevOps environments](https://docs.aws.amazon.com/prescriptive-guidance/latest/choosing-git-branch-approach).

**Automation and scale**

Continuous integration and continuous delivery (CI/CD) is the process of automating the software release lifecycle. It automates much or all of the manual processes traditionally required to get new code from an initial commit into production. A CI/CD pipeline encompasses the sandbox, development, testing, staging, and production environments. In each environment, the CI/CD pipeline provisions any infrastructure that is needed to deploy or test the code. By using CI/CD, development teams can make changes to code that are then automatically tested and deployed. CI/CD pipelines also provide governance and guardrails for development teams by enforcing consistency, standards, best practices, and minimal acceptance levels for feature acceptance and deployment. For more information, see [Practicing Continuous Integration and Continuous Delivery on AWS](https://docs.aws.amazon.com/whitepapers/latest/practicing-continuous-integration-continuous-delivery/welcome.html).

AWS offers a suite of developer services that are designed to help you build CI/CD pipelines. For example, [AWS CodePipeline](https://docs.aws.amazon.com/codepipeline/latest/userguide/welcome.html) is a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates. [AWS CodeBuild](https://docs.aws.amazon.com/codebuild/latest/userguide/welcome.html) compiles source code, runs tests, and produces ready-to-deploy software packages. For more information, see [Developer Tools on AWS](https://aws.amazon.com/products/developer-tools/).

## Tools
<a name="implement-a-github-flow-branching-strategy-for-multi-account-devops-environments-tools"></a>

**AWS services and tools**

AWS provides a suite of developer services that you can use to implement this pattern:
+ [AWS CodeArtifact](https://docs.aws.amazon.com/codeartifact/latest/ug/welcome.html) is a highly scalable, managed artifact repository service that helps you store and share software packages for application development.
+ [AWS CodeBuild](https://docs.aws.amazon.com/codebuild/latest/userguide/welcome.html) is a fully managed build service that helps you compile source code, run unit tests, and produce artifacts that are ready to deploy.
+ [AWS CodeDeploy](https://docs.aws.amazon.com/codedeploy/latest/userguide/welcome.html) automates deployments to Amazon Elastic Compute Cloud (Amazon EC2) or on-premises instances, AWS Lambda functions, or Amazon Elastic Container Service (Amazon ECS) services.
+ [AWS CodePipeline](https://docs.aws.amazon.com/codepipeline/latest/userguide/welcome.html) helps you quickly model and configure the different stages of a software release and automate the steps required to release software changes continuously.

**Other tools**
+ [Draw.io Desktop](https://github.com/jgraph/drawio-desktop/releases) is an application for making flowcharts and diagrams. The code repository contains templates in .drawio format for Draw.io.
+ [Figma](https://www.figma.com/design-overview/) is an online design tool designed for collaboration. The code repository contains templates in .fig format for Figma.

**Code repository**

The source file for the diagram in this pattern is available in the GitHub [Git Branching Strategy for GitHub Flow](https://github.com/awslabs/git-branching-strategies-for-multiaccount-devops/tree/main/github-flow) repository. It includes files in PNG, draw.io, and Figma formats. You can modify these diagrams to support your organization's processes.

## Best practices
<a name="implement-a-github-flow-branching-strategy-for-multi-account-devops-environments-best-practices"></a>

Follow the best practices and recommendations in [AWS Well-Architected DevOps Guidance](https://docs.aws.amazon.com/wellarchitected/latest/devops-guidance/devops-guidance.html) and [Choosing a Git branching strategy for multi-account DevOps environments](https://docs.aws.amazon.com/prescriptive-guidance/latest/choosing-git-branch-approach/). These help you effectively implement GitHub Flow-based development, foster collaboration, improve code quality, and streamline the development process.

## Epics
<a name="implement-a-github-flow-branching-strategy-for-multi-account-devops-environments-epics"></a>

### Reviewing the GitHub Flow workflows
<a name="reviewing-the-github-flow-workflows"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Review the standard GitHub Flow process. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/implement-a-github-flow-branching-strategy-for-multi-account-devops-environments.html) | DevOps engineer | 
| Review the bugfix GitHub Flow process. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/implement-a-github-flow-branching-strategy-for-multi-account-devops-environments.html) | DevOps engineer | 
| Review the hotfix GitHub Flow process. | GitHub Flow is designed to enable continuous delivery, where code changes are frequently and reliably deployed to higher environments. The key is that the `main` branch is always deployable. `Hotfix` branches, which are akin to `feature` or `bugfix` branches, can follow the same process as either of those branches. However, given their urgency, hotfixes typically have a higher priority. Depending on the team's policies and the immediacy of the situation, certain steps in the process can be expedited. For instance, code reviews for hotfixes might be fast-tracked. Therefore, although the hotfix process parallels the feature or bugfix process, the urgency of a hotfix might warrant adjustments to how strictly the usual process is followed. It's crucial to establish guidelines for managing hotfixes to make sure that they are handled efficiently and securely. | DevOps engineer | 

## Troubleshooting
<a name="implement-a-github-flow-branching-strategy-for-multi-account-devops-environments-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| Branch conflicts | A common issue with the GitHub Flow model is that a hotfix must be made in production while a corresponding change is needed in a `feature`, `bugfix`, or `hotfix` branch that modifies the same resources. We recommend that you frequently merge changes from `main` into lower branches to avoid significant conflicts when you merge to `main` (see the example commands after this table). | 
| Team maturity | GitHub Flow encourages daily deployments to higher environments, embracing true continuous integration and continuous delivery (CI/CD). It is imperative that the team has the engineering maturity to build features and create automation tests for them. The team must perform an exhaustive merge request review before changes are approved. This fosters a robust engineering culture that promotes quality, accountability, and efficiency in the development process. | 
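
One way to keep long-lived branches current is to merge `main` into them regularly. The following is a minimal sketch; the branch name is a placeholder.

```
# Bring the latest main into a feature branch so that conflicts surface early and stay small
git checkout feature/my-change
git fetch origin
git merge origin/main
# Resolve any conflicts locally and rerun the tests, then push the updated branch
git push origin feature/my-change
```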

## Related resources
<a name="implement-a-github-flow-branching-strategy-for-multi-account-devops-environments-resources"></a>

This guide doesn't include training for Git; however, there are many high-quality resources available on the internet if you need this training. We recommend that you start with the [Git documentation](https://git-scm.com/doc) site.

The following resources can help you with your GitHub Flow branching journey in the AWS Cloud.

**AWS DevOps guidance**
+ [AWS DevOps Guidance](https://docs.aws.amazon.com/wellarchitected/latest/devops-guidance/devops-guidance.html)
+ [AWS Deployment Pipeline Reference Architecture](https://pipelines.devops.aws.dev/)
+ [What is DevOps?](https://aws.amazon.com/devops/what-is-devops/)
+ [DevOps resources](https://aws.amazon.com/devops/resources/)

**GitHub Flow guidance**
+ [GitHub Flow Quickstart Tutorial](https://docs.github.com/en/get-started/using-github/github-flow) (GitHub)
+ [Why GitHub Flow?](https://githubflow.github.io/)

**Other resources**
+ [Twelve-factor app methodology](https://12factor.net/) (12factor.net)

# Implement a Gitflow branching strategy for multi-account DevOps environments
<a name="implement-a-gitflow-branching-strategy-for-multi-account-devops-environments"></a>

*Mike Stephens, Stephen DiCato, Abhilash Vinod, and Tim Wondergem, Amazon Web Services*

## Summary
<a name="implement-a-gitflow-branching-strategy-for-multi-account-devops-environments-summary"></a>

When managing a source code repository, different branching strategies affect the software development and release processes that development teams use. Examples of common branching strategies include Trunk, Gitflow, and GitHub Flow. These strategies use different branches, and the activities performed in each environment are different. Organizations that are implementing DevOps processes would benefit from a visual guide to help them understand the differences between these branching strategies. Using this visual in your organization helps development teams align their work and follow organizational standards. This pattern provides this visual and describes the process of implementing a Gitflow branching strategy in your organization.

This pattern is part of a documentation series about choosing and implementing DevOps branching strategies for organizations with multiple AWS accounts. This series is designed to help you apply the correct strategy and best practices from the outset, to streamline your experience in the cloud. Gitflow is just one possible branching strategy that your organization can use. This documentation series also covers [Trunk](https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/implement-a-trunk-branching-strategy-for-multi-account-devops-environments.html) and [GitHub Flow](https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/implement-a-github-flow-branching-strategy-for-multi-account-devops-environments.html) branching models. If you haven't done so already, we recommend that you review [Choosing a Git branching strategy for multi-account DevOps environments](https://docs.aws.amazon.com/prescriptive-guidance/latest/choosing-git-branch-approach/) prior to implementing the guidance in this pattern. Please use due diligence to choose the right branching strategy for your organization.

This guide provides a diagram that shows how an organization might implement the Gitflow strategy. We recommend that you review the [AWS Well-Architected DevOps Guidance](https://docs.aws.amazon.com/wellarchitected/latest/devops-guidance/devops-guidance.html) for best practices. This pattern includes recommended tasks, steps, and restrictions for each step in the DevOps process.

## Prerequisites and limitations
<a name="implement-a-gitflow-branching-strategy-for-multi-account-devops-environments-prereqs"></a>

**Prerequisites**
+ Git, [installed](https://git-scm.com/downloads). This is used as a source code repository tool.
+ Draw.io, [installed](https://github.com/jgraph/drawio-desktop/releases). This application is used to view and edit the diagram.
+ (Optional) Gitflow plugin, [installed](https://github.com/nvie/gitflow).

## Architecture
<a name="implement-a-gitflow-branching-strategy-for-multi-account-devops-environments-architecture"></a>

**Target architecture**

The following diagram can be used like a [Punnett square](https://en.wikipedia.org/wiki/Punnett_square) (Wikipedia). You line up the branches on the vertical axis with the AWS environments on the horizontal axis to determine what actions to perform in each scenario. The numbers indicate the sequence of the actions in the workflow. This example takes you from a feature branch through deployment in production.

![\[Punnett square of the Gitflow activities in each branch and environment.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/1dee2a06-cc54-4797-b9a9-78b6685edd33/images/d8be49bf-dca1-4892-ac4c-11996a7258c2.png)


For more information about the AWS accounts, environments, and branches in a Gitflow approach, see [Choosing a Git branching strategy for multi-account DevOps environments](https://docs.aws.amazon.com/prescriptive-guidance/latest/choosing-git-branch-approach/).

**Automation and scale**

Continuous integration and continuous delivery (CI/CD) is the process of automating the software release lifecycle. It automates much or all of the manual processes traditionally required to get new code from an initial commit into production. A CI/CD pipeline encompasses the sandbox, development, testing, staging, and production environments. In each environment, the CI/CD pipeline provisions any infrastructure that is needed to deploy or test the code. By using CI/CD, development teams can make changes to code that are then automatically tested and deployed. CI/CD pipelines also provide governance and guardrails for development teams by enforcing consistency, standards, best practices, and minimal acceptance levels for feature acceptance and deployment. For more information, see [Practicing Continuous Integration and Continuous Delivery on AWS](https://docs.aws.amazon.com/whitepapers/latest/practicing-continuous-integration-continuous-delivery/welcome.html).

AWS offers a suite of developer services that are designed to help you build CI/CD pipelines. For example, [AWS CodePipeline](https://docs.aws.amazon.com/codepipeline/latest/userguide/welcome.html) is a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates. [AWS CodeBuild](https://docs.aws.amazon.com/codebuild/latest/userguide/welcome.html) compiles source code, runs tests, and produces ready-to-deploy software packages. For more information, see [Developer Tools on AWS](https://aws.amazon.com/products/developer-tools/).

## Tools
<a name="implement-a-gitflow-branching-strategy-for-multi-account-devops-environments-tools"></a>

**AWS services and tools**

AWS provides a suite of developer services that you can use to implement this pattern:
+ [AWS CodeArtifact](https://docs.aws.amazon.com/codeartifact/latest/ug/welcome.html) is a highly scalable, managed artifact repository service that helps you store and share software packages for application development.
+ [AWS CodeBuild](https://docs.aws.amazon.com/codebuild/latest/userguide/welcome.html) is a fully managed build service that helps you compile source code, run unit tests, and produce artifacts that are ready to deploy.
+ [AWS CodeDeploy](https://docs.aws.amazon.com/codedeploy/latest/userguide/welcome.html) automates deployments to Amazon Elastic Compute Cloud (Amazon EC2) or on-premises instances, AWS Lambda functions, or Amazon Elastic Container Service (Amazon ECS) services.
+ [AWS CodePipeline](https://docs.aws.amazon.com/codepipeline/latest/userguide/welcome.html) helps you quickly model and configure the different stages of a software release and automate the steps required to release software changes continuously.

**Other tools**
+ [Draw.io Desktop](https://github.com/jgraph/drawio-desktop/releases) is an application for making flowcharts and diagrams. The code repository contains templates in .drawio format for Draw.io.
+ [Figma](https://www.figma.com/design-overview/) is an online design tool designed for collaboration. The code repository contains templates in .fig format for Figma.
+ (Optional) [Gitflow plugin](https://github.com/nvie/gitflow) is a collection of Git extensions that provide high-level repository operations for the Gitflow branching model.
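
If you use the optional Gitflow plugin, the high-level commands in the following sketch map onto the branching model. The branch and version names are placeholders, and `git flow init -d` accepts the plugin's default branch names.

```
# Initialize the Gitflow branch structure with the plugin's default branch names
git flow init -d

# Start a feature branch and, when the work is done, merge it back into develop
git flow feature start my-feature
git flow feature finish my-feature

# Cut a release branch and finish it, which merges it into the production branch
# and develop, and tags the release
git flow release start 1.0.0
git flow release finish 1.0.0
```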

**Code repository**

The source file for the diagram in this pattern is available in the GitHub [Git Branching Strategy for GitFlow](https://github.com/awslabs/git-branching-strategies-for-multiaccount-devops/tree/main/gitflow) repository. It includes files in PNG, draw.io, and Figma formats. You can modify these diagrams to support your organization's processes.

## Best practices
<a name="implement-a-gitflow-branching-strategy-for-multi-account-devops-environments-best-practices"></a>

Follow the best practices and recommendations in [AWS Well-Architected DevOps Guidance](https://docs.aws.amazon.com/wellarchitected/latest/devops-guidance/devops-guidance.html) and [Choosing a Git branching strategy for multi-account DevOps environments](https://docs.aws.amazon.com/prescriptive-guidance/latest/choosing-git-branch-approach/). These help you effectively implement Gitflow-based development, foster collaboration, improve code quality, and streamline the development process.

## Epics
<a name="implement-a-gitflow-branching-strategy-for-multi-account-devops-environments-epics"></a>

### Reviewing the Gitflow workflows
<a name="reviewing-the-gitflow-workflows"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Review the standard Gitflow process. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/implement-a-gitflow-branching-strategy-for-multi-account-devops-environments.html) | DevOps engineer | 
| Review the hotfix Gitflow process. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/implement-a-gitflow-branching-strategy-for-multi-account-devops-environments.html) | DevOps engineer | 
| Review the bugfix Gitflow process. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/implement-a-gitflow-branching-strategy-for-multi-account-devops-environments.html) | DevOps engineer | 

## Troubleshooting
<a name="implement-a-gitflow-branching-strategy-for-multi-account-devops-environments-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| Branch conflicts | A common issue with the Gitflow model is that a hotfix must be made in production while a corresponding change is needed in a lower environment, where another branch is modifying the same resources. We recommend that you have only a single release branch active at a time. If more than one release branch is active at a time, the changes in the environments might collide, and you might be unable to move a branch forward to production. | 
| Merging | Release branches should be merged back into `main` and `develop` as soon as possible to consolidate work back into the primary branches. | 
| Squash merging | Only use a squash merge when you are merging from a `feature` branch to a `develop` branch. Using squash merges in higher branches causes difficulty when merging changes back down to lower branches. | 
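
The following is a minimal sketch of a squash merge from a `feature` branch into `develop`; the branch name is a placeholder.

```
# Squash-merge a feature branch into develop so that the feature lands as a single commit
git checkout develop
git merge --squash feature/my-feature
git commit -m "Add my-feature"
```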

## Related resources
<a name="implement-a-gitflow-branching-strategy-for-multi-account-devops-environments-resources"></a>

This guide doesn't include training for Git; however, there are many high-quality resources available on the internet if you need this training. We recommend that you start with the [Git documentation](https://git-scm.com/doc) site.

The following resources can help you with your Gitflow branching journey in the AWS Cloud.

**AWS DevOps guidance**
+ [AWS DevOps Guidance](https://docs.aws.amazon.com/wellarchitected/latest/devops-guidance/devops-guidance.html)
+ [AWS Deployment Pipeline Reference Architecture](https://pipelines.devops.aws.dev/)
+ [What is DevOps?](https://aws.amazon.com/devops/what-is-devops/)
+ [DevOps resources](https://aws.amazon.com/devops/resources/)

**Gitflow guidance**
+ [The original Gitflow blog](https://nvie.com/posts/a-successful-git-branching-model/) (Vincent Driessen blog post)
+ [Gitflow workflow](https://www.atlassian.com/git/tutorials/comparing-workflows/gitflow-workflow) (Atlassian)
+ [Gitflow on GitHub: How to use Git Flow workflows with GitHub Based Repos](https://youtu.be/WQuxeEvaCxs) (YouTube video)
+ [Git Flow Init Example](https://www.youtube.com/watch?v=d4cDLBFbekw) (YouTube video)
+ [The Gitflow Release Branch from Start to Finish](https://www.youtube.com/watch?v=rX80eKPdA28) (YouTube video)

**Other resources**

[Twelve-factor app methodology](https://12factor.net/) (12factor.net)

# Implement a Trunk branching strategy for multi-account DevOps environments
<a name="implement-a-trunk-branching-strategy-for-multi-account-devops-environments"></a>

*Mike Stephens and Rayjan Wilson, Amazon Web Services*

## Summary
<a name="implement-a-trunk-branching-strategy-for-multi-account-devops-environments-summary"></a>

When managing a source code repository, different branching strategies affect the software development and release processes that development teams use. Examples of common branching strategies include Trunk, GitHub Flow, and Gitflow. These strategies use different branches, and the activities performed in each environment are different. Organizations that are implementing DevOps processes would benefit from a visual guide to help them understand the differences between these branching strategies. Using this visual in your organization helps development teams align their work and follow organizational standards. This pattern provides this visual and describes the process of implementing a Trunk branching strategy in your organization.

This pattern is part of a documentation series about choosing and implementing DevOps branching strategies for organizations with multiple AWS accounts. This series is designed to help you apply the correct strategy and best practices from the outset, to streamline your experience in the cloud. Trunk is just one possible branching strategy that your organization can use. This documentation series also covers [GitHub Flow](https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/implement-a-github-flow-branching-strategy-for-multi-account-devops-environments.html) and [Gitflow](https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/implement-a-gitflow-branching-strategy-for-multi-account-devops-environments.html) branching models. If you haven't done so already, we recommend that you review [Choosing a Git branching strategy for multi-account DevOps environments](https://docs.aws.amazon.com/prescriptive-guidance/latest/choosing-git-branch-approach/) prior to implementing the guidance in this pattern. Please use due diligence to choose the right branching strategy for your organization.

This guide provides a diagram that shows how an organization might implement the Trunk strategy. We recommend that you review the official [AWS Well-Architected DevOps Guidance](https://docs.aws.amazon.com/wellarchitected/latest/devops-guidance/devops-guidance.html) for best practices. This pattern includes recommended tasks, steps, and restrictions for each step in the DevOps process.

## Prerequisites and limitations
<a name="implement-a-trunk-branching-strategy-for-multi-account-devops-environments-prereqs"></a>

**Prerequisites**
+ Git, [installed](https://git-scm.com/downloads). This is used as a source code repository tool.
+ Draw.io, [installed](https://github.com/jgraph/drawio-desktop/releases). This application is used to view and edit the diagram.

## Architecture
<a name="implement-a-trunk-branching-strategy-for-multi-account-devops-environments-architecture"></a>

**Target architecture**

The following diagram can be used like a [Punnett square](https://en.wikipedia.org/wiki/Punnett_square) (Wikipedia). You line up the branches on the vertical axis with the AWS environments on the horizontal axis to determine what actions to perform in each scenario. The numbers indicate the sequence of the actions in the workflow. This example takes you from a `feature` branch through deployment in production.

![\[Punnett square of the Trunk activities in each branch and environment\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/5df23e4d-84fe-4ab3-a54f-96b6406abc57/images/ad549ef4-90ad-47c1-bd01-f21d6ce5511a.png)


For more information about the AWS accounts, environments, and branches in a Trunk approach, see [Choosing a Git branching strategy for multi-account DevOps environments](https://docs.aws.amazon.com/prescriptive-guidance/latest/choosing-git-branch-approach).

**Automation and scale**

Continuous integration and continuous delivery (CI/CD) is the process of automating the software release lifecycle. It automates much or all of the manual processes traditionally required to get new code from an initial commit into production. A CI/CD pipeline encompasses the sandbox, development, testing, staging, and production environments. In each environment, the CI/CD pipeline provisions any infrastructure that is needed to deploy or test the code. By using CI/CD, development teams can make changes to code that are then automatically tested and deployed. CI/CD pipelines also provide governance and guardrails for development teams by enforcing consistency, standards, best practices, and minimal acceptance levels for feature acceptance and deployment. For more information, see [Practicing Continuous Integration and Continuous Delivery on AWS](https://docs.aws.amazon.com/whitepapers/latest/practicing-continuous-integration-continuous-delivery/welcome.html).

AWS offers a suite of developer services that are designed to help you build CI/CD pipelines. For example, [AWS CodePipeline](https://docs.aws.amazon.com/codepipeline/latest/userguide/welcome.html) is a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates. [AWS CodeBuild](https://docs.aws.amazon.com/codebuild/latest/userguide/welcome.html) compiles source code, runs tests, and produces ready-to-deploy software packages. For more information, see [Developer Tools on AWS](https://aws.amazon.com/products/developer-tools/).

## Tools
<a name="implement-a-trunk-branching-strategy-for-multi-account-devops-environments-tools"></a>

**AWS services and tools**

AWS provides a suite of developer services that you can use to implement this pattern:
+ [AWS CodeArtifact](https://docs.aws.amazon.com/codeartifact/latest/ug/welcome.html) is a highly scalable, managed artifact repository service that helps you store and share software packages for application development.
+ [AWS CodeBuild](https://docs.aws.amazon.com/codebuild/latest/userguide/welcome.html) is a fully managed build service that helps you compile source code, run unit tests, and produce artifacts that are ready to deploy.
+ [AWS CodeDeploy](https://docs.aws.amazon.com/codedeploy/latest/userguide/welcome.html) automates deployments to Amazon Elastic Compute Cloud (Amazon EC2) or on-premises instances, AWS Lambda functions, or Amazon Elastic Container Service (Amazon ECS) services.
+ [AWS CodePipeline](https://docs.aws.amazon.com/codepipeline/latest/userguide/welcome.html) helps you quickly model and configure the different stages of a software release and automate the steps required to release software changes continuously.

**Other tools**
+ [Draw.io Desktop](https://github.com/jgraph/drawio-desktop/releases) is an application for making flowcharts and diagrams. The code repository contains templates in .drawio format for Draw.io.
+ [Figma](https://www.figma.com/design-overview/) is an online design tool designed for collaboration. The code repository contains templates in .fig format for Figma.

**Code repository**

The source file for the diagram in this pattern is available in the GitHub [Git Branching Strategy for Trunk](https://github.com/awslabs/git-branching-strategies-for-multiaccount-devops/tree/main/trunk) repository. It includes files in PNG, draw.io, and Figma formats. You can modify these diagrams to support your organization's processes.

## Best practices
<a name="implement-a-trunk-branching-strategy-for-multi-account-devops-environments-best-practices"></a>

Follow the best practices and recommendations in [AWS Well-Architected DevOps Guidance](https://docs.aws.amazon.com/wellarchitected/latest/devops-guidance/devops-guidance.html) and [Choosing a Git branching strategy for multi-account DevOps environments](https://docs.aws.amazon.com/prescriptive-guidance/latest/choosing-git-branch-approach/). These help you effectively implement Trunk-based development, foster collaboration, improve code quality, and streamline the development process.

## Epics
<a name="implement-a-trunk-branching-strategy-for-multi-account-devops-environments-epics"></a>

### Reviewing the Trunk workflow
<a name="reviewing-the-trunk-workflow"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Review the standard Trunk process. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/implement-a-trunk-branching-strategy-for-multi-account-devops-environments.html) | DevOps engineer | 

## Troubleshooting
<a name="implement-a-trunk-branching-strategy-for-multi-account-devops-environments-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| Branch conflicts | A common issue with the Trunk model is that a hotfix must be made in production while a corresponding change is needed in a `feature` branch that modifies the same resources. We recommend that you frequently merge changes from `main` into lower branches to avoid significant conflicts when you merge to `main`. | 

## Related resources
<a name="implement-a-trunk-branching-strategy-for-multi-account-devops-environments-resources"></a>

This guide doesn't include training for Git; however, there are many high-quality resources available on the internet if you need this training. We recommend that you start with the [Git documentation](https://git-scm.com/doc) site.

The following resources can help you with your Trunk branching journey in the AWS Cloud.

**AWS DevOps guidance**
+ [AWS DevOps Guidance](https://docs.aws.amazon.com/wellarchitected/latest/devops-guidance/devops-guidance.html)
+ [AWS Deployment Pipeline Reference Architecture](https://pipelines.devops.aws.dev/)
+ [What is DevOps?](https://aws.amazon.com/devops/what-is-devops/)
+ [DevOps resources](https://aws.amazon.com/devops/resources/)

**Trunk guidance**
+ [Trunk Based Development](https://trunkbaseddevelopment.com/)

**Other resources**
+ [Twelve-factor app methodology](https://12factor.net/) (12factor.net)

# Implement centralized custom Checkov scanning to enforce policy before deploying AWS infrastructure
<a name="centralized-custom-checkov-scanning"></a>

*Benjamin Morris, Amazon Web Services*

## Summary
<a name="centralized-custom-checkov-scanning-summary"></a>

This pattern provides a GitHub Actions framework for writing custom Checkov policies in one repository that can be reused across a GitHub organization. By following this pattern, an information security team can write, add, and maintain custom policies based on company requirements. The custom policies can be pulled into all pipelines in the GitHub organization automatically. This approach can be used to enforce company standards for resources before the resources are deployed.

## Prerequisites and limitations
<a name="centralized-custom-checkov-scanning-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ A GitHub organization using GitHub Actions
+ AWS infrastructure deployed with either HashiCorp Terraform or AWS CloudFormation

**Limitations**
+ This pattern is written for GitHub Actions. However, it can be adapted to similar continuous integration and continuous delivery (CI/CD) frameworks such as GitLab. No specific paid version of GitHub is required.
+ Some AWS services aren’t available in all AWS Regions. For Region availability, see [Service endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/aws-service-information.html) in the AWS documentation, and choose the link for the service.

## Architecture
<a name="centralized-custom-checkov-scanning-architecture"></a>

This pattern is designed to be deployed as a GitHub repository that contains a GitHub reusable workflow and custom Checkov policies. The reusable workflow can scan both Terraform and CloudFormation infrastructure as code (IaC) repositories.

The following diagram shows the **Reusable GitHub workflows repository** and **Custom Checkov policies repository** as separate icons. However, you can implement them either as separate repositories or as a single repository. The example code uses a single repository, with the workflow files (`.github/workflows`) and the custom policy files (the `custom_policies` folder and the `.checkov.yml` config file) in the same repository.

![\[GitHub Actions uses reusable GitHub workflow and custom Checkov policies to evaluate IaC.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/6c0c941f-14f9-4569-92da-9f81ab3e525c/images/a1539ce5-0ee6-4af1-bd01-cafad0f71708.png)


The diagram shows the following workflow:

1. A user creates a pull request in a GitHub repository.

1. Pipeline workflows start in GitHub Actions, including a reference to a Checkov reusable workflow.

1. The pipeline workflow downloads the referenced Checkov reusable workflow from an external repository and runs that Checkov workflow by using GitHub Actions.

1. The Checkov reusable workflow downloads the custom policies from an external repository.

1. The Checkov reusable workflow evaluates the IaC in the GitHub repository against both built-in and custom Checkov policies. The Checkov reusable workflow passes or fails based on whether security issues are found.

**Automation and scale**

This pattern allows for central management of Checkov configuration, so that policy updates can be applied in one location. However, this pattern does require that each repository use a workflow that contains a reference to the central reusable workflow. You can add this reference manually or use scripts to push the file to the `.github/workflows` folder for each repository.

## Tools
<a name="centralized-custom-checkov-scanning-tools"></a>

**AWS services**
+ [CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) helps you set up AWS resources, provision them quickly and consistently, and manage them throughout their lifecycle across AWS accounts and Regions. Checkov can scan CloudFormation.

**Other tools**
+ [Checkov](https://www.checkov.io/) is a static code analysis tool that checks IaC for security and compliance misconfigurations.
+ [GitHub Actions](https://github.com/features/actions) is integrated into the GitHub platform to help you create, share, and run workflows within your GitHub repositories. You can use GitHub Actions to automate tasks such as building, testing, and deploying your code.
+ [Terraform](https://www.terraform.io/) is an IaC tool from HashiCorp that helps you create and manage cloud and on-premises resources. Checkov can scan Terraform.

**Code repository**

The code for this pattern is available in the GitHub [centralized-custom-checkov-sast](https://github.com/aws-samples/centralized-custom-checkov-sast) repository.

## Best practices
<a name="centralized-custom-checkov-scanning-best-practices"></a>
+ To maintain a consistent security posture, align your company’s security policies with the Checkov policies.
+ In the early phases of implementing Checkov custom policies, you can use the soft-fail option in your Checkov scan to allow IaC with security issues to be merged. As the process matures, switch from the soft-fail option to the hard-fail option.
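
For example, during the adoption phase a pipeline step might run Checkov with the soft-fail option while still loading the organization's custom policies. The following is a sketch; the directory paths are assumptions based on the example repository layout.

```
# Scan Terraform code with built-in and custom policies; --soft-fail reports findings
# but exits with code 0 so that the pipeline job still succeeds
checkov -d ./terraform \
  --external-checks-dir ./custom_policies \
  --soft-fail
```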

## Epics
<a name="centralized-custom-checkov-scanning-epics"></a>

### Create a central Checkov repository for custom policies
<a name="create-a-central-checkov-repository-for-custom-policies"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a central Checkov repository. | Create a repository to store custom Checkov policies that will be used within the organization. For a quick start, you can copy the contents of this pattern’s GitHub [centralized-custom-checkov-sast](https://github.com/aws-samples/centralized-custom-checkov-sast) repository into your central Checkov repository. | DevOps engineer | 
| Create a repository for reusable workflows. | If a repository for reusable workflows already exists, or you plan to include reusable workflow files in the same repository as the custom Checkov policies, you can skip this step. Create a GitHub repository to hold reusable workflows. Other repositories’ pipelines will reference this repository. | DevOps engineer | 

### Create reusable and example Checkov workflows
<a name="create-reusable-and-example-checkov-workflows"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Add a reusable Checkov workflow. | Create a reusable Checkov GitHub Actions workflow (YAML file) in the reusable workflows repository. You can adapt this reusable workflow from the workflow file provided in this pattern. An example of a change that you might want to make is to change the reusable workflow to use the soft-fail option. Setting `soft-fail` to `true` allows the job to complete successfully even if there is a failed Checkov scan. For instructions, see [Hard and soft fail](https://www.checkov.io/2.Basics/Hard%20and%20soft%20fail.html) in the Checkov documentation. | DevOps engineer | 
| Add an example workflow. | Add an example Checkov workflow that references the reusable workflow. This provides a template for how to call the reusable workflow. In the example repository, `checkov-source.yaml` is the reusable workflow and `checkov-scan.yaml` is the example that consumes `checkov-source`. For more details about writing an example Checkov workflow, see [Additional information](#centralized-custom-checkov-scanning-additional). | DevOps engineer | 

### Associate company policies to Checkov custom policies
<a name="associate-company-policies-to-checkov-custom-policies"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Determine policies that can be enforced with Checkov. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/centralized-custom-checkov-scanning.html) For more details about creating Checkov custom policies, see [Custom Policies Overview](https://www.checkov.io/3.Custom%20Policies/Custom%20Policies%20Overview.html) in the Checkov documentation. | Security and Compliance | 
| Add Checkov custom policies. | Convert the identified company policies to custom Checkov policies in the central repository. You can write simple Checkov policies in either Python or YAML. | Security | 
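
To try a new custom policy before committing it to the central repository, you can run Checkov locally against sample IaC and limit the run to that check. The following is a sketch; the paths and the check ID `CKV2_CUSTOM_1` are placeholders.

```
# Run a single custom check against local example code to confirm that the policy
# flags (or passes) the resources that you expect
checkov -d ./examples/terraform \
  --external-checks-dir ./custom_policies \
  --check CKV2_CUSTOM_1
```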

### Implement centralized Checkov custom policies
<a name="implement-centralized-checkov-custom-policies"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Add the Checkov reusable workflow to all repositories. | At this point, you should have an example Checkov workflow that references the reusable workflow. Copy the sample Checkov workflow that references the reusable workflow to each repository that requires it. | DevOps engineer | 
| Create a mechanism to ensure that Checkov runs before merges. | To ensure that the Checkov workflow gets run for every pull request, create a [status check](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/collaborating-on-repositories-with-code-quality-features/about-status-checks) that requires a successful Checkov workflow before pull requests can be merged. GitHub allows you to require that specific workflows run before pull requests can be merged. | DevOps engineer | 
| Create an organization-wide PAT, and share it as a secret. | If your GitHub organization is publicly visible, you can skip this step. This pattern requires that the Checkov workflow be able to download custom policies from the custom policy repository in your GitHub organization. You must provide permissions such that the Checkov workflow can access those repositories. To do this, [create a personal access token](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens#creating-a-fine-grained-personal-access-token) (PAT) with permissions to read organization repositories. Share this PAT with repositories, either as an organization-wide secret (if on a paid plan) or as a secret in each repository (free version). In the sample code, the default name for the secret is `ORG_PAT`. | DevOps engineer | 
| (Optional) Protect the Checkov workflow files from modification. | To protect the Checkov workflow files from unwanted changes, you can use a `CODEOWNERS` file. The `CODEOWNERS` file is typically placed in the root of the repository. For example, to require approvals from your GitHub organization’s `secEng` group when the `checkov-scan.yaml` file is modified, append the following to a repository’s `CODEOWNERS` file:<pre>[Checkov]<br />.github/workflows/checkov-scan.yaml @myOrg/secEng</pre>A `CODEOWNERS` file is specific to the repository it lives in. To protect the Checkov workflow used by the repository, you must add (or update) a `CODEOWNERS` file in each repository. For more information about protecting Checkov workflow files, see [Additional information](#centralized-custom-checkov-scanning-additional). For more information about `CODEOWNERS` files, see the official documentation for your CI/CD provider (such as [GitHub](https://docs.github.com/en/repositories/managing-your-repositorys-settings-and-features/customizing-your-repository/about-code-owners)). | DevOps engineer | 

## Related resources
<a name="centralized-custom-checkov-scanning-resources"></a>
+ [Checkov Custom Policies Overview](https://www.checkov.io/3.Custom%20Policies/Custom%20Policies%20Overview.html)
+ [CloudFormation Configuration Scanning](https://www.checkov.io/7.Scan%20Examples/Cloudformation.html)
+ [GitHub Actions Reusable Workflows](https://docs.github.com/en/actions/using-workflows/reusing-workflows)

## Additional information
<a name="centralized-custom-checkov-scanning-additional"></a>

**Writing Checkov workflow files**

When writing `checkov-scan.yaml`, consider when you want it to run. The top-level `on` key determines when the workflow runs. In the example repository, the workflow runs when a pull request targets the `main` branch (and any time that pull request’s source branch is modified). The workflow can also be run on demand because of the `workflow_dispatch` key.

You can change the workflow trigger conditions based on how often you want the workflow to run. For example, you could change the workflow to run every time code is pushed to any branch by replacing `pull_request` with `push` and removing the `branches` key.

You can modify the example workflow file that you created within an individual repository. For example, you could adjust the target branch’s name from `main` to `production` if a repository is structured around a `production` branch.

**Protecting Checkov workflow files**

Checkov scanning provides useful information about potential security misconfigurations. However, some developers might perceive it as a barrier to their productivity and attempt to remove or disable the scanning workflow.

There are several ways to address this problem, including better messaging about the long-term value of security scanning and clearer documentation about how to deploy secure infrastructure. These are important "soft" approaches to DevSecOps collaboration that can be seen as the solution to this problem’s root cause. However, you can also use technical controls such as a `CODEOWNERS` file as guardrails to help keep developers on the right path.

**Testing the pattern in a sandbox**

To test this pattern in a sandbox environment, follow these steps:

1. Create a new GitHub organization. Create a token with read-only access to all repositories in the organization. Because this token is for a sandbox environment, not a paid environment, you will not be able to store this token in an organization-wide secret.

1. Create a `checkov` repository to hold the Checkov configuration and a `github-workflows` repository to hold the reusable workflow configuration. Populate the repositories with the contents of the example repository.

1. Create an application repository, and copy and paste the `checkov-scan.yaml` workflow to its `.github/workflows` folder. Add a secret to the repository that contains the PAT you created for organization read-only access. The default secret is `ORG_PAT`.

1. Create a pull request that adds some Terraform or CloudFormation code to the application repository. Checkov should scan and return a result.

# Implement AI-powered Kubernetes diagnostics and troubleshooting with K8sGPT and Amazon Bedrock integration
<a name="implement-ai-powered-kubernetes-diagnostics-and-troubleshooting-with-k8sgpt-and-amazon-bedrock-integration"></a>

*Ishwar Chauthaiwale, Muskan ., and Prafful Gupta, Amazon Web Services*

## Summary
<a name="implement-ai-powered-kubernetes-diagnostics-and-troubleshooting-with-k8sgpt-and-amazon-bedrock-integration-summary"></a>

This pattern demonstrates how to implement AI-powered Kubernetes diagnostics and troubleshooting by integrating K8sGPT with the Anthropic Claude v2 model available on Amazon Bedrock. The solution provides natural language analysis and remediation steps for Kubernetes cluster issues through a secure bastion host architecture. By combining the Kubernetes expertise of K8sGPT with the advanced language capabilities of Amazon Bedrock, DevOps teams can quickly identify and resolve cluster problems. With these capabilities, it’s possible to reduce mean time to resolution (MTTR) by up to 50 percent.

This cloud-native pattern uses Amazon Elastic Kubernetes Service (Amazon EKS) for Kubernetes management. The pattern implements security best practices through appropriate AWS Identity and Access Management (IAM) roles and network isolation. This solution is particularly valuable for organizations that want to streamline their Kubernetes operations and enhance their troubleshooting capabilities with AI assistance.

## Prerequisites and limitations
<a name="implement-ai-powered-kubernetes-diagnostics-and-troubleshooting-with-k8sgpt-and-amazon-bedrock-integration-prereqs"></a>

**Prerequisites**
+ An active AWS account with appropriate permissions
+ AWS Command Line Interface (AWS CLI) [installed](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) and [configured](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html)
+ An Amazon EKS cluster
+ Access to Anthropic Claude 2 model on [Amazon Bedrock](https://docs.aws.amazon.com/bedrock/latest/userguide/model-access.html)
+ A bastion host with required security group settings
+ K8sGPT [installed](https://docs.k8sgpt.ai/getting-started/installation/)

**Limitations**
+ K8sGPT analysis is limited by the context window size of the Claude v2 model.
+ Amazon Bedrock API rate limits apply based on your account quotas.
+ Some AWS services aren’t available in all AWS Regions. For Region availability, see [AWS Services by Region](https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/). For specific endpoints, see [Service endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/aws-service-information.html), and choose the link for the service.

**Product versions**
+ Amazon EKS [version 1.31 or later](https://docs.aws.amazon.com/eks/latest/userguide/kubernetes-versions.html)
+ [Claude 2 model](https://docs.aws.amazon.com/bedrock/latest/userguide/models-supported.html) on Amazon Bedrock
+ K8sGPT [v0.4.2 or later](https://github.com/k8sgpt-ai/k8sgpt/releases)

## Architecture
<a name="implement-ai-powered-kubernetes-diagnostics-and-troubleshooting-with-k8sgpt-and-amazon-bedrock-integration-architecture"></a>

The following diagram shows the architecture for AI-powered Kubernetes diagnostics using K8sGPT integrated with Amazon Bedrock in the AWS Cloud.

![\[Workflow for Kubernetes diagnostics using K8sGPT integrated with Amazon Bedrock.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/09bc08f6-e191-4cef-b26b-dcb6225b15cc/images/8789891d-4a90-44b0-a108-387f6d96496b.png)


The architecture shows the following workflow:

1. Developers access the environment through a secure connection to the bastion host. This Amazon EC2 instance serves as the secure entry point and contains the K8sGPT command line interface (CLI) installation and required configurations.

1. The bastion host, configured with specific IAM roles, establishes secure connections to both the Amazon EKS cluster and the Amazon Bedrock endpoints. K8sGPT is installed and configured on the bastion host to perform Kubernetes cluster analysis.

1. Amazon EKS manages the Kubernetes control plane and worker nodes, providing the target environment for K8sGPT analysis. The service runs across multiple Availability Zones within a virtual private cloud (VPC), which helps to provide high availability and resilience. Amazon EKS supplies operational data through the Kubernetes API, enabling comprehensive cluster analysis.

1. K8sGPT sends analysis data to Amazon Bedrock, which provides the Claude v2 foundation model (FM) for natural language processing. The service processes K8sGPT analysis to generate human-readable explanations and offers detailed remediation suggestions based on identified issues. Amazon Bedrock operates as a serverless AI service with high availability and scalability.

**Note**  
Throughout this workflow, IAM controls access between components through roles and policies, managing authentication for the bastion host, Amazon EKS, and Amazon Bedrock interactions. IAM implements the principle of least privilege and enables secure cross-service communication throughout the architecture.

**Automation and scale**

K8sGPT operations can be automated and scaled across multiple Amazon EKS clusters through various AWS services and tools. This solution supports continuous integration and continuous deployment (CI/CD) integration using [Jenkins](https://www.jenkins.io/), [GitHub Actions](https://docs.github.com/en/actions/get-started/understand-github-actions), or [AWS CodeBuild](https://docs.aws.amazon.com/codebuild/latest/userguide/welcome.html) for scheduled analysis. The K8sGPT operator enables continuous in-cluster monitoring with automated issue detection and reporting capabilities. For enterprise-scale deployments, you can use [Amazon EventBridge](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-what-is.html) to schedule scans and trigger automated responses with custom scripts. AWS SDK integration enables programmatic control across large fleets of clusters.

## Tools
<a name="implement-ai-powered-kubernetes-diagnostics-and-troubleshooting-with-k8sgpt-and-amazon-bedrock-integration-tools"></a>

**AWS services**
+ [AWS Command Line Interface (AWS CLI)](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) is an open source tool that helps you interact with AWS services through commands in your command line shell.
+ [Amazon Elastic Kubernetes Service (Amazon EKS)](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html) helps you run Kubernetes on AWS without needing to install or maintain your own Kubernetes control plane or nodes.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.

**Other tools**
+ [K8sGPT](https://k8sgpt.ai/) is an open source AI-powered tool that transforms Kubernetes management. It acts as a virtual site reliability engineering (SRE) expert, automatically scanning, diagnosing, and troubleshooting Kubernetes cluster issues. Administrators can interact with K8sGPT using natural language and get clear, actionable insights about cluster state, pod crashes, and service failures. The tool's built-in analyzers detect a wide range of issues, from misconfigured components to resource constraints, and provide easy-to-understand explanations and solutions.

## Best practices
<a name="implement-ai-powered-kubernetes-diagnostics-and-troubleshooting-with-k8sgpt-and-amazon-bedrock-integration-best-practices"></a>
+ Implement secure access controls by using AWS Systems Manager Session Manager for [bastion host access](https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-a-bastion-host-by-using-session-manager-and-amazon-ec2-instance-connect.html).
+ Make sure that K8sGPT authentication uses dedicated IAM roles with least privilege permissions for Amazon Bedrock and Amazon EKS interactions. For more information, see [Grant least privilege](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html#grant-least-priv) and [Security best practices](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html) in the IAM documentation.
+ Configure [resource tagging](https://docs.aws.amazon.com/whitepapers/latest/tagging-best-practices/what-are-tags.html), enable Amazon CloudWatch [logging for audit trails](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/monitor-cloudtrail-log-files-with-cloudwatch-logs.html), and implement [data anonymization](https://aws.amazon.com/solutions/guidance/data-anonymization-on-aws/) for sensitive information. 
+ Maintain regular backups of K8sGPT configurations, and schedule automated scans during off-peak hours to minimize operational impact.

## Epics
<a name="implement-ai-powered-kubernetes-diagnostics-and-troubleshooting-with-k8sgpt-and-amazon-bedrock-integration-epics"></a>

### Add Amazon Bedrock to the AI backend provider list
<a name="add-br-to-ai-backend-provider-list"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Set Amazon Bedrock as the AI backend provider for K8sGPT. | To set Amazon Bedrock as the AI [backend provider](https://docs.k8sgpt.ai/reference/providers/backend/) for K8sGPT, use the following command:<pre>k8sgpt auth add -b amazonbedrock \<br /> -r us-west-2 \<br /> -m anthropic.claude-v2 \<br /> -n endpoint-name <br /></pre>The example command uses `us-west-2` for the AWS Region. However, you can select another Region, provided that both the Amazon EKS cluster and the corresponding Amazon Bedrock model are available and enabled in that selected Region. To check that `amazonbedrock` is added to the AI backend provider list and is in the `Active` state, run the following command:<pre>k8sgpt auth list</pre>Following is an example of the expected output of this command:<pre>Default: <br />> openai<br />Active: <br />> amazonbedrock<br />Unused: <br />> openai<br />> localai<br />> ollama<br />> azureopenai<br />> cohere<br />> amazonsagemaker<br />> google<br />> noopai<br />> huggingface<br />> googlevertexai<br />> oci<br />> customrest<br />> ibmwatsonxai</pre> | AWS DevOps | 

### Scan resources using a filter
<a name="scan-resources-using-a-filter"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| View a list of available filters. | To see the list of all available filters, use the following command:<pre>k8sgpt filters list</pre>Following is an example of the expected output of this command:<pre>Active: <br />> Deployment<br />> ReplicaSet<br />> PersistentVolumeClaim<br />> Service<br />> CronJob<br />> Node<br />> MutatingWebhookConfiguration<br />> Pod<br />> Ingress<br />> StatefulSet<br />> ValidatingWebhookConfiguration</pre> | AWS DevOps | 
| Scan a pod in a specific namespace by using a filter. | This command is useful for targeted debugging of specific pod issues within a Kubernetes cluster, using Amazon Bedrock AI capabilities to analyze and explain the problems it finds. To scan a pod in a specific namespace by using a filter, use the following command:<pre>k8sgpt analyze --backend amazonbedrock --explain --filter Pod -n default</pre>Following is an example of the expected output of this command:<pre>100% |████████████████████████████████████████████████████████| (1/1, 645 it/s)        <br />AI Provider: amazonbedrock<br /><br />0: Pod default/crashme()<br />- Error: the last termination reason is Error container=crashme pod=crashme<br />Error: The pod named crashme terminated because the container named crashme crashed.<br />Solution: Check logs for crashme pod to identify reason for crash. Restart pod or redeploy application to resolve crash.</pre> | AWS DevOps | 
| Scan a deployment in a specific namespace by using a filter. | This command is useful for identifying and troubleshooting deployment-specific issues, particularly when the actual state doesn't match the desired state. To scan a deployment in a specific namespace by using a filter, use the following command:<pre>k8sgpt analyze --backend amazonbedrock --explain --filter Deployment -n default</pre>Following is an example of the expected output of this command:<pre>100% |██████████████████████████████████████████████████████████| (1/1, 10 it/min)        <br />AI Provider: amazonbedrock<br /><br />0: Deployment default/nginx()<br />- Error: Deployment default/nginx has 1 replicas but 2 are available<br /> Error: The Deployment named nginx in the default namespace has 1 replica specified but 2 pod replicas are running.<br />Solution: Check if any other controllers like ReplicaSet or StatefulSet have created extra pods. Delete extra pods or adjust replica count to match available pods.</pre> | AWS DevOps | 
| Scan a node in a specific namespace by using a filter. | To scan a node in a specific namespace by using a filter, use the following command:<pre>k8sgpt analyze --backend amazonbedrock --explain --filter Node -n default </pre>Following is an example of the expected output of this command:<pre>AI Provider: amazonbedrock<br /><br />No problems detected</pre> | AWS DevOps | 

### Analyze detailed outputs
<a name="analyze-detailed-outputs"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Get detailed outputs. | To get detailed outputs in JSON format, use the following command:<pre>k8sgpt analyze --backend amazonbedrock --explain --output json</pre>Following is an example of the expected output of this command:<pre>{<br />  "provider": "amazonbedrock",<br />  "errors": null,<br />  "status": "ProblemDetected",<br />  "problems": 1,<br />  "results": [<br />    {<br />      "kind": "Pod",<br />      "name": "default/crashme",<br />      "error": [<br />        {<br />          "Text": "the last termination reason is Error container=crashme pod=crashme",<br />          "KubernetesDoc": "",<br />          "Sensitive": []<br />        }<br />      ],<br />      "details": " Error: The pod named crashme terminated because the container named crashme crashed.\nSolution: Check logs for crashme pod to identify reason for crash. Restart pod or redeploy application to resolve crash.",<br />      "parentObject": ""<br />    }<br />  ]<br />}</pre>A Python sketch that post-processes this JSON output appears after this table. | AWS DevOps | 
| Check problematic pods. | To check for specific problematic pods, use the following kubectl command:<pre>kubectl get pods --all-namespaces | grep -v Running</pre>Following is an example of the expected output of this command:<pre>NAMESPACE    NAME      READY    STATUS          RESTARTS      AGE                                       <br />default     crashme     0/1   CrashLoopBackOff   260(91s ago)   21h</pre> | AWS DevOps | 
| Get application-specific insights. | This command is particularly useful when:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/implement-ai-powered-kubernetes-diagnostics-and-troubleshooting-with-k8sgpt-and-amazon-bedrock-integration.html)To get application-specific insights, use the following command:<pre>k8sgpt analyze --backend amazonbedrock --explain -L app=nginx -n default</pre>Following is an example of the expected output of this command:<pre>AI Provider: amazonbedrock<br /><br />No problems detected</pre> |  | 
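
If you run K8sGPT from scheduled jobs or CI/CD stages, you can post-process the JSON output shown in the preceding table. The following minimal Python sketch is not part of the pattern's source; it assumes the `k8sgpt` CLI is installed and already configured with the `amazonbedrock` backend, and the field names match the example JSON output above.

```python
import json
import subprocess
import sys

# Run the same analysis shown above, but capture the JSON output.
result = subprocess.run(
    ["k8sgpt", "analyze", "--backend", "amazonbedrock", "--explain", "--output", "json"],
    capture_output=True,
    text=True,
    check=True,
)

report = json.loads(result.stdout)

# Summarize each detected problem (kind, name, and suggested remediation).
for item in report.get("results", []):
    print(f"{item['kind']} {item['name']}")
    print(f"  details: {item['details'].strip()}")

# Exit non-zero if K8sGPT reported any problems, so a pipeline stage can react.
sys.exit(1 if report.get("problems", 0) > 0 else 0)
```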

## Related resources
<a name="implement-ai-powered-kubernetes-diagnostics-and-troubleshooting-with-k8sgpt-and-amazon-bedrock-integration-resources"></a>

**AWS Blogs**
+ [Automate Amazon EKS troubleshooting using an Amazon Bedrock agentic workflow](https://aws.amazon.com/blogs/machine-learning/automate-amazon-eks-troubleshooting-using-an-amazon-bedrock-agentic-workflow/)
+ [Use K8sGPT and Amazon Bedrock for simplified Kubernetes cluster maintenance](https://aws.amazon.com/blogs/machine-learning/use-k8sgpt-and-amazon-bedrock-for-simplified-kubernetes-cluster-maintenance/)

**AWS documentation**
+ AWS CLI commands: [create-cluster](https://docs.aws.amazon.com/cli/latest/reference/eks/create-cluster.html) and [describe-cluster](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/eks/describe-cluster.html)
+ [Get started with Amazon EKS](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html) (Amazon EKS documentation)
+ [Security best practices in IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html) (IAM documentation)

**Other resources**
+ [K8sGPT](https://k8sgpt.ai/)

# Automatically detect changes and initiate different CodePipeline pipelines for a monorepo in CodeCommit
<a name="automatically-detect-changes-and-initiate-different-codepipeline-pipelines-for-a-monorepo-in-codecommit"></a>

*Helton Ribeiro, Petrus Batalha, and Ricardo Morais, Amazon Web Services*

## Summary
<a name="automatically-detect-changes-and-initiate-different-codepipeline-pipelines-for-a-monorepo-in-codecommit-summary"></a>

**Notice**: AWS Cloud9 is no longer available to new customers. Existing customers of AWS Cloud9 can continue to use the service as normal. [Learn more](https://aws.amazon.com/blogs/devops/how-to-migrate-from-aws-cloud9-to-aws-ide-toolkits-or-aws-cloudshell/)

This pattern helps you automatically detect changes to the source code of a monorepo-based application in AWS CodeCommit and then initiate a pipeline in AWS CodePipeline that runs the continuous integration and continuous delivery (CI/CD) automation for each microservice. This approach means that each microservice in your monorepo-based application can have a dedicated CI/CD pipeline, which ensures better visibility, easier sharing of code, and improved collaboration, standardization, and discoverability.

The solution described in this pattern doesn't perform any dependency analysis among the microservices inside the monorepo. It only detects changes in the source code and initiates the matching CI/CD pipeline.

The pattern uses AWS Cloud9 as the integrated development environment (IDE) and AWS Cloud Development Kit (AWS CDK) to define an infrastructure by using two CloudFormation stacks: `MonoRepoStack` and `PipelinesStack`. The `MonoRepoStack` stack creates the monorepo in AWS CodeCommit and the AWS Lambda function that initiates the CI/CD pipelines. The `PipelinesStack` stack defines your pipeline infrastructure.

**Important**  
This pattern’s workflow is a proof of concept (PoC). We recommend that you use it only in a test environment. If you want to use this pattern’s approach in a production environment, see [Security best practices in IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html) in the AWS Identity and Access Management (IAM) documentation and make the required changes to your IAM roles and AWS services. 

## Prerequisites and limitations
<a name="automatically-detect-changes-and-initiate-different-codepipeline-pipelines-for-a-monorepo-in-codecommit-prereqs"></a>

**Prerequisites**
+ An active AWS account.
+ AWS Command Line Interface (AWS CLI), installed and configured. For more information, see [Installing, updating, and uninstalling the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html) in the AWS CLI documentation.
+ Python 3 and `pip`, installed on your local machine. For more information, see the [Python documentation](https://www.python.org/). 
+ AWS CDK, installed and configured. For more information, see [Getting started with the AWS CDK](https://docs.aws.amazon.com/cdk/latest/guide/getting_started.html) in the AWS CDK documentation. 
+ An AWS Cloud9 IDE, installed and configured. For more information, see [Setting up AWS Cloud9](https://docs.aws.amazon.com/cloud9/latest/user-guide/setting-up.html) in the AWS Cloud9 documentation. 
+ The GitHub [AWS CodeCommit monorepo multi-pipeline triggers](https://github.com/aws-samples/monorepo-multi-pipeline-trigger) repository, cloned on your local machine. 
+ An existing directory containing application code that you want to build and deploy with CodePipeline.
+ Familiarity and experience with DevOps best practices on the AWS Cloud. To increase your familiarity with DevOps, you can use the pattern [Build a loosely coupled architecture with microservices using DevOps practices and AWS Cloud9](https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/build-a-loosely-coupled-architecture-with-microservices-using-devops-practices-and-aws-cloud9.html) on the AWS Prescriptive Guidance website.  

## Architecture
<a name="automatically-detect-changes-and-initiate-different-codepipeline-pipelines-for-a-monorepo-in-codecommit-architecture"></a>

The following diagram shows how to use the AWS CDK to define an infrastructure with two AWS CloudFormation stacks: `MonoRepoStack` and `PipelinesStack`.

![\[Workflow to use the AWS CDK to define an infrastructure with two CloudFormation stacks.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/a3397158-a208-4033-844e-969af13ae8b6/images/b0bb1094-b598-4b3d-ab8b-ad9b0eb45f38.png)


The diagram shows the following workflow:

1. The bootstrap process uses the AWS CDK to create the AWS CloudFormation stacks `MonoRepoStack` and `PipelinesStack`.

1. The `MonoRepoStack` stack creates the CodeCommit repository for your application and the `monorepo-event-handler` Lambda function that is initiated after each commit.

1. The `PipelinesStack` stack creates the pipelines in CodePipeline that are initiated by the Lambda function. Each microservice must have a defined infrastructure pipeline.

1. The pipeline for `microservice-n` is initiated by the Lambda function and starts its isolated CI/CD stages that are based on the source code in CodeCommit.

1. The pipeline for `microservice-1` is initiated by the Lambda function and starts its isolated CI/CD stages that are based on the source code in CodeCommit.

The following diagram shows the deployment of the AWS CloudFormation stacks `MonoRepoStack` and `PipelinesStack` in an account.

![\[Deployment of the CloudFormation stacks MonoRepoStack and PipelinesStack in an AWS account.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/a3397158-a208-4033-844e-969af13ae8b6/images/39e60e49-dea2-486d-8a2c-6cae438f69b4.png)


1. A user changes code in one of the application’s microservices.

1. The user pushes the changes from a local repository to a CodeCommit repository.

1. The push activity initiates the Lambda function that receives all pushes to the CodeCommit repository.

1. The Lambda function reads a parameter in Parameter Store, a capability of AWS Systems Manager, to retrieve the most recent commit ID. The parameter has the naming format: `/MonoRepoTrigger/{repository}/{branch_name}/LastCommit`. If the parameter isn’t found, the Lambda function reads the last commit ID from the CodeCommit repository and saves the returned value in Parameter Store.

1. After identifying the commit ID and the changed files, the Lambda function identifies the pipelines for each microservice directory and initiates the required CodePipeline pipeline.
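
The following minimal Boto3 sketch illustrates the logic in steps 4 and 5. It is not the repository's actual `monorepo-event-handler` function; the repository name, branch, and folder-to-pipeline map are illustrative assumptions, and error handling and pagination are omitted.

```python
import boto3

ssm = boto3.client("ssm")
codecommit = boto3.client("codecommit")
codepipeline = boto3.client("codepipeline")

REPOSITORY = "monorepo-sample"                 # assumption: repository name
BRANCH = "main"                                # assumption: monitored branch
PARAM = f"/MonoRepoTrigger/{REPOSITORY}/{BRANCH}/LastCommit"
SERVICE_PIPELINES = {                          # assumption: folder-to-pipeline map
    "demo": "demo-pipeline",
    "hotsite": "hotsite-pipeline",
}


def handler(event, context):
    # Step 4: read the last processed commit ID from Parameter Store, or seed it
    # from the repository if the parameter doesn't exist yet.
    new_commit = codecommit.get_branch(
        repositoryName=REPOSITORY, branchName=BRANCH
    )["branch"]["commitId"]
    try:
        last_commit = ssm.get_parameter(Name=PARAM)["Parameter"]["Value"]
    except ssm.exceptions.ParameterNotFound:
        last_commit = new_commit

    # Identify the top-level directories that changed between the two commits.
    diffs = codecommit.get_differences(
        repositoryName=REPOSITORY,
        beforeCommitSpecifier=last_commit,
        afterCommitSpecifier=new_commit,
    )["differences"]
    changed_dirs = {
        (d.get("afterBlob") or d.get("beforeBlob"))["path"].split("/")[0]
        for d in diffs
    }

    # Step 5: start the pipeline that matches each changed microservice directory.
    for directory in changed_dirs & SERVICE_PIPELINES.keys():
        codepipeline.start_pipeline_execution(name=SERVICE_PIPELINES[directory])

    # Save the new commit ID for the next invocation.
    ssm.put_parameter(Name=PARAM, Value=new_commit, Type="String", Overwrite=True)
```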

## Tools
<a name="automatically-detect-changes-and-initiate-different-codepipeline-pipelines-for-a-monorepo-in-codecommit-tools"></a>
+ [AWS Cloud Development Kit (AWS CDK)](https://docs.aws.amazon.com/cdk/latest/guide/home.html) is a software development framework for defining cloud infrastructure in code and provisioning it through CloudFormation.
+ [Python](https://www.python.org/) is a programming language that lets you work quickly and integrate systems more effectively.

**Code**

The source code and templates for this pattern are available in the GitHub [AWS CodeCommit monorepo multi-pipeline triggers](https://github.com/aws-samples/monorepo-multi-pipeline-trigger) repository.

## Best practices
<a name="automatically-detect-changes-and-initiate-different-codepipeline-pipelines-for-a-monorepo-in-codecommit-best-practices"></a>
+ This sample architecture doesn't include a monitoring solution for the deployed infrastructure. If you want to deploy this solution in a production environment, we recommend that you enable monitoring. For more information, see [Monitor your serverless applications with CloudWatch Application Insights](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/monitor-app-insights.html) in the AWS Serverless Application Model (AWS SAM) documentation.
+ When you edit the sample code provided by this pattern, follow the [best practices for developing and deploying cloud infrastructure](https://docs.aws.amazon.com/cdk/v2/guide/best-practices.html) in the AWS CDK documentation.
+ When you define your microservice pipelines, review the [security best practices](https://docs.aws.amazon.com/codepipeline/latest/userguide/security-best-practices.html) in the AWS CodePipeline documentation.
+ You can also check your AWS CDK code for best practices by using the [cdk-nag](https://github.com/cdklabs/cdk-nag) utility. This tool uses a set of rules, grouped by packs, to evaluate your code. The available packs are:
  + [AWS Solutions Library](https://github.com/cdklabs/cdk-nag/blob/main/RULES.md#awssolutions)
  + [Health Insurance Portability and Accountability Act (HIPAA) security](https://github.com/cdklabs/cdk-nag/blob/main/RULES.md#hipaa-security)
  + [National Institute of Standards and Technology (NIST) 800-53 rev 4](https://github.com/cdklabs/cdk-nag/blob/main/RULES.md#nist-800-53-rev-4)
  + [NIST 800-53 rev 5](https://github.com/cdklabs/cdk-nag/blob/main/RULES.md#nist-800-53-rev-5)
  + [Payment Card Industry Data Security Standard (PCI DSS) 3.2.1](https://github.com/cdklabs/cdk-nag/blob/main/RULES.md#pci-dss-321)

## Epics
<a name="automatically-detect-changes-and-initiate-different-codepipeline-pipelines-for-a-monorepo-in-codecommit-epics"></a>

### Set up the environment
<a name="set-up-the-environment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a virtual Python environment. | In your AWS Cloud9 IDE, create a virtual Python environment and install the required dependencies by running the following command:`make install` | Developer | 
| Bootstrap the AWS account and AWS Region for the AWS CDK. | Bootstrap the required AWS account and Region by running the following command:`make bootstrap account-id=<your-AWS-account-ID> region=<required-region>` | Developer | 

### Add a new pipeline for a microservice
<a name="add-a-new-pipeline-for-a-microservice"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
|  Add your sample code to your application directory. | Add the directory that contains your sample application code to the `monorepo-sample` directory in the cloned GitHub [AWS CodeCommit monorepo multi-pipeline triggers](https://github.com/aws-samples/monorepo-multi-pipeline-trigger) repository. | Developer | 
| Edit the `monorepo-main.json` file. | Add the directory name of your application’s code and the pipeline's name to the `monorepo-main.json` file in the cloned repository. | Developer | 
| Create the pipeline. | In the `Pipelines` directory for the repository, add the pipeline `class` for your application. The directory contains two sample files, `pipeline_hotsite.py` and `pipeline_demo.py`. Each file has three stages: source, build, and deploy. You can copy one of the files and make changes to it according to your application’s requirements. | Developer | 
| Edit the `monorepo_config.py` file. | In `service_map`, add the directory name for your application and the class that you created for the pipeline.For example, the following code shows a pipeline definition in the `Pipelines` directory that uses a file named `pipeline_mysample.py`  with a `MySamplePipeline` class:<pre>...<br /># Pipeline definition imports<br />from pipelines.pipeline_demo import DemoPipeline<br />from pipelines.pipeline_hotsite import HotsitePipeline<br />from pipelines.pipeline_mysample import MySamplePipeline<br /><br />### Add your pipeline configuration here<br />service_map: Dict[str, ServicePipeline]  = {<br />    # folder-name -> pipeline-class<br />    'demo': DemoPipeline(),<br />    'hotsite': HotsitePipeline(),<br />    'mysample': MySamplePipeline()<br />}</pre> | Developer | 

### Deploy the MonoRepoStack stack
<a name="deploy-the-monorepostack-stack"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Deploy the AWS CloudFormation stack. | From the root directory of the cloned repository, deploy the AWS CloudFormation `MonoRepoStack` stack with default parameter values by running the `make deploy-core` command. You can change the repository’s name by running the `make deploy-core monorepo-name=<repo_name>` command. You can also deploy both stacks at the same time by running the `make deploy monorepo-name=<repo_name>` command. | Developer | 
| Validate the CodeCommit repository. | Validate that your resources were created by running the `aws codecommit get-repository --repository-name <repo_name>` command. Because the CloudFormation stack creates the CodeCommit repository where the monorepo is stored, don’t run the `cdk destroy MonoRepoStack` command if you have started to push modifications into it. | Developer | 
| Validate the CloudFormation stack results. | Validate that the CloudFormation `MonoRepoStack` stack is correctly created and configured by running the following command:<pre>aws cloudformation list-stacks --stack-status-filter CREATE_COMPLETE --query "StackSummaries[?StackName == 'MonoRepoStack']"</pre> | Developer | 

### Deploy the PipelinesStack stack
<a name="deploy-the-pipelinesstack-stack"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Deploy the CloudFormation stack. | The AWS CloudFormation `PipelinesStack` stack must be deployed after you deploy the `MonoRepoStack` stack. The stack increases in size when new microservices are added to the monorepo’s code base and is redeployed when a new microservice is onboarded. Deploy the `PipelinesStack` stack by running the `make deploy-pipelines` command. You can also deploy both stacks at the same time by running the `make deploy monorepo-name=<repo_name>` command. The following sample output shows how the `PipelinesStack` deployment prints the URLs for the microservices at the end of the implementation:<pre>Outputs:<br />PipelinesStack.demourl = .cloudfront.net<br />PipelinesStack.hotsiteurl = .cloudfront.net</pre> | Developer | 
| Validate the AWS CloudFormation stack results. | Validate that the AWS CloudFormation `PipelinesStack` stack is correctly created and configured by running the following command:<pre>aws cloudformation list-stacks --stack-status-filter CREATE_COMPLETE UPDATE_COMPLETE --query "StackSummaries[?StackName == 'PipelinesStack']"</pre> | Developer | 

### Clean up resources
<a name="clean-up-resources"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Delete your AWS CloudFormation stacks. | Run the `make destroy` command. | Developer | 
| Delete the S3 buckets for your pipelines. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automatically-detect-changes-and-initiate-different-codepipeline-pipelines-for-a-monorepo-in-codecommit.html) | Developer | 

## Troubleshooting
<a name="automatically-detect-changes-and-initiate-different-codepipeline-pipelines-for-a-monorepo-in-codecommit-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| I encountered AWS CDK issues. | See [Troubleshooting common AWS CDK issues](https://docs.aws.amazon.com/cdk/v2/guide/troubleshooting.html) in the AWS CDK documentation. | 
| I pushed my microservice code, but the microservice pipeline didn't run. | **Setup validation**<br />*Verify branch configuration:* [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automatically-detect-changes-and-initiate-different-codepipeline-pipelines-for-a-monorepo-in-codecommit.html)<br />*Validate configuration files:* [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automatically-detect-changes-and-initiate-different-codepipeline-pipelines-for-a-monorepo-in-codecommit.html)<br />**Troubleshooting on the console**<br />*AWS CodePipeline checks:* [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automatically-detect-changes-and-initiate-different-codepipeline-pipelines-for-a-monorepo-in-codecommit.html)<br />*AWS Lambda troubleshooting:* [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automatically-detect-changes-and-initiate-different-codepipeline-pipelines-for-a-monorepo-in-codecommit.html) | 
| I need to redeploy all my microservices. | There are two approaches to force the redeployment of all microservices. Choose the option that fits your requirements.<br />**Approach 1: Delete a parameter in Parameter Store**<br />This method involves deleting the specific parameter in Systems Manager Parameter Store that tracks the last commit ID used for deployment. When you remove this parameter, the system is forced to redeploy all microservices upon the next trigger, because it perceives it as a fresh state.<br />Steps: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automatically-detect-changes-and-initiate-different-codepipeline-pipelines-for-a-monorepo-in-codecommit.html)<br />Pros: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automatically-detect-changes-and-initiate-different-codepipeline-pipelines-for-a-monorepo-in-codecommit.html)<br />Cons: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automatically-detect-changes-and-initiate-different-codepipeline-pipelines-for-a-monorepo-in-codecommit.html)<br />**Approach 2: Push a commit in each monorepo subfolder**<br />This method involves making a minor change and pushing it in each microservice subfolder within the monorepo to initiate their individual pipelines.<br />Steps: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automatically-detect-changes-and-initiate-different-codepipeline-pipelines-for-a-monorepo-in-codecommit.html)<br />Pros: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automatically-detect-changes-and-initiate-different-codepipeline-pipelines-for-a-monorepo-in-codecommit.html)<br />Cons: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automatically-detect-changes-and-initiate-different-codepipeline-pipelines-for-a-monorepo-in-codecommit.html) | 

## Related resources
<a name="automatically-detect-changes-and-initiate-different-codepipeline-pipelines-for-a-monorepo-in-codecommit-resources"></a>
+ [Continuous integration and delivery (CI/CD) using CDK Pipelines](https://docs.aws.amazon.com/cdk/latest/guide/cdk_pipeline.html) (AWS CDK documentation)
+ [aws-cdk/pipelines module](https://docs.aws.amazon.com/cdk/api/latest/docs/pipelines-readme.html) (AWS CDK API reference)

# Integrate a Bitbucket repository with AWS Amplify using AWS CloudFormation
<a name="integrate-a-bitbucket-repository-with-aws-amplify-using-aws-cloudformation"></a>

*Alwin Abraham, Amazon Web Services*

## Summary
<a name="integrate-a-bitbucket-repository-with-aws-amplify-using-aws-cloudformation-summary"></a>

AWS Amplify helps you quickly deploy and test static websites without having to set up the infrastructure that is typically required. You can use this pattern's approach if your organization wants to use Bitbucket for source control, whether to migrate existing application code or to build a new application. By using AWS CloudFormation to set up Amplify automatically, you gain visibility into the configurations that you use.

This pattern describes how to create a front-end continuous integration and continuous deployment (CI/CD) pipeline and deployment environment by using AWS CloudFormation to integrate a Bitbucket repository with AWS Amplify. The pattern's approach means that you can build an Amplify front-end pipeline for repeatable deployments.

## Prerequisites and limitations
<a name="integrate-a-bitbucket-repository-with-aws-amplify-using-aws-cloudformation-prereqs"></a>

**Prerequisites**
+ An active Amazon Web Services (AWS) account
+ An active Bitbucket account with administrator access
+ Access to a terminal that uses [cURL](https://curl.se/) or the [Postman](https://www.postman.com/) application
+ Familiarity with Amplify
+ Familiarity with AWS CloudFormation
+ Familiarity with YAML-formatted files

## Architecture
<a name="integrate-a-bitbucket-repository-with-aws-amplify-using-aws-cloudformation-architecture"></a>

![\[Diagram showing user interaction with Bitbucket repository connected to AWS Amplify in AWS Cloud region.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/24ae87ed-aa5d-4114-9c5d-bdcb4d40a78b/images/25d73a9d-d2ae-40bc-9ebc-57f9bd13884a.png)


**Technology stack**
+ Amplify
+ AWS CloudFormation
+ Bitbucket

## Tools
<a name="integrate-a-bitbucket-repository-with-aws-amplify-using-aws-cloudformation-tools"></a>
+ [AWS Amplify](https://docs.aws.amazon.com/amplify/) – Amplify helps developers to develop and deploy cloud-powered mobile and web apps.
+ [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) – AWS CloudFormation is a service that helps you model and set up your AWS resources so that you can spend less time managing those resources and more time focusing on your applications that run in AWS.
+ [Bitbucket](https://bitbucket.org/) – Bitbucket is a Git repository management solution designed for professional teams. It gives you a central place to manage Git repositories and collaborate on your source code, and it guides you through the development flow.


**Code**

The `bitbucket-amplify.yml` file (attached) contains the AWS CloudFormation template for this pattern.

## Epics
<a name="integrate-a-bitbucket-repository-with-aws-amplify-using-aws-cloudformation-epics"></a>

### Configure the Bitbucket repository
<a name="configure-the-bitbucket-repository"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| (Optional) Create a Bitbucket repository.  | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/integrate-a-bitbucket-repository-with-aws-amplify-using-aws-cloudformation.html)You can also use an existing Bitbucket repository. | DevOps engineer | 
| Open the workspace settings. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/integrate-a-bitbucket-repository-with-aws-amplify-using-aws-cloudformation.html) | DevOps engineer | 
| Create an OAuth consumer. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/integrate-a-bitbucket-repository-with-aws-amplify-using-aws-cloudformation.html) | DevOps engineer | 
| Obtain an OAuth access token. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/integrate-a-bitbucket-repository-with-aws-amplify-using-aws-cloudformation.html)Run the following command, replacing `KEY` and `SECRET` with the key and secret that you recorded earlier:<pre>curl -X POST -u "KEY:SECRET" https://bitbucket.org/site/oauth2/access_token -d grant_type=client_credentials</pre>Record the access token without the quotation marks. The token is valid for a limited time (two hours by default), so you must run the AWS CloudFormation template within this timeframe. | DevOps engineer | 

### Create and deploy the AWS CloudFormation stack
<a name="create-and-deploy-the-aws-cloudformation-stack"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
|  Download the AWS CloudFormation template. | Download the `bitbucket-amplify.yml` AWS CloudFormation template (attached). This template creates the CI/CD pipeline in Amplify, in addition to the Amplify project and branch. |  | 
| Create and deploy the AWS CloudFormation stack. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/integrate-a-bitbucket-repository-with-aws-amplify-using-aws-cloudformation.html)Choose **Next**, and then choose **Create Stack**. A scripted alternative that uses the AWS SDK for Python (Boto3) follows this table. | DevOps engineer | 
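
As an alternative to the console workflow, you can create the stack programmatically. The following Boto3 sketch is only an illustration: the parameter keys shown here (`OauthToken`, `Repository`, `BranchName`) are assumptions, so confirm them against the `Parameters` section of the attached `bitbucket-amplify.yml` template before using it.

```python
import boto3

cloudformation = boto3.client("cloudformation")

# Parameter keys below are hypothetical -- verify them against the template.
with open("bitbucket-amplify.yml") as template_file:
    template_body = template_file.read()

response = cloudformation.create_stack(
    StackName="amplify-bitbucket-integration",
    TemplateBody=template_body,
    Parameters=[
        {"ParameterKey": "OauthToken", "ParameterValue": "<access-token>"},
        {"ParameterKey": "Repository", "ParameterValue": "https://bitbucket.org/<workspace>/<repo>"},
        {"ParameterKey": "BranchName", "ParameterValue": "main"},
    ],
    Capabilities=["CAPABILITY_IAM"],
)
print(response["StackId"])

# Wait until the stack (and therefore the Amplify app and branch) is created.
cloudformation.get_waiter("stack_create_complete").wait(
    StackName="amplify-bitbucket-integration"
)
```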

### Test the CI/CD pipeline
<a name="test-the-ci-cd-pipeline"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Deploy the code to the branch in your repository. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/integrate-a-bitbucket-repository-with-aws-amplify-using-aws-cloudformation.html)For more information about this, see [Basic Git commands](https://confluence.atlassian.com/bitbucketserver/basic-git-commands-776639767.html) in the Bitbucket documentation.  | App developer | 

## Related resources
<a name="integrate-a-bitbucket-repository-with-aws-amplify-using-aws-cloudformation-resources"></a>

[Authentication methods](https://developer.atlassian.com/bitbucket/api/2/reference/meta/authentication) (Atlassian documentation)

## Attachments
<a name="attachments-24ae87ed-aa5d-4114-9c5d-bdcb4d40a78b"></a>

To access additional content that is associated with this document, unzip the following file: [attachment.zip](samples/p-attach/24ae87ed-aa5d-4114-9c5d-bdcb4d40a78b/attachments/attachment.zip)

# Launch a CodeBuild project across AWS accounts using Step Functions and a Lambda proxy function
<a name="launch-a-codebuild-project-across-aws-accounts-using-step-functions-and-a-lambda-proxy-function"></a>

*Richard Milner-Watts and Amit Anjarlekar, Amazon Web Services*

## Summary
<a name="launch-a-codebuild-project-across-aws-accounts-using-step-functions-and-a-lambda-proxy-function-summary"></a>

This pattern demonstrates how to asynchronously launch an AWS CodeBuild project across multiple AWS accounts by using AWS Step Functions and an AWS Lambda proxy function. You can use the pattern’s sample Step Functions state machine to test the success of your CodeBuild project.

CodeBuild helps you launch operational tasks using the AWS Command Line Interface (AWS CLI) from a fully managed runtime environment. You can change the behavior of your CodeBuild project at runtime by overriding environment variables. Additionally, you can use CodeBuild to manage workflows. For more information, see [Service Catalog Tools](https://service-catalog-tools-workshop.com/tools.html) on the AWS Workshop website and [Schedule jobs in Amazon RDS for PostgreSQL using AWS CodeBuild and Amazon EventBridge](https://aws.amazon.com/blogs/database/schedule-jobs-in-amazon-rds-for-postgresql-using-aws-codebuild-and-amazon-eventbridge/) on the AWS Database Blog.

## Prerequisites and limitations
<a name="launch-a-codebuild-project-across-aws-accounts-using-step-functions-and-a-lambda-proxy-function-prereqs"></a>

**Prerequisites**
+ Two active AWS accounts: a source account for invoking a Lambda proxy function with Step Functions and a target account for building a remote CodeBuild sample project

**Limitations**
+ This pattern cannot be used to copy [artifacts](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-codebuild-project-artifacts.html) between accounts.

## Architecture
<a name="launch-a-codebuild-project-across-aws-accounts-using-step-functions-and-a-lambda-proxy-function-architecture"></a>

The following diagram shows the architecture that this pattern builds.

![\[Architecture diagram of launching a CodeBuild project across multiple AWS accounts\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/809a5716-56e5-477c-aac6-02243675a2f2/images/857ba3ae-eb9a-4d6b-b73e-e596f41c8cb8.png)


The diagram shows the following workflow:

1. The Step Functions state machine parses the supplied input map and invokes the Lambda proxy function (`codebuild-proxy-lambda`) for each account, Region, and project you defined.

1. The Lambda proxy function uses AWS Security Token Service (AWS STS) to assume an IAM proxy role (`codebuild-proxy-role`), which is associated with an IAM policy (`codebuild-proxy-policy`) in the target account.

1. Using the assumed role, the Lambda function launches the CodeBuild project and returns the CodeBuild job ID. The Step Functions state machine loops and polls the CodeBuild job until receiving a success or failure status.
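
The following Python sketch shows the core of such a proxy function. It is a simplified illustration rather than the sample repository's code; the role name follows this pattern (`codebuild-proxy-role`), but the shape of the input event, the project name, and the environment variable override are placeholder assumptions supplied by the state machine input.

```python
import boto3


def handler(event, context):
    # Example input from the Step Functions map state (shape is an assumption):
    # {"account_id": "111122223333", "region": "eu-west-1", "project": "sample-project"}
    role_arn = f"arn:aws:iam::{event['account_id']}:role/codebuild-proxy-role"

    # Assume the proxy role in the target account.
    creds = boto3.client("sts").assume_role(
        RoleArn=role_arn, RoleSessionName="codebuild-proxy"
    )["Credentials"]

    codebuild = boto3.client(
        "codebuild",
        region_name=event["region"],
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )

    if "build_id" in event:
        # Polling path: the state machine loops on this branch until the build
        # reports SUCCEEDED, FAILED, or another terminal status.
        build = codebuild.batch_get_builds(ids=[event["build_id"]])["builds"][0]
        return {"build_id": event["build_id"], "status": build["buildStatus"]}

    # Launch path: start the build, optionally overriding environment variables.
    build = codebuild.start_build(
        projectName=event["project"],
        environmentVariablesOverride=[
            {"name": "TASK", "value": "demo", "type": "PLAINTEXT"}
        ],
    )["build"]
    return {"build_id": build["id"], "status": build["buildStatus"]}
```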

The state machine logic is shown in the following image.

![\[Workflow of Step Functions state machine\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/809a5716-56e5-477c-aac6-02243675a2f2/images/4729bbfc-79ad-455d-a85a-b96cce00f432.png)


**Technology stack**
+ AWS CloudFormation
+ CodeBuild
+ IAM
+ Lambda
+ Step Functions
+ X-Ray

## Tools
<a name="launch-a-codebuild-project-across-aws-accounts-using-step-functions-and-a-lambda-proxy-function-tools"></a>
+ [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) helps you set up AWS resources, provision them quickly and consistently, and manage them throughout their lifecycle across AWS accounts and Regions.
+ [AWS CloudFormation Designer](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/working-with-templates-cfn-designer-json-editor.html) provides an integrated JSON and YAML editor that helps you view and edit CloudFormation templates.
+ [AWS CodeBuild](https://docs.aws.amazon.com/codebuild/latest/userguide/welcome.html) is a fully managed build service that helps you compile source code, run unit tests, and produce artifacts that are ready to deploy.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a compute service that helps you run code without needing to provision or manage servers. It runs your code only when needed and scales automatically, so you pay only for the compute time that you use.
+ [AWS Step Functions](https://docs.aws.amazon.com/step-functions/latest/dg/welcome.html) is a serverless orchestration service that helps you combine AWS Lambda functions and other AWS services to build business-critical applications.
+ [AWS X-Ray](https://docs.aws.amazon.com/xray/latest/devguide/aws-xray.html) helps you collect data about the requests that your application serves, and provides tools that you can use to view, filter, and gain insights into that data to identify issues and opportunities for optimization.

**Code**

The sample code for this pattern is available in the GitHub [Cross Account CodeBuild Proxy](https://github.com/aws-samples/cross-account-codebuild-proxy) repository. This pattern uses the AWS Lambda Powertools for Python library to provide logging and tracing functionality. For more information on this library and its utilities, see [Powertools for AWS Lambda (Python)](https://docs.powertools.aws.dev/lambda/python/latest/).

## Best practices
<a name="launch-a-codebuild-project-across-aws-accounts-using-step-functions-and-a-lambda-proxy-function-best-practices"></a>

1. Adjust the wait time values in the Step Functions state machine to minimize polling requests for job status. Base the wait time on the expected execution time of the CodeBuild project.

1. Adjust the `MaxConcurrency` property of the map in Step Functions to control how many CodeBuild projects can run in parallel.

1. If required, review the sample code for production readiness. Consider what data might be logged by the solution and whether the default Amazon CloudWatch encryption is sufficient.

## Epics
<a name="launch-a-codebuild-project-across-aws-accounts-using-step-functions-and-a-lambda-proxy-function-epics"></a>

### Create the Lambda proxy function and associated IAM role in the source account
<a name="create-the-lambda-proxy-function-and-associated-iam-role-in-the-source-account"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Record the AWS account IDs. | AWS account IDs are required to set up access across accounts. Record the AWS account ID for your source and target accounts. For more information, see [Finding your AWS account ID](https://docs.aws.amazon.com/IAM/latest/UserGuide/console_account-alias.html#FindingYourAWSId) in the IAM documentation. | AWS DevOps | 
| Download the AWS CloudFormation templates. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/launch-a-codebuild-project-across-aws-accounts-using-step-functions-and-a-lambda-proxy-function.html)In the AWS CloudFormation templates, `<SourceAccountId>` is the AWS account ID for the source account, and `<TargetAccountId>` is the AWS account ID for the target account. | AWS DevOps | 
| Create and deploy the AWS CloudFormation stack. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/launch-a-codebuild-project-across-aws-accounts-using-step-functions-and-a-lambda-proxy-function.html)You must create the AWS CloudFormation stack for the proxy Lambda function before creating any resources in target accounts. When you create a trust policy in a target account, the IAM role is translated from the role name to an internal identifier. This is why the IAM role must already exist. | AWS DevOps | 
| Confirm the creation of the proxy function and state machine. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/launch-a-codebuild-project-across-aws-accounts-using-step-functions-and-a-lambda-proxy-function.html) | AWS DevOps | 

### Create an IAM role in the target account and launch a sample CodeBuild project
<a name="create-an-iam-role-in-the-target-account-and-launch-a-sample-codebuild-project"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create and deploy the AWS CloudFormation stack. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/launch-a-codebuild-project-across-aws-accounts-using-step-functions-and-a-lambda-proxy-function.html) | AWS DevOps | 
| Verify the creation of the sample CodeBuild project.  | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/launch-a-codebuild-project-across-aws-accounts-using-step-functions-and-a-lambda-proxy-function.html) | AWS DevOps | 

### Test the cross-account Lambda proxy function
<a name="test-the-cross-account-lambda-proxy-function"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Launch the state machine. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/launch-a-codebuild-project-across-aws-accounts-using-step-functions-and-a-lambda-proxy-function.html) | AWS DevOps | 
| Validate the environment variables. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/launch-a-codebuild-project-across-aws-accounts-using-step-functions-and-a-lambda-proxy-function.html) | AWS DevOps | 

## Troubleshooting
<a name="launch-a-codebuild-project-across-aws-accounts-using-step-functions-and-a-lambda-proxy-function-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| Step Functions execution is taking longer than expected. | Adjust the `MaxConcurrency` property of the map in the Step Functions state machine to control how many CodeBuild projects can run in parallel. | 
| The execution of the CodeBuild jobs is taking longer than expected. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/launch-a-codebuild-project-across-aws-accounts-using-step-functions-and-a-lambda-proxy-function.html) | 

# Manage Multi-AZ failover for EMR clusters by using Application Recovery Controller
<a name="multi-az-failover-spark-emr-clusters-arc"></a>

*Aarti Rajput, Ashish Bhatt, Neeti Mishra, and Nidhi Sharma, Amazon Web Services*

## Summary
<a name="multi-az-failover-spark-emr-clusters-arc-summary"></a>

This pattern offers an efficient disaster recovery strategy for Amazon EMR workloads to help ensure high availability and data consistency across multiple Availability Zones within a single AWS Region. The design uses [Amazon Application Recovery Controller](https://docs.aws.amazon.com/r53recovery/latest/dg/what-is-route53-recovery.html) and an [Application Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html) to manage failover operations and traffic distribution for an Apache Spark-based EMR cluster. 

Under standard conditions, the primary Availability Zone hosts an active EMR cluster and application with full read/write functionality. If an Availability Zone fails unexpectedly, traffic is automatically redirected to the secondary Availability Zone, where a new EMR cluster is launched. Both Availability Zones access a shared Amazon Simple Storage Service (Amazon S3) bucket through dedicated [gateway endpoints](https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-s3.html), which ensure consistent data management. This approach minimizes downtime and enables rapid recovery for critical big data workloads during Availability Zone failures. The solution is useful in industries such as finance or retail, where real-time analytics are crucial.

## Prerequisites and limitations
<a name="multi-az-failover-spark-emr-clusters-arc-prereqs"></a>

**Prerequisites**
+ An active [AWS account](https://aws.amazon.com/resources/create-account/)
+ [Amazon EMR](https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-what-is-emr.html) on Amazon Elastic Compute Cloud (Amazon EC2)
+ Access from the master node of the EMR cluster to Amazon S3
+ AWS Multi-AZ infrastructure

**Limitations**
+ Some AWS services aren’t available in all AWS Regions. For Region availability, see [AWS services by Region](https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/). For specific endpoints, see the [Service endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/aws-service-information.html) page, and choose the link for the service.

**Product versions**
+ [Amazon EMR 6.x and later releases](https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-release-components.html)

## Architecture
<a name="multi-az-failover-spark-emr-clusters-arc-architecture"></a>

**Target technology stack**
+ Amazon EMR cluster
+ Amazon Application Recovery Controller
+ Application Load Balancer
+ Amazon S3 bucket
+ Gateway endpoints for Amazon S3

**Target architecture**

![\[Architecture for an automated recovery mechanism with Application Recovery Controller.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/e5ecdb66-0eef-4a6a-8367-982a55104748/images/e982d580-13db-4bdd-9f6b-6400d7c31c01.png)


This architecture provides application resilience by using multiple Availability Zones and implementing an automated recovery mechanism through the Application Recovery Controller.

1. The Application Load Balancer routes traffic to the active Amazon EMR environment, which is typically the primary EMR cluster in the primary Availability Zone.

1. The active EMR cluster processes the application requests and connects to Amazon S3 through its dedicated Amazon S3 gateway endpoint for read and write operations.

1. Amazon S3 serves as a central data repository and is potentially used as a checkpoint or as shared storage between EMR clusters. EMR clusters maintain data consistency when they write directly to Amazon S3 through the `s3://` protocol and the [EMR File System (EMRFS)](https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-fs.html). 

1. Application Recovery Controller continuously monitors the health of the primary Availability Zone and automatically manages failover operations when necessary.

1. If the Application Recovery Controller detects a failure in the primary EMR cluster, it takes these actions:
   + Initiates the failover process to the secondary EMR cluster in Availability Zone 2.
   + Updates routing configurations to direct traffic to the secondary cluster.
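
To illustrate step 3 of this workflow, the following minimal PySpark sketch writes results directly to Amazon S3 through EMRFS by using the `s3://` protocol, so that a replacement cluster in another Availability Zone can read the same data after a failover. The bucket name and prefixes are placeholders, not part of this pattern.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("emrfs-s3-write").getOrCreate()

# Keep checkpoints in S3 so a replacement cluster in another Availability Zone
# can resume from shared state (bucket name is a placeholder).
spark.sparkContext.setCheckpointDir("s3://amzn-s3-demo-bucket/spark-checkpoints/")

# Write job output directly to S3 through EMRFS instead of to local HDFS.
df = spark.range(0, 1000).withColumnRenamed("id", "value")
df.write.mode("overwrite").parquet("s3://amzn-s3-demo-bucket/output/run-001/")

spark.stop()
```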

## Tools
<a name="multi-az-failover-spark-emr-clusters-arc-tools"></a>

**AWS services**
+ [Amazon Application Recovery Controller](https://docs.aws.amazon.com/r53recovery/latest/dg/what-is-route53-recovery.html) helps you manage and coordinate the recovery of your applications across AWS Regions and Availability Zones. This service simplifies the process and improves the reliability of application recovery by reducing the manual steps required by traditional tools and processes.
+ [Application Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html) operates at the application layer, which is the seventh layer of the Open Systems Interconnection (OSI) model. It distributes incoming application traffic across multiple targets, such as EC2 instances, in multiple Availability Zones. This increases the availability of your application.
+ [AWS Command Line Interface (AWS CLI)](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) is an open source tool that helps you interact with AWS services through commands in your command line shell.
+ [Amazon EMR](https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-what-is-emr.html) is a big data platform that provides data processing, interactive analysis, and machine learning for open source frameworks such as Apache Spark, Apache Hive, and Presto.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.
+ [Amazon S3](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) provides a simple web service interface that you can use to store and retrieve any amount of data, at any time, from anywhere. Using this service, you can easily build applications that make use of cloud native storage.
+ [Gateway endpoints for Amazon S3](https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-s3.html) are gateways that you specify in your route table to access Amazon S3 from your virtual private cloud (VPC) over the AWS network.

## Best practices
<a name="multi-az-failover-spark-emr-clusters-arc-best-practices"></a>
+ Follow [AWS best practices for security, identity, and compliance](https://aws.amazon.com/architecture/security-identity-compliance/?cards-all.sort-by=%5b…%5d.sort-order=desc&awsf.content-type=*all&awsf.methodology=*all) to ensure a robust and secure architecture.
+ Align the architecture with the [AWS Well-Architected Framework](https://aws.amazon.com/architecture/well-architected/).
+ Use Amazon S3 Access Grants to manage access from your Spark-based EMR cluster to Amazon S3. For details, see the blog post [Use Amazon EMR with S3 Access Grants to Scale Spark access to Amazon S3](https://aws.amazon.com/blogs/big-data/use-amazon-emr-with-s3-access-grants-to-scale-spark-access-to-amazon-s3/).
+ [Improve Spark performance with Amazon S3](https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-spark-s3-performance.html).

## Epics
<a name="multi-az-failover-spark-emr-clusters-arc-epics"></a>

### Set up your environment
<a name="set-up-your-environment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Sign in to the AWS Management Console. | Sign in to the [AWS Management Console](https://console.aws.amazon.com/) as an IAM user. For instructions, see the [AWS documentation](https://docs.aws.amazon.com/signin/latest/userguide/introduction-to-iam-user-sign-in-tutorial.html). | AWS DevOps | 
| Configure the AWS CLI. | Install the AWS CLI or update it to the latest version so that you can interact with AWS services from your command line shell. For instructions, see the [AWS CLI documentation](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html). | AWS DevOps | 

### Deploy a Spark application on your EMR cluster
<a name="deploy-a-spark-application-on-your-emr-cluster"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an S3 bucket. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/multi-az-failover-spark-emr-clusters-arc.html) | AWS DevOps | 
| Create an EMR cluster. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/multi-az-failover-spark-emr-clusters-arc.html) | AWS DevOps | 
| Configure security settings for the EMR cluster. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/multi-az-failover-spark-emr-clusters-arc.html) | AWS DevOps | 
| Connect to the EMR cluster. | Connect to the master node of the EMR cluster through SSH by using the provided key pair. Ensure that the key pair file is present in the same directory as your application. Run the following commands to set the correct permissions for the key pair and to establish the SSH connection:<pre>chmod 400 <key-pair-name><br />ssh -i ./<key-pair-name> hadoop@<master-node-public-dns></pre> | AWS DevOps | 
| Deploy the Spark application. | After you establish the SSH connection, you will be in the Hadoop console. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/multi-az-failover-spark-emr-clusters-arc.html) A hedged `spark-submit` sketch follows this table. | AWS DevOps | 
| Monitor the Spark application. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/multi-az-failover-spark-emr-clusters-arc.html) | AWS DevOps | 

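The deployment command itself depends on your application. As a hedged sketch, submitting a PySpark script that you uploaded to the S3 bucket created earlier might look like the following (the bucket and file names are placeholders):

```
# Submit the application to YARN from the master node
spark-submit \
    --master yarn \
    --deploy-mode cluster \
    s3://<bucket-name>/<spark-application>.py

# Confirm that the job was accepted by YARN
yarn application -list
```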
### Shift traffic to another Availability Zone
<a name="shift-traffic-to-another-availability-zone"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an Application Load Balancer. | Set up the target group that routes traffic between Amazon EMR master nodes that are deployed across two Availability Zones within an AWS Region. For instructions, see [Create a target group for your Application Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/create-target-group.html) in the Elastic Load Balancing documentation. | AWS DevOps | 
| Configure zonal shift in Application Recovery Controller. | In this step, you'll use the [zonal shift feature](https://docs.aws.amazon.com/r53recovery/latest/dg/arc-zonal-shift.html) in Application Recovery Controller to shift traffic to another Availability Zone. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/multi-az-failover-spark-emr-clusters-arc.html) To use the AWS CLI, see [Examples of using the AWS CLI with zonal shift](https://docs.aws.amazon.com/r53recovery/latest/dg/getting-started-cli-zonalshift.html) in the Application Recovery Controller documentation. A hedged CLI sketch follows this table. | AWS DevOps | 
| Verify zonal shift configuration and progress. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/multi-az-failover-spark-emr-clusters-arc.html) | AWS DevOps | 

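For reference, a zonal shift can also be started from the AWS CLI. The following sketch uses placeholder values for the load balancer ARN and Availability Zone name; see the linked CLI examples for the authoritative syntax:

```
# Start a temporary zonal shift away from the impaired Availability Zone
aws arc-zonal-shift start-zonal-shift \
    --resource-identifier <application-load-balancer-arn> \
    --away-from <availability-zone-name> \
    --expires-in 10m \
    --comment "Shift traffic away from the impaired Availability Zone"

# Check the status of active zonal shifts
aws arc-zonal-shift list-zonal-shifts --status ACTIVE
```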
## Related resources
<a name="multi-az-failover-spark-emr-clusters-arc-resources"></a>
+ AWS CLI commands:
  + [create-cluster](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/emr/create-cluster.html)
  + [describe-cluster](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/emr/describe-cluster.html)
  + [arc-zonal-shift](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/arc-zonal-shift/index.html)
+ [Configuring Amazon EMR cluster instance types and best practices for Spot instances](https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-plan-instances-guidelines.html) (Amazon EMR documentation)
+ [Security best practices in IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html) (IAM documentation)
+ [Use instance profiles](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2_instance-profiles.html) (IAM documentation)
+ [Use zonal shift and zonal autoshift to recover applications in ARC](https://docs.aws.amazon.com/r53recovery/latest/dg/multi-az.html) (Application Recovery Controller documentation)

# Manage blue/green deployments of microservices to multiple accounts and Regions by using AWS code services and AWS KMS multi-Region keys
<a name="manage-blue-green-deployments-of-microservices-to-multiple-accounts-and-regions-by-using-aws-code-services-and-aws-kms-multi-region-keys"></a>

*Balaji Vedagiri, Vanitha Dontireddy, Ashish Kumar, Faisal Shahdad, Vivek Thangamuthu, and Anand Krishna Varanasi, Amazon Web Services*

## Summary
<a name="manage-blue-green-deployments-of-microservices-to-multiple-accounts-and-regions-by-using-aws-code-services-and-aws-kms-multi-region-keys-summary"></a>

This pattern describes how to deploy a global microservices application from a central AWS account to multiple workload accounts and Regions in accordance with a blue/green deployment strategy. The pattern supports the following:
+ Software is developed in a central account, whereas workloads and applications are spread across multiple accounts and AWS Regions.
+ A single AWS Key Management Service (AWS KMS) multi-Region key is used for encryption and decryption to cover disaster recovery.
+ The KMS key is Region-specific and has to be maintained or created in three different Regions for pipeline artifacts. A KMS multi-Region key helps retain the same key ID across Regions.
+ The Git workflow branching model is implemented with two branches (development and main) and code is merged by using pull requests (PRs). The AWS Lambda function that is deployed from this stack creates a PR from the development branch to the main branch. The PR merge to the main branch initiates an AWS CodePipeline pipeline, which orchestrates the continuous integration and continuous delivery (CI/CD) flow and deploys the stacks across accounts.

This pattern provides a sample infrastructure as code (IaC) setup through AWS CloudFormation stacks to demonstrate this use case. The blue/green deployment of microservices is implemented by using AWS CodeDeploy.

## Prerequisites and limitations
<a name="manage-blue-green-deployments-of-microservices-to-multiple-accounts-and-regions-by-using-aws-code-services-and-aws-kms-multi-region-keys-prereqs"></a>

**Prerequisites**
+ Four active AWS accounts:
  + A tools account to manage the code pipeline and maintain the AWS CodeCommit repository.
  + Three workload (test) accounts for deploying the microservices workload.
+ This pattern uses the following Regions. If you want to use other Regions, you must make the appropriate modifications to the AWS CodeDeploy and AWS KMS multi-Region stacks.
  + Tools (AWS CodeCommit) account: `ap-south-1`
  + Workload (test) account 1: `ap-south-1`
  + Workload (test) account 2: `eu-central-1`
  + Workload (test) account 3: `us-east-1`
+ Three Amazon Simple Storage Service (Amazon S3) buckets for the deployment Regions in each workload account. (These are called `S3BUCKETNAMETESTACCOUNT1`, `S3BUCKETNAMETESTACCOUNT2`, and `S3BUCKETNAMETESTACCOUNT3` later in this pattern.)

  For example, you can create these buckets in specific accounts and Regions with unique bucket names as follows (replace *xxxx* with a random number):

  ```
  ##In Test Account 1
  aws s3 mb s3://ecs-codepipeline-xxxx-ap-south-1 --region ap-south-1
  ##In Test Account 2
  aws s3 mb s3://ecs-codepipeline-xxxx-eu-central-1 --region eu-central-1
  ##In Test Account 3
  aws s3 mb s3://ecs-codepipeline-xxxx-us-east-1 --region us-east-1
  
  #Example
  ##In Test Account 1
  aws s3 mb s3://ecs-codepipeline-18903-ap-south-1 --region ap-south-1
  ##In Test Account 2
  aws s3 mb s3://ecs-codepipeline-18903-eu-central-1 --region eu-central-1
  ##In Test Account 3
  aws s3 mb s3://ecs-codepipeline-18903-us-east-1 --region us-east-1
  ```

**Limitations**

The pattern uses AWS CodeBuild and other configuration files to deploy a sample microservice. If you have a different workload type (for example, serverless), you must update all relevant configurations.

## Architecture
<a name="manage-blue-green-deployments-of-microservices-to-multiple-accounts-and-regions-by-using-aws-code-services-and-aws-kms-multi-region-keys-architecture"></a>

**Target technology stack**
+ AWS CloudFormation
+ AWS CodeCommit
+ AWS CodeBuild
+ AWS CodeDeploy
+ AWS CodePipeline

**Target architecture**

![\[Target architecture for deploying microservices to multiple accounts and Regions\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/a144c977-6823-4b08-a215-fae779b3ce7c/images/eedfabdb-f266-4190-b271-5caf7ac9b47b.png)


**Automation and scale**

The setup is automated by using AWS CloudFormation stack templates (IaC). It can be easily scaled for multiple environments and accounts.

## Tools
<a name="manage-blue-green-deployments-of-microservices-to-multiple-accounts-and-regions-by-using-aws-code-services-and-aws-kms-multi-region-keys-tools"></a>

**AWS services**
+ [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) helps you set up AWS resources, provision them quickly and consistently, and manage them throughout their lifecycle across AWS accounts and Regions.
+ [AWS CodeBuild](https://docs.aws.amazon.com/codebuild/latest/userguide/welcome.html) is a fully managed build service that helps you compile source code, run unit tests, and produce artifacts that are ready to deploy.
+ [AWS CodeCommit](https://docs.aws.amazon.com/codecommit/latest/userguide/welcome.html) is a version control service that helps you privately store and manage Git repositories, without needing to manage your own source control system.
+ [AWS CodeDeploy](https://docs.aws.amazon.com/codedeploy/latest/userguide/welcome.html) automates deployments to Amazon Elastic Compute Cloud (Amazon EC2) or on-premises instances, AWS Lambda functions, or Amazon Elastic Container Service (Amazon ECS) services.
+ [AWS CodePipeline](https://docs.aws.amazon.com/codepipeline/latest/userguide/welcome.html) helps you quickly model and configure the different stages of a software release and automate the steps required to release software changes continuously.
+ [Amazon Elastic Container Registry (Amazon ECR)](https://docs.aws.amazon.com/AmazonECR/latest/userguide/what-is-ecr.html) is a managed container image registry service that’s secure, scalable, and reliable.
+ [Amazon Elastic Container Service (Amazon ECS)](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html) is a fast and scalable container management service that helps you run, stop, and manage containers on a cluster.
+ [AWS Key Management Service (AWS KMS)](https://docs.aws.amazon.com/kms/latest/developerguide/overview.html) helps you create and control cryptographic keys to help protect your data.
+ [Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) is a cloud-based object storage service that helps you store, protect, and retrieve any amount of data.

**Additional tools**
+ [Git](https://git-scm.com/docs) is an open-source, distributed version control system that works with the AWS CodeCommit repository.
+ [Docker](https://www.docker.com/) is a set of platform as a service (PaaS) products that use virtualization at the operating-system level to deliver software in containers. This pattern uses Docker to build and test container images locally.
+ [cfn-lint](https://github.com/aws-cloudformation/cfn-lint) and [cfn-nag](https://github.com/stelligent/cfn_nag) are open-source tools that help you review CloudFormation stacks for any errors and security issues.

**Code repository**

The code for this pattern is available in the GitHub [Global Blue/Green deployments in multiple regions and accounts](https://github.com/aws-samples/ecs-blue-green-global-deployment-with-multiregion-cmk-codepipeline) repository.

## Epics
<a name="manage-blue-green-deployments-of-microservices-to-multiple-accounts-and-regions-by-using-aws-code-services-and-aws-kms-multi-region-keys-epics"></a>

### Set up environment variables
<a name="set-up-environment-variables"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Export environment variables for CloudFormation stack deployment. | Define environment variables that will be used as input to the CloudFormation stacks later in this pattern. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/manage-blue-green-deployments-of-microservices-to-multiple-accounts-and-regions-by-using-aws-code-services-and-aws-kms-multi-region-keys.html) A hedged example of the exports follows this table. | AWS DevOps | 

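The full variable list is defined on the documentation website. As a hedged illustration, the exports might look like the following (the account IDs and names are placeholders):

```
# Workload (test) account IDs and Regions
export TESTACCOUNT1=111111111111
export TESTACCOUNT1REGION=ap-south-1
export TESTACCOUNT2=222222222222
export TESTACCOUNT2REGION=eu-central-1
export TESTACCOUNT3=333333333333
export TESTACCOUNT3REGION=us-east-1

# Tools account, its Region, and the CodeCommit repository name
export TOOLSACCOUNT=444444444444
export TOOLSACCOUNTREGION=ap-south-1
export CODECOMMITACCOUNT=$TOOLSACCOUNT   # the repository lives in the tools account in this pattern
export CODECOMMITREPONAME=<repository-name>

# Buckets created in the prerequisites, plus the prefix used for the artifact buckets
export S3BUCKETNAMETESTACCOUNT1=ecs-codepipeline-xxxx-ap-south-1
export S3BUCKETNAMETESTACCOUNT2=ecs-codepipeline-xxxx-eu-central-1
export S3BUCKETNAMETESTACCOUNT3=ecs-codepipeline-xxxx-us-east-1
export BUCKETSTARTNAME=<artifact-bucket-prefix>
```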
### Package and deploy the CloudFormation stacks for the infrastructure
<a name="package-and-deploy-the-cloudformation-stacks-for-the-infrastructure"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clone the repository. | Clone the [sample repository](https://github.com/aws-samples/ecs-blue-green-global-deployment-with-multiregion-cmk-codepipeline) to your work location:<pre>##In work location<br />git clone https://github.com/aws-samples/ecs-blue-green-global-deployment-with-multiregion-cmk-codepipeline.git</pre> | AWS DevOps | 
| Package the CloudFormation resources. | In this step, you package the local artifacts that the CloudFormation templates reference to create the infrastructure resources required for services such as Amazon Virtual Private Cloud (Amazon VPC) and Application Load Balancer. The templates are available in the `Infra` folder of the code repository.<pre>##In TestAccount1##<br />aws cloudformation package \<br />    --template-file mainInfraStack.yaml \<br />    --s3-bucket $S3BUCKETNAMETESTACCOUNT1 \<br />    --s3-prefix infraStack \<br />    --region $TESTACCOUNT1REGION \<br />    --output-template-file infrastructure_${TESTACCOUNT1}.template</pre><pre>##In TestAccount2##<br />aws cloudformation package \<br />    --template-file mainInfraStack.yaml \<br />    --s3-bucket $S3BUCKETNAMETESTACCOUNT2 \<br />    --s3-prefix infraStack \<br />    --region $TESTACCOUNT2REGION \<br />    --output-template-file infrastructure_${TESTACCOUNT2}.template</pre><pre>##In TestAccount3##<br />aws cloudformation package \<br />    --template-file mainInfraStack.yaml \<br />    --s3-bucket $S3BUCKETNAMETESTACCOUNT3 \<br />    --s3-prefix infraStack \<br />    --region $TESTACCOUNT3REGION \<br />    --output-template-file infrastructure_${TESTACCOUNT3}.template</pre> | AWS DevOps | 
| Validate the package templates. | Validate the package templates:<pre>aws cloudformation validate-template \<br />    --template-body file://infrastructure_${TESTACCOUNT1}.template<br /><br />aws cloudformation validate-template \<br />    --template-body file://infrastructure_${TESTACCOUNT2}.template<br /><br />aws cloudformation validate-template \<br />    --template-body file://infrastructure_${TESTACCOUNT3}.template</pre> | AWS DevOps | 
| Deploy the packaged templates into the workload accounts. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/manage-blue-green-deployments-of-microservices-to-multiple-accounts-and-regions-by-using-aws-code-services-and-aws-kms-multi-region-keys.html) A hedged example follows this table. | AWS DevOps | 

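The deployment commands are detailed on the documentation website. A hedged sketch for workload account 1, assuming the stack name `mainInfrastack` that the later CodeDeploy templates import from, might look like this (repeat in the other workload accounts with their template files and Regions):

```
##In TestAccount1##
aws cloudformation deploy \
    --stack-name mainInfrastack \
    --template-file infrastructure_${TESTACCOUNT1}.template \
    --region $TESTACCOUNT1REGION \
    --capabilities CAPABILITY_NAMED_IAM
```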
### Push a sample image and scale Amazon ECS
<a name="push-a-sample-image-and-scale-amazon-ecs"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Push a sample image to the Amazon ECR repository. | Push a sample (NGINX) image to the Amazon Elastic Container Registry (Amazon ECR) repository named `web` (as set in parameters). You can customize the image as required. To log in and set the credentials for pushing an image to Amazon ECR, follow the instructions in the [Amazon ECR documentation](https://docs.aws.amazon.com/AmazonECR/latest/userguide/docker-push-ecr-image.html). The commands are:<pre>  docker pull nginx<br />  docker images<br />  docker tag <imageid> <aws_account_id>.dkr.ecr.<region>.amazonaws.com/web:latest<br />  docker push <aws_account_id>.dkr.ecr.<region>.amazonaws.com/web:latest</pre> | AWS DevOps | 
| Scale Amazon ECS and verify access. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/manage-blue-green-deployments-of-microservices-to-multiple-accounts-and-regions-by-using-aws-code-services-and-aws-kms-multi-region-keys.html) A hedged example follows this table. | AWS DevOps | 

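The scaling step is detailed on the documentation website. As a hedged example, you could raise the desired task count of the service and then confirm that the application responds through the load balancer (the cluster, service, and DNS names are placeholders):

```
# Increase the desired task count of the ECS service
aws ecs update-service \
    --cluster <ecs-cluster-name> \
    --service <ecs-service-name> \
    --desired-count 2 \
    --region $TESTACCOUNT1REGION

# Verify that the application responds through the Application Load Balancer
curl -s http://<application-load-balancer-dns-name>
```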
### Set up code services and resources
<a name="set-up-code-services-and-resources"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a CodeCommit repository in the tools account. | Create a CodeCommit repository in the tools account by using the `codecommit.yaml` template, which is in the `code` folder of the GitHub repository. You must create this repository only in the single Region where you plan to develop the code.<pre>aws cloudformation deploy --stack-name codecommitrepoStack --parameter-overrides  CodeCommitReponame=$CODECOMMITREPONAME \<br />ToolsAccount=$TOOLSACCOUNT --template-file codecommit.yaml  --region $TOOLSACCOUNTREGION \<br />--capabilities CAPABILITY_NAMED_IAM</pre> | AWS DevOps | 
| Create an S3 bucket for managing artifacts generated by CodePipeline. | Create an S3 bucket for managing artifacts generated by CodePipeline by using the `pre-reqs_bucket.yaml` template, which is in the `code` folder of the GitHub repository. The stack must be deployed in all three workload (test) accounts and in the tools account, in their respective Regions.<pre>aws cloudformation deploy --stack-name pre-reqs-artifacts-bucket --parameter-overrides BucketStartName=$BUCKETSTARTNAME \<br />TestAccount1=$TESTACCOUNT1 TestAccount2=$TESTACCOUNT2 \<br />TestAccount3=$TESTACCOUNT3 CodeCommitAccount=$CODECOMMITACCOUNT ToolsAccount=$TOOLSACCOUNT \<br />--template-file pre-reqs_bucket.yaml --region $TESTACCOUNT1REGION --capabilities CAPABILITY_NAMED_IAM<br /><br />aws cloudformation deploy --stack-name pre-reqs-artifacts-bucket --parameter-overrides BucketStartName=$BUCKETSTARTNAME \<br />TestAccount1=$TESTACCOUNT1 TestAccount2=$TESTACCOUNT2 \<br />TestAccount3=$TESTACCOUNT3 CodeCommitAccount=$CODECOMMITACCOUNT ToolsAccount=$TOOLSACCOUNT \<br />--template-file pre-reqs_bucket.yaml --region $TESTACCOUNT2REGION --capabilities CAPABILITY_NAMED_IAM<br /><br />aws cloudformation deploy --stack-name pre-reqs-artifacts-bucket --parameter-overrides BucketStartName=$BUCKETSTARTNAME \<br />TestAccount1=$TESTACCOUNT1 TestAccount2=$TESTACCOUNT2 \<br />TestAccount3=$TESTACCOUNT3 CodeCommitAccount=$CODECOMMITACCOUNT ToolsAccount=$TOOLSACCOUNT \<br />--template-file pre-reqs_bucket.yaml --region $TESTACCOUNT3REGION --capabilities CAPABILITY_NAMED_IAM<br /><br />aws cloudformation deploy --stack-name pre-reqs-artifacts-bucket --parameter-overrides BucketStartName=$BUCKETSTARTNAME \<br />TestAccount1=$TESTACCOUNT1 TestAccount2=$TESTACCOUNT2 \<br />TestAccount3=$TESTACCOUNT3 CodeCommitAccount=$CODECOMMITACCOUNT ToolsAccount=$TOOLSACCOUNT \<br />--template-file pre-reqs_bucket.yaml --region $TOOLSACCOUNTREGION --capabilities CAPABILITY_NAMED_IAM</pre> | AWS DevOps | 
| Set up a multi-Region KMS key. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/manage-blue-green-deployments-of-microservices-to-multiple-accounts-and-regions-by-using-aws-code-services-and-aws-kms-multi-region-keys.html) A hedged CLI sketch follows this table. | AWS DevOps | 
| Set up the CodeBuild project in the tools account. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/manage-blue-green-deployments-of-microservices-to-multiple-accounts-and-regions-by-using-aws-code-services-and-aws-kms-multi-region-keys.html) | AWS DevOps | 
| Set up CodeDeploy in workload accounts. | Use the `codedeploy.yaml` template in the `code` folder of the GitHub repository to set up CodeDeploy in all three workload accounts. The output of `mainInfraStack` includes the Amazon Resource Names (ARNs) of the Amazon ECS cluster and Application Load Balancer listener. The values from the infrastructure stacks are exported already, so they are imported by the CodeDeploy stack templates.<pre>##WorkloadAccount1##<br />aws cloudformation deploy --stack-name ecscodedeploystack \<br />--parameter-overrides ToolsAccount=$TOOLSACCOUNT mainInfrastackname=mainInfrastack \<br />--template-file codedeploy.yaml --region $TESTACCOUNT1REGION --capabilities CAPABILITY_NAMED_IAM<br /><br />##WorkloadAccount2##<br />aws cloudformation deploy --stack-name ecscodedeploystack \<br />--parameter-overrides ToolsAccount=$TOOLSACCOUNT mainInfrastackname=mainInfrastack \<br />--template-file codedeploy.yaml --region $TESTACCOUNT2REGION --capabilities CAPABILITY_NAMED_IAM<br /><br />##WorkloadAccount3##<br />aws cloudformation deploy --stack-name ecscodedeploystack \<br />--parameter-overrides ToolsAccount=$TOOLSACCOUNT mainInfrastackname=mainInfrastack \<br />--template-file codedeploy.yaml --region $TESTACCOUNT3REGION --capabilities CAPABILITY_NAMED_IAM</pre> | AWS DevOps | 

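The key setup is detailed on the documentation website. If you were creating the multi-Region key with the AWS CLI instead of the provided template, a minimal sketch would be the following (the key ID is a placeholder):

```
# Create a multi-Region primary key in the tools account Region
aws kms create-key \
    --multi-region \
    --description "CodePipeline artifact encryption key" \
    --region $TOOLSACCOUNTREGION

# Replicate the primary key into the other workload Regions
aws kms replicate-key --key-id <primary-key-id> --replica-region $TESTACCOUNT2REGION --region $TOOLSACCOUNTREGION
aws kms replicate-key --key-id <primary-key-id> --replica-region $TESTACCOUNT3REGION --region $TOOLSACCOUNTREGION
```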
### Set up CodePipeline in the tools account
<a name="set-up-codepipeline-in-the-tools-account"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a code pipeline in the tools account. | In the tools account, run the command:<pre>aws cloudformation deploy --stack-name ecscodepipelinestack --parameter-overrides  \<br />TestAccount1=$TESTACCOUNT1 TestAccount1Region=$TESTACCOUNT1REGION \<br />TestAccount2=$TESTACCOUNT2 TestAccount2Region=$TESTACCOUNT2REGION \<br />TestAccount3=$TESTACCOUNT3 TestAccount3Region=$TESTACCOUNT3REGION \<br />CMKARNTools=$CMKTROOLSARN CMKARN1=$CMKARN1 CMKARN2=$CMKARN2 CMKARN3=$CMKARN3 \<br />CodeCommitRepoName=$CODECOMMITREPONAME BucketStartName=$BUCKETSTARTNAME \<br />--template-file codepipeline.yaml --capabilities CAPABILITY_NAMED_IAM</pre> | AWS DevOps | 
| Provide access for CodePipeline and CodeBuild roles in the AWS KMS key policy and S3 bucket policy. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/manage-blue-green-deployments-of-microservices-to-multiple-accounts-and-regions-by-using-aws-code-services-and-aws-kms-multi-region-keys.html) | AWS DevOps | 

### Call and test the pipeline
<a name="call-and-test-the-pipeline"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Push changes to the CodeCommit repository. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/manage-blue-green-deployments-of-microservices-to-multiple-accounts-and-regions-by-using-aws-code-services-and-aws-kms-multi-region-keys.html) A hedged example follows this table. |  | 

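The push steps are detailed on the documentation website. A hedged example, assuming that HTTPS Git credentials for CodeCommit are already configured, might look like the following; per the summary, merging the resulting pull request from the development branch into main starts the pipeline:

```
# Point the local clone at the CodeCommit repository and push to the development branch
git remote add codecommit https://git-codecommit.$TOOLSACCOUNTREGION.amazonaws.com/v1/repos/$CODECOMMITREPONAME
git checkout -b development
git add .
git commit -m "Trigger the blue/green deployment pipeline"
git push codecommit development
```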
### Clean up
<a name="clean-up"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clean up all the deployed resources. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/manage-blue-green-deployments-of-microservices-to-multiple-accounts-and-regions-by-using-aws-code-services-and-aws-kms-multi-region-keys.html) |  | 

## Troubleshooting
<a name="manage-blue-green-deployments-of-microservices-to-multiple-accounts-and-regions-by-using-aws-code-services-and-aws-kms-multi-region-keys-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| Changes that you committed to the repository aren’t getting deployed. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/manage-blue-green-deployments-of-microservices-to-multiple-accounts-and-regions-by-using-aws-code-services-and-aws-kms-multi-region-keys.html) | 

## Related resources
<a name="manage-blue-green-deployments-of-microservices-to-multiple-accounts-and-regions-by-using-aws-code-services-and-aws-kms-multi-region-keys-resources"></a>
+ [Pushing a Docker image](https://docs.aws.amazon.com/AmazonECR/latest/userguide/docker-push-ecr-image.html) (Amazon ECR documentation)
+ [Connect to an AWS CodeCommit repository](https://docs.aws.amazon.com/codecommit/latest/userguide/how-to-connect.html) (AWS CodeCommit documentation)
+ [Troubleshooting AWS CodeBuild](https://docs.aws.amazon.com/codebuild/latest/userguide/troubleshooting.html) (AWS CodeBuild documentation)

# Monitor Amazon ECR repositories for wildcard permissions using AWS CloudFormation and AWS Config
<a name="monitor-amazon-ecr-repositories-for-wildcard-permissions-using-aws-cloudformation-and-aws-config"></a>

*Vikrant Telkar, Wassim Benhallam, and Sajid Momin, Amazon Web Services*

## Summary
<a name="monitor-amazon-ecr-repositories-for-wildcard-permissions-using-aws-cloudformation-and-aws-config-summary"></a>

On the Amazon Web Services (AWS) Cloud, Amazon Elastic Container Registry (Amazon ECR) is a managed container image registry service that supports private repositories with resource-based permissions using AWS Identity and Access Management (IAM).

IAM supports the "`*`" wildcard in both the resource and action attributes, which makes it easier to automatically choose multiple matching items. In your testing environment, you can allow all authenticated AWS users to access an Amazon ECR repository by using the `ecr:*` [wildcard permission](https://docs.aws.amazon.com/lambda/latest/operatorguide/wildcard-permissions-iam.html) in a principal element for your [repository policy statement](https://docs.aws.amazon.com/AmazonECR/latest/userguide/set-repository-policy.html). The `ecr:*` wildcard permission can be useful when developing and testing in development accounts that can't access your production data.

However, you must make sure that the `ecr:*` wildcard permission is not used in your production environments because it can cause serious security vulnerabilities. This pattern’s approach helps you identify Amazon ECR repositories that contain the `ecr:*` wildcard permission in repository policy statements. The pattern provides steps and an AWS CloudFormation template to create a custom rule in AWS Config. An AWS Lambda function then monitors your Amazon ECR repository policy statements for `ecr:*` wildcard permissions. If the function finds non-compliant repository policy statements, it reports the non-compliance to AWS Config, AWS Config sends an event to Amazon EventBridge, and EventBridge publishes the notification to an Amazon Simple Notification Service (Amazon SNS) topic. The SNS topic then notifies you by email about the non-compliant repository policy statements.

## Prerequisites and limitations
<a name="monitor-amazon-ecr-repositories-for-wildcard-permissions-using-aws-cloudformation-and-aws-config-prereqs"></a>

**Prerequisites**
+ An active AWS account.
+ AWS Command Line Interface (AWS CLI), installed and configured. For more information about this, see [Installing, updating, and uninstalling the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html) in the AWS CLI documentation.
+ An existing Amazon ECR repository with an attached repository policy statement in your testing environment. For more information about this, see [Creating a private repository](https://docs.aws.amazon.com/AmazonECR/latest/userguide/repository-create.html) and [Setting a repository policy statement](https://docs.aws.amazon.com/AmazonECR/latest/userguide/set-repository-policy.html) in the Amazon ECR documentation.
+ AWS Config, configured in your preferred AWS Region. For more information about this, see [Getting started with AWS Config](https://docs.aws.amazon.com/config/latest/developerguide/getting-started.html) in the AWS Config documentation.
+ The `aws-config-cloudformation.template` file (attached), downloaded to your local machine.

 

**Limitations**
+ This pattern’s solution is Regional and your resources must be created in the same Region. 

## Architecture
<a name="monitor-amazon-ecr-repositories-for-wildcard-permissions-using-aws-cloudformation-and-aws-config-architecture"></a>

The following diagram shows how AWS Config evaluates Amazon ECR repository policy statements. 

![\[AWS Config workflow with Lambda, Amazon ECR, EventBridge, SNS, and email notification components.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/01bbf5f8-27aa-4c64-9a03-7fcccc0955b8/images/49bbf14b-0a18-4d4a-86ab-162d37708e01.png)


The diagram shows the following workflow:

1. AWS Config initiates a custom rule. 

1. The custom rule invokes a Lambda function to evaluate the compliance of the Amazon ECR repository policy statements. The Lambda function then identifies non-compliant repository policy statements.

1. The Lambda function sends the non-compliance status to AWS Config.

1. AWS Config sends an event to EventBridge.

1. EventBridge publishes the non-compliance notifications to an SNS topic.

1. Amazon SNS sends an email alert to you or an authorized user.

**Automation and scale**

This pattern’s solution can monitor any number of Amazon ECR repository policy statements, but all resources that you want to evaluate must be created in the same Region.

## Tools
<a name="monitor-amazon-ecr-repositories-for-wildcard-permissions-using-aws-cloudformation-and-aws-config-tools"></a>
+ [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) – AWS CloudFormation helps you model and set up your AWS resources, provision them quickly and consistently, and manage them throughout their lifecycle. You can use a template to describe your resources and their dependencies, and launch and configure them together as a stack, instead of managing resources individually. You can manage and provision stacks across multiple AWS accounts and AWS Regions.
+ [AWS Config](https://docs.aws.amazon.com/config/latest/developerguide/WhatIsConfig.html) – AWS Config provides a detailed view of the configuration of AWS resources in your AWS account. This includes how the resources are related to one another and how they were configured in the past so that you can see how the configurations and relationships change over time.
+ [Amazon ECR](https://docs.aws.amazon.com/AmazonECR/latest/userguide/what-is-ecr.html) – Amazon Elastic Container Registry (Amazon ECR) is an AWS managed container image registry service that is secure, scalable, and reliable. Amazon ECR supports private repositories with resource-based permissions using IAM.
+ [Amazon EventBridge](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-what-is.html) – Amazon EventBridge is a serverless event bus service that you can use to connect your applications with data from a variety of sources. EventBridge delivers a stream of real-time data from your applications, software as a service (SaaS) applications, and AWS services to targets such as AWS Lambda functions, HTTP invocation endpoints using API destinations, or event buses in other accounts.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) – AWS Lambda is a compute service that supports running code without provisioning or managing servers. Lambda runs your code only when needed and scales automatically, from a few requests per day to thousands per second. You pay only for the compute time that you consume—there is no charge when your code is not running.
+ [Amazon SNS](https://docs.aws.amazon.com/sns/latest/dg/welcome.html) – Amazon Simple Notification Service (Amazon SNS) coordinates and manages the delivery or sending of messages between publishers and clients, including web servers and email addresses. Subscribers receive all messages published to the topics to which they subscribe, and all subscribers to a topic receive the same messages. 

**Code**

The code for this pattern is available in the `aws-config-cloudformation.template` file (attached).

## Epics
<a name="monitor-amazon-ecr-repositories-for-wildcard-permissions-using-aws-cloudformation-and-aws-config-epics"></a>

### Create the AWS CloudFormation stack
<a name="create-the-aws-cloudformation-stack"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the AWS CloudFormation stack. | Create an AWS CloudFormation stack by running the following command in AWS CLI:<pre>$ aws cloudformation create-stack --stack-name=AWSConfigECR \<br />    --template-body  file://aws-config-cloudformation.template \<br />    --parameters ParameterKey=<email>,ParameterValue=<myemail@example.com> \<br />    --capabilities CAPABILITY_NAMED_IAM</pre> | AWS DevOps | 

### Test the AWS Config custom rule
<a name="test-the-aws-config-custom-rule"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Test the AWS Config custom rule. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/monitor-amazon-ecr-repositories-for-wildcard-permissions-using-aws-cloudformation-and-aws-config.html) A hedged example follows this table. | AWS DevOps | 

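The test steps are detailed on the documentation website. As a hedged illustration, you could attach a deliberately non-compliant policy that uses the `ecr:*` wildcard permission to a test repository and then review the results that the custom rule reports (the repository name, account ID, and rule name are placeholders):

```
# Attach a repository policy that contains the ecr:* wildcard permission
aws ecr set-repository-policy \
    --repository-name <test-repository> \
    --policy-text '{
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "WildcardAccess",
          "Effect": "Allow",
          "Principal": {"AWS": "arn:aws:iam::<account-id>:root"},
          "Action": "ecr:*"
        }
      ]
    }'

# Review the compliance results reported by the custom rule
aws configservice describe-compliance-by-config-rule --config-rule-names <custom-rule-name>
```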
## Attachments
<a name="attachments-01bbf5f8-27aa-4c64-9a03-7fcccc0955b8"></a>

To access additional content that is associated with this document, unzip the following file: [attachment.zip](samples/p-attach/01bbf5f8-27aa-4c64-9a03-7fcccc0955b8/attachments/attachment.zip)

# Optimize multi-account serverless deployments by using the AWS CDK and GitHub Actions workflows
<a name="optimize-multi-account-serverless-deployments"></a>

*Sarat Chandra Pothula and VAMSI KRISHNA SUNKAVALLI, Amazon Web Services*

## Summary
<a name="optimize-multi-account-serverless-deployments-summary"></a>

Organizations deploying serverless infrastructure across multiple AWS accounts and environments often encounter challenges like code duplication, manual processes, and inconsistent practices. This pattern’s solution shows how to use the AWS Cloud Development Kit (AWS CDK) in Go and GitHub Actions reusable workflows to streamline multi-account serverless infrastructure management. This solution demonstrates how you can define cloud resources as code, implement standardized continuous integration/continuous deployment (CI/CD) processes, and create modular, reusable components. 

By using these tools, organizations can efficiently manage cross-account resources, implement consistent deployment pipelines, and simplify complex serverless architectures. The approach also enhances security and compliance by enforcing standardized practices for use with AWS accounts, ultimately improving productivity and reducing errors in serverless application development and deployment.

## Prerequisites and limitations
<a name="optimize-multi-account-serverless-deployments-prereqs"></a>

**Prerequisites**
+ An active AWS account.
+ AWS Identity and Access Management (IAM) [roles and permissions](https://docs.aws.amazon.com/AmazonECR/latest/userguide/security-iam.html) are in place for the deployment process. This includes permissions to access Amazon Elastic Container Registry (Amazon ECR) repositories, create AWS Lambda functions, and any other required resources across the target AWS accounts.
+ AWS Command Line Interface (AWS CLI) version 2.9.11 or later, [installed](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html) and [configured](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html).
+ AWS Cloud Development Kit (AWS CDK) version 2.114.1 or later, [installed](https://docs.aws.amazon.com/cdk/v2/guide/getting_started.html#getting_started_install) and [bootstrapped](https://docs.aws.amazon.com/cdk/v2/guide/getting_started.html#getting_started_bootstrap).
+ Go 1.22 or later, [installed](https://go.dev/doc/install).
+ Docker 24.0.6 or later, [installed](https://docs.docker.com/engine/install/).

**Limitations**
+ **Language compatibility** – Go is a popular language for serverless applications. However, in addition to Go, the AWS CDK supports other programming languages, including C#, Java, Python, and TypeScript. If your organization has existing code bases or expertise in other languages, you might need to adapt or learn Go to fully use the solution described in the pattern.
+ **Learning curve** – Adopting the AWS CDK, Go (if it’s new to the organization), and GitHub reusable workflows might involve a learning curve for developers and DevOps teams. Training and documentation might be required to ensure smooth adoption and effective use of these technologies.

## Architecture
<a name="optimize-multi-account-serverless-deployments-architecture"></a>

The following diagram shows the workflow and architecture components for this pattern.

![\[Architecture of AWS CDK and GitHub Actions workflows for multi-account serverless infrastructure management.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/8d61917b-bd27-44fa-ae95-55358aaf8812/images/a4b36793-95c7-42f7-a92f-99b4722c9c64.png)


This solution performs the following steps:

1. The developer clones the repository, creates a new branch, and makes changes to the application code in their local environment.

1. The developer commits these changes and pushes the new branch to the GitHub repository.

1. The developer creates a pull request in the GitHub repository, proposing to merge the feature branch into the main branch.

1. This pull request triggers the continuous integration (CI) GitHub Actions workflow. The CI and the continuous deployment (CD) workflows in this pattern use reusable workflows, which are predefined, modular templates that can be shared and executed across different projects or repositories. Reusable workflows promote standardization and efficiency in the CI/CD processes.

1. The CI workflow sets up the necessary environment, generates a Docker tag for the image, and builds the Docker image using the application code. 

1. The CI workflow authenticates with AWS by using the central AWS account GitHub OIDC role. For CI workflows, the central AWS account GitHub OIDC role uses AWS Security Token Service (AWS STS) to obtain temporary credentials. These credentials allow the role to build and push Docker images to the Amazon ECR repository of the central AWS account.

1. The CI workflow pushes the built Docker image to Amazon ECR.

1. The CI workflow stores the image tag in AWS Systems Manager Parameter Store.

1. After the CI workflow completes successfully, the Docker image tag is output. 

1. When triggering the CD workflow, the developer manually inputs the image tag of the Docker image that they want to deploy. This image tag corresponds to the tag that was generated and pushed to Amazon ECR during the CI workflow.

1. The developer manually triggers the CD workflow, which uses the CD reusable workflow. 

1. The CD workflow authenticates with AWS using the central AWS account GitHub OIDC role. For the CD workflow, AWS STS is first used to assume the central AWS account GitHub OIDC role. Then, this role assumes the CDK bootstrap roles for target account deployments. 

1. The CD workflow uses the AWS CDK to synthesize AWS CloudFormation templates.

1. The CD workflow deploys the application to the target AWS account by running `cdk deploy`, using the manually specified image tag for the Lambda function.

## Tools
<a name="optimize-multi-account-serverless-deployments-tools"></a>

**AWS services**
+ [AWS Cloud Development Kit (AWS CDK)](https://docs.aws.amazon.com/cdk/latest/guide/home.html) is a software development framework that helps you define and provision AWS Cloud infrastructure in code.
+ [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) helps you set up AWS resources, provision them quickly and consistently, and manage them throughout their lifecycle across AWS accounts and AWS Regions. CloudFormation is an integral part of the AWS CDK deployment process. The CDK synthesizes CloudFormation templates and then uses CloudFormation to create or update the resources in the AWS environment.
+ [Amazon Elastic Container Registry (Amazon ECR)](https://docs.aws.amazon.com/AmazonECR/latest/userguide/what-is-ecr.html) is a managed container image registry service that’s secure, scalable, and reliable.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a compute service that helps you run code without needing to provision or manage servers. It runs your code only when needed and scales automatically, so you pay only for the compute time that you use.
+ [AWS Systems Manager Parameter Store](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-parameter-store.html) provides secure, hierarchical storage for configuration data management and secrets management.

**Other tools**
+ [Docker](https://www.docker.com/) is a set of platform as a service (PaaS) products that use virtualization at the operating-system level to deliver software in containers.
+ [GitHub Actions](https://docs.github.com/en/actions/writing-workflows/quickstart) is a continuous integration and continuous delivery (CI/CD) platform that’s tightly integrated with GitHub repositories. You can use GitHub Actions to automate your build, test, and deployment pipeline.
+ [Go](https://go.dev/doc/install) is an open source programming language that Google supports.

**Code repository**

The code for this pattern is available in the GitHub [aws-cdk-golang-serverless-cicd-github-actions](https://github.com/aws-samples/aws-cdk-golang-serverless-cicd-github-actions) repository.

## Best practices
<a name="optimize-multi-account-serverless-deployments-best-practices"></a>
+ **Modular design** – Organize your AWS CDK code into modular and reusable constructs or stacks, promoting code reuse and maintainability across multiple accounts and projects.
+ **Separation of concerns** – Separate the infrastructure code from the application code, allowing for independent deployment and management of each component.
+ **Versioning and immutability** – Treat your infrastructure as code (IaC), and use Git for version control. Embrace immutable infrastructure principles by creating new resources instead of modifying existing ones.
+ **Testing and validation** – Implement comprehensive testing strategies, including unit tests, integration tests, and end-to-end tests, to help support the correctness and reliability of your AWS CDK code and deployments.
+ **Security and compliance** – Follow AWS security best practices, such as least-privilege access, secure communication, and data encryption. Implement compliance checks and auditing mechanisms to ensure adherence to organizational policies and regulatory requirements. Implement security best practices for container images, such as scanning for vulnerabilities, enforcing image signing, and adhering to compliance requirements for your organization.
+ **Monitoring and logging** – Set up monitoring and logging mechanisms to track the health and performance of your serverless applications and infrastructure. Use AWS services like Amazon CloudWatch, AWS CloudTrail, and AWS X-Ray for monitoring and auditing purposes.
+ **Automation and CI/CD** – Use GitHub reusable workflows and other CI/CD tools to automate the build, testing, and deployment processes, which can help support consistent and repeatable deployments across multiple accounts.
+ **Environment management** – Maintain separate environments (for example, development, staging, and production). Implement strategies for promoting changes between environments, ensuring proper testing and validation before production deployments.
+ **Documentation and collaboration** – Document your infrastructure code, deployment processes, and best practices to facilitate knowledge sharing and collaboration within your team.
+ **Cost optimization** – Implement cost monitoring and optimization strategies, such as rightsizing resources, making use of auto-scaling, and taking advantage of AWS cost optimization services such as AWS Budgets and AWS Cost Explorer.
+ **Disaster recovery and backup** – Plan for disaster recovery scenarios by implementing backup and restore mechanisms for your serverless applications and infrastructure resources.
+ **Continuous improvement** – Regularly review and update your practices, tools, and processes to align with the latest best practices, security recommendations, and technological advancements in the serverless ecosystem.
+ **Improve security posture** – Use [AWS PrivateLink](https://docs.aws.amazon.com/vpc/latest/privatelink/what-is-privatelink.html) to improve the security posture of your virtual private cloud (VPC) by configuring interface VPC endpoints for Amazon ECR, AWS Lambda, and AWS Systems Manager Parameter Store.

## Epics
<a name="optimize-multi-account-serverless-deployments-epics"></a>

### Set up the environments
<a name="set-up-the-environments"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an Amazon ECR repository in the central AWS account. | To share container images across multiple AWS accounts, you must configure cross-account access for Amazon ECR. First, create an Amazon ECR repository in the central AWS account by running the following command:<pre>aws ecr create-repository --repository-name sample-repo</pre>In a later task, grant pull access to the other AWS accounts that need to use the container image. | AWS DevOps | 
| Add cross-account permissions to the Amazon ECR repository. | To add cross-account permissions to the Amazon ECR repository in the central AWS account, attach the following repository policy (replace the placeholders with your target account ID and Region):<pre>{<br />  "Version": "2008-10-17",<br />  "Statement": [<br />    {<br />      "Sid": "LambdaECRImageRetrievalPolicy",<br />      "Effect": "Allow",<br />      "Principal": {<br />        "Service": "lambda.amazonaws.com"<br />      },<br />      "Action": [<br />        "ecr:BatchGetImage",<br />        "ecr:GetDownloadUrlForLayer"<br />      ],<br />      "Condition": {<br />        "StringLike": {<br />          "aws:sourceArn": "arn:aws:lambda:<Target_Region>:<Target_Account_ID>:function:*"<br />        }<br />      }<br />    },<br />    {<br />      "Sid": "CrossAccountPull",<br />      "Effect": "Allow",<br />      "Principal": {<br />        "AWS": "arn:aws:iam::<Target_Account_ID>:root"<br />      },<br />      "Action": [<br />        "ecr:BatchGetImage",<br />        "ecr:GetDownloadUrlForLayer"<br />      ]<br />    }<br />  ]<br />}</pre> | AWS DevOps | 
| Configure a GitHub OIDC role in the central AWS account. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/optimize-multi-account-serverless-deployments.html) A hedged sketch of registering the OIDC provider and creating the role follows this table. | AWS DevOps | 
| Bootstrap the AWS environment in the target AWS accounts. | Set up a CDK environment in a specific AWS account and AWS Region that enables cross-account deployments from a central account and applies least-privilege principles to the CloudFormation execution role. To [bootstrap](https://docs.aws.amazon.com/cdk/v2/guide/bootstrapping.html) an AWS environment, run the following command:<pre>cdk bootstrap aws://<Target_Account_ID>/<Target_Region> --trust <Central_Account_ID> --cloudformation-execution-policies arn:aws:iam::aws:policy/<Least_Privilege_Policy></pre> | AWS DevOps | 
| Grant central AWS account OIDC role access to the target AWS account bootstrap roles. | The CDK bootstrap creates the following IAM roles that are designed to be assumed by the central AWS account during various stages of the CDK deployment process:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/optimize-multi-account-serverless-deployments.html)Each role has specific permissions tailored to its purpose, following the least-privilege principle. The `Target_Account_ID` and `Target_Region` in each role name help to indicate that these roles are unique across different AWS accounts and Regions. This approach supports clear identification and management in multi-account, multi-Region setups.<pre>Target Account CDK Bootstrap Roles<br />arn:aws:iam::<Target_Account_ID>:role/cdk-deploy-role-<Target_Account_ID>-<Target_Region><br />arn:aws:iam::<Target_Account_ID>:role/cdk-file-publishing-role-<Target_Account_ID>-<Target_Region><br />arn:aws:iam::<Target_Account_ID>:role/cdk-image-publishing-role-<Target_Account_ID>-<Target_Region><br />arn:aws:iam::<Target_Account_ID>:role/cdk-lookup-role-<Target_Account_ID>-<Target_Region></pre>[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/optimize-multi-account-serverless-deployments.html)To update the permissions policy for the OIDC role in the central AWS account, use the following code:<pre>{<br />    "Version": "2012-10-17",		 	 	 <br />    "Statement": [<br />        {<br />            "Effect": "Allow",<br />            "Action": "sts:AssumeRole",<br />            "Resource": [<br />                "arn:aws:iam::<Target_Account_ID>:role/cdk-deploy-role-<Target_Account_ID>-<Target_Region>",<br />                "arn:aws:iam::<Target_Account_ID>:role/cdk-file-publishing-role-<Target_Account_ID>-<Target_Region>",<br />                "arn:aws:iam::<Target_Account_ID>:role/cdk-image-publishing-role-<Target_Account_ID>-<Target_Region>",<br />                "arn:aws:iam::<Target_Account_ID>:role/cdk-lookup-role-<Target_Account_ID>-<Target_Region>"<br />            ]<br />        }<br />    ]<br /> }<br /></pre> | AWS DevOps | 

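The role configuration is detailed on the documentation website. As a minimal sketch, registering GitHub's OIDC identity provider and creating a role for the workflows to assume might look like the following (the role name, thumbprint, and trust policy file are placeholders, and the trust policy itself is not shown):

```
# Register GitHub's OIDC identity provider in the central AWS account
aws iam create-open-id-connect-provider \
    --url https://token.actions.githubusercontent.com \
    --client-id-list sts.amazonaws.com \
    --thumbprint-list <provider-thumbprint>

# Create the role that GitHub Actions workflows assume through OIDC
aws iam create-role \
    --role-name <github-oidc-role-name> \
    --assume-role-policy-document file://<github-oidc-trust-policy>.json
```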
### Build the Docker image
<a name="build-the-docker-image"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clone the project repository. | To clone this pattern’s [GitHub repository](https://github.com/aws-samples/aws-cdk-golang-serverless-cicd-github-actions), run the following command:<pre>git clone https://github.com/aws-samples/aws-cdk-golang-serverless-cicd-github-actions.git</pre> | AWS DevOps | 
| Go to the Dockerfile path. | To navigate to the Dockerfile path, run the following command:<pre>cd lambda</pre> | AWS DevOps | 
| Authenticate Docker with Amazon ECR. | Amazon ECR requires secure access to your private container repositories. By signing in this way, you're allowing Docker on your local machine or CI/CD environment to interact with Amazon ECR securely. To authenticate Docker with Amazon ECR, run the following command:<pre>aws ecr get-login-password --region <AWS_REGION> | docker login --username AWS --password-stdin <AWS_ACCOUNT_ID>.dkr.ecr.<AWS_REGION>.amazonaws.com</pre>Replace the `AWS_REGION` and `AWS_ACCOUNT_ID` placeholders with your information. | AWS DevOps | 
| Build the Docker image. | To build the Docker image, run the following command:<pre>docker build --platform linux/arm64 -t sample-app .</pre> | AWS DevOps | 
| Tag and push the Docker image. | To tag and push the Docker image to the Amazon ECR repository, run the following commands:<pre>docker tag sample-app:latest <AWS_ACCOUNT_ID>.dkr.ecr.<AWS_REGION>.amazonaws.com/<ECR_REPOSITORY>:<DOCKER_TAG></pre><pre>docker push <AWS_ACCOUNT_ID>.dkr.ecr.<AWS_REGION>.amazonaws.com/<ECR_REPOSITORY>:<DOCKER_TAG></pre>Replace the `AWS_ACCOUNT_ID`, `AWS_REGION`, `ECR_REPOSITORY`, and `DOCKER_TAG` placeholders with your information. | AWS DevOps | 

### Deploy the AWS CDK app
<a name="deploy-the-cdk-app"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Synthesize the CDK stack with environment-specific variables. | To generate the CloudFormation template for your infrastructure as defined in your CDK code, run the following command:<pre>ENV=<environment> IMAGETAG=<image_tag> ECR_ARN=<ecr_repo_arn> cdk synth</pre>Revise the following placeholders with your information:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/optimize-multi-account-serverless-deployments.html) | AWS DevOps | 
| Deploy the CDK stack. | To deploy the CDK stack to your AWS account, run the following command. The `--require-approval never` flag means that the CDK will automatically approve and execute *all* changes. This includes changes that the CDK would normally flag as needing manual review (such as IAM policy changes or removal of resources). Make sure that your CDK code and CI/CD pipeline are well-tested and secure before using the `--require-approval never` flag in production environments.<pre>ENV=<environment> IMAGETAG=<image_tag> ECR_ARN=<ecr_repo_arn> cdk deploy --require-approval never</pre> | AWS DevOps | 

### Automate CI/CD using GitHub Actions workflows
<a name="automate-ci-cd-using-github-actions-workflows"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a feature branch, and add your changes. | Use the cloned repository that you created earlier, create a feature branch, and then add your changes to the application code. Use the following commands:<pre>git checkout -b <feature_branch><br />git add .<br />git commit -m "add your changes"<br />git push origin <feature_branch></pre>Following are examples of changes:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/optimize-multi-account-serverless-deployments.html)GitHub Actions will use the reusable workflows and trigger the CI/CD pipelines. | AWS DevOps | 
| Merge your changes. | Create a pull request, and merge your changes to main. | AWS DevOps | 

## Troubleshooting
<a name="optimize-multi-account-serverless-deployments-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| `AccessDenied` errors when deploying resources across AWS accounts, for example, `AccessDenied: User not authorized to perform: "sts:AssumeRole"`. | To help resolve this issue, do the following to verify cross-account permissions:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/optimize-multi-account-serverless-deployments.html) | 
| Compatibility issues because of version mismatches, for example, `undefined: awscdkStack` error with an outdated CDK version. | To help resolve this issue, do the following to verify that you’re using the required versions of the AWS CDK and Go:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/optimize-multi-account-serverless-deployments.html) | 
| CI/CD pipeline failures, for example, `Error: No such file or directory` because of incorrect YAML configuration or `Permission denied` for protected branches. | To help resolve issues with the GitHub Actions configuration, verify that the reusable workflows are properly referenced and configured. | 

## Related resources
<a name="optimize-multi-account-serverless-deployments-resources"></a>

**AWS resources**
+ [AWS Best Practices for Security, Identity, & Compliance](https://aws.amazon.com/architecture/security-identity-compliance/)
+ [AWS CDK Workshop](https://cdkworkshop.com/60-go.html)
+ [AWS Cloud Development Kit Library](https://pkg.go.dev/github.com/aws/aws-cdk-go/awscdk/v2)
+ [Create a Lambda function using a container image](https://docs.aws.amazon.com/lambda/latest/dg/images-create.html)
+ [Identity and Access Management for Amazon Elastic Container Registry](https://docs.aws.amazon.com/AmazonECR/latest/userguide/security-iam.html)
+ [Working with the AWS CDK in Go](https://docs.aws.amazon.com/cdk/v2/guide/work-with-cdk-go.html)

**Other resources**
+ [Configuring OpenID Connect in Amazon Web Services](https://docs.github.com/en/actions/security-for-github-actions/security-hardening-your-deployments/configuring-openid-connect-in-amazon-web-services) (GitHub documentation)
+ [Golang Documentation](https://golang.org/doc/)
+ [Quickstart for GitHub Actions](https://docs.github.com/en/actions/writing-workflows/quickstart) (GitHub documentation)
+ [Reusing workflows](https://docs.github.com/en/actions/sharing-automations/reusing-workflows) (GitHub documentation)

# Provision AWS Service Catalog products based on AWS CloudFormation templates by using GitHub Actions
<a name="provision-aws-service-catalog-products-using-github-actions"></a>

*Ashish Bhatt and Ruchika Modi, Amazon Web Services*

## Summary
<a name="provision-aws-service-catalog-products-using-github-actions-summary"></a>

This pattern provides organizations with a streamlined approach using [AWS Service Catalog](https://docs.aws.amazon.com/servicecatalog/latest/adminguide/introduction.html) products and portfolios to provision standardized and compliant AWS services across teams. [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) helps to combine essential components in Service Catalog products and portfolios for provisioning base network infrastructure on AWS Cloud. This pattern also promotes DevOps practices by integrating infrastructure as code (IaC) into automated development workflows by using [GitHub Actions](https://github.com/features/actions).

AWS Service Catalog enables organizations to create and manage approved IT services on AWS, offering benefits such as standardization, centralized control, self-service provisioning, and cost management. By automating the deployment of Service Catalog portfolios and products through GitHub Actions, companies can do the following:
+ Achieve consistent and repeatable deployments. 
+ Use version control for IaC. 
+ Integrate cloud resource management with existing development workflows. 

This combination streamlines cloud operations, enforces compliance, and accelerates the delivery of approved services while reducing manual errors and improving overall efficiency.

## Prerequisites and limitations
<a name="provision-aws-service-catalog-products-using-github-actions-prereqs"></a>

**Prerequisites**
+ An active AWS account 
+ Access to a [GitHub repository](https://docs.github.com/en/get-started/quickstart/create-a-repo)
+ Basic understanding of AWS CloudFormation and AWS Service Catalog
+ An Amazon Simple Storage Service (Amazon S3) bucket to host CloudFormation templates
+ An AWS Identity and Access Management (IAM) role named `github-actions` that is used for connectivity between GitHub and AWS

**Limitations**
+ This pattern’s reusable code has been tested only with GitHub Actions.
+ Some AWS services aren’t available in all AWS Regions. For Region availability, see [AWS services by Region](https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/). For specific endpoints, see [Service endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/aws-service-information.html), and choose the link for the service.

**Product versions**

This pattern’s solution was created by using the following [GitHub Marketplace](https://github.com/marketplace) actions and their respective versions:
+ `actions/checkout@v4`
+ `aws-actions/configure-aws-credentials@v2`
+ `aws-actions/aws-cloudformation-github-deploy@v1.2.0`

## Architecture
<a name="provision-aws-service-catalog-products-using-github-actions-architecture"></a>

The following diagram shows the architecture for this solution.

![\[Using GitHub Actions to provision Service Catalog products based on CloudFormation templates.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/49f82fa7-0c74-4581-bf92-95505dca264c/images/a13c7b41-534e-4a9e-bdca-2974fa40a49a.png)


1. Administrators or platform engineers push standardized CloudFormation templates to a GitHub repository, where the templates are maintained. The GitHub repo also contains workflows that automate the provisioning of AWS Service Catalog using GitHub Actions.

1. GitHub Actions triggers a workflow that connects to the AWS Cloud using an OpenID Connect (OIDC) provider to provision Service Catalog.

1. Service Catalog contains the portfolio and products that developers can directly use to provision standardized AWS resources. This pattern bundles AWS resources such as virtual private clouds (VPCs), subnets, NAT and internet gateways, and route tables.

1. When the developer provisions a Service Catalog product, Service Catalog deploys the pre-configured, standardized AWS resources that the product defines. As a result, developers save time because they don’t need to provision individual resources and configure them manually.
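
The last step can happen from the Service Catalog console or programmatically. The following minimal sketch, which is not part of the pattern’s code, shows how a developer might launch the published product by using Boto3 (the AWS SDK for Python); the product ID, provisioning artifact ID, and parameter name are placeholders that depend on your deployment.

```
import boto3

# Sketch only: launch the Service Catalog product that the pipeline published.
# All IDs and parameter names below are placeholders.
servicecatalog = boto3.client("servicecatalog")

response = servicecatalog.provision_product(
    ProductId="prod-xxxxxxxxxxxxx",             # placeholder product ID
    ProvisioningArtifactId="pa-xxxxxxxxxxxxx",  # placeholder product version ID
    ProvisionedProductName="team-a-base-network",
    ProvisioningParameters=[
        {"Key": "VpcCidr", "Value": "10.0.0.0/16"},  # hypothetical template parameter
    ],
)
print(response["RecordDetail"]["RecordId"])
```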

## Tools
<a name="provision-aws-service-catalog-products-using-github-actions-tools"></a>

**AWS services**
+ [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) helps you set up AWS resources, provision them quickly and consistently, and manage them throughout their lifecycle across AWS accounts and AWS Regions. It's an infrastructure as code (IaC) service that can be easily used as one of the product types with AWS Service Catalog.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.
+ [AWS Service Catalog](https://docs.aws.amazon.com/servicecatalog/latest/adminguide/getstarted.html) helps you centrally manage catalogs of IT services that are approved for use on AWS. End users can quickly deploy only the approved IT services they need, following the constraints set by your organization.
+ [Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) is a cloud-based object storage service that helps you store, protect, and retrieve any amount of data.

**Others**
+ [GitHub Actions](https://docs.github.com/en/actions) is a continuous integration and continuous delivery (CI/CD) platform that’s tightly integrated with GitHub repositories. You can use GitHub Actions to automate your build, test, and deployment pipeline.

**Code repository**

The code for this pattern is available in the GitHub [service-catalog-with-github-actions](https://github.com/aws-samples/service-catalog-with-github-actions) repository. The repo contains the following files of interest:
+ `.github/workflows`:
  + `e2e-test.yaml` – This file calls `workflow.yaml`, which is the [reusable workflow](https://docs.github.com/en/actions/sharing-automations/reusing-workflows). This workflow is triggered as soon as there is a commit and push on a branch.
  + `workflow.yaml` – This file contains the reusable workflow for this solution and is configured with `workflow_call` as its trigger. As a reusable workflow, `workflow.yaml` can be called from any other workflow.
+ `templates`:
  + `servicecatalog-portfolio.yaml` – This CloudFormation template includes resources that provision the Service Catalog portfolio and Service Catalog product. The template contains a set of parameters that are used while provisioning the Service Catalog portfolio and products. One parameter accepts an Amazon S3 file URL where the template `vpc.yaml` is uploaded. Although this pattern includes the `vpc.yaml` file to provision AWS resources, you can point the S3 file URL parameter at any CloudFormation template that you want the product to use.
  + `vpc.yaml` – This CloudFormation template contains AWS resources to be added in the Service Catalog product. AWS resources include VPCs, subnets, internet gateways, NAT gateways, and route tables. The `vpc.yaml` template is an example of how you can use any CloudFormation template with a Service Catalog product and portfolio template.

## Best practices
<a name="provision-aws-service-catalog-products-using-github-actions-best-practices"></a>
+ See [Security Best Practices for AWS Service Catalog](https://docs.aws.amazon.com/servicecatalog/latest/adminguide/security-best-practices.html) in the AWS Service Catalog documentation. 
+ See [Security hardening for GitHub Actions](https://docs.github.com/en/actions/security-for-github-actions/security-guides/security-hardening-for-github-actions) in the GitHub documentation.

## Epics
<a name="provision-aws-service-catalog-products-using-github-actions-epics"></a>

### Set up local workstation
<a name="set-up-local-workstation"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Set up Git on your local workstation. | To install and configure Git on your local workstation, use the [Getting Started – Installing Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) instructions in the Git documentation. | App developer | 
| Clone the GitHub project repo. | To clone the GitHub project repo, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/provision-aws-service-catalog-products-using-github-actions.html) | DevOps engineer | 

### Set up the OIDC provider
<a name="set-up-the-oidc-provider"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Configure an OIDC provider. | Create an OpenID Connect (OIDC) provider that allows the GitHub Actions workflows to access resources in AWS, without needing to store the AWS credentials as long-lived GitHub secrets. For instructions, see [Configuring OpenID Connect in Amazon Web Services](https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/configuring-openid-connect-in-amazon-web-services) in the GitHub documentation. After an OIDC provider is configured, the trust policy of the IAM role `github-actions`, mentioned earlier in the [Prerequisites](#provision-aws-service-catalog-products-using-github-actions-prereqs), will be updated. | AWS administrator, AWS DevOps, General AWS | 
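
The GitHub documentation linked in the preceding task is the authoritative procedure. As a sketch only, assuming Boto3 and placeholder organization and repository names, the following shows the equivalent API calls for creating the GitHub OIDC provider and updating the trust policy of the prerequisite `github-actions` role.

```
import json

import boto3

iam = boto3.client("iam")

# Create the GitHub OIDC identity provider. The thumbprint is GitHub's
# published value at the time of writing; verify the current value before use.
provider = iam.create_open_id_connect_provider(
    Url="https://token.actions.githubusercontent.com",
    ClientIDList=["sts.amazonaws.com"],
    ThumbprintList=["6938fd4d98bab03faadb97b34396831e3780aea1"],
)

# Allow workflows in a placeholder repository to assume the `github-actions`
# role through the OIDC provider.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Federated": provider["OpenIDConnectProviderArn"]},
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                "StringEquals": {
                    "token.actions.githubusercontent.com:aud": "sts.amazonaws.com"
                },
                "StringLike": {
                    "token.actions.githubusercontent.com:sub": "repo:my-org/my-repo:*"
                },
            },
        }
    ],
}

iam.update_assume_role_policy(
    RoleName="github-actions", PolicyDocument=json.dumps(trust_policy)
)
```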

### Trigger GitHub Actions pipeline to deploy Service Catalog portfolio and products
<a name="trigger-github-actions-pipeline-to-deploy-sc-portfolio-and-products"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Update `e2e-test.yaml`. | The `e2e-test.yaml` file triggers the reusable workflow at `workflow.yaml`. Update and validate the values for the following input parameters in `e2e-test.yaml`:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/provision-aws-service-catalog-products-using-github-actions.html) | DevOps engineer | 

### Validate deployment
<a name="validate-deployment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Validate the Service Catalog resources. | To validate the Service Catalog resources, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/provision-aws-service-catalog-products-using-github-actions.html) | AWS DevOps | 

### Clean up resources
<a name="clean-up-resources"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Delete the CloudFormation stack. | To delete the CloudFormation stack, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/provision-aws-service-catalog-products-using-github-actions.html)For more information, see [Delete a stack from the CloudFormation console](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-console-delete-stack.html) in the CloudFormation documentation | DevOps engineer, AWS administrator | 

## Troubleshooting
<a name="provision-aws-service-catalog-products-using-github-actions-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| The `e2e-test` workflow fails with the error `Can't find 'action.yml', 'action.yaml' or 'Dockerfile' under '/home/runner/work/service-catalog-with-github-actions/service-catalog-with-github-actions'. Did you forget to run actions/checkout before running your local action?` | To make sure that you have the correct repository settings enabled, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/provision-aws-service-catalog-products-using-github-actions.html) | 

## Related resources
<a name="provision-aws-service-catalog-products-using-github-actions-resources"></a>

**AWS documentation**
+ [Overview of Service Catalog](https://docs.aws.amazon.com/servicecatalog/latest/adminguide/what-is_concepts.html)

**Other resources**
+ [About events that trigger workflows](https://docs.github.com/en/actions/writing-workflows/choosing-when-your-workflow-runs/events-that-trigger-workflows#about-events-that-trigger-workflows) (GitHub documentation)
+ [Reusing workflows](https://docs.github.com/en/actions/sharing-automations/reusing-workflows) (GitHub documentation)

## Additional information
<a name="provision-aws-service-catalog-products-using-github-actions-additional"></a>

To see screenshots related to the [Epics](#provision-aws-service-catalog-products-using-github-actions-epics), go to the **Images** folder in this pattern's GitHub repo. The following screenshots are available:
+ [AWS Service Catalog portfolio, Administration section](https://github.com/aws-samples/service-catalog-with-github-actions/blob/main/images/SC_portfolio.png)
+ [AWS Service Catalog product, Administration section](https://github.com/aws-samples/service-catalog-with-github-actions/blob/main/images/SC_Product.png)
+ [AWS Service Catalog product, User/Provisioning section](https://github.com/aws-samples/service-catalog-with-github-actions/blob/main/images/SC_Product_User.png)

# Provision least-privilege IAM roles by deploying a role vending machine solution
<a name="provision-least-privilege-iam-roles-by-deploying-a-role-vending-machine-solution"></a>

*Benjamin Morris, Nima Fotouhi, Aman Kaur Gandhi, and Chad Moon, Amazon Web Services*

## Summary
<a name="provision-least-privilege-iam-roles-by-deploying-a-role-vending-machine-solution-summary"></a>

Over-scoped AWS Identity and Access Management (IAM) role permissions for pipelines can introduce unnecessary risk to an organization. Developers sometimes grant broad permissions during development but neglect to scope down permissions after troubleshooting their code. The result is that powerful roles exist without a business need and might never have been reviewed by a security engineer.

This pattern offers a solution to this problem: the role vending machine (RVM). Using a secure and centralized deployment model, the RVM demonstrates how to provision least-privilege IAM roles for individual GitHub repositories’ pipelines with minimal effort from developers. Because the RVM is a central solution, you can configure your security teams as required reviewers to approve changes. This approach allows security to reject over-permissioned pipeline role requests. 

The RVM takes Terraform code as input and generates pipeline-ready IAM roles as output. The required inputs are the AWS account ID, GitHub repository name, and permissions policy. The RVM uses these inputs to create the role’s trust policy and permissions policy. The resulting trust policy allows the specified GitHub repository to assume the role and use it for pipeline operations.

The RVM uses an IAM role (configured during bootstrap) that has permissions to assume a role-provisioning-role in each account in the organization. The role-provisioning-roles are deployed through either AWS Control Tower Account Factory for Terraform (AFT) or AWS CloudFormation StackSets, and they are the roles that actually create the pipeline roles for developers.
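
The RVM implements this chain in Terraform. Purely to illustrate the sequence of calls, the following Python sketch uses the default role names described later in this pattern; the account ID, organization, repository, and application role name are placeholders.

```
import json

import boto3

sts = boto3.client("sts")
target_account = "111122223333"  # placeholder workload account ID

# 1. From the central RVM role, assume the role-provisioning-role
#    (default name github-workflow-rvm) in the target account.
creds = sts.assume_role(
    RoleArn=f"arn:aws:iam::{target_account}:role/github-workflow-rvm",
    RoleSessionName="rvm-provisioning",
)["Credentials"]

iam = boto3.client(
    "iam",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)

# 2. Create the developer's pipeline role with a trust policy that only the
#    main branch of the requesting repository can satisfy.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Federated": f"arn:aws:iam::{target_account}:oidc-provider/token.actions.githubusercontent.com"
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                "StringLike": {
                    "token.actions.githubusercontent.com:sub": "repo:my-org/example-app:ref:refs/heads/main"
                }
            },
        }
    ],
}

iam.create_role(
    RoleName="github-workflow-role-example-app",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)
```

In the actual solution, developers express the same inputs as Terraform code in the RVM repository, and the RVM pipeline applies them.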

## Prerequisites and limitations
<a name="provision-least-privilege-iam-roles-by-deploying-a-role-vending-machine-solution-prereqs"></a>

**Prerequisites**
+ An active AWS account.
+ A GitHub organization that is used to deploy infrastructure as code (IaC) through GitHub Actions. (GitHub Enterprise/Premium/Ultimate are *not* required.)
+ A multi-account AWS environment. This environment does not need to be part of AWS Organizations.
+ A mechanism for deploying an IAM role in all AWS accounts (for example, AFT or CloudFormation StackSets).
+ Terraform version 1.3 or later [installed and configured](https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli).
+ Terraform AWS Provider version 4 or later [installed](https://github.com/hashicorp/terraform-provider-aws/releases) and [configured](https://developer.hashicorp.com/terraform/language/providers/configuration).

**Limitations**
+ This pattern’s code is specific to GitHub Actions and Terraform. However, the pattern’s general concepts can be reused in other continuous integration and delivery (CI/CD) frameworks.
+ Some AWS services aren’t available in all AWS Regions. For Region availability, see [AWS Services by Region](https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/). For specific endpoints, see [Service endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/aws-service-information.html), and choose the link for the service.

## Architecture
<a name="provision-least-privilege-iam-roles-by-deploying-a-role-vending-machine-solution-architecture"></a>

The following diagram illustrates the workflow for this pattern.

![\[Workflow to automate IAM role creation and deployment by using GitHub Actions.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/215c590e-0c84-411d-be6e-b1739f1e19d2/images/82fcdc9f-9576-4e7c-b7fe-b45046ba79d2.png)


The workflow for the typical usage of the role vending machine consists of the following steps:

1. A developer pushes code that contains Terraform code for a newly requested IAM role to the RVM GitHub repository. This action triggers the RVM GitHub Actions pipeline.

1. The pipeline uses an OpenID Connect (OIDC) trust policy to assume the RVM role-assumption role.

1. As the RVM pipeline runs, it assumes the RVM workflow role in the account in which it’s provisioning the developer’s new IAM role. (The RVM workflow role was provisioned by using AFT or CloudFormation StackSets.)

1. The RVM creates the developer’s IAM role with appropriate permissions and trust, so that the role can be assumed by other application pipelines.

1. App developers can configure their app pipelines to assume this RVM-provisioned role.

The created role includes the permissions requested by the developer and a `ReadOnlyAccess` policy. The role is assumable only by pipelines that run against the `main` branch of the developer’s specified repository. This approach helps to ensure that branch protection and reviews can be required to use the role.

**Automation and scale**

Least-privilege permissions require attention to detail for each role being provisioned. This model reduces the complexity required to create these roles, allowing developers to create the roles they need without much additional learning or effort.

## Tools
<a name="provision-least-privilege-iam-roles-by-deploying-a-role-vending-machine-solution-tools"></a>

**AWS services**
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.
+ [AWS Organizations](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_introduction.html) is an account management service that helps you consolidate multiple AWS accounts into an organization that you create and centrally manage.

**Other tools**
+ [Git](https://git-scm.com/docs) is an open source, distributed version control system. It includes the ability to create an [organization account](https://docs.github.com/en/get-started/learning-about-github/types-of-github-accounts#organization-accounts).
+ [GitHub Actions](https://docs.github.com/en/actions/writing-workflows/quickstart) is a continuous integration and continuous delivery (CI/CD) platform that’s tightly integrated with GitHub repositories. You can use GitHub Actions to automate your build, test, and deployment pipeline.
+ [Terraform](https://www.terraform.io/) is an infrastructure as code (IaC) tool from HashiCorp that helps you create and manage cloud and on-premises resources.

**Code repository**

The code for this pattern is available in the GitHub [role-vending-machine](https://github.com/aws-samples/role-vending-machine) repository.

## Best practices
<a name="provision-least-privilege-iam-roles-by-deploying-a-role-vending-machine-solution-best-practices"></a>
+ **Make the right way easy and the wrong way hard** – Make it easy to do the right thing. If developers are struggling with the RVM provisioning process, they might attempt to create roles through other means, which undermines the central nature of RVM. Make sure that your security team provides clear guidance about how to use the RVM securely and effectively.

  You should also make it hard for developers to do the wrong thing. Use service control policies (SCPs) or permission boundaries to restrict what roles can create other roles. This approach can help limit role creation to just RVM and other trusted sources.
+ **Provide good examples** – Inevitably, some developers will adapt existing roles in the RVM repository as informal templates for granting permissions to their new roles. If you provide least-privilege examples that they can copy, you reduce the risk of developers requesting broad, wildcard-heavy permissions. If you start with highly permissioned, wildcard-heavy roles instead, that problem can multiply over time.
+ **Use naming conventions and conditions** – Even if a developer doesn’t know all the resource names that their application will create, they should still limit role permissions by using a naming convention. For example, if they’re creating Amazon S3 buckets, the `Resource` element of their policy might look like `arn:aws:s3:::myorg-myapp-dev-*` so that their role doesn’t have permissions beyond buckets matching that name. Enforcing the naming convention through an IAM policy has the additional benefit of improving compliance with the convention, because resources that don’t match it can’t be created. (A policy sketch follows this list.)
+ **Require pull request (PR) reviews** – The value of the RVM solution is that it creates a central location where new pipeline roles can be reviewed. However, this design is only useful if there are guardrails that help ensure secure, high-quality code is committed to the RVM. Protect the branches that are used to deploy code (for example, `main`) from direct pushes and require approvals for any merge requests that target them.
+ **Configure read-only roles** – By default, the RVM provisions a `readonly` version of each requested role. This role can be used in CI/CD pipelines that don’t write data, such as a `terraform plan` pipeline workflow. This approach helps prevent unwanted changes if a read-only workflow misbehaves.

  By default, the AWS managed `ReadOnlyAccess` policy is attached to both the read-only roles and read-write roles. This policy reduces the need for iteration when determining required permissions, but it might be overly permissive for some organizations. If you want, you can remove the policy from the Terraform code.
+ **Grant minimum permissions** – Follow the principle of least privilege and grant the minimum permissions required to perform a task. For more information, see [Grant least privilege](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html#grant-least-priv) and [Security best practices](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html) in the IAM documentation.
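
As a sketch of the naming-convention guidance in this list, the following builds an IAM policy statement that limits Amazon S3 permissions to buckets that match a hypothetical `myorg-myapp-dev-*` convention; the actions and prefix are illustrative only.

```
import json

# Illustrative only: scope S3 permissions to buckets that follow the
# hypothetical myorg-myapp-dev-* naming convention.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowScopedBucketAccess",
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::myorg-myapp-dev-*",
                "arn:aws:s3:::myorg-myapp-dev-*/*",
            ],
        }
    ],
}
print(json.dumps(policy, indent=2))
```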

## Epics
<a name="provision-least-privilege-iam-roles-by-deploying-a-role-vending-machine-solution-epics"></a>

### Prepare environment
<a name="prepare-environment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Copy the sample repository to your GitHub organization. | [Clone](https://docs.github.com/en/repositories/creating-and-managing-repositories/cloning-a-repository) this pattern’s repository or [fork](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/fork-a-repo) this repository to your GitHub organization so that you can adapt it for your needs.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/provision-least-privilege-iam-roles-by-deploying-a-role-vending-machine-solution.html) | DevOps engineer | 
| Determine the AWS account for the RVM. | Determine which infrastructure deployment AWS account to use for the RVM. Don’t use the management or root account. | Cloud architect | 
| (Optional) Allow the organization's pipelines to create PRs. | This step is only necessary if you want to allow the `generate_providers_and_account_vars` workflow to create PRs. To allow your organization’s pipelines to create PRs, use the following steps:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/provision-least-privilege-iam-roles-by-deploying-a-role-vending-machine-solution.html)For more information, see [Managing GitHub Actions settings for a repository](https://docs.github.com/en/enterprise-server@3.10/repositories/managing-your-repositorys-settings-and-features/enabling-features-for-your-repository/managing-github-actions-settings-for-a-repository#preventing-github-actions-from-creating-or-approving-pull-requests) in the GitHub documentation. | DevOps engineer | 
| Grant read-only permissions to the RVM account. | Create a delegation policy in your management account that grants your RVM account read-only permissions. This allows your RVM GitHub workflows to dynamically pull a list of your AWS organization's accounts when the `generate_providers_and_account_vars.py` script runs. Use the following code and replace `<YOUR RVM Account ID>` with the AWS account ID that you selected in Step 2:<pre>{<br />  "Version": "2012-10-17",<br />  "Statement": [<br />    {<br />      "Sid": "Statement",<br />      "Effect": "Allow",<br />      "Principal": {<br />        "AWS": "arn:aws:iam::<YOUR RVM Account ID>:root"<br />      },<br />      "Action": [<br />        "organizations:ListAccounts",<br />        "organizations:DescribeOrganization",<br />        "organizations:DescribeOrganizationalUnit",<br />        "organizations:ListRoots",<br />        "organizations:ListAWSServiceAccessForOrganization",<br />        "organizations:ListDelegatedAdministrators"<br />      ],<br />      "Resource": "*"<br />    }<br />  ]<br />}</pre> | Cloud administrator | 
| Update default values from the sample repo. | To configure the RVM to operate in your specific environment and AWS Region, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/provision-least-privilege-iam-roles-by-deploying-a-role-vending-machine-solution.html) | DevOps engineer | 

### Initialize infrastructure
<a name="initialize-infrastructure"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Bootstrap the RVM repo. | This step is necessary to create the OIDC trust and IAM roles used by the RVM pipeline itself, so that it can start operating and vending other roles. In the context of your RVM account, manually run a `terraform apply` command from the `scripts/bootstrap` directory. Provide any required values based on variable documentation. | DevOps engineer | 

### Configure operations
<a name="configure-operations"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Deploy the `github-workflow-rvm` and `github-workflow-rvm-readonly` roles to all accounts. | Choose a deployment method that aligns with your organization’s practices, such as AFT or StackSets. Use that method to deploy the two IAM roles in the `scripts/assumed_role/main.tf` file (default names `github-workflow-rvm` and `github-workflow-rvm-readonly`) to each account where you want the RVM to be able to create pipeline roles. These IAM roles have trust policies that allow the RVM account’s role-assumption role (or its `readonly` equivalent) to assume them. The roles also have IAM permission policies that allow them to read and write (unless you’re using the `readonly` role) roles that match `github-workflow-role-*`. | AWS administrator | 
| Run the `generate_providers_and_account_vars` workflow. | To configure your RVM so that it’s ready to create pipeline roles, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/provision-least-privilege-iam-roles-by-deploying-a-role-vending-machine-solution.html)After the workflow completes, the RVM is ready to:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/provision-least-privilege-iam-roles-by-deploying-a-role-vending-machine-solution.html) | DevOps engineer | 

## Troubleshooting
<a name="provision-least-privilege-iam-roles-by-deploying-a-role-vending-machine-solution-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| I created a role by using the RVM, but GitHub isn’t able to assume it. | Verify that the name of the GitHub repository matches the name provided to the `github_workflow_roles` module. Roles are scoped so that only one repository can assume them. Similarly, verify that the branch used in the GitHub pipeline matches the name of the branch provided to the `github_workflow_roles` module. Typically, RVM-created roles with write permissions can only be used by workflows scoped to the `main` branch (that is, deployments sourced from `main`). | 
| My read-only role is failing to run its pipeline because it lacks permissions to read a specific resource. | Although the `ReadOnlyAccess` policy provides broad read-only permissions, the policy doesn't include some read actions (for example, certain AWS Security Hub CSPM actions). You can add specific action permissions by using the `inline_policy_readonly` parameter of the `github-workflow-roles` module. | 

## Related resources
<a name="provision-least-privilege-iam-roles-by-deploying-a-role-vending-machine-solution-resources"></a>
+ [Best practices for using AWS CloudFormation StackSets](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-bestpractices.html)
+ [Organizing Your AWS Environment Using Multiple Accounts](https://docs.aws.amazon.com/whitepapers/latest/organizing-your-aws-environment/organizing-your-aws-environment.html)
+ [Overview of AWS Control Tower Account Factory for Terraform (AFT)](https://docs.aws.amazon.com/controltower/latest/userguide/aft-overview.html)
+ [Policy best practices](https://docs.aws.amazon.com/codepipeline/latest/userguide/security_iam_service-with-iam-policy-best-practices.html) 

## Additional information
<a name="provision-least-privilege-iam-roles-by-deploying-a-role-vending-machine-solution-additional"></a>

**Using GitHub environments**

GitHub environments are an alternative approach to branch-based restrictions for role access. If you prefer to use a GitHub environment, following is an example of the syntax for an additional condition in the IAM trust policy. This syntax specifies that the role can be used only when the GitHub action is running in the `Production` environment.

```
"StringLike": {
    "token.actions.githubusercontent.com:sub": "repo:octo-org/octo-repo:environment:Production"
}
```

The example syntax uses the following placeholder values:
+ `octo-org` is the GitHub organization name.
+ `octo-repo` is the repository name.
+ `Production` is the specific GitHub environment name.

# Publish Amazon CloudWatch metrics to a CSV file
<a name="publish-amazon-cloudwatch-metrics-to-a-csv-file"></a>

*Abdullahi Olaoye, Amazon Web Services*

## Summary
<a name="publish-amazon-cloudwatch-metrics-to-a-csv-file-summary"></a>

This pattern uses a Python script to retrieve Amazon CloudWatch metrics and to convert the metrics information into a comma-separated values (CSV) file for improved readability. The script takes the AWS service whose metrics should be retrieved as a required argument. You can specify the AWS Region and AWS credential profile as optional arguments. If you don’t specify those arguments, the script uses the default Region and profile that are configured for the workstation where the script is run. After the script runs, it generates and stores a CSV file in the same directory.

See the *Attachments* section for the script and associated files provided with this pattern.

## Prerequisites and limitations
<a name="publish-amazon-cloudwatch-metrics-to-a-csv-file-prereqs"></a>

**Prerequisites**
+ Python 3.x
+ AWS Command Line Interface (AWS CLI)

**Limitations**

The script currently supports the following AWS services:
+ AWS Lambda
+ Amazon Elastic Compute Cloud (Amazon EC2)
  + By default, the script doesn’t collect Amazon Elastic Block Store (Amazon EBS) volume metrics. To collect Amazon EBS metrics, you must modify the attached `metrics.yaml` file.
+ Amazon Relational Database Service (Amazon RDS)
  + However, the script doesn't support Amazon Aurora.
+ Application Load Balancer
+ Network Load Balancer
+ Amazon API Gateway

## Tools
<a name="publish-amazon-cloudwatch-metrics-to-a-csv-file-tools"></a>
+ [Amazon CloudWatch](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/WhatIsCloudWatch.html) is a monitoring service built for DevOps engineers, developers, site reliability engineers (SREs), and IT managers. CloudWatch provides data and actionable insights to help you monitor your applications, respond to systemwide performance changes, optimize resource utilization, and get a unified view of operational health. CloudWatch collects monitoring and operational data in the form of logs, metrics, and events, and provides a unified view of AWS resources, applications, and services that run on AWS and on-premises servers.

## Epics
<a name="publish-amazon-cloudwatch-metrics-to-a-csv-file-epics"></a>

### Install and configure the prerequisites
<a name="install-and-configure-the-prerequisites"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Install the prerequisites. | Run the following command:<pre>$ pip3 install -r requirements.txt</pre> | Developer | 
| Configure the AWS CLI. | Run the following command: <pre>$ aws configure</pre> | Developer | 

### Configure the Python script
<a name="configure-the-python-script"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Open the script. | To change the default configuration of the script, open `metrics.yaml`. | Developer | 
| Set the period for the script. | The period is the granularity, in seconds, of each metric data point that the script fetches. The default period is 5 minutes (300 seconds). You can change the period, but note the following limitations: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/publish-amazon-cloudwatch-metrics-to-a-csv-file.html)Otherwise, the API operation won't return any data points. | Developer | 
| Set the hours for the script. | This value specifies how many hours of metrics you want to fetch. The default is 1 hour. To retrieve multiple days of metrics, provide the value in hours. For example, for 2 days, specify 48. | Developer | 
| Change statistics values for the script.  | (Optional) The global statistics value is `Average`, which is used when fetching metrics that do not have a specific statistics value assigned. The script supports the statistics values `Maximum`, `SampleCount`, and `Sum`. | Developer | 
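
The following minimal sketch, assuming Boto3 and a placeholder EC2 instance ID, shows how the period, hours, and statistics settings described above map to a CloudWatch `GetMetricStatistics` call and CSV output; the attached `cwreport.py` script is the authoritative implementation and may differ in its details.

```
import csv
import datetime

import boto3

# Sketch only: fetch one metric with the configured period, hours, and
# statistic, then write the data points to a CSV file.
period = 300        # seconds per data point
hours = 1           # how far back to fetch
statistic = "Average"

cloudwatch = boto3.client("cloudwatch")
end = datetime.datetime.utcnow()
start = end - datetime.timedelta(hours=hours)

response = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
    StartTime=start,
    EndTime=end,
    Period=period,
    Statistics=[statistic],
)

with open("cwreport.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Timestamp", statistic, "Unit"])
    for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
        writer.writerow([point["Timestamp"].isoformat(), point[statistic], point["Unit"]])
```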

### Run the Python script
<a name="run-the-python-script"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Run the script. | Use the following command: <pre>$ python3 cwreport.py <service> </pre>To see a list of service values and the optional `region` and `profile` parameters, run the following command:<pre> $ python3 cwreport.py -h</pre>For more information about the optional parameters, see the *Additional information* section. | Developer | 

## Related resources
<a name="publish-amazon-cloudwatch-metrics-to-a-csv-file-resources"></a>
+ [Configuring the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html)
+ [Using Amazon CloudWatch metrics](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/working_with_metrics.html)
+ [Amazon CloudWatch documentation](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/WhatIsCloudWatch.html)
+ [EC2 CloudWatch Metrics](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/viewing_metrics_with_cloudwatch.html#ec2-cloudwatch-metrics)
+ [AWS Lambda Metrics](https://docs.aws.amazon.com/lambda/latest/operatorguide/logging-metrics.html)
+ [Amazon RDS Metrics](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/rds-metrics.html#rds-cw-metrics-instance)
+ [Application Load Balancer Metrics](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-cloudwatch-metrics.html)
+ [Network Load Balancer Metrics](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-cloudwatch-metrics.html)
+ [Amazon API Gateway Metrics](https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-metrics-and-dimensions.html)

## Additional information
<a name="publish-amazon-cloudwatch-metrics-to-a-csv-file-additional"></a>

**Script usage**

```
$ python3 cwreport.py -h
```

**Example syntax**

```
python3 cwreport.py <service> <--region=Optional Region> <--profile=Optional credential profile>
```

**Parameters**
+ **service (required)** ‒ The service you want to run the script against. The script currently supports these services: AWS Lambda, Amazon EC2, Amazon RDS, Application Load Balancer, Network Load Balancer, and API Gateway.
+ **region (optional)** ‒ The AWS Region to fetch metrics from. The default Region is `ap-southeast-1`.
+ **profile (optional)** ‒ The AWS CLI named profile to use. If this parameter isn’t specified, the default configured credential profile is used.

**Examples**
+ To use the default Region `ap-southeast-1` and default configured credentials to fetch Amazon EC2 metrics: `$ python3 cwreport.py ec2`
+ To specify a Region and fetch API Gateway metrics: `$ python3 cwreport.py apigateway --region us-east-1`
+ To specify an AWS profile and fetch Amazon EC2 metrics: `$ python3 cwreport.py ec2 --profile testprofile`
+ To specify both Region and profile to fetch Amazon EC2 metrics: `$ python3 cwreport.py ec2 --region us-east-1 --profile testprofile`

## Attachments
<a name="attachments-0a915a9d-2eef-4da1-8283-3cf4a115b3b2"></a>

To access additional content that is associated with this document, unzip the following file: [attachment.zip](samples/p-attach/0a915a9d-2eef-4da1-8283-3cf4a115b3b2/attachments/attachment.zip)

# Remove Amazon EC2 entries across AWS accounts from AWS Managed Microsoft AD by using AWS Lambda automation
<a name="remove-amazon-ec2-entries-across-aws-accounts-from-aws-managed-microsoft-ad"></a>

*Dr. Rahul Sharad Gaikwad and Tamilselvan P, Amazon Web Services*

## Summary
<a name="remove-amazon-ec2-entries-across-aws-accounts-from-aws-managed-microsoft-ad-summary"></a>

Active Directory (AD) is a Microsoft directory service that manages domain information and user interactions with network services. It’s widely used among managed services providers (MSPs) to manage employee credentials and access permissions. Because attackers can use inactive AD accounts to try to break into an organization, it’s important to find inactive accounts and disable them on a routine maintenance schedule. With AWS Directory Service for Microsoft Active Directory, you can run Microsoft Active Directory as a managed service. This pattern can help you configure AWS Lambda automation to quickly find and remove inactive accounts.

If the following scenarios apply to your organization, this pattern can assist you:
+ **Centralized AD management** – If your organization has multiple AWS accounts, each with its own AD deployment, it can be challenging to manage user accounts and access permissions consistently across all accounts. With an across-accounts AD cleanup solution, you can disable or remove inactive accounts from all AD instances in a centralized manner.
+ **AD restructuring or migration** – If your organization plans to restructure or migrate its AD deployment, an across-accounts AD cleanup solution can help you prepare the environment. The solution can help you remove unnecessary or inactive accounts, simplify the migration process, and reduce potential conflicts or issues.

When you use this pattern, you can get the following benefits:
+ Improve directory database and server performance, and fix security vulnerabilities that stem from inactive accounts.
+ If your AD server is hosted in the cloud, removing inactive accounts can also reduce storage costs while improving performance. Your monthly bills might decrease because bandwidth charges and compute resource usage can both drop.
+ Keep potential attackers at bay with a clean Active Directory.

## Prerequisites and limitations
<a name="remove-amazon-ec2-entries-across-aws-accounts-from-aws-managed-microsoft-ad-prereqs"></a>

**Prerequisites**
+ An active parent AWS account and one or more child accounts. In this pattern, a *parent account* is where Active Directory is created. *Child accounts* host Windows servers and are joined to the parent account’s Active Directory.
+ Git [installed](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) and configured on a local workstation.
+ Terraform [installed](https://learn.hashicorp.com/tutorials/terraform/install-cli) and configured on a local workstation.
+ AWS Managed Microsoft AD directory configured in the parent account and shared to all child accounts. For more details, see [Tutorial: Sharing your AWS Managed Microsoft AD directory for seamless EC2 domain-join](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/ms_ad_tutorial_directory_sharing.html) in the *AWS Directory Service Administration Guide*.
+ A virtual private cloud (VPC) peering connection or AWS Transit Gateway connection available between the VPC of AWS Directory Service (parent account) and the VPC of the Amazon Elastic Compute Cloud (Amazon EC2) instances (child accounts). For more details, see [Configure a VPC peering connection between the directory owner and the directory consumer account](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/step1_setup_networking.html#step1_configure_owner_account_vpc) in the *AWS Directory Service Administration Guide*.
+ A Windows machine configured with the `EC2WindowsUserdata` script in each of the parent and child accounts. The script file is available in the root of this pattern’s [code repository](https://github.com/aws-samples/aws-lambda-ad-cleanup-terraform-samples/tree/main/multiple-account-cleanup).
+ A cross-account AWS Identity and Access Management (IAM) role available on each child account that’s configured with a trust policy to allow the use of an AWS Lambda function from the parent account. For more information, see [Sending and receiving events between AWS accounts in Amazon EventBridge](https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/CloudWatchEvents-CrossAccountEventDelivery.html) in the *Amazon EventBridge User Guide*.
+ The following secrets values available in AWS Systems Manager Parameter Store of the parent account:
  + `domainJoinUser` – Username of the directory service
  + `domainJoinPassword` – Password of the directory service

  For more information about secrets, see [Create an AWS Secrets Manager secret](https://docs.aws.amazon.com/secretsmanager/latest/userguide/create_secret.html) in the *AWS Secrets Manager User Guide*.

**Limitations**
+ Creating a resource in a child account isn’t automated with Terraform. You must create the following resources manually by using the AWS Management Console:
  + Amazon EventBridge rule to send the Amazon EC2 termination events to the parent account
  + Amazon EC2 cross-account role creation in the child account with trust policy
  + VPC peering or Transit Gateway connection
+ Some AWS services aren’t available in all AWS Regions. For Region availability, see [AWS services by Region](https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/). For specific endpoints, see [Service endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/aws-service-information.html), and choose the link for the service.

**Product versions**
+ [Terraform version 1.1.9 or later](https://developer.hashicorp.com/terraform/install)
+ [Terraform AWS Provider version 3.0 or higher](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/guides/version-3-upgrade)

## Architecture
<a name="remove-amazon-ec2-entries-across-aws-accounts-from-aws-managed-microsoft-ad-architecture"></a>

The following diagram displays the high-level architecture of the solution.

![\[Process to use Lambda automation to remove EC2 entries from across AWS accounts.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/c397d873-e10d-44b6-8352-5f1380ab94ca/images/bd6c80a7-e490-47db-bd47-165314e1ea8a.png)


The architecture diagram illustrates the following process:

1. In child accounts, the EventBridge rule collects all the Amazon EC2 termination events. The rule sends those events to the EventBridge event bus in the parent account.

1. In the parent account, EventBridge collects all the events and contains the rule that triggers the Lambda function `ADcleanup-Lambda`.

1. The parent account receives any termination events from the parent or child account and triggers the Lambda function.

1. The Lambda function calls Amazon EC2 Auto Scaling by using the Boto3 library (the AWS SDK for Python) and gets a random instance ID from the Auto Scaling group. That instance ID is used to run Systems Manager commands.

1. The Lambda function makes another call to Amazon EC2 by using Boto3. The Lambda function gets the private IP addresses of the running Windows servers and stores the addresses in a temporary variable. In steps 5.1 and 5.2, the running Windows EC2 instances are collected from the child accounts.

1. The Lambda function makes another call to Systems Manager to get the computer information that is connected to AWS Directory Service.

1. An AWS Systems Manager document helps to execute the PowerShell command on Amazon EC2 Windows servers to get the private IP addresses of the computers which are connected to AD. (The Systems Manager document uses the instance ID that was obtained in step 4.)

1. The AD domain username and password are stored in AWS Systems Manager Parameter Store. AWS Lambda and Systems Manager call Parameter Store to get the username and password values that are used to connect to AD.

1. Through the Systems Manager document, the PowerShell script is run on the Amazon EC2 Windows server by using the instance ID that was obtained in step 4.

1. Amazon EC2 connects to AWS Directory Service by using PowerShell commands and removes the computers that are inactive or no longer in use.
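
The following condensed Python sketch illustrates the kinds of Boto3 calls made in steps 4 through 9. The `lambda_function.py` file in the pattern’s repository is the authoritative implementation; the filters, parameter names, and PowerShell command shown here are simplified illustrations.

```
import boto3

ec2 = boto3.client("ec2")
ssm = boto3.client("ssm")

# Steps 4-5: pick a running Windows instance to run the cleanup from, and
# collect the private IP addresses of the running Windows servers.
reservations = ec2.describe_instances(
    Filters=[
        {"Name": "platform", "Values": ["windows"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)["Reservations"]
instances = [i for r in reservations for i in r["Instances"]]
worker_id = instances[0]["InstanceId"]
active_ips = {i["PrivateIpAddress"] for i in instances}

# Step 8: read the AD credentials from Systems Manager Parameter Store.
# (The actual script passes these credentials to the AD cmdlets; they are
# omitted from the command below for brevity.)
user = ssm.get_parameter(Name="domainJoinUser", WithDecryption=True)["Parameter"]["Value"]
password = ssm.get_parameter(Name="domainJoinPassword", WithDecryption=True)["Parameter"]["Value"]

# Steps 7 and 9: run a PowerShell command on the worker instance through a
# Systems Manager document to remove AD computer objects whose IP addresses
# no longer belong to running instances.
ssm.send_command(
    InstanceIds=[worker_id],
    DocumentName="AWS-RunPowerShellScript",
    Parameters={
        "commands": [
            "Get-ADComputer -Filter * -Properties IPv4Address | "
            "Where-Object { $_.IPv4Address -notin @('"
            + "','".join(sorted(active_ips))
            + "') } | Remove-ADComputer -Confirm:$false"
        ]
    },
)
```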

## Tools
<a name="remove-amazon-ec2-entries-across-aws-accounts-from-aws-managed-microsoft-ad-tools"></a>

**AWS services**
+ [AWS Directory Service](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/what_is.html) provides multiple ways to use Microsoft Active Directory (AD) with other AWS services such as Amazon Elastic Compute Cloud (Amazon EC2), Amazon Relational Database Service (Amazon RDS) for SQL Server, and Amazon FSx for Windows File Server.
+ [AWS Directory Service for Microsoft Active Directory](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/directory_microsoft_ad.html) enables your directory-aware workloads and AWS resources to use Microsoft Active Directory in the AWS Cloud.
+ [Amazon Elastic Compute Cloud (Amazon EC2)](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html) provides scalable computing capacity in the AWS Cloud. You can launch as many virtual servers as you need and quickly scale them up or down.
+ [Amazon EventBridge](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-what-is.html) is a serverless event bus service that helps you connect your applications with real-time data from a variety of sources, such as AWS Lambda functions, HTTP invocation endpoints using API destinations, or event buses in other AWS accounts.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them. With IAM, you can specify who or what can access services and resources in AWS, centrally manage fine-grained permissions, and analyze access to refine permissions across AWS.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a compute service that helps you run code without needing to provision or manage servers. It runs your code only when needed and scales automatically, so you pay only for the compute time that you use.
+ [AWS Systems Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/what-is-systems-manager.html) helps you manage your applications and infrastructure running in the AWS Cloud. It simplifies application and resource management, shortens the time to detect and resolve operational problems, and helps you manage your AWS resources securely at scale.
+ [AWS Systems Manager documents](https://docs.aws.amazon.com/systems-manager/latest/userguide/documents.html) define the actions that Systems Manager performs on your managed instances. Systems Manager includes more than 100 pre-configured documents that you can use by specifying parameters at runtime.
+ [AWS Systems Manager Parameter Store](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-parameter-store.html) is a capability of AWS Systems Manager and provides secure, hierarchical storage for configuration data management and secrets management.

**Other tools**
+ [HashiCorp Terraform](https://www.terraform.io/docs) is an infrastructure as code (IaC) tool that helps you use code to provision and manage cloud infrastructure and resources.
+ [PowerShell](https://learn.microsoft.com/en-us/powershell/) is a Microsoft automation and configuration management program that runs on Windows, Linux, and macOS.
+ [Python](https://www.python.org/) is a general-purpose computer programming language.

**Code repository**

The code for this pattern is available in the GitHub [aws-lambda-ad-cleanup-terraform-samples](https://github.com/aws-samples/aws-lambda-ad-cleanup-terraform-samples/tree/main/multiple-account-cleanup) repository.

## Best practices
<a name="remove-amazon-ec2-entries-across-aws-accounts-from-aws-managed-microsoft-ad-best-practices"></a>
+ **Automatically join domains.** When you launch a Windows instance that’s to be part of a Directory Service domain, join the domain during the instance creation process instead of manually adding the instance later. To automatically join a domain, select the correct directory from the **Domain join directory** dropdown list when launching a new instance. For more details, see [Seamlessly join an Amazon EC2 Windows instance to your AWS Managed Microsoft AD Active Directory](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/launching_instance.html) in the *Directory Service Administration Guide*.
+ **Delete unused accounts.** It’s common to find accounts in AD that have never been used. Like disabled or inactive accounts that remain in the system, neglected unused accounts can slow down your AD system or make your organization vulnerable to data breaches.
+ **Automate Active Directory cleanups.** To help mitigate security risks and prevent obsolete accounts from impacting AD performance, conduct AD cleanups at regular intervals. You can accomplish most AD management and cleanup tasks by writing scripts. Example tasks include removing disabled and inactive accounts, deleting empty and inactive groups, and locating expired user accounts and passwords.

## Epics
<a name="remove-amazon-ec2-entries-across-aws-accounts-from-aws-managed-microsoft-ad-epics"></a>

### Set up child accounts
<a name="set-up-child-accounts"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a cross-account role in the child account. | To create a cross-account role in a child account, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/remove-amazon-ec2-entries-across-aws-accounts-from-aws-managed-microsoft-ad.html) | DevOps engineer | 
| Create an event rule in the child account. | To create an EventBridge rule for each child account, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/remove-amazon-ec2-entries-across-aws-accounts-from-aws-managed-microsoft-ad.html)For more details, see [Creating rules that react to events in Amazon EventBridge](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-create-rule.html) in the *Amazon EventBridge User Guide*. | DevOps engineer | 
| Create an EC2 instance and join to AD. | To create an EC2 instance for Windows, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/remove-amazon-ec2-entries-across-aws-accounts-from-aws-managed-microsoft-ad.html) | DevOps engineer | 

### Set up the local workstation
<a name="set-up-the-local-workstation"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a project folder and add the files. | To clone the repository and create a project folder, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/remove-amazon-ec2-entries-across-aws-accounts-from-aws-managed-microsoft-ad.html) | DevOps engineer | 
| Build the `adcleanup.zip` file. | To compress the `lambda_function.py` file, run the following command:`zip -r adcleanup.zip lambda_function.py` | DevOps engineer | 

### Provision the target architecture using the Terraform configuration
<a name="provision-the-target-architecture-using-the-terraform-configuration"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Provide values for the Terraform variables. | For the child account, provide values for the following `arn` variables as string types in the `terraform.tfvars` file:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/remove-amazon-ec2-entries-across-aws-accounts-from-aws-managed-microsoft-ad.html) | DevOps engineer | 
| Initialize the Terraform configuration. | To initialize your working directory that contains the Terraform files, run the following command:`terraform init` | DevOps engineer | 
| Preview changes. | You can preview the changes that Terraform will make to the infrastructure before your infrastructure is deployed. To validate that Terraform will make the changes as required, run the following command:`terraform plan --var-file=examples/terraform.tfvars` | DevOps engineer | 
| Execute the proposed actions. | To verify that the results from the `terraform plan` command are as expected, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/remove-amazon-ec2-entries-across-aws-accounts-from-aws-managed-microsoft-ad.html) | DevOps engineer | 

### Verify the deployment
<a name="verify-the-deployment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Execute and test the Lambda function. | To verify that the deployment occurred successfully, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/remove-amazon-ec2-entries-across-aws-accounts-from-aws-managed-microsoft-ad.html)The execution results show the output of the function. | DevOps engineer | 
| View results of EventBridge rule execution from parent account. | To view the results of the EventBridge rule that’s based on Amazon EC2 termination events from the parent account, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/remove-amazon-ec2-entries-across-aws-accounts-from-aws-managed-microsoft-ad.html)In the CloudWatch console, the **Log groups** page shows the results of the Lambda function. | DevOps engineer | 
| View results of EventBridge rule execution from the child account. | To view the results of the EventBridge rule that’s based on Amazon EC2 termination events from the child account, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/remove-amazon-ec2-entries-across-aws-accounts-from-aws-managed-microsoft-ad.html)In the CloudWatch console, the **Log groups** page shows the results of the Lambda function. | DevOps engineer | 
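
If you prefer to verify from the command line instead of the console, a small boto3 script can invoke the function and print the most recent entries from its log group. The function name below is a placeholder; use the name of the Lambda function that the Terraform configuration created.

```python
import boto3

FUNCTION_NAME = "ad-cleanup-function"           # placeholder; use the deployed function's name
LOG_GROUP = f"/aws/lambda/{FUNCTION_NAME}"

lambda_client = boto3.client("lambda")
logs_client = boto3.client("logs")

# Invoke the cleanup function synchronously and print its response payload.
response = lambda_client.invoke(FunctionName=FUNCTION_NAME, InvocationType="RequestResponse")
print(response["Payload"].read().decode())

# Print the most recent events from the function's CloudWatch log group.
for event in logs_client.filter_log_events(logGroupName=LOG_GROUP, limit=20)["events"]:
    print(event["message"], end="")
```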

### Clean up infrastructure after use
<a name="clean-up-infrastructure-after-use"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clean up the infrastructure. | To clean up the infrastructure that you created, use the following command:`terraform destroy`To confirm the `destroy` command, type `yes`. | DevOps engineer | 
| Verify after cleanup. | Verify that the resources are successfully removed. | DevOps engineer | 

## Troubleshooting
<a name="remove-amazon-ec2-entries-across-aws-accounts-from-aws-managed-microsoft-ad-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| Connection issue between AWS Directory Service (parent account) and Amazon EC2 instance (child account) – You are unable to join the child account’s computers to AD even though VPC peering is available. | Add routing in the VPCs. For instructions, see [Configure a VPC peering connection between the directory owner and the directory consumer account](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/step1_setup_networking.html#step1_configure_owner_account_vpc) in the AWS Directory Service documentation. | 

## Related resources
<a name="remove-amazon-ec2-entries-across-aws-accounts-from-aws-managed-microsoft-ad-resources"></a>

**AWS documentation**
+ [Amazon EventBridge and AWS Identity and Access Management](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-iam.html)
+ [Configure instance permissions required for Systems Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/setup-instance-profile.html)
+ [Identity and access management for Directory Service](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/iam_auth_access.html)
+ [Identity-based IAM policies for Lambda](https://docs.aws.amazon.com/lambda/latest/dg/access-control-identity-based.html)
+ [Manually join an Amazon EC2 Windows instance to your AWS Managed Microsoft AD Active Directory](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/join_windows_instance.html)
+ [Remove Amazon EC2 entries in the same AWS account from AWS Managed Microsoft AD by using AWS Lambda automation](https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/remove-amazon-ec2-entries-in-the-same-aws-account-from-aws-managed-microsoft-ad.html)

**Other resources**
+ [AWS Provider](https://registry.terraform.io/providers/hashicorp/aws/latest/docs) (Terraform documentation)
+ [Backend Configuration](https://developer.hashicorp.com/terraform/language/backend) (Terraform documentation)
+ [Install Terraform](https://learn.hashicorp.com/tutorials/terraform/install-cli) (Terraform documentation)
+ [Python boto module](https://pypi.org/project/boto/) (Python Package Index repository)
+ [Terraform binary download](https://www.terraform.io/downloads) (Terraform documentation)

# Remove Amazon EC2 entries in the same AWS account from AWS Managed Microsoft AD by using AWS Lambda automation
<a name="remove-amazon-ec2-entries-in-the-same-aws-account-from-aws-managed-microsoft-ad"></a>

*Dr. Rahul Sharad Gaikwad and Tamilselvan P, Amazon Web Services*

## Summary
<a name="remove-amazon-ec2-entries-in-the-same-aws-account-from-aws-managed-microsoft-ad-summary"></a>

Active Directory (AD) is a Microsoft directory service that manages domain information and user interactions with network services. It’s widely used among managed services providers (MSPs) to manage employee credentials and access permissions. Because attackers can use inactive AD accounts to try to break into an organization, it’s important to find inactive accounts and disable them on a routine maintenance schedule. With AWS Directory Service for Microsoft Active Directory, you can run Microsoft Active Directory as a managed service.

This pattern can help you to configure AWS Lambda automation to quickly find and remove inactive accounts. When you use this pattern, you can get the following benefits:
+ Improve database and server performance, and fix security vulnerabilities that inactive accounts introduce.
+ If your AD server is hosted in the cloud, removing inactive accounts can also reduce storage costs while improving performance. Your monthly bills might decrease because bandwidth charges and compute resources can both drop.
+ Keep potential attackers at bay with a clean Active Directory.

## Prerequisites and limitations
<a name="remove-amazon-ec2-entries-in-the-same-aws-account-from-aws-managed-microsoft-ad-prereqs"></a>

**Prerequisites**
+ An active AWS account.
+ Git [installed](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) and configured on a local workstation.
+ Terraform [installed](https://learn.hashicorp.com/tutorials/terraform/install-cli) and configured on a local workstation.
+ A Windows computer with the Active Directory modules (`ActiveDirectory`) installed.
+ A directory in AWS Managed Microsoft AD and credentials stored in a [parameter in AWS Systems Manager Parameter Store](https://docs.aws.amazon.com/systems-manager/latest/userguide/parameter-create-console.html).
+ An AWS Identity and Access Management (IAM) role with permissions for the AWS services listed in [Tools](#remove-amazon-ec2-entries-in-the-same-aws-account-from-aws-managed-microsoft-ad-tools). For more information about IAM, see [Related resources](#remove-amazon-ec2-entries-in-the-same-aws-account-from-aws-managed-microsoft-ad-resources).

**Limitations**
+ This pattern doesn’t support cross-account setup.
+ Some AWS services aren’t available in all AWS Regions. For Region availability, see [AWS services by Region](https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/). For specific endpoints, see [Service endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/aws-service-information.html), and choose the link for the service.

**Product versions**
+ [Terraform version 1.1.9 or later](https://developer.hashicorp.com/terraform/install)
+ [Terraform AWS Provider version 3.0 or higher](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/guides/version-3-upgrade)

## Architecture
<a name="remove-amazon-ec2-entries-in-the-same-aws-account-from-aws-managed-microsoft-ad-architecture"></a>

The following diagram shows the workflow and architecture components for this pattern.

![\[Process to use Lambda automation to remove EC2 entries from Managed Microsoft AD.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/6b50dcc5-4f4b-4eea-85a7-04cebc9f7454/images/b7fc5962-bfb8-4f5a-968e-7487b1d48c4f.png)


The diagram shows the following workflow:

1. Amazon EventBridge triggers the AWS Lambda function based on a cron expression. (For this pattern, the cron expression schedule is once per day.)

1. The required IAM role and policy are created and attached to AWS Lambda through Terraform.

1. The AWS Lambda function runs and calls Amazon Elastic Compute Cloud (Amazon EC2) Auto Scaling by using the Python boto module. The Lambda function gets a random instance ID, which is used to run AWS Systems Manager commands.

1. AWS Lambda makes another call to Amazon EC2 by using the boto module, gets the private IP addresses of the running Windows servers, and stores the addresses in a temporary variable.

1. AWS Lambda makes another call to Systems Manager to get information about the computers that are connected to Directory Service.

1. An AWS Systems Manager document runs the PowerShell script on the Amazon EC2 Windows servers to get the private IP addresses of the computers that are connected to AD.

1. The AD domain username and password are stored in AWS Systems Manager Parameter Store. AWS Lambda and Systems Manager call Parameter Store to get the username and password values that are used to connect to AD.

1. By using the Systems Manager document, the PowerShell script runs on the Amazon EC2 Windows server that has the instance ID obtained in step 3.

1. Amazon EC2 connects to Directory Service by using PowerShell commands and removes the computers that are inactive or no longer in use.
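
The following minimal sketch shows how a Lambda handler might string steps 3 through 8 together with boto3. It is illustrative only and is not the repository's actual Lambda code; in the pattern, the removal of stale computer objects happens inside the PowerShell script that Systems Manager runs on the target instance.

```python
import random
import boto3

autoscaling = boto3.client("autoscaling")
ec2 = boto3.client("ec2")
ssm = boto3.client("ssm")


def lambda_handler(event, context):
    # Step 3: pick a random instance that belongs to an Auto Scaling group.
    # Systems Manager commands are run on this instance.
    asg_instances = autoscaling.describe_auto_scaling_instances()["AutoScalingInstances"]
    target_instance = random.choice(asg_instances)["InstanceId"]

    # Step 4: collect the private IP addresses of the running Windows instances.
    reservations = ec2.describe_instances(
        Filters=[
            {"Name": "instance-state-name", "Values": ["running"]},
            {"Name": "platform", "Values": ["windows"]},
        ]
    )["Reservations"]
    running_ips = [i["PrivateIpAddress"] for r in reservations for i in r["Instances"]]

    # Steps 6-9: run a PowerShell script on the target instance through Systems Manager.
    # In the actual solution, the script reads the AD credentials from Parameter Store,
    # lists the computer objects in the directory, and removes the entries whose
    # IP addresses are not in running_ips.
    ssm.send_command(
        InstanceIds=[target_instance],
        DocumentName="AWS-RunPowerShellScript",
        Parameters={"commands": ["Get-ADComputer -Filter * -Properties IPv4Address"]},
    )
    return {"running_windows_instances": len(running_ips), "ssm_target": target_instance}
```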

## Tools
<a name="remove-amazon-ec2-entries-in-the-same-aws-account-from-aws-managed-microsoft-ad-tools"></a>

**AWS services**
+ [AWS Directory Service](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/what_is.html) provides multiple ways to use Microsoft Active Directory (AD) with other AWS services such as Amazon Elastic Compute Cloud (Amazon EC2), Amazon Relational Database Service (Amazon RDS) for SQL Server, and Amazon FSx for Windows File Server.
+ [AWS Directory Service for Microsoft Active Directory](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/directory_microsoft_ad.html) enables your directory-aware workloads and AWS resources to use Microsoft Active Directory in the AWS Cloud.
+ [Amazon Elastic Compute Cloud (Amazon EC2)](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html) provides scalable computing capacity in the AWS Cloud. You can launch as many virtual servers as you need and quickly scale them up or down.
+ [Amazon EventBridge](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-what-is.html) is a serverless event bus service that helps you connect your applications with real-time data from a variety of sources and route that data to targets such as AWS Lambda functions, HTTP invocation endpoints using API destinations, or event buses in other AWS accounts.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them. With IAM, you can specify who or what can access services and resources in AWS, centrally manage fine-grained permissions, and analyze access to refine permissions across AWS.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a compute service that helps you run code without needing to provision or manage servers. It runs your code only when needed and scales automatically, so you pay only for the compute time that you use.
+ [AWS Systems Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/what-is-systems-manager.html) helps you manage your applications and infrastructure running in the AWS Cloud. It simplifies application and resource management, shortens the time to detect and resolve operational problems, and helps you manage your AWS resources securely at scale.
+ [AWS Systems Manager documents](https://docs.aws.amazon.com/systems-manager/latest/userguide/documents.html) define the actions that Systems Manager performs on your managed instances. Systems Manager includes more than 100 pre-configured documents that you can use by specifying parameters at runtime.
+ [AWS Systems Manager Parameter Store](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-parameter-store.html) is a capability of AWS Systems Manager and provides secure, hierarchical storage for configuration data management and secrets management.

**Other tools**
+ [HashiCorp Terraform](https://www.terraform.io/docs) is an open source infrastructure as code (IaC) tool that helps you use code to provision and manage cloud infrastructure and resources.
+ [PowerShell](https://learn.microsoft.com/en-us/powershell/) is a Microsoft automation and configuration management program that runs on Windows, Linux, and macOS.
+ [Python](https://www.python.org/) is a general-purpose computer programming language.

**Code repository**

The code for this pattern is available in the GitHub [Custom AD Cleanup Automation solution](https://github.com/aws-samples/aws-lambda-ad-cleanup-terraform-samples/) repository. 

## Best practices
<a name="remove-amazon-ec2-entries-in-the-same-aws-account-from-aws-managed-microsoft-ad-best-practices"></a>
+ **Automatically join domains.** When you launch a Windows instance that will be part of a Directory Service domain, join the domain during the instance creation process instead of manually adding the instance later. To automatically join a domain, select the correct directory from the **Domain join directory** dropdown list when launching a new instance. For more details, see [Seamlessly join an Amazon EC2 Windows instance to your AWS Managed Microsoft AD Active Directory](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/launching_instance.html) in the *Directory Service Administration Guide*.
+ **Delete unused accounts.** It’s common to find accounts in AD that have never been used. Like disabled or inactive accounts that remain in the system, neglected unused accounts can slow down your AD system or make your organization vulnerable to data breaches.
+ **Automate Active Directory cleanups.** To help mitigate security risks and prevent obsolete accounts from impacting AD performance, conduct AD cleanups at regular intervals. You can accomplish most AD management and cleanup tasks by writing scripts. Example tasks include removing disabled and inactive accounts, deleting empty and inactive groups, and locating expired user accounts and passwords.

## Epics
<a name="remove-amazon-ec2-entries-in-the-same-aws-account-from-aws-managed-microsoft-ad-epics"></a>

### Set up your environment
<a name="set-up-your-environment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a project folder, and add the files. | To clone the repository and create a project folder, do the following: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/remove-amazon-ec2-entries-in-the-same-aws-account-from-aws-managed-microsoft-ad.html) | DevOps engineer | 

### Provision the target architecture by using the Terraform configuration
<a name="provision-the-target-architecture-by-using-the-terraform-configuration"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Initialize the Terraform configuration. | To initialize your working directory that contains the Terraform files, run the following command.`terraform init` | DevOps engineer | 
| Preview changes. | You can preview the changes that Terraform will make to the infrastructure before your infrastructure is deployed. To validate that Terraform will make the changes as required, run the following command.`terraform plan` | DevOps engineer | 
| Execute the proposed actions. | To verify that the results from the `terraform plan` command are as expected, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/remove-amazon-ec2-entries-in-the-same-aws-account-from-aws-managed-microsoft-ad.html) | DevOps engineer | 
| Clean up the infrastructure. | To clean up the infrastructure that you created, use the following command.`terraform destroy`To confirm the destroy command, type `yes`. | DevOps engineer | 

### Verify the deployment
<a name="verify-the-deployment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Execute and test the Lambda function. | To verify that the deployment occurred successfully, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/remove-amazon-ec2-entries-in-the-same-aws-account-from-aws-managed-microsoft-ad.html)The execution results show the output of the function. | DevOps engineer | 
| View the results of the Lambda function. | In this pattern, an EventBridge rule executes the Lambda function once per day. To view the results of the Lambda function, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/remove-amazon-ec2-entries-in-the-same-aws-account-from-aws-managed-microsoft-ad.html)In the CloudWatch console, the **Log groups** page shows the results of the Lambda function. | DevOps engineer | 

### Clean up infrastructure after use
<a name="clean-up-infrastructure-after-use"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clean up infrastructure. | To clean up the infrastructure that you created, use the following command.`terraform destroy`To confirm the destroy command, type `yes`. | DevOps engineer | 
| Verify after cleanup. | Verify that the resources are successfully removed. | DevOps engineer | 

## Troubleshooting
<a name="remove-amazon-ec2-entries-in-the-same-aws-account-from-aws-managed-microsoft-ad-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| If you try to remove the AD computer, you get an "Access Denied" message. The AD computer can’t be removed because, by default, the action tries to remove two private IP addresses which are connected as a part of the AD services. | To avoid this error, use the following Python operation to ignore the first two computers when you list the differences between an AD computer output and the output of your machine running Windows.<pre>Difference = Difference[2:]</pre> | 
| When Lambda executes a PowerShell script on a Windows server, it expects the Active Directory modules to be available by default. If the modules are not available, the Lambda function returns an error that states "Get-AdComputer is not installed on instance". | To avoid this error, install the required modules by using the user data of the EC2 instances. Use the [EC2WindowsUserdata](https://github.com/aws-samples/aws-lambda-ad-cleanup-terraform-samples/blob/main/EC2WindowsUserdata) script that’s in this pattern’s GitHub repository. | 

## Related resources
<a name="remove-amazon-ec2-entries-in-the-same-aws-account-from-aws-managed-microsoft-ad-resources"></a>

**AWS documentation**
+ [Amazon EventBridge and AWS Identity and Access Management](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-iam.html)
+ [Configure instance permissions required for Systems Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/setup-instance-profile.html)
+ [Identity and access management for Directory Service](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/iam_auth_access.html)
+ [Manually join an Amazon EC2 Windows instance to your AWS Managed Microsoft AD Active Directory](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/join_windows_instance.html)
+ [Working with identity-based IAM policies in AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/access-control-identity-based.html)

**Other resources**
+ [AWS Provider](https://registry.terraform.io/providers/hashicorp/aws/latest/docs) (Terraform documentation)
+ [Backend Configuration](https://developer.hashicorp.com/terraform/language/backend) (Terraform documentation)
+ [Install Terraform](https://learn.hashicorp.com/tutorials/terraform/install-cli) (Terraform documentation)
+ [Python boto module](https://pypi.org/project/boto/) (Python Package Index repository)
+ [Terraform binary download](https://www.terraform.io/downloads) (Terraform documentation)

# Run unit tests for Python ETL jobs in AWS Glue using the pytest framework
<a name="run-unit-tests-for-python-etl-jobs-in-aws-glue-using-the-pytest-framework"></a>

*Praveen Kumar Jeyarajan and Vaidy Sankaran, Amazon Web Services*

## Summary
<a name="run-unit-tests-for-python-etl-jobs-in-aws-glue-using-the-pytest-framework-summary"></a>

You can run unit tests for Python extract, transform, and load (ETL) jobs for AWS Glue in a [local development environment](https://docs.aws.amazon.com/glue/latest/dg/aws-glue-programming-etl-libraries.html), but replicating those tests in a DevOps pipeline can be difficult and time consuming. Unit testing can be especially challenging when you’re modernizing mainframe ETL processes on AWS technology stacks. This pattern shows you how to simplify unit testing while keeping existing functionality intact, avoiding disruptions to key application functionality when you release new features, and maintaining high-quality software. You can use the steps and code samples in this pattern to run unit tests for Python ETL jobs in AWS Glue by using the pytest framework in AWS CodePipeline. You can also use this pattern to test and deploy multiple AWS Glue jobs.

## Prerequisites and limitations
<a name="run-unit-tests-for-python-etl-jobs-in-aws-glue-using-the-pytest-framework-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ An Amazon Elastic Container Registry (Amazon ECR) image URI for your AWS Glue library, downloaded from the [Amazon ECR Public Gallery](https://gallery.ecr.aws/glue/aws-glue-libs)
+ Bash terminal (on any operating system) with a profile for the target AWS account and AWS Region
+ [Python 3.10](https://www.python.org/downloads/) or later
+ [Pytest](https://github.com/pytest-dev/pytest)
+ [Moto](https://github.com/getmoto/moto) Python library for testing AWS services

## Architecture
<a name="run-unit-tests-for-python-etl-jobs-in-aws-glue-using-the-pytest-framework-architecture"></a>

The following diagram describes how to incorporate unit testing for AWS Glue ETL processes that are based on Python into a typical enterprise-scale AWS DevOps pipeline.

![\[Unit testing for AWS Glue ETL processes.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/82781ca8-4da0-4df0-bf23-32992fece231/images/6286dafc-f1e0-4967-beed-4dedc6047c10.png)


The diagram shows the following workflow:

1. In the source stage, AWS CodePipeline uses a versioned Amazon Simple Storage Service (Amazon S3) bucket to store and manage source code assets. These assets include a sample Python ETL job (`sample.py`), a unit test file (`test_sample.py`), and an AWS CloudFormation template. Then, CodePipeline transfers the most recent code from the main branch to the AWS CodeBuild project for further processing.

1. In the build and publish stage, the most recent code from the previous source stage is unit tested with the help of an AWS Glue public Amazon ECR image. Then, the test report is published to CodeBuild report groups. The container image in the public Amazon ECR repository for AWS Glue libraries includes all the binaries required to run and unit test [PySpark-based](https://spark.apache.org/docs/latest/api/python/) ETL tasks in AWS Glue locally. The public container repository has three image tags, one for each version supported by AWS Glue. For demonstration purposes, this pattern uses the `glue_libs_4.0.0_image_01` image tag. To use this container image as a runtime image in CodeBuild, copy the image URI that corresponds to the image tag that you intend to use, and then update the `pipeline.yml` file in the GitHub repository for the `TestBuild` resource.

1. In the deploy stage, the CodeBuild project is launched and it publishes the code to an Amazon S3 bucket if all the tests pass.

1. The user deploys the AWS Glue task by using the CloudFormation template in the `deploy` folder.

## Tools
<a name="run-unit-tests-for-python-etl-jobs-in-aws-glue-using-the-pytest-framework-tools"></a>

**AWS services**
+ [AWS CodeBuild](https://docs.aws.amazon.com/codebuild/latest/userguide/welcome.html) is a fully managed build service that helps you compile source code, run unit tests, and produce artifacts that are ready to deploy.
+ [AWS CodePipeline](https://docs.aws.amazon.com/codepipeline/latest/userguide/welcome.html) helps you quickly model and configure the different stages of a software release and automate the steps required to release software changes continuously.
+ [Amazon Elastic Container Registry (Amazon ECR)](https://docs.aws.amazon.com/AmazonECR/latest/userguide/what-is-ecr.html) is a managed container image registry service that’s secure, scalable, and reliable.
+ [AWS Glue](https://docs.aws.amazon.com/glue/latest/dg/what-is-glue.html) is a fully managed ETL service. It helps you reliably categorize, clean, enrich, and move data between data stores and data streams.
+ [Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) is an object storage service offering industry-leading scalability, data availability, security, and performance.

**Other tools**
+ [Python](https://www.python.org/) is a high-level, interpreted general purpose programming language.
+ [Moto](https://github.com/getmoto/moto) is a Python library for testing AWS services.
+ [Pytest](https://github.com/pytest-dev/pytest) is a framework for writing small unit tests that scale to support complex functional testing for applications and libraries.
+ [Python ETL library](https://github.com/awslabs/aws-glue-libs) for AWS Glue is a repository for Python libraries that are used in the local development of PySpark batch jobs for AWS Glue.

**Code repository**

The code for this pattern is available in the GitHub [aws-glue-jobs-unit-testing](https://github.com/aws-samples/aws-glue-jobs-unit-testing) repository. The repository includes the following resources:
+ A sample Python-based AWS Glue job in the `src` folder
+ Associated unit test cases (built using the pytest framework) in the `tests` folder
+ A CloudFormation template (written in YAML) in the `deploy` folder
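
As a rough illustration of the approach (not the repository's actual code), a pytest test can exercise an S3-backed transform entirely against Moto's in-memory mock, so the same test runs locally and inside the CodeBuild container without touching a real AWS account. The `filter_large_orders` function and file contents here are hypothetical, and the example assumes Moto 5.x, which exposes the `mock_aws` decorator.

```python
import boto3
from moto import mock_aws  # Moto 5.x; earlier releases expose per-service decorators such as mock_s3


def filter_large_orders(bucket: str, key: str) -> list[str]:
    """Toy ETL step: return the IDs of CSV rows whose amount column exceeds 100."""
    s3 = boto3.client("s3", region_name="us-east-1")
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode()
    rows = [line.split(",") for line in body.strip().splitlines()[1:]]  # skip the header row
    return [order_id for order_id, amount in rows if float(amount) > 100]


@mock_aws
def test_filter_large_orders():
    # Arrange: create an in-memory bucket and object; no real AWS account is touched.
    s3 = boto3.client("s3", region_name="us-east-1")
    s3.create_bucket(Bucket="etl-input")
    s3.put_object(Bucket="etl-input", Key="orders.csv", Body="order_id,amount\nA1,50\nA2,150\n")

    # Act and assert: only the row with an amount greater than 100 should be returned.
    assert filter_large_orders("etl-input", "orders.csv") == ["A2"]
```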

## Best practices
<a name="run-unit-tests-for-python-etl-jobs-in-aws-glue-using-the-pytest-framework-best-practices"></a>

**Security for CodePipeline resources**

It’s a best practice to use encryption and authentication for the source repositories that connect to your pipelines in CodePipeline. For more information, see [Security best practices](https://docs.aws.amazon.com/codepipeline/latest/userguide/security-best-practices.html) in the CodePipeline documentation.

**Monitoring and logging for CodePipeline resources**

It’s a best practice to use AWS logging features to determine what actions users take in your account and what resources they use. The log files show the following:
+ Time and date of actions
+ Source IP address of actions
+ Which actions failed due to inadequate permissions

Logging features are available in AWS CloudTrail and Amazon CloudWatch Events. You can use CloudTrail to log AWS API calls and related events made by or on behalf of your AWS account. For more information, see [Logging CodePipeline API calls with AWS CloudTrail](https://docs.aws.amazon.com/codepipeline/latest/userguide/monitoring-cloudtrail-logs.html) in the CodePipeline documentation.

You can use CloudWatch Events to monitor your AWS Cloud resources and applications running on AWS. You can also create alerts in CloudWatch Events. For more information, see [Monitoring CodePipeline events](https://docs.aws.amazon.com/codepipeline/latest/userguide/detect-state-changes-cloudwatch-events.html) in the CodePipeline documentation.
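
As a hedged example of such an alert, the following boto3 snippet creates an EventBridge (CloudWatch Events) rule that publishes failed pipeline executions to an SNS topic. The rule name and topic ARN are placeholders, and the topic's access policy must allow `events.amazonaws.com` to publish to it.

```python
import json
import boto3

TOPIC_ARN = "arn:aws:sns:us-east-1:111111111111:pipeline-alerts"  # placeholder topic ARN

events = boto3.client("events")

# Match failed executions of the pipeline created by this pattern's CloudFormation stack.
events.put_rule(
    Name="aws-glue-unit-test-pipeline-failed",
    EventPattern=json.dumps({
        "source": ["aws.codepipeline"],
        "detail-type": ["CodePipeline Pipeline Execution State Change"],
        "detail": {
            "state": ["FAILED"],
            "pipeline": ["aws-glue-unit-test-pipeline"],
        },
    }),
    State="ENABLED",
)

# Publish matching events to the SNS topic so subscribers are notified.
events.put_targets(
    Rule="aws-glue-unit-test-pipeline-failed",
    Targets=[{"Id": "notify-sns", "Arn": TOPIC_ARN}],
)
```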

## Epics
<a name="run-unit-tests-for-python-etl-jobs-in-aws-glue-using-the-pytest-framework-epics"></a>

### Deploy the source code
<a name="deploy-the-source-code"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Prepare the code archive for deployment. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/run-unit-tests-for-python-etl-jobs-in-aws-glue-using-the-pytest-framework.html) | DevOps engineer | 
| Create the CloudFormation stack. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/run-unit-tests-for-python-etl-jobs-in-aws-glue-using-the-pytest-framework.html)The stack creates a CodePipeline pipeline that uses Amazon S3 as the source. In the steps above, the pipeline is **aws-glue-unit-test-pipeline**. | AWS DevOps, DevOps engineer | 

### Run the unit tests
<a name="run-the-unit-tests"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Run the unit tests in the pipeline. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/run-unit-tests-for-python-etl-jobs-in-aws-glue-using-the-pytest-framework.html) | AWS DevOps, DevOps engineer | 

### Clean up all AWS resources
<a name="clean-up-all-aws-resources"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clean up the resources in your environment. | To avoid additional infrastructure costs, make sure that you delete the stack after experimenting with the examples provided in this pattern.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/run-unit-tests-for-python-etl-jobs-in-aws-glue-using-the-pytest-framework.html) | AWS DevOps, DevOps engineer | 

## Troubleshooting
<a name="run-unit-tests-for-python-etl-jobs-in-aws-glue-using-the-pytest-framework-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| The CodePipeline service role cannot access the Amazon S3 bucket. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/run-unit-tests-for-python-etl-jobs-in-aws-glue-using-the-pytest-framework.html) | 
| CodePipeline returns an error that the Amazon S3 bucket is not versioned. | CodePipeline requires that the source Amazon S3 bucket be versioned. Enable versioning on your Amazon S3 bucket. For instructions, see [Enabling versioning on buckets](https://docs.aws.amazon.com/AmazonS3/latest/userguide/manage-versioning-examples.html). | 
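
For reference, versioning can also be enabled with a single boto3 call; the bucket name below is a placeholder for the source bucket that the pipeline reads from.

```python
import boto3

# Placeholder bucket name; use the source bucket that the pipeline reads from.
boto3.client("s3").put_bucket_versioning(
    Bucket="my-glue-pipeline-source-bucket",
    VersioningConfiguration={"Status": "Enabled"},
)
```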

## Related resources
<a name="run-unit-tests-for-python-etl-jobs-in-aws-glue-using-the-pytest-framework-resources"></a>
+ [AWS Glue](https://aws.amazon.com/glue/)
+ [Developing and testing AWS Glue jobs locally](https://docs.aws.amazon.com/glue/latest/dg/aws-glue-programming-etl-libraries.html)
+ [AWS CloudFormation for AWS Glue](https://docs.aws.amazon.com/glue/latest/dg/populate-with-cloudformation-templates.html)

## Additional information
<a name="run-unit-tests-for-python-etl-jobs-in-aws-glue-using-the-pytest-framework-additional"></a>

Additionally, you can deploy the AWS CloudFormation templates by using the AWS Command Line Interface (AWS CLI). For more information, see [Quickly deploying templates with transforms](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-cli-deploy.html) in the CloudFormation documentation.

# Set up a CI/CD pipeline by using AWS CodePipeline and AWS CDK
<a name="set-up-a-ci-cd-pipeline-by-using-aws-codepipeline-and-aws-cdk"></a>

*Konstantin Zarudaev, Yasha Dabas, Lars Kinder, and Cizer Pereira, Amazon Web Services*

## Summary
<a name="set-up-a-ci-cd-pipeline-by-using-aws-codepipeline-and-aws-cdk-summary"></a>

Automating your software build and release process with continuous integration and continuous delivery (CI/CD) supports repeatable builds and rapid delivery of new features to your users. You can quickly and easily test each code change, and you can catch and fix bugs before releasing your software. By running each change through your staging and release process, you can verify the quality of your application or infrastructure code. CI/CD embodies a culture, a set of operating principles, and a [collection of practices](https://aws.amazon.com/devops/#cicd) that help application development teams to deliver code changes more frequently and reliably. The implementation is also known as the *CI/CD pipeline*.

This pattern defines a reusable continuous integration and continuous delivery (CI/CD) pipeline on Amazon Web Services (AWS) with an AWS CodeCommit repository. The AWS CodePipeline pipeline is written using [AWS Cloud Development Kit (AWS CDK) v2](https://aws.amazon.com/cdk/).

Using CodePipeline, you can model the different stages of your software release process through the AWS Management Console interface, the AWS Command Line Interface (AWS CLI), AWS CloudFormation, or the AWS SDKs. This pattern demonstrates the implementation of CodePipeline and its components using AWS CDK. In addition to construct libraries, AWS CDK includes a toolkit (the CLI command `cdk`), which is the primary tool for interacting with your AWS CDK app. Among other functions, the toolkit provides the ability to convert one or more stacks to CloudFormation templates and deploy them to an AWS account.

The pipeline includes tests to validate the security of your third-party libraries, and it helps ensure expedited, automated release in the specified environments. You can increase the overall security of your applications by putting them through a validation process.

The intent of this pattern is to accelerate your use of CI/CD pipelines to deploy your code while ensuring the resources you deploy adhere to DevOps best practices. After you implement the [example code](https://github.com/aws-samples/aws-codepipeline-cicd), you will have an [AWS CodePipeline](https://aws.amazon.com/codepipeline/) with linting, testing, a security check, deployment, and post-deployment processes. This pattern also includes steps for Makefile. Using a Makefile, developers can reproduce CI/CD steps locally and increase the velocity of the development process.

## Prerequisites and limitations
<a name="set-up-a-ci-cd-pipeline-by-using-aws-codepipeline-and-aws-cdk-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ A basic understanding of the following:
  + AWS CDK
  + AWS CloudFormation
  + AWS CodePipeline
  + TypeScript

**Limitations**

This pattern uses [AWS CDK](https://docs.aws.amazon.com/cdk/api/v2/docs/aws-construct-library.html) for TypeScript only. It doesn’t cover other languages supported by AWS CDK.

**Product versions**

Use the latest versions of the following tools:
+ AWS Command Line Interface (AWS CLI)
+ cfn_nag
+ git-remote-codecommit
+ Node.js

## Architecture
<a name="set-up-a-ci-cd-pipeline-by-using-aws-codepipeline-and-aws-cdk-architecture"></a>

**Target technology stack**
+ AWS CDK
+ AWS CloudFormation
+ AWS CodeCommit
+ AWS CodePipeline

**Target architecture**

The pipeline is triggered by a change in the AWS CodeCommit repository (`SampleRepository`). In the beginning, CodePipeline builds artifacts, updates itself, and starts the deployment process. The resulting pipeline deploys a solution to three independent environments:
+ Dev – Three-step code check in the active development environment
+ Test – Integration and regression test environment
+ Prod – Production environment

The three steps included in the Dev stage are linting, security, and unit tests. These steps run in parallel to speed up the process. To ensure that the pipeline provides only working artifacts, it stops running whenever a step in the process fails. After a Dev stage deployment, the pipeline runs validation tests to verify the results. If the tests succeed, the pipeline then deploys the artifacts to the Test environment, which includes post-deployment validation. The final step is to deploy the artifacts to the Prod environment.

The following diagram shows the workflow from the CodeCommit repository to the build and update processes performed by CodePipeline, the three Dev environment steps, and subsequent deployment and validation in each of the three environments.

![\[Dev environment includes linting, security and unit testing, all include deploy and validate.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/d617e735-8624-4722-8a3d-073bcc356328/images/92504aac-03e3-4c95-b225-74505f8dd136.png)


## Tools
<a name="set-up-a-ci-cd-pipeline-by-using-aws-codepipeline-and-aws-cdk-tools"></a>

**AWS services**
+ [AWS Cloud Development Kit (AWS CDK)](https://docs.aws.amazon.com/cdk/latest/guide/home.html) is a software development framework that helps you define and provision AWS Cloud infrastructure in code.
+ [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) helps you set up AWS resources, provision them quickly and consistently, and manage them throughout their lifecycle across AWS accounts and Regions. In this pattern, CloudFormation templates are used to create a CodeCommit repository and a CodePipeline CI/CD pipeline.
+ [AWS CodeCommit](https://docs.aws.amazon.com/codecommit/latest/userguide/welcome.html) is a version control service that helps you privately store and manage Git repositories, without needing to manage your own source control system.
+ [AWS CodePipeline](https://docs.aws.amazon.com/codepipeline/latest/userguide/welcome.html) is a CI/CD service that helps you quickly model and configure the different stages of a software release and automate the steps required to release software changes continuously.
+ [AWS Command Line Interface (AWS CLI)](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) is an open-source tool that helps you interact with AWS services through commands in your command-line shell.

**Other tools**
+ [cfn_nag](https://github.com/stelligent/cfn_nag) is an open-source tool that looks for patterns in CloudFormation templates to identify potential security issues.
+ [git-remote-codecommit](https://docs.aws.amazon.com/codecommit/latest/userguide/setting-up-git-remote-codecommit.html) is a utility for pushing and pulling code from CodeCommit repositories by extending Git.
+ [Node.js](https://nodejs.org/en/docs/) is an event-driven JavaScript runtime environment designed for building scalable network applications.

**Code**

The code for this pattern is available in the GitHub [AWS CodePipeline with CI/CD practices](https://github.com/aws-samples/aws-codepipeline-cicd) repository.

## Best practices
<a name="set-up-a-ci-cd-pipeline-by-using-aws-codepipeline-and-aws-cdk-best-practices"></a>

Review resources, such as AWS Identity and Access Management (IAM) policies, to confirm that they align with your organizational best practices.

## Epics
<a name="set-up-a-ci-cd-pipeline-by-using-aws-codepipeline-and-aws-cdk-epics"></a>

### Install tools
<a name="install-tools"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Install tools on macOS or Linux. | If you are using macOS or Linux, you can install the tools by running the following commands in your preferred terminal or using [Homebrew for Linux](https://docs.brew.sh/Homebrew-on-Linux).<pre>brew install git-remote-codecommit<br />brew install ruby brew-gem<br />brew-gem install cfn-nag</pre> | DevOps engineer | 
| Set up AWS CLI. | To set up AWS CLI, use the instructions for your operating system:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-a-ci-cd-pipeline-by-using-aws-codepipeline-and-aws-cdk.html) | DevOps engineer | 

### Set up the initial deployment
<a name="set-up-the-initial-deployment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Download or clone the code. | To get the code that is used by this pattern, do one of the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-a-ci-cd-pipeline-by-using-aws-codepipeline-and-aws-cdk.html)<pre>git clone --depth 1 https://github.com/aws-samples/aws-codepipeline-cicd.git</pre>Remove the `.git` directory from the cloned repository.<pre>cd ./aws-codepipeline-cicd<br />rm -rf ./.git</pre>Later, you will use a newly created AWS CodeCommit repository as a remote origin. | DevOps engineer | 
| Connect to the AWS account. | You can connect by using a temporary security token or landing zone authentication. To confirm that you are using the correct account and AWS Region, run the following commands.<pre>AWS_REGION="eu-west-1"<br />ACCOUNT_NUMBER=$(aws sts get-caller-identity --query Account --output text)<br />echo "${ACCOUNT_NUMBER}"</pre> | DevOps engineer | 
| Bootstrap the environment. | To bootstrap an AWS CDK environment, run the following commands.<pre>npm install<br />npm run cdk bootstrap "aws://${ACCOUNT_NUMBER}/${AWS_REGION}"</pre>After you successfully bootstrap the environment, the following output should be displayed.<pre>⏳  Bootstrapping environment aws://{account}/{region}...<br />✅  Environment aws://{account}/{region} bootstrapped</pre>For more information about AWS CDK bootstrapping, see the [AWS CDK documentation](https://docs.aws.amazon.com/cdk/v2/guide/bootstrapping.html). | DevOps engineer | 
| Synthesize a template. | To synthesize an AWS CDK app, use the `cdk synth` command.<pre>npm run cdk synth</pre>You should see the following output.<pre>Successfully synthesized to <path-to-directory>/aws-codepipeline-cicd/cdk.out<br />Supply a stack id (CodePipeline, Dev-MainStack) to display its template.</pre> | DevOps engineer | 
| Deploy the CodePipeline stack. | Now that you bootstrapped and synthesized the CloudFormation template, you can deploy it. The deployment will create the CodePipeline pipeline and a CodeCommit repository, which will be the source and trigger of the pipeline.<pre>npm run cdk -- deploy CodePipeline --require-approval never</pre>After you run the command, you should see a successful deployment of the CodePipeline stack and output information. The `CodePipeline.RepositoryName` gives you the name of the CodeCommit repository in the AWS account.<pre>CodePipeline: deploying...<br />CodePipeline: creating CloudFormation changeset...<br />✅  CodePipeline<br />Outputs:<br />CodePipeline.RepositoryName = SampleRepository<br />Stack ARN:<br />arn:aws:cloudformation:REGION:ACCOUNT-ID:stack/CodePipeline/STACK-ID</pre> | DevOps engineer | 
| Set up the remote CodeCommit repository and branch. | After a successful deployment, CodePipeline will initiate the first run of the pipeline, which you can find in the [AWS CodePipeline console](https://eu-west-1.console.aws.amazon.com/codesuite/codepipeline/pipelines). Because AWS CDK and CodeCommit don’t initiate a default branch, this initial pipeline run will fail and return the following error message.<pre>The action failed because no branch named main was found in the selected AWS CodeCommit repository SampleRepository. Make sure you are using the correct branch name, and then try again. Error: null</pre>To fix this error, set up a remote origin as `SampleRepository`, and create the required `main` branch.<pre>RepoName=$(aws cloudformation describe-stacks --stack-name CodePipeline --query "Stacks[0].Outputs[?OutputKey=='RepositoryName'].OutputValue" --output text)<br />echo "${RepoName}"<br />#<br />git init<br />git branch -m master main<br />git remote add origin codecommit://${RepoName}<br />git add .<br />git commit -m "Initial commit"<br />git push -u origin main</pre> | DevOps engineer | 

### Test the deployed CodePipeline pipeline
<a name="test-the-deployed-codepipeline-pipeline"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Commit a change to activate the pipeline. | After a successful initial deployment, you should have a complete CI/CD pipeline with a `main` branch for `SampleRepository` as a source branch. As soon as you commit changes to the `main` branch, the pipeline will initiate and run the following sequence of actions:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-a-ci-cd-pipeline-by-using-aws-codepipeline-and-aws-cdk.html) | DevOps engineer | 

### Test locally by using a Makefile
<a name="test-locally-by-using-a-makefile"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Run the development process by using a Makefile. | You can run the whole pipeline locally by using the `make` command, or you can run an individual step (for example, `make linting`).To test using `make`, perform the following actions:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-a-ci-cd-pipeline-by-using-aws-codepipeline-and-aws-cdk.html) | App developer, DevOps engineer | 

### Clean up resources
<a name="clean-up-resources"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Delete AWS CDK app resources. | To clean up your AWS CDK app, run the following command.<pre>cdk destroy --all</pre>Be aware that the Amazon Simple Storage Service (Amazon S3) buckets that are created during bootstrapping aren't automatically deleted. They need a retention policy that allows deletion, or you need to delete them manually in your AWS account. | DevOps engineer | 

## Troubleshooting
<a name="set-up-a-ci-cd-pipeline-by-using-aws-codepipeline-and-aws-cdk-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| The template isn’t working as expected. | If something goes wrong and the template is not working, make sure that you have the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-a-ci-cd-pipeline-by-using-aws-codepipeline-and-aws-cdk.html) | 

## Related resources
<a name="set-up-a-ci-cd-pipeline-by-using-aws-codepipeline-and-aws-cdk-resources"></a>
+ [Get started with common tasks in IAM Identity Center](https://docs.aws.amazon.com/singlesignon/latest/userguide/getting-started.html)
+ [AWS CodePipeline documentation](https://docs.aws.amazon.com/codepipeline/latest/userguide/welcome.html)
+ [AWS CDK](https://docs.aws.amazon.com/cdk/v2/guide/home.html)

# Set up centralized logging at enterprise scale by using Terraform
<a name="set-up-centralized-logging-at-enterprise-scale-by-using-terraform"></a>

*Aarti Rajput, Yashwant Patel, and Nishtha Yadav, Amazon Web Services*

## Summary
<a name="set-up-centralized-logging-at-enterprise-scale-by-using-terraform-summary"></a>

Centralized logging is vital for an organization's cloud infrastructure, because it provides visibility into its operations, security, and compliance. As your organization scales its AWS environment across multiple accounts, a structured log management strategy becomes fundamental for running security operations, meeting audit requirements, and achieving operational excellence.

This pattern provides a scalable, secure framework for centralizing logs from multiple AWS accounts and services, to enable enterprise-scale logging management across complex AWS deployments. The solution is automated by using Terraform, which is an infrastructure as code (IaC) tool from HashiCorp that ensures consistent and repeatable deployments, and minimizes manual configuration. By combining Amazon CloudWatch Logs, Amazon Data Firehose, and Amazon Simple Storage Service (Amazon S3), you can implement a robust log aggregation and analysis pipeline that delivers:
+ Centralized log management across your organization in AWS Organizations
+ Automated log collection with built-in security controls
+ Scalable log processing and durable storage
+ Simplified compliance reporting and audit trails
+ Real-time operational insights and monitoring

The solution collects logs from Amazon Elastic Kubernetes Service (Amazon EKS) containers, AWS Lambda functions, and Amazon Relational Database Service (Amazon RDS) database instances through CloudWatch Logs. It automatically forwards these logs to a dedicated logging account by using CloudWatch subscription filters. Firehose manages the high-throughput log streaming pipeline to Amazon S3 for long-term storage. Amazon Simple Queue Service (Amazon SQS) is configured to receive Amazon S3 event notifications upon object creation. This enables integration with analytics services, including:
+ Amazon OpenSearch Service for log search, visualization, and real-time analytics
+ Amazon Athena for SQL-based querying
+ Amazon EMR for large-scale processing
+ Lambda for custom transformation
+ Amazon Quick Sight for dashboards

All data is encrypted by using AWS Key Management Service (AWS KMS), and the entire infrastructure is deployed by using Terraform for consistent configuration across environments.

This centralized logging approach enables organizations to improve their security posture, maintain compliance requirements, and optimize operational efficiency across their AWS infrastructure.

## Prerequisites and limitations
<a name="set-up-centralized-logging-at-enterprise-scale-by-using-terraform-prereqs"></a>

**Prerequisites**
+ A landing zone for your organization that's built by using [AWS Control Tower](https://docs.aws.amazon.com/controltower/latest/userguide/getting-started-with-control-tower.html)
+ [Account Factory for Terraform (AFT)](https://docs.aws.amazon.com/controltower/latest/userguide/aft-getting-started.html), deployed and configured with required accounts
+ [Terraform](https://developer.hashicorp.com/terraform/downloads) for provisioning the infrastructure
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/getting-started.html) roles and policies for cross-account access

For instructions for setting up AWS Control Tower, AFT, and Application accounts, see the [Epics section](#set-up-centralized-logging-at-enterprise-scale-by-using-terraform-epics).

**Required accounts**

Your organization in AWS Organizations should include these accounts:
+ **Application account** – One or more source accounts where the AWS services (Amazon EKS, Lambda, and Amazon RDS) run and generate logs
+ **Log Archive account** – A dedicated account for centralized log storage and management

**Product versions**
+ [AWS Control Tower version 3.1](https://docs.aws.amazon.com/controltower/latest/userguide/2023-all.html#lz-3-1) or later
+ [Terraform version 0.15.0](https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli) or later

## Architecture
<a name="set-up-centralized-logging-at-enterprise-scale-by-using-terraform-architecture"></a>

The following diagram illustrates an AWS centralized logging architecture that provides a scalable solution for collecting, processing, and storing logs from multiple Application accounts into a dedicated Log Archive account. This architecture efficiently handles logs from AWS services, including Amazon RDS, Amazon EKS, and Lambda, and routes them through a streamlined process to Regional S3 buckets in the Log Archive account.

![\[AWS centralized logging architecture for collecting logs from multiple Application accounts.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/9fc71a10-65d6-437b-9128-cc27bda11af4/images/2e916040-0f11-4712-a8dd-31c95194ce5d.png)


The workflow includes five processes:

1. **Log flow process**
   + The log flow process begins in the Application accounts, where AWS services generate various types of logs, such as general, error, audit, and slow query logs from Amazon RDS, control plane logs from Amazon EKS, and function execution and error logs from Lambda.
   + CloudWatch serves as the initial collection point. It gathers these logs at the log group level within each application account.
   + In CloudWatch, [subscription filters](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Subscriptions.html) determine which logs should be forwarded to the central account. These filters give you granular control over log forwarding, so you can specify exact log patterns or complete log streams for centralization.

1. **Cross-account log transfer**
   + Logs move to the Log Archive account. CloudWatch subscription filters facilitate the cross-account transfer and preserve Regional context.
   + The architecture establishes multiple parallel streams to handle different log sources efficiently and to ensure optimal performance and scalability.

1. **Log processing in the Log Archive account**
   + In the Log Archive account, Firehose processes the incoming log streams.
   + Each Region maintains dedicated Firehose delivery streams that can transform, convert, or enrich logs as needed.
   + These Firehose streams deliver the processed logs to S3 buckets in the Log Archive account. The buckets are located in the same Region as the source Application accounts (Region A in the diagram) to meet data sovereignty requirements.

1. **Notifications and additional workflows**
   + When logs reach their destination S3 buckets, the architecture implements a notification system by using Amazon SQS.
   + The Regional SQS queues enable asynchronous processing and can trigger additional workflows, analytics, or alerting systems based on the stored logs.

1. **AWS KMS for security**

   The architecture incorporates AWS KMS for security. AWS KMS provides encryption keys for the S3 buckets. This ensures that all stored logs maintain encryption at rest while keeping the encryption Regional to satisfy data residency requirements.
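
In this pattern, the subscription filters described in step 1 are provisioned by Terraform, but the underlying API call is simple. The following boto3 sketch attaches a filter that forwards every event from one application log group to a CloudWatch Logs destination in the Log Archive account; the destination, in turn, points to the Regional Firehose delivery stream. The log group name and destination ARN are placeholders.

```python
import boto3

logs = boto3.client("logs", region_name="us-east-1")

# Placeholder ARN of the CloudWatch Logs destination created in the Log Archive account.
DESTINATION_ARN = "arn:aws:logs:us-east-1:222222222222:destination:central-logging"

# Forward every event from the application log group to the central destination.
# An empty filter pattern matches all log events.
logs.put_subscription_filter(
    logGroupName="/aws/lambda/orders-service",   # placeholder source log group
    filterName="forward-to-log-archive",
    filterPattern="",
    destinationArn=DESTINATION_ARN,
)
```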

## Tools
<a name="set-up-centralized-logging-at-enterprise-scale-by-using-terraform-tools"></a>

**AWS services**
+ [Amazon CloudWatch](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/WhatIsCloudWatch.html) is a monitoring and observability service that collects monitoring and operational data in the form of logs, metrics, and events. It provides a unified view of AWS resources, applications, and services that run on AWS and on-premises servers.
+ [CloudWatch Logs subscription filters](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/SubscriptionFilters.html) are expressions that match a pattern in incoming log events and deliver matching log events to the specified AWS resource for further processing or analysis.
+ [AWS Control Tower Account Factory For Terraform (AFT)](https://docs.aws.amazon.com/controltower/latest/userguide/aft-overview.html) sets up a Terraform pipeline to help you provision and customize accounts in AWS Control Tower. AFT provides Terraform-based account provisioning while allowing you to govern your accounts with AWS Control Tower.
+ [Amazon Data Firehose](https://docs.aws.amazon.com/firehose/latest/dev/what-is-this-service.html) delivers real-time streaming data to destinations such as Amazon S3, Amazon Redshift, and Amazon OpenSearch Service. It automatically scales to match the throughput of your data and requires no ongoing administration.
+ [Amazon Elastic Kubernetes Service (Amazon EKS)](https://docs.aws.amazon.com/eks/latest/userguide/what-is-eks.html) is a managed container orchestration service that makes it easy to deploy, manage, and scale containerized applications by using Kubernetes. It automatically manages the availability and scalability of the Kubernetes control plane nodes.
+ [AWS Key Management Service (AWS KMS)](https://docs.aws.amazon.com/kms/latest/developerguide/overview.html) creates and controls encryption keys for encrypting your data. AWS KMS integrates with other AWS services to help you protect the data you store with these services.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a serverless compute service that lets you run code without provisioning or managing servers. It automatically scales your applications by running code in response to each trigger, and charges only for the compute time that you use.
+ [Amazon Relational Database Service (Amazon RDS)](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Welcome.html) is a managed relational database service that makes it easy to set up, operate, and scale a relational database in the cloud. It provides cost-efficient and resizable capacity while automating time-consuming administration tasks.
+ [Amazon Simple Queue Service (Amazon SQS)](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/welcome.html) is a message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. It eliminates the complexity of managing and operating message-oriented middleware.
+ [Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) is a cloud-based object storage service that offers scalability, data availability, security, and performance. It can store and retrieve any amount of data from anywhere on the web.

**Other tools**
+ [Terraform](https://www.terraform.io/) is an infrastructure as code (IaC) tool from HashiCorp that helps you create and manage cloud and on-premises resources.

**Code**

The code for this pattern is available in the GitHub [Centralized logging](https://github.com/aws-samples/sample-centralised-logging-at-enterprise-scale-using-terraform) repository.

## Best practices
<a name="set-up-centralized-logging-at-enterprise-scale-by-using-terraform-best-practices"></a>
+ Use [multiple AWS accounts in a single organization in AWS Organizations](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts.html). This practice enables centralized management and standardized logging across accounts.
+ Configure [S3 buckets with versioning, lifecycle policies, and cross-Region replication](https://docs.aws.amazon.com/AmazonS3/latest/userguide/replication.html). Implement encryption and access logging for security and compliance. (A brief CLI sketch follows this list.)
+ Implement [common logging standards by using JSON format](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CWL_AnalyzeLogData-discoverable-fields.html) with standard timestamps and fields. Use a consistent prefix structure and correlation IDs for easy tracking and analysis.
+ Enable [security controls with AWS KMS encryption](https://docs.aws.amazon.com/AmazonS3/latest/userguide/security-best-practices.html) and least privilege access. Maintain AWS CloudTrail monitoring and regular key rotation for enhanced security.
+ Set up [CloudWatch metrics and alerts](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/working_with_metrics.html) for delivery tracking. Monitor costs and performance with automated notifications.
+ Configure [Amazon S3 retention policies](https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lifecycle-mgmt.html) to meet compliance requirements and enable Amazon S3 server access logging to track all requests made to your S3 buckets. Maintain documentation for S3 bucket policies and lifecycle rules. Conduct periodic reviews of access logs, bucket permissions, and storage configurations to help ensure compliance and [security best practices](https://docs.aws.amazon.com/AmazonS3/latest/userguide/security-best-practices.html).
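
The following is the CLI sketch referenced above: a minimal illustration of enabling versioning and a lifecycle rule on a hypothetical bucket named `central-log-archive-us-east-1`. Cross-Region replication, encryption, and access logging require additional configuration that isn't shown here.

```bash
# Turn on versioning for the log archive bucket.
aws s3api put-bucket-versioning \
  --bucket central-log-archive-us-east-1 \
  --versioning-configuration Status=Enabled

# Transition older logs to archival storage and expire old noncurrent versions.
aws s3api put-bucket-lifecycle-configuration \
  --bucket central-log-archive-us-east-1 \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "archive-logs",
      "Status": "Enabled",
      "Filter": {"Prefix": ""},
      "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
      "NoncurrentVersionExpiration": {"NoncurrentDays": 365}
    }]
  }'
```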

## Epics
<a name="set-up-centralized-logging-at-enterprise-scale-by-using-terraform-epics"></a>

### Set up AWS Control Tower, AFT, and Application accounts
<a name="set-up-ctowerlong-aft-and-application-accounts"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Set up an AWS Control Tower environment with AFT. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-centralized-logging-at-enterprise-scale-by-using-terraform.html) | AWS administrator | 
| Enable resource sharing for the organization. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-centralized-logging-at-enterprise-scale-by-using-terraform.html) | AWS administrator | 
| Verify or provision Application accounts. | To provision new Application accounts for your use case, create them through AFT. For more information, see [Provision a new account with AFT](https://docs.aws.amazon.com/controltower/latest/userguide/aft-provision-account.html) in the AWS Control Tower documentation. | AWS administrator | 

### Set up configuration files for Application accounts
<a name="set-up-configuration-files-for-application-accounts"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Copy `Application_account` folder contents into the `aft-account-customizations` repository. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-centralized-logging-at-enterprise-scale-by-using-terraform.html) | DevOps engineer | 
| Review and edit the input parameters for setting up the Application account. | In this step, you set up the configuration file for creating resources in Application accounts, including CloudWatch log groups, CloudWatch subscription filters, IAM roles and policies, and configuration details for Amazon RDS, Amazon EKS, and Lambda functions. In your `aft-account-customizations` repository, in the `Application_account` folder, configure the input parameters in the `terraform.tfvars` file based on your organization's requirements:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-centralized-logging-at-enterprise-scale-by-using-terraform.html) | DevOps engineer | 

### Set up configuration files for the Log Archive account
<a name="set-up-configuration-files-for-the-log-archive-account"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Copy `Log_archive_account` folder contents into the `aft-account-customizations` repository. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-centralized-logging-at-enterprise-scale-by-using-terraform.html) | DevOps engineer | 
| Review and edit the input parameters for setting up the Log Archive account. | In this step, you set up the configuration file for creating resources in the Log Archive account, including Firehose delivery streams, S3 buckets, SQS queues, and IAM roles and policies. In the `Log_archive_account` folder of your `aft-account-customizations` repository, configure the input parameters in the `terraform.tfvars` file based on your organization's requirements:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-centralized-logging-at-enterprise-scale-by-using-terraform.html) | DevOps engineer | 

### Run Terraform commands to provision resources
<a name="run-terraform-commands-to-provision-resources"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Option 1 - Deploy the Terraform configuration files from AFT. | The AFT pipeline is triggered after you push the code with the configuration changes to the GitHub `aft-account-customizations` repository. AFT automatically detects the changes and initiates the account customization process. After you make changes to your Terraform (`terraform.tfvars`) files, commit and push your changes to your `aft-account-customizations` repository:<pre>$ git add *<br />$ git commit -m "update message"<br />$ git push origin main</pre>If you're using a different branch (such as `dev`), replace `main` with your branch name. | DevOps engineer | 
| Option 2 - Deploy the Terraform configuration file manually. | If you aren't using AFT or you want to deploy the solution manually, you can use the following Terraform commands from the `Application_account` and `Log_archive_account` folders:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-centralized-logging-at-enterprise-scale-by-using-terraform.html) | DevOps engineer | 
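
For Option 2, the exact commands are on the AWS documentation website. A typical manual run, assuming the folder layout from the code repository, looks like the following sketch.

```bash
# Repeat these commands from both the Application_account and the
# Log_archive_account folders.
cd Application_account
terraform init
terraform plan
terraform apply
```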

### Validate resources
<a name="validate-resources"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Verify subscription filters. | To verify that the subscription filters forward logs correctly from the Application account log groups to the Log Archive account:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-centralized-logging-at-enterprise-scale-by-using-terraform.html) | DevOps engineer | 
| Verify Firehose streams. | To verify that the Firehose streams in the Log Archive account process application logs successfully:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-centralized-logging-at-enterprise-scale-by-using-terraform.html) | DevOps engineer | 
| Validate the centralized S3 buckets. | To verify that the centralized S3 buckets receive and organize logs properly:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-centralized-logging-at-enterprise-scale-by-using-terraform.html) | DevOps engineer | 
| Validate SQS queues. | To verify that the SQS queues receive notifications for new log files:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-centralized-logging-at-enterprise-scale-by-using-terraform.html) | DevOps engineer | 
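
The detailed validation steps for each task are on the AWS documentation website. The following is a hedged sketch of the kinds of AWS CLI checks you might run; the log group, stream, bucket, queue, and account identifiers are hypothetical.

```bash
# Application account: confirm that the subscription filter exists on a log group.
aws logs describe-subscription-filters \
  --log-group-name /aws/rds/instance/app-db/error

# Log Archive account: confirm that the Firehose stream is active.
aws firehose describe-delivery-stream \
  --delivery-stream-name central-log-stream \
  --query "DeliveryStreamDescription.DeliveryStreamStatus"

# Log Archive account: confirm that logs arrive in the centralized bucket and
# that the SQS queue receives S3 event notifications.
aws s3 ls s3://central-log-archive-us-east-1/ --recursive | head
aws sqs get-queue-attributes \
  --queue-url https://sqs.us-east-1.amazonaws.com/111122223333/central-log-notifications \
  --attribute-names ApproximateNumberOfMessages
```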

### Clean up resources
<a name="clean-up-resources"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Option 1 - Decommission the Terraform configuration file from AFT. | When you remove the Terraform configuration files and push the changes, AFT automatically initiates the resource removal process.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-centralized-logging-at-enterprise-scale-by-using-terraform.html) | DevOps engineer | 
| Option 2 – Clean up Terraform resources manually. | If you aren't using AFT or you want to clean up resources manually, use the following Terraform commands from the `Application_account` and `Log_archive_account` folders:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-centralized-logging-at-enterprise-scale-by-using-terraform.html) | DevOps engineer | 
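
For Option 2, manual cleanup typically mirrors the manual deployment, for example:

```bash
# Repeat from both the Application_account and the Log_archive_account folders.
cd Application_account
terraform init
terraform destroy
```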

## Troubleshooting
<a name="set-up-centralized-logging-at-enterprise-scale-by-using-terraform-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| The CloudWatch Logs destination wasn't created or is inactive. | Validate the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-centralized-logging-at-enterprise-scale-by-using-terraform.html) | 
| The subscription filter failed or is stuck in pending status. | Check the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-centralized-logging-at-enterprise-scale-by-using-terraform.html) | 
| The Firehose delivery stream shows no incoming records. | Verify the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-centralized-logging-at-enterprise-scale-by-using-terraform.html) | 

## Related resources
<a name="set-up-centralized-logging-at-enterprise-scale-by-using-terraform-resources"></a>
+ [Terraform infrastructure setup](https://developer.hashicorp.com/terraform/tutorials/aws-get-started) (Terraform documentation)
+ [Deploy AWS Control Tower Account Factory for Terraform (AFT)](https://docs.aws.amazon.com/controltower/latest/userguide/aft-getting-started.html) (AWS Control Tower documentation)
+ [IAM tutorial: Delegate access across AWS accounts using IAM roles](https://docs.aws.amazon.com/IAM/latest/UserGuide/tutorial_cross-account-with-roles.html) (IAM documentation)

# Set up end-to-end encryption for applications on Amazon EKS using cert-manager and Let's Encrypt
<a name="set-up-end-to-end-encryption-for-applications-on-amazon-eks-using-cert-manager-and-let-s-encrypt"></a>

*Mahendra Revanasiddappa and Vasanth Jeyaraj, Amazon Web Services*

## Summary
<a name="set-up-end-to-end-encryption-for-applications-on-amazon-eks-using-cert-manager-and-let-s-encrypt-summary"></a>

Implementing end-to-end encryption can be complex, and you must manage certificates for each asset in your microservices architecture. Although you can terminate the Transport Layer Security (TLS) connection at the edge of the Amazon Web Services (AWS) network with a Network Load Balancer or Amazon API Gateway, some organizations require end-to-end encryption.

This pattern uses the NGINX Ingress Controller for ingress. When you create a Kubernetes ingress, the ingress resource uses a Network Load Balancer, which doesn't permit uploads of client certificates, so you can't achieve mutual TLS with a Kubernetes ingress.

This pattern is intended for organizations that require mutual authentication between all microservices in their applications. Mutual TLS reduces the burden of maintaining user names and passwords and can serve as a turnkey security framework. This pattern's approach is a good fit if your organization has a large number of connected devices or must comply with strict security guidelines.

This pattern helps increase your organization's security posture by implementing end-to-end encryption for applications running on Amazon Elastic Kubernetes Service (Amazon EKS). This pattern provides a sample application and code in the GitHub [End-to-end encryption on Amazon EKS](https://github.com/aws-samples/end-to-end-encryption-on-amazon-eks#readme) repository to show how a microservice runs with end-to-end encryption on Amazon EKS. The pattern's approach uses [cert-manager](https://cert-manager.io/docs/), an add-on to Kubernetes, with [Let's Encrypt](https://letsencrypt.org/) as the certificate authority (CA). Let's Encrypt is a cost-effective solution to manage certificates and provides free certificates that are valid for 90 days. Cert-manager automates the on-demand provisioning and rotating of certificates when a new microservice is deployed on Amazon EKS. 

**Intended audience**

This pattern is recommended for users who have experience with Kubernetes, TLS, Amazon Route 53, and Domain Name System (DNS).

## Prerequisites and limitations
<a name="set-up-end-to-end-encryption-for-applications-on-amazon-eks-using-cert-manager-and-let-s-encrypt-prereqs"></a>

**Prerequisites**
+ An active AWS account.
+ An existing Amazon EKS cluster.
+ AWS Command Line Interface (AWS CLI) version 1.7 or later, installed and configured on macOS, Linux, or Windows.
+ The `kubectl` command line utility, installed and configured to access the Amazon EKS cluster. For more information about this, see [Installing kubectl](https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html) in the Amazon EKS documentation.
+ An existing DNS name to test the application. For more information about this, see [Registering domain names using Amazon Route 53](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/registrar.html) in the Amazon Route 53 documentation. 
+ The latest [Helm](https://docs.aws.amazon.com/eks/latest/userguide/helm.html) version, installed on your local machine. For more information about this, see [Using Helm with Amazon EKS](https://docs.aws.amazon.com/eks/latest/userguide/helm.html) in the Amazon EKS documentation and the GitHub [Helm](https://github.com/helm/helm) repository. 
+ The GitHub [End-to-end encryption on Amazon EKS](https://github.com/aws-samples/end-to-end-encryption-on-amazon-eks#readme) repository, cloned to your local machine. 
+ Replace the following values in the `policy.json` and `trustpolicy.json` files from the cloned GitHub [End-to-end encryption on Amazon EKS](https://github.com/aws-samples/end-to-end-encryption-on-amazon-eks#readme) repository:
  + `<account number>` – Replace with the AWS account ID for the account that you want to deploy the solution in. 
  + `<zone id>` – Replace with the domain name’s Route 53 zone ID. 
  + `<node_group_role>` – Replace with the name of the AWS Identity and Access Management (IAM) role associated with the Amazon EKS nodes.
  + `<namespace>` – Replace with the Kubernetes namespace in which you deploy the NGINX Ingress Controller and the sample application.
  + `<application-domain-name>` – Replace with the DNS domain name from Route 53.

**Limitations**
+ This pattern doesn’t describe how to rotate certificates and only demonstrates how to use certificates with microservices on Amazon EKS. 

## Architecture
<a name="set-up-end-to-end-encryption-for-applications-on-amazon-eks-using-cert-manager-and-let-s-encrypt-architecture"></a>

The following diagram shows the workflow and architecture components for this pattern.

![\[Workflow to set up encryption for applications on Amazon EKS using cert-manager and Let's Encrypt.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/9aa3ee9e-73db-41f5-a467-b5c47fef496e/images/40692ede-6fb3-474e-8c9e-85c51529e8ad.png)


The diagram shows the following workflow:

1. A client sends a request to access the application to the DNS name.

1. The Route 53 record is a CNAME to the Network Load Balancer.

1. The Network Load Balancer forwards the request to the NGINX Ingress Controller that is configured with a TLS listener. Communication between the NGINX Ingress Controller and the Network Load Balancer follows HTTPS protocol.

1. The NGINX Ingress Controller carries out path-based routing based on the client's request to the application service.

1. The application service forwards the request to the application pod. The application is designed to use the same certificate by referencing Kubernetes secrets.

1. Pods run the sample application using the cert-manager certificates. The communication between the NGINX Ingress Controller and the pods uses HTTPS.


**Note:** Cert-manager runs in its own namespace. It uses a Kubernetes cluster role to provision certificates as secrets in specific namespaces. Application pods and the NGINX Ingress Controller can then use the secrets in those namespaces.

## Tools
<a name="set-up-end-to-end-encryption-for-applications-on-amazon-eks-using-cert-manager-and-let-s-encrypt-tools"></a>

**AWS services**
+ [Amazon Elastic Kubernetes Service (Amazon EKS)](https://docs.aws.amazon.com/eks/latest/userguide/what-is-eks.html) is a managed service that you can use to run Kubernetes on AWS without needing to install, operate, and maintain your own Kubernetes control plane or nodes.
+ [Elastic Load Balancing](https://docs.aws.amazon.com/elasticloadbalancing/latest/userguide/what-is-load-balancing.html) automatically distributes your incoming traffic across multiple targets, containers, and IP addresses.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.
+ [Amazon Route 53](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/Welcome.html) is a highly available and scalable DNS web service.

**Other tools**
+ [cert-manager](https://cert-manager.io/docs/installation/supported-releases/) is an add-on to Kubernetes that requests certificates, distributes them to Kubernetes containers, and automates certificate renewal.
+ [NGINX Ingress Controller](https://kubernetes.github.io/ingress-nginx/) is a traffic management solution for cloud‑native apps in Kubernetes and containerized environments.

## Epics
<a name="set-up-end-to-end-encryption-for-applications-on-amazon-eks-using-cert-manager-and-let-s-encrypt-epics"></a>

### Create and configure a public hosted zone with Route 53
<a name="create-and-configure-a-public-hosted-zone-with-route-53"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a public hosted zone in Route 53. | Sign in to the AWS Management Console, open the Amazon Route 53 console, choose **Hosted zones**, and then choose **Create hosted zone**. Create a public hosted zone and record the zone ID. For more information about this, see [Creating a public hosted zone](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/CreatingHostedZone.html) in the Amazon Route 53 documentation. ACME DNS01 uses the DNS provider to post a challenge for cert-manager to issue the certificate. This challenge asks you to prove that you control the DNS for your domain name by putting a specific value in a TXT record under that domain name. After Let’s Encrypt gives your ACME client a token, your client creates a TXT record derived from that token and your account key, and it puts that record at `_acme-challenge.<YOURDOMAIN>`. Then Let’s Encrypt queries the DNS for that record. If it finds a match, you can proceed to issue a certificate. | AWS DevOps | 
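
If you prefer the AWS CLI to the console, the following is a minimal sketch that uses a hypothetical domain. The `dig` command only shows where Let's Encrypt looks for the DNS-01 challenge TXT record during issuance.

```bash
# Create a public hosted zone for the domain (the caller reference must be unique).
aws route53 create-hosted-zone \
  --name example.com \
  --caller-reference "e2e-encryption-$(date +%s)"

# During issuance, cert-manager publishes the DNS-01 challenge value here:
dig +short TXT _acme-challenge.example.com
```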

### Configure an IAM role to allow cert-manager to access the public hosted zone
<a name="configure-an-iam-role-to-allow-cert-manager-to-access-the-public-hosted-zone"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the IAM policy for cert-manager.  | An IAM policy is required to provide cert-manager with permission to validate that you own the Route 53 domain. The `policy.json` sample IAM policy is provided in the `1-IAMRole` directory in the cloned GitHub [End-to-end encryption on Amazon EKS](https://github.com/aws-samples/end-to-end-encryption-on-amazon-eks#readme) repository. Enter the following command in AWS CLI to create the IAM policy.<pre>aws iam create-policy \<br />  --policy-name PolicyForCertManager \<br />  --policy-document file://policy.json</pre> | AWS DevOps | 
| Create the IAM role for cert-manager. | After you create the IAM policy, you must create an IAM role. The `trustpolicy.json` sample IAM role is provided in the `1-IAMRole` directory. Enter the following command in AWS CLI to create the IAM role.<pre>aws iam create-role \<br />  --role-name RoleForCertManager \<br />  --assume-role-policy-document file://trustpolicy.json</pre> | AWS DevOps | 
| Attach the policy to the role. | Enter the following command in AWS CLI to attach the IAM policy to the IAM role. Replace `AWS_ACCOUNT_ID` with the ID of your AWS account. <pre>aws iam attach-role-policy \<br />  --policy-arn arn:aws:iam::AWS_ACCOUNT_ID:policy/PolicyForCertManager \<br />  --role-name RoleForCertManager</pre> | AWS DevOps | 

### Set up the NGINX Ingress Controller in Amazon EKS
<a name="set-up-the-nginx-ingress-controller-in-amazon-eks"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Deploy the NGINX Ingress Controller. | Install the most recent version of `nginx-ingress` using Helm. You can modify the `nginx-ingress` configuration according to your requirements before deploying it. This pattern uses an annotated, internal-facing Network Load Balancer; the configuration is available in the `5-Nginx-Ingress-Controller` directory. Install the NGINX Ingress Controller by running the following Helm command from the `5-Nginx-Ingress-Controller` directory: `helm install test-nginx nginx-stable/nginx-ingress -f 5-Nginx-Ingress-Controller/values_internal_nlb.yaml` | AWS DevOps | 
| Verify that the NGINX Ingress Controller is installed. | Enter the `helm list` command. The output should show that the NGINX Ingress Controller is installed. | AWS DevOps | 
| Create a Route 53 A record. | The A record points to the Network Load Balancer created by NGINX Ingress Controller.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-end-to-end-encryption-for-applications-on-amazon-eks-using-cert-manager-and-let-s-encrypt.html) | AWS DevOps | 
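
The console steps are on the AWS documentation website. If you script the record instead, a hedged AWS CLI sketch looks like the following; the domain, hosted zone IDs, and NLB DNS name are placeholders that you must replace with your own values.

```bash
# Find the DNS name and canonical hosted zone ID of the NLB that the
# NGINX Ingress Controller created.
aws elbv2 describe-load-balancers \
  --query "LoadBalancers[].[DNSName,CanonicalHostedZoneId]" --output table

# Upsert an alias A record for the application domain in your public hosted zone.
cat > a-record.json <<'EOF'
{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "app.example.com",
      "Type": "A",
      "AliasTarget": {
        "HostedZoneId": "Z0123456789EXAMPLE",
        "DNSName": "internal-example-nlb-1234567890.elb.us-east-1.amazonaws.com",
        "EvaluateTargetHealth": false
      }
    }
  }]
}
EOF

aws route53 change-resource-record-sets \
  --hosted-zone-id Z9876543210EXAMPLE \
  --change-batch file://a-record.json
```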

### Set up NGINX VirtualServer on Amazon EKS
<a name="set-up-nginx-virtualserver-on-amazon-eks"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Deploy NGINX VirtualServer. | The NGINX VirtualServer resource is a load balancing configuration that is an alternative to the ingress resource. The configuration to create the NGINX VirtualServer resource is available in the `nginx_virtualserver.yaml` file in the `6-Nginx-Virtual-Server` directory. Enter the following command in `kubectl` to create the NGINX VirtualServer resource: `kubectl apply -f nginx_virtualserver.yaml`. Make sure that you update the application domain name, certificate secret, and application service name in the `nginx_virtualserver.yaml` file. | AWS DevOps | 
| Verify that NGINX VirtualServer is created. | Enter the following command in `kubectl` to verify that the NGINX VirtualServer resource was successfully created: `kubectl get virtualserver`. Verify that the `Host` column matches your application’s domain name. | AWS DevOps | 
| Deploy the NGINX web server with TLS enabled. | This pattern uses an NGINX web server with TLS enabled as the application for testing end-to-end encryption. The configuration files required to deploy the test application are available in the `demo-webserver` directory. Enter the following command in `kubectl` to deploy the test application: `kubectl apply -f nginx-tls-ap.yaml` | AWS DevOps | 
| Verify that the test application resources are created. | Enter the following commands in `kubectl` to verify that the required resources are created for the test application:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-end-to-end-encryption-for-applications-on-amazon-eks-using-cert-manager-and-let-s-encrypt.html) | AWS DevOps | 
| Validate the application. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-end-to-end-encryption-for-applications-on-amazon-eks-using-cert-manager-and-let-s-encrypt.html) | AWS DevOps | 
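
The detailed validation steps are on the AWS documentation website. As a quick end-to-end check from a host that can reach the internal Network Load Balancer, you might run commands like the following, assuming the hypothetical domain `app.example.com` and the namespace that you deployed the application to.

```bash
# Confirm that cert-manager issued the certificate and created the TLS secret.
kubectl get certificates,secrets -n <namespace>

# Call the application over HTTPS and inspect the Let's Encrypt certificate chain.
curl -v https://app.example.com
```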

## Related resources
<a name="set-up-end-to-end-encryption-for-applications-on-amazon-eks-using-cert-manager-and-let-s-encrypt-resources"></a>

**AWS resources**
+ [Creating records by using the Amazon Route 53 console](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resource-record-sets-creating.html) (Amazon Route 53 documentation)
+ [Using a Network Load Balancer with the NGINX ingress controller on Amazon EKS](https://aws.amazon.com/blogs/opensource/network-load-balancer-nginx-ingress-controller-eks/) (AWS blog post)

**Other resources**
+ [Route 53](https://cert-manager.io/docs/configuration/acme/dns01/route53/) (cert-manager documentation)
+ [Configuring DNS01 Challenge Provider](https://cert-manager.io/docs/configuration/acme/dns01/) (cert-manager documentation)
+ [Let’s encrypt DNS challenge](https://letsencrypt.org/docs/challenge-types/#dns-01-challenge) (Let’s Encrypt documentation)

# Simplify Amazon EKS multi-tenant application deployment by using Flux
<a name="simplify-amazon-eks-multi-tenant-application-deployment-by-using-flux"></a>

*Nadeem Rahaman, Aditya Ambati, Aniket Dekate, and Shrikant Patil, Amazon Web Services*

## Summary
<a name="simplify-amazon-eks-multi-tenant-application-deployment-by-using-flux-summary"></a>

Many companies that offer products and services operate in data-regulated industries and are required to maintain data barriers between their internal business functions. This pattern describes how you can use the multi-tenancy feature in Amazon Elastic Kubernetes Service (Amazon EKS) to build a data platform that achieves logical and physical isolation between tenants or users that share a single Amazon EKS cluster. The pattern provides isolation through the following approaches:
+ Kubernetes namespace isolation
+ Role-based access control (RBAC)
+ Network policies
+ Resource quotas
+ AWS Identity and Access Management (IAM) roles for service accounts (IRSA)

In addition, this solution uses Flux to keep the tenant configuration immutable when you deploy applications. You can deploy your tenant applications by specifying the tenant repository that contains the Flux `kustomization.yaml` file in your configuration.
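
For illustration only (this is not one of the pattern's actual manifests), a Flux Kustomization that deploys a hypothetical tenant overlay from the tenant repository might look like the following sketch; the source name, overlay path, and namespace are assumptions.

```bash
# Hypothetical example: a Flux Kustomization that deploys the tenant-1 overlay
# from a GitRepository source named sample-tenant-app.
cat <<'EOF' | kubectl apply -f -
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: tenant-1-app
  namespace: flux-system
spec:
  interval: 10m
  path: ./overlays/tenant-1
  prune: true
  sourceRef:
    kind: GitRepository
    name: sample-tenant-app
  targetNamespace: tenant-1
EOF
```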

This pattern implements the following:
+ An AWS CodeCommit repository, AWS CodeBuild projects, and an AWS CodePipeline pipeline, which are created by manually deploying Terraform scripts.
+ Network and compute components required for hosting the tenants. These are created through CodePipeline and CodeBuild by using Terraform.
+ Tenant namespaces, network policies, and resource quotas, which are configured through a Helm chart.
+ Applications that belong to different tenants, deployed by using Flux.

We recommend that you carefully plan and build your own architecture for multi-tenancy based on your unique requirements and security considerations. This pattern provides a starting point for your implementation.

## Prerequisites and limitations
<a name="simplify-amazon-eks-multi-tenant-application-deployment-by-using-flux-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ AWS Command Line Interface (AWS CLI) version 2.11.4 or later, [installed](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) and [configured](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html)
+ [Terraform](https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli) version 0.12 or later installed on your local machine
+ [Terraform AWS Provider](https://registry.terraform.io/providers/hashicorp/aws/latest) version 3.0.0 or later
+ [Kubernetes Provider](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs) version 2.10 or later
+ [Helm Provider](https://registry.terraform.io/providers/hashicorp/helm/latest/docs) version 2.8.0 or later
+ [Kubectl Provider](https://registry.terraform.io/providers/gavinbunney/kubectl/latest/docs) version 1.14 or later

**Limitations**
+ **Dependency on manual Terraform deployments:** The workflow's initial setup, including creating CodeCommit repositories, CodeBuild projects, and CodePipeline pipelines, relies on manual Terraform deployments. This introduces a potential limitation in terms of automation and scalability, because it requires manual intervention for infrastructure changes.
+ **CodeCommit repository dependency:** The workflow relies on CodeCommit repositories as the source code management solution and is tightly coupled with AWS services.

## Architecture
<a name="simplify-amazon-eks-multi-tenant-application-deployment-by-using-flux-architecture"></a>

**Target architectures**

This pattern deploys three modules to build the pipeline, network, and compute infrastructure for a data platform, as illustrated in the following diagrams.

*Pipeline architecture:*

![\[Pipeline infrastructure for Amazon EKS multi-tenant architecture\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/97b700a7-74b6-4f9d-b53a-76de42409a8e/images/76a4a23d-4275-427a-ae36-51c9a3803128.png)


*Network architecture:*

![\[Network infrastructure for Amazon EKS multi-tenant architecture\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/97b700a7-74b6-4f9d-b53a-76de42409a8e/images/e542249a-19a3-4c99-b6f5-fdf80fee4edf.png)


*Compute architecture:*

![\[Compute infrastructure for Amazon EKS multi-tenant architecture\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/97b700a7-74b6-4f9d-b53a-76de42409a8e/images/91bd1ca8-17f0-433c-8600-4c8e6c474e31.png)


## Tools
<a name="simplify-amazon-eks-multi-tenant-application-deployment-by-using-flux-tools"></a>

**AWS services**
+ [AWS CodeBuild](https://docs.aws.amazon.com/codebuild/latest/userguide/welcome.html) is a fully managed build service that helps you compile source code, run unit tests, and produce artifacts that are ready to deploy.
+ [AWS CodeCommit](https://docs.aws.amazon.com/codecommit/latest/userguide/welcome.html) is a version control service that helps you privately store and manage Git repositories, without needing to manage your own source control system.
+ [AWS CodePipeline](https://docs.aws.amazon.com/codepipeline/latest/userguide/welcome.html) helps you quickly model and configure the different stages of a software release and automate the steps required to release software changes continuously.
+ [Amazon Elastic Kubernetes Service (Amazon EKS) ](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html)helps you run Kubernetes on AWS without needing to install or maintain your own Kubernetes control plane or nodes.
+ [AWS Transit Gateway](https://docs.aws.amazon.com/vpc/latest/tgw/what-is-transit-gateway.html) is a central hub that connects virtual private clouds (VPCs) and on-premises networks.
+ [Amazon Virtual Private Cloud (Amazon VPC)](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html) helps you launch AWS resources into a virtual network that you’ve defined. This virtual network resembles a traditional network that you’d operate in your own data center, with the benefits of using the scalable infrastructure of AWS.

**Other tools**
+ [Cilium Network Policies](https://cilium.io/use-cases/network-policy/#:~:text=Cilium%20implements%20Kubernetes%20Network%20Policies,%2C%20Kafka%2C%20gRPC%2C%20etc.) support Kubernetes L3 and L4 networking policies. They can be extended with L7 policies to provide API-level security for HTTP, Kafka, gRPC, and other similar protocols.
+ [Flux](https://fluxcd.io/) is a Git-based continuous delivery (CD) tool that automates application deployments on Kubernetes.
+ [Helm](https://helm.sh/docs/) is an open source package manager for Kubernetes that helps you install and manage applications on your Kubernetes cluster.
+ [Terraform](https://www.terraform.io/) is an infrastructure as code (IaC) tool from HashiCorp that helps you create and manage cloud and on-premises resources.

**Code repository**

The code for this pattern is available in the GitHub [EKS Multi-Tenancy Terraform Solution](https://github.com/aws-samples/aws-eks-multitenancy-deployment) repository.

## Best practices
<a name="simplify-amazon-eks-multi-tenant-application-deployment-by-using-flux-best-practices"></a>

For guidelines and best practices for using this implementation, see the following:
+ [Amazon EKS multi-tenancy best practices](https://aws.github.io/aws-eks-best-practices/security/docs/multitenancy/)
+ [Flux documentation](https://fluxcd.io/flux/get-started/)

## Epics
<a name="simplify-amazon-eks-multi-tenant-application-deployment-by-using-flux-epics"></a>

### Create pipelines for Terraform build, test, and deploy stages
<a name="create-pipelines-for-terraform-build-test-and-deploy-stages"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clone the project repository. | Clone the GitHub [EKS Multi-Tenancy Terraform Solution](https://github.com/aws-samples/aws-eks-multitenancy-deployment) repository by running the following command in a terminal window:<pre>git clone https://github.com/aws-samples/aws-eks-multitenancy-deployment.git</pre> | AWS DevOps | 
| Bootstrap the Terraform S3 bucket and Amazon DynamoDB. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/simplify-amazon-eks-multi-tenant-application-deployment-by-using-flux.html) | AWS DevOps | 
| Update the `run.sh` and `locals.tf` files. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/simplify-amazon-eks-multi-tenant-application-deployment-by-using-flux.html) | AWS DevOps | 
| Deploy the pipeline module. | To create pipeline resources, run the following Terraform commands manually. There is no orchestration for running these commands automatically.<pre>./run.sh -m pipeline -e demo -r <AWS_REGION> -t init<br />./run.sh -m pipeline -e demo -r <AWS_REGION> -t plan<br />./run.sh -m pipeline -e demo -r <AWS_REGION> -t apply</pre> | AWS DevOps | 

### Create the network infrastructure
<a name="create-the-network-infrastructure"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Start the pipeline. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/simplify-amazon-eks-multi-tenant-application-deployment-by-using-flux.html) After this first run, the pipeline starts automatically whenever you commit a change to the CodeCommit repository main branch. The pipeline includes the following [stages](https://docs.aws.amazon.com/codepipeline/latest/userguide/concepts.html#concepts-stages):[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/simplify-amazon-eks-multi-tenant-application-deployment-by-using-flux.html) | AWS DevOps | 
| Validate the resources created through the network module. | Confirm that the following AWS resources were created after the pipeline deployed successfully:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/simplify-amazon-eks-multi-tenant-application-deployment-by-using-flux.html) | AWS DevOps | 

### Create the compute infrastructure
<a name="create-the-compute-infrastructure"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Update `locals.tf` to enable the CodeBuild project’s access to the VPC. | To deploy the add-ons for the Amazon EKS private cluster, the CodeBuild project must be attached to the Amazon EKS VPC.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/simplify-amazon-eks-multi-tenant-application-deployment-by-using-flux.html) | AWS DevOps | 
| Update the `buildspec` files to build the compute module. | In the `templates` folder, in all `buildspec` YAML files, set the value of the `TF_MODULE_TO_BUILD` variable from `network` to `compute`:<pre>TF_MODULE_TO_BUILD: "compute"</pre> | AWS DevOps | 
| Update the `values` file for the tenant management Helm chart. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/simplify-amazon-eks-multi-tenant-application-deployment-by-using-flux.html) | AWS DevOps | 
| Validate compute resources. | After you update the files in the previous steps, CodePipeline starts automatically. Confirm that it created the following AWS resources for the compute infrastructure:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/simplify-amazon-eks-multi-tenant-application-deployment-by-using-flux.html) | AWS DevOps | 

### Check tenant management and other resources
<a name="check-tenant-management-and-other-resources"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Validate the tenant management resources in Kubernetes. | Run the following commands to check that tenant management resources were created successfully with the help of Helm.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/simplify-amazon-eks-multi-tenant-application-deployment-by-using-flux.html) | AWS DevOps | 
| Verify tenant application deployments. | Run the following commands to verify that the tenant applications were deployed.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/simplify-amazon-eks-multi-tenant-application-deployment-by-using-flux.html) |  | 
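
The exact validation steps are on the AWS documentation website. The following hedged sketch shows the kinds of commands you might run, assuming hypothetical tenant namespaces named `tenant-1` and `tenant-2`.

```bash
# Tenant management resources created by the Helm chart.
helm list -A
kubectl get namespaces
kubectl get resourcequotas --all-namespaces
# If Cilium is installed, list its network policies as well.
kubectl get ciliumnetworkpolicies --all-namespaces

# Flux sources and Kustomizations that deploy the tenant applications.
flux get sources git -A
flux get kustomizations -A
kubectl get pods -n tenant-1
kubectl get pods -n tenant-2
```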

## Troubleshooting
<a name="simplify-amazon-eks-multi-tenant-application-deployment-by-using-flux-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| You encounter an error message that’s similar to the following:`Failed to checkout and determine revision: unable to clone unknown error: You have successfully authenticated over SSH. You can use Git to interact with AWS CodeCommit.` | Follow these steps to troubleshoot the issue:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/simplify-amazon-eks-multi-tenant-application-deployment-by-using-flux.html) | 

## Related resources
<a name="simplify-amazon-eks-multi-tenant-application-deployment-by-using-flux-resources"></a>
+ [Amazon EKS Blueprints for Terraform](https://github.com/aws-ia/terraform-aws-eks-blueprints)
+ [Amazon EKS Best Practices Guides, Multi-tenancy section](https://aws.github.io/aws-eks-best-practices/security/docs/multitenancy/)
+ [Flux website](https://fluxcd.io/)
+ [Helm website](https://helm.sh/)

## Additional information
<a name="simplify-amazon-eks-multi-tenant-application-deployment-by-using-flux-additional"></a>

Here's an example repository structure for deploying tenant applications:

```
applications
sample_tenant_app
├── README.md
├── base
│   ├── configmap.yaml
│   ├── deployment.yaml
│   ├── ingress.yaml
│   ├── kustomization.yaml
│   └── service.yaml
└── overlays
    ├── tenant-1
    │   ├── configmap.yaml
    │   ├── deployment.yaml
    │   └── kustomization.yaml
    └── tenant-2
        ├── configmap.yaml
        └── kustomization.yaml
```

# Streamline Amazon Lex bot development and deployment by using an automated workflow
<a name="streamline-amazon-lex-bot-development-and-deployment-using-an-automated-workflow"></a>

*Balaji Panneerselvam, Attila Dancso, Pavan Dusanapudi, Anand Jumnani, and James O'Hara, Amazon Web Services*

## Summary
<a name="streamline-amazon-lex-bot-development-and-deployment-using-an-automated-workflow-summary"></a>

Developing and deploying Amazon Lex conversational bots can be challenging when you’re trying to manage multiple features, developers, and environments. An automated workflow using infrastructure as code (IaC) principles can help streamline the process. This pattern can help improve the productivity of Amazon Lex developers and enable efficient bot lifecycle management in the following ways:
+ **Enable concurrent development of multiple features** - With an automated workflow, developers can work on different features in parallel in separate branches. Changes can then be merged and deployed without blocking other work.
+ **Use the Amazon Lex console UI** - Developers can use the user-friendly Amazon Lex console to build and test bots. The bots are then described in infrastructure code for deployment.
+ **Promote bots across environments** - The workflow automates promoting bot versions from lower environments like development and test up to production. This approach reduces the risk and overhead of manual promotions.
+ **Maintain version control** - Managing bot definitions in Git rather than solely through the Amazon Lex service provides you with version control and an audit trail. Changes are tracked to individual developers, unlike when only using the AWS Management Console or APIs to modify bots stored in AWS. 

By automating the Amazon Lex bot release process, teams can deliver features faster with reduced risk and effort. Bots remain under version control rather than isolated in the Amazon Lex console. 

## Prerequisites and limitations
<a name="streamline-amazon-lex-bot-development-and-deployment-using-an-automated-workflow-prereqs"></a>

**Prerequisites**
+ The workflow involves multiple AWS accounts for different environments (development, production, and DevOps), which requires account management and cross-account access configurations.
+ Python 3.9 available in your deployment environment or pipeline.
+ Git [installed](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) and configured on a local workstation for source control.
+ AWS Command Line Interface (AWS CLI) [installed](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html) and configured to authenticate by using the command line or Python.

**Limitations**
+ **Repository access** – The workflow assumes that the continuous integration and continuous delivery (CI/CD) pipeline has the necessary permissions to commit changes to the source code repository. 
+ **Initial bot version** – The tooling requires that an initial version of the bot is deployed by using AWS CloudFormation templates. You must create the first iteration of the bot and commit it to the repo before the automated workflow can take over.
+ **Merge conflicts** – Although the workflow aims to enable concurrent development, there is still a possibility of merge conflicts when integrating changes from different branches. Resolving conflicts in bot configurations might require manual intervention.

**Product versions**
+ [Python 3.9](https://www.python.org/downloads/) or later
+ [AWS CDK v2 2.124.0](https://docs.aws.amazon.com/cdk/api/versions.html) or later
+ [AWS SDK for Python (Boto3)](https://docs.aws.amazon.com/pythonsdk/) version 1.28 or later

## Architecture
<a name="streamline-amazon-lex-bot-development-and-deployment-using-an-automated-workflow-architecture"></a>

The following diagram displays the high-level architecture and key components of the solution.

![\[Workflow to automate development and deployment of Amazon Lex bots.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/3c7f9d16-9708-43c4-afa6-9d804d6b9dad/images/cdc73e82-a777-4e88-8bf8-a73c9bacb47f.png)


Key components include the following:
+ **Lex bot repo** – A Git repository that stores the IaC definitions for the Amazon Lex bots.
+ **DevOps** – An AWS account dedicated to housing the CI/CD pipelines and related resources for the development and deployment process.
+ **Pipelines** – The AWS CodePipeline instances that automate various stages of the bot development and deployment lifecycle, such as creating a new bot, exporting a bot's definition, importing a bot definition, and deleting a bot.
+ **Ticket bots and main bot** – The Amazon Lex bot resources, where the ticket bots are feature-specific bots developed by individual teams or developers and the main bot is the baseline bot that integrates all the features.

The architecture diagram illustrates the following workflow:

1. **Baseline main bot** – The starting point of the workflow is to baseline the main bot in the development (Dev) environment. The main bot serves as the foundation for future development and feature additions.

1. **Create ticket bot** – When a new feature or change is required, a ticket bot is created. The ticket bot is essentially a copy or branch of the main bot that developers can work on without affecting the main version.

1. **Export ticket bot** – After work on the ticket bot is complete, it's exported from the Amazon Lex service. Then, the branch that contains the ticket bot is rebased onto the main branch. This step ensures that any changes made to the main bot while the ticket bot was in development are incorporated, reducing potential conflicts. (A Git-level sketch of this flow follows this list.)

1. **Import rebased ticket bot and validate** – The rebased ticket bot is imported back into the development environment and validated to ensure it functions correctly with the latest changes from the main branch. If validation is successful, a pull request (PR) is created to merge the ticket bot changes into the main branch.

1. **Delete ticket bot** – After the changes have been successfully merged into the main branch, the ticket bot is no longer needed. The ticket bot can be deleted to keep the environment clean and manageable.

1. **Deploy main bot into development environment and test** – The updated main bot, now including the new features or changes, is deployed to the development environment. Here, it undergoes thorough testing to ensure all functionalities work as expected.

1. **Deploy main bot into production environment** – After testing in the development environment is complete and successful, the main bot is deployed to the production environment. This step is the final stage of the workflow, where the new features become available to end users.
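
The pattern's pipelines automate these operations, but at the Git level steps 2 through 5 correspond to an ordinary feature-branch flow. The following sketch uses a hypothetical branch name and assumes a remote named `origin`.

```bash
# Step 2: create the ticket bot's feature branch from the latest main.
git checkout -b feature/TICKET-123 origin/main

# Step 3: after exporting the ticket bot definition, rebase the branch onto main.
git fetch origin
git rebase origin/main

# Step 4: push the rebased branch and open a pull request into main.
git push --force-with-lease origin feature/TICKET-123

# Step 5: after the pull request is merged, delete the feature branch.
git push origin --delete feature/TICKET-123
```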

**Automation and scale**

The automated workflow allows developers to work on different features in parallel, each in separate branches. This facilitates concurrent development, enabling teams to collaborate effectively and deliver features faster. With branches isolated from each other, changes can be merged and deployed without blocking or interfering with other ongoing work.

The workflow automates the deployment and promotion of bot versions across different environments, such as development, testing, and production.

Storing bot definitions in a version control system such as Git provides a comprehensive audit trail and enables efficient collaboration. Changes are tracked to individual developers, ensuring transparency and accountability throughout the development lifecycle. This approach also facilitates code reviews, enabling teams to identify and address issues before deploying to production.

By using AWS CodePipeline and other AWS services, the automated workflow can scale to accommodate increasing workloads and team sizes.

## Tools
<a name="streamline-amazon-lex-bot-development-and-deployment-using-an-automated-workflow-tools"></a>

**AWS services**
+ [AWS Cloud Development Kit (AWS CDK)](https://docs.aws.amazon.com/cdk/v2/guide/home.html) is an open-source software development framework for defining AWS Cloud infrastructure in code by using familiar programming languages and provisioning it through CloudFormation. The sample implementation in this pattern uses Python.
+ [AWS CDK Command Line Interface (AWS CDK CLI)](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html), also called the AWS CDK Toolkit, is the primary tool for interacting with your AWS CDK app. It runs your app, interrogates the application model that you defined, and produces and deploys the CloudFormation templates that the AWS CDK generates.
+ [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) helps you set up AWS resources, provision them quickly and consistently, and manage them throughout their lifecycle across AWS accounts and AWS Regions. This pattern uses CloudFormation for deploying the Amazon Lex bot configurations and related resources using infrastructure as code.
+ [AWS CodeBuild](https://docs.aws.amazon.com/codebuild/latest/userguide/welcome.html) is a fully managed build service that helps you compile source code, run unit tests, and produce artifacts that are ready to deploy. This pattern uses CodeBuild for building and packaging the deployment artifacts.
+ [AWS CodePipeline](https://docs.aws.amazon.com/codepipeline/latest/userguide/welcome.html) helps you quickly model and configure the different stages of a software release and automate the steps required to release software changes continuously. This pattern uses CodePipeline to orchestrate the continuous delivery pipeline.
+ [AWS Command Line Interface (AWS CLI)](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) is an open source tool that helps you interact with AWS services through commands in your command line shell.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a compute service that helps you run code without needing to provision or manage servers. It runs your code only when needed and scales automatically, so you pay only for the compute time that you use.
+ [Amazon Lex V2](https://docs.aws.amazon.com/lexv2/latest/dg/what-is.html) is an AWS service for building conversational interfaces (bots) for applications using voice and text.
+ [AWS SDK for Python (Boto3)](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/quickstart.html) is a software development kit that helps you integrate your Python application, library, or script with AWS services.

**Other tools**
+ [Git](https://git-scm.com/docs) is an open source distributed version control system.

**Code repository**

The code for this pattern is available in the GitHub [management-framework-sample-for-amazon-lex](https://github.com/aws-samples/management-framework-sample-for-amazon-lex) repository. The code repo contains the following folders and files:
+ `prerequisite` folder – Contains CloudFormation stack definitions (using the AWS CDK) for setting up the required resources and environments.
+ `prerequisite/lexmgmtworkflow` folder – Main directory for the Lex Management Workflow project, including stack definitions and Python code.
+ `prerequisite/tests` – Contains unit tests.
+ `src` folder – Source code directory, including Amazon Lex bot management wrapper and utilities.
+ `src/dialogue_lambda` – Source code directory of the dialogue hook Lambda function that intercepts and processes user inputs during a conversation with an Amazon Lex bot.

## Best practices
<a name="streamline-amazon-lex-bot-development-and-deployment-using-an-automated-workflow-best-practices"></a>
+ **Separation of concerns**
  + Maintain a clear separation of responsibilities between the DevOps, development, and production environments.
  + Use separate AWS accounts for each environment to enforce proper isolation and security boundaries.
  + Use cross-account roles and least-privilege access principles to ensure controlled access between environments.
+ **Infrastructure as code**
  + Regularly review and update the infrastructure code to align with best practices and evolving requirements.
  + Establish a clear branching and merging strategy for the source code repository.
+ **Testing and validation**
  + Implement automated testing at various stages of the pipeline to catch issues early in the development cycle.
  + Use the Amazon Lex console or automated testing frameworks to validate bot configurations and functionality before promoting to higher environments.
  + Consider implementing manual approval gates for deployments to production or critical environments.
+ **Monitoring and logging **
  + Set up monitoring and logging mechanisms for the pipelines, deployments, and bot interactions.
  + Monitor pipeline events, deployment statuses, and bot performance metrics to identify and address issues promptly.
  + Use AWS services such as Amazon CloudWatch, AWS CloudTrail, and AWS X-Ray for centralized logging and monitoring.
  + Regularly review and analyze the performance, efficiency, and effectiveness of the automated workflow.
+ **Security and compliance**
  + Implement secure coding practices and follow AWS security best practices for Amazon Lex bot development and deployment.
  + Regularly review and update IAM roles, policies, and permissions to align with the principle of least privilege.
  + Consider integrating security scanning and compliance checks into the pipelines.

## Epics
<a name="streamline-amazon-lex-bot-development-and-deployment-using-an-automated-workflow-epics"></a>

### Set up IaC for Amazon Lex bot management
<a name="set-up-iac-for-lex2-bot-management"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Set up the local CDK environment. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/streamline-amazon-lex-bot-development-and-deployment-using-an-automated-workflow.html) | AWS DevOps | 
| Create a cross-account role in the `devops` environment. | The `devops` account is responsible for hosting and managing the CI/CD pipelines. To enable the CI/CD pipelines to interact with the `dev` and `prod` environments, run the following commands to create a cross-account role in the `devops` account.<pre>cdk bootstrap --profile=devops<br /><br />cdk deploy LexMgmtDevopsRoleStack -c dev-account-id=222222222222 -c prod-account-id=333333333333 --profile=devops</pre> | AWS DevOps | 
| Create a cross-account role in the `dev` environment. | Create an IAM role in the `dev` account with the necessary permissions to allow the `devops` account to assume this role. The CI/CD pipeline uses this role to perform actions in the `dev` account, such as deploying and managing Amazon Lex bot resources. To create the IAM role, run the following commands:<pre>cdk bootstrap --profile=dev<br /><br />cdk deploy LexMgmtCrossaccountRoleStack -c devops-account-id=111111111111 --profile=dev</pre> | AWS DevOps | 
| Create a cross-account role in the `prod` environment. | Create an IAM role in the `prod` account with the necessary permissions to allow the `devops` account to assume this role. The CI/CD pipeline uses this role to perform actions in the `prod` account, such as deploying and managing Amazon Lex bot resources.<pre>cdk bootstrap --profile=prod<br /><br />cdk deploy LexMgmtCrossaccountRoleStack -c devops-account-id=111111111111 --profile=prod</pre> | AWS DevOps | 
| Create pipelines in the `devops` environment. | To manage the development workflow for Amazon Lex bots, run the following command to set up pipelines in the `devops` environment.<pre>cdk deploy LexMgmtWorkflowStack -c devops-account-id=111111111111 -c dev-account-id=222222222222 -c prod-account-id=333333333333 --profile=devops</pre> | AWS DevOps | 

### Establish the baseline for the main bot
<a name="establish-the-baseline-for-the-main-bot"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Define the initial version of the main bot. | To define the initial version of the main bot, [trigger](https://docs.aws.amazon.com/codepipeline/latest/userguide/concepts.html#concepts-triggers) the `BaselineBotPipeline` pipeline. The pipeline deploys the basic bot definition that’s defined in the CloudFormation template, exports the main bot definition as .json files, and stores the main bot code in a version control system. | AWS DevOps | 

### Implement the feature development workflow
<a name="implement-the-feature-development-workflow"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the ticket bot to develop and test a feature. | `TicketBot` is a new bot instance that’s imported from the existing main bot definition in the feature branch. This approach ensures that the new bot has all the current functionality and configurations from the main bot. To define the initial version of the ticket bot, trigger the `CreateTicketBotPipeline` pipeline. The pipeline creates a new feature branch in the version control system and creates a new ticket bot instance based on the main bot. | Lex Bot Developer | 
| Develop and test the ticket bot feature.  | To develop and test the feature, sign in to the AWS Management Console and open the Amazon Lex console at [https://console.aws.amazon.com/lex/](https://console.aws.amazon.com/lex/). For more information, see [Testing a bot using the console](https://docs.aws.amazon.com/lexv2/latest/dg/test-bot.html) in the Amazon Lex documentation. With the `TicketBot` instance, you can now add, modify, or extend the bot's functionality to implement the new feature. For example, you can create or modify intents, utterances, slots, and dialog flows. For more information, see [Adding intents](https://docs.aws.amazon.com/lexv2/latest/dg/add-intents.html) in the Amazon Lex documentation. | Lex Bot Developer | 
| Export the ticket bot definition. | The exported bot definition is essentially a representation of the bot's configuration and functionality in a JSON format. To export the ticket bot definition, trigger the `ExportTicketBotPipeline` pipeline. The pipeline exports the ticket bot definition as .json files and stores the ticket bot code in a feature branch in the version control system. (A minimal Boto3 export sketch follows this table.) | Lex Bot Developer | 
| Rebase the feature branch from the latest main branch. | During the development of a new feature, the main branch might have received other changes from different developers or teams. To incorporate these changes into the feature branch, perform a Git `rebase` operation. This operation essentially replays the commits from the feature branch on top of the latest commits from the main branch, ensuring that the feature branch includes all the latest changes. | Lex Bot Developer | 
| Import and validate the rebased ticket bot. | After you rebase the feature branch, you must import it into the ticket bot instance. This import updates the existing ticket bot with the latest changes from the rebased branch. To import the rebased ticket bot, trigger the `ImportTicketBotPipeline` pipeline. The pipeline imports the ticket bot definition .json files in the feature branch in the version control system into the `TicketBot` instance. | Lex Bot Developer | 
| Validate the rebased bot definition. | After you import the rebased bot definition, it's crucial to validate its functionality. You want to make sure that the new feature works as expected and doesn't conflict with existing functionality. This validation typically involves testing the bot with various input scenarios, checking the responses, and verifying that the bot behaves as intended. You can perform validation in either of the following ways:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/streamline-amazon-lex-bot-development-and-deployment-using-an-automated-workflow.html) | Lex Bot Developer | 
| Merge the feature branch into the main branch. | After you develop and test the new feature in the isolated `TicketBot` instance, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/streamline-amazon-lex-bot-development-and-deployment-using-an-automated-workflow.html) | Lex Bot Developer, Repository Administrator | 
| Delete the feature branch and the ticket bot.  | After a feature branch is merged successfully into the main branch, delete the feature branch and the ticket bot from the source code repo. To delete the feature branch and the ticket bot, trigger the `DeleteTicketBotPipeline` pipeline. The pipeline removes temporary bot resources that were created during the development process (for example, the ticket bot). This action helps to maintain a clean repo and prevent confusion or conflicts with future feature branches.  | Lex Bot Developer | 
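
The `ExportTicketBotPipeline` pipeline automates the export step described in this table. If you want to experiment with a direct export outside the pipeline, the following Python (Boto3) sketch shows the underlying Lex V2 export calls; the bot ID, bot version, and polling interval are assumptions that you would replace with your own values.

<pre>
import time
import boto3

lex = boto3.client("lexv2-models")

# Start an export of the bot definition in Lex JSON format.
# "BOT_ID" and "DRAFT" are placeholders for your bot ID and bot version.
export = lex.create_export(
    resourceSpecification={
        "botExportSpecification": {"botId": "BOT_ID", "botVersion": "DRAFT"}
    },
    fileFormat="LexJson",
)

# Poll until the export completes, then print the presigned download URL
# for the .zip archive that contains the bot definition .json files.
while True:
    status = lex.describe_export(exportId=export["exportId"])
    if status["exportStatus"] in ("Completed", "Failed"):
        break
    time.sleep(5)

print(status["exportStatus"], status.get("downloadUrl", ""))
</pre>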

### Maintain the main bot
<a name="maintain-the-main-bot"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Import the latest main bot definition into the `dev` environment. | To import the latest main bot definition in the main branch into the `dev` environment, trigger the `DeployBotDevPipeline` pipeline. The pipeline also creates a Git tag on approval. (For a programmatic way to start a pipeline, see the sketch after this table.) | AWS DevOps | 
| Import the latest main bot definition into the `prod` environment. | To import the latest bot definition in the main branch into the `prod` environment, provide the tag reference from the previous task as a parameter and trigger the `DeployBotProdPipeline` pipeline. The pipeline imports the latest bot definition from a specific tag into the `prod` environment. | AWS DevOps | 
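
Both tasks in this epic start a pipeline. If you prefer to trigger a pipeline programmatically rather than from the console, the following Python (Boto3) sketch shows the basic call. The pipeline name comes from this pattern, and how the tag reference is passed to `DeployBotProdPipeline` depends on how that pipeline is configured, so this sketch shows only the trigger.

<pre>
import boto3

codepipeline = boto3.client("codepipeline")

# Start the pipeline that deploys the main bot definition to the dev environment.
response = codepipeline.start_pipeline_execution(name="DeployBotDevPipeline")

print("Started execution:", response["pipelineExecutionId"])
</pre>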

## Troubleshooting
<a name="streamline-amazon-lex-bot-development-and-deployment-using-an-automated-workflow-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| When you deploy Amazon Lex bots to different AWS accounts, the tooling services must have the necessary permissions to access resources in those accounts. | To grant cross-account access, use IAM roles and policies. Create IAM roles in the target accounts and attach policies to the roles that grant the required permissions. Then, assume these roles from the account where the Amazon Lex bot is deployed. For more information, see [IAM permissions required to import](https://docs.aws.amazon.com/lexv2/latest/dg/import.html#import-permissions) and [IAM permissions required to export bots in Lex V2](https://docs.aws.amazon.com/lexv2/latest/dg/export.html#export-permissions) in the Amazon Lex documentation. (A minimal role-assumption sketch follows this table.) | 
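
The following Python (Boto3) sketch illustrates the cross-account access that this solution relies on: code running in the `devops` account assumes a role in a target account and then calls Amazon Lex with the temporary credentials. The role name, account ID, and session name are assumptions; use the role that your `LexMgmtCrossaccountRoleStack` actually creates.

<pre>
import boto3

sts = boto3.client("sts")

# Assume the cross-account role in the target (dev or prod) account.
# The role name and account ID below are placeholders.
credentials = sts.assume_role(
    RoleArn="arn:aws:iam::222222222222:role/LexMgmtCrossaccountRole",
    RoleSessionName="lex-bot-management",
)["Credentials"]

# Call Amazon Lex in the target account by using the temporary credentials.
lex = boto3.client(
    "lexv2-models",
    aws_access_key_id=credentials["AccessKeyId"],
    aws_secret_access_key=credentials["SecretAccessKey"],
    aws_session_token=credentials["SessionToken"],
)

print([bot["botName"] for bot in lex.list_bots()["botSummaries"]])
</pre>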

## Related resources
<a name="streamline-amazon-lex-bot-development-and-deployment-using-an-automated-workflow-resources"></a>
+ [Importing bots in Amazon Lex V2](https://docs.aws.amazon.com/lexv2/latest/dg/import.html)
+ [Start a pipeline in CodePipeline](https://docs.aws.amazon.com/codepipeline/latest/userguide/pipelines-about-starting.html)
+ [Working with Amazon Lex V2 bots](https://docs.aws.amazon.com/lexv2/latest/dg/building-bots.html)

# Coordinate resource dependency and task execution by using the AWS Fargate WaitCondition hook construct
<a name="use-the-aws-fargate-waitcondition-hook-construct"></a>

*Stan Fan, Amazon Web Services*

## Summary
<a name="use-the-aws-fargate-waitcondition-hook-construct-summary"></a>

This pattern describes the WaitCondition hook (`waitcondition-hook-for-aws-fargate-task`) npm package, which is a cloud-native solution designed for orchestrating [AWS Fargate](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/AWS_Fargate.html) tasks in Amazon Elastic Container Service (Amazon ECS) clusters. 

The WaitCondition hook is an AWS Cloud Development Kit (AWS CDK) construct that’s specifically tailored for integration with AWS CloudFormation. The WaitCondition hook provides the following key capabilities:
+ Acts as a wait condition mechanism, pausing CloudFormation stack execution until a specified Fargate task completes, which helps with orderly deployments and resource provisioning.
+ Supports TypeScript and Python, making it ideal for AWS CDK projects.
+ Allows developers and architects to orchestrate deployments by coordinating task completion and resource management for containerized applications on AWS.
+ Enables running Fargate tasks with one or multiple containers embedded in a CloudFormation lifecycle, and can handle task failures by rolling back the CloudFormation stack after a task failure.
+ Provides flexibility to add dependencies between resources and the Fargate task execution results, enabling custom tasks or invoking other endpoints. For instance, you can pause a CloudFormation stack and wait for a database migration (done by a Fargate task) and provision other resources that might depend on the success of the database migration.

## Prerequisites and limitations
<a name="use-the-aws-fargate-waitcondition-hook-construct-prereqs"></a>

**Prerequisites**
+ An active AWS account.
+ AWS Cloud Development Kit (AWS CDK) Command Line Interface (CLI) installed on a local workstation. For more information, see [AWS CDK CLI reference](https://docs.aws.amazon.com/cdk/v2/guide/cli.html) in the AWS CDK documentation.
+ Node package manager (npm), installed on a local workstation and configured for the [AWS CDK in TypeScript](https://docs.aws.amazon.com/cdk/v2/guide/work-with-cdk-typescript.html). For more information, see [Downloading and installing Node.js and npm](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) in the npm documentation.
+ Yarn installed on a local workstation. For more information, see [Installation](https://yarnpkg.com/getting-started/install) in the Yarn documentation.

**Limitations**
+ This solution is deployed to a single AWS account.
+ The expected return code of the container is `0` for success. Any other return code indicates failure, and the CloudFormation stack will roll back. 
+ Some AWS services aren’t available in all AWS Regions. For Region availability, see [AWS services by Region](https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/). For specific endpoints, see [Service endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/aws-service-information.html), and choose the link for the service.

## Architecture
<a name="use-the-aws-fargate-waitcondition-hook-construct-architecture"></a>

The following diagram shows the construct architecture.

![\[AWS Step Functions workflow of waitcondition-hook-for-aws-fargate-task construct.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/e58680e3-f89f-422f-b0e1-e85605ae8bf9/images/598020df-908c-4486-9844-c05af759c18a.png)


The diagram shows the workflow of `waitcondition-hook-for-aws-fargate-task`:

1. `WaitCondition` and `WaitConditionHandler` are provisioned to listen to the response from the AWS Lambda functions.

1. Depending on the result of the task, either the `CallbackFunction` or the `ErrorHandlerFunction` is triggered when the Fargate task finishes.

1. The Lambda function sends a SUCCEED or FAILURE signal to `WaitConditionHandler`.

1. `WaitConditionHandler` continues to provision the resources if the Fargate task succeeds, or rolls back the stack if the task fails. (A minimal sketch of this wait condition signaling follows this list.)
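
The construct's packaged Lambda functions handle this signaling for you. For context, the following Python sketch shows the general CloudFormation wait condition mechanism that such a callback function can use: an HTTP PUT of a status document to the presigned wait condition URL. The function name and field values are illustrative assumptions, not the construct's actual implementation.

<pre>
import json
import urllib.request

def send_wait_condition_signal(signal_url: str, success: bool, task_arn: str) -> None:
    """Send a SUCCESS or FAILURE signal to a CloudFormation wait condition handle."""
    body = json.dumps({
        "Status": "SUCCESS" if success else "FAILURE",
        "Reason": "Fargate task finished" if success else "Fargate task failed",
        "UniqueId": task_arn,
        "Data": "See the Amazon ECS task logs for details.",
    }).encode("utf-8")

    # The presigned wait condition URL expects an empty Content-Type header.
    request = urllib.request.Request(
        signal_url, data=body, method="PUT", headers={"Content-Type": ""}
    )
    urllib.request.urlopen(request)
</pre>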

The following diagram shows an example of a workflow to perform a database migration.

![\[Workflow of Amazon RDS database migration using WaitCondition hook construct.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/e58680e3-f89f-422f-b0e1-e85605ae8bf9/images/3b83fc2a-80bb-4ba9-9637-782060493cf0.png)


The example workflow uses the `waitcondition-hook-for-aws-fargate-task` construct to perform a database migration, as follows:

1. An Amazon Relational Database Service (Amazon RDS) instance is provisioned.

1. The `waitcondition-hook-for-aws-fargate-task` construct runs the database migration task and pauses the stack before dependent resources, such as an Amazon Elastic Compute Cloud (Amazon EC2) instance, are provisioned.

1. If the migration task finishes successfully, it sends a Succeed signal to CloudFormation. Otherwise, it sends a Fail signal to CloudFormation and rolls back the stack.

## Tools
<a name="use-the-aws-fargate-waitcondition-hook-construct-tools"></a>

**AWS services**
+ [AWS Cloud Development Kit (AWS CDK)](https://docs.aws.amazon.com/cdk/v2/guide/home.html) is a software development framework that helps you define cloud infrastructure in code and provision it through CloudFormation.
+ [CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) helps you set up AWS resources, provision them quickly and consistently, and manage them throughout their lifecycle across AWS accounts and AWS Regions.
+ [Amazon CloudWatch](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/WhatIsCloudWatch.html) helps you monitor the metrics of your AWS resources and the applications you run on AWS in real time.
+ [Amazon Elastic Container Service (Amazon ECS)](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html) is a fast and scalable container management service that helps you run, stop, and manage containers on a cluster.
+ [AWS Fargate](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/AWS_Fargate.html) helps you run containers without needing to manage servers or Amazon EC2 instances. It’s used in conjunction with Amazon ECS.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a compute service that helps you run code without needing to provision or manage servers. It runs your code only when needed and scales automatically, so you pay only for the compute time that you use.
+ [AWS Step Functions](https://docs.aws.amazon.com/step-functions/latest/dg/welcome.html) is a serverless orchestration service that helps you combine AWS Lambda functions and other AWS services to build business-critical applications.
+ [Amazon Virtual Private Cloud (Amazon VPC)](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html) helps you launch AWS resources into a virtual network that you’ve defined. This virtual network resembles a traditional network that you would operate in your own data center, with the benefits of using the scalable infrastructure of AWS. 

**Other tools**
+ [npm](https://docs.npmjs.com/about-npm) is a software registry that runs in a Node.js environment and is used to share or borrow packages and manage deployment of private packages.
+ [Yarn](https://yarnpkg.com/) is an open source package manager that you can use to manage dependencies in JavaScript projects. Yarn can assist you with installing, updating, configuring, and removing package dependencies.

**Code repository**

The code for this pattern is available in the GitHub [waitcondition-hook-for-aws-fargate-task](https://github.com/aws-samples/waitcondition-hook-for-aws-fargate-task) repository.

## Best practices
<a name="use-the-aws-fargate-waitcondition-hook-construct-best-practices"></a>
+ When building your AWS CDK app, follow the [Best practices for developing and deploying cloud infrastructure with the AWS CDK](https://docs.aws.amazon.com/cdk/v2/guide/best-practices.html) in the AWS CDK v2 documentation.
+ For the AWS Fargate task, follow the [Best practices for Amazon ECS container images](https://docs.aws.amazon.com/AmazonECS/latest/bestpracticesguide/application.html) in the Amazon ECS documentation.

## Epics
<a name="use-the-aws-fargate-waitcondition-hook-construct-epics"></a>

### Set up the AWS CDK
<a name="set-up-the-cdk"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Install the AWS CDK. | To install the AWS CDK on your local machine or other environment, run the following command: <pre>npm install -g aws-cdk@latest</pre> | Cloud architect, App developer | 
| Bootstrap the AWS CDK. | [Bootstrapping](https://docs.aws.amazon.com/cdk/v2/guide/bootstrapping.html) is the process of preparing an [environment](https://docs.aws.amazon.com/cdk/v2/guide/environments.html) for deployment. To bootstrap your AWS CDK toolkit for the target AWS account and AWS Region, run the following command:<pre>cdk bootstrap aws://ACCOUNT-NUMBER-1/REGION-1 </pre>This command creates a CloudFormation stack named `CDKToolkit`.  | Cloud architect | 

### Run the WaitCondition hook for AWS Fargate tasks construct
<a name="run-the-waitcondition-hook-for-fargatelong-tasks-construct"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the CDK project. | Create a CDK project using the language that you prefer. This pattern uses TypeScript. To create a CDK project using TypeScript, run the following command: `cdk init app --language typescript` | Cloud architect | 
| Install the package. | Run `npm install` in the root path of your CDK project. After the CDK libraries have been installed, run the following command to install `waitcondition-hook-for-aws-fargate-task`: `yarn add waitcondition-hook-for-aws-fargate-task` | Cloud architect | 
| Build your CDK application and Amazon ECS components. | Build your CDK project. An Amazon ECS task definition resource is required. For information about creating a task definition, see [Amazon ECS task definitions](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definitions.html) in the Amazon ECS documentation. The following example uses this construct:<pre>import * as cdk from 'aws-cdk-lib';<br />import { Vpc } from 'aws-cdk-lib/aws-ec2';<br />import * as ecr from 'aws-cdk-lib/aws-ecr';<br />import * as ecs from 'aws-cdk-lib/aws-ecs';<br />import { Construct } from 'constructs';<br />import { FargateRunner } from 'waitcondition-hook-for-aws-fargate-task';<br />import { Queue } from 'aws-cdk-lib/aws-sqs';<br /><br />export class FargateRunnerStack extends cdk.Stack {<br />    constructor(scope: Construct, id: string, props?: cdk.StackProps) {<br />        super(scope, id, props);<br />        // Define the VPC<br />        const vpc = new Vpc(this, 'MyVpc');<br />        // Define the Fargate Task<br />        const taskDefinition = new ecs.FargateTaskDefinition(this, 'MyTask', {});<br />        // Import an existing Amazon ECR repository<br />        const repo = ecr.Repository.fromRepositoryName(this, 'MyRepo', 'RepoName');<br />        // Add a container to the task<br />        taskDefinition.addContainer('MyContainer', {<br />            image: ecs.ContainerImage.fromEcrRepository(repo),<br />        });<br />        // Create the Fargate runner<br />        const myFargateRunner = new FargateRunner(this, 'MyRunner', {<br />            fargateTaskDef: taskDefinition,<br />            timeout: `${60 * 5}`,<br />            vpc: vpc,<br />        });<br />        // Create the SQS queue<br />        const myQueue = new Queue(this, 'MyQueue', {});<br />        // Add dependency<br />        myQueue.node.addDependency(myFargateRunner);<br />    }<br />}</pre> | Cloud architect | 
| Synth and launch the CDK application. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/use-the-aws-fargate-waitcondition-hook-construct.html)The `waitcondition-hook-for-aws-fargate-task` construct runs the Fargate task.  | Cloud architect | 

### Clean up
<a name="clean-up"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clean up resources. | To clean up the resources provisioned from the previous step, run the following command:<pre>cdk destroy </pre> | Cloud architect | 

## Troubleshooting
<a name="use-the-aws-fargate-waitcondition-hook-construct-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| General CloudFormation stack failure | To help troubleshoot general CloudFormation stack failures, add the `--no-rollback` flag as shown in the following example: <pre>cdk deploy --no-rollback</pre>This flag prevents the CloudFormation stack from rolling back, which preserves the provisioned resources so that you can troubleshoot. For more information, see [Choose how to handle failures when provisioning resources](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stack-failure-options.html) in the CloudFormation documentation. | 
| AWS Step Functions failure | An AWS Step Functions state machine might fail to execute for different reasons. With `--no-rollback` configured, use the following steps to troubleshoot:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/use-the-aws-fargate-waitcondition-hook-construct.html)For more information, see [Troubleshooting issues in Step Functions](https://docs.aws.amazon.com/step-functions/latest/dg/troubleshooting.html) and [Viewing execution details in the Step Functions console](https://docs.aws.amazon.com/step-functions/latest/dg/concepts-view-execution-details.html#exec-details-intf-step-details) in the AWS Step Functions documentation. | 
| AWS Lambda function failure | This construct provisions two Lambda functions: `CallbackFunction` and `ErrorhandlerFunction`. They can fail for various reasons such as unhandled exceptions. Use the following steps to troubleshoot: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/use-the-aws-fargate-waitcondition-hook-construct.html)For more information, see [Troubleshooting issues in Lambda](https://docs.aws.amazon.com/lambda/latest/dg/lambda-troubleshooting.html) in the AWS Lambda documentation. | 

## Related resources
<a name="use-the-aws-fargate-waitcondition-hook-construct-resources"></a>

**AWS documentation**
+ [AWS CDK Construct API Reference](https://docs.aws.amazon.com/cdk/api/v2/docs/aws-construct-library.html)
+ [Getting started with the AWS CDK](https://docs.aws.amazon.com/cdk/v2/guide/getting_started.html)
+ [Learn how to create and use Amazon ECS resources](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/getting-started.html)
+ [Learn how to get started with Step Functions](https://docs.aws.amazon.com/step-functions/latest/dg/getting-started-with-sfn.html)
+ [What is AWS CDK?](https://docs.aws.amazon.com/cdk/v2/guide/home.html)

**Other resources**
+ [Waitcondition Hook for AWS Fargate task](https://www.npmjs.com/package/waitcondition-hook-for-aws-fargate-task) (npm)
+ [waitcondition-hook-for-aws-fargate-task 1.0.6](https://pypi.org/project/waitcondition-hook-for-aws-fargate-task/) (pypi.org)

# Use third-party Git source repositories in AWS CodePipeline
<a name="use-third-party-git-source-repositories-in-aws-codepipeline"></a>

*Kirankumar Chandrashekar, Amazon Web Services*

## Summary
<a name="use-third-party-git-source-repositories-in-aws-codepipeline-summary"></a>

This pattern describes how to use AWS CodePipeline with third-party Git source repositories.

[AWS CodePipeline](https://docs.aws.amazon.com/codepipeline/latest/userguide/concepts-continuous-delivery-integration.html) is a continuous delivery service that automates tasks for building, testing, and deploying your software. The service currently supports Git repositories managed by GitHub, [AWS CodeCommit](https://aws.amazon.com/codecommit), and Atlassian Bitbucket. However, some enterprises use third-party Git repositories that are integrated with their single sign-on (SSO) service and Microsoft Active Directory for authentication. You can use these third-party Git repositories as sources for CodePipeline by creating custom actions and webhooks.

A webhook is an HTTP notification that detects events in another tool, such as a GitHub repository, and connects those external events to a pipeline. When you create a webhook in CodePipeline, the service returns a URL that you can use in your Git repository webhook. If you push code to a specific branch of the Git repository, the Git webhook initiates the CodePipeline webhook through this URL and sets the source stage of the pipeline to **In Progress**.

When the pipeline is in this state, a job worker polls CodePipeline for the custom job, runs the job, and sends a success or failure status to CodePipeline. Because the pipeline is in the source stage, the job worker gets the contents of the Git repository, zips them, and uploads the archive to the Amazon Simple Storage Service (Amazon S3) bucket where artifacts for the pipeline are stored, using the object key provided by the polled job. You can also associate a transition for the custom action with an event in Amazon CloudWatch, and initiate the job worker based on the event. This setup enables you to use third-party Git repositories that the service doesn't natively support as sources for CodePipeline.
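
The following Python (Boto3) sketch outlines the polling loop that such a job worker runs. It assumes that the custom action was registered with provider `CustomGitSource` and version `1`, and it omits the Git clone and the upload of the zipped archive to the S3 artifact location that the polled job provides.

<pre>
import boto3

codepipeline = boto3.client("codepipeline")

# Poll for jobs that target the custom source action type.
jobs = codepipeline.poll_for_jobs(
    actionTypeId={
        "category": "Source",
        "owner": "Custom",
        "provider": "CustomGitSource",  # assumption: matches the registered custom action
        "version": "1",
    },
    maxBatchSize=1,
)["jobs"]

for job in jobs:
    # Acknowledge the job so that CodePipeline knows a worker has claimed it.
    codepipeline.acknowledge_job(jobId=job["id"], nonce=job["nonce"])

    try:
        # Here the worker would clone the Git branch, zip the contents, and
        # upload the archive to the S3 artifact location found in job["data"].
        codepipeline.put_job_success_result(jobId=job["id"])
    except Exception as error:
        codepipeline.put_job_failure_result(
            jobId=job["id"],
            failureDetails={"type": "JobFailed", "message": str(error)},
        )
</pre>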

## Prerequisites and limitations
<a name="use-third-party-git-source-repositories-in-aws-codepipeline-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ A Git repository that supports webhooks and can connect to a CodePipeline webhook URL through the internet 
+ AWS Command Line Interface (AWS CLI) [installed](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html) and [configured](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html) to work with the AWS account

## Architecture
<a name="use-third-party-git-source-repositories-in-aws-codepipeline-architecture"></a>

The pattern involves these steps:

1. The user commits code to a Git repository.

1. The Git webhook is called.

1. The CodePipeline webhook is called.

1. The pipeline is set to **In Progress**, and the source stage is set to the **In Progress** state.

1. The source stage action initiates a CloudWatch Events rule, indicating that it was started.

1. The CloudWatch event initiates a Lambda function.

1. The Lambda function gets the details of the custom action job.

1. The Lambda function initiates AWS CodeBuild and passes it all the job-related information.

1. CodeBuild gets the public SSH key or user credentials for HTTPS Git access from Secrets Manager.

1. CodeBuild clones the Git repository for a specific branch.

1. CodeBuild zips the archive and uploads it to the S3 bucket that serves as the CodePipeline artifact store.

![\[Workflow that uses third-party Git source repos as sources for AWS CodePipeline.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/84284bec-b39d-466a-9fd9-994be2c953df/images/85555dab-7317-40f5-86a7-ccb8987c5bf3.png)


 

## Tools
<a name="use-third-party-git-source-repositories-in-aws-codepipeline-tools"></a>
+ [AWS CodePipeline](https://aws.amazon.com/codepipeline/) – AWS CodePipeline is a fully managed [continuous delivery](https://aws.amazon.com/devops/continuous-delivery/) service that helps you automate your release pipelines for fast and reliable application and infrastructure updates. CodePipeline automates the build, test, and deployment phases of your release process for each code change, based on the release model you define. This enables you to rapidly and reliably deliver features and updates. You can integrate AWS CodePipeline with third-party services such as GitHub or with your own custom plugin.
+ [AWS Lambda](https://aws.amazon.com/lambda/) – AWS Lambda lets you run code without provisioning or managing servers. With Lambda, you can run code for virtually any type of application or backend service with no administration necessary. You upload your code and Lambda takes care of everything required to run and scale your code with high availability. You can set up your code to automatically initiate from other AWS services or call it directly from any web or mobile app.
+ [AWS CodeBuild](https://aws.amazon.com/codebuild/) – AWS CodeBuild is a fully managed [continuous integration](https://aws.amazon.com/devops/continuous-integration/) service that compiles source code, runs tests, and produces software packages that are ready to deploy. With CodeBuild, you don't need to provision, manage, and scale your own build servers. CodeBuild scales continuously and processes multiple builds concurrently, so your builds are not left waiting in a queue. You can get started quickly by using prepackaged build environments, or you can create custom build environments that use your own build tools.
+ [AWS Secrets Manager](https://aws.amazon.com/secrets-manager/) – AWS Secrets Manager helps you protect secrets needed to access your applications, services, and IT resources. The service enables you to rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle. Users and applications retrieve secrets by calling Secrets Manager APIs, without having to hardcode sensitive information in plain text. Secrets Manager offers secret rotation with built-in integration for Amazon Relational Database Service (Amazon RDS), Amazon Redshift, and Amazon DocumentDB. The service can be extended to support other types of secrets, including API keys and OAuth tokens. In addition, Secrets Manager lets you control access to secrets by using fine-grained permissions, and audit secret rotation centrally for resources in the AWS Cloud, third-party services, and on-premises environments.
+ [Amazon CloudWatch](https://aws.amazon.com/cloudwatch/) – Amazon CloudWatch is a monitoring and observability service built for DevOps engineers, developers, site reliability engineers (SREs), and IT managers. CloudWatch provides you with data and actionable insights to monitor your applications, respond to systemwide performance changes, optimize resource utilization, and get a unified view of operational health. CloudWatch collects monitoring and operational data in the form of logs, metrics, and events, providing you with a unified view of AWS resources, applications, and services that run on AWS and on-premises servers. You can use CloudWatch to detect anomalous behavior in your environments, set alarms, visualize logs and metrics side by side, take automated actions, troubleshoot issues, and discover insights to keep your applications running smoothly.
+ [Amazon S3](https://aws.amazon.com/s3/) – Amazon Simple Storage Service (Amazon S3) is an object storage service that lets you store and protect any amount of data for a range of use cases, such as websites, mobile applications, backup and restore, archive, enterprise applications, IoT devices, and big data analytics. Amazon S3 provides easy-to-use management features to help you organize your data and configure finely tuned access controls to meet your specific business, organizational, and compliance requirements.

## Epics
<a name="use-third-party-git-source-repositories-in-aws-codepipeline-epics"></a>

### Create a custom action in CodePipeline
<a name="create-a-custom-action-in-codepipeline"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a custom action using AWS CLI or AWS CloudFormation. | This step involves creating a custom source action that can be used in the source stage of a pipeline in your AWS account in a particular Region. You must use AWS CLI or AWS CloudFormation (not the console) to create the custom source action. For more information about the commands and steps described in this and other epics, see the "Related resources" section at the end of this pattern. In AWS CLI, use the `create-custom-action-type` command. Use `--configuration-properties` to provide all the parameters required for the job worker to process when it polls CodePipeline for a job. Make sure to note the values provided to the `--provider` and `--action-version` options, so that you can use the same values when creating the pipeline with this custom source stage. You can also create the custom source action in AWS CloudFormation by using the resource type `AWS::CodePipeline::CustomActionType`. (A minimal Boto3 sketch of registering a custom action type follows this table.) | General AWS | 
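
The following Python (Boto3) sketch shows one way to register such a custom source action type. The provider name, version, and configuration property are assumptions; use the same values later when you reference the action in your pipeline.

<pre>
import boto3

codepipeline = boto3.client("codepipeline")

codepipeline.create_custom_action_type(
    category="Source",
    provider="CustomGitSource",  # assumption: your provider name
    version="1",                 # assumption: your action version
    configurationProperties=[
        {
            "name": "Branch",    # assumption: a property that the job worker needs
            "required": True,
            "key": True,
            "secret": False,
            "queryable": False,
            "description": "Git branch that the job worker should clone",
            "type": "String",
        }
    ],
    # A source action takes no input artifacts and produces one output artifact.
    inputArtifactDetails={"minimumCount": 0, "maximumCount": 0},
    outputArtifactDetails={"minimumCount": 1, "maximumCount": 1},
)
</pre>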

### Set up authentication
<a name="set-up-authentication"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an SSH key pair. | Create a Secure Shell (SSH) key pair. For instructions, see the GitHub documentation. | Systems/DevOps engineer | 
| Create a secret in AWS Secrets Manager. | Copy the contents of the private key from the SSH key pair and create a secret in AWS Secrets Manager. This secret is used for authentication when accessing the Git repository. (A minimal Boto3 sketch of creating the secret follows this table.) | General AWS | 
| Add the public key to the Git repository. | Add the public key from the SSH key pair to the Git repository account settings, for authentication against the private key. | Systems/DevOps engineer | 
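
The following Python (Boto3) sketch stores the private key from the SSH key pair as a secret; the key file path and secret name are assumptions.

<pre>
from pathlib import Path

import boto3

secretsmanager = boto3.client("secretsmanager")

# Read the SSH private key created in the previous task (the path is a placeholder).
private_key = Path("id_rsa").read_text()

# Store the key so that the job worker (CodeBuild) can retrieve it at build time.
secretsmanager.create_secret(
    Name="third-party-git/ssh-private-key",  # assumption: your secret name
    SecretString=private_key,
)
</pre>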

### Create a pipeline and webhook
<a name="create-a-pipeline-and-webhook"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a pipeline that includes the custom source action. | Create a pipeline in CodePipeline. When you configure the source stage, choose the custom source action that you created previously. You can do this in the AWS CodePipeline console or in AWS CLI. CodePipeline prompts you for the configuration properties that you set on the custom action. This information is required for the job worker to process the job for the custom action. Follow the wizard and create the next stage for the pipeline. | General AWS | 
| Create a CodePipeline webhook. | Create a webhook for the pipeline you created with the custom source action. You must use AWS CLI or AWS CloudFormation (not the console) to create the webhook. In AWS CLI, run the `put-webhook` command and provide the appropriate values for the webhook options. Make a note of the webhook URL that the command returns. If you're using AWS CloudFormation to create the webhook, use the resource type `AWS::CodePipeline::Webhook`. Make sure to output the webhook URL from the created resource, and make a note of it. (A minimal Boto3 sketch of creating the webhook follows this table.) | General AWS | 
| Create a Lambda function and CodeBuild project. | In this step, you use Lambda and CodeBuild to create a job worker that will poll CodePipeline for job requests for the custom action, run the job, and return the status result to CodePipeline. Create a Lambda function that is initiated by an Amazon CloudWatch Events rule when the custom source action stage of the pipeline transitions to "In Progress." When the Lambda function is initiated, it should get the custom action job details by polling for jobs. You can use the PollForJobs API to return this information. After the polled job information is obtained, the Lambda function should return an acknowledgment, and then process the information with the data it obtains from the configuration properties for the custom action. When the worker is ready to talk to the Git repository, you might initiate a CodeBuild project, because it's convenient to handle Git tasks by using the SSH client. | General AWS, code developer | 
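
The following Python (Boto3) sketch creates the webhook programmatically instead of with the `put-webhook` CLI command. The webhook name, pipeline name, action name, branch filter, and authentication mode are assumptions that you would adapt to your pipeline and to the authentication that your Git server supports.

<pre>
import boto3

codepipeline = boto3.client("codepipeline")

response = codepipeline.put_webhook(
    webhook={
        "name": "third-party-git-webhook",             # assumption
        "targetPipeline": "third-party-git-pipeline",  # assumption
        "targetAction": "Source",                      # assumption: your custom source action name
        # Start the pipeline only for pushes to the main branch.
        "filters": [{"jsonPath": "$.ref", "matchEquals": "refs/heads/main"}],
        "authentication": "UNAUTHENTICATED",
        "authenticationConfiguration": {},
    }
)

# Configure this URL as the webhook target in the third-party Git repository.
print(response["webhook"]["url"])
</pre>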

### Create an event in CloudWatch
<a name="create-an-event-in-cloudwatch"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a CloudWatch Events rule. | Create a CloudWatch Events rule that initiates the Lambda function as a target whenever the pipeline's custom action stage transitions to "In Progress." (A minimal Boto3 sketch of such a rule follows this table.) | General AWS | 
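
The following Python (Boto3) sketch creates such a rule and adds the Lambda function as a target. The rule name, pipeline name, stage name, and function ARN are assumptions, and granting the rule permission to invoke the function is omitted.

<pre>
import json

import boto3

events = boto3.client("events")

# Match the moment the pipeline's custom source stage transitions to STARTED.
event_pattern = {
    "source": ["aws.codepipeline"],
    "detail-type": ["CodePipeline Stage Execution State Change"],
    "detail": {
        "pipeline": ["third-party-git-pipeline"],  # assumption
        "stage": ["Source"],                       # assumption
        "state": ["STARTED"],
    },
}

events.put_rule(
    Name="third-party-git-source-started",
    EventPattern=json.dumps(event_pattern),
    State="ENABLED",
)

# Invoke the job worker Lambda function when the rule matches (the ARN is a placeholder).
events.put_targets(
    Rule="third-party-git-source-started",
    Targets=[
        {
            "Id": "job-worker-lambda",
            "Arn": "arn:aws:lambda:us-east-1:111111111111:function:git-job-worker",
        }
    ],
)
</pre>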

## Related resources
<a name="use-third-party-git-source-repositories-in-aws-codepipeline-resources"></a>

**Creating a custom action in CodePipeline**
+ [Create and add a custom action in CodePipeline](https://docs.aws.amazon.com/codepipeline/latest/userguide/actions-create-custom-action.html)
+ [AWS::CodePipeline::CustomActionType resource](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-codepipeline-customactiontype.html)

**Setting up authentication**
+ [Creating and Managing Secrets with AWS Secrets Manager](https://docs.aws.amazon.com/secretsmanager/latest/userguide/managing-secrets.html)

**Creating a pipeline and webhook**
+ [Create a Pipeline in CodePipeline](https://docs.aws.amazon.com/codepipeline/latest/userguide/pipelines-create.html)
+ [put-webhook command reference](https://docs.aws.amazon.com/cli/latest/reference/codepipeline/put-webhook.html)
+ [AWS::CodePipeline::Webhook resource](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-codepipeline-webhook.html)
+ [PollForJobs API reference](https://docs.aws.amazon.com/codepipeline/latest/APIReference/API_PollForJobs.html)
+ [Create and Add a Custom Action in CodePipeline](https://docs.aws.amazon.com/codepipeline/latest/userguide/actions-create-custom-action.html)
+ [Create a build project in AWS CodeBuild](https://docs.aws.amazon.com/codebuild/latest/userguide/create-project.html)

**Creating an event**
+ [Detect and react to changes in pipeline state with Amazon CloudWatch Events](https://docs.aws.amazon.com/codepipeline/latest/userguide/detect-state-changes-cloudwatch-events.html)

**Additional references**
+ [Working with pipelines in CodePipeline](https://docs.aws.amazon.com/codepipeline/latest/userguide/pipelines.html)
+ [AWS Lambda developer guide](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html)

# Create a CI/CD pipeline to validate Terraform configurations by using AWS CodePipeline
<a name="create-a-ci-cd-pipeline-to-validate-terraform-configurations-by-using-aws-codepipeline"></a>

*Aromal Raj Jayarajan and Vijesh Vijayakumaran Nair, Amazon Web Services*

## Summary
<a name="create-a-ci-cd-pipeline-to-validate-terraform-configurations-by-using-aws-codepipeline-summary"></a>

This pattern shows how to test HashiCorp Terraform configurations by using a continuous integration and continuous delivery (CI/CD) pipeline deployed by AWS CodePipeline.

Terraform is a command-line interface application that helps you use code to provision and manage cloud infrastructure and resources. The solution provided in this pattern creates a CI/CD pipeline that helps you validate the integrity of your Terraform configurations by running five [CodePipeline stages](https://docs.aws.amazon.com/codepipeline/latest/userguide/concepts.html#concepts-stages):

1. `"checkout"` pulls the Terraform configuration that you’re testing from an AWS CodeCommit repository.

1. `"validate"` runs infrastructure as code (IaC) validation tools, including [tfsec](https://github.com/aquasecurity/tfsec), [TFLint](https://github.com/terraform-linters/tflint), and [checkov](https://www.checkov.io/). The stage also runs the following Terraform IaC validation commands: `terraform validate` and `terraform fmt`.

1. `"plan"` shows what changes will be applied to the infrastructure if the Terraform configuration is applied.

1. `"apply"` uses the generated plan to provision the required infrastructure in a test environment.

1. `"destroy"` removes the test infrastructure that was created during the `"apply"` stage.

## Prerequisites and limitations
<a name="create-a-ci-cd-pipeline-to-validate-terraform-configurations-by-using-aws-codepipeline-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ AWS Command Line Interface (AWS CLI), [installed](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) and [configured](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html)
+ [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git), installed and configured on your local machine
+ [Terraform](https://learn.hashicorp.com/collections/terraform/aws-get-started?utm_source=WEBSITE&utm_medium=WEB_IO&utm_offer=ARTICLE_PAGE&utm_content=DOCS), installed and configured on your local machine

**Limitations**
+ This pattern’s approach deploys AWS CodePipeline into one AWS account and AWS Region only. Configuration changes are required for multi-account and multi-Region deployments.
+ The AWS Identity and Access Management (IAM) role that this pattern provisions (`codepipeline_iam_role`) follows the principle of least privilege. This IAM role’s permissions must be updated based on the specific resources that your pipeline needs to create.

**Product versions**
+ AWS CLI version 2.9.15 or later
+ Terraform version 1.3.7 or later

## Architecture
<a name="create-a-ci-cd-pipeline-to-validate-terraform-configurations-by-using-aws-codepipeline-architecture"></a>

**Target technology stack**
+ AWS CodePipeline
+ AWS CodeBuild
+ AWS CodeCommit
+ AWS IAM
+ Amazon Simple Storage Service (Amazon S3)
+ AWS Key Management Service (AWS KMS)
+ Terraform

**Target architecture**

The following diagram shows an example CI/CD pipeline workflow for testing Terraform configurations in CodePipeline.

![\[Architecture to test Terraform configurations by using an AWS CI/CD pipeline.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/4df7b1f8-8eef-4d85-a971-a7f158be9691/images/90b931c8-e745-4b52-92de-a367fb0f1f51.png)


The diagram shows the following workflow:

1. In CodePipeline, an AWS user initiates the actions proposed in a Terraform plan by running the `terraform apply` command from the command line.

1. AWS CodePipeline assumes an IAM service role that includes the policies required to access CodeCommit, CodeBuild, AWS KMS, and Amazon S3.

1. CodePipeline runs the `"checkout"` pipeline stage to pull the Terraform configuration from an AWS CodeCommit repository for testing.

1. CodePipeline runs the `"validate"` stage to test the Terraform configuration by running IaC validation tools and running Terraform IaC validation commands in a CodeBuild project.

1. CodePipeline runs the `"plan"` stage to create a plan in the CodeBuild project based on the Terraform configuration. The AWS user can review this plan before the changes are applied to the test environment.

1. CodePipeline runs the `"apply"` stage to implement the plan by using the CodeBuild project to provision the required infrastructure in the test environment.

1. CodePipeline runs the `"destroy"` stage, which uses CodeBuild to remove the test infrastructure that was created during the `"apply"` stage.

1. An Amazon S3 bucket stores pipeline artifacts, which are encrypted and decrypted by using an AWS KMS [customer managed key](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#customer-cmk).

## Tools
<a name="create-a-ci-cd-pipeline-to-validate-terraform-configurations-by-using-aws-codepipeline-tools"></a>

*AWS services*
+ [AWS CodePipeline](https://docs.aws.amazon.com/codepipeline/latest/userguide/welcome.html) helps you quickly model and configure the different stages of a software release and automate the steps required to release software changes continuously.
+ [AWS CodeBuild](https://docs.aws.amazon.com/codebuild/latest/userguide/welcome.html) is a fully managed build service that helps you compile source code, run unit tests, and produce artifacts that are ready to deploy.
+ [AWS CodeCommit](https://docs.aws.amazon.com/codecommit/latest/userguide/welcome.html) is a version control service that helps you privately store and manage Git repositories, without needing to manage your own source control system.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.
+ [AWS Key Management Service (AWS KMS)](https://docs.aws.amazon.com/kms/latest/developerguide/overview.html) helps you create and control cryptographic keys to help protect your data.
+ [Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) is a cloud-based object storage service that helps you store, protect, and retrieve any amount of data.

*Other services*
+ [HashiCorp Terraform](https://www.terraform.io/docs) is a command-line interface application that helps you use code to provision and manage cloud infrastructure and resources.

**Code**

The code for this pattern is available in the GitHub [aws-codepipeline-terraform-cicd-samples](https://github.com/aws-samples/aws-codepipeline-terraform-cicd-samples) repository. The repository contains the Terraform configurations required to create the target architecture outlined in this pattern.

## Epics
<a name="create-a-ci-cd-pipeline-to-validate-terraform-configurations-by-using-aws-codepipeline-epics"></a>

### Provision the solution components
<a name="provision-the-solution-components"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clone the GitHub repository. | Clone the GitHub [aws-codepipeline-terraform-cicd-samples](https://github.com/aws-samples/aws-codepipeline-terraform-cicd-samples) repository by running the following command in a terminal window:<pre>git clone https://github.com/aws-samples/aws-codepipeline-terraform-cicd-samples.git</pre>For more information, see [Cloning a repository](https://docs.github.com/en/repositories/creating-and-managing-repositories/cloning-a-repository) in the GitHub documentation. | DevOps engineer | 
| Create a Terraform variable definitions file.  | Create a `terraform.tfvars` file based on your use case requirements. You can update the variables in the `examples/terraform.tfvars` file that’s in the cloned repository. For more information, see [Assigning values to root module variables](https://developer.hashicorp.com/terraform/language/values/variables#assigning-values-to-root-module-variables) in the Terraform documentation. The repository’s `Readme.md` file includes more information on the required variables. | DevOps engineer | 
| Configure AWS as the Terraform provider. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-a-ci-cd-pipeline-to-validate-terraform-configurations-by-using-aws-codepipeline.html)For more information, see [AWS provider](https://registry.terraform.io/providers/hashicorp/aws/latest/docs) in the Terraform documentation. | DevOps engineer | 
| Update the Terraform provider configuration for creating the Amazon S3 replication bucket. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-a-ci-cd-pipeline-to-validate-terraform-configurations-by-using-aws-codepipeline.html)Replication activates automatic, asynchronous copying of objects across Amazon S3 buckets. | DevOps engineer | 
| Initialize the Terraform configuration. | To initialize your working directory that contains the Terraform configuration files, run the following command in the cloned repository’s root folder:<pre>terraform init</pre> | DevOps engineer | 
| Create the Terraform plan. | To create a Terraform plan, run the following command in the cloned repository’s root folder:<pre>terraform plan --var-file=terraform.tfvars -out=tfplan</pre>Terraform evaluates the configuration files to determine the target state for the declared resources. It then compares the target state against the current state and creates a plan. | DevOps engineer | 
| Verify the Terraform plan. | Review the Terraform plan and confirm that it configures the required architecture in your target AWS account. | DevOps engineer | 
| Deploy the solution. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-a-ci-cd-pipeline-to-validate-terraform-configurations-by-using-aws-codepipeline.html)Terraform creates, updates, or destroys infrastructure to achieve the target state declared in the configuration files. | DevOps engineer | 

### Validate Terraform configurations by running the pipeline
<a name="validate-terraform-configurations-by-running-the-pipeline"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Set up the source code repository. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-a-ci-cd-pipeline-to-validate-terraform-configurations-by-using-aws-codepipeline.html) | DevOps engineer | 
| Validate the pipeline stages. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-a-ci-cd-pipeline-to-validate-terraform-configurations-by-using-aws-codepipeline.html)For more information, see [View pipeline details and history (console)](https://docs.aws.amazon.com/codepipeline/latest/userguide/pipelines-view-console.html) in the *AWS CodePipeline User Guide*. When a change is committed to the main branch of the source repository, the test pipeline is activated automatically. (A minimal Boto3 sketch of checking the stage status follows this table.) | DevOps engineer | 
| Verify the report output. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-a-ci-cd-pipeline-to-validate-terraform-configurations-by-using-aws-codepipeline.html)The `<project_name>-validate` CodeBuild project generates vulnerability reports for your code during the `"validate"` stage. | DevOps engineer | 
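
In addition to the console view, you can check the stage status programmatically. The following Python (Boto3) sketch prints the latest status of each stage; the pipeline name is an assumption, so use the name that your Terraform configuration created.

<pre>
import boto3

codepipeline = boto3.client("codepipeline")

# Replace with the name of the pipeline that this pattern created.
state = codepipeline.get_pipeline_state(name="terraform-validation-pipeline")

for stage in state["stageStates"]:
    latest = stage.get("latestExecution", {})
    print(stage["stageName"], latest.get("status", "NOT_RUN"))
</pre>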

### Clean up your resources
<a name="clean-up-your-resources"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clean up the pipeline and associated resources. | To delete the test resources from your AWS account, run the following command in the cloned repository’s root folder:<pre>terraform destroy --var-file=terraform.tfvars</pre> | DevOps engineer | 

## Troubleshooting
<a name="create-a-ci-cd-pipeline-to-validate-terraform-configurations-by-using-aws-codepipeline-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| You receive an **AccessDenied** error during the `"apply"` stage. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-a-ci-cd-pipeline-to-validate-terraform-configurations-by-using-aws-codepipeline.html) | 

## Related resources
<a name="create-a-ci-cd-pipeline-to-validate-terraform-configurations-by-using-aws-codepipeline-resources"></a>
+ [Module blocks](https://developer.hashicorp.com/terraform/language/modules/syntax) (Terraform documentation)
+ [How to use CI/CD to deploy and configure AWS security services with Terraform](https://aws.amazon.com/blogs/security/how-use-ci-cd-deploy-configure-aws-security-services-terraform/) (AWS blog post)
+ [Using service-linked roles](https://docs.aws.amazon.com/IAM/latest/UserGuide/using-service-linked-roles.html) (IAM documentation)
+ [create-pipeline](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/codepipeline/create-pipeline.html) (AWS CLI documentation)
+ [Configure server-side encryption for artifacts stored in Amazon S3 for CodePipeline](https://docs.aws.amazon.com/codepipeline/latest/userguide/S3-artifact-encryption.html) (AWS CodePipeline documentation)
+ [Quotas for AWS CodeBuild](https://docs.aws.amazon.com/codebuild/latest/userguide/limits.html) (AWS CodeBuild documentation)
+ [Data protection in AWS CodePipeline](https://docs.aws.amazon.com/codepipeline/latest/userguide/data-protection.html) (AWS CodePipeline documentation)

## Additional information
<a name="create-a-ci-cd-pipeline-to-validate-terraform-configurations-by-using-aws-codepipeline-additional"></a>

**Custom Terraform modules**

The following is a list of custom Terraform modules that are used in this pattern:
+ `codebuild_terraform` creates the CodeBuild projects that form each stage of the pipeline.
+ `codecommit_infrastructure_source_repo` captures and creates the source CodeCommit repository.
+ `codepipeline_iam_role` creates the required IAM roles for the pipeline.
+ `codepipeline_kms` creates the required AWS KMS key for Amazon S3 object encryption and decryption.
+ `codepipeline_terraform` creates the test pipeline for the source CodeCommit repository.
+ `s3_artifacts_bucket` creates an Amazon S3 bucket to manage pipeline artifacts.

**Build specification files**

The following is a list of build specification (buildspec) files that this pattern uses to run each pipeline stage:
+ `buildspec_validate.yml` runs the `"validate"` stage.
+ `buildspec_plan.yml` runs the `"plan"` stage.
+ `buildspec_apply.yml` runs the `"apply"` stage.
+ `buildspec_destroy.yml` runs the `"destroy"` stage.

*Build specification file variables*

Each buildspec file uses the following variables to activate different build-specific settings:


| Variable | Default value | Description | 
| --- |--- |--- |
| `CODE_SRC_DIR` | "." | Defines the source CodeCommit directory | 
| `TF_VERSION` | "1.3.7" | Defines the Terraform version for the build environment | 

The `buildspec_validate.yml` file also supports the following variables to activate different build-specific settings:


| Variable | Default value | Description | 
| --- | --- | --- | 
| `SCRIPT_DIR` | "./templates/scripts" | Defines the script directory | 
| `ENVIRONMENT` | "dev" | Defines the environment name | 
| `SKIPVALIDATIONFAILURE` | "Y" | Skips validation on failures | 
| `ENABLE_TFVALIDATE` | "Y" | Activates Terraform validate  | 
| `ENABLE_TFFORMAT` | "Y" | Activates Terraform format | 
| `ENABLE_TFCHECKOV` | "Y" | Activates Checkov scan | 
| `ENABLE_TFSEC` | "Y" | Activates tfsec scan | 
| `TFSEC_VERSION` | "v1.28.1" | Defines the tfsec version | 
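
To make the role of these variables more concrete, the following is a minimal, hypothetical sketch of how a validate stage might gate each tool on its flag. It is not the pattern's actual `buildspec_validate.yml` commands; only the variable names and defaults come from the preceding tables.

```
#!/usr/bin/env bash
# Hypothetical validate-stage commands; variable names come from the tables above.
set -euo pipefail

cd "${CODE_SRC_DIR:-.}"

if [ "${ENABLE_TFVALIDATE:-Y}" = "Y" ]; then
  terraform init -backend=false
  terraform validate
fi

if [ "${ENABLE_TFFORMAT:-Y}" = "Y" ]; then
  terraform fmt -check -recursive
fi

if [ "${ENABLE_TFCHECKOV:-Y}" = "Y" ]; then
  checkov --directory .
fi

if [ "${ENABLE_TFSEC:-Y}" = "Y" ]; then
  tfsec .
fi
```

According to the variables table, `SKIPVALIDATIONFAILURE` controls whether validation failures are skipped; the sketch above simply stops at the first failure because of `set -e`.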

# More patterns
<a name="devops-more-patterns-pattern-list"></a>

**Topics**
+ [Access container applications privately on Amazon EKS using AWS PrivateLink and a Network Load Balancer](access-container-applications-privately-on-amazon-eks-using-aws-privatelink-and-a-network-load-balancer.md)
+ [Associate an AWS CodeCommit repository in one AWS account with Amazon SageMaker AI Studio Classic in another account](associate-an-aws-codecommit-repository-in-one-aws-account-with-sagemaker-studio-in-another-account.md)
+ [Automate account creation by using the Landing Zone Accelerator on AWS](automate-account-creation-lza.md)
+ [Automate adding or updating Windows registry entries using AWS Systems Manager](automate-adding-or-updating-windows-registry-entries-using-aws-systems-manager.md)
+ [Automate backups for Amazon RDS for PostgreSQL DB instances by using AWS Batch](automate-backups-for-amazon-rds-for-postgresql-db-instances-by-using-aws-batch.md)
+ [Automate deployment of nested applications using AWS SAM](automate-deployment-of-nested-applications-using-aws-sam.md)
+ [Automate deployment of Node Termination Handler in Amazon EKS by using a CI/CD pipeline](automate-deployment-of-node-termination-handler-in-amazon-eks-by-using-a-ci-cd-pipeline.md)
+ [Automate RabbitMQ configuration in Amazon MQ](automate-rabbitmq-configuration-in-amazon-mq.md)
+ [Automate the replication of Amazon RDS instances across AWS accounts](automate-the-replication-of-amazon-rds-instances-across-aws-accounts.md)
+ [Automatically build and deploy a Java application to Amazon EKS using a CI/CD pipeline](automatically-build-and-deploy-a-java-application-to-amazon-eks-using-a-ci-cd-pipeline.md)
+ [Automatically generate a PynamoDB model and CRUD functions for Amazon DynamoDB by using a Python application](automatically-generate-a-pynamodb-model-and-crud-functions-for-amazon-dynamodb-by-using-a-python-application.md)
+ [Automatically validate and deploy IAM policies and roles by using CodePipeline, IAM Access Analyzer, and AWS CloudFormation macros](automatically-validate-and-deploy-iam-policies-and-roles-in-an-aws-account-by-using-codepipeline-iam-access-analyzer-and-aws-cloudformation-macros.md)
+ [Back up Sun SPARC servers in the Stromasys Charon-SSP emulator on the AWS Cloud](back-up-sun-sparc-servers-in-the-stromasys-charon-ssp-emulator-on-the-aws-cloud.md)
+ [Build a data pipeline to ingest, transform, and analyze Google Analytics data using the AWS DataOps Development Kit](build-a-data-pipeline-to-ingest-transform-and-analyze-google-analytics-data-using-the-aws-dataops-development-kit.md)
+ [Build a Micro Focus Enterprise Server PAC with Amazon EC2 Auto Scaling and Systems Manager](build-a-micro-focus-enterprise-server-pac-with-amazon-ec2-auto-scaling-and-systems-manager.md)
+ [Build a pipeline for hardened container images using EC2 Image Builder and Terraform](build-a-pipeline-for-hardened-container-images-using-ec2-image-builder-and-terraform.md)
+ [Build an MLOps workflow by using Amazon SageMaker AI and Azure DevOps](build-an-mlops-workflow-by-using-amazon-sagemaker-and-azure-devops.md)
+ [Centralize DNS resolution by using AWS Managed Microsoft AD and on-premises Microsoft Active Directory](centralize-dns-resolution-by-using-aws-managed-microsoft-ad-and-on-premises-microsoft-active-directory.md)
+ [Clean up AWS Account Factory for Terraform (AFT) resources safely after state file loss](clean-up-aft-resources-safely-after-state-file-loss.md)
+ [Configure logging for .NET applications in Amazon CloudWatch Logs by using NLog](configure-logging-for-net-applications-in-amazon-cloudwatch-logs-by-using-nlog.md)
+ [Copy Amazon ECR container images across AWS accounts and AWS Regions](copy-ecr-container-images-across-accounts-regions.md)
+ [Create a custom Docker container image for SageMaker and use it for model training in AWS Step Functions](create-a-custom-docker-container-image-for-sagemaker-and-use-it-for-model-training-in-aws-step-functions.md)
+ [Create a pipeline in AWS Regions that don’t support AWS CodePipeline](create-a-pipeline-in-aws-regions-that-don-t-support-aws-codepipeline.md)
+ [Create alarms for custom metrics using Amazon CloudWatch anomaly detection](create-alarms-for-custom-metrics-using-amazon-cloudwatch-anomaly-detection.md)
+ [Customize default role names by using AWS CDK aspects and escape hatches](customize-default-role-names-by-using-aws-cdk-aspects-and-escape-hatches.md)
+ [Deploy a pipeline that simultaneously detects security issues in multiple code deliverables](deploy-a-pipeline-that-simultaneously-detects-security-issues-in-multiple-code-deliverables.md)
+ [Deploy and manage a serverless data lake on the AWS Cloud by using infrastructure as code](deploy-and-manage-a-serverless-data-lake-on-the-aws-cloud-by-using-infrastructure-as-code.md)
+ [Deploy containerized applications on AWS IoT Greengrass V2 running as a Docker container](deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container.md)
+ [Deploy Kubernetes resources and packages using Amazon EKS and a Helm chart repository in Amazon S3](deploy-kubernetes-resources-and-packages-using-amazon-eks-and-a-helm-chart-repository-in-amazon-s3.md)
+ [Deploy multiple-stack applications using AWS CDK with TypeScript](deploy-multiple-stack-applications-using-aws-cdk-with-typescript.md)
+ [Deploy real-time coding security validation by using an MCP server with Kiro and other coding assistants](deploy-real-time-coding-security-validation-by-using-an-mcp-server-with-kiro-and-other-coding-assistants.md)
+ [Deploy SQL Server failover cluster instances on Amazon EC2 and Amazon FSx by using Terraform](deploy-sql-server-failover-cluster-instances-on-amazon-ec2-and-amazon-fsx.md)
+ [Deploy the Security Automations for AWS WAF solution by using Terraform](deploy-the-security-automations-for-aws-waf-solution-by-using-terraform.md)
+ [Develop advanced generative AI chat-based assistants by using RAG and ReAct prompting](develop-advanced-generative-ai-chat-based-assistants-by-using-rag-and-react-prompting.md)
+ [Enable Amazon GuardDuty conditionally by using AWS CloudFormation templates](enable-amazon-guardduty-conditionally-by-using-aws-cloudformation-templates.md)
+ [Set up event-driven auto scaling in Amazon EKS by using Amazon EKS Pod Identity and KEDA](event-driven-auto-scaling-with-eks-pod-identity-and-keda.md)
+ [Generate personalized and re-ranked recommendations using Amazon Personalize](generate-personalized-and-re-ranked-recommendations-using-amazon-personalize.md)
+ [Get Amazon SNS notifications when the key state of an AWS KMS key changes](get-amazon-sns-notifications-when-the-key-state-of-an-aws-kms-key-changes.md)
+ [Govern permission sets for multiple accounts by using Account Factory for Terraform](govern-permission-sets-aft.md)
+ [Identify duplicate container images automatically when migrating to an Amazon ECR repository](identify-duplicate-container-images-automatically-when-migrating-to-ecr-repository.md)
+ [Implement path-based API versioning by using custom domains in Amazon API Gateway](implement-path-based-api-versioning-by-using-custom-domains.md)
+ [Improve operational performance by enabling Amazon DevOps Guru across multiple AWS Regions, accounts, and OUs with the AWS CDK](improve-operational-performance-by-enabling-amazon-devops-guru-across-multiple-aws-regions-accounts-and-ous-with-the-aws-cdk.md)
+ [Install SSM Agent on Amazon EKS worker nodes by using Kubernetes DaemonSet](install-ssm-agent-on-amazon-eks-worker-nodes-by-using-kubernetes-daemonset.md)
+ [Integrate Stonebranch Universal Controller with AWS Mainframe Modernization](integrate-stonebranch-universal-controller-with-aws-mainframe-modernization.md)
+ [Mainframe modernization: DevOps on AWS with Rocket Software Enterprise Suite](mainframe-modernization-devops-on-aws-with-micro-focus.md)
+ [Manage AWS IAM Identity Center permission sets as code by using AWS CodePipeline](manage-aws-iam-identity-center-permission-sets-as-code-by-using-aws-codepipeline.md)
+ [Manage AWS permission sets dynamically by using Terraform](manage-aws-permission-sets-dynamically-by-using-terraform.md)
+ [Manage on-premises container applications by setting up Amazon ECS Anywhere with the AWS CDK](manage-on-premises-container-applications-by-setting-up-amazon-ecs-anywhere-with-the-aws-cdk.md)
+ [Manage AWS Organizations policies as code by using AWS CodePipeline and Amazon Bedrock](manage-organizations-policies-as-code.md)
+ [Migrate DNS records in bulk to an Amazon Route 53 private hosted zone](migrate-dns-records-in-bulk-to-an-amazon-route-53-private-hosted-zone.md)
+ [Migrate IIS-hosted applications to Amazon EC2 by using appcmd.exe](migrate-iis-hosted-applications-to-amazon-ec2-by-using-appcmd.md)
+ [Monitor use of a shared Amazon Machine Image across multiple AWS accounts](monitor-use-of-a-shared-amazon-machine-image-across-multiple-aws-accounts.md)
+ [Orchestrate an ETL pipeline with validation, transformation, and partitioning using AWS Step Functions](orchestrate-an-etl-pipeline-with-validation-transformation-and-partitioning-using-aws-step-functions.md)
+ [Automate blue/green deployments of Amazon Aurora global databases by using IaC principles](p-automate-blue-green-deployments-aurora-global-databases-iac.md)
+ [Preserve routable IP space in multi-account VPC designs for non-workload subnets](preserve-routable-ip-space-in-multi-account-vpc-designs-for-non-workload-subnets.md)
+ [Provision a Terraform product in AWS Service Catalog by using a code repository](provision-a-terraform-product-in-aws-service-catalog-by-using-a-code-repository.md)
+ [Run AWS Systems Manager Automation tasks synchronously from AWS Step Functions](run-aws-systems-manager-automation-tasks-synchronously-from-aws-step-functions.md)
+ [Set up a CI/CD pipeline for hybrid workloads on Amazon ECS Anywhere by using AWS CDK and GitLab](set-up-a-ci-cd-pipeline-for-hybrid-workloads-on-amazon-ecs-anywhere-by-using-aws-cdk-and-gitlab.md)
+ [Set up a CI/CD pipeline for database migration by using Terraform](set-up-ci-cd-pipeline-for-db-migration-with-terraform.md)
+ [Set up Multi-AZ infrastructure for a SQL Server Always On FCI by using Amazon FSx](set-up-multi-az-infrastructure-for-a-sql-server-always-on-fci-by-using-amazon-fsx.md)
+ [Set up UiPath RPA bots automatically on Amazon EC2 by using AWS CloudFormation](set-up-uipath-rpa-bots-automatically-on-amazon-ec2-by-using-aws-cloudformation.md)
+ [Simplify application authentication with mutual TLS in Amazon ECS by using Application Load Balancer](simplify-application-authentication-with-mutual-tls-in-amazon-ecs.md)
+ [Tenant onboarding in SaaS architecture for the silo model using C# and AWS CDK](tenant-onboarding-in-saas-architecture-for-the-silo-model-using-c-and-aws-cdk.md)
+ [Use Terraform to automatically enable Amazon GuardDuty for an organization](use-terraform-to-automatically-enable-amazon-guardduty-for-an-organization.md)
+ [Use Amazon Bedrock agents to automate creation of access entry controls in Amazon EKS through text-based prompts](using-amazon-bedrock-agents-to-automate-creation-of-access-entry-controls-in-amazon-eks.md)
+ [Validate Account Factory for Terraform (AFT) code locally](validate-account-factory-for-terraform-aft-code-locally.md)
+ [Visualize AI/ML model results using Flask and AWS Elastic Beanstalk](visualize-ai-ml-model-results-using-flask-and-aws-elastic-beanstalk.md)

# Infrastructure
<a name="infrastructure-pattern-list"></a>

**Topics**
+ [Access a bastion host by using Session Manager and Amazon EC2 Instance Connect](access-a-bastion-host-by-using-session-manager-and-amazon-ec2-instance-connect.md)
+ [Centralize DNS resolution by using AWS Managed Microsoft AD and on-premises Microsoft Active Directory](centralize-dns-resolution-by-using-aws-managed-microsoft-ad-and-on-premises-microsoft-active-directory.md)
+ [Centralize monitoring by using Amazon CloudWatch Observability Access Manager](centralize-monitoring-by-using-amazon-cloudwatch-observability-access-manager.md)
+ [Check EC2 instances for mandatory tags at launch](check-ec2-instances-for-mandatory-tags-at-launch.md)
+ [Clean up AWS Account Factory for Terraform (AFT) resources safely after state file loss](clean-up-aft-resources-safely-after-state-file-loss.md)
+ [Create a pipeline in AWS Regions that don’t support AWS CodePipeline](create-a-pipeline-in-aws-regions-that-don-t-support-aws-codepipeline.md)
+ [Customize default role names by using AWS CDK aspects and escape hatches](customize-default-role-names-by-using-aws-cdk-aspects-and-escape-hatches.md)
+ [Deploy a Cassandra cluster on Amazon EC2 with private static IPs to avoid rebalancing](deploy-a-cassandra-cluster-on-amazon-ec2-with-private-static-ips-to-avoid-rebalancing.md)
+ [Extend VRFs to AWS by using AWS Transit Gateway Connect](extend-vrfs-to-aws-by-using-aws-transit-gateway-connect.md)
+ [Get Amazon SNS notifications when the key state of an AWS KMS key changes](get-amazon-sns-notifications-when-the-key-state-of-an-aws-kms-key-changes.md)
+ [Preserve routable IP space in multi-account VPC designs for non-workload subnets](preserve-routable-ip-space-in-multi-account-vpc-designs-for-non-workload-subnets.md)
+ [Provision a Terraform product in AWS Service Catalog by using a code repository](provision-a-terraform-product-in-aws-service-catalog-by-using-a-code-repository.md)
+ [Register multiple AWS accounts with a single email address by using Amazon SES](register-multiple-aws-accounts-with-a-single-email-address-by-using-amazon-ses.md)
+ [Set up DNS resolution for hybrid networks in a single-account AWS environment](set-up-dns-resolution-for-hybrid-networks-in-a-single-account-aws-environment.md)
+ [Set up UiPath RPA bots automatically on Amazon EC2 by using AWS CloudFormation](set-up-uipath-rpa-bots-automatically-on-amazon-ec2-by-using-aws-cloudformation.md)
+ [Set up a highly available PeopleSoft architecture on AWS](set-up-a-highly-available-peoplesoft-architecture-on-aws.md)
+ [Set up disaster recovery for Oracle JD Edwards EnterpriseOne with AWS Elastic Disaster Recovery](set-up-disaster-recovery-for-oracle-jd-edwards-enterpriseone-with-aws-elastic-disaster-recovery.md)
+ [Set up CloudFormation drift detection in a multi-Region, multi-account organization](set-up-aws-cloudformation-drift-detection-in-a-multi-region-multi-account-organization.md)
+ [Successfully import an S3 bucket as an AWS CloudFormation stack](successfully-import-an-s3-bucket-as-an-aws-cloudformation-stack.md)
+ [Synchronize data between Amazon EFS file systems in different AWS Regions by using AWS DataSync](synchronize-data-between-amazon-efs-file-systems-in-different-aws-regions-by-using-aws-datasync.md)
+ [Test AWS infrastructure by using LocalStack and Terraform Tests](test-aws-infra-localstack-terraform.md)
+ [Upgrade SAP Pacemaker clusters from ENSA1 to ENSA2](upgrade-sap-pacemaker-clusters-from-ensa1-to-ensa2.md)
+ [Use consistent Availability Zones in VPCs across different AWS accounts](use-consistent-availability-zones-in-vpcs-across-different-aws-accounts.md)
+ [Use user IDs in IAM policies for access control and automation](use-user-ids-iam-policies-access-control-automation.md)
+ [Validate Account Factory for Terraform (AFT) code locally](validate-account-factory-for-terraform-aft-code-locally.md)
+ [More patterns](infrastructure-more-patterns-pattern-list.md)

# Access a bastion host by using Session Manager and Amazon EC2 Instance Connect
<a name="access-a-bastion-host-by-using-session-manager-and-amazon-ec2-instance-connect"></a>

*Piotr Chotkowski and Witold Kowalik, Amazon Web Services*

## Summary
<a name="access-a-bastion-host-by-using-session-manager-and-amazon-ec2-instance-connect-summary"></a>

A *bastion host*, sometimes called a *jump box*, is a server that provides a single point of access from an external network to the resources located in a private network. A server exposed to an external public network, such as the internet, poses a potential security risk for unauthorized access. It’s important to secure and control access to these servers.

This pattern describes how you can use [Session Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager.html) and [Amazon EC2 Instance Connect](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Connect-using-EC2-Instance-Connect.html) to securely connect to an Amazon Elastic Compute Cloud (Amazon EC2) bastion host deployed in your AWS account. Session Manager is a capability of AWS Systems Manager. The benefits of this pattern include:
+ The deployed bastion host doesn’t have any open, inbound ports exposed to the public internet. This reduces the potential attack surface.
+ You don’t need to store and maintain long-term Secure Shell (SSH) keys in your AWS account. Instead, each user generates a new SSH key pair each time they connect to the bastion host. AWS Identity and Access Management (IAM) policies that are attached to the user’s AWS credentials control access to the bastion host.

**Intended audience**

This pattern is intended for readers who have a basic understanding of Amazon EC2, Amazon Virtual Private Cloud (Amazon VPC), and HashiCorp Terraform.

## Prerequisites and limitations
<a name="access-a-bastion-host-by-using-session-manager-and-amazon-ec2-instance-connect-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ AWS Command Line Interface (AWS CLI) version 2, [installed](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html) and [configured](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html)
+ Session Manager plugin for the AWS CLI, [installed](https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-working-with-install-plugin.html)
+ Terraform CLI, [installed](https://developer.hashicorp.com/terraform/cli)
+ Storage for the Terraform [state](https://developer.hashicorp.com/terraform/language/state), such as an Amazon Simple Storage Service (Amazon S3) bucket and an Amazon DynamoDB table that serve as a remote backend to store the Terraform state. For more information on using remote backends for the Terraform state, see [Amazon S3 Backends](https://www.terraform.io/language/settings/backends/s3) (Terraform documentation). For a code sample that sets up remote state management with an Amazon S3 backend, see [remote-state-s3-backend](https://registry.terraform.io/modules/nozaq/remote-state-s3-backend/aws/latest) (Terraform Registry). Note the following requirements:
  + The Amazon S3 bucket and DynamoDB table must be in the same AWS Region.
  + When creating the DynamoDB table, the partition key must be `LockID` (case-sensitive), and the partition key type must be `String`. All other table settings must be at their default values. For more information, see [About primary keys](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.CoreComponents.html#HowItWorks.CoreComponents.PrimaryKey) and [Create a table](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/getting-started-step-1.html) in the DynamoDB documentation. The example commands after this list show one way to create these resources by using the AWS CLI.
+ An SSH client, installed
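
If you don't already have a remote state backend, the following is a minimal sketch of creating the state bucket and lock table by using the AWS CLI. The bucket name, table name, and Region are placeholders; Regions other than `us-east-1` also require a `--create-bucket-configuration` argument for the bucket.

```
# Placeholder names and Region; replace them with your own values.
aws s3api create-bucket \
  --bucket my-terraform-state-bucket \
  --region us-east-1

# The partition key must be LockID (case-sensitive) with type String.
aws dynamodb create-table \
  --table-name terraform-state-lock \
  --attribute-definitions AttributeName=LockID,AttributeType=S \
  --key-schema AttributeName=LockID,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST \
  --region us-east-1
```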

**Limitations**
+ This pattern is intended as a proof of concept (PoC) or as a basis for further development. It should not be used in its current form in production environments. Before deployment, adjust the sample code in the repository to meet your requirements and use case.
+ This pattern assumes that the target bastion host uses Amazon Linux 2 as its operating system. While it is possible to use other Amazon Machine Images (AMIs), other operating systems are out of scope for this pattern.
**Note**  
Amazon Linux 2 is nearing end of support. For more information, see the [Amazon Linux 2 FAQs](https://aws.amazon.com/amazon-linux-2/faqs/).
+ In this pattern, the bastion host is located in a private subnet without a NAT gateway or internet gateway. This design isolates the Amazon EC2 instance from the public internet. You can add a specific network configuration that allows it to communicate with the internet. For more information, see [Connect your virtual private cloud (VPC) to other networks](https://docs.aws.amazon.com/vpc/latest/userguide/extend-intro.html) in the Amazon VPC documentation. Similarly, following the [principle of least privilege](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#grant-least-privilege), the bastion host doesn’t have access to any other resources in your AWS account unless you explicitly grant permissions. For more information, see [Resource-based policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html#policies_resource-based) in the IAM documentation.

**Product versions**
+ AWS CLI version 2
+ Terraform version 1.3.9

## Architecture
<a name="access-a-bastion-host-by-using-session-manager-and-amazon-ec2-instance-connect-architecture"></a>

**Target technology stack**
+ A VPC with a single private subnet
+ The following [interface VPC endpoints](https://docs.aws.amazon.com/vpc/latest/privatelink/create-interface-endpoint.html):
  + `com.amazonaws.<region>.ssm` – The endpoint for the AWS Systems Manager service.
  + `com.amazonaws.<region>.ec2messages` – Systems Manager uses this endpoint to make calls from SSM Agent to the Systems Manager service.
  + `com.amazonaws.<region>.ssmmessages` – Session Manager uses this endpoint to connect to your Amazon EC2 instance through a secure data channel.
+ A `t3.nano` Amazon EC2 instance running Amazon Linux 2
+ IAM role and instance profile
+ Amazon VPC security groups and security group rules for the endpoints and Amazon EC2 instance

**Target architecture**

![\[Architecture diagram of using Session Manager to access a bastion host.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/a02aed20-1852-4c91-902f-f553795006e2/images/819c503b-7eec-4a9c-862b-b87107d50dc1.png)


The diagram shows the following process:

1. The user assumes an IAM role that has permissions to do the following:
   + Authenticate, authorize, and connect to the Amazon EC2 instance
   + Start a session with Session Manager

1. The user initiates an SSH session through Session Manager.

1. Session Manager authenticates the user, verifies the permissions in the associated IAM policies, checks the configuration settings, and sends a message to SSM Agent to open a two-way connection.

1. The user pushes the SSH public key to the bastion host through Amazon EC2 metadata. This must be done before each connection. The SSH public key remains available for 60 seconds.

1. The bastion host communicates with the interface VPC endpoints for Systems Manager and Amazon EC2.

1. The user accesses the bastion host through Session Manager by using a TLS 1.2 encrypted bidirectional communication channel.

**Automation and scale**

The following options are available to automate deployment or to scale this architecture:
+ You can deploy the architecture through a continuous integration and continuous delivery (CI/CD) pipeline.
+ You can modify the code to change the instance type of the bastion host.
+ You can modify the code to deploy multiple bastion hosts. In the `bastion-host/main.tf` file, in the `aws_instance` resource block, add the `count` meta-argument. For more information, see the [Terraform documentation](https://developer.hashicorp.com/terraform/language/meta-arguments/count).

## Tools
<a name="access-a-bastion-host-by-using-session-manager-and-amazon-ec2-instance-connect-tools"></a>

**AWS services**
+ [AWS Command Line Interface (AWS CLI)](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) is an open source tool that helps you interact with AWS services through commands in your command-line shell.
+ [Amazon Elastic Compute Cloud (Amazon EC2)](https://docs.aws.amazon.com/ec2/) provides scalable computing capacity in the AWS Cloud. You can launch as many virtual servers as you need and quickly scale them up or down.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.
+ [AWS Systems Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/what-is-systems-manager.html) helps you manage your applications and infrastructure running in the AWS Cloud. It simplifies application and resource management, shortens the time to detect and resolve operational problems, and helps you manage your AWS resources securely at scale. This pattern uses [Session Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager.html), a capability of Systems Manager.
+ [Amazon Virtual Private Cloud (Amazon VPC)](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html) helps you launch AWS resources into a virtual network that you’ve defined. This virtual network resembles a traditional network that you’d operate in your own data center, with the benefits of using the scalable infrastructure of AWS.

**Other tools**
+ [HashiCorp Terraform](https://www.terraform.io/docs) is an infrastructure as code (IaC) tool that helps you use code to provision and manage cloud infrastructure and resources. This pattern uses [Terraform CLI](https://developer.hashicorp.com/terraform/cli).

**Code repository**

The code for this pattern is available in the GitHub [Access a bastion host by using Session Manager and Amazon EC2 Instance Connect](https://github.com/aws-samples/secured-bastion-host-terraform) repository.

## Best practices
<a name="access-a-bastion-host-by-using-session-manager-and-amazon-ec2-instance-connect-best-practices"></a>
+ We recommend using automated code-scanning tools to improve the security and quality of the code. This pattern was scanned by using [Checkov](https://www.checkov.io/), a static code-analysis tool for IaC. At a minimum, we recommend that you perform basic validation and formatting checks by using the `terraform validate` and `terraform fmt -check -recursive` Terraform commands. The example after this list shows one way to run these checks locally.
+ It’s a good practice to add automated tests for IaC. For more information about the different approaches for testing Terraform code, see [Testing HashiCorp Terraform](https://www.hashicorp.com/blog/testing-hashicorp-terraform) (Terraform blog post).
+ During deployment, Terraform replaces the Amazon EC2 instance each time a new version of the [Amazon Linux 2 AMI](https://aws.amazon.com/marketplace/pp/prodview-zc4x2k7vt6rpu?sr=0-1&ref_=beagle&applicationId=AWSMPContessa) is detected. This deploys the new version of the operating system, including patches and upgrades. If the deployment schedule is infrequent, this can pose a security risk because the instance doesn’t have the latest patches. It is important to frequently update and apply security patches to deployed Amazon EC2 instances. For more information, see [Update management in Amazon EC2](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/update-management.html).
+ Because this pattern is a proof of concept, it uses AWS managed policies, such as `AmazonSSMManagedInstanceCore`. AWS managed policies cover common use cases but don't grant least-privilege permissions. As needed for your use case, we recommend that you create custom policies that grant least-privilege permissions for the resources deployed in this architecture. For more information, see [Get started with AWS managed policies and move toward least-privilege permissions](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#bp-use-aws-defined-policies).
+ Use a password to protect access to SSH keys and store keys in a secure location.
+ Set up logging and monitoring for the bastion host. Logging and monitoring are important parts of maintaining systems, from both an operational and security perspective. There are multiple ways to monitor connections and activity in your bastion host. For more information, see the following topics in the Systems Manager documentation:
  + [Monitoring AWS Systems Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/monitoring.html)
  + [Logging and monitoring in AWS Systems Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/logging-and-monitoring.html)
  + [Auditing session activity](https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-auditing.html)
  + [Logging session activity](https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-logging.html)
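
As a quick way to run the basic checks mentioned in the first best practice locally, you can run something like the following from the repository root. The Checkov invocation is shown as typical usage, not as the exact command that was used to scan this pattern.

```
terraform fmt -check -recursive
terraform init -backend=false
terraform validate
checkov --directory .
```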

## Epics
<a name="access-a-bastion-host-by-using-session-manager-and-amazon-ec2-instance-connect-epics"></a>

### Deploy the resources
<a name="deploy-the-resources"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clone the code repository. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-a-bastion-host-by-using-session-manager-and-amazon-ec2-instance-connect.html) | DevOps engineer, Developer | 
| Initialize the Terraform working directory. | This step is necessary for only the first deployment. If you are redeploying the pattern, skip to the next step. In the root directory of the cloned repository, enter the following command, where:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-a-bastion-host-by-using-session-manager-and-amazon-ec2-instance-connect.html)<pre>terraform init \<br />    -backend-config="bucket=$S3_STATE_BUCKET" \<br />    -backend-config="key=$PATH_TO_STATE_FILE" \<br />    -backend-config="region=$AWS_REGION"</pre>Alternatively, you can open the **config.tf** file and, in the `terraform` section, manually provide these values. | DevOps engineer, Developer, Terraform | 
| Deploy the resources. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-a-bastion-host-by-using-session-manager-and-amazon-ec2-instance-connect.html) | DevOps engineer, Developer, Terraform | 

### Set up the local environment
<a name="set-up-the-local-environment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Configure the SSH connection. | Update the SSH configuration file to allow SSH connections through Session Manager. For instructions, see [Allowing SSH connections for Session Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-getting-started-enable-ssh-connections.html#ssh-connections-enable). This allows authorized users to enter a proxy command that starts a Session Manager session and transfers all data through a two-way connection. | DevOps engineer | 
| Generate the SSH keys. | Enter the following command to generate a local private and public SSH key pair. You use this key pair to connect to the bastion host.<pre>ssh-keygen -t rsa -f my_key</pre> | DevOps engineer, Developer | 
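
For reference, the SSH configuration change from the first task adds a proxy rule for instance IDs. A minimal way to append the Linux and macOS form of the rule from the linked Session Manager documentation is shown in the following sketch.

```
# Append the Session Manager proxy rule to your SSH configuration (Linux and macOS).
cat >> ~/.ssh/config <<'EOF'

# SSH over Session Manager
host i-* mi-*
    ProxyCommand sh -c "aws ssm start-session --target %h --document-name AWS-StartSSHSession --parameters 'portNumber=%p'"
EOF
```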

### Connect to the bastion host by using Session Manager
<a name="connect-to-the-bastion-host-by-using-sesh"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Get the instance ID. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-a-bastion-host-by-using-session-manager-and-amazon-ec2-instance-connect.html) | General AWS | 
| Send the SSH public key. | In this section, you upload the public key to the [instance metadata](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html) of the bastion host. After the key is uploaded, you have 60 seconds to start a connection with the bastion host. After 60 seconds, the public key is removed. For more information, see the [Troubleshooting](#access-a-bastion-host-by-using-session-manager-and-amazon-ec2-instance-connect-troubleshooting) section of this pattern. Complete the next steps quickly to prevent the key from being removed before you connect to the bastion host.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-a-bastion-host-by-using-session-manager-and-amazon-ec2-instance-connect.html) | General AWS | 
| Connect to the bastion host. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-a-bastion-host-by-using-session-manager-and-amazon-ec2-instance-connect.html)There are other options for opening an SSH connection with the bastion host. For more information, see *Alternative approaches to establish an SSH connection with the bastion host* in the [Additional information](#access-a-bastion-host-by-using-session-manager-and-amazon-ec2-instance-connect-additional) section of this pattern. | General AWS | 
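
The following is a rough sketch of the commands behind the last two tasks, assuming the `my_key` key pair generated earlier and the SSH configuration from the previous epic. The instance ID and Availability Zone are placeholders, and the uploaded key remains valid for only 60 seconds.

```
# Placeholder values; substitute the bastion host's instance ID and Availability Zone.
INSTANCE_ID=i-0123456789abcdef0
AZ=eu-west-1a

# Push the public key to the instance metadata (valid for 60 seconds).
aws ec2-instance-connect send-ssh-public-key \
  --instance-id "$INSTANCE_ID" \
  --availability-zone "$AZ" \
  --instance-os-user ec2-user \
  --ssh-public-key file://my_key.pub

# Connect through Session Manager by using the SSH proxy configuration.
ssh -i my_key ec2-user@"$INSTANCE_ID"
```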

### (Optional) Clean up
<a name="optional-clean-up"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Remove the deployed resources. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-a-bastion-host-by-using-session-manager-and-amazon-ec2-instance-connect.html) | DevOps engineer, Developer, Terraform | 

## Troubleshooting
<a name="access-a-bastion-host-by-using-session-manager-and-amazon-ec2-instance-connect-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| `TargetNotConnected` error when trying to connect to the bastion host | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-a-bastion-host-by-using-session-manager-and-amazon-ec2-instance-connect.html) | 
| `Permission denied` error when trying to connect to the bastion host | After the public key is uploaded to the bastion host, you have only 60 seconds to start the connection. After 60 seconds, the key is automatically removed, and you can’t use it to connect to the instance. If this occurs, you can repeat the step to resend the key to the instance. | 

## Related resources
<a name="access-a-bastion-host-by-using-session-manager-and-amazon-ec2-instance-connect-resources"></a>

**AWS documentation**
+ [AWS Systems Manager Session Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager.html) (Systems Manager documentation)
+ [Install the Session Manager plugin for the AWS CLI](https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-working-with-install-plugin.html) (Systems Manager documentation)
+ [Allowing SSH connections for Session Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-getting-started-enable-ssh-connections.html#ssh-connections-enable) (Systems Manager documentation)
+ [About using EC2 Instance Connect](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Connect-using-EC2-Instance-Connect.html) (Amazon EC2 documentation)
+ [Connect using EC2 Instance Connect](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-connect-methods.html) (Amazon EC2 documentation)
+ [Identity and access management for Amazon EC2](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/security-iam.html) (Amazon EC2 documentation)
+ [Using an IAM role to grant permissions to applications running on Amazon EC2 instances](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2.html) (IAM documentation)
+ [Security best practices in IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html) (IAM documentation)
+ [Control traffic to resources using security groups](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-security-groups.html) (Amazon VPC documentation)

**Other resources**
+ [Terraform Developer webpage](https://developer.hashicorp.com/terraform)
+ [Command: validate](https://developer.hashicorp.com/terraform/cli/commands/validate) (Terraform documentation)
+ [Command: fmt](https://developer.hashicorp.com/terraform/cli/commands/fmt) (Terraform documentation)
+ [Testing HashiCorp Terraform](https://www.hashicorp.com/blog/testing-hashicorp-terraform) (HashiCorp blog post)
+ [Checkov webpage](https://www.checkov.io/)

## Additional information
<a name="access-a-bastion-host-by-using-session-manager-and-amazon-ec2-instance-connect-additional"></a>

**Alternative approaches to establish an SSH connection with the bastion host**

*Port forwarding*

You can use the `-D 8888` option to open an SSH connection with dynamic port forwarding. For more information, see the [instructions](https://explainshell.com/explain?cmd=ssh+-i+%24PRIVATE_KEY_FILE+-D+8888+ec2-user%40%24INSTANCE_ID) at explainshell.com. The following is an example of a command to open an SSH connection by using port forwarding.

```
ssh -i $PRIVATE_KEY_FILE -D 8888 ec2-user@$INSTANCE_ID
```

This kind of connection opens a SOCKS proxy that can forward traffic from your local browser through the bastion host. If you are using Linux or macOS, enter `man ssh` to see all available options in the SSH reference manual.

*Using the provided script*

Instead of manually running the steps described in *Connect to the bastion host by using Session Manager* in the [Epics](#access-a-bastion-host-by-using-session-manager-and-amazon-ec2-instance-connect-epics) section, you can use the **connect.sh** script included in the code repository. This script generates the SSH key pair, pushes the public key to the Amazon EC2 instance, and initiates a connection with the bastion host. When you run the script, you pass the tag and key name as arguments. The following is an example of the command to run the script.

```
./connect.sh sandbox-dev-bastion-host my_key
```

# Centralize DNS resolution by using AWS Managed Microsoft AD and on-premises Microsoft Active Directory
<a name="centralize-dns-resolution-by-using-aws-managed-microsoft-ad-and-on-premises-microsoft-active-directory"></a>

*Brian Westmoreland, Amazon Web Services*

## Summary
<a name="centralize-dns-resolution-by-using-aws-managed-microsoft-ad-and-on-premises-microsoft-active-directory-summary"></a>

This pattern provides guidance for centralizing DNS resolution within an AWS multi-account environment by using both AWS Directory Service for Microsoft Active Directory (AWS Managed Microsoft AD) and Amazon Route 53. In this pattern, the AWS DNS namespace is a subdomain of the on-premises DNS namespace. This pattern also provides guidance on how to configure the on-premises DNS servers to forward queries to AWS when the on-premises DNS solution uses Microsoft Active Directory.

## Prerequisites and limitations
<a name="centralize-dns-resolution-by-using-aws-managed-microsoft-ad-and-on-premises-microsoft-active-directory-prereqs"></a>

**Prerequisites**
+ An AWS multi-account environment set up by using AWS Organizations.
+ Network connectivity established between AWS accounts.
+ Network connectivity established between AWS and the on-premises environment (by using AWS Direct Connect or any type of VPN connection).
+ AWS Command Line Interface (AWS CLI) configured on a local workstation.
+ AWS Resource Access Manager (AWS RAM) is used to share Route 53 rules between accounts. Therefore, sharing must be enabled within the AWS Organizations environment, as described in the [Epics](#centralize-dns-resolution-by-using-aws-managed-microsoft-ad-and-on-premises-microsoft-active-directory-epics) section.

**Limitations**
+ AWS Managed Microsoft AD Standard Edition has a limit of 5 shares.
+ AWS Managed Microsoft AD Enterprise Edition has a limit of 125 shares.
+ The solution in this pattern is limited to AWS Regions that support sharing through AWS RAM.

**Product versions**
+ Microsoft Active Directory running on Windows Server 2008, 2012, 2012 R2, or 2016.

## Architecture
<a name="centralize-dns-resolution-by-using-aws-managed-microsoft-ad-and-on-premises-microsoft-active-directory-architecture"></a>

**Target architecture**

![\[Architecture for centralized DNS resolution on AWS.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/91430e2a-f7f6-4dbe-9fe7-8abed1f764a7/images/9b5fc51d-590b-468f-80f7-1949f3b3b258.png)


In this design, AWS Managed Microsoft AD is installed in the shared services AWS account. Although it is not a requirement, this pattern assumes this configuration. If you configure AWS Managed Microsoft AD in a different AWS account, you might have to modify the steps in the [Epics](#centralize-dns-resolution-by-using-aws-managed-microsoft-ad-and-on-premises-microsoft-active-directory-epics) section accordingly.

This design uses Route 53 Resolvers to support name resolution through the use of Route 53 rules. If the on-premises DNS solution uses Microsoft DNS, creating a conditional forwarding rule for the AWS namespace (`aws.company.com`), which is a subdomain of the company DNS namespace (`company.com`), is not straightforward. If you try to create a traditional conditional forwarder, it will result in an error. This is because Microsoft Active Directory is already considered authoritative for any subdomain of `company.com`. To get around this error, you must first create a delegation for `aws.company.com` to delegate authority of that namespace. You can then create the conditional forwarder.

The virtual private cloud (VPC) for each spoke account can have its own unique DNS namespace based on the root AWS namespace. In this design, each spoke account appends an abbreviation of the account name to the base AWS namespace. After the private hosted zones in the spoke account have been created, the zones are associated with the local VPC in the spoke account as well as with the VPC in the central AWS network account. This enables the central AWS network account to answer DNS queries related to the spoke accounts. This way, both Route 53 and AWS Managed Microsoft AD work together to share the responsibility of managing the AWS namespace (`aws.company.com`).

**Automation and scale**

This design uses Route 53 Resolver endpoints to scale DNS queries between AWS and your on-premises environment. Each Route 53 Resolver endpoint comprises multiple elastic network interfaces (spread across multiple Availability Zones), and each network interface can handle up to 10,000 queries per second. Route 53 Resolver supports up to 6 IP addresses per endpoint, so altogether this design supports up to 60,000 DNS queries per second spread across multiple Availability Zones for high availability.  

Additionally, this pattern automatically accounts for future growth within AWS. The DNS forwarding rules configured on premises do not have to be modified to support new VPCs and their associated private hosted zones that are added to AWS. 

## Tools
<a name="centralize-dns-resolution-by-using-aws-managed-microsoft-ad-and-on-premises-microsoft-active-directory-tools"></a>

**AWS services**
+ [AWS Directory Service for Microsoft Active Directory](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/directory_microsoft_ad.html) enables your directory-aware workloads and AWS resources to use Microsoft Active Directory in the AWS Cloud.
+ [AWS Organizations](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_introduction.html) is an account management service that helps you consolidate multiple AWS accounts into an organization that you create and centrally manage.
+ [AWS Resource Access Manager (AWS RAM)](https://docs.aws.amazon.com/ram/latest/userguide/what-is.html) helps you securely share your resources across AWS accounts to reduce operational overhead and provide visibility and auditability.
+ [Amazon Route 53](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/Welcome.html) is a highly available and scalable DNS web service.

**Tools**
+ [AWS Command Line Interface (AWS CLI)](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) is an open-source tool that helps you interact with AWS services through commands in your command-line shell. In this pattern, the AWS CLI is used to configure Route 53 authorizations.

## Epics
<a name="centralize-dns-resolution-by-using-aws-managed-microsoft-ad-and-on-premises-microsoft-active-directory-epics"></a>

### Create and share an AWS Managed Microsoft AD directory
<a name="create-and-share-an-managed-ad-directory"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Deploy AWS Managed Microsoft AD. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/centralize-dns-resolution-by-using-aws-managed-microsoft-ad-and-on-premises-microsoft-active-directory.html) | AWS administrator | 
| Share the directory. | After the directory has been built, share it with other AWS accounts in the AWS organization. For instructions, see [Share your directory](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/step2_share_directory.html) in the *AWS Directory Service Administration Guide*.  AWS Managed Microsoft AD Standard Edition has a limit of 5 shares. Enterprise Edition has a limit of 125 shares. | AWS administrator | 

### Configure Route 53
<a name="configure-r53"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create Route 53 Resolvers. | Route 53 Resolvers facilitate DNS query resolution between AWS and the on-premises data center.  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/centralize-dns-resolution-by-using-aws-managed-microsoft-ad-and-on-premises-microsoft-active-directory.html)Although using the central AWS network account VPC isn’t a requirement, the remaining steps assume this configuration. | AWS administrator | 
| Create Route 53 rules. | Your specific use case might require a large number of Route 53 rules, but you will need to configure the following rules as a baseline:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/centralize-dns-resolution-by-using-aws-managed-microsoft-ad-and-on-premises-microsoft-active-directory.html)For more information, see [Managing forwarding rules](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resolver-rules-managing.html) in the *Route 53 Developer Guide*. | AWS administrator | 
| Configure a Route 53 Profile. | A Route 53 Profile is used to share the rules with spoke accounts.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/centralize-dns-resolution-by-using-aws-managed-microsoft-ad-and-on-premises-microsoft-active-directory.html) | AWS administrator | 
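
The baseline rules themselves are summarized on the pattern's documentation page. As a generic, hedged illustration of creating a forwarding rule with the AWS CLI, the following command forwards queries for the on-premises namespace to the data center DNS servers through an outbound Resolver endpoint; the endpoint ID and target IP addresses are placeholders.

```
# Placeholder endpoint ID and on-premises DNS server IP addresses.
aws route53resolver create-resolver-rule \
  --creator-request-id "$(uuidgen)" \
  --name forward-company-com \
  --rule-type FORWARD \
  --domain-name company.com \
  --resolver-endpoint-id rslvr-out-EXAMPLE11111 \
  --target-ips Ip=10.0.0.10,Port=53 Ip=10.0.1.10,Port=53
```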

### Configure on-premises Active Directory DNS
<a name="configure-on-premises-active-directory-dns"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the delegation. | Use the Microsoft DNS snap-in (`dnsmgmt.msc`) to create a new delegation for the `company.com` namespace within Active Directory. The name of the delegated domain should be `aws`. This makes the fully qualified domain name (FQDN) of the delegation `aws.company.com`. Use the IP addresses of the AWS Managed Microsoft AD domain controllers for the name server IP values, and use `server.aws.company.com` for the name. (This delegation is only for redundancy, because a conditional forwarder will be created for this namespace that takes precedence over the delegation.) | Active Directory | 
| Create the conditional forwarder. | Use the Microsoft DNS snap-in (`dnsmgmt.msc`) to create a new conditional forwarder for `aws.company.com`.  Use the IP addresses of the AWS inbound Route 53 Resolvers in the central DNS AWS account for the target of the conditional forwarder.   | Active Directory | 

### Create Route 53 private hosted zones for spoke AWS accounts
<a name="create-r53-private-hosted-zones-for-spoke-aws-accounts"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the Route 53 private hosted zones. | Create a Route 53 private hosted zone in each spoke account. Associate this private hosted zone with the spoke account VPC. For detailed steps, see [Creating a private hosted zone](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/hosted-zone-private-creating.html) in the *Route 53 Developer Guide*. | AWS administrator | 
| Create authorizations. | Use the AWS CLI to create an authorization for the central AWS network account VPC. Run this command from the context of each spoke AWS account:<pre>aws route53 create-vpc-association-authorization --hosted-zone-id <hosted-zone-id> \<br />   --vpc VPCRegion=<region>,VPCId=<vpc-id></pre>where:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/centralize-dns-resolution-by-using-aws-managed-microsoft-ad-and-on-premises-microsoft-active-directory.html) | AWS administrator | 
| Create associations. | Create the Route 53 private hosted zone association for the central AWS network account VPC by using the AWS CLI. Run this command from the context of the central AWS network account:<pre>aws route53 associate-vpc-with-hosted-zone --hosted-zone-id <hosted-zone-id> \<br />   --vpc VPCRegion=<region>,VPCId=<vpc-id></pre>where:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/centralize-dns-resolution-by-using-aws-managed-microsoft-ad-and-on-premises-microsoft-active-directory.html) | AWS administrator | 

## Related resources
<a name="centralize-dns-resolution-by-using-aws-managed-microsoft-ad-and-on-premises-microsoft-active-directory-resources"></a>
+ [Simplify DNS management in a multi-account environment with Route 53 Resolver](https://aws.amazon.com/blogs/security/simplify-dns-management-in-a-multiaccount-environment-with-route-53-resolver/) (AWS blog post)
+ [Creating your AWS Managed Microsoft AD](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/ms_ad_getting_started_create_directory.html) (AWS Directory Service documentation)
+ [Sharing an AWS Managed Microsoft AD directory](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/step2_share_directory.html) (AWS Directory Service documentation)
+ [What is Amazon Route 53 Resolver?](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resolver.html) (Amazon Route 53 documentation)
+ [Creating a private hosted zone](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/hosted-zone-private-creating.html) (Amazon Route 53 documentation)
+ [What are Amazon Route 53 Profiles?](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/profiles.html) (Amazon Route 53 documentation)

# Centralize monitoring by using Amazon CloudWatch Observability Access Manager
<a name="centralize-monitoring-by-using-amazon-cloudwatch-observability-access-manager"></a>

*Anand Krishna Varanasi, JAGDISH KOMAKULA, Ashish Kumar, Jimmy Morgan, Sarat Chandra Pothula, Vivek Thangamuthu, and Balaji Vedagiri, Amazon Web Services*

## Summary
<a name="centralize-monitoring-by-using-amazon-cloudwatch-observability-access-manager-summary"></a>

Observability is crucial to monitoring, understanding, and troubleshooting applications. Applications that span multiple accounts, as with AWS Control Tower or landing zone implementations, generate a large number of logs and trace data. To quickly troubleshoot problems or understand user analytics or business analytics, you need a common observability platform across all accounts. The Amazon CloudWatch Observability Access Manager gives you access to, and control over, multiple account logs from a central location.

You can use the Observability Access Manager to view and manage observability data logs generated by source accounts. Source accounts are individual AWS accounts that generate observability data for their resources. Observability data is shared between source accounts and monitoring accounts. The shared observability data can include metrics in Amazon CloudWatch, logs in Amazon CloudWatch Logs, and traces in AWS X-Ray. For more information, see the [Observability Access Manager documentation](https://docs.aws.amazon.com/OAM/latest/APIReference/Welcome.html).

This pattern is for users who have applications or infrastructure that run in multiple AWS accounts and need a common place to view logs. It explains how you can set up Observability Access Manager by using Terraform, to monitor the status and health of these applications or infrastructure. You can install this solution in multiple ways:
+ As a standalone Terraform module that you set up manually
+ By using a continuous integration and continuous delivery (CI/CD) pipeline
+ By integrating with other solutions such as [AWS Control Tower Account Factory for Terraform (AFT)](https://docs.aws.amazon.com/controltower/latest/userguide/aft-overview.html)

The instructions in the [Epics](#centralize-monitoring-by-using-amazon-cloudwatch-observability-access-manager-epics) section cover the manual implementation. For AFT installation steps, see the README file for the GitHub [Observability Access Manager](https://github.com/aws-samples/cloudwatch-obervability-access-manager-terraform) repository.

## Prerequisites and limitations
<a name="centralize-monitoring-by-using-amazon-cloudwatch-observability-access-manager-prereqs"></a>

**Prerequisites**
+ [Terraform](https://www.terraform.io/) installed or referenced in your system or in automated pipelines. (We recommend that you use the [latest version](https://releases.hashicorp.com/terraform/).)
+ An account that you can use as a central monitoring account. Other accounts create links to the central monitoring account in order to view logs.
+ (Optional) A source code repository such as GitHub, AWS CodeCommit, Atlassian Bitbucket, or similar system. A source code repository isn’t necessary if you’re using automated CI/CD pipelines.
+ (Optional) Permissions to create pull requests (PRs) for code review and code collaboration in GitHub.

**Limitations**

Observability Access Manager has the following service quotas, which cannot be changed. Consider these quotas before you deploy this feature. For more information, see [CloudWatch service quotas](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch_limits.html) in the CloudWatch documentation.
+ **Source account links**: You can link each source account to a maximum of five monitoring accounts.
+ **Sinks**: You can create multiple sinks for an account, but only one sink per AWS Region is allowed.

In addition:
+ Sinks and links must be created in the same AWS Region; they cannot be cross-Region.

**Cross-Region and cross-account monitoring**

For cross-Region, cross-account monitoring, you can choose one of these options:
+ Create [cross-account and cross-Region CloudWatch dashboards](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Cross-Account-Cross-Region.html) for alarms and metrics. This option doesn’t support logs and traces.
+ Implement [centralized logging](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Cross-Account-Cross-Region.html) by using Amazon OpenSearch Service.
+ Create one sink per Region from all tenant accounts, push metrics to a centralized monitoring account (as described in this pattern), and then use [CloudWatch metric streams](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Metric-Streams.html) to send the data to a common external destination or to third-party monitoring products such as Datadog, Dynatrace, Sumo Logic, Splunk, or New Relic.

## Architecture
<a name="centralize-monitoring-by-using-amazon-cloudwatch-observability-access-manager-architecture"></a>

**Components**

CloudWatch Observability Access Manager consists of two major components that enable cross-account observability:
+ A *sink* enables source accounts to send observability data to the central monitoring account. It acts as a gateway that source accounts connect to: the monitoring account has a single sink (one per Region), and multiple source accounts can connect to it.
+ Each source account has a *link* to the sink, and observability data is sent through this link. You must create the sink before you create links from the source accounts.

**Architecture**

The following diagram illustrates Observability Access Manager and its components.

![\[Architecture for cross-account observability with sinks and links.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/00603763-4f99-456e-85e7-a80d803b087d/images/5188caf9-348b-4d91-b560-2b3d6ea81191.png)


## Tools
<a name="centralize-monitoring-by-using-amazon-cloudwatch-observability-access-manager-tools"></a>

**AWS services**
+ [Amazon CloudWatch](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/WhatIsCloudWatch.html) helps you monitor the metrics of your AWS resources and the applications you run on AWS in real time.
+ [AWS Organizations](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_introduction.html) is an account management service that helps you consolidate multiple AWS accounts into an organization that you create and centrally manage.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.

**Tools**
+ [Terraform](https://www.terraform.io/) is an infrastructure as code (IaC) tool from HashiCorp that helps you create and manage cloud and on-premises resources.
+ [AWS Control Tower Account Factory for Terraform (AFT)](https://docs.aws.amazon.com/controltower/latest/userguide/aft-overview.html) sets up a Terraform pipeline to help you provision and customize accounts in AWS Control Tower. You can optionally use AFT to set up Observability Access Manager at scale across multiple accounts.

**Code repository**

The code for this pattern is available in the GitHub [Observability Access Manager](https://github.com/aws-samples/cloudwatch-obervability-access-manager-terraform) repository.

## Best practices
<a name="centralize-monitoring-by-using-amazon-cloudwatch-observability-access-manager-best-practices"></a>
+ In AWS Control Tower environments, mark the logging account as the central monitoring account (sink).
+ In the sink configuration policy, if you have multiple organizations with many accounts in AWS Organizations, we recommend that you include the organizations instead of individual accounts. If you have a small number of accounts, or if the accounts aren’t part of an organization, you can include individual accounts instead.

## Epics
<a name="centralize-monitoring-by-using-amazon-cloudwatch-observability-access-manager-epics"></a>

### Set up the sink module
<a name="set-up-the-sink-module"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clone the repository. | Clone the GitHub Observability Access Manager repository:<pre>git clone https://github.com/aws-samples/cloudwatch-obervability-access-manager-terraform</pre> | AWS DevOps, Cloud administrator, AWS administrator | 
| Specify property values for the sink module. | In the `main.tf` file (in the `deployments/aft-account-customizations/LOGGING/terraform/` folder of the repository), specify values for the following properties:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/centralize-monitoring-by-using-amazon-cloudwatch-observability-access-manager.html) For more information, see [AWS::Oam::Sink](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-oam-sink.html) in the CloudFormation documentation. (A minimal sketch of these properties follows this table.) | AWS DevOps, Cloud administrator, AWS administrator | 
| Install the sink module. | Export the credentials of the AWS account that you have selected as the monitoring account, and install the Observability Access Manager sink module:<pre>terraform init<br />terraform plan<br />terraform apply</pre> | AWS DevOps, Cloud administrator, AWS administrator | 
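
The pattern's repository implements the sink as a Terraform module. Purely as an illustration of the sink properties that the module sets (they map to the [AWS::Oam::Sink](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-oam-sink.html) resource referenced in the previous task), here is a minimal AWS CDK (TypeScript) sketch. The sink name and the source account IDs in the policy are placeholder assumptions, not values from the repository.

```typescript
import { App, Stack } from 'aws-cdk-lib';
import { CfnSink } from 'aws-cdk-lib/aws-oam';

// Deploy this stack with the credentials of the central monitoring account.
const app = new App();
const stack = new Stack(app, 'MonitoringSinkStack');

new CfnSink(stack, 'ObservabilitySink', {
  name: 'central-monitoring-sink', // placeholder sink name
  // The sink policy controls which source accounts can create links to this sink.
  policy: {
    Version: '2012-10-17',
    Statement: [
      {
        Effect: 'Allow',
        Principal: { AWS: ['111122223333', '444455556666'] }, // example source account IDs
        Resource: '*',
        Action: ['oam:CreateLink', 'oam:UpdateLink'],
      },
    ],
  },
});

app.synth();
```

As noted in the Best practices section, you can grant access to entire organizations in the sink policy instead of listing individual account IDs.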

### Set up the link module
<a name="set-up-the-link-module"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Specify property values for the link module. | In the `main.tf` file (in the `deployments/aft-account-customizations/LOGGING/terraform/` folder of the repository), specify values for the following properties:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/centralize-monitoring-by-using-amazon-cloudwatch-observability-access-manager.html) For more information, see [AWS::Oam::Link](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-oam-link.html) in the CloudFormation documentation. (A minimal sketch of these properties follows this table.) | AWS DevOps, Cloud administrator, Cloud architect | 
| Install the link module for individual accounts. | Export the credentials of individual accounts and install the Observability Access Manager link module:<pre>terraform init<br />terraform plan<br />terraform apply</pre>You can set up the link module individually for each account, or use [AFT](https://docs.aws.amazon.com/controltower/latest/userguide/aft-overview.html) to automatically install this module across a large number of accounts. | AWS DevOps, Cloud administrator, Cloud architect | 
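
Again, the repository does this with Terraform; the following AWS CDK (TypeScript) sketch only illustrates the link properties, which map to [AWS::Oam::Link](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-oam-link.html). The sink ARN, label template, and resource types shown are placeholder assumptions.

```typescript
import { App, Stack } from 'aws-cdk-lib';
import { CfnLink } from 'aws-cdk-lib/aws-oam';

// Deploy this stack with the credentials of each source account.
const app = new App();
const stack = new Stack(app, 'SourceAccountLinkStack');

new CfnLink(stack, 'ObservabilityLink', {
  // ARN of the sink that was created in the monitoring account (placeholder value).
  sinkIdentifier: 'arn:aws:oam:us-east-1:111122223333:sink/EXAMPLE-SINK-ID',
  // Telemetry types that this source account shares with the monitoring account.
  resourceTypes: ['AWS::CloudWatch::Metric', 'AWS::Logs::LogGroup', 'AWS::XRay::Trace'],
  // How this account is labeled in the monitoring account's console.
  labelTemplate: '$AccountName',
});

app.synth();
```

Whichever tool you use, you need the sink ARN from the monitoring account before you can create links in the source accounts.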

### Approve sink-to-link connections
<a name="approve-sink-to-link-connections"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Check the status message. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/centralize-monitoring-by-using-amazon-cloudwatch-observability-access-manager.html)On the right, you should see the status message **Monitoring account enabled** with a green checkmark. This means that the monitoring account has an Observability Access Manager sink that the links of other accounts will connect to. |  | 
| Approve the link-to-sink connections. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/centralize-monitoring-by-using-amazon-cloudwatch-observability-access-manager.html)For more information, see [Link monitoring accounts with source accounts](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Unified-Cross-Account-Setup.html) in the CloudWatch documentation. | AWS DevOps, Cloud administrator, Cloud architect | 

### Verify cross-account observability data
<a name="verify-cross-account-observability-data"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| View cross-account data. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/centralize-monitoring-by-using-amazon-cloudwatch-observability-access-manager.html) | AWS DevOps, Cloud administrator, Cloud architect | 

### (Optional) Enable source accounts to trust monitoring account
<a name="optional-enable-source-accounts-to-trust-monitoring-account"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| View metrics, dashboards, logs, widgets, and alarms from other accounts. | As an additional feature, you can share the CloudWatch metrics, dashboards, logs, widgets, and alarms with other accounts. Each account uses an IAM role called **CloudWatch-CrossAccountSharingRole** to gain access to this data. Source accounts that have a trust relationship with the central monitoring account can assume this role and view data from the monitoring account. CloudWatch provides a sample CloudFormation script to create the role. Choose **Manage role in IAM** and run this script in the accounts where you want to view data.<pre>{<br />    "Version": "2012-10-17",<br />    "Statement": [<br />        {<br />            "Effect": "Allow",<br />            "Principal": {<br />                "AWS": [<br />                    "arn:aws:iam::XXXXXXXXX:root",<br />                    "arn:aws:iam::XXXXXXXXX:root",<br />                    "arn:aws:iam::XXXXXXXXX:root",<br />                    "arn:aws:iam::XXXXXXXXX:root"<br />                ]<br />            },<br />            "Action": "sts:AssumeRole"<br />        }<br />    ]<br />}</pre>For more information, see [Enabling cross-account functionality in CloudWatch](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Cross-Account-Cross-Region.html#enable-cross-account-cross-Region) in the CloudWatch documentation. | AWS DevOps, Cloud administrator, Cloud architect | 

### (Optional) View cross-account, cross-Region data from the monitoring account
<a name="optional-view-cross-account-cross-region-from-the-monitoring-account"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Set up cross-account, cross-Region access. | In the central monitoring account, you can optionally add an account selector to easily switch between accounts and view their data without having to authenticate.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/centralize-monitoring-by-using-amazon-cloudwatch-observability-access-manager.html)For more information, see [Cross-account cross-Region CloudWatch console](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Cross-Account-Cross-Region.html) in the CloudWatch documentation. | AWS DevOps, Cloud administrator, Cloud architect | 

## Related resources
<a name="centralize-monitoring-by-using-amazon-cloudwatch-observability-access-manager-resources"></a>
+ [CloudWatch cross-account observability](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Unified-Cross-Account.html) (Amazon CloudWatch documentation)
+ [Amazon CloudWatch Observability Access Manager API Reference](https://docs.aws.amazon.com/OAM/latest/APIReference/Welcome.html) (Amazon CloudWatch documentation)
+ [Resource: aws_oam_sink](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/oam_sink) (Terraform documentation)
+ [Data Source: aws_oam_link](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/oam_link) (Terraform documentation)
+ [CloudWatchObservabilityAccessManager](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/oam.html) (AWS Boto3 documentation)

# Check EC2 instances for mandatory tags at launch
<a name="check-ec2-instances-for-mandatory-tags-at-launch"></a>

*Susanne Kangnoh and Archit Mathur, Amazon Web Services*

## Summary
<a name="check-ec2-instances-for-mandatory-tags-at-launch-summary"></a>

Amazon Elastic Compute Cloud (Amazon EC2) provides scalable computing capacity in the Amazon Web Services (AWS) Cloud. Using Amazon EC2 eliminates your need to invest in hardware up front, so you can develop and deploy applications faster.

You can use tagging to categorize your AWS resources in different ways. EC2 instance tagging is useful when you have many resources in your account and you want to quickly identify a specific resource based on the tags. You can assign custom metadata to your EC2 instances by using tags. A tag consists of a user-defined key and value. We recommend that you create a consistent set of tags to meet your organization's requirements. 

This pattern provides an AWS CloudFormation template to help you monitor EC2 instances for specific tags. The template creates an Amazon CloudWatch Events rule that watches for the AWS CloudTrail **TagResource** or **UntagResource** events, to detect new EC2 instance tagging or tag removal. The rule invokes an AWS Lambda function, which checks for the predefined tag keys and, if any are missing, sends a violation message to an email address that you provide by using Amazon Simple Notification Service (Amazon SNS).
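
The attached `index.zip` contains the Lambda code that the template deploys. As a rough illustration of the kind of check such a function performs (this is not the attachment's actual code), the following TypeScript handler reads the instance's current tags and publishes a notification when a required key is missing. The event field path, environment variable names, and SDK usage are assumptions.

```typescript
import { EC2Client, DescribeTagsCommand } from '@aws-sdk/client-ec2';
import { SNSClient, PublishCommand } from '@aws-sdk/client-sns';

const ec2 = new EC2Client({});
const sns = new SNSClient({});

// Assumed environment variables: comma-separated required keys and the SNS topic ARN.
const requiredKeys = (process.env.REQUIRED_TAG_KEYS ?? '').split(',').filter(Boolean);
const topicArn = process.env.SNS_TOPIC_ARN ?? '';

// Invoked by the CloudWatch Events rule for CloudTrail tagging events.
export const handler = async (event: any): Promise<void> => {
  // Where the instance ID sits in the CloudTrail detail is an assumption here.
  const instanceId: string | undefined =
    event?.detail?.requestParameters?.resourcesSet?.items?.[0]?.resourceId;
  if (!instanceId) {
    return;
  }

  // Read the tags currently on the instance and compare their keys with the required list.
  const current = await ec2.send(
    new DescribeTagsCommand({ Filters: [{ Name: 'resource-id', Values: [instanceId] }] }),
  );
  const presentKeys = (current.Tags ?? []).map((tag) => tag.Key);
  const missing = requiredKeys.filter((key) => !presentKeys.includes(key));

  if (missing.length > 0) {
    await sns.send(
      new PublishCommand({
        TopicArn: topicArn,
        Subject: 'EC2 mandatory tag violation',
        Message: `Instance ${instanceId} is missing tag keys: ${missing.join(', ')}`,
      }),
    );
  }
};
```

The actual list of required keys comes from the template parameter that you set in the Epics section.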

## Prerequisites and limitations
<a name="check-ec2-instances-for-mandatory-tags-at-launch-prerequisites-and-limitations"></a>

**Prerequisites **
+ An active AWS account.
+ An Amazon Simple Storage Service (Amazon S3) bucket to upload the provided Lambda code.
+ An email address where you would like to receive violation notifications.

**Limitations **
+ This solution supports CloudTrail **TagResource** or **UntagResource** events. It does not create notifications for any other events.
+ This solution checks only for tag keys. It does not monitor key values.

## Architecture
<a name="check-ec2-instances-for-mandatory-tags-at-launch-architecture"></a>

**Workflow architecture**

![\[Workflow diagram showing AWS services interaction for EC2 instance monitoring and notification.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/9cd74141-a87f-419e-94b3-0b28fd04a018/images/b48fd21b-a86b-4ec7-b9f6-4f1a64999437.png)


 

**Automation and scale**
+ You can use the AWS CloudFormation template multiple times for different AWS Regions and accounts. You need to run the template only once in each Region or account.

## Tools
<a name="check-ec2-instances-for-mandatory-tags-at-launch-tools"></a>

**AWS services**
+ [Amazon EC2](https://aws.amazon.com/ec2/) – Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers.
+ [AWS CloudTrail](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-user-guide.html) – CloudTrail is an AWS service that helps you with governance, compliance, and operational and risk auditing of your AWS account. Actions taken by a user, role, or AWS service are recorded as events in CloudTrail. 
+ [Amazon CloudWatch Events](https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/WhatIsCloudWatchEvents.html) – Amazon CloudWatch Events delivers a near real-time stream of system events that describe changes in AWS resources. CloudWatch Events becomes aware of operational changes as they occur and takes corrective action as necessary, by sending messages to respond to the environment, activating functions, making changes, and capturing state information. 
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) – Lambda is a compute service that supports running code without needing to provision or manage servers. Lambda runs your code only when needed and scales automatically, from a few requests per day to thousands per second. 
+ [Amazon S3](https://docs.aws.amazon.com/AmazonS3/latest/dev/Welcome.html) – Amazon Simple Storage Service (Amazon S3) is a highly scalable object storage service that can be used for a wide range of storage solutions, including websites, mobile applications, backups, and data lakes.
+ [Amazon SNS](https://docs.aws.amazon.com/sns/latest/dg/welcome.html) – Amazon Simple Notification Service (Amazon SNS) is a web service that enables applications, end-users, and devices to instantly send and receive notifications from the cloud.

**Code**

This pattern includes an attachment with two files:
+ `index.zip` is a compressed file that includes the Lambda code for this pattern.
+ `ec2-require-tags.yaml` is a CloudFormation template that deploys the Lambda code.

See the *Epics *section for information about how to use these files.

## Epics
<a name="check-ec2-instances-for-mandatory-tags-at-launch-epics"></a>

### Deploy the Lambda code
<a name="deploy-the-lambda-code"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Upload the code to an S3 bucket. | Create a new S3 bucket or use an existing S3 bucket to upload the attached `index.zip` file (Lambda code). This bucket must be in the same AWS Region as the resources (EC2 instances) that you want to monitor. | Cloud architect | 
| Deploy the CloudFormation template. | Open the CloudFormation console in the same AWS Region as the S3 bucket, and deploy the `ec2-require-tags.yaml` file that's provided in the attachment. In the next epic, provide values for the template parameters. | Cloud architect | 

### Complete the parameters in the CloudFormation template
<a name="complete-the-parameters-in-the-cloudformation-template"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Provide the S3 bucket name. | Enter the name of the S3 bucket that you created or selected in the first epic. This S3 bucket contains the .zip file for the Lambda code and must be in the same AWS Region as the CloudFormation template and the EC2 instances that you want to monitor. | Cloud architect | 
| Provide the S3 key. | Provide the location of the Lambda code .zip file in your S3 bucket, without leading slashes (for example, `index.zip` or `controls/index.zip`). | Cloud architect | 
| Provide an email address. | Provide an active email address where you want to receive violation notifications. | Cloud architect | 
| Define a logging level. | Specify the logging level and verbosity. `Info` designates detailed informational messages on the application’s progress and should be used only for debugging. `Error` designates error events that could still allow the application to continue running. `Warning` designates potentially harmful situations. | Cloud architect | 
| Enter the required tag keys. | Enter the tag keys that you want to check for. If you want to specify multiple keys, separate them with commas, without spaces. (For example, `ApplicationId,CreatedBy,Environment,Organization` searches for four keys.) The CloudWatch Events event searches for these tag keys and sends a notification if they are not found. | Cloud architect | 

### Confirm the subscription
<a name="confirm-the-subscription"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Confirm the email subscription. | When the CloudFormation template deploys successfully, it sends a subscription email message to the email address you provided. To receive notifications, you must confirm this email subscription.   | Cloud architect | 

## Related resources
<a name="check-ec2-instances-for-mandatory-tags-at-launch-related-resources"></a>
+ [Creating a bucket](https://docs.aws.amazon.com/AmazonS3/latest/user-guide/create-bucket.html) (Amazon S3 documentation)
+ [Uploading objects](https://docs.aws.amazon.com/AmazonS3/latest/user-guide/upload-objects.html) (Amazon S3 documentation)
+ [Tag your Amazon EC2 resources](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html) (Amazon EC2 documentation)
+ [Creating a CloudWatch Events rule that triggers on an AWS API call using AWS CloudTrail](https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/Create-CloudWatch-Events-CloudTrail-Rule.html) (Amazon CloudWatch documentation)

## Attachments
<a name="attachments-9cd74141-a87f-419e-94b3-0b28fd04a018"></a>

To access additional content that is associated with this document, unzip the following file: [attachment.zip](samples/p-attach/9cd74141-a87f-419e-94b3-0b28fd04a018/attachments/attachment.zip)

# Clean up AWS Account Factory for Terraform (AFT) resources safely after state file loss
<a name="clean-up-aft-resources-safely-after-state-file-loss"></a>

*Gokendra Malviya, Amazon Web Services*

## Summary
<a name="clean-up-aft-resources-safely-after-state-file-loss-summary"></a>

When you use AWS Account Factory for Terraform (AFT) to manage your AWS Control Tower environment, AFT generates a Terraform state file to track the state and configuration of the resources created by Terraform. Losing the Terraform state file can create significant challenges for resource management and cleanup. This pattern provides a systematic approach to safely identify and remove AFT-related resources while maintaining the integrity of your AWS Control Tower environment.

The process is designed to ensure proper removal of all AFT components, even without the original state file reference. This process provides a clear path to successfully re-establish and reconfigure AFT in your environment, to help ensure minimal disruption to your AWS Control Tower operations.

For more information about AFT, see the [AWS Control Tower documentation](https://docs.aws.amazon.com/controltower/latest/userguide/taf-account-provisioning.html).

## Prerequisites and limitations
<a name="clean-up-aft-resources-safely-after-state-file-loss-prereqs"></a>

**Prerequisites**
+ A thorough understanding of [AFT architecture](https://docs.aws.amazon.com/controltower/latest/userguide/aft-architecture.html).
+ Administrator access to the following accounts:
  + AFT Management account
  + AWS Control Tower Management account
  + Log Archive account
  + Audit account
+ Verification that no service control policies (SCPs) contain restrictions or limitations that would block the deletion of AFT-related resources.

**Limitations**
+ This process can clean up resources effectively, but it cannot recover lost state files, and some resources might require manual identification.
+ The duration of the cleanup process depends on your environment's complexity and might take several hours.
+ This pattern has been tested with AFT version 1.12.2 and deletes the following resources. If you're using a different version of AFT, you might have to delete additional resources.    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/clean-up-aft-resources-safely-after-state-file-loss.html)

**Important**  
The resources that are deleted by the steps in this pattern cannot be recovered. Before you follow these steps, verify the resource names carefully and make sure that they were created by AFT.
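
One way to verify what you are about to delete is to build an inventory of resources that carry an AFT-related tag by using the Resource Groups Tagging API. The following TypeScript (AWS SDK for JavaScript v3) sketch is illustrative only; the tag key and value are assumptions, not necessarily the exact tags that your AFT version applies, so confirm them in your environment first.

```typescript
import {
  ResourceGroupsTaggingAPIClient,
  GetResourcesCommand,
} from '@aws-sdk/client-resource-groups-tagging-api';

// Run this with credentials for the account that you are cleaning up (for example, the
// AFT management account) and repeat it for each Region that AFT deployed into.
const client = new ResourceGroupsTaggingAPIClient({ region: 'us-east-1' });

async function listAftTaggedResources(): Promise<void> {
  let paginationToken: string | undefined;
  do {
    const page = await client.send(
      new GetResourcesCommand({
        // Assumed tag key and value; verify the tags that AFT applied in your environment.
        TagFilters: [{ Key: 'managed_by', Values: ['AFT'] }],
        PaginationToken: paginationToken,
      }),
    );
    for (const mapping of page.ResourceTagMappingList ?? []) {
      console.log(mapping.ResourceARN); // review this list before deleting anything
    }
    paginationToken = page.PaginationToken || undefined;
  } while (paginationToken);
}

listAftTaggedResources().catch(console.error);
```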

## Architecture
<a name="clean-up-aft-resources-safely-after-state-file-loss-architecture"></a>

The following diagram shows the AFT components and high-level workflow. AFT sets up a Terraform pipeline that helps you provision and customize your accounts in AWS Control Tower. AFT follows a GitOps model to automate the processes of account provisioning in AWS Control Tower. You create a Terraform file for an account request and commit it to a repository, which provides the input that triggers the AFT workflow for account provisioning. After account provisioning is complete, AFT can run additional customization steps automatically.

![\[AFT components and high-level workflow.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/1342c0a6-4b07-46df-a063-ceab2e2f83c8/images/3e0cae87-20ef-4fcc-aacf-bb450844ac56.png)


In this architecture:
+ **AWS Control Tower Management account** is an AWS account that's dedicated to the AWS Control Tower service. This is also typically referred to as the *AWS payer account* or *AWS Organizations Management account*.
+ **AFT Management account** is an AWS account that's dedicated to AFT management operations. This is different from your organization's management account.
+ **Vended account** is an AWS account that contains all the baseline components and controls that you selected. AFT uses AWS Control Tower to vend a new account.

For additional information about this architecture, see [Introduction to AFT](https://catalog.workshops.aws/control-tower/en-US/customization/aft) in the AWS Control Tower workshop.

## Tools
<a name="clean-up-aft-resources-safely-after-state-file-loss-tools"></a>

**AWS services**
+ [AWS Control Tower](https://docs.aws.amazon.com/controltower/latest/userguide/what-is-control-tower.html) helps you set up and govern an AWS multi-account environment, following prescriptive best practices.
+ [AWS Account Factory for Terraform (AFT)](https://docs.aws.amazon.com/controltower/latest/userguide/taf-account-provisioning.html) sets up a Terraform pipeline to help you provision and customize accounts and resources in AWS Control Tower.
+ [AWS Organizations](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_introduction.html) helps you centrally manage and govern your environment as you grow and scale your AWS resources. Using Organizations, you can create accounts and allocate resources, group accounts to organize your workflows, apply policies for governance, and simplify billing by using a single payment method for all your accounts.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them. This pattern requires IAM roles and permissions.

**Other tools**
+ [Terraform](https://www.terraform.io/) is an infrastructure as code (IaC) tool from HashiCorp that helps you create and manage cloud and on-premises resources.

## Best practices
<a name="clean-up-aft-resources-safely-after-state-file-loss-best-practices"></a>
+ For AWS Control Tower, see [Best practices for AWS Control Tower administrators](https://docs.aws.amazon.com/controltower/latest/userguide/best-practices.html) in the AWS Control Tower documentation.
+ For IAM, see [Security best practices](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html) in the IAM documentation.

## Epics
<a name="clean-up-aft-resources-safely-after-state-file-loss-epics"></a>

### Delete AFT resources in the AFT Management account
<a name="delete-aft-resources-in-the-aft-management-account"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Delete resources that are identified by the AFT tag. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/clean-up-aft-resources-safely-after-state-file-loss.html) | AWS administrator, AWS DevOps, DevOps engineer | 
| Delete IAM roles. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/clean-up-aft-resources-safely-after-state-file-loss.html) | AWS administrator, AWS DevOps, DevOps engineer | 
| Delete the AWS Backup backup vault. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/clean-up-aft-resources-safely-after-state-file-loss.html) | AWS administrator, AWS DevOps, DevOps engineer | 
| Delete Amazon CloudWatch resources. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/clean-up-aft-resources-safely-after-state-file-loss.html) | AWS administrator, AWS DevOps, DevOps engineer | 
| Delete AWS KMS resources. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/clean-up-aft-resources-safely-after-state-file-loss.html) | AWS administrator, AWS DevOps, DevOps engineer | 

### Delete AFT resources in the Log Archive account
<a name="delete-aft-resources-in-the-log-archive-account"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Delete S3 buckets. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/clean-up-aft-resources-safely-after-state-file-loss.html) | AWS administrator, AWS DevOps, DevOps engineer | 
| Delete IAM roles. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/clean-up-aft-resources-safely-after-state-file-loss.html) | AWS administrator, AWS DevOps, DevOps engineer | 

### Delete AFT resources in the Audit account
<a name="delete-aft-resources-in-the-audit-account"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Delete IAM roles. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/clean-up-aft-resources-safely-after-state-file-loss.html) | AWS administrator, AWS DevOps, DevOps engineer | 

### Delete AFT resources in the AWS Control Tower Management account
<a name="delete-aft-resources-in-the-ctower-management-account"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Delete IAM roles. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/clean-up-aft-resources-safely-after-state-file-loss.html) | AWS administrator, AWS DevOps, DevOps engineer | 
| Delete EventBridge rules. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/clean-up-aft-resources-safely-after-state-file-loss.html) | AWS administrator, AWS DevOps, DevOps engineer | 

## Troubleshooting
<a name="clean-up-aft-resources-safely-after-state-file-loss-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| Detaching the internet gateway was unsuccessful. | While you're deleting resources that are identified by the **AFT** tag, if you encounter this issue when you detach or delete the internet gateway, you first have to delete VPC endpoints:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/clean-up-aft-resources-safely-after-state-file-loss.html) | 
| You're unable to find the specified CloudWatch queries. | If you are unable to find the CloudWatch queries that were created by AFT, follow these steps:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/clean-up-aft-resources-safely-after-state-file-loss.html) | 

## Related resources
<a name="clean-up-aft-resources-safely-after-state-file-loss-resources"></a>
+ AFT:
  + [GitHub Repository](https://github.com/aws-ia/terraform-aws-control_tower_account_factory)
  + [Workshop](https://catalog.workshops.aws/control-tower/en-US/customization/aft)
  + [Documentation](https://docs.aws.amazon.com/controltower/latest/userguide/aft-getting-started.html)
+ [AWS Control Tower documentation](https://docs.aws.amazon.com/controltower/latest/userguide/getting-started-with-control-tower.html)

## Additional information
<a name="clean-up-aft-resources-safely-after-state-file-loss-additional"></a>

To view AFT queries on the CloudWatch Logs Insights dashboard, choose the **Saved and sample queries** icon from the upper-right corner, as illustrated in the following screenshot:

![\[Accessing AFT queries on the CloudWatch Logs Insights dashboard.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/1342c0a6-4b07-46df-a063-ceab2e2f83c8/images/255d4032-738b-4600-9084-9684d2e9a328.png)


# Create a pipeline in AWS Regions that don’t support AWS CodePipeline
<a name="create-a-pipeline-in-aws-regions-that-don-t-support-aws-codepipeline"></a>

*Anand Krishna Varanasi, Amazon Web Services*

## Summary
<a name="create-a-pipeline-in-aws-regions-that-don-t-support-aws-codepipeline-summary"></a>

**Notice**: AWS CodeCommit is no longer available to new customers. Existing customers of AWS CodeCommit can continue to use the service as normal. [Learn more](https://aws.amazon.com/blogs/devops/how-to-migrate-your-aws-codecommit-repository-to-another-git-provider/)

AWS CodePipeline is a continuous delivery (CD) orchestration service that’s part of a set of DevOps tools from Amazon Web Services (AWS). It integrates with a large variety of sources (such as version control systems and storage solutions), continuous integration (CI) products and services from AWS and AWS Partners, and open-source products to provide an end-to-end workflow service for fast application and infrastructure deployments.

However, CodePipeline isn’t supported in all AWS Regions, and it’s useful to have an invisible orchestrator that connects AWS CI/CD services. This pattern describes how to implement an end-to-end workflow pipeline in AWS Regions where CodePipeline isn’t yet supported by using AWS CI/CD services such as AWS CodeCommit, AWS CodeBuild, and AWS CodeDeploy.

## Prerequisites and limitations
<a name="create-a-pipeline-in-aws-regions-that-don-t-support-aws-codepipeline-prereqs"></a>

**Prerequisites **
+ An active AWS account
+ AWS Cloud Development Kit (AWS CDK) CLI version 2.28 or later

## Architecture
<a name="create-a-pipeline-in-aws-regions-that-don-t-support-aws-codepipeline-architecture"></a>

**Target technology stack**

The following diagram shows a pipeline that was created in a Region that doesn’t support CodePipeline, such as the Africa (Cape Town) Region. A developer pushes the CodeDeploy configuration files (also called *deployment lifecycle hook scripts*) to the Git repository that’s hosted by CodeCommit. (See the [GitHub repository](https://github.com/aws-samples/invisible-codepipeline-unsupported-regions) provided with this pattern.) An Amazon EventBridge rule automatically initiates CodeBuild.

The CodeDeploy configuration files are fetched from CodeCommit as part of the source stage of the pipeline and transferred to CodeBuild. 

In the next phase, CodeBuild performs these tasks: 

1. Downloads the application source code TAR file. You can configure the name of this file by using Parameter Store, a capability of AWS Systems Manager.

1. Downloads the CodeDeploy configuration files.

1. Creates a combined archive of application source code and CodeDeploy configuration files that are specific to the application type.

1. Initiates CodeDeploy deployment to an Amazon Elastic Compute Cloud (Amazon EC2) instance by using the combined archive.

![\[Pipeline creation in unsupported AWS Region\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/e27750de-b597-424e-b5bf-4d58dc9b60cc/images/95fc815e-a762-4142-b0fd-2a716823e498.png)
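The repository's AWS CDK app creates this wiring for you. As a minimal TypeScript sketch of the event-driven part of the preceding diagram (a push to the CodeCommit repository starts the CodeBuild project), with illustrative construct names and the build commands omitted:

```typescript
import { App, Stack } from 'aws-cdk-lib';
import * as codebuild from 'aws-cdk-lib/aws-codebuild';
import * as codecommit from 'aws-cdk-lib/aws-codecommit';
import * as targets from 'aws-cdk-lib/aws-events-targets';

const app = new App();
const stack = new Stack(app, 'InvisiblePipelineStack');

// Repository that holds the CodeDeploy configuration files (deployment lifecycle hook scripts).
const repo = new codecommit.Repository(stack, 'AppRepo', {
  repositoryName: 'app-dev-repo',
});

// Build project that fetches the application TAR file, merges it with the CodeDeploy
// files, and starts the CodeDeploy deployment (buildspec omitted in this sketch).
const project = new codebuild.Project(stack, 'CodeBuildProject', {
  source: codebuild.Source.codeCommit({ repository: repo }),
});

// EventBridge rule: every push to the repository starts the build.
repo.onCommit('StartBuildOnPush', {
  target: new targets.CodeBuildProject(project),
});

app.synth();
```

The repository and project names here mirror the defaults in the `cdk.json` file (`app-dev-repo`, `CodeBuildProject`); the GitHub repository's app is the authoritative implementation.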


## Tools
<a name="create-a-pipeline-in-aws-regions-that-don-t-support-aws-codepipeline-tools"></a>

**AWS services**
+ [AWS CodeBuild](https://docs.aws.amazon.com/codebuild/latest/userguide/welcome.html) is a fully managed build service that helps you compile source code, run unit tests, and produce artifacts that are ready to deploy.
+ [AWS CodeCommit](https://docs.aws.amazon.com/codecommit/latest/userguide/welcome.html) is a version control service that helps you privately store and manage Git repositories, without needing to manage your own source control system.
+ [AWS CodeDeploy](https://docs.aws.amazon.com/codedeploy/latest/userguide/welcome.html) automates deployments to Amazon EC2 or on-premises instances, AWS Lambda functions, or Amazon Elastic Container Service (Amazon ECS) services.
+ [AWS CodePipeline](https://docs.aws.amazon.com/codepipeline/latest/userguide/welcome.html) helps you quickly model and configure the different stages of a software release and automate the steps required to release software changes continuously.
+ [AWS Cloud Development Kit (AWS CDK)](https://docs.aws.amazon.com/cdk/latest/guide/home.html) is a software development framework that helps you define and provision AWS Cloud infrastructure in code.

**Code**

The code for this pattern is available in the GitHub [CodePipeline Unsupported Regions](https://github.com/aws-samples/invisible-codepipeline-unsupported-regions) repository.

## Epics
<a name="create-a-pipeline-in-aws-regions-that-don-t-support-aws-codepipeline-epics"></a>

### Set up your developer workstation
<a name="set-up-your-developer-workstation"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Install the AWS CDK CLI. | For instructions, see the [AWS CDK documentation](https://docs.aws.amazon.com/cdk/v2/guide/getting_started.html#getting_started_prerequisites). | AWS DevOps | 
| Install a Git client. | To create commits, you can use a Git client installed on your local computer, and then push your commits to the CodeCommit repository. To set up CodeCommit with your Git client, see the [CodeCommit documentation](https://docs.aws.amazon.com/codecommit/latest/userguide/how-to-create-commit.html). | AWS DevOps | 
| Install npm. | Install the **npm **package manager. For more information, see the [npm documentation](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm). | AWS DevOps | 

### Set up the pipeline
<a name="set-up-the-pipeline"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clone the code repository. | Clone the GitHub [CodePipeline Unsupported Regions](https://github.com/aws-samples/invisible-codepipeline-unsupported-regions) repository to your local machine by running the following command.<pre>git clone https://github.com/aws-samples/invisible-codepipeline-unsupported-regions</pre> | DevOps engineer | 
| Set parameters in cdk.json. | Open the `cdk.json` file and provide values for the following parameters:<pre>"pipeline_account":"XXXXXXXXXXXX",<br />"pipeline_region":"us-west-2",<br />"repo_name": "app-dev-repo",<br />"ec2_tag_key": "test-vm",<br />"configName" : "cbdeployconfig",<br />"deploymentGroupName": "cbdeploygroup",<br />"applicationName" : "cbdeployapplication",<br />"projectName" : "CodeBuildProject"</pre>where:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-a-pipeline-in-aws-regions-that-don-t-support-aws-codepipeline.html) | AWS DevOps | 
| Set up the AWS CDK construct library. | In the cloned GitHub repository, use the following commands to install the AWS CDK construct library, build your application, and synthesize to generate the AWS CloudFormation template for the application.<pre>npm i aws-cdk-lib<br />npm run build<br />cdk synth</pre> | AWS DevOps | 
| Deploy the sample AWS CDK application. | Deploy the code by running the following command in an unsupported Region (such as `af-south-1`).<pre>cdk deploy</pre> | AWS DevOps | 

### Set up the CodeCommit repository for CodeDeploy
<a name="set-up-the-codecommit-repository-for-codedeploy"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Set up CI/CD for the application. | Clone the CodeCommit repository that you specified in the `cdk.json` file (this is called `app-dev-repo` by default) to set up the CI/CD pipeline for the application.<pre>git clone https://git-codecommit.us-west-2.amazonaws.com/v1/repos/app-dev-repo</pre>where the repository name and Region depend on the values you provided in the `cdk.json` file. | AWS DevOps | 

### Test the pipeline
<a name="test-the-pipeline"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Test the pipeline with deployment instructions. | The `CodeDeploy_Files` folder of the GitHub [CodePipeline Unsupported Regions](https://github.com/aws-samples/invisible-codepipeline-unsupported-regions) repository includes sample files that instruct CodeDeploy to deploy the application. The `appspec.yml` file is a CodeDeploy configuration file that contains hooks to control the flow of application deployment. You can use the sample files `index.html`, `start_server.sh`, `stop_server.sh`, and `install_dependencies.sh` to update a website that’s hosted on Apache. These are examples—you can use the code in the GitHub repository to deploy any type of application. When the files are pushed to the CodeCommit repository, the invisible pipeline is initiated automatically. For deployment results, check the results of individual phases in the CodeBuild and CodeDeploy consoles. | AWS DevOps | 

## Related resources
<a name="create-a-pipeline-in-aws-regions-that-don-t-support-aws-codepipeline-resources"></a>
+ [Getting started](https://docs.aws.amazon.com/cdk/v2/guide/getting_started.html#getting_started_prerequisites) (AWS CDK documentation)
+ [Introduction to the Cloud Development Kit (CDK)](https://catalog.us-east-1.prod.workshops.aws/workshops/5962a836-b214-4fbf-9462-fedba7edcc9b/en-US) (AWS Workshop Studio)
+ [AWS CDK Workshop](https://cdkworkshop.com/)

# Customize default role names by using AWS CDK aspects and escape hatches
<a name="customize-default-role-names-by-using-aws-cdk-aspects-and-escape-hatches"></a>

*Sandeep Singh and James Jacob, Amazon Web Services*

## Summary
<a name="customize-default-role-names-by-using-aws-cdk-aspects-and-escape-hatches-summary"></a>

This pattern demonstrates how to customize the default names of roles that are created by AWS Cloud Development Kit (AWS CDK) constructs. Customizing role names is often necessary if your organization has specific constraints based on naming conventions. For example, your organization might set AWS Identity and Access Management (IAM) [permissions boundaries](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_boundaries.html) or [service control policies (SCPs)](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html) that require a specific prefix in role names. In such cases, the default role names generated by AWS CDK constructs might not meet these conventions and might have to be altered. This pattern addresses those requirements by using [escape hatches](https://docs.aws.amazon.com/cdk/v2/guide/cfn-layer.html) and [aspects](https://docs.aws.amazon.com/cdk/v2/guide/aspects.html) in the AWS CDK. You use escape hatches to define custom role names, and aspects to apply a custom name to all roles, to ensure adherence to your organization's policies and constraints.

## Prerequisites and limitations
<a name="customize-default-role-names-by-using-aws-cdk-aspects-and-escape-hatches-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ Prerequisites specified in the [AWS CDK documentation](https://docs.aws.amazon.com/cdk/v2/guide/getting_started.html#getting_started_prerequisites)

**Limitations**
+ Aspects filter resources based on resource types, so all roles share the same prefix. If you require different role prefixes for different roles, additional filtering based on other properties is necessary. For example, to assign different prefixes to roles that are associated with AWS Lambda functions, you could filter by specific role attributes or tags, and apply one prefix for Lambda-related roles and a different prefix for other roles.
+ IAM role names have a maximum length of 64 characters, so modified role names have to be trimmed to meet this restriction.
+ Some AWS services aren’t available in all AWS Regions. For Region availability, see [AWS services by Region](https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/). For specific endpoints, see the [Service endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/aws-service-information.html) page, and choose the link for the service.

## Architecture
<a name="customize-default-role-names-by-using-aws-cdk-aspects-and-escape-hatches-architecture"></a>

**Target technology stack **
+ AWS CDK
+ AWS CloudFormation

**Target architecture **

![\[Architecture for using escape hatches and aspects to customize AWS CDK-assigned role names.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/c149d8d2-1da6-4680-ab0b-e5051b69688c/images/15e56ca5-f150-4522-b374-8ee2dcc655a9.png)

+ An AWS CDK app consists of one or more CloudFormation stacks, which are synthesized and deployed to manage AWS resources.
+ To modify a property of an AWS CDK-managed resource that isn't exposed by a layer 2 (L2) construct, you use an escape hatch to override the underlying CloudFormation property (in this case, the role name) and an aspect to apply that override to every role in the AWS CDK app during stack synthesis, as shown in the sketch after this list.
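
The repository contains the full implementation; the following is only a minimal TypeScript sketch of the idea, assuming a hypothetical `RoleNamePrefixer` aspect and prefix. It visits every construct in the app, and for each underlying `AWS::IAM::Role` resource it uses an escape hatch to override the `RoleName` property.

```typescript
import { App, Aspects, IAspect, Stack } from 'aws-cdk-lib';
import { CfnRole } from 'aws-cdk-lib/aws-iam';
import { IConstruct } from 'constructs';

// Hypothetical aspect that prefixes the name of every IAM role in the app.
class RoleNamePrefixer implements IAspect {
  constructor(private readonly prefix: string) {}

  public visit(node: IConstruct): void {
    // Act only on the underlying CloudFormation (L1) role resources.
    if (node instanceof CfnRole) {
      // Escape hatch: override the RoleName property. The construct ID keeps the name
      // readable; trim to the 64-character IAM limit.
      const roleName = `${this.prefix}-${node.node.id}`.substring(0, 64);
      node.addPropertyOverride('RoleName', roleName);
    }
  }
}

const app = new App();
new Stack(app, 'ExampleStack1'); // constructs that create roles go here
Aspects.of(app).add(new RoleNamePrefixer('dev-unicorn'));

app.synth();
```

In the pattern's repository, the aspect is defined in `lib/aspects.ts` and applied to the stacks in `bin/app-with-aspects.ts` with the `dev-unicorn` prefix.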

## Tools
<a name="customize-default-role-names-by-using-aws-cdk-aspects-and-escape-hatches-tools"></a>

**AWS services**
+ [AWS Cloud Development Kit (AWS CDK)](https://docs.aws.amazon.com/cdk/latest/guide/home.html) is a software development framework that helps you define and provision AWS Cloud infrastructure in code.
+ [AWS CDK Command Line Interface (AWS CDK CLI)](https://docs.aws.amazon.com/cdk/latest/guide/cli.html) (also referred to as the AWS CDK Toolkit) is a command line cloud development kit that helps you interact with your AWS CDK app. The CLI `cdk` command is the primary tool for interacting with your AWS CDK app. It runs your app, interrogates the application model you defined, and produces and deploys the CloudFormation templates that are generated by the AWS CDK.
+ [CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) helps you set up AWS resources, provision them quickly and consistently, and manage them throughout their lifecycle across AWS accounts and Regions.

**Code repository**

The source code and templates for this pattern are available in the GitHub [CDK Aspects Override](https://github.com/aws-samples/cdk-aspects-override) repository.

## Best practices
<a name="customize-default-role-names-by-using-aws-cdk-aspects-and-escape-hatches-best-practices"></a>

See [Best practices for using the AWS CDK in TypeScript to create IaC projects](https://docs.aws.amazon.com/prescriptive-guidance/latest/best-practices-cdk-typescript-iac/introduction.html) on the AWS Prescriptive Guidance website.

## Epics
<a name="customize-default-role-names-by-using-aws-cdk-aspects-and-escape-hatches-epics"></a>

### Install the AWS CDK CLI
<a name="install-the-cdk-cli"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Install the AWS CDK CLI. | To install the AWS CDK CLI globally, run the command:<pre>npm install -g aws-cdk</pre> | AWS DevOps | 
| Verify the version. | Run the command:<pre>cdk --version</pre>Confirm that you’re using version 2 of the AWS CDK CLI. | AWS DevOps | 
| Bootstrap the AWS CDK environment. | Before you  deploy the CloudFormation templates, prepare the account and AWS Region that you want to use. Run the command:<pre>cdk bootstrap <account>/<Region></pre>For more information, see [AWS CDK bootstrapping](https://docs.aws.amazon.com/cdk/v2/guide/bootstrapping.html) in the AWS documentation. | AWS DevOps | 

### Deploy the AWS CDK app to demonstrate the use of aspects
<a name="deploy-the-cdk-app-to-demonstrate-the-use-of-aspects"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Set up the project. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/customize-default-role-names-by-using-aws-cdk-aspects-and-escape-hatches.html) | AWS DevOps | 
| Deploy stacks with default role names assigned by the AWS CDK. | Deploy two CloudFormation stacks (`ExampleStack1` and `ExampleStack2`) that contain the Lambda functions and their associated roles:<pre>npm run deploy:ExampleAppWithoutAspects</pre>The code doesn’t explicitly pass role properties, so the role names will be constructed by the AWS CDK. For example output, see the [Additional information](#customize-default-role-names-by-using-aws-cdk-aspects-and-escape-hatches-additional) section. | AWS DevOps | 
| Deploy stacks with aspects. | In this step, you apply an aspect that enforces a role name convention by adding a prefix to all IAM roles that are deployed in the AWS CDK project. The aspect is defined in the `lib/aspects.ts` file. The aspect uses an escape hatch to override the role name by adding a prefix. The aspect is applied to the stacks in the `bin/app-with-aspects.ts` file. The role name prefix used in this example is `dev-unicorn`.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/customize-default-role-names-by-using-aws-cdk-aspects-and-escape-hatches.html)For example output, see the [Additional information](#customize-default-role-names-by-using-aws-cdk-aspects-and-escape-hatches-additional) section. | AWS DevOps | 

### Clean up resources
<a name="clean-up-resources"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Delete your AWS CloudFormation stacks. | After you finish using this pattern, run the following command to clean up resources to avoid incurring additional costs:<pre>cdk destroy --all -f && cdk --app 'npx ts-node bin/app-with-aspects.ts' destroy --all -f</pre> | AWS DevOps | 

## Troubleshooting
<a name="customize-default-role-names-by-using-aws-cdk-aspects-and-escape-hatches-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| You encounter problems using the AWS CDK. | See [Troubleshooting common AWS CDK issues](https://docs.aws.amazon.com/cdk/v2/guide/troubleshooting.html) in the AWS CDK documentation. | 

## Related resources
<a name="customize-default-role-names-by-using-aws-cdk-aspects-and-escape-hatches-resources"></a>
+ [AWS Cloud Development Kit (AWS CDK)](https://aws.amazon.com/cdk/)
+ [AWS CDK documentation](https://docs.aws.amazon.com/cdk/)
+ [AWS CDK on GitHub](https://github.com/aws/aws-cdk)
+ [Escape hatches](https://docs.aws.amazon.com/cdk/v2/guide/cfn-layer.html)
+ [Aspects and the AWS CDK](https://docs.aws.amazon.com/cdk/v2/guide/aspects.html)

## Additional information
<a name="customize-default-role-names-by-using-aws-cdk-aspects-and-escape-hatches-additional"></a>

**Role names created by CloudFormation without aspects**

```
Outputs:
ExampleStack1WithoutAspects.Function1RoleName = example-stack1-without-as-Function1LambdaFunctionSe-y7FYTY6FXJXA
ExampleStack1WithoutAspects.Function2RoleName = example-stack1-without-as-Function2LambdaFunctionSe-dDZV4rkWqWnI
...

Outputs:
ExampleStack2WithoutAspects.Function3RoleName = example-stack2-without-as-Function3LambdaFunctionSe-ygMv49iTyMq0
```

**Role names created by CloudFormation with aspects**

```
Outputs:
ExampleStack1WithAspects.Function1RoleName = dev-unicorn-Function1LambdaFunctionServiceRole783660DC
ExampleStack1WithAspects.Function2RoleName = dev-unicorn-Function2LambdaFunctionServiceRole2C391181
...

Outputs:
ExampleStack2WithAspects.Function3RoleName = dev-unicorn-Function3LambdaFunctionServiceRole4CAA721C
```

# Deploy a Cassandra cluster on Amazon EC2 with private static IPs to avoid rebalancing
<a name="deploy-a-cassandra-cluster-on-amazon-ec2-with-private-static-ips-to-avoid-rebalancing"></a>

*Dipin Jain, Amazon Web Services*

## Summary
<a name="deploy-a-cassandra-cluster-on-amazon-ec2-with-private-static-ips-to-avoid-rebalancing-summary"></a>

The private IP address of an Amazon Elastic Compute Cloud (Amazon EC2) instance is retained throughout the instance's lifecycle. However, the private IP changes if the instance is replaced after a planned or unplanned outage; for example, during an Amazon Machine Image (AMI) upgrade. In some scenarios, retaining a private static IP can enhance the performance and recovery time of workloads. For example, using a static IP for an Apache Cassandra seed node prevents the cluster from incurring a rebalancing overhead.

This pattern describes how to attach a secondary elastic network interface to EC2 instances to keep the IP static during rehosting. The pattern focuses on Cassandra clusters, but you can use this implementation for any architecture that benefits from private static IPs.

## Prerequisites and limitations
<a name="deploy-a-cassandra-cluster-on-amazon-ec2-with-private-static-ips-to-avoid-rebalancing-prereqs"></a>

**Prerequisites **
+ An active Amazon Web Services (AWS) account

**Product versions**
+ DataStax version 5.11.1
+ Operating system: Ubuntu 16.04.6 LTS

## Architecture
<a name="deploy-a-cassandra-cluster-on-amazon-ec2-with-private-static-ips-to-avoid-rebalancing-architecture"></a>

**Source architecture**

The source could be a Cassandra cluster on an on-premises virtual machine (VM) or on EC2 instances in the AWS Cloud. The following diagram illustrates the second scenario. This example includes four cluster nodes: three seed nodes and one management node. In the source architecture, each node has a single network interface attached.

![\[Four Amazon EC2 cluster nodes that each have a single network interface attached.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/47ca4dbc-0922-4e65-b66c-4db5122fc4ac/images/5d80cfc9-4b72-4c72-aefd-b77cc0fb58e3.png)


**Target architecture**

The destination cluster is hosted on EC2 instances with a secondary elastic network interface attached to each node, as illustrated in the following diagram.

![\[Four Amazon EC2 cluster nodes that each have a secondary elastic network interface attached.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/47ca4dbc-0922-4e65-b66c-4db5122fc4ac/images/d1e22017-f041-426b-9204-31ac158a407d.png)


**Automation and scale**

You can also automate attaching a second elastic network interface to an EC2 Auto Scaling group, as described in an [AWS Knowledge Center video](https://www.youtube.com/watch?v=RmwGYXchb4E).

## Epics
<a name="deploy-a-cassandra-cluster-on-amazon-ec2-with-private-static-ips-to-avoid-rebalancing-epics"></a>

### Configure a Cassandra cluster on Amazon EC2
<a name="configure-a-cassandra-cluster-on-amazon-ec2"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Launch EC2 nodes to host a Cassandra cluster. | On the [Amazon EC2 console](https://console.aws.amazon.com/ec2/), launch four EC2 instances for your Ubuntu nodes in your AWS account. Three (seed) nodes are used for the Cassandra cluster, and the fourth node acts as a cluster management node where you will install DataStax Enterprise (DSE) OpsCenter. For instructions, see the [Amazon EC2 documentation](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html#ec2-launch-instance). | Cloud engineer | 
| Confirm node communications. | Make sure that the four nodes can communicate with one another over the database and cluster management ports. | Network engineer | 
| Install DSE OpsCenter on the management node. | Install DSE OpsCenter 6.1 from the Debian package on the management node. For instructions, see the [DataStax documentation](https://docs.datastax.com/en/opscenter/6.1/opsc/install/opscInstallDeb_t.html). | DBA | 
| Create a secondary network interface. | Cassandra generates a universally unique identifier (UUID) for each node based on the IP address of the EC2 instance for that node. This UUID is used for distributing virtual nodes (vnodes) on the ring. When Cassandra is deployed on EC2 instances, IP addresses are assigned automatically to the instances as they are created. In the event of a planned or unplanned outage, the IP address for the new EC2 instance changes, the data distribution changes, and the entire ring has to be rebalanced. This is not desirable. To preserve the assigned IP address, use a [secondary elastic network interface](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html#scenarios-enis) with a fixed IP address.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-a-cassandra-cluster-on-amazon-ec2-with-private-static-ips-to-avoid-rebalancing.html) For more information about creating a network interface, see the [Amazon EC2 documentation](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html#create_eni). (A sketch that shows the create and attach API calls follows this table.) | Cloud engineer | 
| Attach the secondary network interface to cluster nodes. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-a-cassandra-cluster-on-amazon-ec2-with-private-static-ips-to-avoid-rebalancing.html)For more information about attaching a network interface, see the [Amazon EC2 documentation](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html#attach_eni). | Cloud engineer | 
| Add routes in Amazon EC2 to address asymmetric routing.  | When you attach the second network interface, traffic can be routed asymmetrically across the two interfaces. To avoid this, add routes for the new network interface. For an in-depth explanation and remediation of asymmetric routing, see the [AWS Knowledge Center video](https://www.youtube.com/watch?v=RmwGYXchb4E) or [Overcoming Asymmetric Routing on Multi-Home Servers](http://www.linuxjournal.com/article/7291) (article in *Linux Journal* by Patrick McManus, April 5, 2004). | Network engineer | 
| Update DNS entries to point to the secondary network interface IP. | Point the fully qualified domain name (FQDN) of the node to the IP of the secondary network interface. | Network engineer | 
| Install and configure the Cassandra cluster by using DSE OpsCenter. | When the cluster nodes are ready with the secondary network interfaces, you can install and configure the Cassandra cluster. | DBA | 
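
The following boto3 sketch illustrates the network interface tasks in the preceding table. The subnet ID, security group ID, instance ID, and IP address are placeholders, and OS-level routing on the instance still has to be adjusted as described in the asymmetric routing task.

```
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed Region

# Create a secondary network interface with a fixed private IP address.
# The subnet, security group, and IP address below are placeholders.
eni = ec2.create_network_interface(
    SubnetId="subnet-0123456789abcdef0",
    PrivateIpAddress="10.0.1.50",
    Groups=["sg-0123456789abcdef0"],
    Description="Fixed IP for Cassandra seed node 1",
)
eni_id = eni["NetworkInterface"]["NetworkInterfaceId"]

# Attach the interface to the cluster node as a secondary interface (device index 1).
ec2.attach_network_interface(
    NetworkInterfaceId=eni_id,
    InstanceId="i-0123456789abcdef0",
    DeviceIndex=1,
)
print(f"Attached {eni_id} with fixed IP 10.0.1.50")
```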

### Recover cluster from node failure
<a name="recover-cluster-from-node-failure"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an AMI for the cluster seed node. | Make a backup of the nodes so you can restore them with database binaries in case of node failure. For instructions, see [Create an AMI](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/create-ami.html) in the Amazon EC2 documentation. | Backup administrator | 
| Recover from node failure. | Replace the failed node with a new EC2 instance launched from the AMI, and attach the secondary network interface of the failed node (see the sketch after this table). | Backup administrator | 
| Verify that the Cassandra cluster is healthy. | When the replacement node is up, verify cluster health in DSE OpsCenter. | DBA | 
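
The recovery steps can be scripted in a similar way. The following boto3 sketch (placeholder AMI, subnet, instance type, and network interface IDs) launches a replacement node from the backup AMI and reattaches the failed node's secondary network interface so that the node keeps its fixed IP address.

```
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed Region

# Launch the replacement node from the AMI that was created as a backup.
instance = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",      # backup AMI of the seed node (placeholder)
    InstanceType="m5.xlarge",             # assumed instance type
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-0123456789abcdef0",
)["Instances"][0]

ec2.get_waiter("instance_running").wait(InstanceIds=[instance["InstanceId"]])

# Reattach the failed node's secondary network interface so that the Cassandra
# node keeps the same fixed IP address and the ring does not rebalance.
ec2.attach_network_interface(
    NetworkInterfaceId="eni-0123456789abcdef0",  # ENI preserved from the failed node
    InstanceId=instance["InstanceId"],
    DeviceIndex=1,
)
```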

## Related resources
<a name="deploy-a-cassandra-cluster-on-amazon-ec2-with-private-static-ips-to-avoid-rebalancing-resources"></a>
+ [Installing DSE OpsCenter 6.1 from the Debian package](https://docs.datastax.com/en/opscenter/6.1/opsc/install/opscInstallDeb_t.html) (DataStax documentation)
+ [How to make a secondary network interface work in an Ubuntu EC2 instance](https://www.youtube.com/watch?v=RmwGYXchb4E) (AWS Knowledge Center video)
+ [Best Practices for Running Apache Cassandra on Amazon EC2](https://aws.amazon.com/blogs/big-data/best-practices-for-running-apache-cassandra-on-amazon-ec2/) (AWS blog post)

# Extend VRFs to AWS by using AWS Transit Gateway Connect
<a name="extend-vrfs-to-aws-by-using-aws-transit-gateway-connect"></a>

*Adam Till, Yashar Araghi, Vikas Dewangan, and Mohideen HajaMohideen, Amazon Web Services*

## Summary
<a name="extend-vrfs-to-aws-by-using-aws-transit-gateway-connect-summary"></a>

Virtual routing and forwarding (VRF) is a feature of traditional networks. It uses isolated logical routing domains, in the form of route tables, to separate network traffic within the same physical infrastructure. You can configure AWS Transit Gateway to support VRF isolation when you connect your on-premises network to AWS. This pattern uses a sample architecture to connect on-premises VRFs to different transit gateway route tables.

This pattern uses transit virtual interfaces (VIFs) in AWS Direct Connect and transit gateway Connect attachments to extend the VRFs. A [transit VIF](https://docs.aws.amazon.com/directconnect/latest/UserGuide/WorkingWithVirtualInterfaces.html) is used to access one or more Amazon VPC transit gateways that are associated with Direct Connect gateways. A [transit gateway Connect attachment](https://docs.aws.amazon.com/vpc/latest/tgw/tgw-connect.html) connects a transit gateway with a third-party virtual appliance that is running in a VPC. A transit gateway Connect attachment supports the Generic Routing Encapsulation (GRE) tunnel protocol for high performance, and it supports Border Gateway Protocol (BGP) for dynamic routing.

The approach described in this pattern has the following benefits:
+ Using Transit Gateway Connect, you can advertise up to 1,000 routes to the Transit Gateway Connect peer and receive up to 5,000 routes from it. Using the Direct Connect transit VIF feature without Transit Gateway Connect is limited to 20 prefixes per transit gateway.
+ You can maintain traffic isolation and use Transit Gateway Connect to provide hosted services on AWS, regardless of the IP address schemes your customers are using.
+ The VRF traffic doesn’t need to traverse a public virtual interface. This makes it easier to adhere to compliance and security requirements in many organizations.
+ Each GRE tunnel supports up to 5 Gbps, and you can have up to four GRE tunnels per transit gateway Connect attachment. This is faster than many other connection types, such as AWS Site-to-Site VPN connections, which support up to 1.25 Gbps.

## Prerequisites and limitations
<a name="extend-vrfs-to-aws-by-using-aws-transit-gateway-connect-prereqs"></a>

**Prerequisites**
+ The required AWS accounts have been created (see the architecture for details).
+ Permissions to assume an AWS Identity and Access Management (IAM) role in each account.
+ The IAM roles in each account must have permissions to provision AWS Transit Gateway and AWS Direct Connect resources. For more information, see [Authentication and access control for your transit gateways](https://docs.aws.amazon.com/vpc/latest/tgw/transit-gateway-authentication-access-control.html) and [Identity and access management for Direct Connect](https://docs.aws.amazon.com/directconnect/latest/UserGuide/security-iam.html).
+ The Direct Connect connections have been created successfully. For more information, see [Create a connection using the Connection wizard](https://docs.aws.amazon.com/directconnect/latest/UserGuide/dedicated_connection.html#create-connection).

**Limitations**
+ There are limits for transit gateway attachments to the VPCs in the production, QA, and development accounts. For more information, see [Transit gateway attachments to a VPC](https://docs.aws.amazon.com/vpc/latest/tgw/tgw-vpc-attachments.html).
+ There are limits for creating and using Direct Connect gateways. For more information, see [AWS Direct Connect quotas](https://docs.aws.amazon.com/directconnect/latest/UserGuide/limits.html).

## Architecture
<a name="extend-vrfs-to-aws-by-using-aws-transit-gateway-connect-architecture"></a>

**Target architecture**

The following sample architecture provides a reusable solution to deploy transit VIFs with transit gateway Connect attachments. This architecture provides resilience by using multiple Direct Connect locations. For more information, see [Maximum resiliency](https://docs.aws.amazon.com/directconnect/latest/UserGuide/maximum_resiliency.html) in the Direct Connect documentation. The on-premises network has production, QA, and development VRFs that are extended to AWS and isolated by using dedicated route tables.

![\[Architecture diagram of using AWS Direct Connect and AWS Transit Gateway resources to extend VRFs\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/db17e177-6c94-4d81-ab39-0923ecab2f1b/images/10be0625-8574-40eb-bc00-bb0a07d0dc26.png)


In the AWS environment, two accounts are dedicated to extending the VRFs: a *Direct Connect account* and a *network hub account*. The Direct Connect account contains the connection and the transit VIFs for each router. You create the transit VIFs from the Direct Connect account but deploy them to the network hub account so that you can associate them with the Direct Connect gateway in that account. The network hub account contains the Direct Connect gateway and the transit gateway. The AWS resources are connected as follows (a scripted sketch of the first three steps follows the list):

1. Transit VIFs connect the routers in the Direct Connect locations with AWS Direct Connect in the Direct Connect account.

1. A transit VIF connects Direct Connect with the Direct Connect gateway in the network hub account.

1. A [transit gateway association](https://docs.aws.amazon.com/directconnect/latest/UserGuide/direct-connect-transit-gateways.html) connects the Direct Connect gateway with the transit gateway in the network hub account.

1. [Transit gateway Connect attachments](https://docs.aws.amazon.com/vpc/latest/tgw/tgw-connect.html) connect the transit gateway with the VPCs in the production, QA, and development accounts.
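
The following boto3 sketch illustrates steps 1–3 with placeholder connection, transit gateway, and prefix values; the ASNs match the tables later in this section. In practice, the transit VIF is created from the Direct Connect account, and the gateway association is created from the network hub account.

```
import boto3

dx = boto3.client("directconnect", region_name="us-east-1")  # assumed Region

# Create the Direct Connect gateway in the network hub account.
dx_gw = dx.create_direct_connect_gateway(
    directConnectGatewayName="vrf-extension-dxgw",
    amazonSideAsn=64601,
)["directConnectGateway"]

# Create a transit VIF on an existing Direct Connect connection and attach it
# to the Direct Connect gateway. Connection ID, VLAN, and MTU are placeholders.
dx.create_transit_virtual_interface(
    connectionId="dxcon-0123456789abcdef0",
    newTransitVirtualInterface={
        "virtualInterfaceName": "router-01-transit-vif",
        "vlan": 60,
        "asn": 65534,  # on-premises router ASN
        "mtu": 8500,
        "directConnectGatewayId": dx_gw["directConnectGatewayId"],
    },
)

# Associate the Direct Connect gateway with the transit gateway, advertising
# only the allowed prefixes toward the on-premises network.
dx.create_direct_connect_gateway_association(
    directConnectGatewayId=dx_gw["directConnectGatewayId"],
    gatewayId="tgw-0123456789abcdef0",  # transit gateway ID (placeholder)
    addAllowedPrefixesToDirectConnectGateway=[{"cidr": "10.100.254.0/24"}],
)
```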

*Transit VIF architecture*

The following diagram shows the configuration details for the transit VIFs. This sample architecture uses a VLAN for the tunnel source, but you could also use a loopback.

![\[Configuration details for the transit VIF connections between the routers and AWS Direct Connect\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/db17e177-6c94-4d81-ab39-0923ecab2f1b/images/e88d2546-61ef-4531-972b-089cdf44ed67.png)


The following are the configuration details, such as autonomous system numbers (ASNs), for the transit VIFs.


| Resource | Item | Detail | 
| --- |--- |--- |
| router-01 | ASN | 65534 | 
| router-02 | ASN | 65534 | 
| router-03 | ASN | 65534 | 
| router-04 | ASN | 65534 | 
| Direct Connect gateway | ASN | 64601 | 
| Transit gateway | ASN | 64600 | 
| Transit gateway | CIDR block | 10.100.254.0/24 | 

*Transit gateway Connect architecture*

The following diagram and tables describe how to configure a single VRF through a transit gateway Connect attachment. For additional VRFs, assign unique tunnel IDs, transit gateway GRE IP addresses, and BGP inside CIDR blocks. The peer GRE IP address matches the router peer IP address from the transit VIF.

![\[Configuration details for the GRE tunnels between the routers and the transit gateway\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/db17e177-6c94-4d81-ab39-0923ecab2f1b/images/e58278e1-f3b4-442d-95d9-1dafab4aa5ac.png)


The following table contains router configuration details.


| Router | Tunnel | IP address | Source | Destination | 
| --- |--- |--- |--- |--- |
| router-01 | Tunnel 1 | 169.254.101.17 | VLAN 60 (169.254.100.1) | 10.100.254.1 | 
| router-02 | Tunnel 11 | 169.254.101.81 | VLAN 61 (169.254.100.5) | 10.100.254.11 | 
| router-03 | Tunnel 21 | 169.254.101.145 | VLAN 62 (169.254.100.9) | 10.100.254.21 | 
| router-04 | Tunnel 31 | 169.254.101.209 | VLAN 63 (169.254.100.13) | 10.100.254.31 | 

The following table contains transit gateway configuration details. A scripted example for Tunnel 1 follows the table.


| Tunnel | Transit gateway GRE IP address | Peer GRE IP address | BGP inside CIDR blocks | 
| --- |--- |--- |--- |
| Tunnel 1 | 10.100.254.1 | VLAN 60 (169.254.100.1) | 169.254.101.16/29 | 
| Tunnel 11 | 10.100.254.11 | VLAN 61 (169.254.100.5) | 169.254.101.80/29 | 
| Tunnel 21 | 10.100.254.21 | VLAN 62 (169.254.100.9) | 169.254.101.144/29 | 
| Tunnel 31 | 10.100.254.31 | VLAN 63 (169.254.100.13) | 169.254.101.208/29 | 
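
As a concrete illustration, the following boto3 sketch creates the Connect peer for Tunnel 1 by using the values from the preceding tables; the Connect attachment ID is a placeholder.

```
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed Region

# Create the Transit Gateway Connect peer for Tunnel 1 (router-01).
ec2.create_transit_gateway_connect_peer(
    TransitGatewayAttachmentId="tgw-attach-0123456789abcdef0",  # Connect attachment (placeholder)
    TransitGatewayAddress="10.100.254.1",    # GRE address from the transit gateway CIDR block
    PeerAddress="169.254.100.1",             # router-01 transit VIF address (VLAN 60)
    InsideCidrBlocks=["169.254.101.16/29"],  # BGP inside CIDR block for Tunnel 1
    BgpOptions={"PeerAsn": 65534},           # on-premises router ASN
)
```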

**Deployment**

The [Epics](#extend-vrfs-to-aws-by-using-aws-transit-gateway-connect-epics) section describes how to deploy a sample configuration for a single VRF across multiple customer routers. After steps 1–5 are complete, you can repeat steps 6–7 to create new transit gateway Connect attachments for every new VRF that you’re extending into AWS (a sketch of the transit gateway steps follows the list):

1. Create the transit gateway.

1. Create a Transit Gateway route table for each VRF.

1. Create the transit virtual interfaces.

1. Create the Direct Connect gateway.

1. Create the Direct Connect gateway virtual interface and gateway associations with allowed prefixes.

1. Create the transit gateway Connect attachment.

1. Create the Transit Gateway Connect peers.

1. Associate the transit gateway Connect attachment with the route table.

1. Advertise routes to the routers.
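
The following boto3 sketch outlines the transit gateway side of these steps (1, 2, 6, and 8) for a single VRF. The attachment ID is a placeholder, and the Direct Connect resources and Connect peers are created as shown earlier in this section.

```
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed Region

# Step 1: create the transit gateway with a CIDR block for the GRE tunnel addresses.
tgw = ec2.create_transit_gateway(
    Description="VRF extension transit gateway",
    Options={
        "AmazonSideAsn": 64600,
        "TransitGatewayCidrBlocks": ["10.100.254.0/24"],
        "DefaultRouteTableAssociation": "disable",  # each VRF gets a dedicated route table
        "DefaultRouteTablePropagation": "disable",
    },
)["TransitGateway"]

# Step 2: create a dedicated route table for the production VRF.
prod_rt = ec2.create_transit_gateway_route_table(
    TransitGatewayId=tgw["TransitGatewayId"],
)["TransitGatewayRouteTable"]

# Step 6: create the Connect attachment on top of the Direct Connect gateway attachment.
connect = ec2.create_transit_gateway_connect(
    TransportTransitGatewayAttachmentId="tgw-attach-0123456789abcdef0",  # placeholder
    Options={"Protocol": "gre"},
)["TransitGatewayConnect"]

# Step 8: associate the Connect attachment with the production VRF route table.
ec2.associate_transit_gateway_route_table(
    TransitGatewayRouteTableId=prod_rt["TransitGatewayRouteTableId"],
    TransitGatewayAttachmentId=connect["TransitGatewayAttachmentId"],
)
```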

## Tools
<a name="extend-vrfs-to-aws-by-using-aws-transit-gateway-connect-tools"></a>

**AWS services**
+ [AWS Direct Connect](https://docs.aws.amazon.com/directconnect/latest/UserGuide/Welcome.html) links your internal network to a Direct Connect location over a standard Ethernet fiber-optic cable. With this connection, you can create virtual interfaces directly to public AWS services while bypassing internet service providers in your network path.
+ [AWS Transit Gateway](https://docs.aws.amazon.com/vpc/latest/tgw/what-is-transit-gateway.html) is a central hub that connects virtual private clouds (VPCs) and on-premises networks.
+ [Amazon Virtual Private Cloud (Amazon VPC)](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html) helps you launch AWS resources into a virtual network that you’ve defined. This virtual network resembles a traditional network that you’d operate in your own data center, with the benefits of using the scalable infrastructure of AWS.

## Epics
<a name="extend-vrfs-to-aws-by-using-aws-transit-gateway-connect-epics"></a>

### Plan the architecture
<a name="plan-the-architecture"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create custom architecture diagrams. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/extend-vrfs-to-aws-by-using-aws-transit-gateway-connect.html) | Cloud architect, Network administrator | 

### Create the Transit Gateway resources
<a name="create-the-transit-gateway-resources"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the transit gateway. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/extend-vrfs-to-aws-by-using-aws-transit-gateway-connect.html) | Network administrator, Cloud architect | 
| Create the transit gateway route table. | Follow the instructions in [Create a transit gateway route table](https://docs.aws.amazon.com/vpc/latest/tgw/tgw-route-tables.html#create-tgw-route-table). Note the following for this pattern:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/extend-vrfs-to-aws-by-using-aws-transit-gateway-connect.html) | Cloud architect, Network administrator | 

### Create the transit virtual interfaces
<a name="create-the-transit-virtual-interfaces"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the transit virtual interfaces. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/extend-vrfs-to-aws-by-using-aws-transit-gateway-connect.html) | Cloud architect, Network administrator | 

### Create the Direct Connect resources
<a name="create-the-direct-connect-resources"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a Direct Connect gateway. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/extend-vrfs-to-aws-by-using-aws-transit-gateway-connect.html) | Cloud architect, Network administrator | 
| Attach the Direct Connect gateway to the transit VIFs. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/extend-vrfs-to-aws-by-using-aws-transit-gateway-connect.html) | Cloud architect, Network administrator | 
| Create the Direct Connect gateway associations with allowed prefixes. | In the network hub account, follow the instructions in [To associate a transit gateway](https://docs.aws.amazon.com/directconnect/latest/UserGuide/direct-connect-transit-gateways.html#associate-tgw-with-direct-connect-gateway). Note the following for this pattern:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/extend-vrfs-to-aws-by-using-aws-transit-gateway-connect.html)Creating this association automatically creates a Transit Gateway attachment that has a Direct Connect Gateway resource type. This attachment does not need to be associated with a transit gateway route table. | Cloud architect, Network administrator | 
| Create the transit gateway Connect attachment. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/extend-vrfs-to-aws-by-using-aws-transit-gateway-connect.html) | Cloud architect, Network administrator | 
| Create the Transit Gateway Connect peers. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/extend-vrfs-to-aws-by-using-aws-transit-gateway-connect.html) | Cloud architect, Network administrator | 

### Advertise routes to the routers
<a name="advertise-routes-to-the-routers"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Advertise the routes. | Associate the new transit gateway Connect attachment with the route table you created previously for this VRF. For example, associate the production transit gateway Connect attachment with the `Production-VRF` route table.Create a static route for the prefix that is advertised to the routers.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/extend-vrfs-to-aws-by-using-aws-transit-gateway-connect.html) | Network administrator, Cloud architect | 

## Related resources
<a name="extend-vrfs-to-aws-by-using-aws-transit-gateway-connect-resources"></a>

**AWS documentation**
+ Direct Connect documentation
  + [Working with Direct Connect gateways](https://docs.aws.amazon.com/directconnect/latest/UserGuide/direct-connect-gateways.html)
  + [Transit gateway associations](https://docs.aws.amazon.com/directconnect/latest/UserGuide/direct-connect-transit-gateways.html)
  + [AWS Direct Connect virtual interfaces](https://docs.aws.amazon.com/directconnect/latest/UserGuide/WorkingWithVirtualInterfaces.html)
+ Transit Gateway documentation
  + [Working with transit gateways](https://docs.aws.amazon.com/vpc/latest/tgw/working-with-transit-gateways.html)
  + [Transit gateway attachments to a Direct Connect gateway](https://docs.aws.amazon.com/vpc/latest/tgw/tgw-dcg-attachments.html)
  + [Transit gateway Connect attachments and Transit Gateway Connect peers](https://docs.aws.amazon.com/vpc/latest/tgw/tgw-connect.html)
  + [Create a transit gateway Connect attachment](https://docs.aws.amazon.com/vpc/latest/tgw/tgw-connect.html#create-tgw-connect-attachment)

**AWS blog posts**
+ [Segmenting hybrid networks with AWS Transit Gateway connect](https://aws.amazon.com/blogs/networking-and-content-delivery/segmenting-hybrid-networks-with-aws-transit-gateway-connect/)
+ [Using AWS Transit Gateway connect to extend VRFs and increase IP prefix advertisement](https://aws.amazon.com/blogs/networking-and-content-delivery/using-aws-transit-gateway-connect-to-extend-vrfs-and-increase-ip-prefix-advertisement/)

## Attachments
<a name="attachments-db17e177-6c94-4d81-ab39-0923ecab2f1b"></a>

To access additional content that is associated with this document, unzip the following file: [attachment.zip](samples/p-attach/db17e177-6c94-4d81-ab39-0923ecab2f1b/attachments/attachment.zip)

# Get Amazon SNS notifications when the key state of an AWS KMS key changes
<a name="get-amazon-sns-notifications-when-the-key-state-of-an-aws-kms-key-changes"></a>

*Shubham Harsora, Aromal Raj Jayarajan, and Navdeep Pareek, Amazon Web Services*

## Summary
<a name="get-amazon-sns-notifications-when-the-key-state-of-an-aws-kms-key-changes-summary"></a>

The data and metadata associated with an AWS Key Management Service (AWS KMS) key are lost when that key is deleted. The deletion is irreversible, and you can't recover the lost data, including any data that was encrypted under that key. You can prevent data loss by setting up a notification system that alerts you when the [key state](https://docs.aws.amazon.com/kms/latest/developerguide/key-state.html#key-state-cmk-type) of one of your AWS KMS keys changes.

This pattern shows you how to monitor status changes to AWS KMS keys by using Amazon EventBridge and Amazon Simple Notification Service (Amazon SNS) to issue automated notifications whenever the key state of an AWS KMS key changes to `Disabled` or `PendingDeletion`. For example, if a user tries to disable an AWS KMS key or schedule its deletion, you will receive an email notification with details about the attempted status change.

## Prerequisites and limitations
<a name="get-amazon-sns-notifications-when-the-key-state-of-an-aws-kms-key-changes-prereqs"></a>

**Prerequisites**
+ An active AWS account with an AWS Identity and Access Management (IAM) user
+ An [AWS KMS key](https://docs.aws.amazon.com/kms/latest/developerguide/getting-started.html)

## Architecture
<a name="get-amazon-sns-notifications-when-the-key-state-of-an-aws-kms-key-changes-architecture"></a>

**Technology stack**
+ Amazon EventBridge
+ AWS Key Management Service (AWS KMS)
+ Amazon Simple Notification Service (Amazon SNS)

**Target architecture**

The following diagram shows an architecture for building an automated monitoring and notification process for detecting any changes to the state of an AWS KMS key.

![\[Architecture for building an automated monitoring and notification process\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/2534df87-a6fd-4360-9b5d-4a8b1f533de3/images/0cb6a6b0-405b-4d26-ad04-2067176aa086.png)


The diagram shows the following workflow:

1. A user disables or schedules the deletion of an AWS KMS key.

1. An EventBridge rule matches the event that indicates the key state changed to `Disabled` or `PendingDeletion` (see the rule sketch after the following note).

1. The EventBridge rule invokes the Amazon SNS topic.

1. Amazon SNS sends an email notification message to the users.

**Note**  
You can customize the email message to meet your organization's needs. We recommend including information about the entities where the AWS KMS key is used. This can help users understand the impact of deleting the AWS KMS key. You can also schedule a reminder email notification that's sent one or two days before the AWS KMS key is deleted.
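
The CloudFormation template in the code repository creates the rule and the SNS target for you. As a rough boto3 sketch of the same wiring (not the repository's actual implementation), the rule below matches the CloudTrail API calls that move a key into the `Disabled` or `PendingDeletion` state. The topic ARN and rule name are placeholders, CloudTrail management events must be enabled, and the topic's access policy must allow EventBridge to publish to it.

```
import json
import boto3

events = boto3.client("events", region_name="us-east-1")  # assumed Region
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:kms-key-state-alerts"  # placeholder

# Match the CloudTrail API calls that disable a key or schedule its deletion.
pattern = {
    "source": ["aws.kms"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["kms.amazonaws.com"],
        "eventName": ["DisableKey", "ScheduleKeyDeletion"],
    },
}

events.put_rule(Name="kms-key-state-change", EventPattern=json.dumps(pattern))

# Send matching events to the SNS topic that emails the subscribers.
events.put_targets(
    Rule="kms-key-state-change",
    Targets=[{"Id": "notify-sns", "Arn": TOPIC_ARN}],
)
```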

**Automation and scale**

The AWS CloudFormation stack deploys all the necessary resources and services for this pattern to work. You can implement the pattern independently in a single account, or by using [AWS CloudFormation StackSets](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/what-is-cfnstacksets.html) for multiple independent accounts or [organizational units](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_ous.html) in AWS Organizations.

## Tools
<a name="get-amazon-sns-notifications-when-the-key-state-of-an-aws-kms-key-changes-tools"></a>
+ [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) helps you set up AWS resources, provision them quickly and consistently, and manage them throughout their lifecycle across AWS accounts and AWS Regions. The CloudFormation template for this pattern describes all the AWS resources that you want, and CloudFormation provisions and configures those resources for you.
+ [Amazon EventBridge](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-what-is.html) is a serverless event bus service that helps you connect your applications with real-time data from a variety of sources. EventBridge delivers a stream of real-time data from your own applications and AWS services, and it routes that data to targets such as AWS Lambda. EventBridge simplifies the process of building event-driven architectures.
+ [AWS Key Management Service (AWS KMS)](https://docs.aws.amazon.com/kms/latest/developerguide/overview.html) helps you create and control cryptographic keys to help protect your data.
+ [Amazon Simple Notification Service (Amazon SNS)](https://docs.aws.amazon.com/sns/latest/dg/welcome.html) helps you coordinate and manage the exchange of messages between publishers and clients, including web servers and email addresses.

**Code**

The code for this pattern is available in the GitHub [Monitor AWS KMS keys disable and scheduled deletion](https://github.com/aws-samples/aws-kms-deletion-notification) repository.

## Epics
<a name="get-amazon-sns-notifications-when-the-key-state-of-an-aws-kms-key-changes-epics"></a>

### Deploy the CloudFormation template
<a name="deploy-the-cloudformation-template"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clone the repository. | Clone the GitHub [Monitor AWS KMS keys disable and scheduled deletion](https://github.com/aws-samples/aws-kms-deletion-notification) repository to your local machine by running the following command: `git clone https://github.com/aws-samples/aws-kms-deletion-notification` | AWS administrator, Cloud architect | 
| Update the template's parameters. | In a code editor, open the `Alerting-KMS-Events.yaml` CloudFormation template that you cloned from the repository, and then update the following parameters:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/get-amazon-sns-notifications-when-the-key-state-of-an-aws-kms-key-changes.html) | AWS administrator, Cloud architect | 
| Deploy the CloudFormation template. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/get-amazon-sns-notifications-when-the-key-state-of-an-aws-kms-key-changes.html) | AWS administrator, Cloud architect | 

### Confirm the subscription
<a name="confirm-the-subscription"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Confirm the subscription email. | After the CloudFormation template successfully deploys, Amazon SNS sends a subscription confirmation message to the email address that you provided in the CloudFormation template.To receive notifications, you must confirm this email subscription. For more information, see [Confirm the subscription](https://docs.aws.amazon.com/sns/latest/dg/SendMessageToHttp.confirm.html) in the Amazon SNS Developer Guide. | AWS administrator, Cloud architect | 

### Test the subscription notification
<a name="test-the-subscription-notification"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Disable AWS KMS keys. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/get-amazon-sns-notifications-when-the-key-state-of-an-aws-kms-key-changes.html) | AWS administrator | 
| Validate the subscription. | Confirm that you received the Amazon SNS notification email. | AWS administrator | 

### Clean up resources
<a name="clean-up-resources"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Delete the CloudFormation stack. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/get-amazon-sns-notifications-when-the-key-state-of-an-aws-kms-key-changes.html) | AWS administrator | 

## Related resources
<a name="get-amazon-sns-notifications-when-the-key-state-of-an-aws-kms-key-changes-resources"></a>
+ [AWS CloudFormation](https://aws.amazon.com/cloudformation/) (AWS documentation)
+ [Creating a stack on the AWS CloudFormation console](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-console-create-stack.html) (AWS CloudFormation documentation)
+ [Building event-driven architectures on AWS](https://catalog.us-east-1.prod.workshops.aws/workshops/63320e83-6abc-493d-83d8-f822584fb3cb/en-US) (AWS Workshop Studio documentation)
+ [AWS Key Management Service Best Practices](https://d1.awsstatic.com/whitepapers/aws-kms-best-practices.pdf) (AWS Whitepaper)
+ [Security best practices for AWS Key Management Service](https://docs.aws.amazon.com/kms/latest/developerguide/best-practices.html) (AWS KMS Developer Guide)

## Additional information
<a name="get-amazon-sns-notifications-when-the-key-state-of-an-aws-kms-key-changes-additional"></a>

Amazon SNS provides in-transit encryption by default. To align with security best practices, you can also enable server-side encryption for Amazon SNS by using an AWS KMS customer managed key.
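
For example, the following boto3 sketch (placeholder topic ARN and key alias) enables server-side encryption on the notification topic by setting its `KmsMasterKeyId` attribute.

```
import boto3

sns = boto3.client("sns", region_name="us-east-1")  # assumed Region

# Enable server-side encryption on the notification topic with a
# customer managed AWS KMS key (the alias is a placeholder).
sns.set_topic_attributes(
    TopicArn="arn:aws:sns:us-east-1:123456789012:kms-key-state-alerts",
    AttributeName="KmsMasterKeyId",
    AttributeValue="alias/sns-notifications",
)
```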

# Preserve routable IP space in multi-account VPC designs for non-workload subnets
<a name="preserve-routable-ip-space-in-multi-account-vpc-designs-for-non-workload-subnets"></a>

*Adam Spicer, Amazon Web Services*

## Summary
<a name="preserve-routable-ip-space-in-multi-account-vpc-designs-for-non-workload-subnets-summary"></a>

Amazon Web Services (AWS) has published best practices that recommend using dedicated subnets in a virtual private cloud (VPC) for both [transit gateway attachments](https://docs.aws.amazon.com/vpc/latest/tgw/tgw-best-design-practices.html) and [Gateway Load Balancer endpoints](https://docs.aws.amazon.com/elasticloadbalancing/latest/gateway/getting-started.html) (to support [AWS Network Firewall](https://docs.aws.amazon.com/network-firewall/latest/developerguide/firewall-high-level-steps.html) or third-party appliances). These subnets are used to contain elastic network interfaces for these services. If you use both AWS Transit Gateway and a Gateway Load Balancer, two subnets are created in each Availability Zone for the VPC. Because of the way VPCs are designed, these extra subnets [can’t be smaller than a /28 mask](https://docs.aws.amazon.com/vpc/latest/userguide/configure-subnets.html#subnet-sizing) and can consume precious routable IP space that could otherwise be used for routable workloads. This pattern demonstrates how you can use a secondary, non-routable Classless Inter-Domain Routing (CIDR) range for these dedicated subnets to help preserve routable IP space.

## Prerequisites and limitations
<a name="preserve-routable-ip-space-in-multi-account-vpc-designs-for-non-workload-subnets-prereqs"></a>

**Prerequisites **
+ [Multi-VPC strategy](https://docs.aws.amazon.com/whitepapers/latest/building-scalable-secure-multi-vpc-network-infrastructure/welcome.html) for routable IP space
+ A non-routable CIDR range for the services you’re using ([transit gateway attachments](https://docs.aws.amazon.com/vpc/latest/tgw/tgw-best-design-practices.html) and [Gateway Load Balancer](https://aws.amazon.com/blogs/apn/centralized-traffic-inspection-with-gateway-load-balancer-on-aws/) or [Network Firewall endpoints](https://aws.amazon.com/blogs/networking-and-content-delivery/deployment-models-for-aws-network-firewall/))

## Architecture
<a name="preserve-routable-ip-space-in-multi-account-vpc-designs-for-non-workload-subnets-architecture"></a>

**Target architecture **

This pattern includes two reference architectures: one architecture has subnets for transit gateway (TGW) attachments and a Gateway Load Balancer endpoint (GWLBe), and the second architecture has subnets for TGW attachments only.

**Architecture 1 ‒ TGW-attached VPC with ingress routing to an appliance**

The following diagram represents a reference architecture for a VPC that spans two Availability Zones. On ingress, the VPC uses an [ingress routing pattern](https://aws.amazon.com/blogs/aws/new-vpc-ingress-routing-simplifying-integration-of-third-party-appliances/) to direct traffic destined for the public subnet to a [bump-in-the-wire appliance](https://aws.amazon.com/blogs/networking-and-content-delivery/introducing-aws-gateway-load-balancer-supported-architecture-patterns/) for firewall inspection. A TGW attachment supports egress from the private subnets to a separate VPC.

This pattern uses a non-routable CIDR range for the TGW attachment subnet and the GWLBe subnet. In the TGW routing table, this non-routable CIDR is covered by static blackhole routes that are more specific than the CIDR itself. If the VPC routes were propagated to the TGW routing table, these more specific blackhole routes would take precedence.

In this example, the /23 routable CIDR is divided up and fully allocated to routable subnets.

![\[TGW-attached VPC with ingress routing to an appliance.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/0171d91d-ab1e-41ca-a425-1e6e610080e1/images/adad1c83-cdc2-4c5e-aa35-f47fc31af384.png)


**Architecture 2 – TGW-attached VPC**

The following diagram represents another reference architecture for a VPC that spans two Availability Zones. A TGW attachment supports outbound traffic (egress) from the private subnets to a separate VPC. This architecture uses a non-routable CIDR range only for the TGW attachment subnet. In the TGW routing table, this non-routable CIDR is covered by static blackhole routes that are more specific than the CIDR itself. If the VPC routes were propagated to the TGW routing table, these more specific blackhole routes would take precedence.

In this example, the /23 routable CIDR is divided up and fully allocated to routable subnets. 

![\[VPC spans 2 availability zones with TGW attachment for egress from private subnets to separate VPC.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/0171d91d-ab1e-41ca-a425-1e6e610080e1/images/31a2a241-5be6-425e-93e9-5ff7ffeca3a9.png)


## Tools
<a name="preserve-routable-ip-space-in-multi-account-vpc-designs-for-non-workload-subnets-tools"></a>

**AWS services and resources**
+ [Amazon Virtual Private Cloud (Amazon VPC)](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html) helps you launch AWS resources into a virtual network that you’ve defined. This virtual network resembles a traditional network that you’d operate in your own data center, with the benefits of using the scalable infrastructure of AWS. In this pattern, VPC secondary CIDRs are used to preserve routable IP space in workload CIDRs.
+ [Internet gateway ingress routing](https://aws.amazon.com/blogs/aws/new-vpc-ingress-routing-simplifying-integration-of-third-party-appliances/) (edge associations) can be used along with Gateway Load Balancer endpoints for dedicated non-routable subnets.
+ [AWS Transit Gateway](https://docs.aws.amazon.com/vpc/latest/tgw/what-is-transit-gateway.html) is a central hub that connects VPCs and on-premises networks. In this pattern, VPCs are centrally attached to a transit gateway, and the transit gateway attachments are in a dedicated non-routable subnet.
+ [Gateway Load Balancers](https://docs.aws.amazon.com/elasticloadbalancing/latest/gateway/introduction.html) help you deploy, scale, and manage virtual appliances, such as firewalls, intrusion detection and prevention systems, and deep packet inspection systems. The gateway serves as a single entry and exit point for all traffic. In this pattern, endpoints for a Gateway Load Balancer can be used in a dedicated non-routable subnet.
+ [AWS Network Firewall](https://docs.aws.amazon.com/network-firewall/latest/developerguide/what-is-aws-network-firewall.html) is a stateful, managed network firewall and intrusion detection and prevention service for VPCs in the AWS Cloud. In this pattern, firewall endpoints can be placed in a dedicated non-routable subnet.

**Code repository**

A runbook and AWS CloudFormation templates for this pattern are available in the GitHub [Non-Routable Secondary CIDR Patterns](https://github.com/aws-samples/non-routable-secondary-vpc-cidr-patterns/) repository. You can use the sample files to set up a working lab in your environment.

## Best practices
<a name="preserve-routable-ip-space-in-multi-account-vpc-designs-for-non-workload-subnets-best-practices"></a>

**AWS Transit Gateway**
+ Use a separate subnet for each transit gateway VPC attachment.
+ Allocate a /28 subnet from the secondary non-routable CIDR range for the transit gateway attachment subnets.
+ In each transit gateway routing table, add a static, more specific route for the non-routable CIDR range as a blackhole.

**Gateway Load Balancer and ingress routing**
+ Use ingress routing to direct traffic from the internet to the Gateway Load Balancer endpoints.
+ Use a separate subnet for each Gateway Load Balancer endpoint.
+ Allocate a /28 subnet from the secondary non-routable CIDR range for the Gateway Load Balancer endpoint subnets.

## Epics
<a name="preserve-routable-ip-space-in-multi-account-vpc-designs-for-non-workload-subnets-epics"></a>

### Create VPCs
<a name="create-vpcs"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Determine non-routable CIDR range. | Determine a non-routable CIDR range that will be used for the transit gateway attachment subnet and (optionally) for any Gateway Load Balancer or Network Firewall endpoint subnets. This CIDR range will be used as the secondary CIDR for the VPC. It must **not be routable** from the VPC’s primary CIDR range or the larger network. | Cloud architect | 
| Determine routable CIDR ranges for VPCs. | Determine a set of routable CIDR ranges that will be used for your VPCs. This CIDR range will be used as the primary CIDR for your VPCs. | Cloud architect | 
| Create VPCs. | Create your VPCs and attach them to the transit gateway. Each VPC should have a primary CIDR range that is routable and a secondary CIDR range that is non-routable, based on the ranges you determined in the previous two steps. | Cloud architect | 
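
The following boto3 sketch is a minimal illustration of these tasks, with placeholder CIDR ranges and Availability Zones; the 100.64.0.0/26 range stands in for your non-routable secondary CIDR.

```
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed Region

# Create the VPC with a routable primary CIDR (placeholder range).
vpc_id = ec2.create_vpc(CidrBlock="10.1.0.0/23")["Vpc"]["VpcId"]

# Associate a non-routable secondary CIDR, for example from 100.64.0.0/10 (RFC 6598).
ec2.associate_vpc_cidr_block(VpcId=vpc_id, CidrBlock="100.64.0.0/26")

# Carve /28 subnets for the transit gateway attachments out of the secondary
# CIDR, one per Availability Zone.
for az, cidr in [("us-east-1a", "100.64.0.0/28"), ("us-east-1b", "100.64.0.16/28")]:
    ec2.create_subnet(VpcId=vpc_id, AvailabilityZone=az, CidrBlock=cidr)
```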

### Configure Transit Gateway blackhole routes
<a name="configure-transit-gateway-blackhole-routes"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create more specific non-routable CIDRs as blackholes. | Each transit gateway routing table needs to have a set of blackhole routes created for the non-routable CIDRs. These are configured to ensure that any traffic from the secondary VPC CIDR remains non-routable and doesn't leak into the larger network. These routes should be more specific than the non-routable CIDR that is set as the secondary CIDR on the VPC. For example, if the secondary non-routable CIDR is 100.64.0.0/26, the blackhole routes in the transit gateway routing table should be 100.64.0.0/27 and 100.64.0.32/27. | Cloud architect | 
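
Using the example values from the table, a minimal boto3 sketch of the blackhole routes might look like the following (the route table ID is a placeholder).

```
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")      # assumed Region
ROUTE_TABLE_ID = "tgw-rtb-0123456789abcdef0"            # transit gateway route table (placeholder)

# Add static blackhole routes that are more specific than the 100.64.0.0/26
# secondary CIDR so that the non-routable range can never leak into the larger
# network, even if the VPC CIDR is propagated to this route table.
for cidr in ["100.64.0.0/27", "100.64.0.32/27"]:
    ec2.create_transit_gateway_route(
        DestinationCidrBlock=cidr,
        TransitGatewayRouteTableId=ROUTE_TABLE_ID,
        Blackhole=True,
    )
```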

## Related resources
<a name="preserve-routable-ip-space-in-multi-account-vpc-designs-for-non-workload-subnets-resources"></a>
+ [Best practices for deploying Gateway Load Balancer](https://aws.amazon.com/blogs/networking-and-content-delivery/best-practices-for-deploying-gateway-load-balancer/)
+ [Distributed Inspection Architectures with Gateway Load Balancer](https://d1.awsstatic.com/architecture-diagrams/ArchitectureDiagrams/distributed-inspection-architectures-gwlb-ra.pdf?did=wp_card&trk=wp_card)
+ [Networking Immersion Day ‒ Internet to VPC Firewall Lab](https://catalog.workshops.aws/networking/en-US/gwlb/lab2-internettovpc)
+ [Transit gateway design best practices](https://docs.aws.amazon.com/vpc/latest/tgw/tgw-best-design-practices.html)

## Additional information
<a name="preserve-routable-ip-space-in-multi-account-vpc-designs-for-non-workload-subnets-additional"></a>

The non-routable secondary CIDR range can also be useful when working with larger scaled container deployments that require a large set of IP addresses. You can use this pattern with a private NAT Gateway to use a non-routable subnet to host your container deployments. For more information, see the blog post [How to solve Private IP exhaustion with Private NAT Solution](https://aws.amazon.com/blogs/networking-and-content-delivery/how-to-solve-private-ip-exhaustion-with-private-nat-solution/).

# Provision a Terraform product in AWS Service Catalog by using a code repository
<a name="provision-a-terraform-product-in-aws-service-catalog-by-using-a-code-repository"></a>

*Dr. Rahul Sharad Gaikwad and Tamilselvan P, Amazon Web Services*

## Summary
<a name="provision-a-terraform-product-in-aws-service-catalog-by-using-a-code-repository-summary"></a>

AWS Service Catalog supports self-service provisioning with governance for your [HashiCorp Terraform](https://developer.hashicorp.com/terraform/tutorials/aws-get-started) configurations. If you use Terraform, you can use Service Catalog as the single tool to organize, govern, and distribute your Terraform configurations within AWS at scale. You get access to key Service Catalog features, including cataloging of standardized and pre-approved infrastructure as code (IaC) templates, access control, cloud resource provisioning with least-privilege access, versioning, sharing to thousands of AWS accounts, and tagging. End users, such as engineers, database administrators, and data scientists, see a list of the products and versions they have access to, and they can deploy them through a single action.

This pattern helps you deploy AWS resources by using Terraform code. The Terraform code in the GitHub repository is accessed through Service Catalog. Using this approach, you integrate the products with your existing Terraform workflows. Administrators can create Service Catalog portfolios and add AWS Launch Wizard products to them by using Terraform.

The following are the benefits of this solution:
+ Because of the rollback feature in Service Catalog, if any issues occur during deployment, you can revert the product to a previous version.
+ You can easily identify the differences between product versions. This helps you resolve issues during deployment.
+ You can configure a repository connection in Service Catalog, such as to GitHub or GitLab. You can make product changes directly through the repository.

For information about the overall benefits of AWS Service Catalog, see [What is Service Catalog](https://docs.aws.amazon.com/servicecatalog/latest/adminguide/introduction.html).

## Prerequisites and limitations
<a name="provision-a-terraform-product-in-aws-service-catalog-by-using-a-code-repository-prereqs"></a>

**Prerequisites**
+ An active AWS account.
+ A GitHub, Bitbucket, or other repository that contains Terraform configuration files in ZIP format.
+ AWS Serverless Application Model Command Line Interface (AWS SAM CLI), [installed](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/install-sam-cli.html).
+ AWS Command Line Interface (AWS CLI), [installed](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) and [configured](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html).
+ Go, [installed](https://go.dev/doc/install).
+ Python version 3.9, [installed](https://www.python.org/downloads/release/python-3913/). The AWS SAM CLI requires this version of Python.
+ Permissions to write and run AWS Lambda functions and permissions to access and manage Service Catalog products and portfolios.

## Architecture
<a name="provision-a-terraform-product-in-aws-service-catalog-by-using-a-code-repository-architecture"></a>

![\[Architecture diagram of provisioning a Terraform product in Service Catalog from a code repo\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/7d0d76e8-9485-4b3f-915f-481b6a7cdcd9/images/e83fa44a-4ca6-4438-a0d1-99f09a3541bb.png)


The diagram shows the following workflow:

1. When a Terraform configuration is ready, a developer creates a .zip file that contains all of the Terraform code. The developer uploads the .zip file into the code repository that is connected to Service Catalog.

1. An administrator associates the Terraform product to a portfolio in Service Catalog. The administrator also creates a launch constraint that allows end users to provision the product.

1. In Service Catalog, end users launch AWS resources by using the Terraform configuration. They can choose which product version to deploy.
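
The Epics below use the Service Catalog console, but the portfolio wiring in step 2 can also be scripted. The following boto3 sketch is a minimal illustration, assuming a Terraform product already exists in Service Catalog; the product ID, role ARNs, and names are placeholders.

```
import json
import boto3

sc = boto3.client("servicecatalog", region_name="us-east-1")  # assumed Region

PRODUCT_ID = "prod-abcdefghijklm"                              # existing Terraform product (placeholder)
LAUNCH_ROLE_ARN = "arn:aws:iam::123456789012:role/TerraformLaunchRole"  # placeholder

# Create a portfolio and add the Terraform product to it.
portfolio = sc.create_portfolio(
    DisplayName="Terraform self-service",
    ProviderName="Platform team",
)["PortfolioDetail"]

sc.associate_product_with_portfolio(
    ProductId=PRODUCT_ID,
    PortfolioId=portfolio["Id"],
)

# Launch constraint: the role that Service Catalog assumes to provision the product.
sc.create_constraint(
    PortfolioId=portfolio["Id"],
    ProductId=PRODUCT_ID,
    Type="LAUNCH",
    Parameters=json.dumps({"RoleArn": LAUNCH_ROLE_ARN}),
)

# Grant an end-user role access to everything in the portfolio.
sc.associate_principal_with_portfolio(
    PortfolioId=portfolio["Id"],
    PrincipalARN="arn:aws:iam::123456789012:role/EndUserRole",  # placeholder
    PrincipalType="IAM",
)
```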

## Tools
<a name="provision-a-terraform-product-in-aws-service-catalog-by-using-a-code-repository-tools"></a>

**AWS services**
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a compute service that helps you run code without needing to provision or manage servers. It runs your code only when needed and scales automatically, so you pay only for the compute time that you use.
+ [AWS Service Catalog](https://docs.aws.amazon.com/servicecatalog/latest/adminguide/introduction.html) helps you centrally manage catalogs of IT services that are approved for AWS. End users can quickly deploy only the approved IT services they need, following the constraints set by your organization.

**Other services**
+ [Go](https://go.dev/doc/install) is an open source programming language that Google supports.
+ [Python](https://www.python.org/) is a general-purpose computer programming language.

**Code repository**

If you require sample Terraform configurations that you can deploy through Service Catalog, you can use the configurations in the GitHub [Amazon Macie Organization Setup Using Terraform](https://github.com/aws-samples/aws-macie-customization-terraform-samples) repository. Use of the code samples in this repository is not required.

## Best practices
<a name="provision-a-terraform-product-in-aws-service-catalog-by-using-a-code-repository-best-practices"></a>
+ Instead of providing the values for variables in the Terraform configuration file (`terraform.tfvars`), configure the variable values when you launch the product through Service Catalog.
+ Grant access to the portfolio only to specific users or administrators.
+ Follow the principle of least privilege and grant the minimum permissions required to perform a task. For more information, see [Grant least privilege](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html#grant-least-priv) and [Security best practices](https://docs.aws.amazon.com/IAM/latest/UserGuide/IAMBestPracticesAndUseCases.html) in the AWS Identity and Access Management (IAM) documentation.

## Epics
<a name="provision-a-terraform-product-in-aws-service-catalog-by-using-a-code-repository-epics"></a>

### Set up your local workstation
<a name="set-up-your-local-workstation"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| (Optional) Install Docker. | If you want to run the AWS Lambda functions in your development environment, install Docker. For instructions, see [Install Docker Engine](https://docs.docker.com/engine/install/) in the Docker documentation. | DevOps engineer | 
| Install the AWS Service Catalog Engine for Terraform. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/provision-a-terraform-product-in-aws-service-catalog-by-using-a-code-repository.html) | DevOps engineer, AWS administrator | 

### Connect the GitHub repository
<a name="connect-the-github-repository"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a connection to the GitHub repository. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/provision-a-terraform-product-in-aws-service-catalog-by-using-a-code-repository.html) | AWS administrator | 

### Create a Terraform product in Service Catalog
<a name="create-a-terraform-product-in-service-catalog"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the Service Catalog product. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/provision-a-terraform-product-in-aws-service-catalog-by-using-a-code-repository.html) | AWS administrator | 
| Create a portfolio. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/provision-a-terraform-product-in-aws-service-catalog-by-using-a-code-repository.html) | AWS administrator | 
| Add the Terraform product to the portfolio. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/provision-a-terraform-product-in-aws-service-catalog-by-using-a-code-repository.html) | AWS administrator | 
| Create the access policy. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/provision-a-terraform-product-in-aws-service-catalog-by-using-a-code-repository.html) | AWS administrator | 
| Create a custom trust policy. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/provision-a-terraform-product-in-aws-service-catalog-by-using-a-code-repository.html) | AWS administrator | 
| Add a launch constraint to the Service Catalog product. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/provision-a-terraform-product-in-aws-service-catalog-by-using-a-code-repository.html) | AWS administrator | 
| Grant access to the product. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/provision-a-terraform-product-in-aws-service-catalog-by-using-a-code-repository.html) | AWS administrator | 
| Launch the product. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/provision-a-terraform-product-in-aws-service-catalog-by-using-a-code-repository.html) | DevOps engineer | 

### Verify the deployment
<a name="verify-the-deployment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Validate the deployment. | There are two AWS Step Functions state machines for the Service Catalog provisioning workflow:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/provision-a-terraform-product-in-aws-service-catalog-by-using-a-code-repository.html)You check the logs for the `ManageProvisionedProductStateMachine` state machine to confirm that the product was provisioned.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/provision-a-terraform-product-in-aws-service-catalog-by-using-a-code-repository.html) | DevOps engineer | 

### Clean up infrastructure
<a name="clean-up-infrastructure"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Delete provisioned products. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/provision-a-terraform-product-in-aws-service-catalog-by-using-a-code-repository.html) | DevOps engineer | 
| Remove the AWS Service Catalog Engine for Terraform. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/provision-a-terraform-product-in-aws-service-catalog-by-using-a-code-repository.html) | AWS administrator | 

## Related resources
<a name="provision-a-terraform-product-in-aws-service-catalog-by-using-a-code-repository-resources"></a>

**AWS documentation**
+ [Getting started with a Terraform product](https://docs.aws.amazon.com/servicecatalog/latest/adminguide/getstarted-Terraform.html)

**Terraform documentation**
+ [Terraform installation](https://learn.hashicorp.com/tutorials/terraform/install-cli)
+ [Terraform backend configuration](https://developer.hashicorp.com/terraform/language/backend)
+ [Terraform AWS Provider documentation](https://registry.terraform.io/providers/hashicorp/aws/latest/docs)

## Additional information
<a name="provision-a-terraform-product-in-aws-service-catalog-by-using-a-code-repository-additional"></a>

**Access policy**

```
{
    "Version": "2012-10-17",		 	 	 
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "s3:ExistingObjectTag/servicecatalog:provisioning": "true"
                }
            }
        },
        {
            "Action": [
                "s3:CreateBucket*",
                "s3:DeleteBucket*",
                "s3:Get*",
                "s3:List*",
                "s3:PutBucketTagging"
            ],
            "Resource": "arn:aws:s3:::*",
            "Effect": "Allow"
        },
        {
            "Action": [
                "resource-groups:CreateGroup",
                "resource-groups:ListGroupResources",
                "resource-groups:DeleteGroup",
                "resource-groups:Tag"
            ],
            "Resource": "*",
            "Effect": "Allow"
        },
        {
            "Action": [
                "tag:GetResources",
                "tag:GetTagKeys",
                "tag:GetTagValues",
                "tag:TagResources",
                "tag:UntagResources"
            ],
            "Resource": "*",
            "Effect": "Allow"
        }
    ]
}
```

**Trust policy**

```
{
    "Version": "2012-10-17",		 	 	 
    "Statement": [
        {
            "Sid": "GivePermissionsToServiceCatalog",
            "Effect": "Allow",
            "Principal": {
                "Service": "servicecatalog.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        },
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::account_id:root"
            },
            "Action": "sts:AssumeRole",
            "Condition": {
                "StringLike": {
                    "aws:PrincipalArn": [
                        "arn:aws:iam::accounti_id:role/TerraformEngine/TerraformExecutionRole*",
                        "arn:aws:iam::accounti_id:role/TerraformEngine/ServiceCatalogExternalParameterParserRole*",
                        "arn:aws:iam::accounti_id:role/TerraformEngine/ServiceCatalogTerraformOSParameterParserRole*"
                    ]
                }
            }
        }
    ]
}
```

# Register multiple AWS accounts with a single email address by using Amazon SES
<a name="register-multiple-aws-accounts-with-a-single-email-address-by-using-amazon-ses"></a>

*Joe Wozniak and Shubhangi Vishwakarma, Amazon Web Services*

## Summary
<a name="register-multiple-aws-accounts-with-a-single-email-address-by-using-amazon-ses-summary"></a>

This pattern describes how you can decouple real email addresses from the email address that’s associated with an AWS account. AWS accounts require a unique email address to be provided at the time of account creation. In some organizations, the team that manages AWS accounts must manage many unique email addresses in coordination with their messaging team. This can be difficult for large organizations that manage many AWS accounts. Additionally, if your email system doesn’t allow *plus addressing* or *sub-addressing* as defined in [Sieve Email Filtering: Subaddress Extension (RFC 5233)](https://datatracker.ietf.org/doc/html/rfc5233), where a plus sign and an identifier are appended to the local part of the email address (for example, `admin+123456789123@example.com`), this pattern can help you overcome that limitation.

This pattern provides a unique email address vending solution that enables AWS account owners to associate one email address with multiple AWS accounts. The real email addresses of AWS account owners are then associated with these generated email addresses in a table. The solution handles all incoming email for the unique email accounts, looks up the owner of each account, and then forwards any received messages to the owner.  

## Prerequisites and limitations
<a name="register-multiple-aws-accounts-with-a-single-email-address-by-using-amazon-ses-prereqs"></a>

**Prerequisites**
+ Administrative access to an AWS account.
+ Access to a development environment. 
+ (Optional) Familiarity with AWS Cloud Development Kit (AWS CDK) workflows and the Python programming language will help you troubleshoot any issues or make modifications.

**Limitations**
+ The overall length of a vended email address is limited to 64 characters. For details, see [CreateAccount](https://docs.aws.amazon.com/organizations/latest/APIReference/API_CreateAccount.html) in the *AWS Organizations API reference*.

**Product versions**
+ Node.js version 22.x or later
+ Python 3.13 or later
+ Python packages **pip** and **virtualenv**
+ AWS CDK CLI version 2.1019.2 or later
+ Docker 20.10.x or later

## Architecture
<a name="register-multiple-aws-accounts-with-a-single-email-address-by-using-amazon-ses-architecture"></a>

**Target technology stack**
+ CloudFormation stack
+ AWS Lambda functions
+ Amazon Simple Email Service (Amazon SES) rule and rule set
+ AWS Identity and Access Management (IAM) roles and policies
+ Amazon Simple Storage Service (Amazon S3) bucket and bucket policy
+ AWS Key Management Service (AWS KMS) key and key policy
+ Amazon Simple Notification Service (Amazon SNS) topic and topic policy
+ Amazon DynamoDB table 

**Target architecture**

![\[Target architecture for registering multiple AWS accounts with a single email address\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/1be85b92-69e5-43b2-aeed-27b9509e145e/images/c7ae9d7a-d4e0-412e-97cb-0f3073e012e7.png)


This diagram shows two flows:
+ **Email address vending flow:** In the diagram, the email address vending flow (lower section) typically begins with an account vending solution or other outside automation, or it is invoked manually. The request calls a Lambda function with a payload that contains the required metadata. The function uses this information to generate a unique account name and email address, stores them in a DynamoDB table, and returns the values to the caller. These values can then be used to create a new AWS account (typically by using AWS Organizations). A command-line sketch of this flow follows this list.
+ **Email forwarding flow:** This flow is illustrated in the upper section of the previous diagram. When an AWS account is created by using an account email generated from the email address vending flow, AWS sends various emails, such as account registration confirmations and periodic notifications, to that email address. By following the steps in this pattern, you configure your AWS account with Amazon SES to receive email for the entire domain. The solution configures forwarding rules that allow Lambda to process all incoming email, check whether the `TO` address is in the DynamoDB table, and forward the message to the account owner's real email address. This process gives account owners the ability to associate multiple accounts with one email address.
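For example, you could exercise the vending flow from the command line after the stack is deployed. The following sketch uses hypothetical names for the vending Lambda function (`account-email-vend`), the DynamoDB table (`account-email-map`), and the payload fields; check the deployed stack outputs and the repository's sample event for the actual values.

```
# Invoke the email vending Lambda function with a sample request payload
# (function name, table name, and payload fields are placeholders; see the
# repository's sample_vend_request.json for the real schema).
aws lambda invoke \
    --function-name account-email-vend \
    --cli-binary-format raw-in-base64-out \
    --payload '{"AccountName": "workload-dev", "OwnerAddress": "owner@example.com"}' \
    vend-response.json

# Inspect the returned account name and generated email address
cat vend-response.json

# Confirm that the mapping was stored in the DynamoDB table
aws dynamodb scan --table-name account-email-map --max-items 5
```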

**Automation and scale**

This pattern uses the AWS CDK to fully automate the deployment. The solution uses AWS managed services that will (or can be configured to) scale automatically to meet your needs. The Lambda functions might require additional configuration to meet your scaling needs. For more information, see [Understanding Lambda function scaling](https://docs.aws.amazon.com/lambda/latest/dg/invocation-scaling.html) in the Lambda documentation.
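For example, if you need to guarantee or cap concurrency for the forwarding function, you can set reserved concurrency on it. The following command is a sketch; the function name is a placeholder for the function that the stack creates.

```
# Reserve concurrency for the email forwarding Lambda function
# (the function name is a placeholder; use the name from the deployed stack)
aws lambda put-function-concurrency \
    --function-name email-forwarding-function \
    --reserved-concurrent-executions 20
```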

## Tools
<a name="register-multiple-aws-accounts-with-a-single-email-address-by-using-amazon-ses-tools"></a>

**AWS services**
+ [CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) helps you set up AWS resources, provision them quickly and consistently, and manage them throughout their lifecycle across AWS accounts and Regions.
+ [AWS Command Line Interface (AWS CLI)](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) is an open-source tool that helps you interact with AWS services through commands in your command-line shell.
+ [Amazon DynamoDB](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Introduction.html) is a fully managed NoSQL database service that provides fast, predictable, and scalable performance.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.
+ [AWS Key Management Service (AWS KMS)](https://docs.aws.amazon.com/kms/latest/developerguide/overview.html) helps you create and control cryptographic keys to help protect your data.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a compute service that helps you run code without needing to provision or manage servers. It runs your code only when needed and scales automatically, so you pay only for the compute time that you use.
+ [Amazon Simple Email Service (Amazon SES)](https://docs.aws.amazon.com/ses/latest/dg/Welcome.html) helps you send and receive emails by using your own email addresses and domains.
+ [Amazon Simple Notification Service (Amazon SNS)](https://docs.aws.amazon.com/sns/latest/dg/welcome.html) helps you coordinate and manage the exchange of messages between publishers and clients, including web servers and email addresses.
+ [Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) is a cloud-based object storage service that helps you store, protect, and retrieve any amount of data.

**Tools needed for deployment**
+ Development environment with the AWS CLI and IAM access to your AWS account. For details, see the links in the [Related resources](#register-multiple-aws-accounts-with-a-single-email-address-by-using-amazon-ses-resources) section.  
+ On your development system, install the following:
  + Git command line tool, available from the [Git downloads website](https://git-scm.com/downloads).
  + The AWS CLI to configure access credentials for the AWS CDK. For more information, see the [AWS CLI documentation](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html).
  + Python version 3.13 or later, available from the [Python downloads website](https://www.python.org/downloads/).
  + UV for Python package management. For installation instructions, see the [UV installation guide](https://docs.astral.sh/uv/getting-started/installation/).
  + Node.js version 22.x or later. For installation instructions, see the [Node.js documentation](https://nodejs.org/en/learn/getting-started/how-to-install-nodejs).
  + AWS CDK CLI version 2.1019.2 or later. For installation instructions, see the [AWS CDK documentation](https://docs.aws.amazon.com/cdk/v2/guide/getting-started.html#getting-started-install).
  + Docker version 20.10.x or later. For installation instructions, see the [Docker documentation](https://docs.docker.com/engine/install/).

**Code**

The code for this pattern is available in the GitHub [AWS account factory email](https://github.com/aws-samples/aws-account-factory-email) repository.

## Epics
<a name="register-multiple-aws-accounts-with-a-single-email-address-by-using-amazon-ses-epics"></a>

### Allocate a target deployment environment
<a name="allocate-a-target-deployment-environment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Identify or create an AWS account. | Identify an existing or new AWS account to which you have full administrative access, to deploy the email solution. | AWS administrator, Cloud administrator | 
| Set up a deployment environment. | Configure an easy to use deployment environment and set up dependencies by following these steps:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/register-multiple-aws-accounts-with-a-single-email-address-by-using-amazon-ses.html) | AWS DevOps, App developer | 

### Set up a verified domain
<a name="set-up-a-verified-domain"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Identify and allocate a domain. | The email forwarding functionality requires a dedicated domain. Identify and allocate a domain or subdomain that you can verify with Amazon SES. This domain should be available to receive incoming email within the AWS account where the email forwarding solution is deployed.Domain requirements:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/register-multiple-aws-accounts-with-a-single-email-address-by-using-amazon-ses.html) | Cloud administrator, Network administrator, DNS administrator | 
| Verify the domain. | Verify that the identified domain can be used to accept incoming email.Complete the instructions in [Verifying your domain for Amazon SES email receiving](https://docs.aws.amazon.com/ses/latest/dg/receiving-email-verification.html) in the Amazon SES documentation. This will require coordination with the person or team who is responsible for the domain's DNS records. | App developer, AWS DevOps | 
| Set up MX records. | Set up your domain with MX records that point to the Amazon SES endpoints in your AWS account and Region. For more information, see [Publishing an MX record for Amazon SES email receiving](https://docs.aws.amazon.com/ses/latest/dg/receiving-email-mx-record.html) in the Amazon SES documentation. | Cloud administrator, Network administrator, DNS administrator | 

### Deploy the email vending and forwarding solution
<a name="deploy-the-email-vending-and-forwarding-solution"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Modify the default values in `cdk.json`. | Edit some of the default values in the `cdk.json` file (in the root of the repository) so that the solution will operate correctly after it is deployed.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/register-multiple-aws-accounts-with-a-single-email-address-by-using-amazon-ses.html) | App developer, AWS DevOps | 
| Deploy the email vending and forwarding solution. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/register-multiple-aws-accounts-with-a-single-email-address-by-using-amazon-ses.html) | App developer, AWS DevOps | 
| Verify that the solution has been deployed. | Verify that the solution deployed successfully before you begin testing:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/register-multiple-aws-accounts-with-a-single-email-address-by-using-amazon-ses.html) | App developer, AWS DevOps | 

### Verify that email vending and forwarding operate as expected
<a name="verify-that-email-vending-and-forwarding-operate-as-expected"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Verify that the API is working. | In this step, you submit test data to the solution's API and confirm that the solution produces the expected output and that backend operations have been performed as expected. Manually run the **Vend Email** Lambda function by using test input. (For an example, see the [sample_vend_request.json file](https://github.com/aws-samples/aws-account-factory-email/blob/main/src/events/sample_vend_request.json).) For `OwnerAddress`, use a valid email address. The API should return an account name and account email with the expected values. | App developer, AWS DevOps | 
| Verify that email is being forwarded. | In this step, you send a test email through the system and verify that the email is forwarded to the expected recipient. A sample send command follows this table.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/register-multiple-aws-accounts-with-a-single-email-address-by-using-amazon-ses.html) | App developer, AWS DevOps | 
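One way to run the forwarding test is to send a message to a vended address from an address in the same verified domain. The following command is a sketch; the addresses and domain are placeholders, and it assumes that your Amazon SES account is out of the sandbox (or that the sender and recipient identities are verified).

```
# Send a test message to a vended account email address
# (addresses are placeholders; accounts.example.com is the domain verified with Amazon SES)
aws ses send-email \
    --from "test@accounts.example.com" \
    --destination "ToAddresses=workload-dev@accounts.example.com" \
    --message "Subject={Data=Forwarding test},Body={Text={Data=This message verifies email forwarding.}}"
```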

## Troubleshooting
<a name="register-multiple-aws-accounts-with-a-single-email-address-by-using-amazon-ses-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| The system doesn’t forward email as expected. | Verify that your setup is correct:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/register-multiple-aws-accounts-with-a-single-email-address-by-using-amazon-ses.html)After you verify your domain setup, follow these steps:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/register-multiple-aws-accounts-with-a-single-email-address-by-using-amazon-ses.html) | 
| When you try to deploy the AWS CDK stack, you receive an error similar to:"Template format error: Unrecognized resource types"  | In most instances, this error message means that the Region you’re targeting doesn’t have all the available AWS services. If you’re using an Amazon EC2 instance to deploy the solution, you might be targeting a Region that is different from the Region where the instance is running.By default, the AWS CDK deploys to the Region and account that you configured in the AWS CLI.Possible solutions:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/register-multiple-aws-accounts-with-a-single-email-address-by-using-amazon-ses.html) | 
| When you deploy the solution, you receive the error message:"Deployment failed: Error: AwsMailFwdStack: SSM parameter /cdk-bootstrap/hnb659fds/version not found. Has the environment been bootstrapped? Please run 'cdk bootstrap'" | If you have never deployed any AWS CDK resources to the AWS account and Region you’re targeting, you will have to first run the `cdk bootstrap` command as the error indicates. If you continue to receive this error after you run the bootstrapping command, you might be trying to deploy the solution to a Region that’s different from the Region where your development environment is running.To solve this problem, set the `AWS_DEFAULT_REGION` environment variable or set a Region with the AWS CLI before you deploy the solution. Alternatively, you can modify the `app.py` file in the root of the repository to include a hard-coded account ID and Region by following the instructions in the [AWS CDK documentation for environments](https://docs.aws.amazon.com/cdk/v2/guide/environments.html). | 

## Related resources
<a name="register-multiple-aws-accounts-with-a-single-email-address-by-using-amazon-ses-resources"></a>
+ For help installing the AWS CLI, see [Installing or updating to the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html).
+ For help setting up the AWS CLI with IAM access credentials, see [Configuring settings for the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html).
+ For help with the AWS CDK, see [Getting started with the AWS CDK](https://docs.aws.amazon.com/cdk/latest/guide/getting_started.html#getting_started_install). 

## Additional information
<a name="register-multiple-aws-accounts-with-a-single-email-address-by-using-amazon-ses-additional"></a>

**Costs**

When you deploy this solution, the AWS account holder might incur costs associated with the use of the following services. It's important to understand how these services are billed so that you're aware of any potential charges. For pricing information, see the following pages:
+ [Amazon SES pricing](https://aws.amazon.com/ses/pricing/)
+ [Amazon S3 pricing](https://aws.amazon.com/s3/pricing/)
+ [AWS KMS pricing](https://aws.amazon.com/kms/pricing/)
+ [AWS Lambda pricing](https://aws.amazon.com/lambda/pricing/)
+ [Amazon DynamoDB pricing](https://aws.amazon.com/dynamodb/pricing/)

# Set up DNS resolution for hybrid networks in a single-account AWS environment
<a name="set-up-dns-resolution-for-hybrid-networks-in-a-single-account-aws-environment"></a>

*Abdullahi Olaoye, Amazon Web Services*

## Summary
<a name="set-up-dns-resolution-for-hybrid-networks-in-a-single-account-aws-environment-summary"></a>

This pattern describes how to set up a fully hybrid Domain Name System (DNS) architecture that enables end-to-end DNS resolution of on-premises resources, AWS resources, and internet DNS queries, without administrative overhead. The pattern describes how to set up Amazon Route 53 Resolver forwarding rules that determine where a DNS query that originates from AWS should be sent, based on the domain name. DNS queries for on-premises resources are forwarded to on-premises DNS resolvers. DNS queries for AWS resources and internet DNS queries are resolved by Route 53 Resolver.

This pattern covers hybrid DNS resolution in an AWS single-account environment. For information about setting up outbound DNS queries in an AWS multi-account environment, see the pattern [Set up DNS resolution for hybrid networks in a multi-account AWS environment](https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-dns-resolution-for-hybrid-networks-in-a-multi-account-aws-environment.html).

## Prerequisites and limitations
<a name="set-up-dns-resolution-for-hybrid-networks-in-a-single-account-aws-environment-prereqs"></a>

**Prerequisites**
+ An AWS account
+ A virtual private cloud (VPC) in your AWS account
+ A network connection between the on-premises environment and your VPC, through AWS Virtual Private Network (AWS VPN) or AWS Direct Connect
+ IP addresses of your on-premises DNS resolvers (reachable from your VPC)
+ Domain/subdomain name to forward to on-premises resolvers (for example, onprem.mydc.com)
+ Domain/subdomain name for the AWS private hosted zone (for example, myvpc.cloud.com)

## Architecture
<a name="set-up-dns-resolution-for-hybrid-networks-in-a-single-account-aws-environment-architecture"></a>

**Target technology stack**
+ Amazon Route 53 private hosted zone
+ Amazon Route 53 Resolver
+ Amazon VPC
+ AWS VPN or Direct Connect

**Target architecture**

![\[Workflow of Hybrid DNS resolution in an AWS single-account environment using Route 53 Resolver.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/120dedc8-cc6c-4aa7-be11-c70a7ee80642/images/7b75f534-1adc-4a39-86d6-5c4596ff7b6a.png)


 

## Tools
<a name="set-up-dns-resolution-for-hybrid-networks-in-a-single-account-aws-environment-tools"></a>
+ [Amazon Route 53 Resolver](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resolver-getting-started.html) makes hybrid cloud easier for enterprise customers by enabling seamless DNS query resolution across your entire hybrid cloud. You can create DNS endpoints and conditional forwarding rules to resolve DNS namespaces between your on-premises data center and your VPCs.
+ [Amazon Route 53 private hosted zone](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/hosted-zones-private.html) is a container that holds information about how you want Route 53 to respond to DNS queries for a domain and its subdomains within one or more VPCs that you create with the Amazon VPC service.

## Epics
<a name="set-up-dns-resolution-for-hybrid-networks-in-a-single-account-aws-environment-epics"></a>

### Configure a private hosted zone
<a name="configure-a-private-hosted-zone"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a Route 53 private hosted zone for an AWS reserved domain name such as myvpc.cloud.com. | This zone holds the DNS records for AWS resources that should be resolved from the on-premises environment. For instructions, see [Creating a private hosted zone](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/hosted-zone-private-creating.html) in the Route 53 documentation. | Network admin, System admin | 
| Associate the private hosted zone with your VPC. | To enable resources in your VPC to resolve DNS records in this private hosted zone, you must associate your VPC with the hosted zone. For instructions, see [Creating a private hosted zone](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/hosted-zone-private-creating.html) in the Route 53 documentation. | Network admin, System admin | 

### Set up Route 53 Resolver endpoints
<a name="set-up-route-53-resolver-endpoints"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an inbound endpoint. | Route 53 Resolver uses the inbound endpoint to receive DNS queries from on-premises DNS resolvers. For instructions, see [Forwarding inbound DNS queries to your VPCs](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resolver-forwarding-inbound-queries.html) in the Route 53 documentation. Make a note of the inbound endpoint IP address. | Network admin, System admin | 
| Create an outbound endpoint. | Route 53 Resolver uses the outbound endpoint to send DNS queries to on-premises DNS resolvers. For instructions, see [Forwarding outbound DNS queries to your network](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resolver-forwarding-outbound-queries.html) in the Route 53 documentation. Make a note of the outbound endpoint ID. A CLI sketch for creating both endpoints follows this table. | Network admin, System admin | 
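The following AWS CLI sketch shows the shape of these two endpoint calls. The endpoint names, security group ID, and subnet IDs are placeholders that you must replace with values from your VPC.

```
# Create an inbound endpoint that receives DNS queries from on-premises resolvers
# (names, security group, and subnets are placeholders)
aws route53resolver create-resolver-endpoint \
    --name on-prem-inbound \
    --direction INBOUND \
    --creator-request-id inbound-endpoint-request-1 \
    --security-group-ids sg-0123456789abcdef0 \
    --ip-addresses SubnetId=subnet-aaaa1111 SubnetId=subnet-bbbb2222

# Create an outbound endpoint that forwards DNS queries to on-premises resolvers
aws route53resolver create-resolver-endpoint \
    --name on-prem-outbound \
    --direction OUTBOUND \
    --creator-request-id outbound-endpoint-request-1 \
    --security-group-ids sg-0123456789abcdef0 \
    --ip-addresses SubnetId=subnet-aaaa1111 SubnetId=subnet-bbbb2222
```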

### Set up a forwarding rule and associate it with your VPC
<a name="set-up-a-forwarding-rule-and-associate-it-with-your-vpc"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a forwarding rule for the on-premises domain. | This rule instructs Route 53 Resolver to forward any DNS queries for on-premises domains (such as onprem.mydc.com) to on-premises DNS resolvers. To create this rule, you need the IP addresses of the on-premises DNS resolvers and the outbound endpoint ID for Route 53 Resolver. For instructions, see [Managing forwarding rules](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resolver-rules-managing.html) in the Route 53 documentation. A CLI sketch for this rule and its VPC association follows this table. | Network admin, System admin | 
| Associate the forwarding rule with your VPC. | For the forwarding rule to take effect, you must associate the rule with your VPC. Route 53 Resolver then takes the rule into consideration when resolving a domain. For instructions, see [Managing forwarding rules](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resolver-rules-managing.html) in the Route 53 documentation. | Network admin, System admin | 
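The following AWS CLI sketch shows the corresponding calls. The domain name matches the example used in this pattern; the target IP addresses, outbound endpoint ID, rule ID, and VPC ID are placeholders.

```
# Create a forwarding rule that sends queries for the on-premises domain
# to the on-premises DNS resolvers through the outbound endpoint
aws route53resolver create-resolver-rule \
    --name onprem-forwarding-rule \
    --creator-request-id onprem-rule-request-1 \
    --rule-type FORWARD \
    --domain-name onprem.mydc.com \
    --resolver-endpoint-id rslvr-out-0123456789abcdef0 \
    --target-ips Ip=10.0.0.10,Port=53 Ip=10.0.0.11,Port=53

# Associate the forwarding rule with your VPC so that the rule takes effect
aws route53resolver associate-resolver-rule \
    --resolver-rule-id rslvr-rr-0123456789abcdef0 \
    --vpc-id vpc-0123456789abcdef0
```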

### Configure on-premises DNS resolvers
<a name="configure-on-premises-dns-resolvers"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Configure conditional forwarding in the on-premises DNS resolvers.  | For DNS queries to be sent to the Route 53 private hosted zone from the on-premises environment, you must configure conditional forwarding in the on-premises DNS resolvers. This instructs the DNS resolvers to forward all DNS queries for the AWS domain (for example, for myvpc.cloud.com) to the inbound endpoint IP address for Route 53 Resolver. | Network admin, System admin | 

### Test end-to-end DNS resolution
<a name="test-end-to-end-dns-resolution"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Test DNS resolution from AWS to the on-premises environment. | From a server in the VPC, perform a DNS query for an on-premises domain (such as server1.onprem.mydc.com). Example commands for both directions follow this table. | Network admin, System admin | 
| Test DNS resolution from the on-premises environment to AWS. | From an on-premises server, perform DNS resolution for an AWS domain (such as server1.myvpc.cloud.com). | Network admin, System admin | 
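For example, you can run `dig` (or `nslookup`) on both sides. The host names below match the example domains in this pattern; the second command assumes that the on-premises resolvers already forward the AWS domain to the inbound endpoint.

```
# From an EC2 instance in the VPC: resolve an on-premises record
dig server1.onprem.mydc.com

# From an on-premises server: resolve a record in the Route 53 private hosted zone
dig server1.myvpc.cloud.com
```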

## Related resources
<a name="set-up-dns-resolution-for-hybrid-networks-in-a-single-account-aws-environment-resources"></a>
+ [Centralized DNS management of hybrid cloud with Amazon Route 53 and AWS Transit Gateway](https://aws.amazon.com/blogs/networking-and-content-delivery/centralized-dns-management-of-hybrid-cloud-with-amazon-route-53-and-aws-transit-gateway/) (AWS Networking & Content Delivery blog)
+ [Simplify DNS management in a multi-account environment with Route 53 Resolver](https://aws.amazon.com/blogs/security/simplify-dns-management-in-a-multiaccount-environment-with-route-53-resolver/) (AWS Security blog)
+ [Working with private hosted zones](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/hosted-zones-private.html) (Route 53 documentation)
+ [Getting started with Route 53 Resolver](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resolver-getting-started.html) (Route 53 documentation)

# Set up UiPath RPA bots automatically on Amazon EC2 by using AWS CloudFormation
<a name="set-up-uipath-rpa-bots-automatically-on-amazon-ec2-by-using-aws-cloudformation"></a>

*Dr. Rahul Sharad Gaikwad and Tamilselvan P, Amazon Web Services*

## Summary
<a name="set-up-uipath-rpa-bots-automatically-on-amazon-ec2-by-using-aws-cloudformation-summary"></a>

This pattern explains how you can deploy robotic process automation (RPA) bots on Amazon Elastic Compute Cloud (Amazon EC2) instances. It uses an [EC2 Image Builder](https://docs.aws.amazon.com/imagebuilder/latest/userguide/what-is-image-builder.html) pipeline to create a custom Amazon Machine Image (AMI). An AMI is a preconfigured virtual machine (VM) image that contains the operating system (OS) and preinstalled software to deploy EC2 instances. This pattern uses AWS CloudFormation templates to install [UiPath Studio Community edition](https://www.uipath.com/product/studio) on the custom AMI. UiPath is an RPA tool that helps you set up robots to automate your tasks.

As part of this solution, EC2 Windows instances are launched by using the base AMI, and the UiPath Studio application is installed on the instances. The pattern uses the Microsoft System Preparation (Sysprep) tool to duplicate the customized Windows installation. After that, it removes the host information and creates a final AMI from the instance. You can then launch the instances on demand by using the final AMI with your own naming conventions and monitoring setup.


**Note:** This pattern doesn’t provide any information about using RPA bots. For that information, see the [UiPath documentation](https://docs.uipath.com/). You can also use this pattern to set up other RPA bot applications by customizing the installation steps based on your requirements.

This pattern provides the following automations and benefits:
+ Application deployment and sharing: You can build Amazon EC2 AMIs for application deployment and share them across multiple accounts through an EC2 Image Builder pipeline, which uses AWS CloudFormation templates as infrastructure as code (IaC) scripts.
+ Amazon EC2 provisioning and scaling: CloudFormation IaC templates provide custom computer name sequences and Active Directory join automation.
+ Observability and monitoring: The pattern sets up Amazon CloudWatch dashboards to help you monitor Amazon EC2 metrics (such as CPU and disk usage).
+ RPA benefits for your business: RPA improves accuracy because robots can perform assigned tasks automatically and consistently. RPA also increases speed and productivity because it removes operations that don’t add value and handles repetitious activities.

## Prerequisites and limitations
<a name="set-up-uipath-rpa-bots-automatically-on-amazon-ec2-by-using-aws-cloudformation-prereqs"></a>

**Prerequisites**
+ An active[ AWS account](https://aws.amazon.com/free/)
+ [AWS Identity and Access Management (IAM) permissions](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-iam-template.html) for deploying CloudFormation templates
+ [IAM policies](https://docs.aws.amazon.com/imagebuilder/latest/userguide/cross-account-dist.html) to set up cross-account AMI distribution with EC2 Image Builder

## Architecture
<a name="set-up-uipath-rpa-bots-automatically-on-amazon-ec2-by-using-aws-cloudformation-architecture"></a>

![\[Target architecture for setting up RPA bots on Amazon EC2\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/5555a62d-91d4-4e81-9961-ff89faedd6ad/images/1893d2d3-8912-4473-adf1-6633b5badcd9.png)


1. The administrator provides the base Windows AMI in the `ec2-image-builder.yaml` file and deploys the stack in the CloudFormation console. (For an AWS CLI alternative, see the example after this list.)

1. The CloudFormation stack deploys the EC2 Image Builder pipeline, which includes the following resources:
   + `Ec2ImageInfraConfiguration`
   + `Ec2ImageComponent`
   + `Ec2ImageRecipe`
   + `Ec2AMI`

1. The EC2 Image Builder pipeline launches a temporary Windows EC2 instance by using the base AMI and installs the required components (in this case, UiPath Studio).

1. The EC2 Image Builder removes all the host information and creates an AMI from Windows Server.

1. You update the `ec2-provisioning.yaml` file with the custom AMI and launch a number of EC2 instances based on your requirements.

1. You deploy the Count macro by using a CloudFormation template. This macro provides a **Count** property for CloudFormation resources so you can specify multiple resources of the same type easily.

1. You update the name of the macro in the CloudFormation `ec2-provisioning.yaml` file and deploy the stack.

1. The administrator updates the `ec2-provisioning.yaml` file based on requirements and launches the stack.

1. The template deploys EC2 instances with the UiPath Studio application.
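If you prefer the AWS CLI to the console, the stack deployments described in this workflow can be scripted. The following sketch uses the template file names from the repository; the stack names and the parameter key for the base AMI are assumptions, so check the templates for the exact parameter names.

```
# Deploy the EC2 Image Builder pipeline stack
# (stack name and parameter key are assumptions; check the template for exact names)
aws cloudformation deploy \
    --stack-name uipath-image-builder \
    --template-file ec2-image-builder.yaml \
    --parameter-overrides BaseImageId=ami-0123456789abcdef0 \
    --capabilities CAPABILITY_NAMED_IAM

# After the custom AMI is available, deploy the provisioning stack that launches
# the EC2 instances with UiPath Studio installed
aws cloudformation deploy \
    --stack-name uipath-ec2-provisioning \
    --template-file ec2-provisioning.yaml \
    --capabilities CAPABILITY_NAMED_IAM
```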

## Tools
<a name="set-up-uipath-rpa-bots-automatically-on-amazon-ec2-by-using-aws-cloudformation-tools"></a>

**AWS services**
+ [AWS CloudFormation](https://aws.amazon.com/cloudformation/) helps you model and manage infrastructure resources in an automated and secure manner.
+ [Amazon CloudWatch](https://aws.amazon.com/cloudwatch/) helps you observe and monitor resources and applications on AWS, on premises, and on other clouds.
+ [Amazon Elastic Compute Cloud (Amazon EC2)](https://aws.amazon.com/ec2/) provides secure and resizable compute capacity in the AWS Cloud. You can launch as many virtual servers as you need and quickly scale them up or down.
+ [EC2 Image Builder](https://aws.amazon.com/image-builder/) simplifies the building, testing, and deployment of virtual machines and container images for use on AWS or on premises.
+ [Amazon EventBridge](https://aws.amazon.com/eventbridge/) helps you build event-driven applications at scale across AWS, existing systems, or software as a service (SaaS) applications.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely control access to AWS resources. With IAM, you can centrally manage permissions that control which AWS resources users can access. You use IAM to control who is authenticated (signed in) and authorized (has permissions) to use resources.
+ [AWS Lambda](https://aws.amazon.com/lambda/) is a serverless, event-driven compute service that lets you run code for virtually any type of application or backend service without provisioning or managing servers. You can call Lambda functions from over 200 AWS services and SaaS applications, and pay only for what you use.
+ [Amazon Simple Storage Service (Amazon S3)](https://aws.amazon.com/s3/) is a cloud-based object storage service that helps you store, protect, and retrieve any amount of data.
+ [AWS Systems Manager Agent (SSM Agent)](https://docs.aws.amazon.com/systems-manager/latest/userguide/ssm-agent.html) helps Systems Manager update, manage, and configure EC2 instances, edge devices, on-premises servers, and virtual machines (VMs).

**Code repositories**

The code for this pattern is available in the GitHub [UiPath RPA bot setup using CloudFormation](https://github.com/aws-samples/uipath-rpa-setup-ec2-windows-ami-cloudformation) repository. The pattern also uses a macro that’s available from the [AWS CloudFormation Macros repository](https://github.com/aws-cloudformation/aws-cloudformation-macros/tree/master/Count).

## Best practices
<a name="set-up-uipath-rpa-bots-automatically-on-amazon-ec2-by-using-aws-cloudformation-best-practices"></a>
+ AWS releases new [Windows AMIs](https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/windows-ami-version-history.html) each month. These contain the latest OS patches, drivers, and launch agents. We recommend that you use the latest AMI when you launch new instances or when you build your own custom images.
+ Apply all available Windows or Linux security patches during image builds.

## Epics
<a name="set-up-uipath-rpa-bots-automatically-on-amazon-ec2-by-using-aws-cloudformation-epics"></a>

### Deploy an image pipeline for the base image
<a name="deploy-an-image-pipeline-for-the-base-image"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Set up an EC2 Image Builder pipeline. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-uipath-rpa-bots-automatically-on-amazon-ec2-by-using-aws-cloudformation.html) | AWS DevOps | 
| View EC2 Image Builder settings. | The EC2 Image Builder settings include infrastructure configuration, distribution settings, and security scanning settings. To view the settings:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-uipath-rpa-bots-automatically-on-amazon-ec2-by-using-aws-cloudformation.html)As a best practice, you should make any updates to EC2 Image Builder through the CloudFormation template only. | AWS DevOps | 
| View the image pipeline. | To view the deployed image pipeline:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-uipath-rpa-bots-automatically-on-amazon-ec2-by-using-aws-cloudformation.html) | AWS DevOps | 
| View Image Builder logs. | EC2 Image Builder logs are aggregated in CloudWatch log groups. To view the logs in CloudWatch:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-uipath-rpa-bots-automatically-on-amazon-ec2-by-using-aws-cloudformation.html)EC2 Image Builder logs are also stored in an S3 bucket. To view the logs in the bucket:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-uipath-rpa-bots-automatically-on-amazon-ec2-by-using-aws-cloudformation.html) | AWS DevOps | 
| Upload the UiPath file to an S3 bucket. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-uipath-rpa-bots-automatically-on-amazon-ec2-by-using-aws-cloudformation.html) | AWS DevOps | 

### Deploy and test the Count macro
<a name="deploy-and-test-the-count-macro"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Deploy the Count macro. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-uipath-rpa-bots-automatically-on-amazon-ec2-by-using-aws-cloudformation.html)If you want to use the console, follow the instructions in the previous epic or in the [CloudFormation documentation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-console-create-stack.html).  | DevOps engineer | 
| Test the Count macro. | To test the macro's capabilities, try launching the example template that’s provided with the macro. <pre>aws cloudformation deploy \<br />    --stack-name Count-test \<br />    --template-file test.yaml \<br />    --capabilities CAPABILITY_IAM</pre> | DevOps engineer | 

### Deploy the CloudFormation stack to provision instances with the custom image
<a name="deploy-the-cloudformation-stack-to-provision-instances-with-the-custom-image"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Deploy the Amazon EC2 provisioning template. | To deploy EC2 Image Pipeline by using CloudFormation:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-uipath-rpa-bots-automatically-on-amazon-ec2-by-using-aws-cloudformation.html) | AWS DevOps | 
| View Amazon EC2 settings. | Amazon EC2 settings include security, networking, storage, status checks, monitoring, and tags configurations. To view these configurations:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-uipath-rpa-bots-automatically-on-amazon-ec2-by-using-aws-cloudformation.html) | AWS DevOps | 
| View the CloudWatch dashboard. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-uipath-rpa-bots-automatically-on-amazon-ec2-by-using-aws-cloudformation.html)After you provision the stack, it takes time to populate the dashboard with metrics.The dashboard provides these metrics: `CPUUtilization`, `DiskUtilization`, `MemoryUtilization`, `NetworkIn`, `NetworkOut`, `StatusCheckFailed`. | AWS DevOps | 
| View custom metrics for memory and disk usage.  | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-uipath-rpa-bots-automatically-on-amazon-ec2-by-using-aws-cloudformation.html) | AWS DevOps | 
| View alarms for memory and disk usage.  | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-uipath-rpa-bots-automatically-on-amazon-ec2-by-using-aws-cloudformation.html) | AWS DevOps | 
| Verify the snapshot lifecycle rule. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-uipath-rpa-bots-automatically-on-amazon-ec2-by-using-aws-cloudformation.html) | AWS DevOps | 

### Delete the environment (optional)
<a name="delete-the-environment-optional"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Delete the stacks. | When your PoC or pilot project is complete, we recommend that you delete the stacks you created to make sure that you aren’t charged for these resources.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-uipath-rpa-bots-automatically-on-amazon-ec2-by-using-aws-cloudformation.html)The stack deletion operation can't be stopped after it begins. The stack proceeds to the `DELETE_IN_PROGRESS` state.If the deletion fails, the stack will be in the `DELETE_FAILED` state. For solutions, see [Delete stack fails](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/troubleshooting.html#troubleshooting-errors-delete-stack-fails) in the AWS CloudFormation troubleshooting documentation.For information about protecting stacks from being accidentally deleted, see [Protecting a stack from being deleted](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-protect-stacks.html) in the AWS CloudFormation documentation. | AWS DevOps | 

## Troubleshooting
<a name="set-up-uipath-rpa-bots-automatically-on-amazon-ec2-by-using-aws-cloudformation-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| When you deploy the Amazon EC2 provisioning template, you get the error: *Received malformed response from transform 123xxxx::Count*. | This is a known issue. (See the custom solution and PR in the [AWS CloudFormation macros repository](https://github.com/aws-cloudformation/aws-cloudformation-macros/pull/20).)To fix this issue, open the AWS Lambda console and update `index.py` with the content from the [GitHub repository](https://raw.githubusercontent.com/aws-cloudformation/aws-cloudformation-macros/f1629c96477dcd87278814d4063c37877602c0c8/Count/src/index.py).  | 

## Related resources
<a name="set-up-uipath-rpa-bots-automatically-on-amazon-ec2-by-using-aws-cloudformation-resources"></a>

**GitHub repositories**
+ [UiPath RPA bot setup using CloudFormation](https://github.com/aws-samples/uipath-rpa-setup-ec2-windows-ami-cloudformation)
+ [Count CloudFormation Macro](https://github.com/aws-cloudformation/aws-cloudformation-macros/tree/master/Count)

**AWS references**
+ [Creating a stack on the AWS CloudFormation console](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-console-create-stack.html) (CloudFormation documentation)
+ [Troubleshooting CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/troubleshooting.html) (CloudFormation documentation)
+ [Monitor memory and disk metrics for Amazon EC2 instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/mon-scripts.html) (Amazon EC2 documentation)
+ [How can I use the CloudWatch agent to view metrics for Performance Monitor on a Windows server?](https://repost.aws/knowledge-center/cloudwatch-performance-monitor-windows) (AWS re:Post article)

**Additional references**
+ [UiPath documentation](https://docs.uipath.com/)
+ [Setting the Hostname in a SysPreped AMI](https://blog.brianbeach.com/2014/07/setting-hostname-in-syspreped-ami.html) (blog post by Brian Beach)
+ [How do I make Cloudformation reprocess a template using a macro when parameters change?](https://stackoverflow.com/questions/59828989/how-do-i-make-cloudformation-reprocess-a-template-using-a-macro-when-parameters) (Stack Overflow)

# Set up a highly available PeopleSoft architecture on AWS
<a name="set-up-a-highly-available-peoplesoft-architecture-on-aws"></a>

*Ramanathan Muralidhar, Amazon Web Services*

## Summary
<a name="set-up-a-highly-available-peoplesoft-architecture-on-aws-summary"></a>

When you migrate your PeopleSoft workloads to AWS, resiliency is an important objective. It ensures that your PeopleSoft application is always highly available and able to recover from failures quickly.

This pattern provides an architecture for your PeopleSoft applications on AWS to ensure high availability (HA) at the network, application, and database tiers. It uses an [Amazon Relational Database Service (Amazon RDS)](https://aws.amazon.com/rds/) for Oracle or Amazon RDS for SQL Server database for the database tier. The architecture is scalable and also includes AWS services such as [Amazon Route 53](https://aws.amazon.com/route53/), [Amazon Elastic Compute Cloud (Amazon EC2)](https://aws.amazon.com/ec2/) Linux instances, [Amazon Elastic Block Store (Amazon EBS)](https://aws.amazon.com/ebs/), [Amazon Elastic File System (Amazon EFS)](https://aws.amazon.com/efs/), and an [Application Load Balancer](https://aws.amazon.com/elasticloadbalancing/application-load-balancer).

[Oracle PeopleSoft](https://www.oracle.com/applications/peoplesoft/) provides a suite of tools and applications for workforce management and other business operations.

## Prerequisites and limitations
<a name="set-up-a-highly-available-peoplesoft-architecture-on-aws-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ A PeopleSoft environment with the necessary licenses for setting it up on AWS
+ A virtual private cloud (VPC) set up in your AWS account with the following resources:
  + At least two Availability Zones
  + One public subnet and three private subnets in each Availability Zone
  + A NAT gateway and an internet gateway
  + Route tables for each subnet to route the traffic
  + Network access control lists (network ACLs) and security groups defined to help ensure the security of the PeopleSoft application in accordance with your organization’s standards

**Limitations**
+ This pattern provides a high availability (HA) solution. It doesn’t support disaster recovery (DR) scenarios. In the rare event that the entire AWS Region that hosts the HA implementation goes down, the application becomes unavailable.

**Product versions**
+ PeopleSoft applications running PeopleTools 8.52 and later

## Architecture
<a name="set-up-a-highly-available-peoplesoft-architecture-on-aws-architecture"></a>

**Target architecture**

Downtime or outage of your PeopleSoft production application impacts the availability of the application and causes major disruptions to your business.

We recommend that you design your PeopleSoft production application so that it is always highly available. You can achieve this by eliminating single points of failure, adding reliable crossover or failover points, and detecting failures. The following diagram illustrates an HA architecture for PeopleSoft on AWS.

![\[Highly available architecture for PeopleSoft on AWS\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/0db96376-dadb-4545-b130-ebbe64acd4e9/images/5d585a8e-320a-495d-a049-97171633e90f.png)


This architecture deployment uses Amazon RDS for Oracle as the PeopleSoft database, and EC2 instances that run Red Hat Enterprise Linux (RHEL). You can also use Amazon RDS for SQL Server as the PeopleSoft database.

This architecture contains the following components: 
+ [Amazon Route 53](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/Welcome.html) is used as the Domain Name System (DNS) service for routing requests from the internet to the PeopleSoft application.
+ [AWS WAF](https://docs.aws.amazon.com/waf/latest/developerguide/waf-chapter.html) helps you protect against common web exploits and bots that can affect availability, compromise security, or consume excessive resources. [AWS Shield Advanced](https://docs.aws.amazon.com/waf/latest/developerguide/shield-chapter.html) (not illustrated) provides much broader protection.
+ An [Application Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html) load-balances HTTP and HTTPS traffic with advanced request routing targeted at the web servers.
+ The web servers, application servers, process scheduler servers, and Elasticsearch servers that support the PeopleSoft application run in multiple Availability Zones and use [Amazon EC2 Auto Scaling](https://docs.aws.amazon.com/autoscaling/ec2/userguide/what-is-amazon-ec2-auto-scaling.html).
+ The database used by the PeopleSoft application runs on [Amazon RDS](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Welcome.html) in a Multi-AZ configuration.
+ The file share used by the PeopleSoft application is configured on [Amazon EFS](https://docs.aws.amazon.com/efs/latest/ug/whatisefs.html) and is used to access files across instances.
+ [Amazon Machine Images (AMIs)](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html) are used by Amazon EC2 Auto Scaling to ensure that PeopleSoft components are cloned quickly when needed.
+ The [NAT gateways](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html) connect instances in a private subnet to services outside your VPC, and ensure that external services cannot initiate a connection with those instances.
+ The [internet gateway](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Internet_Gateway.html) is a horizontally scaled, redundant, and highly available VPC component that allows communication between your VPC and the internet.
+ The bastion hosts in the public subnets provide controlled and secure access to the servers in the private subnets from an external network, such as the internet or an on-premises network.

**Architecture details**

The PeopleSoft database is housed in an Amazon RDS for Oracle (or Amazon RDS for SQL Server) database in a Multi-AZ configuration. The [Amazon RDS Multi-AZ feature](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZ.html) replicates database updates across two Availability Zones to increase durability and availability. Amazon RDS automatically fails over to the standby database for planned maintenance and unplanned disruptions.
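To confirm that the DB instance runs in a Multi-AZ configuration, or to rehearse a failover, you can use the AWS CLI. The DB instance identifier below is a placeholder.

```
# Check whether the PeopleSoft DB instance is Multi-AZ and see its current Availability Zone
aws rds describe-db-instances \
    --db-instance-identifier peoplesoft-prod \
    --query "DBInstances[0].{MultiAZ:MultiAZ,AZ:AvailabilityZone,Status:DBInstanceStatus}"

# Optionally rehearse a failover to the standby (causes a brief interruption)
aws rds reboot-db-instance \
    --db-instance-identifier peoplesoft-prod \
    --force-failover
```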

The PeopleSoft web and middle tier are installed on EC2 instances. These instances are spread across multiple Availability Zones and tied by an [Auto Scaling group](https://docs.aws.amazon.com/autoscaling/ec2/userguide/what-is-amazon-ec2-auto-scaling.html). This ensures that these components are always highly available. A minimum number of required instances are maintained to ensure that the application is always available and can scale when required.

We recommend that you use a current generation EC2 instance type for the OEM EC2 instances. Current generation instance types, such as [instances built on the AWS Nitro System](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html#ec2-nitro-instances), support hardware virtual machines (HVMs). The HVM AMIs are required to take advantage of [enhanced networking](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking.html), and they also offer increased security. The EC2 instances that are part of each Auto Scaling group use their own AMI when replacing or scaling up instances. We recommend that you select EC2 instance types based on the load you want your PeopleSoft application to handle and the minimum values recommended by Oracle for your PeopleSoft application and PeopleTools release. For more information about hardware and software requirements, see the [Oracle support website](https://support.oracle.com).

The PeopleSoft web and middle tier share an Amazon EFS mount to share reports, data files, and (if needed) the `PS_HOME` directory. Amazon EFS is configured with mount targets in each Availability Zone for performance and cost reasons.
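For example, each web or middle-tier instance can mount the shared file system by using the Amazon EFS mount helper from the `amazon-efs-utils` package. The file system ID and mount point below are placeholders.

```
# Mount the shared Amazon EFS file system with the EFS mount helper
# (requires amazon-efs-utils; the file system ID and path are placeholders)
sudo mkdir -p /psoft/shared
sudo mount -t efs -o tls fs-0123456789abcdef0:/ /psoft/shared
```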

An Application Load Balancer is provisioned to support the traffic that accesses the PeopleSoft application and to load-balance that traffic among the web servers across different Availability Zones. The Application Load Balancer provides HA across at least two Availability Zones. The web servers distribute the traffic to different application servers by using a load balancing configuration. Load balancing between the web servers and application servers ensures that load is distributed evenly across the instances, and helps avoid bottlenecks and service disruptions due to overloaded instances.

Amazon Route 53 is used as the DNS service to route traffic to the Application Load Balancer from the internet. Route 53 is a highly available and scalable DNS web service.

**HA details**
+ Databases: The Multi-AZ feature of Amazon RDS operates two databases in multiple Availability Zones with synchronous replication. This creates a highly available environment with automatic failover. Amazon RDS has failover event detection and initiates automated failover when these events occur. You can also initiate manual failover through the Amazon RDS API. For a detailed explanation, see the blog post [Amazon RDS Under The Hood: Multi-AZ](https://aws.amazon.com/blogs/database/amazon-rds-under-the-hood-multi-az/). The failover is seamless and the application automatically reconnects to the database when it happens. However, any process scheduler jobs during the failover generate errors and have to be resubmitted.
+ PeopleSoft application servers: The application servers are spread across multiple Availability Zones and have an Auto Scaling group defined for them. If an instance fails, the Auto Scaling group immediately replaces it with a healthy instance that’s cloned from the AMI of the application server template. Specifically, *jolt pooling* is enabled, so when an application server instance goes down, the sessions automatically fail over to another application server, and the Auto Scaling group automatically spins up another instance, brings up the application server, and registers it in the Amazon EFS mount. The newly created application server is automatically added to the web servers by using the `PSSTRSETUP.SH` script in the web servers. This ensures that the application server is always highly available and recovers from failure quickly.
+ Process schedulers: The process scheduler servers are spread across multiple Availability Zones and have an Auto Scaling group defined for them. If an instance fails, the Auto Scaling group immediately replaces it with a healthy instance that’s cloned from the AMI of the process scheduler server template. Specifically, when a process scheduler instance goes down, the Auto Scaling group automatically spins up another instance and brings up the process scheduler. Any jobs that were running when the instance failed must be resubmitted. This ensures that the process scheduler is always highly available and recovers from failure quickly.
+ Elasticsearch servers: The Elasticsearch servers have an Auto Scaling group defined for them. If an instance fails, the Auto Scaling group immediately replaces it with a healthy instance that’s cloned from the AMI of the Elasticsearch server template. Specifically, when an Elasticsearch instance goes down, the Application Load Balancer that serves requests to it detects the failure and stops sending traffic to it. The Auto Scaling group automatically spins up another instance and brings up the Elasticsearch instance. When the Elasticsearch instance is back up, the Application Load Balancer detects that it’s healthy and starts sending requests to it again. This ensures that the Elasticsearch server is always highly available and recovers from failure quickly.
+ Web servers: The web servers have an Auto Scaling group defined for them. If an instance fails, the Auto Scaling group immediately replaces it with a healthy instance that’s cloned from the AMI of the web server template. Specifically, when a web server instance goes down, the Application Load Balancer that serves requests to it detects the failure and stops sending traffic to it. The Auto Scaling group automatically spins up another instance and brings up the web server instance. When the web server instance is back up, the Application Load Balancer detects that it’s healthy and starts sending requests to it again. This ensures that the web server is always highly available and recovers from failure quickly.

## Tools
<a name="set-up-a-highly-available-peoplesoft-architecture-on-aws-tools"></a>

**AWS services**
+ [Application Load Balancers](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/) distribute incoming application traffic across multiple targets, such as EC2 instances, in multiple Availability Zones.
+ [Amazon Elastic Block Store (Amazon EBS)](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonEBS.html) provides block-level storage volumes for use with Amazon Elastic Compute Cloud (Amazon EC2) instances.
+ [Amazon Elastic Compute Cloud (Amazon EC2)](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html) provides scalable computing capacity in the AWS Cloud. You can launch as many virtual servers as you need and quickly scale them up or down.
+ [Amazon Elastic File System (Amazon EFS)](https://docs.aws.amazon.com/efs/latest/ug/whatisefs.html) helps you create and configure shared file systems in the AWS Cloud.
+ [Amazon Relational Database Service (Amazon RDS)](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Welcome.html) helps you set up, operate, and scale a relational database in the AWS Cloud.
+ [Amazon Route 53](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/Welcome.html) is a highly available and scalable DNS web service.

## Best practices
<a name="set-up-a-highly-available-peoplesoft-architecture-on-aws-best-practices"></a>

**Operational best practices**
+ When you run PeopleSoft on AWS, use Route 53 to route the traffic from the internet and locally. Use the [failover option](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover-configuring.html) to reroute traffic to the disaster recovery (DR) site if the primary DB instance isn’t available.
+ Always use an Application Load Balancer in front of the PeopleSoft environment. This ensures that traffic is load-balanced to the web servers in a secure fashion.
+ In the Application Load Balancer target group settings, make sure that [stickiness is turned on](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/sticky-sessions.html) with a load balancer-generated cookie. This ensures that connections are consistent across the web servers and application servers. (A CLI sketch follows this list.)
**Note**  
You might need to use an application-based cookie if you use external single sign-on (SSO).
+ For a PeopleSoft production application, the Application Load Balancer idle timeout must match what is set in the web profile you use. This prevents user sessions from expiring at the load balancer layer.
+ For a PeopleSoft production application, set the application server [recycle count](https://docs.oracle.com/cd/F28299_01/pt857pbr3/eng/pt/tsvt/concept_PSAPPSRVOptions-c07f06.html?pli=ul_d96e90_tsvt) to a value that minimizes memory leaks.
+ If you’re using an Amazon RDS database for your PeopleSoft production application, as described in this pattern, run it in [Multi-AZ format for high availability](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZ.html).
+ If your database is running on an EC2 instance for your PeopleSoft production application, make sure that a [standby database is running on another Availability Zone](https://docs.aws.amazon.com/prescriptive-guidance/latest/migration-oracle-database/ec2-oracle.html#ec2-oracle-ha) for high availability.
+ For DR, make sure that your Amazon RDS database or EC2 instance has a standby configured in a separate AWS Region from the production database. This ensures that in the event of a disaster in the Region, you can switch the application over to another Region.
+ For DR, use [Amazon Elastic Disaster Recovery](https://aws.amazon.com/disaster-recovery/) to set up application-level components in a separate Region from production components. This ensures that in the event of a disaster in the Region, you can switch the application over to another Region.
+ Use Amazon EFS (for moderate I/O requirements) or [Amazon FSx](https://aws.amazon.com/fsx/) (for high I/O requirements) to store your PeopleSoft reports, attachments, and data files. This ensures that the content is stored in one central location and can be accessed from anywhere within the infrastructure.
+ Use [Amazon CloudWatch](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/WhatIsCloudWatch.html) (basic and detailed) to monitor the AWS Cloud resources that your PeopleSoft application is using in near real time. This ensures that you are alerted of issues instantly and can address them quickly before they affect the availability of the environment.
+ If you’re using an Amazon RDS database as the PeopleSoft database, use [Enhanced Monitoring](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Monitoring.OS.overview.html). This feature provides access to over 50 metrics, including CPU, memory, file system I/O, and disk I/O.
+ Use [AWS CloudTrail](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-user-guide.html) to monitor API calls on the AWS resources that your PeopleSoft application is using. This helps you perform security analysis, resource change tracking, and compliance auditing.
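
As a hedged illustration of the stickiness recommendation above, the following AWS CLI sketch turns on duration-based (load balancer-generated) stickiness for an existing target group. The target group ARN and the one-day duration are placeholders.

```
# Minimal sketch: enable load balancer-generated cookie stickiness on a target group.
# The ARN and duration are placeholders.
aws elbv2 modify-target-group-attributes \
  --target-group-arn arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/PSFTWEB/0123456789abcdef \
  --attributes Key=stickiness.enabled,Value=true \
               Key=stickiness.type,Value=lb_cookie \
               Key=stickiness.lb_cookie.duration_seconds,Value=86400
```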

**Security best practices**
+ To protect your PeopleSoft application from common exploits such as SQL injection or cross-site scripting (XSS), use [AWS WAF](https://docs.aws.amazon.com/waf/latest/developerguide/waf-chapter.html). Consider using [AWS Shield Advanced](https://docs.aws.amazon.com/waf/latest/developerguide/shield-chapter.html) for tailored detection and mitigation services.
+ Add a rule to the Application Load Balancer to redirect traffic from HTTP to HTTPS automatically to help secure your PeopleSoft application. (A CLI sketch follows this list.)
+ Set up a separate security group for the Application Load Balancer. This security group should allow only HTTPS/HTTP inbound traffic and no outbound traffic. This ensures that only intended traffic is allowed and helps secure your application.
+ Use private subnets for the application servers, web servers, and database, and use [NAT gateways](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html) for outbound internet traffic. This ensures that the servers that support the application aren’t reachable publicly, while providing public access only to the servers that need it.
+ Use different VPCs to run your PeopleSoft production and non-production environments. Use [AWS Transit Gateway](https://aws.amazon.com/transit-gateway/), [VPC peering](https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html), [network ACLs](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-network-acls.html), and [security groups](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html) to control the traffic flow between the [VPC](https://aws.amazon.com/vpc/)s and, if necessary, your on-premises data center.
+ Follow the principle of least privilege. Grant access to the AWS resources used by the PeopleSoft application only to users who absolutely need it. Grant only the minimum privileges required to perform a task. For more information, see the [security pillar](https://docs.aws.amazon.com/wellarchitected/latest/security-pillar/sec_permissions_least_privileges.html) of the AWS Well-Architected Framework.
+ Wherever possible, use [AWS Systems Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/what-is-systems-manager.html) to access the EC2 instances that the PeopleSoft application uses.
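
The following AWS CLI sketch shows one way to add the HTTP-to-HTTPS redirect rule mentioned above by creating an HTTP listener whose default action redirects to HTTPS. The load balancer ARN is a placeholder.

```
# Minimal sketch: redirect all HTTP traffic on port 80 to HTTPS on port 443.
# The load balancer ARN is a placeholder.
aws elbv2 create-listener \
  --load-balancer-arn arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/PSFTLB/0123456789abcdef \
  --protocol HTTP --port 80 \
  --default-actions 'Type=redirect,RedirectConfig={Protocol=HTTPS,Port=443,StatusCode=HTTP_301}'
```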

**Reliability best practices**
+ When you use an Application Load Balancer, register at least one target in each enabled Availability Zone. This makes the load balancer most effective.
+ We recommend that you have three distinct URLs for each PeopleSoft production environment: one URL to access the application, one to serve the integration broker, and one to view reports. If possible, each URL should have its own dedicated web servers and application servers. This design helps make your PeopleSoft application more secure, because each URL has a distinct functionality and controlled access. It also minimizes the scope of impact if the underlying services fail.
+ We recommend that you configure [health checks on the load balancer target groups](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/target-group-health-checks.html) for your PeopleSoft application. The health checks should be performed on the web servers instead of the EC2 instances running those servers. This ensures that if the web server crashes or the EC2 instance that hosts the web server goes down, the Application Load Balancer reflects that information accurately. (A CLI sketch follows this list.)
+ For a PeopleSoft production application, we recommend that you spread the web servers across at least three Availability Zones. This ensures that the PeopleSoft application is always highly available even if one of the Availability Zones goes down.
+ For a PeopleSoft production application, enable jolt pooling (`joltPooling=true`). This ensures that your application fails over to another application server if a server is down for patching purposes or because of a VM failure.
+ For a PeopleSoft production application, set `DynamicConfigReload` to `1`. This setting is supported in PeopleTools version 8.52 and later. It adds new application servers to the web server dynamically, without restarting the servers.
+ To minimize downtime when you apply PeopleTools patches, use the blue/green deployment method for your Auto Scaling group launch configurations for the web and application servers. For more information, see the [Overview of deployment options on AWS](https://docs.aws.amazon.com/whitepapers/latest/overview-deployment-options/bluegreen-deployments.html) whitepaper.
+ Use [AWS Backup](https://docs.aws.amazon.com/aws-backup/latest/devguide/whatisbackup.html) to back up your PeopleSoft application on AWS. AWS Backup is a cost-effective, fully managed, policy-based service that simplifies data protection at scale.
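
As a sketch of the health check recommendation above, the following AWS CLI command points a target group health check at the web server process rather than relying only on EC2 instance status. The target group ARN, port, and path are placeholders; use the port and sign-on path of your web profile.

```
# Minimal sketch: health-check the PeopleSoft web server directly.
# The ARN, port, and path are placeholders.
aws elbv2 modify-target-group \
  --target-group-arn arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/PSFTWEB/0123456789abcdef \
  --health-check-protocol HTTP \
  --health-check-port 8000 \
  --health-check-path /ps/signon.html \
  --healthy-threshold-count 3 --unhealthy-threshold-count 2 \
  --health-check-interval-seconds 30 --health-check-timeout-seconds 5
```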

**Performance best practices**
+ Terminate the SSL at the Application Load Balancer for optimal performance of the PeopleSoft environment, unless your business requires encrypted traffic throughout the environment.
+ Create [interface VPC endpoints](https://docs.aws.amazon.com/vpc/latest/privatelink/create-interface-endpoint.html) for AWS services such as [Amazon Simple Notification Service (Amazon SNS)](https://docs.aws.amazon.com/sns/latest/dg/welcome.html) and [CloudWatch](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/WhatIsCloudWatch.html) so that traffic is always internal. This is cost-effective and helps keep your application secure.

**Cost optimization best practices**
+ Tag all the resources used by your PeopleSoft environment, and enable [cost allocation tags](https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/cost-alloc-tags.html). These tags help you view and manage your resource costs.
+ For a PeopleSoft production application, set up Auto Scaling groups for the web servers and the application servers. This maintains a minimal number of web and application servers to support your application. You can use [Auto Scaling group policies](https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scaling-simple-step.html) to scale the servers up and down as required.
+ Use [billing alarms](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/monitor_estimated_charges_with_cloudwatch.html) to get alerts when costs exceed a budget threshold that you specify. (A CLI sketch follows this list.)
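
The following AWS CLI sketch creates a billing alarm of the kind described above. Billing metrics are published in the us-east-1 Region; the SNS topic ARN and the threshold are placeholders.

```
# Minimal sketch: alert an SNS topic when estimated charges exceed a threshold.
# The topic ARN and threshold are placeholders.
aws cloudwatch put-metric-alarm \
  --region us-east-1 \
  --alarm-name peoplesoft-billing-alarm \
  --namespace "AWS/Billing" \
  --metric-name EstimatedCharges \
  --dimensions Name=Currency,Value=USD \
  --statistic Maximum \
  --period 21600 \
  --evaluation-periods 1 \
  --threshold 5000 \
  --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:sns:us-east-1:111122223333:billing-alerts
```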

**Sustainability best practices**
+ Use [infrastructure as code](https://docs.aws.amazon.com/whitepapers/latest/introduction-devops-aws/infrastructure-as-code.html) (IaC) to maintain your PeopleSoft environments. This helps you build consistent environments and maintain change control.

## Epics
<a name="set-up-a-highly-available-peoplesoft-architecture-on-aws-epics"></a>

### Migrate your PeopleSoft database to Amazon RDS
<a name="migrate-your-peoplesoft-database-to-amazon-rds"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a DB subnet group. | On the [Amazon RDS console](https://console.aws.amazon.com/rds/), in the navigation pane, choose **Subnet groups**, and then create an Amazon RDS DB subnet group with subnets in multiple Availability Zones. This is required for the Amazon RDS database to run in a Multi-AZ configuration. | Cloud administrator | 
| Create the Amazon RDS database. | Create an Amazon RDS database in an Availability Zone of the AWS Region you selected for the PeopleSoft HA environment. When you create the Amazon RDS database, make sure to select the Multi-AZ option (**Create a standby instance**) and the database subnet group you created in the previous step. For more information, see the [Amazon RDS documentation](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_CreateDBInstance.html). (A CLI sketch follows this table.) | Cloud administrator, Oracle database administrator | 
| Migrate your PeopleSoft database to Amazon RDS. | Migrate your existing PeopleSoft database into the Amazon RDS database by using AWS Database Migration Service (AWS DMS). For more information, see the [AWS DMS documentation](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.Oracle.html) and the AWS blog post [Migrating Oracle databases with near-zero downtime using AWS DMS](https://aws.amazon.com/blogs/database/migrating-oracle-databases-with-near-zero-downtime-using-aws-dms/). | Cloud administrator, PeopleSoft DBA | 
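
If you prefer the AWS CLI to the console, the following is a minimal sketch of creating a Multi-AZ Oracle DB instance in the DB subnet group from the first task. The identifier, engine edition, instance class, storage, and credentials are placeholders; size them for your PeopleSoft workload.

```
# Minimal sketch: Multi-AZ Oracle DB instance in the DB subnet group created earlier.
# All values are placeholders.
aws rds create-db-instance \
  --db-instance-identifier psft-hcm-prod \
  --engine oracle-ee \
  --db-instance-class db.r5.2xlarge \
  --allocated-storage 500 \
  --db-subnet-group-name psft-db-subnet-group \
  --multi-az \
  --master-username admin \
  --master-user-password '<replace-with-a-strong-password>' \
  --no-publicly-accessible
```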

### Set up your Amazon EFS file system
<a name="set-up-your-amazon-efs-file-system"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a file system. | On the [Amazon EFS console](https://console.aws.amazon.com/efs/), create a file system and mount targets for each Availability Zone. For instructions, see the [Amazon EFS documentation](https://docs.aws.amazon.com/efs/latest/ug/creating-using-create-fs.html#creating-using-fs-part1-console). When the file system has been created, note its DNS name. You will use this information when you mount the file system. (A CLI sketch follows this table.) | Cloud administrator | 
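
The following AWS CLI sketch is an alternative to the console steps: it creates a file system and one mount target per Availability Zone subnet. The subnet and security group IDs are placeholders, and you repeat the mount target command for each Availability Zone.

```
# Minimal sketch: create the shared file system and a mount target in one subnet.
# IDs are placeholders; repeat create-mount-target for each Availability Zone.
aws efs create-file-system \
  --performance-mode generalPurpose \
  --encrypted \
  --tags Key=Name,Value=psft-shared-fs

aws efs create-mount-target \
  --file-system-id fs-0123456789abcdef0 \
  --subnet-id subnet-aaaa1111 \
  --security-groups sg-0123456789abcdef0
```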

### Set up your PeopleSoft application and file system
<a name="set-up-your-peoplesoft-application-and-file-system"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Launch an EC2 instance. | Launch an EC2 instance for your PeopleSoft application. For instructions, see the [Amazon EC2 documentation](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-launch-instance-wizard.html#liw-quickly-launch-instance).[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-a-highly-available-peoplesoft-architecture-on-aws.html) | Cloud administrator, PeopleSoft administrator | 
| Install PeopleSoft on the instance. | Install your PeopleSoft application and PeopleTools on the EC2 instance you created. For instructions, see the [Oracle documentation](https://docs.oracle.com). | Cloud administrator, PeopleSoft administrator | 
| Create the application server. | Create the application server for the AMI template and make sure that it connects successfully to the Amazon RDS database. | Cloud administrator, PeopleSoft administrator | 
| Mount the Amazon EFS file system. | Log in to the EC2 instance as the root user and run the following commands to mount the Amazon EFS file system to a folder called `PSFTMNT` on the server.<pre>sudo su -<br />mkdir /psftmnt<br />cat /etc/fstab</pre>Append the following line to the `/etc/fstab` file. Use the DNS name you noted when you created the file system.<pre>fs-09e064308f1145388.efs.us-east-1.amazonaws.com:/ /psftmnt nfs4 nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport,_netdev 0 0</pre>Then run the following command to mount the file system.<pre>mount -a</pre> | Cloud administrator, PeopleSoft administrator | 
| Check permissions. | Make sure that the `PSFTMNT` folder has the proper permissions so that the PeopleSoft user can access it properly. (A command sketch follows this table.) | Cloud administrator, PeopleSoft administrator | 
| Create additional instances. | Repeat the previous steps in this epic to create template instances for the process scheduler, web server, and Elasticsearch server. Name these instances `PRCS_TEMPLATE`, `WEB_TEMPLATE`, and `SRCH_TEMPLATE`. For the web server, set `joltPooling=true` and `DynamicConfigReload=1`. | Cloud administrator, PeopleSoft administrator | 
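
For the permissions check, the following is a minimal sketch; the PeopleSoft user and group names are placeholders for the operating system account that runs your PeopleSoft domains.

```
# Minimal sketch: give the PeopleSoft OS user ownership of the mount point.
# The user and group names are placeholders.
sudo chown -R psadm:psft /psftmnt
sudo chmod 775 /psftmnt
```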

### Create scripts to set up servers
<a name="create-scripts-to-set-up-servers"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a script to install the application server. | In the Amazon EC2 `APP_TEMPLATE` instance, as the PeopleSoft user, create the following script. Name it `appstart.sh` and place it in the `PS_HOME` directory. You will use this script to bring up the application server and also record the server name on the Amazon EFS mount.<pre>#!/bin/ksh<br />. /usr/homes/hcmdemo/.profile<br />psadmin -c configure -d HCMDEMO<br />psadmin -c parallelboot -d HCMDEMO<br />touch /psftmnt/`echo $HOSTNAME`</pre> | PeopleSoft administrator | 
| Create a script to install the process scheduler server. | In the Amazon EC2 `PRCS_TEMPLATE` instance, as the PeopleSoft user, create the following script. Name it `prcsstart.sh` and place it in the `PS_HOME` directory. You will use this script to bring up the process scheduler server.<pre>#!/bin/ksh<br />. /usr/homes/hcmdemo/.profile<br /># The following line ensures that the process scheduler always has a unique name during replacement or scaling activity.<br />sed -i "s/.*PrcsServerName.*/`hostname -I | awk -F. '{print "PrcsServerName=PSUNX"$3$4}'`/" $HOME/appserv/prcs/*/psprcs.cfg<br />psadmin -p configure -d HCMDEMO<br />psadmin -p start -d HCMDEMO</pre> | PeopleSoft administrator | 
| Create a script to install the Elasticsearch server. | In the Amazon EC2 `SRCH_TEMPLATE` instance, as the Elasticsearch user, create the following script. Name it `srchstart.sh` and place it in the `HOME` directory.<pre>#!/bin/ksh<br /># The following line ensures that the correct IP is indicated in the elasticsearch.yaml file.<br />sed -i "s/.*network.host.*/`hostname -I | awk '{print "host:"$0}'`/" $ES_HOME_DIR/config/elasticsearch.yaml<br />nohup $ES_HOME_DIR/bin/elasticsearch &</pre> | PeopleSoft administrator | 
| Create a script to install the web server. | In the Amazon EC2 `WEB_TEMPLATE` instance, as the web server user, create the following scripts in the `HOME` directory.`renip.sh`: This script ensures that the web server has the correct IP when cloned from the AMI.<pre>#!/bin/ksh<br />hn=`hostname`<br /># On the following line, replace <hostname-of-the-web-template> with the hostname of the web template.<br />for text_file in `find * -type f -exec grep -l '<hostname-of-the-web-template>' {} \;`<br />do<br />sed -e 's/<hostname-of-the-web-template>/'$hn'/g' $text_file > temp<br />mv -f temp $text_file<br />done</pre>`psstrsetup.sh`: This script ensures that the web server uses the correct application server IPs that are currently running. It tries to connect to each application server on the jolt port and adds it to the configuration file.<pre>#!/bin/ksh<br />c2=""<br />for ctr in `ls -1 /psftmnt/*.internal`<br />do<br />c1=`echo $ctr | awk -F "/" '{print $3}'`<br /># In the following lines, 9000 is the jolt port. Change it if necessary.<br />if nc -z $c1 9000 2> /dev/null; then<br />if [[ $c2 = "" ]]; then<br />c2="psserver="`echo $c1`":9000"<br />else<br />c2=`echo $c2`","`echo $c1`":9000"<br />fi<br />fi<br />done</pre>`webstart.sh`: This script runs the two previous scripts and starts the web servers.<pre>#!/bin/ksh<br /># Change the path on the following line if necessary.<br />cd /usr/homes/hcmdemo<br />./renip.sh<br />./psstrsetup.sh<br />webserv/peoplesoft/bin/startPIA.sh</pre> | PeopleSoft administrator | 
| Add a crontab entry. | In the Amazon EC2 `WEB_TEMPLATE` instance, as the web server user, add the following line to **crontab**. Change the time and path to reflect the values you need. This entry ensures that your web server always has the correct application server entries in the `configuration.properties` file.<pre>* * * * * /usr/homes/hcmdemo/psstrsetup.sh</pre> | PeopleSoft administrator | 

### Create AMIs and Auto Scaling group templates
<a name="create-amis-and-auto-scaling-group-templates"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an AMI for the application server template. | On the Amazon EC2 console, create an AMI image of the Amazon EC2 `APP_TEMPLATE` instance. Name the AMI `PSAPPSRV-SCG-VER1`. For instructions, see the [Amazon EC2 documentation](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/creating-an-ami-ebs.html). | Cloud administrator, PeopleSoft administrator | 
| Create AMIs for the other servers. | Repeat the previous step to create AMIs for the process scheduler, Elasticsearch server, and web server. | Cloud administrator, PeopleSoft administrator | 
| Create a launch template for the application server Auto Scaling group. | Create a launch template for the application server Auto Scaling group. Name the template `PSAPPSRV_TEMPLATE`. In the template, choose the AMI you created for the `APP_TEMPLATE` instance. For instructions, see the [Amazon EC2 documentation](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/create-launch-template.html#create-launch-template-from-instance). (A CLI sketch follows this table.)[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-a-highly-available-peoplesoft-architecture-on-aws.html) | Cloud administrator, PeopleSoft administrator | 
| Create a launch template for the process scheduler server Auto Scaling group. | Repeat the previous step to create a launch template for the process scheduler server Auto Scaling group. Name the template `PSPRCS_TEMPLATE`. In the template, choose the AMI you created for the process scheduler.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-a-highly-available-peoplesoft-architecture-on-aws.html) | Cloud administrator, PeopleSoft administrator | 
| Create a launch template for the Elasticsearch server Auto Scaling group. | Repeat the previous steps to create a launch template for the Elasticsearch server Auto Scaling group. Name the template `SRCH_TEMPLATE`. In the template, choose the AMI you created for the search server.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-a-highly-available-peoplesoft-architecture-on-aws.html) | Cloud administrator, PeopleSoft administrator | 
| Create a launch template for the web server Auto Scaling group. | Repeat the previous steps to create a launch template for the web server Auto Scaling group. Name the template `WEB_TEMPLATE`. In the template, choose the AMI you created for the web server.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-a-highly-available-peoplesoft-architecture-on-aws.html) | Cloud administrator, PeopleSoft administrator | 
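
The following AWS CLI sketch shows the equivalent of the console steps in this epic for the application server: create an AMI from the template instance, and then create a launch template that uses it. The instance ID, AMI ID, instance type, security group, and key pair name are placeholders.

```
# Minimal sketch: AMI from the APP_TEMPLATE instance, then a launch template that uses it.
# All IDs and the instance type are placeholders.
aws ec2 create-image \
  --instance-id i-0123456789abcdef0 \
  --name PSAPPSRV-SCG-VER1 \
  --description "PeopleSoft application server template"

aws ec2 create-launch-template \
  --launch-template-name PSAPPSRV_TEMPLATE \
  --launch-template-data '{"ImageId":"ami-0123456789abcdef0","InstanceType":"m5.xlarge","SecurityGroupIds":["sg-0123456789abcdef0"],"KeyName":"psft-key"}'
```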

### Create Auto Scaling groups
<a name="create-auto-scaling-groups"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an Auto Scaling group for the application server. | On the Amazon EC2 console, create an Auto Scaling group called `PSAPPSRV_ASG` for the application server by using the `PSAPPSRV_TEMPLATE` template. For instructions, see the [Amazon EC2 documentation](https://docs.aws.amazon.com/autoscaling/ec2/userguide/create-asg-launch-template.html).[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-a-highly-available-peoplesoft-architecture-on-aws.html) | Cloud administrator, PeopleSoft administrator | 
| Create Auto Scaling groups for the other servers. | Repeat the previous step to create Auto Scaling groups for the process scheduler, Elasticsearch server, and web server. | Cloud administrator, PeopleSoft administrator | 

### Create and configure target groups
<a name="create-and-configure-target-groups"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a target group for the web server. | On the Amazon EC2 console, create a target group called `PSFTWEB` for the web server. For instructions, see the [Elastic Load Balancing documentation](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/create-target-group.html). Set the port to the port that the web server is listening on. | Cloud administrator | 
| Configure health checks. | Confirm that the health checks have the correct values to reflect your business requirements. For more information, see the [Elastic Load Balancing documentation](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/target-group-health-checks.html). | Cloud administrator | 
| Create a target group for the Elasticsearch server. | Repeat the previous steps to create a target group called `PSFTSRCH` for the Elasticsearch server, and set the correct Elasticsearch port. | Cloud administrator | 
| Add target groups to Auto Scaling groups. | Open the web server Auto Scaling group called `PSPIA_ASG` that you created earlier. On the **Load balancing** tab, choose **Edit** and then add the `PSFTWEB` target group to the Auto Scaling group. Repeat this step for the Elasticsearch Auto Scaling group `PSSRCH_ASG` to add the target group `PSFTSRCH` you created earlier. (A CLI sketch follows this table.) | Cloud administrator | 
| Set session stickiness. | In the target group `PSFTWEB`, choose the **Attributes** tab, choose **Edit**, and set the session stickiness. For stickiness type, choose **Load balancer generated cookie**, and set the duration to 1. For more information, see the [Elastic Load Balancing documentation](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/sticky-sessions.html). Repeat this step for the target group `PSFTSRCH`. | Cloud administrator | 
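
If you prefer the AWS CLI for the attachment step, the following sketch attaches the web and Elasticsearch target groups to their Auto Scaling groups. The target group ARNs are placeholders.

```
# Minimal sketch: attach target groups to the web and Elasticsearch Auto Scaling groups.
# The ARNs are placeholders.
aws autoscaling attach-load-balancer-target-groups \
  --auto-scaling-group-name PSPIA_ASG \
  --target-group-arns arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/PSFTWEB/0123456789abcdef

aws autoscaling attach-load-balancer-target-groups \
  --auto-scaling-group-name PSSRCH_ASG \
  --target-group-arns arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/PSFTSRCH/0123456789abcdef
```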

### Create and configure application load balancers
<a name="create-and-configure-application-load-balancers"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a load balancer for the web servers. | Create an Application Load Balancer named `PSFTLB` to load-balance traffic to the web servers. For instructions, see the [Elastic Load Balancing documentation](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/create-application-load-balancer.html#configure-load-balancer).[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-a-highly-available-peoplesoft-architecture-on-aws.html) | Cloud administrator | 
| Create a load balancer for the Elasticsearch servers. | Create an Application Load Balancer named `PSFTSCH` to load-balance traffic to the Elasticsearch servers.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-a-highly-available-peoplesoft-architecture-on-aws.html) | Cloud administrator | 
| Configure Route 53. | On the [Amazon Route 53 console](https://console.aws.amazon.com/route53/), create a record in the hosted zone that will service the PeopleSoft application. For instructions, see the [Amazon Route 53 documentation](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resource-record-sets-creating.html). This ensures that all the traffic passes through the `PSFTLB` load balancer. (A CLI sketch follows this table.) | Cloud administrator | 
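
The following AWS CLI sketch creates an alias record that routes the PeopleSoft URL to the `PSFTLB` load balancer. The hosted zone IDs, domain name, and load balancer DNS name are placeholders; the alias target hosted zone ID depends on the Region of the load balancer.

```
# Minimal sketch: alias the PeopleSoft URL to the PSFTLB Application Load Balancer.
# All IDs and names are placeholders.
aws route53 change-resource-record-sets \
  --hosted-zone-id Z0123456789EXAMPLE \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "psft.example.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z35SXDOTRQ7X7K",
          "DNSName": "PSFTLB-0123456789.us-east-1.elb.amazonaws.com",
          "EvaluateTargetHealth": true
        }
      }
    }]
  }'
```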

## Related resources
<a name="set-up-a-highly-available-peoplesoft-architecture-on-aws-resources"></a>
+ [Oracle PeopleSoft website](https://www.oracle.com/applications/peoplesoft/)
+ [AWS documentation](https://docs.aws.amazon.com)

# Set up disaster recovery for Oracle JD Edwards EnterpriseOne with AWS Elastic Disaster Recovery
<a name="set-up-disaster-recovery-for-oracle-jd-edwards-enterpriseone-with-aws-elastic-disaster-recovery"></a>

*Thanigaivel Thirumalai, Amazon Web Services*

## Summary
<a name="set-up-disaster-recovery-for-oracle-jd-edwards-enterpriseone-with-aws-elastic-disaster-recovery-summary"></a>

Disasters that are triggered by natural catastrophes, application failures, or disruption of services harm revenue and cause downtime for corporate applications. To reduce the repercussions of such events, planning for disaster recovery (DR) is critical for firms that adopt JD Edwards EnterpriseOne enterprise resource planning (ERP) systems and other mission-critical and business-critical software. 

This pattern explains how businesses can use AWS Elastic Disaster Recovery as a DR option for their JD Edwards EnterpriseOne applications. It also outlines the steps for using Elastic Disaster Recovery failover and failback to construct a cross-Region DR strategy for databases hosted on an Amazon Elastic Compute Cloud (Amazon EC2) instance in the AWS Cloud.

**Note**  
This pattern requires the primary and secondary Regions for the cross-Region DR implementation to be hosted on AWS.

[Oracle JD Edwards EnterpriseOne](https://www.oracle.com/applications/jd-edwards-enterpriseone/) is an integrated ERP software solution for midsize to large companies in a wide range of industries.

AWS Elastic Disaster Recovery minimizes downtime and data loss with fast, reliable recovery of on-premises and cloud-based applications by using affordable storage, minimal compute, and point-in-time recovery.

AWS provides [four core DR architecture patterns](https://docs.aws.amazon.com/whitepapers/latest/disaster-recovery-workloads-on-aws/disaster-recovery-options-in-the-cloud.html). This document focuses on setup, configuration, and optimization by using the [pilot light strategy](https://docs.aws.amazon.com/whitepapers/latest/disaster-recovery-workloads-on-aws/disaster-recovery-options-in-the-cloud.html). This strategy helps you create a lower-cost DR environment where you initially provision a replication server for replicating data from the source database, and you provision the actual database server only when you start a DR drill and recovery. This strategy removes the expense of maintaining a database server in the DR Region. Instead, you pay for a smaller EC2 instance that serves as a replication server.

## Prerequisites and limitations
<a name="set-up-disaster-recovery-for-oracle-jd-edwards-enterpriseone-with-aws-elastic-disaster-recovery-prereqs"></a>

**Prerequisites**
+ An active AWS account.
+ A JD Edwards EnterpriseOne application running on Oracle Database or Microsoft SQL Server with a supported database in a running state on a managed EC2 instance. This application should include all JD Edwards EnterpriseOne base components (Enterprise Server, HTML Server, and Database Server) installed in one AWS Region.
+ An AWS Identity and Access Management (IAM) role to set up the Elastic Disaster Recovery service.
+ The network for running Elastic Disaster Recovery configured according to the required [connectivity settings](https://docs.aws.amazon.com/drs/latest/userguide/Network-Requirements.html).

**Limitations**
+ You can use this pattern to replicate all tiers, unless the database is hosted on Amazon Relational Database Service (Amazon RDS), in which case we recommend that you use the [cross-Region copy functionality](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_CopySnapshot.html) of Amazon RDS.
+ Elastic Disaster Recovery isn’t compatible with CloudEndure Disaster Recovery, but you can upgrade from CloudEndure Disaster Recovery. For more information, see the [FAQ](https://docs.aws.amazon.com/drs/latest/userguide/cedr-to-drs.html) in the Elastic Disaster Recovery documentation.
+ Amazon Elastic Block Store (Amazon EBS) limits the rate at which you can take snapshots. You can replicate a maximum of 300 servers in a single AWS account by using Elastic Disaster Recovery. To replicate more servers, you can use multiple AWS accounts or multiple target AWS Regions. (You will have to set up Elastic Disaster Recovery separately for each account and Region.) For more information, see [Best practices](https://docs.aws.amazon.com/drs/latest/userguide/best_practices_drs.html) in the Elastic Disaster Recovery documentation.
+ The source workloads (the JD Edwards EnterpriseOne application and database) must be hosted on EC2 instances. This pattern doesn’t support workloads that are on premises or in other cloud environments.
+ This pattern focuses on the JD Edwards EnterpriseOne components. A full DR and business continuity plan (BCP) should include other core services, including:
  + Networking (virtual private cloud, subnets, and security groups)
  + Active Directory
  + Amazon WorkSpaces
  + Elastic Load Balancing
  + A managed database service such as Amazon Relational Database Service (Amazon RDS)

For additional information about prerequisites, configurations, and limitations, see the [Elastic Disaster Recovery documentation](https://docs.aws.amazon.com/drs/latest/userguide/what-is-drs.html).

**Product versions**
+ Oracle JD Edwards EnterpriseOne (Oracle and SQL Server supported versions based on Oracle Minimum Technical Requirements)

## Architecture
<a name="set-up-disaster-recovery-for-oracle-jd-edwards-enterpriseone-with-aws-elastic-disaster-recovery-architecture"></a>

**Target technology stack**
+ A single Region and single virtual private cloud (VPC) for production and non-production, and a second Region for DR
+ A single Availability Zone in each Region to ensure low latency between servers
+ An Application Load Balancer that distributes network traffic to improve the scalability and availability of your applications across multiple Availability Zones
+ Amazon Route 53 to provide Domain Name System (DNS) configuration
+ Amazon WorkSpaces to provide users with a desktop experience in the cloud
+ Amazon Simple Storage Service (Amazon S3) for storing backups, files, and objects
+ Amazon CloudWatch for application logging, monitoring, and alarms
+ AWS Elastic Disaster Recovery for disaster recovery

**Target architecture**

The following diagram shows the cross-Region disaster recovery architecture for JD Edwards EnterpriseOne using Elastic Disaster Recovery.

![\[Architecture for JD Edwards EnterpriseOne cross-Region DR on AWS\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/9b0de5f0-f211-4086-a044-321d081604f9/images/978b7219-e54e-4e31-b3ff-4885784e2971.png)


**Procedure**

Here is a high-level review of the process. For details, see the *Epics* section.
+ Elastic Disaster Recovery replication begins with an initial sync. During the initial sync, the AWS Replication Agent replicates all the data from the source disks to the appropriate resource in the staging area subnet.
+ Continuous replication continues indefinitely after the initial sync is complete.
+ After the agent has been installed and replication has started, you review the launch parameters, which include service-specific configurations and an Amazon EC2 launch template.
+ To begin the launch operation, Elastic Disaster Recovery issues a series of API calls, and the recovery instance is launched immediately on AWS according to your launch settings. The service automatically spins up a conversion server during startup.
+ After the conversion is complete, the new instance is spun up on AWS and is ready for use. The volumes associated with the launched instance represent the source server state at the time of launch. The conversion process involves changes to the drivers, network, and operating system license to ensure that the instance boots natively on AWS.
+ After the launch, the newly created volumes are no longer kept in sync with the source servers. The AWS Replication Agent continues to routinely replicate changes made to your source servers to the staging area volumes, but the launched instances do not reflect those changes.
+ When you start a new drill or recovery instance, the data is always reflected in the most recent state that has been replicated from the source server to the staging area subnet.
+ When the source server is marked as being prepared for recovery, you can start instances.

**Note**  
The process works both ways: for failover from a primary AWS Region to a DR Region, and to fail back to the primary site, when it has been recovered. You can prepare for failback by reversing the direction of data replication from the target machine back to the source machine in a fully orchestrated way.

The benefits of the process described in this pattern include:
+ Flexibility: Replication servers scale out and scale in based on dataset and replication time, so you can perform DR tests without disrupting source workloads or replication.
+ Reliability: The replication is robust, non-disruptive, and continuous.
+ Automation: This solution provides a unified, automated process for test, recovery, and failback.
+ Cost optimization: You can replicate only the needed volumes and pay for them, and pay for compute resources at the DR site only when those resources are activated. You can use a cost-optimized replication instance (we recommend that you use a compute-optimized instance type) for multiple sources or a single source with a large EBS volume.

**Automation and scale**

When you perform disaster recovery at scale, the JD Edwards EnterpriseOne servers will have dependencies on other servers in the environment. For example:
+ JD Edwards EnterpriseOne application servers that connect to a JD Edwards EnterpriseOne supported database on boot have dependencies on that database.
+ JD Edwards EnterpriseOne servers that require authentication and need to connect to a domain controller on boot to start services have dependencies on the domain controller.

For this reason, we recommend that you automate failover tasks. For example, you can use AWS Lambda or AWS Step Functions to automate the JD Edwards EnterpriseOne startup scripts and load balancer changes to automate the end-to-end failover process. For more information, see the blog post [Creating a scalable disaster recovery plan with AWS Elastic Disaster Recovery](https://aws.amazon.com/blogs/storage/creating-a-scalable-disaster-recovery-plan-with-aws-elastic-disaster-recovery/).
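
As one hedged illustration of that kind of automation, the following AWS CLI sketch starts JD Edwards EnterpriseOne services on a recovered instance through Systems Manager Run Command and registers the instance with a load balancer target group. The instance ID, script path, and target group ARN are placeholders; in practice, you would orchestrate commands like these from Lambda or Step Functions.

```
# Minimal sketch, assuming the recovery instance is managed by Systems Manager.
# All identifiers and paths are placeholders.
aws ssm send-command \
  --instance-ids i-0123456789abcdef0 \
  --document-name "AWS-RunShellScript" \
  --parameters 'commands=["/u01/jde/scripts/start_enterprise_server.sh"]' \
  --comment "Start JD Edwards EnterpriseOne services after failover"

# Register the recovered instance behind the DR load balancer.
aws elbv2 register-targets \
  --target-group-arn arn:aws:elasticloadbalancing:us-west-2:111122223333:targetgroup/jde-html/0123456789abcdef \
  --targets Id=i-0123456789abcdef0
```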

## Tools
<a name="set-up-disaster-recovery-for-oracle-jd-edwards-enterpriseone-with-aws-elastic-disaster-recovery-tools"></a>

**AWS services**
+ [Amazon Elastic Block Store (Amazon EBS)](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonEBS.html) provides block-level storage volumes for use with EC2 instances.
+ [Amazon Elastic Compute Cloud (Amazon EC2)](https://aws.amazon.com/products/compute/) provides scalable computing capacity in the AWS Cloud. You can launch as many virtual servers as you need and quickly scale them up or down.
+ [AWS Elastic Disaster Recovery](https://aws.amazon.com/disaster-recovery/) minimizes downtime and data loss with fast, reliable recovery of on-premises and cloud-based applications using affordable storage, minimal compute, and point-in-time recovery.
+ [Amazon Virtual Private Cloud (Amazon VPC)](https://aws.amazon.com/vpc/) gives you full control over your virtual networking environment, including resource placement, connectivity, and security.

## Best practices
<a name="set-up-disaster-recovery-for-oracle-jd-edwards-enterpriseone-with-aws-elastic-disaster-recovery-best-practices"></a>

**General best practices**
+ Have a written plan of what to do in the event of a real recovery event.
+ After you set up Elastic Disaster Recovery correctly, create an AWS CloudFormation template that can create the configuration on demand, should the need arise. Determine the order in which servers and applications should be launched, and record this in the recovery plan.
+ Perform a regular drill (standard Amazon EC2 rates apply).
+ Monitor the health of the ongoing replication by using the Elastic Disaster Recovery console or programmatically.
+ Protect the point-in-time snapshots and confirm before terminating the instances.
+ Create an IAM role for AWS Replication Agent installation.
+ Enable termination protection for recovery instances in a real DR scenario.
+ Do not use the **Disconnect from AWS** action in the Elastic Disaster Recovery console for servers that you launched recovery instances for, even in the case of a real recovery event. Performing a disconnect terminates all replication resources related to these source servers, including your point-in-time (PIT) recovery points.
+ Change the PIT policy to change the number of days for snapshot retention.
+ Edit the launch template in Elastic Disaster Recovery launch settings to set the correct subnet, security group, and instance type for your target server.
+ Automate the end-to-end failover process by using Lambda or Step Functions to automate JD Edwards EnterpriseOne startup scripts and load balancer changes.

**JD Edwards EnterpriseOne optimization and considerations**
+ Move **PrintQueue** into the database.
+ Move **MediaObjects** into the database.
+ Exclude the logs and temp folder from batch and logic servers.
+ Exclude the temp folder from Oracle WebLogic.
+ Create scripts for startup after the failover.
+ Exclude the tempdb for SQL Server.
+ Exclude the temp file for Oracle.

## Epics
<a name="set-up-disaster-recovery-for-oracle-jd-edwards-enterpriseone-with-aws-elastic-disaster-recovery-epics"></a>

### Perform initial tasks and configuration
<a name="perform-initial-tasks-and-configuration"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Set up the replication network.  | Implement your JD Edwards EnterpriseOne system in the primary AWS Region and identify the AWS Region for DR. Follow the steps in the [Replication network requirements](https://docs.aws.amazon.com/drs/latest/userguide/preparing-environments.html) section of the Elastic Disaster Recovery documentation to plan and set up your replication and DR network. | AWS administrator | 
| Determine RPO and RTO. | Identify the recovery time objective (RTO) and recovery point objective (RPO) for your application servers and database. | Cloud architect, DR architect | 
| Enable replication for Amazon EFS. | If applicable, enable replication from the AWS primary to DR Region for shared file systems such as Amazon Elastic File System (Amazon EFS) by using AWS DataSync, **rsync**, or another appropriate tool. | Cloud administrator | 
| Manage DNS in case of DR. | Identify the process to update the Domain Name System (DNS) during the DR drill or actual DR. | Cloud administrator | 
| Create an IAM role for setup. | Follow the instructions in the [Elastic Disaster Recovery initialization and permissions](https://docs.aws.amazon.com/drs/latest/userguide/getting-started-initializing.html) section of the Elastic Disaster Recovery documentation to create an IAM role to initialize and manage the AWS service. | Cloud administrator | 
| Set up VPC peering. | Make sure that the source and target VPCs are peered and accessible to each other. For configuration instructions, see the [Amazon VPC documentation](https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html). (A CLI sketch follows this table.) | AWS administrator | 
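
The following AWS CLI sketch shows one way to establish the peering between the primary-Region and DR-Region VPCs. The VPC IDs, Regions, and peering connection ID are placeholders, and you must also add routes for the peer CIDR ranges to the route tables of both VPCs.

```
# Minimal sketch: request peering from the primary Region and accept it in the DR Region.
# All IDs and Regions are placeholders.
aws ec2 create-vpc-peering-connection \
  --region us-east-1 \
  --vpc-id vpc-0aaaa1111bbbb2222 \
  --peer-vpc-id vpc-0cccc3333dddd4444 \
  --peer-region us-west-2

aws ec2 accept-vpc-peering-connection \
  --region us-west-2 \
  --vpc-peering-connection-id pcx-0123456789abcdef0
```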

### Configure Elastic Disaster Recovery replication settings
<a name="configure-elastic-disaster-recovery-replication-settings"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Initialize Elastic Disaster Recovery. | Open the [Elastic Disaster Recovery console](https://console.aws.amazon.com/drs/home), choose the target AWS Region (where you will replicate data and launch recovery instances), and then choose **Set default replication settings**. | AWS administrator | 
| Set up replication servers. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-disaster-recovery-for-oracle-jd-edwards-enterpriseone-with-aws-elastic-disaster-recovery.html) | AWS administrator | 
| Configure volumes and security groups. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-disaster-recovery-for-oracle-jd-edwards-enterpriseone-with-aws-elastic-disaster-recovery.html) | AWS administrator | 
| Configure additional settings. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-disaster-recovery-for-oracle-jd-edwards-enterpriseone-with-aws-elastic-disaster-recovery.html) | AWS administrator | 

### Install the AWS Replication Agent
<a name="install-the-aws-replication-agent"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an IAM role. | Create an IAM role that contains the `AWSElasticDisasterRecoveryAgentInstallationPolicy` policy. In the **Select AWS access type** section, enable programmatic access. Note the access key ID and secret access key. You will need this information during the installation of the AWS Replication Agent. | AWS administrator | 
| Check requirements. | Check and complete the [prerequisites](https://docs.aws.amazon.com/drs/latest/userguide/installation-requiremets.html) in the Elastic Disaster Recovery documentation for installing the AWS Replication Agent. | AWS administrator | 
| Install the AWS Replication Agent. | Follow the [installation instructions](https://docs.aws.amazon.com/drs/latest/userguide/agent-installation-instructions.html) for your operating system and install the AWS Replication Agent.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-disaster-recovery-for-oracle-jd-edwards-enterpriseone-with-aws-elastic-disaster-recovery.html) Repeat these steps for the remaining servers. | AWS administrator | 
| Monitor the replication. | Return to the Elastic Disaster Recovery **Source servers** pane to monitor the replication status. The initial sync will take some time depending on the size of the data transfer. When the source server is fully synced, the server status will be updated to **Ready**. This means that a replication server has been created in the staging area, and the EBS volumes have been replicated from the source server to the staging area. (A CLI sketch follows this table.) | AWS administrator | 
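
If you want to monitor replication outside the console, the following AWS CLI sketch lists the registered source servers and their replication state in the target Region. The Region is a placeholder, and the query fields are assumptions based on the shape of the Elastic Disaster Recovery API response.

```
# Minimal sketch: list source servers and their replication state in the DR Region.
# The Region is a placeholder; field names follow the Elastic Disaster Recovery API.
aws drs describe-source-servers \
  --region us-west-2 \
  --query 'items[].{ID:sourceServerID,State:dataReplicationInfo.dataReplicationState}' \
  --output table
```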

### Configure launch settings
<a name="configure-launch-settings"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Edit launch settings. | To update the launch settings for the drill and recovery instances, on the [Elastic Disaster Recovery console](https://console.aws.amazon.com/drs/home), select the source server, and then choose **Actions**, **Edit launch settings**. Or you can choose your replicating source machines from the **Source servers** page, and then choose the **Launch Settings** tab. This tab has two sections: **General launch settings** and **EC2 launch template**. | AWS administrator | 
| Configure general launch settings. | Revise the general launch settings according to your requirements.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-disaster-recovery-for-oracle-jd-edwards-enterpriseone-with-aws-elastic-disaster-recovery.html)For more information, see [General launch settings](https://docs.aws.amazon.com/drs/latest/userguide/launch-general-settings.html) in the Elastic Disaster Recovery documentation. | AWS administrator | 
| Configure the Amazon EC2 launch template. | Elastic Disaster Recovery uses Amazon EC2 launch templates to launch drill and recovery instances for each source server. The launch template is created automatically for each source server that you add to Elastic Disaster Recovery after you install the AWS Replication Agent. You must set the Amazon EC2 launch template as the default launch template if you want to use it with Elastic Disaster Recovery. For more information, see [EC2 Launch Template](https://docs.aws.amazon.com/drs/latest/userguide/ec2-launch.html) in the Elastic Disaster Recovery documentation. | AWS administrator | 

### Initiate DR drill and failover
<a name="initiate-dr-drill-and-failover"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Initiate a drill. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-disaster-recovery-for-oracle-jd-edwards-enterpriseone-with-aws-elastic-disaster-recovery.html)For more information, see [Preparing for failover](https://docs.aws.amazon.com/drs/latest/userguide/failback-preparing.html) in the Elastic Disaster Recovery documentation. (A CLI sketch follows this table.) | AWS administrator | 
| Validate the drill. | In the previous step, you launched new target instances in the DR Region. The target instances are replicas of the source servers based on the snapshot taken when you initiated the launch. In this procedure, you connect to your Amazon EC2 target machines to confirm that they're running as expected.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-disaster-recovery-for-oracle-jd-edwards-enterpriseone-with-aws-elastic-disaster-recovery.html) |  | 
| Initiate a failover. | A failover is the redirection of traffic from a primary system to a secondary system. Elastic Disaster Recovery helps you perform a failover by launching recovery instances on AWS. When the recovery instances have been launched, you redirect the traffic from your primary systems to these instances.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-disaster-recovery-for-oracle-jd-edwards-enterpriseone-with-aws-elastic-disaster-recovery.html)For more information, see [Performing a failover](https://docs.aws.amazon.com/drs/latest/userguide/failback-preparing-failover.html) in the Elastic Disaster Recovery documentation. | AWS administrator | 
| Initiate a failback. | The process for initiating a failback is similar to the process for initiating failover.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-disaster-recovery-for-oracle-jd-edwards-enterpriseone-with-aws-elastic-disaster-recovery.html)For more information, see [Performing a failback](https://docs.aws.amazon.com/drs/latest/userguide/failback-performing-main.html) in the Elastic Disaster Recovery documentation. | AWS administrator | 
| Start JD Edwards EnterpriseOne components. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-disaster-recovery-for-oracle-jd-edwards-enterpriseone-with-aws-elastic-disaster-recovery.html)You will need to incorporate the changes in Route 53 and the Application Load Balancer for the JD Edwards EnterpriseOne link to work. You can automate these steps by using Lambda, Step Functions, and Systems Manager (Run Command). Elastic Disaster Recovery performs block-level replication of the source EC2 instance EBS volumes that host the operating system and file systems. Shared file systems that were created by using Amazon EFS aren’t part of this replication. You can replicate shared file systems to the DR Region by using AWS DataSync, as noted in the first epic, and then mount these replicated file systems in the DR system. | JD Edwards EnterpriseOne CNC | 
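
As a hedged sketch of the drill and failover steps in this epic, the following AWS CLI commands start a drill and then an actual recovery for a replicating source server. The source server ID and Region are placeholders, and the command launches from the most recent point in time unless you specify a snapshot.

```
# Minimal sketch: start a drill, then an actual recovery, for one source server.
# The Region and source server ID are placeholders.
aws drs start-recovery \
  --region us-west-2 \
  --is-drill \
  --source-servers sourceServerID=s-0123456789abcdef0

# For an actual failover, run the same command without the drill flag.
aws drs start-recovery \
  --region us-west-2 \
  --no-is-drill \
  --source-servers sourceServerID=s-0123456789abcdef0
```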

## Troubleshooting
<a name="set-up-disaster-recovery-for-oracle-jd-edwards-enterpriseone-with-aws-elastic-disaster-recovery-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| Source server data replication status is **Stalled** and replication lags. If you check details, the data replication status displays **Agent not seen**. | Check to confirm that the stalled source server is running. If the source server goes down, the replication server is automatically terminated. For more information about lag issues, see [Replication lag issues](https://docs.aws.amazon.com/drs/latest/userguide/Other-Troubleshooting-Topics.html#Replication-Lag-Issues) in the Elastic Disaster Recovery documentation. | 
| Installation of AWS Replication Agent in source EC2 instance fails in RHEL 8.2 after scanning the disks. `aws_replication_agent_installer.log` reveals that kernel headers are missing. | Before you install the AWS Replication Agent on RHEL 8, CentOS 8, or Oracle Linux 8, run:<pre>sudo yum install elfutils-libelf-devel</pre>For more information, see [Linux installation requirements](https://docs.aws.amazon.com/mgn/latest/ug/installation-requirements.html#linux-requirements) in the Elastic Disaster Recovery documentation. | 
| On the Elastic Disaster Recovery console, you see the source server as **Ready** with a lag and data replication status as **Stalled**. Depending on how long the AWS Replication Agent has been unavailable, the status might indicate high lag, but the issue remains the same. | Use an operating system command to confirm that the AWS Replication Agent is running in the source EC2 instance, or confirm that the instance is running. After you correct any issues, Elastic Disaster Recovery will restart scanning. Wait until all data has been synced and the replication status is **Healthy** before you start a DR drill. | 
| Initial replication with high lag. On the Elastic Disaster Recovery console, you can see that the initial sync status is extremely slow for a source server. | Check for the replication lag issues documented in the [Replication lag issues](https://docs.aws.amazon.com/drs/latest/userguide/Other-Troubleshooting-Topics.html#Replication-Lag-Issues) section of the Elastic Disaster Recovery documentation. The replication server might be unable to handle the load because of intrinsic compute operations. In that case, try upgrading the instance type after consulting with the [AWS Technical Support team](https://support.console.aws.amazon.com/support/). | 

## Related resources
<a name="set-up-disaster-recovery-for-oracle-jd-edwards-enterpriseone-with-aws-elastic-disaster-recovery-resources"></a>
+ [AWS Elastic Disaster Recovery User Guide](https://docs.aws.amazon.com/drs/latest/userguide/what-is-drs.html)
+ [Creating a scalable disaster recovery plan with AWS Elastic Disaster Recovery](https://aws.amazon.com/blogs/storage/creating-a-scalable-disaster-recovery-plan-with-aws-elastic-disaster-recovery/) (AWS blog post)
+ [AWS Elastic Disaster Recovery - A Technical Introduction](https://explore.skillbuilder.aws/learn/course/internal/view/elearning/11123/aws-elastic-disaster-recovery-a-technical-introduction) (AWS Skill Builder course; requires login)
+ [AWS Elastic Disaster Recovery quick start guide](https://docs.aws.amazon.com/drs/latest/userguide/quick-start-guide-gs.html)

# Set up CloudFormation drift detection in a multi-Region, multi-account organization
<a name="set-up-aws-cloudformation-drift-detection-in-a-multi-region-multi-account-organization"></a>

*Ram Kandaswamy, Amazon Web Services*

## Summary
<a name="set-up-aws-cloudformation-drift-detection-in-a-multi-region-multi-account-organization-summary"></a>

Amazon Web Services (AWS) users often look for an efficient way to detect resource configuration mismatches, including drift in AWS CloudFormation stacks, and fix them as soon as possible. This is especially the case when AWS Control Tower is used.

This pattern provides a prescriptive solution that efficiently solves the problem by using consolidated resource configuration changes and acting on those changes to generate results. The solution is designed for scenarios where there are several CloudFormation stacks created in more than one AWS Region, or in more than one account, or a combination of both. The goals of the solution are the following:
+ Simplify the drift detection process
+ Set up notification and alerting
+ Set up consolidated reporting

## Prerequisites and limitations
<a name="set-up-aws-cloudformation-drift-detection-in-a-multi-region-multi-account-organization-prereqs"></a>

**Prerequisites**
+ AWS Config enabled in all the Regions and accounts that must be monitored

**Limitations**
+ The report generated supports only the comma-separated values (CSV) and JSON output formats.

## Architecture
<a name="set-up-aws-cloudformation-drift-detection-in-a-multi-region-multi-account-organization-architecture"></a>

The following diagram shows AWS Organizations set up with multiple accounts. AWS Config rules communicate between the accounts.  

![\[Five-step process for monitoring stacks in two AWS Organizations accounts.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/735d0987-b953-47f8-a9bc-b02a88957ee5/images/340cee9a-5a4e-49ea-bd73-d37dcea5e098.png)


 The workflow includes the following steps:

1. The AWS Config rule detects drift.

1. Drift detection results that are found in other accounts are sent to the management account.

1. The Amazon CloudWatch rule calls an AWS Lambda function.

1. The Lambda function queries the AWS Config aggregator for aggregated results.

1. The Lambda function notifies Amazon Simple Notification Service (Amazon SNS), which sends email notification of the drift.

**Automation and scale**

The solution presented here can scale for both additional Regions and accounts.
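
Steps 4 and 5 of the workflow are typically implemented as a small Lambda function that runs the aggregator's advanced query and publishes any drifted stacks to Amazon SNS. The following is a minimal sketch, not the pattern's attached implementation; the aggregator name and SNS topic ARN are illustrative assumptions, and the query is the one shown in the epic later in this pattern.

```
import boto3

# Illustrative names only; replace with the aggregator and topic that you create.
AGGREGATOR_NAME = "organization-aggregator"
SNS_TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:stack-drift-notifications"

# Same advanced query that the pattern adds to the aggregator.
QUERY = (
    "SELECT resourceId, configuration.driftInformation.stackDriftStatus "
    "WHERE resourceType = 'AWS::CloudFormation::Stack' "
    "AND configuration.driftInformation.stackDriftStatus IN ('DRIFTED')"
)


def lambda_handler(event, context):
    config = boto3.client("config")
    sns = boto3.client("sns")

    # Run the advanced query against the multi-account, multi-Region aggregator.
    drifted, next_token = [], None
    while True:
        kwargs = {"Expression": QUERY, "ConfigurationAggregatorName": AGGREGATOR_NAME}
        if next_token:
            kwargs["NextToken"] = next_token
        response = config.select_aggregate_resource_config(**kwargs)
        drifted.extend(response.get("Results", []))  # each result is a JSON string
        next_token = response.get("NextToken")
        if not next_token:
            break

    # Notify subscribers only when drifted stacks are found.
    if drifted:
        sns.publish(
            TopicArn=SNS_TOPIC_ARN,
            Subject="CloudFormation stack drift detected",
            Message="\n".join(drifted),
        )
    return {"driftedStackCount": len(drifted)}
```

A schedule-based CloudWatch rule, as described in the epic, can invoke this function at the interval that you choose.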

## Tools
<a name="set-up-aws-cloudformation-drift-detection-in-a-multi-region-multi-account-organization-tools"></a>

**AWS services**
+ [AWS Config](https://docs.aws.amazon.com/config/latest/developerguide/WhatIsConfig.html) provides a detailed view of the configuration of AWS resources in your AWS account. This includes how the resources are related to one another and how they were configured in the past so that you can see how the configurations and relationships change over time.
+ [Amazon CloudWatch](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/WhatIsCloudWatch.html) helps you monitor the metrics of your AWS resources and the applications you run on AWS in real time.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a compute service that helps you run code without needing to provision or manage servers. It runs your code only when needed and scales automatically, so you pay only for the compute time that you use.
+ [Amazon Simple Notification Service (Amazon SNS)](https://docs.aws.amazon.com/sns/latest/dg/welcome.html) helps you coordinate and manage the exchange of messages between publishers and clients, including web servers and email addresses.

## Epics
<a name="set-up-aws-cloudformation-drift-detection-in-a-multi-region-multi-account-organization-epics"></a>

### Automate drift detection for CloudFormation
<a name="automate-drift-detection-for-cfn"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the aggregator. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-aws-cloudformation-drift-detection-in-a-multi-region-multi-account-organization.html) A boto3 sketch of creating an organization-wide aggregator follows this table. | Cloud architect | 
| Create an AWS managed rule. | Add the `cloudformation-stack-drift-detection-check` AWS managed rule. The rule needs one parameter value: `cloudformationArn`. Enter the IAM role Amazon Resource Name (ARN) that has permissions to detect stack drift. The role must have a trust policy that enables AWS Config to assume the role. | Cloud architect | 
| Create the advanced query section of the aggregator. | To fetch drifted stacks from multiple sources, create the following query:<pre>SELECT resourceId, configuration.driftInformation.stackDriftStatus WHERE resourceType = 'AWS::CloudFormation::Stack'  AND configuration.driftInformation.stackDriftStatus IN ('DRIFTED')</pre> | Cloud architect, Developer | 
| Automate running the query and publish. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-aws-cloudformation-drift-detection-in-a-multi-region-multi-account-organization.html) | Cloud architect, Developer | 
| Create a CloudWatch rule. | Create a schedule-based CloudWatch rule to call the Lambda function, which is responsible for alerting. | Cloud architect | 
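
The aggregator in the first task can be created in the console or programmatically. The following is a minimal boto3 sketch, assuming an AWS Organizations-based aggregation source; the aggregator name and IAM role ARN are illustrative assumptions, and the role must allow AWS Config to read your organization data.

```
import boto3

config = boto3.client("config")

# Illustrative names; replace with your own aggregator name and IAM role ARN.
response = config.put_configuration_aggregator(
    ConfigurationAggregatorName="organization-aggregator",
    OrganizationAggregationSource={
        "RoleArn": "arn:aws:iam::111122223333:role/AWSConfigAggregatorRole",
        "AllAwsRegions": True,  # or pass AwsRegions=[...] to limit the Regions
    },
)
print(response["ConfigurationAggregator"]["ConfigurationAggregatorArn"])
```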

## Related resources
<a name="set-up-aws-cloudformation-drift-detection-in-a-multi-region-multi-account-organization-resources"></a>

**Resources**
+ [What Is AWS Config?](https://docs.aws.amazon.com/config/latest/developerguide/WhatIsConfig.html)
+ [Multi-account multi-Region data aggregation](https://docs.aws.amazon.com/config/latest/developerguide/aggregate-data.html)
+ [Detecting unmanaged configuration changes to stacks and resources](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-stack-drift.html)
+ [IAM: Pass an IAM role to a specific AWS service](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_examples_iam-passrole-service.html)
+ [What is Amazon SNS?](https://docs.aws.amazon.com/sns/latest/dg/welcome.html)

## Additional information
<a name="set-up-aws-cloudformation-drift-detection-in-a-multi-region-multi-account-organization-additional"></a>

**Considerations**

We recommend using the solution presented in this pattern instead of custom solutions that call APIs at specific intervals to initiate drift detection on each CloudFormation stack or stack set. Such custom solutions can generate a large number of API calls, which affects performance and can cause throttling. Another potential issue is a delay in detection, because resource changes are identified only on a schedule.

Because stack sets are made up of stacks, you can also use this solution for stack sets. Stack instance details are available as part of the solution.

## Attachments
<a name="attachments-735d0987-b953-47f8-a9bc-b02a88957ee5"></a>

To access additional content that is associated with this document, unzip the following file: [attachment.zip](samples/p-attach/735d0987-b953-47f8-a9bc-b02a88957ee5/attachments/attachment.zip)

# Successfully import an S3 bucket as an AWS CloudFormation stack
<a name="successfully-import-an-s3-bucket-as-an-aws-cloudformation-stack"></a>

*Ram Kandaswamy, Amazon Web Services*

## Summary
<a name="successfully-import-an-s3-bucket-as-an-aws-cloudformation-stack-summary"></a>

If you use Amazon Web Services (AWS) resources, such as Amazon Simple Storage Service (Amazon S3) buckets, and want to use an infrastructure as code (IaC) approach, then you can import your resources into AWS CloudFormation and manage them as a stack.

This pattern provides steps to successfully import an S3 bucket as an AWS CloudFormation stack. By using this pattern's approach, you can avoid possible errors that might occur if you import your S3 bucket in a single action.
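
Under the hood, the import is performed with a CloudFormation change set of type `IMPORT`. As a hedged illustration of the first import step (the epics that follow use the console), a minimal boto3 sketch might look like the following; the stack name, bucket name, KMS key ID, and template file name are assumptions, not values defined by this pattern.

```
import boto3

cfn = boto3.client("cloudformation")

# Assumed values for illustration; replace with your own stack, bucket, key, and template.
stack_name = "imported-s3-bucket-stack"
bucket_name = "my-existing-bucket"
kms_key_id = "1234abcd-12ab-34cd-56ef-1234567890ab"
with open("CloudFormation-template-S3-bucket.yaml") as f:
    template_body = f.read()

# Create an IMPORT change set that maps the existing resources to the template's logical IDs.
cfn.create_change_set(
    StackName=stack_name,
    ChangeSetName="import-s3-bucket",
    ChangeSetType="IMPORT",
    TemplateBody=template_body,
    Parameters=[{"ParameterKey": "bucketName", "ParameterValue": bucket_name}],
    ResourcesToImport=[
        {
            "ResourceType": "AWS::S3::Bucket",
            "LogicalResourceId": "S3Bucket",
            "ResourceIdentifier": {"BucketName": bucket_name},
        },
        {
            "ResourceType": "AWS::KMS::Key",
            "LogicalResourceId": "KMSS3Encryption",
            "ResourceIdentifier": {"KeyId": kms_key_id},
        },
    ],
)

# Wait for the change set to be created, then run it to import the resources.
cfn.get_waiter("change_set_create_complete").wait(
    StackName=stack_name, ChangeSetName="import-s3-bucket"
)
cfn.execute_change_set(StackName=stack_name, ChangeSetName="import-s3-bucket")
```

Note that every resource in the template must either already exist in the stack or appear in `ResourcesToImport`, and each imported resource needs a `DeletionPolicy` in the template, which the sample template sets to `Retain`.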

## Prerequisites and limitations
<a name="successfully-import-an-s3-bucket-as-an-aws-cloudformation-stack-prereqs"></a>

**Prerequisites**
+ An active AWS account.
+ An existing S3 bucket and S3 bucket policy. For more information about this, see [What S3 bucket policy should I use to comply with the AWS Config rule s3-bucket-ssl-requests-only](https://aws.amazon.com/premiumsupport/knowledge-center/s3-bucket-policy-for-config-rule/) in the AWS Knowledge Center.
+ An existing AWS Key Management Service (AWS KMS) key and its alias. For more information about this, see [Working with aliases](https://docs.aws.amazon.com/kms/latest/developerguide/programming-aliases.html) in the AWS KMS documentation.
+ The sample `CloudFormation-template-S3-bucket` AWS CloudFormation template (attached), downloaded to your local computer.

## Architecture
<a name="successfully-import-an-s3-bucket-as-an-aws-cloudformation-stack-architecture"></a>

![\[Workflow to use CloudFormation template to create a CloudFormation stack to import an S3 bucket.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/aea7f6fe-8e67-46c4-8b90-1ab06b879111/images/ee143374-a0a4-42d9-b7ca-16593a597a84.png)


 

The diagram shows the following workflow:

1. The user creates a JSON or YAML-formatted AWS CloudFormation template.

1. The template creates an AWS CloudFormation stack to import the S3 bucket.

1. The AWS CloudFormation stack manages the S3 bucket that you specified in the template.

**Technology stack**
+ AWS CloudFormation
+ AWS Identity and Access Management (IAM)
+ AWS KMS
+ Amazon S3

 

**Tools**
+ [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) – AWS CloudFormation helps you to create and provision AWS infrastructure deployments predictably and repeatedly.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) – IAM is a web service for securely controlling access to AWS services.
+ [AWS KMS](https://docs.aws.amazon.com/kms/latest/developerguide/overview.html) – AWS Key Management Service (AWS KMS) is an encryption and key management service scaled for the cloud.
+ [Amazon S3](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) – Amazon Simple Storage Service (Amazon S3) is storage for the Internet.

## Epics
<a name="successfully-import-an-s3-bucket-as-an-aws-cloudformation-stack-epics"></a>

### Import an S3 bucket with AWS KMS key-based encryption as an AWS CloudFormation stack
<a name="import-an-s3-bucket-with-kms-key-long--based-encryption-as-an-aws-cloudformation-stack"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a template to import the S3 bucket and KMS key. | On your local computer, create a template to import your S3 bucket and KMS key by using the following sample template:<pre>AWSTemplateFormatVersion: 2010-09-09<br /><br />Parameters:<br /><br />  bucketName:<br /><br />    Type: String<br /><br />Resources:<br /><br />  S3Bucket:<br /><br />    Type: 'AWS::S3::Bucket'<br /><br />    DeletionPolicy: Retain<br /><br />    Properties:<br /><br />      BucketName: !Ref bucketName<br /><br />      BucketEncryption:<br /><br />        ServerSideEncryptionConfiguration:<br /><br />          - ServerSideEncryptionByDefault:<br /><br />              SSEAlgorithm: 'aws:kms'<br /><br />              KMSMasterKeyID: !GetAtt <br /><br />                - KMSS3Encryption<br /><br />                - Arn<br /><br />  KMSS3Encryption:<br /><br />    Type: 'AWS::KMS::Key'<br /><br />    DeletionPolicy: Retain<br /><br />    Properties:<br /><br />      Enabled: true<br /><br />      KeyPolicy: !Sub |-<br /><br />        {<br /><br />            "Id": "key-consolepolicy-3",<br /><br />            "Version": "2012-10-17",<br /><br />            "Statement": [<br /><br />                {<br /><br />                    "Sid": "Enable IAM User Permissions",<br /><br />                    "Effect": "Allow",<br /><br />                    "Principal": {<br /><br />                        "AWS": ["arn:aws:iam::${AWS::AccountId}:root"]<br /><br />                    },<br /><br />                    "Action": "kms:*",<br /><br />                    "Resource": "*"<br /><br />                }<br /><br />            ]<br /><br />        }<br /><br />      EnableKeyRotation: true</pre> | AWS DevOps | 
| Create the stack. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/successfully-import-an-s3-bucket-as-an-aws-cloudformation-stack.html) | AWS DevOps | 
| Create the KMS key alias. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/successfully-import-an-s3-bucket-as-an-aws-cloudformation-stack.html)<pre>KMSS3EncryptionAlias:<br /><br />    Type: 'AWS::KMS::Alias'<br /><br />    DeletionPolicy: Retain<br /><br />    Properties:<br /><br />      AliasName: alias/S3BucketKey<br /><br />      TargetKeyId: !Ref KMSS3Encryption</pre>For more information about this, see [AWS CloudFormation stack updates](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks.html) in the AWS CloudFormation documentation.  | AWS DevOps | 
| Update the stack to include the S3 bucket policy. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/successfully-import-an-s3-bucket-as-an-aws-cloudformation-stack.html)<pre>S3BucketPolicy:<br /><br />  Type: 'AWS::S3::BucketPolicy'<br /><br />  Properties:<br /><br />    Bucket: !Ref S3Bucket<br /><br />    PolicyDocument: !Sub |-<br /><br />      {<br /><br />                  "Version": "2008-10-17",<br /><br />                  "Id": "restricthttp",<br /><br />                  "Statement": [<br /><br />                      {<br /><br />                          "Sid": "denyhttp",<br /><br />                          "Effect": "Deny",<br /><br />                          "Principal": {<br /><br />                              "AWS": "*"<br /><br />                          },<br /><br />                          "Action": "s3:*",<br /><br />                          "Resource": ["arn:aws:s3:::${S3Bucket}","arn:aws:s3:::${S3Bucket}/*"],<br /><br />                          "Condition": {<br /><br />                              "Bool": {<br /><br />                                  "aws:SecureTransport": "false"<br /><br />                              }<br /><br />                          }<br /><br />                      }<br /><br />                  ]<br /><br />              }</pre>This S3 bucket policy includes a deny statement that blocks requests that don't use HTTPS (`aws:SecureTransport`).  | AWS DevOps | 
| Update the key policy. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/successfully-import-an-s3-bucket-as-an-aws-cloudformation-stack.html)For more information, see [Key policies in AWS KMS](https://docs.aws.amazon.com/kms/latest/developerguide/key-policies.html) in the AWS KMS documentation. | AWS administrator | 
| Add resource-level tags. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/successfully-import-an-s3-bucket-as-an-aws-cloudformation-stack.html)<pre>Tags:<br /><br />  - Key: createdBy<br /><br />    Value: Cloudformation</pre> | AWS DevOps | 

## Related resources
<a name="successfully-import-an-s3-bucket-as-an-aws-cloudformation-stack-resources"></a>
+ [Bringing existing resources into AWS CloudFormation management ](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/resource-import.html)
+ [AWS re:Invent 2017: Deep dive on AWS CloudFormation](https://www.youtube.com/watch?v=01hy48R9Kr8) (video)

## Attachments
<a name="attachments-aea7f6fe-8e67-46c4-8b90-1ab06b879111"></a>

To access additional content that is associated with this document, unzip the following file: [attachment.zip](samples/p-attach/aea7f6fe-8e67-46c4-8b90-1ab06b879111/attachments/attachment.zip)

# Synchronize data between Amazon EFS file systems in different AWS Regions by using AWS DataSync
<a name="synchronize-data-between-amazon-efs-file-systems-in-different-aws-regions-by-using-aws-datasync"></a>

*Sarat Chandra Pothula and Aditya Ambati, Amazon Web Services*

## Summary
<a name="synchronize-data-between-amazon-efs-file-systems-in-different-aws-regions-by-using-aws-datasync-summary"></a>

This solution provides a robust framework for efficient and secure data synchronization between Amazon Elastic File System (Amazon EFS) instances in different AWS Regions. This approach is scalable and provides controlled, cross-Region data replication. This solution can enhance your disaster recovery and data redundancy strategies.

By using the AWS Cloud Development Kit (AWS CDK), this pattern uses an infrastructure as code (IaC) approach to deploy the solution resources. The AWS CDK application deploys the essential AWS DataSync, Amazon EFS, Amazon Virtual Private Cloud (Amazon VPC), and Amazon Elastic Compute Cloud (Amazon EC2) resources. This IaC approach provides a repeatable and version-controlled deployment process that is fully aligned with AWS best practices.
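
The sample repository is written in TypeScript. The following is a minimal Python CDK sketch of the same launcher idea, only to illustrate how one AWS CDK app deploys stacks into a primary and a secondary Region; the account ID, Region codes, and the empty `VpcStack` class are placeholders, not the pattern's actual code.

```
import aws_cdk as cdk
from constructs import Construct


class VpcStack(cdk.Stack):
    """Placeholder stack; the real pattern defines VPC, EFS, EC2, and DataSync stacks."""

    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        # VPC, subnets, an internet gateway, and a NAT gateway would be defined here.


app = cdk.App()
account = "111122223333"  # replace with your AWS account ID

primary = cdk.Environment(account=account, region="us-east-1")
secondary = cdk.Environment(account=account, region="us-west-2")

# One stack instance per Region.
VpcStack(app, "VpcStackPrimary", env=primary)
VpcStack(app, "VpcStackSecondary", env=secondary)

app.synth()
```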

## Prerequisites and limitations
<a name="synchronize-data-between-amazon-efs-file-systems-in-different-aws-regions-by-using-aws-datasync-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ AWS Command Line Interface (AWS CLI) version 2.9.11 or later, [installed](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html) and [configured](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html)
+ AWS CDK version 2.114.1 or later, [installed](https://docs.aws.amazon.com/cdk/v2/guide/getting_started.html#getting_started_install) and [bootstrapped](https://docs.aws.amazon.com/cdk/v2/guide/getting_started.html#getting_started_bootstrap)
+ Node.js version 20.8.0 or later, [installed](https://nodejs.org/en/download)

**Limitations**
+ The solution inherits limitations from DataSync and Amazon EFS, such as data transfer rates, size limitations, and regional availability. For more information, see [AWS DataSync quotas](https://docs.aws.amazon.com/datasync/latest/userguide/datasync-limits.html) and [Amazon EFS quotas](https://docs.aws.amazon.com/efs/latest/ug/limits.html).
+ This solution supports Amazon EFS only. DataSync supports [other AWS services](https://docs.aws.amazon.com/datasync/latest/userguide/working-with-locations.html), such as Amazon Simple Storage Service (Amazon S3) and Amazon FSx for Lustre. However, this solution requires modification to synchronize data with these other services.

## Architecture
<a name="synchronize-data-between-amazon-efs-file-systems-in-different-aws-regions-by-using-aws-datasync-architecture"></a>

![\[Architecture diagram for replicating data to an EFS file system in a different Region\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/e28ba6c2-ab8b-4812-932e-f038106d5496/images/18b35ae9-a22e-43e7-b7a3-30e40321c44e.png)


This solution deploys the following AWS CDK stacks:
+ **Amazon VPC stack** – This stack sets up virtual private cloud (VPC) resources, including subnets, an internet gateway, and a NAT gateway in both the primary and secondary AWS Regions.
+ **Amazon EFS stack** – This stack deploys Amazon EFS file systems into the primary and secondary Regions and connects them to their respective VPCs.
+ **Amazon EC2 stack** – This stack launches EC2 instances in the primary and secondary Regions. These instances are configured to mount the Amazon EFS file system, which allows them to access the shared storage.
+ **DataSync location stack** – This stack uses a custom construct called `DataSyncLocationConstruct` to create DataSync location resources in the primary and secondary Regions. These resources define endpoints for data synchronization.
+ **DataSync task stack** – This stack uses a custom construct called `DataSyncTaskConstruct` to create a DataSync task in the primary Region. This task is configured to synchronize data between the primary and secondary Regions by using the DataSync source and destination locations.

## Tools
<a name="synchronize-data-between-amazon-efs-file-systems-in-different-aws-regions-by-using-aws-datasync-tools"></a>

**AWS services**
+ [AWS Cloud Development Kit (AWS CDK)](https://docs.aws.amazon.com/cdk/latest/guide/home.html) is a software development framework that helps you define and provision AWS Cloud infrastructure in code.
+ [AWS DataSync](https://docs.aws.amazon.com/datasync/latest/userguide/what-is-datasync.html) is an online data transfer and discovery service that helps you move files or object data to, from, and between AWS storage services.
+ [Amazon Elastic Compute Cloud (Amazon EC2)](https://docs.aws.amazon.com/ec2/) provides scalable computing capacity in the AWS Cloud. You can launch as many virtual servers as you need and quickly scale them up or down.
+ [Amazon Elastic File System (Amazon EFS)](https://docs.aws.amazon.com/efs/latest/ug/whatisefs.html) helps you create and configure shared file systems in the AWS Cloud.
+ [Amazon Virtual Private Cloud (Amazon VPC)](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html) helps you launch AWS resources into a virtual network that you’ve defined. This virtual network resembles a traditional network that you’d operate in your own data center, with the benefits of using the scalable infrastructure of AWS.

**Code repository**

The code for this pattern is available in the GitHub [Amazon EFS Cross-Region DataSync Project](https://github.com/aws-samples/aws-efs-crossregion-datasync/tree/main) repository.

## Best practices
<a name="synchronize-data-between-amazon-efs-file-systems-in-different-aws-regions-by-using-aws-datasync-best-practices"></a>

Follow the best practices described in [Best practices for using the AWS CDK in TypeScript to create IaC projects](https://docs.aws.amazon.com/prescriptive-guidance/latest/best-practices-cdk-typescript-iac/introduction.html).

## Epics
<a name="synchronize-data-between-amazon-efs-file-systems-in-different-aws-regions-by-using-aws-datasync-epics"></a>

### Deploy the AWS CDK app
<a name="deploy-the-aws-cdk-app"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clone the project repository. | Enter the following command to clone the [Amazon EFS Cross-Region DataSync Project](https://github.com/aws-samples/aws-efs-crossregion-datasync/tree/main) repository.<pre>git clone https://github.com/aws-samples/aws-efs-crossregion-datasync.git</pre> | AWS DevOps | 
| Install the npm dependencies. | Enter the following command.<pre>npm ci</pre> | AWS DevOps | 
| Choose the primary and secondary Regions. | In the cloned repository, navigate to the `src/infa` directory. In the `Launcher.ts` file, update the `PRIMARY_AWS_REGION` and `SECONDARY_AWS_REGION` values. Use the corresponding [Region codes](https://docs.aws.amazon.com/general/latest/gr/datasync.html#datasync-region).<pre>const primaryRegion = { account: account, region: '<PRIMARY_AWS_REGION>' };<br />const secondaryRegion = { account: account, region: '<SECONDARY_AWS_REGION>' };</pre> | AWS DevOps | 
| Bootstrap the environment. | Enter the following command to bootstrap the AWS account and AWS Region that you want to use.<pre>cdk bootstrap <aws_account>/<aws_region></pre>For more information, see [Bootstrapping](https://docs.aws.amazon.com/cdk/v2/guide/bootstrapping.html) in the AWS CDK documentation. | AWS DevOps | 
| List the AWS CDK stacks. | Enter the following command to view a list of the AWS CDK stacks in the app.<pre>cdk ls</pre> | AWS DevOps | 
| Synthesize the AWS CDK stacks. | Enter the following command to produce an AWS CloudFormation template for each stack defined in the AWS CDK app.<pre>cdk synth</pre> | AWS DevOps | 
| Deploy the AWS CDK app. | Enter the following command to deploy all of the stacks to your AWS account, without requiring manual approval for any changes.<pre>cdk deploy --all --require-approval never</pre> | AWS DevOps | 

### Validate the deployment
<a name="validate-the-deployment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Log in to the EC2 instance in the primary Region. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/synchronize-data-between-amazon-efs-file-systems-in-different-aws-regions-by-using-aws-datasync.html) | AWS DevOps | 
| Create a temporary file. | Enter the following command to create a temporary file in the Amazon EFS mount path.<pre>sudo dd if=/dev/zero \<br />of=tmptst.dat \<br />bs=1G \<br />seek=5 \<br />count=0<br /><br />ls -lrt tmptst.dat</pre> | AWS DevOps | 
| Start the DataSync task. | Enter the following command to replicate the temporary file from the primary Region to the secondary Region, where `<ARN-task>` is the Amazon Resource Name (ARN) of your DataSync task.<pre>aws datasync start-task-execution \<br />    --task-arn <ARN-task></pre>The command returns the ARN of the task execution in the following format: `arn:aws:datasync:<region>:<account-ID>:task/task-execution/<exec-ID>`. A scripted sketch that starts the task and polls its status follows this table. | AWS DevOps | 
| Check the status of the data transfer. | Enter the following command to describe the DataSync execution task, where `<ARN-task-execution>` is the ARN of the task execution.<pre>aws datasync describe-task-execution \<br />    --task-execution-arn <ARN-task-execution></pre>The DataSync task is complete when `PrepareStatus`, `TransferStatus`, and `VerifyStatus` all have the value `SUCCESS`. | AWS DevOps | 
| Log in to the EC2 instance in the secondary Region. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/synchronize-data-between-amazon-efs-file-systems-in-different-aws-regions-by-using-aws-datasync.html) | AWS DevOps | 
| Validate the replication. | Enter the following command to verify that the temporary file exists in the Amazon EFS file system.<pre>ls -lrt tmptst.dat</pre> | AWS DevOps | 
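
If you prefer to script the start-and-check steps, the following boto3 sketch starts the task and polls the execution until it reaches a terminal state. The task ARN and Region are assumptions; it checks only the overall `Status` field, and it wraps the same API operations as the CLI commands in the table above.

```
import time

import boto3

datasync = boto3.client("datasync", region_name="us-east-1")  # primary Region

# Placeholder ARN; use the ARN of the DataSync task that the stack created.
task_arn = "arn:aws:datasync:us-east-1:111122223333:task/task-0123456789abcdef0"

execution_arn = datasync.start_task_execution(TaskArn=task_arn)["TaskExecutionArn"]

# Poll until the execution succeeds or fails.
while True:
    status = datasync.describe_task_execution(TaskExecutionArn=execution_arn)["Status"]
    print(f"Task execution status: {status}")
    if status in ("SUCCESS", "ERROR"):
        break
    time.sleep(30)
```

The per-phase fields shown in the table (`PrepareStatus`, `TransferStatus`, and `VerifyStatus`) appear under the `Result` key of the same `describe-task-execution` response.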

## Related resources
<a name="synchronize-data-between-amazon-efs-file-systems-in-different-aws-regions-by-using-aws-datasync-resources"></a>

**AWS documentation**
+ [AWS CDK API Reference](https://docs.aws.amazon.com/cdk/api/v2/python/modules.html)
+ [Configuring AWS DataSync transfers with Amazon EFS](https://docs.aws.amazon.com/datasync/latest/userguide/create-efs-location.html)
+ [Troubleshooting issues with AWS DataSync transfers](https://docs.aws.amazon.com/datasync/latest/userguide/troubleshooting-datasync-locations-tasks.html)

**Other AWS resources**
+ [AWS DataSync FAQs](https://aws.amazon.com/datasync/faqs/)

# Test AWS infrastructure by using LocalStack and Terraform Tests
<a name="test-aws-infra-localstack-terraform"></a>

*Ivan Girardi and Ioannis Kalyvas, Amazon Web Services*

## Summary
<a name="test-aws-infra-localstack-terraform-summary"></a>

This pattern helps you locally test infrastructure as code (IaC) for AWS in Terraform without the need to provision infrastructure in your AWS environment. It integrates the [Terraform Tests framework](https://developer.hashicorp.com/terraform/language/tests) with [LocalStack](https://github.com/localstack/localstack). The LocalStack Docker container provides a local development environment that emulates various AWS services. This helps you test and iterate on infrastructure deployments without incurring costs in the AWS Cloud.
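
Because LocalStack exposes its emulated services on a single local edge endpoint (port 4566 by default), any AWS SDK client can be pointed at it. The following boto3 sketch only illustrates that idea and is separate from the Terraform Tests that this pattern uses; the bucket name, object key, and dummy credentials are assumptions.

```
import boto3

# LocalStack's default edge endpoint; the credentials are dummy values.
localstack = dict(
    endpoint_url="http://localhost:4566",
    region_name="us-east-1",
    aws_access_key_id="test",
    aws_secret_access_key="test",
)

s3 = boto3.client("s3", **localstack)

# Create a bucket and upload an object entirely inside the LocalStack container.
s3.create_bucket(Bucket="demo-bucket")
s3.put_object(Bucket="demo-bucket", Key="README.md", Body=b"hello localstack")

print([b["Name"] for b in s3.list_buckets()["Buckets"]])
```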

This solution provides the following benefits:
+ **Cost optimization** – Running tests against LocalStack eliminates the need to use AWS services. This prevents you from incurring costs that are associated with creating, operating, and modifying those AWS resources.
+ **Speed and efficiency** – Testing locally is also typically faster than deploying the AWS resources. This rapid feedback loop accelerates development and debugging. Because LocalStack runs locally, you can develop and test your Terraform configuration files without an internet connection. You can debug Terraform configuration files locally and receive immediate feedback, which streamlines the development process.
+ **Consistency and reproducibility** – LocalStack provides a consistent environment for testing. This consistency helps make sure that tests yield the same results, regardless of external AWS changes or network issues.
+ **Isolation **– Testing with LocalStack prevents you from accidentally affecting live AWS resources or production environments. This isolation makes it safe to experiment and test various configurations.
+ **Automation** – Integration with a continuous integration and continuous delivery (CI/CD) pipeline helps you automatically test Terraform [configuration files](https://developer.hashicorp.com/terraform/language/files). The pipeline thoroughly tests the IaC before deployment.
+ **Flexibility** – You can simulate different AWS Regions, AWS accounts, and service configurations to match your production environments more closely.

## Prerequisites and limitations
<a name="test-aws-infra-localstack-terraform-prereqs"></a>

**Prerequisites**
+ [Install](https://docs.docker.com/get-started/get-docker/) Docker
+ [Enable access](https://docs.docker.com/reference/cli/dockerd/#daemon-socket-option) to the default Docker socket (`/var/run/docker.sock`). For more information, see the [LocalStack documentation](https://docs.localstack.cloud/user-guide/aws/lambda/#migrating-to-lambda-v2).
+ [Install](https://docs.docker.com/compose/install/) Docker Compose
+ [Install](https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli) Terraform version 1.6.0 or later
+ [Install](https://developer.hashicorp.com/terraform/cli) Terraform CLI
+ [Configure](https://hashicorp.github.io/terraform-provider-aws/) the Terraform AWS Provider
+ (Optional) [Install](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) and [configure](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html) the AWS Command Line Interface (AWS CLI). For an example of how to use the AWS CLI with LocalStack, see the GitHub [Test AWS infrastructure using LocalStack and Terraform Tests](https://github.com/aws-samples/localstack-terraform-test) repository.

**Limitations**
+ This pattern provides explicit examples for testing Amazon Simple Storage Service (Amazon S3), AWS Lambda, AWS Step Functions, and Amazon DynamoDB resources. However, you can extend this solution to include additional AWS resources.
+ This pattern provides instructions to run Terraform Tests locally, but you can integrate testing into any CI/CD pipeline.
+ This pattern provides instructions for using the LocalStack Community image. If you're using the LocalStack Pro image, see the [LocalStack Pro documentation](https://hub.docker.com/r/localstack/localstack-pro).
+ LocalStack provides emulation services for different AWS APIs. For a complete list, see [AWS Service Feature Coverage](https://docs.localstack.cloud/user-guide/aws/feature-coverage/). Some advanced features might require a subscription for LocalStack Pro.

## Architecture
<a name="test-aws-infra-localstack-terraform-architecture"></a>

The following diagram shows the architecture for this solution. The primary components are a source code repository, a CI/CD pipeline, and a LocalStack Docker container. The LocalStack Docker container emulates the following AWS services locally:
+ An Amazon S3 bucket for storing files
+ Amazon CloudWatch for monitoring and logging
+ An AWS Lambda function for running serverless code
+ An AWS Step Functions state machine for orchestrating multi-step workflows
+ An Amazon DynamoDB table for storing NoSQL data

![\[A CI/CD pipeline builds and tests the LocalStack Docker container and AWS resources.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/34bfbdbf-14e7-42a0-9022-c85a9c30cdcd/images/dc61fac9-b92c-4841-9132-ff8bb865eed9.png)


The diagram shows the following workflow:

1. You add and commit a Terraform configuration file to the source code repository.

1. The CI/CD pipeline detects the changes and initiates a build process for static Terraform code analysis. The pipeline builds and runs the LocalStack Docker container. Then the pipeline starts the test process.

1. The pipeline uploads an object into an Amazon S3 bucket that is hosted in the LocalStack Docker container.

1. Uploading the object invokes an AWS Lambda function.

1. The Lambda function stores the Amazon S3 event notification in a CloudWatch log.

1. The Lambda function starts an AWS Step Functions state machine.

1. The state machine writes the name of the Amazon S3 object into a DynamoDB table.

1. The test process in the CI/CD pipeline verifies that the name of the uploaded object matches the entry in the DynamoDB table. It also verifies that the S3 bucket is deployed with the specified name and that the AWS Lambda function has been successfully deployed.

## Tools
<a name="test-aws-infra-localstack-terraform-tools"></a>

**AWS services**
+ [Amazon CloudWatch](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/WhatIsCloudWatch.html) helps you monitor the metrics of your AWS resources and the applications you run on AWS in real time.
+ [Amazon DynamoDB](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Introduction.html) is a fully managed NoSQL database service that provides fast, predictable, and scalable performance.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a compute service that helps you run code without needing to provision or manage servers. It runs your code only when needed and scales automatically, so you pay only for the compute time that you use.
+ [Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) is a cloud-based object storage service that helps you store, protect, and retrieve any amount of data.
+ [AWS Step Functions](https://docs.aws.amazon.com/step-functions/latest/dg/welcome.html) is a serverless orchestration service that helps you combine AWS Lambda functions and other AWS services to build business-critical applications.

**Other tools**
+ [Docker](https://www.docker.com/) is a set of platform as a service (PaaS) products that use virtualization at the operating-system level to deliver software in containers.
+ [Docker Compose](https://docs.docker.com/compose/) is a tool for defining and running multi-container applications.
+ [LocalStack](https://localstack.cloud) is a cloud service emulator that runs in a single container. By using LocalStack, you can run workloads on your local machine that use AWS services, without connecting to the AWS Cloud.
+ [Terraform](https://www.terraform.io/) is an IaC tool from HashiCorp that helps you create and manage cloud and on-premises resources.
+ [Terraform Tests](https://developer.hashicorp.com/terraform/language/tests) helps you validate Terraform module configuration updates through tests that are analogous to integration or unit testing.

**Code repository**

The code for this pattern is available in the GitHub [Test AWS infrastructure using LocalStack and Terraform Tests](https://github.com/aws-samples/localstack-terraform-test) repository.

## Best practices
<a name="test-aws-infra-localstack-terraform-best-practices"></a>
+ This solution tests AWS infrastructure that is specified in Terraform configuration files, and it does not deploy those resources in the AWS Cloud. If you want to deploy the resources, follow the [principle of least-privilege](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#grant-least-privilege) (IAM documentation) and properly [configure the Terraform backend](https://developer.hashicorp.com/terraform/language/backend) (Terraform documentation).
+ When integrating LocalStack in a CI/CD pipeline, we recommend that you don't run the LocalStack Docker container in privilege mode. For more information, see [Runtime privilege and Linux capabilities](https://docs.docker.com/engine/containers/run/#runtime-privilege-and-linux-capabilities) (Docker documentation) and [Security for self-managed runners](https://docs.gitlab.com/runner/security/) (GitLab documentation).

## Epics
<a name="test-aws-infra-localstack-terraform-epics"></a>

### Deploy the solution
<a name="deploy-the-solution"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clone the repository. | In a bash shell, enter the following command. This clones the [Test AWS infrastructure using LocalStack and Terraform Tests](https://github.com/aws-samples/localstack-terraform-test) repository from GitHub:<pre>git clone https://github.com/aws-samples/localstack-terraform-test.git</pre> | DevOps engineer | 
| Run the LocalStack container. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/test-aws-infra-localstack-terraform.html) | DevOps engineer | 
| Initialize Terraform. | Enter the following command to initialize Terraform:<pre>terraform init</pre> | DevOps engineer | 
| Run Terraform Tests. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/test-aws-infra-localstack-terraform.html) | DevOps engineer | 
| Clean up resources. | Enter the following command to destroy the LocalStack container:<pre>docker-compose down</pre> | DevOps engineer | 

## Troubleshooting
<a name="test-aws-infra-localstack-terraform-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| `Error: reading DynamoDB Table Item (Files\|README.md): empty` result when running the `terraform test` command. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/test-aws-infra-localstack-terraform.html) | 

## Related resources
<a name="test-aws-infra-localstack-terraform-resources"></a>
+ [Getting started with Terraform: Guidance for AWS CDK and AWS CloudFormation experts](https://docs.aws.amazon.com/prescriptive-guidance/latest/getting-started-terraform/introduction.html) (AWS Prescriptive Guidance)
+ [Best practices for using the Terraform AWS Provider](https://docs.aws.amazon.com/prescriptive-guidance/latest/terraform-aws-provider-best-practices/introduction.html) (AWS Prescriptive Guidance)
+ [Terraform CI/CD and testing on AWS with the new Terraform Test Framework](https://aws.amazon.com/blogs/devops/terraform-ci-cd-and-testing-on-aws-with-the-new-terraform-test-framework/) (AWS blog post)
+ [Accelerating software delivery using LocalStack Cloud Emulator from AWS Marketplace](https://aws.amazon.com/blogs/awsmarketplace/accelerating-software-delivery-localstack-cloud-emulator-aws-marketplace/) (AWS blog post)

## Additional information
<a name="test-aws-infra-localstack-terraform-additional"></a>

**Integration with GitHub Actions**

You can integrate LocalStack and Terraform Tests in a CI/CD pipeline by using GitHub Actions. For more information, see the [GitHub Actions documentation](https://docs.github.com/en/actions). The following is a sample GitHub Actions configuration file:

```
name: LocalStack Terraform Test

on:
  push:
    branches:
      - '**'

  workflow_dispatch: {}

jobs:
  localstack-terraform-test:
    runs-on: ubuntu-latest

    steps:
    - uses: actions/checkout@v4

    - name: Build and Start LocalStack Container
      run: |
        docker compose up -d

    - name: Setup Terraform
      uses: hashicorp/setup-terraform@v3
      with:
        terraform_version: latest

    - name: Run Terraform Init and Validation
      run: |
        terraform init
        terraform validate
        terraform fmt --recursive --check
        terraform plan
        terraform show

    - name: Run Terraform Test
      run: |
        terraform test

    - name: Stop and Delete LocalStack Container
      if: always()
      run: docker compose down
```

# Upgrade SAP Pacemaker clusters from ENSA1 to ENSA2
<a name="upgrade-sap-pacemaker-clusters-from-ensa1-to-ensa2"></a>

*Gergely Cserdi and Balazs Sandor Skublics, Amazon Web Services*

## Summary
<a name="upgrade-sap-pacemaker-clusters-from-ensa1-to-ensa2-summary"></a>

This pattern explains the steps and considerations for upgrading an SAP Pacemaker cluster that is based on Standalone Enqueue Server (ENSA1) to ENSA2. The information in this pattern applies to both SUSE Linux Enterprise Server (SLES) and Red Hat Enterprise Linux (RHEL) operating systems.

Pacemaker clusters on SAP NetWeaver 7.52 or S/4HANA 1709 and earlier versions run on an ENSA1 architecture and are configured specifically for ENSA1. If you run your SAP workloads on Amazon Web Services (AWS) and you’re interested in moving to ENSA2, you might find that the SAP, SUSE, and RHEL documentation doesn’t provide comprehensive information. This pattern describes the technical steps required to reconfigure SAP parameters and Pacemaker clusters to upgrade from ENSA1 to ENSA2. It provides examples from SUSE systems, but the concepts are the same for RHEL clusters.

**Note**  
ENSA1 and ENSA2 are concepts that pertain to SAP applications only, so the information in this pattern doesn’t apply to SAP HANA or other types of clusters.

**Note**  
Technically, ENSA2 can be used with or without Enqueue Replicator 2. However, high availability (HA) and failover automation (through a cluster solution) require Enqueue Replicator 2. This pattern uses the term *ENSA2 clusters* to refer to clusters with Standalone Enqueue Server 2 and Enqueue Replicator 2.

## Prerequisites and limitations
<a name="upgrade-sap-pacemaker-clusters-from-ensa1-to-ensa2-prereqs"></a>

**Prerequisites**
+ A working ENSA1-based cluster that uses Pacemaker and Corosync on SLES or RHEL.
+ At least two Amazon Elastic Compute Cloud (Amazon EC2) instances where the (ABAP) SAP Central Services (ASCS/SCS) and Enqueue Replication Server (ERS) instances are running.
+ Knowledge of managing SAP applications and clusters.
+ Access to the Linux environment as root user.

**Limitations**
+ ENSA1-based clusters support a two-node architecture only.
+ ENSA2-based clusters cannot be deployed to SAP NetWeaver versions before 7.52.
+ EC2 instances in clusters should be in different AWS Availability Zones.

**Product versions**
+ SAP NetWeaver version 7.52 or later
+ Starting with S/4HANA 2020, only ENSA2 clusters are supported
+ Kernel 7.53 or later, which supports ENSA2 and Enqueue Replicator 2
+ SLES for SAP Applications version 12 or later
+ RHEL for SAP with High Availability (HA) version 7.9 or later

## Architecture
<a name="upgrade-sap-pacemaker-clusters-from-ensa1-to-ensa2-architecture"></a>

**Source technology stack**
+ SAP NetWeaver 7.52 with SAP Kernel 7.53 or later
+ SLES or RHEL operating system

**Target technology stack**
+ SAP NetWeaver 7.52 with SAP Kernel 7.53 or later, including S/4HANA 2020 with ABAP platform
+ SLES or RHEL operating system

**Target architecture**

The following diagram shows an HA configuration of ASCS/SCS and ERS instances based on an ENSA2 cluster.

![\[HA architecture for ASCS/SCS and ERS instances on an ENSA2 cluster\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/c32560de-901f-4796-a6b3-c08c109b22c8/images/19501713-0ddf-4242-9ea3-90478200a19e.png)


**Comparison of ENSA1 and ENSA2 clusters**

SAP introduced ENSA2 as the successor to ENSA1. An ENSA1-based cluster supports a two-node architecture where the ASCS/SCS instance fails over to ERS when an error occurs. This limitation stems from how the ASCS/SCS instance regains the lock table information from the shared memory of the ERS node after failover. ENSA2-based clusters with Enqueue Replicator 2 eliminate this limitation, because the ASCS/SCS instance can collect the lock information from the ERS instance over the network. ENSA2-based clusters can have more than two nodes, because the ASCS/SCS instance is no longer required to fail over to the ERS node. (However, in a two-node ENSA2 cluster environment, the ASCS/SCS instance will still fail over to the ERS node because there are no other nodes in the cluster to fail over to.) ENSA2 is supported starting with SAP Kernel 7.50, with some limitations. For an HA setup that supports Enqueue Replicator 2, the minimum requirement is NetWeaver 7.52 (see [SAP OSS Note 2630416](https://launchpad.support.sap.com/#/notes/2630416)). S/4HANA 1809 uses the ENSA2 architecture by default, and starting with S/4HANA 2020, only ENSA2 is supported.

**Automation and scale**

The HA cluster in the target architecture makes ASCS fail over to other nodes automatically.

**Scenarios for moving to ENSA2-based clusters**

There are two main scenarios for upgrading to ENSA2-based clusters: 
+ Scenario 1: You choose to upgrade to ENSA2 without an accompanying SAP upgrade or S/4HANA conversion, assuming that your SAP release and Kernel version support ENSA2.
+ Scenario 2: You move to ENSA2 as part of an upgrade or conversion (for example, to S/4HANA 1809 or later) by using the Software Update Manager (SUM).

The [Epics](#upgrade-sap-pacemaker-clusters-from-ensa1-to-ensa2-epics) section covers the steps for these two scenarios. The first scenario requires you to manually set up SAP-related parameters before you change the cluster configuration for ENSA2. In the second scenario, the binaries and SAP-related parameters are deployed by SUM, and your only remaining task is to update the cluster configuration for HA. We still recommend that you validate SAP parameters after you use SUM. In most cases, S/4HANA conversion is the main reason for a cluster upgrade.

## Tools
<a name="upgrade-sap-pacemaker-clusters-from-ensa1-to-ensa2-tools"></a>
+ For OS package managers, we recommend the Zypper (for SLES) or YUM (for RHEL) tools.
+ For cluster management, we recommend **crm** (for SLES) or **pcs** (for RHEL) shells.
+ SAP instance management tools such as SAPControl.
+ (Optional) SUM tool for S/4HANA conversion upgrade.

## Best practices
<a name="upgrade-sap-pacemaker-clusters-from-ensa1-to-ensa2-best-practices"></a>
+ For best practices for using SAP workloads on AWS, see the [SAP Lens](https://docs.aws.amazon.com/wellarchitected/latest/sap-lens/sap-lens.html) for the AWS Well-Architected Framework.
+ Consider the number of cluster nodes (odd or even) in your ENSA2 multi-node architecture.
+ Set up the ENSA2 cluster for SLES 15 in alignment with the SAP S/4-HA-CLU 1.0 certification standard.
+ Always save or back up your existing cluster and application state before upgrading to ENSA2.

## Epics
<a name="upgrade-sap-pacemaker-clusters-from-ensa1-to-ensa2-epics"></a>

### Configure SAP parameters manually for ENSA2 (scenario 1 only)
<a name="configure-sap-parameters-manually-for-ensa2-scenario-1-only"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Configure the parameters in the default profile. | If you want to upgrade to ENSA2 while staying on the same SAP release or if your target release defaults to ENSA1, set the parameters in the default profile (DEFAULT.PFL file) to the following values.<pre>enq/enable=TRUE<br />enq/serverhost=sapascsvirt<br />enq/serverinst=10        (instance number of ASCS/SCS instance)<br />enque/process_location=REMOTESA<br />enq/replicatorhost=sapersvirt<br />enq/replicatorinst=11    (instance number of ERS instance)<br />  </pre>where `sapascsvirt` is the virtual hostname for the ASCS instances, and `sapersvirt` is the virtual hostname for the ERS instances. You can change these to fit your target environment.To use this upgrade option, your SAP release and Kernel version must support ENSA2 and Enqueue Replicator 2. | SAP | 
| Configure the ASCS/SCS instance profile. | If you want to upgrade to ENSA2 while staying on the same SAP release or if your target release defaults to ENSA1, set the following parameters in the ASCS/SCS instance profile. The section of the profile where ENSA1 is defined looks something like the following.<pre>#--------------------------------------------------------------<br />Start SAP enqueue server<br />#-------------------------------------------------------------- <br />_EN = en.sap$(SAPSYSTEMNAME)$(INSTANCE_NAME) <br />Execute_04 = local rm -f $(_EN) <br />Execute_05 = local ln -s -f $(DIR_EXECUTABLE)/enserver$(FT_EXE) $(_EN) <br />Start_Program_01 = local $(_EN) pf=$(_PF)<br />  </pre>To reconfigure this section for ENSA2:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/upgrade-sap-pacemaker-clusters-from-ensa1-to-ensa2.html)This profile section would look something like the following after your changes.<pre>#--------------------------------------------------------------<br />Start SAP enqueue server<br />#-------------------------------------------------------------- <br />_ENQ = enq.sap$(SAPSYSTEMNAME)$(INSTANCE_NAME) <br />Execute_04 = local rm -f $(_ENQ) <br />Execute_05 = local ln -s -f $(DIR_EXECUTABLE)/enq_server$(FT_EXE) $(_ENQ) <br />Start_Program_01 = local $(_ENQ) pf=$(_PF) <br />... <br />enq/server/replication/enable = TRUE <br />Autostart = 0</pre>`_ENQ` must not have the restart option enabled. If `RestartProgram_01` is set for `_ENQ`, change it to `StartProgram_01`. This prevents SAP from restarting the service or interfering with cluster-managed resources. | SAP | 
| Configure the ERS profile. | If you want to upgrade to ENSA2 while staying on the same SAP release or if your target release defaults to ENSA1, set the following parameters in the ERS instance profile.Find the section where the enqueue replicator is defined. It will be similar to the following.<pre>#------------------------------------------------------<br />Start enqueue replication server<br />#------------------------------------------------------ <br />_ER = er.sap$(SAPSYSTEMNAME)$(INSTANCE_NAME) <br />Execute_03 = local rm -f $(_ER) <br />Execute_04 = local ln -s -f $(DIR_EXECUTABLE)/enrepserver$(FT_EXE) $(_ER) <br />Start_Program_00 = local $(_ER) pf=$(_PF) NR=$(SCSID)<br />  </pre>To reconfigure this section for Enqueue Replicator 2:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/upgrade-sap-pacemaker-clusters-from-ensa1-to-ensa2.html)This profile section should look something like the following after your changes.<pre>#------------------------------------------------------<br />Start enqueue replication server<br />#------------------------------------------------------ <br />_ENQR = enqr.sap$(SAPSYSTEMNAME)$(INSTANCE_NAME) <br />Execute_01 = local rm -f $(_ENQR) <br />Execute_02 = local ln -s -f $(DIR_EXECUTABLE)/enq_replicator$(FT_EXE) $(_ENQR) <br />Start_Program_00 = local $(_ENQR) pf=$(_PF) NR=$(SCSID) <br />… <br />Autostart = 0</pre>`_ENQR` must not have the restart option enabled. If `RestartProgram_01` is set for `_ENQR`, change it to `StartProgram_01`. This prevents SAP from restarting the service or interfering with cluster-managed services. | SAP | 
| Restart SAP Start Services. | After you change the profiles described previously in this epic, restart SAP Start Services for both ASCS/SCS and ERS: `sapcontrol -nr 10 -function RestartService SCT` and `sapcontrol -nr 11 -function RestartService SCT`, where `SCT` refers to the SAP system ID, assuming that 10 and 11 are the instance numbers for the ASCS/SCS and ERS instances, respectively. | SAP | 

### Reconfigure the cluster for ENSA2 (required for both scenarios)
<a name="reconfigure-the-cluster-for-ensa2-required-for-both-scenarios"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Verify version numbers in SAP resource agents. | When you use SUM to upgrade SAP to S/4HANA 1809 or later, SUM handles the parameter changes in the SAP profiles. Only the cluster requires manual adjustment. However, we recommend that you verify the parameter settings before you make any changes to the cluster.The examples in this epic assume that you’re using the SUSE operating system. If you’re using RHEL, you will need to use tools such as YUM and the **pcs** shell instead of Zypper and **crm**.Check both nodes in the architecture to confirm that the `resource-agents` package matches the minimum version recommended by SAP. For SLES, check SAP OSS Note 2641019. For RHEL, check SAP OSS Note 2641322. (SAP Notes require an [SAP ONE Support Launchpad user account](https://support.sap.com/en/my-support/knowledge-base.html).)<pre>sapers:sctadm 23> zypper search -s -i resource-agents<br />Loading repository data...<br />Reading installed packages...<br />S | Name | Type | Version | Arch | Repository<br />--+-----------------+---------+------------------------------------+--------+-----------------------------<br />i | resource-agents | package | 4.8.0+git30.d0077df0-150300.8.28.1 | x86_64 | SLE-Product-HA15-SP3-Updates</pre>Update the `resource-agents` version if necessary. | AWS systems administrator | 
| Back up cluster configuration. | Back up the CRM cluster configuration as follows.`crm configure show > /tmp/cluster_config_backup.txt` | AWS systems administrator | 
| Set maintenance mode. | Set the cluster to maintenance mode.`crm configure property maintenance-mode="true"` | AWS systems administrator | 
| Check cluster configuration. | Check the current cluster configuration.`crm configure show`Here is an excerpt from the full output:<pre>node 1: sapascs<br />node 2: sapers<br />...<br />primitive rsc_sap_SCT_ASCS10 SAPInstance \<br />operations $id=rsc_sap_SCT_ASCS10-operations \<br />op monitor interval=120 timeout=60 on-fail=restart \<br />params InstanceName=SCT_ASCS10_sapascsvirt START_PROFILE="/sapmnt/SCT/profile/SCT_ASCS10_sapascsvirt" \ <br />   AUTOMATIC_RECOVER=false \<br />meta resource-stickiness=5000 failure-timeout=60 migration-threshold=1 priority=10<br />primitive rsc_sap_SCT_ERS11 SAPInstance \<br />operations $id=rsc_sap_SCT_ERS11-operations \<br />op monitor interval=120 timeout=60 on-fail=restart \<br />params InstanceName=SCT_ERS11_sapersvirt START_PROFILE="/sapmnt/SCT/profile/SCT_ERS11_sapersvirt" \<br />   AUTOMATIC_RECOVER=false IS_ERS=true \<br />meta priority=1000<br />...<br />colocation col_sap_SCT_no_both -5000: grp_SCT_ERS11 grp_SCT_ASCS10<br />location loc_sap_SCT_failover_to_ers rsc_sap_SCT_ASCS10 \<br />rule 2000: runs_ers_SCT eq 1<br />order ord_sap_SCT_first_start_ascs Optional: rsc_sap_SCT_ASCS10:start rsc_sap_SCT_ERS11:stop symmetrical=false<br />...</pre>where `sapascsvirt` refers to the virtual hostname for the ASCS instances, `sapersvirt` refers to the virtual hostname for the ERS instances, and `SCT` refers to the SAP system ID. | AWS systems administrator | 
| Remove the failover location constraint. | In the previous example, the location constraint `loc_sap_SCT_failover_to_ers` specifies that, with ENSA1, the ASCS instance should always follow the ERS instance upon failover. With ENSA2, ASCS can fail over freely to any participating node, so you can remove this constraint: `crm configure delete loc_sap_SCT_failover_to_ers` | AWS systems administrator | 
| Adjust primitives. | You will also need to make minor changes to the ASCS and ERS SAPInstance primitives.Here is an example of an ASCS SAPInstance primitive that is configured for ENSA1.<pre>primitive rsc_sap_SCT_ASCS10 SAPInstance \<br />operations $id=rsc_sap_SCT_ASCS10-operations \<br />op monitor interval=120 timeout=60 on-fail=restart \<br />params InstanceName=SCT_ASCS10_sapascsvirt START_PROFILE="/sapmnt/SCT/profile/SCT_ASCS10_sapascsvirt" \<br />   AUTOMATIC_RECOVER=false \<br />meta resource-stickiness=5000 failure-timeout=60 migration-threshold=1 priority=10</pre>To upgrade to ENSA2, change this configuration to the following.<pre>primitive rsc_sap_SCT_ASCS10 SAPInstance \<br />operations $id=rsc_sap_SCT_ASCS10-operations \<br />op monitor interval=120 timeout=60 on-fail=restart \<br />params InstanceName=SCT_ASCS10_sapascsvirt START_PROFILE="/sapmnt/SCT/profile/SCT_ASCS10_sapascsvirt" \<br />   AUTOMATIC_RECOVER=false \<br />meta resource-stickiness=3000 </pre>This is an example of an ERS SAPInstance primitive that is configured for ENSA1.<pre>primitive rsc_sap_SCT_ERS11 SAPInstance \<br />operations $id=rsc_sap_SCT_ERS11-operations \<br />op monitor interval=120 timeout=60 on-fail=restart \<br />params InstanceName=SCT_ERS11_sapersvirt START_PROFILE="/sapmnt/SCT/profile/SCT_ERS11_sapersvirt" \<br />   AUTOMATIC_RECOVER=false IS_ERS=true \<br />meta priority=1000</pre>To upgrade to ENSA2, change this configuration to the following.<pre>primitive rsc_sap_SCT_ERS11 SAPInstance \<br />operations $id=rsc_sap_SCT_ERS11-operations \<br />op monitor interval=120 timeout=60 on-fail=restart \<br />params InstanceName=SCT_ERS11_sapersvirt START_PROFILE="/sapmnt/SCT/profile/SCT_ERS11_sapersvirt" \<br />   AUTOMATIC_RECOVER=false IS_ERS=true</pre>You can change primitives in various ways. For example, you can revise them in an editor such as vi, as in the following example.`crm configure edit rsc_sap_SCT_ERS11` | AWS systems administrator | 
| Disable maintenance mode. | Disable maintenance mode on the cluster.`crm configure property maintenance-mode="false"`When the cluster is out of maintenance mode, it attempts to bring the ASCS and ERS instances online with the new ENSA2 settings. | AWS systems administrator | 

### (Optional) Add cluster nodes
<a name="optional-add-cluster-nodes"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Review best practices. | Before you add more nodes, make sure to understand best practices such as whether to use an odd or even number of nodes. | AWS systems administrator | 
| Add nodes. | Adding more nodes involves a series of tasks, such as updating the operating system, installing software packages that match the existing nodes, and making mounts available. You can use the **Prepare Additional Host** option in SAP Software Provisioning Manager (SWPM) to create an SAP-specific baseline of the host. For more information, see the SAP guides listed in the next section. | AWS systems administrator | 

## Related resources
<a name="upgrade-sap-pacemaker-clusters-from-ensa1-to-ensa2-resources"></a>

**SAP and SUSE references**

To access SAP Notes, you must have an SAP ONE Support Launchpad user account. For more information, see the [SAP Support website](https://support.sap.com/en/my-support/knowledge-base.html).
+ [SAP Note 2501860 ‒ Documentation for SAP NetWeaver Application Server for ABAP 7.52](https://launchpad.support.sap.com/#/notes/2501860)
+ [SAP Note 2641019 ‒ Installation of ENSA2 and update from ENSA1 to ENSA2 in SUSE HA environment](https://launchpad.support.sap.com/#/notes/2641019)
+ [SAP Note 2641322 ‒ Installation of ENSA2 and update from ENSA1 to ENSA2 when using the Red Hat HA solutions for SAP](https://launchpad.support.sap.com/#/notes/2641322)
+ [SAP Note 2711036 ‒ Usage of the Standalone Enqueue Server 2 in an HA Environment](https://launchpad.support.sap.com/#/notes/2711036)
+ [Standalone Enqueue Server 2](https://help.sap.com/docs/ABAP_PLATFORM/cff8531bc1d9416d91bb6781e628d4e0/902412f09e134f5bb875adb6db585c92.html) (SAP documentation)
+ [SAP S/4 HANA ‒ Enqueue Replication 2 High Availability Cluster - Setup Guide](https://documentation.suse.com/sbp/all/html/SAP_S4HA10_SetupGuide-SLE12/index.html) (SUSE documentation)

**AWS references**
+ [SAP HANA on AWS: High Availability Configuration Guide for SLES and RHEL](https://docs.aws.amazon.com/sap/latest/sap-hana/sap-hana-on-aws-ha-configuration.html)
+ [SAP Lens - AWS Well-Architected Framework](https://docs.aws.amazon.com/wellarchitected/latest/sap-lens/sap-lens.html)

# Use consistent Availability Zones in VPCs across different AWS accounts
<a name="use-consistent-availability-zones-in-vpcs-across-different-aws-accounts"></a>

*Adam Spicer, Amazon Web Services*

## Summary
<a name="use-consistent-availability-zones-in-vpcs-across-different-aws-accounts-summary"></a>

On the Amazon Web Services (AWS) Cloud, an Availability Zone has a name that can vary between your AWS accounts and an [Availability Zone ID (AZ ID)](https://docs.aws.amazon.com/ram/latest/userguide/working-with-az-ids.html) that identifies its location. If you use AWS CloudFormation to create virtual private clouds (VPCs), you must specify the Availability Zone's name or ID when creating the subnets. If you create VPCs in multiple accounts, the mapping of Availability Zone names to AZ IDs is randomized for each account, which means that subnets that use the same Availability Zone name can end up in different Availability Zones in each account. 

To use the same Availability Zone across your accounts, you must map the Availability Zone name in each account to the same AZ ID. For example, the following diagram shows that the `use1-az6` AZ ID is named `us-east-1a` in AWS account A and `us-east-1c` in AWS account Z.

![\[The use1-az6 AZ ID is named us-east-1a in AWS account A and us-east-1c in AWS account Z.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/9954e7f9-d6ce-44bd-af99-0c6bb7cd3cb0/images/23c8a37b-2408-4534-a1e0-bccfa4d7fbe3.png)


 

This pattern helps ensure zonal consistency by providing a cross-account, scalable solution for using the same Availability Zones in your subnets. Zonal consistency ensures that your cross-account network traffic avoids cross-Availability Zone network paths, which helps reduce data transfer costs and lower network latency between your workloads.

This pattern is an alternative approach to the AWS CloudFormation [AvailabilityZoneId property](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ec2-subnet.html#cfn-ec2-subnet-availabilityzoneid).

## Prerequisites and limitations
<a name="use-consistent-availability-zones-in-vpcs-across-different-aws-accounts-prereqs"></a>

**Prerequisites**
+ At least two active AWS accounts in the same AWS Region.
+ Evaluate how many Availability Zones are needed to support your VPC requirements in the Region.
+ Identify and record the AZ ID for each Availability Zone that you need to support. For more information, see [Availability Zone IDs for your AWS resources](https://docs.aws.amazon.com/ram/latest/userguide/working-with-az-ids.html) in the AWS Resource Access Manager documentation. (The example command after this list shows one way to view the name-to-ID mapping in an account.)
+ An ordered, comma-separated list of your AZ IDs. For example, the first Availability Zone on your list is mapped as `az1`, the second Availability Zone is mapped as `az2`, and this mapping structure continues until your comma-separated list is fully mapped. There is no maximum number of AZ IDs that can be mapped.
+ The `az-mapping.yaml` file from the GitHub [Multi-account Availability Zone mapping](https://github.com/aws-samples/multi-account-az-mapping/) repository, copied to your local machine.
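
The following AWS CLI command is a quick way to check this mapping in one of your accounts. It is not part of the pattern's solution; it simply lists each Availability Zone name with its AZ ID for the Region that your CLI is configured to use:

```
aws ec2 describe-availability-zones \
    --query "AvailabilityZones[].{Name:ZoneName,Id:ZoneId}" \
    --output table
```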

## Architecture
<a name="use-consistent-availability-zones-in-vpcs-across-different-aws-accounts-architecture"></a>

The following diagram shows the architecture that is deployed in an account and that creates AWS Systems Manager Parameter Store values. These Parameter Store values are consumed when you create a VPC in the account.

![\[Workflow to create Systems Manager Parameter Store values for each AZ ID and store AZ name.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/9954e7f9-d6ce-44bd-af99-0c6bb7cd3cb0/images/f1168464-55f8-4efc-9b28-6a0cda668b9e.png)


The diagram shows the following workflow:

1. This pattern’s solution is deployed to all accounts that require zonal consistency for a VPC. 

1. The solution creates Parameter Store values for each AZ ID and stores the new Availability Zone name. 

1. The AWS CloudFormation template uses the Availability Zone name stored in each Parameter Store value, which ensures zonal consistency.

The following diagram shows the workflow for creating a VPC with this pattern's solution.

 

![\[Workflow submits CloudFormation template to create a VPC with correct AZ IDs.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/9954e7f9-d6ce-44bd-af99-0c6bb7cd3cb0/images/cd859430-ac25-479f-b56a-21da24cddf21.png)


 

The diagram shows the following workflow:

1. Submit a template for creating a VPC to AWS CloudFormation.

1. AWS CloudFormation resolves the Parameter Store values for each Availability Zone and returns the Availability Zone name for each AZ ID.

1. A VPC is created with the correct AZ IDs required for zonal consistency.

After you deploy this pattern’s solution, you can create subnets that reference the Parameter Store values. If you use AWS CloudFormation, you can reference the Availability Zone mapping parameter values as shown in the following YAML-formatted sample code:

```
Resources:
    PrivateSubnet1AZ1: 
        Type: AWS::EC2::Subnet 
        Properties: 
            VpcId: !Ref VPC
            CidrBlock: !Ref PrivateSubnetAZ1CIDR
            AvailabilityZone: 
                !Join 
                    - ''
                    - - '{{resolve:ssm:/az-mapping/az1:1}}'
```

This sample code is contained in the `vpc-example.yaml` file from the GitHub [Multi-account Availability Zone mapping](https://github.com/aws-samples/multi-account-az-mapping/) repository. It shows you how to create a VPC and subnets that align to the Parameter Store values for zonal consistency.
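
After the solution runs in an account, you can confirm the mapping that it stored before you deploy any VPCs. For example, the following command is an informal check (the `/az-mapping/az1` parameter path matches the sample code above); it returns the Availability Zone name that the account maps to your first AZ ID:

```
aws ssm get-parameter \
    --name /az-mapping/az1 \
    --query "Parameter.Value" \
    --output text
```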

**Technology stack**
+ AWS CloudFormation
+ AWS Lambda
+ AWS Systems Manager Parameter Store

**Automation and scale**

You can deploy this pattern to all your AWS accounts by using AWS CloudFormation StackSets or the Customizations for AWS Control Tower solution. For more information, see [Working with AWS CloudFormation StackSets](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/what-is-cfnstacksets.html) in the AWS CloudFormation documentation and [Customizations for AWS Control Tower](https://aws.amazon.com/solutions/implementations/customizations-for-aws-control-tower/) in the AWS Solutions Library.

After you deploy the AWS CloudFormation template, you can update it to use the Parameter Store values and deploy your VPCs in pipelines or according to your requirements. 

## Tools
<a name="use-consistent-availability-zones-in-vpcs-across-different-aws-accounts-tools"></a>

**AWS services**
+ [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) helps you model and set up your AWS resources, provision them quickly and consistently, and manage them throughout their lifecycle. You can use a template to describe your resources and their dependencies, and launch and configure them together as a stack, instead of managing resources individually. You can manage and provision stacks across multiple AWS accounts and AWS Regions.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a compute service that supports running code without provisioning or managing servers. Lambda runs your code only when needed and scales automatically, from a few requests per day to thousands per second. You pay only for the compute time that you consume—there is no charge when your code is not running.
+ [AWS Systems Manager Parameter Store](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-parameter-store.html) is a capability of AWS Systems Manager. It provides secure, hierarchical storage for configuration data management and secrets management.

**Code**

The code for this pattern is provided in the GitHub [Multi-account Availability Zone mapping](https://github.com/aws-samples/multi-account-az-mapping/) repository.

## Epics
<a name="use-consistent-availability-zones-in-vpcs-across-different-aws-accounts-epics"></a>

### Deploy the az-mapping.yaml file
<a name="deploy-the-az-mapping-yaml-file"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Determine the required Availability Zones for the Region. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/use-consistent-availability-zones-in-vpcs-across-different-aws-accounts.html) | Cloud architect | 
| Deploy the az-mapping.yaml file. | Use the `az-mapping.yaml` file to create an AWS CloudFormation stack in all required AWS accounts. In the `AZIds` parameter, use the comma-separated list that you created earlier. We recommend that you use [AWS CloudFormation StackSets](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/what-is-cfnstacksets.html) or the [Customizations for AWS Control Tower Solution](https://aws.amazon.com/solutions/implementations/customizations-for-aws-control-tower/). | Cloud architect | 
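
If you first want to test the solution in a single account, you can also deploy the stack directly with the AWS CLI. The following command is a minimal sketch; the stack name and the AZ IDs in the `AZIds` parameter value are placeholders, and the `--capabilities` flag is included in case the template creates IAM resources for its Lambda function:

```
aws cloudformation create-stack \
    --stack-name az-mapping \
    --template-body file://az-mapping.yaml \
    --parameters '[{"ParameterKey":"AZIds","ParameterValue":"use1-az6,use1-az4,use1-az5"}]' \
    --capabilities CAPABILITY_NAMED_IAM
```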

### Deploy the VPCs in your accounts
<a name="deploy-the-vpcs-in-your-accounts"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Customize the AWS CloudFormation templates. | When you create the subnets using AWS CloudFormation, customize the templates to use the Parameter Store values that you created earlier. For a sample template, see the `vpc-example.yaml` file in the GitHub [Multi-account Availability Zone mapping](https://github.com/aws-samples/multi-account-az-mapping/) repository. | Cloud architect | 
| Deploy the VPCs. | Deploy the customized AWS CloudFormation templates into your accounts. Each VPC in the Region then has zonal consistency in the Availability Zones used for the subnets. | Cloud architect | 

## Related resources
<a name="use-consistent-availability-zones-in-vpcs-across-different-aws-accounts-resources"></a>
+ [Availability Zone IDs for your AWS resources](https://docs.aws.amazon.com/ram/latest/userguide/working-with-az-ids.html) (AWS Resource Access Manager documentation)
+ [AWS::EC2::Subnet](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ec2-subnet.html) (AWS CloudFormation documentation)

# Use user IDs in IAM policies for access control and automation
<a name="use-user-ids-iam-policies-access-control-automation"></a>

*Srinivas Ananda Babu and Ram Kandaswamy, Amazon Web Services*

## Summary
<a name="use-user-ids-iam-policies-access-control-automation-summary"></a>

This pattern explains the potential pitfalls of using username-based policies in AWS Identity and Access Management (IAM), the benefits of using user IDs, and how to integrate this approach with AWS CloudFormation for automation.

In the AWS Cloud, the IAM service helps you manage user identities and access control with precision. However, reliance on usernames in IAM policy creation can lead to unforeseen security risks and access control issues. For example, consider this scenario: A new employee, John Doe, joins your team. You create an IAM user account with the username `j.doe` and grant the new user permissions through IAM policies that reference the username. When John leaves the company, the account is deleted. The trouble begins when a new employee, Jane Doe, joins your team and the `j.doe` username is re-created. The existing policies now grant Jane Doe the same permissions that John Doe had. This creates a potential security and compliance nightmare.

Manually updating each policy to reflect new user details is a time-consuming, error-prone process, especially as your organization grows. The solution is to use a unique and immutable user ID. When you create an IAM user account, AWS assigns the IAM user a unique user ID (or principal ID). You can use these user IDs in your IAM policies to ensure consistent and reliable access control that isn't affected by username changes or reuse.
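
If you need to look up a user's unique ID, you can call the IAM `GetUser` API. For example, the following AWS CLI command (using the `j.doe` username from the scenario above) prints the user ID, which looks similar to `AIDACKCEVSQ6C2EXAMPLE`:

```
aws iam get-user \
    --user-name j.doe \
    --query "User.UserId" \
    --output text
```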

For example, an IAM policy that uses a user ID might look like this:

```
{ 
    "Version": "2012-10-17",		 	 	  
    "Statement": [ 
        { 
            "Effect": "Allow", 
            "Action": "s3:ListBucket", 
            "Resource": "arn:aws:s3:::example-bucket", 
            "Principal": { "AWS": "arn:aws:iam::123456789012:user/abcdef01234567890" } 
        } 
      ] 
}
```

The benefits of using user IDs in IAM policies include:
+ **Uniqueness.** User IDs are unique across all AWS accounts, so permissions are applied correctly and consistently.
+ **Immutability.** User IDs cannot be changed, so they provide a stable identifier for referencing users in policies.
+ **Auditing and compliance.** AWS services often include user IDs in logs and audit trails, which makes it easy to trace actions back to specific users.
+ **Automation and integration.** Using user IDs in AWS APIs, SDKs, or automation scripts ensures that processes remain unaffected by username changes.
+ **Future-proofing.** Using user IDs in policies from the start can prevent potential access control issues or extensive policy updates.

**Automation**

When you use infrastructure as code (IaC) tools such as AWS CloudFormation, the pitfalls of username-based IAM policies can still cause issues. The IAM user resource returns the username when you call the `Ref` intrinsic function. As your organization's infrastructure evolves, the cycle of creating and deleting resources, including IAM user accounts, can lead to unintended access control issues if you reuse usernames.

To address this issue, we recommend that you incorporate user IDs into your CloudFormation templates. However, obtaining user IDs for this purpose can be challenging. This is where custom resources can be helpful. You can use CloudFormation custom resources to extend the service's functionality by integrating with AWS APIs or external services. By creating a custom resource that fetches the user ID for a given IAM user, you can make the user ID available within your CloudFormation templates. This approach streamlines the process of referencing user IDs and ensures that your automation workflows remain robust and future-proof.

## Prerequisites and limitations
<a name="use-user-ids-iam-policies-access-control-automation-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ An IAM role for a cloud administrator to run the CloudFormation template

**Limitations**
+ Some AWS services aren’t available in all AWS Regions. For Region availability, see [AWS services by Region](https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/). For specific endpoints, see the [Service endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/aws-service-information.html) page, and choose the link for the service.

## Architecture
<a name="use-user-ids-iam-policies-access-control-automation-architecture"></a>

**Target architecture**

The following diagram shows how CloudFormation uses a custom resource backed by AWS Lambda to retrieve the IAM user ID.

![\[Getting the IAM user ID by using a CloudFormation custom resource.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/71698647-274e-4911-92f0-549e444b53f6/images/7e507df4-f597-499e-bd5b-6d7a55e64146.png)


**Automation and scale**

You can use the CloudFormation template multiple times for different AWS Regions and accounts. You need to run it only once in each Region or account.

## Tools
<a name="use-user-ids-iam-policies-access-control-automation-tools"></a>

**AWS services**
+ [IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) – AWS Identity and Access Management (IAM) is a web service that helps you securely control access to AWS resources. You use IAM to control who is authenticated (signed in) and authorized (has permissions) to use resources.
+ [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) – AWS CloudFormation helps you model and set up your AWS resources so that you can spend less time managing those resources and more time focusing on your applications that run on AWS. You create a template that describes the AWS resources that you want, and CloudFormation takes care of provisioning and configuring those resources for you.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) – AWS Lambda is a compute service that supports running code without provisioning or managing servers. Lambda runs your code only when needed and scales automatically, from a few requests per day to thousands per second. 

## Best practices
<a name="use-user-ids-iam-policies-access-control-automation-best-practices"></a>

If you're starting from scratch or planning a greenfield deployment, we strongly recommend that you use [AWS IAM Identity Center](https://docs.aws.amazon.com/singlesignon/latest/userguide/what-is.html) for centralized user management. IAM Identity Center integrates with your existing identity providers (such as Active Directory or Okta) to federate user identities on AWS, which eliminates the need to create and manage IAM users directly. This approach not only ensures consistent access control but also simplifies user lifecycle management and helps enhance security and compliance across your AWS environment.

## Epics
<a name="use-user-ids-iam-policies-access-control-automation-epics"></a>

### Validate permissions
<a name="validate-permissions"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Validate your AWS account and IAM role. | Confirm that you have an IAM role with permissions to deploy CloudFormation templates in your AWS account. If you're planning to use the AWS CLI instead of the CloudFormation console to deploy the template in the last step of this procedure, you should also set up temporary credentials to run AWS CLI commands. For instructions, see the [IAM documentation](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_use-resources.html#using-temp-creds-sdk-cli). | Cloud architect | 

### Build a CloudFormation template
<a name="build-a-cfnshort-template"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a CloudFormation template. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/use-user-ids-iam-policies-access-control-automation.html) | AWS DevOps, Cloud architect | 
| Add an input parameter for the username. | Add the following code to the `Parameters` section of the CloudFormation template:<pre>Parameters:<br />  NewIamUserName:<br />    Type: String<br />    Description: Unique username for the new IAM user<br /></pre>This parameter prompts the user for the username. | AWS DevOps, Cloud architect | 
| Add a custom resource to create an IAM user. | Add the following code to the `Resources` section of the CloudFormation template:<pre>Resources:<br />  rNewIamUser:<br />    Type: 'AWS::IAM::User'<br />    Properties:<br />      UserName: !Ref NewIamUserName<br /></pre>This code adds a CloudFormation resource that creates an IAM user with the name provided by the `NewIamUserName` parameter. | AWS DevOps, Cloud architect | 
| Add an execution role for the Lambda function. | In this step, you  create an IAM role that grants an AWS Lambda function permission to get the IAM `UserId`. Specify the following minimum required permissions for Lambda to run:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/use-user-ids-iam-policies-access-control-automation.html)For instructions on creating an execution role, see the [Lambda documentation](https://docs.aws.amazon.com/lambda/latest/dg/lambda-intro-execution-role.html). You will reference this role in the next step, when you create the Lambda function. | AWS administrator, Cloud architect | 
| Add a Lambda function to get the unique IAM `UserId`. | In this step, you define a Lambda function with a Python runtime to get the unique IAM `UserId`. To do this, add the following code to the `Resources` section of the CloudFormation template. Replace `<<ROLENAME>>` with the name of the execution role that you created in the last step.<pre>  GetUserLambdaFunction:<br />    Type: 'AWS::Lambda::Function'<br />    Properties:<br />      Handler: index.handler<br />      Role: <<ROLENAME>><br />      Timeout: 30<br />      Runtime: python3.11<br />      Code:<br />        ZipFile: |<br />          import cfnresponse, boto3<br />          def handler(event, context):<br />            try:<br />              print(event)<br />              user = boto3.client('iam').get_user(UserName=event['ResourceProperties']['NewIamUserName'])['User']<br />              cfnresponse.send(event, context, cfnresponse.SUCCESS, {'NewIamUserId': user['UserId'], 'NewIamUserPath': user['Path'], 'NewIamUserArn': user['Arn']})<br />            except Exception as e:<br />              cfnresponse.send(event, context, cfnresponse.FAILED, {'NewIamUser': str(e)})<br /></pre> | AWS DevOps, Cloud architect | 
| Add a custom resource. | Add the following code to the `Resources` section of the CloudFormation template:<pre>  rCustomGetUniqueUserId:<br />    Type: 'Custom::rCustomGetUniqueUserIdWithLambda'<br />    Properties:<br />      ServiceToken: !GetAtt GetUserLambdaFunction.Arn<br />      NewIamUserName: !Ref NewIamUserName<br /></pre>This custom resource calls the Lambda function to get the IAM `UserId`. | AWS DevOps, Cloud architect | 
| Define CloudFormation outputs. | Add the following code to the `Outputs` section of the CloudFormation template:<pre>Outputs:<br />  NewIamUserId:<br />    Value: !GetAtt rCustomGetUniqueUserId.NewIamUserId<br /></pre>This displays the IAM `UserId` for the new IAM user. | AWS DevOps, Cloud architect | 
| Save the template. | Save your changes to the CloudFormation template. | AWS DevOps, Cloud architect | 

### Deploy the CloudFormation template
<a name="deploy-the-cfnshort-template"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Deploy the CloudFormation template. | To deploy the `get_unique_user_id.yaml` template by using the CloudFormation console, follow the instructions in the [CloudFormation documentation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-console-create-stack.html). Alternatively, you can run the following AWS CLI command to deploy the template:<pre>aws cloudformation create-stack \<br />--stack-name DemoNewUser \<br />--template-body file://get_unique_user_id.yaml \<br />--parameters ParameterKey=NewIamUserName,ParameterValue=demouser \<br />--capabilities CAPABILITY_NAMED_IAM</pre> | AWS DevOps, Cloud architect | 
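
After the stack reaches the `CREATE_COMPLETE` status, you can read the `NewIamUserId` output and use it in your policies or scripts. The following command is one way to do that, assuming the `DemoNewUser` stack name from the previous step:

```
aws cloudformation describe-stacks \
    --stack-name DemoNewUser \
    --query "Stacks[0].Outputs[?OutputKey=='NewIamUserId'].OutputValue" \
    --output text
```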

## Related resources
<a name="use-user-ids-iam-policies-access-control-automation-resources"></a>
+ [Create a stack from the CloudFormation console](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-console-create-stack.html) (CloudFormation documentation)
+ [Lambda-backed custom resources](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-custom-resources-lambda.html) (CloudFormation documentation)
+ [Unique identifiers](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_identifiers.html#identifiers-unique-ids) (IAM documentation)
+ [Use temporary credentials with AWS resources](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_use-resources.html) (IAM documentation)

# Validate Account Factory for Terraform (AFT) code locally
<a name="validate-account-factory-for-terraform-aft-code-locally"></a>

*Alexandru Pop and Michal Gorniak, Amazon Web Services*

## Summary
<a name="validate-account-factory-for-terraform-aft-code-locally-summary"></a>

This pattern shows how to locally test HashiCorp Terraform code that’s managed by AWS Control Tower Account Factory for Terraform (AFT). Terraform is an infrastructure as code (IaC) tool that helps you use code to provision and manage cloud infrastructure and resources. AFT sets up a Terraform pipeline that helps you provision and customize multiple AWS accounts in AWS Control Tower.

During code development, it can be helpful to test your Terraform infrastructure as code (IaC) locally, outside of the AFT pipeline. This pattern shows how to do the following:
+ Retrieve a local copy of the Terraform code that’s stored in the AWS CodeCommit repositories in your AFT management account.
+ Simulate the AFT pipeline locally by using the retrieved code.

This procedure can also be used to run Terraform commands that aren’t part of the normal AFT pipeline. For example, you can use this method to run commands such as `terraform validate`, `terraform plan`, `terraform destroy`, and `terraform import`.

## Prerequisites and limitations
<a name="validate-account-factory-for-terraform-aft-code-locally-prereqs"></a>

**Prerequisites**
+ An active AWS multi-account environment that uses [AWS Control Tower](https://aws.amazon.com/controltower)
+ A fully deployed [AFT environment](https://docs.aws.amazon.com/controltower/latest/userguide/taf-account-provisioning.html)
+ AWS Command Line Interface (AWS CLI), [installed](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) and [configured](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html)
+ [AWS CLI credential helper for AWS CodeCommit](https://docs.aws.amazon.com/codecommit/latest/userguide/setting-up-https-unixes.html), installed and configured
+ Python 3.x
+ [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git), installed and configured on your local machine
+ `git-remote-codecommit` utility, [installed and configured](https://docs.aws.amazon.com/codecommit/latest/userguide/setting-up-git-remote-codecommit.html#setting-up-git-remote-codecommit-install)
+ [Terraform](https://learn.hashicorp.com/collections/terraform/aws-get-started?utm_source=WEBSITE&utm_medium=WEB_IO&utm_offer=ARTICLE_PAGE&utm_content=DOCS), installed and configured (the local Terraform package version must match the version that’s used in the AFT deployment)

**Limitations**
+ This pattern doesn’t cover the deployment steps required for AWS Control Tower, AFT, or any specific Terraform modules.
+ The output that’s generated locally during this procedure isn’t saved in the AFT pipeline runtime logs.

## Architecture
<a name="validate-account-factory-for-terraform-aft-code-locally-architecture"></a>

**Target technology stack**
+ AFT infrastructure deployed within an AWS Control Tower deployment
+ Terraform
+ Git
+ AWS CLI version 2

**Automation and scale**

This pattern shows how to locally invoke Terraform code for AFT global account customizations in a single AFT-managed AWS account. After your Terraform code is validated, you can apply it to the remaining accounts in your multi-account environment. For more information, see [Re-invoke customizations](https://docs.aws.amazon.com/controltower/latest/userguide/aft-account-customization-options.html#aft-re-invoke-customizations) in the AWS Control Tower documentation.

You can also use a similar process to run AFT account customizations in a local terminal. To locally invoke Terraform code from AFT account customizations, clone the **aft-account-customizations** repository instead of the **aft-global-customizations** repository from CodeCommit in your AFT management account.
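
For example, with the `git-remote-codecommit` helper from the prerequisites installed, a clone command might look like the following. The Region and repository name shown here are placeholders; use the Region of your AFT management account and the repository name from your AFT deployment:

```
git clone codecommit::us-east-1://aft-global-customizations
```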

## Tools
<a name="validate-account-factory-for-terraform-aft-code-locally-tools"></a>

**AWS services**
+ [AWS Control Tower](https://docs.aws.amazon.com/controltower/latest/userguide/what-is-control-tower.html) helps you set up and govern an AWS multi-account environment, following prescriptive best practices.
+ [AWS Command Line Interface (AWS CLI)](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) is an open-source tool that helps you interact with AWS services through commands in your command-line shell.

**Other services**
+ [HashiCorp Terraform](https://www.terraform.io/docs) is an infrastructure as code (IaC) tool that helps you use code to provision and manage cloud infrastructure and resources.
+ [Git](https://git-scm.com/docs) is an open-source, distributed version control system.

**Code**

The following is an example bash script that can be used to locally run Terraform code that’s managed by AFT. To use the script, follow the instructions in the [Epics](#validate-account-factory-for-terraform-aft-code-locally-epics) section of this pattern.

```
#! /bin/bash
# Version: 1.1 2022-06-24 Unsetting AWS_PROFILE since, when set, it interferes with script operation
#          1.0 2022-02-02 Initial Version
#
# Purpose: For use with AFT: This script runs the local copy of TF code as if it were running within the AFT pipeline.
#        * Facilitates testing of what the AFT pipeline will do
#           * Provides the ability to run terraform with custom arguments (like 'plan' or 'move') which are currently not supported within the pipeline.
#
# © 2021 Amazon Web Services, Inc. or its affiliates. All Rights Reserved.
# This AWS Content is provided subject to the terms of the AWS Customer Agreement
# available at http://aws.amazon.com/agreement or other written agreement between
# Customer and either Amazon Web Services, Inc. or Amazon Web Services EMEA SARL or both.
#
# Note: Arguments to this script are passed directly to 'terraform' without parsing or validation by this script.
#
# Prerequisites:
#    1. local copy of ct GIT repositories
#    2. local backend.tf and aft-providers.tf filled with data for the target account on which terraform is to be run
#       Hint: The contents of the above files can be obtained from the logs of a previous execution of the AFT pipeline for the target account.
#    3. 'terraform' binary is available in local PATH
#    4. Recommended: .gitignore file containing 'backend.tf', 'aft-providers.tf' so the local copies of these files are not pushed back to Git

readonly credentials=$(aws sts assume-role \
    --role-arn arn:aws:iam::$(aws sts get-caller-identity --query "Account" --output text ):role/AWSAFTAdmin \
    --role-session-name AWSAFT-Session \
    --query Credentials )

unset AWS_PROFILE
export AWS_ACCESS_KEY_ID=$(echo $credentials | jq -r '.AccessKeyId')
export AWS_SECRET_ACCESS_KEY=$(echo $credentials | jq -r '.SecretAccessKey')
export AWS_SESSION_TOKEN=$(echo $credentials | jq -r '.SessionToken')
terraform "$@"
```
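
As a usage sketch, assuming that you save the script as `ct_terraform.sh` (the file name used in the Epics section) in the repository folder that contains your local `backend.tf` and `aft-providers.tf` files, you can pass any Terraform command through it:

```
chmod +x ct_terraform.sh
./ct_terraform.sh init
./ct_terraform.sh plan
```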

## Epics
<a name="validate-account-factory-for-terraform-aft-code-locally-epics"></a>

### Save the example code as a local file
<a name="save-the-example-code-as-a-local-file"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Save the example code as a local file. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/validate-account-factory-for-terraform-aft-code-locally.html) | AWS administrator | 
| Make the example code runnable. | Open a terminal window and authenticate into your AWS AFT management account by doing one of the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/validate-account-factory-for-terraform-aft-code-locally.html)Your organization might also have a custom tool to provide authentication credentials to your AWS environment. | AWS administrator | 
| Verify access to AFT management account in the correct AWS Region. | Make sure that you use the same terminal session with which you authenticated into your AFT management account.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/validate-account-factory-for-terraform-aft-code-locally.html) | AWS administrator | 
| Create a new, local directory to store the AFT repository code. | In the same terminal session, run the following commands:<pre>mkdir my_aft <br />cd my_aft</pre> | AWS administrator | 
| Clone the remote AFT repository code. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/validate-account-factory-for-terraform-aft-code-locally.html) | AWS administrator | 

### Create the Terraform configuration files required for the AFT pipeline to run locally
<a name="create-the-terraform-configuration-files-required-for-the-aft-pipeline-to-run-locally"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Open a previously run AFT pipeline and copy the Terraform configuration files to a local folder. | The `backend.tf` and `aft-providers.tf` configuration files that are created in this epic are needed for the AFT pipeline to run locally. These files are created automatically within the cloud-based AFT pipeline, but must be created manually for the pipeline to run locally. Running the AFT pipeline locally requires one set of files that represent running the pipeline within a single AWS account.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/validate-account-factory-for-terraform-aft-code-locally.html)**Example auto-generated backend.tf statement**<pre>## Autogenerated backend.tf ##<br />## Updated on: 2022-05-31 16:27:45 ##<br />terraform {<br />  required_version = ">= 0.15.0"<br />  backend "s3" {<br />    region         = "us-east-2"<br />    bucket         = "aft-backend-############-primary-region"<br />    key            = "############-aft-global-customizations/terraform.tfstate"<br />    dynamodb_table = "aft-backend-############"<br />    encrypt        = "true"<br />    kms_key_id     = "########-####-####-####-############"<br />    role_arn       = "arn:aws:iam::#############:role/AWSAFTExecution"<br />  }<br />}</pre>The `backend.tf` and `aft-providers.tf` files are tied to a specific AWS account, AFT deployment, and folder. These files are also different, depending on whether they’re in the **aft-global-customizations** or the **aft-account-customizations** repository within the same AFT deployment. Make sure that you generate both files from the same pipeline run. | AWS administrator | 

### Run the AFT pipeline locally by using the example bash script
<a name="run-the-aft-pipeline-locally-by-using-the-example-bash-script"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Implement the Terraform configuration changes that you want to validate. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/validate-account-factory-for-terraform-aft-code-locally.html) | AWS administrator | 
| Run the `ct_terraform.sh` script and review the output. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/validate-account-factory-for-terraform-aft-code-locally.html)[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/validate-account-factory-for-terraform-aft-code-locally.html) | AWS administrator | 

### Push your local code changes back to the AFT repository
<a name="push-your-local-code-changes-back-to-the-aft-repository"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Add references to the `backend.tf` and `aft-providers.tf` files to a `.gitignore` file. | Add the `backend.tf` and `aft-providers.tf` files that you created to a `.gitignore` file by running the following commands:<pre>echo backend.tf >> .gitignore<br />echo aft-providers.tf >> .gitignore</pre>Adding the files to the `.gitignore` file ensures that they don’t get committed and pushed back to the remote AFT repository. | AWS administrator | 
| Commit and push your code changes to the remote AFT repository. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/validate-account-factory-for-terraform-aft-code-locally.html)The code changes that you introduce by following this procedure up until this point are applied to one AWS account only. | AWS administrator | 

### Roll out the changes to multiple accounts
<a name="roll-out-the-changes-to-multiple-accounts"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Roll out the changes to all of your accounts that are managed by AFT. | To roll out the changes to multiple AWS accounts that are managed by AFT, follow the instructions in [Re-invoke customizations](https://docs.aws.amazon.com/controltower/latest/userguide/aft-account-customization-options.html#aft-re-invoke-customizations) in the AWS Control Tower documentation. | AWS administrator | 

# More patterns
<a name="infrastructure-more-patterns-pattern-list"></a>

**Topics**
+ [Add HA to Oracle PeopleSoft on Amazon RDS Custom by using a read replica](add-ha-to-oracle-peoplesoft-on-amazon-rds-custom-by-using-a-read-replica.md)
+ [Automatically audit AWS security groups that allow access from public IP addresses](audit-security-groups-access-public-ip.md)
+ [Automate account creation by using the Landing Zone Accelerator on AWS](automate-account-creation-lza.md)
+ [Automate adding or updating Windows registry entries using AWS Systems Manager](automate-adding-or-updating-windows-registry-entries-using-aws-systems-manager.md)
+ [Automate AWS resource assessment](automate-aws-resource-assessment.md)
+ [Automate AWS Service Catalog portfolio and product deployment by using AWS CDK](automate-aws-service-catalog-portfolio-and-product-deployment-by-using-aws-cdk.md)
+ [Automate cross-Region failover and failback by using DR Orchestrator Framework](automate-cross-region-failover-and-failback-by-using-dr-orchestrator-framework.md)
+ [Automate deletion of AWS CloudFormation stacks and associated resources](automate-deletion-cloudformation-stacks-associated-resources.md)
+ [Automate ingestion and visualization of Amazon MWAA custom metrics on Amazon Managed Grafana by using Terraform](automate-ingestion-and-visualization-of-amazon-mwaa-custom-metrics.md)
+ [Automate RabbitMQ configuration in Amazon MQ](automate-rabbitmq-configuration-in-amazon-mq.md)
+ [Automate AWS Supply Chain data lakes deployment in a multi-repository setup](automate-the-deployment-of-aws-supply-chain-data-lakes.md)
+ [Automate the replication of Amazon RDS instances across AWS accounts](automate-the-replication-of-amazon-rds-instances-across-aws-accounts.md)
+ [Automatically attach an AWS managed policy for Systems Manager to EC2 instance profiles using Cloud Custodian and AWS CDK](automatically-attach-an-aws-managed-policy-for-systems-manager-to-ec2-instance-profiles-using-cloud-custodian-and-aws-cdk.md)
+ [Automatically build CI/CD pipelines and Amazon ECS clusters for microservices using AWS CDK](automatically-build-ci-cd-pipelines-and-amazon-ecs-clusters-for-microservices-using-aws-cdk.md)
+ [Automatically detect changes and initiate different CodePipeline pipelines for a monorepo in CodeCommit](automatically-detect-changes-and-initiate-different-codepipeline-pipelines-for-a-monorepo-in-codecommit.md)
+ [Build a data pipeline to ingest, transform, and analyze Google Analytics data using the AWS DataOps Development Kit](build-a-data-pipeline-to-ingest-transform-and-analyze-google-analytics-data-using-the-aws-dataops-development-kit.md)
+ [Build a Micro Focus Enterprise Server PAC with Amazon EC2 Auto Scaling and Systems Manager](build-a-micro-focus-enterprise-server-pac-with-amazon-ec2-auto-scaling-and-systems-manager.md)
+ [Build and push Docker images to Amazon ECR using GitHub Actions and Terraform](build-and-push-docker-images-to-amazon-ecr-using-github-actions-and-terraform.md)
+ [Build an AWS landing zone that includes MongoDB Atlas](build-aws-landing-zone-that-includes-mongodb-atlas.md)
+ [Centralize IAM access key management in AWS Organizations by using Terraform](centralize-iam-access-key-management-in-aws-organizations-by-using-terraform.md)
+ [Centralize software package distribution in AWS Organizations by using Terraform](centralize-software-package-distribution-in-aws-organizations-by-using-terraform.md)
+ [Configure model invocation logging in Amazon Bedrock by using AWS CloudFormation](configure-bedrock-invocation-logging-cloudformation.md)
+ [Configure read-only routing in Always On availability groups in SQL Server on AWS](configure-read-only-routing-in-an-always-on-availability-group-in-sql-server-on-aws.md)
+ [Create a portal for micro-frontends by using AWS Amplify, Angular, and Module Federation](create-amplify-micro-frontend-portal.md)
+ [Create an API-driven resource orchestration framework using GitHub Actions and Terragrunt](create-an-api-driven-resource-orchestration-framework-using-github-actions-and-terragrunt.md)
+ [Create a cross-account Amazon EventBridge connection in an organization](create-cross-account-amazon-eventbridge-connection-organization.md)
+ [Create dynamic CI pipelines for Java and Python projects automatically](create-dynamic-ci-pipelines-for-java-and-python-projects-automatically.md)
+ [Deploy an Amazon API Gateway API on an internal website using private endpoints and an Application Load Balancer](deploy-an-amazon-api-gateway-api-on-an-internal-website-using-private-endpoints-and-an-application-load-balancer.md)
+ [Deploy and manage AWS Control Tower controls by using AWS CDK and CloudFormation](deploy-and-manage-aws-control-tower-controls-by-using-aws-cdk-and-aws-cloudformation.md)
+ [Deploy and manage AWS Control Tower controls by using Terraform](deploy-and-manage-aws-control-tower-controls-by-using-terraform.md)
+ [Deploy CloudWatch Synthetics canaries by using Terraform](deploy-cloudwatch-synthetics-canaries-by-using-terraform.md)
+ [Deploy a CockroachDB cluster in Amazon EKS by using Terraform](deploy-cockroachdb-on-eks-using-terraform.md)
+ [Deploy a Lustre file system for high-performance data processing by using Terraform and DRA](deploy-lustre-file-system-for-high-performance-data-processing-with-terraform-dra.md)
+ [Deploy a RAG use case on AWS by using Terraform and Amazon Bedrock](deploy-rag-use-case-on-aws.md)
+ [Deploy resources in an AWS Wavelength Zone by using Terraform](deploy-resources-wavelength-zone-using-terraform.md)
+ [Deploy SQL Server failover cluster instances on Amazon EC2 and Amazon FSx by using Terraform](deploy-sql-server-failover-cluster-instances-on-amazon-ec2-and-amazon-fsx.md)
+ [Deploy the Security Automations for AWS WAF solution by using Terraform](deploy-the-security-automations-for-aws-waf-solution-by-using-terraform.md)
+ [Detect Amazon RDS and Aurora database instances that have expiring CA certificates](detect-rds-instances-expiring-certificates.md)
+ [Document your AWS landing zone design](document-your-aws-landing-zone-design.md)
+ [Export AWS Backup reports from across an organization in AWS Organizations as a CSV file](export-aws-backup-reports-from-across-an-organization-in-aws-organizations-as-a-csv-file.md)
+ [Generate personalized and re-ranked recommendations using Amazon Personalize](generate-personalized-and-re-ranked-recommendations-using-amazon-personalize.md)
+ [Govern permission sets for multiple accounts by using Account Factory for Terraform](govern-permission-sets-aft.md)
+ [Identify and alert when Amazon Data Firehose resources are not encrypted with an AWS KMS key](identify-and-alert-when-amazon-data-firehose-resources-are-not-encrypted-with-an-aws-kms-key.md)
+ [Implement Account Factory for Terraform (AFT) by using a bootstrap pipeline](implement-account-factory-for-terraform-aft-by-using-a-bootstrap-pipeline.md)
+ [Implement path-based API versioning by using custom domains in Amazon API Gateway](implement-path-based-api-versioning-by-using-custom-domains.md)
+ [Install SSM Agent on Amazon EKS worker nodes by using Kubernetes DaemonSet](install-ssm-agent-on-amazon-eks-worker-nodes-by-using-kubernetes-daemonset.md)
+ [Install the SSM Agent and CloudWatch agent on Amazon EKS worker nodes using preBootstrapCommands](install-the-ssm-agent-and-cloudwatch-agent-on-amazon-eks-worker-nodes-using-prebootstrapcommands.md)
+ [Manage AWS IAM Identity Center permission sets as code by using AWS CodePipeline](manage-aws-iam-identity-center-permission-sets-as-code-by-using-aws-codepipeline.md)
+ [Manage AWS permission sets dynamically by using Terraform](manage-aws-permission-sets-dynamically-by-using-terraform.md)
+ [Manage AWS Service Catalog products in multiple AWS accounts and AWS Regions](manage-aws-service-catalog-products-in-multiple-aws-accounts-and-aws-regions.md)
+ [Manage on-premises container applications by setting up Amazon ECS Anywhere with the AWS CDK](manage-on-premises-container-applications-by-setting-up-amazon-ecs-anywhere-with-the-aws-cdk.md)
+ [Manage AWS Organizations policies as code by using AWS CodePipeline and Amazon Bedrock](manage-organizations-policies-as-code.md)
+ [Migrate DNS records in bulk to an Amazon Route 53 private hosted zone](migrate-dns-records-in-bulk-to-an-amazon-route-53-private-hosted-zone.md)
+ [Migrate Oracle PeopleSoft to Amazon RDS Custom](migrate-oracle-peoplesoft-to-amazon-rds-custom.md)
+ [Migrate RHEL BYOL systems to AWS License-Included instances by using AWS MGN](migrate-rhel-byol-systems-to-aws-license-included-instances-by-using-aws-mgn.md)
+ [Monitor Amazon ElastiCache clusters for at-rest encryption](monitor-amazon-elasticache-clusters-for-at-rest-encryption.md)
+ [Monitor application activity by using CloudWatch Logs Insights](monitor-application-activity-by-using-cloudwatch-logs-insights.md)
+ [Monitor SAP RHEL Pacemaker clusters by using AWS services](monitor-sap-rhel-pacemaker-clusters-by-using-aws-services.md)
+ [Create a hierarchical, multi-Region IPAM architecture on AWS by using Terraform](multi-region-ipam-architecture.md)
+ [Optimize multi-account serverless deployments by using the AWS CDK and GitHub Actions workflows](optimize-multi-account-serverless-deployments.md)
+ [Provision AWS Service Catalog products based on AWS CloudFormation templates by using GitHub Actions](provision-aws-service-catalog-products-using-github-actions.md)
+ [Provision least-privilege IAM roles by deploying a role vending machine solution](provision-least-privilege-iam-roles-by-deploying-a-role-vending-machine-solution.md)
+ [Remove Amazon EC2 entries across AWS accounts from AWS Managed Microsoft AD by using AWS Lambda automation](remove-amazon-ec2-entries-across-aws-accounts-from-aws-managed-microsoft-ad.md)
+ [Remove Amazon EC2 entries in the same AWS account from AWS Managed Microsoft AD by using AWS Lambda automation](remove-amazon-ec2-entries-in-the-same-aws-account-from-aws-managed-microsoft-ad.md)
+ [Secure file transfers by using Transfer Family, Amazon Cognito, and GuardDuty](secure-file-transfers.md)
+ [Send a notification when an IAM user is created](send-a-notification-when-an-iam-user-is-created.md)
+ [Set up a serverless cell router for a cell-based architecture](serverless-cell-router-architecture.md)
+ [Set up a CI/CD pipeline for hybrid workloads on Amazon ECS Anywhere by using AWS CDK and GitLab](set-up-a-ci-cd-pipeline-for-hybrid-workloads-on-amazon-ecs-anywhere-by-using-aws-cdk-and-gitlab.md)
+ [Set up an HA/DR architecture for Oracle E-Business Suite on Amazon RDS Custom with an active standby database](set-up-an-ha-dr-architecture-for-oracle-e-business-suite-on-amazon-rds-custom-with-an-active-standby-database.md)
+ [Set up DNS resolution for hybrid networks in a multi-account AWS environment](set-up-dns-resolution-for-hybrid-networks-in-a-multi-account-aws-environment.md)
+ [Set up Multi-AZ infrastructure for a SQL Server Always On FCI by using Amazon FSx](set-up-multi-az-infrastructure-for-a-sql-server-always-on-fci-by-using-amazon-fsx.md)
+ [Set up Oracle UTL\_FILE functionality on Aurora PostgreSQL-Compatible](set-up-oracle-utl_file-functionality-on-aurora-postgresql-compatible.md)
+ [Simplify application authentication with mutual TLS in Amazon ECS by using Application Load Balancer](simplify-application-authentication-with-mutual-tls-in-amazon-ecs.md)
+ [Simplify private certificate management by using AWS Private CA and AWS RAM](simplify-private-certificate-management-by-using-aws-private-ca-and-aws-ram.md)
+ [Streamline machine learning workflows from local development to scalable experiments by using SageMaker AI and Hydra](streamline-machine-learning-workflows-by-using-amazon-sagemaker.md)
+ [Tag Transit Gateway attachments automatically using AWS Organizations](tag-transit-gateway-attachments-automatically-using-aws-organizations.md)
+ [Transition roles for an Oracle PeopleSoft application on Amazon RDS Custom for Oracle](transition-roles-for-an-oracle-peoplesoft-application-on-amazon-rds-custom-for-oracle.md)
+ [Use Amazon Q Developer as a coding assistant to increase your productivity](use-q-developer-as-coding-assistant-to-increase-productivity.md)

# Web & mobile apps
<a name="websitesandwebapps-pattern-list"></a>

**Topics**
+ [Authenticate existing React application users by using Amazon Cognito and AWS Amplify UI](authenticate-react-app-users-cognito-amplify-ui.md)
+ [Create a React app by using AWS Amplify and add authentication with Amazon Cognito](create-a-react-app-by-using-aws-amplify-and-add-authentication-with-amazon-cognito.md)
+ [Create a portal for micro-frontends by using AWS Amplify, Angular, and Module Federation](create-amplify-micro-frontend-portal.md)
+ [Deploy a React-based single-page application to Amazon S3 and CloudFront](deploy-a-react-based-single-page-application-to-amazon-s3-and-cloudfront.md)
+ [Deploy an Amazon API Gateway API on an internal website using private endpoints and an Application Load Balancer](deploy-an-amazon-api-gateway-api-on-an-internal-website-using-private-endpoints-and-an-application-load-balancer.md)
+ [Embed Amazon Quick Sight visual components into web applications by using Amazon Cognito and IaC automation](embed-quick-sight-visual-components-into-web-apps-cognito-iac.md)
+ [Explore full-stack cloud-native web application development with Green Boost](explore-full-stack-cloud-native-web-application-development-with-green-boost.md)
+ [Structure a Python project in hexagonal architecture using AWS Lambda](structure-a-python-project-in-hexagonal-architecture-using-aws-lambda.md)
+ [More patterns](websitesandwebapps-more-patterns-pattern-list.md)

# Authenticate existing React application users by using Amazon Cognito and AWS Amplify UI
<a name="authenticate-react-app-users-cognito-amplify-ui"></a>

*Daniel Kozhemyako, Amazon Web Services*

## Summary
<a name="authenticate-react-app-users-cognito-amplify-ui-summary"></a>

This pattern shows how to add authentication capabilities to an existing frontend React application by using the AWS Amplify UI library and an Amazon Cognito user pool.

The pattern uses Amazon Cognito to provide authentication, authorization, and user management for the application. It also uses a component from [Amplify UI](https://ui.docs.amplify.aws/react/getting-started/introduction), an open-source library that extends the capabilities of AWS Amplify to user interface (UI) development. The [Authenticator UI](https://ui.docs.amplify.aws/react/connected-components/authenticator/advanced) component manages login sessions and runs the cloud-connected workflow that authenticates users through Amazon Cognito.

After you implement this pattern, users can sign in by using any of the following credentials:
+ User name and password
+ Social identity providers, such as Apple, Facebook, Google, and Amazon
+ Enterprise identity providers that are either SAML 2.0 compatible or OpenID Connect (OIDC) compatible

**Note**  
To create a custom authentication UI component, you can run the Authenticator UI component in headless mode.

## Prerequisites and limitations
<a name="authenticate-react-app-users-cognito-amplify-ui-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ A React 18.2.0 or later web application
+ Node.js and npm 6.14.4 or later, [installed](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm)

**Limitations**
+ This pattern applies to React web applications only.
+ This pattern uses a prebuilt Amplify UI component. The solution doesn’t cover the steps required to implement a custom UI component.

**Product versions**
+ Amplify UI 6.1.3 or later (Gen 1)
+ Amplify 6.0.16 or later (Gen 1)

## Architecture
<a name="authenticate-react-app-users-cognito-amplify-ui-architecture"></a>

**Target architecture**

The following diagram shows an architecture that uses Amazon Cognito to authenticate users for a React web application.

![\[Amazon Cognito authenticates users for a React web application.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/b2cea053-6931-4404-8aa8-c623ce2024ac/images/b7f69f20-a39d-4a78-8605-7dab73c59052.png)


## Tools
<a name="authenticate-react-app-users-cognito-amplify-ui-tools"></a>

**AWS services**
+ [Amazon Cognito](https://docs.aws.amazon.com/cognito/latest/developerguide/what-is-amazon-cognito.html) provides authentication, authorization, and user management for web and mobile apps.

**Other tools**
+ [Amplify UI](https://ui.docs.amplify.aws/react/getting-started/introduction) is an open-source UI library that provides customizable components that you can connect to the cloud.
+ [Node.js](https://nodejs.org/en/docs/) is an event-driven JavaScript runtime environment designed for building scalable network applications.
+ [npm](https://docs.npmjs.com/about-npm) is a software registry that runs in a Node.js environment and is used to share or borrow packages and manage deployment of private packages.

## Best practices
<a name="authenticate-react-app-users-cognito-amplify-ui-best-practices"></a>

If you're building a new application, we recommend that you use Amplify Gen 2.

## Epics
<a name="authenticate-react-app-users-cognito-amplify-ui-epics"></a>

### Create an Amazon Cognito user pool
<a name="create-an-cog-user-pool"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a user pool. | [Create an Amazon Cognito user pool](https://docs.aws.amazon.com/cognito/latest/developerguide/tutorial-create-user-pool.html). Configure the user pool’s sign-in options and security requirements to fit your use case. | App developer | 
| Add an app client. | [Configure a user pool app client](https://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-settings-client-apps.html). This client is required for your application to interact with the Amazon Cognito user pool. | App developer | 

### Integrate your Amazon Cognito user pool with the Authenticator UI component
<a name="integrate-your-cog-user-pool-with-the-authenticator-ui-component"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Install dependencies. | To install the `aws-amplify` and `@aws-amplify/ui-react` packages, run the following command from your application’s root directory:<pre>npm i @aws-amplify/ui-react aws-amplify</pre> | App developer | 
| Configure the user pool. | Based on the following example, create an `aws-exports.js` file and save it in the `src` folder. The file should include the following information:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/authenticate-react-app-users-cognito-amplify-ui.html)<pre>// replace the user pool region, id, and app client id details<br />const awsmobile = {<br />    "aws_project_region": "put_your_region_here",<br />    "aws_cognito_region": "put_your_region_here",<br />    "aws_user_pools_id": "put_your_user_pool_id_here",<br />    "aws_user_pools_web_client_id": "put_your_user_pool_app_id_here"<br />}<br /><br />export default awsmobile;</pre> | App developer | 
| Import and configure the Amplify service. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/authenticate-react-app-users-cognito-amplify-ui.html) | App developer | 
| Add the Authenticator UI component. | To display the `Authenticator` UI component, add the following lines of code to the application’s entry point file (`App.js`):<pre>import { Authenticator } from '@aws-amplify/ui-react';<br />import '@aws-amplify/ui-react/styles.css';</pre>The example code snippet imports the `Authenticator` UI component and the Amplify UI styles.css file, which is required when using the component’s prebuilt themes. The `Authenticator` UI component provides two return values:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/authenticate-react-app-users-cognito-amplify-ui.html) See the following example component:<pre>function App() {<br />    return (<br />        <Authenticator><br />            {({ signOut, user }) => (<br />                <div><br />                    <p>Welcome {user.username}</p><br />                    <button onClick={signOut}>Sign out</button><br />                </div><br />            )}<br />        </Authenticator><br />    );<br />}</pre>For an example `App.js` file, see the [Additional information](#authenticate-react-app-users-cognito-amplify-ui-additional) section of this pattern. | App developer | 
| (Optional) Retrieve session information. | After a user is authenticated, you can retrieve data from the Amplify client about their session. For example, you can retrieve the JSON web token (JWT) from a user’s session so that you can authenticate the requests from their session to a backend API. See the following example of retrieving the JWT from the current session:<pre>import { fetchAuthSession } from 'aws-amplify/auth';<br />(await fetchAuthSession()).tokens?.idToken?.toString();</pre>For a sketch of sending this token in a request header, see the example after this table. | App developer | 
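
The following is a minimal sketch, not part of the pattern’s required steps, that shows one way to attach the retrieved ID token to a request header. The `https://example.com/items` endpoint and the `callBackendApi` function name are illustrative placeholders.

```
import { fetchAuthSession } from 'aws-amplify/auth';

// Illustrative helper: retrieve the current user's ID token and send it in
// the Authorization header so a backend API (for example, one protected by an
// Amazon Cognito authorizer in Amazon API Gateway) can validate the caller.
async function callBackendApi() {
  const session = await fetchAuthSession();
  const idToken = session.tokens?.idToken?.toString();

  const response = await fetch('https://example.com/items', {
    headers: { Authorization: idToken },
  });
  return response.json();
}
```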

## Troubleshooting
<a name="authenticate-react-app-users-cognito-amplify-ui-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| New users can’t sign up for the application. | Make sure that your Amazon Cognito user pool is configured to allow users to sign themselves up:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/authenticate-react-app-users-cognito-amplify-ui.html) | 
| Auth component stopped working after upgrading from v5 to v6. | The `Auth` category has moved to a functional approach and named parameters in Amplify v6. You must now import the functional APIs directly from the `aws-amplify/auth` path. For more information, see [Migrate from v5 to v6](https://docs.amplify.aws/gen1/react/build-a-backend/auth/auth-migration-guide/) in the Amplify documentation. | 

## Related resources
<a name="authenticate-react-app-users-cognito-amplify-ui-resources"></a>
+ [Getting started with Amazon Cognito](https://aws.amazon.com/cognito/getting-started/) (AWS website)
+ [Create a new React app](https://reactjs.org/docs/create-a-new-react-app.html) (React documentation)
+ [What is Amazon Cognito?](https://docs.aws.amazon.com/cognito/latest/developerguide/what-is-amazon-cognito.html) (Amazon Cognito documentation)
+ [Amplify UI library](https://ui.docs.amplify.aws/) (Amplify documentation)

## Additional information
<a name="authenticate-react-app-users-cognito-amplify-ui-additional"></a>

The `App.js` file should contain the following code:

```
import './App.css';
import { useEffect, useState } from 'react';
import { Amplify } from 'aws-amplify';
import awsExports from './aws-exports';
import { fetchAuthSession } from 'aws-amplify/auth';
import { Authenticator } from '@aws-amplify/ui-react';
import '@aws-amplify/ui-react/styles.css';

Amplify.configure({ ...awsExports });

// Rendered only after the user signs in, so an authenticated session exists.
function Home({ signOut, user }) {
  const [token, setToken] = useState('');

  useEffect(() => {
    // Retrieve the ID token (JWT) from the current session.
    fetchAuthSession().then((session) => {
      setToken(session.tokens?.idToken?.toString() ?? '');
    });
  }, []);

  return (
    <div>
      <p>Welcome {user.username}</p>
      <p>Your token is: {token}</p>
      <button onClick={signOut}>Sign out</button>
    </div>
  );
}

function App() {
  return (
    <Authenticator>
      {({ signOut, user }) => <Home signOut={signOut} user={user} />}
    </Authenticator>
  );
}

export default App;
```

# Create a React app by using AWS Amplify and add authentication with Amazon Cognito
<a name="create-a-react-app-by-using-aws-amplify-and-add-authentication-with-amazon-cognito"></a>

*Rishi Singla, Amazon Web Services*

## Summary
<a name="create-a-react-app-by-using-aws-amplify-and-add-authentication-with-amazon-cognito-summary"></a>

This pattern demonstrates how to use AWS Amplify to create a React-based app and how to add authentication to the frontend by using Amazon Cognito. AWS Amplify consists of a set of tools (open source framework, visual development environment, console) and services (web app and static website hosting) to accelerate the development of mobile and web apps on AWS. 

## Prerequisites and limitations
<a name="create-a-react-app-by-using-aws-amplify-and-add-authentication-with-amazon-cognito-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ [Node.js](https://nodejs.org/en/download/) and [npm](https://www.npmjs.com/get-npm) installed on your machine

**Product versions**
+ Node.js version 10.x or later (to verify your version, run `node -v` in a terminal window)
+ npm version 6.x or later (to verify your version, run `npm -v` in a terminal window)

## Architecture
<a name="create-a-react-app-by-using-aws-amplify-and-add-authentication-with-amazon-cognito-architecture"></a>

**Target technology stack**
+ AWS Amplify
+ Amazon Cognito

## Tools
<a name="create-a-react-app-by-using-aws-amplify-and-add-authentication-with-amazon-cognito-tools"></a>
+ [Amplify Command Line Interface (CLI)](https://docs.amplify.aws/cli/)
+ [Amplify Libraries](https://docs.amplify.aws/lib/q/platform/react-native/) (open source client libraries)
+ [Amplify Studio](https://docs.amplify.aws/console/) (visual interface)

## Epics
<a name="create-a-react-app-by-using-aws-amplify-and-add-authentication-with-amazon-cognito-epics"></a>

### Install AWS Amplify CLI
<a name="install-aws-amplify-cli"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Install the Amplify CLI. | The Amplify CLI is a unified toolchain for creating AWS Cloud services for your React app. To install the Amplify CLI, run:<pre>npm install -g @aws-amplify/cli</pre>npm will notify you if a new major version is available. If so, use the following command to upgrade your version of npm:<pre>npm install -g npm@9.8.0</pre>where 9.8.0 refers to the version you want to install. | App developer | 

### Create a React app
<a name="create-a-react-app"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a React app. | To create a new React app, use the command:<pre>npx create-react-app amplify-react-application</pre>where `amplify-react-application` is the name of the app. When the app has been created successfully, you will see the message:<pre>Success! Created amplify-react-application</pre>A directory with various subfolders will be created for the React app. | App developer | 
| Launch the app on your local machine. | Go to the directory `amplify-react-application` that was created in the previous step and run the command:<pre>amplify-react-application% npm start</pre>This launches the React app on your local machine. | App developer | 

### Configure the Amplify CLI
<a name="configure-the-amplify-cli"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Configure Amplify to connect to your AWS account. | Configure Amplify by running the command:<pre>amplify-react-application % amplify configure</pre>The Amplify CLI asks you to follow these steps to set up access to your AWS account:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-a-react-app-by-using-aws-amplify-and-add-authentication-with-amazon-cognito.html)This scenario requires IAM users with programmatic access and long-term credentials, which present a security risk. To help mitigate this risk, we recommend that you provide these users with only the permissions they require to perform the task and that you remove these users when they are no longer needed. Access keys can be updated if necessary. For more information, see [Updating access keys](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html#Using_RotateAccessKey) in the *IAM User Guide*.These steps appear in the terminal as follows.<pre>Follow these steps to set up access to your AWS account:<br />Sign in to your AWS administrator account:<br />https://console.aws.amazon.com/<br />Press Enter to continue<br />Specify the AWS Region<br />? region:  us-east-1<br />Follow the instructions at<br />https://docs.amplify.aws/cli/start/install/#configure-the-amplify-cli<br />to complete the user creation in the AWS console<br />https://console.aws.amazon.com/iamv2/home#/users/create<br />Press Enter to continue<br />Enter the access key of the newly created user:<br />? accessKeyId:  ********************<br />? secretAccessKey:  ****************************************<br />This would update/create the AWS Profile in your local machine<br />? Profile Name:  new<br /><br />Successfully set up the new user.</pre>For more information about these steps, see the [documentation](https://docs.amplify.aws/cli/start/install/#configure-the-amplify-cli) in the Amplify Dev Center. | General AWS, App developer | 

### Initialize Amplify
<a name="initialize-amplify"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Initialize Amplify. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-a-react-app-by-using-aws-amplify-and-add-authentication-with-amazon-cognito.html) | App developer, General AWS | 

### Add authentication to the frontend
<a name="add-authentication-to-the-frontend"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Add authentication. | You can use the `amplify add <category>` command to add features such as a user login or a backend API. In this step, you use the command to add authentication. Amplify provides a backend authentication service with Amazon Cognito, frontend libraries, and a drop-in Authenticator UI component. Features include user sign-up, user sign-in, multi-factor authentication, user sign-out, and passwordless sign-in. You can also authenticate users by integrating with federated identity providers such as Amazon, Google, and Facebook. The Amplify authentication category integrates seamlessly with other Amplify categories such as API, analytics, and storage, so you can define authorization rules for authenticated and unauthenticated users.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-a-react-app-by-using-aws-amplify-and-add-authentication-with-amazon-cognito.html) | App developer, General AWS | 

### Change the App.js file
<a name="change-the-app-js-file"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Change the App.js file. | In the `src` folder, open and revise the `App.js` file. The modified file should look like this:<pre>import React from 'react';<br />import logo from './logo.svg';<br />import './App.css';<br />import { Amplify } from 'aws-amplify';<br />import { withAuthenticator } from '@aws-amplify/ui-react';<br />import '@aws-amplify/ui-react/styles.css';<br />import awsconfig from './aws-exports';<br />Amplify.configure(awsconfig);<br />function App({ signOut }) {<br />  return (<br />    <div><br />      <h1>Thank you for doing verification</h1><br />      <h2>My Content</h2><br />      <button onClick={signOut}>Sign out</button><br />    </div><br />  );<br />}<br />export default withAuthenticator(App);</pre> | App developer | 
| Install the Amplify packages. | The `App.js` file imports two Amplify packages. Install these packages by using the command:<pre>amplify-react-application1 % npm install --save aws-amplify @aws-amplify/ui-react</pre> | App developer | 

### Launch the React app and check authentication
<a name="launch-the-react-app-and-check-authentication"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Launch the app. | Launch the React app on your local machine:<pre>amplify-react-application1 % npm start</pre> | App developer, General AWS | 
| Check authentication. | Check whether the app prompts for authentication parameters. (In our example, we’ve configured email as the sign-in method.) The frontend UI should prompt you for sign-in credentials and provide an option to create an account. You can also configure the Amplify build process to add the backend as part of a continuous deployment workflow. However, this pattern doesn’t cover that option. | App developer, General AWS | 

## Related resources
<a name="create-a-react-app-by-using-aws-amplify-and-add-authentication-with-amazon-cognito-resources"></a>
+ [Getting started](https://docs.npmjs.com/getting-started) (npm documentation)
+ [Create a standalone AWS account](https://docs.aws.amazon.com/accounts/latest/reference/manage-acct-creating.html) (AWS Account Management documentation) 
+ [AWS Amplify documentation](https://docs.aws.amazon.com/amplify/latest/userguide/welcome.html)
+ [Amazon Cognito documentation](https://docs.aws.amazon.com/cognito/latest/developerguide/what-is-amazon-cognito.html)

# Create a portal for micro-frontends by using AWS Amplify, Angular, and Module Federation
<a name="create-amplify-micro-frontend-portal"></a>

*Milena Godau and Pedro Garcia, Amazon Web Services*

## Summary
<a name="create-amplify-micro-frontend-portal-summary"></a>

A micro-frontend architecture enables multiple teams to work on different parts of a frontend application independently. Each team can develop, build, and deploy a fragment of the frontend without interfering with other parts of the application. From the end user's perspective, it appears to be a single, cohesive application. However, they are interacting with several independent applications that are published by different teams.

This document describes how to create a micro-frontend architecture by using [AWS Amplify](https://docs.amplify.aws/gen1/angular/), the [Angular](https://angular.dev/overview) frontend framework, and [Module Federation](https://webpack.js.org/concepts/module-federation/). In this pattern, the micro-frontends are combined on the client side by a shell (or *parent*) application. The shell application acts as a container that retrieves, displays, and integrates the micro-frontends. The shell application handles the global routing, which loads different micro-frontends. The [@angular-architects/module-federation](https://www.npmjs.com/package/@angular-architects/module-federation) plugin integrates Module Federation with Angular. You deploy the shell application and micro-frontends by using AWS Amplify. End users access the application through a web-based portal.

The portal is split vertically. This means that the micro-frontends are entire views or groups of views, instead of parts of the same view. Therefore the shell application loads only one micro-frontend at a time.

The micro-frontends are implemented as remote modules. The shell application lazily loads these remote modules, which defers the micro-frontend initialization until it is required. This approach optimizes application performance by loading only the necessary modules. This reduces the initial load time and improves the overall user experience. Additionally, you share common dependencies across modules through the webpack configuration file (**webpack.config.js**). This practice promotes code reuse, reduces duplication, and streamlines the bundling process.
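
For illustration, the following sketch shows the general shape of a micro-frontend’s **webpack.config.js** that exposes a module and shares common Angular dependencies. It uses the webpack 5 `ModuleFederationPlugin` directly; the `mfe1` name, the exposed module path, and the shared package list are assumptions for this example, and the @angular-architects/module-federation plugin generates a similar configuration for you when you run its `ng add` schematic.

```
// A minimal sketch of a micro-frontend's webpack.config.js, assuming webpack 5
// Module Federation. Names such as mfe1 and the exposed module path are
// illustrative placeholders.
const { ModuleFederationPlugin } = require('webpack').container;

module.exports = {
  output: {
    publicPath: 'auto',
  },
  plugins: [
    new ModuleFederationPlugin({
      name: 'mfe1',               // unique name of this remote
      filename: 'remoteEntry.js', // the bundle that the shell application loads
      exposes: {
        // Expose the micro-frontend's Angular module to the shell application.
        './Mfe1Module': './src/app/mfe1/mfe1.module.ts',
      },
      shared: {
        // Share common dependencies so that they are loaded only once.
        '@angular/core': { singleton: true, strictVersion: true },
        '@angular/common': { singleton: true, strictVersion: true },
        '@angular/router': { singleton: true, strictVersion: true },
      },
    }),
  ],
};
```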

## Prerequisites and limitations
<a name="create-amplify-micro-frontend-portal-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ Node.js and npm, [installed](https://nodejs.org/en/download/)
+ Amplify CLI, [installed](https://docs.amplify.aws/gen1/angular/tools/cli/)
+ Angular CLI, [installed](https://angular.io/cli)
+ [Permissions](https://docs.aws.amazon.com/service-authorization/latest/reference/list_awsamplify.html) to use AWS Amplify
+ Familiarity with Angular

**Product versions**
+ Angular CLI version 13.1.2 or later
+ @angular-architects/module-federation version 14.0.1 or later
+ webpack version 5.4.0 or later
+ AWS Amplify Gen 1

**Limitations**

A micro-frontend architecture is a powerful approach for building scalable and resilient web applications. However, it's crucial to understand the following potential challenges before adopting this approach:
+ **Integration** – One of the key challenges is the potential increase in complexity compared to monolithic frontends. Orchestrating multiple micro-frontends, handling communication between them, and managing shared dependencies can be more intricate. Additionally, there may be a performance overhead associated with communication between the micro-frontends. This communication can increase latency and reduce performance. This needs to be addressed through efficient messaging mechanisms and data-sharing strategies.
+ **Code duplication** – Because each micro-frontend is developed independently, there is a risk of duplicating code for common functionality or shared libraries. This can increase the overall application size and introduce maintenance challenges.
+ **Coordination and management** – Coordinating the development and deployment processes across multiple micro-frontends can be challenging. Ensuring consistent versioning, managing dependencies, and maintaining compatibility between components becomes more critical in a distributed architecture. Establishing clear governance, guidelines, and automated testing and deployment pipelines is essential for seamless collaboration and delivery.
+ **Testing** – Testing micro-frontend architectures can be more complex than testing monolithic frontends. It requires additional effort and specialized testing strategies to perform cross-component integration testing and end-to-end testing, and to validate consistent user experiences across multiple micro-frontends.

Before committing to the micro-frontend approach, we recommend that you review [Understanding and implementing micro-frontends on AWS](https://docs.aws.amazon.com/prescriptive-guidance/latest/micro-frontends-aws/introduction.html).

## Architecture
<a name="create-amplify-micro-frontend-portal-architecture"></a>

In a micro-frontend architecture, each team develops and deploys features independently. The following image shows how multiple DevOps teams work together. The portal team develops the shell application. The shell application acts as a container. It retrieves, displays, and integrates the micro-frontend applications that are published by other DevOps teams. You use AWS Amplify to publish the shell application and micro-frontend applications.

![\[Publishing multiple micro-frontends to a shell app that the user accesses through a web portal.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/ddf82a69-bf1b-4ad1-8e60-3dd375699936/images/cf045bf1-11ea-46d9-93cb-3c603122450d.png)


The architecture diagram shows the following workflow:

1. The portal team develops and maintains the shell application. The shell application orchestrates the integration and rendering of the micro-frontends in order to compose the overall portal.

1. Teams A and B develop and maintain one or more micro-frontends or features that are integrated into the portal. Each team can work independently on their respective micro-frontends.

1. The end user authenticates by using Amazon Cognito.

1. The end user accesses the portal, and the shell application is loaded. As the user navigates, the shell application deals with the routing and retrieves the requested micro-frontend, loading its bundle.

## Tools
<a name="create-amplify-micro-frontend-portal-tools"></a>

**AWS services**
+ [AWS Amplify](https://docs.amplify.aws/angular/start/) is a set of purpose-built tools and features that helps frontend web and mobile developers quickly build full-stack applications on AWS. In this pattern, you use the Amplify CLI to deploy the Amplify micro-frontend applications.
+ [AWS Command Line Interface (AWS CLI)](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) is an open source tool that helps you interact with AWS services through commands in your command-line shell.

**Other tools**
+ [@angular-architects/module-federation](https://github.com/angular-architects/module-federation-plugin) is a plugin that integrates Angular with Module Federation.
+ [Angular](https://angular.dev/overview) is an open source web application framework for building modern, scalable, and testable single-page applications. It follows a modular and component-based architecture that promotes code reuse and maintenance.
+ [Node.js](https://nodejs.org/en/docs/) is an event-driven JavaScript runtime environment designed for building scalable network applications.
+ [npm](https://docs.npmjs.com/about-npm) is a software registry that runs in a Node.js environment and is used to share or borrow packages and manage deployment of private packages.
+ [Webpack Module Federation](https://webpack.js.org/concepts/module-federation/) helps you load code that is independently compiled and deployed, such as micro-frontends or plugins, into an application.

**Code repository**

The code for this pattern is available in the [Micro-frontend portal using Angular and Module Federation](https://github.com/aws-samples/angular-module-federation-mfe) GitHub repository. This repository contains the following two folders:
+ `shell-app` contains the code for the shell application.
+ `feature1-app` contains a sample micro-frontend. The shell application fetches this micro-frontend and displays it as a page within the portal application.

## Best practices
<a name="create-amplify-micro-frontend-portal-best-practices"></a>

Micro-frontend architectures offer numerous advantages, but they also introduce complexity. The following are some best practices for smooth development, high-quality code, and a great user experience:
+ **Planning and communication** – To streamline collaboration, invest in upfront planning, design, and clear communication channels.
+ **Design consistency** – Enforce a consistent visual style across micro-frontends by using design systems, style guides, and component libraries. This provides a cohesive user experience and accelerates development.
+ **Dependency management** – Because micro-frontends evolve independently, adopt standardized contracts and versioning strategies to manage dependencies effectively and prevent compatibility issues.
+ **Micro-frontend architecture** – To enable independent development and deployment, each micro-frontend should have a clear and well-defined responsibility for an encapsulated functionality.
+ **Integration and communication** – To facilitate smooth integration and minimize conflicts, define clear contracts and communication protocols between micro-frontends, including APIs, events, and shared data models.
+ **Testing and quality assurance** – Implement test automation and continuous integration pipelines for micro-frontends. This improves the overall quality, reduces manual testing effort, and validates functionality between micro-frontend interactions.
+ **Performance optimization** – Continuously monitor performance metrics and track dependencies between micro-frontends. This helps you identify bottlenecks and maintain optimal application performance. Use performance monitoring and dependency analysis tools for this purpose.
+ **Developer experience** – Focus on the developer experience by providing clear documentation, tooling, and examples. This helps you streamline development and onboard new team members.

## Epics
<a name="create-amplify-micro-frontend-portal-epics"></a>

### Create the shell application
<a name="create-the-shell-application"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the shell application. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-amplify-micro-frontend-portal.html) | App developer | 
| Install the plugin. | In the Angular CLI, enter the following command to install the [@angular-architects/module-federation](https://www.npmjs.com/package/@angular-architects/module-federation) plugin:<pre>ng add @angular-architects/module-federation --project shell --port 4200</pre> | App developer | 
| Add the micro-frontend URL as an environment variable. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-amplify-micro-frontend-portal.html) | App developer | 
| Define routing. | Configure the shell application’s routes to lazily load the micro-frontend. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-amplify-micro-frontend-portal.html) For an illustrative routing sketch, see the example after this table. | App developer | 
| Declare the `mfe1` module. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-amplify-micro-frontend-portal.html) | App developer | 
| Prepare preloading for the micro-frontend. | Preloading the micro-frontend helps the webpack properly negotiate the shared libraries and packages.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-amplify-micro-frontend-portal.html) | App developer | 
| Adjust the HTML content. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-amplify-micro-frontend-portal.html) | App developer | 
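
The exact routing code is in the linked pattern details. As a point of reference, the following sketch shows one way the shell application can lazily load the micro-frontend through an Angular route, assuming the `loadRemoteModule` helper from @angular-architects/module-federation. The `./Mfe1Module` name is an illustrative placeholder, `environment.mfe1URL` is the micro-frontend URL that the Troubleshooting section also references, and the exact `loadRemoteModule` options vary by plugin version.

```
import { Routes } from '@angular/router';
import { loadRemoteModule } from '@angular-architects/module-federation';
import { environment } from '../environments/environment';

// Illustrative shell routes: the mfe1 route defers loading the remote module
// until the user navigates to it.
export const routes: Routes = [
  {
    path: 'mfe1',
    loadChildren: () =>
      loadRemoteModule({
        type: 'module',
        // Assumes environment.mfe1URL ends with "/".
        remoteEntry: environment.mfe1URL + 'remoteEntry.js',
        exposedModule: './Mfe1Module',
      }).then((m) => m.Mfe1Module),
  },
];
```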

### Create the micro-frontend application
<a name="create-the-micro-frontend-application"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the micro-frontend. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-amplify-micro-frontend-portal.html) | App developer | 
| Install the plugin. | Enter the following command to install the @angular-architects/module-federation plugin:<pre>ng add @angular-architects/module-federation --project mfe1 --port 5000</pre> | App developer | 
| Create a module and component. | Enter the following commands to create a module and component and export them as the remote entry module:<pre>ng g module mfe1 --routing<br />ng g c mfe1</pre> | App developer | 
| Set the default routing path. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-amplify-micro-frontend-portal.html) | App developer | 
| Add the `mfe1` route. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-amplify-micro-frontend-portal.html) | App developer | 
| Edit the **webpack.config.js** file. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-amplify-micro-frontend-portal.html) | App developer | 
| Adjust the HTML content. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-amplify-micro-frontend-portal.html) | App developer | 

### Run the applications locally
<a name="run-the-applications-locally"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Run the `mfe1` application. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-amplify-micro-frontend-portal.html) | App developer | 
| Run the shell application. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-amplify-micro-frontend-portal.html) | App developer | 

### Refactor the shell application to handle a micro-frontend loading error
<a name="refactor-the-shell-application-to-handle-a-micro-frontend-loading-error"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a module and component. | In the root folder of the shell application, enter the following commands to create a module and component for an error page:<pre>ng g module error-page --routing<br />ng g c error-page</pre> | App developer | 
| Adjust the HTML content. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-amplify-micro-frontend-portal.html) | App developer | 
| Set the default routing path. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-amplify-micro-frontend-portal.html) | App developer | 
| Create a function to load micro-frontends. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-amplify-micro-frontend-portal.html) | App developer | 
| Test the error handling. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-amplify-micro-frontend-portal.html) | App developer | 

### Deploy the applications by using AWS Amplify
<a name="deploy-the-applications-by-using-amplifylong"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Deploy the micro-frontend. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-amplify-micro-frontend-portal.html) | App developer, AWS DevOps | 
| Deploy the shell application. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-amplify-micro-frontend-portal.html) | App developer, App owner | 
| Enable CORS. | Because the shell and micro-frontend applications are hosted independently on different domains, you must enable cross-origin resource sharing (CORS) on the micro-frontend. This allows the shell application to load the content from a different origin. To enable CORS, you add custom headers.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-amplify-micro-frontend-portal.html) | App developer, AWS DevOps | 
| Create a rewrite rule on the shell application. | The Angular shell application is configured to use HTML5 routing. If the user performs a hard refresh, Amplify tries to load a page from the current URL. This generates a 403 error. To avoid this, you add a rewrite rule in the Amplify console.To create the rewrite rule, follow these steps:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-amplify-micro-frontend-portal.html) | App developer, AWS DevOps | 
| Test the web portal. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-amplify-micro-frontend-portal.html) | App developer | 

### Clean up resources
<a name="clean-up-resources"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Delete the applications. | If you no longer need the shell and micro-frontend applications, delete them. This helps prevent charges for resources that you aren't using.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-amplify-micro-frontend-portal.html) | General AWS | 

## Troubleshooting
<a name="create-amplify-micro-frontend-portal-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| No AWS profile available when running the `amplify init` command | If you don't have an AWS profile configured, you can still proceed with the `amplify init` command. However, you need to select the `AWS access keys` option when you're prompted for the authentication method. Have your AWS access key and secret key available. Alternatively, you can configure a named profile for the AWS CLI. For instructions, see [Configuration and credential file settings](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html) in the AWS CLI documentation. | 
| Error loading remote entries | If you encounter an error when loading the remote entries in the **main.ts** file of the shell application, make sure that the `environment.mfe1URL` variable is set correctly. The value of this variable should be the URL of the micro-frontend. | 
| 404 error when accessing the micro-frontend | If you get a 404 error when trying to access the local micro-frontend, such as at `http://localhost:4200/mfe1`, check the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-amplify-micro-frontend-portal.html) | 

## Additional information
<a name="create-amplify-micro-frontend-portal-additional"></a>

**AWS documentation**
+ [Understanding and implementing micro-frontends on AWS](https://docs.aws.amazon.com/prescriptive-guidance/latest/micro-frontends-aws/introduction.html) (AWS Prescriptive Guidance)
+ [Amplify CLI](https://docs.amplify.aws/gen1/angular/tools/cli/) (Amplify documentation)
+ [Amplify Hosting](https://docs.aws.amazon.com/amplify/latest/userguide/welcome.html) (Amplify documentation)

**Other references**
+ [Module Federation](https://webpack.js.org/concepts/module-federation/)
+ [Node.js](https://nodejs.org/en/)
+ [Angular](https://angular.io/)
+ [@angular-architects/module-federation](https://www.npmjs.com/package/@angular-architects/module-federation)

# Deploy a React-based single-page application to Amazon S3 and CloudFront
<a name="deploy-a-react-based-single-page-application-to-amazon-s3-and-cloudfront"></a>

*Jean-Baptiste Guillois, Amazon Web Services*

## Summary
<a name="deploy-a-react-based-single-page-application-to-amazon-s3-and-cloudfront-summary"></a>

A single-page application (SPA) is a website or web application that dynamically updates the contents of a displayed webpage by using JavaScript APIs. This approach enhances the user experience and performance of a website because it updates only new data instead of reloading the entire webpage from the server.

This pattern provides a step-by-step approach to coding and hosting an SPA that’s written in React on Amazon Simple Storage Service (Amazon S3) and Amazon CloudFront. The SPA in this pattern uses a REST API that’s configured in Amazon API Gateway and exposed through an Amazon CloudFront distribution to simplify [cross-origin resource sharing (CORS)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/cors.html) management.

## Prerequisites and limitations
<a name="deploy-a-react-based-single-page-application-to-amazon-s3-and-cloudfront-prereqs"></a>

**Prerequisites**
+ An active AWS account.
+ Node.js and `npm`, installed and configured. For more information, see the [Downloads](https://nodejs.org/en/download/) section of the Node.js documentation.
+ Yarn, installed and configured. For more information, see the [Yarn documentation](https://classic.yarnpkg.com/lang/en/docs/install/#windows-stable).
+ Git, installed and configured. For more information, see the [Git documentation](https://github.com/git-guides/install-git).

## Architecture
<a name="deploy-a-react-based-single-page-application-to-amazon-s3-and-cloudfront-architecture"></a>

![\[Architecture for deploying a React-based SPA to Amazon S3 and CloudFront\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/970a9d13-e8a2-44ac-aca5-a066e4be60e8/images/96061e05-8ac8-446e-b1da-baa6fc1cc7b6.png)


This architecture is automatically deployed by using AWS CloudFormation (infrastructure as code). It uses Regional services such as Amazon S3 to store the static assets and Amazon CloudFront with Amazon API Gateway to expose Regional API (REST) endpoints. The application logs are collected by using Amazon CloudWatch. All AWS API calls are audited in AWS CloudTrail. All security configuration (for example, identities and permissions) is managed in AWS Identity and Access Management (IAM). Static content is delivered through the Amazon CloudFront content delivery network (CDN), and DNS queries are handled by Amazon Route 53.

## Tools
<a name="deploy-a-react-based-single-page-application-to-amazon-s3-and-cloudfront-tools"></a>

**AWS services**
+ [Amazon API Gateway](https://docs.aws.amazon.com/apigateway/latest/developerguide/welcome.html) helps you create, publish, maintain, monitor, and secure REST, HTTP, and WebSocket APIs at any scale.
+ [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) helps you set up AWS resources, provision them quickly and consistently, and manage them throughout their lifecycle across AWS accounts and Regions.
+ [Amazon CloudFront](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Introduction.html) speeds up distribution of your web content by delivering it through a worldwide network of data centers, which lowers latency and improves performance.
+ [AWS CloudTrail](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-user-guide.html) helps you audit the governance, compliance, and operational risk of your AWS account.
+ [Amazon CloudWatch](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/WhatIsCloudWatch.html) helps you monitor the metrics of your AWS resources and the applications you run on AWS in real time.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.
+ [Amazon Route 53](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/) is a highly available and scalable DNS web service.
+ [Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/) is a cloud-based object storage service that helps you store, protect, and retrieve any amount of data.

**Code**

This pattern's sample application code is available in the GitHub [React-based CORS single-page application](https://github.com/aws-samples/react-cors-spa) repository.

## Best practices
<a name="deploy-a-react-based-single-page-application-to-amazon-s3-and-cloudfront-best-practices"></a>

By using Amazon S3 object storage, you can store your application’s static assets in a secure, highly resilient, performant, and cost-effective way. There is no need to use a dedicated container or an Amazon Elastic Compute Cloud (Amazon EC2) instance for this task.

By using the Amazon CloudFront content delivery network, you can reduce the latency your users might experience when they access your application. You can also attach a web application firewall ([AWS WAF](https://docs.aws.amazon.com/waf/latest/developerguide/cloudfront-features.html)) to protect your assets from malicious attacks.

## Epics
<a name="deploy-a-react-based-single-page-application-to-amazon-s3-and-cloudfront-epics"></a>

### Locally build and deploy your application
<a name="locally-build-and-deploy-your-application"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clone the repository. | Run the following command to clone the sample application's repository:<pre>git clone https://github.com/aws-samples/react-cors-spa react-cors-spa && cd react-cors-spa</pre> | App developer, AWS DevOps | 
| Locally deploy the application. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-a-react-based-single-page-application-to-amazon-s3-and-cloudfront.html) | App developer, AWS DevOps | 
| Locally access the application. | Open a browser window and enter the `http://localhost:3000` URL to access the application. | App developer, AWS DevOps | 

### Deploy the application
<a name="deploy-the-application"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Deploy the AWS CloudFormation template. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-a-react-based-single-page-application-to-amazon-s3-and-cloudfront.html) | App developer, AWS DevOps | 
| Customize your application source files. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-a-react-based-single-page-application-to-amazon-s3-and-cloudfront.html) | App developer | 
| Build the application package. | In your project directory, run the `yarn build` command to build the application package. | App developer | 
| Deploy the application package. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-a-react-based-single-page-application-to-amazon-s3-and-cloudfront.html) | App developer, AWS DevOps | 

### Test the application
<a name="test-the-application"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Access and test the application. | Open a browser window, and then paste the CloudFront distribution domain (the `SPADomain` output from the CloudFormation stack that you deployed previously) to access the application. | App developer, AWS DevOps | 

### Clean up the resources
<a name="clean-up-the-resources"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Delete the S3 bucket contents. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-a-react-based-single-page-application-to-amazon-s3-and-cloudfront.html) | AWS DevOps, App developer | 
| Delete the CloudFormation stack. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-a-react-based-single-page-application-to-amazon-s3-and-cloudfront.html) | AWS DevOps, App developer | 

## Related resources
<a name="deploy-a-react-based-single-page-application-to-amazon-s3-and-cloudfront-resources"></a>

To deploy and host your web application, you can also use [AWS Amplify Hosting](https://docs.aws.amazon.com/amplify/latest/userguide/getting-started.html), which provides a Git-based workflow for hosting full-stack, serverless web apps with continuous deployment. Amplify Hosting is part of [AWS Amplify](https://docs.aws.amazon.com/amplify/latest/userguide/welcome.html), which provides a set of purpose-built tools and features that enable frontend web and mobile developers to quickly and easily build full-stack applications on AWS.

## Additional information
<a name="deploy-a-react-based-single-page-application-to-amazon-s3-and-cloudfront-additional"></a>

To handle invalid URLs requested by the user, a custom error page that’s configured in the CloudFront distribution catches the resulting 403 errors and redirects them to the application entry point (`index.html`).

To simplify the management of CORS, the REST API is exposed through a CloudFront distribution.
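
Because the API and the static assets share the same CloudFront domain, the SPA can call the API with a same-origin, relative path. The following sketch illustrates this; the `/api/items` path and the `loadItems` function name are illustrative placeholders, not the sample application’s actual routes.

```
// Illustrative same-origin call: the request goes to the same CloudFront
// distribution that serves the SPA, so the browser does not apply
// cross-origin (CORS) restrictions. The /api/items path is a placeholder.
async function loadItems() {
  const response = await fetch('/api/items');
  if (!response.ok) {
    throw new Error(`API call failed with status ${response.status}`);
  }
  return response.json();
}
```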

# Deploy an Amazon API Gateway API on an internal website using private endpoints and an Application Load Balancer
<a name="deploy-an-amazon-api-gateway-api-on-an-internal-website-using-private-endpoints-and-an-application-load-balancer"></a>

*Saurabh Kothari, Amazon Web Services*

## Summary
<a name="deploy-an-amazon-api-gateway-api-on-an-internal-website-using-private-endpoints-and-an-application-load-balancer-summary"></a>

This pattern shows you how to deploy an Amazon API Gateway API on an internal website that’s accessible from an on-premises network. You learn to create a custom domain name for a private API by using an architecture that’s designed with private endpoints, an Application Load Balancer, AWS PrivateLink, and Amazon Route 53. This architecture prevents the unintended consequences of using a custom domain name and a proxy server for domain-based routing to an API. For example, if you deploy a virtual private cloud (VPC) endpoint in a non-routable subnet, your network can’t reach API Gateway. A common solution is to use a custom domain name and then deploy the API in a routable subnet, but this can break other internal sites when the proxy configuration passes traffic (`execute-api.{region}.vpce.amazonaws.com`) to AWS Direct Connect. Finally, this pattern can help you meet organizational requirements for using a private API that’s unreachable from the internet and a custom domain name.

## Prerequisites and limitations
<a name="deploy-an-amazon-api-gateway-api-on-an-internal-website-using-private-endpoints-and-an-application-load-balancer-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ A Server Name Indication (SNI) certificate for your website and API
+ A connection from an on-premises environment to an AWS account that’s set up by using AWS Direct Connect or AWS Site-to-Site VPN
+ A [private hosted zone](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/hosted-zones-private.html) with a corresponding domain (for example, domain.com) that’s resolved from an on-premises network and forwards DNS queries to Route 53
+ A routable private subnet that’s reachable from an on-premises network

**Limitations**

For more information about quotas (formerly referred to as limits) for load balancers, rules, and other resources, see [Quotas for your Application Load Balancers](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-limits.html) in the Elastic Load Balancing documentation.

## Architecture
<a name="deploy-an-amazon-api-gateway-api-on-an-internal-website-using-private-endpoints-and-an-application-load-balancer-architecture"></a>

**Technology stack**
+ Amazon API Gateway
+ Amazon Route 53
+ Application Load Balancer
+ AWS Certificate Manager
+ AWS PrivateLink

**Target architecture**

The following diagram shows how an Application Load Balancer that is deployed in a VPC directs web traffic to a website target group or an API Gateway target group, based on Application Load Balancer listener rules. The API Gateway target group is a list of IP addresses for the VPC endpoint in API Gateway. API Gateway is configured to make the API private with its resource policy. The policy denies all calls that are not from a specific VPC endpoint. Custom domain names in API Gateway are updated to use api.domain.com for the API and its stage. Application Load Balancer rules are added to route traffic based on the host name.

![\[Architecture that uses Application Load Balancer listener rules to direct web traffic.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/83145062-4535-4ad0-8947-4ea8950cd174/images/12715186-26ea-4123-b9ef-e3105a934ff3.png)


The diagram shows the following workflow:

1. A user from an on-premises network tries to access an internal website. The request is sent to ui.domain.com and api.domain.com. Then, the request is resolved to the internal Application Load Balancer of the routable private subnet. The SSL is terminated at the Application Load Balancer for ui.domain.com and api.domain.com.

1. Listener rules, configured on the Application Load Balancer, check for the host header.

   a. If the host header is api.domain.com, the request is forwarded to the API Gateway target group. The Application Load Balancer initiates a new connection to API Gateway over port 443.

   b. If the host header is ui.domain.com, the request is forwarded to the website target group.

1. When the request reaches API Gateway, the custom domain mapping configured in API Gateway determines the hostname and which API to run.

**Automation and scale**

The steps in this pattern can be automated by using AWS CloudFormation or the AWS Cloud Development Kit (AWS CDK). To configure the target group of the API Gateway calls, you must use a custom resource to retrieve the IP address of the VPC endpoint. API calls to [describe-vpc-endpoints](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/ec2/describe-vpc-endpoints.html) and [describe-network-interfaces](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/ec2/describe-network-interfaces.html) return the IP addresses and the security group, which can be used to create the API target group of IP addresses.
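
For example, a custom resource’s Lambda function could resolve the endpoint’s IP addresses with a lookup similar to the following sketch, which uses the AWS SDK for JavaScript (v3). The function name and the simplified error handling are assumptions for illustration.

```
import {
  EC2Client,
  DescribeVpcEndpointsCommand,
  DescribeNetworkInterfacesCommand,
} from '@aws-sdk/client-ec2';

// Illustrative lookup: resolve the private IP addresses of an API Gateway
// interface VPC endpoint so that they can be registered in the Application
// Load Balancer's IP target group.
export async function getVpcEndpointIps(vpcEndpointId) {
  const ec2 = new EC2Client({});

  // Find the elastic network interfaces (ENIs) that belong to the endpoint.
  const { VpcEndpoints } = await ec2.send(
    new DescribeVpcEndpointsCommand({ VpcEndpointIds: [vpcEndpointId] })
  );
  const eniIds = VpcEndpoints[0].NetworkInterfaceIds;

  // Each ENI exposes one private IP address to register as a target.
  const { NetworkInterfaces } = await ec2.send(
    new DescribeNetworkInterfacesCommand({ NetworkInterfaceIds: eniIds })
  );
  return NetworkInterfaces.map((eni) => eni.PrivateIpAddress);
}
```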

## Tools
<a name="deploy-an-amazon-api-gateway-api-on-an-internal-website-using-private-endpoints-and-an-application-load-balancer-tools"></a>
+ [Amazon API Gateway](https://docs.aws.amazon.com/apigateway/latest/developerguide/welcome.html) helps you create, publish, maintain, monitor, and secure REST, HTTP, and WebSocket APIs at any scale.
+ [Amazon Route 53](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/Welcome.html) is a highly available and scalable DNS web service.
+ [AWS Certificate Manager (ACM)](https://docs.aws.amazon.com/acm/latest/userguide/acm-overview.html) helps you create, store, and renew public and private SSL/TLS X.509 certificates and keys that protect your AWS websites and applications.
+ [AWS Cloud Development Kit (AWS CDK)](https://docs.aws.amazon.com/cdk/latest/guide/home.html) is a software development framework that helps you define and provision AWS Cloud infrastructure in code.
+ [AWS PrivateLink](https://docs.aws.amazon.com/vpc/latest/privatelink/what-is-privatelink.html) helps you create unidirectional, private connections from your VPCs to services outside of the VPC.

## Epics
<a name="deploy-an-amazon-api-gateway-api-on-an-internal-website-using-private-endpoints-and-an-application-load-balancer-epics"></a>

### Create an SNI certificate
<a name="create-an-sni-certificate"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an SNI certificate and import the certificate into ACM. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-an-amazon-api-gateway-api-on-an-internal-website-using-private-endpoints-and-an-application-load-balancer.html) | Network administrator | 

### Deploy a VPC endpoint in a non-routable private subnet
<a name="deploy-a-vpc-endpoint-in-a-non-routable-private-subnet"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an interface VPC endpoint in API Gateway. | To create an interface VPC endpoint, follow the instructions from [Access an AWS service using an interface VPC endpoint](https://docs.aws.amazon.com/vpc/latest/privatelink/create-interface-endpoint.html) in the Amazon Virtual Private Cloud (Amazon VPC) documentation. | Cloud administrator | 

### Configure the Application Load Balancer
<a name="configure-the-application-load-balancer"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a target group for your application. | [Create a target group](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/create-target-group.html) for the UI resources of your application. | Cloud administrator | 
| Create a target group for the API Gateway endpoint. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-an-amazon-api-gateway-api-on-an-internal-website-using-private-endpoints-and-an-application-load-balancer.html) | Cloud administrator | 
| Create an Application Load Balancer. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-an-amazon-api-gateway-api-on-an-internal-website-using-private-endpoints-and-an-application-load-balancer.html) | Cloud administrator | 
| Create listener rules. | Create [listener rules](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-listeners.html#listener-rules) to do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-an-amazon-api-gateway-api-on-an-internal-website-using-private-endpoints-and-an-application-load-balancer.html) | Cloud administrator | 

### Configure Route 53
<a name="configure-route-53"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a private hosted zone. | [Create a private hosted zone](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/hosted-zone-private-creating.html) for domain.com. | Cloud administrator | 
| Create domain records. | [Create CNAME records](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resource-record-sets-creating.html) for the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-an-amazon-api-gateway-api-on-an-internal-website-using-private-endpoints-and-an-application-load-balancer.html) | Cloud administrator | 

### Create a private API endpoint in API Gateway
<a name="create-a-private-api-endpoint-in-api-gateway"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create and configure a private API endpoint. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-an-amazon-api-gateway-api-on-an-internal-website-using-private-endpoints-and-an-application-load-balancer.html) | App developer, Cloud administrator | 
| Create a custom domain name. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-an-amazon-api-gateway-api-on-an-internal-website-using-private-endpoints-and-an-application-load-balancer.html) | Cloud administrator | 

## Related resources
<a name="deploy-an-amazon-api-gateway-api-on-an-internal-website-using-private-endpoints-and-an-application-load-balancer-resources"></a>
+ [Amazon API Gateway](https://aws.amazon.com/api-gateway/)
+ [Amazon Route 53](https://aws.amazon.com/route53/)
+ [Application Load Balancer](https://aws.amazon.com/elasticloadbalancing/application-load-balancer/)
+ [AWS PrivateLink](https://docs.aws.amazon.com/vpc/latest/privatelink/what-is-privatelink.html)
+ [AWS Certificate Manager](https://aws.amazon.com/certificate-manager/)

# Embed Amazon Quick Sight visual components into web applications by using Amazon Cognito and IaC automation
<a name="embed-quick-sight-visual-components-into-web-apps-cognito-iac"></a>

*Ishita Gupta, Saurabh Singh, and Srishti Wadhwa, Amazon Web Services*

## Summary
<a name="embed-quick-sight-visual-components-into-web-apps-cognito-iac-summary"></a>

This pattern delivers a specialized approach for embedding Amazon Quick Sight visual components into React applications by using registered user embedding with streamlined Amazon Cognito authentication. These resources are then deployed through an infrastructure as code (IaC) template. Unlike traditional dashboard embedding, this solution isolates specific charts and graphs for direct integration into React applications, which dramatically improves both performance and the user experience.

The architecture establishes an efficient authentication flow between Amazon Cognito user management and Quick Sight permissions: Users authenticate through Amazon Cognito and access their authorized visualizations based on the dashboard sharing rules in Quick Sight. This streamlined approach eliminates the need for direct Quick Sight console access while maintaining robust security controls.

The complete environment is deployed through a single AWS CloudFormation template that provisions all the necessary infrastructure components, including:
+ A serverless backend that uses AWS Lambda and Amazon API Gateway
+ Secure frontend hosting through Amazon CloudFront, Amazon Simple Storage Service (Amazon S3), and AWS WAF
+ Identity management by using Amazon Cognito

All components are configured by following security best practices with least-privilege AWS Identity and Access Management (IAM) policies, AWS WAF protection, and end-to-end encryption.

This solution is ideal for development teams and organizations that want to integrate secure, interactive analytics into their applications while maintaining fine-grained control over user access. The solution uses AWS managed services and automation to simplify the embedding process, enhance security, and ensure scalability to meet evolving business needs.

Target audience and use cases:
+ **Frontend developers** who want to embed analytics into React apps
+ **Software as a service (SaaS) product teams** that want to offer per-user or role-based data visualizations
+ **Solutions architects** who are interested in integrating AWS analytics into custom portals
+ **Business intelligence (BI) developers** who want to expose visuals to authenticated users without requiring full dashboard access
+ **Enterprise teams** that want to embed interactive Quick Sight charts within internal tools

## Prerequisites and limitations
<a name="embed-quick-sight-visual-components-into-web-apps-cognito-iac-prereqs"></a>

**Prerequisites**

To successfully implement this pattern, make sure that the following are in place:
+ **Active AWS account** – An AWS account with permissions to deploy CloudFormation stacks and create Lambda, API Gateway, Amazon Cognito, CloudFront, and Amazon S3 resources.
+ **Amazon Quick Sight account** – An active Quick Sight account with at least one dashboard that contains visuals. For setup instructions, see [Tutorial: Create an Amazon Quick Sight dashboard using sample data](https://docs.aws.amazon.com/quicksuite/latest/userguide/example-analysis.html) in the Amazon Quick Sight documentation.
+ **A development environment** that consists of:
  + Node.js (version 16 or later)
  + npm or yarn installed
  + Vite as the React build tool
  + React (version 19.1.1)
+ **Dashboard sharing** – Dashboards must be shared in Quick Sight and the implementer must log in to access the embedded visuals or dashboards.

**Limitations**
+ This pattern uses the registered user embedding method, which requires implementers to have an active Quick Sight account.
+ Access is restricted to the dashboards and visuals that are explicitly shared with the authenticated Quick Sight user who is implementing this pattern. If the implementer doesn’t have the correct access rights, the embed URL generation will fail and visuals won’t load.
+ The CloudFormation stack must be deployed in an AWS Region where Quick Sight, API Gateway, and Amazon Cognito are supported. For Region availability, see [AWS services by Region](https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/).

**Product versions**
+ [Quick Sight Embedding SDK](https://www.npmjs.com/package/amazon-quicksight-embedding-sdk) version 2.10.1
+ [React](https://www.npmjs.com/package/react) version 19.1.1
+ [Node.js](https://nodejs.org/en/download) version 16 or later to ensure compatibility with the latest React and Vite versions used in this solution

## Architecture
<a name="embed-quick-sight-visual-components-into-web-apps-cognito-iac-architecture"></a>

**Target architecture**

The following diagram shows the architecture and workflow for this pattern.

![\[Architecture and workflow for embedding Quick Sight visuals into a React application.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/75ad12b1-caaa-4532-b709-8f3eaf3f9cc0/images/d0905f61-9055-49cf-887d-f46f5ca6c871.png)


In this workflow:

1. **The user accesses the application**. The user opens the React web application by using a browser. The request is routed to a CloudFront distribution, which acts as a content delivery network for the application.

1. **AWS WAF filters malicious requests**. Before the request reaches CloudFront, it passes through AWS WAF. AWS WAF inspects the traffic and blocks any malicious or suspicious requests based on security rules.

1. **Amazon S3 serves static files**. If the request is clean, CloudFront retrieves the static frontend assets (HTML, JS, CSS) from a private S3 bucket by using origin access control (OAC) and delivers them to the browser.

1. **The user signs in**. After the application is loaded, the user signs in through Amazon Cognito, which authenticates the user and returns a secure JSON web token (JWT) for authorized API access.

1. **The application makes an API request**. After login, the React application makes a secure call to the `/get-embed-url` endpoint on API Gateway, and passes the JWT token in the request header for authentication.

1. **The token is validated**. API Gateway validates the token by using an Amazon Cognito authorizer. If the token is valid, the request proceeds; otherwise, it is denied with a 401 (unauthorized) response.

1. **The request is directed to Lambda for processing**. The validated request is then forwarded to a backend Lambda function. This function is responsible for generating the embed URL for the requested Quick Sight visual.

1. **Lambda generates the embed URL from Quick Sight**. The Lambda function uses an IAM role with appropriate permissions to call the Quick Sight `GenerateEmbedUrlForRegisteredUser` API and generate a secure, user-scoped visual URL (a minimal sketch follows this workflow).

1. **Lambda returns the embed URL to API Gateway**. Lambda sends the generated embed URL back to API Gateway as part of a JSON response. This response is then prepared for delivery to the frontend.

1. **The embed URL is sent to the browser**. The embed URL is returned to the browser as the API response.

1. **The visual is displayed to the user**. The React application receives the response and uses the Quick Sight Embedding SDK to render the specific visual to the user.
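
The following is a minimal sketch of the embed URL generation described in steps 7 through 9, written with boto3. The environment variable names are assumptions, and a production handler would derive the Quick Sight user ARN from the authenticated Amazon Cognito identity rather than from a static variable; see the sample repository for the actual implementation.

```
import json
import os

import boto3

quicksight = boto3.client("quicksight")


def lambda_handler(event, context):
    """Generate a registered-user embed URL for a single Quick Sight visual.

    The environment variable names and the static user ARN are illustrative
    placeholders; the sample repository's handler may be structured differently.
    """
    response = quicksight.generate_embed_url_for_registered_user(
        AwsAccountId=os.environ["AWS_ACCOUNT_ID"],
        UserArn=os.environ["QUICKSIGHT_USER_ARN"],
        AllowedDomains=[os.environ["CLOUDFRONT_DOMAIN"]],  # e.g. https://dxxxxxxxx.cloudfront.net
        SessionLifetimeInMinutes=60,
        ExperienceConfiguration={
            "DashboardVisual": {
                "InitialDashboardVisualId": {
                    "DashboardId": os.environ["DASHBOARD_ID"],
                    "SheetId": os.environ["SHEET_ID"],
                    "VisualId": os.environ["VISUAL_ID"],
                }
            }
        },
    )

    # Return the URL so the React frontend can render the visual with the
    # Quick Sight Embedding SDK.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"embedUrl": response["EmbedUrl"]}),
    }
```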

**Automation and scale**

Backend and frontend deployments are fully automated by using CloudFormation, which provisions all required AWS resources, including Amazon Cognito, Lambda, API Gateway, Amazon S3, CloudFront, AWS WAF, IAM roles, and Amazon CloudWatch in a single deployment.

This automation ensures consistent and repeatable infrastructure across all environments. All components scale automatically: Lambda adjusts to function invocations, CloudFront serves cached content globally, and API Gateway scales with incoming requests.

## Tools
<a name="embed-quick-sight-visual-components-into-web-apps-cognito-iac-tools"></a>

**AWS services**
+ [Amazon Quick Sight](https://aws.amazon.com/quicksuite/quicksight/) is a cloud-native business intelligence service that helps you create, manage, and embed interactive dashboards and visuals.
+ [Amazon API Gateway](https://aws.amazon.com/api-gateway/) manages APIs that act as the bridge between the React application and backend services.
+ [AWS Lambda](https://aws.amazon.com/lambda/) is a serverless compute service that this pattern uses to generate secure Quick Sight embed URLs dynamically, and scales automatically based on requests.
+ [Amazon Cognito](https://aws.amazon.com/cognito/) provides authentication and authorization for users, and issues secure tokens for API access.
+ [Amazon S3](https://aws.amazon.com/s3/) hosts static frontend assets for this pattern, and serves them securely through CloudFront.
+ [Amazon CloudFront](https://aws.amazon.com/cloudfront/getting-started/) delivers frontend content globally with low latency and integrates with AWS WAF for traffic filtering.
+ [AWS WAF](https://aws.amazon.com/waf/) protects the web application from malicious traffic by applying security rules and rate limiting.
+ [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) automates the provisioning and configuration of all application resources in a single deployment.
+ [Amazon CloudWatch](https://aws.amazon.com/cloudwatch/) collects logs and metrics from Lambda, API Gateway, and AWS WAF for monitoring and troubleshooting.

**Development tools**
+ [React JS](https://react.dev/) is a frontend framework that this pattern uses to build the web application and integrate embedded Quick Sight visuals.
+ [Vite](https://vite.dev/) is a build tool used for fast development and optimized production builds of the React application.
+ [Quick Sight Embedding SDK](https://www.npmjs.com/package/amazon-quicksight-embedding-sdk/v/2.10.1) facilitates embedding Quick Sight visuals into the React application and enables seamless interaction between the application and visuals.

**Code repository**

The code for this pattern is available in the GitHub [Amazon Quick Sight Visual Embedding in React](https://github.com/aws-samples/sample-quicksight-visual-embedding) repository.

## Best practices
<a name="embed-quick-sight-visual-components-into-web-apps-cognito-iac-best-practices"></a>

This pattern automatically implements the following security best practices:
+ Uses Amazon Cognito user pools for JWT-based authentication, with optional multi-factor authentication (MFA).
+ Secures APIs with Amazon Cognito authorizers and enforces least-privilege IAM policies across all services.
+ Implements Quick Sight registered user embedding and auto-provisions users with the reader role.
+ Enforces encryption in transit that supports TLS 1.2 and later versions through CloudFront and HTTPS.
+ Encrypts data at rest by using AES-256 for Amazon S3 with versioning and OAC.
+ Configures API Gateway usage plans with throttling and quotas.
+ Secures Lambda with reserved concurrency and environment variable protection.
+ Enables logging for Amazon S3, CloudFront, Lambda, and API Gateway; monitors services by using CloudWatch.
+ Encrypts logs, applies access controls, and enforces deny policies for non-HTTPS or unencrypted uploads.

In addition, we recommend the following:
+ Use CloudFormation to automate deployments and maintain consistent configurations across environments.
+ Make sure that each user has the correct Quick Sight permissions to access embedded visuals.
+ Protect API Gateway endpoints with Amazon Cognito authorizers and enforce the principle of least privilege for all IAM roles.
+ Store sensitive information such as Amazon Resource Names (ARNs) and IDs in environment variables instead of hardcoding them.
+ Optimize Lambda functions by reducing dependencies and improving cold-start performance. For more information, see the AWS blog post [Optimizing cold start performance of AWS Lambda using advanced priming strategies with SnapStart](https://aws.amazon.com/blogs/compute/optimizing-cold-start-performance-of-aws-lambda-using-advanced-priming-strategies-with-snapstart/).
+ Add your CloudFront domain to the Quick Sight allowlist to enable secure visual embedding.
+ Monitor performance and security by using CloudWatch and AWS WAF for logging, alerts, and traffic protection.

**Other recommended best practices**
+ Use custom domains with SSL certificates from AWS Certificate Manager to provide a secure and branded user experience.
+ Encrypt Amazon S3 data and CloudWatch logs by using customer managed AWS Key Management Service (AWS KMS) keys for greater control over encryption.
+ Extend AWS WAF rules with geo-blocking, SQL injection (SQLi), cross-site scripting (XSS) protection, and custom filters for enhanced threat prevention.
+ Enable CloudWatch alarms, AWS Config, and AWS CloudTrail for real-time monitoring, auditing, and configuration compliance.
+ Apply granular IAM policies, enforce API key rotation, and allow cross-account access only when absolutely necessary.
+ Perform regular security assessments to ensure alignment with compliance frameworks such as System and Organization Controls 2 (SOC 2), General Data Protection Regulation (GDPR), and Health Insurance Portability and Accountability Act (HIPAA).

## Epics
<a name="embed-quick-sight-visual-components-into-web-apps-cognito-iac-epics"></a>

### Prepare the environment
<a name="prepare-the-environment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clone the repository. | Clone the GitHub repository for this solution to your local system and navigate to the project directory:<pre>git clone https://github.com/aws-samples/sample-quicksight-visual-embedding.git<br /><br />cd sample-quicksight-visual-embedding</pre>This repository contains the CloudFormation template and React source code required to deploy the solution. | App developer | 

### Deploy the CloudFormation stack
<a name="deploy-the-cfn-stack"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Deploy the template. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/embed-quick-sight-visual-components-into-web-apps-cognito-iac.html) For more information, see [Creating and managing stacks](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacks.html) in the CloudFormation documentation. | AWS administrator | 
| Monitor stack creation. | Monitor the stack in the **Events** tab until its status is **CREATE\_COMPLETE**. | AWS administrator | 
| Retrieve stack outputs. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/embed-quick-sight-visual-components-into-web-apps-cognito-iac.html) | AWS administrator | 
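
If you prefer to script the deployment instead of using the console, the following boto3 sketch creates the stack, waits until its status is CREATE_COMPLETE, and prints the outputs that later steps need. The stack name and template file name are placeholders.

```
import boto3

cloudformation = boto3.client("cloudformation", region_name="us-east-1")
STACK_NAME = "quicksight-embedding-stack"   # placeholder stack name

# Read the template from the cloned repository (file name is a placeholder).
with open("template.yaml") as f:
    template_body = f.read()

cloudformation.create_stack(
    StackName=STACK_NAME,
    TemplateBody=template_body,
    Capabilities=["CAPABILITY_NAMED_IAM"],  # typically required when the template creates IAM resources
)

# Wait for the stack to reach CREATE_COMPLETE.
cloudformation.get_waiter("stack_create_complete").wait(StackName=STACK_NAME)

# Print the stack outputs (for example, the API URL, user pool ID, and CloudFront domain).
outputs = cloudformation.describe_stacks(StackName=STACK_NAME)["Stacks"][0]["Outputs"]
for output in outputs:
    print(output["OutputKey"], "=", output["OutputValue"])
```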

### Configure the frontend environment
<a name="configure-the-frontend-environment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Retrieve Quick Sight visual identifiers. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/embed-quick-sight-visual-components-into-web-apps-cognito-iac.html) | Quick Sight administrator | 
| Configure your local React environment. | To set up your local React environment and link it to AWS resources, create an `.env` file in the `my-app/` folder of your local GitHub repository. Populate the file with:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/embed-quick-sight-visual-components-into-web-apps-cognito-iac.html) Here’s an example `.env` file:<pre>VITE_AWS_REGION=us-east-1<br /><br /># Cognito Configuration (from CloudFormation outputs)<br />VITE_USER_POOL_ID=us-east-1_xxxxxxxxx<br />VITE_USER_POOL_WEB_CLIENT_ID=xxxxxxxxxxxxxxxxxxxxxxxxxx<br /><br /># API Configuration (from CloudFormation outputs)<br />VITE_API_URL=https://your-api-id.execute-api.us-east-1.amazonaws.com/prod<br /><br /># QuickSight Visual Configuration<br />VITE_DASHBOARD_ID=your-dashboard-id<br />VITE_SHEET_ID=your-sheet-id<br />VITE_VISUAL_ID=your-visual-id</pre> | App developer | 

### Set up user authentication
<a name="set-up-user-authentication"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create or manage users in Amazon Cognito. | To enable authenticated user access to embedded Quick Sight visuals, you first create users in Amazon Cognito:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/embed-quick-sight-visual-components-into-web-apps-cognito-iac.html) You can also create users programmatically, as shown in the sketch after this table. | AWS administrator | 
| Provide Quick Sight dashboard access. | To provide access to Quick Sight visuals, grant Viewer permissions to authenticated users:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/embed-quick-sight-visual-components-into-web-apps-cognito-iac.html) Each user will receive an email with a link to the dashboard. You can modify permissions at any time through the **Share** menu. For more information, see [Granting individual Amazon Quick Sight users and groups access to a dashboard in Amazon Quick Sight](https://docs.aws.amazon.com/quicksuite/latest/userguide/share-a-dashboard-grant-access-users.html) in the Amazon Quick Sight documentation. | Quick Sight administrator | 
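
If you prefer to create users programmatically instead of through the console, the following boto3 sketch creates one Amazon Cognito user in the user pool that the stack provisioned. The user pool ID, user name, and temporary password are placeholders.

```
import boto3

cognito_idp = boto3.client("cognito-idp", region_name="us-east-1")

# Create a user in the Amazon Cognito user pool provisioned by CloudFormation.
# The user pool ID, user name, email, and temporary password are placeholders.
cognito_idp.admin_create_user(
    UserPoolId="us-east-1_EXAMPLE",
    Username="analyst@example.com",
    UserAttributes=[
        {"Name": "email", "Value": "analyst@example.com"},
        {"Name": "email_verified", "Value": "true"},
    ],
    TemporaryPassword="ChangeMe123!",        # the user is prompted to change this at first sign-in
    DesiredDeliveryMediums=["EMAIL"],        # send the invitation by email
)
```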

### Build and deploy the React frontend
<a name="build-and-deploy-the-react-frontend"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Install dependencies and build the project. | In the React application directory, run the following commands to generate optimized production files:<pre>cd my-app<br />npm install<br />npm run build</pre> | App developer | 
| Upload the build files to Amazon S3. | Upload all the files from the `my-app/dist/` directory to the S3 bucket provisioned by CloudFormation (do not upload the folder itself). | App developer | 
| Create a CloudFront invalidation. | On the [CloudFront console](https://console.aws.amazon.com/cloudfront/v4/home), create an invalidation for path `/*` to refresh cached content after deployment. For instructions, see [Invalidate files](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Invalidation_Requests.html) in the CloudFront documentation. | AWS administrator | 
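
The upload and invalidation steps can also be scripted. The following boto3 sketch uploads the contents of `my-app/dist/` (not the folder itself) to the S3 bucket and then invalidates the CloudFront cache. The bucket name and distribution ID are placeholders taken from the stack outputs.

```
import mimetypes
import pathlib
import time

import boto3

s3 = boto3.client("s3")
cloudfront = boto3.client("cloudfront")

BUCKET = "your-frontend-bucket"     # placeholder: bucket name from the stack outputs
DISTRIBUTION_ID = "E123EXAMPLE"     # placeholder: CloudFront distribution ID
DIST_DIR = pathlib.Path("my-app/dist")

# Upload each build file, preserving relative paths and setting a Content-Type.
for path in DIST_DIR.rglob("*"):
    if path.is_file():
        key = path.relative_to(DIST_DIR).as_posix()
        content_type = mimetypes.guess_type(path.name)[0] or "binary/octet-stream"
        s3.upload_file(str(path), BUCKET, key, ExtraArgs={"ContentType": content_type})

# Invalidate all cached paths so CloudFront serves the new build.
cloudfront.create_invalidation(
    DistributionId=DISTRIBUTION_ID,
    InvalidationBatch={
        "Paths": {"Quantity": 1, "Items": ["/*"]},
        "CallerReference": str(time.time()),
    },
)
```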

### Configure the Quick Sight allowlist
<a name="configure-the-qsight-allowlist"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Add the CloudFront domain to the Quick Sight allowlist. | To enable your CloudFront domain to securely embed Quick Sight visuals:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/embed-quick-sight-visual-components-into-web-apps-cognito-iac.html) | Quick Sight administrator | 

### Access the application and verify functionality
<a name="access-the-application-and-verify-functionality"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Open the React application. | Use the **CloudFront domain** (from CloudFormation outputs) to open the deployed React web application in a browser. | App owner | 
| Verify authentication. | Sign in to the application by using Amazon Cognito credentials to verify the authentication flow and JWT validation through API Gateway. | App owner | 
| Verify embedded visuals. | Confirm that Quick Sight visuals load properly within the application based on user-specific access permissions. | App owner | 
| Validate API and Lambda connectivity. | Confirm that the application can successfully call the `/get-embed-url` API and retrieve valid Quick Sight embed URLs without errors. | App owner | 

### Monitor and maintain the application
<a name="monitor-and-maintain-the-application"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Monitor by using CloudWatch. | You can use AWS observability tools to monitor the application and to maintain secure, scalable performance in production. Review Lambda logs, API Gateway metrics, and Amazon Cognito authentication events in CloudWatch to ensure application health and to detect anomalies. | AWS administrator | 
| Track AWS WAF and CloudFront logs. | Inspect AWS WAF logs for blocked or suspicious requests and CloudFront access logs for performance and caching metrics. | AWS administrator | 

## Troubleshooting
<a name="embed-quick-sight-visual-components-into-web-apps-cognito-iac-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| "Domain not allowed" error | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/embed-quick-sight-visual-components-into-web-apps-cognito-iac.html) | 
| Authentication errors | Possible causes:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/embed-quick-sight-visual-components-into-web-apps-cognito-iac.html)Solutions:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/embed-quick-sight-visual-components-into-web-apps-cognito-iac.html) | 
| API Gateway errors | Possible causes:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/embed-quick-sight-visual-components-into-web-apps-cognito-iac.html)Solutions:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/embed-quick-sight-visual-components-into-web-apps-cognito-iac.html) | 
| Quick Sight visuals don’t load | Possible causes:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/embed-quick-sight-visual-components-into-web-apps-cognito-iac.html)Solutions:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/embed-quick-sight-visual-components-into-web-apps-cognito-iac.html) | 
| "User does not have access" error | Possible causes:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/embed-quick-sight-visual-components-into-web-apps-cognito-iac.html)Solution:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/embed-quick-sight-visual-components-into-web-apps-cognito-iac.html) | 

## Related resources
<a name="embed-quick-sight-visual-components-into-web-apps-cognito-iac-resources"></a>

**AWS documentation**
+ [Signing up for an Amazon Quick subscription](https://docs.aws.amazon.com/quicksight/latest/user/signing-up.html)
+ [Embedded analytics for Amazon Quick Sight](https://docs.aws.amazon.com/quicksuite/latest/userguide/embedded-analytics.html)
+ [Quick Sight API reference – GenerateEmbedUrlForRegisteredUser](https://docs.aws.amazon.com/quicksight/latest/APIReference/API_GenerateEmbedUrlForRegisteredUser.html)
+ [Amazon Cognito user pools](https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-pools.html)
+ [AWS Lambda Developer Guide](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html)
+ [Amazon API Gateway Developer Guide](https://docs.aws.amazon.com/apigateway/latest/developerguide/welcome.html)
+ [Basic monitoring and detailed monitoring in CloudWatch](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch-metrics-basic-detailed.html)
+ [AWS CloudFormation User Guide](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html)
+ [Amazon CloudFront Developer Guide](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Introduction.html)
+ [AWS WAF Developer Guide](https://docs.aws.amazon.com/waf/latest/developerguide/waf-chapter.html)

**Tutorials and videos**
+ [Embedded analytics for Amazon Quick Sight](https://docs.aws.amazon.com/quicksight/latest/user/embedded-analytics.html)
+ [Amazon Quick Sight YouTube tutorials](https://www.youtube.com/results?search_query=amazon+quicksight+embedding)

# Explore full-stack cloud-native web application development with Green Boost
<a name="explore-full-stack-cloud-native-web-application-development-with-green-boost"></a>

*Ben Stickley and Amiin Samatar, Amazon Web Services*

## Summary
<a name="explore-full-stack-cloud-native-web-application-development-with-green-boost-summary"></a>

In response to the evolving needs of developers, Amazon Web Services (AWS) recognizes the critical demand for an efficient approach to developing cloud-native web applications. The AWS focus is on helping you to overcome common roadblocks associated with deploying web apps on the AWS Cloud. By harnessing the capabilities of modern technologies such as TypeScript, AWS Cloud Development Kit (AWS CDK), React, and Node.js, this pattern aims to streamline and expedite the development process.

Underpinned by the Green Boost (GB) toolkit, the pattern offers a practical guide to constructing web applications that fully use the extensive capabilities of AWS. It acts as a comprehensive roadmap, leading you through the process of deploying a fundamental CRUD (Create, Read, Update, Delete) web application integrated with Amazon Aurora PostgreSQL-Compatible Edition. This is accomplished by using the Green Boost command line interface (Green Boost CLI) and establishing a local development environment.

Following the successful deployment of the application, the pattern delves into key components of the web app, including infrastructure design, backend and frontend development, and essential tools such as cdk-dia for visualization, facilitating efficient project management.

## Prerequisites and limitations
<a name="explore-full-stack-cloud-native-web-application-development-with-green-boost-prereqs"></a>

**Prerequisites**
+ [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) installed
+ [Visual Studio Code (VS Code)](https://code.visualstudio.com/download) installed
+ [AWS Command Line Interface (AWS CLI)](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) installed
+ [AWS CDK Toolkit](https://docs.aws.amazon.com/cdk/v2/guide/cli.html) installed
+ [Node.js 18](https://nodejs.org/en/download) installed, or [Node.js 18 with pnpm](https://pnpm.io/cli/env) activated
+ [pnpm](https://pnpm.io/installation) installed, if it isn't part of your Node.js installation
+ Basic familiarity with TypeScript, AWS CDK, Node.js, and React
+ An [active AWS account](https://docs.aws.amazon.com/accounts/latest/reference/manage-acct-creating.html)
+ [An AWS account bootstrapped](https://docs.aws.amazon.com/cdk/v2/guide/bootstrapping.html) by using AWS CDK in `us-east-1`. The `us-east-1` AWS Region is required to support Amazon CloudFront Lambda@Edge functions.
+ [AWS security credentials](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-envvars.html), including `AWS_ACCESS_KEY_ID`, correctly configured in your terminal environment
+ For Windows users, a terminal in administrator mode (to accommodate the way pnpm handles node modules)

**Product versions**
+ AWS SDK for JavaScript version 3
+ AWS CDK version 2
+ AWS CLI version 2.2
+ Node.js version 18
+ React version 18

## Architecture
<a name="explore-full-stack-cloud-native-web-application-development-with-green-boost-architecture"></a>

**Target technology stack**
+ Amazon Aurora PostgreSQL-Compatible Edition
+ Amazon CloudFront
+ Amazon CloudWatch
+ Amazon Elastic Compute Cloud (Amazon EC2)
+ AWS Lambda
+ AWS Secrets Manager
+ Amazon Simple Notification Service (Amazon SNS)
+ Amazon Simple Storage Service (Amazon S3)
+ AWS WAF

**Target architecture**

The following diagram shows that user requests pass through Amazon CloudFront, AWS WAF, and AWS Lambda before interacting with an S3 bucket, an Aurora database, an EC2 instance, and ultimately reaching developers. Administrators, on the other hand, use Amazon SNS and Amazon CloudWatch for notifications and monitoring purposes.

![\[Process to deploy a CRUD web app integrated with Amazon Aurora PostgreSQL by using Green Boost CLI.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/bacafc47-07c0-494b-8bbf-24bdc9b54f6a/images/129691e9-7fd3-4208-ab8c-05b9f40a5c4c.png)


To gain a more in-depth look at the application after deployment, you can create a diagram by using [cdk-dia](https://github.com/pistazie/cdk-dia), as shown in the following example.

![\[First diagram shows user-centric view; cdk-dia diagram shows technical infrastructure view.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/bacafc47-07c0-494b-8bbf-24bdc9b54f6a/images/5e4c3321-47bd-44e7-bf14-f470eed984c1.png)


These diagrams showcase the web application architecture from two distinct angles. The cdk-dia diagram offers a detailed technical view of the AWS CDK infrastructure, highlighting specific AWS services such as Amazon Aurora PostgreSQL-Compatible and AWS Lambda. In contrast, the other diagram takes a broader perspective, emphasizing the logical flow of data and user interactions. The key distinction lies in the level of detail: The cdk-dia delves into technical intricacies, while the first diagram provides a more user-centric view.

Creation of the cdk-dia diagram is covered in the epic *Understand the app infrastructure by using AWS CDK*.

## Tools
<a name="explore-full-stack-cloud-native-web-application-development-with-green-boost-tools"></a>

**AWS services**
+ [Amazon Aurora PostgreSQL-Compatible Edition](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.AuroraPostgreSQL.html) is a fully managed, ACID-compliant relational database engine that helps you set up, operate, and scale PostgreSQL deployments.
+ [AWS Cloud Development Kit (AWS CDK)](https://docs.aws.amazon.com/cdk/latest/guide/home.html) is a software development framework that helps you define and provision AWS Cloud infrastructure in code.
+ [AWS Command Line Interface (AWS CLI)](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) is an open-source tool that helps you interact with AWS services through commands in your command-line shell.
+ [Amazon CloudFront](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Introduction.html) speeds up distribution of your web content by delivering it through a worldwide network of data centers, which lowers latency and improves performance.
+ [Amazon CloudWatch](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/WhatIsCloudWatch.html) helps you monitor the metrics of your AWS resources and the applications you run on AWS in real time.
+ [Amazon Elastic Compute Cloud (Amazon EC2)](https://docs.aws.amazon.com/ec2/) provides scalable computing capacity in the AWS Cloud. You can launch as many virtual servers as you need and quickly scale them up or down.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a compute service that helps you run code without needing to provision or manage servers. It runs your code only when needed and scales automatically, so you pay only for the compute time that you use.
+ [AWS Secrets Manager](https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html) helps you replace hardcoded credentials in your code, including passwords, with an API call to Secrets Manager to retrieve the secret programmatically.
+ [AWS Systems Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/what-is-systems-manager.html) helps you manage your applications and infrastructure running in the AWS Cloud. It simplifies application and resource management, shortens the time to detect and resolve operational problems, and helps you manage your AWS resources securely at scale. This pattern uses AWS Systems Manager Session Manager.
+ [Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) is a cloud-based object storage service that helps you store, protect, and retrieve any amount of data.
+ [Amazon Simple Notification Service (Amazon SNS)](https://docs.aws.amazon.com/sns/latest/dg/welcome.html) helps you coordinate and manage the exchange of messages between publishers and clients, including web servers and email addresses.
+ [AWS WAF](https://docs.aws.amazon.com/waf/latest/developerguide/what-is-aws-waf.html) is a web application firewall that helps you monitor HTTP and HTTPS requests that are forwarded to your protected web application resources.

**Other tools**
+ [Git](https://git-scm.com/docs) is an open-source, distributed version control system.
+ [Green Boost](https://awslabs.github.io/green-boost/overview/intro) is a toolkit for building web apps on AWS.
+ [Next.js](https://nextjs.org/docs) is a React framework for adding features and optimizations.
+ [Node.js](https://nodejs.org/en/docs/) is an event-driven JavaScript runtime environment designed for building scalable network applications.
+ [pgAdmin](https://www.pgadmin.org/) is an open-source management tool for PostgreSQL. It provides a graphical interface that helps you create, maintain, and use database objects.
+ [pnpm](https://pnpm.io/motivation) is a package manager for Node.js project dependencies.

## Best practices
<a name="explore-full-stack-cloud-native-web-application-development-with-green-boost-best-practices"></a>

See the [Epics](#explore-full-stack-cloud-native-web-application-development-with-green-boost-epics) section for more information about the following recommendations:
+ Monitor infrastructure by using Amazon CloudWatch Dashboards and alarms.
+ Enforce AWS best practices by using cdk-nag to run static infrastructure as code (IaC) analysis.
+ Establish DB port forwarding through SSH (Secure Shell) tunneling with Systems Manager Session Manager, which is more secure than having a publicly exposed IP address.
+ Manage vulnerabilities by running `pnpm audit`.
+ Enforce best practices by using [ESLint](https://eslint.org/) to perform static TypeScript code analysis, and [Prettier](https://prettier.io/) to standardize code formatting.

## Epics
<a name="explore-full-stack-cloud-native-web-application-development-with-green-boost-epics"></a>

### Deploy a CRUD web app with Aurora PostgreSQL-Compatible
<a name="deploy-a-crud-web-app-with-aurora-postgresql-compatible"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Install the Green Boost CLI. | To install Green Boost CLI, run the following command.<pre>pnpm add -g gboost</pre> | App developer | 
| Create a GB app. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/explore-full-stack-cloud-native-web-application-development-with-green-boost.html) | App developer | 
| Install dependencies and deploy the app. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/explore-full-stack-cloud-native-web-application-development-with-green-boost.html)Wait for deployment to finish (approximately 20 minutes). While you’re waiting, monitor AWS CloudFormation stacks in the CloudFormation console. Notice how the constructs defined in the code map to the resources deployed. Review the [CDK Construct tree view](https://docs.aws.amazon.com/cdk/v2/guide/constructs.html) in the CloudFormation console. | App developer | 
| Access the app. | After deploying your GB app locally, you can access it using the CloudFront URL. The URL is printed in the terminal output, but it can be a bit overwhelming to find. To find it more quickly, use the following steps:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/explore-full-stack-cloud-native-web-application-development-with-green-boost.html)Alternatively, you can find the CloudFront URL by accessing the Amazon CloudFront console:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/explore-full-stack-cloud-native-web-application-development-with-green-boost.html)Copy the **Domain Name** that is associated with the distribution. It will look similar to `your-unique-id.cloudfront.net`. | App developer | 

### Monitor by using Amazon CloudWatch
<a name="monitor-by-using-amazon-cloudwatch"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| View the CloudWatch Dashboard. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/explore-full-stack-cloud-native-web-application-development-with-green-boost.html) | App developer | 
| Enable alerts. | A CloudWatch Dashboard helps you to actively monitor your web app. To passively monitor your web app, you can enable alerting.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/explore-full-stack-cloud-native-web-application-development-with-green-boost.html) | App developer | 

### Understand the app infrastructure by using AWS CDK
<a name="understand-the-app-infrastructure-by-using-aws-cdk"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an architecture diagram. | Generate an architecture diagram of your web app by using [cdk-dia](https://github.com/pistazie/cdk-dia). Visualizing the architecture helps improve understanding and communication among team members. It provides a clear overview of the system's components and their relationships.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/explore-full-stack-cloud-native-web-application-development-with-green-boost.html) | App developer | 
| Use cdk-nag to enforce best practices. | Use [cdk-nag](https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/check-aws-cdk-applications-or-cloudformation-templates-for-best-practices-by-using-cdk-nag-rule-packs.html) to help you maintain secure and compliant infrastructure by enforcing best practices, reducing the risk of security vulnerabilities and misconfigurations.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/explore-full-stack-cloud-native-web-application-development-with-green-boost.html) | App developer | 

### Evaluate the database configuration and schema
<a name="evaluate-the-database-configuration-and-schema"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Acquire environment variables. | To obtain the required environment variables, use the following steps:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/explore-full-stack-cloud-native-web-application-development-with-green-boost.html) | App developer | 
| Establish port forwarding. | To establish port forwarding, use the following steps:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/explore-full-stack-cloud-native-web-application-development-with-green-boost.html) | App developer | 
| Adjust the Systems Manager Session Manager timeout. | (Optional) If the default 20-minute session timeout is too short, you can increase it up to 60 minutes in the Systems Manager console by choosing **Session Manager**, **Preferences**, **Edit**, **Idle session timeout**. | App developer | 
| Visualize the database. | pgAdmin is a user-friendly open-source tool for managing PostgreSQL databases. It simplifies database tasks, allowing you to efficiently create, manage, and optimize databases. This section guides you through [installing pgAdmin](https://www.pgadmin.org/download/) and using its features for PostgreSQL database management.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/explore-full-stack-cloud-native-web-application-development-with-green-boost.html) | App developer | 

### Debug with Node.js
<a name="debug-with-node-js"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Debug the create item use case. | To debug the create item use case, follow these steps:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/explore-full-stack-cloud-native-web-application-development-with-green-boost.html)[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/explore-full-stack-cloud-native-web-application-development-with-green-boost.html) | App developer | 

### Develop the frontend
<a name="develop-the-frontend"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Set up the development server. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/explore-full-stack-cloud-native-web-application-development-with-green-boost.html) | App developer | 

### Tooling with Green Boost
<a name="tooling-with-green-boost"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Set up monorepo and the pnpm package manager. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/explore-full-stack-cloud-native-web-application-development-with-green-boost.html) | App developer | 
| Run pnpm scripts. | Run the following commands in the root of your repository:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/explore-full-stack-cloud-native-web-application-development-with-green-boost.html)Notice how these commands are run in all workspaces. The commands are defined in each workspace's `package.json#scripts` field. | App developer | 
| Use ESLint for static code analysis. | To test the static code analysis capability of ESLint, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/explore-full-stack-cloud-native-web-application-development-with-green-boost.html) | App developer | 
| Manage dependencies and vulnerabilities. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/explore-full-stack-cloud-native-web-application-development-with-green-boost.html) | App developer | 
| Use pre-commit hooks with Husky. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/explore-full-stack-cloud-native-web-application-development-with-green-boost.html)These tools help prevent bad code from making its way into your application. | App developer | 

### Tear down the infrastructure
<a name="tear-down-the-infrastructure"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Remove the deployment from your account. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/explore-full-stack-cloud-native-web-application-development-with-green-boost.html) | App developer | 

## Troubleshooting
<a name="explore-full-stack-cloud-native-web-application-development-with-green-boost-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| Unable to establish port forwarding | Ensure that your AWS credentials are properly configured and have the necessary permissions. Double-check that the bastion host ID (`DB_BASTION_ID`) and database endpoint (`DB_ENDPOINT`) environment variables are correctly set. If you still encounter issues, see the AWS documentation for [troubleshooting SSH connections and Session Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-getting-started-enable-ssh-connections.html). | 
| Website isn't loading on `localhost:3000` | Confirm that the terminal output indicates successful port forwarding, including the forwarding address. Ensure there are no conflicting processes using port 3000 on your local machine. Verify that the Green Boost application is properly configured and running on the expected port (3000). Check your web browser for any security extensions or settings that might block local connections. | 
| Error messages during local deployment (`pnpm deploy:local`) | Review the error messages carefully to identify the cause of the issue. Verify that the necessary environment variables and configuration files are correctly set. | 

## Related resources
<a name="explore-full-stack-cloud-native-web-application-development-with-green-boost-resources"></a>
+ [AWS CDK documentation](https://docs.aws.amazon.com/cdk/latest/guide/home.html)
+ [Green Boost documentation](https://awslabs.github.io/green-boost/learn/m1-deploy-gb-app)
+ [Next.js documentation](https://nextjs.org/docs)
+ [Node.js documentation](https://nodejs.org/en/docs/)
+ [React documentation](https://reactjs.org/docs/getting-started.html)
+ [TypeScript documentation](https://www.typescriptlang.org/docs/)

 

# Structure a Python project in hexagonal architecture using AWS Lambda
<a name="structure-a-python-project-in-hexagonal-architecture-using-aws-lambda"></a>

*Furkan Oruc, Dominik Goby, Darius Kunce, and Michal Ploski, Amazon Web Services*

## Summary
<a name="structure-a-python-project-in-hexagonal-architecture-using-aws-lambda-summary"></a>

This pattern shows how to structure a Python project in hexagonal architecture by using AWS Lambda. The pattern uses the AWS Cloud Development Kit (AWS CDK) as the infrastructure as code (IaC) tool, Amazon API Gateway as the REST API, and Amazon DynamoDB as the persistence layer. Hexagonal architecture follows domain-driven design principles. In hexagonal architecture, software consists of three components: domain, ports, and adapters. For detailed information about hexagonal architectures and their benefits, see the guide [Building hexagonal architectures on AWS](https://docs.aws.amazon.com/prescriptive-guidance/latest/hexagonal-architectures/).

## Prerequisites and limitations
<a name="structure-a-python-project-in-hexagonal-architecture-using-aws-lambda-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ Experience in Python
+ Familiarity with AWS Lambda, AWS CDK, Amazon API Gateway, and DynamoDB
+ A GitHub account (see [instructions for signing up](https://docs.github.com/en/get-started/signing-up-for-github/signing-up-for-a-new-github-account))
+ Git (see [installation instructions](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git))
+ A code editor for making changes and pushing your code to GitHub (for example, [Visual Studio Code](https://code.visualstudio.com/) or [JetBrains PyCharm](https://www.jetbrains.com/pycharm/))
+ Docker installed, and the Docker daemon up and running

**Product versions**
+ Git version 2.24.3 or later
+ Python version 3.7 or later
+ AWS CDK v2
+ Poetry version 1.1.13 or later
+ AWS Lambda Powertools for Python version 1.25.6 or later
+ pytest version 7.1.1 or later
+ Moto version 3.1.9 or later
+ pydantic version 1.9.0 or later
+ Boto3 version 1.22.4 or later
+ mypy-boto3-dynamodb version 1.24.0 or later

## Architecture
<a name="structure-a-python-project-in-hexagonal-architecture-using-aws-lambda-architecture"></a>

**Target technology stack**

The target technology stack consists of a Python service that uses API Gateway, Lambda, and DynamoDB. The service uses a DynamoDB adapter to persist data. It provides a function that uses Lambda as the entry point. The service uses Amazon API Gateway to expose a REST API. The API uses AWS Identity and Access Management (IAM) for the [authentication of clients](https://docs.aws.amazon.com/apigateway/latest/developerguide/permissions.html).

**Target architecture**

To illustrate the implementation, this pattern deploys a serverless target architecture. Clients can send requests to an API Gateway endpoint. API Gateway forwards the request to the target Lambda function that implements the hexagonal architecture pattern. The Lambda function performs create, read, update, and delete (CRUD) operations on a DynamoDB table.


**Note:** This pattern was tested in a proof-of-concept (PoC) environment. You must conduct a security review to identify the threat model and create a secure code base before you deploy any architecture to a production environment.

![\[Target architecture for structuring a Python project in hexagonal architecture\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/25bd7169-ea5e-4a21-a865-c91c30a3c0da/images/de0d4f0d-ad19-43ec-bd10-676b25477b64.png)


The API supports five operations on a product entity:
+ `GET /products` returns all products. 
+ `POST /products` creates a new product. 
+ `GET /products/{id}` returns a specific product.
+ `PUT /products/{id}` updates a specific product. 
+ `DELETE /products/{id}` deletes a specific product.
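
As an illustration of the entity that these operations act on, the following pydantic sketch defines a product model and a create command. The field and class names are assumptions for illustration only; the sample repository defines its own domain model and commands.

```
from datetime import datetime
from typing import Optional
from uuid import uuid4

from pydantic import BaseModel, Field


class Product(BaseModel):
    """Illustrative domain model for the product entity (field names are assumptions)."""

    id: str = Field(default_factory=lambda: str(uuid4()))
    name: str
    description: Optional[str] = None
    created_date: datetime = Field(default_factory=datetime.utcnow)


class CreateProductCommand(BaseModel):
    """Illustrative command object handled by the domain when POST /products is called."""

    name: str
    description: Optional[str] = None
```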

You can use the following folder structure to organize your project to follow the hexagonal architecture pattern:  

```
app/  # application code
|--- adapters/  # implementation of the ports defined in the domain
     |--- tests/  # adapter unit tests
|--- entrypoints/  # primary adapters, entry points
     |--- api/  # api entry point
          |--- model/  # api model
          |--- tests/  # end to end api tests
|--- domain/  # domain to implement business logic using hexagonal architecture
     |--- command_handlers/  # handlers used to execute commands on the domain
     |--- commands/  # commands on the domain
     |--- events/  # events triggered via the domain
     |--- exceptions/  # exceptions defined on the domain
     |--- model/  # domain model
     |--- ports/  # abstractions used for external communication
     |--- tests/  # domain tests
|--- libraries/  # List of 3rd party libraries used by the Lambda function
infra/  # infrastructure code
simple-crud-app.py  # AWS CDK v2 app
```
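
To make the ports-and-adapters split concrete, here is a minimal sketch, not taken from the sample repository, of a secondary port that would live under `domain/ports/` and a DynamoDB adapter under `adapters/` that implements it. Class and method names are illustrative.

```
from abc import ABC, abstractmethod
from typing import List

import boto3


class ProductRepositoryPort(ABC):
    """Secondary port: the domain depends on this abstraction, never on DynamoDB."""

    @abstractmethod
    def save(self, product: dict) -> None: ...

    @abstractmethod
    def list_all(self) -> List[dict]: ...


class DynamoDbProductRepository(ProductRepositoryPort):
    """Secondary adapter: implements the port with a DynamoDB table."""

    def __init__(self, table_name: str) -> None:
        self._table = boto3.resource("dynamodb").Table(table_name)

    def save(self, product: dict) -> None:
        self._table.put_item(Item=product)

    def list_all(self) -> List[dict]:
        return self._table.scan()["Items"]


# A command handler in the domain receives the port, not the concrete adapter,
# so the business logic stays testable with an in-memory fake.
def handle_create_product(command: dict, repository: ProductRepositoryPort) -> None:
    repository.save(command)
```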

## Tools
<a name="structure-a-python-project-in-hexagonal-architecture-using-aws-lambda-tools"></a>

**AWS services**
+ [Amazon API Gateway](https://aws.amazon.com/api-gateway/) is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale.
+ [Amazon DynamoDB](https://aws.amazon.com/dynamodb/) is a fully managed, serverless, key-value NoSQL database that is designed to run high-performance applications at any scale.
+ [AWS Lambda](https://aws.amazon.com/lambda/) is a serverless, event-driven compute service that lets you run code for virtually any type of application or backend service without provisioning or managing servers. You can launch Lambda functions from over 200 AWS services and software as a service (SaaS) applications, and only pay for what you use.

**Tools**
+ [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git)  is used as the version control system for code development in this pattern.
+ [Python](https://www.python.org/) is used as the programming language for this pattern. Python provides high-level data structures and an approach to object-oriented programming. AWS Lambda provides a built-in Python runtime that simplifies the operation of Python services.
+ [Visual Studio Code](https://code.visualstudio.com/) is used as the IDE for development and testing for this pattern. You can use any IDE that supports Python development (for example, [PyCharm](https://www.jetbrains.com/pycharm/)).
+ [AWS Cloud Development Kit (AWS CDK)](https://aws.amazon.com/cdk/) is an open-source software development framework that lets you define your cloud application resources by using familiar programming languages. This pattern uses the CDK to write and deploy cloud infrastructure as code.
+ [Poetry](https://python-poetry.org/) is used to manage dependencies in the pattern.
+ [Docker](https://www.docker.com/) is used by the AWS CDK to build the Lambda package and layer.

**Code **

The code for this pattern is available in the GitHub [Lambda hexagonal architecture sample](https://github.com/aws-samples/lambda-hexagonal-architecture-sample) repository.

## Best practices
<a name="structure-a-python-project-in-hexagonal-architecture-using-aws-lambda-best-practices"></a>

To use this pattern in a production environment, follow these best practices:
+ Use customer managed keys in AWS Key Management Service (AWS KMS) to encrypt [Amazon CloudWatch log groups](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/encrypt-log-data-kms.html) and [Amazon DynamoDB tables](https://docs.aws.amazon.com/kms/latest/developerguide/services-dynamodb.html).
+ Configure [AWS WAF for Amazon API Gateway](https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-control-access-aws-waf.html) to allow access only from your organization's network.
+ Consider other options for API Gateway authorization if IAM doesn’t meet your needs. For example, you can use [Amazon Cognito user pools](https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-integrate-with-cognito.html) or [API Gateway Lambda authorizers](https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-use-lambda-authorizer.html).
+ Use [DynamoDB backups](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/BackupRestore.html).
+ Configure Lambda functions with a [virtual private cloud (VPC) deployment](https://docs.aws.amazon.com/lambda/latest/dg/configuration-vpc.html) to keep network traffic inside the cloud.
+ Update the allowed origin configuration for [cross-origin resource sharing (CORS) preflight](https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS) to restrict access to the requesting origin domain only.
+ Use [cdk-nag](https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/check-aws-cdk-applications-or-cloudformation-templates-for-best-practices-by-using-cdk-nag-rule-packs.html) to check the AWS CDK code for security best practices.
+ Consider using code scanning tools to find common security issues in the code. For example, [Bandit](https://bandit.readthedocs.io/en/latest/) is a tool that’s designed to find common security issues in Python code. [Pip-audit](https://pypi.org/project/pip-audit/) scans Python environments for packages that have known vulnerabilities.

This pattern uses [AWS X-Ray](https://aws.amazon.com/xray/?nc1=h_ls) to trace requests through the application’s entry point, domain, and adapters. AWS X-Ray helps developers identify bottlenecks and high latencies so that they can improve application performance. A sketch of handler instrumentation follows.
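
The following sketch shows one way a handler might be instrumented with the AWS Lambda Powertools for Python `Tracer`, which emits the X-Ray segments. The service and function names are illustrative, and active tracing must also be enabled on the Lambda function; the sample repository's instrumentation may differ.

```
from aws_lambda_powertools import Tracer

# Tracer can also read the service name from the POWERTOOLS_SERVICE_NAME
# environment variable; "products-service" is an illustrative value.
tracer = Tracer(service="products-service")


@tracer.capture_method
def create_product(command: dict) -> dict:
    # Domain logic runs inside its own X-Ray subsegment.
    return command


@tracer.capture_lambda_handler
def lambda_handler(event, context):
    # The entry point is captured as a segment, so a request can be traced
    # from API Gateway through the domain and adapters.
    return {"statusCode": 201, "body": str(create_product(event))}
```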

## Epics
<a name="structure-a-python-project-in-hexagonal-architecture-using-aws-lambda-epics"></a>

### Initialize the project
<a name="initialize-the-project"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create your own repository. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/structure-a-python-project-in-hexagonal-architecture-using-aws-lambda.html) | App developer | 
| Install dependencies. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/structure-a-python-project-in-hexagonal-architecture-using-aws-lambda.html) | App developer | 
| Configure your IDE. | We recommend Visual Studio Code, but you can use any IDE that supports Python. The following steps are for Visual Studio Code. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/structure-a-python-project-in-hexagonal-architecture-using-aws-lambda.html) | App developer | 
| Run unit tests, option 1: Use Visual Studio Code. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/structure-a-python-project-in-hexagonal-architecture-using-aws-lambda.html) | App developer | 
| Run unit tests, option 2: Use shell commands. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/structure-a-python-project-in-hexagonal-architecture-using-aws-lambda.html) | App developer | 

### Deploy and test the application
<a name="deploy-and-test-the-application"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Request temporary credentials. | To have AWS credentials available in your shell when you run `cdk deploy`, create temporary credentials by using AWS IAM Identity Center (successor to AWS Single Sign-On). For instructions, see the blog post [How to retrieve short-term credentials for CLI use with AWS IAM Identity Center](https://aws.amazon.com/blogs/security/aws-single-sign-on-now-enables-command-line-interface-access-for-aws-accounts-using-corporate-credentials/). | App developer, AWS DevOps | 
| Deploy the application. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/structure-a-python-project-in-hexagonal-architecture-using-aws-lambda.html) | App developer, AWS DevOps | 
| Test the API, option 1: Use the console. | Use the [API Gateway console](https://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-test-method.html) to test the API. For more information about API operations and request/response messages, see the [API usage section of the readme file](https://github.com/aws-samples/lambda-hexagonal-architecture-sample/blob/main/README.md#api-usage) in the GitHub repository. | App developer, AWS DevOps | 
| Test the API, option 2: Use Postman. | If you want to use a tool such as [Postman](https://www.postman.com/): [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/structure-a-python-project-in-hexagonal-architecture-using-aws-lambda.html) Alternatively, you can send a signed request from a script, as the sketch after this table shows. | App developer, AWS DevOps | 
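
If you prefer to script the test, the following sketch signs a request with Signature Version 4 so that it passes the API's IAM authorization. The AWS Region, endpoint URL, resource path, and payload are placeholders that you must replace with values from your deployment, and the `requests` library is an extra dependency that the sample repository might not include.

```python
# Sketch: invoke the IAM-protected API with a SigV4-signed request (endpoint and payload are placeholders).
import json

import boto3
import requests  # third-party HTTP client
from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest

REGION = "us-east-1"                                                        # placeholder
URL = "https://abc123.execute-api.us-east-1.amazonaws.com/prod/products"   # placeholder

payload = json.dumps({"name": "example-product"})
request = AWSRequest(
    method="POST",
    url=URL,
    data=payload,
    headers={"Content-Type": "application/json"},
)

# Sign with the temporary credentials from IAM Identity Center; the service name is execute-api.
credentials = boto3.Session().get_credentials()
SigV4Auth(credentials, "execute-api", REGION).add_auth(request)

response = requests.post(URL, data=payload, headers=dict(request.headers))
print(response.status_code, response.text)
```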

### Develop the service
<a name="develop-the-service"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Write unit tests for the business domain. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/structure-a-python-project-in-hexagonal-architecture-using-aws-lambda.html) | App developer | 
| Implement commands and command handlers. | For one possible arrangement of commands, command handlers, and adapters, see the sketch after this table. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/structure-a-python-project-in-hexagonal-architecture-using-aws-lambda.html) | App developer | 
| Write integration tests for secondary adapters. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/structure-a-python-project-in-hexagonal-architecture-using-aws-lambda.html) | App developer | 
| Implement secondary adapters. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/structure-a-python-project-in-hexagonal-architecture-using-aws-lambda.html) | App developer | 
| Write end-to-end tests. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/structure-a-python-project-in-hexagonal-architecture-using-aws-lambda.html) | App developer | 
| Implement primary adapters. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/structure-a-python-project-in-hexagonal-architecture-using-aws-lambda.html) | App developer | 
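
To make these tasks concrete, the following hedged sketch shows one possible arrangement of a command, a port (an abstract repository), a command handler in the domain, and a DynamoDB secondary adapter behind a Lambda primary adapter. The module layout, class names, and table schema are illustrative and don't mirror the sample repository.

```python
# Illustrative hexagonal layout; names and schema are hypothetical, not the sample repository's.
import json
import os
from abc import ABC, abstractmethod
from dataclasses import dataclass

import boto3


@dataclass(frozen=True)
class CreateProductCommand:
    """Command object passed from the primary adapter into the domain."""
    product_id: str
    name: str


class ProductRepository(ABC):
    """Port: the domain depends on this abstraction, never on DynamoDB directly."""

    @abstractmethod
    def save(self, command: CreateProductCommand) -> None: ...


class CreateProductCommandHandler:
    """Domain: business rules live here and are unit-testable with a fake repository."""

    def __init__(self, repository: ProductRepository) -> None:
        self._repository = repository

    def handle(self, command: CreateProductCommand) -> None:
        if not command.name:
            raise ValueError("Product name must not be empty")
        self._repository.save(command)


class DynamoDbProductRepository(ProductRepository):
    """Secondary adapter: translates the port into DynamoDB calls."""

    def __init__(self, table_name: str) -> None:
        self._table = boto3.resource("dynamodb").Table(table_name)

    def save(self, command: CreateProductCommand) -> None:
        self._table.put_item(Item={"id": command.product_id, "name": command.name})


def lambda_handler(event, context):
    """Primary adapter: maps the API Gateway event onto a command."""
    body = json.loads(event["body"])
    handler = CreateProductCommandHandler(DynamoDbProductRepository(os.environ["TABLE_NAME"]))
    handler.handle(CreateProductCommand(product_id=body["id"], name=body["name"]))
    return {"statusCode": 201, "body": json.dumps({"id": body["id"]})}
```

With this arrangement, unit tests for the business domain can pass an in-memory fake that implements `ProductRepository`, and integration tests for the secondary adapter can exercise `DynamoDbProductRepository` against a test table.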

## Related resources
<a name="structure-a-python-project-in-hexagonal-architecture-using-aws-lambda-resources"></a>

**APG guide**
+ [Building hexagonal architectures on AWS](https://docs.aws.amazon.com/prescriptive-guidance/latest/hexagonal-architectures/)

**AWS references**
+ [AWS Lambda documentation](https://docs.aws.amazon.com/lambda/)
+ [AWS CDK documentation](https://docs.aws.amazon.com/cdk/)
  + [Your first AWS CDK app](https://docs.aws.amazon.com/cdk/v2/guide/hello_world.html)
+ [API Gateway documentation](https://docs.aws.amazon.com/apigateway/)
  + [Control access to an API with IAM permissions](https://docs.aws.amazon.com/apigateway/latest/developerguide/permissions.html)
  + [Use the API Gateway console to test a REST API method](https://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-test-method.html)
+ [Amazon DynamoDB documentation](https://docs.aws.amazon.com/dynamodb/)

**Tools**
+ [git-scm.com website](https://git-scm.com/)
+ [Installing Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git)
+ [Creating a new GitHub repository](https://docs.github.com/en/repositories/creating-and-managing-repositories/creating-a-new-repository)
+ [Python website](https://www.python.org/)
+ [AWS Lambda Powertools for Python](https://docs.powertools.aws.dev/lambda/python/latest/)
+ [Postman website](https://www.postman.com/)
+ [Python mock object library](https://docs.python.org/3/library/unittest.mock.html)
+ [Poetry website](https://python-poetry.org/)

**IDEs**
+ [Visual Studio Code website](https://code.visualstudio.com/)
+ [PyCharm website](https://www.jetbrains.com/pycharm/)

# More patterns
<a name="websitesandwebapps-more-patterns-pattern-list"></a>

**Topics**
+ [Access container applications privately on Amazon ECS by using AWS Fargate, AWS PrivateLink, and a Network Load Balancer](access-container-applications-privately-on-amazon-ecs-by-using-aws-fargate-aws-privatelink-and-a-network-load-balancer.md)
+ [Access container applications privately on Amazon ECS by using AWS PrivateLink and a Network Load Balancer](access-container-applications-privately-on-amazon-ecs-by-using-aws-privatelink-and-a-network-load-balancer.md)
+ [Associate an AWS CodeCommit repository in one AWS account with Amazon SageMaker AI Studio Classic in another account](associate-an-aws-codecommit-repository-in-one-aws-account-with-sagemaker-studio-in-another-account.md)
+ [Automate deletion of AWS CloudFormation stacks and associated resources](automate-deletion-cloudformation-stacks-associated-resources.md)
+ [Restrict access based on IP address or geolocation by using AWS WAF](aws-waf-restrict-access-geolocation.md)
+ [Build a serverless React Native mobile app by using AWS Amplify](build-a-serverless-react-native-mobile-app-by-using-aws-amplify.md)
+ [Build and test iOS apps with AWS CodeCommit, AWS CodePipeline, and AWS Device Farm](build-and-test-ios-apps-with-aws-codecommit-aws-codepipeline-and-aws-device-farm.md)
+ [Configure logging for .NET applications in Amazon CloudWatch Logs by using NLog](configure-logging-for-net-applications-in-amazon-cloudwatch-logs-by-using-nlog.md)
+ [Consolidate Amazon S3 presigned URL generation and object downloads by using an endpoint associated with static IP addresses](consolidate-amazon-s3-presigned-url-generation-and-object-downloads-by-using-an-endpoint-associated-with-static-ip-addresses.md)
+ [Create an Amazon ECS task definition and mount a file system on EC2 instances using Amazon EFS](create-an-amazon-ecs-task-definition-and-mount-a-file-system-on-ec2-instances-using-amazon-efs.md)
+ [Deploy a gRPC-based application on an Amazon EKS cluster and access it with an Application Load Balancer](deploy-a-grpc-based-application-on-an-amazon-eks-cluster-and-access-it-with-an-application-load-balancer.md)
+ [Deploy a ChatOps solution to manage SAST scan results by using Amazon Q Developer in chat applications custom actions and CloudFormation](deploy-chatops-solution-to-manage-sast-scan-results.md)
+ [Deploy CloudWatch Synthetics canaries by using Terraform](deploy-cloudwatch-synthetics-canaries-by-using-terraform.md)
+ [Deploy Java microservices on Amazon ECS using AWS Fargate](deploy-java-microservices-on-amazon-ecs-using-aws-fargate.md)
+ [Deploy a RAG use case on AWS by using Terraform and Amazon Bedrock](deploy-rag-use-case-on-aws.md)
+ [Deploy resources in an AWS Wavelength Zone by using Terraform](deploy-resources-wavelength-zone-using-terraform.md)
+ [Implement path-based API versioning by using custom domains in Amazon API Gateway](implement-path-based-api-versioning-by-using-custom-domains.md)
+ [Migrate a messaging queue from Microsoft Azure Service Bus to Amazon SQS](migrate-a-messaging-queue-from-microsoft-azure-service-bus-to-amazon-sqs.md)
+ [Migrate a .NET application from Microsoft Azure App Service to AWS Elastic Beanstalk](migrate-a-net-application-from-microsoft-azure-app-service-to-aws-elastic-beanstalk.md)
+ [Migrate an on-premises Go web application to AWS Elastic Beanstalk by using the binary method](migrate-an-on-premises-go-web-application-to-aws-elastic-beanstalk-by-using-the-binary-method.md)
+ [Migrate an on-premises SFTP server to AWS using AWS Transfer for SFTP](migrate-an-on-premises-sftp-server-to-aws-using-aws-transfer-for-sftp.md)
+ [Migrate from IBM WebSphere Application Server to Apache Tomcat on Amazon EC2](migrate-from-ibm-websphere-application-server-to-apache-tomcat-on-amazon-ec2.md)
+ [Migrate from IBM WebSphere Application Server to Apache Tomcat on Amazon EC2 with Auto Scaling](migrate-from-ibm-websphere-application-server-to-apache-tomcat-on-amazon-ec2-with-auto-scaling.md)
+ [Migrate on-premises Java applications to AWS using AWS App2Container](migrate-on-premises-java-applications-to-aws-using-aws-app2container.md)
+ [Migrate Windows SSL certificates to an Application Load Balancer using ACM](migrate-windows-ssl-certificates-to-an-application-load-balancer-using-acm.md)
+ [Modernize ASP.NET Web Forms applications on AWS](modernize-asp-net-web-forms-applications-on-aws.md)
+ [Monitor application activity by using CloudWatch Logs Insights](monitor-application-activity-by-using-cloudwatch-logs-insights.md)
+ [Run an ASP.NET Core web API Docker container on an Amazon EC2 Linux instance](run-an-asp-net-core-web-api-docker-container-on-an-amazon-ec2-linux-instance.md)
+ [Send custom attributes to Amazon Cognito and inject them into tokens](send-custom-attributes-cognito.md)
+ [Serve static content in an Amazon S3 bucket through a VPC by using Amazon CloudFront](serve-static-content-in-an-amazon-s3-bucket-through-a-vpc-by-using-amazon-cloudfront.md)
+ [Set up a highly available PeopleSoft architecture on AWS](set-up-a-highly-available-peoplesoft-architecture-on-aws.md)
+ [Streamline Amazon Lex bot development and deployment by using an automated workflow](streamline-amazon-lex-bot-development-and-deployment-using-an-automated-workflow.md)
+ [Troubleshoot states in AWS Step Functions by using Amazon Bedrock](troubleshooting-states-in-aws-step-functions.md)
+ [Use Network Firewall to capture the DNS domain names from the Server Name Indication for outbound traffic](use-network-firewall-to-capture-the-dns-domain-names-from-the-server-name-indication-sni-for-outbound-traffic.md)
+ [Visualize AI/ML model results using Flask and AWS Elastic Beanstalk](visualize-ai-ml-model-results-using-flask-and-aws-elastic-beanstalk.md)