

# Management
<a name="management-pattern-list"></a>

**Topics**
+ [Cost management](costmanagement-pattern-list.md)
+ [End-user computing](endusercomputing-pattern-list.md)
+ [High-performance computing](highperformancecomputing-pattern-list.md)
+ [Hybrid cloud](hybrid-pattern-list.md)
+ [Management & governance](governance-pattern-list.md)
+ [Messaging & communications](messagingandcommunications-pattern-list.md)
+ [Multi-account strategy](multiaccountstrategy-pattern-list.md)

# Cost management
<a name="costmanagement-pattern-list"></a>

**Topics**
+ [Create detailed cost and usage reports for AWS Glue jobs by using AWS Cost Explorer](create-detailed-cost-and-usage-reports-for-aws-glue-jobs-by-using-aws-cost-explorer.md)
+ [Create detailed cost and usage reports for Amazon EMR clusters by using AWS Cost Explorer](create-detailed-cost-and-usage-reports-for-amazon-emr-clusters-by-using-aws-cost-explorer.md)
+ [More patterns](costmanagement-more-patterns-pattern-list.md)

# Create detailed cost and usage reports for AWS Glue jobs by using AWS Cost Explorer
<a name="create-detailed-cost-and-usage-reports-for-aws-glue-jobs-by-using-aws-cost-explorer"></a>

*Parijat Bhide and Aromal Raj Jayarajan, Amazon Web Services*

## Summary
<a name="create-detailed-cost-and-usage-reports-for-aws-glue-jobs-by-using-aws-cost-explorer-summary"></a>

This pattern shows how to track the usage costs of AWS Glue data integration jobs by configuring [user-defined cost allocation tags](https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/custom-tags.html). You can use these tags to create detailed cost and usage reports in AWS Cost Explorer for jobs across multiple dimensions. For example, you can track usage costs at the team, project, or cost center level.

## Prerequisites and limitations
<a name="create-detailed-cost-and-usage-reports-for-aws-glue-jobs-by-using-aws-cost-explorer-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ One or more [AWS Glue jobs](https://docs.aws.amazon.com/glue/latest/dg/how-it-works.html) that have user-defined tags activated

## Architecture
<a name="create-detailed-cost-and-usage-reports-for-aws-glue-jobs-by-using-aws-cost-explorer-architecture"></a>

**Target technology stack**
+ AWS Glue
+ AWS Cost Explorer

**Target architecture**

The following diagram shows how you can apply tags to track usage costs for AWS Glue jobs.

![Creating and applying tags in AWS Glue jobs to track usage costs in AWS Cost Explorer.](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/e0ae6643-713d-423a-9013-b41b30638053/images/f2b74ef1-494d-439b-9aec-5a9d601126a6.png)


The diagram shows the following workflow:

1. A data engineer or AWS administrator creates user-defined cost allocation tags for the AWS Glue jobs.

1. An AWS administrator activates the tags.

1. The tags report metadata to AWS Cost Explorer.
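The tagging step in this workflow can also be scripted. The following sketch uses the AWS SDK for Python (Boto3); the Region, account ID, job name, and tag values are hypothetical placeholders. Glue's `TagResource` API takes the job's ARN and a map of tag keys to values:

```python
def glue_job_arn(region: str, account_id: str, job_name: str) -> str:
    """Build the ARN that Glue's TagResource API expects for a job."""
    return f"arn:aws:glue:{region}:{account_id}:job/{job_name}"


def tag_glue_job(region: str, account_id: str, job_name: str, tags: dict) -> None:
    """Attach user-defined cost allocation tags to an existing Glue job.

    Requires AWS credentials with glue:TagResource permission.
    """
    import boto3  # imported here so the ARN helper above stays usable offline

    glue = boto3.client("glue", region_name=region)
    glue.tag_resource(
        ResourceArn=glue_job_arn(region, account_id, job_name),
        TagsToAdd=tags,  # for example {"team": "data-eng", "cost-center": "cc-1234"}
    )
```

For example, `tag_glue_job("us-east-1", "111122223333", "nightly-etl", {"team": "data-eng"})` would tag a hypothetical job named `nightly-etl`. An AWS administrator still has to activate the tags in the Billing console before they appear in Cost Explorer.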

## Tools
<a name="create-detailed-cost-and-usage-reports-for-aws-glue-jobs-by-using-aws-cost-explorer-tools"></a>
+ [AWS Glue](https://docs.aws.amazon.com/glue/latest/dg/what-is-glue.html) is a fully managed extract, transform, and load (ETL) service. It helps you reliably categorize, clean, enrich, and move data between data stores and data streams.
+ [AWS Cost Explorer](https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/ce-what-is.html) helps you view and analyze your AWS costs and usage.

## Epics
<a name="create-detailed-cost-and-usage-reports-for-aws-glue-jobs-by-using-aws-cost-explorer-epics"></a>

### Create and activate tags for your AWS Glue jobs
<a name="create-and-activate-tags-for-your-aws-glue-jobs"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create user-defined cost allocation tags for your AWS Glue jobs. | **To add tags to an existing AWS Glue job:** [See the AWS documentation website for more details.](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-detailed-cost-and-usage-reports-for-aws-glue-jobs-by-using-aws-cost-explorer.html) **To add tags to a new AWS Glue job:** [See the AWS documentation website for more details.](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-detailed-cost-and-usage-reports-for-aws-glue-jobs-by-using-aws-cost-explorer.html) For more information, see [AWS tags in AWS Glue](https://docs.aws.amazon.com/glue/latest/dg/monitor-tags.html) in the *AWS Glue Developer Guide*. | Data engineer | 
| Activate the user-defined cost allocation tags. | Follow the instructions in [Activating user-defined cost allocation tags](https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/activating-tags.html) in the *AWS Billing User Guide*. | AWS administrator | 

### Create cost and usage reports for your AWS Glue jobs
<a name="create-cost-and-usage-reports-for-your-aws-glue-jobs"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create cost and usage reports for your AWS Glue jobs by using tag filters in AWS Cost Explorer. | [See the AWS documentation website for more details.](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-detailed-cost-and-usage-reports-for-aws-glue-jobs-by-using-aws-cost-explorer.html) For more information, see [Exploring your data using Cost Explorer](https://docs.aws.amazon.com/cost-management/latest/userguide/ce-exploring-data.html) in the *AWS Cost Management User Guide*. | General AWS, AWS administrator | 
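The same report can be produced programmatically with the Cost Explorer `GetCostAndUsage` API. The sketch below (hypothetical tag key and values; requires AWS credentials and Cost Explorer enabled in the account) filters results to AWS Glue costs that carry a given tag:

```python
def glue_cost_filter(tag_key: str, tag_values: list) -> dict:
    """Cost Explorer filter: AWS Glue service costs that carry the given tag."""
    return {
        "And": [
            {"Dimensions": {"Key": "SERVICE", "Values": ["AWS Glue"]}},
            {"Tags": {"Key": tag_key, "Values": tag_values}},
        ]
    }


def monthly_glue_costs(tag_key: str, tag_values: list, start: str, end: str):
    """Return monthly unblended Glue costs for the tag (dates are YYYY-MM-DD)."""
    import boto3  # requires credentials; the filter builder above works offline

    ce = boto3.client("ce")
    response = ce.get_cost_and_usage(
        TimePeriod={"Start": start, "End": end},
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        Filter=glue_cost_filter(tag_key, tag_values),
    )
    return response["ResultsByTime"]
```

For example, `monthly_glue_costs("team", ["data-eng"], "2024-01-01", "2024-02-01")` would return one result per month for a hypothetical `team` tag.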

# Create detailed cost and usage reports for Amazon EMR clusters by using AWS Cost Explorer
<a name="create-detailed-cost-and-usage-reports-for-amazon-emr-clusters-by-using-aws-cost-explorer"></a>

*Parijat Bhide and Aromal Raj Jayarajan, Amazon Web Services*

## Summary
<a name="create-detailed-cost-and-usage-reports-for-amazon-emr-clusters-by-using-aws-cost-explorer-summary"></a>

This pattern shows how to track the usage costs of Amazon EMR clusters by configuring [user-defined cost allocation tags](https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/custom-tags.html). You can use these tags to create detailed cost and usage reports in AWS Cost Explorer for clusters across multiple dimensions. For example, you can track usage costs at the team, project, or cost center level.

## Prerequisites and limitations
<a name="create-detailed-cost-and-usage-reports-for-amazon-emr-clusters-by-using-aws-cost-explorer-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ One or more [EMR clusters](https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-gs.html) that have user-defined tags activated

## Architecture
<a name="create-detailed-cost-and-usage-reports-for-amazon-emr-clusters-by-using-aws-cost-explorer-architecture"></a>

**Target technology stack**
+ Amazon EMR
+ AWS Cost Explorer

**Target architecture**

The following diagram shows how you can apply tags to track usage costs for specific Amazon EMR clusters.

![Using cost allocation tags for Amazon EMR clusters.](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/3e470077-e3b1-43cf-8cb9-0895fe39e664/images/fb6b78cb-47bb-4ba1-848a-98dba02bdbb2.png)


The diagram shows the following workflow:

1. A data engineer or AWS administrator creates user-defined cost allocation tags for the Amazon EMR clusters.

1. An AWS administrator activates the tags.

1. The tags report metadata to AWS Cost Explorer.
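If you prefer to script this tagging step, the sketch below uses Boto3's EMR `AddTags` API (the cluster ID and tag values are hypothetical placeholders). Unlike Glue, EMR takes tags as a list of key/value pairs rather than a flat map:

```python
def emr_tag_list(tags: dict) -> list:
    """Convert a {key: value} map into the [{Key, Value}] shape EMR expects."""
    return [{"Key": k, "Value": v} for k, v in sorted(tags.items())]


def tag_emr_cluster(cluster_id: str, tags: dict, region: str = "us-east-1") -> None:
    """Attach user-defined cost allocation tags to a running EMR cluster.

    Requires AWS credentials with elasticmapreduce:AddTags permission.
    """
    import boto3  # imported here so the converter above stays usable offline

    emr = boto3.client("emr", region_name=region)
    emr.add_tags(ResourceId=cluster_id, Tags=emr_tag_list(tags))
```

For example, `tag_emr_cluster("j-2AXXXXXXGAPLF", {"team": "analytics"})` would tag a hypothetical cluster; the tags then propagate to Cost Explorer after an administrator activates them.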

## Tools
<a name="create-detailed-cost-and-usage-reports-for-amazon-emr-clusters-by-using-aws-cost-explorer-tools"></a>

+ [Amazon EMR](https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-what-is-emr.html) is a managed cluster platform that simplifies running big data frameworks on AWS to process and analyze large amounts of data.
+ [AWS Cost Explorer](https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/ce-what-is.html) helps you view and analyze your AWS costs and usage.

## Epics
<a name="create-detailed-cost-and-usage-reports-for-amazon-emr-clusters-by-using-aws-cost-explorer-epics"></a>

### Create and activate tags for your Amazon EMR clusters
<a name="create-and-activate-tags-for-your-amazon-emr-clusters"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create user-defined cost allocation tags for your Amazon EMR clusters. | **To add tags to an existing Amazon EMR cluster:** Follow the instructions in [Adding tags to an existing cluster](https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-plan-tags-add.html) in the *Amazon EMR Management Guide*. **To add tags to a new Amazon EMR cluster:** Follow the instructions in [Add tags to a new cluster](https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-plan-tags-add-new.html) in the *Amazon EMR Management Guide*. For more information about how to set up an Amazon EMR cluster, see [Plan and configure clusters](https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-plan.html) in the *Amazon EMR Management Guide*. | Data engineer | 
| Activate the user-defined cost allocation tags. | Follow the instructions in [Activating user-defined cost allocation tags](https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/activating-tags.html) in the *AWS Billing User Guide*. | AWS administrator | 

### Create cost and usage reports for your Amazon EMR clusters
<a name="create-cost-and-usage-reports-for-your-amazon-emr-clusters"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create cost and usage reports for your Amazon EMR clusters by using tag filters in AWS Cost Explorer. | [See the AWS documentation website for more details.](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-detailed-cost-and-usage-reports-for-amazon-emr-clusters-by-using-aws-cost-explorer.html) For more information, see [Exploring your data using Cost Explorer](https://docs.aws.amazon.com/cost-management/latest/userguide/ce-exploring-data.html) in the *AWS Cost Management User Guide*. | General AWS, AWS administrator | 
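When a Cost Explorer query is grouped by a tag (for example, `GroupBy=[{"Type": "TAG", "Key": "team"}]`), each group key comes back in the form `tagKey$tagValue`. The helper below totals those groups per tag value; the sample response shape is illustrative, with made-up amounts:

```python
def cost_by_tag(results_by_time: list, metric: str = "UnblendedCost") -> dict:
    """Collapse Cost Explorer grouped results into {"tagKey$tagValue": total}."""
    totals = {}
    for period in results_by_time:
        for group in period.get("Groups", []):
            key = group["Keys"][0]  # TAG groups look like "team$analytics"
            amount = float(group["Metrics"][metric]["Amount"])
            totals[key] = totals.get(key, 0.0) + amount
    return totals


# Illustrative GetCostAndUsage response fragment (amounts are made up):
SAMPLE = [
    {"TimePeriod": {"Start": "2024-01-01", "End": "2024-02-01"},
     "Groups": [
         {"Keys": ["team$analytics"],
          "Metrics": {"UnblendedCost": {"Amount": "10.50", "Unit": "USD"}}},
         {"Keys": ["team$platform"],
          "Metrics": {"UnblendedCost": {"Amount": "4.25", "Unit": "USD"}}},
     ]},
]
```

Feeding `SAMPLE` to `cost_by_tag` yields one total per tag value, which is the same team-level breakdown the Cost Explorer console shows when you group a report by the tag.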

# More patterns
<a name="costmanagement-more-patterns-pattern-list"></a>

**Topics**
+ [Automate AWS infrastructure operations by using Amazon Bedrock](automate-aws-infrastructure-operations-by-using-amazon-bedrock.md)
+ [Automatically inventory AWS resources across multiple accounts and Regions](automate-aws-resource-inventory.md)
+ [Automate the creation of Amazon WorkSpaces Applications resources using AWS CloudFormation](automate-the-creation-of-appstream-2-0-resources-using-aws-cloudformation.md)
+ [Automatically archive items to Amazon S3 using DynamoDB TTL](automatically-archive-items-to-amazon-s3-using-dynamodb-ttl.md)
+ [Automatically stop and start an Amazon RDS DB instance using AWS Systems Manager Maintenance Windows](automatically-stop-and-start-an-amazon-rds-db-instance-using-aws-systems-manager-maintenance-windows.md)
+ [Create detailed cost and usage reports for Amazon RDS and Amazon Aurora](create-detailed-cost-and-usage-reports-for-amazon-rds-and-amazon-aurora.md)
+ [Estimate storage costs for an Amazon DynamoDB table](estimate-storage-costs-for-an-amazon-dynamodb-table.md)
+ [Estimate the cost of a DynamoDB table for on-demand capacity](estimate-the-cost-of-a-dynamodb-table-for-on-demand-capacity.md)
+ [Set up event-driven auto scaling in Amazon EKS by using Amazon EKS Pod Identity and KEDA](event-driven-auto-scaling-with-eks-pod-identity-and-keda.md)
+ [Coordinate resource dependency and task execution by using the AWS Fargate WaitCondition hook construct](use-the-aws-fargate-waitcondition-hook-construct.md)

# End-user computing
<a name="endusercomputing-pattern-list"></a>

**Topics**
+ [Implement SAML 2.0 authentication for Amazon WorkSpaces by using Auth0 and AWS Managed Microsoft AD](implement-saml-authentication-for-amazon-workspaces-by-using-auth0-and-aws-managed-microsoft-ad.md)
+ [More patterns](endusercomputing-more-patterns-pattern-list.md)

# Implement SAML 2.0 authentication for Amazon WorkSpaces by using Auth0 and AWS Managed Microsoft AD
<a name="implement-saml-authentication-for-amazon-workspaces-by-using-auth0-and-aws-managed-microsoft-ad"></a>

*Siva Vinnakota and Shantanu Padhye, Amazon Web Services*

## Summary
<a name="implement-saml-authentication-for-amazon-workspaces-by-using-auth0-and-aws-managed-microsoft-ad-summary"></a>

This pattern shows how to integrate Auth0 with AWS Directory Service for Microsoft Active Directory to create a robust SAML 2.0 authentication solution for your Amazon WorkSpaces environment. It explains how to establish federation between these services to enable advanced features such as multi-factor authentication (MFA) and custom login flows while preserving seamless desktop access through AWS Managed Microsoft AD. Whether you're managing a handful of users or thousands, this integration helps provide flexibility and security for your organization. This pattern walks through the setup process so that you can implement the solution in your own environment.

## Prerequisites and limitations
<a name="implement-saml-authentication-for-amazon-workspaces-by-using-auth0-and-aws-managed-microsoft-ad-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ AWS Managed Microsoft AD
+ A provisioned desktop in Amazon WorkSpaces Personal that is associated with AWS Managed Microsoft AD
+ An Amazon Elastic Compute Cloud (Amazon EC2) instance
+ An Auth0 account

**Limitations**

Some AWS services aren’t available in all AWS Regions. For Region availability, see [AWS services by Region](https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/). For specific endpoints, see the [Service endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/aws-service-information.html) page, and choose the link for the service.

## Architecture
<a name="implement-saml-authentication-for-amazon-workspaces-by-using-auth0-and-aws-managed-microsoft-ad-architecture"></a>

The SAML 2.0 authentication process for a WorkSpaces client application consists of the five steps that are illustrated in the following diagram. These steps represent a typical login workflow. After you complete the setup in this pattern, this federated approach to authentication helps provide a structured and secure method for user access.

![Workflow for the SAML 2.0 authentication process for a WorkSpaces client application.](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/5a0f227c-c111-495b-9fde-98ae7832bb10/images/957b2a11-e898-4c4f-ae4e-c2e85bfa93a0.png)


The diagram shows the following workflow:

1. **Registration**. The user launches the client application for WorkSpaces and enters the WorkSpaces registration code for their SAML-enabled WorkSpaces directory. WorkSpaces returns the Auth0 identity provider (IdP) URL to the client application.

1. **Login**. The WorkSpaces client redirects the user’s web browser to the Auth0 URL. The user authenticates with their username and password. Auth0 returns a SAML assertion to the client browser. The SAML assertion is a digitally signed token that asserts the user’s identity.

1. **Authenticate**. The client browser posts the SAML assertion to the AWS Sign-In endpoint to validate it. AWS Sign-In allows the caller to assume an AWS Identity and Access Management (IAM) role. This returns a token that contains temporary credentials for the IAM role.

1. **WorkSpaces login**. The WorkSpaces client presents the token to the WorkSpaces service endpoint. WorkSpaces exchanges the token for a session token and returns the session token to the WorkSpaces client with a login URL. When the WorkSpaces client loads the login page, the username value is populated by the `NameId` value that’s passed in the SAML response.

1. **Streaming**. The user enters their password and authenticates against the WorkSpaces directory. After authentication, WorkSpaces returns a token to the client. The client redirects back to the WorkSpaces service and presents the token. This brokers a streaming session between the WorkSpaces client and the WorkSpace.

**Note**  
To set up a seamless single sign-on experience that doesn’t require a password prompt, see [Certificate-based authentication and WorkSpaces Personal](https://docs.aws.amazon.com/workspaces/latest/adminguide/certificate-based-authentication.html) in the WorkSpaces documentation.

## Tools
<a name="implement-saml-authentication-for-amazon-workspaces-by-using-auth0-and-aws-managed-microsoft-ad-tools"></a>

**AWS services**
+ [Amazon WorkSpaces](https://docs.aws.amazon.com/workspaces/latest/adminguide/amazon-workspaces.html) is a fully managed virtual desktop infrastructure (VDI) service that provides users with cloud-based desktops without having to procure and deploy hardware or install complex software.
+ [AWS Directory Service for Microsoft Active Directory](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/directory_microsoft_ad.html) enables your directory-aware workloads and AWS resources to use Microsoft Active Directory in the AWS Cloud.

**Other tools**
+ [Auth0](https://auth0.com/) is an authentication and authorization platform that helps you manage access to your applications.

## Epics
<a name="implement-saml-authentication-for-amazon-workspaces-by-using-auth0-and-aws-managed-microsoft-ad-epics"></a>

### Configure the Active Directory LDAP Connector in Auth0
<a name="configure-the-active-directory-ldap-connector-in-auth0"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Install the Active Directory LDAP connector in Auth0 with AWS Managed Microsoft AD. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/implement-saml-authentication-for-amazon-workspaces-by-using-auth0-and-aws-managed-microsoft-ad.html) | Cloud administrator, Cloud architect | 
| Create an application in Auth0 to generate the SAML metadata manifest file. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/implement-saml-authentication-for-amazon-workspaces-by-using-auth0-and-aws-managed-microsoft-ad.html) | Cloud administrator, Cloud architect | 

### Set up IdP, role, and policy for SAML 2.0 in IAM
<a name="set-up-idp-role-and-policy-for-saml-2-0-in-iam"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a SAML 2.0 IdP in IAM. | To set up SAML 2.0 as an IdP, follow the steps that are outlined in [Create a SAML identity provider in IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_create_saml.html) in the IAM documentation. | Cloud administrator | 
| Create an IAM role and policy for SAML 2.0 federation. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/implement-saml-authentication-for-amazon-workspaces-by-using-auth0-and-aws-managed-microsoft-ad.html) | Cloud administrator | 
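The federation role in this epic needs a trust policy that lets principals authenticated by the Auth0 IdP call `sts:AssumeRoleWithSAML`. The following is a minimal sketch, not the pattern's exact policy: the provider ARN is a placeholder, and the `SAML:aud` value is the standard AWS Sign-In SAML endpoint.

```python
def saml_trust_policy(saml_provider_arn: str) -> dict:
    """IAM trust policy for a role assumed through SAML 2.0 federation."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                # ARN of the SAML IdP created in IAM (placeholder format):
                # arn:aws:iam::<account-id>:saml-provider/<provider-name>
                "Principal": {"Federated": saml_provider_arn},
                "Action": "sts:AssumeRoleWithSAML",
                "Condition": {
                    "StringEquals": {
                        "SAML:aud": "https://signin.aws.amazon.com/saml"
                    }
                },
            }
        ],
    }
```

Serialize the returned dictionary with `json.dumps` when pasting it into the IAM console or passing it to the AWS CLI. The role's permissions policy (for example, allowing WorkSpaces streaming) is separate and is covered by the linked pattern details.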

### Configure assertions in Auth0
<a name="configure-assertions-in-auth0"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Configure Auth0 and SAML assertions. | You can use Auth0 actions to configure assertions in SAML 2.0 responses. A SAML assertion is a digitally signed token that asserts the user’s identity. [See the AWS documentation website for more details.](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/implement-saml-authentication-for-amazon-workspaces-by-using-auth0-and-aws-managed-microsoft-ad.html) This completes the setup of SAML 2.0 authentication for WorkSpaces Personal desktops. The [Architecture](#implement-saml-authentication-for-amazon-workspaces-by-using-auth0-and-aws-managed-microsoft-ad-architecture) section illustrates the authentication process after setup. | Cloud administrator | 

## Troubleshooting
<a name="implement-saml-authentication-for-amazon-workspaces-by-using-auth0-and-aws-managed-microsoft-ad-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| SAML 2.0 authentication issues in WorkSpaces | If you encounter issues when you implement SAML 2.0 authentication for WorkSpaces Personal, follow the steps and links in the [AWS re:Post article](https://repost.aws/knowledge-center/workspaces-saml-authentication-issues) on troubleshooting SAML 2.0 authentication. For additional information about investigating SAML 2.0 errors while accessing WorkSpaces, [see the AWS documentation website for more details.](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/implement-saml-authentication-for-amazon-workspaces-by-using-auth0-and-aws-managed-microsoft-ad.html) | 

## Related resources
<a name="implement-saml-authentication-for-amazon-workspaces-by-using-auth0-and-aws-managed-microsoft-ad-resources"></a>
+ [Set up SAML 2.0 for WorkSpaces Personal](https://docs.aws.amazon.com/workspaces/latest/adminguide/setting-up-saml.html) (WorkSpaces documentation)
+ [Auth0 documentation](https://auth0.com/docs)

# More patterns
<a name="endusercomputing-more-patterns-pattern-list"></a>

**Topics**
+ [Automate the creation of Amazon WorkSpaces Applications resources using AWS CloudFormation](automate-the-creation-of-appstream-2-0-resources-using-aws-cloudformation.md)
+ [Improve call quality on agent workstations in Amazon Connect contact centers](improve-call-quality-on-agent-workstations-in-amazon-connect-contact-centers.md)
+ [Run AWS Systems Manager Automation tasks synchronously from AWS Step Functions](run-aws-systems-manager-automation-tasks-synchronously-from-aws-step-functions.md)

# High-performance computing
<a name="highperformancecomputing-pattern-list"></a>

**Topics**
+ [Deploy a Lustre file system for high-performance data processing by using Terraform and DRA](deploy-lustre-file-system-for-high-performance-data-processing-with-terraform-dra.md)
+ [Set up a Grafana monitoring dashboard for AWS ParallelCluster](set-up-a-grafana-monitoring-dashboard-for-aws-parallelcluster.md)
+ [More patterns](highperformancecomputing-more-patterns-pattern-list.md)

# Deploy a Lustre file system for high-performance data processing by using Terraform and DRA
<a name="deploy-lustre-file-system-for-high-performance-data-processing-with-terraform-dra"></a>

*Arun Bagal and Ishwar Chauthaiwale, Amazon Web Services*

## Summary
<a name="deploy-lustre-file-system-for-high-performance-data-processing-with-terraform-dra-summary"></a>

This pattern shows how to automatically deploy a Lustre file system on AWS and integrate it with Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Simple Storage Service (Amazon S3).

This solution helps you quickly set up a high-performance computing (HPC) environment with integrated storage, compute resources, and Amazon S3 data access. It combines Lustre's storage capabilities with the flexible compute options provided by Amazon EC2 and the scalable object storage in Amazon S3, so you can tackle data-intensive workloads in machine learning, HPC, and big data analytics.

The pattern uses a HashiCorp Terraform module and Amazon FSx for Lustre to streamline the following process:
+ Provisioning a Lustre file system
+ Establishing a data repository association (DRA) between FSx for Lustre and an S3 bucket to link the Lustre file system with Amazon S3 objects
+ Creating an EC2 instance
+ Mounting the Lustre file system with the Amazon S3-linked DRA on the EC2 instance

The benefits of this solution include:
+ Modular design. You can easily maintain and update the individual components of this solution.
+ Scalability. You can quickly deploy consistent environments across AWS accounts or Regions.
+ Flexibility. You can customize the deployment to fit your specific needs.
+ Best practices. This pattern uses preconfigured modules that follow AWS best practices.

For more information about Lustre file systems, see the [Lustre website](https://www.lustre.org/).

## Prerequisites and limitations
<a name="deploy-lustre-file-system-for-high-performance-data-processing-with-terraform-dra-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ A least-privilege AWS Identity and Access Management (IAM) policy (for guidance, see [Techniques for writing least privilege IAM policies](https://aws.amazon.com/blogs/security/techniques-for-writing-least-privilege-iam-policies/))

**Limitations**

FSx for Lustre limits the Lustre file system to a single Availability Zone, which could be a concern if you have high availability requirements. If the Availability Zone that contains the file system fails, access to the file system is lost until recovery. To achieve high availability, you can use DRA to link the Lustre file system with Amazon S3, and transfer data between Availability Zones.

**Product versions**
+ [Terraform version 1.9.3 or later](https://developer.hashicorp.com/terraform/install?product_intent=terraform)
+ [HashiCorp AWS Provider version 4.0.0 or later](https://registry.terraform.io/providers/hashicorp/aws/latest)

## Architecture
<a name="deploy-lustre-file-system-for-high-performance-data-processing-with-terraform-dra-architecture"></a>

The following diagram shows the architecture for FSx for Lustre and complementary AWS services in the AWS Cloud.

![FSx for Lustre deployment with AWS KMS, Amazon EC2, Amazon CloudWatch Logs, and Amazon S3.](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/51d38589-e752-42cd-9f46-59c3c8d0bfd3/images/c1c21952-fd6f-4b1d-9bf8-09b2f4f4459f.png)


The architecture includes the following:
+ An S3 bucket is used as a durable, scalable, and cost-effective storage location for data. The integration between FSx for Lustre and Amazon S3 provides a high-performance file system that is seamlessly linked with Amazon S3.
+ FSx for Lustre runs and manages the Lustre file system.
+ Amazon CloudWatch Logs collects and monitors log data from the file system. These logs provide insights into the performance, health, and activity of your Lustre file system.
+ Amazon EC2 is used to access Lustre file systems by using the open source Lustre client. EC2 instances can access file systems from other Availability Zones within the same virtual private cloud (VPC). The networking configuration allows for access across subnets within the VPC. After the Lustre file system is mounted on the instance, you can work with its files and directories just as you would use a local file system.
+ AWS Key Management Service (AWS KMS) enhances the security of the file system by providing encryption for data at rest.

**Automation and scale**

Terraform makes it easier to deploy, manage, and scale your Lustre file systems across multiple environments. In FSx for Lustre, a single file system has size limitations, so you might need to horizontally scale by creating multiple file systems. You can use Terraform to provision multiple Lustre file systems based on your workload needs.

## Tools
<a name="deploy-lustre-file-system-for-high-performance-data-processing-with-terraform-dra-tools"></a>

**AWS services**
+ [Amazon CloudWatch Logs](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html) helps you centralize the logs from all your systems, applications, and AWS services so you can monitor them and archive them securely.
+ [Amazon Elastic Compute Cloud (Amazon EC2)](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html) provides scalable computing capacity in the AWS Cloud. You can launch as many virtual servers as you need and quickly scale them up or down.
+ [Amazon FSx for Lustre](https://docs.aws.amazon.com/fsx/latest/LustreGuide/what-is.html) makes it easy and cost-effective to launch, run, and scale a high-performance Lustre file system.
+ [AWS Key Management Service (AWS KMS)](https://docs.aws.amazon.com/kms/latest/developerguide/overview.html) helps you create and control cryptographic keys to help protect your data.
+ [Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) is a cloud-based object storage service that helps you store, protect, and retrieve any amount of data.

**Code repository**

The code for this pattern is available in the GitHub [Provision FSx for Lustre Filesystem using Terraform](https://github.com/aws-samples/provision-fsx-lustre-with-terraform) repository.

## Best practices
<a name="deploy-lustre-file-system-for-high-performance-data-processing-with-terraform-dra-best-practices"></a>
+ The following variables define the Lustre file system. Make sure to configure these correctly based on your environment, as instructed in the [Epics](#deploy-lustre-file-system-for-high-performance-data-processing-with-terraform-dra-epics) section.
  + `storage_capacity` – The storage capacity of the Lustre file system, in GiBs. The minimum and default setting is 1200 GiB.
  + `deployment_type` – The deployment type for the Lustre file system. For an explanation of the two options, `PERSISTENT_1` and `PERSISTENT_2` (default), see the [FSx for Lustre documentation](https://docs.aws.amazon.com/fsx/latest/LustreGuide/using-fsx-lustre.html#persistent-file-system).
  + `per_unit_storage_throughput` – The read and write throughput, in MBs per second per TiB.  
  + `subnet_id` – The ID of the private subnet where you want to deploy FSx for Lustre.
  + `vpc_id` – The ID of your virtual private cloud on AWS where you want to deploy FSx for Lustre.
  + `data_repository_path` – The path to the S3 bucket that will be linked to the Lustre file system.
  + `iam_instance_profile` – The IAM instance profile to use to launch the EC2 instance.
  + `kms_key_id` – The Amazon Resource Name (ARN) of the AWS KMS key that will be used for data encryption.
+ Ensure proper network access and placement within the VPC by using the `security_group` and `vpc_id` variables.
+ Run the `terraform plan` command as described in the [Epics](#deploy-lustre-file-system-for-high-performance-data-processing-with-terraform-dra-epics) section to preview and verify changes before applying them. This helps catch potential issues and ensures that you are aware of what will be deployed.
+ Use the `terraform validate` command as described in the [Epics](#deploy-lustre-file-system-for-high-performance-data-processing-with-terraform-dra-epics) section to check for syntax errors and to confirm that your configuration is correct.
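As a starting point, a `terraform.tfvars` file might set the variables described above as follows. All values are placeholders; substitute the IDs, bucket path, instance profile, and key ARN from your own environment.

```hcl
# Placeholder values only -- replace with your own environment's IDs and ARN.
storage_capacity            = 1200             # GiB; 1200 is the minimum and default
deployment_type             = "PERSISTENT_2"   # default deployment type
per_unit_storage_throughput = 250              # MB/s per TiB
vpc_id                      = "vpc-0123456789abcdef0"
subnet_id                   = "subnet-0123456789abcdef0"
data_repository_path        = "s3://amzn-s3-demo-bucket"
iam_instance_profile        = "lustre-demo-instance-profile"
kms_key_id                  = "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID"
```

After editing the file, run `terraform plan -var-file terraform.tfvars` as described in the Epics section to confirm the resulting configuration before applying it.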

## Epics
<a name="deploy-lustre-file-system-for-high-performance-data-processing-with-terraform-dra-epics"></a>

### Set up your environment
<a name="set-up-your-environment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Install Terraform. | To install Terraform on your local machine, follow the instructions in the [Terraform documentation](https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli). | AWS DevOps, DevOps engineer | 
| Set up AWS credentials. | To set up the AWS Command Line Interface (AWS CLI) profile for the account, follow the instructions in the [AWS documentation](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html). | AWS DevOps, DevOps engineer | 
| Clone the GitHub repository. | To clone the GitHub repository, run the command:<pre>git clone https://github.com/aws-samples/provision-fsx-lustre-with-terraform.git</pre> | AWS DevOps, DevOps engineer | 

### Configure and deploy FSx for Lustre
<a name="configure-and-deploy-fsxlustre"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Update the deployment configuration. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-lustre-file-system-for-high-performance-data-processing-with-terraform-dra.html) | AWS DevOps, DevOps engineer | 
| Initialize the Terraform environment. | To initialize your environment to run the Terraform `fsx_deployment` module, run:<pre>terraform init</pre> | AWS DevOps, DevOps engineer | 
| Validate the Terraform syntax. | To check for syntax errors and to confirm that your configuration is correct, run:<pre>terraform validate</pre> | AWS DevOps, DevOps engineer | 
| Validate the Terraform configuration. | To create a Terraform execution plan and preview the deployment, run:<pre>terraform plan -var-file terraform.tfvars</pre> | AWS DevOps, DevOps engineer | 
| Deploy the Terraform module. | To deploy the FSx for Lustre resources, run:<pre>terraform apply -var-file terraform.tfvars</pre> | AWS DevOps, DevOps engineer | 

### Clean up AWS resources
<a name="clean-up-aws-resources"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Remove AWS resources. | After you finish using your FSx for Lustre environment, you can remove the AWS resources deployed by Terraform to avoid incurring unnecessary charges. The Terraform module provided in the code repository automates this cleanup. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-lustre-file-system-for-high-performance-data-processing-with-terraform-dra.html) | AWS DevOps, DevOps engineer | 
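
The automated cleanup corresponds to the standard Terraform teardown command. As a sketch, run it from the deployment directory with the same variable file that you used for `terraform apply`:

```shell
# Destroy all resources created by this Terraform configuration.
terraform destroy -var-file terraform.tfvars
```

Terraform prompts for confirmation and then deletes the FSx for Lustre file system and related resources it created.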

## Troubleshooting
<a name="deploy-lustre-file-system-for-high-performance-data-processing-with-terraform-dra-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| FSx for Lustre returns errors. | For help with FSx for Lustre issues, see [Troubleshooting Amazon FSx for Lustre](https://docs.aws.amazon.com/fsx/latest/LustreGuide/troubleshooting.html) in the FSx for Lustre documentation. | 

## Related resources
<a name="deploy-lustre-file-system-for-high-performance-data-processing-with-terraform-dra-resources"></a>
+ [Building Amazon FSx for Lustre by using Terraform](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/fsx_lustre_file_system) (AWS Provider reference in the Terraform documentation)
+ [Getting started with Amazon FSx for Lustre](https://docs.aws.amazon.com/fsx/latest/LustreGuide/getting-started.html) (FSx for Lustre documentation)
+ [AWS blog posts about Amazon FSx for Lustre](https://aws.amazon.com/blogs/storage/tag/amazon-fsx-for-lustre/)

# Set up a Grafana monitoring dashboard for AWS ParallelCluster
<a name="set-up-a-grafana-monitoring-dashboard-for-aws-parallelcluster"></a>

*Dario La Porta and William Lu, Amazon Web Services*

## Summary
<a name="set-up-a-grafana-monitoring-dashboard-for-aws-parallelcluster-summary"></a>

AWS ParallelCluster helps you deploy and manage high performance computing (HPC) clusters. It supports AWS Batch and Slurm open source job schedulers. Although AWS ParallelCluster is integrated with Amazon CloudWatch for logging and metrics, it doesn't provide a monitoring dashboard for the workload.

The [Grafana dashboard for AWS ParallelCluster](https://github.com/aws-samples/aws-parallelcluster-monitoring) (GitHub) is a monitoring dashboard for AWS ParallelCluster. It provides job scheduler insights and detailed monitoring metrics at the operating system (OS) level. For more information about the dashboards included in this solution, see [Example Dashboards](https://github.com/aws-samples/aws-parallelcluster-monitoring#example-dashboards) in the GitHub repository. These metrics help you better understand the HPC workload and its performance. However, the dashboard code is not updated for the latest versions of AWS ParallelCluster or the open source packages that are used in the solution. This pattern enhances the solution to provide the following benefits:
+ Supports AWS ParallelCluster v3
+ Uses the latest version of open source packages, including Prometheus, Grafana, Prometheus Slurm Exporter, and NVIDIA DCGM-Exporter
+ Increases the number of CPU cores and GPUs that the Slurm jobs use
+ Adds a job monitoring dashboard
+ Enhances the GPU node monitoring dashboard for nodes with 4 or 8 graphics processing units (GPUs)

This version of the enhanced solution has been implemented and verified in an AWS customer's HPC production environment.

## Prerequisites and limitations
<a name="set-up-a-grafana-monitoring-dashboard-for-aws-parallelcluster-prereqs"></a>

**Prerequisites**
+ [AWS ParallelCluster CLI](https://docs.aws.amazon.com/parallelcluster/latest/ug/pcluster-v3.html), installed and configured.
+ A supported [network configuration](https://docs.aws.amazon.com/parallelcluster/latest/ug/iam-roles-in-parallelcluster-v3.html) for AWS ParallelCluster. This pattern uses the [AWS ParallelCluster using two subnets](https://docs.aws.amazon.com/parallelcluster/latest/ug/network-configuration-v3.html#network-configuration-v3-two-subnets) configuration, which requires a public subnet, private subnet, internet gateway, and NAT gateway.
+ All AWS ParallelCluster cluster nodes must have internet access. This is required so that the installation scripts can download the open source software and Docker images.
+ A [key pair](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html) in Amazon Elastic Compute Cloud (Amazon EC2). Resources that have this key pair have Secure Shell (SSH) access to the head node.

**Limitations**
+ This pattern is designed to support Ubuntu 20.04 LTS. If you're using a different version of Ubuntu or if you use Amazon Linux or CentOS, then you need to modify the scripts provided with this solution. These modifications are not included in this pattern.

**Product versions**
+ Ubuntu 20.04 LTS
+ ParallelCluster 3.X

**Billing and cost considerations**
+ The solution deployed in this pattern is not covered by the free tier. Charges apply for Amazon EC2, Amazon FSx for Lustre, the NAT gateway in Amazon VPC, and Amazon Route 53.

## Architecture
<a name="set-up-a-grafana-monitoring-dashboard-for-aws-parallelcluster-architecture"></a>

**Target architecture**

The following diagram shows how a user can access the monitoring dashboard for AWS ParallelCluster on the head node. The head node runs NICE DCV, Prometheus, Grafana, Prometheus Slurm Exporter, Prometheus Node Exporter, and NGINX Open Source. The compute nodes run Prometheus Node Exporter, and they also run NVIDIA DCGM-Exporter if the node contains GPUs. The head node retrieves information from the compute nodes and displays that data in the Grafana dashboard.

![\[Accessing the monitoring dashboard for AWS ParallelCluster on the head node.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/a2132c94-98e0-4b90-8be0-99ebfa546442/images/d2255792-f66a-4ef2-8f04-cc3d5482db5f.png)


In most cases, the head node is not heavily loaded because the job scheduler doesn't require a significant amount of CPU or memory. Users access the dashboard on the head node by using SSL on port 443.

Anyone with access to the portal can view the monitoring dashboards anonymously. Only the Grafana administrator can modify dashboards. You configure a password for the Grafana administrator in the `aws-parallelcluster-monitoring/docker-compose/docker-compose.head.yml` file.

## Tools
<a name="set-up-a-grafana-monitoring-dashboard-for-aws-parallelcluster-tools"></a>

**AWS services**
+ [NICE DCV](https://docs.aws.amazon.com/dcv/#nice-dcv) is a high-performance remote display protocol that helps you deliver remote desktops and application streaming from any cloud or data center to any device, over varying network conditions.
+ [AWS ParallelCluster](https://docs.aws.amazon.com/parallelcluster/latest/ug/what-is-aws-parallelcluster.html) helps you deploy and manage high performance computing (HPC) clusters. It supports AWS Batch and Slurm open source job schedulers.
+ [Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) is a cloud-based object storage service that helps you store, protect, and retrieve any amount of data.
+ [Amazon Virtual Private Cloud (Amazon VPC)](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html) helps you launch AWS resources into a virtual network that you’ve defined.

**Other tools**
+ [Docker](https://www.docker.com/) is a set of platform as a service (PaaS) products that use virtualization at the operating-system level to deliver software in containers.
+ [Grafana](https://grafana.com/docs/grafana/latest/introduction/) is an open source software that helps you query, visualize, alert on, and explore metrics, logs, and traces.
+ [NGINX Open Source](https://nginx.org/en/docs/?_ga=2.187509224.1322712425.1699399865-405102969.1699399865) is an open source web server and reverse proxy.
+ [NVIDIA Data Center GPU Manager (DCGM)](https://docs.nvidia.com/data-center-gpu-manager-dcgm/index.html) is a suite of tools for managing and monitoring NVIDIA data center graphics processing units (GPUs) in cluster environments. In this pattern, you use [DCGM-Exporter](https://github.com/NVIDIA/dcgm-exporter), which helps you export GPU metrics to Prometheus.
+ [Prometheus](https://prometheus.io/docs/introduction/overview/) is an open source system-monitoring toolkit that collects and stores its metrics as time-series data with associated key-value pairs, which are called *labels*. In this pattern, you also use [Prometheus Slurm Exporter](https://github.com/vpenso/prometheus-slurm-exporter) to collect and export metrics, and you use [Prometheus Node Exporter](https://github.com/prometheus/node_exporter) to export metrics from the compute nodes.
+ [Ubuntu](https://help.ubuntu.com/) is an open source, Linux-based operating system that is designed for enterprise servers, desktops, cloud environments, and IoT.
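
Each exporter serves plain-text metrics over HTTP, which Prometheus scrapes on a schedule. As a hypothetical spot check from the head node (the node address is a placeholder; 9100 and 9400 are the upstream default ports for Node Exporter and DCGM-Exporter, so confirm them against your own configuration):

```shell
# List the first few OS-level metrics from Node Exporter.
curl -s http://<compute-node-address>:9100/metrics | head

# On a GPU node, check a DCGM GPU-utilization metric.
curl -s http://<compute-node-address>:9400/metrics | grep DCGM_FI_DEV_GPU_UTIL
```

If either command returns nothing, check the container logs on that node, as described in the Troubleshooting section.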

**Code repository**

The code for this pattern is available in the GitHub [parallelcluster-monitoring-dashboard](https://github.com/aws-samples/parallelcluster-monitoring-dashboard) repository.

## Epics
<a name="set-up-a-grafana-monitoring-dashboard-for-aws-parallelcluster-epics"></a>

### Create the required resources
<a name="create-the-required-resources"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an S3 bucket. | Create an Amazon S3 bucket. You use this bucket to store the configuration scripts. For instructions, see [Creating a bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/create-bucket-overview.html) in the Amazon S3 documentation. | General AWS | 
| Clone the repository. | Clone the GitHub [parallelcluster-monitoring-dashboard](https://github.com/aws-samples/parallelcluster-monitoring-dashboard/tree/main/aws-parallelcluster-monitoring) repo by running the following command.<pre>git clone https://github.com/aws-samples/parallelcluster-monitoring-dashboard.git</pre> | DevOps engineer | 
| Create an admin password. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-a-grafana-monitoring-dashboard-for-aws-parallelcluster.html) | Linux Shell scripting | 
| Copy the required files into the S3 bucket. | Copy the [post\_install.sh](https://github.com/aws-samples/parallelcluster-monitoring-dashboard/blob/main/post_install.sh) script and the [aws-parallelcluster-monitoring](https://github.com/aws-samples/parallelcluster-monitoring-dashboard/tree/main/aws-parallelcluster-monitoring) folder into the S3 bucket you created. For instructions, see [Uploading objects](https://docs.aws.amazon.com/AmazonS3/latest/userguide/upload-objects.html) in the Amazon S3 documentation. | General AWS | 
| Configure an additional security group for the head node. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-a-grafana-monitoring-dashboard-for-aws-parallelcluster.html) | AWS administrator | 
| Configure an IAM policy for the head node. | Create an identity-based policy for the head node. This policy allows the node to retrieve metric data from Amazon CloudWatch. The GitHub repo contains an example [policy](https://github.com/aws-samples/parallelcluster-monitoring-dashboard/blob/main/policies/head_node.json). For instructions, see [Creating IAM policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create.html) in the AWS Identity and Access Management (IAM) documentation. | AWS administrator | 
| Configure an IAM policy for the compute nodes. | Create an identity-based policy for the compute nodes. This policy allows the node to create the tags that contain the job ID and job owner. The GitHub repo contains an example [policy](https://github.com/aws-samples/parallelcluster-monitoring-dashboard/blob/main/policies/compute_node.json). For instructions, see [Creating IAM policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create.html) in the IAM documentation. If you use the provided example file, replace the following values:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-a-grafana-monitoring-dashboard-for-aws-parallelcluster.html) | AWS administrator | 
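
The file-copy step in the table above can be sketched with the AWS CLI. The bucket name is a placeholder:

```shell
# Copy the post-install script and the monitoring folder into the bucket.
aws s3 cp post_install.sh s3://<your-bucket-name>/post_install.sh
aws s3 cp aws-parallelcluster-monitoring/ \
  s3://<your-bucket-name>/aws-parallelcluster-monitoring/ --recursive
```

Run the commands from the root of the cloned repository so that the relative paths resolve.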

### Create the cluster
<a name="create-the-cluster"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Modify the provided cluster configuration file. | Create the AWS ParallelCluster cluster. Use the provided [cluster.yaml](https://github.com/aws-samples/parallelcluster-monitoring-dashboard/blob/main/cluster.yaml) AWS ParallelCluster configuration file as a starting point to create the cluster. Replace the following values in the provided file:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-a-grafana-monitoring-dashboard-for-aws-parallelcluster.html) | AWS administrator | 
| Create the cluster. | In the AWS ParallelCluster CLI, enter the following command. This creates the cluster; AWS ParallelCluster generates and deploys the underlying CloudFormation stack for you. For more information about this command, see [pcluster create-cluster](https://docs.aws.amazon.com/parallelcluster/latest/ug/pcluster.create-cluster-v3.html) in the AWS ParallelCluster documentation.<pre>pcluster create-cluster -n <cluster_name> -c cluster.yaml</pre> | AWS administrator | 
| Monitor the cluster creation. | Enter the following command to monitor the cluster creation. For more information about this command, see [pcluster describe-cluster](https://docs.aws.amazon.com/parallelcluster/latest/ug/pcluster.describe-cluster-v3.html) in the AWS ParallelCluster documentation.<pre>pcluster describe-cluster -n <cluster_name></pre> | AWS administrator | 
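
You can also poll the describe command until creation finishes. This is a sketch that assumes the CLI output contains the cluster status string; the cluster name is a placeholder:

```shell
# Hypothetical polling loop: wait until the cluster reports CREATE_COMPLETE.
until pcluster describe-cluster -n <cluster_name> | grep -q 'CREATE_COMPLETE'; do
  echo "Cluster still creating..."
  sleep 30
done
```

If creation fails, `pcluster describe-cluster` reports a failed status instead, and the loop will not exit; check the CloudFormation console for the failure reason.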

### Use the Grafana dashboards
<a name="using-the-grafana-dashboards"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Access the Grafana portal. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-a-grafana-monitoring-dashboard-for-aws-parallelcluster.html) | AWS administrator | 

### Clean up the solution to stop incurring associated costs
<a name="clean-up-the-solution-to-stop-incurring-associated-costs"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Delete the cluster. | Enter the following command to delete the cluster. For more information about this command, see [pcluster delete-cluster](https://docs.aws.amazon.com/parallelcluster/latest/ug/pcluster.delete-cluster-v3.html) in the AWS ParallelCluster documentation.<pre>pcluster delete-cluster -n <cluster_name></pre> | AWS administrator | 
| Delete the IAM policies. | Delete the policies that you created for the head node and compute node. For more information about deleting policies, see [Deleting IAM policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-delete.html) in the IAM documentation. | AWS administrator | 
| Delete the security group and rule. | Delete the security group that you created for the head node. For more information, see [Delete security group rules](https://docs.aws.amazon.com/vpc/latest/userguide/working-with-security-groups.html#deleting-security-group-rules) and [Delete a security group](https://docs.aws.amazon.com/vpc/latest/userguide/working-with-security-groups.html#deleting-security-groups) in the Amazon VPC documentation. | AWS administrator | 
| Delete the S3 bucket. | Delete the S3 bucket that you created to store the configuration scripts. For more information, see [Deleting a bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/delete-bucket.html) in the Amazon S3 documentation. | General AWS | 

## Troubleshooting
<a name="set-up-a-grafana-monitoring-dashboard-for-aws-parallelcluster-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| The head node is not accessible in the browser. | Check the security group and confirm that inbound port 443 is open. | 
| Grafana doesn't open. | On the head node, check the container log by running `docker logs Grafana`. | 
| Some metrics have no data. | On the head node, check the container logs of all containers. | 

## Related resources
<a name="set-up-a-grafana-monitoring-dashboard-for-aws-parallelcluster-resources"></a>

**AWS documentation**
+ [IAM policies for Amazon EC2](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-policies-for-amazon-ec2.html)

**Other AWS resources**
+ [AWS ParallelCluster](https://aws.amazon.com/hpc/parallelcluster/)
+ [Monitoring dashboard for AWS ParallelCluster](https://aws.amazon.com/blogs/compute/monitoring-dashboard-for-aws-parallelcluster/) (AWS blog post)

**Other resources**
+ [Prometheus monitoring system](https://prometheus.io/)
+ [Grafana](https://grafana.com/)

# More patterns
<a name="highperformancecomputing-more-patterns-pattern-list"></a>

**Topics**
+ [Implement AI-powered Kubernetes diagnostics and troubleshooting with K8sGPT and Amazon Bedrock integration](implement-ai-powered-kubernetes-diagnostics-and-troubleshooting-with-k8sgpt-and-amazon-bedrock-integration.md)

# Hybrid cloud
<a name="hybrid-pattern-list"></a>

**Topics**
+ [Set up a CI/CD pipeline for hybrid workloads on Amazon ECS Anywhere by using AWS CDK and GitLab](set-up-a-ci-cd-pipeline-for-hybrid-workloads-on-amazon-ecs-anywhere-by-using-aws-cdk-and-gitlab.md)
+ [More patterns](hybrid-more-patterns-pattern-list.md)

# Set up a CI/CD pipeline for hybrid workloads on Amazon ECS Anywhere by using AWS CDK and GitLab
<a name="set-up-a-ci-cd-pipeline-for-hybrid-workloads-on-amazon-ecs-anywhere-by-using-aws-cdk-and-gitlab"></a>

*Rafael Ortiz, Amazon Web Services*

## Summary
<a name="set-up-a-ci-cd-pipeline-for-hybrid-workloads-on-amazon-ecs-anywhere-by-using-aws-cdk-and-gitlab-summary"></a>

Amazon ECS Anywhere is an extension of Amazon Elastic Container Service (Amazon ECS). It provides support for registering an *external instance*, such as an on-premises server or virtual machine (VM), to your Amazon ECS cluster. This feature helps reduce costs and mitigate complex local container orchestration and operations. You can use ECS Anywhere to deploy and run container applications in both on-premises and cloud environments. It removes the need for your team to learn multiple domains and skill sets, or to manage complex software on their own.

This pattern describes a step-by-step approach to provisioning an Amazon ECS cluster with Amazon ECS Anywhere instances by using AWS Cloud Development Kit (AWS CDK) stacks. Next, you use AWS CodePipeline to set up a continuous integration and continuous deployment (CI/CD) pipeline. Finally, you replicate your GitLab code repository to AWS CodeCommit and deploy your containerized application on the Amazon ECS cluster.

This pattern is designed to help those who use on-premises infrastructure to run container applications and use GitLab to manage the application code base. You can manage those workloads by using AWS Cloud services, without disturbing your existing, on-premises infrastructure.

## Prerequisites and limitations
<a name="set-up-a-ci-cd-pipeline-for-hybrid-workloads-on-amazon-ecs-anywhere-by-using-aws-cdk-and-gitlab-prereqs"></a>

**Prerequisites**
+ An active AWS account.
+ A container application running on on-premises infrastructure.
+ A GitLab repository where you manage your application code base. For more information, see [Repository](https://docs.gitlab.com/ee/user/project/repository/) (GitLab).
+ AWS Command Line Interface (AWS CLI), installed and configured. For more information, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) (AWS CLI documentation).
+ AWS CDK Toolkit, installed and configured globally. For more information, see [Install the AWS CDK](https://docs.aws.amazon.com/cdk/v2/guide/getting_started.html#getting_started_install) (AWS CDK documentation).
+ npm, installed and configured for the AWS CDK in TypeScript. For more information, see [Downloading and installing Node.js and npm](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) (npm documentation).

**Limitations**
+ For limitations and considerations, see [External instances (Amazon ECS Anywhere)](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-anywhere.html#ecs-anywhere-considerations) in the Amazon ECS documentation.

**Product versions**
+ AWS CDK Toolkit version 2.27.0 or later
+ npm version 7.20.3 or later
+ Node.js version 16.6.1 or later

## Architecture
<a name="set-up-a-ci-cd-pipeline-for-hybrid-workloads-on-amazon-ecs-anywhere-by-using-aws-cdk-and-gitlab-architecture"></a>

**Target technology stack**
+ AWS CDK
+ AWS CloudFormation
+ AWS CodeBuild
+ AWS CodeCommit
+ AWS CodePipeline
+ Amazon ECS Anywhere
+ Amazon Elastic Container Registry (Amazon ECR)
+ AWS Identity and Access Management (IAM)
+ AWS Systems Manager
+ GitLab repository

**Target architecture**

![\[Architecture diagram of setting up the Amazon ECS cluster and CI/CD pipeline.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/b0f35986-a839-4b01-8eb0-4748182ddafc/images/85b8d4d9-3591-4d69-a54b-64aa543498f1.png)


This diagram represents the two primary workflows described in this pattern: provisioning the Amazon ECS cluster, and setting up and deploying the CI/CD pipeline.

1. **Provisioning the Amazon ECS cluster**

   1. When you deploy the first AWS CDK stack, it creates a CloudFormation stack on AWS.

   1. This CloudFormation stack provisions an Amazon ECS cluster and related AWS resources.

   1. To register an external instance with an Amazon ECS cluster, you must install AWS Systems Manager Agent (SSM Agent) on your VM and register the VM as an AWS Systems Manager managed instance. 

   1. You must also install the Amazon ECS container agent and Docker on your VM to register it as an external instance with the Amazon ECS cluster.

   1. After the VM is registered and configured as an external instance, the Amazon ECS cluster can run multiple containers on it.

   1. The Amazon ECS cluster is active and can run the application workloads through containers. The Amazon ECS Anywhere container instance runs in the on-premises environment but is associated with the Amazon ECS cluster in the cloud.

1. **Setting up and deploying the CI/CD pipeline**

   1. When you deploy the second AWS CDK stack, it creates another CloudFormation stack on AWS.

   1. This CloudFormation stack provisions a pipeline in CodePipeline and related AWS resources.

   1. You push and merge application code changes to an on-premises GitLab repository. 

   1. The GitLab repository is automatically replicated to the CodeCommit repository.

   1. Updates to the CodeCommit repo automatically start CodePipeline. 

   1. CodePipeline copies code from CodeCommit and creates the deployable application build in CodeBuild.

   1. CodePipeline uses CodeBuild to create a Docker image of the application and pushes it to the Amazon ECR repo.

   1. CodePipeline initiates CodeDeploy actions that pull the container image from the Amazon ECR repo.

   1. CodePipeline deploys the container image on the Amazon ECS cluster.
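
The registration flow in steps 1.c and 1.d above can be sketched as follows. This is based on the general ECS Anywhere registration procedure; the angle-bracket values are placeholders, and you should confirm the exact commands against the Amazon ECS documentation:

```shell
# 1. Create an SSM activation for the instance role (returns an
#    activation ID and code).
aws ssm create-activation --iam-role <ecs-anywhere-role-name>

# 2. On the VM, download and run the ECS Anywhere install script, which
#    installs SSM Agent, the ECS container agent, and Docker, and then
#    registers the VM with the cluster.
curl -o ecs-anywhere-install.sh \
  "https://amazon-ecs-agent.s3.amazonaws.com/ecs-anywhere-install-latest.sh"
sudo bash ecs-anywhere-install.sh \
  --cluster <cluster-name> \
  --activation-id <activation-id> \
  --activation-code <activation-code> \
  --region <aws-region>
```

In this pattern, the Epics section automates the equivalent steps against the Vagrant VM.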

**Automation and scale**

This pattern uses the AWS CDK as an infrastructure as code (IaC) tool to configure and deploy this architecture. AWS CDK helps you orchestrate the AWS resources and set up Amazon ECS Anywhere and the CI/CD pipeline.

## Tools
<a name="set-up-a-ci-cd-pipeline-for-hybrid-workloads-on-amazon-ecs-anywhere-by-using-aws-cdk-and-gitlab-tools"></a>

**AWS services**
+ [AWS Cloud Development Kit (AWS CDK)](https://docs.aws.amazon.com/cdk/latest/guide/home.html) is a software development framework that helps you define and provision AWS Cloud infrastructure in code.
+ [AWS CodeCommit](https://docs.aws.amazon.com/codecommit/latest/userguide/welcome.html) is a version control service that helps you privately store and manage Git repositories, without needing to manage your own source control system.
+ [AWS CodePipeline](https://docs.aws.amazon.com/codepipeline/latest/userguide/welcome.html) helps you quickly model and configure the different stages of a software release and automate the steps required to release software changes continuously.
+ [AWS Command Line Interface (AWS CLI)](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) is an open-source tool that helps you interact with AWS services through commands in your command-line shell.
+ [Amazon Elastic Container Registry (Amazon ECR)](https://docs.aws.amazon.com/AmazonECR/latest/userguide/what-is-ecr.html) is a managed container image registry service that’s secure, scalable, and reliable.
+ [Amazon Elastic Container Service (Amazon ECS)](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html) is a fast and scalable container management service that helps you run, stop, and manage containers on a cluster. This pattern also uses [Amazon ECS Anywhere](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-anywhere.html), which provides support for registering an on-premises server or VM to your Amazon ECS cluster.

**Other tools**
+ [Node.js](https://nodejs.org/en/docs/) is an event-driven JavaScript runtime environment designed for building scalable network applications.
+ [npm](https://docs.npmjs.com/about-npm) is a software registry that runs in a Node.js environment and is used to share or borrow packages and manage deployment of private packages.
+ [Vagrant](https://developer.hashicorp.com/vagrant/docs) is an open-source utility for building and maintaining portable virtual software development environments. For demonstration purposes, this pattern uses Vagrant to create an on-premises VM.

**Code repository**

The code for this pattern is available in the GitHub [CI/CD pipeline for Amazon ECS Anywhere using AWS CDK](https://github.com/aws-samples/amazon-ecs-anywhere-cicd-pipeline-cdk-sample) repository.

## Best practices
<a name="set-up-a-ci-cd-pipeline-for-hybrid-workloads-on-amazon-ecs-anywhere-by-using-aws-cdk-and-gitlab-best-practices"></a>

Consider the following best practices when deploying this pattern:
+ [Best practices for developing and deploying cloud infrastructure with the AWS CDK](https://docs.aws.amazon.com/cdk/v2/guide/best-practices.html)
+ [Best practices for developing cloud applications with AWS CDK](https://aws.amazon.com/blogs/devops/best-practices-for-developing-cloud-applications-with-aws-cdk/) (AWS blog post)

## Epics
<a name="set-up-a-ci-cd-pipeline-for-hybrid-workloads-on-amazon-ecs-anywhere-by-using-aws-cdk-and-gitlab-epics"></a>

### Verify the AWS CDK configuration
<a name="verify-the-aws-cdk-configuration"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Verify the AWS CDK version. | Verify the version of the AWS CDK Toolkit by entering the following command.<pre>cdk --version</pre>This pattern requires version 2.27.0 or later. If you have an earlier version, follow the instructions in the [AWS CDK documentation](https://docs.aws.amazon.com/cdk/latest/guide/cli.html) to update it. | DevOps engineer | 
| Verify the npm version. | Verify the version of npm by entering the following command.<pre>npm --version</pre>This pattern requires version 7.20.3 or later. If you have an earlier version, follow the instructions in the [npm documentation](https://docs.npmjs.com/try-the-latest-stable-version-of-npm) to update it. | DevOps engineer | 
| Set up AWS credentials. | Set up AWS credentials by entering the `aws configure` command and following the prompts.<pre>$aws configure<br />AWS Access Key ID [None]: <your-access-key-ID><br />AWS Secret Access Key [None]: <your-secret-access-key><br />Default region name [None]: <your-Region-name><br />Default output format [None]:</pre> | DevOps engineer | 

### Bootstrap the AWS CDK environment
<a name="bootstrap-the-aws-cdk-environment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clone the AWS CDK code repository. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-a-ci-cd-pipeline-for-hybrid-workloads-on-amazon-ecs-anywhere-by-using-aws-cdk-and-gitlab.html) | DevOps engineer | 
| Bootstrap the environment. | Deploy the CloudFormation template to the account and AWS Region that you want to use by entering the following command.<pre>cdk bootstrap <account-number>/<Region></pre>For more information, see [Bootstrapping](https://docs.aws.amazon.com/cdk/latest/guide/bootstrapping.html) in the AWS CDK documentation. | DevOps engineer | 

### Build and deploy the infrastructure for Amazon ECS Anywhere
<a name="build-and-deploy-the-infrastructure-for-amazon-ecs-anywhere"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Install the package dependencies. | Install the package dependencies by entering the following commands.<pre>$cd EcsAnywhereCdk<br />$npm install<br />$npm fund </pre>These commands install all the packages from the sample repository. For more information, see [npm ci](https://docs.npmjs.com/cli/v7/commands/npm-ci) and [npm install](https://docs.npmjs.com/cli/v7/commands/npm-install) in the npm documentation. If you get any errors about missing packages when you enter these commands, see the [Troubleshooting](#set-up-a-ci-cd-pipeline-for-hybrid-workloads-on-amazon-ecs-anywhere-by-using-aws-cdk-and-gitlab-troubleshooting) section of this pattern. | DevOps engineer | 
| Build the project. | To build the project code, enter the following command.<pre>npm run build</pre>For more information about building and deploying the project, see [Your first AWS CDK app](https://docs.aws.amazon.com/cdk/latest/guide/hello_world.html#:~:text=the%20third%20parameter.-,Synthesize%20an%20AWS%20CloudFormation%20template,-Synthesize%20an%20AWS) in the AWS CDK documentation. | DevOps engineer | 
| Deploy the Amazon ECS Anywhere infrastructure stack. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-a-ci-cd-pipeline-for-hybrid-workloads-on-amazon-ecs-anywhere-by-using-aws-cdk-and-gitlab.html) | DevOps engineer | 
| Verify stack creation and output. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-a-ci-cd-pipeline-for-hybrid-workloads-on-amazon-ecs-anywhere-by-using-aws-cdk-and-gitlab.html) | DevOps engineer | 

### Set up an on-premises VM
<a name="set-up-an-on-premises-vm"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Set up your VM. | Create a Vagrant VM by entering the `vagrant up` command from the root directory where the Vagrantfile is located. For more information, see the [Vagrant documentation](https://developer.hashicorp.com/vagrant/docs/cli/up). | DevOps engineer | 
| Register your VM as an external instance. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-a-ci-cd-pipeline-for-hybrid-workloads-on-amazon-ecs-anywhere-by-using-aws-cdk-and-gitlab.html)This sets up your VM as an Amazon ECS Anywhere external instance and registers the instance in the Amazon ECS cluster. For more information, see [Registering an external instance to a cluster](https://docs.amazonaws.cn/en_us/AmazonECS/latest/developerguide/ecs-anywhere-registration.html) in the Amazon ECS documentation. If you experience any issues, see the [Troubleshooting](#set-up-a-ci-cd-pipeline-for-hybrid-workloads-on-amazon-ecs-anywhere-by-using-aws-cdk-and-gitlab-troubleshooting) section. | DevOps engineer | 
| Verify the status of Amazon ECS Anywhere and the external VM. | To verify whether your VM is connected to the Amazon ECS control plane and running, use the following commands.<pre>$aws ssm describe-instance-information<br />$aws ecs list-container-instances --cluster $CLUSTER_NAME</pre> | DevOps engineer | 
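
The verification commands in the last step return JSON. If you script this check, the two responses can be summarized programmatically. The following Python sketch is a convenience only, not part of the pattern's repository; the field names (`InstanceInformationList`, `PingStatus`, `containerInstanceArns`) follow the documented response shapes of the two AWS CLI commands:

```python
import json

def summarize_registration(ssm_response: dict, ecs_response: dict) -> dict:
    """Summarize SSM managed-instance health and Amazon ECS Anywhere registration.

    ssm_response: parsed output of `aws ssm describe-instance-information`
    ecs_response: parsed output of `aws ecs list-container-instances`
    """
    # SSM reports the VM as a managed instance; "Online" means the agent is connected.
    online = [
        inst["InstanceId"]
        for inst in ssm_response.get("InstanceInformationList", [])
        if inst.get("PingStatus") == "Online"
    ]
    # ECS lists the external instance as a container instance in the cluster.
    instances = ecs_response.get("containerInstanceArns", [])
    return {
        "online_managed_instances": online,
        "registered_container_instances": len(instances),
        "healthy": bool(online) and bool(instances),
    }

if __name__ == "__main__":
    # Sample responses; in practice, redirect the AWS CLI output to files
    # and load them with json.load.
    ssm = {"InstanceInformationList": [{"InstanceId": "mi-0123456789abcdef0", "PingStatus": "Online"}]}
    ecs = {"containerInstanceArns": ["arn:aws:ecs:us-east-1:123456789012:container-instance/abc"]}
    print(json.dumps(summarize_registration(ssm, ecs)))
```

For example, save the command output with `aws ssm describe-instance-information > ssm.json` and load it with `json.load` before calling the function.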

### Deploy the CI/CD pipeline
<a name="deploy-the-ci-cd-pipeline"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a branch in the CodeCommit repo. | Create a branch named `main` in the CodeCommit repo by creating the first commit for the repository. For instructions, see [Create a commit in CodeCommit](https://docs.aws.amazon.com/codecommit/latest/userguide/how-to-create-commit.html#create-first-commit) in the AWS documentation. The following command is an example.<pre>aws codecommit put-file \<br />  --repository-name EcsAnywhereRepo \<br />  --branch-name main \<br />  --file-path README.md \<br />  --file-content "Test" \<br />  --name "Dev Ops" \<br />  --email "devops@example.com" \<br />  --commit-message "Adding README."</pre> | DevOps engineer | 
| Set up repo mirroring. | You can mirror a GitLab repository to and from external sources, and you can select which repository serves as the source. Branches, tags, and commits are synced automatically. Set up a push mirror between the GitLab repository that hosts your application and the CodeCommit repository. For instructions, see [Set up a push mirror from GitLab to CodeCommit](https://docs.gitlab.com/ee/user/project/repository/mirror/push.html#set-up-a-push-mirror-from-gitlab-to-aws-codecommit) (GitLab documentation). By default, mirroring automatically syncs the repository. If you want to manually update the repositories, see [Update a mirror](https://docs.gitlab.com/ee/user/project/repository/mirror/#update-a-mirror) (GitLab documentation). | DevOps engineer | 
| Deploy the CI/CD pipeline stack. | Deploy the `EcsAnywherePipelineStack` stack by entering the following command.<pre>$cdk deploy EcsAnywherePipelineStack</pre> | DevOps engineer | 
| Test the CI/CD pipeline. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-a-ci-cd-pipeline-for-hybrid-workloads-on-amazon-ecs-anywhere-by-using-aws-cdk-and-gitlab.html) | DevOps engineer | 

### Clean up
<a name="clean-up"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clean up and delete the resources. | After you walk through this pattern, you should remove the proof-of-concept resources you created. To clean up, enter the following commands.<pre>$cdk destroy EcsAnywherePipelineStack<br />$cdk destroy EcsAnywhereInfraStack</pre> | DevOps engineer | 

## Troubleshooting
<a name="set-up-a-ci-cd-pipeline-for-hybrid-workloads-on-amazon-ecs-anywhere-by-using-aws-cdk-and-gitlab-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| Errors about missing packages when installing package dependencies. | Enter one of the following commands to resolve missing packages.<pre>$npm ci</pre>or<pre>$npm install -g @aws-cdk/<package_name></pre> | 
| When you run the `aws ssm create-activation` command on the VM, you receive the following error.`An error occurred (ValidationException) when calling the CreateActivation operation: Nonexistent role or missing ssm service principal in trust policy: arn:aws:iam::000000000000:role/EcsAnywhereInstanceRole` | The `EcsAnywhereInfraStack` stack isn’t fully deployed, and the IAM role necessary to run this command hasn’t been created yet. Check the stack status in the CloudFormation console. Retry the command after the status changes to `CREATE_COMPLETE`. | 
| An Amazon ECS health check returns `UNHEALTHY`, and you see the following error in the **Services** section of the cluster in the Amazon ECS console.`service EcsAnywhereService was unable to place a task because no container instance met all of its requirements. Reason: No Container Instances were found in your cluster.` | Restart the Amazon ECS agent on your Vagrant VM by entering the following commands.<pre>$vagrant ssh<br />$sudo systemctl restart ecs<br />$sudo systemctl status ecs</pre> | 

## Related resources
<a name="set-up-a-ci-cd-pipeline-for-hybrid-workloads-on-amazon-ecs-anywhere-by-using-aws-cdk-and-gitlab-resources"></a>
+ [Amazon ECS Anywhere marketing page](https://aws.amazon.com/ecs/anywhere/)
+ [Amazon ECS Anywhere documentation](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-anywhere.html#ecs-anywhere-considerations)
+ [Amazon ECS Anywhere demo](https://www.youtube.com/watch?v=-eud6yUXsJM) (video)
+ [Amazon ECS Anywhere workshop samples](https://github.com/aws-samples/aws-ecs-anywhere-workshop-samples) (GitHub)
+ [Repository mirroring](https://docs.gitlab.com/ee/user/project/repository/mirror/) (GitLab documentation)

# More patterns
<a name="hybrid-more-patterns-pattern-list"></a>

**Topics**
+ [Deploy containerized applications on AWS IoT Greengrass V2 running as a Docker container](deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container.md)
+ [Manage on-premises container applications by setting up Amazon ECS Anywhere with the AWS CDK](manage-on-premises-container-applications-by-setting-up-amazon-ecs-anywhere-with-the-aws-cdk.md)
+ [Modify HTTP headers when you migrate from F5 to an Application Load Balancer on AWS](modify-http-headers-when-you-migrate-from-f5-to-an-application-load-balancer-on-aws.md)
+ [Rehost on-premises workloads in the AWS Cloud: migration checklist](rehost-on-premises-workloads-in-the-aws-cloud-migration-checklist.md)
+ [Use BMC Discovery queries to extract migration data for migration planning](use-bmc-discovery-queries-to-extract-migration-data-for-migration-planning.md)

# Management & governance
<a name="governance-pattern-list"></a>

**Topics**
+ [Identify and alert when Amazon Data Firehose resources are not encrypted with an AWS KMS key](identify-and-alert-when-amazon-data-firehose-resources-are-not-encrypted-with-an-aws-kms-key.md)
+ [Automate Amazon VPC IPAM IPv4 CIDR allocations for new AWS accounts by using AFT](automate-amazon-vpc-ipam-ipv4-cidr-allocations-for-new-aws-accounts-by-using-aft.md)
+ [Automate adding or updating Windows registry entries using AWS Systems Manager](automate-adding-or-updating-windows-registry-entries-using-aws-systems-manager.md)
+ [Automatically create an RFC in AMS using Python](automatically-create-an-rfc-in-ams-using-python.md)
+ [Automatically stop and start an Amazon RDS DB instance using AWS Systems Manager Maintenance Windows](automatically-stop-and-start-an-amazon-rds-db-instance-using-aws-systems-manager-maintenance-windows.md)
+ [Centralize software package distribution in AWS Organizations by using Terraform](centralize-software-package-distribution-in-aws-organizations-by-using-terraform.md)
+ [Configure logging for .NET applications in Amazon CloudWatch Logs by using NLog](configure-logging-for-net-applications-in-amazon-cloudwatch-logs-by-using-nlog.md)
+ [Copy AWS Service Catalog products across different AWS accounts and AWS Regions](copy-aws-service-catalog-products-across-different-aws-accounts-and-aws-regions.md)
+ [Create a RACI or RASCI matrix for a cloud operating model](create-a-raci-or-rasci-matrix-for-a-cloud-operating-model.md)
+ [Create alarms for custom metrics using Amazon CloudWatch anomaly detection](create-alarms-for-custom-metrics-using-amazon-cloudwatch-anomaly-detection.md)
+ [Create an AWS Cloud9 IDE that uses Amazon EBS volumes with default encryption](create-an-aws-cloud9-ide-that-uses-amazon-ebs-volumes-with-default-encryption.md)
+ [Create tag-based Amazon CloudWatch dashboards automatically](create-tag-based-amazon-cloudwatch-dashboards-automatically.md)
+ [Document your AWS landing zone design](document-your-aws-landing-zone-design.md)
+ [Improve operational performance by enabling Amazon DevOps Guru across multiple AWS Regions, accounts, and OUs with the AWS CDK](improve-operational-performance-by-enabling-amazon-devops-guru-across-multiple-aws-regions-accounts-and-ous-with-the-aws-cdk.md)
+ [Govern permission sets for multiple accounts by using Account Factory for Terraform](govern-permission-sets-aft.md)
+ [Implement Account Factory for Terraform (AFT) by using a bootstrap pipeline](implement-account-factory-for-terraform-aft-by-using-a-bootstrap-pipeline.md)
+ [Manage AWS Service Catalog products in multiple AWS accounts and AWS Regions](manage-aws-service-catalog-products-in-multiple-aws-accounts-and-aws-regions.md)
+ [Monitor SAP RHEL Pacemaker clusters by using AWS services](monitor-sap-rhel-pacemaker-clusters-by-using-aws-services.md)
+ [Monitor application activity by using CloudWatch Logs Insights](monitor-application-activity-by-using-cloudwatch-logs-insights.md)
+ [Monitor use of a shared Amazon Machine Image across multiple AWS accounts](monitor-use-of-a-shared-amazon-machine-image-across-multiple-aws-accounts.md)
+ [View EBS snapshot details for your AWS account or organization](view-ebs-snapshot-details-for-your-aws-account-or-organization.md)
+ [More patterns](governance-more-patterns-pattern-list.md)

# Identify and alert when Amazon Data Firehose resources are not encrypted with an AWS KMS key
<a name="identify-and-alert-when-amazon-data-firehose-resources-are-not-encrypted-with-an-aws-kms-key"></a>

*Ram Kandaswamy, Amazon Web Services*

## Summary
<a name="identify-and-alert-when-amazon-data-firehose-resources-are-not-encrypted-with-an-aws-kms-key-summary"></a>

For compliance, some organizations must have encryption enabled on data delivery resources such as Amazon Data Firehose. This pattern shows a way to monitor, detect, and notify when resources are out of compliance.

To meet this encryption requirement, you can use this pattern on AWS to automatically monitor and detect Amazon Data Firehose delivery resources that aren’t encrypted with an AWS Key Management Service (AWS KMS) key. The solution sends alert notifications, and it can be extended to perform automatic remediation. You can apply the solution to an individual account or to a multiple-account environment, such as an environment that uses an AWS landing zone or AWS Control Tower.

## Prerequisites and limitations
<a name="identify-and-alert-when-amazon-data-firehose-resources-are-not-encrypted-with-an-aws-kms-key-prereqs"></a>

**Prerequisites**
+ Amazon Data Firehose delivery stream
+ Sufficient permissions and familiarity with CloudFormation, which is used in this infrastructure automation

**Limitations**
+ The solution is not real time. It uses AWS CloudTrail events for detection, so there is a delay between the creation of an unencrypted resource and the notification.

## Architecture
<a name="identify-and-alert-when-amazon-data-firehose-resources-are-not-encrypted-with-an-aws-kms-key-architecture"></a>

**Target technology stack**

The solution uses serverless technology and the following services:
+ AWS CloudTrail
+ Amazon CloudWatch
+ AWS Command Line Interface (AWS CLI)
+ AWS Identity and Access Management (IAM)
+ Amazon Data Firehose
+ AWS Lambda
+ Amazon Simple Notification Service (Amazon SNS)

**Target architecture**

![\[Process for generating alerts when Data Firehose resources aren't encrypted.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/897ba8cf-d1c2-4149-98e7-09d3d90d13d6/images/d694f718-bd0c-4d14-a2e4-e0ea58dc048e.png)


The diagram illustrates these steps:

1. A user creates or modifies an Amazon Data Firehose delivery stream.

1. A CloudTrail event is detected and matched.

1. The Lambda function is invoked.

1. Non-compliant resources are identified.

1. An email notification is sent.
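
The detection step can be reduced to a predicate over the CloudTrail event payload. The following Python sketch is illustrative only; the actual handler ships in the attached template, and the field names shown (`eventName`, `requestParameters.deliveryStreamEncryptionConfigurationInput`, `keyType`) are assumptions based on the Firehose `CreateDeliveryStream` API shape, so verify them against a real event before reuse:

```python
def is_unencrypted_firehose_event(detail: dict) -> bool:
    """Return True if a CloudTrail record describes a Firehose delivery
    stream created without a customer-managed AWS KMS key.

    `detail` is the `detail` field of the CloudWatch Events payload.
    Field names are assumptions; confirm them against real events.
    """
    if detail.get("eventName") != "CreateDeliveryStream":
        return False
    params = detail.get("requestParameters") or {}
    enc = params.get("deliveryStreamEncryptionConfigurationInput") or {}
    # A missing encryption block, or an AWS-owned key, counts as non-compliant here.
    return enc.get("keyType") != "CUSTOMER_MANAGED_CMK"

def handler(event, context=None):
    """Minimal Lambda handler sketch: flag non-compliant streams and build
    the alert message. The SNS publish call is commented out so the sketch
    stays runnable without AWS credentials."""
    detail = event.get("detail", {})
    if is_unencrypted_firehose_event(detail):
        stream = (detail.get("requestParameters") or {}).get("deliveryStreamName", "unknown")
        message = f"Unencrypted Firehose delivery stream detected: {stream}"
        # import boto3
        # boto3.client("sns").publish(TopicArn=TOPIC_ARN, Message=message)
        return {"compliant": False, "message": message}
    return {"compliant": True}
```

In the deployed solution, the CloudWatch Events rule filters on the event name before the function runs, so the handler only needs to inspect the encryption configuration.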

**Automation and scale**

You can use CloudFormation StackSets to apply this solution to multiple AWS Regions or accounts with a single command.

## Tools
<a name="identify-and-alert-when-amazon-data-firehose-resources-are-not-encrypted-with-an-aws-kms-key-tools"></a>
+ [AWS CloudTrail](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-user-guide.html) is an AWS service that helps you enable governance, compliance, and operational and risk auditing of your AWS account. Actions taken by a user, role, or an AWS service are recorded as events in CloudTrail. Events include actions taken in the AWS Management Console, AWS CLI, AWS SDKs, and API operations.
+ [Amazon CloudWatch Events](https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/WhatIsCloudWatchEvents.html) delivers a near real-time stream of system events that describe changes in AWS resources.
+ [AWS Command Line Interface (AWS CLI)](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) is an open source tool that enables you to interact with AWS services by using commands in your command line shell. 
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) is a web service that helps you securely control access to AWS resources. You use IAM to control who is authenticated (signed in) and authorized (has permissions) to use resources. 
+ [Amazon Data Firehose](https://docs.aws.amazon.com/firehose/latest/dev/what-is-this-service.html) is a fully managed service for delivering real-time streaming data. With Firehose, you don't have to write applications or manage resources. You configure your data producers to send data to Firehose, and it automatically delivers the data to the destination that you specified.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a compute service that supports running code without provisioning or managing servers. Lambda runs your code only when needed and scales automatically, from a few requests per day to thousands per second. You pay only for the compute time that you consume—there is no charge when your code isn’t running. 
+ [Amazon Simple Notification Service (Amazon SNS)](https://docs.aws.amazon.com/sns/latest/dg/welcome.html) is a managed service that provides message delivery from publishers to subscribers (also known as producers and consumers).

## Epics
<a name="identify-and-alert-when-amazon-data-firehose-resources-are-not-encrypted-with-an-aws-kms-key-epics"></a>

### Enforce encryption for compliance
<a name="enforce-encryption-for-compliance"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Deploy CloudFormation StackSets. | In the AWS CLI, use the `firehose-encryption-checker.yaml` template (attached) to create the stack set by running the following command. Provide a valid Amazon SNS topic Amazon Resource Name (ARN) for the parameter. The deployment should successfully create CloudWatch Events rules, the Lambda function, and an IAM role with the necessary permissions, as described in the template.<pre>aws cloudformation create-stack-set \<br />  --stack-set-name my-stack-set \<br />  --template-body file://firehose-encryption-checker.yaml</pre> | Cloud architect, Systems administrator | 
| Create stack instances. | Stacks can be created in the AWS Regions of your choice and in one or more accounts. To create stack instances, run the following command. Replace the stack name, account numbers, and Regions with your own.<pre>aws cloudformation create-stack-instances \<br />  --stack-set-name my-stack-set \<br />  --accounts 123456789012 223456789012 \<br />  --regions us-east-1 us-east-2 us-west-1 us-west-2 \<br />  --operation-preferences FailureToleranceCount=1</pre> | Cloud architect, Systems administrator | 

## Related resources
<a name="identify-and-alert-when-amazon-data-firehose-resources-are-not-encrypted-with-an-aws-kms-key-resources"></a>
+ [Working with CloudFormation StackSets](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/what-is-cfnstacksets.html)
+ [What is Amazon CloudWatch Events?](https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/WhatIsCloudWatchEvents.html)

## Attachments
<a name="attachments-897ba8cf-d1c2-4149-98e7-09d3d90d13d6"></a>

To access additional content that is associated with this document, unzip the following file: [attachment.zip](samples/p-attach/897ba8cf-d1c2-4149-98e7-09d3d90d13d6/attachments/attachment.zip)

# Automate Amazon VPC IPAM IPv4 CIDR allocations for new AWS accounts by using AFT
<a name="automate-amazon-vpc-ipam-ipv4-cidr-allocations-for-new-aws-accounts-by-using-aft"></a>

*Kien Pham and Alex Pazik, Amazon Web Services*

## Summary
<a name="automate-amazon-vpc-ipam-ipv4-cidr-allocations-for-new-aws-accounts-by-using-aft-summary"></a>

This pattern shows how to automate Amazon VPC IP Address Manager (IPAM) IPv4 CIDR allocations for new AWS accounts by using [AWS Control Tower Account Factory for Terraform (AFT)](https://docs.aws.amazon.com/controltower/latest/userguide/aft-overview.html). This is done through an account-level customization in the `aft-account-customizations` module that allocates an IPv4 CIDR block from IPAM to a new virtual private cloud (VPC).

With IPAM, you can organize, assign, monitor, and audit IP addresses at scale, which makes it easier to plan, track, and monitor IP addresses for your AWS workloads. You can [create an IPAM](https://docs.aws.amazon.com/vpc/latest/ipam/create-ipam.html) and an IPAM pool that allocates an IPv4 CIDR block to a new VPC during the account vending process.
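
IPAM's core job at vending time, handing each new VPC a non-overlapping block from a shared pool, can be illustrated locally with Python's standard `ipaddress` module. This is a sketch of the allocation concept only, not IPAM's implementation:

```python
import ipaddress

def allocate_cidrs(pool_cidr: str, new_prefix: int, count: int):
    """Carve `count` non-overlapping /new_prefix blocks out of a pool CIDR,
    the way an IPAM pool hands a CIDR to each newly vended VPC."""
    pool = ipaddress.ip_network(pool_cidr)
    # subnets() yields the pool split into equal-sized child networks in order.
    subnets = pool.subnets(new_prefix=new_prefix)
    return [str(next(subnets)) for _ in range(count)]

# Example: a /16 regional pool vending a /24 per account VPC.
print(allocate_cidrs("10.0.0.0/16", 24, 3))
# → ['10.0.0.0/24', '10.0.1.0/24', '10.0.2.0/24']
```

Because each allocation comes from the same generator over the pool, the returned blocks can never overlap, which is the guarantee IPAM provides automatically across accounts.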

## Prerequisites and limitations
<a name="automate-amazon-vpc-ipam-ipv4-cidr-allocations-for-new-aws-accounts-by-using-aft-prereqs"></a>

**Prerequisites**
+ An active AWS account with AWS Control Tower enabled in a supported [AWS Region](https://docs.aws.amazon.com/controltower/latest/userguide/region-how.html) and AFT deployed
+ A supported [version control system (VCS) provider](https://github.com/aws-ia/terraform-aws-control_tower_account_factory?tab=readme-ov-file#input_vcs_provider), such as Bitbucket, GitHub, or GitHub Enterprise
+ Terraform Command Line Interface (CLI) [installed](https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli)
+ A runtime environment where you can run the Terraform module that installs AFT
+ AWS Command Line Interface (AWS CLI) [installed](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) and [configured](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-quickstart.html)

**Limitations**
+ Some AWS services aren’t available in all AWS Regions. For Region availability, see [AWS Services by Region](https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/). For specific endpoints, see [Service endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/aws-service-information.html), and choose the link for the service.

**Product versions**
+ [AWS Control Tower landing zone](https://docs.aws.amazon.com/controltower/latest/userguide/2022-all.html#version-3.0) version 3.0 or later, earlier than version 4.0
+ [AFT](https://github.com/aws-ia/terraform-aws-control_tower_account_factory) version 1.13.0 or later, earlier than version 2.0.0
+ Terraform OSS version 1.2.0 or later, earlier than version 2.0.0
+ [Terraform AWS Provider](https://registry.terraform.io/providers/hashicorp/aws/latest/docs) (`terraform-provider-aws`) version 5.11.0 or later, earlier than version 6.0.0
+ [Terraform module for IPAM](https://github.com/aws-ia/terraform-aws-ipam) (`aws-ia/ipam/aws`) version 2.1.0 or later

## Architecture
<a name="automate-amazon-vpc-ipam-ipv4-cidr-allocations-for-new-aws-accounts-by-using-aft-architecture"></a>

The following diagram shows the workflow and components of this pattern.

![\[Workflow to create Amazon VPC IPAM IPv4 CIDR allocation.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/986cfc7d-058b-4490-9029-6cd1eadd1dd2/images/f90b84dd-0420-460e-ac0f-9f22b4a9fdc4.png)


The workflow consists of the following main tasks:

1. **Trigger changes** – Changes to the Terraform IPAM customization are committed to the GitHub repository and pushed. The push automatically triggers the AWS CodeBuild pipeline.

1. **Automate build** – Within CodeBuild, multiple build projects trigger AWS Step Functions.

1. **Apply customization** – Step Functions coordinates with CodeBuild to plan and apply Terraform changes. This task uses the AFT Terraform module to coordinate the IPAM pool IP assignment to the AWS vended account.

## Tools
<a name="automate-amazon-vpc-ipam-ipv4-cidr-allocations-for-new-aws-accounts-by-using-aft-tools"></a>

**AWS services**
+ [AWS CodeBuild](https://docs.aws.amazon.com/codebuild/latest/userguide/welcome.html) is a fully managed build service that helps you compile source code, run unit tests, and produce artifacts that are ready to deploy.
+ [AWS CodePipeline](https://docs.aws.amazon.com/codepipeline/latest/userguide/welcome.html) helps you quickly model and configure the different stages of a software release and automate the steps required to release software changes continuously.
+ [AWS Control Tower](https://docs.aws.amazon.com/controltower/latest/userguide/what-is-control-tower.html) orchestrates the capabilities of several other [AWS services](https://docs.aws.amazon.com/controltower/latest/userguide/integrated-services.html), including AWS Organizations, AWS Service Catalog, and AWS IAM Identity Center. It can help you set up and govern an AWS multi-account environment, following prescriptive best practices.
+ [Amazon DynamoDB](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Introduction.html) is a fully managed NoSQL database service that provides fast, predictable, and scalable performance.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a compute service that helps you run code without needing to provision or manage servers. It runs your code only when needed and scales automatically, so you pay only for the compute time that you use.
+ [AWS SDK for Python (Boto3)](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/quickstart.html) is a software development kit that helps you integrate your Python application, library, or script with AWS services.
+ [AWS Service Catalog](https://docs.aws.amazon.com/servicecatalog/latest/adminguide/introduction.html) helps you centrally manage catalogs of IT services that are approved for AWS. End users can quickly deploy only the approved IT services they need, following the constraints set by your organization.
+ [AWS Step Functions](https://docs.aws.amazon.com/step-functions/latest/dg/welcome.html) is a serverless orchestration service that helps you combine AWS Lambda functions and other AWS services to build business-critical applications.
+ [Amazon Virtual Private Cloud (Amazon VPC)](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html) helps you launch AWS resources into a virtual network that you’ve defined. This virtual network resembles a traditional network that you’d operate in your own data center, with the benefits of using the scalable infrastructure of AWS. Amazon VPC IP Address Manager (IPAM) is a VPC feature that makes it easier for you to plan, track, and monitor IP addresses for your AWS workloads.

**Other tools**
+ [GitHub](https://docs.github.com/) is a developer platform that developers can use to create, store, manage, and share their code.
+ [HashiCorp Terraform](https://www.terraform.io/) is an infrastructure as code (IaC) tool that helps you create and manage cloud and on-premises resources. This includes low-level components such as compute instances, storage, and networking, and high-level components such as DNS entries and software as a service (SaaS) features.
+ [Python](https://www.python.org/) is a general-purpose computer programming language. You can use it to build applications, automate tasks, and develop services on the [AWS Cloud](https://aws.amazon.com/developer/language/python/).

**Code repository**
+ The code for this pattern is available in the GitHub [AWS Control Tower Account Factory for Terraform](https://github.com/aws-ia/terraform-aws-control_tower_account_factory) repository.

## Best practices
<a name="automate-amazon-vpc-ipam-ipv4-cidr-allocations-for-new-aws-accounts-by-using-aft-best-practices"></a>

When you deploy AFT, we recommend that you follow best practices to help ensure a secure, efficient, and successful implementation. Key guidelines and recommendations for implementing and operating AFT include the following: 
+ **Thorough review of inputs** – Carefully review and understand each [input](https://github.com/aws-ia/terraform-aws-control_tower_account_factory). Correct input configuration is crucial for the setup and functioning of AFT.
+ **Regular template updates** – Keep templates updated with the latest AWS features and Terraform versions. Regular updates help you take advantage of new functionality and maintain security.
+ **Versioning** – Pin your AFT module version, and use a separate AFT deployment for testing if possible.
+ **Scope** – Use AFT only to deploy infrastructure guardrails and customizations. Do not use it to deploy your application.
+ **Linting and validation** – The AFT pipeline requires a linted and validated Terraform configuration. Run lint, validate, and test before pushing the configuration to AFT repositories.
+ **Terraform modules** – Build reusable Terraform code as modules, and always specify the Terraform and AWS provider versions to match your organization's requirements.

## Epics
<a name="automate-amazon-vpc-ipam-ipv4-cidr-allocations-for-new-aws-accounts-by-using-aft-epics"></a>

### Set up and configure your AWS environment
<a name="set-up-and-configure-your-aws-environment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Deploy AWS Control Tower. | Set up and configure AWS Control Tower in your AWS environment to ensure centralized management and governance of your AWS accounts. For more information, see [Getting started with AWS Control Tower](https://docs.aws.amazon.com/controltower/latest/userguide/getting-started-with-control-tower.html) in the AWS Control Tower documentation. | Cloud administrator | 
| Deploy AWS Control Tower Account Factory for Terraform (AFT). | Set up AFT in a new, dedicated AFT management account. For more information, see [Configure and launch your AWS Control Tower Account Factory for Terraform](https://docs.aws.amazon.com/controltower/latest/userguide/aft-getting-started.html#aft-configure-and-launch) in the AWS Control Tower documentation. | Cloud administrator | 
| Complete AFT post-deployment. | After the AFT infrastructure deployment finishes, complete the steps in [Post-deployment steps](https://docs.aws.amazon.com/controltower/latest/userguide/aft-post-deployment.html) in the AWS Control Tower documentation. | Cloud administrator | 

### Create IPAM
<a name="create-ipam"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Delegate an IPAM administrator. | To delegate an IPAM administrator account in your AWS organization, use the following steps:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-amazon-vpc-ipam-ipv4-cidr-allocations-for-new-aws-accounts-by-using-aft.html)Alternatively, you can use the AWS CLI and run the following command:<pre>aws ec2 enable-ipam-organization-admin-account \<br />    --delegated-admin-account-id 012345678901</pre>For more information, see [Integrate IPAM with accounts in an AWS organization](https://docs.aws.amazon.com/vpc/latest/ipam/enable-integ-ipam.html) in the Amazon VPC documentation and [enable-ipam-organization-admin-account](https://docs.aws.amazon.com/cli/latest/reference/ec2/enable-ipam-organization-admin-account.html) in the AWS CLI Command Reference. To continue using IPAM, you must sign in to the delegated administrator account. The SSO profile or AWS environment variables specified in the next step must allow you to sign in to that account and grant permissions to create an IPAM top-level and regional pool. | AWS administrator | 
| Create an IPAM top-level and regional pool. | This pattern’s GitHub repository contains a Terraform template that you can use to create your IPAM top-level pool and regional pool. Then you can share the pools with an organization, organizational unit (OU), AWS account, or other resource by using AWS Resource Access Manager (AWS RAM). Use the following steps:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-amazon-vpc-ipam-ipv4-cidr-allocations-for-new-aws-accounts-by-using-aft.html)Make a note of the resource pool ID that’s output after creation. You will need the ID when you submit the account request. If you forget the resource pool ID, you can get it later from the AWS Management Console. Make sure that the created pools’ CIDRs do not overlap with any other pools in your working Region. You can create a pool without a CIDR, but you won’t be able to use the pool for allocations until you’ve provisioned a CIDR for it. You can add CIDRs to a pool at any time by editing the pool. | AWS administrator | 
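
The non-overlap requirement in the last step can be checked locally before you provision a pool, for example with Python's standard `ipaddress` module. This is a convenience sketch, not part of the pattern's repository:

```python
import ipaddress

def overlaps_existing(candidate: str, existing: list) -> bool:
    """Return True if the candidate pool CIDR overlaps any
    already-provisioned pool CIDR in the same Region."""
    cand = ipaddress.ip_network(candidate)
    return any(cand.overlaps(ipaddress.ip_network(c)) for c in existing)

# A new /16 alongside two disjoint /16 pools: no overlap.
print(overlaps_existing("10.1.0.0/16", ["10.0.0.0/16", "10.2.0.0/16"]))  # → False
# A /17 carved out of an existing /16: overlap.
print(overlaps_existing("10.0.128.0/17", ["10.0.0.0/16"]))               # → True
```

Run the check against the CIDRs of every pool in your working Region before applying the Terraform template.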

### Integrate IPAM with AFT
<a name="integrate-ipam-with-aft"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Begin to create account customization. | To begin a new account customization, run the following commands from your terminal:<pre># Default name for customization repo<br />cd aft-account-customizations # Replace with your actual repo name if different than the default<br />mkdir -p APG-AFT-IPAM/terraform # Replace APG-AFT-IPAM with your desired customization name<br />cd APG-AFT-IPAM/terraform</pre> | DevOps engineer | 
| Create `aft-providers.jinja` file. | Add dynamic code to the `aft-providers.jinja` file that specifies the Terraform provider to use. Use the following steps: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-amazon-vpc-ipam-ipv4-cidr-allocations-for-new-aws-accounts-by-using-aft.html) | DevOps engineer | 
| Create `backend.jinja` file. | Add dynamic code to the `backend.jinja` file that specifies the Terraform backend to use. Use the following steps: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-amazon-vpc-ipam-ipv4-cidr-allocations-for-new-aws-accounts-by-using-aft.html) | DevOps engineer | 
| Create `main.tf` file. | Create a new `main.tf` file and add code that defines two data sources, which retrieve two values from AWS Systems Manager (`aws_ssm`), and that creates the VPC. Use the following steps: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-amazon-vpc-ipam-ipv4-cidr-allocations-for-new-aws-accounts-by-using-aft.html) | DevOps engineer | 
| Create `variables.tf` file. | Create a `variables.tf` file that declares the variables used by the Terraform module. Use the following steps: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-amazon-vpc-ipam-ipv4-cidr-allocations-for-new-aws-accounts-by-using-aft.html) | DevOps engineer | 
| Create `terraform.tfvars` file. | Create a `terraform.tfvars` file that defines the values of the variables that are passed to the `main.tf` file. Use the following steps: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-amazon-vpc-ipam-ipv4-cidr-allocations-for-new-aws-accounts-by-using-aft.html) | DevOps engineer | 
| Create `outputs.tf` file. | Create a new `outputs.tf` file that exposes selected values to CodeBuild. Use the following steps: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-amazon-vpc-ipam-ipv4-cidr-allocations-for-new-aws-accounts-by-using-aft.html) | DevOps engineer | 
| Commit the customization. | To commit the new customization to the account customizations repository, run the following commands:<pre># Assumes you are still in the /terraform directory<br />cd .. # Skip if you are in the account customization root directory (APG-AFT-IPAM)<br />git add .<br />git commit -m "APG customization"<br />git push origin</pre> | DevOps engineer | 
| Apply the customization. | Add code to the `account-requests.tf` file that requests a new account with the newly created account customization. The custom fields create Systems Manager parameters in the vended account that are required to create the VPC with the correct IPAM-allocated IPv4 CIDR. Use the following steps: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-amazon-vpc-ipam-ipv4-cidr-allocations-for-new-aws-accounts-by-using-aft.html) | DevOps engineer | 
| Validate the customization. | Sign in to the newly vended account and verify that the customization was successfully applied. Use the following steps: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-amazon-vpc-ipam-ipv4-cidr-allocations-for-new-aws-accounts-by-using-aft.html) | DevOps engineer | 

## Troubleshooting
<a name="automate-amazon-vpc-ipam-ipv4-cidr-allocations-for-new-aws-accounts-by-using-aft-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
|  You encounter failures in resource creation or management caused by inadequate permissions. |  Review the AWS Identity and Access Management (IAM) roles and policies that are attached to Step Functions, CodeBuild, and other services involved in the deployment. Confirm that they have the necessary permissions. If there are permission issues, adjust the IAM policies to grant the required access. | 
|  You reach AWS service quotas during deployment. |  Before you deploy the pipeline, check AWS service quotas for resources such as Amazon Simple Storage Service (Amazon S3) buckets, IAM roles, and AWS Lambda functions. If necessary, request increases to the quotas. For more information, see [AWS service quotas](https://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html) in the *AWS General Reference*. | 

## Related resources
<a name="automate-amazon-vpc-ipam-ipv4-cidr-allocations-for-new-aws-accounts-by-using-aft-resources"></a>

**AWS service documentation**
+ [AWS Control Tower User Guide](https://docs.aws.amazon.com/controltower/latest/userguide/what-is-control-tower.html)
+ [How IPAM works](https://docs.aws.amazon.com/vpc/latest/ipam/how-it-works-ipam.html)
+ [Security best practices in IAM ](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html)
+ [AWS service quotas](https://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html)

**Other resources**
+ [Terraform AWS Provider documentation](https://registry.terraform.io/providers/hashicorp/aws/latest/docs)

# Automate adding or updating Windows registry entries using AWS Systems Manager
<a name="automate-adding-or-updating-windows-registry-entries-using-aws-systems-manager"></a>

*Appasaheb Bagali, Amazon Web Services*

## Summary
<a name="automate-adding-or-updating-windows-registry-entries-using-aws-systems-manager-summary"></a>

AWS Systems Manager is a remote management tool for Amazon Elastic Compute Cloud (Amazon EC2) instances. Systems Manager provides visibility and control over your infrastructure on AWS. You can use it to remediate Windows registry settings that a security vulnerability scan report identifies as vulnerabilities.

This pattern describes how to keep EC2 instances that run the Windows operating system secure by automating the registry changes that are recommended for the safety of your environment. The pattern uses Run Command to run a Command document. The full code is attached, and a portion of it is included in the *Code* section.

## Prerequisites and limitations
<a name="automate-adding-or-updating-windows-registry-entries-using-aws-systems-manager-prereqs"></a>
+ An active AWS account
+ Permissions to access the EC2 instance and Systems Manager

## Architecture
<a name="automate-adding-or-updating-windows-registry-entries-using-aws-systems-manager-architecture"></a>

**Target technology stack**
+ A virtual private cloud (VPC), with two subnets and a network address translation (NAT) gateway
+ A Systems Manager Command document to add or update the registry name and value
+ Systems Manager Run Command to run the Command document on the specified EC2 instances

**Target architecture**

![\[How to automatically add or update Windows registry entries using AWS Systems Manager.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/2ecf680d-9f36-4070-8a19-2af262db7fcc/images/c992bcb0-d894-4aa7-9bb3-3d60c9c79e8d.png)


 

## Tools
<a name="automate-adding-or-updating-windows-registry-entries-using-aws-systems-manager-tools"></a>

**Tools**
+ [IAM policies and roles](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) – AWS Identity and Access Management (IAM) is a web service that helps you securely control access to AWS resources. You use IAM to control who is authenticated (signed in) and authorized (has permissions) to use resources.
+ [Amazon Simple Storage Service](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) – Amazon Simple Storage Service (Amazon S3) is storage for the internet. It is designed to make web-scale computing easier for developers. In this pattern, an S3 bucket is used to store the Systems Manager logs.
+ [AWS Systems Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/what-is-systems-manager.html) – AWS Systems Manager is an AWS service that you can use to view and control your infrastructure on AWS. Systems Manager helps you maintain security and compliance by scanning your *managed instances* and reporting (or taking corrective action on) any policy violations it detects.
+ [AWS Systems Manager Command document](https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-ssm-docs.html) – AWS Systems Manager Command documents are used by Run Command. Most Command documents are supported on all Linux and Windows Server operating systems supported by Systems Manager.
+ [AWS Systems Manager Run Command](https://docs.aws.amazon.com/systems-manager/latest/userguide/execute-remote-commands.html) – AWS Systems Manager Run Command gives you a way to manage the configuration of your managed instances remotely and securely. Using Run Command, you can automate common administrative tasks and perform one-time configuration changes at scale.

**Code**

You can use the following example code to add or update the Windows registry entry named `Version` under the path `HKCU:\Software\ScriptingGuys\Scripts`, setting its value to `2`.

```
# Windows registry path to add or update
$registryPath = 'HKCU:\Software\ScriptingGuys\Scripts'
# Windows registry name to add or update
$name = 'Version'
# Windows registry value to add or update
$value = 2

# Use the Test-Path cmdlet to see whether the registry key exists.
if (!(Test-Path $registryPath)) {
    New-Item -Path $registryPath -Force | Out-Null
}
# -Force creates the property if it's missing and updates it if it exists.
New-ItemProperty -Path $registryPath -Name $name -Value $value -PropertyType DWORD -Force | Out-Null

echo "Registry Path: $registryPath"
echo "Registry Name: $name"
echo "Registry Value: $((Get-ItemProperty -Path $registryPath -Name $name).$name)"
```

The full Systems Manager Command document JavaScript Object Notation (JSON) code example is attached. 
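
The attached JSON file is the authoritative Command document. As a rough illustration of its shape only, a Command document that runs a PowerShell script follows the Systems Manager `2.2` schema, which the following Python sketch assembles (the description, step name, and truncated script lines here are made up for illustration):

```python
import json

# Minimal sketch of a Command document that runs a PowerShell script.
# The attached JSON file is the authoritative version; this only
# illustrates the document structure.
command_document = {
    "schemaVersion": "2.2",
    "description": "Add or update a Windows registry entry.",
    "mainSteps": [
        {
            "action": "aws:runPowerShellScript",
            "name": "SetRegistryEntry",
            "inputs": {
                "runCommand": [
                    "$registryPath = 'HKCU:\\Software\\ScriptingGuys\\Scripts'",
                    "New-Item -Path $registryPath -Force | Out-Null",
                ]
            },
        }
    ],
}

print(json.dumps(command_document, indent=2))
```

Each line of the PowerShell script goes into the `runCommand` list; Systems Manager joins them into the script that runs on the target instance.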

## Epics
<a name="automate-adding-or-updating-windows-registry-entries-using-aws-systems-manager-epics"></a>

### Set up a VPC
<a name="set-up-a-vpc"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a VPC. | On the AWS Management Console, create a VPC that has public and private subnets and a NAT gateway. For more information, see the [AWS documentation](https://docs.aws.amazon.com/batch/latest/userguide/create-public-private-vpc.html). | Cloud administrator | 
| Create security groups. | Ensure that each security group allows access for Remote Desktop Protocol (RDP) from the source IP address. | Cloud administrator | 

### Create an IAM policy and an IAM role
<a name="create-an-iam-policy-and-an-iam-role"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an IAM policy. | Create an IAM policy that provides access to Amazon S3, Amazon EC2, and Systems Manager. | Cloud administrator | 
| Create an IAM role. | Create an IAM role, and attach the IAM policy that provides access to Amazon S3, Amazon EC2, and Systems Manager. | Cloud administrator | 

### Run the automation
<a name="run-the-automation"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the Systems Manager Command document. | Create a Systems Manager Command document that will deploy the Microsoft Windows registry changes to add or update. | Cloud administrator | 
| Run the Systems Manager Run Command. | Run the Systems Manager Run Command, selecting the Command document and the Systems Manager target instances. This pushes the Microsoft Windows registry change in the selected Command document to the target instances. | Cloud administrator | 
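
If you prefer to script this step, boto3's SSM `send_command` call takes the Command document name and the target instance IDs. The following sketch only assembles the request arguments (the document name, instance ID, and bucket name are hypothetical); you would pass the resulting dictionary to `ssm_client.send_command(**request)` yourself:

```python
def build_run_command_request(document_name, instance_ids, output_bucket=None):
    """Build keyword arguments for boto3's ssm.send_command call.

    document_name, instance_ids, and output_bucket are caller-supplied;
    the bucket is optional and only used for command output logging.
    """
    request = {
        "DocumentName": document_name,
        "InstanceIds": list(instance_ids),
        "Comment": "Apply Windows registry change",
    }
    if output_bucket:
        request["OutputS3BucketName"] = output_bucket
    return request

request = build_run_command_request(
    "UpdateWindowsRegistry",        # hypothetical document name
    ["i-0abc1234de567890f"],        # hypothetical instance ID
    output_bucket="my-ssm-logs",    # hypothetical S3 bucket for logs
)
print(request)
```

Keeping the request construction separate from the API call makes it easy to review or unit-test the targeting before anything runs on the instances.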

## Related resources
<a name="automate-adding-or-updating-windows-registry-entries-using-aws-systems-manager-resources"></a>
+ [AWS Systems Manager](https://aws.amazon.com/systems-manager/)
+ [AWS Systems Manager documents](https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-ssm-docs.html)
+ [AWS Systems Manager Run Command](https://docs.aws.amazon.com/systems-manager/latest/userguide/execute-remote-commands.html)

## Attachments
<a name="attachments-2ecf680d-9f36-4070-8a19-2af262db7fcc"></a>

To access additional content that is associated with this document, unzip the following file: [attachment.zip](samples/p-attach/2ecf680d-9f36-4070-8a19-2af262db7fcc/attachments/attachment.zip)

# Automatically create an RFC in AMS using Python
<a name="automatically-create-an-rfc-in-ams-using-python"></a>

*Gnanasekaran Kailasam, Amazon Web Services*

## Summary
<a name="automatically-create-an-rfc-in-ams-using-python-summary"></a>

AWS Managed Services (AMS) helps you to operate your cloud-based infrastructure more efficiently and securely by providing ongoing management of your Amazon Web Services (AWS) infrastructure. To make a change to your managed environment, you need to create and submit a new request for change (RFC) that includes a change type (CT) ID for a particular operation or action.

However, manually creating an RFC can take around five minutes and teams in your organization might need to submit multiple RFCs every day. This pattern helps you to automate the RFC creation process, reduce the creation time for each RFC, and eliminate manual errors.   

This pattern describes how to use Python code to automatically create the `Stop EC2 instance` RFC that stops Amazon Elastic Compute Cloud (Amazon EC2) instances in your AMS account. You can then apply this pattern’s approach and the Python automation to other RFC types. 

## Prerequisites and limitations
<a name="automatically-create-an-rfc-in-ams-using-python-prereqs"></a>

**Prerequisites**
+ An AMS Advanced account. For more information about this, see [AMS operations plans](https://docs.aws.amazon.com/managedservices/latest/accelerate-guide/what-is-ams-op-plans.html) in the AWS Managed Services documentation.
+ At least one existing EC2 instance in your AMS account.
+ An understanding of how to create and submit RFCs in AMS.
+ Familiarity with Python.

**Limitations**
+ You can only use RFCs for changes in your AMS account. Your AWS account uses different processes for similar changes.

## Architecture
<a name="automatically-create-an-rfc-in-ams-using-python-architecture"></a>

**Technology stack**
+ AMS
+ AWS Command Line Interface (AWS CLI)
+ AWS SDK for Python (Boto3)
+ Python and its required packages (JSON and Boto3)

**Automation and scale**

This pattern provides sample code to automate the `Stop EC2 instance` RFC, but you can use this pattern’s sample code and approach for other RFCs.

## Tools
<a name="automatically-create-an-rfc-in-ams-using-python-tools"></a>
+ [AWS Managed Services](https://docs.aws.amazon.com/managedservices/latest/ctexguide/ex-rfc-use-examples.html) – AMS helps you to operate your AWS infrastructure more efficiently and securely.
+ [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) – AWS Command Line Interface (AWS CLI) is a unified tool to manage your AWS services. In AMS, the change management API provides operations to create and manage RFCs.
+ [AWS SDK for Python (Boto3)](https://docs.aws.amazon.com/pythonsdk/) – SDK for Python makes it easy to integrate your Python application, library, or script with AWS services.

**Code**

The `AMS Stop EC2 Instance.zip` file (attached) contains the Python code for creating a `Stop EC2 instance` RFC. You can also configure this code to submit a single RFC for multiple EC2 instances.
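
The attached file contains the actual automation. As a sketch of the general shape of such a request only (the CT ID placeholder and the execution-parameter field names here are hypothetical; use the values you extract in the Epics section), an RFC payload can be assembled like this:

```python
import json

def build_stop_ec2_rfc(change_type_id, change_type_version, instance_ids):
    """Assemble a payload for an AMS create-rfc call.

    change_type_id and change_type_version come from the CT extraction
    step in the Epics section; the execution-parameter field names are
    placeholders -- use the ones that your CT's schema defines.
    """
    return {
        "ChangeTypeId": change_type_id,
        "ChangeTypeVersion": change_type_version,
        "Title": "Stop EC2 instances",
        # Execution parameters are passed to AMS as a JSON string.
        "ExecutionParameters": json.dumps({"InstanceIds": instance_ids}),
    }

# "ct-xxxxxxxxxxxxx" is a placeholder, not a real change type ID.
rfc = build_stop_ec2_rfc("ct-xxxxxxxxxxxxx", "1.0", ["i-0abc1234de567890f"])
print(rfc)
```

Because the instance IDs are a list inside the execution parameters, the same payload shape supports submitting a single RFC for multiple EC2 instances, as the attached code does.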

## Epics
<a name="automatically-create-an-rfc-in-ams-using-python-epics"></a>

### Option 1 – Set up environment for macOS or Linux
<a name="option-1-ndash-set-up-environment-for-macos-or-linux"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
|  Install and validate Python.  | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automatically-create-an-rfc-in-ams-using-python.html) | AWS systems administrator | 
|  Install AWS CLI.  | Run the `pip install awscli --upgrade --user` command to install the AWS CLI. | AWS systems administrator | 
|  Install Boto3. | Run the `pip install boto3` command to install Boto3. | AWS systems administrator | 
| Install JSON.  | The `json` module is part of the Python standard library, so no separate installation is needed. Confirm that it's available by running `python -c "import json"`. | AWS systems administrator | 
| Set up AMS CLI.  | Sign in to the AWS Management Console, open the AMS console, and then choose **Documentation**. Download the .zip file that contains the AMS CLI, unzip it, and then install it on your local machine. After you install the AMS CLI, run the `aws amscm help` command. The output provides information about the AMS change management process. | AWS systems administrator | 

### Option 2 – Set up environment for Windows
<a name="option-2-ndash-set-up-environment-for-windows"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
|  Install and validate Python.  | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automatically-create-an-rfc-in-ams-using-python.html) | AWS systems administrator | 
| Install AWS CLI.  | Run the `pip install awscli --upgrade --user` command to install the AWS CLI. | AWS systems administrator | 
|  Install Boto3. | Run the `pip install boto3` command to install Boto3. | AWS systems administrator | 
| Install JSON.  | The `json` module is part of the Python standard library, so no separate installation is needed. Confirm that it's available by running `python -c "import json"`. | AWS systems administrator | 
| Set up AMS CLI.  | Sign in to the AWS Management Console, open the AMS console, and then choose **Documentation**. Download the .zip file that contains the AMS CLI, unzip it, and then install it on your local machine. After you install the AMS CLI, run the `aws amscm help` command. The output provides information about the AMS change management process. | AWS systems administrator | 

### Extract the CT ID and execution parameters for the RFC
<a name="extract-the-ct-id-and-execution-parameters-for-the-rfc"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Extract the CT ID, version, and execution parameters for the RFC.  | Each RFC has a different CT ID, version, and execution parameters. You can extract this information by using one of the following options: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automatically-create-an-rfc-in-ams-using-python.html) To adapt this pattern’s Python automation for other RFCs, replace the CT type and parameter values in the `ams_stop_ec2_instance` Python code file from the `AMS Stop EC2 Instance.zip` file (attached) with those that you extracted. | AWS systems administrator | 

### Run the Python automation
<a name="run-the-python-automation"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Run the Python automation. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automatically-create-an-rfc-in-ams-using-python.html) | AWS systems administrator | 

## Related resources
<a name="automatically-create-an-rfc-in-ams-using-python-resources"></a>
+ [What are change types?](https://docs.aws.amazon.com/managedservices/latest/ctexguide/understanding-cts.html)
+ [CLI tutorial: High availability two-tier stack (Linux/RHEL)](https://docs.aws.amazon.com/managedservices/latest/ctexguide/tut-create-ha-stack.html)

## Attachments
<a name="attachments-2b6c68fd-a27e-4c8b-934d-caec50c196ed"></a>

To access additional content that is associated with this document, unzip the following file: [attachment.zip](samples/p-attach/2b6c68fd-a27e-4c8b-934d-caec50c196ed/attachments/attachment.zip)

# Automatically stop and start an Amazon RDS DB instance using AWS Systems Manager Maintenance Windows
<a name="automatically-stop-and-start-an-amazon-rds-db-instance-using-aws-systems-manager-maintenance-windows"></a>

*Ashita Dsilva, Amazon Web Services*

## Summary
<a name="automatically-stop-and-start-an-amazon-rds-db-instance-using-aws-systems-manager-maintenance-windows-summary"></a>

This pattern demonstrates how to automatically stop and start an Amazon Relational Database Service (Amazon RDS) DB instance on a specific schedule (for example, shutting down a DB instance outside of business hours to reduce costs) by using AWS Systems Manager Maintenance Windows. For typical scheduling use cases such as this, Systems Manager is a cost-effective option.

AWS Systems Manager Automation provides the `AWS-StopRdsInstance` and `AWS-StartRdsInstance` runbooks to stop and start Amazon RDS DB instances, so you don’t need to write custom logic in AWS Lambda functions or create an Amazon CloudWatch Events rule.

Systems Manager provides two capabilities for scheduling tasks: [State Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-state-about.html) and [Maintenance Windows](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-maintenance.html). State Manager sets and maintains the required state configuration for resources in your Amazon Web Services (AWS) account one time or on a specific schedule. Maintenance Windows runs tasks on the resources in your account during a specific time window. Although you can use this pattern’s approach with State Manager or Maintenance Windows, we recommend that you use Maintenance Windows because it can run one or more tasks based on assigned priority and can also run AWS Lambda functions and AWS Step Functions tasks. For more information about State Manager and Maintenance Windows, see [Choosing between State Manager and Maintenance Windows](https://docs.aws.amazon.com/systems-manager/latest/userguide/state-manager-vs-maintenance-windows.html) in the Systems Manager documentation.

This pattern provides detailed steps to configure two separate maintenance windows that use cron expressions to stop and then start an Amazon RDS DB instance. 

## Prerequisites and limitations
<a name="automatically-stop-and-start-an-amazon-rds-db-instance-using-aws-systems-manager-maintenance-windows-prereqs"></a>

**Prerequisites**
+ An active AWS account.
+ An existing Amazon RDS DB instance that you want to stop and start on a specific schedule.
+ Cron expressions for your required schedule. For example, the expression `cron(0 9 ? * MON-FRI *)` runs the task at 09:00 on every Monday, Tuesday, Wednesday, Thursday, and Friday. For more information, see [Cron and rate expressions for maintenance windows](https://docs.aws.amazon.com/systems-manager/latest/userguide/reference-cron-and-rate-expressions.html#reference-cron-and-rate-expressions-maintenance-window) in the Systems Manager documentation.
+ Familiarity with Systems Manager.
+ Permissions to start and stop the RDS instance. For more information, see the [Epics](#automatically-stop-and-start-an-amazon-rds-db-instance-using-aws-systems-manager-maintenance-windows-epics) section.
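
The maintenance-window cron format listed in the prerequisites has six fields: minutes, hours, day of month, month, day of week, and year, where `?` means "no specific value". As a small illustration (the helper function is ours, not part of the pattern), weekday schedules like the `cron(0 9 ? * MON-FRI *)` example can be generated like this:

```python
def weekday_window_cron(hour, minute=0):
    """Build a maintenance-window cron expression that fires at
    hour:minute every Monday through Friday.

    Field order: minutes, hours, day-of-month, month, day-of-week, year.
    """
    if not (0 <= hour <= 23 and 0 <= minute <= 59):
        raise ValueError("hour must be 0-23 and minute must be 0-59")
    return f"cron({minute} {hour} ? * MON-FRI *)"

print(weekday_window_cron(9))   # → cron(0 9 ? * MON-FRI *)
print(weekday_window_cron(18))  # → cron(0 18 ? * MON-FRI *)
```

A pair of these expressions (one for the stop window, one for the start window) is all the scheduling configuration the two maintenance windows need.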

**Limitations**
+ An Amazon RDS DB instance can be stopped for up to seven days at one time. After seven days, the DB instance automatically restarts to ensure that it receives any required maintenance updates.
+ You can’t stop a DB instance that is a read replica or that has a read replica.
+ You can’t stop an Amazon RDS for SQL Server DB instance in a Multi-AZ configuration.
+ Service quotas apply to Maintenance Windows and Systems Manager Automation. For more information about service quotas, see [AWS Systems Manager endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/ssm.html) in the AWS General Reference documentation. 
+ Some AWS services aren’t available in all AWS Regions. For Region availability, see [AWS services by Region](https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/). For specific endpoints, see the [Service endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/aws-service-information.html) page, and choose the link for the service.

## Architecture
<a name="automatically-stop-and-start-an-amazon-rds-db-instance-using-aws-systems-manager-maintenance-windows-architecture"></a>

The following diagram shows the workflow to automatically stop and start an Amazon RDS DB instance.

![\[Workflow to automatically stop and start an Amazon RDS DB instance\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/45b81621-5674-4bcf-bf7c-75ae6f62524e/images/7d943830-716e-46a3-be44-7e668c3c01ff.png)


 

The workflow has the following steps:

1. Create a maintenance window and use cron expressions to define the stop and start schedule for your Amazon RDS DB instances.

2. Register a Systems Manager Automation task to the maintenance window by using the `AWS-StopRdsInstance` or `AWS-StartRdsInstance` runbook.

3. Register a target with the maintenance window by using a tag-based resource group for your Amazon RDS DB instances.
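
The three steps above map to the Systems Manager APIs `create_maintenance_window`, `register_target_with_maintenance_window`, and `register_task_with_maintenance_window` (boto3 names). The following sketch only builds example request payloads under assumed names and values (the window name, resource group name, and schedule are hypothetical); it does not call AWS:

```python
# Hypothetical request payloads for the three workflow steps. In practice
# you would pass each dict to the matching boto3 SSM call.

# Step 1: the maintenance window itself, with the stop schedule.
window = {
    "Name": "stop-rds-after-hours",
    "Schedule": "cron(0 18 ? * MON-FRI *)",
    "Duration": 1,   # window length in hours (illustrative minimum)
    "Cutoff": 0,     # hours before window end to stop initiating tasks
    "AllowUnassociatedTargets": False,
}

# Step 2: target the tag-based resource group of DB instances.
target = {
    "ResourceType": "RESOURCE_GROUP",
    "Targets": [{"Key": "resource-groups:Name", "Values": ["rds-start-stop"]}],
}

# Step 3: the Automation runbook that stops the DB instances.
task = {
    "TaskType": "AUTOMATION",
    "TaskArn": "AWS-StopRdsInstance",
}

print(window["Schedule"], target["ResourceType"], task["TaskArn"])
```

A second window with the same shape, but a morning `Schedule` and `AWS-StartRdsInstance` as the `TaskArn`, covers the start side of the schedule.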

**Technology stack**
+ AWS CloudFormation
+ AWS Identity and Access Management (IAM)
+ Amazon RDS
+ Systems Manager

**Automation and scale**

You can stop and start multiple Amazon RDS DB instances at the same time by tagging the required Amazon RDS DB instances, creating a resource group that includes all the tagged DB instances, and registering this resource group as a target for the maintenance window.

## Tools
<a name="automatically-stop-and-start-an-amazon-rds-db-instance-using-aws-systems-manager-maintenance-windows-tools"></a>
+ [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) is a service that helps you model and set up your AWS resources.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) is a web service that helps you securely control access to AWS resources.
+ [Amazon Relational Database Service (Amazon RDS)](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Welcome.html) is a web service that makes it easier to set up, operate, and scale a relational database in the AWS Cloud.
+ [AWS Resource Groups](https://docs.aws.amazon.com/ARG/latest/userguide/welcome.html) helps you organize AWS resources into groups, tag resources, and manage, monitor, and automate tasks on grouped resources.
+ [AWS Systems Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/what-is-systems-manager.html) is an AWS service that you can use to view and control your infrastructure on AWS. This pattern uses the following features of Systems Manager:
  + [AWS Systems Manager Automation](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-automation.html) simplifies common maintenance and deployment tasks of Amazon Elastic Compute Cloud (Amazon EC2) instances and other AWS resources.
  + [AWS Systems Manager Maintenance Windows](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-maintenance.html) helps you define a schedule for when to perform potentially disruptive actions on your instances.

## Epics
<a name="automatically-stop-and-start-an-amazon-rds-db-instance-using-aws-systems-manager-maintenance-windows-epics"></a>

### Create and configure the IAM service role for Systems Manager Automation
<a name="create-and-configure-the-iam-service-role-for-sys-automation"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Configure the IAM service role for Systems Manager Automation. | Sign in to the AWS Management Console and create a service role for Systems Manager Automation. You can use one of the following two methods to create this service role: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automatically-stop-and-start-an-amazon-rds-db-instance-using-aws-systems-manager-maintenance-windows.html) The Systems Manager Automation workflow invokes Amazon RDS by using this service role to perform start and stop actions on the Amazon RDS DB instance. The service role must be configured with the following [inline policy](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-attach-detach.html#add-policies-console), which has permissions to start and stop the Amazon RDS DB instance:<pre>{<br />    "Version": "2012-10-17",<br />    "Statement": [<br />        {<br />            "Sid": "RdsStartStop",<br />            "Effect": "Allow",<br />            "Action": [<br />                "rds:StopDBInstance",<br />                "rds:StartDBInstance"<br />            ],<br />            "Resource": "<RDS_Instance_ARN>"<br />        },<br />        {<br />            "Sid": "RdsDescribe",<br />            "Effect": "Allow",<br />            "Action": "rds:DescribeDBInstances",<br />            "Resource": "*"<br />        }<br />    ]<br />}</pre>Make sure that you replace `<RDS_Instance_ARN>` with the Amazon Resource Name (ARN) of your Amazon RDS DB instance. If you are unfamiliar with IAM policies and roles, follow the instructions in the *Solution Overview* section of the [Schedule Amazon RDS stop and start using AWS Systems Manager](https://aws.amazon.com/blogs/database/schedule-amazon-rds-stop-and-start-using-aws-systems-manager/) blog post. Make sure that you record the ARN of the service role. | AWS administrator | 
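
If you script the role setup, the inline policy from this step can be generated for a given DB instance ARN. The following sketch reproduces that policy document (the ARN shown is a placeholder):

```python
import json

def rds_start_stop_policy(rds_instance_arn):
    """Return the inline IAM policy from this step, scoped to the
    single DB instance ARN supplied by the caller."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "RdsStartStop",
                "Effect": "Allow",
                "Action": ["rds:StopDBInstance", "rds:StartDBInstance"],
                "Resource": rds_instance_arn,
            },
            {
                "Sid": "RdsDescribe",
                "Effect": "Allow",
                "Action": "rds:DescribeDBInstances",
                "Resource": "*",
            },
        ],
    }

# Placeholder ARN for illustration only.
policy = rds_start_stop_policy("arn:aws:rds:us-east-1:111122223333:db:mydb")
print(json.dumps(policy, indent=2))
```

Scoping the first statement to one ARN keeps the role least-privilege; only the read-only `DescribeDBInstances` action uses the `*` resource.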

### Create a resource group
<a name="create-a-resource-group"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Tag the Amazon RDS DB instances. | Open the [Amazon RDS console](https://console.aws.amazon.com/rds/) and tag the Amazon RDS DB instances that you want to add to the resource group. A tag is metadata assigned to an AWS resource and consists of a key-value pair. We recommend that you use *Action* as the **Tag key** and *StartStop* as the **Value**. For more information, see [Adding, listing, and removing tags](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Tagging.html#Tagging.HowTo) in the Amazon RDS documentation. | AWS administrator | 
| Create a resource group for your tagged Amazon RDS DB instances. | Open the [AWS Resource Groups console](https://console.aws.amazon.com/resource-groups) and create a resource group based on the tag that you created for your Amazon RDS DB instances. Under **Grouping Criteria**, make sure that you choose **AWS::RDS::DBInstance** for the resource type, and then provide the tag's key-value pair (for example, *Action* and *StartStop*). This ensures that the service only checks for Amazon RDS DB instances and not other resources that have this tag. Make sure that you record the resource group’s name. For more information and detailed steps, see [Build a tag-based query and create a group](https://docs.aws.amazon.com/ARG/latest/userguide/gettingstarted-query.html#gettingstarted-query-tag-based) in the AWS Resource Groups documentation. | AWS administrator | 
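
The console steps above correspond to a tag-based resource query. As a sketch (this follows the `TAG_FILTERS_1_0` query structure used by AWS Resource Groups; the group itself would still be created separately), the query for this pattern's tag looks like this:

```python
import json

# Tag-based resource query matching the console steps above:
# only AWS::RDS::DBInstance resources tagged Action=StartStop.
resource_query = {
    "Type": "TAG_FILTERS_1_0",
    # The inner query is itself a JSON-encoded string.
    "Query": json.dumps(
        {
            "ResourceTypeFilters": ["AWS::RDS::DBInstance"],
            "TagFilters": [{"Key": "Action", "Values": ["StartStop"]}],
        }
    ),
}

print(resource_query["Type"])
```

Restricting `ResourceTypeFilters` to `AWS::RDS::DBInstance` is what keeps other resources carrying the same tag out of the group.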

### Configure a maintenance window to stop the Amazon RDS DB instances
<a name="configure-a-maintenance-window-to-stop-the-rds-db-instances"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a maintenance window. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automatically-stop-and-start-an-amazon-rds-db-instance-using-aws-systems-manager-maintenance-windows.html)The task to stop the DB instance runs almost instantly when initiated and doesn't span the entire duration of the maintenance window. This pattern provides the minimum values for **Duration** and **Stop initiating tasks** because they are the required parameters for a maintenance window.For more information and detailed steps, see [Create a maintenance window (console)](https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-maintenance-create-mw.html) in the Systems Manager documentation. | AWS administrator | 
| Assign a target to the maintenance window. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automatically-stop-and-start-an-amazon-rds-db-instance-using-aws-systems-manager-maintenance-windows.html)For more information and detailed steps, see [Assign targets to a maintenance window (console)](https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-maintenance-assign-targets.html) in the Systems Manager documentation. | AWS administrator | 
| Assign a task to the maintenance window. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automatically-stop-and-start-an-amazon-rds-db-instance-using-aws-systems-manager-maintenance-windows.html)The **Service role** option defines the service role required for the maintenance window to run tasks. However, this role is not identical to the service role that you created earlier for Systems Manager Automation. For more information and detailed steps, see [Assign tasks to a maintenance window (console)](https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-maintenance-assign-tasks.html) in the Systems Manager documentation. | AWS administrator | 

### Configure a maintenance window to start the Amazon RDS DB instances
<a name="configure-a-maintenance-window-to-start-the-rds-db-instances"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Configure a maintenance window to start the Amazon RDS DB instances. | Repeat the steps from the *Configure a maintenance window to stop the Amazon RDS DB instances* epic to configure another maintenance window to start the Amazon RDS DB instances at a scheduled time.You must make the following changes when you configure the maintenance window to start the DB instances:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automatically-stop-and-start-an-amazon-rds-db-instance-using-aws-systems-manager-maintenance-windows.html) | AWS administrator | 

## Related resources
<a name="automatically-stop-and-start-an-amazon-rds-db-instance-using-aws-systems-manager-maintenance-windows-resources"></a>
+ [Use Systems Manager Automation documents to manage instances and cut costs off-hours](https://aws.amazon.com/blogs/mt/systems-manager-automation-documents-manage-instances-cut-costs-off-hours/) (AWS blog post)

# Centralize software package distribution in AWS Organizations by using Terraform
<a name="centralize-software-package-distribution-in-aws-organizations-by-using-terraform"></a>

*Pradip kumar Pandey, Chintamani Aphale, T.V.R.L.Phani Kumar Dadi, Pratap Kumar Nanda, Aarti Rajput, and Mayuri Shinde, Amazon Web Services*

## Summary
<a name="centralize-software-package-distribution-in-aws-organizations-by-using-terraform-summary"></a>

Enterprises often maintain multiple AWS accounts that are spread across multiple AWS Regions to create a strong isolation barrier between workloads. To stay secure and compliant, their administration teams install agent-based tools such as [CrowdStrike](https://www.crowdstrike.com/falcon-platform/), [SentinelOne](https://www.sentinelone.com/platform/), or [TrendMicro](https://www.trendmicro.com/en_sg/business.html) for security scanning, and the [Amazon CloudWatch agent](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Install-CloudWatch-Agent.html), [Datadog Agent](https://www.datadoghq.com/), or [AppDynamics agents](https://www.appdynamics.com/product/how-it-works/agents-and-controller) for monitoring. These teams often face challenges when they want to centrally automate software package management and distribution across this large landscape.

[Distributor](https://docs.aws.amazon.com/systems-manager/latest/userguide/distributor.html), a capability of [AWS Systems Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/what-is-systems-manager.html), automates the process of packaging and publishing software to managed Microsoft Windows and Linux instances across the cloud and on-premises servers through a single simplified interface. This pattern demonstrates how you can use Terraform to further simplify the process of managing the installation of software and to run scripts across a large number of instances and member accounts within AWS Organizations with minimal effort.

This solution works for Linux and Windows instances that are managed by Systems Manager.

## Prerequisites and limitations
<a name="centralize-software-package-distribution-in-aws-organizations-by-using-terraform-prereqs"></a>
+ A [Distributor package](https://docs.aws.amazon.com/systems-manager/latest/userguide/distributor-working-with-packages-create.html) that has the software to be installed
+ [Terraform](https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli) version 0.15.0 or later
+ Amazon Elastic Compute Cloud (Amazon EC2) instances that are [managed by Systems Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/managed_instances.html) and have basic [permissions to access Amazon Simple Storage Service (Amazon S3)](https://repost.aws/knowledge-center/ec2-instance-access-s3-bucket) in the target account
+ A landing zone for your organization that’s set up by using [AWS Control Tower](https://docs.aws.amazon.com/controltower/latest/userguide/what-is-control-tower.html)
+ (Optional) [Account Factory for Terraform (AFT)](https://catalog.workshops.aws/control-tower/en-US/customization/aft)

## Architecture
<a name="centralize-software-package-distribution-in-aws-organizations-by-using-terraform-architecture"></a>

**Resource details**

This pattern uses [Account Factory for Terraform (AFT)](https://catalog.workshops.aws/control-tower/en-US/customization/aft) to create all required AWS resources and the code pipeline to deploy the resources in a deployment account. The code pipeline runs in two repositories:
+ **Global customization** contains Terraform code that will run across all accounts registered with AFT.
+ **Account customizations** contains Terraform code that will run in the deployment account.

You can also deploy this solution without using AFT, by running [Terraform](https://developer.hashicorp.com/terraform/intro) commands in the account customizations folder.

The Terraform code deploys the following resources:
+ AWS Identity and Access Management (IAM) role and policies
  + [SystemsManager-AutomationExecutionRole](https://docs.aws.amazon.com/systems-manager/latest/userguide/running-automations-multiple-accounts-regions.html) grants the user permissions to run automations in the target accounts.
  + [SystemsManager-AutomationAdministrationRole](https://docs.aws.amazon.com/systems-manager/latest/userguide/running-automations-multiple-accounts-regions.html) grants the user permissions to run automations in multiple accounts and organizational units (OUs).
+ Compressed files and manifest.json for the package
  + In Systems Manager, a [package](https://docs.aws.amazon.com/systems-manager/latest/userguide/distributor-working-with-packages-create.html) includes at least one .zip file of software or installable assets.
  + The JSON manifest includes pointers to your package code files.
+ S3 bucket
  + The distributed package that is shared across the organization is securely stored in an Amazon S3 bucket.
+ AWS Systems Manager documents (SSM documents)
  + `DistributeSoftwarePackage` contains the logic to distribute the software package to every target instance in the member accounts.
  + `AddSoftwarePackageToDistributor` contains the logic to package the installable software assets and add them to Distributor by using Automation, a capability of AWS Systems Manager.
+ Systems Manager association
  + A Systems Manager association is used to deploy the solution.
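As a rough illustration of the `manifest.json` structure that points at the package's .zip files, the following sketch builds one from in-memory file contents. The platform keys and layout follow the Distributor package format as we understand it; verify them against the current Distributor documentation, and treat every name here as a placeholder:

```python
import hashlib
import json

def build_manifest(version: str, platform_files: dict) -> str:
    """Sketch of a Distributor package manifest.

    platform_files maps a platform name (for example "amazon" or
    "windows") to a (zip_name, zip_bytes) tuple. The manifest records
    the schema version, the package version, a platform map that points
    at each .zip file, and each file's SHA-256 checksum.
    """
    packages, files = {}, {}
    for platform, (zip_name, zip_bytes) in platform_files.items():
        # "_any" acts as a wildcard for OS version and architecture.
        packages[platform] = {"_any": {"_any": {"file": zip_name}}}
        files[zip_name] = {
            "checksums": {"sha256": hashlib.sha256(zip_bytes).hexdigest()}
        }
    return json.dumps(
        {
            "schemaVersion": "2.0",
            "version": version,
            "packages": packages,
            "files": files,
        },
        indent=2,
    )
```

In this pattern, Terraform uploads the generated manifest and the compressed files to the S3 bucket that backs the package.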

**Architecture and workflow**

![\[Architecture diagram for centralizing software package distribution in AWS Organizations\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/da584449-e12b-4878-a61d-00d8cea3d3d7/images/2718f2c4-f816-4e34-89b8-8182c128e6db.png)


The diagram illustrates the following steps:

1. To run the solution from a centralized account, you upload your packages or software along with deployment steps to an S3 bucket.

1. Your customized package becomes available in the Systems Manager console [Documents](https://ap-southeast-2.console.aws.amazon.com/systems-manager/documents?region=ap-southeast-2) section, in the **Owned by me** tab.

1. State Manager, a capability of Systems Manager, creates, schedules, and runs an association for the package across the organization. The association specifies that the software package must be installed and running on a managed node before it can be installed on the target node.

1. The association instructs Systems Manager to install the package on the target node.

1. For any subsequent installations or changes, users can run the same association periodically or manually from a single location to perform deployments across accounts.

1. In member accounts, Automation sends deployment commands to Distributor.

1. Distributor distributes software packages across instances.
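Steps 5–7 correspond to a single cross-account Automation execution. The following sketch builds the request parameters that you would pass to the SSM `StartAutomationExecution` API (for example, via `boto3.client("ssm").start_automation_execution(**params)` — not run here); the account ID and Region are placeholders, and the execution role name is the one this pattern's Terraform code creates:

```python
def automation_request(document_name: str, accounts, regions,
                       max_concurrency: str = "4",
                       max_errors: str = "1") -> dict:
    """Sketch of StartAutomationExecution parameters that fan an
    automation out to member accounts and Regions via TargetLocations."""
    return {
        "DocumentName": document_name,
        "TargetLocations": [
            {
                "Accounts": list(accounts),
                "Regions": list(regions),
                # Role assumed in each target account to run the automation.
                "ExecutionRoleName": "SystemsManager-AutomationExecutionRole",
                "TargetLocationMaxConcurrency": max_concurrency,
                "TargetLocationMaxErrors": max_errors,
            }
        ],
    }

params = automation_request(
    "DistributeSoftwarePackage",          # SSM document from this pattern
    accounts=["111122223333"],            # placeholder member account
    regions=["us-east-1"],                # placeholder Region
)
```

The administration role (`SystemsManager-AutomationAdministrationRole`) must be assumable in the central account for this call to succeed.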

This solution uses the management account within AWS Organizations, but you can also designate an account (delegated administrator) to manage this on behalf of the organization.

## Tools
<a name="centralize-software-package-distribution-in-aws-organizations-by-using-terraform-tools"></a>

**AWS services**
+ [Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) is a cloud-based object storage service that helps you store, protect, and retrieve any amount of data. This pattern uses Amazon S3 to centralize and securely store the distributed package.
+ [AWS Systems Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/what-is-systems-manager.html) helps you manage your applications and infrastructure running in the AWS Cloud. It simplifies application and resource management, shortens the time to detect and resolve operational problems, and helps you manage your AWS resources securely at scale. This pattern uses the following Systems Manager capabilities:
  + [Distributor](https://docs.aws.amazon.com/systems-manager/latest/userguide/distributor.html) helps you package and publish software to Systems Manager managed instances.
  + [Automation](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-automation.html) simplifies common maintenance, deployment, and remediation tasks for many AWS services.
  + [Documents](https://docs.aws.amazon.com/systems-manager/latest/userguide/documents.html) performs actions on your Systems Manager managed instances across your organization and accounts.
+ [AWS Organizations](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_introduction.html) is an account management service that helps you consolidate multiple AWS accounts into an organization that you create and centrally manage.

**Other tools**
+ [Terraform](https://www.terraform.io/) is an infrastructure as code (IaC) tool from HashiCorp that helps you create and manage cloud and on-premises resources.

**Code repository**

The instructions and code for this pattern are available in the GitHub [Centralized package distribution](https://github.com/aws-samples/aws-organization-centralised-package-distribution) repository.

## Best practices
<a name="centralize-software-package-distribution-in-aws-organizations-by-using-terraform-best-practices"></a>
+ To assign tags to an association, use the [AWS Command Line Interface (AWS CLI)](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) or the [AWS Tools for PowerShell](https://docs.aws.amazon.com/powershell/latest/userguide/pstools-welcome.html). Adding tags to an association by using the Systems Manager console isn't supported. For more information, see [Tagging Systems Manager resources](https://docs.aws.amazon.com/systems-manager/latest/userguide/tagging-resources.html) in the Systems Manager documentation.
+ To run an association by using a new version of a document shared from another account, set the document version to `default`.
+ To tag only the target node, use one tag key. If you want to target your nodes by using multiple tag keys, use the resource group option.

## Epics
<a name="centralize-software-package-distribution-in-aws-organizations-by-using-terraform-epics"></a>

### Configure source files and accounts
<a name="configure-source-files-and-accounts"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clone the repository. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/centralize-software-package-distribution-in-aws-organizations-by-using-terraform.html) | DevOps engineer | 
| Update global variables. | Update the following input parameters in the `global-customization/variables.tf` file. These variables apply to all accounts that are created and managed by AFT.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/centralize-software-package-distribution-in-aws-organizations-by-using-terraform.html) | DevOps engineer | 
| Update account variables. | Update the following input parameters in the `account-customization/variables.tf` file. These variables apply only to specific accounts that are created and managed by AFT.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/centralize-software-package-distribution-in-aws-organizations-by-using-terraform.html) | DevOps engineer | 

### Customize parameters and deployment files
<a name="customize-parameters-and-deployment-files"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Update input parameters for the State Manager association. | Update the following input parameters in the `account-customization/association.tf` file to define the state you want to maintain on your instances. You can use the default parameter values if they support your use case.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/centralize-software-package-distribution-in-aws-organizations-by-using-terraform.html) | DevOps engineer | 
| Prepare compressed files and the `manifest.json` file for the package. | This pattern provides sample PowerShell installable files (.msi for Windows and .rpm for Linux) with install and uninstall scripts in the `account-customization/package` folder.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/centralize-software-package-distribution-in-aws-organizations-by-using-terraform.html) | DevOps engineer | 

### Run Terraform commands to provision resources
<a name="run-terraform-commands-to-provision-resources"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Initialize the Terraform configuration. | To deploy the solution automatically with AFT, push the code to AWS CodeCommit:<pre>$ git add *<br />$ git commit -m "message"<br />$ git push</pre>You can also deploy this solution without using AFT by running a Terraform command from the `account-customization` folder. To initialize the working directory that contains the Terraform files, run:<pre>$ terraform init</pre> | DevOps engineer | 
| Preview changes. | To preview the changes that Terraform will make to the infrastructure, run the following command:<pre>$ terraform plan</pre>This command evaluates the Terraform configuration to determine the desired state of the declared resources. It then compares the desired state with the actual infrastructure in the workspace to determine what to provision. | DevOps engineer | 
| Apply changes. | Run the following command to implement the changes that you made to the `variables.tf` files:<pre>$ terraform apply</pre> | DevOps engineer | 

### Validate resources
<a name="validate-resources"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Validate the creation of SSM documents. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/centralize-software-package-distribution-in-aws-organizations-by-using-terraform.html)You should see the `DistributeSoftwarePackage` and `AddSoftwarePackageToDistributor` documents. | DevOps engineer | 
| Validate the successful deployment of automations. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/centralize-software-package-distribution-in-aws-organizations-by-using-terraform.html) | DevOps engineer | 
| Validate that the package deployed to the targeted member account instances. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/centralize-software-package-distribution-in-aws-organizations-by-using-terraform.html) | DevOps engineer | 

## Troubleshooting
<a name="centralize-software-package-distribution-in-aws-organizations-by-using-terraform-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| The State Manager association failed or is stuck in pending status. | See the [troubleshooting information](https://repost.aws/knowledge-center/ssm-state-manager-association-fail) in the AWS Knowledge Center. | 
| A scheduled association failed to run. | Your schedule specification might be invalid. State Manager doesn't currently support specifying months in cron expressions for associations. Check your schedule against the supported [cron and rate expressions](https://docs.aws.amazon.com/systems-manager/latest/userguide/reference-cron-and-rate-expressions.html). | 
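For the second issue, a quick local pre-flight check of an association schedule can be sketched as follows. This is a conservative, hypothetical helper: it flags any cron expression whose month field is not a wildcard, and it accepts rate expressions unchanged:

```python
import re

def association_cron_ok(expression: str) -> bool:
    """Return False if a cron(...) schedule pins a specific month,
    which State Manager associations don't support.

    SSM cron expressions use six fields:
    minutes hours day-of-month month day-of-week year.
    Rate expressions have no month field and pass through.
    """
    m = re.fullmatch(r"cron\((.*)\)", expression.strip())
    if m is None:
        return True          # e.g. rate(30 minutes) -- nothing to check
    fields = m.group(1).split()
    if len(fields) != 6:
        return False         # malformed: SSM cron expressions use six fields
    return fields[3] in ("*", "?")
```

Running this check before you create or update an association catches the unsupported-month case without a round trip to the service.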

## Related resources
<a name="centralize-software-package-distribution-in-aws-organizations-by-using-terraform-resources"></a>
+ [Centralized package distribution](https://github.com/aws-samples/aws-organization-centralised-package-distribution) (GitHub repository)
+ [Account Factory for Terraform (AFT)](https://catalog.workshops.aws/control-tower/en-US/customization/aft)
+ [Use cases and best practices](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-best-practices.html) (AWS Systems Manager documentation)

# Configure logging for .NET applications in Amazon CloudWatch Logs by using NLog
<a name="configure-logging-for-net-applications-in-amazon-cloudwatch-logs-by-using-nlog"></a>

*Bibhuti Sahu and Rob Hill (AWS), Amazon Web Services*

## Summary
<a name="configure-logging-for-net-applications-in-amazon-cloudwatch-logs-by-using-nlog-summary"></a>

This pattern describes how to use the NLog open-source logging framework to log .NET application usage and events in [Amazon CloudWatch Logs](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html). In the CloudWatch console, you can view the application’s log messages in near real time. You can also set up [metrics](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/MonitoringLogData.html) and configure [alarms](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/ConsoleAlarms.html) to notify you if a metric threshold is exceeded. Using CloudWatch Application Insights, you can view automated or custom dashboards that show potential problems for the monitored applications. CloudWatch Application Insights is designed to help you quickly isolate ongoing issues with your applications and infrastructure.

To write log messages to CloudWatch Logs, you add the `AWS.Logger.NLog` NuGet package to the .NET project. Then, you update the `NLog.config` file to use CloudWatch Logs as a target.

## Prerequisites and limitations
<a name="configure-logging-for-net-applications-in-amazon-cloudwatch-logs-by-using-nlog-prereqs"></a>

**Prerequisites**
+ An active AWS account.
+ A .NET web or console application that:
  + Uses supported .NET Framework or .NET Core versions. For more information, see *Product versions*.
  + Uses NLog to send log data to Application Insights.
+ Permissions to create an IAM role for an AWS service. For more information, see [Service role permissions](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-service.html#id_roles_create_service-permissions).
+ Permissions to pass a role to an AWS service. For more information, see [Granting a user permissions to pass a role to an AWS service](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_passrole.html).

**Product versions**
+ .NET Framework version 3.5 or later
+ .NET Core versions 1.0.1, 2.0.0, or later

## Architecture
<a name="configure-logging-for-net-applications-in-amazon-cloudwatch-logs-by-using-nlog-architecture"></a>

**Target technology stack**
+ NLog
+ Amazon CloudWatch Logs

**Target architecture**

![\[Architecture diagram of NLog writing log data for a .NET application to Amazon CloudWatch Logs.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/0ac9c3ad-2a28-415f-afc3-7fe3494b2b63/images/daea9f2f-7242-4ed2-843e-655d843dcfdf.png)


1. The .NET application writes log data to the NLog logging framework.

1. NLog writes the log data to CloudWatch Logs.

1. You use CloudWatch alarms and custom dashboards to monitor the .NET application.

## Tools
<a name="configure-logging-for-net-applications-in-amazon-cloudwatch-logs-by-using-nlog-tools"></a>

**AWS services**
+ [Amazon CloudWatch Application Insights](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch-application-insights.html) helps you observe the health of your applications and underlying AWS resources.
+ [Amazon CloudWatch Logs](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html) helps you centralize the logs from all your systems, applications, and AWS services so you can monitor them and archive them securely.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.
+ [AWS Tools for PowerShell](https://docs.aws.amazon.com/powershell/latest/userguide/pstools-welcome.html) are a set of PowerShell modules that help you script operations on your AWS resources from the PowerShell command line.

**Other tools**
+ [AWS.Logger.NLog](https://www.nuget.org/packages/AWS.Logger.NLog) is an NLog target that records log data to CloudWatch Logs.
+ [NLog](https://nlog-project.org/) is an open-source logging framework for .NET platforms that helps you write log data to targets, such as databases, log files, or consoles.
+ [PowerShell](https://learn.microsoft.com/en-us/powershell/) is a Microsoft automation and configuration management program that runs on Windows, Linux, and macOS.
+ [Visual Studio](https://docs.microsoft.com/en-us/visualstudio/get-started/visual-studio-ide?view=vs-2022) is an integrated development environment (IDE) that includes compilers, code completion tools, graphical designers, and other features that support software development.

## Best practices
<a name="configure-logging-for-net-applications-in-amazon-cloudwatch-logs-by-using-nlog-best-practices"></a>
+ Set a [retention policy](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Working-with-log-groups-and-streams.html#SettingLogRetention) for the target log group. This must be done outside of the NLog configuration. By default, log data is stored in CloudWatch Logs indefinitely.
+ Adhere to the [Best practices for managing AWS access keys](https://docs.aws.amazon.com/accounts/latest/reference/credentials-access-keys-best-practices.html).
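The retention-policy best practice can be scripted. The following sketch validates the requested retention period and builds the parameters for the CloudWatch Logs `PutRetentionPolicy` API. The list of allowed values reflects the API documentation at the time of writing; verify it against the current reference before relying on it:

```python
# Allowed retentionInDays values per the PutRetentionPolicy API
# documentation (assumed current; check the API reference).
ALLOWED_RETENTION_DAYS = {
    1, 3, 5, 7, 14, 30, 60, 90, 120, 150, 180, 365,
    400, 545, 731, 1827, 3653,
}

def retention_params(log_group: str, days: int) -> dict:
    """Build PutRetentionPolicy parameters, rejecting unsupported values."""
    if days not in ALLOWED_RETENTION_DAYS:
        raise ValueError(f"{days} is not a supported retention period")
    return {"logGroupName": log_group, "retentionInDays": days}

# With boto3 (not run here), using the log group from the sample NLog.config:
#   boto3.client("logs").put_retention_policy(**retention_params("NLog.TestGroup", 30))
```

Note that the IAM policy in the *Set up access and tools* epic already includes the `logs:PutRetentionPolicy` permission this call needs.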

## Epics
<a name="configure-logging-for-net-applications-in-amazon-cloudwatch-logs-by-using-nlog-epics"></a>

### Set up access and tools
<a name="set-up-access-and-tools"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an IAM policy. | Follow the instructions in [Creating policies using the JSON editor](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create-console.html#access_policies_create-json-editor) in the IAM documentation. Enter the following JSON policy, which has the least-privilege permissions necessary to read and write logs in CloudWatch Logs.<pre>{<br />    "Version": "2012-10-17",<br />    "Statement": [<br />        {<br />            "Effect": "Allow",<br />            "Action": [<br />                "logs:CreateLogGroup",<br />                "logs:CreateLogStream",<br />                "logs:GetLogEvents",<br />                "logs:PutLogEvents",<br />                "logs:DescribeLogGroups",<br />                "logs:DescribeLogStreams",<br />                "logs:PutRetentionPolicy"<br />            ],<br />            "Resource": [<br />                "*"<br />            ]<br />        }<br />    ]<br />}</pre> | AWS administrator, AWS DevOps | 
| Create an IAM role. | Follow the instructions in [Creating a role to delegate permissions to an AWS service](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-service.html) in the IAM documentation. Select the policy that you created previously. This is the role CloudWatch Logs assumes to perform logging actions. | AWS administrator, AWS DevOps | 
| Set up AWS Tools for PowerShell. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/configure-logging-for-net-applications-in-amazon-cloudwatch-logs-by-using-nlog.html) | General AWS | 

### Configure NLog
<a name="configure-nlog"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Install the NuGet package. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/configure-logging-for-net-applications-in-amazon-cloudwatch-logs-by-using-nlog.html) | App developer | 
| Configure the logging target. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/configure-logging-for-net-applications-in-amazon-cloudwatch-logs-by-using-nlog.html)For a sample configuration file, see the [Additional information](#configure-logging-for-net-applications-in-amazon-cloudwatch-logs-by-using-nlog-additional) section of this pattern. When you run your application, NLog will write the log messages and send them to CloudWatch Logs. | App developer | 

### Validate and monitor logs
<a name="validate-and-monitor-logs"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Validate logging. | Follow the instructions in [View log data sent to CloudWatch Logs](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Working-with-log-groups-and-streams.html#ViewingLogData) in the CloudWatch Logs documentation. Validate that log events are being recorded for the .NET application. If log events are not being recorded, see the [Troubleshooting](#configure-logging-for-net-applications-in-amazon-cloudwatch-logs-by-using-nlog-troubleshooting) section in this pattern. | General AWS | 
| Monitor the .NET application stack. | Configure monitoring in CloudWatch as needed for your use case. You can use [CloudWatch Logs Insights](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AnalyzingLogData.html), [CloudWatch Metrics Insights](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/query_with_cloudwatch-metrics-insights.html), and [CloudWatch Application Insights](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch-application-insights.html) to monitor your .NET workload. You can also configure [alarms](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/AlarmThatSendsEmail.html) so that you can receive alerts, and you can create a custom [dashboard](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch_Dashboards.html) for monitoring the workload from a single view. | General AWS | 

## Troubleshooting
<a name="configure-logging-for-net-applications-in-amazon-cloudwatch-logs-by-using-nlog-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| Log data doesn’t appear in CloudWatch Logs. | Make sure that the IAM policy is attached to the IAM role that CloudWatch Logs assumes. For instructions, see the *Set up access and tools* section in the [Epics](#configure-logging-for-net-applications-in-amazon-cloudwatch-logs-by-using-nlog-epics) section. | 

## Related resources
<a name="configure-logging-for-net-applications-in-amazon-cloudwatch-logs-by-using-nlog-resources"></a>
+ [Working with log groups and log streams](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Working-with-log-groups-and-streams.html) (CloudWatch Logs documentation)
+ [Amazon CloudWatch Logs and .NET Logging Frameworks](https://aws.amazon.com/blogs/developer/amazon-cloudwatch-logs-and-net-logging-frameworks/) (AWS blog post)

## Additional information
<a name="configure-logging-for-net-applications-in-amazon-cloudwatch-logs-by-using-nlog-additional"></a>

The following is a sample `NLog.config` file.

```
<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <configSections>
    <section name="nlog" type="NLog.Config.ConfigSectionHandler, NLog" />
  </configSections>
  <startup>
    <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.7.2" />
  </startup>
  <nlog>
    <extensions>
      <add assembly="NLog.AWS.Logger" />
    </extensions>
    <targets>
      <target name="aws" type="AWSTarget" logGroup="NLog.TestGroup" region="us-east-1" profile="demo"/>
    </targets>
    <rules>
      <logger name="*" minlevel="Info" writeTo="aws" />
    </rules>    
  </nlog>
</configuration>
```

# Copy AWS Service Catalog products across different AWS accounts and AWS Regions
<a name="copy-aws-service-catalog-products-across-different-aws-accounts-and-aws-regions"></a>

*Sachin Vighe and Santosh Kale, Amazon Web Services*

## Summary
<a name="copy-aws-service-catalog-products-across-different-aws-accounts-and-aws-regions-summary"></a>

AWS Service Catalog is a Regional service, which means that AWS Service Catalog [portfolios and products](https://docs.aws.amazon.com/servicecatalog/latest/adminguide/what-is_concepts.html) are visible only in the AWS Region where they are created. If you set up an [AWS Service Catalog hub](https://aws.amazon.com/about-aws/whats-new/2020/06/aws-service-catalog-now-supports-sharing-portfolios-across-an-organization-from-a-delegated-member-account/) in a new Region, you must recreate your existing products, which can be a time-consuming process.

This pattern simplifies that process by describing how to copy products from an AWS Service Catalog hub in a source AWS account or Region to a new hub in a destination account or Region. For more information about the AWS Service Catalog hub and spoke model, see [AWS Service Catalog hub and spoke model: How to automate the deployment and management of AWS Service Catalog to many accounts](https://aws.amazon.com/blogs/mt/aws-service-catalog-hub-and-spoke-model-how-to-automate-the-deployment-and-management-of-service-catalog-to-many-accounts/) on the AWS Management and Governance Blog.

The pattern also provides the separate code packages required to copy AWS Service Catalog products across accounts or to other Regions. By using this pattern, your organization can save time, make existing and previous product versions available in a new AWS Service Catalog hub, minimize the risk of manual errors, and scale the approach across multiple accounts or Regions.

**Note**  
This pattern's *Epics* section provides two options for copying products. You can use Option 1 to copy products across accounts or Option 2 to copy products across Regions.

## Prerequisites and limitations
<a name="copy-aws-service-catalog-products-across-different-aws-accounts-and-aws-regions-prereqs"></a>

**Prerequisites**
+ An active AWS account.
+ Existing AWS Service Catalog products in a source account or Region.
+ An existing AWS Service Catalog hub in a destination account or Region.
+ If you want to copy products across accounts, you must share and then import the AWS Service Catalog portfolio containing the products into your destination account. For more information about this, see [Sharing and importing portfolios](https://docs.aws.amazon.com/servicecatalog/latest/adminguide/catalogs_portfolios_sharing.html) in the AWS Service Catalog documentation.

**Limitations**
+ AWS Service Catalog products that you want to copy across Regions or accounts cannot belong to more than one portfolio.

## Architecture
<a name="copy-aws-service-catalog-products-across-different-aws-accounts-and-aws-regions-architecture"></a>

The following diagram shows the copying of AWS Service Catalog products from a source account to a destination account.

![\[A cross-account role in Region 1, a Lambda execution role and a Lambda function in Region 2.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/7ede5d17-89eb-4455-928f-6953d145ac9f/images/26738220-1ed2-4f84-911b-3c88e954b60e.png)


 The following diagram shows the copying of AWS Service Catalog products from a source Region to a destination Region.

![\[Products copied by using the Lambda scProductCopy function in Region 2.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/7ede5d17-89eb-4455-928f-6953d145ac9f/images/0a936792-3bdc-45c2-ba05-17e828615061.png)


**Technology stack**
+ Amazon CloudWatch
+ AWS Identity and Access Management (IAM)
+ AWS Lambda
+ AWS Service Catalog

**Automation and scale**

This pattern's approach scales through Lambda: the Lambda function scales automatically based on the number of requests it receives and the number of AWS Service Catalog products that you need to copy. For more information, see [Lambda function scaling](https://docs.aws.amazon.com/lambda/latest/dg/invocation-scaling.html) in the AWS Lambda documentation.

## Tools
<a name="copy-aws-service-catalog-products-across-different-aws-accounts-and-aws-regions-tools"></a>
+ [AWS Command Line Interface (AWS CLI)](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) is an open-source tool that helps you interact with AWS services through commands in your command-line shell.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a compute service that helps you run code without needing to provision or manage servers. It runs your code only when needed and scales automatically, so you pay only for the compute time that you use.
+ [AWS Service Catalog](https://docs.aws.amazon.com/servicecatalog/latest/adminguide/introduction.html) helps you centrally manage catalogs of IT services that are approved for AWS. End users can quickly deploy only the approved IT services they need, following the constraints set by your organization.

**Code**

You can use the `cross-account-copy` package (attached) to copy AWS Service Catalog products across accounts or the `cross-region-copy` package (attached) to copy products across Regions.

The `cross-account-copy` package contains the following files:
+ `copyconf.properties` – The configuration file that contains the Region and AWS account ID parameters for copying products across accounts.
+ `scProductCopyLambda.py` – The Python function for copying products across accounts.
+ `createDestAccountRole.sh` – The script to create an IAM role in the destination account.
+ `createSrcAccountRole.sh` – The script to create an IAM role in the source account.
+ `copyProduct.sh` – The script to create and invoke the Lambda function for copying products across accounts.

The `cross-region-copy` package contains the following files:
+ `copyconf.properties` – The configuration file that contains the Region and AWS account ID parameters for copying products across Regions.
+ `scProductCopyLambda.py` – The Python function for copying products across Regions.
+ `copyProduct.sh` – The script to create an IAM role and create and invoke the Lambda function for copying products across Regions.
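Both `copyconf.properties` files use a simple `key=value` format. As an illustration only (the actual key names in the attached packages may differ), a minimal parser for such a file might look like this:

```python
def parse_properties(text):
    """Parse simple key=value properties text into a dict.

    Blank lines and lines starting with '#' are ignored.
    """
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        props[key.strip()] = value.strip()
    return props

# Hypothetical file content; the real key names in the packages may differ.
sample = """
# copyconf.properties
sourceRegion=us-east-1
destinationRegion=eu-west-1
destinationAccountId=111122223333
"""
config = parse_properties(sample)
```

The shell scripts in the packages read the same parameters before creating the IAM roles and invoking the Lambda function.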

## Epics
<a name="copy-aws-service-catalog-products-across-different-aws-accounts-and-aws-regions-epics"></a>

### Option 1 – Copy AWS Service Catalog products across accounts
<a name="option-1-ndash-copy-aws-service-catalog-products-across-accounts"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Update the configuration file. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/copy-aws-service-catalog-products-across-different-aws-accounts-and-aws-regions.html) | AWS administrator, AWS systems administrator, Cloud administrator | 
| Configure your credentials for AWS CLI in the destination account. | Configure your credentials to access AWS CLI in your destination account by running the `aws configure` command and providing the following values:<pre>$aws configure <br />AWS Access Key ID [None]: <your_access_key_id> <br />AWS Secret Access Key [None]: <your_secret_access_key> <br />Default region name [None]: Region<br />Default output format [None]:</pre>For more information about this, see [Configuration basics](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html) in the AWS Command Line Interface documentation.  | AWS administrator, AWS systems administrator, Cloud administrator | 
| Configure your credentials for AWS CLI in the source account. | Configure your credentials to access AWS CLI in your source account by running the `aws configure` command and providing the following values: <pre>$aws configure<br />AWS Access Key ID [None]: <your_access_key_id><br />AWS Secret Access Key [None]: <your_secret_access_key><br />Default region name [None]: Region<br />Default output format [None]:</pre>For more information about this, see [Configuration basics](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html) in the AWS Command Line Interface documentation.  | AWS administrator, AWS systems administrator, Cloud administrator | 
| Create a Lambda execution role in your destination account. | Run the `createDestAccountRole.sh` script in your destination account. The script implements the following actions:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/copy-aws-service-catalog-products-across-different-aws-accounts-and-aws-regions.html) | AWS administrator, AWS systems administrator, Cloud administrator | 
| Create the cross-account IAM role in your source account. | Run the `createSrcAccountRole.sh` script in your source account. The script implements the following actions:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/copy-aws-service-catalog-products-across-different-aws-accounts-and-aws-regions.html) | AWS administrator, AWS systems administrator, Cloud administrator | 
| Run the copyProduct script in the destination account. | Run the `copyProduct.sh` script in your destination account. The script implements the following actions:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/copy-aws-service-catalog-products-across-different-aws-accounts-and-aws-regions.html) | AWS administrator, AWS systems administrator, Cloud administrator | 

### Option 2 – Copy AWS Service Catalog products from a source Region to a destination Region
<a name="option-2-ndash-copy-aws-service-catalog-products-from-a-source-region-to-a-destination-region"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Update the configuration file. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/copy-aws-service-catalog-products-across-different-aws-accounts-and-aws-regions.html) | AWS systems administrator, Cloud administrator, AWS administrator | 
| Configure your credentials for AWS CLI. | Configure your credentials to access AWS CLI in your environment by running the `aws configure` command and providing the following values:<pre>$aws configure<br />AWS Access Key ID [None]: <your_access_key_id><br />AWS Secret Access Key [None]: <your_secret_access_key><br />Default region name [None]: Region<br />Default output format [None]:</pre>For more information about this, see [Configuration basics](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html) in the AWS Command Line Interface documentation.  | AWS administrator, AWS systems administrator, Cloud administrator | 
| Run the copyProduct script. | Run the `copyProduct.sh` script in your destination Region. The script implements the following actions:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/copy-aws-service-catalog-products-across-different-aws-accounts-and-aws-regions.html) | AWS administrator, AWS systems administrator, Cloud administrator | 

## Related resources
<a name="copy-aws-service-catalog-products-across-different-aws-accounts-and-aws-regions-resources"></a>
+ [Create a Lambda execution role](https://docs.aws.amazon.com/lambda/latest/dg/lambda-intro-execution-role.html) (AWS Lambda documentation)
+ [Create a Lambda function](https://docs.aws.amazon.com/lambda/latest/dg/gettingstarted-awscli.html) (AWS Lambda documentation)
+ [AWS Service Catalog API reference](https://docs.aws.amazon.com/servicecatalog/latest/dg/API_Operations_AWS_Service_Catalog.html)
+ [AWS Service Catalog documentation](https://docs.aws.amazon.com/servicecatalog/latest/adminguide/what-is_concepts.html)

## Attachments
<a name="attachments-7ede5d17-89eb-4455-928f-6953d145ac9f"></a>

To access additional content that is associated with this document, unzip the following file: [attachment.zip](samples/p-attach/7ede5d17-89eb-4455-928f-6953d145ac9f/attachments/attachment.zip)

# Create a RACI or RASCI matrix for a cloud operating model
<a name="create-a-raci-or-rasci-matrix-for-a-cloud-operating-model"></a>

*Teddy Germade, Jerome Descreux, Florian Leroux, and Josselin LE MINEUR, Amazon Web Services*

## Summary
<a name="create-a-raci-or-rasci-matrix-for-a-cloud-operating-model-summary"></a>

The Cloud Center of Excellence (CCoE), also known as the Cloud Enablement Engine (CEE), is an empowered and accountable team that is focused on operational readiness for the cloud. Its key focus is to transform the IT organization from an on-premises operating model to a cloud operating model. The CCoE should be a cross-functional team that includes representation from infrastructure, applications, operations, and security.

One of the key components of a cloud operating model is a *RACI matrix* or *RASCI matrix*. This is used to define the roles and responsibilities for all parties involved in migration activities and cloud operations. The matrix name is derived from the responsibility types defined in the matrix: responsible (R), accountable (A), support (S), consulted (C), and informed (I). The support type is optional. If you include it, it’s called a *RASCI matrix*, and if you exclude it, it’s called a *RACI matrix*.

By starting with the attached template, your CCoE team can create a RACI or RASCI matrix for your organization. The template contains teams, roles, and tasks that are common in cloud operating models. The foundation of this matrix is the tasks related to operations integration and CCoE capabilities. However, you can customize this template to meet the needs of your organization’s structure and use case.

There are no limits on implementing a RACI matrix. The approach works for large organizations, start-ups, and everything in between. In small organizations, the same person can fill several roles.
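To make the responsibility types concrete, the following sketch models a small RASCI matrix as a dictionary and checks a common convention (illustrative, not mandated by this pattern) that each task has exactly one accountable (A) party. The team and task names are hypothetical:

```python
# Responsibility codes: R=responsible, A=accountable, S=support, C=consulted, I=informed
matrix = {
    "Define landing zone": {"CCoE": "A", "Infrastructure": "R", "Security": "C"},
    "Patch EC2 instances": {"CCoE": "A", "Operations": "R", "Applications": "I"},
}

def check_single_accountable(matrix):
    """Return the tasks that do not have exactly one 'A' assignment."""
    return [
        task
        for task, roles in matrix.items()
        if list(roles.values()).count("A") != 1
    ]

issues = check_single_accountable(matrix)  # empty list means every task has one owner
```

A check like this can be run against an exported spreadsheet to catch tasks with no accountable party before the matrix is approved.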

## Epics
<a name="create-a-raci-or-rasci-matrix-for-a-cloud-operating-model-epics"></a>

### Create the matrix
<a name="create-the-matrix"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Identify key stakeholders. | Identify key service and team managers that are linked to the strategic objectives of your cloud operating model. | Project manager | 
| Customize the matrix template. | Download the template in the [Attachments](#attachments-b3df3d2c-c596-4736-bbaa-8edbcf335352) section, and then update the RACI or RASCI matrix as follows:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-a-raci-or-rasci-matrix-for-a-cloud-operating-model.html) | Project manager | 
| Plan meetings. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-a-raci-or-rasci-matrix-for-a-cloud-operating-model.html) | Project manager | 
| Complete the matrix. | In the meeting with all stakeholders, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-a-raci-or-rasci-matrix-for-a-cloud-operating-model.html) | Project manager | 
| Share the RASCI matrix. | When the RACI or RASCI matrix is complete, have it approved by leadership. Save it in a shared repository or central location where all stakeholders can access it. We recommend that you use standard document control processes to record and approve revisions to the matrix. | Project manager | 

## Related resources
<a name="create-a-raci-or-rasci-matrix-for-a-cloud-operating-model-resources"></a>
+ [AWS shared responsibility model](https://aws.amazon.com/compliance/shared-responsibility-model/)

## Attachments
<a name="attachments-b3df3d2c-c596-4736-bbaa-8edbcf335352"></a>

To access additional content that is associated with this document, unzip the following file: [attachment.zip](samples/p-attach/b3df3d2c-c596-4736-bbaa-8edbcf335352/attachments/attachment.zip)

# Create alarms for custom metrics using Amazon CloudWatch anomaly detection
<a name="create-alarms-for-custom-metrics-using-amazon-cloudwatch-anomaly-detection"></a>

*Ram Kandaswamy and Raheem Jiwani, Amazon Web Services*

## Summary
<a name="create-alarms-for-custom-metrics-using-amazon-cloudwatch-anomaly-detection-summary"></a>

On the Amazon Web Services (AWS) Cloud, you can use Amazon CloudWatch to create alarms that monitor metrics and send notifications or automatically make changes if a threshold is breached.

To avoid being limited by [static thresholds](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/ConsoleAlarms.html), you can create alarms that are based on past patterns and notify you when specific metrics are outside the normal operating window. For example, you could monitor your API’s response times from Amazon API Gateway and receive notifications about anomalies that prevent you from meeting a service-level agreement (SLA).

This pattern describes how to use CloudWatch anomaly detection for custom metrics. The pattern shows you how to create a custom metric in Amazon CloudWatch Logs Insights or publish a custom metric with an AWS Lambda function, and then set up anomaly detection and create notifications using Amazon Simple Notification Service (Amazon SNS).
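For the custom-metric option, publishing a data point from a Lambda function comes down to a single CloudWatch API call. The sketch below only builds the request parameters (the namespace, metric name, and dimension values are illustrative, not from this pattern); in a Lambda function you would pass them to `put_metric_data` on a boto3 CloudWatch client:

```python
def build_metric_request(namespace, metric_name, value,
                         unit="Milliseconds", dimensions=None):
    """Build keyword arguments for CloudWatch's PutMetricData API call."""
    datum = {"MetricName": metric_name, "Value": value, "Unit": unit}
    if dimensions:
        datum["Dimensions"] = [{"Name": k, "Value": v} for k, v in dimensions.items()]
    return {"Namespace": namespace, "MetricData": [datum]}

# Illustrative values; inside a Lambda handler you would then call:
#   boto3.client("cloudwatch").put_metric_data(**params)
params = build_metric_request(
    "MyApp/Latency", "ApiResponseTime", 245.0,
    dimensions={"Endpoint": "/orders"},
)
```

The published metric then appears in the CloudWatch console under the namespace you chose.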

## Prerequisites and limitations
<a name="create-alarms-for-custom-metrics-using-amazon-cloudwatch-anomaly-detection-prereqs"></a>

**Prerequisites**
+ An active AWS account.
+ An existing SNS topic, configured to send email notifications. For more information about this, see [Getting started with Amazon SNS](https://docs.aws.amazon.com/sns/latest/dg/sns-getting-started.html) in the Amazon SNS documentation.
+ An existing application, configured with [CloudWatch Logs](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CWL_GettingStarted.html).

**Limitations**
+ CloudWatch metrics don't support millisecond time intervals. For more information about the granularity of regular and custom metrics, see the [Amazon CloudWatch FAQs](https://aws.amazon.com/cloudwatch/faqs/).

## Architecture
<a name="create-alarms-for-custom-metrics-using-amazon-cloudwatch-anomaly-detection-architecture"></a>

![\[CloudWatch using an Amazon SNS topic to send an email notification when an alarm initiates.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/d47e6f7f-e469-4cb9-b34b-8c4b78d71820/images/49f30340-9552-430a-893a-d0608bb09e38.png)


 The diagram shows the following workflow:

1. Application logs are streamed to CloudWatch Logs, which creates and updates custom metrics from the log data.

1. An alarm initiates based on thresholds and sends an alert to an SNS topic.

1. Amazon SNS sends you an email notification.

**Technology stack**
+ CloudWatch
+ AWS Lambda
+ Amazon SNS

## Tools
<a name="create-alarms-for-custom-metrics-using-amazon-cloudwatch-anomaly-detection-tools"></a>
+ [Amazon CloudWatch](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/WhatIsCloudWatch.html) provides a reliable, scalable, and flexible monitoring solution.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a compute service that helps you run code without provisioning or managing servers.
+ [Amazon Simple Notification Service (Amazon SNS)](https://docs.aws.amazon.com/sns/latest/dg/welcome.html) is a managed service that provides message delivery from publishers to subscribers.

## Epics
<a name="create-alarms-for-custom-metrics-using-amazon-cloudwatch-anomaly-detection-epics"></a>

### Set up anomaly detection for a custom metric
<a name="set-up-anomaly-detection-for-a-custom-metric"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Option 1 - Create a custom metric with a Lambda function. | Download the `lambda_function.py` file (attached) and then replace the sample `lambda_function.py` file in the [aws-lambda-developer-guide](https://github.com/awsdocs/aws-lambda-developer-guide/tree/main/sample-apps/blank-python/function) repository on the AWS Documentation GitHub. This provides you with a sample Lambda function that sends custom metrics to CloudWatch Logs. The Lambda function uses the Boto3 API to integrate with CloudWatch. After you run the Lambda function, you can sign in to the AWS Management Console, open the CloudWatch console, and the published metric is available under your published namespace. | DevOps engineer, AWS DevOps | 
| Option 2 – Create custom metrics from CloudWatch log groups.  | Sign in to the AWS Management Console, open the CloudWatch console, and then choose **Log groups**. Choose the log group that you want to create a metric for. Choose **Actions **and then choose **Create metric filter**. For **Filter pattern**, enter the filter pattern that you want to use. For more information, see [Filter and pattern syntax](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/FilterAndPatternSyntax.html) in the CloudWatch documentation. To test your filter pattern, enter one or more log events under **Test Pattern**. Each log event must be within one line, because line breaks are used to separate log events in the **Log event** messages box. After you test the pattern, you can enter a name and value for your metric under **Metric details**. For more information and steps to create a custom metric, see [Create a metric filter for a log group](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CreateMetricFilterProcedure.html) in the CloudWatch documentation. | DevOps engineer, AWS DevOps | 
| Create an alarm for your custom metric. | On the CloudWatch console, choose **Alarms** and then choose **Create Alarm**. Choose **Select metric** and enter the name of the metric that you created earlier into the search box. Choose the **Graphed metrics** tab and configure the options according to your requirements. Under **Conditions**, choose **Anomaly detection** instead of **Static thresholds**. This shows you a band based on a default of two standard deviations. You can set up thresholds and adjust them according to your requirements. Choose **Next**. The band is dynamic and depends on the quality of the data points. As you aggregate more data, the band and thresholds are updated automatically.  | DevOps engineer, AWS DevOps | 
| Set up SNS notifications. | Under **Notification**, choose the SNS topic to notify when the alarm is in the `ALARM`, `OK`, or `INSUFFICIENT_DATA` state. To have the alarm send multiple notifications for the same alarm state or for different alarm states, choose **Add notification**. Choose **Next**. Enter a name and description for the alarm. The name must contain only ASCII characters. Then choose **Next**. Under **Preview and create**, confirm that the information and conditions are correct, and then choose **Create alarm**. | DevOps engineer, AWS DevOps | 
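The console steps above can also be expressed through the PutMetricAlarm API: an anomaly detection alarm references an `ANOMALY_DETECTION_BAND` metric math expression and a band width in standard deviations. The sketch below builds the request parameters only (metric, namespace, and topic names are illustrative); you would pass them to `put_metric_alarm` on a boto3 CloudWatch client:

```python
def build_anomaly_alarm(alarm_name, namespace, metric_name, sns_topic_arn, stddevs=2):
    """Build keyword arguments for an anomaly detection alarm (PutMetricAlarm API)."""
    return {
        "AlarmName": alarm_name,
        # Alarm when the metric breaches the upper edge of the band.
        "ComparisonOperator": "GreaterThanUpperThreshold",
        "EvaluationPeriods": 3,
        # ThresholdMetricId points at the band expression below.
        "ThresholdMetricId": "ad1",
        "Metrics": [
            {
                "Id": "m1",
                "MetricStat": {
                    "Metric": {"Namespace": namespace, "MetricName": metric_name},
                    "Period": 300,
                    "Stat": "Average",
                },
                "ReturnData": True,
            },
            {
                "Id": "ad1",
                # Band width is the number of standard deviations around the expected value.
                "Expression": f"ANOMALY_DETECTION_BAND(m1, {stddevs})",
            },
        ],
        "AlarmActions": [sns_topic_arn],
    }

# Illustrative names; then: boto3.client("cloudwatch").put_metric_alarm(**params)
params = build_anomaly_alarm(
    "ApiLatencyAnomaly", "MyApp/Latency", "ApiResponseTime",
    "arn:aws:sns:us-east-1:111122223333:alerts",
)
```

Widening `stddevs` makes the band more tolerant of variation; narrowing it makes the alarm more sensitive.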

## Related resources
<a name="create-alarms-for-custom-metrics-using-amazon-cloudwatch-anomaly-detection-resources"></a>
+ [Publishing custom metrics to CloudWatch](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/publishingMetrics.html)
+ [Using CloudWatch anomaly detection](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch_Anomaly_Detection.html)
+ [Alarm events and Amazon EventBridge](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch-and-eventbridge.html)
+ [What are the best practices to follow while pushing custom metrics to Cloud Watch?](https://www.youtube.com/watch?v=mVffHIzIL60) (video)
+ [Introduction to CloudWatch Application Insights ](https://www.youtube.com/watch?v=PBO636_t9n0)(video)
+ [Detect anomalies with CloudWatch ](https://www.youtube.com/watch?v=8umIX-pUy3k)(video)

## Attachments
<a name="attachments-d47e6f7f-e469-4cb9-b34b-8c4b78d71820"></a>

To access additional content that is associated with this document, unzip the following file: [attachment.zip](samples/p-attach/d47e6f7f-e469-4cb9-b34b-8c4b78d71820/attachments/attachment.zip)

# Create an AWS Cloud9 IDE that uses Amazon EBS volumes with default encryption
<a name="create-an-aws-cloud9-ide-that-uses-amazon-ebs-volumes-with-default-encryption"></a>

*Janardhan Malyala and Dhrubajyoti Mukherjee, Amazon Web Services*

## Summary
<a name="create-an-aws-cloud9-ide-that-uses-amazon-ebs-volumes-with-default-encryption-summary"></a>

**Notice**: AWS Cloud9 is no longer available to new customers. Existing customers of AWS Cloud9 can continue to use the service as normal. [Learn more](https://aws.amazon.com/blogs/devops/how-to-migrate-from-aws-cloud9-to-aws-ide-toolkits-or-aws-cloudshell/).

You can use [encryption by default](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html#encryption-by-default) to enforce the encryption of your Amazon Elastic Block Store (Amazon EBS) volumes and snapshot copies on the Amazon Web Services (AWS) Cloud. 

You can create an AWS Cloud9 integrated development environment (IDE) that uses EBS volumes encrypted by default. However, the AWS Identity and Access Management (IAM) [service-linked role](https://docs.aws.amazon.com/cloud9/latest/user-guide/using-service-linked-roles.html) for AWS Cloud9 requires access to the AWS Key Management Service (AWS KMS) key for these EBS volumes. If access is not provided, the AWS Cloud9 IDE might fail to launch and debugging might be difficult. 

This pattern provides the steps to add the service-linked role for AWS Cloud9 to the AWS KMS key that is used by your EBS volumes. The setup described by this pattern helps you successfully create and launch an IDE that uses EBS volumes with encryption by default.

## Prerequisites and limitations
<a name="create-an-aws-cloud9-ide-that-uses-amazon-ebs-volumes-with-default-encryption-prereqs"></a>

**Prerequisites**
+ An active AWS account.
+ Default encryption turned on for EBS volumes. For more information about encryption by default, see [Amazon EBS encryption](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html) in the Amazon Elastic Compute Cloud (Amazon EC2) documentation.
+ An existing [customer managed KMS key](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#customer-cmk) for encrypting your EBS volumes.

**Note**  
You don't need to create the service-linked role for AWS Cloud9. When you create an AWS Cloud9 development environment, AWS Cloud9 creates the service-linked role for you.

## Architecture
<a name="create-an-aws-cloud9-ide-that-uses-amazon-ebs-volumes-with-default-encryption-architecture"></a>

![\[Using an AWS Cloud9 IDE to enforce the encryption of EBS volumes and snapshots.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/dd98fbb4-0949-4299-b701-bc857e13049c/images/6b22b8d1-75d9-4f06-b5d6-5fff7397f22d.png)


**Technology stack**
+ AWS Cloud9
+ IAM
+ AWS KMS

## Tools
<a name="create-an-aws-cloud9-ide-that-uses-amazon-ebs-volumes-with-default-encryption-tools"></a>
+ [AWS Cloud9](https://docs.aws.amazon.com/cloud9/latest/user-guide/welcome.html) is an integrated development environment (IDE) that helps you code, build, run, test, and debug software. It also helps you release software to the AWS Cloud.
+ [Amazon Elastic Block Store (Amazon EBS)](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonEBS.html) provides block-level storage volumes for use with Amazon Elastic Compute Cloud (Amazon EC2) instances.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.
+ [AWS Key Management Service (AWS KMS)](https://docs.aws.amazon.com/kms/latest/developerguide/overview.html) helps you create and control cryptographic keys to help protect your data.

## Epics
<a name="create-an-aws-cloud9-ide-that-uses-amazon-ebs-volumes-with-default-encryption-epics"></a>

### Find the default encryption key value
<a name="find-the-default-encryption-key-value"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Record the default encryption key value for the EBS volumes.  | Sign in to the AWS Management Console and open the Amazon EC2 console. Choose **EC2 dashboard**, and then choose **Data protection and security** under **Account attributes**. In the **EBS encryption** section, copy and record the value in **Default encryption key**. | Cloud architect, DevOps engineer | 
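The same value can be read programmatically with the EC2 `GetEbsDefaultKmsKeyId` API. A minimal sketch, written against any object that exposes the boto3-style method so that it can also be exercised with a stub client:

```python
def get_default_ebs_kms_key(ec2_client):
    """Return the Region's default EBS encryption key ID or alias.

    ec2_client is a boto3 EC2 client, for example boto3.client("ec2").
    """
    response = ec2_client.get_ebs_default_kms_key_id()
    return response["KmsKeyId"]
```

For example, `get_default_ebs_kms_key(boto3.client("ec2"))` returns the same alias or key ARN that the console shows under **Default encryption key**.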

### Provide access to the AWS KMS key
<a name="provide-access-to-the-aws-kms-key"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Provide AWS Cloud9 with access to the KMS key for EBS volumes. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-an-aws-cloud9-ide-that-uses-amazon-ebs-volumes-with-default-encryption.html) For more information about updating a key policy, see [How to change a key policy](https://docs.aws.amazon.com/kms/latest/developerguide/key-policy-modifying.html#key-policy-modifying-how-to) in the AWS KMS documentation. The service-linked role for AWS Cloud9 is automatically created when you launch your first IDE. For more information, see [Creating a service-linked role](https://docs.aws.amazon.com/cloud9/latest/user-guide/using-service-linked-roles.html#create-service-linked-role) in the AWS Cloud9 documentation.  | Cloud architect, DevOps engineer | 

### Create and launch the IDE
<a name="create-and-launch-the-ide"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create and launch the AWS Cloud9 IDE. | Open the AWS Cloud9 console and choose **Create environment**. Configure the IDE according to your requirements by following the steps in [Creating an EC2 environment](https://docs.aws.amazon.com/cloud9/latest/user-guide/create-environment-main.html) in the AWS Cloud9 documentation.  | Cloud architect, DevOps engineer | 

## Related resources
<a name="create-an-aws-cloud9-ide-that-uses-amazon-ebs-volumes-with-default-encryption-resources"></a>
+ [Encrypt EBS volumes used by AWS Cloud9](https://docs.aws.amazon.com/cloud9/latest/user-guide/move-environment.html#encrypting-volumes)
+ [Create a service-linked role for AWS Cloud9](https://docs.aws.amazon.com/cloud9/latest/user-guide/using-service-linked-roles.html#create-service-linked-role)
+ [Create an EC2 environment in AWS Cloud9](https://docs.aws.amazon.com/cloud9/latest/user-guide/create-environment-main.html)

## Additional information
<a name="create-an-aws-cloud9-ide-that-uses-amazon-ebs-volumes-with-default-encryption-additional"></a>

**AWS KMS key policy updates**

The following statements grant the AWS Cloud9 service-linked role use of the key. Add them to the `Statement` array of your KMS key policy, and replace `<aws_accountid>` with your AWS account ID.

```
{
            "Sid": "Allow use of the key",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::<aws_accountid>:role/aws-service-role/cloud9.amazonaws.com/AWSServiceRoleForAWSCloud9"
            },
            "Action": [
                "kms:Encrypt",
                "kms:Decrypt",
                "kms:ReEncrypt*",
                "kms:GenerateDataKey*",
                "kms:DescribeKey"
            ],
            "Resource": "*"
        },
        {
            "Sid": "Allow attachment of persistent resources",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::<aws_accountid>:role/aws-service-role/cloud9.amazonaws.com/AWSServiceRoleForAWSCloud9"
            },
            "Action": [
                "kms:CreateGrant",
                "kms:ListGrants",
                "kms:RevokeGrant"
            ],
            "Resource": "*",
            "Condition": {
                "Bool": {
                    "kms:GrantIsForAWSResource": "true"
                }
            }
        }
```

**Using a cross-account key**

If you want to use a cross-account KMS key, you must use a grant in combination with the KMS key policy. This enables cross-account access to the key. In the same account that you used to create the Cloud9 environment, run the following command in the terminal.

```
aws kms create-grant \
 --region <Region where Cloud9 environment is created> \
 --key-id <The cross-account KMS key ARN> \
 --grantee-principal arn:aws:iam::<The account where Cloud9 environment is created>:role/aws-service-role/cloud9.amazonaws.com/AWSServiceRoleForAWSCloud9 \
 --operations "Encrypt" "Decrypt" "ReEncryptFrom" "ReEncryptTo" "GenerateDataKey" "GenerateDataKeyWithoutPlaintext" "DescribeKey" "CreateGrant"
```

After you run this command, you can create Cloud9 environments by using EBS encryption with a key in a different account.
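To confirm that the grant is in place, you can query the key's grants with Boto3. The following sketch assumes Boto3 is installed and that your credentials can call `kms:ListGrants` on the key; the helper that builds the service-linked role ARN is pure string construction.

```python
# Hedged sketch: verify that the Cloud9 service-linked role holds a grant on
# the cross-account KMS key. Requires Boto3 and valid AWS credentials.

def cloud9_service_role_arn(account_id: str) -> str:
    """Build the ARN of the AWS Cloud9 service-linked role for an account."""
    return (
        f"arn:aws:iam::{account_id}:role/aws-service-role/"
        "cloud9.amazonaws.com/AWSServiceRoleForAWSCloud9"
    )

def find_cloud9_grants(key_arn: str, account_id: str, region: str) -> list:
    """Return the grants on the key whose grantee is the Cloud9 role."""
    import boto3  # imported here so the pure helper above has no dependency

    kms = boto3.client("kms", region_name=region)
    grants = kms.list_grants(KeyId=key_arn)["Grants"]
    grantee = cloud9_service_role_arn(account_id)
    return [g for g in grants if g.get("GranteePrincipal") == grantee]
```

If `find_cloud9_grants` returns an empty list, the `create-grant` command has not taken effect for that account.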

# Create tag-based Amazon CloudWatch dashboards automatically
<a name="create-tag-based-amazon-cloudwatch-dashboards-automatically"></a>

*Janak Vadaria, Vinodkumar Mandalapu, and Rajneesh Tyagi, Amazon Web Services*

## Summary
<a name="create-tag-based-amazon-cloudwatch-dashboards-automatically-summary"></a>

Creating different Amazon CloudWatch dashboards manually can be time-consuming, particularly when you have to create and update multiple resources to automatically scale your environment. A solution that creates and updates your CloudWatch dashboards automatically can save you time. This pattern helps you deploy a fully automated AWS Cloud Development Kit (AWS CDK) pipeline that creates and updates CloudWatch dashboards for your AWS resources based on tag change events, to display Golden Signals metrics.

In site reliability engineering (SRE), Golden Signals refers to a comprehensive set of metrics that offer a broad view of a service from a user or consumer perspective. These metrics consist of latency, traffic, errors, and saturation. For more information, see [What is Site Reliability Engineering (SRE)?](https://aws.amazon.com/what-is/sre/) on the AWS website.

The solution provided by this pattern is event-driven. After it's deployed, it continuously monitors the tag change events and automatically updates the CloudWatch dashboards and alarms.
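The trigger logic can be sketched as follows. The event shape follows the EventBridge "Tag Change on Resource" event (source `aws.tag`); the watched tag keys are illustrative, and the actual rule lives in the sample repository.

```python
# Hedged sketch of the event-driven trigger: decide whether an incoming
# EventBridge event is a tag change that touches one of the configured keys.

def is_relevant_tag_event(event: dict, watched_keys) -> bool:
    """True for "Tag Change on Resource" events that modify a watched tag key."""
    if event.get("source") != "aws.tag":
        return False
    if event.get("detail-type") != "Tag Change on Resource":
        return False
    changed = set(event.get("detail", {}).get("changed-tag-keys", []))
    return bool(changed & set(watched_keys))

# Illustrative event payload (abridged from the documented event shape)
sample_event = {
    "source": "aws.tag",
    "detail-type": "Tag Change on Resource",
    "detail": {"changed-tag-keys": ["application"], "service": "lambda"},
}
```

With this sample, `is_relevant_tag_event(sample_event, ["application"])` is true, so the pipeline would start; a change to an unwatched key would be ignored.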

## Prerequisites and limitations
<a name="create-tag-based-amazon-cloudwatch-dashboards-automatically-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ AWS Command Line Interface (AWS CLI), [installed and configured](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html)
+ [Prerequisites](https://docs.aws.amazon.com/cdk/v2/guide/work-with.html#work-with-prerequisites) for the AWS CDK v2
+ A [bootstrapped environment](https://docs.aws.amazon.com/cdk/v2/guide/bootstrapping.html) on AWS
+ [Python version 3](https://www.python.org/downloads/)
+ [AWS SDK for Python (Boto3)](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/quickstart.html), installed
+ [Node.js version 18](https://nodejs.org/en/download/current) or later
+ Node package manager (npm), [installed and configured](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) for the AWS CDK
+ Moderate (level 200) familiarity with the AWS CDK and AWS CodePipeline

**Limitations**

This solution currently creates automated dashboards for the following AWS services only:
+ [Amazon Relational Database Service (Amazon RDS)](https://aws.amazon.com/rds/)
+ [AWS Auto Scaling](https://aws.amazon.com/autoscaling/)
+ [Amazon Simple Notification Service (Amazon SNS)](https://aws.amazon.com/sns/)
+ [Amazon DynamoDB](https://aws.amazon.com/dynamodb/)
+ [AWS Lambda](https://aws.amazon.com/lambda/)

## Architecture
<a name="create-tag-based-amazon-cloudwatch-dashboards-automatically-architecture"></a>

**Target technology stack**
+ [CloudWatch dashboards](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch_Dashboards.html)
+ [CloudWatch alarms](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/AlarmThatSendsEmail.html)

**Target architecture**

![\[Target architecture for creating tag-based CloudWatch dashboards\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/f234fe30-87db-446f-a291-d33928ca2ccb/images/f63ca697-f252-416d-8a1b-0239f38c10c5.png)


1. An AWS tag change event for the configured application tags or code changes initiates a pipeline in AWS CodePipeline to build and deploy updated CloudWatch dashboards.

1. AWS CodeBuild runs a Python script to find the resources that have configured tags and stores the resource IDs in a local file in a CodeBuild environment.

1. CodeBuild runs **cdk synth** to generate CloudFormation templates that deploy CloudWatch dashboards and alarms.

1. CodePipeline deploys the CloudFormation templates to the specified AWS account and Region.

1. When the CloudFormation stack has been deployed successfully, you can view the CloudWatch dashboards and alarms.
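The resource lookup in step 2 can be sketched with the Resource Groups Tagging API. The tag keys and values below are illustrative, and the script in the repository may differ in detail.

```python
# Hedged sketch of step 2: find the resources that carry the configured tags.
# The pure helper converts a tag mapping into the TagFilters request shape.

def build_tag_filters(tags: dict) -> list:
    """Convert {key: [values]} into the TagFilters shape used by the
    Resource Groups Tagging API."""
    return [{"Key": k, "Values": v} for k, v in sorted(tags.items())]

def find_tagged_resource_arns(tags: dict, region: str) -> list:
    """Return the ARNs of resources matching all the given tag filters."""
    import boto3  # imported here so build_tag_filters has no dependency

    client = boto3.client("resourcegroupstaggingapi", region_name=region)
    arns = []
    paginator = client.get_paginator("get_resources")
    for page in paginator.paginate(TagFilters=build_tag_filters(tags)):
        arns.extend(m["ResourceARN"] for m in page["ResourceTagMappingList"])
    return arns
```

For example, `find_tagged_resource_arns({"application": ["payments"]}, "us-east-1")` would return the ARNs whose IDs the build step stores in its local file.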

**Automation and scale**

This solution has been automated by using the AWS CDK. You can find the code in the GitHub [Golden Signals Dashboards on Amazon CloudWatch](https://github.com/aws-samples/golden-signals-dashboards-sample-app) repository. For additional scaling and to create custom dashboards, you can configure multiple tag keys and values.

## Tools
<a name="create-tag-based-amazon-cloudwatch-dashboards-automatically-tools"></a>

**AWS services**
+ [Amazon EventBridge](https://aws.amazon.com/eventbridge/) is a serverless event bus service that helps you connect your applications with real-time data from a variety of sources, including AWS Lambda functions, HTTP invocation endpoints using API destinations, or event buses in other AWS accounts.
+ [AWS CodePipeline](https://aws.amazon.com/codepipeline/) helps you quickly model and configure the different stages of a software release and automate the steps required to release software changes continuously.
+ [AWS CodeBuild](https://aws.amazon.com/codebuild/) is a fully managed build service that helps you compile source code, run unit tests, and produce artifacts that are ready to deploy.
+ [AWS Command Line Interface (AWS CLI)](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) is an open source tool that helps you interact with AWS services through commands in your command-line shell.
+ [AWS Identity and Access Management (IAM)](https://aws.amazon.com/iam/) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.
+ [Amazon Simple Storage Service (Amazon S3)](https://aws.amazon.com/s3/) is a cloud-based object storage service that helps you store, protect, and retrieve any amount of data.

## Best practices
<a name="create-tag-based-amazon-cloudwatch-dashboards-automatically-best-practices"></a>

As a security best practice, you can use encryption and authentication for the source repositories that connect to your pipelines. For additional best practices, see [CodePipeline best practices and use cases](https://docs.aws.amazon.com/codepipeline/latest/userguide/best-practices.html) in the CodePipeline documentation.

## Epics
<a name="create-tag-based-amazon-cloudwatch-dashboards-automatically-epics"></a>

### Configure and deploy the sample application
<a name="configure-and-deploy-the-sample-application"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Configure and deploy the sample application. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-tag-based-amazon-cloudwatch-dashboards-automatically.html) | AWS DevOps | 
| Automatically create dashboards and alarms. | After you deploy the sample application, you can create any of the resources that this solution supports with the expected tag values, which automatically creates the specified dashboards and alarms. To test this solution, create an AWS Lambda function:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-tag-based-amazon-cloudwatch-dashboards-automatically.html) | AWS DevOps | 

### Remove the sample application
<a name="remove-the-sample-application"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Remove the `golden-signals-dashboard` construct. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-tag-based-amazon-cloudwatch-dashboards-automatically.html) | AWS DevOps | 

## Troubleshooting
<a name="create-tag-based-amazon-cloudwatch-dashboards-automatically-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| Python command not found (referring to `findresources.sh`, line 8).  | Check the version of your Python installation. If you have installed Python version 3, replace `python` with `python3` on line 8 of the `findresources.sh` file, and run the `sh deploy.sh` command again to deploy the solution. | 

## Related resources
<a name="create-tag-based-amazon-cloudwatch-dashboards-automatically-resources"></a>
+ [Bootstrapping](https://docs.aws.amazon.com/cdk/v2/guide/bootstrapping.html) (AWS CDK documentation)
+ [Using named profiles](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html#cli-configure-files-methods) (AWS CLI documentation)
+ [AWS CDK Workshop](https://cdkworkshop.com/)

## Additional information
<a name="create-tag-based-amazon-cloudwatch-dashboards-automatically-additional"></a>

The following illustration shows a sample dashboard for Amazon RDS that is created as part of this solution.

![\[Sample dashboard for Amazon RDS\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/f234fe30-87db-446f-a291-d33928ca2ccb/images/706a262f-8650-47ff-ac44-e04ce5f4023e.png)


# Document your AWS landing zone design
<a name="document-your-aws-landing-zone-design"></a>

*Michael Daehnert, Florian Langer, and Michael Lodemann, Amazon Web Services*

## Summary
<a name="document-your-aws-landing-zone-design-summary"></a>

A *landing zone* is a well-architected, multi-account environment that's based on security and compliance best practices. It is the enterprise-wide container that holds all of your organizational units (OUs), AWS accounts, users, and other resources. A landing zone can scale to fit the needs of an enterprise of any size. AWS has two options for creating your landing zone: a service-based landing zone using [AWS Control Tower](https://docs.aws.amazon.com/controltower/latest/userguide/what-is-control-tower.html) or a customized landing zone that you build. Each option requires a different level of AWS knowledge.

AWS created AWS Control Tower to help you save time by automating the setup of a landing zone. AWS Control Tower is managed by AWS and uses best practices and guidelines to help you create your foundational environment. AWS Control Tower uses integrated services, such as [AWS Service Catalog](https://docs.aws.amazon.com/servicecatalog/latest/adminguide/introduction.html) and [AWS Organizations](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_introduction.html), to provision accounts in your landing zone and manage access to those accounts.

AWS landing zone projects vary in requirements, implementation details, and operational action items. There are customization aspects that need to be handled with every landing zone implementation. This includes (but is not limited to) how access management is handled, which technology stack is used, and what the monitoring requirements are for operational excellence. This pattern provides a template that helps you document your landing zone project. By using the template, you can document your project more quickly and help your development and operations teams understand your landing zone.

## Prerequisites and limitations
<a name="document-your-aws-landing-zone-design-prereqs"></a>

**Limitations**

This pattern does not describe what a landing zone is or how to implement one. For more information about these topics, see the [Related resources](#document-your-aws-landing-zone-design-resources) section.

## Epics
<a name="document-your-aws-landing-zone-design-epics"></a>

### Create the design document
<a name="create-the-design-document"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Identify key stakeholders. | Identify key service and team managers that are linked to your landing zone. | Project manager | 
| Customize the template. | Download the template in the [Attachments](#attachments-9e39a05a-8f51-4fe3-8999-522feafed6ca) section, and then update the template as follows:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/document-your-aws-landing-zone-design.html) | Project manager | 
| Complete the template. | In meetings with the stakeholders or by using a write-and-review process, complete the template as follows:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/document-your-aws-landing-zone-design.html) | Project manager | 
| Share the design document. | When your landing zone design documentation is complete, save it in a shared repository or central location where all stakeholders can access it. We recommend that you use standard document control processes to record and approve revisions to the design document. | Project manager | 

## Related resources
<a name="document-your-aws-landing-zone-design-resources"></a>
+ [AWS Control Tower documentation](https://docs.aws.amazon.com/controltower/latest/userguide/what-is-control-tower.html)
  + [Plan your AWS Control Tower landing zone](https://docs.aws.amazon.com/controltower/latest/userguide/planning-your-deployment.html)
  + [AWS multi-account strategy for your AWS Control Tower landing zone](https://docs.aws.amazon.com/controltower/latest/userguide/aws-multi-account-landing-zone.html)
  + [Administrative tips for landing zone setup](https://docs.aws.amazon.com/controltower/latest/userguide/tips-for-admin-setup.html)
  + [Expectations for landing zone configuration](https://docs.aws.amazon.com/controltower/latest/userguide/getting-started-configure.html)
+ [Customizations for AWS Control Tower](https://aws.amazon.com/solutions/implementations/customizations-for-aws-control-tower/) (AWS Solutions Library)
+ [Setting up a secure and scalable multi-account AWS environment](https://docs.aws.amazon.com/prescriptive-guidance/latest/migration-aws-environment/welcome.html) (AWS Prescriptive Guidance)

## Attachments
<a name="attachments-9e39a05a-8f51-4fe3-8999-522feafed6ca"></a>

To access additional content that is associated with this document, unzip the following file: [attachment.zip](samples/p-attach/9e39a05a-8f51-4fe3-8999-522feafed6ca/attachments/attachment.zip)

# Improve operational performance by enabling Amazon DevOps Guru across multiple AWS Regions, accounts, and OUs with the AWS CDK
<a name="improve-operational-performance-by-enabling-amazon-devops-guru-across-multiple-aws-regions-accounts-and-ous-with-the-aws-cdk"></a>

*Dr. Rahul Sharad Gaikwad, Amazon Web Services*

## Summary
<a name="improve-operational-performance-by-enabling-amazon-devops-guru-across-multiple-aws-regions-accounts-and-ous-with-the-aws-cdk-summary"></a>

This pattern demonstrates the steps to enable the Amazon DevOps Guru service across multiple Amazon Web Services (AWS) Regions, accounts, and organizational units (OUs) by using the AWS Cloud Development Kit (AWS CDK) in TypeScript. You can use AWS CDK stacks to deploy AWS CloudFormation StackSets from the administrator (primary) AWS account to enable Amazon DevOps Guru across multiple accounts, instead of logging into each account and enabling DevOps Guru individually for each account.

Amazon DevOps Guru provides artificial intelligence operations (AIOps) features to help you improve the availability of your applications and resolve operational issues faster. DevOps Guru reduces your manual effort by applying machine learning (ML) powered recommendations, without requiring any ML expertise. DevOps Guru analyzes your resources and operational data. If it detects any anomalies, it provides metrics, events, and recommendations to help you address the issue.

This pattern describes three deployment options for enabling Amazon DevOps Guru:
+ For all stack resources across multiple accounts and Regions
+ For all stack resources across OUs
+ For specific stack resources across multiple accounts and Regions

## Prerequisites and limitations
<a name="improve-operational-performance-by-enabling-amazon-devops-guru-across-multiple-aws-regions-accounts-and-ous-with-the-aws-cdk-prereqs"></a>

**Prerequisites**
+ An active AWS account.
+ AWS Command Line Interface (AWS CLI), installed and configured. (See [Installing, updating, and uninstalling the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html) in the AWS CLI documentation.)
+ AWS CDK Toolkit, installed and configured. (See [AWS CDK Toolkit](https://docs.aws.amazon.com/cdk/latest/guide/cli.html) in the AWS CDK documentation.)
+ Node Package Manager (npm), installed and configured for the AWS CDK in TypeScript. (See [Downloading and installing Node.js and npm](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) in the npm documentation.)
+ Python3 installed and configured, for running a Python script to inject traffic into the sample serverless application. (See [Python Setup and Usage](https://docs.python.org/3/using/index.html) in the Python documentation.)
+ Pip, installed and configured to install the Python requests library. (See the [pip installation instructions](https://pypi.org/project/pip/) on the PyPI website.)

**Product versions**
+ AWS CDK Toolkit version 1.107.0 or later
+ npm version 7.9.0 or later
+ Node.js version 15.3.0 or later

## Architecture
<a name="improve-operational-performance-by-enabling-amazon-devops-guru-across-multiple-aws-regions-accounts-and-ous-with-the-aws-cdk-architecture"></a>

**Technologies**

The architecture for this pattern includes the following services:
+ [Amazon DevOps Guru](https://aws.amazon.com/devops-guru/)
+ [AWS CloudFormation](https://aws.amazon.com/cloudformation/)
+ [Amazon API Gateway](https://aws.amazon.com/api-gateway/)
+ [AWS Lambda](https://aws.amazon.com/lambda/)
+ [Amazon DynamoDB](https://aws.amazon.com/dynamodb/)
+ [Amazon CloudWatch](https://aws.amazon.com/cloudwatch/)
+ [AWS CloudTrail](https://aws.amazon.com/cloudtrail/)

**AWS CDK stacks**

The pattern uses the following AWS CDK stacks: 
+ `CdkStackSetAdminRole` – Creates an AWS Identity and Access Management (IAM) administrator role to establish a trust relationship between the administrator and target accounts.
+ `CdkStackSetExecRole` – Creates an IAM role to trust the administrator account.
+ `CdkDevopsGuruStackMultiAccReg` – Enables DevOps Guru across multiple AWS Regions and accounts for all stacks, and sets up Amazon Simple Notification Service (Amazon SNS) notifications.
+ `CdkDevopsGuruStackMultiAccRegSpecStacks` – Enables DevOps Guru across multiple AWS Regions and accounts for specific stacks, and sets up Amazon SNS notifications.
+ `CdkDevopsguruStackOrgUnit` – Enables DevOps Guru across OUs, and sets up Amazon SNS notifications. 
+ `CdkInfrastructureStack` – Deploys sample serverless application components such as API Gateway, Lambda, and DynamoDB in the administrator account to demonstrate fault injection and insights generation.

**Sample application architecture**

The following diagram illustrates the architecture of a sample serverless application that has been deployed across multiple accounts and Regions. The pattern uses the administrator account to deploy all the AWS CDK stacks. It also uses the administrator account as one of the target accounts for setting up DevOps Guru.

1. When DevOps Guru is enabled, it first baselines each resource’s behavior and then ingests operational data from CloudWatch vended metrics.

1. If it detects an anomaly, it correlates it with the events from CloudTrail, and generates an insight.

1. The insight provides a correlated sequence of events, along with prescriptive recommendations, to help the operator identify the resource that's causing the issue.

1. Amazon SNS sends notification messages to the operator.

![\[A sample serverless application that has been deployed across multiple accounts and Regions.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/6075ca48-862a-4aa0-93c6-10bad8195a5c/images/beeb0992-aaa8-4f08-b983-685b6b8b8d5e.png)
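After deployment, the generated insights can also be retrieved programmatically. The following Boto3 sketch lists ongoing insights; the status-filter shape mirrors the DevOps Guru `ListInsights` API, and credentials and Region are assumed to be configured.

```python
# Hedged sketch: list ongoing DevOps Guru insights of a given type.
# The pure helper builds the StatusFilter request structure.

def ongoing_filter(insight_type: str) -> dict:
    """Build the StatusFilter for ongoing insights ("REACTIVE" or "PROACTIVE")."""
    if insight_type not in ("REACTIVE", "PROACTIVE"):
        raise ValueError(f"unsupported insight type: {insight_type}")
    return {"Ongoing": {"Type": insight_type}}

def list_ongoing_insights(insight_type: str, region: str) -> list:
    """Return the ongoing DevOps Guru insights of the given type."""
    import boto3  # imported here so ongoing_filter has no dependency

    client = boto3.client("devops-guru", region_name=region)
    resp = client.list_insights(StatusFilter=ongoing_filter(insight_type))
    key = "ReactiveInsights" if insight_type == "REACTIVE" else "ProactiveInsights"
    return resp.get(key, [])
```

For example, `list_ongoing_insights("REACTIVE", "us-east-1")` returns the reactive insights that would also trigger the Amazon SNS notifications described in step 4.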


**Automation and scale**

The [GitHub repository](https://github.com/aws-samples/amazon-devopsguru-cdk-samples.git) provided with this pattern uses the AWS CDK as an infrastructure as code (IaC) tool to create the configuration for this architecture. AWS CDK helps you orchestrate resources and enable DevOps Guru across multiple AWS accounts, Regions, and OUs.

## Tools
<a name="improve-operational-performance-by-enabling-amazon-devops-guru-across-multiple-aws-regions-accounts-and-ous-with-the-aws-cdk-tools"></a>

**AWS services**
+ [AWS CDK](https://docs.aws.amazon.com/cdk/latest/guide/home.html) – AWS Cloud Development Kit (AWS CDK) helps you define your cloud infrastructure as code in one of five supported programming languages: TypeScript, JavaScript, Python, Java, and C#.
+ [AWS CLI ](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html)– AWS Command Line Interface (AWS CLI) is a unified tool that provides a consistent command-line interface for interacting with AWS services and resources.

**Code**

The source code for this pattern is available on GitHub, in the [Amazon DevOps Guru CDK Samples](https://github.com/aws-samples/amazon-devopsguru-cdk-samples.git) repository. The AWS CDK code is written in TypeScript. To clone and use the repository, follow the instructions in the next section.

**Important**  
Some of the stories in this pattern include AWS CDK and AWS CLI command examples that are formatted for Unix, Linux, and macOS. For Windows, replace the backslash (\) continuation character at the end of each line with a caret (^).

## Epics
<a name="improve-operational-performance-by-enabling-amazon-devops-guru-across-multiple-aws-regions-accounts-and-ous-with-the-aws-cdk-epics"></a>

### Prepare the AWS resources for deployment
<a name="prepare-the-aws-resources-for-deployment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Configure AWS named profiles. | Set up your AWS named profiles as follows to deploy stacks in a multi-account environment. For the administrator account:<pre>$aws configure --profile administrator<br />AWS Access Key ID [****]: <your-administrator-access-key-ID><br />AWS Secret Access Key [****]: <your-administrator-secret-access-key><br />Default region name [None]: <your-administrator-region><br />Default output format [None]: json</pre>For the target account:<pre>$aws configure --profile target<br />AWS Access Key ID [****]: <your-target-access-key-ID><br />AWS Secret Access Key [****]: <your-target-secret-access-key><br />Default region name [None]: <your-target-region><br />Default output format [None]: json</pre>For more information, see [Using named profiles](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html#cli-configure-files-using-profiles) in the AWS CLI documentation. | DevOps engineer | 
| Verify AWS profile configurations. | (Optional) You can verify your AWS profile configurations in the `credentials` and `config` files by following the instructions in [Set and view configuration settings](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html#cli-configure-files-methods) in the AWS CLI documentation. | DevOps engineer | 
| Verify the AWS CDK version. | Verify the version of the AWS CDK Toolkit by running the following command:<pre>$cdk --version</pre>This pattern requires version 1.107.0 or later. If you have an earlier version of the AWS CDK, follow the instructions in the [AWS CDK documentation](https://docs.aws.amazon.com/cdk/latest/guide/cli.html) to update it. | DevOps engineer | 
| Clone the project code. | Clone the GitHub repository for this pattern by using the command:<pre>$git clone https://github.com/aws-samples/amazon-devopsguru-cdk-samples.git</pre> | DevOps engineer | 
| Install package dependencies and compile the TypeScript files. | Install the package dependencies and compile the TypeScript files by running the following commands:<pre>$cd amazon-devopsguru-cdk-samples<br />$npm install<br />$npm fund</pre>These commands install all the packages from the sample repository.If you get any errors about missing packages, use one of the following commands:<pre>$npm ci</pre>—or—<pre>$npm install -g @aws-cdk/<package-name></pre>You can find the list of package names and versions in the `Dependencies` section of the `/amazon-devopsguru-cdk-samples/package.json` file. For more information, see [npm ci](https://docs.npmjs.com/cli/v7/commands/npm-ci) and [npm install](https://docs.npmjs.com/cli/v7/commands/npm-install) in the npm documentation. | DevOps engineer | 

### Build (synthesize) the AWS CDK stacks
<a name="build-synthesize-the-aws-cdk-stacks"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Configure an email address for Amazon SNS notifications. | Follow these steps to provide an email address for Amazon SNS notifications:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/improve-operational-performance-by-enabling-amazon-devops-guru-across-multiple-aws-regions-accounts-and-ous-with-the-aws-cdk.html) | DevOps engineer | 
| Build the project code. | Build the project code and synthesize the stacks by running the command:<pre>npm run build && cdk synth </pre>You should see output similar to the following: <pre>$npm run build && cdk synth<br />> cdk-devopsguru@0.1.0 build<br />> tsc<br />Successfully synthesized to ~/amazon-devopsguru-cdk-samples/cdk.out<br />Supply a stack id (CdkDevopsGuruStackMultiAccReg,CdkDevopsGuruStackMultiAccRegSpecStacks, CdkDevopsguruStackOrgUnit, CdkInfrastructureStack, CdkStackSetAdminRole, CdkStackSetExecRole) to display its template.</pre>For more information and steps, see [Your first AWS CDK app](https://docs.aws.amazon.com/cdk/latest/guide/hello_world.html) in the AWS CDK documentation. | DevOps engineer | 
| List the AWS CDK stacks. | Run the following command to list all AWS CDK stacks:<pre>$cdk list</pre>The command displays the following list:<pre>CdkDevopsGuruStackMultiAccReg<br />CdkDevopsGuruStackMultiAccRegSpecStacks<br />CdkDevopsguruStackOrgUnit<br />CdkInfrastructureStack<br />CdkStackSetAdminRole<br />CdkStackSetExecRole</pre> | DevOps engineer | 

### Option 1 - Enable DevOps Guru for all stack resources across multiple accounts
<a name="option-1---enable-devops-guru-for-all-stack-resources-across-multiple-accounts"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Deploy the AWS CDK stacks for creating IAM roles. | This pattern uses [AWS CloudFormation StackSets](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/what-is-cfnstacksets.html) to perform stack operations across multiple accounts. If you are creating your first stack set, you must create the following IAM roles to get the required permissions set up in your AWS accounts:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/improve-operational-performance-by-enabling-amazon-devops-guru-across-multiple-aws-regions-accounts-and-ous-with-the-aws-cdk.html)The roles must have these exact names.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/improve-operational-performance-by-enabling-amazon-devops-guru-across-multiple-aws-regions-accounts-and-ous-with-the-aws-cdk.html)For more information, see [Grant self-managed permissions](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-prereqs-self-managed.html) in the AWS CloudFormation documentation. | DevOps engineer | 
| Deploy the AWS CDK stack for enabling DevOps Guru across multiple accounts. | The AWS CDK `CdkDevopsGuruStackMultiAccReg` stack creates stack sets to deploy stack instances across multiple accounts and Regions. To deploy the stack, run the following CLI command with the specified parameters:<pre>$cdk deploy CdkDevopsGuruStackMultiAccReg \<br />  --profile administrator \<br />  --parameters AdministratorAccountId=<administrator-account-ID> \<br />  --parameters TargetAccountId=<target-account-ID> \<br />  --parameters RegionIds="<region-1>,<region-2>"</pre>Currently Amazon DevOps Guru is available in the AWS Regions listed in the [DevOps Guru FAQ](https://aws.amazon.com/devops-guru/faqs/). | DevOps engineer | 

### Option 2 - Enable DevOps Guru for all stack resources across OUs
<a name="option-2---enable-devops-guru-for-all-stack-resources-across-ous"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Extract OU IDs. | On the [AWS Organizations](https://console.aws.amazon.com/organizations/v2/home/accounts) console, identify the IDs of the organizational units where you want to enable DevOps Guru. | DevOps engineer | 
| Enable service-managed permissions for OUs. | If you're using AWS Organizations for account management, you must grant service-managed permissions to enable DevOps Guru. Instead of creating the IAM roles manually, use [organization-based trusted access and service-linked roles (SLRs)](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-orgs-enable-trusted-access.html). | DevOps engineer | 
| Deploy the AWS CDK stack for enabling DevOps Guru across OUs. | The AWS CDK `CdkDevopsguruStackOrgUnit` stack enables the DevOps Guru service across OUs. To deploy the stack, run the following command with the specified parameters:<pre>$cdk deploy CdkDevopsguruStackOrgUnit \<br />  --profile administrator \<br />  --parameters RegionIds="<region-1>,<region-2>" \<br />  --parameters OrganizationalUnitIds="<OU-1>,<OU-2>"</pre> | DevOps engineer | 

### Option 3 - Enable DevOps Guru for specific stack resources across multiple accounts
<a name="option-3---enable-devops-guru-for-specific-stack-resources-across-multiple-accounts"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Deploy the AWS CDK stacks for creating IAM roles. | If you haven't already created the required IAM roles shown in the first option, do that first:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/improve-operational-performance-by-enabling-amazon-devops-guru-across-multiple-aws-regions-accounts-and-ous-with-the-aws-cdk.html)For more information, see [Grant self-managed permissions](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-prereqs-self-managed.html) in the AWS CloudFormation documentation. | DevOps engineer | 
| Delete existing stacks. | If you already used the first option to enable DevOps Guru for all stack resources, you can delete the old stack by using the following command:<pre>$cdk destroy CdkDevopsGuruStackMultiAccReg --profile administrator </pre>Or, you can change the `RegionIds` parameter when you redeploy the stack to avoid a *Stacks already exist* error. | DevOps engineer | 
| Update the AWS CDK stack with a stack list.  | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/improve-operational-performance-by-enabling-amazon-devops-guru-across-multiple-aws-regions-accounts-and-ous-with-the-aws-cdk.html) | Data engineer | 
| Deploy the AWS CDK stack for enabling DevOps Guru for specific stack resources across multiple accounts. | The AWS CDK `CdkDevopsGuruStackMultiAccRegSpecStacks` stack enables DevOps Guru for specific stack resources across multiple accounts. To deploy the stack, run the following command:<pre>$cdk deploy CdkDevopsGuruStackMultiAccRegSpecStacks \<br />  --profile administrator  \<br />  --parameters AdministratorAccountId=<administrator-account-ID> \<br />  --parameters TargetAccountId=<target-account-ID> \<br />  --parameters RegionIds="<region-1>,<region-2>"</pre>If you previously deployed this stack for option 1, change the `RegionIds` parameter (making sure to choose from [available Regions](https://aws.amazon.com/devops-guru/faqs/)) to avoid a *Stacks already exist* error. | DevOps engineer | 

### Deploy the AWS CDK infrastructure stack
<a name="deploy-the-aws-cdk-infrastructure-stack"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Deploy the sample serverless infrastructure stack. | The AWS CDK `CdkInfrastructureStack` stack deploys serverless components such as API Gateway, Lambda, and a DynamoDB table to demonstrate DevOps Guru insights. To deploy the stack, run the following command: <pre>$cdk deploy CdkInfrastructureStack --profile administrator</pre> | DevOps engineer | 
| Insert sample records in DynamoDB. | Run the following command to populate the DynamoDB table with sample records. Provide the correct path for the `populate-shops-dynamodb-table.json` script.<pre>$aws dynamodb batch-write-item \<br />  --request-items file://scripts/populate-shops-dynamodb-table.json \<br />  --profile administrator</pre>The command displays the following output:<pre>{<br />    "UnprocessedItems": {}<br />}</pre> | DevOps engineer | 
| Verify inserted records in DynamoDB. | To verify that the DynamoDB table includes the sample records from the `populate-shops-dynamodb-table.json` file, access the URL for the `ListRestApiEndpointMonitorOperator` API, which is published as an output of the AWS CDK stack. You can also find this URL in the **Outputs** tab of the AWS CloudFormation console for the `CdkInfrastructureStack` stack. The AWS CDK output would look similar to the following:<pre>CdkInfrastructureStack.CreateRestApiMonitorOperatorEndpointD1D00045 = https://oure17c5vob.execute-api.<your-region>.amazonaws.com/prod/<br /><br />CdkInfrastructureStack.ListRestApiMonitorOperatorEndpointABBDB8D8 = https://cdff8icfrn4.execute-api.<your-region>.amazonaws.com/prod/</pre> | DevOps engineer | 
| Wait for resources to complete baselining. | This serverless stack has a few resources. We recommend that you wait for 2 hours before you carry out the next steps. If you deployed this stack in a production environment, it might take up to 24 hours to complete baselining, depending on the number of resources you selected to monitor in DevOps Guru. | DevOps engineer | 
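To check the inserted records, you need the URL of the List API from the stack outputs. Because the CDK appends a generated suffix to the output key (such as `ABBDB8D8` in the example above), you can locate the endpoint programmatically. The following is a minimal sketch, assuming the stack outputs have been fetched into a plain dict; `find_list_endpoint` is a hypothetical helper, not part of the pattern's code:

```python
def find_list_endpoint(stack_outputs: dict) -> str:
    """Return the URL of the ListRestApiMonitorOperator endpoint.

    The CDK appends a generated suffix to the logical ID,
    so the output key is matched by substring rather than exact name.
    """
    for key, value in stack_outputs.items():
        if "ListRestApiMonitorOperatorEndpoint" in key:
            return value
    raise KeyError("No ListRestApiMonitorOperatorEndpoint output found")
```

You could then issue a GET request to the returned URL to confirm that the sample records are served.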

### Generate DevOps Guru insights
<a name="generate-devops-guru-insights"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Update the AWS CDK infrastructure stack. | To try out DevOps Guru insights, you can make some configuration changes to reproduce a typical operational issue.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/improve-operational-performance-by-enabling-amazon-devops-guru-across-multiple-aws-regions-accounts-and-ous-with-the-aws-cdk.html) | DevOps engineer | 
| Inject HTTP requests on the API. | Inject ingress traffic in the form of HTTP requests on the `ListRestApiMonitorOperatorEndpointxxxx` API:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/improve-operational-performance-by-enabling-amazon-devops-guru-across-multiple-aws-regions-accounts-and-ous-with-the-aws-cdk.html) | DevOps engineer | 
| Review DevOps Guru insights. | Under standard conditions, the DevOps Guru dashboard displays zero in the ongoing insights counter. If it detects an anomaly, it raises an alert in the form of an insight. In the navigation pane, choose **Insights** to see the details of the anomaly, including an overview, aggregated metrics, relevant events, and recommendations. For more information about reviewing insights, see the [Gaining operational insights with AIOps using Amazon DevOps Guru](https://aws.amazon.com/blogs/devops/gaining-operational-insights-with-aiops-using-amazon-devops-guru/) blog post. | DevOps engineer | 

### Clean up
<a name="clean-up"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clean up and delete resources. | After you walk through this pattern, you should remove the resources you created to avoid incurring any further charges. Run these commands:<pre>$cdk destroy CdkDevopsGuruStackMultiAccReg --profile administrator<br />$cdk destroy CdkDevopsguruStackOrgUnit --profile administrator<br />$cdk destroy CdkDevopsGuruStackMultiAccRegSpecStacks --profile administrator<br />$cdk destroy CdkInfrastructureStack --profile administrator<br />$cdk destroy CdkStackSetAdminRole --profile administrator<br />$cdk destroy CdkStackSetExecRole --profile administrator<br />$cdk destroy CdkStackSetExecRole --profile target</pre> | DevOps engineer | 

## Related resources
<a name="improve-operational-performance-by-enabling-amazon-devops-guru-across-multiple-aws-regions-accounts-and-ous-with-the-aws-cdk-resources"></a>
+ [Gaining operational insights with AIOps using Amazon DevOps Guru](https://aws.amazon.com/blogs/devops/gaining-operational-insights-with-aiops-using-amazon-devops-guru/)
+ [Easily configure Amazon DevOps Guru across multiple accounts and Regions using AWS CloudFormation StackSets](https://aws.amazon.com/blogs/devops/configure-devops-guru-multiple-accounts-regions-using-cfn-stacksets/)
+ [DevOps Guru Workshop](https://aiops-using-devops-guru.workshop.aws/)

# Govern permission sets for multiple accounts by using Account Factory for Terraform
<a name="govern-permission-sets-aft"></a>

*Anand Krishna Varanasi and Siamak Heshmati, Amazon Web Services*

## Summary
<a name="govern-permission-sets-aft-summary"></a>

This pattern helps you integrate [AWS Control Tower Account Factory for Terraform (AFT)](https://docs.aws.amazon.com/controltower/latest/userguide/aft-overview.html) with [AWS IAM Identity Center](https://docs.aws.amazon.com/singlesignon/latest/userguide/what-is.html) to configure permissions for multiple AWS accounts at scale. This approach uses custom AWS Lambda functions to automate [permission set](https://docs.aws.amazon.com/singlesignon/latest/userguide/permissionsetsconcept.html) assignments to AWS accounts that are managed as an organization. This streamlines the process because it doesn’t require manual intervention from your platform engineering team. This solution can enhance operational efficiency, security, and consistency. It promotes a secure and standardized onboarding process for AWS Control Tower, making it indispensable for enterprises that prioritize agility and reliability for their cloud infrastructure.

## Prerequisites and limitations
<a name="govern-permission-sets-aft-prereqs"></a>

**Prerequisites**
+ AWS accounts, managed through AWS Control Tower. For more information, see [Getting started with AWS Control Tower](https://docs.aws.amazon.com/controltower/latest/userguide/getting-started-with-control-tower.html).
+ Account Factory for Terraform, deployed in a dedicated account in your environment. For more information, see [Deploy AWS Control Tower Account Factory for Terraform](https://docs.aws.amazon.com/controltower/latest/userguide/aft-getting-started.html).
+ An IAM Identity Center instance, set up in your environment. For more information, see [Getting started with IAM Identity Center](https://docs.aws.amazon.com/singlesignon/latest/userguide/getting-started.html).
+ An active IAM Identity Center [group](https://docs.aws.amazon.com/singlesignon/latest/userguide/users-groups-provisioning.html#groups-concept), configured. For more information, see [Add groups to your IAM Identity Center directory](https://docs.aws.amazon.com/singlesignon/latest/userguide/addgroups.html).
+ Python version 3.9 or later, installed

**Limitations**
+ This solution can be used only with accounts that are managed through AWS Control Tower, and it is deployed by using Account Factory for Terraform.
+ This pattern does not include instructions for setting up identity federation with an identity source. For more information about how to complete this set up, see [IAM Identity Center identity source tutorials](https://docs.aws.amazon.com/singlesignon/latest/userguide/tutorials.html) in the IAM Identity Center documentation.

## Architecture
<a name="govern-permission-sets-aft-architecture"></a>

**AFT overview**

AFT sets up a Terraform pipeline that helps you provision and customize your accounts in AWS Control Tower. AFT follows a GitOps model that automates the processes of account provisioning in AWS Control Tower. You create an *account request Terraform file* and commit it to a repository. This initiates the AFT workflow for account provisioning. After account provisioning is complete, AFT can automatically run additional customization steps. For more information, see [AFT architecture](https://docs.aws.amazon.com/controltower/latest/userguide/aft-architecture.html) in the AWS Control Tower documentation.

AFT provides the following main repositories:
+ `aft-account-request` – This repository contains Terraform code to create or update AWS accounts.
+ `aft-account-customizations` – This repository contains Terraform code to create or customize resources on a per-account basis.
+ `aft-global-customizations` – This repository contains Terraform code to create or customize resources for all accounts, at scale.
+ `aft-account-provisioning-customizations` – This repository manages customizations that are applied only to specific accounts created by and managed with AFT. For example, you might use this repository to customize user or group assignments in IAM Identity Center or to automate account closures.

**Solution overview**

This custom solution includes an AWS Step Functions state machine and an AWS Lambda function that assign permission sets to users and groups for multiple accounts. The state machine deployed through this pattern operates in conjunction with the pre-existing AFT `aft_account_provisioning_customizations` state machine. A user submits a request to update IAM Identity Center user and group assignments either when a new AWS account is created or after the account is created. They do this by pushing a change to the `aft-account-request` repository. The request to create or update an account initiates a stream in [Amazon DynamoDB Streams](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.html). This starts the Lambda function, which updates IAM Identity Center users and groups for the target AWS accounts.

The following is an example of the parameters you can provide in the Lambda function for permission set assignments to target users and groups:

```
custom_fields = {
    "InstanceArn"         = "<Organization ID>",
    "PermissionSetArn"    = "<Permission set ARN>",
    "PrincipalId"         = "<Principal ID>",
  }
```

The following are the parameters in this statement:
+ `InstanceArn` – The Amazon Resource Name (ARN) of the organization
+ `PermissionSetArn` – The ARN of the permission set
+ `PrincipalId` – The identifier of a user or group in IAM Identity Center to which the permission set will be applied

**Note**  
You must create the target permission set, users, and groups before running this solution.

While the `InstanceArn` value must remain consistent, you can modify the Lambda function to assign multiple permission sets to multiple target identities. The parameters for permission sets must end in `PermissionSetArn`, and the parameters for users and groups must end in `PrincipalId`. You must define both attributes. The following is an example of how to define multiple permission sets and target users and groups:

```
custom_fields = {
    "InstanceArn"                    = "<Organization ID>",
    "AdminAccessPermissionSetArn"    = "<Admin privileges permission set ARN>",
    "AdminAccessPrincipalId"         = "<Admin principal ID>",
    "ReadOnlyAccessPermissionSetArn" = "<Read-only privileges permission set ARN>",
    "ReadOnlyAccessPrincipalId"      = "<Read-only principal ID>",
  }
```
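The suffix convention can be expressed as a small pairing helper. The following is an illustrative sketch, assuming `custom_fields` arrives as a plain Python dict; `pair_assignments` is a hypothetical name, not part of the pattern's code:

```python
def pair_assignments(custom_fields: dict) -> list:
    """Pair each *PermissionSetArn parameter with its matching *PrincipalId.

    Keys are matched by shared prefix, per the naming convention above:
    "AdminAccessPermissionSetArn" pairs with "AdminAccessPrincipalId",
    and the bare "PermissionSetArn" pairs with the bare "PrincipalId".
    """
    suffix = "PermissionSetArn"
    pairs = []
    for key, permission_set_arn in custom_fields.items():
        if not key.endswith(suffix):
            continue
        principal_key = key[: -len(suffix)] + "PrincipalId"
        if principal_key in custom_fields:
            pairs.append((permission_set_arn, custom_fields[principal_key]))
    return pairs
```

A mismatched pair (a `PermissionSetArn` entry with no corresponding `PrincipalId`) is silently skipped here; a production implementation would probably raise an error instead, because the pattern requires both attributes to be defined.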

The following diagram shows a step-by-step workflow of how the solution updates permission sets for users and groups in the target AWS accounts at scale. When the user initiates an account creation request, AFT initiates the `aft-account-provisioning-framework` Step Functions state machine. This state machine starts the `extract-alternate-sso` Lambda function. The Lambda function assigns permission sets to users and groups in the target AWS accounts. These users or groups can be from any configured identity source in IAM Identity Center. Examples of identity sources include Okta, Active Directory, or Ping Identity.

![\[Workflow of updating permission sets when an account is created or updated.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/14751255-3781-48db-a6b7-1a03e28c1020/images/d1de252d-8ac9-4f7d-a559-4ab3e852f325.png)


The diagram shows the following workflow when new accounts are created:

1. A user pushes a `custom_fields` change to the `aft-account-request` repository.

1. AWS CodePipeline starts an AWS CodeBuild job that records the user-defined metadata in the `aft-request-audit` Amazon DynamoDB table. The table's `ddb_event_name` attribute defines the type of AFT operation:
   + If the value is `INSERT`, then the solution assigns the permission set to the target identities when the new AWS account is created.
   + If the value is `UPDATE`, then the solution assigns the permission set to the target identities after the AWS account is created.

1. Amazon DynamoDB Streams initiates the `aft_alternate_sso_extract` Lambda function.

1. The `aft_alternate_sso_extract` Lambda function assumes an AWS Identity and Access Management (IAM) role in the AWS Control Tower management account.

1. The Lambda function assigns the permission sets to the target users and groups by making an AWS SDK for Python (Boto3) [`create_account_assignment`](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sso-admin/client/create_account_assignment.html) API call to IAM Identity Center. It retrieves the permission set and identity assignments from the `aft-request-audit` Amazon DynamoDB table.

1. When the Step Functions workflow completes, the permission sets are assigned to the target identities.
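The stream-driven assignment in steps 3 through 6 can be sketched as a minimal Lambda handler. This is an illustrative sketch, not the pattern's actual `aft_alternate_sso_extract` code: the record shape, principal type, and account ID lookup are simplified assumptions, while `create_account_assignment` is the SSO Admin API call the pattern uses.

```python
from typing import Optional


def assignment_timing(ddb_event_name: str) -> Optional[str]:
    # Per the pattern: INSERT assigns at account creation, UPDATE after it.
    if ddb_event_name == "INSERT":
        return "at-account-creation"
    if ddb_event_name == "UPDATE":
        return "after-account-creation"
    return None


def assign_permission_set(client, instance_arn, account_id,
                          permission_set_arn, principal_id,
                          principal_type="GROUP"):
    # create_account_assignment is the SSO Admin API the pattern calls.
    return client.create_account_assignment(
        InstanceArn=instance_arn,
        TargetId=account_id,
        TargetType="AWS_ACCOUNT",
        PermissionSetArn=permission_set_arn,
        PrincipalType=principal_type,
        PrincipalId=principal_id,
    )


def handler(event, context):
    import boto3  # deferred so the pure helpers above are usable without the SDK
    client = boto3.client("sso-admin")
    for record in event.get("Records", []):
        if assignment_timing(record.get("ddb_event_name", "")) is None:
            continue
        # Simplified: real stream records require DynamoDB attribute unmarshalling.
        fields = record.get("custom_fields", {})
        assign_permission_set(
            client,
            fields["InstanceArn"],
            record["account_id"],
            fields["PermissionSetArn"],
            fields["PrincipalId"],
        )
```

In the real solution, the function first assumes the cross-account IAM role in the AWS Control Tower management account (step 4) before creating the SSO Admin client.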

**Automation and scale**

AFT operates at scale by using AWS services such as CodePipeline, AWS CodeBuild, DynamoDB, and Lambda, which are highly scalable. For additional automation, you can integrate this solution with a ticket or issue management system, such as Jira. For more information, see the [Additional information](#govern-permission-sets-aft-additional) section of this pattern.

## Tools
<a name="govern-permission-sets-aft-tools"></a>

**AWS services**
+ [Account Factory for Terraform (AFT)](https://docs.aws.amazon.com/controltower/latest/userguide/aft-overview.html) is the main tool in this solution. The `aft-account-provisioning-customizations` repository contains the Terraform code for creating customizations for AWS accounts, such as custom IAM Identity Center user or group assignments.
+ [Amazon DynamoDB](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Introduction.html) is a fully managed NoSQL database service that provides fast, predictable, and scalable performance.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a compute service that helps you run code without needing to provision or manage servers. It runs your code only when needed and scales automatically, so you pay only for the compute time that you use.
+ [AWS Step Functions](https://docs.aws.amazon.com/step-functions/latest/dg/welcome.html) is a serverless orchestration service that helps you combine AWS Lambda functions and other AWS services to build business-critical applications.

**Other tools**
+ [Python](https://www.python.org/) is a general-purpose computer programming language.
+ [Terraform](https://www.terraform.io/) is an infrastructure as code (IaC) tool from HashiCorp that helps you create and manage cloud and on-premises resources.

**Code repository**

The code repository for AFT is available in the GitHub [AWS Control Tower Account Factory for Terraform](https://github.com/aws-ia/terraform-aws-control_tower_account_factory) repository. The code for this pattern is available in the [Govern SSO Assignments for AWS accounts using Account Factory for Terraform (AFT)](https://github.com/aws-samples/aft-custom-sso-assignment) repository.

## Best practices
<a name="govern-permission-sets-aft-best-practices"></a>
+ Understand the [AWS shared responsibility model](https://aws.amazon.com/compliance/shared-responsibility-model/).
+ Follow the security recommendations for AWS Control Tower. For more information, see [Security in AWS Control Tower](https://docs.aws.amazon.com/controltower/latest/userguide/security.html).
+ Follow the principle of least privilege. For more information, see [Apply least-privilege permissions](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#grant-least-privilege).
+ Build specific and focused permission sets and IAM roles for groups and business units.

## Epics
<a name="govern-permission-sets-aft-epics"></a>

### Deploy the solution
<a name="deploy-the-solution"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an IAM role. | In the AWS Control Tower management account, use Terraform to create an IAM role. This role has cross-account access and a trust policy that allows federated access from the identity provider. It also has permissions to grant access to other accounts through AWS Control Tower. The Lambda function will assume this role. Do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/govern-permission-sets-aft.html) | AWS DevOps, Cloud architect | 
| Customize the solution for your environment. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/govern-permission-sets-aft.html) | AWS DevOps, Cloud architect | 
| Deploy the solution. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/govern-permission-sets-aft.html) | AWS DevOps, Cloud architect | 
| Set up a code repository connection. | Set up a connection between the code repository where you will store the configuration files and your AWS account. For instructions, see [Add third-party source providers to pipelines using CodeConnections](https://docs.aws.amazon.com/codepipeline/latest/userguide/pipelines-connections.html) in the AWS CodePipeline documentation. | AWS DevOps, Cloud architect | 

### Use the solution
<a name="use-the-solution"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Start AFT pipeline to deploy a new account. | Follow the instructions in [Provision a new account with AFT](https://docs.aws.amazon.com/controltower/latest/userguide/aft-provision-account.html) in order to start the pipeline that creates a new AWS account in your AWS Control Tower environment. Wait for the account creation process to complete. | AWS DevOps, Cloud architect | 
| Validate the changes. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/govern-permission-sets-aft.html) | AWS DevOps, Cloud architect | 

## Troubleshooting
<a name="govern-permission-sets-aft-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| Permission set assignment is not working. | Make sure the group ARN, organization ID, and Lambda parameters are correct. For examples, see the *Solution overview* section of this pattern. | 
| Updating code in the repository does not start the pipeline. | This issue is related to the connectivity between your AWS account and the repository. In the AWS Management Console, validate that the connection is active. For more information, see [GitHub connections](https://docs.aws.amazon.com/codepipeline/latest/userguide/connections-github.html) in the AWS CodePipeline documentation. | 
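A quick way to catch the malformed-parameter case above is a prefix sanity check before deploying. The following is a sketch with hypothetical helper names; it checks only the fixed ARN prefixes (and assumes `InstanceArn` holds an IAM Identity Center instance ARN), not full ARN validity:

```python
def looks_like_instance_arn(arn: str) -> bool:
    # IAM Identity Center instance ARNs start with this fixed prefix.
    return arn.startswith("arn:aws:sso:::instance/")


def looks_like_permission_set_arn(arn: str) -> bool:
    # Permission set ARNs start with this fixed prefix.
    return arn.startswith("arn:aws:sso:::permissionSet/")


def validate_custom_fields(fields: dict) -> list:
    """Return a list of human-readable problems; an empty list means the shape looks OK."""
    problems = []
    if not looks_like_instance_arn(fields.get("InstanceArn", "")):
        problems.append("InstanceArn does not look like an Identity Center instance ARN")
    for key, value in fields.items():
        if key.endswith("PermissionSetArn") and not looks_like_permission_set_arn(value):
            problems.append(f"{key} does not look like a permission set ARN")
    return problems
```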

## Additional information
<a name="govern-permission-sets-aft-additional"></a>

**Integrating with a ticket management tool**

You can choose to integrate this solution with a ticket or issue management tool, such as Jira or ServiceNow. The following diagram shows an example workflow for this option. You can integrate the ticket management tool with the AFT solution repositories by using your tool’s connectors. For Jira connectors, see [Integrate Jira with GitHub](https://support.atlassian.com/jira-cloud-administration/docs/integrate-jira-software-with-github/). For ServiceNow connectors, see [Integrating with GitHub](https://www.servicenow.com/docs/bundle/washingtondc-it-asset-management/page/product/software-asset-management2/concept/integrate-with-github.html). You can even build custom solutions that require users to provide a ticket ID as part of the pull request approval. If a request to create a new AWS account by using AFT is approved, that event could initiate a workflow that adds custom fields to the `aft-account-request` GitHub repository. You can design any custom workflow that meets the requirements of your use case.

![\[Workflow that uses GitHub Actions and a ticket management tool.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/14751255-3781-48db-a6b7-1a03e28c1020/images/83763f65-32ea-4de0-932f-14a1b2d1d3ad.png)


The diagram shows the following workflow:

1. Users request a custom permission set assignment in a ticket management tool, such as Jira.

1. After the case is approved, a workflow begins to update the permission set assignment. (Optional) You can use plugins for custom automation of this step.

1. Operators push the Terraform code with the updated permission set parameters to a development or feature branch in the `aft-account-request` repository.

1. GitHub Actions initiates AWS CodeBuild by using an OpenID Connect (OIDC) call. CodeBuild performs infrastructure as code (IaC) security scans by using tools such as [tfsec](https://aquasecurity.github.io/tfsec/v1.20.0/) and [checkov](https://www.checkov.io/). It warns the operators of any security violations.

1. If no violations are found, GitHub Actions creates an automated pull request and assigns a code review to the code owners. It also creates a tag for the pull request.

1. If the code owner approves the code review, another GitHub Actions workflow starts. It checks pull request standards, including:
   + Whether the pull request title meets requirements.
   + Whether the pull request body contains approved case numbers.
   + Whether the pull request is properly tagged.

1. If the pull request meets standards, GitHub Actions starts the AFT product workflow by starting the `ct-aft-account-request` pipeline in AWS CodePipeline. This pipeline starts the `aft-account-provisioning-framework` custom state machine in Step Functions. This state machine works as previously described in the *Solution overview* section of this pattern.

# Implement Account Factory for Terraform (AFT) by using a bootstrap pipeline
<a name="implement-account-factory-for-terraform-aft-by-using-a-bootstrap-pipeline"></a>

*Vinicius Elias and Edgar Costa Filho, Amazon Web Services*

## Summary
<a name="implement-account-factory-for-terraform-aft-by-using-a-bootstrap-pipeline-summary"></a>

This pattern provides a simple and secure method for deploying AWS Control Tower Account Factory for Terraform (AFT) from the management account of AWS Organizations. The core of the solution is a CloudFormation template that automates the AFT configuration by creating a Terraform pipeline, which is structured to be easily adaptable for initial deployment or subsequent updates.

Security and data integrity are top priorities at AWS, so the Terraform state file, which is a critical component that tracks the state of the managed infrastructure and configurations, is securely stored in an Amazon Simple Storage Service (Amazon S3) bucket. This bucket is configured with several security measures, including server-side encryption and policies to block public access, to help ensure that your Terraform state is safeguarded against unauthorized access and data breaches.

The management account orchestrates and oversees the entire environment, so it is a critical resource in AWS Control Tower. This pattern follows AWS best practices and ensures that the deployment process is not only efficient but also aligns with security and governance standards, to offer a comprehensive, secure, and efficient way to deploy AFT in your AWS environment.

For more information about AFT, see the [AWS Control Tower documentation](https://docs.aws.amazon.com/controltower/latest/userguide/aft-overview.html).

## Prerequisites and limitations
<a name="implement-account-factory-for-terraform-aft-by-using-a-bootstrap-pipeline-prereqs"></a>

**Prerequisites**
+ A basic AWS multi-account environment with the following accounts at the minimum: management account, Log Archive account, Audit account, and one additional account for AFT management.
+ An established AWS Control Tower environment. The management account should be properly configured, because the CloudFormation template will be deployed within it.
+ The necessary permissions in the AWS management account. You'll need sufficient permissions to create and manage resources such as S3 buckets, AWS Lambda functions, AWS Identity and Access Management (IAM) roles, and AWS CodePipeline projects.
+ Familiarity with Terraform. Understanding Terraform's core concepts and workflow is important because the deployment involves generating and managing Terraform configurations.

**Limitations**
+ Be aware of the [AWS resource quotas](https://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html) in your account. The deployment might create multiple resources, and encountering service quotas could impede the deployment process.
+ The template is designed for specific versions of Terraform and AWS services. Upgrading or changing versions might require template modifications.
+ The template doesn't support self-managed version control system (VCS) services such as GitHub Enterprise.

**Product versions**
+ Terraform version 1.6.6 or later
+ AFT version 1.11 or later

## Architecture
<a name="implement-account-factory-for-terraform-aft-by-using-a-bootstrap-pipeline-architecture"></a>

**Target technology stack**
+ CloudFormation
+ AWS CodeBuild
+ AWS CodeCommit
+ AWS CodePipeline
+ Amazon EventBridge
+ IAM
+ AWS Lambda
+ Amazon S3

**Target architecture**

The following diagram illustrates the implementation discussed in this pattern.

![\[Workflow for implementing AFT by using a bootstrap pipeline.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/944f9912-87c7-4cc5-8478-7070cf67f7ee/images/4ee74757-940d-4d92-a7f0-fb0db1476247.png)


The workflow consists of three main tasks: creating the resources, generating the content, and running the pipeline.

*Creating the resources*

The [CloudFormation template that's provided with this pattern](https://github.com/aws-samples/aft-bootstrap-pipeline/blob/main/code/aft-deployment-pipeline.yaml) creates and sets up all required resources, depending on the parameters you select when you deploy the template. At the minimum, the template creates the following resources:
+ A CodePipeline pipeline to deploy AFT
+ An S3 bucket to store the Terraform state file that's associated with the AFT implementation
+ Two CodeBuild projects to run the Terraform `plan` and `apply` commands in different stages of the pipeline
+ IAM roles for CodeBuild and CodePipeline services
+ A second S3 bucket to store pipeline runtime artifacts

Depending on the VCS provider you select (CodeCommit or an external VCS), the template creates the following resources:
+ For **CodeCommit**:
  + A CodeCommit repository to store the AFT Terraform bootstrap code
  + An EventBridge rule to capture CodeCommit repository changes on the `main` branch
  + Another IAM role for the EventBridge rule
+ For any other **external VCS provider**, such as GitHub:
  + An AWS CodeConnections connection

Additionally, when you select CodeCommit as the VCS provider, if you set the `Generate AFT Files` parameter to `true`, the template creates these additional resources to generate the content:
+ An S3 bucket to store the generated content and to be used as the source of the CodeCommit repository
+ A Lambda function to process the given parameters and generate the appropriate content
+ An IAM role for running the Lambda function
+ A CloudFormation custom resource that runs the Lambda function when the template is deployed

*Generating the content*

To generate the AFT bootstrap files and their content, the solution uses a Lambda function and an S3 bucket. The function creates a folder in the bucket, and then creates two files inside the folder: `main.tf` and `backend.tf`. The function also processes the provided CloudFormation parameters and populates these files with predefined code, replacing the respective parameter values.

To view the code that's used as a template to generate the files, see the solution's [GitHub repository](https://github.com/aws-samples/aft-bootstrap-pipeline). The files are generated as follows.

**main.tf**

```
module "aft" {
  source = "github.com/aws-ia/terraform-aws-control_tower_account_factory?ref=<aft_version>"

  # Required variables
  ct_management_account_id  = "<ct_management_account_id>"
  log_archive_account_id    = "<log_archive_account_id>"
  audit_account_id          = "<audit_account_id>"
  aft_management_account_id = "<aft_management_account_id>"
  ct_home_region            = "<ct_home_region>"

  # Optional variables
  tf_backend_secondary_region = "<tf_backend_secondary_region>"
  aft_metrics_reporting       = "<false|true>"

  # AFT Feature flags
  aft_feature_cloudtrail_data_events      = "<false|true>"
  aft_feature_enterprise_support          = "<false|true>"
  aft_feature_delete_default_vpcs_enabled = "<false|true>"

  # Terraform variables
  terraform_version      = "<terraform_version>"
  terraform_distribution = "<terraform_distribution>"

  # VCS variables (if you have chosen an external VCS)
  vcs_provider                                  = "<github|githubenterprise|gitlab|gitlabselfmanaged|bitbucket>"
  account_request_repo_name                     = "<org-name>/aft-account-request"
  account_customizations_repo_name              = "<org-name>/aft-account-customizations"
  account_provisioning_customizations_repo_name = "<org-name>/aft-account-provisioning-customizations"
  global_customizations_repo_name               = "<org-name>/aft-global-customizations"

}
```

**backend.tf**

```
terraform {
  backend "s3" {
    region = "<aft-main-region>"
    bucket = "<s3-bucket-name>"
    key    = "aft-setup.tfstate"
  }
}
```
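The parameter substitution that the Lambda function performs to produce these files can be sketched as follows. This is an illustrative sketch, not the exact code from the repository; the function and placeholder names are assumptions.

```python
# Illustrative sketch of the file-generation step: substitute CloudFormation
# parameter values into a Terraform template, as the Lambda function does for
# main.tf and backend.tf. Placeholder names are hypothetical; see the
# solution's GitHub repository for the actual template code.
from string import Template

BACKEND_TEMPLATE = Template("""terraform {
  backend "s3" {
    region = "$aft_main_region"
    bucket = "$s3_bucket_name"
    key    = "aft-setup.tfstate"
  }
}
""")

def render_backend(aft_main_region, s3_bucket_name):
    """Return backend.tf content with the parameter values filled in."""
    return BACKEND_TEMPLATE.substitute(
        aft_main_region=aft_main_region,
        s3_bucket_name=s3_bucket_name,
    )

print(render_backend("us-east-1", "my-aft-state-bucket"))
```

The generated content is then written to the S3 bucket that serves as the source of the CodeCommit repository.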

During the CodeCommit repository creation, if you set the `Generate AFT Files` parameter to `true`, the template uses the S3 bucket with the generated content as the source of the `main` branch to automatically populate the repository.

*Running the pipeline*

After the resources have been created and the bootstrap files have been configured, the pipeline runs. The first stage (*Source*) fetches the source code from the `main` branch of the repository, and the second stage (*Build*) runs the Terraform `plan` command and generates the results to be reviewed. In the third stage (*Approval*), the pipeline waits for a manual action to approve or reject the last stage (*Deploy*). In the last stage, the pipeline runs the Terraform `apply` command by using the result of the previous Terraform `plan` command as input. Finally, a cross-account role and the permissions in the management account are used to create the AFT resources in the AFT management account.

**Note**  
If you select an external VCS provider, you will need to authorize the connection with your VCS provider credentials. To complete the setup, follow the steps in [Update a pending connection](https://docs.aws.amazon.com/dtconsole/latest/userguide/connections-update.html) in the AWS Developer Tools console documentation.

## Tools
<a name="implement-account-factory-for-terraform-aft-by-using-a-bootstrap-pipeline-tools"></a>

**AWS services**
+ [CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) helps you set up AWS resources, provision them quickly and consistently, and manage them throughout their lifecycle across AWS accounts and Regions.
+ [AWS CodeBuild](https://docs.aws.amazon.com/codebuild/latest/userguide/welcome.html) is a fully managed build service that helps you compile source code, run unit tests, and produce artifacts that are ready to deploy. 
+ [AWS CodeCommit](https://docs.aws.amazon.com/codecommit/latest/userguide/welcome.html) is a version control service that helps you privately store and manage Git repositories without needing to manage your own source control system.
+ [AWS CodePipeline](https://docs.aws.amazon.com/codepipeline/latest/userguide/welcome.html) helps you quickly model and configure the different stages of a software release and automate the steps required to release software changes continuously.
+ [AWS CodeConnections](https://docs.aws.amazon.com/dtconsole/latest/userguide/welcome-connections.html) enables AWS resources and services, such as CodePipeline, to connect to external code repositories, such as GitHub.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a compute service that runs your code in response to events and automatically manages compute resources, providing a fast way to create a modern, serverless application for production.
+ [AWS SDK for Python (Boto3)](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/quickstart.html) is a software development kit that helps you integrate your Python application, library, or script with AWS services.

**Other tools**
+ [Terraform](https://developer.hashicorp.com/terraform?product_intent=terraform) is an infrastructure as code (IaC) tool that lets you build, change, and version infrastructure safely and efficiently. This includes low-level components such as compute instances, storage, and networking; and high-level components such as DNS entries and SaaS features.
+ [Python](https://docs.python.org/3.9/tutorial/index.html) is an easy to learn, powerful programming language. It has efficient high-level data structures and provides a simple but effective approach to object-oriented programming.

**Code repository**

The code for this pattern is available in the GitHub [AFT bootstrap pipeline repository](https://github.com/aws-samples/aft-bootstrap-pipeline).

For the official AFT repository, see [AWS Control Tower Account Factory for Terraform](https://github.com/aws-ia/terraform-aws-control_tower_account_factory/tree/main) in GitHub.

## Best practices
<a name="implement-account-factory-for-terraform-aft-by-using-a-bootstrap-pipeline-best-practices"></a>

When you deploy AFT by using the provided CloudFormation template, we recommend that you follow best practices to help ensure a secure, efficient, and successful implementation. Key guidelines and recommendations for implementing and operating AFT include the following.
+ **Thorough review of parameters**: Carefully review and understand each parameter in the CloudFormation template. Accurate parameter configuration is crucial for the correct setup and functioning of AFT.
+ **Regular template updates**: Keep the template updated with the latest AWS features and Terraform versions. Regular updates help you take advantage of new functionality and maintain security.
+ **Versioning**: Pin your AFT module version and use a separate AFT deployment for testing if possible.
+ **Scope**: Use AFT only to deploy infrastructure guardrails and customizations. Do not use it to deploy your application.
+ **Linting and validation**: The AFT pipeline requires a linted and validated Terraform configuration. Run lint, validate, and test before pushing the configuration to AFT repositories.
+ **Terraform modules**: Build reusable Terraform code as modules, and always specify the Terraform and AWS provider versions to match your organization's requirements.

## Epics
<a name="implement-account-factory-for-terraform-aft-by-using-a-bootstrap-pipeline-epics"></a>

### Set up and configure the AWS environment
<a name="set-up-and-configure-the-aws-environment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Prepare the AWS Control Tower environment. | Set up and configure AWS Control Tower in your AWS environment to ensure centralized management and governance for your AWS accounts. For more information, see [Getting started with AWS Control Tower](https://docs.aws.amazon.com/controltower/latest/userguide/getting-started-with-control-tower.html) in the AWS Control Tower documentation. | Cloud administrator | 
| Launch the AFT management account. | Use the AWS Control Tower Account Factory to launch a new AWS account to serve as your AFT management account. For more information, see [Provision accounts with AWS Service Catalog Account Factory](https://docs.aws.amazon.com/controltower/latest/userguide/provision-as-end-user.html) in the AWS Control Tower documentation. | Cloud administrator | 

### Deploy the CloudFormation template in the management account
<a name="deploy-the-cfnshort-template-in-the-management-account"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Launch the CloudFormation template. | In this epic, you deploy the CloudFormation template provided with this solution to set up the AFT bootstrap pipeline in your AWS management account. The pipeline deploys the AFT solution in the AFT management account that you set up in the previous epic. The high-level steps are:<br />1. Open the CloudFormation console.<br />2. Create a new stack.<br />3. Configure stack parameters.<br />4. Decide on file generation.<br />5. Fill in AWS Control Tower and AFT account details.<br />6. Configure AFT options.<br />7. Specify versions.<br />8. Review and create the stack.<br />9. Monitor stack creation.<br />10. Verify the deployment.<br />For detailed instructions, see [this pattern on the AWS documentation website](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/implement-account-factory-for-terraform-aft-by-using-a-bootstrap-pipeline.html). | Cloud administrator | 

### Populate and validate the AFT bootstrap repository and pipeline
<a name="populate-and-validate-the-aft-bootstrap-repository-and-pipeline"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Option 1: Populate the AFT bootstrap repository for an external VCS. | If you set the VCS provider to an external VCS (not CodeCommit), follow these steps. (Optional) After you deploy the CloudFormation template, you can populate or validate the content in the newly created AFT bootstrap repository, and test whether the pipeline has run successfully.<br />1. Update the connection.<br />2. Populate the repository.<br />3. Commit and push your changes.<br />For detailed instructions, see [this pattern on the AWS documentation website](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/implement-account-factory-for-terraform-aft-by-using-a-bootstrap-pipeline.html). | Cloud administrator | 
| Option 2: Populate the AFT bootstrap repository for CodeCommit. | If you set the VCS provider to CodeCommit, follow these steps. (Optional) After you deploy the CloudFormation template, you can populate or validate the content in the newly created AFT bootstrap repository, and test whether the pipeline has run successfully. If you set the `Generate AFT Files` parameter to `true`, skip to the next story (validating the pipeline).<br />1. Populate the repository.<br />2. Commit and push your changes.<br />For detailed instructions, see [this pattern on the AWS documentation website](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/implement-account-factory-for-terraform-aft-by-using-a-bootstrap-pipeline.html). | Cloud administrator | 
| Validate the AFT bootstrap pipeline. | 1. View the pipeline.<br />2. Approve the Terraform plan results.<br />3. Wait for the deployment.<br />4. Check the created resources.<br />For detailed instructions, see [this pattern on the AWS documentation website](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/implement-account-factory-for-terraform-aft-by-using-a-bootstrap-pipeline.html). | Cloud administrator | 

## Troubleshooting
<a name="implement-account-factory-for-terraform-aft-by-using-a-bootstrap-pipeline-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| The custom Lambda function included in the CloudFormation template fails during deployment. | Check the Amazon CloudWatch logs for the Lambda function to identify the error. The logs provide detailed information and can help pinpoint the specific issue. Confirm that the Lambda function has the necessary permissions and that the environment variables have been set correctly. | 
| You encounter failures in resource creation or management caused by inadequate permissions. | Review the IAM roles and policies that are attached to the Lambda function, CodeBuild, and other services involved in the deployment. Confirm that they have the necessary permissions. If there are permission issues, adjust the IAM policies to grant the required access. | 
| You’re using an outdated version of the CloudFormation template with newer AWS services or Terraform versions. | Regularly update the CloudFormation template to be compatible with the latest AWS and Terraform releases. Check the release notes or documentation for any version-specific changes or requirements. | 
| You reach AWS service quotas during deployment. | Before you deploy the pipeline, check AWS service quotas for resources such as S3 buckets, IAM roles, and Lambda functions. Request increases if necessary. For more information, see [AWS service quotas](https://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html) on the AWS website. | 
| You encounter errors due to incorrect input parameters in the CloudFormation template. | Double-check all input parameters for typos or incorrect values. Confirm that resource identifiers, such as account IDs and Region names, are accurate. | 

## Related resources
<a name="implement-account-factory-for-terraform-aft-by-using-a-bootstrap-pipeline-resources"></a>

To implement this pattern successfully, review the following resources. These resources provide additional information and guidance that can be invaluable in setting up and managing AFT by using CloudFormation.

**AWS documentation:**
+ [AWS Control Tower User Guide](https://docs.aws.amazon.com/controltower/latest/userguide/what-is-control-tower.html) offers detailed information on setting up and managing AWS Control Tower.
+ [CloudFormation documentation](https://docs.aws.amazon.com/cloudformation/index.html) provides insights into CloudFormation templates, stacks, and resource management.

**IAM policies and best practices:**
+ [Security best practices in IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html) explains how to help secure AWS resources by using IAM roles and policies.

**Terraform on AWS:**
+ [Terraform AWS Provider documentation](https://registry.terraform.io/providers/hashicorp/aws/latest/docs) provides comprehensive information about using Terraform with AWS.

**AWS service quotas:**
+ [AWS service quotas](https://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html) provides information about how to view AWS service quotas and how to request increases.

# Manage AWS Service Catalog products in multiple AWS accounts and AWS Regions
<a name="manage-aws-service-catalog-products-in-multiple-aws-accounts-and-aws-regions"></a>

*Ram Kandaswamy, Amazon Web Services*

## Summary
<a name="manage-aws-service-catalog-products-in-multiple-aws-accounts-and-aws-regions-summary"></a>

Amazon Web Services (AWS) Service Catalog simplifies and accelerates the governance and distribution of infrastructure as code (IaC) templates for enterprises. You use AWS CloudFormation templates to define a collection of AWS resources (*stacks*) required for a product. AWS CloudFormation StackSets extends this functionality by enabling you to create, update, or delete stacks across multiple accounts and AWS Regions with a single operation.

AWS Service Catalog administrators create products by using CloudFormation templates that are authored by developers, and publish them. These products are then associated with a portfolio, and constraints are applied for governance. To make your products available to users in other AWS accounts or organizational units (OUs), you typically [share your portfolio](https://docs.aws.amazon.com/servicecatalog/latest/adminguide/catalogs_portfolios_sharing.html) with them. This pattern describes an alternative approach for managing AWS Service Catalog product offerings that is based on AWS CloudFormation StackSets. Instead of sharing portfolios, you use stack set constraints to set AWS Regions and accounts where your product can be deployed and used. By using this approach, you can provision your AWS Service Catalog products in multiple accounts, OUs, and AWS Regions, and manage them from a central location, while meeting your governance requirements. 

Benefits of this approach:
+ The product is provisioned and managed from the primary account, and not shared with other accounts.
+ This approach provides a consolidated view of all provisioned products (stacks) that are based on a specific product.
+ Configuration with AWS Service Management Connector is easier, because it targets only one account.
+ It's easier to query and use products from AWS Service Catalog.

## Prerequisites and limitations
<a name="manage-aws-service-catalog-products-in-multiple-aws-accounts-and-aws-regions-prereqs"></a>

**Prerequisites**
+ AWS CloudFormation templates for IaC and versioning
+ Multi-account setup and AWS Service Catalog for provisioning and managing AWS resources

**Limitations**
+ This approach uses AWS CloudFormation StackSets, and the limitations of StackSets apply:
  + StackSets doesn't support CloudFormation template deployment through macros. If you're using a macro to preprocess the template, you won't be able to use a StackSets-based deployment.
  + StackSets provides the ability to disassociate a stack from the stack set, so you can target a specific stack to fix an issue. However, a disassociated stack cannot be re-associated with the stack set.
+ AWS Service Catalog autogenerates StackSet names. Customization isn't currently supported.

## Architecture
<a name="manage-aws-service-catalog-products-in-multiple-aws-accounts-and-aws-regions-architecture"></a>

**Target architecture**

![\[User manages AWS Service Catalog product using AWS CloudFormation template and StackSets.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/16458fcd-861d-4ed4-8b91-47e19289a6bb/images/97d23325-b5c6-4ca9-8288-8dec1650c975.png)


1. The user creates an AWS CloudFormation template to provision AWS resources, in JSON or YAML format.

1. The CloudFormation template creates a product in AWS Service Catalog, which is added to a portfolio.

1. The user creates a provisioned product, which creates CloudFormation stacks in the target accounts.

1. Each stack provisions the resources specified in the CloudFormation templates.

## Tools
<a name="manage-aws-service-catalog-products-in-multiple-aws-accounts-and-aws-regions-tools"></a>

**AWS services**
+ [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) helps you set up AWS resources, provision them quickly and consistently, and manage them throughout their lifecycle across AWS accounts and Regions.
+ [AWS Command Line Interface (AWS CLI)](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) is an open-source tool that helps you interact with AWS services through commands in your command-line shell.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.
+ [AWS Service Catalog](https://docs.aws.amazon.com/servicecatalog/latest/adminguide/introduction.html) helps you centrally manage catalogs of IT services that are approved for AWS. End users can quickly deploy only the approved IT services they need, following the constraints set by your organization.

## Epics
<a name="manage-aws-service-catalog-products-in-multiple-aws-accounts-and-aws-regions-epics"></a>

### Provision products across accounts
<a name="provision-products-across-accounts"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a portfolio. | A portfolio is a container that includes one or more products that are grouped together based on specific criteria. Using a portfolio for your products helps you apply common constraints across your product set. To create a portfolio, follow the instructions in the [AWS Service Catalog documentation](https://docs.aws.amazon.com/servicecatalog/latest/adminguide/portfoliomgmt-create.html). If you're using the AWS CLI, here's an example command:<pre>aws servicecatalog create-portfolio --provider-name my-provider --display-name my-portfolio</pre>For more information, see the [AWS CLI documentation](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/servicecatalog/create-portfolio.html). | AWS Service Catalog, IAM | 
| Create a CloudFormation template. | Create a CloudFormation template that describes the resources. Resource property values should be parameterized where applicable. | AWS CloudFormation, JSON/YAML | 
| Create a product with version information. | The CloudFormation template becomes a product when you publish it in the AWS Service Catalog. Provide values for the optional version detail parameters, such as version title and description; this will be helpful for querying for the product later. To create a product, follow the instructions in the [AWS Service Catalog documentation](https://docs.aws.amazon.com/servicecatalog/latest/adminguide/productmgmt-cloudresource.html). If you're using the AWS CLI, an example command is:<pre>aws servicecatalog create-product --cli-input-json file://create-product-input.json</pre>where `create-product-input.json` is the file that passes the parameters for the product. For an example of this file, see the *Additional information* section. For more information, see the [AWS CLI documentation](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/servicecatalog/create-product.html). | AWS Service Catalog | 
| Apply constraints. | Apply stack set constraints to the portfolio, to configure product deployment options such as multiple AWS accounts, Regions, and permissions. For instructions, see the [AWS Service Catalog documentation](https://docs.aws.amazon.com/servicecatalog/latest/adminguide/constraints-stackset.html). | AWS Service Catalog | 
| Add permissions. | Provide permissions to users so that they can launch the products in the portfolio. For console instructions, see the [AWS Service Catalog documentation](https://docs.aws.amazon.com/servicecatalog/latest/adminguide/catalogs_portfolios_users.html). If you're using the AWS CLI, here's an example command:<pre>aws servicecatalog associate-principal-with-portfolio \<br />    --portfolio-id port-2s6abcdefwdh4 \<br />    --principal-arn arn:aws:iam::444455556666:role/Admin \<br />    --principal-type IAM</pre>For more information, see the [AWS CLI documentation](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/servicecatalog/associate-principal-with-portfolio.html). | AWS Service Catalog, IAM | 
| Provision the product. | A provisioned product is a resourced instance of a product. Provisioning a product based on a CloudFormation template launches a CloudFormation stack and its underlying resources. Provision the product by targeting the applicable AWS Regions and accounts, based on stack set constraints. If you're using the AWS CLI, here's an example command:<pre>aws servicecatalog provision-product \<br />    --product-id prod-abcdfz3syn2rg \<br />    --provisioning-artifact-id pa-abc347pcsccfm \<br />    --provisioned-product-name "mytestppname3"</pre>For more information, see the [AWS CLI documentation](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/servicecatalog/provision-product.html). | AWS Service Catalog | 

## Related resources
<a name="manage-aws-service-catalog-products-in-multiple-aws-accounts-and-aws-regions-resources"></a>

**References**
+ [Overview of AWS Service Catalog](https://docs.aws.amazon.com/servicecatalog/latest/adminguide/what-is_concepts.html)
+ [Using AWS CloudFormation StackSets](https://docs.aws.amazon.com/servicecatalog/latest/adminguide/using-stacksets.html)

**Tutorials and videos**
+ [AWS re:Invent 2019: Automate everything: Options and best practices](https://www.youtube.com/watch?v=bGBVPIpQMYk) (video)

## Additional information
<a name="manage-aws-service-catalog-products-in-multiple-aws-accounts-and-aws-regions-additional"></a>

When you use the `create-product` command, the `cli-input-json` parameter points to a file that specifies information such as product owner, support email, and CloudFormation template details. Here's an example of such a file:

```
{
  "Owner": "Test admin",
  "SupportDescription": "Testing",
  "Name": "SNS",
  "SupportEmail": "example@example.com",
  "ProductType": "CLOUD_FORMATION_TEMPLATE",
  "AcceptLanguage": "en",
  "ProvisioningArtifactParameters": {
    "Description": "SNS product",
    "DisableTemplateValidation": true,
    "Info": {
      "LoadTemplateFromURL": "<url>"
    },
    "Name": "version 1"
  }
}
```
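If you generate this input file as part of automation, the structure can be assembled programmatically. The helper below is an illustrative sketch (the function name and hardcoded values are assumptions); it mirrors only the keys shown in the example above.

```python
# Illustrative sketch: assemble and write the create-product input file used
# with "aws servicecatalog create-product --cli-input-json file://...".
# The helper name and default values are hypothetical.
import json

def build_create_product_input(name, owner, template_url):
    """Return the parameter structure for the create-product command."""
    return {
        "Owner": owner,
        "Name": name,
        "SupportEmail": "example@example.com",
        "ProductType": "CLOUD_FORMATION_TEMPLATE",
        "AcceptLanguage": "en",
        "ProvisioningArtifactParameters": {
            "Description": f"{name} product",
            "DisableTemplateValidation": True,
            "Info": {"LoadTemplateFromURL": template_url},
            "Name": "version 1",
        },
    }

# Write the file that the AWS CLI command references.
with open("create-product-input.json", "w") as f:
    json.dump(
        build_create_product_input("SNS", "Test admin", "https://example.com/template.yaml"),
        f,
        indent=2,
    )
```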

# Monitor SAP RHEL Pacemaker clusters by using AWS services
<a name="monitor-sap-rhel-pacemaker-clusters-by-using-aws-services"></a>

*Harsh Thoria, Randy Germann, and Raveendra Voore, Amazon Web Services*

## Summary
<a name="monitor-sap-rhel-pacemaker-clusters-by-using-aws-services-summary"></a>

This pattern outlines the steps for monitoring and configuring alerts for a Red Hat Enterprise Linux (RHEL) Pacemaker cluster for SAP applications and SAP HANA database services by using Amazon CloudWatch and Amazon Simple Notification Service (Amazon SNS).

The configuration uses CloudWatch log streams, metric filters, and alarms to detect when SAP SCS or ASCS, Enqueue Replication Server (ERS), and SAP HANA cluster resources enter a stopped state. Amazon SNS then sends an email to the infrastructure or SAP Basis team about the stopped cluster status.

You can create the AWS resources for this pattern by using AWS CloudFormation scripts or the AWS service consoles. This pattern assumes that you're using the consoles; it doesn't provide CloudFormation scripts or cover infrastructure deployment for CloudWatch and Amazon SNS. Pacemaker commands are used to set the cluster alerting configuration.

## Prerequisites and limitations
<a name="monitor-sap-rhel-pacemaker-clusters-by-using-aws-services-prereqs"></a>

**Prerequisites**
+ An active AWS account.
+ Amazon SNS set up to send email or mobile notifications.
+ An SAP ASCS/ERS for ABAP or SCS/ERS for Java, and SAP HANA Database RHEL Pacemaker cluster. For instructions, see the following:
  + [SAP HANA cluster setup](https://docs.aws.amazon.com/sap/latest/sap-hana/sap-hana-on-aws-manual-deployment-of-sap-hana-on-aws-with-high-availability-clusters.html)
  + [SAP NetWeaver ABAP/Java cluster setup](https://docs.aws.amazon.com/sap/latest/sap-netweaver/sap-netweaver-ha-configuration-guide.html)

**Limitations**
+ This solution currently works for RHEL version 7.3 and later Pacemaker-based clusters. It hasn’t been tested on SUSE operating systems.

**Product versions**
+ RHEL 7.3 and later

## Architecture
<a name="monitor-sap-rhel-pacemaker-clusters-by-using-aws-services-architecture"></a>

**Target technology stack**
+ RHEL Pacemaker alert event-driven agent
+ Amazon Elastic Compute Cloud (Amazon EC2)
+ CloudWatch alarm
+ CloudWatch log group and metric filter
+ Amazon SNS

**Target architecture**

The following diagram illustrates the components and workflows for this solution.

![\[Architecture for monitoring SAP RHEL Pacemaker clusters\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/ca4d282e-eadd-43fd-8506-3dbeb43e4db6/images/bfc96678-1fd3-47b6-8f09-bf7cf7c4a92c.png)


**Automation and scale**
+ You can automate the creation of AWS resources by using CloudFormation scripts. You can also use additional metric filters to scale and cover multiple clusters.

## Tools
<a name="monitor-sap-rhel-pacemaker-clusters-by-using-aws-services-tools"></a>

**AWS services**
+ [Amazon CloudWatch](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/WhatIsCloudWatch.html) helps you monitor the metrics of your AWS resources and the applications you run on AWS in real time.
+ [Amazon Simple Notification Service (Amazon SNS)](https://docs.aws.amazon.com/sns/latest/dg/welcome.html) helps you coordinate and manage the exchange of messages between publishers and clients, including web servers and email addresses.

**Tools**
+ CloudWatch agent (unified) is a tool that collects system-level metrics, logs, and traces from EC2 instances, and retrieves custom metrics from your applications.
+ Pacemaker alert agent (for RHEL 7.3 and later) is a tool that initiates an action when there's a change, such as when a resource stops or restarts, in a Pacemaker cluster.

## Best practices
<a name="monitor-sap-rhel-pacemaker-clusters-by-using-aws-services-best-practices"></a>
+ For best practices for using SAP workloads on AWS, see the [SAP Lens](https://docs.aws.amazon.com/wellarchitected/latest/sap-lens/sap-lens.html) for the AWS Well-Architected Framework.
+ Consider the costs involved in setting up CloudWatch monitoring for SAP HANA clusters. For more information, see the [CloudWatch documentation](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch_billing.html).
+ Consider using a pager or ticketing mechanism for Amazon SNS alerts.
+ Always check for the RHEL high availability (HA) versions of the RPM packages for **pcs**, Pacemaker, and the AWS fencing agent.

## Epics
<a name="monitor-sap-rhel-pacemaker-clusters-by-using-aws-services-epics"></a>

### Set up Amazon SNS
<a name="set-up-sns"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an SNS topic. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/monitor-sap-rhel-pacemaker-clusters-by-using-aws-services.html) | AWS administrator | 
| Modify the access policy for the SNS topic. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/monitor-sap-rhel-pacemaker-clusters-by-using-aws-services.html) | AWS systems administrator | 
| Subscribe to the SNS topic. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/monitor-sap-rhel-pacemaker-clusters-by-using-aws-services.html)Your web browser displays a confirmation response from Amazon SNS. | AWS systems administrator | 

### Confirm the setup of the cluster
<a name="confirm-the-setup-of-the-cluster"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Check cluster status. | Use the **pcs status** command to confirm that the resources are online. | SAP Basis administrator | 

### Configure Pacemaker alerts
<a name="configure-pacemaker-alerts"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Configure the Pacemaker alert agent on the primary cluster instance. | Log in to the EC2 instance in the primary cluster and run the following commands:<pre>install --mode=0755 /usr/share/pacemaker/alerts/alert_file.sh.sample /var/lib/pacemaker/alert_file.sh<br />touch /var/log/pcmk_alert_file.log<br />chown hacluster:haclient /var/log/pcmk_alert_file.log<br />chmod 600 /var/log/pcmk_alert_file.log<br />pcs alert create id=alert_file description="Log events to a file." path=/var/lib/pacemaker/alert_file.sh<br />pcs alert recipient add alert_file id=my-alert_logfile value=/var/log/pcmk_alert_file.log</pre> | SAP Basis administrator | 
| Configure the Pacemaker alert agent on the secondary cluster instance. | Log in to the EC2 instance in the secondary cluster and run the following commands:<pre>install --mode=0755 /usr/share/pacemaker/alerts/alert_file.sh.sample /var/lib/pacemaker/alert_file.sh<br />touch /var/log/pcmk_alert_file.log<br />chown hacluster:haclient /var/log/pcmk_alert_file.log<br />chmod 600 /var/log/pcmk_alert_file.log</pre> | SAP Basis administrator | 
| Confirm that the RHEL alert resource was created. | Use the following command to confirm that the alert resource was created:<pre>pcs alert</pre>The output of the command will look like this:<pre>[root@xxxxxxx ~]# pcs alert <br />Alerts:<br /> Alert: alert_file (path=/var/lib/pacemaker/alert_file.sh)<br />  Description: Log events to a file.<br />  Recipients:<br />   Recipient: my-alert_logfile (value=/var/log/pcmk_alert_file.log)</pre> | SAP Basis administrator | 
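Once the alert agent is running, each cluster event is appended as one line to `/var/log/pcmk_alert_file.log`. The exact line format depends on the `alert_file.sh` script you installed, so the regex and sample lines below are assumptions for illustration; a minimal Python sketch for spotting resource-stop events in that log might look like:

```python
import re

# Hypothetical line format written by alert_file.sh; adjust the regex
# to match the script you actually installed.
# Example line:
#   "alert_file.sh: Resource operation 'stop' for 'rsc_SAPHana_HDB' on node1: ok"
STOP_EVENT = re.compile(r"operation 'stop' for '(?P<resource>[^']+)'")

def find_stopped_resources(lines):
    """Return the cluster resource names that logged a 'stop' operation."""
    stopped = []
    for line in lines:
        match = STOP_EVENT.search(line)
        if match:
            stopped.append(match.group("resource"))
    return stopped
```

This is the same string pattern that the CloudWatch metric filter searches for later in this pattern, so testing it locally against a few real log lines is a quick way to validate your filter expression.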

### Configure the CloudWatch agent
<a name="configure-the-cw-agent"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Install the CloudWatch agent. | There are several ways to install the CloudWatch agent on an EC2 instance. To use the command line:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/monitor-sap-rhel-pacemaker-clusters-by-using-aws-services.html)For more information, see the [CloudWatch documentation](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/install-CloudWatch-Agent-on-EC2-Instance.html). | AWS systems administrator | 
| Attach an IAM role to the EC2 instance. | To enable the CloudWatch agent to send data from the instances, you must attach the IAM **CloudWatchAgentServerRole** role to each  instance. Or, you can add a policy for the CloudWatch agent to your existing IAM role. For more information, see the [CloudWatch documentation](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/create-iam-roles-for-cloudwatch-agent-commandline.html). | AWS administrator | 
| Configure the CloudWatch agent to monitor the Pacemaker alert agent log file on the primary cluster instance. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/monitor-sap-rhel-pacemaker-clusters-by-using-aws-services.html) | AWS administrator | 
| Start the CloudWatch agent on the primary and secondary cluster instances. | To start the agent, run the following command on the EC2 instances in the primary and secondary clusters:<pre>sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl \<br />-a fetch-config -m ec2 -s -c file:/opt/aws/amazon-cloudwatch-agent/bin/config.json</pre> | AWS administrator | 
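For the monitoring task above, the relevant part of `config.json` is the `logs` section that tails the Pacemaker alert log. The log group and stream names below are examples, not required values; a sketch that emits a minimal configuration:

```python
import json

# Minimal "logs" section for /opt/aws/amazon-cloudwatch-agent/bin/config.json.
# "sap-pacemaker-alerts" is an example log group name; substitute your own.
config = {
    "logs": {
        "logs_collected": {
            "files": {
                "collect_list": [
                    {
                        "file_path": "/var/log/pcmk_alert_file.log",
                        "log_group_name": "sap-pacemaker-alerts",
                        "log_stream_name": "{instance_id}",
                        "timezone": "UTC",
                    }
                ]
            }
        }
    }
}

print(json.dumps(config, indent=2))
```

Using `{instance_id}` as the stream name keeps the primary and secondary cluster nodes in separate log streams under one log group.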

### Set up CloudWatch resources
<a name="set-up-cw-resources"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Set up CloudWatch log groups. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/monitor-sap-rhel-pacemaker-clusters-by-using-aws-services.html)The CloudWatch agent will transfer the Pacemaker alert file to the CloudWatch log group as a log stream. | AWS administrator | 
| Set up CloudWatch metric filters. | Metric filters help you search for a pattern such as `stop <cluster-resource-name>` in the CloudWatch log streams. When this pattern is identified, the metric filter updates a custom metric.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/monitor-sap-rhel-pacemaker-clusters-by-using-aws-services.html)When the metric filter identifies the pattern in step 4, it updates the value of the CloudWatch custom metric `sapcluster_abc` to **1**. The CloudWatch alarm `SAP-Cluster-QA1-ABC` monitors the metric `sapcluster_abc` and sends out an SNS notification when the value of the metric changes to **1**. This indicates that the cluster resource has stopped and action needs to be taken. | AWS administrator, SAP Basis administrator | 
| Set up a CloudWatch metric alarm for the SAP ASCS/SCS and ERS metric. | To create an alarm based on a single metric:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/monitor-sap-rhel-pacemaker-clusters-by-using-aws-services.html) | AWS administrator | 
| Set up a CloudWatch metric alarm for the SAP HANA metric. | Repeat the steps for setting up a CloudWatch metric alarm from the previous task, with these changes:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/monitor-sap-rhel-pacemaker-clusters-by-using-aws-services.html) | AWS administrator | 
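The filter-and-alarm logic in the tasks above can be sketched locally. The pattern string mirrors this pattern's `stop <cluster-resource-name>` filter; the resource name and threshold are illustrative:

```python
def metric_value(log_event: str, resource_name: str) -> int:
    """Mimic a metric filter: publish 1 when a stop event for the
    given cluster resource appears in a log event, else 0."""
    return 1 if f"stop {resource_name}" in log_event else 0

def alarm_should_fire(value: int, threshold: int = 1) -> bool:
    """Mimic the alarm: notify when the custom metric reaches the threshold."""
    return value >= threshold
```

Running a captured alert-log line through `metric_value` before you create the real filter is an easy way to confirm the pattern matches your log format.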

## Related resources
<a name="monitor-sap-rhel-pacemaker-clusters-by-using-aws-services-resources"></a>
+ [Triggering Scripts for Cluster Events](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html-single/high_availability_add-on_reference/index#ch-alertscripts-HAAR) (RHEL documentation)
+ [Create the CloudWatch agent configuration file with the wizard ](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/create-cloudwatch-agent-configuration-file-wizard.html)(CloudWatch documentation)
+ [Installing and running the CloudWatch agent on your servers ](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/install-CloudWatch-Agent-commandline-fleet.html)(CloudWatch documentation)
+ [Create a CloudWatch alarm based on a static threshold](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/ConsoleAlarms.html) (CloudWatch documentation)
+ [Manual deployment of SAP HANA on AWS with high availability clusters](https://docs.aws.amazon.com/sap/latest/sap-hana/sap-hana-on-aws-manual-deployment-of-sap-hana-on-aws-with-high-availability-clusters.html) (SAP documentation on the AWS website)
+ [SAP NetWeaver guides ](https://docs.aws.amazon.com/sap/latest/sap-netweaver/welcome.html)(SAP documentation on the AWS website)

## Attachments
<a name="attachments-ca4d282e-eadd-43fd-8506-3dbeb43e4db6"></a>

To access additional content that is associated with this document, unzip the following file: [attachment.zip](samples/p-attach/ca4d282e-eadd-43fd-8506-3dbeb43e4db6/attachments/attachment.zip)

# Monitor application activity by using CloudWatch Logs Insights
<a name="monitor-application-activity-by-using-cloudwatch-logs-insights"></a>

*Ram Kandaswamy, Amazon Web Services*

## Summary
<a name="monitor-application-activity-by-using-cloudwatch-logs-insights-summary"></a>

This pattern provides a solution for automatically detecting and alerting on application exceptions by using Amazon CloudWatch Logs Insights. By implementing automated log analysis and alerting, you can quickly identify and respond to application issues in your production environment.

Logs play a crucial role in monitoring system behavior, identifying issues, and ensuring optimal performance. During a migration process, log files are invaluable for validating the system's functioning in the new environment, detecting compatibility problems, and identifying any unexpected behaviors. Issues can be operational or security related. For security-related issues, detecting unauthorized access attempts or suspicious activities early is essential for maintaining security and regulatory compliance. This capability is especially important when dealing with sensitive data or critical systems. 

This pattern is particularly valuable for teams that need to do the following:
+ Maintain high application availability.
+ Respond to production issues quickly.
+ Analyze application-specific errors not captured by AWS service logs.
+ Perform on-demand log analysis without pre-built infrastructure.

CloudWatch Logs Insights is optimal for analyzing application-generated logs where the error context exists only within your application code. CloudWatch Logs Insights excels at the following tasks:
+ Querying unstructured or semi-structured log data.
+ Performing on-demand analysis during incident response.
+ Correlating events across multiple log groups.
+ Creating quick visualizations without external tools.
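As a concrete illustration of the first task, an exception-hunting query can be assembled as a string and run through the Logs Insights console or API. `@timestamp` and `@message` are standard Logs Insights fields; the search term and limit below are examples:

```python
def build_exception_query(term: str, limit: int = 20) -> str:
    """Build a CloudWatch Logs Insights query that returns the most
    recent log events whose @message contains the given term."""
    return (
        f"fields @timestamp, @message "
        f"| filter @message like /{term}/ "
        f"| sort @timestamp desc "
        f"| limit {limit}"
    )
```

For example, `build_exception_query("Exception")` produces a query you can paste directly into the Logs Insights query editor.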

## Prerequisites and limitations
<a name="monitor-application-activity-by-using-cloudwatch-logs-insights-prereqs"></a>

**Prerequisites**
+ A production application deployed in an active AWS account
+ Basic understanding of the production application's logging format and exception patterns
+ Application logs configured to stream to Amazon CloudWatch Logs

**Limitations**
+ Some AWS services aren’t available in all AWS Regions. For Region availability, see [AWS Services by Region](https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/). For specific endpoints, see [Service endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/aws-service-information.html), and choose the link for the service.

## Architecture
<a name="monitor-application-activity-by-using-cloudwatch-logs-insights-architecture"></a>

The following diagram shows how CloudWatch Logs Insights evaluates resource logs and sends a relevant data visualization to a CloudWatch dashboard.

![\[CloudWatch Logs Insights evaluates resource logs and sends data visualization to dashboard.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/082ff4b6-9303-42e6-bc62-263e2254f232/images/b1cbb699-07cd-45e6-ac06-839159bafa6b.png)


The diagram shows the following workflow:

1. The resources publish logs to CloudWatch Logs. Resources can include AWS resources, such as Amazon Elastic Compute Cloud (Amazon EC2) instances or Amazon Simple Storage Service (Amazon S3) buckets, as well as on-premises systems that publish logs through the installed CloudWatch agent.

1. CloudWatch Logs Insights filters for the relevant pattern string. Examples of search pattern strings include "error", "exception", or a specific regular expression.

1. Typically, the production support team or developers add the pattern visualization to the CloudWatch dashboard.

**Automation and scale**

Developers can automate this pattern’s solution by using the AWS Cloud Development Kit (AWS CDK), AWS CloudFormation, or AWS SDKs to handle multiple string patterns. Teams can incorporate this automation into their continuous integration and deployment (CI/CD) DevOps processes.
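As a sketch of that automation with an AWS SDK, you could generate one metric filter per error string and pass each dict as keyword arguments to a boto3 CloudWatch Logs client's `put_metric_filter` call. The log group, filter names, and namespace below are illustrative:

```python
def metric_filter_requests(log_group: str, patterns: list[str],
                           namespace: str = "AppErrors") -> list[dict]:
    """Build one put_metric_filter request per search pattern.
    Each dict can be unpacked (**kwargs) into a boto3 CloudWatch Logs call."""
    requests = []
    for pattern in patterns:
        requests.append({
            "logGroupName": log_group,
            "filterName": f"{pattern}-filter",
            "filterPattern": f'"{pattern}"',  # quoted term match
            "metricTransformations": [{
                "metricName": f"{pattern}Count",
                "metricNamespace": namespace,
                "metricValue": "1",
            }],
        })
    return requests
```

The same list of request dicts can feed a CloudFormation or AWS CDK template generator, so one definition of your error strings drives every environment in the CI/CD pipeline.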

## Tools
<a name="monitor-application-activity-by-using-cloudwatch-logs-insights-tools"></a>

**AWS services**
+ [Amazon CloudWatch Logs](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html) helps you centralize the logs from all your systems, applications, and AWS services so you can monitor them and archive them securely.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.
+ [AWS Key Management Service (AWS KMS)](https://docs.aws.amazon.com/kms/latest/developerguide/overview.html) helps you create and control cryptographic keys to help protect your data.

## Best practices
<a name="monitor-application-activity-by-using-cloudwatch-logs-insights-best-practices"></a>

**Query efficiency**
+ Define and configure [log groups](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Working-with-log-groups-and-streams.html) to analyze relevant log data.
+ Use field explorers to understand the structure and fields available in your log data.
+ Write efficient queries by using [CloudWatch Logs Insights query syntax](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CWL_AnalyzeLogData_LogsInsights.html).
+ Adapt [sample queries](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CWL_QuerySyntax-examples.html) to your specific requirements for quicker analysis.
+ Limit query time ranges to reduce data scanned and improve performance.
+ [Save queries](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CWL_Insights-Saving-Queries.html) for future use to save time and ensure consistent analysis.

**Security**
+ Apply appropriate IAM [policies](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/iam-access-control-overview-cwl.html) to CloudWatch Logs Insights and log groups. Follow the principle of least privilege and grant the minimum permissions required to perform a task. For more information, see [Grant least privilege](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html#grant-least-priv) and [Security best practices](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html) in the IAM documentation.
+ Enable [log data encryption using AWS KMS](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CloudWatchLogs-Insights-Query-Encrypt.html) for sensitive log data.

**Cost optimization**
+ CloudWatch Logs Insights charges per GB of data scanned per query. Narrow time ranges and target specific log groups to reduce costs.
+ Configure appropriate log retention policies to manage storage costs.
+ For frequent analysis of large historical datasets, consider exporting logs to Amazon S3 and using Amazon Athena.
+ Review [CloudWatch pricing](https://aws.amazon.com/cloudwatch/pricing/) to understand cost implications for your use case.
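As a back-of-the-envelope illustration of the per-GB-scanned pricing model (the rate below is an example figure, not a current price; check CloudWatch pricing for your Region):

```python
def insights_query_cost(gb_scanned: float, price_per_gb: float = 0.005) -> float:
    """Estimate the cost of one Logs Insights query from data scanned.
    price_per_gb is an assumed example rate in USD."""
    return round(gb_scanned * price_per_gb, 4)
```

Under that assumed rate, a query scanning 200 GB costs about $1.00, which is why narrowing the time range and log group selection has a direct cost impact.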

## Epics
<a name="monitor-application-activity-by-using-cloudwatch-logs-insights-epics"></a>

### Create a log group and configure logs to view in a dashboard
<a name="create-log-group-and-configure-logs-to-view-in-dashboard"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Configure IAM permissions. | To configure IAM permissions, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/monitor-application-activity-by-using-cloudwatch-logs-insights.html)For information about how to create IAM policies or to add permissions to existing policies, see [Define custom IAM permissions with customer managed policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create.html) and [Edit IAM policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-edit.html) in the *IAM User Guide*. For more information, see [Identity and access management for Amazon CloudWatch Logs](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/auth-and-access-control-cwl.html) and [CloudWatch Logs permissions reference](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/permissions-reference-cwl.html) in the *Amazon CloudWatch Logs User Guide*. | AWS administrator, AWS DevOps, AWS systems administrator, Cloud administrator, Cloud architect, DevOps engineer | 
| Create a log group. | To create a log group, use any of the following options:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/monitor-application-activity-by-using-cloudwatch-logs-insights.html)[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/monitor-application-activity-by-using-cloudwatch-logs-insights.html) | AWS administrator, AWS DevOps, AWS systems administrator, Cloud administrator, Cloud architect, DevOps engineer | 
| Generate a CloudWatch Logs Insights query. | To create and save a CloudWatch Logs Insights query, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/monitor-application-activity-by-using-cloudwatch-logs-insights.html) | AWS administrator, AWS DevOps, AWS systems administrator, Cloud administrator, Cloud architect, DevOps engineer | 
| Create visualization in a CloudWatch dashboard. | To use a CloudWatch dashboard to create a visualization, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/monitor-application-activity-by-using-cloudwatch-logs-insights.html)For more information about dashboard options and capabilities, see [Using Amazon CloudWatch dashboards](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch_Dashboards.html) and [Creating flexible CloudWatch dashboards with dashboard variables](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch_dashboard_variables.html) in the *Amazon CloudWatch Logs User Guide*. | AWS administrator, AWS DevOps, AWS systems administrator, Cloud administrator, Cloud architect, DevOps engineer | 

## Troubleshooting
<a name="monitor-application-activity-by-using-cloudwatch-logs-insights-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| Unable to see query results or query seems broken | Start with a working query that was modified from a [sample query](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CWL_QuerySyntax-examples.html). Perform small incremental changes to parts of the query (such as a filter or field), and take advantage of the CloudWatch Logs [query generator feature](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CloudWatchLogs-Insights-Query-Assist.html). | 
| Log groups not creating log streams | In the IAM policy, make sure that the resource for the [CreateLogStream](https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_CreateLogStream.html) and [CreateLogGroup](https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_CreateLogGroup.html) operations is specified with a wildcard character (`*`). The `create` operations will not succeed without this wildcard permission. | 
| Query timeout or slow performance | Reduce the time range, target specific log groups, or simplify the query. Complex regular expression (`regex`) patterns and large time ranges increase query time. | 
| No data returned for valid time range | Verify log group selection and check that logs are being ingested (review log streams), and confirm the filter pattern matches your log format. | 

## Related resources
<a name="monitor-application-activity-by-using-cloudwatch-logs-insights-resources"></a>
+ [Analyzing log data with CloudWatch Logs Insights](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AnalyzingLogData.html)
+ [Amazon CloudWatch FAQs](https://aws.amazon.com/cloudwatch/faqs/#topic-0)
+ [Creating flexible CloudWatch dashboards with dashboard variables](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch_dashboard_variables.html)
+ [Get started with Logs Insights QL: Query tutorials](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CWL_AnalyzeLogData_Tutorials.html)
+ [Use natural language to generate and update CloudWatch Logs Insights queries](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CloudWatchLogs-Insights-Query-Assist.html)
+ [Use PutDashboard with an AWS SDK or CLI](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/example_cloudwatch_PutDashboard_section.html)
+ [Working with log groups and log streams](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Working-with-log-groups-and-streams.html)

# Monitor use of a shared Amazon Machine Image across multiple AWS accounts
<a name="monitor-use-of-a-shared-amazon-machine-image-across-multiple-aws-accounts"></a>

*Naveen Suthar and Sandeep Gawande, Amazon Web Services*

## Summary
<a name="monitor-use-of-a-shared-amazon-machine-image-across-multiple-aws-accounts-summary"></a>

[Amazon Machine Images (AMIs)](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html) are used to create Amazon Elastic Compute Cloud (Amazon EC2) instances in your Amazon Web Services (AWS) environment. You can create AMIs in a separate, centralized AWS account, which is called a *creator account* in this pattern. You can then share the AMI across multiple AWS accounts that are in the same AWS Region, which are called *consumer accounts* in this pattern. Managing AMIs from a single account provides scalability and simplifies governance. In the consumer accounts, you can reference the shared AMI in Amazon EC2 Auto Scaling [launch templates](https://docs.aws.amazon.com/autoscaling/ec2/userguide/create-asg-launch-template.html) and Amazon Elastic Kubernetes Service (Amazon EKS) [node groups](https://docs.aws.amazon.com/eks/latest/userguide/managed-node-groups.html).

When a shared AMI is [deprecated](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ami-deprecate.html), [deregistered](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/deregister-ami.html), or [unshared](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/sharingamis-explicit.html), AWS services that refer to the AMI in the consumer accounts cannot use this AMI to launch new instances. Any auto scaling event or relaunch of the same instance fails. This can lead to issues in the production environment, such as application downtime or performance degradation. When AMI sharing and usage events occur in multiple AWS accounts, it can be difficult to monitor this activity.

This pattern helps you monitor shared AMI usage and status across accounts in the same Region. It uses serverless AWS services, such as Amazon EventBridge, Amazon DynamoDB, AWS Lambda, and Amazon Simple Email Service (Amazon SES). You provision the infrastructure as code (IaC) by using HashiCorp Terraform. This solution provides alerts when a service in a consumer account references a deregistered or unshared AMI.

## Prerequisites and limitations
<a name="monitor-use-of-a-shared-amazon-machine-image-across-multiple-aws-accounts-prereqs"></a>

**Prerequisites**
+ Two or more active AWS accounts: one creator account and one or more consumer accounts
+ One or more AMIs that are shared from the creator account to a consumer account
+ Terraform CLI, [installed](https://developer.hashicorp.com/terraform/cli) (Terraform documentation)
+ Terraform AWS Provider, [configured](https://hashicorp.github.io/terraform-provider-aws/) (Terraform documentation)
+ (Optional, but recommended) Terraform backend, [configured](https://developer.hashicorp.com/terraform/language/backend) (Terraform documentation)
+ Git, [installed](https://github.com/git-guides/install-git)

**Limitations**
+ This pattern monitors AMIs that have been shared to specific accounts by using the account ID. This pattern does not monitor AMIs that have been shared to an organization by using the organization ID.
+ AMIs can only be shared to accounts that are within the same AWS Region. This pattern monitors AMIs within a single, target Region. To monitor use of AMIs in multiple Regions, deploy this solution in each Region.
+ This pattern doesn't monitor any AMIs that were shared before this solution was deployed. If you want to monitor previously shared AMIs, you can unshare the AMI and then reshare it with the consumer accounts.

**Product versions**
+ Terraform version 1.2.0 or later
+ Terraform AWS Provider version 4.20 or later

## Architecture
<a name="monitor-use-of-a-shared-amazon-machine-image-across-multiple-aws-accounts-architecture"></a>

**Target technology stack**

The following resources are provisioned as IaC through Terraform:
+ Amazon DynamoDB tables
+ Amazon EventBridge rules
+ AWS Identity and Access Management (IAM) role
+ AWS Lambda functions
+ Amazon SES

**Target architecture**

![\[Architecture for monitoring shared AMI use and alerting users if the AMI is unshared or deregistered\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/2d709249-0c68-47d7-be5d-46e8a73071ed/images/8c48c4dd-d681-4c32-9ba8-8f5ad2d66f64.png)


The diagram shows the following workflow:

1. An AMI in the creator account is shared with a consumer account in the same AWS Region.

1. When the AMI is shared, an EventBridge rule in the creator account captures the `ModifyImageAttribute` event and initiates a Lambda function in the creator account.

1. The Lambda function stores data related to the AMI in a DynamoDB table in the creator account.

1. When an AWS service in the consumer account uses the shared AMI to launch an Amazon EC2 instance or when the shared AMI is associated with a launch template, an EventBridge rule in the consumer account captures use of the shared AMI.

1. The EventBridge rule initiates a Lambda function in the consumer account. The Lambda function does the following:

   1. The Lambda function updates the AMI-related data in a DynamoDB table in the consumer account.

   1. The Lambda function assumes an IAM role in the creator account and updates the DynamoDB table in the creator account. In the `Mapping` table, it creates an item that maps the instance ID or launch template ID to its respective AMI ID.

1. The AMI that is centrally managed in the creator account is deprecated, deregistered, or unshared.

1. The EventBridge rule in the creator account captures the `ModifyImageAttribute` or `DeregisterImage` event with the `remove` action and initiates the Lambda function.

1. The Lambda function checks the DynamoDB table to determine whether the AMI is used in any of the consumer accounts. If there are no instance IDs or launch template IDs associated with the AMI in the `Mapping` table, then the process is complete.

1. If any instance IDs or launch template IDs are associated with the AMI in the `Mapping` table, then the Lambda function uses Amazon SES to send an email notification to the configured subscribers.
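The decision in the last two steps can be sketched with a plain dictionary standing in for the DynamoDB `Mapping` table. The table layout, AMI IDs, and resource IDs below are assumptions for illustration:

```python
def consumers_to_notify(mapping_table: dict, ami_id: str) -> list:
    """Return the instance or launch-template IDs still tied to an AMI.
    An empty result means the AMI can be retired without notification."""
    return mapping_table.get(ami_id, [])

def should_notify(mapping_table: dict, ami_id: str) -> bool:
    """Decide whether the Lambda function should send an Amazon SES email."""
    return len(consumers_to_notify(mapping_table, ami_id)) > 0
```

In the deployed solution this lookup runs inside the creator-account Lambda function against DynamoDB; the sketch only shows the branch that determines whether subscribers are emailed.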

## Tools
<a name="monitor-use-of-a-shared-amazon-machine-image-across-multiple-aws-accounts-tools"></a>

**AWS services**
+ [Amazon DynamoDB](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Introduction.html) is a fully managed NoSQL database service that provides fast, predictable, and scalable performance.
+ [Amazon EventBridge](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-what-is.html) is a serverless event bus service that helps you connect your applications with real-time data from a variety of sources, such as AWS Lambda functions, HTTP invocation endpoints using API destinations, or event buses in other AWS accounts.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a compute service that helps you run code without needing to provision or manage servers. It runs your code only when needed and scales automatically, so you pay only for the compute time that you use.
+ [Amazon Simple Email Service (Amazon SES)](https://docs.aws.amazon.com/ses/latest/dg/Welcome.html) helps you send and receive emails by using your own email addresses and domains.

**Other tools**
+ [HashiCorp Terraform](https://www.terraform.io/docs) is an infrastructure as code (IaC) tool that helps you use code to provision and manage cloud infrastructure and resources.
+ [Python](https://www.python.org/) is a general-purpose computer programming language.

**Code repository**

The code for this pattern is available in the GitHub [cross-account-ami-monitoring-terraform-samples](https://github.com/aws-samples/cross-account-ami-monitoring-terraform-samples) repository.

## Best practices
<a name="monitor-use-of-a-shared-amazon-machine-image-across-multiple-aws-accounts-best-practices"></a>
+ Follow the [Best practices for working with AWS Lambda functions](https://docs.aws.amazon.com/lambda/latest/dg/best-practices.html).
+ Follow the [Best practices for building AMIs](https://docs.aws.amazon.com/marketplace/latest/userguide/best-practices-for-building-your-amis.html).
+ When creating the IAM role, follow the principle of least privilege and grant the minimum permissions required to perform a task. For more information, see [Grant least privilege](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html#grant-least-priv) and [Security best practices](https://docs.aws.amazon.com/IAM/latest/UserGuide/IAMBestPracticesAndUseCases.html) in the IAM documentation.
+ Set up monitoring and alerting for the AWS Lambda functions. For more information, see [Monitoring and troubleshooting Lambda functions](https://docs.aws.amazon.com/lambda/latest/dg/lambda-monitoring.html).

## Epics
<a name="monitor-use-of-a-shared-amazon-machine-image-across-multiple-aws-accounts-epics"></a>

### Customize the Terraform configuration files
<a name="customize-the-terraform-configuration-files"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the AWS CLI named profiles. | For the creator account and each consumer account, create an AWS Command Line Interface (AWS CLI) named profile. For instructions, see [Set up the AWS CLI](https://aws.amazon.com/getting-started/guides/setup-environment/module-three/) in the AWS Getting Started Resources Center. | DevOps engineer | 
| Clone the repository. | Enter the following command. This clones the [cross-account-ami-monitoring-terraform-samples](https://github.com/aws-samples/cross-account-ami-monitoring-terraform-samples) repository from GitHub by using SSH.<pre>git clone git@github.com:aws-samples/cross-account-ami-monitoring-terraform-samples.git</pre> | DevOps engineer | 
| Update the provider.tf file. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/monitor-use-of-a-shared-amazon-machine-image-across-multiple-aws-accounts.html)For more information about configuring the providers, see [Multiple provider configurations](https://developer.hashicorp.com/terraform/language/providers/configuration#alias-multiple-provider-configurations) in the Terraform documentation. | DevOps engineer | 
| Update the terraform.tfvars file. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/monitor-use-of-a-shared-amazon-machine-image-across-multiple-aws-accounts.html) | DevOps engineer | 
| Update the main.tf file. | Complete these steps only if you are deploying this solution to more than one consumer account. If you are deploying this solution to only one consumer account, no modification of this file is necessary.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/monitor-use-of-a-shared-amazon-machine-image-across-multiple-aws-accounts.html) | DevOps engineer | 

### Deploy the solution by using Terraform
<a name="deploy-the-solution-by-using-terraform"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Deploy the solution. | In the Terraform CLI, enter the following commands to deploy the AWS resources in the creator and consumer accounts:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/monitor-use-of-a-shared-amazon-machine-image-across-multiple-aws-accounts.html) | DevOps engineer | 
| Verify the email address identity. | When you deployed the Terraform plan, Terraform created an email address identity for each consumer account in Amazon SES. Before notifications can be sent to that email address, you must verify the email address. For instructions, see [Verifying an email address identity](https://docs.aws.amazon.com/ses/latest/dg/creating-identities.html#just-verify-email-proc) in the Amazon SES documentation. | General AWS | 

### Validate resource deployment
<a name="validate-resource-deployment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Validate deployment in the creator account. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/monitor-use-of-a-shared-amazon-machine-image-across-multiple-aws-accounts.html) | DevOps engineer | 
| Validate deployment in the consumer account. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/monitor-use-of-a-shared-amazon-machine-image-across-multiple-aws-accounts.html) | DevOps engineer | 

### Validate monitoring
<a name="validate-monitoring"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an AMI in the creator account. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/monitor-use-of-a-shared-amazon-machine-image-across-multiple-aws-accounts.html) | DevOps engineer | 
| Use the AMI in the consumer account. | In the consumer account, use the shared AMI to create an Amazon EC2 instance or launch template. For instructions, see [How do I launch an Amazon EC2 instance from a custom AMI](https://repost.aws/knowledge-center/launch-instance-custom-ami) (AWS re:Post Knowledge Center) or [Create a launch template for an Auto Scaling group](https://docs.aws.amazon.com/autoscaling/ec2/userguide/create-launch-template.html) (Amazon EC2 Auto Scaling documentation). | DevOps engineer | 
| Validate monitoring and alerting. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/monitor-use-of-a-shared-amazon-machine-image-across-multiple-aws-accounts.html) | DevOps engineer | 

### (Optional) Stop monitoring shared AMIs
<a name="optional-stop-monitoring-shared-amis"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Delete the resources. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/monitor-use-of-a-shared-amazon-machine-image-across-multiple-aws-accounts.html) | DevOps engineer | 

## Troubleshooting
<a name="monitor-use-of-a-shared-amazon-machine-image-across-multiple-aws-accounts-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| I did not receive an email alert. | There could be multiple reasons why the Amazon SES email was not sent. Check the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/monitor-use-of-a-shared-amazon-machine-image-across-multiple-aws-accounts.html) | 

## Related resources
<a name="monitor-use-of-a-shared-amazon-machine-image-across-multiple-aws-accounts-resources"></a>

**AWS documentation**
+ [Building Lambda functions with Python](https://docs.aws.amazon.com/lambda/latest/dg/lambda-python.html) (Lambda documentation)
+ [Create an AMI](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/create-ami.html) (Amazon EC2 documentation)
+ [Share an AMI with specific AWS accounts](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/sharingamis-explicit.html) (Amazon EC2 documentation)
+ [Deregister your AMI](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/deregister-ami.html) (Amazon EC2 documentation)

**Terraform documentation**
+ [Install Terraform](https://learn.hashicorp.com/tutorials/terraform/install-cli)
+ [Terraform Backend Configuration](https://www.terraform.io/language/settings/backends/configuration)
+ [Terraform AWS Provider](https://registry.terraform.io/providers/hashicorp/aws/latest/docs)
+ [Terraform binary download](https://developer.hashicorp.com/terraform/install)

# View EBS snapshot details for your AWS account or organization
<a name="view-ebs-snapshot-details-for-your-aws-account-or-organization"></a>

*Arun Chandapillai and Parag Nagwekar, Amazon Web Services*

## Summary
<a name="view-ebs-snapshot-details-for-your-aws-account-or-organization-summary"></a>

This pattern describes how you can automatically generate an on-demand report of all Amazon Elastic Block Store (Amazon EBS) snapshots in your Amazon Web Services (AWS) account or organizational unit (OU) in AWS Organizations. 

Amazon EBS is an easy-to-use, scalable, high-performance block storage service designed for Amazon Elastic Compute Cloud (Amazon EC2). An EBS volume provides durable and persistent storage that you can attach to your EC2 instances. You can use EBS volumes as primary storage for your data and take a point-in-time backup of your EBS volumes by creating a snapshot. You can use the AWS Management Console or the AWS Command Line Interface (AWS CLI) to view the details of specific EBS snapshots. This pattern provides a programmatic way to retrieve information about all EBS snapshots in your AWS account or OU.

You can use the script provided by this pattern to generate a comma-separated values (CSV) file that has the following information about each snapshot: account ID, snapshot ID, volume ID and size, the date the snapshot was taken, instance ID, and description. If your EBS snapshots are tagged, the report also includes the owner and team attributes.
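The report columns described above can be sketched with Python's `csv` module. The column names and sample values here are illustrative assumptions; the actual script defines its own headers.

```python
import csv
import io

# Hypothetical snapshot records shaped like the report columns described above.
snapshots = [
    {"account_id": "111122223333", "snapshot_id": "snap-0abc", "volume_id": "vol-0abc",
     "volume_size_gib": 8, "start_time": "2023-01-15", "instance_id": "i-0abc",
     "description": "nightly backup", "owner": "team-a", "team": "data"},
]

# Write the rows to an in-memory CSV buffer; the real script writes to the
# file passed with --file.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=list(snapshots[0].keys()))
writer.writeheader()
writer.writerows(snapshots)
print(buf.getvalue())
```

The `owner` and `team` columns would be populated only for snapshots that carry the corresponding tags.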

## Prerequisites and limitations
<a name="view-ebs-snapshot-details-for-your-aws-account-or-organization-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ AWS CLI version 2 [installed](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html#getting-started-install-instructions) and [configured](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html)
+ AWS Identity and Access Management (IAM) role with the appropriate permissions (access permissions for a specific account or for all accounts in an OU if you’re planning to run the script from AWS Organizations)

## Architecture
<a name="view-ebs-snapshot-details-for-your-aws-account-or-organization-architecture"></a>

The following diagram shows the script workflow that generates an on-demand report of EBS snapshots that are spread across multiple AWS accounts in an OU.

![\[Generating an on-demand report of EBS snapshots across OUs.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/4e8b1812-2731-4f46-8385-0dd4d92f2d03/images/62d10408-7c85-46cf-a6a4-fe87a6e446f2.png)


## Tools
<a name="view-ebs-snapshot-details-for-your-aws-account-or-organization-tools"></a>

**AWS services**
+ [AWS Command Line Interface (AWS CLI)](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) is an open-source tool that helps you interact with AWS services through commands in your command-line shell.
+ [Amazon Elastic Block Store (Amazon EBS)](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonEBS.html) provides block-level storage volumes for use with EC2 instances. 
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.
+ [AWS Organizations](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_introduction.html) is an account management service that helps you consolidate multiple AWS accounts into an organization that you create and centrally manage.

**Code**

The code for the sample application used in this pattern is available on GitHub, in the [aws-ebs-snapshots-awsorganizations](https://github.com/aws-samples/aws-ebs-snapshots-awsorganizations) repository. Follow the instructions in the next section to use the sample files.

## Epics
<a name="view-ebs-snapshot-details-for-your-aws-account-or-organization-epics"></a>

### Download the script
<a name="download-the-script"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Download the Python script. | Download the script [GetSnapshotDetailsAllAccountsOU.py](https://github.com/aws-samples/aws-ebs-snapshots-awsorganizations/blob/main/GetSnapshotDetailsAllAccountsOU.py) from the [GitHub repository](https://github.com/aws-samples/aws-ebs-snapshots-awsorganizations). | General AWS | 

### Get EBS snapshot details for an AWS account
<a name="get-ebs-snapshot-details-for-an-aws-account"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Run the Python script. | Run the command:<pre>python3 GetSnapshotDetailsAllAccountsOU.py --file <output-file>.csv --region <region-name> </pre>where `<output-file>` refers to the CSV output file where you want information about the EBS snapshots placed, and `<region-name>` is the AWS Region where the snapshots are stored. For example:<pre>python3 GetSnapshotDetailsAllAccountsOU.py --file snapshots.csv --region us-east-1 </pre> | General AWS | 

### Get EBS snapshot details for an organization
<a name="get-ebs-snapshot-details-for-an-organization"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Run the Python script. | Run the command:<pre>python3 GetSnapshotDetailsAllAccountsOU.py --file <output-file>.csv --role <IAM-role> --region <region-name> </pre>where `<output-file>` refers to the CSV output file where you want information about the EBS snapshots placed, `<IAM-role>` is a role that provides permissions to access AWS Organizations, and `<region-name>` is the AWS Region where the snapshots are stored. For example:<pre>python3 GetSnapshotDetailsAllAccountsOU.py --file snapshots.csv --role <IAM-role> --region us-west-2</pre> | General AWS | 
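The account-level and organization-level invocations share one flag set. A minimal `argparse` sketch of that command-line interface follows; the help text and the treatment of `--role` as optional are assumptions about the script, not its actual source.

```python
import argparse

# Sketch of the documented CLI: --file and --region are always required,
# while --role is supplied only when running against an organization.
parser = argparse.ArgumentParser(description="Report EBS snapshot details")
parser.add_argument("--file", required=True, help="CSV output file")
parser.add_argument("--region", required=True, help="AWS Region to query")
parser.add_argument("--role", help="IAM role for AWS Organizations access")

# Parse a sample account-level invocation.
args = parser.parse_args(["--file", "snapshots.csv", "--region", "us-east-1"])
print(args.file, args.region, args.role)
```

In organization mode you would add `--role <IAM-role>`, and the script would assume that role in each member account before describing snapshots.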

## Related resources
<a name="view-ebs-snapshot-details-for-your-aws-account-or-organization-resources"></a>
+ [Amazon EBS documentation](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonEBS.html)
+ [Amazon EBS actions](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/OperationList-query-ebs.html)
+ [Amazon EBS API reference](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/ebs/index.html)
+ [Improving Amazon EBS performance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSPerformance.html)
+ [Amazon EBS resources](https://aws.amazon.com/ebs/resources/)
+ [EBS snapshot pricing](https://aws.amazon.com/ebs/pricing/)

## Additional information
<a name="view-ebs-snapshot-details-for-your-aws-account-or-organization-additional"></a>

**EBS snapshot types**

Amazon EBS provides three types of snapshots, based on ownership and access:
+ **Owned by you** – By default, only you can create volumes from snapshots that you own.
+ **Public snapshots** – You can share snapshots publicly with all other AWS accounts. To create a public snapshot, you modify the permissions for a snapshot to share it with the AWS accounts that you specify. Users that you authorize can then create their own EBS volumes from the snapshots that you share, while your original snapshot remains unaffected. You can also make your unencrypted snapshots publicly available to all AWS users. However, for security reasons, you can't make your encrypted snapshots publicly available. Public snapshots pose a significant security risk because of the possibility of exposing personal and sensitive data. We strongly recommend against sharing your EBS snapshots with all AWS accounts. For more information about sharing snapshots, see the [AWS documentation](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-modifying-snapshot-permissions.html).
+ **Private snapshots** – You can share snapshots privately with individual AWS accounts that you specify. To share the snapshot privately with specific AWS accounts, follow the [instructions](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-modifying-snapshot-permissions.html#share-unencrypted-snapshot) in the AWS documentation, and choose **Private** for the permissions setting. Users that you have authorized can use the snapshots that you share to create their own EBS volumes, while your original snapshot remains unaffected.

**Overviews and procedures**

The following table provides links to more information about EBS snapshots, including how you can lower EBS volume costs by finding and deleting unused snapshots and by archiving rarely accessed snapshots that don't require frequent or fast retrieval.


| For information about | See | 
| --- |--- |
| **Snapshots, their features, and limitations** | [Create Amazon EBS snapshots](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-creating-snapshot.html) | 
| **How to create a snapshot** | Console: [Create a snapshot](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-creating-snapshot.html#ebs-create-snapshot)<br>AWS CLI: [create-snapshot command](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/ec2/create-snapshot.html)<br>For example:<pre>aws ec2 create-snapshot --volume-id vol-1234567890abcdef0 --description "volume snapshot"</pre> | 
| **Deleting snapshots (general information)** | [Delete an Amazon EBS snapshot](https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/ebs-deleting-snapshot.html) | 
| **How to delete a snapshot** | Console: [Delete a snapshot](https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/ebs-deleting-snapshot.html#ebs-delete-snapshot)<br>AWS CLI: [delete-snapshot command](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/ec2/delete-snapshot.html)<br>For example:<pre>aws ec2 delete-snapshot --snapshot-id snap-1234567890abcdef0</pre> | 
| **Archiving snapshots (general information)** | [Archive Amazon EBS snapshots](https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/snapshot-archive.html)<br>[Amazon EBS Snapshots Archive](https://aws.amazon.com/blogs/aws/new-amazon-ebs-snapshots-archive/) (blog post) | 
| **How to archive a snapshot** | Console: [Archive a snapshot](https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/working-with-snapshot-archiving.html#archive-snapshot)<br>AWS CLI: [modify-snapshot-tier command](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/ec2/modify-snapshot-tier.html) | 
| **How to retrieve an archived snapshot** | Console: [Restore an archived snapshot](https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/working-with-snapshot-archiving.html#restore-archived-snapshot)<br>AWS CLI: [restore-snapshot-tier command](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/ec2/restore-snapshot-tier.html) | 
| **Snapshot pricing** | [Amazon EBS pricing](https://aws.amazon.com/ebs/pricing/) | 

**FAQ**

**What is the minimum archive period?**

The minimum archive period is 90 days.

**How long would it take to restore an archived snapshot?**

It can take up to 72 hours to restore an archived snapshot from the archive tier to the standard tier, depending on the size of the snapshot.

**Are archived snapshots full snapshots?**

Archived snapshots are always full snapshots.

**Which snapshots can a user archive?**

You can archive only snapshots that you own in your account.

**Can you archive a snapshot of the root device volume of a registered Amazon Machine Image (AMI)?**

No, you can’t archive a snapshot of the root device volume of a registered AMI.

**What are security considerations for sharing a snapshot?**

When you share a snapshot, you are giving others access to all the data on the snapshot. Share snapshots only with people that you trust with your data.

**How do you share a snapshot with another AWS Region?**

Snapshots are constrained to the Region in which they were created. To share a snapshot with another Region, copy the snapshot to that Region and then share the copy.

**Can you share snapshots that are encrypted?**

You can't share snapshots that are encrypted with the default AWS managed key. You can share snapshots that are encrypted with a customer managed key only. When you share an encrypted snapshot, you must also share the customer managed key that was used to encrypt the snapshot.

**What about unencrypted snapshots?**

You can share unencrypted snapshots publicly.

# More patterns
<a name="governance-more-patterns-pattern-list"></a>

**Topics**
+ [Automate account creation by using the Landing Zone Accelerator on AWS](automate-account-creation-lza.md)
+ [Automate AWS infrastructure operations by using Amazon Bedrock](automate-aws-infrastructure-operations-by-using-amazon-bedrock.md)
+ [Automate AWS resource assessment](automate-aws-resource-assessment.md)
+ [Automatically inventory AWS resources across multiple accounts and Regions](automate-aws-resource-inventory.md)
+ [Automate AWS Service Catalog portfolio and product deployment by using AWS CDK](automate-aws-service-catalog-portfolio-and-product-deployment-by-using-aws-cdk.md)
+ [Automate dynamic pipeline management for deploying hotfix solutions in Gitflow environments by using AWS Service Catalog and AWS CodePipeline](automate-dynamic-pipeline-management-for-deploying-hotfix-solutions.md)
+ [Automate ingestion and visualization of Amazon MWAA custom metrics on Amazon Managed Grafana by using Terraform](automate-ingestion-and-visualization-of-amazon-mwaa-custom-metrics.md)
+ [Automatically attach an AWS managed policy for Systems Manager to EC2 instance profiles using Cloud Custodian and AWS CDK](automatically-attach-an-aws-managed-policy-for-systems-manager-to-ec2-instance-profiles-using-cloud-custodian-and-aws-cdk.md)
+ [Automatically encrypt existing and new Amazon EBS volumes](automatically-encrypt-existing-and-new-amazon-ebs-volumes.md)
+ [Build an AWS landing zone that includes MongoDB Atlas](build-aws-landing-zone-that-includes-mongodb-atlas.md)
+ [Centralize monitoring by using Amazon CloudWatch Observability Access Manager](centralize-monitoring-by-using-amazon-cloudwatch-observability-access-manager.md)
+ [Check EC2 instances for mandatory tags at launch](check-ec2-instances-for-mandatory-tags-at-launch.md)
+ [Clean up AWS Account Factory for Terraform (AFT) resources safely after state file loss](clean-up-aft-resources-safely-after-state-file-loss.md)
+ [Create an Amazon ECS task definition and mount a file system on EC2 instances using Amazon EFS](create-an-amazon-ecs-task-definition-and-mount-a-file-system-on-ec2-instances-using-amazon-efs.md)
+ [Create AWS Config custom rules by using AWS CloudFormation Guard policies](create-aws-config-custom-rules-by-using-aws-cloudformation-guard-policies.md)
+ [Customize default role names by using AWS CDK aspects and escape hatches](customize-default-role-names-by-using-aws-cdk-aspects-and-escape-hatches.md)
+ [Deploy and manage AWS Control Tower controls by using AWS CDK and CloudFormation](deploy-and-manage-aws-control-tower-controls-by-using-aws-cdk-and-aws-cloudformation.md)
+ [Deploy and manage AWS Control Tower controls by using Terraform](deploy-and-manage-aws-control-tower-controls-by-using-terraform.md)
+ [Deploy code in multiple AWS Regions using AWS CodePipeline, AWS CodeCommit, and AWS CodeBuild](deploy-code-in-multiple-aws-regions-using-aws-codepipeline-aws-codecommit-and-aws-codebuild.md)
+ [Deploy containerized applications on AWS IoT Greengrass V2 running as a Docker container](deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container.md)
+ [Enable Amazon GuardDuty conditionally by using AWS CloudFormation templates](enable-amazon-guardduty-conditionally-by-using-aws-cloudformation-templates.md)
+ [Enable DB2 log archiving directly to Amazon S3 in an IBM Db2 database](enable-db2-logarchive-directly-to-amazon-s3-in-ibm-db2-database.md)
+ [Export a report of AWS IAM Identity Center identities and their assignments by using PowerShell](export-a-report-of-aws-iam-identity-center-identities-and-their-assignments-by-using-powershell.md)
+ [Generate an AWS CloudFormation template containing AWS Config managed rules using Troposphere](generate-an-aws-cloudformation-template-containing-aws-config-managed-rules-using-troposphere.md)
+ [Give SageMaker notebook instances temporary access to a CodeCommit repository in another AWS account](give-sagemaker-notebook-instances-temporary-access-to-a-codecommit-repository-in-another-aws-account.md)
+ [Integrate Stonebranch Universal Controller with AWS Mainframe Modernization](integrate-stonebranch-universal-controller-with-aws-mainframe-modernization.md)
+ [Launch a CodeBuild project across AWS accounts using Step Functions and a Lambda proxy function](launch-a-codebuild-project-across-aws-accounts-using-step-functions-and-a-lambda-proxy-function.md)
+ [Manage AWS permission sets dynamically by using Terraform](manage-aws-permission-sets-dynamically-by-using-terraform.md)
+ [Migrate IIS-hosted applications to Amazon EC2 by using appcmd.exe](migrate-iis-hosted-applications-to-amazon-ec2-by-using-appcmd.md)
+ [Migrate Windows SSL certificates to an Application Load Balancer using ACM](migrate-windows-ssl-certificates-to-an-application-load-balancer-using-acm.md)
+ [Monitor IAM root user activity](monitor-iam-root-user-activity.md)
+ [Create a hierarchical, multi-Region IPAM architecture on AWS by using Terraform](multi-region-ipam-architecture.md)
+ [Optimize multi-account serverless deployments by using the AWS CDK and GitHub Actions workflows](optimize-multi-account-serverless-deployments.md)
+ [Preserve routable IP space in multi-account VPC designs for non-workload subnets](preserve-routable-ip-space-in-multi-account-vpc-designs-for-non-workload-subnets.md)
+ [Provision least-privilege IAM roles by deploying a role vending machine solution](provision-least-privilege-iam-roles-by-deploying-a-role-vending-machine-solution.md)
+ [Register multiple AWS accounts with a single email address by using Amazon SES](register-multiple-aws-accounts-with-a-single-email-address-by-using-amazon-ses.md)
+ [Remove Amazon EC2 entries across AWS accounts from AWS Managed Microsoft AD by using AWS Lambda automation](remove-amazon-ec2-entries-across-aws-accounts-from-aws-managed-microsoft-ad.md)
+ [Remove Amazon EC2 entries in the same AWS account from AWS Managed Microsoft AD by using AWS Lambda automation](remove-amazon-ec2-entries-in-the-same-aws-account-from-aws-managed-microsoft-ad.md)
+ [Secure sensitive data in CloudWatch Logs by using Amazon Macie](secure-cloudwatch-logs-using-macie.md)
+ [Send notifications for an Amazon RDS for SQL Server database instance by using an on-premises SMTP server and Database Mail](send-notifications-for-an-amazon-rds-for-sql-server-database-instance-by-using-an-on-premises-smtp-server-and-database-mail.md)
+ [Set up a Grafana monitoring dashboard for AWS ParallelCluster](set-up-a-grafana-monitoring-dashboard-for-aws-parallelcluster.md)
+ [Set up centralized logging at enterprise scale by using Terraform](set-up-centralized-logging-at-enterprise-scale-by-using-terraform.md)
+ [Set up disaster recovery for SAP on IBM Db2 on AWS](set-up-disaster-recovery-for-sap-on-ibm-db2-on-aws.md)
+ [Streamline Amazon EC2 compliance management with Amazon Bedrock agents and AWS Config](streamline-amazon-ec2-compliance-management-with-amazon-bedrock-agents-and-aws-config.md)
+ [Tag Transit Gateway attachments automatically using AWS Organizations](tag-transit-gateway-attachments-automatically-using-aws-organizations.md)
+ [Use BMC Discovery queries to extract migration data for migration planning](use-bmc-discovery-queries-to-extract-migration-data-for-migration-planning.md)
+ [Verify operational best practices for PCI DSS 4.0 by using AWS Config](verify-ops-best-practices-pci-dss-4.md)
+ [View AWS Network Firewall logs and metrics by using Splunk](view-aws-network-firewall-logs-and-metrics-by-using-splunk.md)
+ [Visualize IAM credential reports for all AWS accounts using Amazon Quick Sight](visualize-iam-credential-reports-for-all-aws-accounts-using-amazon-quicksight.md)

# Messaging & communications
<a name="messagingandcommunications-pattern-list"></a>

**Topics**
+ [Automate RabbitMQ configuration in Amazon MQ](automate-rabbitmq-configuration-in-amazon-mq.md)
+ [Improve call quality on agent workstations in Amazon Connect contact centers](improve-call-quality-on-agent-workstations-in-amazon-connect-contact-centers.md)
+ [More patterns](messagingandcommunications-more-patterns-pattern-list.md)

# Automate RabbitMQ configuration in Amazon MQ
<a name="automate-rabbitmq-configuration-in-amazon-mq"></a>

*Yogesh Bhatia and Afroz Khan, Amazon Web Services*

## Summary
<a name="automate-rabbitmq-configuration-in-amazon-mq-summary"></a>

[Amazon MQ](https://docs.aws.amazon.com/amazon-mq/) is a managed message broker service that provides compatibility with many popular message brokers. Using Amazon MQ with RabbitMQ provides a robust RabbitMQ cluster managed in the AWS Cloud with multiple brokers and configuration options. Amazon MQ provides a highly available, secure, and scalable infrastructure, and can process a large number of messages per second with ease. Multiple applications can use the infrastructure with different virtual hosts, queues, and exchanges. However, managing these configuration options or creating the infrastructure manually can require time and effort. This pattern describes a way to manage configurations for RabbitMQ in one step, through a single file. You can embed the code provided with this pattern within any continuous integration (CI) tool such as Jenkins or Bamboo. 

You can use this pattern to configure any RabbitMQ cluster. All it requires is connectivity to the cluster. Although there are many other ways to manage RabbitMQ configurations, this solution creates entire application configurations in one step, so you can manage queues and other details easily.

## Prerequisites and limitations
<a name="automate-rabbitmq-configuration-in-amazon-mq-prereqs"></a>

**Prerequisites**
+ AWS Command Line Interface (AWS CLI) [installed and configured](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2-linux.html) to point to your AWS account
+ Ansible installed, so you can run playbooks to create the configuration
+ **rabbitmqadmin** installed (for instructions, see the [RabbitMQ documentation](https://www.rabbitmq.com/management-cli.html))
+ A RabbitMQ cluster created in Amazon MQ, with healthy Amazon CloudWatch metrics

**Additional requirements**
+ Make sure to create the configurations for virtual hosts and users separately, not as part of the JSON file.
+ Make sure that the configuration JSON is part of the repository and is version-controlled.
+ The version of the **rabbitmqadmin** CLI must be the same as the version of the RabbitMQ server, so the best option is to download the CLI from the RabbitMQ console.
+ As part of the pipeline, make sure that JSON syntax is validated before each run.
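The JSON syntax check mentioned above can be as simple as attempting to parse the file before each pipeline run. The following is a minimal sketch in Python; the `validate_config` helper and the sample snippet are illustrative, not part of the attachment:

```python
import json

def validate_config(text: str) -> dict:
    """Parse RabbitMQ configuration JSON, raising ValueError on bad syntax."""
    try:
        return json.loads(text)
    except json.JSONDecodeError as err:
        raise ValueError(f"Invalid configuration JSON: {err}") from None

# Minimal snippet resembling a rabbitmqadmin export (illustrative only).
sample = '{"queues": [{"name": "orders", "vhost": "/", "durable": true}]}'
config = validate_config(sample)
print(len(config["queues"]))  # 1
```

Running a check like this as the first pipeline stage fails the run before any configuration is applied to the cluster.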

**Product versions**
+ AWS CLI version 2.0
+ Ansible version 2.9.13
+ **rabbitmqadmin** version 3.9.13 (must be the same as the RabbitMQ server version)

## Architecture
<a name="automate-rabbitmq-configuration-in-amazon-mq-architecture"></a>

**Source technology stack**
+ A RabbitMQ cluster running on an existing on-premises virtual machine (VM) or a Kubernetes cluster (on premises or in the cloud)

**Target technology stack**
+ Automated RabbitMQ configurations on Amazon MQ for RabbitMQ

**Target architecture**

There are many ways to configure RabbitMQ. This pattern uses the import configuration functionality, where a single JSON file contains all the configurations. This file applies all settings and can be managed by a version-control system such as Bitbucket or Git. This pattern uses Ansible to implement the configuration through the **rabbitmqadmin** CLI.
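
Definition files of this kind use top-level keys such as `queues`, `exchanges`, and `bindings`. The following Python sketch builds a minimal file of that shape under the root (/) virtual host; the names (`orders`, `orders-exchange`) are hypothetical, and the full configuration used by this pattern is `rabbitmqconfig.json` in the attachment:

```python
import json

# Minimal RabbitMQ definitions document: one queue, one exchange, and a
# binding between them, all under the root (/) virtual host.
definitions = {
    "queues": [{"name": "orders", "vhost": "/", "durable": True,
                "auto_delete": False, "arguments": {}}],
    "exchanges": [{"name": "orders-exchange", "vhost": "/", "type": "direct",
                   "durable": True, "auto_delete": False, "arguments": {}}],
    "bindings": [{"source": "orders-exchange", "vhost": "/",
                  "destination": "orders", "destination_type": "queue",
                  "routing_key": "orders", "arguments": {}}],
}

with open("rabbitmqconfig.json", "w") as f:
    json.dump(definitions, f, indent=2)
print("wrote rabbitmqconfig.json")
```

Because the whole configuration lives in one version-controlled file, a change to a queue or binding is a normal code review and redeploy, not a manual console edit.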

![\[Automating RabbitMQ configuration in Amazon MQ\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/294120b6-c95f-4cc5-bf85-5ad7e2abdad5/images/292e1284-5c9e-4c82-bb41-010fa84d8d74.png)


## Tools
<a name="automate-rabbitmq-configuration-in-amazon-mq-tools"></a>

**AWS services**
+ [Amazon MQ](https://docs.aws.amazon.com/amazon-mq/) is a managed message broker service that makes it easy to set up and operate message brokers in the cloud.
+ [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) helps you set up your AWS infrastructure and speed up cloud provisioning with infrastructure as code.
+ [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) enables you to interact with AWS services by using commands in a command-line shell. 

**Other tools**
+ [rabbitmqadmin](https://www.rabbitmq.com/management-cli.html) is a command-line tool for the RabbitMQ HTTP-based API. It is used to manage and monitor RabbitMQ nodes and clusters.
+ [Ansible](https://www.ansible.com/) is an open-source tool for automating applications and IT infrastructure.

**Code repository**

The JSON configuration file used in this pattern and a sample Ansible playbook are provided in the attachment.

## Epics
<a name="automate-rabbitmq-configuration-in-amazon-mq-epics"></a>

### Create your AWS infrastructure
<a name="create-your-aws-infrastructure"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a RabbitMQ cluster on AWS. | If you don't already have a RabbitMQ cluster, you can use [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) to create the stack on AWS. Or, you can use the [CloudFormation module in Ansible](https://docs.ansible.com/projects/ansible/latest/collections/amazon/aws/cloudformation_module.html) to create the stack. With the latter approach, you can use Ansible for both tasks: to create the RabbitMQ infrastructure and to manage configurations.  | General AWS, Ansible | 

### Create the Amazon MQ for RabbitMQ configuration
<a name="create-the-amqlong-for-rabbitmq-configuration"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a properties file. | Download the JSON configuration file (`rabbitmqconfig.json`) from the attachment, or export it from the RabbitMQ console. Modify it to configure queues, exchanges, and bindings. This configuration file demonstrates the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-rabbitmq-configuration-in-amazon-mq.html)These configurations are performed under the root (/) virtual host, as required by **rabbitmqadmin**.  | JSON | 
| Retrieve the details of the Amazon MQ for RabbitMQ infrastructure. | Retrieve the following details for the RabbitMQ infrastructure on AWS:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-rabbitmq-configuration-in-amazon-mq.html)You can use the AWS Management Console or the AWS CLI to retrieve this information. These details enable the Ansible playbook to connect to your AWS account and use the RabbitMQ cluster to run commands.The computer that runs the Ansible playbook must be able to access your AWS account, and AWS CLI must already be configured, as described in the *Prerequisites* section. | General AWS | 
| Create the `hosts_var` file. | Create the `hosts_var` file for Ansible and make sure that all the variables are defined in the file. Consider using Ansible Vault to store the password. You can configure the `hosts_var` file as follows (replace the asterisks with your information):<pre>RABBITMQ_HOST: "***********.mq.us-east-2.amazonaws.com"<br />RABBITMQ_VHOST: "/"<br />RABBITMQ_USERNAME: "admin"<br />RABBITMQ_PASSWORD: "*******"</pre> | Ansible | 
| Create an Ansible playbook. | For a sample playbook, see `ansible-rabbit-config.yaml` in the attachment. Download and save this file. The Ansible playbook imports and manages all RabbitMQ configurations, such as queues, exchanges, and bindings, that applications require. Follow best practices for Ansible playbooks, such as securing passwords. Use Ansible Vault for password encryption, and retrieve the RabbitMQ password from the encrypted file. | Ansible | 

### Deploy the configuration
<a name="deploy-the-configuration"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Run the playbook. | Run the Ansible playbook that you created in the previous epic.<pre>ansible-playbook ansible-rabbit-config.yaml</pre>You can verify the new configurations on the RabbitMQ console. | General AWS, RabbitMQ, Ansible | 

## Related resources
<a name="automate-rabbitmq-configuration-in-amazon-mq-resources"></a>
+ [Migrating from RabbitMQ to Amazon MQ](https://aws.amazon.com/blogs/compute/migrating-from-rabbitmq-to-amazon-mq/) (AWS blog post)
+ [Management Command Line Tool](https://www.rabbitmq.com/management-cli.html) (RabbitMQ documentation)
+ [Create or delete an AWS CloudFormation stack](https://docs.ansible.com/ansible/latest/collections/amazon/aws/cloudformation_module.html) (Ansible documentation)
+ [Migrating message driven applications to Amazon MQ for RabbitMQ](https://aws.amazon.com/blogs/compute/migrating-message-driven-applications-to-amazon-mq-for-rabbitmq/) (AWS blog post)

## Attachments
<a name="attachments-294120b6-c95f-4cc5-bf85-5ad7e2abdad5"></a>

To access additional content that is associated with this document, unzip the following file: [attachment.zip](samples/p-attach/294120b6-c95f-4cc5-bf85-5ad7e2abdad5/attachments/attachment.zip)

# Improve call quality on agent workstations in Amazon Connect contact centers
<a name="improve-call-quality-on-agent-workstations-in-amazon-connect-contact-centers"></a>

*Ernest Ozdoba, Amazon Web Services*

## Summary
<a name="improve-call-quality-on-agent-workstations-in-amazon-connect-contact-centers-summary"></a>

Call quality issues are some of the most difficult problems to troubleshoot in contact centers. To avoid voice quality issues and complex troubleshooting procedures, you must optimize your agents’ work environment and workstation settings. This pattern describes voice quality optimization techniques for agent workstations in Amazon Connect contact centers. It provides recommendations in the following areas:
+ Work environment adjustments. Agents’ surroundings don’t affect how voice is transmitted over the network, but they do have an effect on call quality. 
+ Agent workstation settings. Hardware and network configurations for contact center workstations have significant effects on call quality.
+ Browser settings. Agents use a web browser to access the Amazon Connect Contact Control Panel (CCP) website and communicate with customers, so browser settings can affect call quality.

The following components can also affect call quality, but they fall outside the scope of the workstation and aren’t covered in this pattern:
+ Traffic flows to the Amazon Web Services (AWS) Cloud over AWS Direct Connect, a full-tunnel VPN, or a split-tunnel VPN  
+ Network conditions when working in or outside the corporate office
+ Public switched telephone network (PSTN) connectivity
+ The customer’s device and telephony carrier
+ Virtual desktop infrastructure (VDI) setup

For more information relating to these areas, see [Common Contact Control Panel (CCP) Issues](https://docs.aws.amazon.com/connect/latest/adminguide/common-ccp-issues.html) and [Use the Endpoint Test Utility](https://docs.aws.amazon.com/connect/latest/adminguide/check-connectivity-tool.html) in the Amazon Connect documentation.

## Prerequisites and limitations
<a name="improve-call-quality-on-agent-workstations-in-amazon-connect-contact-centers-prereqs"></a>

**Prerequisites**
+ Headsets and workstations must comply with the requirements specified in the [Amazon Connect Administrator Guide](https://docs.aws.amazon.com/connect/latest/adminguide/ccp-agent-hardware.html). 

**Limitations**
+ The optimization techniques in this pattern apply to soft phone voice quality. They do not apply when you configure the Amazon Connect CCP in desk phone mode. However, you can use desk phone mode if your soft phone setup doesn’t provide acceptable voice quality for the call.

**Product versions**
+ For supported browsers and versions, see the [Amazon Connect Administrator Guide](https://docs.aws.amazon.com/connect/latest/adminguide/browsers.html).

## Architecture
<a name="improve-call-quality-on-agent-workstations-in-amazon-connect-contact-centers-architecture"></a>

This pattern is architecture-agnostic because it targets agent workstation settings. As the following diagram shows, the voice path from the agent to the customer is affected by the agent’s headset, browser, operating system, workstation hardware, and network.

![\[Voice path from agent to customer in Amazon Connect workstation calls\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/04ac4c80-30c4-4a48-8411-e3aac7bc2887/images/04e94efc-39d1-424d-a299-89ea17d40153.png)


In Amazon Connect contact centers, the user’s audio connectivity is established with WebRTC. Voice is encoded with the [Opus interactive audio codec](https://opus-codec.org/) and encrypted with the Secure Real-time Transport Protocol (SRTP) in transit. Other network architectures are possible, including VPN, private WAN/LAN, and ISP networks.

## Tools
<a name="improve-call-quality-on-agent-workstations-in-amazon-connect-contact-centers-tools"></a>
+ [Amazon Connect Endpoint Test Utility](https://tools.connect.aws/endpoint-test/) – This utility checks network connectivity and browser settings.
+ Browser configuration editors for WebRTC settings:
  + For Firefox: **about:config**
  + For Chrome: **chrome://flags**
+ [CCP Log Parser](https://tools.connect.aws/ccp-log-parser/index.html) – This tool helps you analyze CCP logs for troubleshooting purposes.

## Epics
<a name="improve-call-quality-on-agent-workstations-in-amazon-connect-contact-centers-epics"></a>

### Adjust the work environment
<a name="adjust-the-work-environment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Reduce background noise. | Avoid noisy environments. If this is not possible, optimize the environment with these soundproofing tips:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/improve-call-quality-on-agent-workstations-in-amazon-connect-contact-centers.html) | Agent, Manager | 

### Optimize agent workstation settings
<a name="optimize-agent-workstation-settings"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Choose the right headset. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/improve-call-quality-on-agent-workstations-in-amazon-connect-contact-centers.html) | Agent, Manager | 
| Use the headset as intended. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/improve-call-quality-on-agent-workstations-in-amazon-connect-contact-centers.html) | Agent | 
| Check workstation resources. | Make sure that your agents’ computers are performant. If they use third-party applications that consume resources, their computers might not meet the minimum [hardware requirements](https://docs.aws.amazon.com/connect/latest/adminguide/ccp-agent-hardware.html) to run CCP. If agents experience call quality issues, make sure that they have enough processing power (CPU), disk space, network bandwidth, and memory available for CCP. Agents should close any unnecessary applications and tabs to improve CCP performance and call quality. | Administrator | 
| Configure the operating system’s sound settings. | The default settings for microphone level and boost usually work fine. If you find that outbound voice is quiet or the microphone is picking up too much, it might help to adjust these settings. Microphone settings can be found in your computer’s system sound configuration (**Sound**, **Input** on [MacOS](https://support.apple.com/en-gb/guide/mac-help/mchlp2567/12.0/mac/12.0), **Microphone Properties** in [Windows](https://support.microsoft.com/en-us/windows/fix-microphone-problems-5f230348-106d-bfa4-1db5-336f35576011)). You can access advanced settings that might affect voice quality through system tools or third-party applications. Here are some of the settings you can check:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/improve-call-quality-on-agent-workstations-in-amazon-connect-contact-centers.html)If you’re experiencing voice quality issues, try restoring these values to their default settings before investigating further.For more information about these and other adjustable settings, see your device manual. | Agent, Administrator | 
| Use a wired network. | Typically, wired Ethernet has lower latency, so it is easier to provide the consistent transmission quality required for voice data transmission. We recommend a minimum of 100 Kbps of bandwidth per call. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/improve-call-quality-on-agent-workstations-in-amazon-connect-contact-centers.html) | Network administrator, Agent | 
| Update hardware drivers. | When you use a USB or other type of headset that has its own firmware, we recommend that you keep it updated with the latest version. Simple headsets that use an auxiliary port use the computer’s built-in audio device, so make sure that the operating system hardware driver is up to date. In rare cases, an audio driver update can cause audio issues, and you might need to roll it back. For more information about changing firmware and driver versions, see your device manual. | Administrator | 
| Avoid USB hubs and dongles. | When you connect your headset, avoid additional devices such as dongles, port type converters, hubs, and extension cables. These devices might affect call quality. Connect your headset directly to a port on your computer instead. | Agent | 
| Check CCP logs. | The CCP Log Parser provides an easy way to check application logs.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/improve-call-quality-on-agent-workstations-in-amazon-connect-contact-centers.html) | Agent (advanced skills) | 

### Optimize browser settings
<a name="optimize-browser-settings"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Restore default WebRTC settings. | WebRTC has to be enabled to make soft phone calls with CCP. We recommend that you keep the default settings for WebRTC-related features. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/improve-call-quality-on-agent-workstations-in-amazon-connect-contact-centers.html) | Administrator | 
| Disable browser extensions when troubleshooting. | Some browser extensions might affect call quality or even prevent calls from connecting properly. Use the incognito window or private mode in your browser, and disable all extensions. If that solves the problem, review your browser extensions and look for suspicious add-ons, or disable them individually. | Agent, Administrator | 
| Check the browser sample rate.  | Confirm that your microphone input is set to the optimal 48 kHz sample rate. For instructions, see the [Amazon Connect Administrator Guide](https://docs.aws.amazon.com/connect/latest/adminguide/verify-sample-rate.html). | Agent, Administrator | 

## Related resources
<a name="improve-call-quality-on-agent-workstations-in-amazon-connect-contact-centers-resources"></a>

If you’ve followed the steps in this pattern but you’re still encountering problems with call quality, see the following resources for troubleshooting tips.
+ Review [common Contact Control Panel (CCP) issues](https://docs.aws.amazon.com/connect/latest/adminguide/common-ccp-issues.html).
+ Check the connection with the [Endpoint Test Utility](https://docs.aws.amazon.com/en_us/connect/latest/adminguide/check-connectivity-tool.html).
+ Follow the [troubleshooting guide](https://docs.aws.amazon.com/connect/latest/adminguide/troubleshooting.html) for any other issues.

If your troubleshooting and adjustments don’t solve the call quality issue, the root cause might be external to your workstation. For further troubleshooting, contact your IT support team. 

# More patterns
<a name="messagingandcommunications-more-patterns-pattern-list"></a>

**Topics**
+ [Decompose monoliths into microservices by using CQRS and event sourcing](decompose-monoliths-into-microservices-by-using-cqrs-and-event-sourcing.md)
+ [Deploy a ChatOps solution to manage SAST scan results by using Amazon Q Developer in chat applications custom actions and CloudFormation](deploy-chatops-solution-to-manage-sast-scan-results.md)
+ [Integrate Amazon API Gateway with Amazon SQS to handle asynchronous REST APIs](integrate-amazon-api-gateway-with-amazon-sqs-to-handle-asynchronous-rest-apis.md)
+ [Streamline Amazon Lex bot development and deployment by using an automated workflow](streamline-amazon-lex-bot-development-and-deployment-using-an-automated-workflow.md)

# Multi-account strategy
<a name="multiaccountstrategy-pattern-list"></a>

**Topics**
+ [Migrate an AWS member account from AWS Organizations to AWS Control Tower](migrate-an-aws-member-account-from-aws-organizations-to-aws-control-tower.md)
+ [Set up alerts for programmatic account closures in AWS Organizations](set-up-alerts-for-programmatic-account-closures-in-aws-organizations.md)
+ [More patterns](multiaccountstrategy-more-patterns-pattern-list.md)

# Migrate an AWS member account from AWS Organizations to AWS Control Tower
<a name="migrate-an-aws-member-account-from-aws-organizations-to-aws-control-tower"></a>

*Rodolfo Jr. Cerrada, Amazon Web Services*

## Summary
<a name="migrate-an-aws-member-account-from-aws-organizations-to-aws-control-tower-summary"></a>

This pattern describes how to migrate an AWS account from AWS Organizations, where it is a member account that's governed by a management account, to AWS Control Tower. By enrolling the account in AWS Control Tower, you can take advantage of preventive and detective controls and features that streamline your account governance. You might also want to migrate your member account if your AWS Organizations management account has been compromised, and you want to move member accounts to a new organization that is governed by AWS Control Tower.

AWS Control Tower provides a framework that combines and integrates the capabilities of several other AWS services, including AWS Organizations, and ensures consistent compliance and governance across your multi-account environment. With AWS Control Tower, you can follow a set of prescribed rules and definitions that extend the capabilities of AWS Organizations. For example, you can use controls to ensure that security logs and necessary cross-account access permissions are created, and not altered.

## Prerequisites and limitations
<a name="migrate-an-aws-member-account-from-aws-organizations-to-aws-control-tower-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ AWS Control Tower set up in your target organization in AWS Organizations (for instructions, see [Setting up](https://docs.aws.amazon.com/controltower/latest/userguide/setting-up.html) in the AWS Control Tower documentation)
+ Administrator credentials for AWS Control Tower (member of the **AWSControlTowerAdmins** group)
+ Administrator credentials for the source AWS account

**Limitations**
+ The source management account in AWS Organizations must be different from the target management account in AWS Control Tower.

**Product versions**
+ AWS Control Tower version 2.3 (February 2020) or later (see [release notes](https://docs.aws.amazon.com/controltower/latest/userguide/release-notes.html))

## Architecture
<a name="migrate-an-aws-member-account-from-aws-organizations-to-aws-control-tower-architecture"></a>

The following diagram illustrates the migration process and reference architecture. This pattern migrates the AWS account from the source organization to a target organization that is governed by AWS Control Tower.  

![\[AWS Control Tower enrollment process for an AWS account that's migrated to another organization and moved to a registered OU.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/1fc2c2f0-fa5d-4068-a2b2-9e57cea2aff5/images/0654d242-0faa-4810-9e53-40ef89305b5b.png)


The enrollment process consists of these steps:

1. The target organization sends an invitation for the account to join the organization. 

1. The account accepts the invitation and becomes a member of the target organization.

1. The account is enrolled in AWS Control Tower and moved to a registered organizational unit (OU). (We recommend that you check the AWS Control Tower dashboard to confirm the enrollment.) At this point, all controls that are enabled in the registered OU take effect.
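
The invitation in step 1 can also be sent programmatically through the AWS Organizations `InviteAccountToOrganization` API. The following Python sketch only assembles the request parameters; the account ID is a placeholder, and the commented `boto3` call assumes credentials for the target management account:

```python
# Request parameters for the Organizations InviteAccountToOrganization API.
# The account ID below is a placeholder; replace it with the source member account.
invite_request = {
    "Target": {"Id": "111122223333", "Type": "ACCOUNT"},
    "Notes": "Invitation to join the AWS Control Tower managed organization",
}

# With credentials for the target management account, the call would be:
# import boto3
# handshake = boto3.client("organizations").invite_account_to_organization(**invite_request)

print(invite_request["Target"]["Type"])  # ACCOUNT
```

The API returns a handshake object that the member account administrator must accept, which corresponds to step 2 above.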

## Tools
<a name="migrate-an-aws-member-account-from-aws-organizations-to-aws-control-tower-tools"></a>

**AWS services**
+ [AWS Organizations](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_introduction.html) is an account management service that enables you to consolidate multiple AWS accounts into a single entity (an *organization*) that you create and centrally manage.
+ [AWS Control Tower](https://docs.aws.amazon.com/controltower/latest/userguide/what-is-control-tower.html) integrates the capabilities of other services, including AWS Organizations, AWS IAM Identity Center, and AWS Service Catalog, to help you enforce and manage governance rules for security, operations, and compliance at scale across all your organizations and accounts in the AWS Cloud.

## Epics
<a name="migrate-an-aws-member-account-from-aws-organizations-to-aws-control-tower-epics"></a>

### Invite the account to join the new organization with AWS Control Tower
<a name="invite-the-account-to-join-the-new-organization-with-ctower"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Sign in to AWS Control Tower. | Sign in to the AWS Control Tower console as an administrator. Currently, there is no direct way to move an AWS account from a source organization to an organization in an OU that's governed by AWS Control Tower. However, you can extend AWS Control Tower governance to an existing AWS account when you enroll it into an OU that's already governed by AWS Control Tower. That's why you have to log in to AWS Control Tower for this step. | AWS Control Tower administrator | 
| Invite the member account. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-an-aws-member-account-from-aws-organizations-to-aws-control-tower.html)Verify that no applications or network connectivity will be affected by the account transfer.This action sends an invitation email with a link to the member account. When the account administrator follows the link and accepts the invitation, the member account appears in the **AWS accounts** page. For more information, see [Managing account invitations](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_invites.html) in the AWS Organizations documentation. | AWS Control Tower administrator | 
| Test applications and connectivity. | When the member account has been registered into the new organization, it appears in the OU within a root. It also appears in the [AWS Control Tower console](https://console.aws.amazon.com/controltower), flagged as not enrolled, because it hasn't yet been enrolled in the AWS Control Tower registered OU. Verify the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-an-aws-member-account-from-aws-organizations-to-aws-control-tower.html) | AWS Control Tower administrator, Member account administrator, Application owners | 

### Prepare the account for enrollment
<a name="prepare-the-account-for-enrollment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Review controls and fix any violations. | Review the controls that are defined in the target OU, especially the preventive controls, and fix any violations. A number of [mandatory, preventive controls](https://docs.aws.amazon.com/controltower/latest/controlreference/preventive-controls.html) are enabled by default when you set up your AWS Control Tower landing zone. These can't be disabled. You must review these mandatory controls and fix the member account (manually or by using a script) before you enroll the account.Preventive controls keep AWS Control Tower registered accounts compliant and prevent policy violations. Any violation of preventive controls might affect enrollment. Detective control violations appear in the AWS Control Tower dashboard, if detected, after successful enrollment. They do not affect the enrollment process. For more information, see [About controls](https://docs.aws.amazon.com/controltower/latest/controlreference/controls.html) in the AWS Control Tower documentation. | AWS Control Tower administrator, Member account administrator | 
| Check for connectivity issues after fixing control violations. | In some cases, you might have to close specific ports or disable services to fix control violations. Make sure that applications that use those ports and services are remediated before you enroll the account. | Application owner | 

### Enroll the account into AWS Control Tower
<a name="enroll-the-account-into-ctowerlong"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Sign in to AWS Control Tower. | Sign in to the [AWS Control Tower console](https://console.aws.amazon.com/controltower). Use sign-in credentials that have administrative permissions for AWS Control Tower. Do not use the root user (management account) credentials to enroll an AWS Organizations account; doing so results in an error message. | AWS Control Tower administrator | 
| Enroll the account. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-an-aws-member-account-from-aws-organizations-to-aws-control-tower.html)For more information, see [About enrolling existing accounts](https://docs.aws.amazon.com/controltower/latest/userguide/enroll-account.html) in the AWS Control Tower documentation. | AWS Control Tower administrator | 

### Verify the account after enrollment
<a name="verify-the-account-after-enrollment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Verify the account. | From AWS Control Tower, choose **Accounts**. The account that you just enrolled has an initial state of **Enrolling**. When enrollment is complete, its state changes to **Enrolled**. | AWS Control Tower administrator, Member account administrator | 
| Check for control violations. | Controls defined in the OU will automatically apply to the enrolled member account. Monitor the AWS Control Tower dashboard for violations and fix them accordingly. For more information, see [About controls](https://docs.aws.amazon.com/controltower/latest/controlreference/controls.html) in the AWS Control Tower documentation. | AWS Control Tower administrator, Member account administrator | 

## Troubleshooting
<a name="migrate-an-aws-member-account-from-aws-organizations-to-aws-control-tower-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| You receive the error message: **An unknown error occurred. Try again later, or contact AWS Support.**  | This error occurs when you use root user credentials (management account) in AWS Control Tower to enroll a new account. AWS Service Catalog can't map the Account Factory Portfolio or product to the root user, which results in the error message. To remediate this error, use non-root, full-access user (administrator) credentials to enroll the new account. For more information about how to assign administrative access to an administrative user, see [Getting started](https://docs.aws.amazon.com/singlesignon/latest/userguide/getting-started.html) in the IAM Identity Center documentation. | 
| The AWS Control Tower **Activities** page displays a **Get Catastrophic Drift** action. | This action reflects a drift check of the service and does not indicate any issues with the AWS Control Tower setup. No action is required. | 

## Related resources
<a name="migrate-an-aws-member-account-from-aws-organizations-to-aws-control-tower-resources"></a>

**Documentation**
+ [Terminology and concepts](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_getting-started_concepts.html) (AWS Organizations documentation)
+ [What is AWS Control Tower?](https://docs.aws.amazon.com/controltower/latest/userguide/) (AWS Control Tower documentation)
+ [Removing a member account from an organization](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) (AWS Organizations documentation)
+ [Setting up](https://docs.aws.amazon.com/controltower/latest/userguide/setting-up.html#setting-up-iam) (AWS Control Tower documentation)

**Tutorials and videos**
+ [AWS Control Tower workshop](https://catalog.workshops.aws/control-tower/) (self-paced workshop)
+ [What is AWS Control Tower?](https://www.youtube.com/watch?v=daLvEb44d5Q) (video)
+ [Provisioning Users in AWS Control Tower](https://www.youtube.com/watch?v=y_n9xN5mg1g) (video)

# Set up alerts for programmatic account closures in AWS Organizations
<a name="set-up-alerts-for-programmatic-account-closures-in-aws-organizations"></a>

*Richard Milner-Watts, Debojit Bhadra, and Manav Yadav, Amazon Web Services*

## Summary
<a name="set-up-alerts-for-programmatic-account-closures-in-aws-organizations-summary"></a>

The [CloseAccount API](https://docs.aws.amazon.com/organizations/latest/APIReference/API_CloseAccount.html) for [AWS Organizations](https://aws.amazon.com/organizations/) enables you to close member accounts within an organization programmatically, without having to log in to the account with root credentials. The [RemoveAccountFromOrganization API](https://docs.aws.amazon.com/organizations/latest/APIReference/API_RemoveAccountFromOrganization.html) removes a member account from an organization in AWS Organizations so that it becomes a standalone account.

These APIs potentially increase the number of operators who can close or remove an AWS account. Any user who has access to the AWS Organizations management account through AWS Identity and Access Management (IAM) can call these APIs, so access is no longer limited to the owner of the account's root email address and any associated multi-factor authentication (MFA) device.

This pattern implements alerts when the `CloseAccount` and `RemoveAccountFromOrganization` APIs are called, so you can monitor these activities. For alerts, it uses an [Amazon Simple Notification Service](https://aws.amazon.com/sns/) (Amazon SNS) topic. You can also set up Slack notifications through a [webhook](https://api.slack.com/messaging/webhooks).

## Prerequisites and limitations
<a name="set-up-alerts-for-programmatic-account-closures-in-aws-organizations-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ An organization in AWS Organizations
+ Access to the organization management account, under the organization's root, to create the required resources

**Limitations**
+ As described in the [AWS Organizations API reference](https://docs.aws.amazon.com/organizations/latest/APIReference/API_CloseAccount.html), the `CloseAccount` API allows only 10 percent of active member accounts to be closed within a rolling 30-day period.
+ When an AWS account is closed, its status changes to `SUSPENDED`. For 90 days after this status transition, AWS Support can reopen the account. After 90 days, the account is permanently deleted.
+ Users who have access to the AWS Organizations management account and APIs might also have permissions to disable these alerts. If your primary concern is malicious behavior rather than accidental deletion, consider protecting the resources created by this pattern with an [IAM permissions boundary](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_boundaries.html).
+ The API calls for `CloseAccount` and `RemoveAccountFromOrganization` are processed in the US East (N. Virginia) Region (`us-east-1`). Therefore, you must deploy this solution in `us-east-1` to observe the events.

## Architecture
<a name="set-up-alerts-for-programmatic-account-closures-in-aws-organizations-architecture"></a>

**Target technology stack**
+ AWS Organizations
+ AWS CloudTrail
+ Amazon EventBridge
+ AWS Lambda
+ Amazon SNS

**Target architecture**

The following diagram shows the solution architecture for this pattern.

 

![\[Architecture for setting up alerts in AWS Organizations for account closures\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/ba9d9db1-fab8-4e3b-a1bb-f0be91ade5c6/images/92caee55-2722-4ba2-bdd2-66f1af35dce5.png)


1. AWS Organizations processes a `CloseAccount` or `RemoveAccountFromOrganization` request.

1. Amazon EventBridge is integrated with AWS CloudTrail to deliver these events to the default event bus.

1. A custom Amazon EventBridge rule matches the AWS Organizations requests and calls an AWS Lambda function.

1. The Lambda function delivers a message to an SNS topic, which users can subscribe to for email alerts or further processing.

1. If Slack notifications are enabled, the Lambda function delivers a message to a Slack webhook.
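Steps 2 and 3 above hinge on an EventBridge rule that filters CloudTrail events for the two AWS Organizations API calls. The following is a minimal sketch of such an event pattern, together with a rough local approximation of how EventBridge would match it. The exact pattern is defined by the CloudFormation template in the GitHub repository and may differ in detail; the inclusion of the custom `account.closure.notifier` source (used for testing) is an assumption based on the verification steps later in this pattern.

```python
import json

# Sketch of an EventBridge event pattern that matches CloseAccount and
# RemoveAccountFromOrganization calls delivered through CloudTrail, plus the
# custom test source. The rule deployed by the pattern's CloudFormation
# template may differ in detail.
EVENT_PATTERN = {
    "source": ["aws.organizations", "account.closure.notifier"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["organizations.amazonaws.com"],
        "eventName": ["CloseAccount", "RemoveAccountFromOrganization"],
    },
}

def matches(event: dict) -> bool:
    """Rough local approximation of how EventBridge evaluates this pattern."""
    detail = event.get("detail", {})
    return (
        event.get("source") in EVENT_PATTERN["source"]
        and event.get("detail-type") in EVENT_PATTERN["detail-type"]
        and detail.get("eventSource") in EVENT_PATTERN["detail"]["eventSource"]
        and detail.get("eventName") in EVENT_PATTERN["detail"]["eventName"]
    )

close_event = {
    "source": "aws.organizations",
    "detail-type": "AWS API Call via CloudTrail",
    "detail": {
        "eventSource": "organizations.amazonaws.com",
        "eventName": "CloseAccount",
    },
}
print(json.dumps(EVENT_PATTERN, indent=2))
```

Because the rule matches on `eventName`, any other AWS Organizations activity in the CloudTrail stream is ignored and never invokes the Lambda function.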

## Tools
<a name="set-up-alerts-for-programmatic-account-closures-in-aws-organizations-tools"></a>

**AWS services**
+ [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) provides a way to model a collection of related AWS and third-party resources, provision them quickly and consistently, and manage them throughout their lifecycles, by treating infrastructure as code.
+ [Amazon EventBridge](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-what-is.html) is a serverless event bus service that you can use to connect your applications with data from a variety of sources. EventBridge receives an event, an indicator of a change in environment, and applies a rule to route the event to a target. Rules match events to targets based on either the structure of the event, called an *event pattern*, or on a schedule.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a compute service that supports running code without provisioning or managing servers. Lambda runs your code only when needed and scales automatically, from a few requests each day to thousands each second. You pay only for the compute time that you consume. There is no charge when your code is not running.
+ [AWS Organizations](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_introduction.html) helps you centrally manage and govern your environment as you grow and scale your AWS resources. Using AWS Organizations, you can programmatically create new AWS accounts and allocate resources, group accounts to organize your workflows, apply policies to accounts or groups for governance, and simplify billing by using a single payment method for all your accounts.
+ [AWS CloudTrail](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-user-guide.html) monitors and records account activity across your AWS infrastructure, and gives you control over storage, analysis, and remediation actions.
+ [Amazon Simple Notification Service (Amazon SNS)](https://docs.aws.amazon.com/sns/latest/dg/welcome.html) is a fully managed messaging service for both application-to-application (A2A) and application-to-person (A2P) communication.

**Other tools**
+ The [AWS Lambda Powertools for Python](https://docs.powertools.aws.dev/lambda/python/latest/) library is a set of utilities that provides tracing, logging, metrics, and event handling features for Lambda functions.

**Code **

The code for this pattern is located in the GitHub [AWS Account Closure Notifier](https://github.com/aws-samples/aws-account-closure-notifier) repository.

The solution includes a CloudFormation template that deploys the architecture for this pattern. It uses the [AWS Lambda Powertools for Python library](https://docs.powertools.aws.dev/lambda/python/latest/) to provide logging and tracing.
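At its core, the deployed Lambda function turns the incoming CloudTrail event into a human-readable alert before publishing it. The following self-contained sketch shows that handler logic. The event field names follow the CloudTrail record format, but the function names and message layout here are illustrative, not the repository's actual implementation, and the SNS publish call is indicated only as a comment so that the sketch runs offline.

```python
def build_alert(event: dict) -> str:
    """Build a human-readable alert from a CloseAccount or
    RemoveAccountFromOrganization CloudTrail event.

    Field names follow the CloudTrail record format; the message layout is
    illustrative and may differ from the repository's implementation."""
    detail = event["detail"]
    account_id = detail.get("requestParameters", {}).get("accountId", "unknown")
    principal = detail.get("userIdentity", {}).get("arn", "unknown")
    return (
        f"AWS Organizations API call detected\n"
        f"Action:  {detail.get('eventName')}\n"
        f"Account: {account_id}\n"
        f"Caller:  {principal}\n"
        f"Time:    {detail.get('eventTime')}"
    )

def lambda_handler(event, context):
    message = build_alert(event)
    # In the deployed function, the message is published to the SNS topic,
    # for example:
    #   boto3.client("sns").publish(TopicArn=TOPIC_ARN, Message=message)
    # and, if configured, posted to the Slack webhook.
    return {"message": message}

# Hypothetical sample event using the standard AWS documentation account IDs.
sample_event = {
    "detail": {
        "eventName": "CloseAccount",
        "eventTime": "2024-01-01T00:00:00Z",
        "requestParameters": {"accountId": "111122223333"},
        "userIdentity": {"arn": "arn:aws:iam::999999999999:user/test-operator"},
    }
}
print(lambda_handler(sample_event, None)["message"])
```

Keeping the message formatting in a separate pure function makes the alert text easy to unit test without mocking SNS or Slack.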

## Epics
<a name="set-up-alerts-for-programmatic-account-closures-in-aws-organizations-epics"></a>

### Deploy the architecture
<a name="deploy-the-architecture"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Launch the CloudFormation template for the solution stack. | The CloudFormation template for this pattern is in the main branch of the [GitHub repository](https://github.com/aws-samples/aws-account-closure-notifier). It deploys the IAM roles, EventBridge rules, Lambda functions, and the SNS topic. To launch the template: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-alerts-for-programmatic-account-closures-in-aws-organizations.html) For more information about launching a CloudFormation stack, see the [AWS documentation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-console-create-stack.html). | AWS administrator | 
| Verify that the solution has launched successfully. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-alerts-for-programmatic-account-closures-in-aws-organizations.html) | AWS administrator | 
| Subscribe to the SNS topic. | (Optional) If you want to subscribe to the SNS topic: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-alerts-for-programmatic-account-closures-in-aws-organizations.html) For more information about setting up SNS notifications, see the [Amazon SNS documentation](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/US_SetupSNS.html). | AWS administrator | 

### Verify the solution
<a name="verify-the-solution"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Send a test event to the default event bus. | The [GitHub repository](https://github.com/aws-samples/aws-account-closure-notifier) provides a sample event that you can send to the EventBridge default event bus for testing. The EventBridge rule also reacts to events that use the custom event source `account.closure.notifier`. You can't use the CloudTrail event source to send this event, because it's not possible to send an event as an AWS service. To send a test event: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-alerts-for-programmatic-account-closures-in-aws-organizations.html) | AWS administrator | 
| Verify that the email notification was received. | Check the mailbox that subscribed to the SNS topic for notifications. You should receive an email with details of the account that was closed and the principal that performed the API call. | AWS administrator | 
| Verify that the Slack notification was received. | (Optional) If you specified a webhook URL for the `SlackWebhookEndpoint` parameter when you deployed the CloudFormation template, check the Slack channel that is mapped to the webhook. It should display a message with details of the account that was closed and the principal that performed the API call. | AWS administrator | 
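A test event with the custom `account.closure.notifier` source can also be sent programmatically with the EventBridge `PutEvents` API. The following sketch builds such an entry; the event body is hypothetical (using the standard AWS documentation example account IDs) rather than the repository's shipped sample, and the live `put_events` call is shown only as a comment because it requires credentials in `us-east-1`.

```python
import json

# Hypothetical test event mirroring the shape of a CloseAccount CloudTrail
# record; the GitHub repository ships its own sample event, which may differ.
detail = {
    "eventSource": "organizations.amazonaws.com",
    "eventName": "CloseAccount",
    "requestParameters": {"accountId": "111122223333"},
    "userIdentity": {"arn": "arn:aws:iam::999999999999:user/test-operator"},
}

entry = {
    "Source": "account.closure.notifier",      # custom source the rule also matches
    "DetailType": "AWS API Call via CloudTrail",
    "Detail": json.dumps(detail),               # Detail must be a JSON string
}

# To send the event for real (requires AWS credentials, us-east-1):
#   import boto3
#   boto3.client("events", region_name="us-east-1").put_events(Entries=[entry])
print(entry["Source"])
```

If the solution is deployed correctly, sending this entry should trigger the same email and Slack notifications as a genuine `CloseAccount` call.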

## Related resources
<a name="set-up-alerts-for-programmatic-account-closures-in-aws-organizations-resources"></a>
+ [CloseAccount action](https://docs.aws.amazon.com/organizations/latest/APIReference/API_CloseAccount.html) (AWS Organizations API reference)
+ [RemoveAccountFromOrganization action](https://docs.aws.amazon.com/organizations/latest/APIReference/API_RemoveAccountFromOrganization.html) (AWS Organizations API reference)
+ [AWS Lambda Powertools for Python](https://docs.powertools.aws.dev/lambda/python/latest/)

# More patterns
<a name="multiaccountstrategy-more-patterns-pattern-list"></a>

**Topics**
+ [Automate account creation by using the Landing Zone Accelerator on AWS](automate-account-creation-lza.md)
+ [Automate deletion of AWS CloudFormation stacks and associated resources](automate-deletion-cloudformation-stacks-associated-resources.md)
+ [Automate dynamic pipeline management for deploying hotfix solutions in Gitflow environments by using AWS Service Catalog and AWS CodePipeline](automate-dynamic-pipeline-management-for-deploying-hotfix-solutions.md)
+ [Build an enterprise data mesh with Amazon DataZone, AWS CDK, and AWS CloudFormation](build-enterprise-data-mesh-amazon-data-zone.md)
+ [Centralize monitoring by using Amazon CloudWatch Observability Access Manager](centralize-monitoring-by-using-amazon-cloudwatch-observability-access-manager.md)
+ [Govern permission sets for multiple accounts by using Account Factory for Terraform](govern-permission-sets-aft.md)
+ [Implement a Gitflow branching strategy for multi-account DevOps environments](implement-a-gitflow-branching-strategy-for-multi-account-devops-environments.md)
+ [Implement a GitHub Flow branching strategy for multi-account DevOps environments](implement-a-github-flow-branching-strategy-for-multi-account-devops-environments.md)
+ [Implement a Trunk branching strategy for multi-account DevOps environments](implement-a-trunk-branching-strategy-for-multi-account-devops-environments.md)
+ [Manage AWS permission sets dynamically by using Terraform](manage-aws-permission-sets-dynamically-by-using-terraform.md)
+ [Create a hierarchical, multi-Region IPAM architecture on AWS by using Terraform](multi-region-ipam-architecture.md)
+ [Set up CloudFormation drift detection in a multi-Region, multi-account organization](set-up-aws-cloudformation-drift-detection-in-a-multi-region-multi-account-organization.md)
+ [Set up DNS resolution for hybrid networks in a multi-account AWS environment](set-up-dns-resolution-for-hybrid-networks-in-a-multi-account-aws-environment.md)