

# Management & governance
<a name="governance-pattern-list"></a>

**Topics**
+ [Identify and alert when Amazon Data Firehose resources are not encrypted with an AWS KMS key](identify-and-alert-when-amazon-data-firehose-resources-are-not-encrypted-with-an-aws-kms-key.md)
+ [Automate Amazon VPC IPAM IPv4 CIDR allocations for new AWS accounts by using AFT](automate-amazon-vpc-ipam-ipv4-cidr-allocations-for-new-aws-accounts-by-using-aft.md)
+ [Automate adding or updating Windows registry entries using AWS Systems Manager](automate-adding-or-updating-windows-registry-entries-using-aws-systems-manager.md)
+ [Automatically create an RFC in AMS using Python](automatically-create-an-rfc-in-ams-using-python.md)
+ [Automatically stop and start an Amazon RDS DB instance using AWS Systems Manager Maintenance Windows](automatically-stop-and-start-an-amazon-rds-db-instance-using-aws-systems-manager-maintenance-windows.md)
+ [Centralize software package distribution in AWS Organizations by using Terraform](centralize-software-package-distribution-in-aws-organizations-by-using-terraform.md)
+ [Configure logging for .NET applications in Amazon CloudWatch Logs by using NLog](configure-logging-for-net-applications-in-amazon-cloudwatch-logs-by-using-nlog.md)
+ [Copy AWS Service Catalog products across different AWS accounts and AWS Regions](copy-aws-service-catalog-products-across-different-aws-accounts-and-aws-regions.md)
+ [Create a RACI or RASCI matrix for a cloud operating model](create-a-raci-or-rasci-matrix-for-a-cloud-operating-model.md)
+ [Create alarms for custom metrics using Amazon CloudWatch anomaly detection](create-alarms-for-custom-metrics-using-amazon-cloudwatch-anomaly-detection.md)
+ [Create an AWS Cloud9 IDE that uses Amazon EBS volumes with default encryption](create-an-aws-cloud9-ide-that-uses-amazon-ebs-volumes-with-default-encryption.md)
+ [Create tag-based Amazon CloudWatch dashboards automatically](create-tag-based-amazon-cloudwatch-dashboards-automatically.md)
+ [Document your AWS landing zone design](document-your-aws-landing-zone-design.md)
+ [Improve operational performance by enabling Amazon DevOps Guru across multiple AWS Regions, accounts, and OUs with the AWS CDK](improve-operational-performance-by-enabling-amazon-devops-guru-across-multiple-aws-regions-accounts-and-ous-with-the-aws-cdk.md)
+ [Govern permission sets for multiple accounts by using Account Factory for Terraform](govern-permission-sets-aft.md)
+ [Implement Account Factory for Terraform (AFT) by using a bootstrap pipeline](implement-account-factory-for-terraform-aft-by-using-a-bootstrap-pipeline.md)
+ [Manage AWS Service Catalog products in multiple AWS accounts and AWS Regions](manage-aws-service-catalog-products-in-multiple-aws-accounts-and-aws-regions.md)
+ [Monitor SAP RHEL Pacemaker clusters by using AWS services](monitor-sap-rhel-pacemaker-clusters-by-using-aws-services.md)
+ [Monitor application activity by using CloudWatch Logs Insights](monitor-application-activity-by-using-cloudwatch-logs-insights.md)
+ [Monitor use of a shared Amazon Machine Image across multiple AWS accounts](monitor-use-of-a-shared-amazon-machine-image-across-multiple-aws-accounts.md)
+ [View EBS snapshot details for your AWS account or organization](view-ebs-snapshot-details-for-your-aws-account-or-organization.md)
+ [More patterns](governance-more-patterns-pattern-list.md)

# Identify and alert when Amazon Data Firehose resources are not encrypted with an AWS KMS key
<a name="identify-and-alert-when-amazon-data-firehose-resources-are-not-encrypted-with-an-aws-kms-key"></a>

*Ram Kandaswamy, Amazon Web Services*

## Summary
<a name="identify-and-alert-when-amazon-data-firehose-resources-are-not-encrypted-with-an-aws-kms-key-summary"></a>

For compliance, some organizations must have encryption enabled on data delivery resources such as Amazon Data Firehose. This pattern shows a way to monitor, detect, and notify when resources are out of compliance.

This pattern helps you maintain that encryption requirement by providing automated monitoring and detection of Amazon Data Firehose delivery resources that aren’t encrypted with an AWS Key Management Service (AWS KMS) key. The solution sends alert notifications, and you can extend it to perform automatic remediation. You can apply the solution to an individual account or to a multiple-account environment, such as an environment that uses an AWS landing zone or AWS Control Tower.

## Prerequisites and limitations
<a name="identify-and-alert-when-amazon-data-firehose-resources-are-not-encrypted-with-an-aws-kms-key-prereqs"></a>

**Prerequisites**
+ Amazon Data Firehose delivery stream
+ Sufficient permissions for, and familiarity with, AWS CloudFormation, which this pattern uses for infrastructure automation

**Limitations**
+ The solution is not real time. It uses AWS CloudTrail events for detection, so there is a delay between the time an unencrypted resource is created and the time the notification is sent.

## Architecture
<a name="identify-and-alert-when-amazon-data-firehose-resources-are-not-encrypted-with-an-aws-kms-key-architecture"></a>

**Target technology stack**

The solution uses serverless technology and the following services:
+ AWS CloudTrail
+ Amazon CloudWatch
+ AWS Command Line Interface (AWS CLI)
+ AWS Identity and Access Management (IAM)
+ Amazon Data Firehose
+ AWS Lambda
+ Amazon Simple Notification Service (Amazon SNS)

**Target architecture**

![\[Process for generating alerts when Data Firehose resources aren't encrypted.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/897ba8cf-d1c2-4149-98e7-09d3d90d13d6/images/d694f718-bd0c-4d14-a2e4-e0ea58dc048e.png)


The diagram illustrates these steps:

1. A user creates or modifies an Amazon Data Firehose delivery stream.

1. A CloudTrail event is detected and matched.

1. Lambda is invoked.

1. Non-compliant resources are identified.

1. Email notification is sent.
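
Step 4's compliance check amounts to inspecting the CloudTrail record that CloudWatch Events delivers to the Lambda function. The following Python sketch shows that logic; the event shape and field names (for example, `deliveryStreamEncryptionConfigurationInput` and `keyType`) are assumptions for illustration, not the attached template's actual code.

```python
def is_noncompliant(event: dict) -> bool:
    """Return True when a CreateDeliveryStream record shows no
    customer-managed AWS KMS key (field names are assumptions)."""
    params = event.get("detail", {}).get("requestParameters", {})
    sse = params.get("deliveryStreamEncryptionConfigurationInput", {})
    # Treat anything other than a customer-managed key as non-compliant.
    return sse.get("keyType") != "CUSTOMER_MANAGED_CMK"

# Trimmed sample payload shaped like a CloudWatch Events event for a
# CreateDeliveryStream call that has no encryption configuration.
sample = {
    "detail": {
        "eventName": "CreateDeliveryStream",
        "requestParameters": {"deliveryStreamName": "my-stream"},
    }
}
print(is_noncompliant(sample))  # no KMS configuration -> True
```

A function like this would run inside the Lambda handler and, when it returns `True`, publish the alert to the Amazon SNS topic.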

**Automation and scale**

You can use CloudFormation StackSets to apply this solution to multiple AWS Regions or accounts with a single command.

## Tools
<a name="identify-and-alert-when-amazon-data-firehose-resources-are-not-encrypted-with-an-aws-kms-key-tools"></a>
+ [AWS CloudTrail](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-user-guide.html) is an AWS service that helps you enable governance, compliance, and operational and risk auditing of your AWS account. Actions taken by a user, role, or an AWS service are recorded as events in CloudTrail. Events include actions taken in the AWS Management Console, AWS CLI, AWS SDKs, and API operations.
+ [Amazon CloudWatch Events](https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/WhatIsCloudWatchEvents.html) delivers a near real-time stream of system events that describe changes in AWS resources.
+ [AWS Command Line Interface (AWS CLI)](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) is an open source tool that enables you to interact with AWS services by using commands in your command line shell. 
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) is a web service that helps you securely control access to AWS resources. You use IAM to control who is authenticated (signed in) and authorized (has permissions) to use resources. 
+ [Amazon Data Firehose](https://docs.aws.amazon.com/firehose/latest/dev/what-is-this-service.html) is a fully managed service for delivering real-time streaming data. With Firehose, you don't have to write applications or manage resources. You configure your data producers to send data to Firehose, and it automatically delivers the data to the destination that you specified.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a compute service that supports running code without provisioning or managing servers. Lambda runs your code only when needed and scales automatically, from a few requests per day to thousands per second. You pay only for the compute time that you consume—there is no charge when your code isn’t running. 
+ [Amazon Simple Notification Service (Amazon SNS)](https://docs.aws.amazon.com/sns/latest/dg/welcome.html) is a managed service that provides message delivery from publishers to subscribers (also known as producers and consumers).

## Epics
<a name="identify-and-alert-when-amazon-data-firehose-resources-are-not-encrypted-with-an-aws-kms-key-epics"></a>

### Enforce encryption for compliance
<a name="enforce-encryption-for-compliance"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Deploy CloudFormation StackSets. | In the AWS CLI, use the `firehose-encryption-checker.yaml` template (attached) to create the stack set by running the following command. Provide a valid Amazon SNS topic Amazon Resource Name (ARN) for the parameter. The deployment should successfully create CloudWatch Events rules, the Lambda function, and an IAM role with the necessary permissions, as described in the template.<pre>aws cloudformation create-stack-set \<br />    --stack-set-name my-stack-set \<br />    --template-body file://firehose-encryption-checker.yaml</pre> | Cloud architect, Systems administrator | 
| Create stack instances. | Stacks can be created in the AWS Regions of your choice as well as in one or more accounts. To create stack instances, run the following command. Replace the stack set name, account numbers, and Regions with your own.<pre>aws cloudformation create-stack-instances \<br />    --stack-set-name my-stack-set \<br />    --accounts 123456789012 223456789012 \<br />    --regions us-east-1 us-east-2 us-west-1 us-west-2 \<br />    --operation-preferences FailureToleranceCount=1</pre> | Cloud architect, Systems administrator | 

## Related resources
<a name="identify-and-alert-when-amazon-data-firehose-resources-are-not-encrypted-with-an-aws-kms-key-resources"></a>
+ [Working with CloudFormation StackSets](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/what-is-cfnstacksets.html)
+ [What is Amazon CloudWatch Events?](https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/WhatIsCloudWatchEvents.html)

## Attachments
<a name="attachments-897ba8cf-d1c2-4149-98e7-09d3d90d13d6"></a>

To access additional content that is associated with this document, unzip the following file: [attachment.zip](samples/p-attach/897ba8cf-d1c2-4149-98e7-09d3d90d13d6/attachments/attachment.zip)

# Automate Amazon VPC IPAM IPv4 CIDR allocations for new AWS accounts by using AFT
<a name="automate-amazon-vpc-ipam-ipv4-cidr-allocations-for-new-aws-accounts-by-using-aft"></a>

*Kien Pham and Alex Pazik, Amazon Web Services*

## Summary
<a name="automate-amazon-vpc-ipam-ipv4-cidr-allocations-for-new-aws-accounts-by-using-aft-summary"></a>

This pattern shows how to automate Amazon VPC IP Address Manager (IPAM) IPv4 CIDR allocations for new AWS accounts by using [AWS Control Tower Account Factory for Terraform (AFT)](https://docs.aws.amazon.com/controltower/latest/userguide/aft-overview.html). It uses an account-level customization in the `aft-account-customizations` module that allocates an IPv4 CIDR block from IPAM to a new virtual private cloud (VPC).

With IPAM, you can organize, assign, monitor, and audit IP addresses at scale, which makes it easier to plan and track IP addresses for your AWS workloads. You can [create an IPAM](https://docs.aws.amazon.com/vpc/latest/ipam/create-ipam.html) and an IPAM pool to allocate an IPv4 CIDR block to a new VPC during the account vending process.
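
Conceptually, an IPAM pool hands each newly vended account a non-overlapping block carved out of a top-level CIDR. The following sketch illustrates that allocation behavior with Python's standard `ipaddress` module; the pool CIDR and netmask length are illustrative assumptions, not values from this pattern.

```python
import ipaddress

# Illustrative top-level pool CIDR and per-VPC netmask length.
top_level_pool = ipaddress.ip_network("10.0.0.0/8")
vpc_netmask = 16

# Hand out non-overlapping /16 blocks in order, the way an IPAM pool
# allocates a CIDR to each new VPC during account vending.
allocator = top_level_pool.subnets(new_prefix=vpc_netmask)

first = next(allocator)   # CIDR for the first vended account's VPC
second = next(allocator)  # CIDR for the second vended account's VPC
print(first, second)
```

IPAM performs this bookkeeping for you at scale, tracking which allocations are in use across accounts and Regions.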

## Prerequisites and limitations
<a name="automate-amazon-vpc-ipam-ipv4-cidr-allocations-for-new-aws-accounts-by-using-aft-prereqs"></a>

**Prerequisites**
+ An active AWS account with AWS Control Tower enabled in a supported [AWS Region](https://docs.aws.amazon.com/controltower/latest/userguide/region-how.html) and AFT deployed
+ A supported [version control system (VCS) provider](https://github.com/aws-ia/terraform-aws-control_tower_account_factory?tab=readme-ov-file#input_vcs_provider), such as Bitbucket, GitHub, or GitHub Enterprise
+ Terraform Command Line Interface (CLI) [installed](https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli)
+ A runtime environment where you can run the Terraform module that installs AFT
+ AWS Command Line Interface (AWS CLI) [installed](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) and [configured](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-quickstart.html)

**Limitations**
+ Some AWS services aren’t available in all AWS Regions. For Region availability, see [AWS Services by Region](https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/). For specific endpoints, see [Service endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/aws-service-information.html), and choose the link for the service.

**Product versions**
+ [AWS Control Tower landing zone](https://docs.aws.amazon.com/controltower/latest/userguide/2022-all.html#version-3.0) version 3.0 or later, earlier than version 4.0
+ [AFT](https://github.com/aws-ia/terraform-aws-control_tower_account_factory) version 1.13.0 or later, earlier than version 2.0.0
+ Terraform OSS version 1.2.0 or later, earlier than version 2.0.0
+ [Terraform AWS Provider](https://registry.terraform.io/providers/hashicorp/aws/latest/docs) (`terraform-provider-aws`) version 5.11.0 or later, earlier than version 6.0.0
+ [Terraform module for IPAM](https://github.com/aws-ia/terraform-aws-ipam) (`aws-ia/ipam/aws`) version 2.1.0 or later

## Architecture
<a name="automate-amazon-vpc-ipam-ipv4-cidr-allocations-for-new-aws-accounts-by-using-aft-architecture"></a>

The following diagram shows the workflow and components of this pattern.

![\[Workflow to create Amazon VPC IPAM IPv4 CIDR allocation.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/986cfc7d-058b-4490-9029-6cd1eadd1dd2/images/f90b84dd-0420-460e-ac0f-9f22b4a9fdc4.png)


The workflow consists of the following main tasks:

1. **Trigger changes** – Changes to the Terraform IPAM customization are committed and pushed to the GitHub repository. The push automatically triggers the AWS CodeBuild pipeline.

1. **Automate build** – Within CodeBuild, multiple build projects trigger AWS Step Functions.

1. **Apply customization** – Step Functions coordinates with CodeBuild to plan and apply Terraform changes. This task uses the AFT Terraform module to coordinate the IPAM pool IP assignment to the AWS vended account.

## Tools
<a name="automate-amazon-vpc-ipam-ipv4-cidr-allocations-for-new-aws-accounts-by-using-aft-tools"></a>

**AWS services**
+ [AWS CodeBuild](https://docs.aws.amazon.com/codebuild/latest/userguide/welcome.html) is a fully managed build service that helps you compile source code, run unit tests, and produce artifacts that are ready to deploy.
+ [AWS CodePipeline](https://docs.aws.amazon.com/codepipeline/latest/userguide/welcome.html) helps you quickly model and configure the different stages of a software release and automate the steps required to release software changes continuously.
+ [AWS Control Tower](https://docs.aws.amazon.com/controltower/latest/userguide/what-is-control-tower.html) orchestrates the capabilities of several other [AWS services](https://docs.aws.amazon.com/controltower/latest/userguide/integrated-services.html), including AWS Organizations, AWS Service Catalog, and AWS IAM Identity Center. It can help you set up and govern an AWS multi-account environment, following prescriptive best practices.
+ [Amazon DynamoDB](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Introduction.html) is a fully managed NoSQL database service that provides fast, predictable, and scalable performance.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a compute service that helps you run code without needing to provision or manage servers. It runs your code only when needed and scales automatically, so you pay only for the compute time that you use.
+ [AWS SDK for Python (Boto3)](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/quickstart.html) is a software development kit that helps you integrate your Python application, library, or script with AWS services.
+ [AWS Service Catalog](https://docs.aws.amazon.com/servicecatalog/latest/adminguide/introduction.html) helps you centrally manage catalogs of IT services that are approved for AWS. End users can quickly deploy only the approved IT services they need, following the constraints set by your organization.
+ [AWS Step Functions](https://docs.aws.amazon.com/step-functions/latest/dg/welcome.html) is a serverless orchestration service that helps you combine AWS Lambda functions and other AWS services to build business-critical applications.
+ [Amazon Virtual Private Cloud (Amazon VPC)](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html) helps you launch AWS resources into a virtual network that you’ve defined. This virtual network resembles a traditional network that you’d operate in your own data center, with the benefits of using the scalable infrastructure of AWS. Amazon VPC IP Address Manager (IPAM) is a VPC feature that makes it easier for you to plan, track, and monitor IP addresses for your AWS workloads.

**Other tools**
+ [GitHub](https://docs.github.com/) is a developer platform that developers can use to create, store, manage, and share their code.
+ [HashiCorp Terraform](https://www.terraform.io/) is an infrastructure as code (IaC) tool that helps you create and manage cloud and on-premises resources. This includes low-level components such as compute instances, storage, and networking, and high-level components such as DNS entries and software as a service (SaaS) features.
+ [Python](https://www.python.org/) is a general-purpose computer programming language. You can use it to build applications, automate tasks, and develop services on the [AWS Cloud](https://aws.amazon.com/developer/language/python/).

**Code repository**
+ The code for this pattern is available in the GitHub [AWS Control Tower Account Factory for Terraform](https://github.com/aws-ia/terraform-aws-control_tower_account_factory) repository.

## Best practices
<a name="automate-amazon-vpc-ipam-ipv4-cidr-allocations-for-new-aws-accounts-by-using-aft-best-practices"></a>

When you deploy AFT, we recommend that you follow best practices to help ensure a secure, efficient, and successful implementation. Key guidelines and recommendations for implementing and operating AFT include the following: 
+ **Thorough review of inputs** – Carefully review and understand each [input](https://github.com/aws-ia/terraform-aws-control_tower_account_factory). Correct input configuration is crucial for the setup and functioning of AFT.
+ **Regular template updates** – Keep templates updated with the latest AWS features and Terraform versions. Regular updates help you take advantage of new functionality and maintain security.
+ **Versioning** – Pin your AFT module version, and use a separate AFT deployment for testing if possible.
+ **Scope** – Use AFT only to deploy infrastructure guardrails and customizations. Do not use it to deploy your application.
+ **Linting and validation** – The AFT pipeline requires a linted and validated Terraform configuration. Run lint, validate, and test before pushing the configuration to AFT repositories.
+ **Terraform modules** – Build reusable Terraform code as modules, and always specify the Terraform and AWS provider versions to match your organization's requirements.

## Epics
<a name="automate-amazon-vpc-ipam-ipv4-cidr-allocations-for-new-aws-accounts-by-using-aft-epics"></a>

### Set up and configure your AWS environment
<a name="set-up-and-configure-your-aws-environment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Deploy AWS Control Tower. | Set up and configure AWS Control Tower in your AWS environment to ensure centralized management and governance of your AWS accounts. For more information, see [Getting started with AWS Control Tower](https://docs.aws.amazon.com/controltower/latest/userguide/getting-started-with-control-tower.html) in the AWS Control Tower documentation. | Cloud administrator | 
| Deploy AWS Control Tower Account Factory for Terraform (AFT). | Set up AFT in a new, dedicated AFT management account. For more information, see [Configure and launch your AWS Control Tower Account Factory for Terraform](https://docs.aws.amazon.com/controltower/latest/userguide/aft-getting-started.html#aft-configure-and-launch) in the AWS Control Tower documentation. | Cloud administrator | 
| Complete AFT post-deployment. | After the AFT infrastructure deployment finishes, complete the steps in [Post-deployment steps](https://docs.aws.amazon.com/controltower/latest/userguide/aft-post-deployment.html) in the AWS Control Tower documentation. | Cloud administrator | 

### Create IPAM
<a name="create-ipam"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Delegate an IPAM administrator. | To delegate an IPAM administrator account in your AWS organization, use the following steps: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-amazon-vpc-ipam-ipv4-cidr-allocations-for-new-aws-accounts-by-using-aft.html) Alternatively, you can use the AWS CLI and run the following command:<pre>aws ec2 enable-ipam-organization-admin-account \<br />    --delegated-admin-account-id 012345678901</pre>For more information, see [Integrate IPAM with accounts in an AWS organization](https://docs.aws.amazon.com/vpc/latest/ipam/enable-integ-ipam.html) in the Amazon VPC documentation and [enable-ipam-organization-admin-account](https://docs.aws.amazon.com/cli/latest/reference/ec2/enable-ipam-organization-admin-account.html) in the *AWS CLI Command Reference*. To continue using IPAM, you must sign in to the delegated administrator account. The SSO profile or AWS environment variables specified in the next step must allow you to sign in to that account and grant permissions to create an IPAM top-level pool and regional pool. | AWS administrator | 
| Create an IPAM top-level and regional pool. | This pattern’s GitHub repository contains a Terraform template that you can use to create your IPAM top-level pool and regional pool. Then you can share the pools with an organization, organizational unit (OU), AWS account, or other resource by using AWS Resource Access Manager (AWS RAM). Use the following steps: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-amazon-vpc-ipam-ipv4-cidr-allocations-for-new-aws-accounts-by-using-aft.html) Make a note of the resource pool ID that’s output after creation. You will need the ID when you submit the account request. If you forget the resource pool ID, you can get it later from the AWS Management Console. Make sure that the created pools’ CIDRs do not overlap with any other pools in your working Region. You can create a pool without a CIDR, but you won’t be able to use the pool for allocations until you’ve provisioned a CIDR for it. You can add CIDRs to a pool at any time by editing the pool. | AWS administrator | 
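
The non-overlap requirement called out above can be checked mechanically before you provision pool CIDRs. The following sketch uses Python's standard `ipaddress` module; the pool CIDR values are illustrative assumptions.

```python
import ipaddress

def overlapping_pairs(cidrs):
    """Return every pair of CIDRs in the list that overlaps."""
    nets = [ipaddress.ip_network(c) for c in cidrs]
    return [
        (str(a), str(b))
        for i, a in enumerate(nets)
        for b in nets[i + 1:]
        if a.overlaps(b)
    ]

# Illustrative regional pool CIDRs; the last one overlaps the first.
regional_pools = ["10.0.0.0/16", "10.1.0.0/16", "10.0.128.0/17"]
print(overlapping_pairs(regional_pools))
```

An empty result means the planned pools are safe to provision in the working Region.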

### Integrate IPAM with AFT
<a name="integrate-ipam-with-aft"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Begin to create account customization. | To begin a new account customization, run the following commands from your terminal:<pre># Default name for customization repo<br />cd aft-account-customizations # Replace with your actual repo name if different than the default<br />mkdir -p APG-AFT-IPAM/terraform # Replace APG-AFT-IPAM with your desired customization name<br />cd APG-AFT-IPAM/terraform</pre> | DevOps engineer | 
| Create `aft-providers.jinja` file. | Add dynamic code to the `aft-providers.jinja` file that specifies the Terraform provider to use. Use the following steps: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-amazon-vpc-ipam-ipv4-cidr-allocations-for-new-aws-accounts-by-using-aft.html) | DevOps engineer | 
| Create `backend.jinja` file. | Add dynamic code to the `backend.jinja` file that specifies the Terraform backend to use. Use the following steps: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-amazon-vpc-ipam-ipv4-cidr-allocations-for-new-aws-accounts-by-using-aft.html) | DevOps engineer | 
| Create `main.tf` file. | Create a new `main.tf` file, and add code that defines two data sources that retrieve two values from AWS Systems Manager (`aws_ssm`) and creates the VPC. Use the following steps: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-amazon-vpc-ipam-ipv4-cidr-allocations-for-new-aws-accounts-by-using-aft.html) | DevOps engineer | 
| Create `variables.tf` file. | Create a `variables.tf` file that declares the variables used by the Terraform module. Use the following steps: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-amazon-vpc-ipam-ipv4-cidr-allocations-for-new-aws-accounts-by-using-aft.html) | DevOps engineer | 
| Create `terraform.tfvars` file. | Create a `terraform.tfvars` file that defines the values of the variables that are passed to the `main.tf` file. Use the following steps: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-amazon-vpc-ipam-ipv4-cidr-allocations-for-new-aws-accounts-by-using-aft.html) | DevOps engineer | 
| Create `outputs.tf` file. | Create a new `outputs.tf` file that exposes some values in CodeBuild. Use the following steps: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-amazon-vpc-ipam-ipv4-cidr-allocations-for-new-aws-accounts-by-using-aft.html) | DevOps engineer | 
| Commit the customization. | To commit the new customization to the account customizations repository, run the following commands:<pre># Assumes you are still in the /terraform directory<br />cd .. # Skip if you are in the account customization root directory (APG-AFT-IPAM)<br />git add .<br />git commit -m "APG customization"<br />git push origin</pre> | DevOps engineer | 
| Apply the customization. | Add code to the `account-requests.tf` file that requests a new account with the newly created account customization. The custom fields create Systems Manager parameters in the vended account that are required to create the VPC with the correct IPAM-allocated IPv4 CIDR. Use the following steps: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-amazon-vpc-ipam-ipv4-cidr-allocations-for-new-aws-accounts-by-using-aft.html) | DevOps engineer | 
| Validate the customization. | Sign in to the newly vended account, and verify that the customization was successfully applied. Use the following steps: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-amazon-vpc-ipam-ipv4-cidr-allocations-for-new-aws-accounts-by-using-aft.html) | DevOps engineer | 
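
At customization time, AFT renders the `.jinja` files for each account, substituting account-specific values before Terraform runs. The following stdlib sketch only illustrates that substitution step; AFT itself uses Jinja templating, and the placeholder names and role ARN here are assumptions rather than AFT's real template variables.

```python
from string import Template

# Simplified stand-in for an aft-providers.jinja provider block;
# $region and $account_id are illustrative placeholders.
provider_template = Template(
    'provider "aws" {\n'
    '  region = "$region"\n'
    '  assume_role {\n'
    '    role_arn = "arn:aws:iam::$account_id:role/AWSAFTExecution"\n'
    '  }\n'
    '}\n'
)

rendered = provider_template.substitute(
    region="us-east-1", account_id="123456789012"
)
print(rendered)
```

Because the rendered values come from the account request, committing the customization is all that’s needed for each vended account to get its own provider configuration.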

## Troubleshooting
<a name="automate-amazon-vpc-ipam-ipv4-cidr-allocations-for-new-aws-accounts-by-using-aft-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
|  You encounter failures in resource creation or management caused by inadequate permissions. |  Review the AWS Identity and Access Management (IAM) roles and policies that are attached to Step Functions, CodeBuild, and other services involved in the deployment. Confirm that they have the necessary permissions. If there are permission issues, adjust the IAM policies to grant the required access. | 
|  You reach AWS service quotas during deployment. |  Before you deploy the pipeline, check AWS service quotas for resources such as Amazon Simple Storage Service (Amazon S3) buckets, IAM roles, and AWS Lambda functions. If necessary, request increases to the quotas. For more information, see [AWS service quotas](https://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html) in the *AWS General Reference*. | 

## Related resources
<a name="automate-amazon-vpc-ipam-ipv4-cidr-allocations-for-new-aws-accounts-by-using-aft-resources"></a>

**AWS service documentation**
+ [AWS Control Tower User Guide](https://docs.aws.amazon.com/controltower/latest/userguide/what-is-control-tower.html)
+ [How IPAM works](https://docs.aws.amazon.com/vpc/latest/ipam/how-it-works-ipam.html)
+ [Security best practices in IAM ](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html)
+ [AWS service quotas](https://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html)

**Other resources**
+ [Terraform AWS Provider documentation](https://registry.terraform.io/providers/hashicorp/aws/latest/docs)

# Automate adding or updating Windows registry entries using AWS Systems Manager
<a name="automate-adding-or-updating-windows-registry-entries-using-aws-systems-manager"></a>

*Appasaheb Bagali, Amazon Web Services*

## Summary
<a name="automate-adding-or-updating-windows-registry-entries-using-aws-systems-manager-summary"></a>

AWS Systems Manager is a remote management tool for Amazon Elastic Compute Cloud (Amazon EC2) instances that provides visibility and control over your infrastructure on AWS. You can use this versatile tool to remediate Windows registry settings that a security vulnerability scan report identifies as vulnerabilities.

This pattern covers the steps to help keep your EC2 instances that run the Windows operating system secure by automating registry changes that are recommended for the safety of your environment. The pattern uses Systems Manager Run Command to run a Command document. The code is attached, and a portion of it is included in the *Code* section.

## Prerequisites and limitations
<a name="automate-adding-or-updating-windows-registry-entries-using-aws-systems-manager-prereqs"></a>
+ An active AWS account
+ Permissions to access the EC2 instance and Systems Manager

## Architecture
<a name="automate-adding-or-updating-windows-registry-entries-using-aws-systems-manager-architecture"></a>

**Target technology stack**
+ A virtual private cloud (VPC), with two subnets and a network address translation (NAT) gateway
+ A Systems Manager Command document to add or update the registry name and value
+ Systems Manager Run Command to run the Command document on the specified EC2 instances

**Target architecture**

![\[How to automatically add or update Windows registry entries using AWS Systems Manager.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/2ecf680d-9f36-4070-8a19-2af262db7fcc/images/c992bcb0-d894-4aa7-9bb3-3d60c9c79e8d.png)


 

## Tools
<a name="automate-adding-or-updating-windows-registry-entries-using-aws-systems-manager-tools"></a>

+ [IAM policies and roles](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) – AWS Identity and Access Management (IAM) is a web service that helps you securely control access to AWS resources. You use IAM to control who is authenticated (signed in) and authorized (has permissions) to use resources.
+ [Amazon Simple Storage Service](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) – Amazon Simple Storage Service (Amazon S3) is storage for the internet. It is designed to make web-scale computing easier for developers. In this pattern, an S3 bucket is used to store the Systems Manager logs.
+ [AWS Systems Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/what-is-systems-manager.html) – AWS Systems Manager is an AWS service that you can use to view and control your infrastructure on AWS. Systems Manager helps you maintain security and compliance by scanning your *managed instances* and reporting (or taking corrective action on) any policy violations it detects.
+ [AWS Systems Manager Command document](https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-ssm-docs.html) – AWS Systems Manager Command documents are used by Run Command. Most Command documents are supported on all Linux and Windows Server operating systems supported by Systems Manager.
+ [AWS Systems Manager Run Command](https://docs.aws.amazon.com/systems-manager/latest/userguide/execute-remote-commands.html) – AWS Systems Manager Run Command gives you a way to manage the configuration of your managed instances remotely and securely. Using Run Command, you can automate common administrative tasks and perform one-time configuration changes at scale.

**Code**

You can use the following example code to add or update a Microsoft Windows registry entry named `Version` under the path `HKCU:\Software\ScriptingGuys\Scripts`, setting its value to `2`.

```
# Windows registry path to add or update
$registryPath = 'HKCU:\Software\ScriptingGuys\Scripts'
# Windows registry name to add or update
$name = 'Version'
# Windows registry value to add or update
$value = 2
# Use the Test-Path cmdlet to see whether the registry key exists.
IF (!(Test-Path $registryPath)) {
    New-Item -Path $registryPath -Force | Out-Null
}
New-ItemProperty -Path $registryPath -Name $name -Value $value -PropertyType DWORD -Force | Out-Null
echo 'Registry Path:'$registryPath
echo 'Registry Name:'$name
echo 'Registry Value:'(Get-ItemProperty -Path $registryPath -Name $name).$name
```

The full Systems Manager Command document JavaScript Object Notation (JSON) code example is attached. 
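Once the Command document exists, a run can also be initiated programmatically instead of through the console. The following sketch builds the parameters that the SSM `SendCommand` API expects; the document name, instance ID, and S3 bucket are hypothetical placeholders, and the actual `send_command` call is shown commented out because it requires AWS credentials.

```python
def build_run_command_request(document_name, instance_ids, log_bucket):
    """Build the parameter set for an SSM Run Command invocation."""
    return {
        "DocumentName": document_name,      # the custom Command document
        "InstanceIds": instance_ids,        # target Windows EC2 instances
        "OutputS3BucketName": log_bucket,   # S3 bucket that stores run logs
        "Comment": "Apply Windows registry update",
    }

# Hypothetical resource names -- replace with your own.
request = build_run_command_request(
    "UpdateWindowsRegistry", ["i-0123456789abcdef0"], "my-ssm-logs-bucket"
)

# import boto3
# ssm = boto3.client("ssm")
# response = ssm.send_command(**request)   # requires AWS credentials
```

Running the same document on a schedule, or against tagged instances instead of explicit IDs, is also possible through the `Targets` parameter of `SendCommand`.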

## Epics
<a name="automate-adding-or-updating-windows-registry-entries-using-aws-systems-manager-epics"></a>

### Set up a VPC
<a name="set-up-a-vpc"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a VPC. | On the AWS Management Console, create a VPC that has public and private subnets and a NAT gateway. For more information, see the [AWS documentation](https://docs.aws.amazon.com/batch/latest/userguide/create-public-private-vpc.html). | Cloud administrator | 
| Create security groups. | Ensure that each security group allows access for Remote Desktop Protocol (RDP) from the source IP address. | Cloud administrator | 

### Create an IAM policy and an IAM role
<a name="create-an-iam-policy-and-an-iam-role"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an IAM policy. | Create an IAM policy that provides access to Amazon S3, Amazon EC2, and Systems Manager. | Cloud administrator | 
| Create an IAM role. | Create an IAM role, and attach the IAM policy that provides access to Amazon S3, Amazon EC2, and Systems Manager. | Cloud administrator | 

### Run the automation
<a name="run-the-automation"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the Systems Manager Command document. | Create a Systems Manager Command document that will deploy the Microsoft Windows registry changes to add or update. | Cloud administrator | 
| Run the Systems Manager Run Command. | Run the Systems Manager Run Command, selecting the Command document and the Systems Manager target instances. This pushes the Microsoft Windows registry change in the selected Command document to the target instances. | Cloud administrator | 

## Related resources
<a name="automate-adding-or-updating-windows-registry-entries-using-aws-systems-manager-resources"></a>
+ [AWS Systems Manager](https://aws.amazon.com/systems-manager/)
+ [AWS Systems Manager documents](https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-ssm-docs.html)
+ [AWS Systems Manager Run Command](https://docs.aws.amazon.com/systems-manager/latest/userguide/execute-remote-commands.html)

## Attachments
<a name="attachments-2ecf680d-9f36-4070-8a19-2af262db7fcc"></a>

To access additional content that is associated with this document, unzip the following file: [attachment.zip](samples/p-attach/2ecf680d-9f36-4070-8a19-2af262db7fcc/attachments/attachment.zip)

# Automatically create an RFC in AMS using Python
<a name="automatically-create-an-rfc-in-ams-using-python"></a>

*Gnanasekaran Kailasam, Amazon Web Services*

## Summary
<a name="automatically-create-an-rfc-in-ams-using-python-summary"></a>

AWS Managed Services (AMS) helps you to operate your cloud-based infrastructure more efficiently and securely by providing ongoing management of your Amazon Web Services (AWS) infrastructure. To make a change to your managed environment, you need to create and submit a new request for change (RFC) that includes a change type (CT) ID for a particular operation or action.

However, manually creating an RFC can take around five minutes and teams in your organization might need to submit multiple RFCs every day. This pattern helps you to automate the RFC creation process, reduce the creation time for each RFC, and eliminate manual errors.   

This pattern describes how to use Python code to automatically create the `Stop EC2 instance` RFC that stops Amazon Elastic Compute Cloud (Amazon EC2) instances in your AMS account. You can then apply this pattern’s approach and the Python automation to other RFC types. 

## Prerequisites and limitations
<a name="automatically-create-an-rfc-in-ams-using-python-prereqs"></a>

**Prerequisites**
+ An AMS Advanced account. For more information about this, see [AMS operations plans](https://docs.aws.amazon.com/managedservices/latest/accelerate-guide/what-is-ams-op-plans.html) in the AWS Managed Services documentation.
+ At least one existing EC2 instance in your AMS account.
+ An understanding of how to create and submit RFCs in AMS.
+ Familiarity with Python.

**Limitations**
+ You can only use RFCs for changes in your AMS account. Your AWS account uses different processes for similar changes.

## Architecture
<a name="automatically-create-an-rfc-in-ams-using-python-architecture"></a>

**Technology stack**
+ AMS
+ AWS Command Line Interface (AWS CLI)
+ AWS SDK for Python (Boto3)
+ Python and its required packages (JSON and Boto3)

**Automation and scale**

This pattern provides sample code to automate the `Stop EC2 instance` RFC, but you can use this pattern’s sample code and approach for other RFCs.

## Tools
<a name="automatically-create-an-rfc-in-ams-using-python-tools"></a>
+ [AWS Managed Services](https://docs.aws.amazon.com/managedservices/latest/ctexguide/ex-rfc-use-examples.html) – AMS helps you to operate your AWS infrastructure more efficiently and securely.
+ [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) – AWS Command Line Interface (AWS CLI) is a unified tool to manage your AWS services. In AMS, the change management API provides operations to create and manage RFCs.
+ [AWS SDK for Python (Boto3)](https://docs.aws.amazon.com/pythonsdk/) – SDK for Python makes it easy to integrate your Python application, library, or script with AWS services.

**Code**

The `AMS Stop EC2 Instance.zip` file (attached) contains the Python code for creating a `Stop EC2 instance` RFC. You can also configure this code to submit a single RFC for multiple EC2 instances.
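As a rough illustration of what such code assembles, the sketch below builds an RFC payload for the AMS change management `CreateRfc` operation. The CT ID and the execution-parameter shape shown here are placeholders, not the real values; extract the actual CT ID, version, and parameters for your account as described in the *Epics* section.

```python
import json

def build_stop_ec2_rfc(change_type_id, change_type_version, instance_ids):
    """Build a CreateRfc payload for a Stop EC2 instance RFC.

    change_type_id and the execution-parameter shape are hypothetical
    placeholders -- replace them with the values you extract for your account.
    """
    execution_params = {"InstanceIds": instance_ids}
    return {
        "ChangeTypeId": change_type_id,
        "ChangeTypeVersion": change_type_version,
        "Title": "Stop EC2 instances",
        # Execution parameters are passed as a JSON-encoded string.
        "ExecutionParameters": json.dumps(execution_params),
    }

rfc = build_stop_ec2_rfc("ct-xxxxxxxxxxxxx", "1.0", ["i-0123456789abcdef0"])
# An AMS change management client would then create and submit the RFC
# using this payload -- both steps require the AMS CLI/SDK and credentials.
```

Submitting one RFC for multiple instances amounts to passing several instance IDs in the same execution-parameters list.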

## Epics
<a name="automatically-create-an-rfc-in-ams-using-python-epics"></a>

### Option 1 – Set up environment for macOS or Linux
<a name="option-1-ndash-set-up-environment-for-macos-or-linux"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
|  Install and validate Python.  | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automatically-create-an-rfc-in-ams-using-python.html) | AWS systems administrator | 
| Install AWS CLI.  | Run the `pip install awscli --upgrade --user` command to install the AWS CLI. | AWS systems administrator | 
|  Install Boto3. | Run the `pip install boto3` command to install Boto3. | AWS systems administrator | 
| Install JSON.  | The `json` module is included in the Python standard library, so no separate installation is required. | AWS systems administrator | 
| Set up AMS CLI.  | Sign in to the AWS Management Console, open the AMS console, and then choose **Documentation**. Download the .zip file that contains the AMS CLI, unzip it, and then install it on your local machine. After you install the AMS CLI, run the `aws amscm help` command. The output provides information about the AMS change management process. | AWS systems administrator | 

### Option 2 – Set up environment for Windows
<a name="option-2-ndash-set-up-environment-for-windows"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
|  Install and validate Python.  | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automatically-create-an-rfc-in-ams-using-python.html) | AWS systems administrator | 
| Install AWS CLI.  | Run the `pip install awscli --upgrade --user` command to install the AWS CLI. | AWS systems administrator | 
|  Install Boto3. | Run the `pip install boto3` command to install Boto3. | AWS systems administrator | 
| Install JSON.  | The `json` module is included in the Python standard library, so no separate installation is required. | AWS systems administrator | 
| Set up AMS CLI.  | Sign in to the AWS Management Console, open the AMS console, and then choose **Documentation**. Download the .zip file that contains the AMS CLI, unzip it, and then install it on your local machine. After you install the AMS CLI, run the `aws amscm help` command. The output provides information about the AMS change management process. | AWS systems administrator | 

### Extract the CT ID and execution parameters for the RFC
<a name="extract-the-ct-id-and-execution-parameters-for-the-rfc"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Extract the CT ID, version, and execution parameters for the RFC.  | Each RFC has a different CT ID, version, and execution parameters. You can extract this information by using one of the following options:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automatically-create-an-rfc-in-ams-using-python.html) To adapt this pattern’s Python automation for other RFCs, replace the CT type and parameter values in the `ams_stop_ec2_instance` Python code file from the `AMS Stop EC2 Instance.zip` file (attached) with those that you extracted. | AWS systems administrator | 

### Run the Python automation
<a name="run-the-python-automation"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Run the Python automation. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automatically-create-an-rfc-in-ams-using-python.html) | AWS systems administrator | 

## Related resources
<a name="automatically-create-an-rfc-in-ams-using-python-resources"></a>
+ [What are change types?](https://docs.aws.amazon.com/managedservices/latest/ctexguide/understanding-cts.html)
+ [CLI tutorial: High availability two-tier stack (Linux/RHEL)](https://docs.aws.amazon.com/managedservices/latest/ctexguide/tut-create-ha-stack.html)

## Attachments
<a name="attachments-2b6c68fd-a27e-4c8b-934d-caec50c196ed"></a>

To access additional content that is associated with this document, unzip the following file: [attachment.zip](samples/p-attach/2b6c68fd-a27e-4c8b-934d-caec50c196ed/attachments/attachment.zip)

# Automatically stop and start an Amazon RDS DB instance using AWS Systems Manager Maintenance Windows
<a name="automatically-stop-and-start-an-amazon-rds-db-instance-using-aws-systems-manager-maintenance-windows"></a>

*Ashita Dsilva, Amazon Web Services*

## Summary
<a name="automatically-stop-and-start-an-amazon-rds-db-instance-using-aws-systems-manager-maintenance-windows-summary"></a>

This pattern demonstrates how to automatically stop and start an Amazon Relational Database Service (Amazon RDS) DB instance on a specific schedule (for example, shutting down a DB instance outside of business hours to reduce costs) by using AWS Systems Manager Maintenance Windows. For typical use cases, Systems Manager provides a cost-effective way to schedule these actions.

AWS Systems Manager Automation provides the `AWS-StopRdsInstance` and `AWS-StartRdsInstance` runbooks to stop and start Amazon RDS DB instances. This means that you don’t need to write custom logic with AWS Lambda functions or create an Amazon CloudWatch Events rule.

Systems Manager provides two capabilities for scheduling tasks: [State Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-state-about.html) and [Maintenance Windows](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-maintenance.html). State Manager sets and maintains the required state configuration for resources in your Amazon Web Services (AWS) account one time or on a specific schedule. Maintenance Windows runs tasks on the resources in your account during a specific time window. Although you can use this pattern’s approach with State Manager or Maintenance Windows, we recommend that you use Maintenance Windows because it can run one or more tasks based on assigned priority and can also run AWS Lambda functions and AWS Step Functions tasks. For more information about State Manager and Maintenance Windows, see [Choosing between State Manager and Maintenance Windows](https://docs.aws.amazon.com/systems-manager/latest/userguide/state-manager-vs-maintenance-windows.html) in the Systems Manager documentation.

This pattern provides detailed steps to configure two separate maintenance windows that use cron expressions to stop and then start an Amazon RDS DB instance. 

## Prerequisites and limitations
<a name="automatically-stop-and-start-an-amazon-rds-db-instance-using-aws-systems-manager-maintenance-windows-prereqs"></a>

**Prerequisites**
+ An active AWS account.
+ An existing Amazon RDS DB instance that you want to stop and start on a specific schedule.
+ Cron expressions for your required schedule. For example, the expression `cron(0 9 ? * MON-FRI *)` runs the task at 09:00 on every Monday, Tuesday, Wednesday, Thursday, and Friday. For more information, see [Cron and rate expressions for maintenance windows](https://docs.aws.amazon.com/systems-manager/latest/userguide/reference-cron-and-rate-expressions.html#reference-cron-and-rate-expressions-maintenance-window) in the Systems Manager documentation.
+ Familiarity with Systems Manager.
+ Permissions to start and stop the RDS instance. For more information, see the [Epics](#automatically-stop-and-start-an-amazon-rds-db-instance-using-aws-systems-manager-maintenance-windows-epics) section.

**Limitations**
+ An Amazon RDS DB instance can be stopped for up to seven days at one time. After seven days, the DB instance automatically restarts to ensure that it receives any required maintenance updates.
+ You can’t stop a DB instance that is a read replica or that has a read replica.
+ You can’t stop an Amazon RDS for SQL Server DB instance in a Multi-AZ configuration.
+ Service quotas apply to Maintenance Windows and Systems Manager Automation. For more information about service quotas, see [AWS Systems Manager endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/ssm.html) in the AWS General Reference documentation. 
+ Some AWS services aren’t available in all AWS Regions. For Region availability, see [AWS services by Region](https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/). For specific endpoints, see the [Service endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/aws-service-information.html) page, and choose the link for the service.

## Architecture
<a name="automatically-stop-and-start-an-amazon-rds-db-instance-using-aws-systems-manager-maintenance-windows-architecture"></a>

The following diagram shows the workflow to automatically stop and start an Amazon RDS DB instance.

![\[Workflow to automatically stop and start an Amazon RDS DB instance\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/45b81621-5674-4bcf-bf7c-75ae6f62524e/images/7d943830-716e-46a3-be44-7e668c3c01ff.png)


 

The workflow has the following steps:

1. Create a maintenance window and use cron expressions to define the stop and start schedule for your Amazon RDS DB instances.

2. Register a Systems Manager Automation task to the maintenance window by using the `AWS-StopRdsInstance` or `AWS-StartRdsInstance` runbook.

3. Register a target with the maintenance window by using a tag-based resource group for your Amazon RDS DB instances.
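The three workflow steps above map directly onto three SSM API calls. The following sketch builds the request payloads for them; the schedule, resource-group name, and role ARN are placeholder values, and the boto3 calls themselves are shown as comments because they require AWS credentials.

```python
def maintenance_window_requests(schedule, resource_group, automation_role_arn):
    """Build the requests that wire up a stop-RDS maintenance window.

    The schedule, resource-group name, and role ARN are hypothetical
    placeholders -- substitute your own values.
    """
    window = {
        "Name": "stop-rds-instances",
        "Schedule": schedule,          # cron expression for the window
        "Duration": 1,                 # window length in hours (minimum)
        "Cutoff": 0,                   # stop initiating tasks N hours before end
        "AllowUnassociatedTargets": False,
    }
    target = {
        "ResourceType": "RESOURCE_GROUP",
        "Targets": [{"Key": "resource-groups:Name", "Values": [resource_group]}],
    }
    task = {
        "TaskArn": "AWS-StopRdsInstance",  # the Automation runbook
        "TaskType": "AUTOMATION",
        "ServiceRoleArn": automation_role_arn,
        "MaxConcurrency": "1",
        "MaxErrors": "1",
    }
    return window, target, task

window, target, task = maintenance_window_requests(
    "cron(0 21 ? * MON-FRI *)",
    "rds-startstop-group",
    "arn:aws:iam::111122223333:role/SystemsManager-AutomationAdministrationRole",
)
# With boto3: ssm.create_maintenance_window(**window) returns a WindowId,
# which must then be added to the target and task payloads before calling
# register_target_with_maintenance_window and register_task_with_maintenance_window.
```

A second window for starting the instances would use the same shape with a different schedule and `AWS-StartRdsInstance` as the task ARN.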

**Technology stack**
+ AWS CloudFormation
+ AWS Identity and Access Management (IAM)
+ Amazon RDS
+ Systems Manager

**Automation and scale**

You can stop and start multiple Amazon RDS DB instances at the same time by tagging the required Amazon RDS DB instances, creating a resource group that includes all the tagged DB instances, and registering this resource group as a target for the maintenance window.

## Tools
<a name="automatically-stop-and-start-an-amazon-rds-db-instance-using-aws-systems-manager-maintenance-windows-tools"></a>
+ [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) is a service that helps you model and set up your AWS resources.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) is a web service that helps you securely control access to AWS resources.
+ [Amazon Relational Database Service (Amazon RDS)](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Welcome.html) is a web service that makes it easier to set up, operate, and scale a relational database in the AWS Cloud.
+ [AWS Resource Groups](https://docs.aws.amazon.com/ARG/latest/userguide/welcome.html) helps you organize AWS resources into groups, tag resources, and manage, monitor, and automate tasks on grouped resources.
+ [AWS Systems Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/what-is-systems-manager.html) is an AWS service that you can use to view and control your infrastructure on AWS. This pattern uses the following features of Systems Manager:
  + [AWS Systems Manager Automation](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-automation.html) simplifies common maintenance and deployment tasks of Amazon Elastic Compute Cloud (Amazon EC2) instances and other AWS resources.
  + [AWS Systems Manager Maintenance Windows](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-maintenance.html) helps you define a schedule for when to perform potentially disruptive actions on your instances.

## Epics
<a name="automatically-stop-and-start-an-amazon-rds-db-instance-using-aws-systems-manager-maintenance-windows-epics"></a>

### Create and configure the IAM service role for Systems Manager Automation
<a name="create-and-configure-the-iam-service-role-for-sys-automation"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Configure the IAM service role for Systems Manager Automation. | Sign in to the AWS Management Console and create a service role for Systems Manager Automation. You can use one of the following two methods to create this service role:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automatically-stop-and-start-an-amazon-rds-db-instance-using-aws-systems-manager-maintenance-windows.html) The Systems Manager Automation workflow invokes Amazon RDS by using a service role to perform start and stop actions on the Amazon RDS DB instance. The service role must be configured with the following [inline policy](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-attach-detach.html#add-policies-console) that has permissions to start and stop the Amazon RDS DB instance:<pre>{<br />    "Version": "2012-10-17",<br />    "Statement": [<br />        {<br />            "Sid": "RdsStartStop",<br />            "Effect": "Allow",<br />            "Action": [<br />                "rds:StopDBInstance",<br />                "rds:StartDBInstance"<br />            ],<br />            "Resource": "<RDS_Instance_ARN>"<br />        },<br />        {<br />            "Sid": "RdsDescribe",<br />            "Effect": "Allow",<br />            "Action": "rds:DescribeDBInstances",<br />            "Resource": "*"<br />        }<br />    ]<br />}</pre>Make sure that you replace `<RDS_Instance_ARN>` with the Amazon Resource Name (ARN) of your Amazon RDS DB instance. If you are unfamiliar with using IAM policies and roles, follow the instructions in the *Solution Overview* section of the [Schedule Amazon RDS stop and start using AWS Systems Manager](https://aws.amazon.com/blogs/database/schedule-amazon-rds-stop-and-start-using-aws-systems-manager/) blog post. Make sure that you record the ARN of the service role. | AWS administrator | 

### Create a resource group
<a name="create-a-resource-group"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Tag the Amazon RDS DB instances. | Open the [Amazon RDS console](https://console.aws.amazon.com/rds/) and tag the Amazon RDS DB instances that you want to add to the resource group. A tag is metadata assigned to an AWS resource and consists of a key-value pair. We recommend that you use *Action* as the **Tag key** and *StartStop* as the **Value**. For more information about this, see [Adding, listing, and removing tags](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Tagging.html#Tagging.HowTo) in the Amazon RDS documentation. | AWS administrator | 
| Create a resource group for your tagged Amazon RDS DB instances. | Open the [AWS Resource Groups console](https://console.aws.amazon.com/resource-groups) and create a resource group based on the tag that you created for your Amazon RDS DB instances. Under **Grouping Criteria**, make sure that you choose **AWS::RDS::DBInstance** for the resource type and then provide the tag's key-value pair (for example, "Action-StartStop"). This ensures that the service only checks for Amazon RDS DB instances and not other resources that have this tag. Make sure that you record the resource group’s name. For more information and detailed steps, see [Build a tag-based query and create a group](https://docs.aws.amazon.com/ARG/latest/userguide/gettingstarted-query.html#gettingstarted-query-tag-based) in the AWS Resource Groups documentation.  | AWS administrator | 
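The console steps above can also be expressed as a single Resource Groups API call. The sketch below builds the tag-based query for `create_group`; the group name and tag values mirror the recommendation above, and the boto3 call is commented out because it requires AWS credentials.

```python
import json

def tag_based_group_request(group_name, tag_key, tag_value):
    """Build a create_group request for a tag-based group of RDS DB instances."""
    query = {
        # Restrict the group to RDS DB instances only.
        "ResourceTypeFilters": ["AWS::RDS::DBInstance"],
        "TagFilters": [{"Key": tag_key, "Values": [tag_value]}],
    }
    return {
        "Name": group_name,
        # The tag query is passed as a JSON-encoded string.
        "ResourceQuery": {"Type": "TAG_FILTERS_1_0", "Query": json.dumps(query)},
    }

request = tag_based_group_request("rds-startstop-group", "Action", "StartStop")
# import boto3
# boto3.client("resource-groups").create_group(**request)  # requires credentials
```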

### Configure a maintenance window to stop the Amazon RDS DB instances
<a name="configure-a-maintenance-window-to-stop-the-rds-db-instances"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a maintenance window. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automatically-stop-and-start-an-amazon-rds-db-instance-using-aws-systems-manager-maintenance-windows.html) The task to stop the DB instance runs almost instantly when initiated and doesn't span the entire duration of the maintenance window. This pattern provides the minimum values for **Duration** and **Stop initiating tasks** because they are required parameters for a maintenance window. For more information and detailed steps, see [Create a maintenance window (console)](https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-maintenance-create-mw.html) in the Systems Manager documentation. | AWS administrator | 
| Assign a target to the maintenance window. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automatically-stop-and-start-an-amazon-rds-db-instance-using-aws-systems-manager-maintenance-windows.html) For more information and detailed steps, see [Assign targets to a maintenance window (console)](https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-maintenance-assign-targets.html) in the Systems Manager documentation. | AWS administrator | 
| Assign a task to the maintenance window. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automatically-stop-and-start-an-amazon-rds-db-instance-using-aws-systems-manager-maintenance-windows.html) The **Service role** option defines the service role that the maintenance window requires to run tasks. Note that this role is not the same as the service role that you created earlier for Systems Manager Automation. For more information and detailed steps, see [Assign tasks to a maintenance window (console)](https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-maintenance-assign-tasks.html) in the Systems Manager documentation. | AWS administrator | 

### Configure a maintenance window to start the Amazon RDS DB instances
<a name="configure-a-maintenance-window-to-start-the-rds-db-instances"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Configure a maintenance window to start the Amazon RDS DB instances. | Repeat the steps from the *Configure a maintenance window to stop the Amazon RDS DB instances* epic to configure another maintenance window that starts the Amazon RDS DB instances at a scheduled time. You must make the following changes when you configure the maintenance window to start the DB instances:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automatically-stop-and-start-an-amazon-rds-db-instance-using-aws-systems-manager-maintenance-windows.html) | AWS administrator | 

## Related resources
<a name="automatically-stop-and-start-an-amazon-rds-db-instance-using-aws-systems-manager-maintenance-windows-resources"></a>
+ [Use Systems Manager Automation documents to manage instances and cut costs off-hours](https://aws.amazon.com/blogs/mt/systems-manager-automation-documents-manage-instances-cut-costs-off-hours/) (AWS blog post)

# Centralize software package distribution in AWS Organizations by using Terraform
<a name="centralize-software-package-distribution-in-aws-organizations-by-using-terraform"></a>

*Pradip kumar Pandey, Chintamani Aphale, T.V.R.L.Phani Kumar Dadi, Pratap Kumar Nanda, Aarti Rajput, and Mayuri Shinde, Amazon Web Services*

## Summary
<a name="centralize-software-package-distribution-in-aws-organizations-by-using-terraform-summary"></a>

Enterprises often maintain multiple AWS accounts that are spread across multiple AWS Regions in order to create a strong isolation barrier between workloads. To stay secure and compliant, their administration teams install agent-based tools such as [CrowdStrike](https://www.crowdstrike.com/falcon-platform/), [SentinelOne](https://www.sentinelone.com/platform/), or [TrendMicro](https://www.trendmicro.com/en_sg/business.html) for security scanning, and the [Amazon CloudWatch agent](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Install-CloudWatch-Agent.html), [Datadog Agent](https://www.datadoghq.com/), or [AppDynamics agents](https://www.appdynamics.com/product/how-it-works/agents-and-controller) for monitoring. These teams often face challenges when they want to centrally automate software package management and distribution across this large landscape.

[Distributor](https://docs.aws.amazon.com/systems-manager/latest/userguide/distributor.html), a capability of [AWS Systems Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/what-is-systems-manager.html), automates the process of packaging and publishing software to managed Microsoft Windows and Linux instances across the cloud and on-premises servers through a single simplified interface. This pattern demonstrates how you can use Terraform to further simplify the process of managing the installation of software and to run scripts across a large number of instances and member accounts within AWS Organizations with minimal effort.

This solution works for Amazon Linux and Windows instances that are managed by Systems Manager.

## Prerequisites and limitations
<a name="centralize-software-package-distribution-in-aws-organizations-by-using-terraform-prereqs"></a>
+ A [Distributor package](https://docs.aws.amazon.com/systems-manager/latest/userguide/distributor-working-with-packages-create.html) that has the software to be installed
+ [Terraform](https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli) version 0.15.0 or later
+ Amazon Elastic Compute Cloud (Amazon EC2) instances that are [managed by Systems Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/managed_instances.html) and have basic [permissions to access Amazon Simple Storage Service (Amazon S3](https://repost.aws/knowledge-center/ec2-instance-access-s3-bucket)) in the target account
+ A landing zone for your organization that’s set up by using [AWS Control Tower](https://docs.aws.amazon.com/controltower/latest/userguide/what-is-control-tower.html)
+ (Optional) [Account Factory for Terraform (AFT)](https://catalog.workshops.aws/control-tower/en-US/customization/aft)

## Architecture
<a name="centralize-software-package-distribution-in-aws-organizations-by-using-terraform-architecture"></a>

**Resource details**

This pattern uses [Account Factory for Terraform (AFT)](https://catalog.workshops.aws/control-tower/en-US/customization/aft) to create all required AWS resources and the code pipeline to deploy the resources in a deployment account. The code pipeline runs in two repositories:
+ **Global customization** contains Terraform code that will run across all accounts registered with AFT.
+ **Account customizations** contains Terraform code that will run in the deployment account.

You can also deploy this solution without using AFT, by running [Terraform](https://developer.hashicorp.com/terraform/intro) commands in the account customizations folder.

The Terraform code deploys the following resources:
+ AWS Identity and Access Management (IAM) role and policies
  + [SystemsManager-AutomationExecutionRole](https://docs.aws.amazon.com/systems-manager/latest/userguide/running-automations-multiple-accounts-regions.html) grants the user permissions to run automations in the target accounts.
  + [SystemsManager-AutomationAdministrationRole](https://docs.aws.amazon.com/systems-manager/latest/userguide/running-automations-multiple-accounts-regions.html) grants the user permissions to run automations in multiple accounts and organizational units (OUs).
+ Compressed files and manifest.json for the package
  + In Systems Manager, a [package](https://docs.aws.amazon.com/systems-manager/latest/userguide/distributor-working-with-packages-create.html) includes at least one .zip file of software or installable assets.
  + The JSON manifest includes pointers to your package code files.
+ S3 bucket
  + The distributed package that is shared across the organization is securely stored in an Amazon S3 bucket.
+ AWS Systems Manager documents (SSM documents)
  + `DistributeSoftwarePackage` contains the logic to distribute the software package to every target instance in the member accounts.
  + `AddSoftwarePackageToDistributor` contains the logic to package the installable software assets and add them to Automation, a capability of AWS Systems Manager.
+ Systems Manager association
  + A Systems Manager association is used to deploy the solution.
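To illustrate what the package artifacts above look like, the following sketch builds a .zip file in memory and generates the `manifest.json` that Distributor expects. The manifest shape follows the documented Distributor package format, but the platform keys shown here (`amazon`, `_any`, `x86_64`) are an example to adapt to your targets.

```python
import hashlib
import io
import json
import zipfile

def build_manifest(zip_name, zip_bytes, version="1.0.0"):
    """Build a Distributor manifest.json for a single Amazon Linux x86_64 package.
    The checksum lets Distributor verify the .zip after it is fetched from Amazon S3."""
    sha256 = hashlib.sha256(zip_bytes).hexdigest()
    manifest = {
        "schemaVersion": "2.0",
        "version": version,
        "packages": {"amazon": {"_any": {"x86_64": {"file": zip_name}}}},
        "files": {zip_name: {"checksums": {"sha256": sha256}}},
    }
    return json.dumps(manifest, indent=2)

# Example: zip a (hypothetical) install script in memory and generate the manifest.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("install.sh", "#!/bin/bash\necho installing\n")
print(build_manifest("my-package.zip", buf.getvalue()))
```

In this pattern, the Terraform code produces the equivalent artifacts and uploads them to the S3 bucket.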

**Architecture and workflow**

![\[Architecture diagram for centralizing software package distribution in AWS Organizations\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/da584449-e12b-4878-a61d-00d8cea3d3d7/images/2718f2c4-f816-4e34-89b8-8182c128e6db.png)


The diagram illustrates the following steps:

1. To run the solution from a centralized account, you upload your packages or software along with deployment steps to an S3 bucket.

1. Your customized package becomes available in the Systems Manager console [Documents](https://ap-southeast-2.console.aws.amazon.com/systems-manager/documents?region=ap-southeast-2) section, in the **Owned by me** tab.

1. State Manager, a capability of Systems Manager, creates, schedules, and runs an association for the package across the organization. The association specifies the state to maintain: the software package must be installed and running on each targeted managed node.

1. The association instructs Systems Manager to install the package on the target node.

1. For any subsequent installations or changes, users can run the same association periodically or manually from a single location to perform deployments across accounts.

1. In member accounts, Automation sends deployment commands to Distributor.

1. Distributor distributes software packages across instances.

This solution uses the management account within AWS Organizations, but you can also designate an account (delegated administrator) to manage this on behalf of the organization.
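The administration and execution roles deployed earlier follow the standard Systems Manager multi-account automation pattern: the management (or delegated administrator) account starts the automation, and the execution role in each member account carries it out. A hedged sketch of starting such a run with boto3 follows; the account IDs, Regions, and concurrency value are placeholders, and the document and role names are the ones this pattern creates.

```python
def build_target_locations(accounts, regions,
                           execution_role="SystemsManager-AutomationExecutionRole",
                           max_concurrency="10%"):
    """Build the TargetLocations structure for a multi-account, multi-Region automation."""
    return [{
        "Accounts": accounts,  # account IDs or OU IDs, e.g. ["111122223333"]
        "Regions": regions,
        "ExecutionRoleName": execution_role,
        "TargetLocationMaxConcurrency": max_concurrency,
    }]

def start_distribution(accounts, regions, document_name="DistributeSoftwarePackage"):
    """Start the automation from the account that owns the administration role."""
    import boto3  # deferred import so build_target_locations works without boto3

    ssm = boto3.client("ssm")
    response = ssm.start_automation_execution(
        DocumentName=document_name,
        TargetLocations=build_target_locations(accounts, regions),
    )
    return response["AutomationExecutionId"]

print(build_target_locations(["111122223333"], ["us-east-1"]))
```

In this pattern the State Manager association issues the equivalent call on your schedule, so invoking it manually is only needed for ad hoc runs.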

## Tools
<a name="centralize-software-package-distribution-in-aws-organizations-by-using-terraform-tools"></a>

**AWS services**
+ [Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) is a cloud-based object storage service that helps you store, protect, and retrieve any amount of data. This pattern uses Amazon S3 to centralize and securely store the distributed package.
+ [AWS Systems Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/what-is-systems-manager.html) helps you manage your applications and infrastructure running in the AWS Cloud. It simplifies application and resource management, shortens the time to detect and resolve operational problems, and helps you manage your AWS resources securely at scale. This pattern uses the following Systems Manager capabilities:
  + [Distributor](https://docs.aws.amazon.com/systems-manager/latest/userguide/distributor.html) helps you package and publish software to Systems Manager managed instances.
  + [Automation](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-automation.html) simplifies common maintenance, deployment, and remediation tasks for many AWS services.
  + [Documents](https://docs.aws.amazon.com/systems-manager/latest/userguide/documents.html) define the actions that Systems Manager performs on your managed instances across your organization and accounts.
+ [AWS Organizations](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_introduction.html) is an account management service that helps you consolidate multiple AWS accounts into an organization that you create and centrally manage.

**Other tools**
+ [Terraform](https://www.terraform.io/) is an infrastructure as code (IaC) tool from HashiCorp that helps you create and manage cloud and on-premises resources.

**Code repository**

The instructions and code for this pattern are available in the GitHub [Centralized package distribution](https://github.com/aws-samples/aws-organization-centralised-package-distribution) repository.

## Best practices
<a name="centralize-software-package-distribution-in-aws-organizations-by-using-terraform-best-practices"></a>
+ To assign tags to an association, use the [AWS Command Line Interface (AWS CLI)](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) or the [AWS Tools for PowerShell](https://docs.aws.amazon.com/powershell/latest/userguide/pstools-welcome.html). Adding tags to an association by using the Systems Manager console isn't supported. For more information, see [Tagging Systems Manager resources](https://docs.aws.amazon.com/systems-manager/latest/userguide/tagging-resources.html) in the Systems Manager documentation.
+ To run an association by using a new version of a document shared from another account, set the document version to `default`.
+ To tag only the target node, use one tag key. If you want to target your nodes by using multiple tag keys, use the resource group option.
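If you prefer an SDK over the AWS CLI or Tools for PowerShell, the same tagging operation is available through the Systems Manager `AddTagsToResource` API (at the time of writing, `Association` is an accepted resource type). A minimal sketch with boto3; the association ID and tag values are placeholders.

```python
def build_tags(**kv):
    """Convert keyword arguments into the Tags list shape the SSM API expects."""
    return [{"Key": k, "Value": v} for k, v in sorted(kv.items())]

def tag_association(association_id, **kv):
    import boto3  # deferred import so build_tags works without boto3

    ssm = boto3.client("ssm")
    ssm.add_tags_to_resource(
        ResourceType="Association",  # console tagging of associations isn't supported
        ResourceId=association_id,   # placeholder: your association ID
        Tags=build_tags(**kv),
    )

print(build_tags(Environment="prod", Team="platform"))
# [{'Key': 'Environment', 'Value': 'prod'}, {'Key': 'Team', 'Value': 'platform'}]
```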

## Epics
<a name="centralize-software-package-distribution-in-aws-organizations-by-using-terraform-epics"></a>

### Configure source files and accounts
<a name="configure-source-files-and-accounts"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clone the repository. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/centralize-software-package-distribution-in-aws-organizations-by-using-terraform.html) | DevOps engineer | 
| Update global variables. | Update the following input parameters in the `global-customization/variables.tf` file. These variables apply to all accounts that are created and managed by AFT.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/centralize-software-package-distribution-in-aws-organizations-by-using-terraform.html) | DevOps engineer | 
| Update account variables. | Update the following input parameters in the `account-customization/variables.tf` file. These variables apply only to specific accounts that are created and managed by AFT.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/centralize-software-package-distribution-in-aws-organizations-by-using-terraform.html) | DevOps engineer | 

### Customize parameters and deployment files
<a name="customize-parameters-and-deployment-files"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Update input parameters for the State Manager association. | Update the following input parameters in the `account-customization/association.tf` file to define the state you want to maintain on your instances. You can use the default parameter values if they support your use case.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/centralize-software-package-distribution-in-aws-organizations-by-using-terraform.html) | DevOps engineer | 
| Prepare compressed files and the `manifest.json` file for the package. | This pattern provides sample PowerShell installable files (.msi for Windows and .rpm for Linux) with install and uninstall scripts in the `account-customization/package` folder.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/centralize-software-package-distribution-in-aws-organizations-by-using-terraform.html) | DevOps engineer | 

### Run Terraform commands to provision resources
<a name="run-terraform-commands-to-provision-resources"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Initialize the Terraform configuration. | To deploy the solution automatically with AFT, push the code to AWS CodeCommit:<pre>$ git add *<br />$ git commit -m "message"<br />$ git push</pre>You can also deploy this solution without using AFT by running a Terraform command from the `account-customization` folder. To initialize the working directory that contains the Terraform files, run:<pre>$ terraform init</pre> | DevOps engineer | 
| Preview changes. | To preview the changes that Terraform will make to the infrastructure, run the following command:<pre>$ terraform plan</pre>This command evaluates the Terraform configuration to determine the desired state of the declared resources, and then compares that desired state with the actual infrastructure in the workspace. | DevOps engineer | 
| Apply changes. | Run the following command to implement the changes that you made to the `variables.tf` files:<pre>$ terraform apply</pre> | DevOps engineer | 

### Validate resources
<a name="validate-resources"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Validate the creation of SSM documents. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/centralize-software-package-distribution-in-aws-organizations-by-using-terraform.html)You should see the `DistributeSoftwarePackage` and `AddSoftwarePackageToDistributor` packages. | DevOps engineer | 
| Validate the successful deployment of automations. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/centralize-software-package-distribution-in-aws-organizations-by-using-terraform.html) | DevOps engineer | 
| Validate that the package deployed to the targeted member account instances. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/centralize-software-package-distribution-in-aws-organizations-by-using-terraform.html) | DevOps engineer | 
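The first validation step can also be scripted. The sketch below lists self-owned Systems Manager documents with boto3 and reports which of the two documents this pattern creates are missing; the helper names are illustrative assumptions.

```python
EXPECTED_DOCS = {"DistributeSoftwarePackage", "AddSoftwarePackageToDistributor"}

def list_owned_document_names():
    """Return the names of SSM documents owned by the calling account."""
    import boto3  # deferred import so missing_documents works without boto3

    ssm = boto3.client("ssm")
    names = set()
    paginator = ssm.get_paginator("list_documents")
    for page in paginator.paginate(Filters=[{"Key": "Owner", "Values": ["Self"]}]):
        names.update(doc["Name"] for doc in page["DocumentIdentifiers"])
    return names

def missing_documents(found):
    """Return the expected document names that were not found."""
    return sorted(EXPECTED_DOCS - set(found))

print(missing_documents({"DistributeSoftwarePackage"}))  # ['AddSoftwarePackageToDistributor']
```

An empty result from `missing_documents(list_owned_document_names())` confirms that Terraform created both documents.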

## Troubleshooting
<a name="centralize-software-package-distribution-in-aws-organizations-by-using-terraform-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| The State Manager association failed or is stuck in pending status. | See the [troubleshooting information](https://repost.aws/knowledge-center/ssm-state-manager-association-fail) in the AWS Knowledge Center. | 
| A scheduled association failed to run. | Your schedule specification might be invalid. State Manager doesn't currently support specifying months in cron expressions for associations. Use [cron or rate expressions](https://docs.aws.amazon.com/systems-manager/latest/userguide/reference-cron-and-rate-expressions.html) to confirm the schedule. | 

## Related resources
<a name="centralize-software-package-distribution-in-aws-organizations-by-using-terraform-resources"></a>
+ [Centralized package distribution](https://github.com/aws-samples/aws-organization-centralised-package-distribution) (GitHub repository)
+ [Account Factory for Terraform (AFT)](https://catalog.workshops.aws/control-tower/en-US/customization/aft)
+ [Use cases and best practices](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-best-practices.html) (AWS Systems Manager documentation)

# Configure logging for .NET applications in Amazon CloudWatch Logs by using NLog
<a name="configure-logging-for-net-applications-in-amazon-cloudwatch-logs-by-using-nlog"></a>

*Bibhuti Sahu and Rob Hill (AWS), Amazon Web Services*

## Summary
<a name="configure-logging-for-net-applications-in-amazon-cloudwatch-logs-by-using-nlog-summary"></a>

This pattern describes how to use the NLog open-source logging framework to log .NET application usage and events in [Amazon CloudWatch Logs](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html). In the CloudWatch console, you can view the application’s log messages in near real time. You can also set up [metrics](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/MonitoringLogData.html) and configure [alarms](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/ConsoleAlarms.html) to notify you if a metric threshold is exceeded. Using CloudWatch Application Insights, you can view automated or custom dashboards that show potential problems for the monitored applications. CloudWatch Application Insights is designed to help you quickly isolate ongoing issues with your applications and infrastructure.

To write log messages to CloudWatch Logs, you add the `AWS.Logger.NLog` NuGet package to the .NET project. Then, you update the `NLog.config` file to use CloudWatch Logs as a target.

## Prerequisites and limitations
<a name="configure-logging-for-net-applications-in-amazon-cloudwatch-logs-by-using-nlog-prereqs"></a>

**Prerequisites**
+ An active AWS account.
+ A .NET web or console application that:
  + Uses supported .NET Framework or .NET Core versions. For more information, see *Product versions*.
  + Uses NLog as its logging framework.
+ Permissions to create an IAM role for an AWS service. For more information, see [Service role permissions](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-service.html#id_roles_create_service-permissions).
+ Permissions to pass a role to an AWS service. For more information, see [Granting a user permissions to pass a role to an AWS service](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_passrole.html).

**Product versions**
+ .NET Framework version 3.5 or later
+ .NET Core versions 1.0.1, 2.0.0, or later

## Architecture
<a name="configure-logging-for-net-applications-in-amazon-cloudwatch-logs-by-using-nlog-architecture"></a>

**Target technology stack**
+ NLog
+ Amazon CloudWatch Logs

**Target architecture**

![\[Architecture diagram of NLog writing log data for a .NET application to Amazon CloudWatch Logs.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/0ac9c3ad-2a28-415f-afc3-7fe3494b2b63/images/daea9f2f-7242-4ed2-843e-655d843dcfdf.png)


1. The .NET application writes log data to the NLog logging framework.

1. NLog writes the log data to CloudWatch Logs.

1. You use CloudWatch alarms and custom dashboards to monitor the .NET application.

## Tools
<a name="configure-logging-for-net-applications-in-amazon-cloudwatch-logs-by-using-nlog-tools"></a>

**AWS services**
+ [Amazon CloudWatch Application Insights](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch-application-insights.html) helps you observe the health of your applications and underlying AWS resources.
+ [Amazon CloudWatch Logs](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html) helps you centralize the logs from all your systems, applications, and AWS services so you can monitor them and archive them securely.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.
+ [AWS Tools for PowerShell](https://docs.aws.amazon.com/powershell/latest/userguide/pstools-welcome.html) are a set of PowerShell modules that help you script operations on your AWS resources from the PowerShell command line.

**Other tools**
+ [AWS.Logger.NLog](https://www.nuget.org/packages/AWS.Logger.NLog) is an NLog target that records log data to CloudWatch Logs.
+ [NLog](https://nlog-project.org/) is an open-source logging framework for .NET platforms that helps you write log data to targets, such as databases, log files, or consoles.
+ [PowerShell](https://learn.microsoft.com/en-us/powershell/) is a Microsoft automation and configuration management program that runs on Windows, Linux, and macOS.
+ [Visual Studio](https://docs.microsoft.com/en-us/visualstudio/get-started/visual-studio-ide?view=vs-2022) is an integrated development environment (IDE) that includes compilers, code completion tools, graphical designers, and other features that support software development.

## Best practices
<a name="configure-logging-for-net-applications-in-amazon-cloudwatch-logs-by-using-nlog-best-practices"></a>
+ Set a [retention policy](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Working-with-log-groups-and-streams.html#SettingLogRetention) for the target log group. This must be done outside of the NLog configuration. By default, log data is stored in CloudWatch Logs indefinitely.
+ Adhere to the [Best practices for managing AWS access keys](https://docs.aws.amazon.com/accounts/latest/reference/credentials-access-keys-best-practices.html).
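Because the NLog target doesn't manage retention, one option is to set it with a small script. The following is a sketch using boto3; the log group name matches the sample configuration later in this pattern, the 30-day value is an arbitrary choice, and the allowed-values list is a subset (see the `PutRetentionPolicy` documentation for the full list).

```python
# Subset of the retention periods (in days) that CloudWatch Logs accepts.
ALLOWED_RETENTION_DAYS = {1, 3, 5, 7, 14, 30, 60, 90, 120, 150, 180, 365, 400, 545, 731, 1827, 3653}

def set_retention(log_group, days):
    """Apply a retention policy so log data in the group eventually expires."""
    if days not in ALLOWED_RETENTION_DAYS:
        raise ValueError(f"{days} is not a valid CloudWatch Logs retention period")
    import boto3  # deferred import so validation works without boto3

    logs = boto3.client("logs")
    logs.put_retention_policy(logGroupName=log_group, retentionInDays=days)

# Example (requires AWS credentials):
# set_retention("NLog.TestGroup", 30)
```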

## Epics
<a name="configure-logging-for-net-applications-in-amazon-cloudwatch-logs-by-using-nlog-epics"></a>

### Set up access and tools
<a name="set-up-access-and-tools"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an IAM policy. | Follow the instructions in [Creating policies using the JSON editor](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create-console.html#access_policies_create-json-editor) in the IAM documentation. Enter the following JSON policy, which grants the least-privilege permissions that the application needs to read and write logs in CloudWatch Logs.<pre>{<br />    "Version": "2012-10-17",<br />    "Statement": [<br />        {<br />            "Effect": "Allow",<br />            "Action": [<br />                "logs:CreateLogGroup",<br />                "logs:CreateLogStream",<br />                "logs:GetLogEvents",<br />                "logs:PutLogEvents",<br />                "logs:DescribeLogGroups",<br />                "logs:DescribeLogStreams",<br />                "logs:PutRetentionPolicy"<br />            ],<br />            "Resource": [<br />                "*"<br />            ]<br />        }<br />    ]<br />}</pre> | AWS administrator, AWS DevOps | 
| Create an IAM role. | Follow the instructions in [Creating a role to delegate permissions to an AWS service](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-service.html) in the IAM documentation. Select the policy that you created previously. This is the role CloudWatch Logs assumes to perform logging actions. | AWS administrator, AWS DevOps | 
| Set up AWS Tools for PowerShell. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/configure-logging-for-net-applications-in-amazon-cloudwatch-logs-by-using-nlog.html) | General AWS | 

### Configure NLog
<a name="configure-nlog"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Install the NuGet package. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/configure-logging-for-net-applications-in-amazon-cloudwatch-logs-by-using-nlog.html) | App developer | 
| Configure the logging target. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/configure-logging-for-net-applications-in-amazon-cloudwatch-logs-by-using-nlog.html)For a sample configuration file, see the [Additional information](#configure-logging-for-net-applications-in-amazon-cloudwatch-logs-by-using-nlog-additional) section of this pattern. When you run your application, NLog will write the log messages and send them to CloudWatch Logs. | App developer | 

### Validate and monitor logs
<a name="validate-and-monitor-logs"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Validate logging. | Follow the instructions in [View log data sent to CloudWatch Logs](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Working-with-log-groups-and-streams.html#ViewingLogData) in the CloudWatch Logs documentation. Validate that log events are being recorded for the .NET application. If log events are not being recorded, see the [Troubleshooting](#configure-logging-for-net-applications-in-amazon-cloudwatch-logs-by-using-nlog-troubleshooting) section in this pattern. | General AWS | 
| Monitor the .NET application stack. | Configure monitoring in CloudWatch as needed for your use case. You can use [CloudWatch Logs Insights](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AnalyzingLogData.html), [CloudWatch Metrics Insights](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/query_with_cloudwatch-metrics-insights.html), and [CloudWatch Application Insights](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch-application-insights.html) to monitor your .NET workload. You can also configure [alarms](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/AlarmThatSendsEmail.html) so that you can receive alerts, and you can create a custom [dashboard](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch_Dashboards.html) for monitoring the workload from a single view. | General AWS | 
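Validation can also be automated. The sketch below polls the log group with boto3 and counts events written within a recent window; the log group name matches the sample `NLog.config`, the five-minute look-back is arbitrary, and the helper names are illustrative assumptions.

```python
import time

def to_epoch_ms(seconds_ago, now=None):
    """Convert 'N seconds ago' into the epoch-millisecond startTime the API expects."""
    now = time.time() if now is None else now
    return int((now - seconds_ago) * 1000)

def recent_event_count(log_group, lookback_seconds=300):
    """Count events written to the log group within the look-back window."""
    import boto3  # deferred import so to_epoch_ms works without boto3

    logs = boto3.client("logs")
    count = 0
    paginator = logs.get_paginator("filter_log_events")
    for page in paginator.paginate(logGroupName=log_group,
                                   startTime=to_epoch_ms(lookback_seconds)):
        count += len(page["events"])
    return count

print(to_epoch_ms(300, now=1000.0))  # 700000

# Example (requires AWS credentials), after exercising the application:
# assert recent_event_count("NLog.TestGroup") > 0
```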

## Troubleshooting
<a name="configure-logging-for-net-applications-in-amazon-cloudwatch-logs-by-using-nlog-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| Log data doesn’t appear in CloudWatch Logs. | Make sure that the IAM policy is attached to the IAM role that CloudWatch Logs assumes. For instructions, see the *Set up access and tools* section in the [Epics](#configure-logging-for-net-applications-in-amazon-cloudwatch-logs-by-using-nlog-epics) section. | 

## Related resources
<a name="configure-logging-for-net-applications-in-amazon-cloudwatch-logs-by-using-nlog-resources"></a>
+ [Working with log groups and log streams](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Working-with-log-groups-and-streams.html) (CloudWatch Logs documentation)
+ [Amazon CloudWatch Logs and .NET Logging Frameworks](https://aws.amazon.com/blogs/developer/amazon-cloudwatch-logs-and-net-logging-frameworks/) (AWS blog post)

## Additional information
<a name="configure-logging-for-net-applications-in-amazon-cloudwatch-logs-by-using-nlog-additional"></a>

The following is a sample `NLog.config` file.

```
<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <configSections>
    <section name="nlog" type="NLog.Config.ConfigSectionHandler, NLog" />
  </configSections>
  <startup>
    <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.7.2" />
  </startup>
  <nlog>
    <extensions>
      <add assembly="NLog.AWS.Logger" />
    </extensions>
    <targets>
      <target name="aws" type="AWSTarget" logGroup="NLog.TestGroup" region="us-east-1" profile="demo"/>
    </targets>
    <rules>
      <logger name="*" minlevel="Info" writeTo="aws" />
    </rules>    
  </nlog>
</configuration>
```

# Copy AWS Service Catalog products across different AWS accounts and AWS Regions
<a name="copy-aws-service-catalog-products-across-different-aws-accounts-and-aws-regions"></a>

*Sachin Vighe and Santosh Kale, Amazon Web Services*

## Summary
<a name="copy-aws-service-catalog-products-across-different-aws-accounts-and-aws-regions-summary"></a>

AWS Service Catalog is a Regional service, which means that AWS Service Catalog [portfolios and products](https://docs.aws.amazon.com/servicecatalog/latest/adminguide/what-is_concepts.html) are visible only in the AWS Region where they are created. If you set up an [AWS Service Catalog hub](https://aws.amazon.com/about-aws/whats-new/2020/06/aws-service-catalog-now-supports-sharing-portfolios-across-an-organization-from-a-delegated-member-account/) in a new Region, you must recreate your existing products, which can be a time-consuming process.

This pattern simplifies that process by describing how to copy products from an AWS Service Catalog hub in a source AWS account or Region to a new hub in a destination account or Region. For more information about the AWS Service Catalog hub and spoke model, see [AWS Service Catalog hub and spoke model: How to automate the deployment and management of AWS Service Catalog to many accounts](https://aws.amazon.com/blogs/mt/aws-service-catalog-hub-and-spoke-model-how-to-automate-the-deployment-and-management-of-service-catalog-to-many-accounts/) on the AWS Management and Governance Blog.

The pattern also provides the separate code packages required to copy AWS Service Catalog products across accounts or to other Regions. By using this pattern, your organization can save time, make existing and previous product versions available in a new AWS Service Catalog hub, minimize the risk of manual errors, and scale the approach across multiple accounts or Regions.

**Note**  
This pattern's *Epics* section provides two options for copying products. You can use Option 1 to copy products across accounts or choose Option 2 to copy products across Regions.

## Prerequisites and limitations
<a name="copy-aws-service-catalog-products-across-different-aws-accounts-and-aws-regions-prereqs"></a>

**Prerequisites**
+ An active AWS account.
+ Existing AWS Service Catalog products in a source account or Region.
+ An existing AWS Service Catalog hub in a destination account or Region.
+ If you want to copy products across accounts, you must share and then import the AWS Service Catalog portfolio containing the products into your destination account. For more information about this, see [Sharing and importing portfolios](https://docs.aws.amazon.com/servicecatalog/latest/adminguide/catalogs_portfolios_sharing.html) in the AWS Service Catalog documentation.

**Limitations**
+ AWS Service Catalog products that you want to copy across Regions or accounts cannot belong to more than one portfolio.

## Architecture
<a name="copy-aws-service-catalog-products-across-different-aws-accounts-and-aws-regions-architecture"></a>

The following diagram shows the copying of AWS Service Catalog products from a source account to a destination account.

![\[A cross-account role in Region 1, a Lambda execution role and a Lambda function in Region 2.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/7ede5d17-89eb-4455-928f-6953d145ac9f/images/26738220-1ed2-4f84-911b-3c88e954b60e.png)


 The following diagram shows the copying of AWS Service Catalog products from a source Region to a destination Region.

![\[Products copied by using the Lambda scProductCopy function in Region 2.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/7ede5d17-89eb-4455-928f-6953d145ac9f/images/0a936792-3bdc-45c2-ba05-17e828615061.png)


**Technology stack**
+ Amazon CloudWatch
+ AWS Identity and Access Management (IAM)
+ AWS Lambda
+ AWS Service Catalog

**Automation and scale**

The Lambda function in this pattern scales automatically based on the number of requests it receives, so the approach scales with the number of AWS Service Catalog products that you need to copy. For more information, see [Lambda function scaling](https://docs.aws.amazon.com/lambda/latest/dg/invocation-scaling.html) in the AWS Lambda documentation.

## Tools
<a name="copy-aws-service-catalog-products-across-different-aws-accounts-and-aws-regions-tools"></a>
+ [AWS Command Line Interface (AWS CLI)](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) is an open-source tool that helps you interact with AWS services through commands in your command-line shell.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a compute service that helps you run code without needing to provision or manage servers. It runs your code only when needed and scales automatically, so you pay only for the compute time that you use.
+ [AWS Service Catalog](https://docs.aws.amazon.com/servicecatalog/latest/adminguide/introduction.html) helps you centrally manage catalogs of IT services that are approved for AWS. End users can quickly deploy only the approved IT services they need, following the constraints set by your organization.

**Code**

You can use the `cross-account-copy` package (attached) to copy AWS Service Catalog products across accounts or the `cross-region-copy` package (attached) to copy products across Regions.

The `cross-account-copy` package contains the following files:
+ `copyconf.properties` – The configuration file that contains the Region and AWS account ID parameters for copying products across accounts.
+ `scProductCopyLambda.py` – The Python function for copying products across accounts.
+ `createDestAccountRole.sh` – The script to create an IAM role in the destination account.
+ `createSrcAccountRole.sh` – The script to create an IAM role in the source account.
+ `copyProduct.sh` – The script to create and invoke the Lambda function for copying products across accounts.

The `cross-region-copy` package contains the following files:
+ `copyconf.properties` – The configuration file that contains the Region and AWS account ID parameters for copying products across Regions.
+ `scProductCopyLambda.py` – The Python function for copying products across Regions.
+ `copyProduct.sh` – The script to create an IAM role and create and invoke the Lambda function for copying products across Regions.
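The Lambda functions in these packages rely on the Service Catalog `CopyProduct` API, which asynchronously copies a product into the caller's account and Region. A hedged sketch of the core call with boto3 follows; the ARN components are placeholders, and the helper names are illustrative, not the names used in the attached code.

```python
def source_product_arn(region, account_id, product_id):
    """Build the ARN of the source product to copy (standard catalog ARN format)."""
    return f"arn:aws:catalog:{region}:{account_id}:product/{product_id}"

def copy_product(src_region, src_account, product_id):
    """Start a product copy; run this in the destination account and Region."""
    import boto3  # deferred import so source_product_arn works without boto3

    sc = boto3.client("servicecatalog")
    response = sc.copy_product(
        SourceProductArn=source_product_arn(src_region, src_account, product_id),
        CopyOptions=["CopyTags"],
    )
    # The copy is asynchronous; poll describe_copy_product_status with this token.
    return response["CopyProductToken"]

print(source_product_arn("us-east-1", "111122223333", "prod-abcdefghijkl"))
# arn:aws:catalog:us-east-1:111122223333:product/prod-abcdefghijkl
```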

## Epics
<a name="copy-aws-service-catalog-products-across-different-aws-accounts-and-aws-regions-epics"></a>

### Option 1 – Copy AWS Service Catalog products across accounts
<a name="option-1-ndash-copy-aws-service-catalog-products-across-accounts"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Update the configuration file. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/copy-aws-service-catalog-products-across-different-aws-accounts-and-aws-regions.html) | AWS administrator, AWS systems administrator, Cloud administrator | 
| Configure your credentials for AWS CLI in the destination account. | Configure your credentials to access AWS CLI in your destination account by running the `aws configure` command and providing the following values:<pre>$aws configure <br />AWS Access Key ID [None]: <your_access_key_id> <br />AWS Secret Access Key [None]: <your_secret_access_key> <br />Default region name [None]: Region<br />Default output format [None]:</pre>For more information about this, see [Configuration basics](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html) in the AWS Command Line Interface documentation.  | AWS administrator, AWS systems administrator, Cloud administrator | 
| Configure your credentials for AWS CLI in the source account. | Configure your credentials to access AWS CLI in your source account by running the `aws configure` command and providing the following values: <pre>$aws configure<br />AWS Access Key ID [None]: <your_access_key_id><br />AWS Secret Access Key [None]: <your_secret_access_key><br />Default region name [None]: Region<br />Default output format [None]:</pre>For more information about this, see [Configuration basics](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html) in the AWS Command Line Interface documentation.  | AWS administrator, AWS systems administrator, Cloud administrator | 
| Create a Lambda execution role in your destination account. | Run the `createDestAccountRole.sh` script in your destination account. The script implements the following actions:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/copy-aws-service-catalog-products-across-different-aws-accounts-and-aws-regions.html) | AWS administrator, AWS systems administrator, Cloud administrator | 
| Create the cross-account IAM role in your source account. | Run the `createSrcAccountRole.sh` script in your source account. The script implements the following actions:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/copy-aws-service-catalog-products-across-different-aws-accounts-and-aws-regions.html) | AWS administrator, AWS systems administrator, Cloud administrator | 
| Run the copyProduct script in the destination account. | Run the `copyProduct.sh` script in your destination account. The script implements the following actions:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/copy-aws-service-catalog-products-across-different-aws-accounts-and-aws-regions.html) | AWS administrator, AWS systems administrator, Cloud administrator | 

### Option 2 – Copy AWS Service Catalog products from a source Region to a destination Region
<a name="option-2-ndash-copy-aws-service-catalog-products-from-a-source-region-to-a-destination-region"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Update the configuration file. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/copy-aws-service-catalog-products-across-different-aws-accounts-and-aws-regions.html) | AWS systems administrator, Cloud administrator, AWS administrator | 
| Configure your credentials for AWS CLI. | Configure your credentials to access AWS CLI in your environment by running the `aws configure` command and providing the following values:<pre>$aws configure<br />AWS Access Key ID [None]: <your_access_key_id><br />AWS Secret Access Key [None]: <your_secret_access_key><br />Default region name [None]: Region<br />Default output format [None]:</pre>For more information about this, see [Configuration basics](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html) in the AWS Command Line Interface documentation.  | AWS administrator, AWS systems administrator, Cloud administrator | 
| Run the copyProduct script. | Run the `copyProduct.sh` script in your destination Region. The script implements the following actions:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/copy-aws-service-catalog-products-across-different-aws-accounts-and-aws-regions.html) | AWS administrator, AWS systems administrator, Cloud administrator | 

## Related resources
<a name="copy-aws-service-catalog-products-across-different-aws-accounts-and-aws-regions-resources"></a>
+ [Create a Lambda execution role](https://docs.aws.amazon.com/lambda/latest/dg/lambda-intro-execution-role.html) (AWS Lambda documentation)
+ [Create a Lambda function](https://docs.aws.amazon.com/lambda/latest/dg/gettingstarted-awscli.html) (AWS Lambda documentation)
+ [AWS Service Catalog API reference](https://docs.aws.amazon.com/servicecatalog/latest/dg/API_Operations_AWS_Service_Catalog.html)
+ [AWS Service Catalog documentation](https://docs.aws.amazon.com/servicecatalog/latest/adminguide/what-is_concepts.html)

## Attachments
<a name="attachments-7ede5d17-89eb-4455-928f-6953d145ac9f"></a>

To access additional content that is associated with this document, unzip the following file: [attachment.zip](samples/p-attach/7ede5d17-89eb-4455-928f-6953d145ac9f/attachments/attachment.zip)

# Create a RACI or RASCI matrix for a cloud operating model
<a name="create-a-raci-or-rasci-matrix-for-a-cloud-operating-model"></a>

*Teddy Germade, Jerome Descreux, Florian Leroux, and Josselin LE MINEUR, Amazon Web Services*

## Summary
<a name="create-a-raci-or-rasci-matrix-for-a-cloud-operating-model-summary"></a>

The Cloud Center of Excellence (CCoE), also called a Cloud Enablement Engine (CEE), is an empowered and accountable team that is focused on operational readiness for the cloud. Its key focus is to transform the IT organization from an on-premises operating model to a cloud operating model. The CCoE should be a cross-functional team that includes representation from infrastructure, applications, operations, and security.

One of the key components of a cloud operating model is a *RACI matrix* or *RASCI matrix*. This is used to define the roles and responsibilities for all parties involved in migration activities and cloud operations. The matrix name is derived from the responsibility types defined in the matrix: responsible (R), accountable (A), support (S), consulted (C), and informed (I). The support type is optional. If you include it, it’s called a *RASCI matrix*, and if you exclude it, it’s called a *RACI matrix*.

By starting with the attached template, your CCoE team can create a RACI or RASCI matrix for your organization. The template contains teams, roles, and tasks that are common in cloud operating models. The foundation of this matrix is the tasks related to operations integration and CCoE capabilities. However, you can customize this template to meet the needs of your organization’s structure and use case.

There are no limits to the implementation of a RACI matrix. This approach works for large organizations, start-ups, and everything in between. For small organizations, the same resource can fill several roles.

## Epics
<a name="create-a-raci-or-rasci-matrix-for-a-cloud-operating-model-epics"></a>

### Create the matrix
<a name="create-the-matrix"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Identify key stakeholders. | Identify key service and team managers that are linked to the strategic objectives of your cloud operating model. | Project manager | 
| Customize the matrix template. | Download the template in the [Attachments](#attachments-b3df3d2c-c596-4736-bbaa-8edbcf335352) section, and then update the RACI or RASCI matrix as follows:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-a-raci-or-rasci-matrix-for-a-cloud-operating-model.html) | Project manager | 
| Plan meetings. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-a-raci-or-rasci-matrix-for-a-cloud-operating-model.html) | Project manager | 
| Complete the matrix. | In the meeting with all stakeholders, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-a-raci-or-rasci-matrix-for-a-cloud-operating-model.html) | Project manager | 
| Share the RASCI matrix. | When the RACI or RASCI matrix is complete, have it approved by leadership. Save it in a shared repository or central location where all stakeholders can access it. We recommend that you use standard document control processes to record and approve revisions to the matrix. | Project manager | 

## Related resources
<a name="create-a-raci-or-rasci-matrix-for-a-cloud-operating-model-resources"></a>
+ [AWS shared responsibility model](https://aws.amazon.com/compliance/shared-responsibility-model/)

## Attachments
<a name="attachments-b3df3d2c-c596-4736-bbaa-8edbcf335352"></a>

To access additional content that is associated with this document, unzip the following file: [attachment.zip](samples/p-attach/b3df3d2c-c596-4736-bbaa-8edbcf335352/attachments/attachment.zip)

# Create alarms for custom metrics using Amazon CloudWatch anomaly detection
<a name="create-alarms-for-custom-metrics-using-amazon-cloudwatch-anomaly-detection"></a>

*Ram Kandaswamy and Raheem Jiwani, Amazon Web Services*

## Summary
<a name="create-alarms-for-custom-metrics-using-amazon-cloudwatch-anomaly-detection-summary"></a>

On the Amazon Web Services (AWS) Cloud, you can use Amazon CloudWatch to create alarms that monitor metrics and send notifications or automatically make changes if a threshold is breached.

To avoid being limited by [static thresholds](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/ConsoleAlarms.html), you can create alarms that are based on past patterns and that notify you when specific metrics are outside the normal operating window. For example, you could monitor your API’s response times from Amazon API Gateway and receive notifications about anomalies that prevent you from meeting a service-level agreement (SLA).

This pattern describes how to use CloudWatch anomaly detection for custom metrics. The pattern shows you how to create a custom metric in Amazon CloudWatch Logs Insights or publish a custom metric with an AWS Lambda function, and then set up anomaly detection and create notifications using Amazon Simple Notification Service (Amazon SNS).

## Prerequisites and limitations
<a name="create-alarms-for-custom-metrics-using-amazon-cloudwatch-anomaly-detection-prereqs"></a>

**Prerequisites**
+ An active AWS account.
+ An existing SNS topic, configured to send email notifications. For more information about this, see [Getting started with Amazon SNS](https://docs.aws.amazon.com/sns/latest/dg/sns-getting-started.html) in the Amazon SNS documentation.
+ An existing application, configured with [CloudWatch Logs](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CWL_GettingStarted.html).

**Limitations**
+ CloudWatch metrics don't support millisecond time intervals. For more information about the granularity of regular and custom metrics, see the [Amazon CloudWatch FAQs](https://aws.amazon.com/cloudwatch/faqs/).

## Architecture
<a name="create-alarms-for-custom-metrics-using-amazon-cloudwatch-anomaly-detection-architecture"></a>

![\[CloudWatch using an Amazon SNS topic to send an email notification when an alarm initiates.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/d47e6f7f-e469-4cb9-b34b-8c4b78d71820/images/49f30340-9552-430a-893a-d0608bb09e38.png)


The diagram shows the following workflow:

1. Metrics that are created and updated from CloudWatch Logs data are streamed to CloudWatch.

1. An alarm initiates based on thresholds and sends an alert to an SNS topic.

1. Amazon SNS sends you an email notification.

**Technology stack**
+ CloudWatch
+ AWS Lambda
+ Amazon SNS

## Tools
<a name="create-alarms-for-custom-metrics-using-amazon-cloudwatch-anomaly-detection-tools"></a>
+ [Amazon CloudWatch](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/WhatIsCloudWatch.html) provides a reliable, scalable, and flexible monitoring solution.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a compute service that helps you run code without provisioning or managing  servers. 
+ [Amazon Simple Notification Service (Amazon SNS)](https://docs.aws.amazon.com/sns/latest/dg/welcome.html) is a managed service that provides message delivery from publishers to subscribers.

## Epics
<a name="create-alarms-for-custom-metrics-using-amazon-cloudwatch-anomaly-detection-epics"></a>

### Set up anomaly detection for a custom metric
<a name="set-up-anomaly-detection-for-a-custom-metric"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Option 1 – Create a custom metric with a Lambda function. | Download the `lambda_function.py` file (attached) and then replace the sample `lambda_function.py` file in the [aws-lambda-developer-guide](https://github.com/awsdocs/aws-lambda-developer-guide/tree/main/sample-apps/blank-python/function) repository on the AWS Documentation GitHub. This provides you with a sample Lambda function that sends custom metrics to CloudWatch Logs. The Lambda function uses the Boto3 API to integrate with CloudWatch. After you run the Lambda function, you can sign in to the AWS Management Console, open the CloudWatch console, and view the published metric under the namespace that you specified. | DevOps engineer, AWS DevOps | 
| Option 2 – Create custom metrics from CloudWatch log groups.  | Sign in to the AWS Management Console, open the CloudWatch console, and then choose **Log groups**. Choose the log group that you want to create a metric for. Choose **Actions**, and then choose **Create metric filter**. For **Filter pattern**, enter the filter pattern that you want to use. For more information, see [Filter and pattern syntax](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/FilterAndPatternSyntax.html) in the CloudWatch documentation. To test your filter pattern, enter one or more log events under **Test Pattern**. Each log event must be within one line, because line breaks are used to separate log events in the **Log event** messages box. After you test the pattern, you can enter a name and value for your metric under **Metric details**. For more information and steps to create a custom metric, see [Create a metric filter for a log group](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CreateMetricFilterProcedure.html) in the CloudWatch documentation. | DevOps engineer, AWS DevOps | 
| Create an alarm for your custom metric. | On the CloudWatch console, choose **Alarms** and then choose **Create Alarm**. Choose **Select metric** and enter the name of the metric that you created earlier into the search box. Choose the **Graphed metrics** tab and configure the options according to your requirements. Under **Conditions**, choose **Anomaly detection** instead of **Static thresholds**. This shows you a band that is based on two standard deviations by default. You can set up thresholds and adjust them according to your requirements. Choose **Next**. The band is dynamic and depends on the quality of the data points. When you begin aggregating more data, the band and thresholds are automatically updated.  | DevOps engineer, AWS DevOps | 
| Set up SNS notifications. | Under **Notification**, choose the SNS topic to notify when the alarm is in the `ALARM`, `OK`, or `INSUFFICIENT_DATA` state. To have the alarm send multiple notifications for the same alarm state or for different alarm states, choose **Add notification**. Choose **Next**. Enter a name and description for the alarm. The name must contain only ASCII characters. Then choose **Next**. Under **Preview and create**, confirm that the information and conditions are correct, and then choose **Create alarm**. | DevOps engineer, AWS DevOps | 
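The steps above can be sketched in Boto3 as well. The following is a hedged sketch (not the attached `lambda_function.py`) of publishing a custom metric and creating an anomaly-detection alarm on it; the namespace, metric name, and alarm name are hypothetical, and the client is injectable so the logic can be tested without AWS credentials.

```python
# Hedged sketch: Boto3 calls corresponding to publishing a custom metric
# (Option 1) and alarming on an anomaly-detection band. Names are examples.

def build_metric_datum(name, value, unit="Milliseconds"):
    """One datum in the shape that CloudWatch PutMetricData expects."""
    return {"MetricName": name, "Value": value, "Unit": unit}

def publish_latency(cw_client, namespace, millis):
    """Publish one custom latency sample, as the sample Lambda function might."""
    cw_client.put_metric_data(
        Namespace=namespace,
        MetricData=[build_metric_datum("ApiLatency", millis)],
    )

def create_anomaly_alarm(cw_client, namespace, sns_topic_arn):
    """Alarm when the metric crosses the upper edge of a 2-standard-deviation band."""
    cw_client.put_metric_alarm(
        AlarmName="ApiLatencyAnomaly",
        ComparisonOperator="GreaterThanUpperThreshold",
        EvaluationPeriods=3,
        ThresholdMetricId="band",  # compare m1 against the band defined below
        AlarmActions=[sns_topic_arn],
        Metrics=[
            {"Id": "m1",
             "MetricStat": {
                 "Metric": {"Namespace": namespace, "MetricName": "ApiLatency"},
                 "Period": 300, "Stat": "Average"},
             "ReturnData": True},
            {"Id": "band",
             "Expression": "ANOMALY_DETECTION_BAND(m1, 2)",  # 2 standard deviations
             "ReturnData": True},
        ],
    )
```

The second argument to `ANOMALY_DETECTION_BAND` is the band width in standard deviations, which plays the role of the adjustable threshold described in the console steps.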

## Related resources
<a name="create-alarms-for-custom-metrics-using-amazon-cloudwatch-anomaly-detection-resources"></a>
+ [Publishing custom metrics to CloudWatch](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/publishingMetrics.html)
+ [Using CloudWatch anomaly detection](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch_Anomaly_Detection.html)
+ [Alarm events and Amazon EventBridge](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch-and-eventbridge.html)
+ [What are the best practices to follow while pushing custom metrics to Cloud Watch?](https://www.youtube.com/watch?v=mVffHIzIL60) (video)
+ [Introduction to CloudWatch Application Insights ](https://www.youtube.com/watch?v=PBO636_t9n0)(video)
+ [Detect anomalies with CloudWatch ](https://www.youtube.com/watch?v=8umIX-pUy3k)(video)

## Attachments
<a name="attachments-d47e6f7f-e469-4cb9-b34b-8c4b78d71820"></a>

To access additional content that is associated with this document, unzip the following file: [attachment.zip](samples/p-attach/d47e6f7f-e469-4cb9-b34b-8c4b78d71820/attachments/attachment.zip)

# Create an AWS Cloud9 IDE that uses Amazon EBS volumes with default encryption
<a name="create-an-aws-cloud9-ide-that-uses-amazon-ebs-volumes-with-default-encryption"></a>

*Janardhan Malyala and Dhrubajyoti Mukherjee, Amazon Web Services*

## Summary
<a name="create-an-aws-cloud9-ide-that-uses-amazon-ebs-volumes-with-default-encryption-summary"></a>

**Notice**: AWS Cloud9 is no longer available to new customers. Existing customers of AWS Cloud9 can continue to use the service as normal. [Learn more](https://aws.amazon.com/blogs/devops/how-to-migrate-from-aws-cloud9-to-aws-ide-toolkits-or-aws-cloudshell/)

You can use [encryption by default](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html#encryption-by-default) to enforce the encryption of your Amazon Elastic Block Store (Amazon EBS) volumes and snapshot copies on the Amazon Web Services (AWS) Cloud. 

You can create an AWS Cloud9 integrated development environment (IDE) that uses EBS volumes encrypted by default. However, the AWS Identity and Access Management (IAM) [service-linked role](https://docs.aws.amazon.com/cloud9/latest/user-guide/using-service-linked-roles.html) for AWS Cloud9 requires access to the AWS Key Management Service (AWS KMS) key for these EBS volumes. If access is not provided, the AWS Cloud9 IDE might fail to launch and debugging might be difficult. 

This pattern provides the steps to add the service-linked role for AWS Cloud9 to the AWS KMS key that is used by your EBS volumes. The setup described by this pattern helps you successfully create and launch an IDE that uses EBS volumes with encryption by default.

## Prerequisites and limitations
<a name="create-an-aws-cloud9-ide-that-uses-amazon-ebs-volumes-with-default-encryption-prereqs"></a>

**Prerequisites**
+ An active AWS account.
+ Default encryption turned on for EBS volumes. For more information about encryption by default, see [Amazon EBS encryption](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html) in the Amazon Elastic Compute Cloud (Amazon EC2) documentation.
+ An existing [customer managed KMS key](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#customer-cmk) for encrypting your EBS volumes.

**Note**  
You don't need to create the service-linked role for AWS Cloud9. When you create an AWS Cloud9 development environment, AWS Cloud9 creates the service-linked role for you.

## Architecture
<a name="create-an-aws-cloud9-ide-that-uses-amazon-ebs-volumes-with-default-encryption-architecture"></a>

![\[Using an AWS Cloud9 IDE to enforce the encryption of EBS volumes and snapshots.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/dd98fbb4-0949-4299-b701-bc857e13049c/images/6b22b8d1-75d9-4f06-b5d6-5fff7397f22d.png)


**Technology stack**
+ AWS Cloud9
+ IAM
+ AWS KMS

## Tools
<a name="create-an-aws-cloud9-ide-that-uses-amazon-ebs-volumes-with-default-encryption-tools"></a>
+ [AWS Cloud9](https://docs.aws.amazon.com/cloud9/latest/user-guide/welcome.html) is an integrated development environment (IDE) that helps you code, build, run, test, and debug software. It also helps you release software to the AWS Cloud.
+ [Amazon Elastic Block Store (Amazon EBS)](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonEBS.html) provides block-level storage volumes for use with Amazon Elastic Compute Cloud (Amazon EC2) instances.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.
+ [AWS Key Management Service (AWS KMS)](https://docs.aws.amazon.com/kms/latest/developerguide/overview.html) helps you create and control cryptographic keys to help protect your data.

## Epics
<a name="create-an-aws-cloud9-ide-that-uses-amazon-ebs-volumes-with-default-encryption-epics"></a>

### Find the default encryption key value
<a name="find-the-default-encryption-key-value"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Record the default encryption key value for the EBS volumes.  | Sign in to the AWS Management Console and open the Amazon EC2 console. Choose **EC2 dashboard**, and then choose **Data protection and security** in **Account attributes**. In the **EBS encryption** section, copy and record the value in **Default encryption key**. | Cloud architect, DevOps engineer | 
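The same values can also be read programmatically. The following is a minimal sketch using the Amazon EC2 `GetEbsEncryptionByDefault` and `GetEbsDefaultKmsKeyId` API operations through Boto3, with an injectable client so it can run without credentials in a test.

```python
# Hedged sketch: read the account's default EBS encryption settings for the
# current Region through the EC2 API instead of the console.

def default_ebs_encryption_info(ec2_client):
    """Return (encryption_enabled_by_default, default_kms_key_id) for the Region."""
    enabled = ec2_client.get_ebs_encryption_by_default()["EbsEncryptionByDefault"]
    key_id = ec2_client.get_ebs_default_kms_key_id()["KmsKeyId"]
    return enabled, key_id

# Example (requires boto3 and credentials):
# import boto3
# enabled, key_id = default_ebs_encryption_info(boto3.client("ec2"))
```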

### Provide access to the AWS KMS key
<a name="provide-access-to-the-aws-kms-key"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Provide AWS Cloud9 with access to the KMS key for EBS volumes. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-an-aws-cloud9-ide-that-uses-amazon-ebs-volumes-with-default-encryption.html) For more information about updating a key policy, see [How to change a key policy](https://docs.aws.amazon.com/kms/latest/developerguide/key-policy-modifying.html#key-policy-modifying-how-to) (AWS KMS documentation). The service-linked role for AWS Cloud9 is automatically created when you launch your first IDE. For more information, see [Creating a service-linked role](https://docs.aws.amazon.com/cloud9/latest/user-guide/using-service-linked-roles.html#create-service-linked-role) in the AWS Cloud9 documentation.  | Cloud architect, DevOps engineer | 

### Create and launch the IDE
<a name="create-and-launch-the-ide"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create and launch the AWS Cloud9 IDE. | Open the AWS Cloud9 console and choose **Create environment**. Configure the IDE according to your requirements by following the steps in [Creating an EC2 environment](https://docs.aws.amazon.com/cloud9/latest/user-guide/create-environment-main.html) in the AWS Cloud9 documentation.  | Cloud architect, DevOps engineer | 

## Related resources
<a name="create-an-aws-cloud9-ide-that-uses-amazon-ebs-volumes-with-default-encryption-resources"></a>
+ [Encrypt EBS volumes used by AWS Cloud9](https://docs.aws.amazon.com/cloud9/latest/user-guide/move-environment.html#encrypting-volumes)
+ [Create a service-linked role for AWS Cloud9](https://docs.aws.amazon.com/cloud9/latest/user-guide/using-service-linked-roles.html#create-service-linked-role)
+ [Create an EC2 environment in AWS Cloud9](https://docs.aws.amazon.com/cloud9/latest/user-guide/create-environment-main.html)

## Additional information
<a name="create-an-aws-cloud9-ide-that-uses-amazon-ebs-volumes-with-default-encryption-additional"></a>

**AWS KMS key policy updates**

Replace `<aws_accountid>` with your AWS account ID.

```
{
            "Sid": "Allow use of the key",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::<aws_accountid>:role/aws-service-role/cloud9.amazonaws.com/AWSServiceRoleForAWSCloud9"
            },
            "Action": [
                "kms:Encrypt",
                "kms:Decrypt",
                "kms:ReEncrypt*",
                "kms:GenerateDataKey*",
                "kms:DescribeKey"
            ],
            "Resource": "*"
        },
        {
            "Sid": "Allow attachment of persistent resources",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::<aws_accountid>:role/aws-service-role/cloud9.amazonaws.com/AWSServiceRoleForAWSCloud9"
            },
            "Action": [
                "kms:CreateGrant",
                "kms:ListGrants",
                "kms:RevokeGrant"
            ],
            "Resource": "*",
            "Condition": {
                "Bool": {
                    "kms:GrantIsForAWSResource": "true"
                }
            }
        }
```
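One way to apply the statements above is to edit the policy in the console; a hedged alternative is to merge them programmatically with the AWS KMS `GetKeyPolicy` and `PutKeyPolicy` API operations, as in this sketch (the `"default"` policy name is the standard name for a key's primary policy; the merge helper is a hypothetical convenience, not part of this pattern's attachments).

```python
import json

def add_statements(policy_document, new_statements):
    """Append statements to a key policy document, skipping duplicate Sids."""
    policy = json.loads(policy_document)
    existing_sids = {s.get("Sid") for s in policy.get("Statement", [])}
    for stmt in new_statements:
        if stmt.get("Sid") not in existing_sids:
            policy.setdefault("Statement", []).append(stmt)
    return json.dumps(policy)

def update_key_policy(kms_client, key_id, new_statements):
    """Read, merge, and write back the key policy (policy name "default")."""
    current = kms_client.get_key_policy(KeyId=key_id, PolicyName="default")["Policy"]
    kms_client.put_key_policy(KeyId=key_id, PolicyName="default",
                              Policy=add_statements(current, new_statements))
```

Skipping duplicate `Sid` values makes the update idempotent, so running it twice does not grow the policy.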

**Using a cross-account key**

If you want to use a cross-account KMS key, you must use a grant in combination with the KMS key policy. This enables cross-account access to the key. In the same account that you used to create the Cloud9 environment, run the following command in the terminal.

```
aws kms create-grant \
 --region <Region where Cloud9 environment is created> \
 --key-id <The cross-account KMS key ARN> \
 --grantee-principal arn:aws:iam::<The account where Cloud9 environment is created>:role/aws-service-role/cloud9.amazonaws.com/AWSServiceRoleForAWSCloud9 \
 --operations "Encrypt" "Decrypt" "ReEncryptFrom" "ReEncryptTo" "GenerateDataKey" "GenerateDataKeyWithoutPlaintext" "DescribeKey" "CreateGrant"
```

After you run this command, you can create Cloud9 environments by using EBS encryption with a key in a different account.

# Create tag-based Amazon CloudWatch dashboards automatically
<a name="create-tag-based-amazon-cloudwatch-dashboards-automatically"></a>

*Janak Vadaria, Vinodkumar Mandalapu, and RAJNEESH TYAGI, Amazon Web Services*

## Summary
<a name="create-tag-based-amazon-cloudwatch-dashboards-automatically-summary"></a>

Creating different Amazon CloudWatch dashboards manually can be time-consuming, particularly when you have to create and update multiple resources to automatically scale your environment. A solution that creates and updates your CloudWatch dashboards automatically can save you time. This pattern helps you deploy a fully automated AWS Cloud Development Kit (AWS CDK) pipeline that creates and updates CloudWatch dashboards for your AWS resources based on tag change events. The dashboards display Golden Signals metrics.

In site reliability engineering (SRE), Golden Signals refers to a comprehensive set of metrics that offer a broad view of a service from a user or consumer perspective. These metrics consist of latency, traffic, errors, and saturation. For more information, see [What is Site Reliability Engineering (SRE)?](https://aws.amazon.com/what-is/sre/) on the AWS website.

The solution provided by this pattern is event-driven. After it's deployed, it continuously monitors the tag change events and automatically updates the CloudWatch dashboards and alarms.

## Prerequisites and limitations
<a name="create-tag-based-amazon-cloudwatch-dashboards-automatically-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ AWS Command Line Interface (AWS CLI), [installed and configured](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html)
+ [Prerequisites](https://docs.aws.amazon.com/cdk/v2/guide/work-with.html#work-with-prerequisites) for the AWS CDK v2
+ A [bootstrapped environment](https://docs.aws.amazon.com/cdk/v2/guide/bootstrapping.html) on AWS
+ [Python version 3](https://www.python.org/downloads/)
+ [AWS SDK for Python (Boto3)](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/quickstart.html), installed
+ [Node.js version 18](https://nodejs.org/en/download/current) or later
+ Node package manager (npm), [installed and configured](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) for the AWS CDK
+ Moderate (level 200) familiarity with the AWS CDK and AWS CodePipeline

**Limitations**

This solution currently creates automated dashboards for the following AWS services only:
+ [Amazon Relational Database Service (Amazon RDS)](https://aws.amazon.com/rds/)
+ [AWS Auto Scaling](https://aws.amazon.com/autoscaling/)
+ [Amazon Simple Notification Service (Amazon SNS)](https://aws.amazon.com/sns/)
+ [Amazon DynamoDB](https://aws.amazon.com/dynamodb/)
+ [AWS Lambda](https://aws.amazon.com/lambda/)

## Architecture
<a name="create-tag-based-amazon-cloudwatch-dashboards-automatically-architecture"></a>

**Target technology stack**
+ [CloudWatch dashboards](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch_Dashboards.html)
+ [CloudWatch alarms](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/AlarmThatSendsEmail.html)

**Target architecture**

![\[Target architecture for creating tag-based CloudWatch dashboards\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/f234fe30-87db-446f-a291-d33928ca2ccb/images/f63ca697-f252-416d-8a1b-0239f38c10c5.png)


1. An AWS tag change event for the configured application tags or code changes initiates a pipeline in AWS CodePipeline to build and deploy updated CloudWatch dashboards.

1. AWS CodeBuild runs a Python script to find the resources that have configured tags and stores the resource IDs in a local file in a CodeBuild environment.

1. CodeBuild runs `cdk synth` to generate CloudFormation templates that deploy CloudWatch dashboards and alarms.

1. CodePipeline deploys the CloudFormation templates to the specified AWS account and Region.

1. When the CloudFormation stack has been deployed successfully, you can view the CloudWatch dashboards and alarms.
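Step 2 of the workflow (finding resources by tag) can be sketched with the Resource Groups Tagging API. The following is a hedged illustration, not the actual script in the sample repository; the tag key and value are hypothetical, and the client is injectable so pagination can be tested without AWS credentials.

```python
# Hedged sketch: collect the ARNs of all resources that carry a given tag,
# following the GetResources pagination token until it is empty.

def find_tagged_resource_arns(tagging_client, tag_key, tag_values):
    """Return the ARNs of resources tagged tag_key with any of tag_values."""
    arns, token = [], ""
    while True:
        response = tagging_client.get_resources(
            TagFilters=[{"Key": tag_key, "Values": tag_values}],
            PaginationToken=token,
        )
        arns += [m["ResourceARN"] for m in response["ResourceTagMappingList"]]
        token = response.get("PaginationToken", "")
        if not token:
            return arns

# Example (requires boto3 and credentials):
# import boto3
# arns = find_tagged_resource_arns(
#     boto3.client("resourcegroupstaggingapi"), "application", ["my-app"])
```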

**Automation and scale**

This solution has been automated by using the AWS CDK. You can find the code in the GitHub [Golden Signals Dashboards on Amazon CloudWatch](https://github.com/aws-samples/golden-signals-dashboards-sample-app) repository. For additional scaling and to create custom dashboards, you can configure multiple tag keys and values.
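The pipeline is initiated by AWS tag change events, which Amazon EventBridge delivers as `Tag Change on Resource` events from the `aws.tag` source. The following hedged sketch shows an event pattern and the corresponding `PutRule` call; the rule name and monitored tag key are hypothetical, and the AWS CDK app in the sample repository wires up something equivalent rather than this exact code.

```python
import json

# Event pattern matching tag change events for a hypothetical "application" tag.
TAG_CHANGE_PATTERN = {
    "source": ["aws.tag"],
    "detail-type": ["Tag Change on Resource"],
    "detail": {"changed-tag-keys": ["application"]},
}

def create_tag_change_rule(events_client, rule_name="golden-signals-tag-change"):
    """Create or update an EventBridge rule that fires on matching tag changes."""
    return events_client.put_rule(
        Name=rule_name,
        EventPattern=json.dumps(TAG_CHANGE_PATTERN),
        State="ENABLED",
    )
```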

## Tools
<a name="create-tag-based-amazon-cloudwatch-dashboards-automatically-tools"></a>

**AWS services**
+ [Amazon EventBridge](https://aws.amazon.com/eventbridge/) is a serverless event bus service that helps you connect your applications with real-time data from a variety of sources, including AWS Lambda functions, HTTP invocation endpoints using API destinations, or event buses in other AWS accounts.
+ [AWS CodePipeline](https://aws.amazon.com/codepipeline/) helps you quickly model and configure the different stages of a software release and automate the steps required to release software changes continuously.
+ [AWS CodeBuild](https://aws.amazon.com/codebuild/) is a fully managed build service that helps you compile source code, run unit tests, and produce artifacts that are ready to deploy.
+ [AWS Command Line Interface (AWS CLI)](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) is an open source tool that helps you interact with AWS services through commands in your command-line shell.
+ [AWS Identity and Access Management (IAM)](https://aws.amazon.com/iam/) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.
+ [Amazon Simple Storage Service (Amazon S3)](https://aws.amazon.com/s3/) is a cloud-based object storage service that helps you store, protect, and retrieve any amount of data.

## Best practices
<a name="create-tag-based-amazon-cloudwatch-dashboards-automatically-best-practices"></a>

As a security best practice, you can use encryption and authentication for the source repositories that connect to your pipelines. For additional best practices, see [CodePipeline best practices and use cases](https://docs.aws.amazon.com/codepipeline/latest/userguide/best-practices.html) in the CodePipeline documentation.

## Epics
<a name="create-tag-based-amazon-cloudwatch-dashboards-automatically-epics"></a>

### Configure and deploy the sample application
<a name="configure-and-deploy-the-sample-application"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Configure and deploy the sample application. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-tag-based-amazon-cloudwatch-dashboards-automatically.html) | AWS DevOps | 
| Automatically create dashboards and alarms. | After you deploy the sample application, you can create any of the resources that this solution supports with the expected tag values, which automatically creates the specified dashboards and alarms. To test this solution, create an AWS Lambda function:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-tag-based-amazon-cloudwatch-dashboards-automatically.html) | AWS DevOps | 

### Remove the sample application
<a name="remove-the-sample-application"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Remove the `golden-signals-dashboard` construct. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-tag-based-amazon-cloudwatch-dashboards-automatically.html) | AWS DevOps | 

## Troubleshooting
<a name="create-tag-based-amazon-cloudwatch-dashboards-automatically-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| Python command not found (referring to `findresources.sh`, line 8).  | Check the version of your Python installation. If you have installed Python version 3, replace `python` with `python3` on line 8 of the `findresources.sh` file, and run the `sh deploy.sh` command again to deploy the solution. | 

## Related resources
<a name="create-tag-based-amazon-cloudwatch-dashboards-automatically-resources"></a>
+ [Bootstrapping](https://docs.aws.amazon.com/cdk/v2/guide/bootstrapping.html) (AWS CDK documentation)
+ [Using named profiles](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html#cli-configure-files-methods) (AWS CLI documentation)
+ [AWS CDK Workshop](https://cdkworkshop.com/)

## Additional information
<a name="create-tag-based-amazon-cloudwatch-dashboards-automatically-additional"></a>

The following illustration shows a sample dashboard for Amazon RDS that is created as part of this solution.

![\[Sample dashboard for Amazon RDS\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/f234fe30-87db-446f-a291-d33928ca2ccb/images/706a262f-8650-47ff-ac44-e04ce5f4023e.png)


# Document your AWS landing zone design
<a name="document-your-aws-landing-zone-design"></a>

*Michael Daehnert, Florian Langer, and Michael Lodemann, Amazon Web Services*

## Summary
<a name="document-your-aws-landing-zone-design-summary"></a>

A *landing zone* is a well-architected, multi-account environment that's based on security and compliance best practices. It is the enterprise-wide container that holds all of your organizational units (OUs), AWS accounts, users, and other resources. A landing zone can scale to fit the needs of an enterprise of any size. AWS has two options for creating your landing zone: a service-based landing zone using [AWS Control Tower](https://docs.aws.amazon.com/controltower/latest/userguide/what-is-control-tower.html) or a customized landing zone that you build. Each option requires a different level of AWS knowledge.

AWS created AWS Control Tower to help you save time by automating the setup of a landing zone. AWS Control Tower is managed by AWS and uses best practices and guidelines to help you create your foundational environment. AWS Control Tower uses integrated services, such as [AWS Service Catalog](https://docs.aws.amazon.com/servicecatalog/latest/adminguide/introduction.html) and [AWS Organizations](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_introduction.html), to provision accounts in your landing zone and manage access to those accounts.

AWS landing zone projects vary in requirements, implementation details, and operational action items. There are customization aspects that need to be handled with every landing zone implementation. This includes (but is not limited to) how access management is handled, which technology stack is used, and what the monitoring requirements are for operational excellence. This pattern provides a template that helps you document your landing zone project. By using the template, you can document your project more quickly and help your development and operations teams understand your landing zone.

## Prerequisites and limitations
<a name="document-your-aws-landing-zone-design-prereqs"></a>

**Limitations**

This pattern does not describe what a landing zone is or how to implement one. For more information about these topics, see the [Related resources](#document-your-aws-landing-zone-design-resources) section.

## Epics
<a name="document-your-aws-landing-zone-design-epics"></a>

### Create the design document
<a name="create-the-design-document"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Identify key stakeholders. | Identify key service and team managers that are linked to your landing zone. | Project manager | 
| Customize the template. | Download the template in the [Attachments](#attachments-9e39a05a-8f51-4fe3-8999-522feafed6ca) section, and then update the template as follows:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/document-your-aws-landing-zone-design.html) | Project manager | 
| Complete the template. | In meetings with the stakeholders or by using a write-and-review process, complete the template as follows:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/document-your-aws-landing-zone-design.html) | Project manager | 
| Share the design document. | When your landing zone design documentation is complete, save it in a shared repository or central location where all stakeholders can access it. We recommend that you use standard document control processes to record and approve revisions to the design document. | Project manager | 

## Related resources
<a name="document-your-aws-landing-zone-design-resources"></a>
+ [AWS Control Tower documentation](https://docs.aws.amazon.com/controltower/latest/userguide/what-is-control-tower.html)
  + [Plan your AWS Control Tower landing zone](https://docs.aws.amazon.com/controltower/latest/userguide/planning-your-deployment.html)
  + [AWS multi-account strategy for your AWS Control Tower landing zone](https://docs.aws.amazon.com/controltower/latest/userguide/aws-multi-account-landing-zone.html)
  + [Administrative tips for landing zone setup](https://docs.aws.amazon.com/controltower/latest/userguide/tips-for-admin-setup.html)
  + [Expectations for landing zone configuration](https://docs.aws.amazon.com/controltower/latest/userguide/getting-started-configure.html)
+ [Customizations for AWS Control Tower](https://aws.amazon.com/solutions/implementations/customizations-for-aws-control-tower/) (AWS Solutions Library)
+ [Setting up a secure and scalable multi-account AWS environment](https://docs.aws.amazon.com/prescriptive-guidance/latest/migration-aws-environment/welcome.html) (AWS Prescriptive Guidance)

## Attachments
<a name="attachments-9e39a05a-8f51-4fe3-8999-522feafed6ca"></a>

To access additional content that is associated with this document, unzip the following file: [attachment.zip](samples/p-attach/9e39a05a-8f51-4fe3-8999-522feafed6ca/attachments/attachment.zip)

# Improve operational performance by enabling Amazon DevOps Guru across multiple AWS Regions, accounts, and OUs with the AWS CDK
<a name="improve-operational-performance-by-enabling-amazon-devops-guru-across-multiple-aws-regions-accounts-and-ous-with-the-aws-cdk"></a>

*Dr. Rahul Sharad Gaikwad, Amazon Web Services*

## Summary
<a name="improve-operational-performance-by-enabling-amazon-devops-guru-across-multiple-aws-regions-accounts-and-ous-with-the-aws-cdk-summary"></a>

This pattern demonstrates the steps to enable the Amazon DevOps Guru service across multiple Amazon Web Services (AWS) Regions, accounts, and organizational units (OUs) by using the AWS Cloud Development Kit (AWS CDK) in TypeScript. You can use AWS CDK stacks to deploy AWS CloudFormation StackSets from the administrator (primary) AWS account to enable Amazon DevOps Guru across multiple accounts, instead of logging into each account and enabling DevOps Guru individually for each account.

Amazon DevOps Guru provides artificial intelligence operations (AIOps) features to help you improve the availability of your applications and resolve operational issues faster. DevOps Guru reduces your manual effort by applying machine learning (ML) powered recommendations, without requiring any ML expertise. DevOps Guru analyzes your resources and operational data. If it detects any anomalies, it provides metrics, events, and recommendations to help you address the issue.

This pattern describes three deployment options for enabling Amazon DevOps Guru:
+ For all stack resources across multiple accounts and Regions
+ For all stack resources across OUs
+ For specific stack resources across multiple accounts and Regions

## Prerequisites and limitations
<a name="improve-operational-performance-by-enabling-amazon-devops-guru-across-multiple-aws-regions-accounts-and-ous-with-the-aws-cdk-prereqs"></a>

**Prerequisites**
+ An active AWS account.
+ AWS Command Line Interface (AWS CLI), installed and configured. (See [Installing, updating, and uninstalling the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html) in the AWS CLI documentation.)
+ AWS CDK Toolkit, installed and configured. (See [AWS CDK Toolkit](https://docs.aws.amazon.com/cdk/latest/guide/cli.html) in the AWS CDK documentation.)
+ Node Package Manager (npm), installed and configured for the AWS CDK in TypeScript. (See [Downloading and installing Node.js and npm](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) in the npm documentation.)
+ Python3 installed and configured, for running a Python script to inject traffic into the sample serverless application. (See [Python Setup and Usage](https://docs.python.org/3/using/index.html) in the Python documentation.)
+ Pip, installed and configured to install the Python requests library. (See the [pip installation instructions](https://pypi.org/project/pip/) on the PyPI website.)

**Product versions**
+ AWS CDK Toolkit version 1.107.0 or later
+ npm version 7.9.0 or later
+ Node.js version 15.3.0 or later

## Architecture
<a name="improve-operational-performance-by-enabling-amazon-devops-guru-across-multiple-aws-regions-accounts-and-ous-with-the-aws-cdk-architecture"></a>

**Technologies**

The architecture for this pattern includes the following services:
+ [Amazon DevOps Guru](https://aws.amazon.com/devops-guru/)
+ [AWS CloudFormation](https://aws.amazon.com/cloudformation/)
+ [Amazon API Gateway](https://aws.amazon.com/api-gateway/)
+ [AWS Lambda](https://aws.amazon.com/lambda/)
+ [Amazon DynamoDB](https://aws.amazon.com/dynamodb/)
+ [Amazon CloudWatch](https://aws.amazon.com/cloudwatch/)
+ [AWS CloudTrail](https://aws.amazon.com/cloudtrail/)

**AWS CDK stacks**

The pattern uses the following AWS CDK stacks: 
+ `CdkStackSetAdminRole` – Creates an AWS Identity and Access Management (IAM) administrator role to establish a trust relationship between the administrator and target accounts.
+ `CdkStackSetExecRole` – Creates an IAM role to trust the administrator account.
+ `CdkDevopsGuruStackMultiAccReg` – Enables DevOps Guru across multiple AWS Regions and accounts for all stacks, and sets up Amazon Simple Notification Service (Amazon SNS) notifications.
+ `CdkDevopsGuruStackMultiAccRegSpecStacks` – Enables DevOps Guru across multiple AWS Regions and accounts for specific stacks, and sets up Amazon SNS notifications.
+ `CdkDevopsguruStackOrgUnit` – Enables DevOps Guru across OUs, and sets up Amazon SNS notifications. 
+ `CdkInfrastructureStack` – Deploys sample serverless application components such as API Gateway, Lambda, and DynamoDB in the administrator account to demonstrate fault injection and insights generation.

**Sample application architecture**

The following diagram illustrates the architecture of a sample serverless application that has been deployed across multiple accounts and Regions. The pattern uses the administrator account to deploy all the AWS CDK stacks. It also uses the administrator account as one of the target accounts for setting up DevOps Guru.

1. When DevOps Guru is enabled, it first baselines each resource’s behavior and then ingests operational data from CloudWatch vended metrics.

1. If it detects an anomaly, it correlates it with the events from CloudTrail, and generates an insight.

1. The insight provides a correlated sequence of events along with prescribed recommendations to enable the operator to identify the culprit resource.

1. Amazon SNS sends notification messages to the operator.

![\[A sample serverless application that has been deployed across multiple accounts and Regions.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/6075ca48-862a-4aa0-93c6-10bad8195a5c/images/beeb0992-aaa8-4f08-b983-685b6b8b8d5e.png)


**Automation and scale**

The [GitHub repository](https://github.com/aws-samples/amazon-devopsguru-cdk-samples.git) provided with this pattern uses the AWS CDK as an infrastructure as code (IaC) tool to create the configuration for this architecture. AWS CDK helps you orchestrate resources and enable DevOps Guru across multiple AWS accounts, Regions, and OUs.

## Tools
<a name="improve-operational-performance-by-enabling-amazon-devops-guru-across-multiple-aws-regions-accounts-and-ous-with-the-aws-cdk-tools"></a>

**AWS services**
+ [AWS CDK](https://docs.aws.amazon.com/cdk/latest/guide/home.html) – AWS Cloud Development Kit (AWS CDK) helps you define your cloud infrastructure as code in one of five supported programming languages: TypeScript, JavaScript, Python, Java, and C#.
+ [AWS CLI ](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html)– AWS Command Line Interface (AWS CLI) is a unified tool that provides a consistent command-line interface for interacting with AWS services and resources.

**Code**

The source code for this pattern is available on GitHub, in the [Amazon DevOps Guru CDK Samples](https://github.com/aws-samples/amazon-devopsguru-cdk-samples.git) repository. The AWS CDK code is written in TypeScript. To clone and use the repository, follow the instructions in the next section.

**Important**  
Some of the stories in this pattern include AWS CDK and AWS CLI command examples that are formatted for Unix, Linux, and macOS. For Windows, replace the backslash (\\) continuation character at the end of each line with a caret (^).

## Epics
<a name="improve-operational-performance-by-enabling-amazon-devops-guru-across-multiple-aws-regions-accounts-and-ous-with-the-aws-cdk-epics"></a>

### Prepare the AWS resources for deployment
<a name="prepare-the-aws-resources-for-deployment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Configure AWS named profiles. | Set up your AWS named profiles as follows to deploy stacks in a multi-account environment. For the administrator account:<pre>$aws configure --profile administrator<br />AWS Access Key ID [****]: <your-administrator-access-key-ID><br />AWS Secret Access Key [****]: <your-administrator-secret-access-key><br />Default region name [None]: <your-administrator-region><br />Default output format [None]: json</pre>For the target account:<pre>$aws configure --profile target<br />AWS Access Key ID [****]: <your-target-access-key-ID><br />AWS Secret Access Key [****]: <your-target-secret-access-key><br />Default region name [None]: <your-target-region><br />Default output format [None]: json</pre>For more information, see [Using named profiles](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html#cli-configure-files-using-profiles) in the AWS CLI documentation. | DevOps engineer | 
| Verify AWS profile configurations. | (Optional) You can verify your AWS profile configurations in the `credentials` and `config` files by following the instructions in [Set and view configuration settings](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html#cli-configure-files-methods) in the AWS CLI documentation. | DevOps engineer | 
| Verify the AWS CDK version. | Verify the version of the AWS CDK Toolkit by running the following command:<pre>$cdk --version</pre>This pattern requires version 1.107.0 or later. If you have an earlier version of the AWS CDK, follow the instructions in the [AWS CDK documentation](https://docs.aws.amazon.com/cdk/latest/guide/cli.html) to update it. | DevOps engineer | 
| Clone the project code. | Clone the GitHub repository for this pattern by using the command:<pre>$git clone https://github.com/aws-samples/amazon-devopsguru-cdk-samples.git</pre> | DevOps engineer | 
| Install package dependencies and compile the TypeScript files. | Install the package dependencies and compile the TypeScript files by running the following commands:<pre>$cd amazon-devopsguru-cdk-samples<br />$npm install<br />$npm fund</pre>These commands install all the packages from the sample repository.If you get any errors about missing packages, use one of the following commands:<pre>$npm ci</pre>—or—<pre>$npm install -g @aws-cdk/<package-name></pre>You can find the list of package names and versions in the `Dependencies` section of the `/amazon-devopsguru-cdk-samples/package.json` file. For more information, see [npm ci](https://docs.npmjs.com/cli/v7/commands/npm-ci) and [npm install](https://docs.npmjs.com/cli/v7/commands/npm-install) in the npm documentation. | DevOps engineer | 

### Build (synthesize) the AWS CDK stacks
<a name="build-synthesize-the-aws-cdk-stacks"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Configure an email address for Amazon SNS notifications. | Follow these steps to provide an email address for Amazon SNS notifications:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/improve-operational-performance-by-enabling-amazon-devops-guru-across-multiple-aws-regions-accounts-and-ous-with-the-aws-cdk.html) | DevOps engineer | 
| Build the project code. | Build the project code and synthesize the stacks by running the command:<pre>npm run build && cdk synth </pre>You should see output similar to the following: <pre>$npm run build && cdk synth<br />> cdk-devopsguru@0.1.0 build<br />> tsc<br />Successfully synthesized to ~/amazon-devopsguru-cdk-samples/cdk.out<br />Supply a stack id (CdkDevopsGuruStackMultiAccReg,CdkDevopsGuruStackMultiAccRegSpecStacks, CdkDevopsguruStackOrgUnit, CdkInfrastructureStack, CdkStackSetAdminRole, CdkStackSetExecRole) to display its template.</pre>For more information and steps, see [Your first AWS CDK app](https://docs.aws.amazon.com/cdk/latest/guide/hello_world.html) in the AWS CDK documentation. | DevOps engineer | 
| List the AWS CDK stacks. | Run the following command to list all AWS CDK stacks:<pre>$cdk list</pre>The command displays the following list:<pre>CdkDevopsGuruStackMultiAccReg<br />CdkDevopsGuruStackMultiAccRegSpecStacks<br />CdkDevopsguruStackOrgUnit<br />CdkInfrastructureStack<br />CdkStackSetAdminRole<br />CdkStackSetExecRole</pre> | DevOps engineer | 

### Option 1 - Enable DevOps Guru for all stack resources across multiple accounts
<a name="option-1---enable-devops-guru-for-all-stack-resources-across-multiple-accounts"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Deploy the AWS CDK stacks for creating IAM roles. | This pattern uses [AWS CloudFormation StackSets](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/what-is-cfnstacksets.html) to perform stack operations across multiple accounts. If you are creating your first stack set, you must create the following IAM roles to get the required permissions set up in your AWS accounts:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/improve-operational-performance-by-enabling-amazon-devops-guru-across-multiple-aws-regions-accounts-and-ous-with-the-aws-cdk.html)The roles must have these exact names.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/improve-operational-performance-by-enabling-amazon-devops-guru-across-multiple-aws-regions-accounts-and-ous-with-the-aws-cdk.html)For more information, see [Grant self-managed permissions](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-prereqs-self-managed.html) in the AWS CloudFormation documentation. | DevOps engineer | 
| Deploy the AWS CDK stack for enabling DevOps Guru across multiple accounts. | The AWS CDK `CdkDevopsGuruStackMultiAccReg` stack creates stack sets to deploy stack instances across multiple accounts and Regions. To deploy the stack, run the following CLI command with the specified parameters:<pre>$cdk deploy CdkDevopsGuruStackMultiAccReg \<br />  --profile administrator \<br />  --parameters AdministratorAccountId=<administrator-account-ID> \<br />  --parameters TargetAccountId=<target-account-ID> \<br />  --parameters RegionIds="<region-1>,<region-2>"</pre>Currently Amazon DevOps Guru is available in the AWS Regions listed in the [DevOps Guru FAQ](https://aws.amazon.com/devops-guru/faqs/). | DevOps engineer | 

### Option 2 - Enable DevOps Guru for all stack resources across OUs
<a name="option-2---enable-devops-guru-for-all-stack-resources-across-ous"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Extract OU IDs. | On the [AWS Organizations](https://console.aws.amazon.com/organizations/v2/home/accounts) console, identify the IDs of the organizational units where you want to enable DevOps Guru. | DevOps engineer | 
| Enable service-managed permissions for OUs. | If you're using AWS Organizations for account management, you must grant service-managed permissions to enable DevOps Guru. Instead of creating the IAM roles manually, use [organization-based trusted access and service-linked roles (SLRs)](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-orgs-enable-trusted-access.html). | DevOps engineer | 
| Deploy the AWS CDK stack for enabling DevOps Guru across OUs. | The AWS CDK `CdkDevopsguruStackOrgUnit` stack enables the DevOps Guru service across OUs. To deploy the stack, run the following command with the specified parameters:<pre>$cdk deploy CdkDevopsguruStackOrgUnit \<br />  --profile administrator \<br />  --parameters RegionIds="<region-1>,<region-2>" \<br />  --parameters OrganizationalUnitIds="<OU-1>,<OU-2>"</pre> | DevOps engineer | 

### Option 3 - Enable DevOps Guru for specific stack resources across multiple accounts
<a name="option-3---enable-devops-guru-for-specific-stack-resources-across-multiple-accounts"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Deploy the AWS CDK stacks for creating IAM roles. | If you haven't already created the required IAM roles shown in the first option, do that first:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/improve-operational-performance-by-enabling-amazon-devops-guru-across-multiple-aws-regions-accounts-and-ous-with-the-aws-cdk.html)For more information, see [Grant self-managed permissions](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-prereqs-self-managed.html) in the AWS CloudFormation documentation. | DevOps engineer | 
| Delete existing stacks. | If you already used the first option to enable DevOps Guru for all stack resources, you can delete the old stack by using the following command:<pre>$cdk destroy CdkDevopsGuruStackMultiAccReg --profile administrator</pre>Or, you can change the `RegionIds` parameter when you redeploy the stack to avoid a *Stacks already exist* error. | DevOps engineer | 
| Update the AWS CDK stack with a stack list.  | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/improve-operational-performance-by-enabling-amazon-devops-guru-across-multiple-aws-regions-accounts-and-ous-with-the-aws-cdk.html) | Data engineer | 
| Deploy the AWS CDK stack for enabling DevOps Guru for specific stack resources across multiple accounts. | The AWS CDK `CdkDevopsGuruStackMultiAccRegSpecStacks` stack enables DevOps Guru for specific stack resources across multiple accounts. To deploy the stack, run the following command:<pre>$cdk deploy CdkDevopsGuruStackMultiAccRegSpecStacks \<br />  --profile administrator  \<br />  --parameters AdministratorAccountId=<administrator-account-ID> \<br />  --parameters TargetAccountId=<target-account-ID> \<br />  --parameters RegionIds="<region-1>,<region-2>"</pre>If you previously deployed this stack for option 1, change the `RegionIds` parameter (making sure to choose from [available Regions](https://aws.amazon.com/devops-guru/faqs/)) to avoid a *Stacks already exist* error. | DevOps engineer | 

### Deploy the AWS CDK infrastructure stack
<a name="deploy-the-aws-cdk-infrastructure-stack"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Deploy the sample serverless infrastructure stack. | The AWS CDK `CdkInfrastructureStack` stack deploys serverless components such as API Gateway, Lambda, and a DynamoDB table to demonstrate DevOps Guru insights. To deploy the stack, run the following command: <pre>$cdk deploy CdkInfrastructureStack --profile administrator</pre> | DevOps engineer | 
| Insert sample records in DynamoDB. | Run the following command to populate the DynamoDB table with sample records. Provide the correct path for the `populate-shops-dynamodb-table.json` script.<pre>$aws dynamodb batch-write-item \<br />  --request-items file://scripts/populate-shops-dynamodb-table.json \<br />  --profile administrator</pre>The command displays the following output:<pre>{<br />    "UnprocessedItems": {}<br />}</pre> | DevOps engineer | 
| Verify inserted records in DynamoDB. | To verify that the DynamoDB table includes the sample records from the `populate-shops-dynamodb-table.json` file, access the URL for the `ListRestApiEndpointMonitorOperator` API, which is published as an output of the AWS CDK stack. You can also find this URL in the **Outputs** tab of the AWS CloudFormation console for the `CdkInfrastructureStack` stack. The AWS CDK output would look similar to the following:<pre>CdkInfrastructureStack.CreateRestApiMonitorOperatorEndpointD1D00045 = https://oure17c5vob.execute-api.<your-region>.amazonaws.com/prod/<br /><br />CdkInfrastructureStack.ListRestApiMonitorOperatorEndpointABBDB8D8 = https://cdff8icfrn4.execute-api.<your-region>.amazonaws.com/prod/</pre> | DevOps engineer | 
| Wait for resources to complete baselining. | This serverless stack has a few resources. We recommend that you wait for 2 hours before you carry out the next steps. If you deployed this stack in a production environment, it might take up to 24 hours to complete baselining, depending on the number of resources you selected to monitor in DevOps Guru. | DevOps engineer | 

### Generate DevOps Guru insights
<a name="generate-devops-guru-insights"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Update the AWS CDK infrastructure stack. | To try out DevOps Guru insights, you can make some configuration changes to reproduce a typical operational issue.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/improve-operational-performance-by-enabling-amazon-devops-guru-across-multiple-aws-regions-accounts-and-ous-with-the-aws-cdk.html) | DevOps engineer | 
| Inject HTTP requests on the API. | Inject ingress traffic in the form of HTTP requests on the `ListRestApiMonitorOperatorEndpointxxxx` API:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/improve-operational-performance-by-enabling-amazon-devops-guru-across-multiple-aws-regions-accounts-and-ous-with-the-aws-cdk.html) | DevOps engineer | 
| Review DevOps Guru insights. | Under standard conditions, the DevOps Guru dashboard displays zero in the ongoing insights counter. If it detects an anomaly, it raises an alert in the form of an insight. In the navigation pane, choose **Insights** to see the details of the anomaly, including an overview, aggregated metrics, relevant events, and recommendations. For more information about reviewing insights, see the [Gaining operational insights with AIOps using Amazon DevOps Guru](https://aws.amazon.com/blogs/devops/gaining-operational-insights-with-aiops-using-amazon-devops-guru/) blog post. | DevOps engineer | 
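
The traffic-injection step can be approximated with a short Python script like the following. This is a sketch, not the script shipped in the sample repository; the endpoint URL, duration, and request rate are placeholder assumptions.

```python
"""Inject a steady stream of HTTP GET requests against the List API endpoint
so that DevOps Guru has traffic (and, after the fault, errors) to analyze."""
import time


def make_request_plan(duration_seconds, requests_per_second):
    # Total requests to send and the pause between consecutive requests.
    total = duration_seconds * requests_per_second
    pause = 1.0 / requests_per_second
    return total, pause


def inject_traffic(url, duration_seconds=600, requests_per_second=5):
    import requests  # installed with pip (see the prerequisites)

    total, pause = make_request_plan(duration_seconds, requests_per_second)
    for _ in range(total):
        try:
            requests.get(url, timeout=5)
        except requests.RequestException:
            pass  # failed calls are expected once a fault has been injected
        time.sleep(pause)


# Usage (replace the URL with your stack's ListRestApiMonitorOperatorEndpoint output):
#     inject_traffic("https://<api-id>.execute-api.<your-region>.amazonaws.com/prod/")
```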

### Clean up
<a name="clean-up"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clean up and delete resources. | After you walk through this pattern, you should remove the resources you created to avoid incurring any further charges. Run these commands:<pre>$cdk destroy CdkDevopsGuruStackMultiAccReg --profile administrator<br />$cdk destroy CdkDevopsguruStackOrgUnit --profile administrator<br />$cdk destroy CdkDevopsGuruStackMultiAccRegSpecStacks --profile administrator<br />$cdk destroy CdkInfrastructureStack --profile administrator<br />$cdk destroy CdkStackSetAdminRole --profile administrator<br />$cdk destroy CdkStackSetExecRole --profile administrator<br />$cdk destroy CdkStackSetExecRole --profile target</pre> | DevOps engineer | 

## Related resources
<a name="improve-operational-performance-by-enabling-amazon-devops-guru-across-multiple-aws-regions-accounts-and-ous-with-the-aws-cdk-resources"></a>
+ [Gaining operational insights with AIOps using Amazon DevOps Guru](https://aws.amazon.com/blogs/devops/gaining-operational-insights-with-aiops-using-amazon-devops-guru/)
+ [Easily configure Amazon DevOps Guru across multiple accounts and Regions using AWS CloudFormation StackSets](https://aws.amazon.com/blogs/devops/configure-devops-guru-multiple-accounts-regions-using-cfn-stacksets/)
+ [DevOps Guru Workshop](https://aiops-using-devops-guru.workshop.aws/)

# Govern permission sets for multiple accounts by using Account Factory for Terraform
<a name="govern-permission-sets-aft"></a>

*Anand Krishna Varanasi and Siamak Heshmati, Amazon Web Services*

## Summary
<a name="govern-permission-sets-aft-summary"></a>

This pattern helps you integrate [AWS Control Tower Account Factory for Terraform (AFT)](https://docs.aws.amazon.com/controltower/latest/userguide/aft-overview.html) with [AWS IAM Identity Center](https://docs.aws.amazon.com/singlesignon/latest/userguide/what-is.html) to configure permissions for multiple AWS accounts at scale. This approach uses custom AWS Lambda functions to automate [permission set](https://docs.aws.amazon.com/singlesignon/latest/userguide/permissionsetsconcept.html) assignments to AWS accounts that are managed as an organization. This streamlines the process because it doesn’t require manual intervention from your platform engineering team. This solution can enhance operational efficiency, security, and consistency. It promotes a secure and standardized onboarding process for AWS Control Tower, making it indispensable for enterprises that prioritize agility and reliability for their cloud infrastructure.

## Prerequisites and limitations
<a name="govern-permission-sets-aft-prereqs"></a>

**Prerequisites**
+ AWS accounts, managed through AWS Control Tower. For more information, see [Getting started with AWS Control Tower](https://docs.aws.amazon.com/controltower/latest/userguide/getting-started-with-control-tower.html).
+ Account Factory for Terraform, deployed in a dedicated account in your environment. For more information, see [Deploy AWS Control Tower Account Factory for Terraform](https://docs.aws.amazon.com/controltower/latest/userguide/aft-getting-started.html).
+ An IAM Identity Center instance, set up in your environment. For more information, see [Getting started with IAM Identity Center](https://docs.aws.amazon.com/singlesignon/latest/userguide/getting-started.html).
+ An active IAM Identity Center [group](https://docs.aws.amazon.com/singlesignon/latest/userguide/users-groups-provisioning.html#groups-concept), configured. For more information, see [Add groups to your IAM Identity Center directory](https://docs.aws.amazon.com/singlesignon/latest/userguide/addgroups.html).
+ Python version 3.9 or later, installed

**Limitations**
+ This solution can be used only with accounts that are managed through AWS Control Tower, and it is deployed by using Account Factory for Terraform.
+ This pattern does not include instructions for setting up identity federation with an identity source. For more information about how to complete this set up, see [IAM Identity Center identity source tutorials](https://docs.aws.amazon.com/singlesignon/latest/userguide/tutorials.html) in the IAM Identity Center documentation.

## Architecture
<a name="govern-permission-sets-aft-architecture"></a>

**AFT overview**

AFT sets up a Terraform pipeline that helps you provision and customize your accounts in AWS Control Tower. AFT follows a GitOps model that automates the processes of account provisioning in AWS Control Tower. You create an *account request Terraform file* and commit it to the repository. This initiates the AFT workflow for account provisioning. After account provisioning is complete, AFT can automatically run additional customization steps. For more information, see [AFT architecture](https://docs.aws.amazon.com/controltower/latest/userguide/aft-architecture.html) in the AWS Control Tower documentation.

AFT provides the following main repositories:
+ `aft-account-request` – This repository contains Terraform code to create or update AWS accounts.
+ `aft-account-customizations` – This repository contains Terraform code to create or customize resources on a per-account basis.
+ `aft-global-customizations` – This repository contains Terraform code to create or customize resources for all accounts, at scale.
+ `aft-account-provisioning-customizations` – This repository manages customizations that are applied only to specific accounts created by and managed with AFT. For example, you might use this repository to customize user or group assignments in IAM Identity Center or to automate account closures.

**Solution overview**

This custom solution includes an AWS Step Functions state machine and an AWS Lambda function that assign permission sets to users and groups for multiple accounts. The state machine deployed through this pattern operates in conjunction with the pre-existing AFT `aft_account_provisioning_customizations` state machine. A user submits a request to update IAM Identity Center user and group assignments, either when a new AWS account is created or after the account is created, by pushing a change to the `aft-account-request` repository. The request to create or update an account initiates a stream in [Amazon DynamoDB Streams](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.html). This starts the Lambda function, which updates IAM Identity Center users and groups for the target AWS accounts.

The following is an example of the parameters you can provide in the Lambda function for permission set assignments to target users and groups:

```
custom_fields = {
    "InstanceArn"         = "<Organization ID>",
    "PermissionSetArn"    = "<Permission set ARN>",
    "PrincipalId"         = "<Principal ID>",
  }
```

The following are the parameters in this statement:
+ `InstanceArn` – The Amazon Resource Name (ARN) of the organization
+ `PermissionSetArn` – The ARN of the permission set
+ `PrincipalId` – The identifier of a user or group in IAM Identity Center to which the permission set will be applied

**Note**  
You must create the target permission set, users, and groups before running this solution.

While the `InstanceArn` value must remain consistent, you can modify the Lambda function to assign multiple permission sets to multiple target identities. The parameters for permission sets must end in `PermissionSetArn`, and the parameters for users and groups must end in `PrincipalId`. You must define both attributes. The following is an example of how to define multiple permission sets and target users and groups:

```
custom_fields = {
    "InstanceArn"                    = "<Organization ID>",
    "AdminAccessPermissionSetArn"    = "<Admin privileges permission set ARN>",
    "AdminAccessPrincipalId"         = "<Admin principal ID>",
    "ReadOnlyAccessPermissionSetArn" = "<Read-only privileges permission set ARN>",
    "ReadOnlyAccessPrincipalId"      = "<Read-only principal ID>",
  }
```
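As an illustration of the naming convention above, the following Python sketch pairs each `*PermissionSetArn` parameter with its matching `*PrincipalId` by their shared prefix. The helper name and the example ARN and ID values are hypothetical; they are not part of the pattern's actual Lambda code.

```python
# Hypothetical helper: pair permission-set parameters with principal IDs
# based on the *PermissionSetArn / *PrincipalId naming convention.

def pair_assignments(custom_fields):
    """Return (permission_set_arn, principal_id) pairs from custom_fields.

    A key such as 'AdminAccessPermissionSetArn' is matched with
    'AdminAccessPrincipalId' through their shared 'AdminAccess' prefix.
    """
    pairs = []
    for key, value in custom_fields.items():
        if key.endswith("PermissionSetArn"):
            prefix = key[: -len("PermissionSetArn")]
            principal_key = prefix + "PrincipalId"
            if principal_key not in custom_fields:
                raise ValueError(f"Missing {principal_key} for {key}")
            pairs.append((value, custom_fields[principal_key]))
    return pairs

# Example input with placeholder ARNs and principal IDs
custom_fields = {
    "InstanceArn": "arn:aws:sso:::instance/ssoins-example",
    "AdminAccessPermissionSetArn": "arn:aws:sso:::permissionSet/ssoins-example/ps-admin",
    "AdminAccessPrincipalId": "11111111-aaaa-bbbb-cccc-222222222222",
    "ReadOnlyAccessPermissionSetArn": "arn:aws:sso:::permissionSet/ssoins-example/ps-ro",
    "ReadOnlyAccessPrincipalId": "33333333-dddd-eeee-ffff-444444444444",
}
```

Because `InstanceArn` does not end in either suffix, it is skipped by the pairing loop, which matches the requirement that it stay consistent across assignments.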

The following diagram shows a step-by-step workflow of how the solution updates permission sets for users and groups in the target AWS accounts at scale. When the user initiates an account creation request, AFT initiates the `aft-account-provisioning-framework` Step Functions state machine. This state machine starts the `extract-alternate-sso` Lambda function. The Lambda function assigns permission sets to users and groups in the target AWS accounts. These users or groups can come from any identity source that's configured in IAM Identity Center, such as Okta, Active Directory, or Ping Identity.

![\[Workflow of updating permission sets when an account is created or updated.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/14751255-3781-48db-a6b7-1a03e28c1020/images/d1de252d-8ac9-4f7d-a559-4ab3e852f325.png)


The diagram shows the following workflow when new accounts are created:

1. A user pushes a `custom_fields` change to the `aft-account-request` repository.

1. AWS CodePipeline starts an AWS CodeBuild job that records the user-defined metadata into the `aft-request-audit` Amazon DynamoDB table. This table has attributes to record user-defined metadata. The `ddb_event_name` attribute defines the type of AFT operation:
   + If the value is `INSERT`, then the solution assigns the permission set to the target identities when the new AWS account is created.
   + If the value is `UPDATE`, then the solution assigns the permission set to the target identities after the AWS account is created.

1. Amazon DynamoDB Streams initiates the `aft_alternate_sso_extract` Lambda function.

1. The `aft_alternate_sso_extract` Lambda function assumes an AWS Identity and Access Management (IAM) role in the AWS Control Tower management account.

1. The Lambda function assigns the permission sets to the target users and groups by making an AWS SDK for Python (Boto3) [create\_account\_assignment](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sso-admin/client/create_account_assignment.html) API call to IAM Identity Center. It retrieves the permission set and identity assignments from the `aft-request-audit` Amazon DynamoDB table.

1. When the Step Functions workflow completes, the permission sets are assigned to the target identities.
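The API call in step 5 can be sketched as follows. This is not the pattern's actual Lambda code; the helper name and placeholder ARNs are illustrative, and the block only builds the request so its shape is visible. In the Lambda, you would create the client with `boto3.client("sso-admin")` after assuming the role in the management account.

```python
# Hypothetical sketch of the create_account_assignment request that the
# aft_alternate_sso_extract Lambda function sends to IAM Identity Center.

def build_assignment_request(instance_arn, permission_set_arn, principal_id,
                             account_id, principal_type="GROUP"):
    """Kwargs for the sso-admin create_account_assignment API call."""
    return {
        "InstanceArn": instance_arn,
        "TargetId": account_id,              # target AWS account ID
        "TargetType": "AWS_ACCOUNT",
        "PermissionSetArn": permission_set_arn,
        "PrincipalType": principal_type,     # "GROUP" or "USER"
        "PrincipalId": principal_id,
    }

# In the Lambda handler, for each (permission set, principal) pair read
# from the aft-request-audit table's stream record:
#   client = boto3.client("sso-admin")
#   client.create_account_assignment(**build_assignment_request(...))
# The call is asynchronous; you can poll the returned request ID with
# describe_account_assignment_creation_status.
```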

**Automation and scale**

AFT operates at scale by using AWS services such as CodePipeline, AWS CodeBuild, DynamoDB, and Lambda, which are highly scalable. For additional automation, you can integrate this solution with a ticket or issue management system, such as Jira. For more information, see the [Additional information](#govern-permission-sets-aft-additional) section of this pattern.

## Tools
<a name="govern-permission-sets-aft-tools"></a>

**AWS services**
+ [Account Factory for Terraform (AFT)](https://docs.aws.amazon.com/controltower/latest/userguide/aft-overview.html) is the main tool in this solution. The `aft-account-provisioning-customizations` repository contains the Terraform code for creating customizations for AWS accounts, such as custom IAM Identity Center user or group assignments.
+ [Amazon DynamoDB](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Introduction.html) is a fully managed NoSQL database service that provides fast, predictable, and scalable performance.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a compute service that helps you run code without needing to provision or manage servers. It runs your code only when needed and scales automatically, so you pay only for the compute time that you use.
+ [AWS Step Functions](https://docs.aws.amazon.com/step-functions/latest/dg/welcome.html) is a serverless orchestration service that helps you combine AWS Lambda functions and other AWS services to build business-critical applications.

**Other tools**
+ [Python](https://www.python.org/) is a general-purpose computer programming language.
+ [Terraform](https://www.terraform.io/) is an infrastructure as code (IaC) tool from HashiCorp that helps you create and manage cloud and on-premises resources.

**Code repository**

The code repository for AFT is available in the GitHub [AWS Control Tower Account Factory for Terraform](https://github.com/aws-ia/terraform-aws-control_tower_account_factory) repository. The code for this pattern is available in the [Govern SSO Assignments for AWS accounts using Account Factory for Terraform (AFT)](https://github.com/aws-samples/aft-custom-sso-assignment) repository.

## Best practices
<a name="govern-permission-sets-aft-best-practices"></a>
+ Understand the [AWS shared responsibility model](https://aws.amazon.com/compliance/shared-responsibility-model/).
+ Follow the security recommendations for AWS Control Tower. For more information, see [Security in AWS Control Tower](https://docs.aws.amazon.com/controltower/latest/userguide/security.html).
+ Follow the principle of least privilege. For more information, see [Apply least-privilege permissions](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#grant-least-privilege).
+ Build specific and focused permission sets and IAM roles for groups and business units.

## Epics
<a name="govern-permission-sets-aft-epics"></a>

### Deploy the solution
<a name="deploy-the-solution"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an IAM role. | In the AWS Control Tower management account, use Terraform to create an IAM role. This role has cross-account access and a trust policy that allows federated access from the identity provider. It also has permissions to grant access to other accounts through AWS Control Tower. The Lambda function will assume this role. Do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/govern-permission-sets-aft.html) | AWS DevOps, Cloud architect | 
| Customize the solution for your environment. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/govern-permission-sets-aft.html) | AWS DevOps, Cloud architect | 
| Deploy the solution. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/govern-permission-sets-aft.html) | AWS DevOps, Cloud architect | 
| Set up a code repository connection. | Set up a connection between the code repository where you will store the configuration files and your AWS account. For instructions, see [Add third-party source providers to pipelines using CodeConnections](https://docs.aws.amazon.com/codepipeline/latest/userguide/pipelines-connections.html) in the AWS CodePipeline documentation. | AWS DevOps, Cloud architect | 

### Use the solution
<a name="use-the-solution"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Start AFT pipeline to deploy a new account. | Follow the instructions in [Provision a new account with AFT](https://docs.aws.amazon.com/controltower/latest/userguide/aft-provision-account.html) in order to start the pipeline that creates a new AWS account in your AWS Control Tower environment. Wait for the account creation process to complete. | AWS DevOps, Cloud architect | 
| Validate the changes. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/govern-permission-sets-aft.html) | AWS DevOps, Cloud architect | 

## Troubleshooting
<a name="govern-permission-sets-aft-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| Permission set assignment is not working. | Make sure that the group ARN, organization ID, and Lambda parameters are correct. For examples, see the *Solution overview* section of this pattern. | 
| Updating code in the repository does not start the pipeline. | This issue is related to the connectivity between your AWS account and the repository. In the AWS Management Console, validate that the connection is active. For more information, see [GitHub connections](https://docs.aws.amazon.com/codepipeline/latest/userguide/connections-github.html) in the AWS CodePipeline documentation. | 

## Additional information
<a name="govern-permission-sets-aft-additional"></a>

**Integrating with a ticket management tool**

You can choose to integrate this solution with a ticket or issue management tool, such as Jira or ServiceNow. The following diagram shows an example workflow for this option. You can integrate the ticket management tool with the AFT solution repositories by using your tool’s connectors. For Jira connectors, see [Integrate Jira with GitHub](https://support.atlassian.com/jira-cloud-administration/docs/integrate-jira-software-with-github/). For ServiceNow connectors, see [Integrating with GitHub](https://www.servicenow.com/docs/bundle/washingtondc-it-asset-management/page/product/software-asset-management2/concept/integrate-with-github.html). You can even build custom solutions that require users to provide a ticket ID as part of the pull request approval. If a request to create a new AWS account by using AFT is approved, that event could initiate a workflow that adds custom fields to the `aft-account-request` GitHub repository. You can design any custom workflow that meets the requirements of your use case.

![\[Workflow that uses GitHub Actions and a ticket management tool.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/14751255-3781-48db-a6b7-1a03e28c1020/images/83763f65-32ea-4de0-932f-14a1b2d1d3ad.png)


The diagram shows the following workflow:

1. Users request a custom permission set assignment in a ticket management tool, such as Jira.

1. After the case is approved, a workflow begins to update the permission set assignment. (Optional) You can use plugins for custom automation of this step.

1. Operators push the Terraform code with the updated permission set parameters to a development or feature branch in the `aft-account-request` repository.

1. GitHub Actions initiates AWS CodeBuild by using an OpenID Connect (OIDC) call. CodeBuild performs infrastructure as code (IaC) security scans by using tools such as [tfsec](https://aquasecurity.github.io/tfsec/v1.20.0/) and [checkov](https://www.checkov.io/). It warns the operators of any security violations.

1. If no violations are found, GitHub Actions creates an automated pull request and assigns a code review to the code owners. It also creates a tag for the pull request.

1. If the code owner approves the code review, another GitHub Actions workflow starts. It checks pull request standards, including the following:
   + Whether the pull request title meets requirements.
   + Whether the pull request body contains approved case numbers.
   + Whether the pull request is properly tagged.

1. If the pull request meets standards, GitHub Actions starts the AFT product workflow. It starts the `ct-aft-account-request` pipeline in AWS CodePipeline. This pipeline starts the `aft-account-provisioning-framework` custom state machine in Step Functions. This state machine works as described earlier in the *Solution overview* section of this pattern.

# Implement Account Factory for Terraform (AFT) by using a bootstrap pipeline
<a name="implement-account-factory-for-terraform-aft-by-using-a-bootstrap-pipeline"></a>

*Vinicius Elias and Edgar Costa Filho, Amazon Web Services*

## Summary
<a name="implement-account-factory-for-terraform-aft-by-using-a-bootstrap-pipeline-summary"></a>

This pattern provides a simple and secure method for deploying AWS Control Tower Account Factory for Terraform (AFT) from the management account of AWS Organizations. The core of the solution is a CloudFormation template that automates the AFT configuration by creating a Terraform pipeline, which is structured to be easily adaptable for initial deployment or subsequent updates.

Security and data integrity are top priorities at AWS, so the Terraform state file, which is a critical component that tracks the state of the managed infrastructure and configurations, is securely stored in an Amazon Simple Storage Service (Amazon S3) bucket. This bucket is configured with several security measures, including server-side encryption and policies to block public access, to help ensure that your Terraform state is safeguarded against unauthorized access and data breaches.

The management account orchestrates and oversees the entire environment, so it is a critical resource in AWS Control Tower. This pattern follows AWS best practices and ensures that the deployment process is not only efficient but also aligns with security and governance standards, to offer a comprehensive, secure, and efficient way to deploy AFT in your AWS environment.

For more information about AFT, see the [AWS Control Tower documentation](https://docs.aws.amazon.com/controltower/latest/userguide/aft-overview.html).

## Prerequisites and limitations
<a name="implement-account-factory-for-terraform-aft-by-using-a-bootstrap-pipeline-prereqs"></a>

**Prerequisites**
+ A basic AWS multi-account environment with the following accounts at the minimum: management account, Log Archive account, Audit account, and one additional account for AFT management.
+ An established AWS Control Tower environment. The management account should be properly configured, because the CloudFormation template will be deployed within it.
+ The necessary permissions in the AWS management account. You'll need sufficient permissions to create and manage resources such as S3 buckets, AWS Lambda functions, AWS Identity and Access Management (IAM) roles, and AWS CodePipeline projects.
+ Familiarity with Terraform. Understanding Terraform's core concepts and workflow is important because the deployment involves generating and managing Terraform configurations.

**Limitations**
+ Be aware of the [AWS resource quotas](https://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html) in your account. The deployment might create multiple resources, and encountering service quotas could impede the deployment process.
+ The template is designed for specific versions of Terraform and AWS services. Upgrading or changing versions might require template modifications.
+ The template doesn't support self-managed version control system (VCS) services such as GitHub Enterprise.

**Product versions**
+ Terraform version 1.6.6 or later
+ AFT version 1.11 or later

## Architecture
<a name="implement-account-factory-for-terraform-aft-by-using-a-bootstrap-pipeline-architecture"></a>

**Target technology stack**
+ CloudFormation
+ AWS CodeBuild
+ AWS CodeCommit
+ AWS CodePipeline
+ Amazon EventBridge
+ IAM
+ AWS Lambda
+ Amazon S3

**Target architecture**

The following diagram illustrates the implementation discussed in this pattern.

![\[Workflow for implementing AFT by using a bootstrap pipeline.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/944f9912-87c7-4cc5-8478-7070cf67f7ee/images/4ee74757-940d-4d92-a7f0-fb0db1476247.png)


The workflow consists of three main tasks: creating the resources, generating the content, and running the pipeline.

*Creating the resources*

The [CloudFormation template that's provided with this pattern](https://github.com/aws-samples/aft-bootstrap-pipeline/blob/main/code/aft-deployment-pipeline.yaml) creates and sets up all required resources, depending on the parameters you select when you deploy the template. At the minimum, the template creates the following resources:
+ A CodePipeline pipeline to deploy AFT
+ An S3 bucket to store the Terraform state file that's associated with the AFT implementation
+ Two CodeBuild projects to run the Terraform `plan` and `apply` commands in different stages of the pipeline
+ IAM roles for CodeBuild and CodePipeline services
+ A second S3 bucket to store pipeline runtime artifacts

Depending on the VCS provider you select (CodeCommit or external VCS), the template creates the following resources. 
+ For **CodeCommit**:
  + A CodeCommit repository to store the AFT Terraform bootstrap code
  + An EventBridge rule to capture CodeCommit repository changes on the `main` branch
  + Another IAM role for the EventBridge rule
+ For any other **external VCS provider**, such as GitHub:
  + An AWS CodeConnections connection

Additionally, when you select CodeCommit as the VCS provider, if you set the `Generate AFT Files` parameter to `true`, the template creates these additional resources to generate the content:
+ An S3 bucket to store the generated content and to be used as the source of the CodeCommit repository
+ A Lambda function to process the given parameters and generate the appropriate content
+ An IAM role for running the Lambda function
+ A CloudFormation custom resource that runs the Lambda function when the template is deployed

*Generating the content*

To generate the AFT bootstrap files and their content, the solution uses a Lambda function and an S3 bucket. The function creates a folder in the bucket, and then creates two files inside the folder: `main.tf` and `backend.tf`. The function also processes the provided CloudFormation parameters and populates these files with predefined code, replacing the respective parameter values.

To view the code that's used as a template to generate the files, see the solution's [GitHub repository](https://github.com/aws-samples/aft-bootstrap-pipeline). Basically, the files are generated as follows.

**main.tf**

```
module "aft" {
  source = "github.com/aws-ia/terraform-aws-control_tower_account_factory?ref=<aft_version>"

  # Required variables
  ct_management_account_id  = "<ct_management_account_id>"
  log_archive_account_id    = "<log_archive_account_id>"
  audit_account_id          = "<audit_account_id>"
  aft_management_account_id = "<aft_management_account_id>"
  ct_home_region            = "<ct_home_region>"

  # Optional variables
  tf_backend_secondary_region = "<tf_backend_secondary_region>"
  aft_metrics_reporting       = "<false|true>"

  # AFT Feature flags
  aft_feature_cloudtrail_data_events      = "<false|true>"
  aft_feature_enterprise_support          = "<false|true>"
  aft_feature_delete_default_vpcs_enabled = "<false|true>"

  # Terraform variables
  terraform_version      = "<terraform_version>"
  terraform_distribution = "<terraform_distribution>"

  # VCS variables (if you have chosen an external VCS)
  vcs_provider                                  = "<github|githubenterprise|gitlab|gitlabselfmanaged|bitbucket>"
  account_request_repo_name                     = "<org-name>/aft-account-request"
  account_customizations_repo_name              = "<org-name>/aft-account-customizations"
  account_provisioning_customizations_repo_name = "<org-name>/aft-account-provisioning-customizations"
  global_customizations_repo_name               = "<org-name>/aft-global-customizations"

}
```

**backend.tf**

```
terraform {
  backend "s3" {
    region = "<aft-main-region>"
    bucket = "<s3-bucket-name>"
    key    = "aft-setup.tfstate"
  }
}
```

During the CodeCommit repository creation, if you set the `Generate AFT Files` parameter to `true`, the template uses the S3 bucket with the generated content as the source of the `main` branch to automatically populate the repository.
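The content-generation step can be sketched as follows, under the assumption that the Lambda function substitutes the CloudFormation parameters into templates shaped like the `main.tf` and `backend.tf` files shown above. The parameter names (`MainRegion`, `StateBucketName`) and helper are illustrative; the actual template code lives in the solution's GitHub repository.

```python
# Minimal sketch of the content-generation logic: render backend.tf from
# CloudFormation parameters. Parameter names are placeholder assumptions.
from string import Template

BACKEND_TEMPLATE = Template(
    'terraform {\n'
    '  backend "s3" {\n'
    '    region = "$region"\n'
    '    bucket = "$bucket"\n'
    '    key    = "aft-setup.tfstate"\n'
    '  }\n'
    '}\n'
)

def render_backend_tf(params):
    """Substitute the given parameters into the backend.tf template."""
    return BACKEND_TEMPLATE.substitute(
        region=params["MainRegion"],
        bucket=params["StateBucketName"],
    )

# The Lambda function would then upload the rendered files to the source
# S3 bucket so that they can seed the CodeCommit repository, for example:
#   s3 = boto3.client("s3")
#   s3.put_object(Bucket=source_bucket, Key="backend.tf",
#                 Body=render_backend_tf(params).encode())
```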

*Running the pipeline*

After the resources have been created and the bootstrap files have been configured, the pipeline runs. The first stage (*Source*) fetches the source code from the main branch of the repository, and the second stage (*Build*) runs the Terraform `plan` command and generates the results to be reviewed. In the third stage (*Approval*), the pipeline waits for a manual action to approve or reject the last stage (*Deploy*). At the last stage, the pipeline runs the Terraform `apply` command by using the result of the previous Terraform `plan` command as input. Finally, a cross-account role and the permissions in the management account are used to create the AFT resources in the AFT management account.

**Note**  
If you select an external VCS provider, you will need to authorize the connection with your VCS provider credentials. To complete the setup, follow the steps in [Update a pending connection](https://docs.aws.amazon.com/dtconsole/latest/userguide/connections-update.html) in the AWS Developer Tools console documentation.

## Tools
<a name="implement-account-factory-for-terraform-aft-by-using-a-bootstrap-pipeline-tools"></a>

**AWS services**
+ [CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) helps you set up AWS resources, provision them quickly and consistently, and manage them throughout their lifecycle across AWS accounts and Regions.
+ [AWS CodeBuild](https://docs.aws.amazon.com/codebuild/latest/userguide/welcome.html) is a fully managed build service that helps you compile source code, run unit tests, and produce artifacts that are ready to deploy. 
+ [AWS CodeCommit](https://docs.aws.amazon.com/codecommit/latest/userguide/welcome.html) is a version control service that helps you privately store and manage Git repositories without needing to manage your own source control system.
+ [AWS CodePipeline](https://docs.aws.amazon.com/codepipeline/latest/userguide/welcome.html) helps you quickly model and configure the different stages of a software release and automate the steps required to release software changes continuously.
+ [AWS CodeConnections](https://docs.aws.amazon.com/dtconsole/latest/userguide/welcome-connections.html) enables AWS resources and services, such as CodePipeline, to connect to external code repositories, such as GitHub.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a compute service that runs your code in response to events and automatically manages compute resources, providing a fast way to create a modern, serverless application for production.
+ [AWS SDK for Python (Boto3)](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/quickstart.html) is a software development kit that helps you integrate your Python application, library, or script with AWS services.

**Other tools**
+ [Terraform](https://developer.hashicorp.com/terraform?product_intent=terraform) is an infrastructure as code (IaC) tool that lets you build, change, and version infrastructure safely and efficiently. This includes low-level components such as compute instances, storage, and networking; and high-level components such as DNS entries and SaaS features.
+ [Python](https://docs.python.org/3.9/tutorial/index.html) is an easy-to-learn, powerful programming language. It has efficient high-level data structures and provides a simple but effective approach to object-oriented programming.

**Code repository**

The code for this pattern is available in the GitHub [AFT bootstrap pipeline repository](https://github.com/aws-samples/aft-bootstrap-pipeline).

For the official AFT repository, see [AWS Control Tower Account Factory for Terraform](https://github.com/aws-ia/terraform-aws-control_tower_account_factory/tree/main) in GitHub.

## Best practices
<a name="implement-account-factory-for-terraform-aft-by-using-a-bootstrap-pipeline-best-practices"></a>

When you deploy AFT by using the provided CloudFormation template, we recommend that you follow best practices to help ensure a secure, efficient, and successful implementation. Key guidelines and recommendations for implementing and operating the AFT include the following.
+ **Thorough review of parameters**: Carefully review and understand each parameter in the CloudFormation template. Accurate parameter configuration is crucial for the correct setup and functioning of AFT.
+ **Regular template updates**: Keep the template updated with the latest AWS features and Terraform versions. Regular updates help you take advantage of new functionality and maintain security.
+ **Versioning**: Pin your AFT module version and use a separate AFT deployment for testing if possible.
+ **Scope**: Use AFT only to deploy infrastructure guardrails and customizations. Do not use it to deploy your application.
+ **Linting and validation**: The AFT pipeline requires a linted and validated Terraform configuration. Run lint, validate, and test before pushing the configuration to AFT repositories.
+ **Terraform modules**: Build reusable Terraform code as modules, and always specify the Terraform and AWS provider versions to match your organization's requirements.
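
The versioning and module recommendations above can be sketched in Terraform. This fragment is illustrative only and is not part of the pattern's code: the module source is the official AFT module, and the version numbers are placeholders that you should replace with releases your organization has tested.

```hcl
# Pin the Terraform and AWS provider versions explicitly.
terraform {
  required_version = ">= 1.5.0"
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0" # placeholder; match your organization's tested release
    }
  }
}

# Pin the AFT module to an exact release rather than tracking main.
module "aft" {
  source  = "aws-ia/control_tower_account_factory/aws"
  version = "1.11.1" # placeholder version; pin the release you have validated
  # ... AFT input variables (account IDs, VCS settings) go here ...
}
```

Pinning an exact module version keeps AFT upgrades deliberate: the pipeline only picks up a new AFT release when you change the `version` argument and push it through your linting and validation steps.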

## Epics
<a name="implement-account-factory-for-terraform-aft-by-using-a-bootstrap-pipeline-epics"></a>

### Set up and configure the AWS environment
<a name="set-up-and-configure-the-aws-environment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Prepare the AWS Control Tower environment. | Set up and configure AWS Control Tower in your AWS environment to ensure centralized management and governance for your AWS accounts. For more information, see [Getting started with AWS Control Tower](https://docs.aws.amazon.com/controltower/latest/userguide/getting-started-with-control-tower.html) in the AWS Control Tower documentation. | Cloud administrator | 
| Launch the AFT management account. | Use the AWS Control Tower Account Factory to launch a new AWS account to serve as your AFT management account. For more information, see [Provision accounts with AWS Service Catalog Account Factory](https://docs.aws.amazon.com/controltower/latest/userguide/provision-as-end-user.html) in the AWS Control Tower documentation. | Cloud administrator | 

### Deploy the CloudFormation template in the management account
<a name="deploy-the-cfnshort-template-in-the-management-account"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Launch the CloudFormation template. | In this epic, you deploy the CloudFormation template provided with this solution to set up the AFT bootstrap pipeline in your AWS management account. The pipeline deploys the AFT solution in the AFT management account that you set up in the previous epic.**Step 1: Open the CloudFormation console**[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/implement-account-factory-for-terraform-aft-by-using-a-bootstrap-pipeline.html)**Step 2: Create a new stack**[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/implement-account-factory-for-terraform-aft-by-using-a-bootstrap-pipeline.html)**Step 3: Configure stack parameters**[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/implement-account-factory-for-terraform-aft-by-using-a-bootstrap-pipeline.html)**Step 4: Decide on file generation**[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/implement-account-factory-for-terraform-aft-by-using-a-bootstrap-pipeline.html)**Step 5: Fill in AWS Control Tower and AFT account details**[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/implement-account-factory-for-terraform-aft-by-using-a-bootstrap-pipeline.html)**Step 6: Configure AFT options**[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/implement-account-factory-for-terraform-aft-by-using-a-bootstrap-pipeline.html)**Step 7: Specify versions**[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/implement-account-factory-for-terraform-aft-by-using-a-bootstrap-pipeline.html)**Step 8: Review and create the stack**[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/implement-account-factory-for-terraform-aft-by-using-a-bootstrap-pipeline.html)**Step 9: Monitor stack creation**[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/implement-account-factory-for-terraform-aft-by-using-a-bootstrap-pipeline.html)**Step 10: Verify the deployment**[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/implement-account-factory-for-terraform-aft-by-using-a-bootstrap-pipeline.html) | Cloud administrator | 

### Populate and validate the AFT bootstrap repository and pipeline
<a name="populate-and-validate-the-aft-bootstrap-repository-and-pipeline"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Option 1: Populate the AFT bootstrap repository for an external VCS. | If you set the VCS provider to an external VCS (not to CodeCommit), follow these steps. (Optional) After you deploy the CloudFormation template, you can populate or validate the content in the newly created AFT bootstrap repository, and test whether the pipeline has run successfully.**Step 1: Update the connection**[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/implement-account-factory-for-terraform-aft-by-using-a-bootstrap-pipeline.html)**Step 2: Populate the repository**[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/implement-account-factory-for-terraform-aft-by-using-a-bootstrap-pipeline.html)**Step 3: Commit and push your changes**[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/implement-account-factory-for-terraform-aft-by-using-a-bootstrap-pipeline.html) | Cloud administrator | 
| Option 2: Populate the AFT bootstrap repository for CodeCommit. | If you set the VCS provider to CodeCommit, follow these steps.(Optional) After you deploy the CloudFormation template, you can populate or validate the content in the newly created AFT bootstrap repository, and test whether the pipeline has run successfully.If you set the `Generate AFT Files` parameter to `true`, skip to the next story (validating the pipeline).**Step 1: Populate the repository**[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/implement-account-factory-for-terraform-aft-by-using-a-bootstrap-pipeline.html)**Step 2: Commit and push your changes**[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/implement-account-factory-for-terraform-aft-by-using-a-bootstrap-pipeline.html) | Cloud administrator | 
| Validate the AFT bootstrap pipeline. | **Step 1: View the pipeline**[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/implement-account-factory-for-terraform-aft-by-using-a-bootstrap-pipeline.html)**Step 2: Approve the Terraform plan results**[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/implement-account-factory-for-terraform-aft-by-using-a-bootstrap-pipeline.html)**Step 3: Wait for the deployment**[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/implement-account-factory-for-terraform-aft-by-using-a-bootstrap-pipeline.html)**Step 4: Check created resources**[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/implement-account-factory-for-terraform-aft-by-using-a-bootstrap-pipeline.html) | Cloud administrator | 

## Troubleshooting
<a name="implement-account-factory-for-terraform-aft-by-using-a-bootstrap-pipeline-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| The custom Lambda function included in the CloudFormation template fails during deployment. | Check the Amazon CloudWatch logs for the Lambda function to identify the error. The logs provide detailed information and can help pinpoint the specific issue. Confirm that the Lambda function has the necessary permissions and that the environment variables have been set correctly. | 
| You encounter failures in resource creation or management caused by inadequate permissions. | Review the IAM roles and policies that are attached to the Lambda function, CodeBuild, and other services involved in the deployment. Confirm that they have the necessary permissions. If there are permission issues, adjust the IAM policies to grant the required access. | 
| You’re using an outdated version of the CloudFormation template with newer AWS services or Terraform versions. | Regularly update the CloudFormation template to be compatible with the latest AWS and Terraform releases. Check the release notes or documentation for any version-specific changes or requirements. | 
| You reach AWS service quotas during deployment. | Before you deploy the pipeline, check AWS service quotas for resources such as S3 buckets, IAM roles, and Lambda functions. Request increases if necessary. For more information, see [AWS service quotas](https://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html) on the AWS website. | 
| You encounter errors due to incorrect input parameters in the CloudFormation template. | Double-check all input parameters for typos or incorrect values. Confirm that resource identifiers, such as account IDs and Region names, are accurate. | 

## Related resources
<a name="implement-account-factory-for-terraform-aft-by-using-a-bootstrap-pipeline-resources"></a>

To implement this pattern successfully, review the following resources. These resources provide additional information and guidance that can be invaluable in setting up and managing AFT by using CloudFormation.

**AWS documentation:**
+ [AWS Control Tower User Guide](https://docs.aws.amazon.com/controltower/latest/userguide/what-is-control-tower.html) offers detailed information on setting up and managing AWS Control Tower.
+ [CloudFormation documentation](https://docs.aws.amazon.com/cloudformation/index.html) provides insights into CloudFormation templates, stacks, and resource management.

**IAM policies and best practices:**
+ [Security best practices in IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html) explains how to help secure AWS resources by using IAM roles and policies.

**Terraform on AWS:**
+ [Terraform AWS Provider documentation](https://registry.terraform.io/providers/hashicorp/aws/latest/docs) provides comprehensive information about using Terraform with AWS.

**AWS service quotas:**
+ [AWS service quotas](https://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html) provides information about how to view AWS service quotas and how to request increases.

# Manage AWS Service Catalog products in multiple AWS accounts and AWS Regions
<a name="manage-aws-service-catalog-products-in-multiple-aws-accounts-and-aws-regions"></a>

*Ram Kandaswamy, Amazon Web Services*

## Summary
<a name="manage-aws-service-catalog-products-in-multiple-aws-accounts-and-aws-regions-summary"></a>

Amazon Web Services (AWS) Service Catalog simplifies and accelerates the governance and distribution of infrastructure as code (IaC) templates for enterprises. You use AWS CloudFormation templates to define a collection of AWS resources (*stacks*) required for a product. AWS CloudFormation StackSets extends this functionality by enabling you to create, update, or delete stacks across multiple accounts and AWS Regions with a single operation.

AWS Service Catalog administrators create products by using CloudFormation templates that are authored by developers, and publish them. These products are then associated with a portfolio, and constraints are applied for governance. To make your products available to users in other AWS accounts or organizational units (OUs), you typically [share your portfolio](https://docs.aws.amazon.com/servicecatalog/latest/adminguide/catalogs_portfolios_sharing.html) with them. This pattern describes an alternative approach for managing AWS Service Catalog product offerings that is based on AWS CloudFormation StackSets. Instead of sharing portfolios, you use stack set constraints to set AWS Regions and accounts where your product can be deployed and used. By using this approach, you can provision your AWS Service Catalog products in multiple accounts, OUs, and AWS Regions, and manage them from a central location, while meeting your governance requirements. 

Benefits of this approach:
+ The product is provisioned and managed from the primary account, and not shared with other accounts.
+ This approach provides a consolidated view of all provisioned products (stacks) that are based on a specific product.
+ Configuration with AWS Service Management Connector is easier, because it targets only one account.
+ It's easier to query and use products from AWS Service Catalog.

## Prerequisites and limitations
<a name="manage-aws-service-catalog-products-in-multiple-aws-accounts-and-aws-regions-prereqs"></a>

**Prerequisites**
+ AWS CloudFormation templates for IaC and versioning
+ Multi-account setup and AWS Service Catalog for provisioning and managing AWS resources

**Limitations**
+ This approach uses AWS CloudFormation StackSets, and the limitations of StackSets apply:
  + StackSets doesn't support CloudFormation template deployment through macros. If you're using a macro to preprocess the template, you won't be able to use a StackSets-based deployment.
  + StackSets provides the ability to disassociate a stack from the stack set, so you can target a specific stack to fix an issue. However, a disassociated stack cannot be re-associated with the stack set.
+ AWS Service Catalog autogenerates StackSet names. Customization isn't currently supported.

## Architecture
<a name="manage-aws-service-catalog-products-in-multiple-aws-accounts-and-aws-regions-architecture"></a>

**Target architecture**

![\[User manages AWS Service Catalog product using AWS CloudFormation template and StackSets.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/16458fcd-861d-4ed4-8b91-47e19289a6bb/images/97d23325-b5c6-4ca9-8288-8dec1650c975.png)


1. The user creates an AWS CloudFormation template to provision AWS resources, in JSON or YAML format.

1. The CloudFormation template creates a product in AWS Service Catalog, which is added to a portfolio.

1. The user creates a provisioned product, which creates CloudFormation stacks in the target accounts.

1. Each stack provisions the resources specified in the CloudFormation templates.
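
Steps 1 and 2 above can themselves be expressed as IaC. The following is a hypothetical CloudFormation fragment — the product name, owner, template URL, and portfolio ID are placeholders, not values from this pattern — that registers a template as a Service Catalog product and associates it with an existing portfolio:

```yaml
Resources:
  SampleProduct:
    Type: AWS::ServiceCatalog::CloudFormationProduct
    Properties:
      Name: my-sample-product          # placeholder product name
      Owner: platform-team             # placeholder owner
      ProvisioningArtifactParameters:
        - Name: v1
          Info:
            # Placeholder URL; point this at the developer-authored template.
            LoadTemplateFromURL: https://s3.amazonaws.com/my-bucket/product.yaml

  PortfolioAssociation:
    Type: AWS::ServiceCatalog::PortfolioProductAssociation
    Properties:
      PortfolioId: port-abc123example  # placeholder portfolio ID
      ProductId: !Ref SampleProduct
```

Managing the product definition itself through CloudFormation keeps the catalog versioned alongside the templates it distributes.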

## Tools
<a name="manage-aws-service-catalog-products-in-multiple-aws-accounts-and-aws-regions-tools"></a>

**AWS services**
+ [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) helps you set up AWS resources, provision them quickly and consistently, and manage them throughout their lifecycle across AWS accounts and Regions.
+ [AWS Command Line Interface (AWS CLI)](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) is an open-source tool that helps you interact with AWS services through commands in your command-line shell.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.
+ [AWS Service Catalog](https://docs.aws.amazon.com/servicecatalog/latest/adminguide/introduction.html) helps you centrally manage catalogs of IT services that are approved for AWS. End users can quickly deploy only the approved IT services they need, following the constraints set by your organization.

## Epics
<a name="manage-aws-service-catalog-products-in-multiple-aws-accounts-and-aws-regions-epics"></a>

### Provision products across accounts
<a name="provision-products-across-accounts"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a portfolio. | A portfolio is a container that includes one or more products that are grouped together based on specific criteria. Using a portfolio for your products helps you apply common constraints across your product set.To create a portfolio, follow the instructions in the [AWS Service Catalog documentation](https://docs.aws.amazon.com/servicecatalog/latest/adminguide/portfoliomgmt-create.html). If you're using the AWS CLI, here's an example command:<pre>aws servicecatalog create-portfolio --provider-name my-provider --display-name my-portfolio</pre>For more information, see the [AWS CLI documentation](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/servicecatalog/create-portfolio.html). | AWS Service Catalog, IAM | 
| Create a CloudFormation template. | Create a CloudFormation template that describes the resources. Resource property values should be parameterized where applicable. | AWS CloudFormation, JSON/YAML | 
| Create a product with version information. | The CloudFormation template becomes a product when you publish it in the AWS Service Catalog. Provide values for the optional version detail parameters, such as version title and description; this will be helpful for querying for the product later.To create a product, follow the instructions in the [AWS Service Catalog documentation](https://docs.aws.amazon.com/servicecatalog/latest/adminguide/productmgmt-cloudresource.html). If you're using the AWS CLI, an example command is:<pre>aws servicecatalog create-product --cli-input-json file://create-product-input.json</pre>where `create-product-input.json` is the file that passes the parameters for the product. For an example of this file, see the *Additional information* section. For more information, see the [AWS CLI documentation](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/servicecatalog/create-product.html). | AWS Service Catalog | 
| Apply constraints. | Apply stack set constraints to the portfolio, to configure product deployment options such as multiple AWS accounts, Regions, and permissions. For instructions, see the [AWS Service Catalog documentation](https://docs.aws.amazon.com/servicecatalog/latest/adminguide/constraints-stackset.html). | AWS Service Catalog | 
| Add permissions. | Provide permissions to users so that they can launch the products in the portfolio. For console instructions, see the [AWS Service Catalog documentation](https://docs.aws.amazon.com/servicecatalog/latest/adminguide/catalogs_portfolios_users.html). If you're using the AWS CLI, here's an example command:<pre>aws servicecatalog associate-principal-with-portfolio \<br />    --portfolio-id port-2s6abcdefwdh4 \<br />    --principal-arn arn:aws:iam::444455556666:role/Admin \<br />    --principal-type IAM</pre>For more information, see the [AWS CLI documentation](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/servicecatalog/associate-principal-with-portfolio.html). | AWS Service Catalog, IAM | 
| Provision the product. | A provisioned product is a resourced instance of a product. Provisioning a product based on a CloudFormation template launches a CloudFormation stack and its underlying resources.Provision the product by targeting the applicable AWS Regions and accounts, based on stack set constraints. In the AWS CLI, here's an example command:<pre>aws servicecatalog provision-product \<br />    --product-id prod-abcdfz3syn2rg \<br />    --provisioning-artifact-id pa-abc347pcsccfm \<br />    --provisioned-product-name "mytestppname3"</pre>For more information, see the [AWS CLI documentation](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/servicecatalog/provision-product.html). | AWS Service Catalog | 

## Related resources
<a name="manage-aws-service-catalog-products-in-multiple-aws-accounts-and-aws-regions-resources"></a>

**References**
+ [Overview of AWS Service Catalog](https://docs.aws.amazon.com/servicecatalog/latest/adminguide/what-is_concepts.html)
+ [Using AWS CloudFormation StackSets](https://docs.aws.amazon.com/servicecatalog/latest/adminguide/using-stacksets.html)

**Tutorials and videos**
+ [AWS re:Invent 2019: Automate everything: Options and best practices](https://www.youtube.com/watch?v=bGBVPIpQMYk) (video)

## Additional information
<a name="manage-aws-service-catalog-products-in-multiple-aws-accounts-and-aws-regions-additional"></a>

When you use the `create-product` command, the `cli-input-json` parameter points to a file that specifies information such as product owner, support email, and CloudFormation template details. Here's an example of such a file:

```
{
   "Owner": "Test admin",
   "SupportDescription": "Testing",
   "Name": "SNS",
   "SupportEmail": "example@example.com",
   "ProductType": "CLOUD_FORMATION_TEMPLATE",
   "AcceptLanguage": "en",
   "ProvisioningArtifactParameters": {
      "Description": "SNS product",
      "DisableTemplateValidation": true,
      "Info": {
         "LoadTemplateFromURL": "<url>"
      },
      "Name": "version 1"
   }
}
```
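
A malformed input file causes `create-product` to fail with a parse error, so it can help to sanity-check the file first. This is a minimal sketch, not part of the pattern: it embeds the example document above (with `<url>` left as the same placeholder) and checks that it parses and carries the required top-level keys.

```python
import json

# Mirrors the example create-product-input.json shown above; "<url>" is a placeholder.
raw = """
{
   "Owner": "Test admin",
   "SupportDescription": "Testing",
   "Name": "SNS",
   "SupportEmail": "example@example.com",
   "ProductType": "CLOUD_FORMATION_TEMPLATE",
   "AcceptLanguage": "en",
   "ProvisioningArtifactParameters": {
      "Description": "SNS product",
      "DisableTemplateValidation": true,
      "Info": {
         "LoadTemplateFromURL": "<url>"
      },
      "Name": "version 1"
   }
}
"""

params = json.loads(raw)  # raises json.JSONDecodeError if the file is malformed
required = {"Owner", "Name", "ProductType", "ProvisioningArtifactParameters"}
missing = required - params.keys()
assert not missing, f"missing required keys: {missing}"
print("create-product input is structurally valid")
```

In practice you would read the file from disk instead of an inline string, then pass it to `aws servicecatalog create-product --cli-input-json file://create-product-input.json` as shown in the epic.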

# Monitor SAP RHEL Pacemaker clusters by using AWS services
<a name="monitor-sap-rhel-pacemaker-clusters-by-using-aws-services"></a>

*Harsh Thoria, Randy Germann, and RAVEENDRA Voore, Amazon Web Services*

## Summary
<a name="monitor-sap-rhel-pacemaker-clusters-by-using-aws-services-summary"></a>

This pattern outlines the steps for monitoring and configuring alerts for a Red Hat Enterprise Linux (RHEL) Pacemaker cluster for SAP applications and SAP HANA database services by using Amazon CloudWatch and Amazon Simple Notification Service (Amazon SNS).

The configuration enables you to monitor SAP SCS or ASCS, Enqueue Replication Server (ERS), and SAP HANA cluster resources when they are in a "stopped" state with the help of CloudWatch log streams, metric filters, and alarms. Amazon SNS sends an email to the infrastructure or SAP Basis team about the stopped cluster status.

You can create the AWS resources for this pattern by using AWS CloudFormation scripts or the AWS service consoles. This pattern assumes that you're using the consoles; it doesn't provide CloudFormation scripts or cover infrastructure deployment for CloudWatch and Amazon SNS. Pacemaker commands are used to set the cluster alerting configuration.

## Prerequisites and limitations
<a name="monitor-sap-rhel-pacemaker-clusters-by-using-aws-services-prereqs"></a>

**Prerequisites**
+ An active AWS account.
+ Amazon SNS set up to send email or mobile notifications.
+ An SAP ASCS/ERS for ABAP or SCS/ERS for Java, and SAP HANA Database RHEL Pacemaker cluster. For instructions, see the following:
  + [SAP HANA cluster setup](https://docs.aws.amazon.com/sap/latest/sap-hana/sap-hana-on-aws-manual-deployment-of-sap-hana-on-aws-with-high-availability-clusters.html)
  + [SAP NetWeaver ABAP/Java cluster setup](https://docs.aws.amazon.com/sap/latest/sap-netweaver/sap-netweaver-ha-configuration-guide.html)

**Limitations**
+ This solution currently works for RHEL version 7.3 and later Pacemaker-based clusters. It hasn’t been tested on SUSE operating systems.

**Product versions**
+ RHEL 7.3 and later

## Architecture
<a name="monitor-sap-rhel-pacemaker-clusters-by-using-aws-services-architecture"></a>

**Target technology stack**
+ RHEL Pacemaker alert event-driven agent
+ Amazon Elastic Compute Cloud (Amazon EC2)
+ CloudWatch alarm
+ CloudWatch log group and metric filter
+ Amazon SNS

**Target architecture**

The following diagram illustrates the components and workflows for this solution.

![\[Architecture for monitoring SAP RHEL Pacemaker clusters\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/ca4d282e-eadd-43fd-8506-3dbeb43e4db6/images/bfc96678-1fd3-47b6-8f09-bf7cf7c4a92c.png)


**Automation and scale**
+ You can automate the creation of AWS resources by using CloudFormation scripts. You can also use additional metric filters to scale and cover multiple clusters.

## Tools
<a name="monitor-sap-rhel-pacemaker-clusters-by-using-aws-services-tools"></a>

**AWS services**
+ [Amazon CloudWatch](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/WhatIsCloudWatch.html) helps you monitor the metrics of your AWS resources and the applications you run on AWS in real time.
+  [Amazon Simple Notification Service (Amazon SNS)](https://docs.aws.amazon.com/sns/latest/dg/welcome.html) helps you coordinate and manage the exchange of messages between publishers and clients, including web servers and email addresses.

**Tools**
+ CloudWatch agent (unified) is a tool that collects system-level metrics, logs, and traces from EC2 instances, and retrieves custom metrics from your applications.
+ Pacemaker alert agent (for RHEL 7.3 and later) is a tool that initiates an action when there's a change, such as when a resource stops or restarts, in a Pacemaker cluster.

## Best practices
<a name="monitor-sap-rhel-pacemaker-clusters-by-using-aws-services-best-practices"></a>
+ For best practices for using SAP workloads on AWS, see the [SAP Lens](https://docs.aws.amazon.com/wellarchitected/latest/sap-lens/sap-lens.html) for the AWS Well-Architected Framework.
+ Consider the costs involved in setting up CloudWatch monitoring for SAP HANA clusters. For more information, see the [CloudWatch documentation](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch_billing.html).
+ Consider using a pager or ticketing mechanism for Amazon SNS alerts.
+ Always check for RHEL high availability (HA) versions of the RPM package for **pcs**, Pacemaker, and the AWS fencing agent.

## Epics
<a name="monitor-sap-rhel-pacemaker-clusters-by-using-aws-services-epics"></a>

### Set up Amazon SNS
<a name="set-up-sns"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an SNS topic. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/monitor-sap-rhel-pacemaker-clusters-by-using-aws-services.html) | AWS administrator | 
| Modify the access policy for the SNS topic. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/monitor-sap-rhel-pacemaker-clusters-by-using-aws-services.html) | AWS systems administrator | 
| Subscribe to the SNS topic. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/monitor-sap-rhel-pacemaker-clusters-by-using-aws-services.html)Your web browser displays a confirmation response from Amazon SNS. | AWS systems administrator | 

### Confirm the setup of the cluster
<a name="confirm-the-setup-of-the-cluster"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Check cluster status. | Use the **pcs status** command to confirm that the resources are online. | SAP Basis administrator | 

### Configure Pacemaker alerts
<a name="configure-pacemaker-alerts"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Configure the Pacemaker alert agent on the primary cluster instance. | Log in to the EC2 instance in the primary cluster and run the following commands:<pre>install --mode=0755 /usr/share/pacemaker/alerts/alert_file.sh.sample /var/lib/pacemaker/alert_file.sh<br />touch /var/log/pcmk_alert_file.log<br />chown hacluster:haclient /var/log/pcmk_alert_file.log<br />chmod 600 /var/log/pcmk_alert_file.log<br />pcs alert create id=alert_file description="Log events to a file." path=/var/lib/pacemaker/alert_file.sh<br />pcs alert recipient add alert_file id=my-alert_logfile value=/var/log/pcmk_alert_file.log</pre> | SAP Basis administrator | 
| Configure the Pacemaker alert agent on the secondary cluster instance. | Log in to the EC2 instance in the secondary cluster and run the following commands:<pre>install --mode=0755 /usr/share/pacemaker/alerts/alert_file.sh.sample /var/lib/pacemaker/alert_file.sh<br />touch /var/log/pcmk_alert_file.log<br />chown hacluster:haclient /var/log/pcmk_alert_file.log<br />chmod 600 /var/log/pcmk_alert_file.log</pre> | SAP Basis administrator | 
| Confirm that the RHEL alert resource was created. | Use the following command to confirm that the alert resource was created:<pre>pcs alert</pre>The output of the command will look like this:<pre>[root@xxxxxxx ~]# pcs alert <br />Alerts:<br /> Alert: alert_file (path=/var/lib/pacemaker/alert_file.sh)<br />  Description: Log events to a file.<br />  Recipients:<br />   Recipient: my-alert_logfile (value=/var/log/pcmk_alert_file.log)</pre> | SAP Basis administrator | 

### Configure the CloudWatch agent
<a name="configure-the-cw-agent"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Install the CloudWatch agent. | There are several ways to install the CloudWatch agent on an EC2 instance. To use the command line:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/monitor-sap-rhel-pacemaker-clusters-by-using-aws-services.html)For more information, see the [CloudWatch documentation](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/install-CloudWatch-Agent-on-EC2-Instance.html). | AWS systems administrator | 
| Attach an IAM role to the EC2 instance. | To enable the CloudWatch agent to send data from the instances, you must attach the IAM **CloudWatchAgentServerRole** role to each instance. Or, you can add a policy for the CloudWatch agent to your existing IAM role. For more information, see the [CloudWatch documentation](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/create-iam-roles-for-cloudwatch-agent-commandline.html). | AWS administrator | 
| Configure the CloudWatch agent to monitor the Pacemaker alert agent log file on the primary cluster instance. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/monitor-sap-rhel-pacemaker-clusters-by-using-aws-services.html) | AWS administrator | 
| Start the CloudWatch agent on the primary and secondary cluster instances. | To start the agent, run the following command on the EC2 instances in the primary and secondary clusters:<pre>sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m ec2 -s \<br />  -c file:/opt/aws/amazon-cloudwatch-agent/bin/config.json</pre> | AWS administrator | 
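
For the log-monitoring task above, the agent's `config.json` typically needs a `logs` section that ships the Pacemaker alert file. The following is a minimal sketch: the log group name `sap-pacemaker-alerts` is an assumption (choose your own), and `{instance_id}` is the agent's built-in substitution for the instance ID.

```json
{
  "logs": {
    "logs_collected": {
      "files": {
        "collect_list": [
          {
            "file_path": "/var/log/pcmk_alert_file.log",
            "log_group_name": "sap-pacemaker-alerts",
            "log_stream_name": "{instance_id}"
          }
        ]
      }
    }
  }
}
```

With this in place, each line that the Pacemaker alert agent appends to `/var/log/pcmk_alert_file.log` appears as a log event in the `sap-pacemaker-alerts` log group, where the metric filters in the next epic can match it.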

### Set up CloudWatch resources
<a name="set-up-cw-resources"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Set up CloudWatch log groups. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/monitor-sap-rhel-pacemaker-clusters-by-using-aws-services.html)The CloudWatch agent will transfer the Pacemaker alert file to the CloudWatch log group as a log stream. | AWS administrator | 
| Set up CloudWatch metric filters. | Metric filters help you search for a pattern such as `stop <cluster-resource-name>` in the CloudWatch log streams. When this pattern is identified, the metric filter updates a custom metric.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/monitor-sap-rhel-pacemaker-clusters-by-using-aws-services.html)When the metric filter identifies the pattern in step 4, it updates the value of the CloudWatch custom metric `sapcluster_abc` to **1**. The CloudWatch alarm `SAP-Cluster-QA1-ABC` monitors the metric `sapcluster_abc` and sends out an SNS notification when the value of the metric changes to **1**. This indicates that the cluster resource has stopped and that action needs to be taken. | AWS administrator, SAP Basis administrator | 
| Set up a CloudWatch metric alarm for the SAP ASCS/SCS and ERS metric. | To create an alarm based on a single metric:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/monitor-sap-rhel-pacemaker-clusters-by-using-aws-services.html) | AWS administrator | 
| Set up a CloudWatch metric alarm for the SAP HANA metric. | Repeat the steps for setting up a CloudWatch metric alarm from the previous task, with these changes:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/monitor-sap-rhel-pacemaker-clusters-by-using-aws-services.html) | AWS administrator | 

## Related resources
<a name="monitor-sap-rhel-pacemaker-clusters-by-using-aws-services-resources"></a>
+ [Triggering Scripts for Cluster Events](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html-single/high_availability_add-on_reference/index#ch-alertscripts-HAAR) (RHEL documentation)
+ [Create the CloudWatch agent configuration file with the wizard ](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/create-cloudwatch-agent-configuration-file-wizard.html)(CloudWatch documentation)
+ [Installing and running the CloudWatch agent on your servers ](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/install-CloudWatch-Agent-commandline-fleet.html)(CloudWatch documentation)
+ [Create a CloudWatch alarm based on a static threshold](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/ConsoleAlarms.html) (CloudWatch documentation)
+ [Manual deployment of SAP HANA on AWS with high availability clusters](https://docs.aws.amazon.com/sap/latest/sap-hana/sap-hana-on-aws-manual-deployment-of-sap-hana-on-aws-with-high-availability-clusters.html) (SAP documentation on the AWS website)
+ [SAP NetWeaver guides ](https://docs.aws.amazon.com/sap/latest/sap-netweaver/welcome.html)(SAP documentation on the AWS website)

## Attachments
<a name="attachments-ca4d282e-eadd-43fd-8506-3dbeb43e4db6"></a>

To access additional content that is associated with this document, unzip the following file: [attachment.zip](samples/p-attach/ca4d282e-eadd-43fd-8506-3dbeb43e4db6/attachments/attachment.zip)

# Monitor application activity by using CloudWatch Logs Insights
<a name="monitor-application-activity-by-using-cloudwatch-logs-insights"></a>

*Ram Kandaswamy, Amazon Web Services*

## Summary
<a name="monitor-application-activity-by-using-cloudwatch-logs-insights-summary"></a>

This pattern provides a solution for automatically detecting and alerting on application exceptions by using Amazon CloudWatch Logs Insights. By implementing automated log analysis and alerting, you can quickly identify and respond to application issues in your production environment.

Logs play a crucial role in monitoring system behavior, identifying issues, and ensuring optimal performance. During a migration process, log files are invaluable for validating the system's functioning in the new environment, detecting compatibility problems, and identifying any unexpected behaviors. Issues can be operational or security related. On the security side, detecting unauthorized access attempts or suspicious activities early is essential for maintaining security and regulatory compliance. This capability is especially important when dealing with sensitive data or critical systems.

This pattern is particularly valuable for teams that need to do the following:
+ Maintain high application availability.
+ Respond to production issues quickly.
+ Analyze application-specific errors not captured by AWS service logs.
+ Perform on-demand log analysis without pre-built infrastructure.

CloudWatch Logs Insights is optimal for analyzing application-generated logs where the error context exists only within your application code. It excels at the following tasks:
+ Querying unstructured or semi-structured log data.
+ Performing on-demand analysis during incident response.
+ Correlating events across multiple log groups.
+ Creating quick visualizations without external tools.
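
As a concrete illustration of the query syntax involved, a typical exception-hunting query can be parameterized in a small helper. The query uses the standard discovered fields `@timestamp`, `@message`, and `@logStream`; the default pattern and limit below are arbitrary example values, not part of this pattern's solution.

```python
def exception_query(pattern: str = "Exception", limit: int = 50) -> str:
    """Build a CloudWatch Logs Insights query that surfaces recent matching events."""
    return (
        "fields @timestamp, @message, @logStream\n"
        f"| filter @message like /{pattern}/\n"
        "| sort @timestamp desc\n"
        f"| limit {limit}"
    )

# Example: find the 20 most recent out-of-memory errors.
print(exception_query("OutOfMemoryError", limit=20))
```

You can paste the resulting query into the CloudWatch Logs Insights console or supply it to an SDK call that starts a query.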

## Prerequisites and limitations
<a name="monitor-application-activity-by-using-cloudwatch-logs-insights-prereqs"></a>

**Prerequisites**
+ A production application deployed in an active AWS account
+ Basic understanding of the production application's logging format and exception patterns
+ Application logs configured to stream to Amazon CloudWatch Logs

**Limitations**
+ Some AWS services aren’t available in all AWS Regions. For Region availability, see [AWS Services by Region](https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/). For specific endpoints, see [Service endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/aws-service-information.html), and choose the link for the service.

## Architecture
<a name="monitor-application-activity-by-using-cloudwatch-logs-insights-architecture"></a>

The following diagram shows how CloudWatch Logs Insights evaluates resource logs and sends a relevant data visualization to a CloudWatch dashboard.

![\[CloudWatch Logs Insights evaluates resource logs and sends data visualization to dashboard.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/082ff4b6-9303-42e6-bc62-263e2254f232/images/b1cbb699-07cd-45e6-ac06-839159bafa6b.png)


The diagram shows the following workflow:

1. The resources publish logs to CloudWatch Logs. Resources can include AWS resources, such as Amazon Elastic Compute Cloud (Amazon EC2) instances or Amazon Simple Storage Service (Amazon S3) buckets, as well as on-premises systems that have the CloudWatch agent installed.

1. CloudWatch Logs Insights filters for the relevant pattern string. Examples of search pattern strings include "error", "exception", or a specific regular expression.

1. Typically, the production support team or developers add the pattern visualization to the CloudWatch dashboard.
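
The filtering in step 2 can be illustrated offline: given raw log lines, a search pattern (a literal string or a regular expression) selects the events worth surfacing. The following is a local approximation of what Logs Insights does server-side; the sample log lines are invented for demonstration.

```python
import re

def match_events(lines, pattern):
    """Return the log lines that match the given regular-expression pattern."""
    rx = re.compile(pattern, re.IGNORECASE)
    return [line for line in lines if rx.search(line)]

logs = [
    "2025-01-01T10:00:00 INFO request handled in 12 ms",
    "2025-01-01T10:00:01 ERROR NullPointerException in OrderService",
    "2025-01-01T10:00:02 WARN retrying connection",
]

# Matches either alternative, case-insensitively; only the ERROR line qualifies.
hits = match_events(logs, r"error|exception")
print(hits)
```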

**Automation and scale**

Developers can automate this pattern’s solution by using the AWS Cloud Development Kit (AWS CDK), AWS CloudFormation, or AWS SDKs to handle multiple string patterns. Teams can incorporate this automation into their continuous integration and continuous deployment (CI/CD) processes.
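
For instance, SDK-based automation might generate the dashboard body that a `PutDashboard` call (or a CDK or CloudFormation template) would submit. The sketch below only constructs and validates the JSON body; the log group name, Region, and query string are placeholder assumptions, and the `log` widget shape follows the CloudWatch dashboard body structure for Logs Insights widgets.

```python
import json

LOG_GROUP = "/app/production"   # placeholder log group name (assumption)
REGION = "us-east-1"            # placeholder Region (assumption)

# Dashboard body with a single Logs Insights ("log" type) widget.
dashboard_body = {
    "widgets": [
        {
            "type": "log",
            "x": 0, "y": 0, "width": 24, "height": 6,
            "properties": {
                "region": REGION,
                "title": "Recent application exceptions",
                "view": "table",
                "query": (
                    f"SOURCE '{LOG_GROUP}' | fields @timestamp, @message "
                    "| filter @message like /Exception/ "
                    "| sort @timestamp desc | limit 20"
                ),
            },
        }
    ]
}

# A deployment step (CDK, CloudFormation, or an SDK put_dashboard call)
# would submit this body; here we only serialize it to confirm validity.
body_json = json.dumps(dashboard_body)
print(body_json[:80])
```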

## Tools
<a name="monitor-application-activity-by-using-cloudwatch-logs-insights-tools"></a>

**AWS services**
+ [Amazon CloudWatch Logs](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html) helps you centralize the logs from all your systems, applications, and AWS services so you can monitor them and archive them securely.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.
+ [AWS Key Management Service (AWS KMS)](https://docs.aws.amazon.com/kms/latest/developerguide/overview.html) helps you create and control cryptographic keys to help protect your data.

## Best practices
<a name="monitor-application-activity-by-using-cloudwatch-logs-insights-best-practices"></a>

**Query efficiency**
+ Define and configure [log groups](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Working-with-log-groups-and-streams.html) to analyze relevant log data.
+ Use field explorers to understand the structure and fields available in your log data.
+ Write efficient queries by using [CloudWatch Logs Insights query syntax](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CWL_AnalyzeLogData_LogsInsights.html).
+ Adapt [sample queries](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CWL_QuerySyntax-examples.html) to your specific requirements for quicker analysis.
+ Limit query time ranges to reduce data scanned and improve performance.
+ [Save queries](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CWL_Insights-Saving-Queries.html) for future use to save time and ensure consistent analysis.

**Security**
+ Apply appropriate [IAM policies](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/iam-access-control-overview-cwl.html) to CloudWatch Logs Insights and log groups. Follow the principle of least privilege and grant the minimum permissions required to perform a task. For more information, see [Grant least privilege](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html#grant-least-priv) and [Security best practices](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html) in the IAM documentation.
+ Enable [log data encryption using AWS KMS](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CloudWatchLogs-Insights-Query-Encrypt.html) for sensitive log data.

**Cost optimization**
+ CloudWatch Logs Insights charges per GB of data scanned per query. Narrow time ranges and target specific log groups to reduce costs.
+ Configure appropriate log retention policies to manage storage costs.
+ For frequent analysis of large historical datasets, consider exporting logs to Amazon S3 and using Amazon Athena.
+ Review [CloudWatch pricing](https://aws.amazon.com/cloudwatch/pricing/) to understand cost implications for your use case.

## Epics
<a name="monitor-application-activity-by-using-cloudwatch-logs-insights-epics"></a>

### Create a log group and configure logs to view in a dashboard
<a name="create-log-group-and-configure-logs-to-view-in-dashboard"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Configure IAM permissions. | To configure IAM permissions, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/monitor-application-activity-by-using-cloudwatch-logs-insights.html)For information about how to create IAM policies or to add permissions to existing policies, see [Define custom IAM permissions with customer managed policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create.html) and [Edit IAM policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-edit.html) in the *IAM User Guide*. For more information, see [Identity and access management for Amazon CloudWatch Logs](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/auth-and-access-control-cwl.html) and [CloudWatch Logs permissions reference](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/permissions-reference-cwl.html) in the *Amazon CloudWatch Logs User Guide*. | AWS administrator, AWS DevOps, AWS systems administrator, Cloud administrator, Cloud architect, DevOps engineer | 
| Create a log group. | To create a log group, use any of the following options:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/monitor-application-activity-by-using-cloudwatch-logs-insights.html)[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/monitor-application-activity-by-using-cloudwatch-logs-insights.html) | AWS administrator, AWS DevOps, AWS systems administrator, Cloud administrator, Cloud architect, DevOps engineer | 
| Generate a CloudWatch Logs Insights query. | To create and save a CloudWatch Logs Insights query, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/monitor-application-activity-by-using-cloudwatch-logs-insights.html) | AWS administrator, AWS DevOps, AWS systems administrator, Cloud administrator, Cloud architect, DevOps engineer | 
| Create visualization in a CloudWatch dashboard. | To use a CloudWatch dashboard to create a visualization, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/monitor-application-activity-by-using-cloudwatch-logs-insights.html)For more information about dashboard options and capabilities, see [Using Amazon CloudWatch dashboards](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch_Dashboards.html) and [Creating flexible CloudWatch dashboards with dashboard variables](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch_dashboard_variables.html) in the *Amazon CloudWatch Logs User Guide*. | AWS administrator, AWS DevOps, AWS systems administrator, Cloud administrator, Cloud architect, DevOps engineer | 

## Troubleshooting
<a name="monitor-application-activity-by-using-cloudwatch-logs-insights-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| Unable to see query results or query seems broken | Start with a working query that was modified from a [sample query](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CWL_QuerySyntax-examples.html). Perform small incremental changes to parts of the query (such as a filter or field), and take advantage of the CloudWatch Logs [query generator feature](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CloudWatchLogs-Insights-Query-Assist.html). | 
| Log groups not creating log streams | In the IAM policy, make sure that the resource for the [CreateLogStream](https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_CreateLogStream.html) and [CreateLogGroup](https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_CreateLogGroup.html) operations is provided with a wildcard character (`*`) value. The `create` operation will not succeed without this wildcard permission. | 
| Query timeout or slow performance | Reduce the time range, target specific log groups, or simplify the query. Complex regular expression (`regex`) patterns and large time ranges increase query time. | 
| No data returned for valid time range | Verify log group selection and check that logs are being ingested (review log streams), and confirm the filter pattern matches your log format. | 
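
The wildcard guidance in the second row can be expressed as a policy statement like the following sketch. `logs:PutLogEvents` is included here as a typical companion permission for log delivery; it is an assumption, not something the table above requires.

```python
import json

# Policy document mirroring the troubleshooting guidance: the log-creation
# actions are granted against the "*" resource.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents",  # companion permission (assumption)
            ],
            "Resource": "*",
        }
    ],
}

print(json.dumps(policy, indent=2))
```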

## Related resources
<a name="monitor-application-activity-by-using-cloudwatch-logs-insights-resources"></a>
+ [Analyzing log data with CloudWatch Logs Insights](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AnalyzingLogData.html)
+ [Amazon CloudWatch FAQs](https://aws.amazon.com/cloudwatch/faqs/#topic-0)
+ [Creating flexible CloudWatch dashboards with dashboard variables](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch_dashboard_variables.html)
+ [Get started with Logs Insights QL: Query tutorials](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CWL_AnalyzeLogData_Tutorials.html)
+ [Use natural language to generate and update CloudWatch Logs Insights queries](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CloudWatchLogs-Insights-Query-Assist.html)
+ [Use PutDashboard with an AWS SDK or CLI](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/example_cloudwatch_PutDashboard_section.html)
+ [Working with log groups and log streams](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Working-with-log-groups-and-streams.html)

# Monitor use of a shared Amazon Machine Image across multiple AWS accounts
<a name="monitor-use-of-a-shared-amazon-machine-image-across-multiple-aws-accounts"></a>

*Naveen Suthar and Sandeep Gawande, Amazon Web Services*

## Summary
<a name="monitor-use-of-a-shared-amazon-machine-image-across-multiple-aws-accounts-summary"></a>

[Amazon Machine Images (AMIs)](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html) are used to create Amazon Elastic Compute Cloud (Amazon EC2) instances in your Amazon Web Services (AWS) environment. You can create AMIs in a separate, centralized AWS account, which is called a *creator account* in this pattern. You can then share the AMI across multiple AWS accounts that are in the same AWS Region, which are called *consumer accounts* in this pattern. Managing AMIs from a single account provides scalability and simplifies governance. In the consumer accounts, you can reference the shared AMI in Amazon EC2 Auto Scaling [launch templates](https://docs.aws.amazon.com/autoscaling/ec2/userguide/create-asg-launch-template.html) and Amazon Elastic Kubernetes Service (Amazon EKS) [node groups](https://docs.aws.amazon.com/eks/latest/userguide/managed-node-groups.html).

When a shared AMI is [deprecated](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ami-deprecate.html), [deregistered](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/deregister-ami.html), or [unshared](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/sharingamis-explicit.html), AWS services that refer to the AMI in the consumer accounts cannot use this AMI to launch new instances. Any auto scaling event or relaunch of the same instance fails. This can lead to issues in the production environment, such as application downtime or performance degradation. When AMI sharing and usage events occur in multiple AWS accounts, it can be difficult to monitor this activity.

This pattern helps you monitor shared AMI usage and status across accounts in the same Region. It uses serverless AWS services, such as Amazon EventBridge, Amazon DynamoDB, AWS Lambda, and Amazon Simple Email Service (Amazon SES). You provision the infrastructure as code (IaC) by using HashiCorp Terraform. This solution provides alerts when a service in a consumer account references a deregistered or unshared AMI.
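
API-level events such as `ModifyImageAttribute` reach EventBridge as `AWS API Call via CloudTrail` events. The following event pattern is an illustrative sketch of how a rule in the creator account might capture AMI sharing and deregistration activity; the exact pattern used by this pattern's sample repository may differ.

```python
import json

# Illustrative EventBridge event pattern (not taken from the sample repository).
event_pattern = {
    "source": ["aws.ec2"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["ec2.amazonaws.com"],
        "eventName": ["ModifyImageAttribute", "DeregisterImage"],
    },
}

print(json.dumps(event_pattern, indent=2))
```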

## Prerequisites and limitations
<a name="monitor-use-of-a-shared-amazon-machine-image-across-multiple-aws-accounts-prereqs"></a>

**Prerequisites**
+ Two or more active AWS accounts: one creator account and one or more consumer accounts
+ One or more AMIs that are shared from the creator account to a consumer account
+ Terraform CLI, [installed](https://developer.hashicorp.com/terraform/cli) (Terraform documentation)
+ Terraform AWS Provider, [configured](https://hashicorp.github.io/terraform-provider-aws/) (Terraform documentation)
+ (Optional, but recommended) Terraform backend, [configured](https://developer.hashicorp.com/terraform/language/backend) (Terraform documentation)
+ Git, [installed](https://github.com/git-guides/install-git)

**Limitations**
+ This pattern monitors AMIs that have been shared to specific accounts by using the account ID. This pattern does not monitor AMIs that have been shared to an organization by using the organization ID.
+ AMIs can be shared only to accounts that are within the same AWS Region. This pattern monitors AMIs within a single, target Region. To monitor use of AMIs in multiple Regions, deploy this solution in each Region.
+ This pattern doesn't monitor any AMIs that were shared before this solution was deployed. If you want to monitor previously shared AMIs, you can unshare the AMI and then reshare it with the consumer accounts.

**Product versions**
+ Terraform version 1.2.0 or later
+ Terraform AWS Provider version 4.20 or later

## Architecture
<a name="monitor-use-of-a-shared-amazon-machine-image-across-multiple-aws-accounts-architecture"></a>

**Target technology stack**

The following resources are provisioned as IaC through Terraform:
+ Amazon DynamoDB tables
+ Amazon EventBridge rules
+ AWS Identity and Access Management (IAM) role
+ AWS Lambda functions
+ Amazon SES

**Target architecture**

![\[Architecture for monitoring shared AMI use and alerting users if the AMI is unshared or deregistered\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/2d709249-0c68-47d7-be5d-46e8a73071ed/images/8c48c4dd-d681-4c32-9ba8-8f5ad2d66f64.png)


The diagram shows the following workflow:

1. An AMI in the creator account is shared with a consumer account in the same AWS Region.

1. When the AMI is shared, an EventBridge rule in the creator account captures the `ModifyImageAttribute` event and initiates a Lambda function in the creator account.

1. The Lambda function stores data related to the AMI in a DynamoDB table in the creator account.

1. When an AWS service in the consumer account uses the shared AMI to launch an Amazon EC2 instance or when the shared AMI is associated with a launch template, an EventBridge rule in the consumer account captures use of the shared AMI.

1. The EventBridge rule initiates a Lambda function in the consumer account. The Lambda function does the following:

   1. The Lambda function updates the AMI-related data in a DynamoDB table in the consumer account.

   1. The Lambda function assumes an IAM role in the creator account and updates the DynamoDB table in the creator account. In the `Mapping` table, it creates an item that maps the instance ID or launch template ID to its respective AMI ID.

1. The AMI that is centrally managed in the creator account is deprecated, deregistered, or unshared.

1. The EventBridge rule in the creator account captures the `ModifyImageAttribute` or `DeregisterImage` event with the `remove` action and initiates the Lambda function.

1. The Lambda function checks the DynamoDB table to determine whether the AMI is used in any of the consumer accounts. If there are no instance IDs or launch template IDs associated with the AMI in the `Mapping` table, then the process is complete.

1. If any instance IDs or launch template IDs are associated with the AMI in the `Mapping` table, then the Lambda function uses Amazon SES to send an email notification to the configured subscribers.
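
The decision in the final steps above reduces to a lookup: if the `Mapping` table holds any instance or launch template IDs for the affected AMI, notify the subscribers; otherwise, the process is complete. The following is a local sketch of that logic, where the item shape and attribute names are assumptions standing in for the actual DynamoDB schema in the sample repository.

```python
def consumers_of(ami_id, mapping_items):
    """Return the instance/launch-template IDs recorded against an AMI."""
    return [item["ResourceId"] for item in mapping_items if item["AmiId"] == ami_id]

# Stand-in for a DynamoDB query result over the Mapping table.
mapping_items = [
    {"AmiId": "ami-0abc", "ResourceId": "i-0123456789abcdef0"},
    {"AmiId": "ami-0abc", "ResourceId": "lt-0fedcba9876543210"},
    {"AmiId": "ami-0def", "ResourceId": "i-0aaaaaaaaaaaaaaaa"},
]

in_use = consumers_of("ami-0abc", mapping_items)
if in_use:
    # In the real solution, this is where the Lambda function would use
    # Amazon SES to email the configured subscribers.
    print(f"ami-0abc is still referenced by: {in_use}")
```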

## Tools
<a name="monitor-use-of-a-shared-amazon-machine-image-across-multiple-aws-accounts-tools"></a>

**AWS services**
+ [Amazon DynamoDB](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Introduction.html) is a fully managed NoSQL database service that provides fast, predictable, and scalable performance.
+ [Amazon EventBridge](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-what-is.html) is a serverless event bus service that helps you connect your applications with real-time data from a variety of sources, such as AWS Lambda functions, HTTP invocation endpoints using API destinations, or event buses in other AWS accounts.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a compute service that helps you run code without needing to provision or manage servers. It runs your code only when needed and scales automatically, so you pay only for the compute time that you use.
+ [Amazon Simple Email Service (Amazon SES)](https://docs.aws.amazon.com/ses/latest/dg/Welcome.html) helps you send and receive emails by using your own email addresses and domains.

**Other tools**
+ [HashiCorp Terraform](https://www.terraform.io/docs) is an infrastructure as code (IaC) tool that helps you use code to provision and manage cloud infrastructure and resources.
+ [Python](https://www.python.org/) is a general-purpose computer programming language.

**Code repository**

The code for this pattern is available in the GitHub [cross-account-ami-monitoring-terraform-samples](https://github.com/aws-samples/cross-account-ami-monitoring-terraform-samples) repository.

## Best practices
<a name="monitor-use-of-a-shared-amazon-machine-image-across-multiple-aws-accounts-best-practices"></a>
+ Follow the [Best practices for working with AWS Lambda functions](https://docs.aws.amazon.com/lambda/latest/dg/best-practices.html).
+ Follow the [Best practices for building AMIs](https://docs.aws.amazon.com/marketplace/latest/userguide/best-practices-for-building-your-amis.html).
+ When creating the IAM role, follow the principle of least privilege and grant the minimum permissions required to perform a task. For more information, see [Grant least privilege](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html#grant-least-priv) and [Security best practices](https://docs.aws.amazon.com/IAM/latest/UserGuide/IAMBestPracticesAndUseCases.html) in the IAM documentation.
+ Set up monitoring and alerting for the AWS Lambda functions. For more information, see [Monitoring and troubleshooting Lambda functions](https://docs.aws.amazon.com/lambda/latest/dg/lambda-monitoring.html).

## Epics
<a name="monitor-use-of-a-shared-amazon-machine-image-across-multiple-aws-accounts-epics"></a>

### Customize the Terraform configuration files
<a name="customize-the-terraform-configuration-files"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the AWS CLI named profiles. | For the creator account and each consumer account, create an AWS Command Line Interface (AWS CLI) named profile. For instructions, see [Set up the AWS CLI](https://aws.amazon.com/getting-started/guides/setup-environment/module-three/) in the AWS Getting Started Resources Center. | DevOps engineer | 
| Clone the repository. | Enter the following command. This clones the [cross-account-ami-monitoring-terraform-samples](https://github.com/aws-samples/cross-account-ami-monitoring-terraform-samples) repository from GitHub by using SSH.<pre>git clone git@github.com:aws-samples/cross-account-ami-monitoring-terraform-samples.git</pre> | DevOps engineer | 
| Update the provider.tf file. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/monitor-use-of-a-shared-amazon-machine-image-across-multiple-aws-accounts.html)For more information about configuring the providers, see [Multiple provider configurations](https://developer.hashicorp.com/terraform/language/providers/configuration#alias-multiple-provider-configurations) in the Terraform documentation. | DevOps engineer | 
| Update the terraform.tfvars file. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/monitor-use-of-a-shared-amazon-machine-image-across-multiple-aws-accounts.html) | DevOps engineer | 
| Update the main.tf file. | Complete these steps only if you are deploying this solution to more than one consumer account. If you are deploying this solution to only one consumer account, no modification of this file is necessary.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/monitor-use-of-a-shared-amazon-machine-image-across-multiple-aws-accounts.html) | DevOps engineer | 

### Deploy the solution by using Terraform
<a name="deploy-the-solution-by-using-terraform"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Deploy the solution. | In the Terraform CLI, enter the following commands to deploy the AWS resources in the creator and consumer accounts:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/monitor-use-of-a-shared-amazon-machine-image-across-multiple-aws-accounts.html) | DevOps engineer | 
| Verify the email address identity. | When you deployed the Terraform plan, Terraform created an email address identity for each consumer account in Amazon SES. Before notifications can be sent to that email address, you must verify the email address. For instructions, see [Verifying an email address identity](https://docs.aws.amazon.com/ses/latest/dg/creating-identities.html#just-verify-email-proc) in the Amazon SES documentation. | General AWS | 

### Validate resource deployment
<a name="validate-resource-deployment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Validate deployment in the creator account. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/monitor-use-of-a-shared-amazon-machine-image-across-multiple-aws-accounts.html) | DevOps engineer | 
| Validate deployment in the consumer account. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/monitor-use-of-a-shared-amazon-machine-image-across-multiple-aws-accounts.html) | DevOps engineer | 

### Validate monitoring
<a name="validate-monitoring"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an AMI in the creator account. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/monitor-use-of-a-shared-amazon-machine-image-across-multiple-aws-accounts.html) | DevOps engineer | 
| Use the AMI in the consumer account. | In the consumer account, use the shared AMI to create an Amazon EC2 instance or launch template. For instructions, see [How do I launch an Amazon EC2 instance from a custom AMI](https://repost.aws/knowledge-center/launch-instance-custom-ami) (AWS re:Post Knowledge Center) or [Create a launch template for an Auto Scaling group](https://docs.aws.amazon.com/autoscaling/ec2/userguide/create-launch-template.html) (Amazon EC2 Auto Scaling documentation). | DevOps engineer | 
| Validate monitoring and alerting. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/monitor-use-of-a-shared-amazon-machine-image-across-multiple-aws-accounts.html) | DevOps engineer | 

### (Optional) Stop monitoring shared AMIs
<a name="optional-stop-monitoring-shared-amis"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Delete the resources. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/monitor-use-of-a-shared-amazon-machine-image-across-multiple-aws-accounts.html) | DevOps engineer | 

## Troubleshooting
<a name="monitor-use-of-a-shared-amazon-machine-image-across-multiple-aws-accounts-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| I did not receive an email alert. | There could be multiple reasons why the Amazon SES email was not sent. Check the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/monitor-use-of-a-shared-amazon-machine-image-across-multiple-aws-accounts.html) | 

## Related resources
<a name="monitor-use-of-a-shared-amazon-machine-image-across-multiple-aws-accounts-resources"></a>

**AWS documentation**
+ [Building Lambda functions with Python](https://docs.aws.amazon.com/lambda/latest/dg/lambda-python.html) (Lambda documentation)
+ [Create an AMI](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/create-ami.html) (Amazon EC2 documentation)
+ [Share an AMI with specific AWS accounts](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/sharingamis-explicit.html) (Amazon EC2 documentation)
+ [Deregister your AMI](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/deregister-ami.html) (Amazon EC2 documentation)

**Terraform documentation**
+ [Install Terraform](https://learn.hashicorp.com/tutorials/terraform/install-cli)
+ [Terraform Backend Configuration](https://www.terraform.io/language/settings/backends/configuration)
+ [Terraform AWS Provider](https://registry.terraform.io/providers/hashicorp/aws/latest/docs)
+ [Terraform binary download](https://developer.hashicorp.com/terraform/install)

# View EBS snapshot details for your AWS account or organization
<a name="view-ebs-snapshot-details-for-your-aws-account-or-organization"></a>

*Arun Chandapillai and Parag Nagwekar, Amazon Web Services*

## Summary
<a name="view-ebs-snapshot-details-for-your-aws-account-or-organization-summary"></a>

This pattern describes how you can automatically generate an on-demand report of all Amazon Elastic Block Store (Amazon EBS) snapshots in your Amazon Web Services (AWS) account or organizational unit (OU) in AWS Organizations. 

Amazon EBS is an easy-to-use, scalable, high-performance block storage service designed for Amazon Elastic Compute Cloud (Amazon EC2). An EBS volume provides durable and persistent storage that you can attach to your EC2 instances. You can use EBS volumes as primary storage for your data and take a point-in-time backup of your EBS volumes by creating a snapshot. You can use the AWS Management Console or the AWS Command Line Interface (AWS CLI) to view the details of specific EBS snapshots. This pattern provides a programmatic way to retrieve information about all EBS snapshots in your AWS account or OU.

You can use the script provided by this pattern to generate a comma-separated values (CSV) file that has the following information about each snapshot: account ID, snapshot ID, volume ID and size, the date the snapshot was taken, instance ID, and description. If your EBS snapshots are tagged, the report also includes the owner and team attributes.
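A report in this shape lends itself to simple post-processing. As a rough sketch of how the generated CSV might be consumed — the column names `AccountId` and `VolumeSize` here are assumptions for illustration; check the header row of the file the script actually produces:

```python
import csv
import io
from collections import defaultdict

def total_size_by_account(csv_text):
    """Sum snapshot volume sizes (GiB) per account from the report."""
    totals = defaultdict(int)
    for row in csv.DictReader(io.StringIO(csv_text)):
        totals[row["AccountId"]] += int(row["VolumeSize"])
    return dict(totals)

# Hypothetical report excerpt -- align these names with the script's real header row.
sample = """AccountId,SnapshotId,VolumeId,VolumeSize,StartTime
111111111111,snap-0aaa,vol-0aaa,8,2023-01-01
111111111111,snap-0bbb,vol-0bbb,100,2023-01-02
222222222222,snap-0ccc,vol-0ccc,30,2023-01-03
"""

print(total_size_by_account(sample))  # per-account GiB totals
```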

## Prerequisites and limitations
<a name="view-ebs-snapshot-details-for-your-aws-account-or-organization-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ AWS CLI version 2 [installed](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html#getting-started-install-instructions) and [configured](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html)
+ AWS Identity and Access Management (IAM) role with the appropriate permissions (access permissions for a specific account or for all accounts in an OU if you’re planning to run the script from AWS Organizations)

## Architecture
<a name="view-ebs-snapshot-details-for-your-aws-account-or-organization-architecture"></a>

The following diagram shows the script workflow that generates an on-demand report of EBS snapshots that are spread across multiple AWS accounts in an OU.

![\[Generating an on-demand report of EBS snapshots across OUs.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/4e8b1812-2731-4f46-8385-0dd4d92f2d03/images/62d10408-7c85-46cf-a6a4-fe87a6e446f2.png)


## Tools
<a name="view-ebs-snapshot-details-for-your-aws-account-or-organization-tools"></a>

**AWS services**
+ [AWS Command Line Interface (AWS CLI)](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) is an open-source tool that helps you interact with AWS services through commands in your command-line shell.
+ [Amazon Elastic Block Store (Amazon EBS)](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonEBS.html) provides block-level storage volumes for use with EC2 instances.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.
+ [AWS Organizations](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_introduction.html) is an account management service that helps you consolidate multiple AWS accounts into an organization that you create and centrally manage.

**Code**

The code for the sample application used in this pattern is available on GitHub, in the [aws-ebs-snapshots-awsorganizations](https://github.com/aws-samples/aws-ebs-snapshots-awsorganizations) repository. Follow the instructions in the next section to use the sample files.

## Epics
<a name="view-ebs-snapshot-details-for-your-aws-account-or-organization-epics"></a>

### Download the script
<a name="download-the-script"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Download the Python script. | Download the script [GetSnapshotDetailsAllAccountsOU.py](https://github.com/aws-samples/aws-ebs-snapshots-awsorganizations/blob/main/GetSnapshotDetailsAllAccountsOU.py) from the [GitHub repository](https://github.com/aws-samples/aws-ebs-snapshots-awsorganizations). | General AWS | 

### Get EBS snapshot details for an AWS account
<a name="get-ebs-snapshot-details-for-an-aws-account"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Run the Python script. | Run the following command:<pre>python3 GetSnapshotDetailsAllAccountsOU.py --file <output-file>.csv --region <region-name></pre>where `<output-file>` is the CSV file where you want the EBS snapshot information written, and `<region-name>` is the AWS Region where the snapshots are stored. For example:<pre>python3 GetSnapshotDetailsAllAccountsOU.py --file snapshots.csv --region us-east-1</pre> | General AWS | 
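Under the hood, a per-account report of this kind boils down to paging through the `DescribeSnapshots` API with `OwnerIds=["self"]`. The following is a minimal boto3 sketch of that idea, not the pattern's script itself; the row layout is illustrative:

```python
def snapshot_row(snap):
    """Flatten one DescribeSnapshots result item into a CSV-style row."""
    return [
        snap.get("OwnerId", ""),
        snap["SnapshotId"],
        snap.get("VolumeId", ""),
        str(snap.get("VolumeSize", "")),
        str(snap.get("StartTime", "")),
        snap.get("Description", ""),
    ]

def list_own_snapshots(region):
    """Page through every snapshot owned by the current account."""
    import boto3  # imported here so snapshot_row stays testable offline
    ec2 = boto3.client("ec2", region_name=region)
    rows = []
    for page in ec2.get_paginator("describe_snapshots").paginate(OwnerIds=["self"]):
        rows.extend(snapshot_row(s) for s in page["Snapshots"])
    return rows
```

Using a paginator matters here: `DescribeSnapshots` returns results in pages, and accounts with many snapshots would otherwise be silently truncated.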

### Get EBS snapshot details for an organization
<a name="get-ebs-snapshot-details-for-an-organization"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Run the Python script. | Run the following command:<pre>python3 GetSnapshotDetailsAllAccountsOU.py --file <output-file>.csv --role <IAM-role> --region <region-name></pre>where `<output-file>` is the CSV file where you want the EBS snapshot information written, `<IAM-role>` is a role that provides permissions to access AWS Organizations, and `<region-name>` is the AWS Region where the snapshots are stored. For example:<pre>python3 GetSnapshotDetailsAllAccountsOU.py --file snapshots.csv --role <IAM-role> --region us-west-2</pre> | General AWS | 
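For the organization-wide case, the script needs credentials in each member account; the standard approach is an STS `AssumeRole` call per account using the role you pass with `--role`. A hedged sketch of that mechanism (the session name is an illustrative assumption):

```python
def role_arn(account_id, role_name):
    """Build the ARN of the cross-account role to assume."""
    return f"arn:aws:iam::{account_id}:role/{role_name}"

def member_account_session(account_id, role_name):
    """Return a boto3 Session scoped to one member account."""
    import boto3  # imported here so role_arn stays testable offline
    creds = boto3.client("sts").assume_role(
        RoleArn=role_arn(account_id, role_name),
        RoleSessionName="ebs-snapshot-report",  # illustrative session name
    )["Credentials"]
    return boto3.Session(
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
```

Each member account's role must trust the account running the script; the session returned here can then be used to call `DescribeSnapshots` in that account.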

## Related resources
<a name="view-ebs-snapshot-details-for-your-aws-account-or-organization-resources"></a>
+ [Amazon EBS documentation](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonEBS.html)
+ [Amazon EBS actions](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/OperationList-query-ebs.html)
+ [Amazon EBS CLI reference](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/ebs/index.html)
+ [Improving Amazon EBS performance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSPerformance.html)
+ [Amazon EBS resources](https://aws.amazon.com/ebs/resources/)
+ [EBS snapshot pricing](https://aws.amazon.com/ebs/pricing/)

## Additional information
<a name="view-ebs-snapshot-details-for-your-aws-account-or-organization-additional"></a>

**EBS snapshot types**

Amazon EBS provides three types of snapshots, based on ownership and access:
+ **Owned by you** – By default, only you can create volumes from snapshots that you own.
+ **Public snapshots** – You can share unencrypted snapshots publicly with all other AWS accounts. To make a snapshot public, you modify its permissions to share it with all AWS accounts. Any user can then use a public snapshot to create their own EBS volumes, while your original snapshot remains unaffected. For security reasons, you can't make encrypted snapshots public. Public snapshots pose a significant security risk because they can expose personal and sensitive data, so we strongly recommend against sharing your EBS snapshots with all AWS accounts. For more information about sharing snapshots, see the [AWS documentation](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-modifying-snapshot-permissions.html).
+ **Private snapshots** – You can share snapshots privately with individual AWS accounts that you specify. To share a snapshot privately with specific AWS accounts, follow the [instructions](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-modifying-snapshot-permissions.html#share-unencrypted-snapshot) in the AWS documentation, and choose **Private** for the permissions setting. Users that you have authorized can use the snapshots that you share to create their own EBS volumes, while your original snapshot remains unaffected.
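The private-sharing flow described above maps to the `ModifySnapshotAttribute` API with the `createVolumePermission` attribute. A minimal boto3 sketch — the function names and parameter grouping are illustrative, not from the pattern:

```python
def share_params(snapshot_id, account_ids):
    """Build the ModifySnapshotAttribute request for a private share."""
    return {
        "SnapshotId": snapshot_id,
        "Attribute": "createVolumePermission",
        "OperationType": "add",  # use "remove" to revoke the share
        "UserIds": list(account_ids),
    }

def share_snapshot(snapshot_id, account_ids, region):
    """Grant createVolumePermission to the specified AWS accounts."""
    import boto3  # imported here so share_params stays testable offline
    boto3.client("ec2", region_name=region).modify_snapshot_attribute(
        **share_params(snapshot_id, account_ids)
    )
```

For example, `share_snapshot("snap-1234567890abcdef0", ["111111111111"], "us-east-1")` would share the snapshot privately with one account.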

**Overviews and procedures**

The following table provides links to more information about EBS snapshots, including how you can lower EBS volume costs by finding and deleting unused snapshots and by archiving rarely accessed snapshots that do not require frequent or fast retrieval.


| For information about | See | 
| --- | --- | 
| **Snapshots, their features, and limitations** | [Create Amazon EBS snapshots](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-creating-snapshot.html) | 
| **How to create a snapshot** | Console: [Create a snapshot](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-creating-snapshot.html#ebs-create-snapshot)<br>AWS CLI: [create-snapshot command](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/ec2/create-snapshot.html)<br>For example:<pre>aws ec2 create-snapshot --volume-id vol-1234567890abcdef0 --description "volume snapshot"</pre> | 
| **Deleting snapshots (general information)** | [Delete an Amazon EBS snapshot](https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/ebs-deleting-snapshot.html) | 
| **How to delete a snapshot** | Console: [Delete a snapshot](https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/ebs-deleting-snapshot.html#ebs-delete-snapshot)<br>AWS CLI: [delete-snapshot command](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/ec2/delete-snapshot.html)<br>For example:<pre>aws ec2 delete-snapshot --snapshot-id snap-1234567890abcdef0</pre> | 
| **Archiving snapshots (general information)** | [Archive Amazon EBS snapshots](https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/snapshot-archive.html)<br>[Amazon EBS Snapshots Archive](https://aws.amazon.com/blogs/aws/new-amazon-ebs-snapshots-archive/) (blog post) | 
| **How to archive a snapshot** | Console: [Archive a snapshot](https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/working-with-snapshot-archiving.html#archive-snapshot)<br>AWS CLI: [modify-snapshot-tier command](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/ec2/modify-snapshot-tier.html) | 
| **How to retrieve an archived snapshot** | Console: [Restore an archived snapshot](https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/working-with-snapshot-archiving.html#restore-archived-snapshot)<br>AWS CLI: [restore-snapshot-tier command](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/ec2/restore-snapshot-tier.html) | 
| **Snapshot pricing** | [Amazon EBS pricing](https://aws.amazon.com/ebs/pricing/) | 
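The archive and restore rows above map to the `ModifySnapshotTier` and `RestoreSnapshotTier` EC2 APIs. A minimal boto3 sketch, assuming a temporary restore; the 30-day default here is illustrative:

```python
ARCHIVE_MIN_DAYS = 90  # archived snapshots are billed for at least 90 days

def archive_snapshot(snapshot_id, region):
    """Move a snapshot from the standard tier to the archive tier."""
    import boto3
    boto3.client("ec2", region_name=region).modify_snapshot_tier(
        SnapshotId=snapshot_id, StorageTier="archive"
    )

def restore_snapshot(snapshot_id, region, days=30):
    """Temporarily restore an archived snapshot to the standard tier."""
    import boto3
    boto3.client("ec2", region_name=region).restore_snapshot_tier(
        SnapshotId=snapshot_id, TemporaryRestoreDays=days
    )
```

Omitting `TemporaryRestoreDays` and setting `PermanentRestore=True` instead would restore the snapshot permanently.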

**FAQ**

**What is the minimum archive period?**

The minimum archive period is 90 days.

**How long does it take to restore an archived snapshot?**

It can take up to 72 hours to restore an archived snapshot from the archive tier to the standard tier, depending on the size of the snapshot.

**Are archived snapshots full snapshots?**

Archived snapshots are always full snapshots.

**Which snapshots can a user archive?**

You can archive only snapshots that you own in your account.

**Can you archive a snapshot of the root device volume of a registered Amazon Machine Image (AMI)?**

No, you can’t archive a snapshot of the root device volume of a registered AMI.

**What are security considerations for sharing a snapshot?**

When you share a snapshot, you are giving others access to all the data on the snapshot. Share snapshots only with people that you trust with your data.

**How do you share a snapshot with another AWS Region?**

Snapshots are constrained to the Region in which they were created. To share a snapshot with another Region, copy the snapshot to that Region and then share the copy.

**Can you share snapshots that are encrypted?**

You can't share snapshots that are encrypted with the default AWS managed key. You can share snapshots that are encrypted with a customer managed key only. When you share an encrypted snapshot, you must also share the customer managed key that was used to encrypt the snapshot.
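Sharing an encrypted snapshot therefore involves two calls: granting the consumer account access to the customer managed key, and adding the create-volume permission on the snapshot. A hedged boto3 sketch — the grant operations listed and the `:root` grantee are illustrative choices, and the key policy must permit grant creation:

```python
def grantee_principal(account_id):
    """Principal for the consumer account (the whole account, via :root)."""
    return f"arn:aws:iam::{account_id}:root"

def share_encrypted_snapshot(snapshot_id, kms_key_id, account_id, region):
    """Share a snapshot encrypted with a customer managed key."""
    import boto3  # imported here so grantee_principal stays testable offline
    # Allow the consumer account to use the key the snapshot was encrypted with.
    boto3.client("kms", region_name=region).create_grant(
        KeyId=kms_key_id,
        GranteePrincipal=grantee_principal(account_id),
        Operations=["Decrypt", "DescribeKey", "CreateGrant"],
    )
    # Then add the create-volume permission on the snapshot itself.
    boto3.client("ec2", region_name=region).modify_snapshot_attribute(
        SnapshotId=snapshot_id,
        Attribute="createVolumePermission",
        OperationType="add",
        UserIds=[account_id],
    )
```

Without the KMS step, the consumer account can see the shared snapshot but cannot create volumes from it, because it cannot decrypt the data.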

**What about unencrypted snapshots?**

You can share unencrypted snapshots publicly.

# More patterns
<a name="governance-more-patterns-pattern-list"></a>

**Topics**
+ [Automate account creation by using the Landing Zone Accelerator on AWS](automate-account-creation-lza.md)
+ [Automate AWS infrastructure operations by using Amazon Bedrock](automate-aws-infrastructure-operations-by-using-amazon-bedrock.md)
+ [Automate AWS resource assessment](automate-aws-resource-assessment.md)
+ [Automatically inventory AWS resources across multiple accounts and Regions](automate-aws-resource-inventory.md)
+ [Automate AWS Service Catalog portfolio and product deployment by using AWS CDK](automate-aws-service-catalog-portfolio-and-product-deployment-by-using-aws-cdk.md)
+ [Automate dynamic pipeline management for deploying hotfix solutions in Gitflow environments by using AWS Service Catalog and AWS CodePipeline](automate-dynamic-pipeline-management-for-deploying-hotfix-solutions.md)
+ [Automate ingestion and visualization of Amazon MWAA custom metrics on Amazon Managed Grafana by using Terraform](automate-ingestion-and-visualization-of-amazon-mwaa-custom-metrics.md)
+ [Automatically attach an AWS managed policy for Systems Manager to EC2 instance profiles using Cloud Custodian and AWS CDK](automatically-attach-an-aws-managed-policy-for-systems-manager-to-ec2-instance-profiles-using-cloud-custodian-and-aws-cdk.md)
+ [Automatically encrypt existing and new Amazon EBS volumes](automatically-encrypt-existing-and-new-amazon-ebs-volumes.md)
+ [Build an AWS landing zone that includes MongoDB Atlas](build-aws-landing-zone-that-includes-mongodb-atlas.md)
+ [Centralize monitoring by using Amazon CloudWatch Observability Access Manager](centralize-monitoring-by-using-amazon-cloudwatch-observability-access-manager.md)
+ [Check EC2 instances for mandatory tags at launch](check-ec2-instances-for-mandatory-tags-at-launch.md)
+ [Clean up AWS Account Factory for Terraform (AFT) resources safely after state file loss](clean-up-aft-resources-safely-after-state-file-loss.md)
+ [Create an Amazon ECS task definition and mount a file system on EC2 instances using Amazon EFS](create-an-amazon-ecs-task-definition-and-mount-a-file-system-on-ec2-instances-using-amazon-efs.md)
+ [Create AWS Config custom rules by using AWS CloudFormation Guard policies](create-aws-config-custom-rules-by-using-aws-cloudformation-guard-policies.md)
+ [Customize default role names by using AWS CDK aspects and escape hatches](customize-default-role-names-by-using-aws-cdk-aspects-and-escape-hatches.md)
+ [Deploy and manage AWS Control Tower controls by using AWS CDK and CloudFormation](deploy-and-manage-aws-control-tower-controls-by-using-aws-cdk-and-aws-cloudformation.md)
+ [Deploy and manage AWS Control Tower controls by using Terraform](deploy-and-manage-aws-control-tower-controls-by-using-terraform.md)
+ [Deploy code in multiple AWS Regions using AWS CodePipeline, AWS CodeCommit, and AWS CodeBuild](deploy-code-in-multiple-aws-regions-using-aws-codepipeline-aws-codecommit-and-aws-codebuild.md)
+ [Deploy containerized applications on AWS IoT Greengrass V2 running as a Docker container](deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container.md)
+ [Enable Amazon GuardDuty conditionally by using AWS CloudFormation templates](enable-amazon-guardduty-conditionally-by-using-aws-cloudformation-templates.md)
+ [Enable DB2 log archiving directly to Amazon S3 in an IBM Db2 database](enable-db2-logarchive-directly-to-amazon-s3-in-ibm-db2-database.md)
+ [Export a report of AWS IAM Identity Center identities and their assignments by using PowerShell](export-a-report-of-aws-iam-identity-center-identities-and-their-assignments-by-using-powershell.md)
+ [Generate an AWS CloudFormation template containing AWS Config managed rules using Troposphere](generate-an-aws-cloudformation-template-containing-aws-config-managed-rules-using-troposphere.md)
+ [Give SageMaker notebook instances temporary access to a CodeCommit repository in another AWS account](give-sagemaker-notebook-instances-temporary-access-to-a-codecommit-repository-in-another-aws-account.md)
+ [Integrate Stonebranch Universal Controller with AWS Mainframe Modernization](integrate-stonebranch-universal-controller-with-aws-mainframe-modernization.md)
+ [Launch a CodeBuild project across AWS accounts using Step Functions and a Lambda proxy function](launch-a-codebuild-project-across-aws-accounts-using-step-functions-and-a-lambda-proxy-function.md)
+ [Manage AWS permission sets dynamically by using Terraform](manage-aws-permission-sets-dynamically-by-using-terraform.md)
+ [Migrate IIS-hosted applications to Amazon EC2 by using appcmd.exe](migrate-iis-hosted-applications-to-amazon-ec2-by-using-appcmd.md)
+ [Migrate Windows SSL certificates to an Application Load Balancer using ACM](migrate-windows-ssl-certificates-to-an-application-load-balancer-using-acm.md)
+ [Monitor IAM root user activity](monitor-iam-root-user-activity.md)
+ [Create a hierarchical, multi-Region IPAM architecture on AWS by using Terraform](multi-region-ipam-architecture.md)
+ [Optimize multi-account serverless deployments by using the AWS CDK and GitHub Actions workflows](optimize-multi-account-serverless-deployments.md)
+ [Preserve routable IP space in multi-account VPC designs for non-workload subnets](preserve-routable-ip-space-in-multi-account-vpc-designs-for-non-workload-subnets.md)
+ [Provision least-privilege IAM roles by deploying a role vending machine solution](provision-least-privilege-iam-roles-by-deploying-a-role-vending-machine-solution.md)
+ [Register multiple AWS accounts with a single email address by using Amazon SES](register-multiple-aws-accounts-with-a-single-email-address-by-using-amazon-ses.md)
+ [Remove Amazon EC2 entries across AWS accounts from AWS Managed Microsoft AD by using AWS Lambda automation](remove-amazon-ec2-entries-across-aws-accounts-from-aws-managed-microsoft-ad.md)
+ [Remove Amazon EC2 entries in the same AWS account from AWS Managed Microsoft AD by using AWS Lambda automation](remove-amazon-ec2-entries-in-the-same-aws-account-from-aws-managed-microsoft-ad.md)
+ [Secure sensitive data in CloudWatch Logs by using Amazon Macie](secure-cloudwatch-logs-using-macie.md)
+ [Send notifications for an Amazon RDS for SQL Server database instance by using an on-premises SMTP server and Database Mail](send-notifications-for-an-amazon-rds-for-sql-server-database-instance-by-using-an-on-premises-smtp-server-and-database-mail.md)
+ [Set up a Grafana monitoring dashboard for AWS ParallelCluster](set-up-a-grafana-monitoring-dashboard-for-aws-parallelcluster.md)
+ [Set up centralized logging at enterprise scale by using Terraform](set-up-centralized-logging-at-enterprise-scale-by-using-terraform.md)
+ [Set up disaster recovery for SAP on IBM Db2 on AWS](set-up-disaster-recovery-for-sap-on-ibm-db2-on-aws.md)
+ [Streamline Amazon EC2 compliance management with Amazon Bedrock agents and AWS Config](streamline-amazon-ec2-compliance-management-with-amazon-bedrock-agents-and-aws-config.md)
+ [Tag Transit Gateway attachments automatically using AWS Organizations](tag-transit-gateway-attachments-automatically-using-aws-organizations.md)
+ [Use BMC Discovery queries to extract migration data for migration planning](use-bmc-discovery-queries-to-extract-migration-data-for-migration-planning.md)
+ [Verify operational best practices for PCI DSS 4.0 by using AWS Config](verify-ops-best-practices-pci-dss-4.md)
+ [View AWS Network Firewall logs and metrics by using Splunk](view-aws-network-firewall-logs-and-metrics-by-using-splunk.md)
+ [Visualize IAM credential reports for all AWS accounts using Amazon QuickSight](visualize-iam-credential-reports-for-all-aws-accounts-using-amazon-quicksight.md)