

# Infrastructure
<a name="infrastructure-pattern-list"></a>

**Topics**
+ [Access a bastion host by using Session Manager and Amazon EC2 Instance Connect](access-a-bastion-host-by-using-session-manager-and-amazon-ec2-instance-connect.md)
+ [Centralize DNS resolution by using AWS Managed Microsoft AD and on-premises Microsoft Active Directory](centralize-dns-resolution-by-using-aws-managed-microsoft-ad-and-on-premises-microsoft-active-directory.md)
+ [Centralize monitoring by using Amazon CloudWatch Observability Access Manager](centralize-monitoring-by-using-amazon-cloudwatch-observability-access-manager.md)
+ [Check EC2 instances for mandatory tags at launch](check-ec2-instances-for-mandatory-tags-at-launch.md)
+ [Clean up AWS Account Factory for Terraform (AFT) resources safely after state file loss](clean-up-aft-resources-safely-after-state-file-loss.md)
+ [Create a pipeline in AWS Regions that don’t support AWS CodePipeline](create-a-pipeline-in-aws-regions-that-don-t-support-aws-codepipeline.md)
+ [Customize default role names by using AWS CDK aspects and escape hatches](customize-default-role-names-by-using-aws-cdk-aspects-and-escape-hatches.md)
+ [Deploy a Cassandra cluster on Amazon EC2 with private static IPs to avoid rebalancing](deploy-a-cassandra-cluster-on-amazon-ec2-with-private-static-ips-to-avoid-rebalancing.md)
+ [Extend VRFs to AWS by using AWS Transit Gateway Connect](extend-vrfs-to-aws-by-using-aws-transit-gateway-connect.md)
+ [Get Amazon SNS notifications when the key state of an AWS KMS key changes](get-amazon-sns-notifications-when-the-key-state-of-an-aws-kms-key-changes.md)
+ [Preserve routable IP space in multi-account VPC designs for non-workload subnets](preserve-routable-ip-space-in-multi-account-vpc-designs-for-non-workload-subnets.md)
+ [Provision a Terraform product in AWS Service Catalog by using a code repository](provision-a-terraform-product-in-aws-service-catalog-by-using-a-code-repository.md)
+ [Register multiple AWS accounts with a single email address by using Amazon SES](register-multiple-aws-accounts-with-a-single-email-address-by-using-amazon-ses.md)
+ [Set up DNS resolution for hybrid networks in a single-account AWS environment](set-up-dns-resolution-for-hybrid-networks-in-a-single-account-aws-environment.md)
+ [Set up UiPath RPA bots automatically on Amazon EC2 by using AWS CloudFormation](set-up-uipath-rpa-bots-automatically-on-amazon-ec2-by-using-aws-cloudformation.md)
+ [Set up a highly available PeopleSoft architecture on AWS](set-up-a-highly-available-peoplesoft-architecture-on-aws.md)
+ [Set up disaster recovery for Oracle JD Edwards EnterpriseOne with AWS Elastic Disaster Recovery](set-up-disaster-recovery-for-oracle-jd-edwards-enterpriseone-with-aws-elastic-disaster-recovery.md)
+ [Set up CloudFormation drift detection in a multi-Region, multi-account organization](set-up-aws-cloudformation-drift-detection-in-a-multi-region-multi-account-organization.md)
+ [Successfully import an S3 bucket as an AWS CloudFormation stack](successfully-import-an-s3-bucket-as-an-aws-cloudformation-stack.md)
+ [Synchronize data between Amazon EFS file systems in different AWS Regions by using AWS DataSync](synchronize-data-between-amazon-efs-file-systems-in-different-aws-regions-by-using-aws-datasync.md)
+ [Test AWS infrastructure by using LocalStack and Terraform Tests](test-aws-infra-localstack-terraform.md)
+ [Upgrade SAP Pacemaker clusters from ENSA1 to ENSA2](upgrade-sap-pacemaker-clusters-from-ensa1-to-ensa2.md)
+ [Use consistent Availability Zones in VPCs across different AWS accounts](use-consistent-availability-zones-in-vpcs-across-different-aws-accounts.md)
+ [Use user IDs in IAM policies for access control and automation](use-user-ids-iam-policies-access-control-automation.md)
+ [Validate Account Factory for Terraform (AFT) code locally](validate-account-factory-for-terraform-aft-code-locally.md)
+ [More patterns](infrastructure-more-patterns-pattern-list.md)

# Access a bastion host by using Session Manager and Amazon EC2 Instance Connect
<a name="access-a-bastion-host-by-using-session-manager-and-amazon-ec2-instance-connect"></a>

*Piotr Chotkowski and Witold Kowalik, Amazon Web Services*

## Summary
<a name="access-a-bastion-host-by-using-session-manager-and-amazon-ec2-instance-connect-summary"></a>

A *bastion host*, sometimes called a *jump box*, is a server that provides a single point of access from an external network to the resources located in a private network. A server exposed to an external public network, such as the internet, poses a potential security risk for unauthorized access. It’s important to secure and control access to these servers.

This pattern describes how you can use [Session Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager.html) and [Amazon EC2 Instance Connect](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Connect-using-EC2-Instance-Connect.html) to securely connect to an Amazon Elastic Compute Cloud (Amazon EC2) bastion host deployed in your AWS account. Session Manager is a capability of AWS Systems Manager. The benefits of this pattern include:
+ The deployed bastion host doesn’t have any open, inbound ports exposed to the public internet. This reduces the potential attack surface.
+ You don’t need to store and maintain long-term Secure Shell (SSH) keys in your AWS account. Instead, each user generates a new SSH key pair each time they connect to the bastion host. AWS Identity and Access Management (IAM) policies that are attached to the user’s AWS credentials control access to the bastion host.

**Intended audience**

This pattern is intended for readers who have a basic understanding of Amazon EC2, Amazon Virtual Private Cloud (Amazon VPC), and HashiCorp Terraform.

## Prerequisites and limitations
<a name="access-a-bastion-host-by-using-session-manager-and-amazon-ec2-instance-connect-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ AWS Command Line Interface (AWS CLI) version 2, [installed](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html) and [configured](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html)
+ Session Manager plugin for the AWS CLI, [installed](https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-working-with-install-plugin.html)
+ Terraform CLI, [installed](https://developer.hashicorp.com/terraform/cli)
+ Storage for the Terraform [state](https://developer.hashicorp.com/terraform/language/state), such as an Amazon Simple Storage Service (Amazon S3) bucket and an Amazon DynamoDB table that serve as a remote backend to store the Terraform state. For more information on using remote backends for the Terraform state, see [Amazon S3 Backends](https://www.terraform.io/language/settings/backends/s3) (Terraform documentation). For a code sample that sets up remote state management with an Amazon S3 backend, see [remote-state-s3-backend](https://registry.terraform.io/modules/nozaq/remote-state-s3-backend/aws/latest) (Terraform Registry). Note the following requirements:
  + The Amazon S3 bucket and DynamoDB table must be in the same AWS Region.
  + When creating the DynamoDB table, the partition key must be `LockID` (case-sensitive), and the partition key type must be `String`. All other table settings must be at their default values. For more information, see [About primary keys](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.CoreComponents.html#HowItWorks.CoreComponents.PrimaryKey) and [Create a table](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/getting-started-step-1.html) in the DynamoDB documentation.
+ An SSH client, installed
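
The remote backend prerequisites above can be sketched as a Terraform backend block. This is an illustrative example, not the configuration shipped in the pattern's repository: the bucket, key, Region, and table names are placeholders you must replace with your own values.

```hcl
terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket" # placeholder S3 bucket name
    key            = "bastion/terraform.tfstate" # placeholder path to the state file
    region         = "us-east-1"                 # the bucket and table must be in this same Region
    dynamodb_table = "terraform-locks"           # table partition key must be LockID (type String)
    encrypt        = true
  }
}
```

With this block in place, `terraform init` picks up the backend without the `-backend-config` flags shown later in the Epics section.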

**Limitations**
+ This pattern is intended as a proof of concept (PoC) or as a basis for further development. It should not be used in its current form in production environments. Before deployment, adjust the sample code in the repository to meet your requirements and use case.
+ This pattern assumes that the target bastion host uses Amazon Linux 2 as its operating system. Although it's possible to use other Amazon Machine Images (AMIs), other operating systems are out of scope for this pattern.
**Note**  
Amazon Linux 2 is nearing end of support. For more information, see the [Amazon Linux 2 FAQs](https://aws.amazon.com/amazon-linux-2/faqs/).
+ In this pattern, the bastion host is located in a private subnet without a NAT gateway or an internet gateway. This design isolates the Amazon EC2 instance from the public internet. You can add a specific network configuration that allows it to communicate with the internet. For more information, see [Connect your virtual private cloud (VPC) to other networks](https://docs.aws.amazon.com/vpc/latest/userguide/extend-intro.html) in the Amazon VPC documentation. Similarly, following the [principle of least privilege](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#grant-least-privilege), the bastion host doesn’t have access to any other resources in your AWS account unless you explicitly grant permissions. For more information, see [Resource-based policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html#policies_resource-based) in the IAM documentation.

**Product versions**
+ AWS CLI version 2
+ Terraform version 1.3.9

## Architecture
<a name="access-a-bastion-host-by-using-session-manager-and-amazon-ec2-instance-connect-architecture"></a>

**Target technology stack**
+ A VPC with a single private subnet
+ The following [interface VPC endpoints](https://docs.aws.amazon.com/vpc/latest/privatelink/create-interface-endpoint.html):
  + `com.amazonaws.<region>.ssm` – The endpoint for the AWS Systems Manager service.
  + `com.amazonaws.<region>.ec2messages` – Systems Manager uses this endpoint to make calls from SSM Agent to the Systems Manager service.
  + `com.amazonaws.<region>.ssmmessages` – Session Manager uses this endpoint to connect to your Amazon EC2 instance through a secure data channel.
+ A `t3.nano` Amazon EC2 instance running Amazon Linux 2
+ IAM role and instance profile
+ Amazon VPC security groups and security group rules for the endpoints and Amazon EC2 instance
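
The three interface endpoints in the stack above could be declared in Terraform roughly as follows. This is a hedged sketch, not code from the pattern's repository: the variable names (`vpc_id`, `region`, `private_subnet_id`, `endpoint_sg_id`) are assumptions for illustration.

```hcl
# Illustrative sketch; variable names are placeholders, not the repository's.
locals {
  endpoint_services = ["ssm", "ec2messages", "ssmmessages"]
}

resource "aws_vpc_endpoint" "ssm" {
  for_each            = toset(local.endpoint_services)
  vpc_id              = var.vpc_id
  service_name        = "com.amazonaws.${var.region}.${each.value}"
  vpc_endpoint_type   = "Interface"
  subnet_ids          = [var.private_subnet_id]
  security_group_ids  = [var.endpoint_sg_id]
  private_dns_enabled = true
}
```

The attached security group must allow inbound HTTPS (port 443) from the bastion host's subnet so that SSM Agent can reach the endpoints.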

**Target architecture**

![\[Architecture diagram of using Session Manager to access a bastion host.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/a02aed20-1852-4c91-902f-f553795006e2/images/819c503b-7eec-4a9c-862b-b87107d50dc1.png)


The diagram shows the following process:

1. The user assumes an IAM role that has permissions to do the following:
   + Authenticate, authorize, and connect to the Amazon EC2 instance
   + Start a session with Session Manager

1. The user initiates an SSH session through Session Manager.

1. Session Manager authenticates the user, verifies the permissions in the associated IAM policies, checks the configuration settings, and sends a message to SSM Agent to open a two-way connection.

1. The user pushes the SSH public key to the bastion host through Amazon EC2 metadata. This must be done before each connection. The SSH public key remains available for 60 seconds.

1. The bastion host communicates with the interface VPC endpoints for Systems Manager and Amazon EC2.

1. The user accesses the bastion host through Session Manager by using a TLS 1.2 encrypted bidirectional communication channel.
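
The connection steps above can be exercised from a shell. The sketch below generates the key pair locally and writes the two AWS-dependent commands to a helper script for review rather than running them, because they require AWS credentials and a deployed bastion host; the instance ID is a placeholder you must replace.

```shell
#!/usr/bin/env sh
# Generate a fresh key pair locally. No passphrase here for brevity;
# the Best practices section recommends protecting keys with a password.
ssh-keygen -t rsa -f my_key -N "" -q

# Write the AWS-dependent steps to a helper script for review.
# INSTANCE_ID is a placeholder; look up the real value first (see the Epics section).
cat > connect-steps.sh <<'EOF'
INSTANCE_ID="i-0123456789abcdef0"   # placeholder

# Push the public key through instance metadata; it stays usable for 60 seconds.
aws ec2-instance-connect send-ssh-public-key \
    --instance-id "$INSTANCE_ID" \
    --instance-os-user ec2-user \
    --ssh-public-key file://my_key.pub

# Connect within 60 seconds; assumes the Session Manager
# ProxyCommand is configured in your SSH config.
ssh -i my_key ec2-user@"$INSTANCE_ID"
EOF
chmod +x connect-steps.sh
```

Review `connect-steps.sh`, replace the placeholder instance ID, and run it after you deploy the resources.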

**Automation and scale**

The following options are available to automate deployment or to scale this architecture:
+ You can deploy the architecture through a continuous integration and continuous delivery (CI/CD) pipeline.
+ You can modify the code to change the instance type of the bastion host.
+ You can modify the code to deploy multiple bastion hosts. In the `bastion-host/main.tf` file, in the `aws_instance` resource block, add the `count` meta-argument. For more information, see the [Terraform documentation](https://developer.hashicorp.com/terraform/language/meta-arguments/count).
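
The `count` change described in the last bullet might look like the following sketch. Treat the resource name and variable as assumptions; check them against the actual `bastion-host/main.tf` in the repository before applying.

```hcl
variable "bastion_count" {
  type    = number
  default = 2
}

resource "aws_instance" "bastion" {
  count = var.bastion_count  # deploys multiple identical bastion hosts
  # ... keep the existing arguments from bastion-host/main.tf unchanged ...
}
```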

## Tools
<a name="access-a-bastion-host-by-using-session-manager-and-amazon-ec2-instance-connect-tools"></a>

**AWS services**
+ [AWS Command Line Interface (AWS CLI)](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) is an open source tool that helps you interact with AWS services through commands in your command-line shell.
+ [Amazon Elastic Compute Cloud (Amazon EC2)](https://docs.aws.amazon.com/ec2/) provides scalable computing capacity in the AWS Cloud. You can launch as many virtual servers as you need and quickly scale them up or down.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.
+ [AWS Systems Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/what-is-systems-manager.html) helps you manage your applications and infrastructure running in the AWS Cloud. It simplifies application and resource management, shortens the time to detect and resolve operational problems, and helps you manage your AWS resources securely at scale. This pattern uses [Session Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager.html), a capability of Systems Manager.
+ [Amazon Virtual Private Cloud (Amazon VPC)](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html) helps you launch AWS resources into a virtual network that you’ve defined. This virtual network resembles a traditional network that you’d operate in your own data center, with the benefits of using the scalable infrastructure of AWS.

**Other tools**
+ [HashiCorp Terraform](https://www.terraform.io/docs) is an infrastructure as code (IaC) tool that helps you use code to provision and manage cloud infrastructure and resources. This pattern uses [Terraform CLI](https://developer.hashicorp.com/terraform/cli).

**Code repository**

The code for this pattern is available in the GitHub [Access a bastion host by using Session Manager and Amazon EC2 Instance Connect](https://github.com/aws-samples/secured-bastion-host-terraform) repository.

## Best practices
<a name="access-a-bastion-host-by-using-session-manager-and-amazon-ec2-instance-connect-best-practices"></a>
+ We recommend using automated code-scanning tools to improve the security and quality of the code. This pattern was scanned by using [Checkov](https://www.checkov.io/), a static code-analysis tool for IaC. At a minimum, we recommend that you perform basic validation and formatting checks by using the `terraform validate` and `terraform fmt -check -recursive` Terraform commands.
+ It’s a good practice to add automated tests for IaC. For more information about the different approaches for testing Terraform code, see [Testing HashiCorp Terraform](https://www.hashicorp.com/blog/testing-hashicorp-terraform) (Terraform blog post).
+ During deployment, Terraform replaces the Amazon EC2 instance each time a new version of the [Amazon Linux 2 AMI](https://aws.amazon.com/marketplace/pp/prodview-zc4x2k7vt6rpu?sr=0-1&ref_=beagle&applicationId=AWSMPContessa) is detected. This deploys the new version of the operating system, including patches and upgrades. If the deployment schedule is infrequent, this can pose a security risk because the instance doesn’t have the latest patches. It is important to frequently update and apply security patches to deployed Amazon EC2 instances. For more information, see [Update management in Amazon EC2](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/update-management.html).
+ Because this pattern is a proof of concept, it uses AWS managed policies, such as `AmazonSSMManagedInstanceCore`. AWS managed policies cover common use cases but don't grant least-privilege permissions. As needed for your use case, we recommend that you create custom policies that grant least-privilege permissions for the resources deployed in this architecture. For more information, see [Get started with AWS managed policies and move toward least-privilege permissions](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#bp-use-aws-defined-policies).
+ Use a password to protect access to SSH keys and store keys in a secure location.
+ Set up logging and monitoring for the bastion host. Logging and monitoring are important parts of maintaining systems, from both an operational and security perspective. There are multiple ways to monitor connections and activity in your bastion host. For more information, see the following topics in the Systems Manager documentation:
  + [Monitoring AWS Systems Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/monitoring.html)
  + [Logging and monitoring in AWS Systems Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/logging-and-monitoring.html)
  + [Auditing session activity](https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-auditing.html)
  + [Logging session activity](https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-logging.html)

## Epics
<a name="access-a-bastion-host-by-using-session-manager-and-amazon-ec2-instance-connect-epics"></a>

### Deploy the resources
<a name="deploy-the-resources"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clone the code repository. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-a-bastion-host-by-using-session-manager-and-amazon-ec2-instance-connect.html) | DevOps engineer, Developer | 
| Initialize the Terraform working directory. | This step is necessary only for the first deployment. If you are redeploying the pattern, skip to the next step. In the root directory of the cloned repository, enter the following command, where:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-a-bastion-host-by-using-session-manager-and-amazon-ec2-instance-connect.html)<pre>terraform init \<br />    -backend-config="bucket=$S3_STATE_BUCKET" \<br />    -backend-config="key=$PATH_TO_STATE_FILE" \<br />    -backend-config="region=$AWS_REGION"</pre>Alternatively, you can open the **config.tf** file and, in the `terraform` section, manually provide these values. | DevOps engineer, Developer, Terraform | 
| Deploy the resources. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-a-bastion-host-by-using-session-manager-and-amazon-ec2-instance-connect.html) | DevOps engineer, Developer, Terraform | 

### Set up the local environment
<a name="set-up-the-local-environment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Configure the SSH connection. | Update the SSH configuration file to allow SSH connections through Session Manager. For instructions, see [Allowing SSH connections for Session Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-getting-started-enable-ssh-connections.html#ssh-connections-enable). This allows authorized users to enter a proxy command that starts a Session Manager session and transfers all data through a two-way connection. | DevOps engineer | 
| Generate the SSH keys. | Enter the following command to generate a local private and public SSH key pair. You use this key pair to connect to the bastion host.<pre>ssh-keygen -t rsa -f my_key</pre> | DevOps engineer, Developer | 
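
The SSH configuration change in the first task above is a short stanza in your `~/.ssh/config` file. The following stanza is the one given in the Session Manager documentation; it routes SSH connections to any instance ID (`i-*` or `mi-*`) through a Session Manager session.

```
# SSH over Session Manager
host i-* mi-*
    ProxyCommand sh -c "aws ssm start-session --target %h --document-name AWS-StartSSHSession --parameters 'portNumber=%p'"
```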

### Connect to the bastion host by using Session Manager
<a name="connect-to-the-bastion-host-by-using-sesh"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Get the instance ID. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-a-bastion-host-by-using-session-manager-and-amazon-ec2-instance-connect.html) | General AWS | 
| Send the SSH public key. | In this section, you upload the public key to the [instance metadata](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html) of the bastion host. After the key is uploaded, you have 60 seconds to start a connection with the bastion host. After 60 seconds, the public key is removed. For more information, see the [Troubleshooting](#access-a-bastion-host-by-using-session-manager-and-amazon-ec2-instance-connect-troubleshooting) section of this pattern. Complete the next steps quickly to prevent the key from being removed before you connect to the bastion host.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-a-bastion-host-by-using-session-manager-and-amazon-ec2-instance-connect.html) | General AWS | 
| Connect to the bastion host. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-a-bastion-host-by-using-session-manager-and-amazon-ec2-instance-connect.html)There are other options for opening an SSH connection with the bastion host. For more information, see *Alternative approaches to establish an SSH connection with the bastion host* in the [Additional information](#access-a-bastion-host-by-using-session-manager-and-amazon-ec2-instance-connect-additional) section of this pattern. | General AWS | 

### (Optional) Clean up
<a name="optional-clean-up"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Remove the deployed resources. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-a-bastion-host-by-using-session-manager-and-amazon-ec2-instance-connect.html) | DevOps engineer, Developer, Terraform | 

## Troubleshooting
<a name="access-a-bastion-host-by-using-session-manager-and-amazon-ec2-instance-connect-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| `TargetNotConnected` error when trying to connect to the bastion host | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-a-bastion-host-by-using-session-manager-and-amazon-ec2-instance-connect.html) | 
| `Permission denied` error when trying to connect to the bastion host | After the public key is uploaded to the bastion host, you have only 60 seconds to start the connection. After 60 seconds, the key is automatically removed, and you can’t use it to connect to the instance. If this occurs, you can repeat the step to resend the key to the instance. | 

## Related resources
<a name="access-a-bastion-host-by-using-session-manager-and-amazon-ec2-instance-connect-resources"></a>

**AWS documentation**
+ [AWS Systems Manager Session Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager.html) (Systems Manager documentation)
+ [Install the Session Manager plugin for the AWS CLI](https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-working-with-install-plugin.html) (Systems Manager documentation)
+ [Allowing SSH connections for Session Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-getting-started-enable-ssh-connections.html#ssh-connections-enable) (Systems Manager documentation)
+ [About using EC2 Instance Connect](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Connect-using-EC2-Instance-Connect.html) (Amazon EC2 documentation)
+ [Connect using EC2 Instance Connect](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-connect-methods.html) (Amazon EC2 documentation)
+ [Identity and access management for Amazon EC2](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/security-iam.html) (Amazon EC2 documentation)
+ [Using an IAM role to grant permissions to applications running on Amazon EC2 instances](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2.html) (IAM documentation)
+ [Security best practices in IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html) (IAM documentation)
+ [Control traffic to resources using security groups](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-security-groups.html) (Amazon VPC documentation)

**Other resources**
+ [Terraform Developer webpage](https://developer.hashicorp.com/terraform)
+ [Command: validate](https://developer.hashicorp.com/terraform/cli/commands/validate) (Terraform documentation)
+ [Command: fmt](https://developer.hashicorp.com/terraform/cli/commands/fmt) (Terraform documentation)
+ [Testing HashiCorp Terraform](https://www.hashicorp.com/blog/testing-hashicorp-terraform) (HashiCorp blog post)
+ [Checkov webpage](https://www.checkov.io/)

## Additional information
<a name="access-a-bastion-host-by-using-session-manager-and-amazon-ec2-instance-connect-additional"></a>

**Alternative approaches to establish an SSH connection with the bastion host**

*Port forwarding*

You can use the `-D 8888` option to open an SSH connection with dynamic port forwarding. For more information, see the [instructions](https://explainshell.com/explain?cmd=ssh+-i+%24PRIVATE_KEY_FILE+-D+8888+ec2-user%40%24INSTANCE_ID) at explainshell.com. The following is an example of a command to open an SSH connection by using port forwarding.

```
ssh -i $PRIVATE_KEY_FILE -D 8888 ec2-user@$INSTANCE_ID
```

This kind of connection opens a SOCKS proxy that can forward traffic from your local browser through the bastion host. If you are using Linux or macOS, enter `man ssh` to see all available options in the SSH reference manual.

*Using the provided script*

Instead of manually running the steps described in *Connect to the bastion host by using Session Manager* in the [Epics](#access-a-bastion-host-by-using-session-manager-and-amazon-ec2-instance-connect-epics) section, you can use the **connect.sh** script included in the code repository. This script generates the SSH key pair, pushes the public key to the Amazon EC2 instance, and initiates a connection with the bastion host. When you run the script, you pass the tag and key name as arguments. The following is an example of the command to run the script.

```
./connect.sh sandbox-dev-bastion-host my_key
```

# Centralize DNS resolution by using AWS Managed Microsoft AD and on-premises Microsoft Active Directory
<a name="centralize-dns-resolution-by-using-aws-managed-microsoft-ad-and-on-premises-microsoft-active-directory"></a>

*Brian Westmoreland, Amazon Web Services*

## Summary
<a name="centralize-dns-resolution-by-using-aws-managed-microsoft-ad-and-on-premises-microsoft-active-directory-summary"></a>

This pattern provides guidance for centralizing DNS resolution within an AWS multi-account environment by using both AWS Directory Service for Microsoft Active Directory (AWS Managed Microsoft AD) and Amazon Route 53. In this pattern, the AWS DNS namespace is a subdomain of the on-premises DNS namespace. This pattern also provides guidance on how to configure the on-premises DNS servers to forward queries to AWS when the on-premises DNS solution uses Microsoft Active Directory.

## Prerequisites and limitations
<a name="centralize-dns-resolution-by-using-aws-managed-microsoft-ad-and-on-premises-microsoft-active-directory-prereqs"></a>

**Prerequisites**
+ An AWS multi-account environment set up by using AWS Organizations.
+ Network connectivity established between AWS accounts.
+ Network connectivity established between AWS and the on-premises environment (by using AWS Direct Connect or any type of VPN connection).
+ AWS Command Line Interface (AWS CLI) configured on a local workstation.
+ AWS Resource Access Manager (AWS RAM) used to share Route 53 rules between accounts. Therefore, sharing must be enabled within the AWS Organizations environment, as described in the [Epics](#centralize-dns-resolution-by-using-aws-managed-microsoft-ad-and-on-premises-microsoft-active-directory-epics) section.

**Limitations**
+ AWS Managed Microsoft AD Standard Edition has a limit of 5 shares.
+ AWS Managed Microsoft AD Enterprise Edition has a limit of 125 shares.
+ The solution in this pattern is limited to AWS Regions that support sharing through AWS RAM.

**Product versions**
+ Microsoft Active Directory running on Windows Server 2008, 2012, 2012 R2, or 2016.

## Architecture
<a name="centralize-dns-resolution-by-using-aws-managed-microsoft-ad-and-on-premises-microsoft-active-directory-architecture"></a>

**Target architecture**

![\[Architecture for centralized DNS resolution on AWS.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/91430e2a-f7f6-4dbe-9fe7-8abed1f764a7/images/9b5fc51d-590b-468f-80f7-1949f3b3b258.png)


In this design, AWS Managed Microsoft AD is installed in the shared services AWS account. Although it is not a requirement, this pattern assumes this configuration. If you configure AWS Managed Microsoft AD in a different AWS account, you might have to modify the steps in the [Epics](#centralize-dns-resolution-by-using-aws-managed-microsoft-ad-and-on-premises-microsoft-active-directory-epics) section accordingly.

This design uses Route 53 Resolver to support name resolution through Route 53 rules. If the on-premises DNS solution uses Microsoft DNS, creating a conditional forwarding rule for the AWS namespace (`aws.company.com`), which is a subdomain of the company DNS namespace (`company.com`), is not straightforward. If you try to create a traditional conditional forwarder, it will result in an error. This is because Microsoft Active Directory is already considered authoritative for any subdomain of `company.com`. To get around this error, you must first create a delegation for `aws.company.com` to delegate authority of that namespace. You can then create the conditional forwarder.

The virtual private cloud (VPC) for each spoke account can have its own unique DNS namespace based on the root AWS namespace. In this design, each spoke account appends an abbreviation of the account name to the base AWS namespace. After the private hosted zones in the spoke account have been created, the zones are associated with the local VPC in the spoke account as well as with the VPC in the central AWS network account. This enables the central AWS network account to answer DNS queries related to the spoke accounts. This way, both Route 53 and AWS Managed Microsoft AD work together to share the responsibility of managing the AWS namespace (`aws.company.com`).
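
Associating a spoke account's private hosted zone with the VPC in the central network account is a cross-account operation: the zone-owning account first authorizes the association, then the VPC-owning account completes it. The sketch below writes that AWS CLI sequence to a helper script for review rather than running it, because each command requires credentials for a different account; the zone ID, VPC ID, and Region are placeholders.

```shell
#!/usr/bin/env sh
# Write the cross-account association sequence to a helper script for review.
# All identifiers below are placeholders; run each command from the account
# noted in its comment.
cat > r53-association.sh <<'EOF'
HOSTED_ZONE_ID="Z0123456789EXAMPLE"    # placeholder: spoke account's private hosted zone
NETWORK_VPC_ID="vpc-0abc123def456789"  # placeholder: central network account's VPC
REGION="us-east-1"                     # placeholder

# 1. From the spoke account (zone owner): authorize the association.
aws route53 create-vpc-association-authorization \
    --hosted-zone-id "$HOSTED_ZONE_ID" \
    --vpc VPCRegion="$REGION",VPCId="$NETWORK_VPC_ID"

# 2. From the network account (VPC owner): associate its VPC with the zone.
aws route53 associate-vpc-with-hosted-zone \
    --hosted-zone-id "$HOSTED_ZONE_ID" \
    --vpc VPCRegion="$REGION",VPCId="$NETWORK_VPC_ID"

# 3. From the spoke account: remove the authorization once the association exists.
aws route53 delete-vpc-association-authorization \
    --hosted-zone-id "$HOSTED_ZONE_ID" \
    --vpc VPCRegion="$REGION",VPCId="$NETWORK_VPC_ID"
EOF
chmod +x r53-association.sh
```

Repeat the sequence for each spoke account's private hosted zone so that the central network account can answer DNS queries for all of them.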

**Automation and scale**

This design uses Route 53 Resolver endpoints to scale DNS queries between AWS and your on-premises environment. Each Route 53 Resolver endpoint comprises multiple elastic network interfaces (spread across multiple Availability Zones), and each network interface can handle up to 10,000 queries per second. Route 53 Resolver supports up to 6 IP addresses per endpoint, so altogether this design supports up to 60,000 DNS queries per second spread across multiple Availability Zones for high availability.  

Additionally, this pattern automatically accounts for future growth within AWS. The DNS forwarding rules configured on premises do not have to be modified to support new VPCs and their associated private hosted zones that are added to AWS. 

## Tools
<a name="centralize-dns-resolution-by-using-aws-managed-microsoft-ad-and-on-premises-microsoft-active-directory-tools"></a>

**AWS services**
+ [AWS Directory Service for Microsoft Active Directory](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/directory_microsoft_ad.html) enables your directory-aware workloads and AWS resources to use Microsoft Active Directory in the AWS Cloud.
+ [AWS Organizations](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_introduction.html) is an account management service that helps you consolidate multiple AWS accounts into an organization that you create and centrally manage.
+ [AWS Resource Access Manager (AWS RAM)](https://docs.aws.amazon.com/ram/latest/userguide/what-is.html) helps you securely share your resources across AWS accounts to reduce operational overhead and provide visibility and auditability.
+ [Amazon Route 53](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/Welcome.html) is a highly available and scalable DNS web service.

**Tools**
+ [AWS Command Line Interface (AWS CLI)](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) is an open-source tool that helps you interact with AWS services through commands in your command-line shell. In this pattern, the AWS CLI is used to configure Route 53 authorizations.

## Epics
<a name="centralize-dns-resolution-by-using-aws-managed-microsoft-ad-and-on-premises-microsoft-active-directory-epics"></a>

### Create and share an AWS Managed Microsoft AD directory
<a name="create-and-share-an-managed-ad-directory"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Deploy AWS Managed Microsoft AD. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/centralize-dns-resolution-by-using-aws-managed-microsoft-ad-and-on-premises-microsoft-active-directory.html) | AWS administrator | 
| Share the directory. | After the directory has been built, share it with other AWS accounts in the AWS organization. For instructions, see [Share your directory](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/step2_share_directory.html) in the *AWS Directory Service Administration Guide*.  AWS Managed Microsoft AD Standard Edition has a limit of 5 shares. Enterprise Edition has a limit of 125 shares. | AWS administrator | 
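If you script the sharing step across many spoke accounts, it can help to enforce the share quotas above before issuing the `aws ds share-directory` calls. A minimal sketch, where the edition names and quota values come from this section and the account IDs are placeholders:

```python
# Share quotas for AWS Managed Microsoft AD, per this section.
SHARE_LIMITS = {"Standard": 5, "Enterprise": 125}

def plan_directory_shares(edition: str, spoke_account_ids: list[str]) -> list[str]:
    """Return the spoke accounts if they fit within the edition's share quota."""
    limit = SHARE_LIMITS[edition]
    if len(spoke_account_ids) > limit:
        raise ValueError(
            f"{edition} Edition allows {limit} shares; "
            f"{len(spoke_account_ids)} requested"
        )
    return spoke_account_ids

# Each returned account would then receive one `aws ds share-directory` call.
print(plan_directory_shares("Standard", ["111111111111", "222222222222"]))
```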

### Configure Route 53
<a name="configure-r53"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create Route 53 Resolvers. | Route 53 Resolvers facilitate DNS query resolution between AWS and the on-premises data center.  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/centralize-dns-resolution-by-using-aws-managed-microsoft-ad-and-on-premises-microsoft-active-directory.html)Although using the central AWS network account VPC isn’t a requirement, the remaining steps assume this configuration. | AWS administrator | 
| Create Route 53 rules. | Your specific use case might require a large number of Route 53 rules, but you will need to configure the following rules as a baseline:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/centralize-dns-resolution-by-using-aws-managed-microsoft-ad-and-on-premises-microsoft-active-directory.html)For more information, see [Managing forwarding rules](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resolver-rules-managing.html) in the *Route 53 Developer Guide*. | AWS administrator | 
| Configure a Route 53 Profile. | A Route 53 Profile is used to share the rules with spoke accounts.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/centralize-dns-resolution-by-using-aws-managed-microsoft-ad-and-on-premises-microsoft-active-directory.html) | AWS administrator | 
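The baseline forwarding rules in this epic can also be created with the AWS CLI (`aws route53resolver create-resolver-rule`). A sketch that assembles the command for a FORWARD rule pointing `company.com` at on-premises DNS servers; the endpoint ID and target IPs are placeholders:

```python
def create_resolver_rule_cmd(domain, endpoint_id, target_ips, port=53):
    """Build an `aws route53resolver create-resolver-rule` argv list."""
    cmd = [
        "aws", "route53resolver", "create-resolver-rule",
        "--creator-request-id", f"fwd-{domain}",
        "--rule-type", "FORWARD",
        "--domain-name", domain,
        "--resolver-endpoint-id", endpoint_id,
        "--target-ips",
    ]
    cmd += [f"Ip={ip},Port={port}" for ip in target_ips]
    return cmd

# Baseline rule: forward the corporate namespace to on-premises DNS.
cmd = create_resolver_rule_cmd(
    "company.com", "rslvr-out-EXAMPLE", ["10.0.0.10", "10.0.0.11"])
print(" ".join(cmd))
# Run with subprocess.run(cmd) once credentials are in place.
```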

### Configure on-premises Active Directory DNS
<a name="configure-on-premises-active-directory-dns"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the delegation. | Use the Microsoft DNS snap-in (`dnsmgmt.msc`) to create a new delegation for the `company.com` namespace within Active Directory. The name of the delegated domain should be `aws`. This makes the fully qualified domain name (FQDN) of the delegation `aws.company.com`. Use the IP addresses of the AWS Managed Microsoft AD domain controllers for the name server IP values, and use `server.aws.company.com` for the name. (This delegation is only for redundancy, because a conditional forwarder will be created for this namespace that takes precedence over the delegation.) | Active Directory | 
| Create the conditional forwarder. | Use the Microsoft DNS snap-in (`dnsmgmt.msc`) to create a new conditional forwarder for `aws.company.com`.  Use the IP addresses of the AWS inbound Route 53 Resolvers in the central DNS AWS account for the target of the conditional forwarder.   | Active Directory | 

### Create Route 53 private hosted zones for spoke AWS accounts
<a name="create-r53-private-hosted-zones-for-spoke-aws-accounts"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the Route 53 private hosted zones. | Create a Route 53 private hosted zone in each spoke account. Associate this private hosted zone with the spoke account VPC. For detailed steps, see [Creating a private hosted zone](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/hosted-zone-private-creating.html) in the *Route 53 Developer Guide*. | AWS administrator | 
| Create authorizations. | Use the AWS CLI to create an authorization for the central AWS network account VPC. Run this command from the context of each spoke AWS account:<pre>aws route53 create-vpc-association-authorization --hosted-zone-id <hosted-zone-id> \<br />   --vpc VPCRegion=<region>,VPCId=<vpc-id></pre>where:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/centralize-dns-resolution-by-using-aws-managed-microsoft-ad-and-on-premises-microsoft-active-directory.html) | AWS administrator | 
| Create associations. | Create the Route 53 private hosted zone association for the central AWS network account VPC by using the AWS CLI. Run this command from the context of the central AWS network account:<pre>aws route53 associate-vpc-with-hosted-zone --hosted-zone-id <hosted-zone-id> \<br />   --vpc VPCRegion=<region>,VPCId=<vpc-id></pre>where:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/centralize-dns-resolution-by-using-aws-managed-microsoft-ad-and-on-premises-microsoft-active-directory.html) | AWS administrator | 
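The authorization and association steps above form a two-step handshake per spoke account: the spoke account authorizes, then the central network account associates. A sketch that builds both AWS CLI invocations (the CLI profile names are placeholders for however you switch account context):

```python
def phz_association_cmds(hosted_zone_id, region, central_vpc_id,
                         spoke_profile, network_profile):
    """Build the authorize (spoke account) and associate (network account)
    AWS CLI calls for one private hosted zone."""
    vpc_arg = f"VPCRegion={region},VPCId={central_vpc_id}"
    authorize = [
        "aws", "route53", "create-vpc-association-authorization",
        "--hosted-zone-id", hosted_zone_id, "--vpc", vpc_arg,
        "--profile", spoke_profile,    # run in the spoke account
    ]
    associate = [
        "aws", "route53", "associate-vpc-with-hosted-zone",
        "--hosted-zone-id", hosted_zone_id, "--vpc", vpc_arg,
        "--profile", network_profile,  # run in the central network account
    ]
    return authorize, associate

auth, assoc = phz_association_cmds(
    "Z0EXAMPLE", "us-east-1", "vpc-0abc123", "spoke-a", "network")
print(" ".join(auth))
print(" ".join(assoc))
```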

## Related resources
<a name="centralize-dns-resolution-by-using-aws-managed-microsoft-ad-and-on-premises-microsoft-active-directory-resources"></a>
+ [Simplify DNS management in a multi-account environment with Route 53 Resolver](https://aws.amazon.com/blogs/security/simplify-dns-management-in-a-multiaccount-environment-with-route-53-resolver/) (AWS blog post)
+ [Creating your AWS Managed Microsoft AD](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/ms_ad_getting_started_create_directory.html) (AWS Directory Service documentation)
+ [Sharing an AWS Managed Microsoft AD directory](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/step2_share_directory.html) (AWS Directory Service documentation)
+ [What is Amazon Route 53 Resolver?](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resolver.html) (Amazon Route 53 documentation)
+ [Creating a private hosted zone](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/hosted-zone-private-creating.html) (Amazon Route 53 documentation)
+ [What are Amazon Route 53 Profiles?](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/profiles.html) (Amazon Route 53 documentation)

# Centralize monitoring by using Amazon CloudWatch Observability Access Manager
<a name="centralize-monitoring-by-using-amazon-cloudwatch-observability-access-manager"></a>

*Anand Krishna Varanasi, JAGDISH KOMAKULA, Ashish Kumar, Jimmy Morgan, Sarat Chandra Pothula, Vivek Thangamuthu, and Balaji Vedagiri, Amazon Web Services*

## Summary
<a name="centralize-monitoring-by-using-amazon-cloudwatch-observability-access-manager-summary"></a>

Observability is crucial to monitoring, understanding, and troubleshooting applications. Applications that span multiple accounts, as in AWS Control Tower or landing zone implementations, generate a large volume of log and trace data. To troubleshoot problems quickly, or to understand user or business analytics, you need a common observability platform across all accounts. Amazon CloudWatch Observability Access Manager gives you access to, and control over, logs from multiple accounts in a central location.

You can use the Observability Access Manager to view and manage observability data logs generated by source accounts. Source accounts are individual AWS accounts that generate observability data for their resources. Observability data is shared between source accounts and monitoring accounts. The shared observability data can include metrics in Amazon CloudWatch, logs in Amazon CloudWatch Logs, and traces in AWS X-Ray. For more information, see the [Observability Access Manager documentation](https://docs.aws.amazon.com/OAM/latest/APIReference/Welcome.html).

This pattern is for users who have applications or infrastructure that run in multiple AWS accounts and need a common place to view logs. It explains how you can set up Observability Access Manager by using Terraform, to monitor the status and health of these applications or infrastructure. You can install this solution in multiple ways:
+ As a standalone Terraform module that you set up manually
+ By using a continuous integration and continuous delivery (CI/CD) pipeline
+ By integrating with other solutions such as [AWS Control Tower Account Factory for Terraform (AFT)](https://docs.aws.amazon.com/controltower/latest/userguide/aft-overview.html)

The instructions in the [Epics](#centralize-monitoring-by-using-amazon-cloudwatch-observability-access-manager-epics) section cover the manual implementation. For AFT installation steps, see the README file for the GitHub [Observability Access Manager](https://github.com/aws-samples/cloudwatch-obervability-access-manager-terraform) repository.

## Prerequisites and limitations
<a name="centralize-monitoring-by-using-amazon-cloudwatch-observability-access-manager-prereqs"></a>

**Prerequisites**
+ [Terraform](https://www.terraform.io/) installed or referenced in your system or in automated pipelines. (We recommend that you use the [latest version](https://releases.hashicorp.com/terraform/).)
+ An account that you can use as a central monitoring account. Other accounts create links to the central monitoring account in order to view logs.
+ (Optional) A source code repository such as GitHub, AWS CodeCommit, Atlassian Bitbucket, or similar system. A source code repository isn’t necessary if you’re using automated CI/CD pipelines.
+ (Optional) Permissions to create pull requests (PRs) for code review and code collaboration in GitHub.

**Limitations**

Observability Access Manager has the following service quotas, which cannot be changed. Consider these quotas before you deploy this feature. For more information, see [CloudWatch service quotas](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch_limits.html) in the CloudWatch documentation.
+ **Source account links**: You can link each source account to a maximum of five monitoring accounts.
+ **Sinks**: You can build multiple sinks for an account, but only one sink per AWS Region is allowed.

In addition:
+ Sinks and links must be created in the same AWS Region; they cannot be cross-Region.

**Cross-Region and cross-account monitoring**

For cross-Region, cross-account monitoring, you can choose one of these options:
+ Create [cross-account and cross-Region CloudWatch dashboards](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Cross-Account-Cross-Region.html) for alarms and metrics. This option doesn’t support logs and traces.
+ Implement [centralized logging](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Cross-Account-Cross-Region.html) by using Amazon OpenSearch Service.
+ Create one sink per Region from all tenant accounts, push metrics to a centralized monitoring account (as described in this pattern), and then use [CloudWatch metric streams](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Metric-Streams.html) to send the data to a common external destination or to third-party monitoring products such as Datadog, Dynatrace, Sumo Logic, Splunk, or New Relic.

## Architecture
<a name="centralize-monitoring-by-using-amazon-cloudwatch-observability-access-manager-architecture"></a>

**Components**

CloudWatch Observability Access Manager consists of two major components that enable cross-account observability:
+ A *sink* enables source accounts to send observability data to the central monitoring account. The sink acts as a gateway that source accounts connect to; the monitoring account has one sink per Region, and multiple source accounts can connect to it.
+ Each source account has a *link* to the sink, and observability data is sent through this link. You must create the sink before you create links from the source accounts.
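The sink-before-links ordering shows up in the underlying API calls. A sketch that builds the equivalent AWS CLI invocations (the names and sink ARN are placeholders; in this pattern the Terraform modules issue these calls for you):

```python
def oam_setup_cmds(sink_name, label, sink_arn):
    """Build the create-sink call (monitoring account) and a create-link
    call (source account); the sink must exist before any link."""
    create_sink = ["aws", "oam", "create-sink", "--name", sink_name]
    create_link = [
        "aws", "oam", "create-link",
        "--label-template", label,
        "--resource-types", "AWS::CloudWatch::Metric",
        "AWS::Logs::LogGroup", "AWS::XRay::Trace",
        "--sink-identifier", sink_arn,  # ARN returned by create-sink
    ]
    return create_sink, create_link

sink_cmd, link_cmd = oam_setup_cmds(
    "central-monitoring-sink", "$AccountName",
    "arn:aws:oam:us-east-1:111111111111:sink/EXAMPLE")
print(" ".join(sink_cmd))
print(" ".join(link_cmd))
```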

**Architecture**

The following diagram illustrates Observability Access Manager and its components.

![\[Architecture for cross-account observability with sinks and links.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/00603763-4f99-456e-85e7-a80d803b087d/images/5188caf9-348b-4d91-b560-2b3d6ea81191.png)


## Tools
<a name="centralize-monitoring-by-using-amazon-cloudwatch-observability-access-manager-tools"></a>

**AWS services**
+ [Amazon CloudWatch](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/WhatIsCloudWatch.html) helps you monitor the metrics of your AWS resources and the applications you run on AWS in real time.
+ [AWS Organizations](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_introduction.html) is an account management service that helps you consolidate multiple AWS accounts into an organization that you create and centrally manage.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.

**Tools**
+ [Terraform](https://www.terraform.io/) is an infrastructure as code (IaC) tool from HashiCorp that helps you create and manage cloud and on-premises resources.
+ [AWS Control Tower Account Factory for Terraform (AFT)](https://docs.aws.amazon.com/controltower/latest/userguide/aft-overview.html) sets up a Terraform pipeline to help you provision and customize accounts in AWS Control Tower. You can optionally use AFT to set up Observability Access Manager at scale across multiple accounts.

**Code repository**

The code for this pattern is available in the GitHub [Observability Access Manager](https://github.com/aws-samples/cloudwatch-obervability-access-manager-terraform) repository.

## Best practices
<a name="centralize-monitoring-by-using-amazon-cloudwatch-observability-access-manager-best-practices"></a>
+ In AWS Control Tower environments, mark the logging account as the central monitoring account (sink).
+ If your sink configuration policy needs to cover many accounts across AWS Organizations, include the organizations themselves in the policy instead of listing individual accounts. Include individual accounts only when you have a small number of them or when they aren’t part of an organization.
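The recommendation to scope the sink policy to organizations rather than individual accounts corresponds to a sink policy along these lines. This is a sketch whose statement shape follows the CloudWatch cross-account observability documentation; the organization ID is a placeholder:

```python
import json

def sink_policy(org_id: str) -> str:
    """Sink policy allowing any account in the organization to link."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": "*",
            "Resource": "*",
            "Action": ["oam:CreateLink", "oam:UpdateLink"],
            "Condition": {
                # Admit the whole organization instead of listing accounts.
                "StringEquals": {"aws:PrincipalOrgID": org_id},
                "ForAllValues:StringEquals": {
                    "oam:ResourceTypes": [
                        "AWS::CloudWatch::Metric",
                        "AWS::Logs::LogGroup",
                        "AWS::XRay::Trace",
                    ]
                },
            },
        }],
    }
    return json.dumps(policy, indent=2)

print(sink_policy("o-exampleorgid"))
```

This document is attached to the sink with `PutSinkPolicy` (or the corresponding Terraform resource).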

## Epics
<a name="centralize-monitoring-by-using-amazon-cloudwatch-observability-access-manager-epics"></a>

### Set up the sink module
<a name="set-up-the-sink-module"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clone the repository. | Clone the GitHub Observability Access Manager repository:<pre>git clone https://github.com/aws-samples/cloudwatch-obervability-access-manager-terraform</pre> | AWS DevOps, Cloud administrator, AWS administrator | 
| Specify property values for the sink module. | In the `main.tf` file (in the `deployments/aft-account-customizations/LOGGING/terraform/` folder of the repository), specify values for the following properties:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/centralize-monitoring-by-using-amazon-cloudwatch-observability-access-manager.html)For more information, see [AWS::Oam::Sink](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-oam-sink.html) in the CloudFormation documentation. | AWS DevOps, Cloud administrator, AWS administrator | 
| Install the sink module. | Export the credentials of the AWS account that you have selected as the monitoring account, and install the Observability Access Manager sink module:<pre>terraform init<br />terraform plan<br />terraform apply</pre> | AWS DevOps, Cloud administrator, AWS administrator | 

### Set up the link module
<a name="set-up-the-link-module"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Specify property values for the link module. | In the `main.tf` file (in the `deployments/aft-account-customizations/LOGGING/terraform/` folder of the repository), specify values for the following properties:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/centralize-monitoring-by-using-amazon-cloudwatch-observability-access-manager.html)For more information, see [AWS::Oam::Link](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-oam-link.html) in the CloudFormation documentation. | AWS DevOps, Cloud administrator, Cloud architect | 
| Install the link module for individual accounts. | Export the credentials of individual accounts and install the Observability Access Manager link module:<pre>terraform plan<br />terraform apply</pre>You can set up the link module individually for each account, or use [AFT](https://docs.aws.amazon.com/controltower/latest/userguide/aft-overview.html) to automatically install this module across a large number of accounts. | AWS DevOps, Cloud administrator, Cloud architect | 

### Approve sink-to-link connections
<a name="approve-sink-to-link-connections"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Check the status message. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/centralize-monitoring-by-using-amazon-cloudwatch-observability-access-manager.html)On the right, you should see the status message **Monitoring account enabled** with a green checkmark. This means that the monitoring account has an Observability Access Manager sink that the links of other accounts will connect to. |  | 
| Approve the link-to-sink connections. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/centralize-monitoring-by-using-amazon-cloudwatch-observability-access-manager.html)For more information, see [Link monitoring accounts with source accounts](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Unified-Cross-Account-Setup.html) in the CloudWatch documentation. | AWS DevOps, Cloud administrator, Cloud architect | 

### Verify cross-account observability data
<a name="verify-cross-account-observability-data"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| View cross-account data. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/centralize-monitoring-by-using-amazon-cloudwatch-observability-access-manager.html) | AWS DevOps, Cloud administrator, Cloud architect | 

### (Optional) Enable source accounts to trust monitoring account
<a name="optional-enable-source-accounts-to-trust-monitoring-account"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| View metrics, dashboards, logs, widgets, and alarms from other accounts. | As an additional feature, you can share the CloudWatch metrics, dashboards, logs, widgets, and alarms with other accounts. Each account uses an IAM role called **CloudWatch-CrossAccountSharingRole** to gain access to this data. Source accounts that have a trust relationship with the central monitoring account can assume this role and view data from the monitoring account. CloudWatch provides a sample CloudFormation script to create the role. Choose **Manage role in IAM** and run this script in the accounts where you want to view data.<pre>{<br />    "Version": "2012-10-17",<br />    "Statement": [<br />        {<br />            "Effect": "Allow",<br />            "Principal": {<br />                "AWS": [<br />                    "arn:aws:iam::XXXXXXXXX:root",<br />                    "arn:aws:iam::XXXXXXXXX:root",<br />                    "arn:aws:iam::XXXXXXXXX:root",<br />                    "arn:aws:iam::XXXXXXXXX:root"<br />                ]<br />            },<br />            "Action": "sts:AssumeRole"<br />        }<br />    ]<br />}</pre>For more information, see [Enabling cross-account functionality in CloudWatch](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Cross-Account-Cross-Region.html#enable-cross-account-cross-Region) in the CloudWatch documentation. | AWS DevOps, Cloud administrator, Cloud architect | 

### (Optional) View cross-account cross-Region from the monitoring account
<a name="optional-view-cross-account-cross-region-from-the-monitoring-account"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Set up cross-account, cross-Region access. | In the central monitoring account, you can optionally add an account selector to easily switch between accounts and view their data without having to authenticate.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/centralize-monitoring-by-using-amazon-cloudwatch-observability-access-manager.html)For more information, see [Cross-account cross-Region CloudWatch console](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Cross-Account-Cross-Region.html) in the CloudWatch documentation. | AWS DevOps, Cloud administrator, Cloud architect | 

## Related resources
<a name="centralize-monitoring-by-using-amazon-cloudwatch-observability-access-manager-resources"></a>
+ [CloudWatch cross-account observability](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Unified-Cross-Account.html) (Amazon CloudWatch documentation)
+ [Amazon CloudWatch Observability Access Manager API Reference](https://docs.aws.amazon.com/OAM/latest/APIReference/Welcome.html) (Amazon CloudWatch documentation)
+ [Resource: aws_oam_sink](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/oam_sink) (Terraform documentation)
+ [Data Source: aws_oam_link](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/oam_link) (Terraform documentation)
+ [CloudWatchObservabilityAccessManager](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/oam.html) (AWS Boto3 documentation)

# Check EC2 instances for mandatory tags at launch
<a name="check-ec2-instances-for-mandatory-tags-at-launch"></a>

*Susanne Kangnoh and Archit Mathur, Amazon Web Services*

## Summary
<a name="check-ec2-instances-for-mandatory-tags-at-launch-summary"></a>

Amazon Elastic Compute Cloud (Amazon EC2) provides scalable computing capacity in the Amazon Web Services (AWS) Cloud. Using Amazon EC2 eliminates your need to invest in hardware up front, so you can develop and deploy applications faster.

You can use tagging to categorize your AWS resources in different ways. EC2 instance tagging is useful when you have many resources in your account and you want to quickly identify a specific resource based on the tags. You can assign custom metadata to your EC2 instances by using tags. A tag consists of a user-defined key and value. We recommend that you create a consistent set of tags to meet your organization's requirements. 

This pattern provides an AWS CloudFormation template to help you monitor EC2 instances for specific tags. The template creates an Amazon CloudWatch Events rule that watches for the AWS CloudTrail **TagResource** or **UntagResource** events to detect new EC2 instance tagging or tag removal. If a predefined tag is missing, the rule calls an AWS Lambda function, which sends a violation message to an email address that you provide by using Amazon Simple Notification Service (Amazon SNS).
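The Lambda check at the heart of this pattern boils down to comparing the tag keys present on an instance against the required list. A minimal sketch of that logic — the attached `index.zip` contains the actual implementation, the event shape here is simplified, and the SNS publish is only indicated in a comment:

```python
def missing_required_tags(required_keys, tag_set):
    """Return the required tag keys absent from an instance's tags.

    required_keys: list parsed from the comma-separated template parameter.
    tag_set: list of {"key": ..., "value": ...} dicts, a simplified view of
    the CloudTrail TagResource/UntagResource requestParameters.
    """
    present = {tag["key"] for tag in tag_set}
    return [key for key in required_keys if key not in present]

required = "ApplicationId,CreatedBy,Environment,Organization".split(",")
tags = [{"key": "ApplicationId", "value": "app-1"},
        {"key": "Environment", "value": "prod"}]
violations = missing_required_tags(required, tags)
print(violations)  # ['CreatedBy', 'Organization']
# If violations is non-empty, the Lambda function publishes a message to
# the SNS topic (boto3 sns.publish) for the configured email address.
```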

## Prerequisites and limitations
<a name="check-ec2-instances-for-mandatory-tags-at-launch-prerequisites-and-limitations"></a>

**Prerequisites**
+ An active AWS account.
+ An Amazon Simple Storage Service (Amazon S3) bucket to upload the provided Lambda code.
+ An email address where you would like to receive violation notifications.

**Limitations**
+ This solution supports CloudTrail **TagResource** or **UntagResource** events. It does not create notifications for any other events.
+ This solution checks only for tag keys. It does not monitor key values.

## Architecture
<a name="check-ec2-instances-for-mandatory-tags-at-launch-architecture"></a>

**Workflow architecture**

![\[Workflow diagram showing AWS services interaction for EC2 instance monitoring and notification.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/9cd74141-a87f-419e-94b3-0b28fd04a018/images/b48fd21b-a86b-4ec7-b9f6-4f1a64999437.png)


 

**Automation and scale**
+ You can use the AWS CloudFormation template multiple times for different AWS Regions and accounts. You need to run the template only once in each Region or account.

## Tools
<a name="check-ec2-instances-for-mandatory-tags-at-launch-tools"></a>

**AWS services**
+ [Amazon EC2](https://aws.amazon.com/ec2/) – Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers.
+ [AWS CloudTrail](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-user-guide.html) – CloudTrail is an AWS service that helps you with governance, compliance, and operational and risk auditing of your AWS account. Actions taken by a user, role, or AWS service are recorded as events in CloudTrail. 
+ [Amazon CloudWatch Events](https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/WhatIsCloudWatchEvents.html) – Amazon CloudWatch Events delivers a near real-time stream of system events that describe changes in AWS resources. CloudWatch Events becomes aware of operational changes as they occur and takes corrective action as necessary, by sending messages to respond to the environment, activating functions, making changes, and capturing state information. 
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) – Lambda is a compute service that supports running code without needing to provision or manage servers. Lambda runs your code only when needed and scales automatically, from a few requests per day to thousands per second. 
+ [Amazon S3](https://docs.aws.amazon.com/AmazonS3/latest/dev/Welcome.html) – Amazon Simple Storage Service (Amazon S3) is a highly scalable object storage service that can be used for a wide range of storage solutions, including websites, mobile applications, backups, and data lakes.
+ [Amazon SNS](https://docs.aws.amazon.com/sns/latest/dg/welcome.html) – Amazon Simple Notification Service (Amazon SNS) is a web service that enables applications, end-users, and devices to instantly send and receive notifications from the cloud.

**Code**

This pattern includes an attachment with two files:
+ `index.zip` is a compressed file that includes the Lambda code for this pattern.
+ `ec2-require-tags.yaml` is a CloudFormation template that deploys the Lambda code.

See the *Epics* section for information about how to use these files.

## Epics
<a name="check-ec2-instances-for-mandatory-tags-at-launch-epics"></a>

### Deploy the Lambda code
<a name="deploy-the-lambda-code"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Upload the code to an S3 bucket. | Create a new S3 bucket or use an existing S3 bucket to upload the attached `index.zip` file (Lambda code). This bucket must be in the same AWS Region as the resources (EC2 instances) that you want to monitor. | Cloud architect | 
| Deploy the CloudFormation template. | Open the CloudFormation console in the same AWS Region as the S3 bucket, and deploy the `ec2-require-tags.yaml` file that's provided in the attachment. In the next epic, provide values for the template parameters. | Cloud architect | 

### Complete the parameters in the CloudFormation template
<a name="complete-the-parameters-in-the-cloudformation-template"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Provide the S3 bucket name. | Enter the name of the S3 bucket that you created or selected in the first epic. This S3 bucket contains the .zip file for the Lambda code and must be in the same AWS Region as the CloudFormation template and the EC2 instances that you want to monitor. | Cloud architect | 
| Provide the S3 key. | Provide the location of the Lambda code .zip file in your S3 bucket, without leading slashes (for example, `index.zip` or `controls/index.zip`). | Cloud architect | 
| Provide an email address. | Provide an active email address where you want to receive violation notifications. | Cloud architect | 
| Define a logging level. | Specify the logging level and verbosity. `Info` designates detailed informational messages on the application’s progress and should be used only for debugging. `Error` designates error events that could still allow the application to continue running. `Warning` designates potentially harmful situations. | Cloud architect | 
| Enter the required tag keys. | Enter the tag keys that you want to check for. If you want to specify multiple keys, separate them with commas, without spaces. (For example, `ApplicationId,CreatedBy,Environment,Organization` searches for four keys.) The CloudWatch Events event searches for these tag keys and sends a notification if they are not found. | Cloud architect | 

### Confirm the subscription
<a name="confirm-the-subscription"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Confirm the email subscription. | When the CloudFormation template deploys successfully, it sends a subscription email message to the email address you provided. To receive notifications, you must confirm this email subscription.   | Cloud architect | 

## Related resources
<a name="check-ec2-instances-for-mandatory-tags-at-launch-related-resources"></a>
+ [Creating a bucket](https://docs.aws.amazon.com/AmazonS3/latest/user-guide/create-bucket.html) (Amazon S3 documentation)
+ [Uploading objects](https://docs.aws.amazon.com/AmazonS3/latest/user-guide/upload-objects.html) (Amazon S3 documentation)
+ [Tag your Amazon EC2 resources](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html) (Amazon EC2 documentation)
+ [Creating a CloudWatch Events rule that triggers on an AWS API call using AWS CloudTrail](https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/Create-CloudWatch-Events-CloudTrail-Rule.html) (Amazon CloudWatch documentation)

## Attachments
<a name="attachments-9cd74141-a87f-419e-94b3-0b28fd04a018"></a>

To access additional content that is associated with this document, unzip the following file: [attachment.zip](samples/p-attach/9cd74141-a87f-419e-94b3-0b28fd04a018/attachments/attachment.zip)

# Clean up AWS Account Factory for Terraform (AFT) resources safely after state file loss
<a name="clean-up-aft-resources-safely-after-state-file-loss"></a>

*Gokendra Malviya, Amazon Web Services*

## Summary
<a name="clean-up-aft-resources-safely-after-state-file-loss-summary"></a>

When you use AWS Account Factory for Terraform (AFT) to manage your AWS Control Tower environment, AFT generates a Terraform state file to track the state and configuration of the resources created by Terraform. Losing the Terraform state file can create significant challenges for resource management and cleanup. This pattern provides a systematic approach to safely identify and remove AFT-related resources while maintaining the integrity of your AWS Control Tower environment.

The process is designed to ensure proper removal of all AFT components, even without the original state file reference. This process provides a clear path to successfully re-establish and reconfigure AFT in your environment, to help ensure minimal disruption to your AWS Control Tower operations.

For more information about AFT, see the [AWS Control Tower documentation](https://docs.aws.amazon.com/controltower/latest/userguide/taf-account-provisioning.html).

## Prerequisites and limitations
<a name="clean-up-aft-resources-safely-after-state-file-loss-prereqs"></a>

**Prerequisites**
+ A thorough understanding of [AFT architecture](https://docs.aws.amazon.com/controltower/latest/userguide/aft-architecture.html).
+ Administrator access to the following accounts:
  + AFT Management account
  + AWS Control Tower Management account
  + Log Archive account
  + Audit account
+ Verification that no service control policies (SCPs) contain restrictions or limitations that would block the deletion of AFT-related resources.

**Limitations**
+ This process can clean up resources effectively, but it cannot recover lost state files, and some resources might require manual identification.
+ The duration of the cleanup process depends on your environment's complexity and might take several hours.
+ This pattern has been tested with AFT version 1.12.2 and deletes the following resources. If you're using a different version of AFT, you might have to delete additional resources.    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/clean-up-aft-resources-safely-after-state-file-loss.html)

**Important**  
The resources that are deleted by the steps in this pattern cannot be recovered. Before you follow these steps, verify the resource names carefully and make sure that they were created by AFT.

## Architecture
<a name="clean-up-aft-resources-safely-after-state-file-loss-architecture"></a>

The following diagram shows the AFT components and high-level workflow. AFT sets up a Terraform pipeline that helps you provision and customize your accounts in AWS Control Tower. AFT follows a GitOps model to automate the processes of account provisioning in AWS Control Tower. You create a Terraform file for an account request and commit it to a repository, which provides the input that triggers the AFT workflow for account provisioning. After account provisioning is complete, AFT can run additional customization steps automatically.

![\[AFT components and high-level workflow.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/1342c0a6-4b07-46df-a063-ceab2e2f83c8/images/3e0cae87-20ef-4fcc-aacf-bb450844ac56.png)


In this architecture:
+ **AWS Control Tower Management account** is an AWS account that's dedicated to the AWS Control Tower service. This is also typically referred to as the *AWS payer account* or *AWS Organizations Management account*.
+ **AFT Management account** is an AWS account that's dedicated to AFT management operations. This is different from your organization's management account.
+ **Vended account** is an AWS account that contains all the baseline components and controls that you selected. AFT uses AWS Control Tower to vend a new account.

For additional information about this architecture, see [Introduction to AFT](https://catalog.workshops.aws/control-tower/en-US/customization/aft) in the AWS Control Tower workshop.

## Tools
<a name="clean-up-aft-resources-safely-after-state-file-loss-tools"></a>

**AWS services**
+ [AWS Control Tower](https://docs.aws.amazon.com/controltower/latest/userguide/what-is-control-tower.html) helps you set up and govern an AWS multi-account environment, following prescriptive best practices.
+ [AWS Account Factory for Terraform (AFT)](https://docs.aws.amazon.com/controltower/latest/userguide/taf-account-provisioning.html) sets up a Terraform pipeline to help you provision and customize accounts and resources in AWS Control Tower.
+ [AWS Organizations](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_introduction.html) helps you centrally manage and govern your environment as you grow and scale your AWS resources. Using Organizations, you can create accounts and allocate resources, group accounts to organize your workflows, apply policies for governance, and simplify billing by using a single payment method for all your accounts.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them. This pattern requires IAM roles and permissions.

**Other tools**
+ [Terraform](https://www.terraform.io/) is an infrastructure as code (IaC) tool from HashiCorp that helps you create and manage cloud and on-premises resources.

## Best practices
<a name="clean-up-aft-resources-safely-after-state-file-loss-best-practices"></a>
+ For AWS Control Tower, see [Best practices for AWS Control Tower administrators](https://docs.aws.amazon.com/controltower/latest/userguide/best-practices.html) in the AWS Control Tower documentation.
+ For IAM, see [Security best practices](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html) in the IAM documentation.

## Epics
<a name="clean-up-aft-resources-safely-after-state-file-loss-epics"></a>

### Delete AFT resources in the AFT Management account
<a name="delete-aft-resources-in-the-aft-management-account"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Delete resources that are identified by the AFT tag. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/clean-up-aft-resources-safely-after-state-file-loss.html) | AWS administrator, AWS DevOps, DevOps engineer | 
| Delete IAM roles. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/clean-up-aft-resources-safely-after-state-file-loss.html) | AWS administrator, AWS DevOps, DevOps engineer | 
| Delete the AWS Backup backup vault. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/clean-up-aft-resources-safely-after-state-file-loss.html) | AWS administrator, AWS DevOps, DevOps engineer | 
| Delete Amazon CloudWatch resources. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/clean-up-aft-resources-safely-after-state-file-loss.html) | AWS administrator, AWS DevOps, DevOps engineer | 
| Delete AWS KMS resources. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/clean-up-aft-resources-safely-after-state-file-loss.html) | AWS administrator, AWS DevOps, DevOps engineer | 
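
For the first task in this epic, one way to enumerate AFT-created resources without the state file is the Resource Groups Tagging API. The sketch below is illustrative only: the tag key and value (`managed_by=AFT` here) are assumptions that you must verify in your environment before deleting anything, and the client is injected so the pagination logic can be exercised without AWS credentials.

```python
def list_resources_by_tag(tagging_client, tag_key: str, tag_value: str) -> list:
    """Collect the ARNs of all resources that carry the given tag.

    tagging_client is expected to expose get_resources() with the same
    shape as the boto3 resourcegroupstaggingapi client, including
    PaginationToken handling for result sets larger than one page.
    """
    arns, token = [], ""
    while True:
        page = tagging_client.get_resources(
            TagFilters=[{"Key": tag_key, "Values": [tag_value]}],
            PaginationToken=token,
        )
        arns.extend(m["ResourceARN"] for m in page["ResourceTagMappingList"])
        token = page.get("PaginationToken", "")
        if not token:
            return arns


# With a real boto3 client (credentials required), the call might look like:
# arns = list_resources_by_tag(
#     boto3.client("resourcegroupstaggingapi"), "managed_by", "AFT")
```

Review the returned ARNs manually before deleting anything, because tag-based discovery can match resources that AFT did not create.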

### Delete AFT resources in the Log Archive account
<a name="delete-aft-resources-in-the-log-archive-account"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Delete S3 buckets. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/clean-up-aft-resources-safely-after-state-file-loss.html) | AWS administrator, AWS DevOps, DevOps engineer | 
| Delete IAM roles. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/clean-up-aft-resources-safely-after-state-file-loss.html) | AWS administrator, AWS DevOps, DevOps engineer | 

### Delete AFT resources in the Audit account
<a name="delete-aft-resources-in-the-audit-account"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Delete IAM roles. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/clean-up-aft-resources-safely-after-state-file-loss.html) | AWS administrator, AWS DevOps, DevOps engineer | 

### Delete AFT resources in the AWS Control Tower Management account
<a name="delete-aft-resources-in-the-ctower-management-account"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Delete IAM roles. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/clean-up-aft-resources-safely-after-state-file-loss.html) | AWS administrator, AWS DevOps, DevOps engineer | 
| Delete EventBridge rules. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/clean-up-aft-resources-safely-after-state-file-loss.html) | AWS administrator, AWS DevOps, DevOps engineer | 
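
The IAM role deletions in the epics above fail while policies are still attached, so a cleanup helper has to detach managed policies and delete inline policies first. The following is a hedged sketch (the client is injected for testability; roles that belong to instance profiles also need `remove_role_from_instance_profile`, which is omitted here). Verify role names carefully before passing them in, because deleted roles cannot be recovered.

```python
def delete_role_safely(iam, role_name: str) -> None:
    """Detach managed policies, delete inline policies, then delete the role.

    iam is expected to expose the same operations as the boto3 IAM client.
    """
    for policy in iam.list_attached_role_policies(RoleName=role_name)["AttachedPolicies"]:
        iam.detach_role_policy(RoleName=role_name, PolicyArn=policy["PolicyArn"])
    for name in iam.list_role_policies(RoleName=role_name)["PolicyNames"]:
        iam.delete_role_policy(RoleName=role_name, PolicyName=name)
    iam.delete_role(RoleName=role_name)
```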

## Troubleshooting
<a name="clean-up-aft-resources-safely-after-state-file-loss-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| Detaching the internet gateway was unsuccessful. | If this issue occurs when you detach or delete the internet gateway while you're deleting resources that are identified by the **AFT** tag, delete the VPC endpoints first:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/clean-up-aft-resources-safely-after-state-file-loss.html) | 
| You can't find the CloudWatch queries that were created by AFT. | To locate the saved queries, follow these steps:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/clean-up-aft-resources-safely-after-state-file-loss.html) | 

## Related resources
<a name="clean-up-aft-resources-safely-after-state-file-loss-resources"></a>
+ AFT:
  + [GitHub repository](https://github.com/aws-ia/terraform-aws-control_tower_account_factory)
  + [Workshop](https://catalog.workshops.aws/control-tower/en-US/customization/aft)
  + [Documentation](https://docs.aws.amazon.com/controltower/latest/userguide/aft-getting-started.html)
+ [AWS Control Tower documentation](https://docs.aws.amazon.com/controltower/latest/userguide/getting-started-with-control-tower.html)

## Additional information
<a name="clean-up-aft-resources-safely-after-state-file-loss-additional"></a>

To view AFT queries on the CloudWatch Logs Insights dashboard, choose the **Saved and sample queries** icon from the upper-right corner, as illustrated in the following screenshot:

![\[Accessing AFT queries on the CloudWatch Logs Insights dashboard.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/1342c0a6-4b07-46df-a063-ceab2e2f83c8/images/255d4032-738b-4600-9084-9684d2e9a328.png)


# Create a pipeline in AWS Regions that don’t support AWS CodePipeline
<a name="create-a-pipeline-in-aws-regions-that-don-t-support-aws-codepipeline"></a>

*Anand Krishna Varanasi, Amazon Web Services*

## Summary
<a name="create-a-pipeline-in-aws-regions-that-don-t-support-aws-codepipeline-summary"></a>

**Notice**: AWS CodeCommit is no longer available to new customers. Existing customers of AWS CodeCommit can continue to use the service as normal. [Learn more](https://aws.amazon.com/blogs/devops/how-to-migrate-your-aws-codecommit-repository-to-another-git-provider/)

AWS CodePipeline is a continuous delivery (CD) orchestration service that’s part of a set of DevOps tools from Amazon Web Services (AWS). It integrates with a large variety of sources (such as version control systems and storage solutions), continuous integration (CI) products and services from AWS and AWS Partners, and open-source products to provide an end-to-end workflow service for fast application and infrastructure deployments.

However, CodePipeline isn’t supported in all AWS Regions, and it’s useful to have an invisible orchestrator that connects AWS CI/CD services. This pattern describes how to implement an end-to-end workflow pipeline in AWS Regions where CodePipeline isn’t yet supported by using AWS CI/CD services such as AWS CodeCommit, AWS CodeBuild, and AWS CodeDeploy.

## Prerequisites and limitations
<a name="create-a-pipeline-in-aws-regions-that-don-t-support-aws-codepipeline-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ AWS Cloud Development Kit (AWS CDK) CLI version 2.28 or later

## Architecture
<a name="create-a-pipeline-in-aws-regions-that-don-t-support-aws-codepipeline-architecture"></a>

**Target technology stack**

The following diagram shows a pipeline that was created in a Region that doesn’t support CodePipeline, such as the Africa (Cape Town) Region. A developer pushes the CodeDeploy configuration files (also called *deployment lifecycle hook scripts*) to the Git repository that’s hosted by CodeCommit. (See the [GitHub repository](https://github.com/aws-samples/invisible-codepipeline-unsupported-regions) provided with this pattern.) An Amazon EventBridge rule automatically initiates CodeBuild.

The CodeDeploy configuration files are fetched from CodeCommit as part of the source stage of the pipeline and transferred to CodeBuild. 

In the next phase, CodeBuild performs these tasks: 

1. Downloads the application source code TAR file. You can configure the name of this file by using Parameter Store, a capability of AWS Systems Manager.

1. Downloads the CodeDeploy configuration files.

1. Creates a combined archive of application source code and CodeDeploy configuration files that are specific to the application type.

1. Initiates CodeDeploy deployment to an Amazon Elastic Compute Cloud (Amazon EC2) instance by using the combined archive.
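
Step 3 is essentially merging two file trees into one deployable archive. The following is a minimal, self-contained sketch of that idea (directory and file names are hypothetical; the real logic lives in the pattern's repository):

```python
import os
import tarfile


def build_combined_archive(app_dir: str, config_dir: str, out_path: str) -> None:
    """Bundle application source and CodeDeploy config files into one tar.gz.

    CodeDeploy expects appspec.yml and its hook scripts at the archive
    root, alongside the application files, so both trees are added at
    the top level of the archive.
    """
    with tarfile.open(out_path, "w:gz") as tar:
        for directory in (app_dir, config_dir):
            for name in sorted(os.listdir(directory)):
                tar.add(os.path.join(directory, name), arcname=name)
```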

![\[Pipeline creation in unsupported AWS Region\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/e27750de-b597-424e-b5bf-4d58dc9b60cc/images/95fc815e-a762-4142-b0fd-2a716823e498.png)


## Tools
<a name="create-a-pipeline-in-aws-regions-that-don-t-support-aws-codepipeline-tools"></a>

**AWS services**
+ [AWS CodeBuild](https://docs.aws.amazon.com/codebuild/latest/userguide/welcome.html) is a fully managed build service that helps you compile source code, run unit tests, and produce artifacts that are ready to deploy.
+ [AWS CodeCommit](https://docs.aws.amazon.com/codecommit/latest/userguide/welcome.html) is a version control service that helps you privately store and manage Git repositories, without needing to manage your own source control system.
+ [AWS CodeDeploy](https://docs.aws.amazon.com/codedeploy/latest/userguide/welcome.html) automates deployments to Amazon EC2 or on-premises instances, AWS Lambda functions, or Amazon Elastic Container Service (Amazon ECS) services.
+ [AWS CodePipeline](https://docs.aws.amazon.com/codepipeline/latest/userguide/welcome.html) helps you quickly model and configure the different stages of a software release and automate the steps required to release software changes continuously.
+ [AWS Cloud Development Kit (AWS CDK)](https://docs.aws.amazon.com/cdk/latest/guide/home.html) is a software development framework that helps you define and provision AWS Cloud infrastructure in code.

**Code**

The code for this pattern is available in the GitHub [CodePipeline Unsupported Regions](https://github.com/aws-samples/invisible-codepipeline-unsupported-regions) repository.

## Epics
<a name="create-a-pipeline-in-aws-regions-that-don-t-support-aws-codepipeline-epics"></a>

### Set up your developer workstation
<a name="set-up-your-developer-workstation"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Install the AWS CDK CLI. | For instructions, see the [AWS CDK documentation](https://docs.aws.amazon.com/cdk/v2/guide/getting_started.html#getting_started_prerequisites). | AWS DevOps | 
| Install a Git client. | To create commits, you can use a Git client installed on your local computer, and then push your commits to the CodeCommit repository. To set up CodeCommit with your Git client, see the [CodeCommit documentation](https://docs.aws.amazon.com/codecommit/latest/userguide/how-to-create-commit.html). | AWS DevOps | 
| Install npm. | Install the **npm** package manager. For more information, see the [npm documentation](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm). | AWS DevOps | 

### Set up the pipeline
<a name="set-up-the-pipeline"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clone the code repository. | Clone the GitHub [CodePipeline Unsupported Regions](https://github.com/aws-samples/invisible-codepipeline-unsupported-regions) repository to your local machine by running the following command.<pre>git clone https://github.com/aws-samples/invisible-codepipeline-unsupported-regions</pre> | DevOps engineer | 
| Set parameters in cdk.json. | Open the `cdk.json` file and provide values for the following parameters:<pre>"pipeline_account":"XXXXXXXXXXXX",<br />"pipeline_region":"us-west-2",<br />"repo_name": "app-dev-repo",<br />"ec2_tag_key": "test-vm",<br />"configName" : "cbdeployconfig",<br />"deploymentGroupName": "cbdeploygroup",<br />"applicationName" : "cbdeployapplication",<br />"projectName" : "CodeBuildProject"</pre>where:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-a-pipeline-in-aws-regions-that-don-t-support-aws-codepipeline.html) | AWS DevOps | 
| Set up the AWS CDK construct library. | In the cloned GitHub repository, use the following commands to install the AWS CDK construct library, build your application, and synthesize to generate the AWS CloudFormation template for the application.<pre>npm i aws-cdk-lib<br />npm run build<br />cdk synth</pre> | AWS DevOps | 
| Deploy the sample AWS CDK application. | Deploy the code by running the following command in an unsupported Region (such as `af-south-1`).<pre>cdk deploy</pre> | AWS DevOps | 

### Set up the CodeCommit repository for CodeDeploy
<a name="set-up-the-codecommit-repository-for-codedeploy"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Set up CI/CD for the application. | Clone the CodeCommit repository that you specified in the `cdk.json` file (this is called `app-dev-repo` by default) to set up the CI/CD pipeline for the application.<pre>git clone https://git-codecommit.us-west-2.amazonaws.com/v1/repos/app-dev-repo</pre>where the repository name and Region depend on the values you provided in the `cdk.json` file. | AWS DevOps | 

### Test the pipeline
<a name="test-the-pipeline"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Test the pipeline with deployment instructions. | The `CodeDeploy_Files` folder of the GitHub [CodePipeline Unsupported Regions](https://github.com/aws-samples/invisible-codepipeline-unsupported-regions) repository includes sample files that instruct CodeDeploy to deploy the application. The `appspec.yml` file is a CodeDeploy configuration file that contains hooks to control the flow of application deployment. You can use the sample files `index.html`, `start_server.sh`, `stop_server.sh`, and `install_dependencies.sh` to update a website that’s hosted on Apache. These files are examples; you can use the code in the GitHub repository to deploy any type of application. When the files are pushed to the CodeCommit repository, the invisible pipeline is initiated automatically. For deployment results, check the results of individual phases in the CodeBuild and CodeDeploy consoles. | AWS DevOps | 

## Related resources
<a name="create-a-pipeline-in-aws-regions-that-don-t-support-aws-codepipeline-resources"></a>
+ [Getting started](https://docs.aws.amazon.com/cdk/v2/guide/getting_started.html#getting_started_prerequisites) (AWS CDK documentation)
+ [Introduction to the Cloud Development Kit (CDK)](https://catalog.us-east-1.prod.workshops.aws/workshops/5962a836-b214-4fbf-9462-fedba7edcc9b/en-US) (AWS Workshop Studio)
+ [AWS CDK Workshop](https://cdkworkshop.com/)

# Customize default role names by using AWS CDK aspects and escape hatches
<a name="customize-default-role-names-by-using-aws-cdk-aspects-and-escape-hatches"></a>

*Sandeep Singh and James Jacob, Amazon Web Services*

## Summary
<a name="customize-default-role-names-by-using-aws-cdk-aspects-and-escape-hatches-summary"></a>

This pattern demonstrates how to customize the default names of roles that are created by AWS Cloud Development Kit (AWS CDK) constructs. Customizing role names is often necessary if your organization has specific constraints based on naming conventions. For example, your organization might set AWS Identity and Access Management (IAM) [permissions boundaries](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_boundaries.html) or [service control policies (SCPs)](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html) that require a specific prefix in role names. In such cases, the default role names generated by AWS CDK constructs might not meet these conventions and might have to be altered. This pattern addresses those requirements by using [escape hatches](https://docs.aws.amazon.com/cdk/v2/guide/cfn-layer.html) and [aspects](https://docs.aws.amazon.com/cdk/v2/guide/aspects.html) in the AWS CDK. You use escape hatches to define custom role names, and aspects to apply a custom name to all roles, to ensure adherence to your organization's policies and constraints.

## Prerequisites and limitations
<a name="customize-default-role-names-by-using-aws-cdk-aspects-and-escape-hatches-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ Prerequisites specified in the [AWS CDK documentation](https://docs.aws.amazon.com/cdk/v2/guide/getting_started.html#getting_started_prerequisites)

**Limitations**
+ Aspects filter resources based on resource types, so all roles share the same prefix. If you require different role prefixes for different roles, additional filtering based on other properties is necessary. For example, to assign different prefixes to roles that are associated with AWS Lambda functions, you could filter by specific role attributes or tags, and apply one prefix for Lambda-related roles and a different prefix for other roles.
+ IAM role names have a maximum length of 64 characters, so modified role names have to be trimmed to meet this restriction.
+ Some AWS services aren’t available in all AWS Regions. For Region availability, see [AWS services by Region](https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/). For specific endpoints, see the [Service endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/aws-service-information.html) page, and choose the link for the service.
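
Given the 64-character limit noted in the limitations above, a prefixing aspect has to trim the combined name. The following is a minimal sketch of one trimming strategy (written in Python for illustration; the actual aspect in the pattern's repository is TypeScript and may trim differently). It keeps the end of the generated name, which is where the AWS CDK places its uniquifying hash:

```python
def prefixed_role_name(prefix: str, logical_name: str, max_len: int = 64) -> str:
    """Apply an organization prefix and trim to IAM's role-name limit.

    When the combined name is too long, the start of logical_name is
    dropped so that the CDK-generated hash suffix (which keeps names
    unique) is preserved.
    """
    name = f"{prefix}-{logical_name}"
    if len(name) <= max_len:
        return name
    return prefix + "-" + logical_name[-(max_len - len(prefix) - 1):]
```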

## Architecture
<a name="customize-default-role-names-by-using-aws-cdk-aspects-and-escape-hatches-architecture"></a>

**Target technology stack**
+ AWS CDK
+ AWS CloudFormation

**Target architecture**

![\[Architecture for using escape hatches and aspects to customize AWS CDK-assigned role names.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/c149d8d2-1da6-4680-ab0b-e5051b69688c/images/15e56ca5-f150-4522-b374-8ee2dcc655a9.png)

+ An AWS CDK app consists of one or more CloudFormation stacks, which are synthesized and deployed to manage AWS resources.
+ To modify a property of an AWS CDK-managed resource that isn't exposed by a layer 2 (L2) construct, you use an escape hatch to override the underlying CloudFormation properties (in this case, the role name), and an aspect to apply the role to all resources in the AWS CDK app during the AWS CDK stack synthesis process.

## Tools
<a name="customize-default-role-names-by-using-aws-cdk-aspects-and-escape-hatches-tools"></a>

**AWS services**
+ [AWS Cloud Development Kit (AWS CDK)](https://docs.aws.amazon.com/cdk/latest/guide/home.html) is a software development framework that helps you define and provision AWS Cloud infrastructure in code.
+ [AWS CDK Command Line Interface (AWS CDK CLI)](https://docs.aws.amazon.com/cdk/latest/guide/cli.html) (also referred to as the AWS CDK Toolkit) is the primary command line tool for interacting with your AWS CDK app. It runs your app, interrogates the application model you defined, and produces and deploys the CloudFormation templates that the AWS CDK generates.
+ [CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) helps you set up AWS resources, provision them quickly and consistently, and manage them throughout their lifecycle across AWS accounts and Regions.

**Code repository**

The source code and templates for this pattern are available in the GitHub [CDK Aspects Override](https://github.com/aws-samples/cdk-aspects-override) repository.

## Best practices
<a name="customize-default-role-names-by-using-aws-cdk-aspects-and-escape-hatches-best-practices"></a>

See [Best practices for using the AWS CDK in TypeScript to create IaC projects](https://docs.aws.amazon.com/prescriptive-guidance/latest/best-practices-cdk-typescript-iac/introduction.html) on the AWS Prescriptive Guidance website.

## Epics
<a name="customize-default-role-names-by-using-aws-cdk-aspects-and-escape-hatches-epics"></a>

### Install the AWS CDK CLI
<a name="install-the-cdk-cli"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Install the AWS CDK CLI. | To install the AWS CDK CLI globally, run the command:<pre>npm install -g aws-cdk</pre> | AWS DevOps | 
| Verify the version. | Run the command:<pre>cdk --version</pre>Confirm that you’re using version 2 of the AWS CDK CLI. | AWS DevOps | 
| Bootstrap the AWS CDK environment. | Before you deploy the CloudFormation templates, prepare the account and AWS Region that you want to use. Run the command:<pre>cdk bootstrap <account>/<Region></pre>For more information, see [AWS CDK bootstrapping](https://docs.aws.amazon.com/cdk/v2/guide/bootstrapping.html) in the AWS documentation. | AWS DevOps | 

### Deploy the AWS CDK app to demonstrate the use of aspects
<a name="deploy-the-cdk-app-to-demonstrate-the-use-of-aspects"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Set up the project. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/customize-default-role-names-by-using-aws-cdk-aspects-and-escape-hatches.html) | AWS DevOps | 
| Deploy stacks with default role names assigned by the AWS CDK. | Deploy two CloudFormation stacks (`ExampleStack1` and `ExampleStack2`) that contain the Lambda functions and their associated roles:<pre>npm run deploy:ExampleAppWithoutAspects</pre>The code doesn’t explicitly pass role properties, so the role names are constructed by the AWS CDK. For example output, see the [Additional information](#customize-default-role-names-by-using-aws-cdk-aspects-and-escape-hatches-additional) section. | AWS DevOps | 
| Deploy stacks with aspects. | In this step, you apply an aspect that enforces a role name convention by adding a prefix to all IAM roles that are deployed in the AWS CDK project. The aspect is defined in the `lib/aspects.ts` file and uses an escape hatch to override each role name by adding a prefix. The aspect is applied to the stacks in the `bin/app-with-aspects.ts` file. The role name prefix used in this example is `dev-unicorn`.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/customize-default-role-names-by-using-aws-cdk-aspects-and-escape-hatches.html) For example output, see the [Additional information](#customize-default-role-names-by-using-aws-cdk-aspects-and-escape-hatches-additional) section. | AWS DevOps | 

### Clean up resources
<a name="clean-up-resources"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Delete your AWS CloudFormation stacks. | After you finish using this pattern, run the following command to clean up resources and avoid incurring additional costs:<pre>cdk destroy --all -f && cdk --app 'npx ts-node bin/app-with-aspects.ts' destroy --all -f</pre> | AWS DevOps | 

## Troubleshooting
<a name="customize-default-role-names-by-using-aws-cdk-aspects-and-escape-hatches-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| You encounter problems using the AWS CDK. | See [Troubleshooting common AWS CDK issues](https://docs.aws.amazon.com/cdk/v2/guide/troubleshooting.html) in the AWS CDK documentation. | 

## Related resources
<a name="customize-default-role-names-by-using-aws-cdk-aspects-and-escape-hatches-resources"></a>
+ [AWS Cloud Development Kit (AWS CDK)](https://aws.amazon.com/cdk/)
+ [AWS CDK documentation](https://docs.aws.amazon.com/cdk/)
+ [AWS CDK on GitHub](https://github.com/aws/aws-cdk)
+ [Escape hatches](https://docs.aws.amazon.com/cdk/v2/guide/cfn-layer.html)
+ [Aspects and the AWS CDK](https://docs.aws.amazon.com/cdk/v2/guide/aspects.html)

## Additional information
<a name="customize-default-role-names-by-using-aws-cdk-aspects-and-escape-hatches-additional"></a>

**Role names created by CloudFormation without aspects**

```
Outputs:
ExampleStack1WithoutAspects.Function1RoleName = example-stack1-without-as-Function1LambdaFunctionSe-y7FYTY6FXJXA
ExampleStack1WithoutAspects.Function2RoleName = example-stack1-without-as-Function2LambdaFunctionSe-dDZV4rkWqWnI
...

Outputs:
ExampleStack2WithoutAspects.Function3RoleName = example-stack2-without-as-Function3LambdaFunctionSe-ygMv49iTyMq0
```

**Role names created by CloudFormation with aspects**

```
Outputs:
ExampleStack1WithAspects.Function1RoleName = dev-unicorn-Function1LambdaFunctionServiceRole783660DC
ExampleStack1WithAspects.Function2RoleName = dev-unicorn-Function2LambdaFunctionServiceRole2C391181
...

Outputs:
ExampleStack2WithAspects.Function3RoleName = dev-unicorn-Function3LambdaFunctionServiceRole4CAA721C
```

# Deploy a Cassandra cluster on Amazon EC2 with private static IPs to avoid rebalancing
<a name="deploy-a-cassandra-cluster-on-amazon-ec2-with-private-static-ips-to-avoid-rebalancing"></a>

*Dipin Jain, Amazon Web Services*

## Summary
<a name="deploy-a-cassandra-cluster-on-amazon-ec2-with-private-static-ips-to-avoid-rebalancing-summary"></a>

The private IP address of an Amazon Elastic Compute Cloud (Amazon EC2) instance is retained throughout the instance's lifecycle. However, the private IP changes when the instance is replaced after a planned or unplanned outage; for example, during an Amazon Machine Image (AMI) upgrade. In some scenarios, retaining a static private IP can improve the performance and recovery time of workloads. For example, using a static IP for an Apache Cassandra seed node prevents the cluster from incurring rebalancing overhead. 

This pattern describes how to attach a secondary elastic network interface to EC2 instances to keep the IP static during rehosting. The pattern focuses on Cassandra clusters, but you can use this implementation for any architecture that benefits from private static IPs.

## Prerequisites and limitations
<a name="deploy-a-cassandra-cluster-on-amazon-ec2-with-private-static-ips-to-avoid-rebalancing-prereqs"></a>

**Prerequisites**
+ An active Amazon Web Services (AWS) account

**Product versions**
+ DataStax version 5.11.1
+ Operating system: Ubuntu 16.04.6 LTS

## Architecture
<a name="deploy-a-cassandra-cluster-on-amazon-ec2-with-private-static-ips-to-avoid-rebalancing-architecture"></a>

**Source architecture**

The source could be a Cassandra cluster on an on-premises virtual machine (VM) or on EC2 instances in the AWS Cloud. The following diagram illustrates the second scenario. This example includes four cluster nodes: three seed nodes and one management node. In the source architecture, each node has a single network interface attached.

![\[Four Amazon EC2 cluster nodes that each have a single network interface attached.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/47ca4dbc-0922-4e65-b66c-4db5122fc4ac/images/5d80cfc9-4b72-4c72-aefd-b77cc0fb58e3.png)


**Target architecture**

The destination cluster is hosted on EC2 instances with a secondary elastic network interface attached to each node, as illustrated in the following diagram.

![\[Four Amazon EC2 cluster nodes that each have a secondary elastic network interface attached.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/47ca4dbc-0922-4e65-b66c-4db5122fc4ac/images/d1e22017-f041-426b-9204-31ac158a407d.png)


**Automation and scale**

You can also automate attaching a second elastic network interface to an EC2 Auto Scaling group, as described in an [AWS Knowledge Center video](https://www.youtube.com/watch?v=RmwGYXchb4E).

## Epics
<a name="deploy-a-cassandra-cluster-on-amazon-ec2-with-private-static-ips-to-avoid-rebalancing-epics"></a>

### Configure a Cassandra cluster on Amazon EC2
<a name="configure-a-cassandra-cluster-on-amazon-ec2"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Launch EC2 nodes to host a Cassandra cluster. | On the [Amazon EC2 console](https://console.aws.amazon.com/ec2/), launch four EC2 instances for your Ubuntu nodes in your AWS account. Three nodes are used as seed nodes for the Cassandra cluster, and the fourth node acts as a cluster management node where you will install DataStax Enterprise (DSE) OpsCenter. For instructions, see the [Amazon EC2 documentation](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html#ec2-launch-instance). | Cloud engineer | 
| Confirm node communications. | Make sure that the four nodes can communicate with one another over the database and cluster management ports. | Network engineer | 
| Install DSE OpsCenter on the management node. | Install DSE OpsCenter 6.1 from the Debian package on the management node. For instructions, see the [DataStax documentation](https://docs.datastax.com/en/opscenter/6.1/opsc/install/opscInstallDeb_t.html). | DBA | 
| Create a secondary network interface. | Cassandra generates a universally unique identifier (UUID) for each node based on the IP address of the EC2 instance for that node. This UUID is used for distributing virtual nodes (vnodes) on the ring. When Cassandra is deployed on EC2 instances, IP addresses are assigned automatically to the instances as they are created. In the event of a planned or unplanned outage, the IP address of the replacement EC2 instance changes, the data distribution changes, and the entire ring has to be rebalanced. This is not desirable. To preserve the assigned IP address, use a [secondary elastic network interface](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html#scenarios-enis) with a fixed IP address.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-a-cassandra-cluster-on-amazon-ec2-with-private-static-ips-to-avoid-rebalancing.html)For more information about creating a network interface, see the [Amazon EC2 documentation](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html#create_eni). | Cloud engineer | 
| Attach the secondary network interface to cluster nodes. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-a-cassandra-cluster-on-amazon-ec2-with-private-static-ips-to-avoid-rebalancing.html)For more information about attaching a network interface, see the [Amazon EC2 documentation](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html#attach_eni). | Cloud engineer | 
| Add routes in Amazon EC2 to address asymmetric routing. | When you attach the second network interface, the operating system is likely to route traffic asymmetrically. To avoid this, you can add routes for the new network interfaces. For an in-depth explanation and remediation of asymmetric routing, see the [AWS Knowledge Center video](https://www.youtube.com/watch?v=RmwGYXchb4E) or [Overcoming Asymmetric Routing on Multi-Home Servers](http://www.linuxjournal.com/article/7291) (article in *Linux Journal* by Patrick McManus, April 5, 2004). | Network engineer | 
| Update DNS entries to point to the secondary network interface IP. | Point the fully qualified domain name (FQDN) of the node to the IP of the secondary network interface. | Network engineer | 
| Install and configure the Cassandra cluster by using DSE OpsCenter. | When the cluster nodes are ready with the secondary network interfaces, you can install and configure the Cassandra cluster. | DBA | 
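
The network-interface tasks above can be sketched with the AWS CLI and Linux `ip` commands. This is a minimal sketch, not the pattern's definitive procedure: all resource IDs, the `10.0.1.50` static IP, and the `10.0.1.1` subnet gateway are hypothetical placeholders, and the interface name `eth1` depends on your AMI.

```shell
# Placeholder IDs and addresses -- replace with values from your VPC.
SUBNET_ID=subnet-0123456789abcdef0
SG_ID=sg-0123456789abcdef0
INSTANCE_ID=i-0123456789abcdef0
STATIC_IP=10.0.1.50

# Create the secondary elastic network interface with a fixed private IP.
ENI_ID=$(aws ec2 create-network-interface \
  --subnet-id "$SUBNET_ID" \
  --groups "$SG_ID" \
  --private-ip-address "$STATIC_IP" \
  --description "cassandra-seed-1-static" \
  --query 'NetworkInterface.NetworkInterfaceId' --output text)

# Attach it as device index 1 (typically eth1 on Ubuntu AMIs).
aws ec2 attach-network-interface \
  --network-interface-id "$ENI_ID" \
  --instance-id "$INSTANCE_ID" \
  --device-index 1

# On the instance: give eth1 its own routing table so replies leave on
# the interface they arrived on, which avoids asymmetric routing.
sudo bash -c 'echo "2 eth1rt" >> /etc/iproute2/rt_tables'
sudo ip route add default via 10.0.1.1 dev eth1 table eth1rt
sudo ip rule add from 10.0.1.50/32 table eth1rt
```

Point the node's DNS entry at `10.0.1.50` afterward, so the cluster always addresses the node through the secondary interface.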

### Recover cluster from node failure
<a name="recover-cluster-from-node-failure"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an AMI for the cluster seed node. | Make a backup of the nodes so you can restore them with database binaries in case of node failure. For instructions, see [Create an AMI](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/create-ami.html) in the Amazon EC2 documentation. | Backup administrator | 
| Recover from node failure. | Replace the failed node with a new EC2 instance launched from the AMI, and attach the secondary network interface of the failed node. | Backup administrator | 
| Verify that the Cassandra cluster is healthy. | When the replacement node is up, verify cluster health in DSE OpsCenter. | DBA | 
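
The replacement flow can be sketched with the AWS CLI as follows. The AMI ID, subnet ID, instance type, and ENI ID are hypothetical placeholders; the key point is that re-attaching the failed node's secondary interface preserves its IP, so the ring does not rebalance.

```shell
# Placeholder IDs -- replace with your backup AMI, subnet, and the
# secondary ENI that was attached to the failed node.
AMI_ID=ami-0123456789abcdef0
SUBNET_ID=subnet-0123456789abcdef0
OLD_ENI_ID=eni-0123456789abcdef0

# Launch the replacement node from the backup AMI.
NEW_INSTANCE_ID=$(aws ec2 run-instances \
  --image-id "$AMI_ID" \
  --instance-type m5.xlarge \
  --subnet-id "$SUBNET_ID" \
  --query 'Instances[0].InstanceId' --output text)

aws ec2 wait instance-running --instance-ids "$NEW_INSTANCE_ID"

# Re-attach the failed node's secondary ENI; the node rejoins the ring
# with the same private IP.
aws ec2 attach-network-interface \
  --network-interface-id "$OLD_ENI_ID" \
  --instance-id "$NEW_INSTANCE_ID" \
  --device-index 1
```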

## Related resources
<a name="deploy-a-cassandra-cluster-on-amazon-ec2-with-private-static-ips-to-avoid-rebalancing-resources"></a>
+ [Installing DSE OpsCenter 6.1 from the Debian package](https://docs.datastax.com/en/opscenter/6.1/opsc/install/opscInstallDeb_t.html) (DataStax documentation)
+ [How to make a secondary network interface work in an Ubuntu EC2 instance](https://www.youtube.com/watch?v=RmwGYXchb4E) (AWS Knowledge Center video)
+ [Best Practices for Running Apache Cassandra on Amazon EC2](https://aws.amazon.com/blogs/big-data/best-practices-for-running-apache-cassandra-on-amazon-ec2/) (AWS blog post)

# Extend VRFs to AWS by using AWS Transit Gateway Connect
<a name="extend-vrfs-to-aws-by-using-aws-transit-gateway-connect"></a>

*Adam Till, Yashar Araghi, Vikas Dewangan, and Mohideen HajaMohideen, Amazon Web Services*

## Summary
<a name="extend-vrfs-to-aws-by-using-aws-transit-gateway-connect-summary"></a>

Virtual routing and forwarding (VRF) is a feature of traditional networks. It uses isolated logical routing domains, in the form of route tables, to separate network traffic within the same physical infrastructure. You can configure AWS Transit Gateway to support VRF isolation when you connect your on-premises network to AWS. This pattern uses a sample architecture to connect on-premises VRFs to different transit gateway route tables.

This pattern uses transit virtual interfaces (VIFs) in AWS Direct Connect and transit gateway Connect attachments to extend the VRFs. A [transit VIF](https://docs.aws.amazon.com/directconnect/latest/UserGuide/WorkingWithVirtualInterfaces.html) is used to access one or more Amazon VPC transit gateways that are associated with Direct Connect gateways. A [transit gateway Connect attachment](https://docs.aws.amazon.com/vpc/latest/tgw/tgw-connect.html) connects a transit gateway with a third-party virtual appliance that is running in a VPC. A transit gateway Connect attachment supports the Generic Routing Encapsulation (GRE) tunnel protocol for high performance, and it supports Border Gateway Protocol (BGP) for dynamic routing.

The approach described in this pattern has the following benefits:
+ Using Transit Gateway Connect, you can advertise up to 1,000 routes to the Transit Gateway Connect peer and receive up to 5,000 routes from it. Using the Direct Connect transit VIF feature without Transit Gateway Connect is limited to 20 prefixes per transit gateway.
+ You can maintain the traffic isolation and use Transit Gateway Connect to provide hosted services on AWS, regardless of the IP address schemas your customers are using.
+ The VRF traffic doesn’t need to traverse a public virtual interface. This makes it easier to adhere to compliance and security requirements in many organizations.
+ Each GRE tunnel supports up to 5 Gbps, and you can have up to four GRE tunnels per transit gateway Connect attachment. This is faster than many other connection types, such as AWS Site-to-Site VPN connections that support up to 1.25 Gbps.

## Prerequisites and limitations
<a name="extend-vrfs-to-aws-by-using-aws-transit-gateway-connect-prereqs"></a>

**Prerequisites**
+ The required AWS accounts have been created (see the architecture for details).
+ Permissions to assume an AWS Identity and Access Management (IAM) role in each account.
+ The IAM roles in each account must have permissions to provision AWS Transit Gateway and AWS Direct Connect resources. For more information, see [Authentication and access control for your transit gateways](https://docs.aws.amazon.com/vpc/latest/tgw/transit-gateway-authentication-access-control.html) and [Identity and access management for Direct Connect](https://docs.aws.amazon.com/directconnect/latest/UserGuide/security-iam.html).
+ The Direct Connect connections have been successfully created. For more information, see [Create a connection using the Connection wizard](https://docs.aws.amazon.com/directconnect/latest/UserGuide/dedicated_connection.html#create-connection).

**Limitations**
+ There are limits for transit gateway attachments to the VPCs in the production, QA, and development accounts. For more information, see [Transit gateway attachments to a VPC](https://docs.aws.amazon.com/vpc/latest/tgw/tgw-vpc-attachments.html).
+ There are limits for creating and using Direct Connect gateways. For more information, see [AWS Direct Connect quotas](https://docs.aws.amazon.com/directconnect/latest/UserGuide/limits.html).

## Architecture
<a name="extend-vrfs-to-aws-by-using-aws-transit-gateway-connect-architecture"></a>

**Target architecture**

The following sample architecture provides a reusable solution to deploy transit VIFs with transit gateway Connect attachments. This architecture provides resilience by using multiple Direct Connect locations. For more information, see [Maximum resiliency](https://docs.aws.amazon.com/directconnect/latest/UserGuide/maximum_resiliency.html) in the Direct Connect documentation. The on-premises network has production, QA, and development VRFs that are extended to AWS and isolated by using dedicated route tables.

![\[Architecture diagram of using AWS Direct Connect and AWS Transit Gateway resources to extend VRFs\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/db17e177-6c94-4d81-ab39-0923ecab2f1b/images/10be0625-8574-40eb-bc00-bb0a07d0dc26.png)


In the AWS environment, two accounts are dedicated to extending the VRFs: a *Direct Connect account* and a *network hub account*. The Direct Connect account contains the connection and the transit VIFs for each router. You create the transit VIFs from the Direct Connect account but deploy them to the network hub account so that you can associate them with the Direct Connect gateway in the network hub account. The network hub account contains the Direct Connect gateway and transit gateway. The AWS resources are connected as follows:

1. Transit VIFs connect the routers in the Direct Connect locations with AWS Direct Connect in the Direct Connect account.

1. A transit VIF connects Direct Connect with the Direct Connect gateway in the network hub account.

1. A [transit gateway association](https://docs.aws.amazon.com/directconnect/latest/UserGuide/direct-connect-transit-gateways.html) connects the Direct Connect gateway with the transit gateway in the network hub account.

1. [Transit gateway Connect attachments](https://docs.aws.amazon.com/vpc/latest/tgw/tgw-connect.html) connect the transit gateway with the VPCs in the production, QA, and development accounts.

*Transit VIF architecture*

The following diagram shows the configuration details for the transit VIFs. This sample architecture uses a VLAN for the tunnel source, but you could also use a loopback.

![\[Configuration details for the transit VIF connections between the routers and AWS Direct Connect\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/db17e177-6c94-4d81-ab39-0923ecab2f1b/images/e88d2546-61ef-4531-972b-089cdf44ed67.png)


The following are the configuration details, such as autonomous system numbers (ASNs), for the transit VIFs.


| Resource | Item | Detail | 
| --- |--- |--- |
| router-01 | ASN | 65534 | 
| router-02 | ASN | 65534 | 
| router-03 | ASN | 65534 | 
| router-04 | ASN | 65534 | 
| Direct Connect gateway | ASN | 64601 | 
| Transit gateway | ASN | 64600 | 
| Transit gateway | CIDR block | 10.100.254.0/24 | 

*Transit gateway Connect architecture*

The following diagram and tables describe how to configure a single VRF through a transit gateway Connect attachment. For additional VRFs, assign unique tunnel IDs, transit gateway GRE IP addresses, and BGP inside CIDR blocks. The peer GRE IP address matches the router peer IP address from the transit VIF.

![\[Configuration details for the GRE tunnels between the routers and the transit gateway\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/db17e177-6c94-4d81-ab39-0923ecab2f1b/images/e58278e1-f3b4-442d-95d9-1dafab4aa5ac.png)


The following table contains router configuration details.


| Router | Tunnel | IP address | Source | Destination | 
| --- |--- |--- |--- |--- |
| router-01 | Tunnel 1 | 169.254.101.17 | VLAN 60 (169.254.100.1) | 10.100.254.1 | 
| router-02 | Tunnel 11 | 169.254.101.81 | VLAN 61 (169.254.100.5) | 10.100.254.11 | 
| router-03 | Tunnel 21 | 169.254.101.145 | VLAN 62 (169.254.100.9) | 10.100.254.21 | 
| router-04 | Tunnel 31 | 169.254.101.209 | VLAN 63 (169.254.100.13) | 10.100.254.31 | 

The following table contains transit gateway configuration details.


| Tunnel | Transit gateway GRE IP address | Peer GRE IP address | BGP inside CIDR blocks | 
| --- |--- |--- |--- |
| Tunnel 1 | 10.100.254.1 | 169.254.100.1 (VLAN 60) | 169.254.101.16/29 | 
| Tunnel 11 | 10.100.254.11 | 169.254.100.5 (VLAN 61) | 169.254.101.80/29 | 
| Tunnel 21 | 10.100.254.21 | 169.254.100.9 (VLAN 62) | 169.254.101.144/29 | 
| Tunnel 31 | 10.100.254.31 | 169.254.100.13 (VLAN 63) | 169.254.101.208/29 | 

**Deployment**

The [Epics](#extend-vrfs-to-aws-by-using-aws-transit-gateway-connect-epics) section describes how to deploy a sample configuration for a single VRF across multiple customer routers. After steps 1–5 are complete, you can create new transit gateway Connect attachments by using steps 6–7 for every new VRF that you’re extending into AWS:

1. Create the transit gateway.

1. Create a transit gateway route table for each VRF.

1. Create the transit virtual interfaces.

1. Create the Direct Connect gateway.

1. Create the Direct Connect gateway virtual interface and gateway associations with allowed prefixes.

1. Create the transit gateway Connect attachment.

1. Create the Transit Gateway Connect peers.

1. Associate the transit gateway Connect attachment with the route table.

1. Advertise routes to the routers.
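
Steps 6–7 can be sketched for Tunnel 1 with the AWS CLI, using the GRE and BGP values from the configuration tables above. The transit gateway ID, VPC attachment ID, and Connect attachment ID are hypothetical placeholders.

```shell
# Placeholder IDs -- replace with your transit gateway and the transport
# attachment (for example, a VPC attachment) for the Connect attachment.
TGW_ID=tgw-0123456789abcdef0
TRANSPORT_ATTACHMENT_ID=tgw-attach-0123456789abcdef0

# Step 6: create the transit gateway Connect attachment over the
# transport attachment.
CONNECT_ATTACHMENT_ID=$(aws ec2 create-transit-gateway-connect \
  --transport-transit-gateway-attachment-id "$TRANSPORT_ATTACHMENT_ID" \
  --options Protocol=gre \
  --query 'TransitGatewayConnect.TransitGatewayAttachmentId' --output text)

# Step 7: create the Connect peer for Tunnel 1. The GRE addresses, inside
# CIDR block, and peer ASN come from the tables above.
aws ec2 create-transit-gateway-connect-peer \
  --transit-gateway-attachment-id "$CONNECT_ATTACHMENT_ID" \
  --transit-gateway-address 10.100.254.1 \
  --peer-address 169.254.100.1 \
  --inside-cidr-blocks 169.254.101.16/29 \
  --bgp-options PeerAsn=65534
```

For each additional VRF, repeat these two commands with a unique tunnel ID, transit gateway GRE IP address, and BGP inside CIDR block.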

## Tools
<a name="extend-vrfs-to-aws-by-using-aws-transit-gateway-connect-tools"></a>

**AWS services**
+ [AWS Direct Connect](https://docs.aws.amazon.com/directconnect/latest/UserGuide/Welcome.html) links your internal network to a Direct Connect location over a standard Ethernet fiber-optic cable. With this connection, you can create virtual interfaces directly to public AWS services while bypassing internet service providers in your network path.
+ [AWS Transit Gateway](https://docs.aws.amazon.com/vpc/latest/tgw/what-is-transit-gateway.html) is a central hub that connects virtual private clouds (VPCs) and on-premises networks.
+ [Amazon Virtual Private Cloud (Amazon VPC)](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html) helps you launch AWS resources into a virtual network that you’ve defined. This virtual network resembles a traditional network that you’d operate in your own data center, with the benefits of using the scalable infrastructure of AWS.

## Epics
<a name="extend-vrfs-to-aws-by-using-aws-transit-gateway-connect-epics"></a>

### Plan the architecture
<a name="plan-the-architecture"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create custom architecture diagrams. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/extend-vrfs-to-aws-by-using-aws-transit-gateway-connect.html) | Cloud architect, Network administrator | 

### Create the Transit Gateway resources
<a name="create-the-transit-gateway-resources"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the transit gateway. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/extend-vrfs-to-aws-by-using-aws-transit-gateway-connect.html) | Network administrator, Cloud architect | 
| Create the transit gateway route table. | Follow the instructions in [Create a transit gateway route table](https://docs.aws.amazon.com/vpc/latest/tgw/tgw-route-tables.html#create-tgw-route-table). Note the following for this pattern:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/extend-vrfs-to-aws-by-using-aws-transit-gateway-connect.html) | Cloud architect, Network administrator | 

### Create the transit virtual interfaces
<a name="create-the-transit-virtual-interfaces"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the transit virtual interfaces. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/extend-vrfs-to-aws-by-using-aws-transit-gateway-connect.html) | Cloud architect, Network administrator | 

### Create the Direct Connect resources
<a name="create-the-direct-connect-resources"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a Direct Connect gateway. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/extend-vrfs-to-aws-by-using-aws-transit-gateway-connect.html) | Cloud architect, Network administrator | 
| Attach the Direct Connect gateway to the transit VIFs. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/extend-vrfs-to-aws-by-using-aws-transit-gateway-connect.html) | Cloud architect, Network administrator | 
| Create the Direct Connect gateway associations with allowed prefixes. | In the network hub account, follow the instructions in [To associate a transit gateway](https://docs.aws.amazon.com/directconnect/latest/UserGuide/direct-connect-transit-gateways.html#associate-tgw-with-direct-connect-gateway). Note the following for this pattern:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/extend-vrfs-to-aws-by-using-aws-transit-gateway-connect.html)Creating this association automatically creates a transit gateway attachment that has a Direct Connect gateway resource type. This attachment does not need to be associated with a transit gateway route table. | Cloud architect, Network administrator | 
| Create the transit gateway Connect attachment. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/extend-vrfs-to-aws-by-using-aws-transit-gateway-connect.html) | Cloud architect, Network administrator | 
| Create the Transit Gateway Connect peers. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/extend-vrfs-to-aws-by-using-aws-transit-gateway-connect.html) | Cloud architect, Network administrator | 

### Advertise routes to the routers
<a name="advertise-routes-to-the-routers"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Advertise the routes. | Associate the new transit gateway Connect attachment with the route table you created previously for this VRF. For example, associate the production transit gateway Connect attachment with the `Production-VRF` route table. Create a static route for the prefix that is advertised to the routers.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/extend-vrfs-to-aws-by-using-aws-transit-gateway-connect.html) | Network administrator, Cloud architect | 

## Related resources
<a name="extend-vrfs-to-aws-by-using-aws-transit-gateway-connect-resources"></a>

**AWS documentation**
+ Direct Connect documentation
  + [Working with Direct Connect gateways](https://docs.aws.amazon.com/directconnect/latest/UserGuide/direct-connect-gateways.html)
  + [Transit gateway associations](https://docs.aws.amazon.com/directconnect/latest/UserGuide/direct-connect-transit-gateways.html)
  + [AWS Direct Connect virtual interfaces](https://docs.aws.amazon.com/directconnect/latest/UserGuide/WorkingWithVirtualInterfaces.html)
+ Transit Gateway documentation
  + [Working with transit gateways](https://docs.aws.amazon.com/vpc/latest/tgw/working-with-transit-gateways.html)
  + [Transit gateway attachments to a Direct Connect gateway](https://docs.aws.amazon.com/vpc/latest/tgw/tgw-dcg-attachments.html)
  + [Transit gateway Connect attachments and Transit Gateway Connect peers](https://docs.aws.amazon.com/vpc/latest/tgw/tgw-connect.html)
  + [Create a transit gateway Connect attachment](https://docs.aws.amazon.com/vpc/latest/tgw/tgw-connect.html#create-tgw-connect-attachment)

**AWS blog posts**
+ [Segmenting hybrid networks with AWS Transit Gateway Connect](https://aws.amazon.com/blogs/networking-and-content-delivery/segmenting-hybrid-networks-with-aws-transit-gateway-connect/)
+ [Using AWS Transit Gateway Connect to extend VRFs and increase IP prefix advertisement](https://aws.amazon.com/blogs/networking-and-content-delivery/using-aws-transit-gateway-connect-to-extend-vrfs-and-increase-ip-prefix-advertisement/)

## Attachments
<a name="attachments-db17e177-6c94-4d81-ab39-0923ecab2f1b"></a>

To access additional content that is associated with this document, unzip the following file: [attachment.zip](samples/p-attach/db17e177-6c94-4d81-ab39-0923ecab2f1b/attachments/attachment.zip)

# Get Amazon SNS notifications when the key state of an AWS KMS key changes
<a name="get-amazon-sns-notifications-when-the-key-state-of-an-aws-kms-key-changes"></a>

*Shubham Harsora, Aromal Raj Jayarajan, and Navdeep Pareek, Amazon Web Services*

## Summary
<a name="get-amazon-sns-notifications-when-the-key-state-of-an-aws-kms-key-changes-summary"></a>

The data and metadata associated with an AWS Key Management Service (AWS KMS) key are lost when that key is deleted. The deletion is irreversible, and you can't recover the lost data (including encrypted data). You can help prevent data loss by setting up a notification system that alerts you to changes in the [key states](https://docs.aws.amazon.com/kms/latest/developerguide/key-state.html#key-state-cmk-type) of your AWS KMS keys.

This pattern shows you how to monitor status changes to AWS KMS keys by using Amazon EventBridge and Amazon Simple Notification Service (Amazon SNS) to issue automated notifications whenever the key state of an AWS KMS key changes to `Disabled` or `PendingDeletion`. For example, if a user tries to disable or delete an AWS KMS key, you receive an email notification with details about the attempted status change. The pattern also notifies you when the deletion of an AWS KMS key is scheduled.

## Prerequisites and limitations
<a name="get-amazon-sns-notifications-when-the-key-state-of-an-aws-kms-key-changes-prereqs"></a>

**Prerequisites**
+ An active AWS account with an AWS Identity and Access Management (IAM) user
+ An [AWS KMS key](https://docs.aws.amazon.com/kms/latest/developerguide/getting-started.html)

## Architecture
<a name="get-amazon-sns-notifications-when-the-key-state-of-an-aws-kms-key-changes-architecture"></a>

**Technology stack**
+ Amazon EventBridge
+ AWS Key Management Service (AWS KMS)
+ Amazon Simple Notification Service (Amazon SNS)

**Target architecture**

The following diagram shows an architecture for building an automated monitoring and notification process for detecting any changes to the state of an AWS KMS key.

![\[Architecture for building an automated monitoring and notification process\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/2534df87-a6fd-4360-9b5d-4a8b1f533de3/images/0cb6a6b0-405b-4d26-ad04-2067176aa086.png)


The diagram shows the following workflow:

1. A user disables or schedules the deletion of an AWS KMS key.

1. An EventBridge rule matches the resulting `Disabled` or `PendingDeletion` event.

1. The EventBridge rule invokes the Amazon SNS topic.

1. Amazon SNS sends an email notification message to the users.

**Note**  
You can customize the email message to meet your organization's needs. We recommend including information about the entities where the AWS KMS key is used. This can help users understand the impact of deleting the AWS KMS key. You can also schedule a reminder email notification that's sent one or two days before the AWS KMS key is deleted.
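
As a sketch of the EventBridge rule behind this workflow, the following AWS CLI commands create a rule that matches `DisableKey` and `ScheduleKeyDeletion` API calls and targets an SNS topic. The rule name and topic ARN are hypothetical, the event pattern assumes that CloudTrail management events are enabled in the account, and the CloudFormation template in this pattern provisions equivalent resources for you.

```shell
# Hypothetical names -- the CloudFormation stack creates its own.
RULE_NAME=kms-key-state-change
TOPIC_ARN=arn:aws:sns:us-east-1:111122223333:kms-key-alerts

# Match KMS DisableKey and ScheduleKeyDeletion calls recorded by CloudTrail.
aws events put-rule \
  --name "$RULE_NAME" \
  --event-pattern '{
    "source": ["aws.kms"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
      "eventSource": ["kms.amazonaws.com"],
      "eventName": ["DisableKey", "ScheduleKeyDeletion"]
    }
  }'

# Route matched events to the SNS topic that emails subscribers.
aws events put-targets \
  --rule "$RULE_NAME" \
  --targets "Id"="sns-alert","Arn"="$TOPIC_ARN"
```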

**Automation and scale**

The AWS CloudFormation stack deploys all the necessary resources and services for this pattern to work. You can implement the pattern independently in a single account, or by using [AWS CloudFormation StackSets](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/what-is-cfnstacksets.html) for multiple independent accounts or [organizational units](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_ous.html) in AWS Organizations.

## Tools
<a name="get-amazon-sns-notifications-when-the-key-state-of-an-aws-kms-key-changes-tools"></a>
+ [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) helps you set up AWS resources, provision them quickly and consistently, and manage them throughout their lifecycle across AWS accounts and AWS Regions. The CloudFormation template for this pattern describes all the AWS resources that you want, and CloudFormation provisions and configures those resources for you.
+ [Amazon EventBridge](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-what-is.html) is a serverless event bus service that helps you connect your applications with real-time data from a variety of sources. EventBridge delivers a stream of real-time data from your own applications and AWS services, and it routes that data to targets such as AWS Lambda. EventBridge simplifies the process of building event-driven architectures.
+ [AWS Key Management Service (AWS KMS)](https://docs.aws.amazon.com/kms/latest/developerguide/overview.html) helps you create and control cryptographic keys to help protect your data.
+ [Amazon Simple Notification Service (Amazon SNS)](https://docs.aws.amazon.com/sns/latest/dg/welcome.html) helps you coordinate and manage the exchange of messages between publishers and clients, including web servers and email addresses.

**Code**

The code for this pattern is available in the GitHub [Monitor AWS KMS keys disable and scheduled deletion](https://github.com/aws-samples/aws-kms-deletion-notification) repository.

## Epics
<a name="get-amazon-sns-notifications-when-the-key-state-of-an-aws-kms-key-changes-epics"></a>

### Deploy the CloudFormation template
<a name="deploy-the-cloudformation-template"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clone the repository. | Clone the GitHub [Monitor AWS KMS keys disable and scheduled deletion](https://github.com/aws-samples/aws-kms-deletion-notification) repository to your local machine by running the following command: `git clone https://github.com/aws-samples/aws-kms-deletion-notification` | AWS administrator, Cloud architect | 
| Update the template's parameters. | In a code editor, open the `Alerting-KMS-Events.yaml` CloudFormation template that you cloned from the repository, and then update the following parameters:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/get-amazon-sns-notifications-when-the-key-state-of-an-aws-kms-key-changes.html) | AWS administrator, Cloud architect | 
| Deploy the CloudFormation template. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/get-amazon-sns-notifications-when-the-key-state-of-an-aws-kms-key-changes.html) | AWS administrator, Cloud architect | 

### Confirm the subscription
<a name="confirm-the-subscription"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Confirm the subscription email. | After the CloudFormation template successfully deploys, Amazon SNS sends a subscription confirmation message to the email address that you provided in the CloudFormation template. To receive notifications, you must confirm this email subscription. For more information, see [Confirm the subscription](https://docs.aws.amazon.com/sns/latest/dg/SendMessageToHttp.confirm.html) in the Amazon SNS Developer Guide. | AWS administrator, Cloud architect | 

### Test the subscription notification
<a name="test-the-subscription-notification"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Disable AWS KMS keys. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/get-amazon-sns-notifications-when-the-key-state-of-an-aws-kms-key-changes.html) | AWS administrator | 
| Validate the subscription. | Confirm that you received the Amazon SNS notification email. | AWS administrator | 

### Clean up resources
<a name="clean-up-resources"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Delete the CloudFormation stack. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/get-amazon-sns-notifications-when-the-key-state-of-an-aws-kms-key-changes.html) | AWS administrator | 

## Related resources
<a name="get-amazon-sns-notifications-when-the-key-state-of-an-aws-kms-key-changes-resources"></a>
+ [AWS CloudFormation](https://aws.amazon.com/cloudformation/) (AWS documentation)
+ [Creating a stack on the AWS CloudFormation console](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-console-create-stack.html) (AWS CloudFormation documentation)
+ [Building event-driven architectures on AWS](https://catalog.us-east-1.prod.workshops.aws/workshops/63320e83-6abc-493d-83d8-f822584fb3cb/en-US) (AWS Workshop Studio documentation)
+ [AWS Key Management Service Best Practices](https://d1.awsstatic.com/whitepapers/aws-kms-best-practices.pdf) (AWS Whitepaper)
+ [Security best practices for AWS Key Management Service](https://docs.aws.amazon.com/kms/latest/developerguide/best-practices.html) (AWS KMS Developer Guide)

## Additional information
<a name="get-amazon-sns-notifications-when-the-key-state-of-an-aws-kms-key-changes-additional"></a>

Amazon SNS provides in-transit encryption by default. To align with security best practices, you can also enable server-side encryption for Amazon SNS by using an AWS KMS customer managed key.

# Preserve routable IP space in multi-account VPC designs for non-workload subnets
<a name="preserve-routable-ip-space-in-multi-account-vpc-designs-for-non-workload-subnets"></a>

*Adam Spicer, Amazon Web Services*

## Summary
<a name="preserve-routable-ip-space-in-multi-account-vpc-designs-for-non-workload-subnets-summary"></a>

Amazon Web Services (AWS) has published best practices that recommend using dedicated subnets in a virtual private cloud (VPC) for both [transit gateway attachments](https://docs.aws.amazon.com/vpc/latest/tgw/tgw-best-design-practices.html) and [Gateway Load Balancer endpoints](https://docs.aws.amazon.com/elasticloadbalancing/latest/gateway/getting-started.html) (to support [AWS Network Firewall](https://docs.aws.amazon.com/network-firewall/latest/developerguide/firewall-high-level-steps.html) or third-party appliances). These subnets are used to contain elastic network interfaces for these services. If you use both AWS Transit Gateway and a Gateway Load Balancer, two subnets are created in each Availability Zone for the VPC. Because of the way VPCs are designed, these extra subnets [can’t be smaller than a /28 mask](https://docs.aws.amazon.com/vpc/latest/userguide/configure-subnets.html#subnet-sizing) and can consume precious routable IP space that could otherwise be used for routable workloads. This pattern demonstrates how you can use a secondary, non-routable Classless Inter-Domain Routing (CIDR) range for these dedicated subnets to help preserve routable IP space.
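To put the cost in concrete terms, the following sketch uses Python's standard `ipaddress` module to compute how many addresses the dedicated subnets consume in a two-Availability-Zone VPC that uses both a transit gateway attachment subnet and a Gateway Load Balancer endpoint subnet. The 100.64.0.0/28 range is purely illustrative.

```python
import ipaddress

# The minimum VPC subnet size is /28 (16 addresses). With a transit gateway
# attachment subnet and a Gateway Load Balancer endpoint subnet in each of
# two Availability Zones, the dedicated subnets consume:
per_subnet = ipaddress.ip_network("100.64.0.0/28").num_addresses  # 16
consumed = 2 * 2 * per_subnet  # 2 AZs x 2 dedicated subnets x 16 addresses
print(consumed)  # 64 addresses that this pattern keeps out of routable space
```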

## Prerequisites and limitations
<a name="preserve-routable-ip-space-in-multi-account-vpc-designs-for-non-workload-subnets-prereqs"></a>

**Prerequisites**
+ [Multi-VPC strategy](https://docs.aws.amazon.com/whitepapers/latest/building-scalable-secure-multi-vpc-network-infrastructure/welcome.html) for routable IP space
+ A non-routable CIDR range for the services you’re using ([transit gateway attachments](https://docs.aws.amazon.com/vpc/latest/tgw/tgw-best-design-practices.html) and [Gateway Load Balancer](https://aws.amazon.com/blogs/apn/centralized-traffic-inspection-with-gateway-load-balancer-on-aws/) or [Network Firewall endpoints](https://aws.amazon.com/blogs/networking-and-content-delivery/deployment-models-for-aws-network-firewall/))

## Architecture
<a name="preserve-routable-ip-space-in-multi-account-vpc-designs-for-non-workload-subnets-architecture"></a>

**Target architecture**

This pattern includes two reference architectures: one architecture has subnets for transit gateway (TGW) attachments and a Gateway Load Balancer endpoint (GWLBe), and the second architecture has subnets for TGW attachments only.

**Architecture 1 ‒ TGW-attached VPC with ingress routing to an appliance**

The following diagram represents a reference architecture for a VPC that spans two Availability Zones. On ingress, the VPC uses an [ingress routing pattern](https://aws.amazon.com/blogs/aws/new-vpc-ingress-routing-simplifying-integration-of-third-party-appliances/) to direct traffic destined for the public subnet to a [bump-in-the-wire appliance](https://aws.amazon.com/blogs/networking-and-content-delivery/introducing-aws-gateway-load-balancer-supported-architecture-patterns/) for firewall inspection. A TGW attachment supports egress from the private subnets to a separate VPC.

This pattern uses a non-routable CIDR range for the TGW attachment subnet and the GWLBe subnet. In the TGW routing table, this non-routable CIDR is blackholed by a set of static, more specific routes. Even if the VPC's CIDR ranges were propagated to the TGW routing table, these more specific blackhole routes would take precedence.

In this example, the /23 routable CIDR is fully allocated to routable subnets.

![\[TGW-attached VPC with ingress routing to an appliance.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/0171d91d-ab1e-41ca-a425-1e6e610080e1/images/adad1c83-cdc2-4c5e-aa35-f47fc31af384.png)


**Architecture 2 – TGW-attached VPC**

The following diagram represents another reference architecture for a VPC that spans two Availability Zones. A TGW attachment supports outbound traffic (egress) from the private subnets to a separate VPC. This architecture uses a non-routable CIDR range only for the TGW attachment subnets. In the TGW routing table, this non-routable CIDR is blackholed by a set of static, more specific routes. Even if the VPC's CIDR ranges were propagated to the TGW routing table, these more specific blackhole routes would take precedence.

In this example, the /23 routable CIDR is fully allocated to routable subnets.

![\[VPC spans 2 availability zones with TGW attachment for egress from private subnets to separate VPC.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/0171d91d-ab1e-41ca-a425-1e6e610080e1/images/31a2a241-5be6-425e-93e9-5ff7ffeca3a9.png)


## Tools
<a name="preserve-routable-ip-space-in-multi-account-vpc-designs-for-non-workload-subnets-tools"></a>

**AWS services and resources**
+ [Amazon Virtual Private Cloud (Amazon VPC)](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html) helps you launch AWS resources into a virtual network that you’ve defined. This virtual network resembles a traditional network that you’d operate in your own data center, with the benefits of using the scalable infrastructure of AWS. In this pattern, VPC secondary CIDRs are used to preserve routable IP space in workload CIDRs.
+ [Internet gateway ingress routing](https://aws.amazon.com/blogs/aws/new-vpc-ingress-routing-simplifying-integration-of-third-party-appliances/) (edge associations) can be used along with Gateway Load Balancer endpoints for dedicated non-routable subnets.
+ [AWS Transit Gateway](https://docs.aws.amazon.com/vpc/latest/tgw/what-is-transit-gateway.html) is a central hub that connects VPCs and on-premises networks. In this pattern, VPCs are centrally attached to a transit gateway, and the transit gateway attachments are in a dedicated non-routable subnet.
+ [Gateway Load Balancers](https://docs.aws.amazon.com/elasticloadbalancing/latest/gateway/introduction.html) help you deploy, scale, and manage virtual appliances, such as firewalls, intrusion detection and prevention systems, and deep packet inspection systems. The gateway serves as a single entry and exit point for all traffic. In this pattern, endpoints for a Gateway Load Balancer can be used in a dedicated non-routable subnet.
+ [AWS Network Firewall](https://docs.aws.amazon.com/network-firewall/latest/developerguide/what-is-aws-network-firewall.html) is a stateful, managed, network firewall and intrusion detection and prevention service for VPCs in the AWS Cloud. In this pattern, endpoints for a firewall can be used in a dedicated non-routable subnet.

**Code repository**

A runbook and AWS CloudFormation templates for this pattern are available in the GitHub [Non-Routable Secondary CIDR Patterns](https://github.com/aws-samples/non-routable-secondary-vpc-cidr-patterns/) repository. You can use the sample files to set up a working lab in your environment.

## Best practices
<a name="preserve-routable-ip-space-in-multi-account-vpc-designs-for-non-workload-subnets-best-practices"></a>

**AWS Transit Gateway**
+ Use a separate subnet for each transit gateway VPC attachment.
+ Allocate a /28 subnet from the secondary non-routable CIDR range for the transit gateway attachment subnets.
+ In each transit gateway routing table, add a static, more specific route for the non-routable CIDR range as a blackhole.

**Gateway Load Balancer and ingress routing**
+ Use ingress routing to direct traffic from the internet to the Gateway Load Balancer endpoints.
+ Use a separate subnet for each Gateway Load Balancer endpoint.
+ Allocate a /28 subnet from the secondary non-routable CIDR range for the Gateway Load Balancer endpoint subnets.
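
The /28 allocations described above can be derived programmatically. This sketch splits an illustrative secondary non-routable CIDR (100.64.0.0/26; substitute your own range) into the four /28 subnets needed for one TGW attachment subnet and one GWLBe subnet in each of two Availability Zones.

```python
import ipaddress

# Illustrative secondary non-routable CIDR; a /26 yields exactly four /28s.
secondary = ipaddress.ip_network("100.64.0.0/26")
subnets = [str(s) for s in secondary.subnets(new_prefix=28)]
print(subnets)
# Assign one /28 per AZ to the TGW attachment subnets and one per AZ to
# the GWLBe subnets.
```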

## Epics
<a name="preserve-routable-ip-space-in-multi-account-vpc-designs-for-non-workload-subnets-epics"></a>

### Create VPCs
<a name="create-vpcs"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Determine non-routable CIDR range. | Determine a non-routable CIDR range that will be used for the transit gateway attachment subnet and (optionally) for any Gateway Load Balancer or Network Firewall endpoint subnets. This CIDR range will be used as the secondary CIDR for the VPC. It must **not be routable** from the VPC’s primary CIDR range or the larger network. | Cloud architect | 
| Determine routable CIDR ranges for VPCs. | Determine a set of routable CIDR ranges that will be used for your VPCs. This CIDR range will be used as the primary CIDR for your VPCs. | Cloud architect | 
| Create VPCs. | Create your VPCs and attach them to the transit gateway. Each VPC should have a primary CIDR range that is routable and a secondary CIDR range that is non-routable, based on the ranges you determined in the previous two steps. | Cloud architect | 

### Configure Transit Gateway blackhole routes
<a name="configure-transit-gateway-blackhole-routes"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create more specific non-routable CIDRs as blackholes. | Each transit gateway routing table needs to have a set of blackhole routes created for the non-routable CIDRs. These are configured to ensure that any traffic from the secondary VPC CIDR remains non-routable and doesn't leak into the larger network. These routes should be more specific than the non-routable CIDR that is set as the secondary CIDR on the VPC. For example, if the secondary non-routable CIDR is 100.64.0.0/26, the blackhole routes in the transit gateway routing table should be 100.64.0.0/27 and 100.64.0.32/27. | Cloud architect | 
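
The /27 split in the example above can be computed with Python's standard `ipaddress` module; the CIDR values mirror the example in the task description.

```python
import ipaddress

# Split the secondary non-routable CIDR into the two more specific /27
# routes to configure as blackholes in the transit gateway routing table.
secondary = ipaddress.ip_network("100.64.0.0/26")
blackholes = [str(n) for n in secondary.subnets(prefixlen_diff=1)]
print(blackholes)  # ['100.64.0.0/27', '100.64.0.32/27']
```

Each resulting route can then be added as a blackhole route, for example with the AWS CLI command `aws ec2 create-transit-gateway-route --blackhole`.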

## Related resources
<a name="preserve-routable-ip-space-in-multi-account-vpc-designs-for-non-workload-subnets-resources"></a>
+ [Best practices for deploying Gateway Load Balancer](https://aws.amazon.com/blogs/networking-and-content-delivery/best-practices-for-deploying-gateway-load-balancer/)
+ [Distributed Inspection Architectures with Gateway Load Balancer](https://d1.awsstatic.com/architecture-diagrams/ArchitectureDiagrams/distributed-inspection-architectures-gwlb-ra.pdf?did=wp_card&trk=wp_card)
+ [Networking Immersion Day ‒ Internet to VPC Firewall Lab](https://catalog.workshops.aws/networking/en-US/gwlb/lab2-internettovpc)
+ [Transit gateway design best practices](https://docs.aws.amazon.com/vpc/latest/tgw/tgw-best-design-practices.html)

## Additional information
<a name="preserve-routable-ip-space-in-multi-account-vpc-designs-for-non-workload-subnets-additional"></a>

The non-routable secondary CIDR range can also be useful for large-scale container deployments that require many IP addresses. You can combine this pattern with a private NAT gateway to host your container deployments in a non-routable subnet. For more information, see the blog post [How to solve Private IP exhaustion with Private NAT Solution](https://aws.amazon.com/blogs/networking-and-content-delivery/how-to-solve-private-ip-exhaustion-with-private-nat-solution/).

# Provision a Terraform product in AWS Service Catalog by using a code repository
<a name="provision-a-terraform-product-in-aws-service-catalog-by-using-a-code-repository"></a>

*Dr. Rahul Sharad Gaikwad and Tamilselvan P, Amazon Web Services*

## Summary
<a name="provision-a-terraform-product-in-aws-service-catalog-by-using-a-code-repository-summary"></a>

AWS Service Catalog supports self-service provisioning with governance for your [HashiCorp Terraform](https://developer.hashicorp.com/terraform/tutorials/aws-get-started) configurations. If you use Terraform, you can use Service Catalog as the single tool to organize, govern, and distribute your Terraform configurations within AWS at scale. You can access key Service Catalog features, including cataloging of standardized and pre-approved infrastructure as code (IaC) templates, access control, cloud resource provisioning with least-privilege access, versioning, sharing to thousands of AWS accounts, and tagging. End users, such as engineers, database administrators, and data scientists, see a list of the products and versions they have access to, and they can deploy them through a single action.

This pattern helps you deploy AWS resources by using Terraform code. Service Catalog accesses the Terraform code in the GitHub repository, so you can integrate the products with your existing Terraform workflows. Administrators can create Service Catalog portfolios and add Terraform products to them.

The following are the benefits of this solution:
+ Because of the rollback feature in Service Catalog, if any issues occur during deployment, you can revert the product to a previous version.
+ You can easily identify the differences between product versions. This helps you resolve issues during deployment.
+ You can configure a repository connection in Service Catalog, such as to GitHub or GitLab. You can make product changes directly through the repository.

For information about the overall benefits of AWS Service Catalog, see [What is Service Catalog](https://docs.aws.amazon.com/servicecatalog/latest/adminguide/introduction.html).

## Prerequisites and limitations
<a name="provision-a-terraform-product-in-aws-service-catalog-by-using-a-code-repository-prereqs"></a>

**Prerequisites**
+ An active AWS account.
+ A GitHub, Bitbucket, or other repository that contains Terraform configuration files in ZIP format.
+ AWS Serverless Application Model Command Line Interface (AWS SAM CLI), [installed](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/install-sam-cli.html).
+ AWS Command Line Interface (AWS CLI), [installed](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) and [configured](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html).
+ Go, [installed](https://go.dev/doc/install).
+ Python version 3.9, [installed](https://www.python.org/downloads/release/python-3913/). The AWS SAM CLI requires this version of Python.
+ Permissions to write and run AWS Lambda functions and permissions to access and manage Service Catalog products and portfolios.

## Architecture
<a name="provision-a-terraform-product-in-aws-service-catalog-by-using-a-code-repository-architecture"></a>

![\[Architecture diagram of provisioning a Terraform product in Service Catalog from a code repo\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/7d0d76e8-9485-4b3f-915f-481b6a7cdcd9/images/e83fa44a-4ca6-4438-a0d1-99f09a3541bb.png)


The diagram shows the following workflow:

1. When a Terraform configuration is ready, a developer creates a .zip file that contains all of the Terraform code. The developer uploads the .zip file into the code repository that is connected to Service Catalog.

1. An administrator associates the Terraform product to a portfolio in Service Catalog. The administrator also creates a launch constraint that allows end users to provision the product.

1. In Service Catalog, end users launch AWS resources by using the Terraform configuration. They can choose which product version to deploy.

## Tools
<a name="provision-a-terraform-product-in-aws-service-catalog-by-using-a-code-repository-tools"></a>

**AWS services**
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a compute service that helps you run code without needing to provision or manage servers. It runs your code only when needed and scales automatically, so you pay only for the compute time that you use.
+ [AWS Service Catalog](https://docs.aws.amazon.com/servicecatalog/latest/adminguide/introduction.html) helps you centrally manage catalogs of IT services that are approved for AWS. End users can quickly deploy only the approved IT services they need, following the constraints set by your organization.

**Other services**
+ [Go](https://go.dev/doc/install) is an open source programming language that Google supports.
+ [Python](https://www.python.org/) is a general-purpose computer programming language.

**Code repository**

If you require sample Terraform configurations that you can deploy through Service Catalog, you can use the configurations in the GitHub [Amazon Macie Organization Setup Using Terraform](https://github.com/aws-samples/aws-macie-customization-terraform-samples) repository. Use of the code samples in this repository is not required.

## Best practices
<a name="provision-a-terraform-product-in-aws-service-catalog-by-using-a-code-repository-best-practices"></a>
+ Instead of providing values for variables in the Terraform variables file (`terraform.tfvars`), configure the variable values when you launch the product through Service Catalog.
+ Grant access to the portfolio only to specific users or administrators.
+ Follow the principle of least privilege and grant the minimum permissions required to perform a task. For more information, see [Grant least privilege](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html#grant-least-priv) and [Security best practices](https://docs.aws.amazon.com/IAM/latest/UserGuide/IAMBestPracticesAndUseCases.html) in the AWS Identity and Access Management (IAM) documentation.

## Epics
<a name="provision-a-terraform-product-in-aws-service-catalog-by-using-a-code-repository-epics"></a>

### Set up your local workstation
<a name="set-up-your-local-workstation"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| (Optional) Install Docker. | If you want to run the AWS Lambda functions in your development environment, install Docker. For instructions, see [Install Docker Engine](https://docs.docker.com/engine/install/) in the Docker documentation. | DevOps engineer | 
| Install the AWS Service Catalog Engine for Terraform. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/provision-a-terraform-product-in-aws-service-catalog-by-using-a-code-repository.html) | DevOps engineer, AWS administrator | 

### Connect the GitHub repository
<a name="connect-the-github-repository"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a connection to the GitHub repository. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/provision-a-terraform-product-in-aws-service-catalog-by-using-a-code-repository.html) | AWS administrator | 

### Create a Terraform product in Service Catalog
<a name="create-a-terraform-product-in-service-catalog"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the Service Catalog product. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/provision-a-terraform-product-in-aws-service-catalog-by-using-a-code-repository.html) | AWS administrator | 
| Create a portfolio. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/provision-a-terraform-product-in-aws-service-catalog-by-using-a-code-repository.html) | AWS administrator | 
| Add the Terraform product to the portfolio. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/provision-a-terraform-product-in-aws-service-catalog-by-using-a-code-repository.html) | AWS administrator | 
| Create the access policy. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/provision-a-terraform-product-in-aws-service-catalog-by-using-a-code-repository.html) | AWS administrator | 
| Create a custom trust policy. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/provision-a-terraform-product-in-aws-service-catalog-by-using-a-code-repository.html) | AWS administrator | 
| Add a launch constraint to the Service Catalog product. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/provision-a-terraform-product-in-aws-service-catalog-by-using-a-code-repository.html) | AWS administrator | 
| Grant access to the product. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/provision-a-terraform-product-in-aws-service-catalog-by-using-a-code-repository.html) | AWS administrator | 
| Launch the product. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/provision-a-terraform-product-in-aws-service-catalog-by-using-a-code-repository.html) | DevOps engineer | 

### Verify the deployment
<a name="verify-the-deployment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Validate the deployment. | There are two AWS Step Functions state machines for the Service Catalog provisioning workflow: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/provision-a-terraform-product-in-aws-service-catalog-by-using-a-code-repository.html) Check the logs for the `ManageProvisionedProductStateMachine` state machine to confirm that the product was provisioned. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/provision-a-terraform-product-in-aws-service-catalog-by-using-a-code-repository.html) | DevOps engineer | 

### Clean up infrastructure
<a name="clean-up-infrastructure"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Delete provisioned products. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/provision-a-terraform-product-in-aws-service-catalog-by-using-a-code-repository.html) | DevOps engineer | 
| Remove the AWS Service Catalog Engine for Terraform. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/provision-a-terraform-product-in-aws-service-catalog-by-using-a-code-repository.html) | AWS administrator | 

## Related resources
<a name="provision-a-terraform-product-in-aws-service-catalog-by-using-a-code-repository-resources"></a>

**AWS documentation**
+ [Getting started with a Terraform product](https://docs.aws.amazon.com/servicecatalog/latest/adminguide/getstarted-Terraform.html)

**Terraform documentation**
+ [Terraform installation](https://learn.hashicorp.com/tutorials/terraform/install-cli)
+ [Terraform backend configuration](https://developer.hashicorp.com/terraform/language/backend)
+ [Terraform AWS Provider documentation](https://registry.terraform.io/providers/hashicorp/aws/latest/docs)

## Additional information
<a name="provision-a-terraform-product-in-aws-service-catalog-by-using-a-code-repository-additional"></a>

**Access policy**

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "s3:ExistingObjectTag/servicecatalog:provisioning": "true"
                }
            }
        },
        {
            "Action": [
                "s3:CreateBucket*",
                "s3:DeleteBucket*",
                "s3:Get*",
                "s3:List*",
                "s3:PutBucketTagging"
            ],
            "Resource": "arn:aws:s3:::*",
            "Effect": "Allow"
        },
        {
            "Action": [
                "resource-groups:CreateGroup",
                "resource-groups:ListGroupResources",
                "resource-groups:DeleteGroup",
                "resource-groups:Tag"
            ],
            "Resource": "*",
            "Effect": "Allow"
        },
        {
            "Action": [
                "tag:GetResources",
                "tag:GetTagKeys",
                "tag:GetTagValues",
                "tag:TagResources",
                "tag:UntagResources"
            ],
            "Resource": "*",
            "Effect": "Allow"
        }
    ]
}
```

**Trust policy**

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "GivePermissionsToServiceCatalog",
            "Effect": "Allow",
            "Principal": {
                "Service": "servicecatalog.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        },
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::account_id:root"
            },
            "Action": "sts:AssumeRole",
            "Condition": {
                "StringLike": {
                    "aws:PrincipalArn": [
                        "arn:aws:iam::account_id:role/TerraformEngine/TerraformExecutionRole*",
                        "arn:aws:iam::account_id:role/TerraformEngine/ServiceCatalogExternalParameterParserRole*",
                        "arn:aws:iam::account_id:role/TerraformEngine/ServiceCatalogTerraformOSParameterParserRole*"
                    ]
                }
            }
        }
    ]
}
```

# Register multiple AWS accounts with a single email address by using Amazon SES
<a name="register-multiple-aws-accounts-with-a-single-email-address-by-using-amazon-ses"></a>

*Joe Wozniak and Shubhangi Vishwakarma, Amazon Web Services*

## Summary
<a name="register-multiple-aws-accounts-with-a-single-email-address-by-using-amazon-ses-summary"></a>

This pattern describes how you can decouple real email addresses from the email address that’s associated with an AWS account. AWS accounts require a unique email address at the time of account creation. In some organizations, the team that manages AWS accounts must take on the burden of managing many unique email addresses with their messaging team, which can be difficult for large organizations that manage many AWS accounts. Additionally, some email systems don’t support *plus addressing* or *sub-addressing* as defined in [Sieve Email Filtering: Subaddress Extension (RFC 5233)](https://datatracker.ietf.org/doc/html/rfc5233), where an identifier is appended to the local part of the email address after a plus sign (for example, `admin+123456789123@example.com`). This pattern can help you overcome that limitation.

This pattern provides a unique email address vending solution that enables AWS account owners to associate one email address with multiple AWS accounts. The real email addresses of AWS account owners are then associated with these generated email addresses in a table. The solution handles all incoming email for the unique email accounts, looks up the owner of each account, and then forwards any received messages to the owner.  

## Prerequisites and limitations
<a name="register-multiple-aws-accounts-with-a-single-email-address-by-using-amazon-ses-prereqs"></a>

**Prerequisites**
+ Administrative access to an AWS account.
+ Access to a development environment. 
+ (Optional) Familiarity with AWS Cloud Development Kit (AWS CDK) workflows and the Python programming language will help you troubleshoot any issues or make modifications.

**Limitations**
+ The vended email address can be a maximum of 64 characters. For details, see [CreateAccount](https://docs.aws.amazon.com/organizations/latest/APIReference/API_CreateAccount.html) in the *AWS Organizations API reference*.

**Product versions**
+ Node.js version 22.x or later
+ Python 3.13 or later
+ Python packages **pip** and **virtualenv**
+ AWS CDK CLI version 2.1019.2 or later
+ Docker 20.10.x or later

## Architecture
<a name="register-multiple-aws-accounts-with-a-single-email-address-by-using-amazon-ses-architecture"></a>

**Target technology stack**
+ CloudFormation stack
+ AWS Lambda functions
+ Amazon Simple Email Service (Amazon SES) rule and rule set
+ AWS Identity and Access Management (IAM) roles and policies
+ Amazon Simple Storage Service (Amazon S3) bucket and bucket policy
+ AWS Key Management Service (AWS KMS) key and key policy
+ Amazon Simple Notification Service (Amazon SNS) topic and topic policy
+ Amazon DynamoDB table 

**Target architecture**

![\[Target architecture for registering multiple AWS accounts with a single email address\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/1be85b92-69e5-43b2-aeed-27b9509e145e/images/c7ae9d7a-d4e0-412e-97cb-0f3073e012e7.png)


This diagram shows two flows:
+ **Email address vending flow:** This flow, shown in the lower section of the diagram, typically begins with an account vending solution or other automation, or is invoked manually. The request calls a Lambda function with a payload that contains the required metadata. The function uses this information to generate a unique account name and email address, stores them in a DynamoDB table, and returns the values to the caller. These values can then be used to create a new AWS account (typically by using AWS Organizations).
+ **Email forwarding flow:** This flow is shown in the upper section of the diagram. When an AWS account is created by using an email address generated by the vending flow, AWS sends various emails, such as account registration confirmations and periodic notifications, to that address. By following the steps in this pattern, you configure Amazon SES in your AWS account to receive email for the entire domain. The solution configures forwarding rules so that a Lambda function processes all incoming email, checks whether the `TO` address is in the DynamoDB table, and forwards the message to the account owner's email address. This process gives account owners the ability to associate multiple accounts with one email address.
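The two flows can be sketched in a few lines of Python. This is a minimal illustration rather than the repository's implementation: the field names and the dict-based store are assumptions, and the real solution persists the mapping in a DynamoDB table and relays mail through Amazon SES.

```python
# Minimal sketch of the email address vending and forwarding flows.
# Assumptions: the field names and the dict-based store are illustrative;
# the real solution writes the mapping to DynamoDB and forwards via SES.
import uuid

MAX_EMAIL_LENGTH = 64  # AWS Organizations CreateAccount limit

def vend_email(owner_address, domain, store):
    """Generate a unique account email, record its owner, return both."""
    account_name = f"aws-{uuid.uuid4().hex[:12]}"
    account_email = f"{account_name}@{domain}"
    if len(account_email) > MAX_EMAIL_LENGTH:
        raise ValueError(f"vended address exceeds {MAX_EMAIL_LENGTH} characters")
    store[account_email] = owner_address  # lookup key for the forwarding flow
    return {"AccountName": account_name, "AccountEmail": account_email}

def forward_lookup(to_address, store):
    """Forwarding flow: map an incoming TO address back to its owner."""
    return store.get(to_address)
```

When a message arrives for a vended `AccountEmail`, the forwarding function looks up the owner's address and relays the message to it.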

**Automation and scale**

This pattern uses the AWS CDK to fully automate the deployment. The solution uses AWS managed services that will (or can be configured to) scale automatically to meet your needs. The Lambda functions might require additional configuration to meet your scaling needs. For more information, see [Understanding Lambda function scaling](https://docs.aws.amazon.com/lambda/latest/dg/invocation-scaling.html) in the Lambda documentation.

## Tools
<a name="register-multiple-aws-accounts-with-a-single-email-address-by-using-amazon-ses-tools"></a>

**AWS services**
+ [CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) helps you set up AWS resources, provision them quickly and consistently, and manage them throughout their lifecycle across AWS accounts and Regions.
+ [AWS Command Line Interface (AWS CLI)](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) is an open-source tool that helps you interact with AWS services through commands in your command-line shell.
+ [Amazon DynamoDB](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Introduction.html) is a fully managed NoSQL database service that provides fast, predictable, and scalable performance.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.
+ [AWS Key Management Service (AWS KMS)](https://docs.aws.amazon.com/kms/latest/developerguide/overview.html) helps you create and control cryptographic keys to help protect your data.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a compute service that helps you run code without needing to provision or manage servers. It runs your code only when needed and scales automatically, so you pay only for the compute time that you use.
+ [Amazon Simple Email Service (Amazon SES)](https://docs.aws.amazon.com/ses/latest/dg/Welcome.html) helps you send and receive emails by using your own email addresses and domains.
+ [Amazon Simple Notification Service (Amazon SNS)](https://docs.aws.amazon.com/sns/latest/dg/welcome.html) helps you coordinate and manage the exchange of messages between publishers and clients, including web servers and email addresses.
+ [Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) is a cloud-based object storage service that helps you store, protect, and retrieve any amount of data.

**Tools needed for deployment**
+ Development environment with the AWS CLI and IAM access to your AWS account. For details, see the links in the [Related resources](#register-multiple-aws-accounts-with-a-single-email-address-by-using-amazon-ses-resources) section.  
+ On your development system, install the following:
  + Git command line tool, available from the [Git downloads website](https://git-scm.com/downloads).
  + The AWS CLI to configure access credentials for the AWS CDK. For more information, see the [AWS CLI documentation](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html).
  + Python version 3.13 or later, available from the [Python downloads website](https://www.python.org/downloads/).
  + UV for Python package management. For installation instructions, see the [UV installation guide](https://docs.astral.sh/uv/getting-started/installation/).
  + Node.js version 22.x or later. For installation instructions, see the [Node.js documentation](https://nodejs.org/en/learn/getting-started/how-to-install-nodejs).
  + AWS CDK CLI version 2.1019.2 or later. For installation instructions, see the [AWS CDK documentation](https://docs.aws.amazon.com/cdk/v2/guide/getting-started.html#getting-started-install).
  + Docker version 20.10.x or later. For installation instructions, see the [Docker documentation](https://docs.docker.com/engine/install/).

**Code**

The code for this pattern is available in the GitHub [AWS account factory email](https://github.com/aws-samples/aws-account-factory-email) repository.

## Epics
<a name="register-multiple-aws-accounts-with-a-single-email-address-by-using-amazon-ses-epics"></a>

### Allocate a target deployment environment
<a name="allocate-a-target-deployment-environment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Identify or create an AWS account. | Identify an existing or new AWS account to which you have full administrative access, to deploy the email solution. | AWS administrator, Cloud administrator | 
| Set up a deployment environment. | Configure an easy-to-use deployment environment and set up dependencies by following these steps: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/register-multiple-aws-accounts-with-a-single-email-address-by-using-amazon-ses.html) | AWS DevOps, App developer | 

### Set up a verified domain
<a name="set-up-a-verified-domain"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Identify and allocate a domain. | The email forwarding functionality requires a dedicated domain. Identify and allocate a domain or subdomain that you can verify with Amazon SES. This domain should be available to receive incoming email within the AWS account where the email forwarding solution is deployed. Domain requirements: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/register-multiple-aws-accounts-with-a-single-email-address-by-using-amazon-ses.html) | Cloud administrator, Network administrator, DNS administrator | 
| Verify the domain. | Verify that the identified domain can be used to accept incoming email. Complete the instructions in [Verifying your domain for Amazon SES email receiving](https://docs.aws.amazon.com/ses/latest/dg/receiving-email-verification.html) in the Amazon SES documentation. This requires coordination with the person or team who is responsible for the domain's DNS records. | App developer, AWS DevOps | 
| Set up MX records. | Set up your domain with MX records that point to the Amazon SES endpoints in your AWS account and Region. For more information, see [Publishing an MX record for Amazon SES email receiving](https://docs.aws.amazon.com/ses/latest/dg/receiving-email-mx-record.html) in the Amazon SES documentation. | Cloud administrator, Network administrator, DNS administrator | 

### Deploy the email vending and forwarding solution
<a name="deploy-the-email-vending-and-forwarding-solution"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Modify the default values in `cdk.json`. | Edit some of the default values in the `cdk.json` file (in the root of the repository) so that the solution will operate correctly after it is deployed.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/register-multiple-aws-accounts-with-a-single-email-address-by-using-amazon-ses.html) | App developer, AWS DevOps | 
| Deploy the email vending and forwarding solution. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/register-multiple-aws-accounts-with-a-single-email-address-by-using-amazon-ses.html) | App developer, AWS DevOps | 
| Verify that the solution has been deployed. | Verify that the solution deployed successfully before you begin testing:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/register-multiple-aws-accounts-with-a-single-email-address-by-using-amazon-ses.html) | App developer, AWS DevOps | 

### Verify that email vending and forwarding operate as expected
<a name="verify-that-email-vending-and-forwarding-operate-as-expected"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Verify that the API is working. | In this step, you submit test data to the solution's API and confirm that the solution produces the expected output and that backend operations have been performed as expected. Manually run the **Vend Email** Lambda function by using test input. (For an example, see the [sample_vend_request.json file](https://github.com/aws-samples/aws-account-factory-email/blob/main/src/events/sample_vend_request.json).) For `OwnerAddress`, use a valid email address. The API should return an account name and account email with values as expected. | App developer, AWS DevOps | 
| Verify that email is being forwarded. | In this step, you send a test email through the system and verify that the email is forwarded to the expected recipient.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/register-multiple-aws-accounts-with-a-single-email-address-by-using-amazon-ses.html) | App developer, AWS DevOps | 
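If you prefer to script the API check instead of using the console, the following hedged sketch invokes the vending Lambda with boto3. The function name `VendEmailFunction` is a placeholder (check the deployed stack for the actual name), and the payload mirrors the `OwnerAddress` field from the sample request file.

```python
# Sketch: invoking the vending Lambda from a script.
# "VendEmailFunction" is a placeholder name; verify it against the
# deployed stack. The payload fields mirror sample_vend_request.json.
import json

def build_test_payload(owner_address):
    """Assemble a minimal vend request; only OwnerAddress is shown here."""
    return {"OwnerAddress": owner_address}

def invoke_vend(function_name, payload):
    import boto3  # imported lazily so the sketch can be read without boto3
    response = boto3.client("lambda").invoke(
        FunctionName=function_name,
        Payload=json.dumps(payload).encode("utf-8"),
    )
    return json.loads(response["Payload"].read())

# Requires AWS credentials for the account where the solution is deployed:
# invoke_vend("VendEmailFunction", build_test_payload("owner@example.com"))
```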

## Troubleshooting
<a name="register-multiple-aws-accounts-with-a-single-email-address-by-using-amazon-ses-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| The system doesn’t forward email as expected. | Verify that your setup is correct: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/register-multiple-aws-accounts-with-a-single-email-address-by-using-amazon-ses.html) After you verify your domain setup, follow these steps: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/register-multiple-aws-accounts-with-a-single-email-address-by-using-amazon-ses.html) | 
| When you try to deploy the AWS CDK stack, you receive an error similar to: "Template format error: Unrecognized resource types" | In most instances, this error message means that the Region you’re targeting doesn’t support all of the required AWS services. If you’re using an Amazon EC2 instance to deploy the solution, you might be targeting a Region that is different from the Region where the instance is running. By default, the AWS CDK deploys to the Region and account that you configured in the AWS CLI. Possible solutions: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/register-multiple-aws-accounts-with-a-single-email-address-by-using-amazon-ses.html) | 
| When you deploy the solution, you receive the error message: "Deployment failed: Error: AwsMailFwdStack: SSM parameter /cdk-bootstrap/hnb659fds/version not found. Has the environment been bootstrapped? Please run 'cdk bootstrap'" | If you have never deployed any AWS CDK resources to the AWS account and Region that you’re targeting, first run the `cdk bootstrap` command as the error indicates. If you continue to receive this error after you run the bootstrapping command, you might be trying to deploy the solution to a Region that’s different from the Region where your development environment is configured. To solve this problem, set the `AWS_DEFAULT_REGION` environment variable or set a Region with the AWS CLI before you deploy the solution. Alternatively, you can modify the `app.py` file in the root of the repository to include a hard-coded account ID and Region by following the instructions in the [AWS CDK documentation for environments](https://docs.aws.amazon.com/cdk/v2/guide/environments.html). | 
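If you choose the `app.py` approach, pinning the environment looks like the following configuration sketch. This is not the repository's actual `app.py`: the account ID and Region are placeholders, and the stack instantiation is shown commented out because its import path depends on the repository layout.

```python
# app.py fragment (AWS CDK v2): pin the deployment account and Region.
# The account ID and Region below are placeholders; replace them with
# your own values or rely on the CDK_DEFAULT_* environment variables.
import os

import aws_cdk as cdk

app = cdk.App()
env = cdk.Environment(
    account=os.getenv("CDK_DEFAULT_ACCOUNT", "111122223333"),
    region=os.getenv("CDK_DEFAULT_REGION", "us-east-1"),
)
# Pass env to the stack so that cdk deploy targets it explicitly, for example:
# AwsMailFwdStack(app, "AwsMailFwdStack", env=env)
app.synth()
```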

## Related resources
<a name="register-multiple-aws-accounts-with-a-single-email-address-by-using-amazon-ses-resources"></a>
+ For help installing the AWS CLI, see [Installing or updating to the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html).
+ For help setting up the AWS CLI with IAM access credentials, see [Configuring settings for the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html).
+ For help with the AWS CDK, see [Getting started with the AWS CDK](https://docs.aws.amazon.com/cdk/latest/guide/getting_started.html#getting_started_install). 

## Additional information
<a name="register-multiple-aws-accounts-with-a-single-email-address-by-using-amazon-ses-additional"></a>

**Costs**

When you deploy this solution, the AWS account holder might incur costs that are associated with the use of the following services. It’s important to understand how these services are billed so that you’re aware of any potential charges. For pricing information, see the following pages:
+ [Amazon SES pricing](https://aws.amazon.com/ses/pricing/)
+ [Amazon S3 pricing](https://aws.amazon.com/s3/pricing/)
+ [AWS KMS pricing](https://aws.amazon.com/kms/pricing/)
+ [AWS Lambda pricing](https://aws.amazon.com/lambda/pricing/)
+ [Amazon DynamoDB pricing](https://aws.amazon.com/dynamodb/pricing/)

# Set up DNS resolution for hybrid networks in a single-account AWS environment
<a name="set-up-dns-resolution-for-hybrid-networks-in-a-single-account-aws-environment"></a>

*Abdullahi Olaoye, Amazon Web Services*

## Summary
<a name="set-up-dns-resolution-for-hybrid-networks-in-a-single-account-aws-environment-summary"></a>

This pattern describes how to set up a fully hybrid Domain Name System (DNS) architecture that enables end-to-end DNS resolution of on-premises resources, AWS resources, and internet DNS queries, without administrative overhead. The pattern describes how to set up Amazon Route 53 Resolver forwarding rules that determine where a DNS query that originates from AWS should be sent, based on the domain name. DNS queries for on-premises resources are forwarded to on-premises DNS resolvers. DNS queries for AWS resources and internet DNS queries are resolved by Route 53 Resolver.

This pattern covers hybrid DNS resolution in an AWS single-account environment. For information about setting up outbound DNS queries in an AWS multi-account environment, see the pattern [Set up DNS resolution for hybrid networks in a multi-account AWS environment](https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-dns-resolution-for-hybrid-networks-in-a-multi-account-aws-environment.html).

## Prerequisites and limitations
<a name="set-up-dns-resolution-for-hybrid-networks-in-a-single-account-aws-environment-prereqs"></a>

**Prerequisites**
+ An AWS account
+ A virtual private cloud (VPC) in your AWS account
+ A network connection between the on-premises environment and your VPC, through AWS Virtual Private Network (AWS VPN) or AWS Direct Connect
+ IP addresses of your on-premises DNS resolvers (reachable from your VPC)
+ Domain/subdomain name to forward to on-premises resolvers (for example, onprem.mydc.com)
+ Domain/subdomain name for the AWS private hosted zone (for example, myvpc.cloud.com)

## Architecture
<a name="set-up-dns-resolution-for-hybrid-networks-in-a-single-account-aws-environment-architecture"></a>

**Target technology stack**
+ Amazon Route 53 private hosted zone
+ Amazon Route 53 Resolver
+ Amazon VPC
+ AWS VPN or Direct Connect

**Target architecture**

![\[Workflow of Hybrid DNS resolution in an AWS single-account environment using Route 53 Resolver.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/120dedc8-cc6c-4aa7-be11-c70a7ee80642/images/7b75f534-1adc-4a39-86d6-5c4596ff7b6a.png)


 

## Tools
<a name="set-up-dns-resolution-for-hybrid-networks-in-a-single-account-aws-environment-tools"></a>
+ [Amazon Route 53 Resolver](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resolver-getting-started.html) makes hybrid cloud easier for enterprise customers by enabling seamless DNS query resolution across your entire hybrid cloud. You can create DNS endpoints and conditional forwarding rules to resolve DNS namespaces between your on-premises data center and your VPCs.
+ [Amazon Route 53 private hosted zone](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/hosted-zones-private.html) is a container that holds information about how you want Route 53 to respond to DNS queries for a domain and its subdomains within one or more VPCs that you create with the Amazon VPC service.

## Epics
<a name="set-up-dns-resolution-for-hybrid-networks-in-a-single-account-aws-environment-epics"></a>

### Configure a private hosted zone
<a name="configure-a-private-hosted-zone"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a Route 53 private hosted zone for an AWS reserved domain name such as myvpc.cloud.com. | This zone holds the DNS records for AWS resources that should be resolved from the on-premises environment. For instructions, see [Creating a private hosted zone](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/hosted-zone-private-creating.html) in the Route 53 documentation. | Network admin, System admin | 
| Associate the private hosted zone with your VPC. | To enable resources in your VPC to resolve DNS records in this private hosted zone, you must associate your VPC with the hosted zone. For instructions, see [Creating a private hosted zone](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/hosted-zone-private-creating.html) in the Route 53 documentation. | Network admin, System admin | 
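As an alternative to the console steps above, the hosted zone can be created programmatically. This is a sketch under stated assumptions: the domain, Region, and VPC ID are the placeholder values used in this pattern, and the actual call requires valid AWS credentials.

```python
# Sketch: creating a private hosted zone associated with a VPC.
# The domain, Region, and VPC ID are placeholder values from this pattern.
import uuid

def build_private_zone_request(domain, vpc_region, vpc_id):
    """Assemble the Route 53 create_hosted_zone request body."""
    return {
        "Name": domain,
        "VPC": {"VPCRegion": vpc_region, "VPCId": vpc_id},
        "CallerReference": str(uuid.uuid4()),  # idempotency token
        "HostedZoneConfig": {"PrivateZone": True},
    }

def create_private_zone(request):
    import boto3  # requires valid AWS credentials
    return boto3.client("route53").create_hosted_zone(**request)

# create_private_zone(build_private_zone_request(
#     "myvpc.cloud.com", "us-east-1", "vpc-0123456789abcdef0"))
```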

### Set up Route 53 Resolver endpoints
<a name="set-up-route-53-resolver-endpoints"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an inbound endpoint. | Route 53 Resolver uses the inbound endpoint to receive DNS queries from on-premises DNS resolvers. For instructions, see [Forwarding inbound DNS queries to your VPCs](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resolver-forwarding-inbound-queries.html) in the Route 53 documentation. Make a note of the inbound endpoint IP address. | Network admin, System admin | 
| Create an outbound endpoint. | Route 53 Resolver uses the outbound endpoint to send DNS queries to on-premises DNS resolvers. For instructions, see [Forwarding outbound DNS queries to your network](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resolver-forwarding-outbound-queries.html) in the Route 53 documentation. Make a note of the outbound endpoint ID. | Network admin, System admin | 
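The console steps above can also be expressed with the Route 53 Resolver API. In this sketch the security group and subnet IDs are placeholders; an inbound endpoint is built the same way with `Direction="INBOUND"`.

```python
# Sketch: creating a Route 53 Resolver outbound endpoint with boto3.
# All IDs below are placeholders; the console steps in the table above
# accomplish the same thing.
import uuid

def build_outbound_endpoint_request(security_group_id, subnet_ids):
    """Assemble the create_resolver_endpoint request body."""
    return {
        "CreatorRequestId": str(uuid.uuid4()),  # idempotency token
        "SecurityGroupIds": [security_group_id],
        "Direction": "OUTBOUND",
        "IpAddresses": [{"SubnetId": s} for s in subnet_ids],
    }

def create_outbound_endpoint(request):
    import boto3  # requires valid AWS credentials
    return boto3.client("route53resolver").create_resolver_endpoint(**request)

# create_outbound_endpoint(build_outbound_endpoint_request(
#     "sg-0123456789abcdef0", ["subnet-aaaa1111", "subnet-bbbb2222"]))
```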

### Set up a forwarding rule and associate it with your VPC
<a name="set-up-a-forwarding-rule-and-associate-it-with-your-vpc"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a forwarding rule for the on-premises domain. | This rule will instruct Route 53 Resolver to forward any DNS queries for on-premises domains (such as onprem.mydc.com) to on-premises DNS resolvers. To create this rule, you will need the IP addresses of the on-premises DNS resolvers and the outbound endpoint ID for Route 53 Resolver. For instructions, see [Managing forwarding rules](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resolver-rules-managing.html) in the Route 53 documentation. | Network admin, System admin | 
| Associate the forwarding rule with your VPC. | For the forwarding rule to take effect, you must associate the rule with your VPC. Route 53 Resolver then takes the rule into consideration when resolving a domain. For instructions, see [Managing forwarding rules](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resolver-rules-managing.html) in the Route 53 documentation. | Network admin, System admin | 
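Programmatically, the rule creation and VPC association look like the following sketch; the domain, resolver IP addresses, endpoint ID, and VPC ID are placeholders:

```python
# Sketch: creating and associating a forwarding rule with boto3.
# The domain, target IPs, endpoint ID, and VPC ID are placeholders.
import uuid

def build_forward_rule_request(domain, target_ips, outbound_endpoint_id):
    """Forward queries for the domain to on-premises resolvers."""
    return {
        "CreatorRequestId": str(uuid.uuid4()),  # idempotency token
        "RuleType": "FORWARD",
        "DomainName": domain,
        "TargetIps": [{"Ip": ip, "Port": 53} for ip in target_ips],
        "ResolverEndpointId": outbound_endpoint_id,
    }

def create_and_associate_rule(request, vpc_id):
    import boto3  # requires valid AWS credentials
    client = boto3.client("route53resolver")
    rule = client.create_resolver_rule(**request)
    rule_id = rule["ResolverRule"]["Id"]
    client.associate_resolver_rule(ResolverRuleId=rule_id, VPCId=vpc_id)
    return rule_id
```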

### Configure on-premises DNS resolvers
<a name="configure-on-premises-dns-resolvers"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Configure conditional forwarding in the on-premises DNS resolvers. | For DNS queries to be sent to the Route 53 private hosted zone from the on-premises environment, you must configure conditional forwarding in the on-premises DNS resolvers. This instructs the DNS resolvers to forward all DNS queries for the AWS domain (for example, for myvpc.cloud.com) to the inbound endpoint IP address for Route 53 Resolver. | Network admin, System admin | 

### Test end-to-end DNS resolution
<a name="test-end-to-end-dns-resolution"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Test DNS resolution from AWS to the on-premises environment. | From a server in the VPC, perform a DNS query for an on-premises domain (such as server1.onprem.mydc.com). | Network admin, System admin | 
| Test DNS resolution from the on-premises environment to AWS. | From an on-premises server, perform DNS resolution for an AWS domain (such as server1.myvpc.cloud.com). | Network admin, System admin | 
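Both test directions can be scripted with only the standard library. The hostnames shown are the example names from this pattern, and the check must run on a host inside the relevant network (a VPC instance for on-premises names, an on-premises server for AWS names):

```python
# Sketch: checking name resolution from a host. Run on a VPC instance
# to test on-premises names, or on an on-premises server to test AWS
# names. The hostnames below are the example names from this pattern.
import socket

def resolves(hostname):
    """Return the resolved IPv4 address, or None if resolution fails."""
    try:
        return socket.gethostbyname(hostname)
    except socket.gaierror:
        return None

# From a VPC instance:        resolves("server1.onprem.mydc.com")
# From an on-premises server: resolves("server1.myvpc.cloud.com")
print(resolves("localhost"))
```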

## Related resources
<a name="set-up-dns-resolution-for-hybrid-networks-in-a-single-account-aws-environment-resources"></a>
+ [Centralized DNS management of hybrid cloud with Amazon Route 53 and AWS Transit Gateway](https://aws.amazon.com/blogs/networking-and-content-delivery/centralized-dns-management-of-hybrid-cloud-with-amazon-route-53-and-aws-transit-gateway/) (AWS Networking & Content Delivery blog)
+ [Simplify DNS management in a multi-account environment with Route 53 Resolver](https://aws.amazon.com/blogs/security/simplify-dns-management-in-a-multiaccount-environment-with-route-53-resolver/) (AWS Security blog)
+ [Working with private hosted zones](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/hosted-zones-private.html) (Route 53 documentation)
+ [Getting started with Route 53 Resolver](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resolver-getting-started.html) (Route 53 documentation)

# Set up UiPath RPA bots automatically on Amazon EC2 by using AWS CloudFormation
<a name="set-up-uipath-rpa-bots-automatically-on-amazon-ec2-by-using-aws-cloudformation"></a>

*Dr. Rahul Sharad Gaikwad and Tamilselvan P, Amazon Web Services*

## Summary
<a name="set-up-uipath-rpa-bots-automatically-on-amazon-ec2-by-using-aws-cloudformation-summary"></a>

This pattern explains how you can deploy robotic process automation (RPA) bots on Amazon Elastic Compute Cloud (Amazon EC2) instances. It uses an [EC2 Image Builder](https://docs.aws.amazon.com/imagebuilder/latest/userguide/what-is-image-builder.html) pipeline to create a custom Amazon Machine Image (AMI). An AMI is a preconfigured virtual machine (VM) image that contains the operating system (OS) and preinstalled software to deploy EC2 instances. This pattern uses AWS CloudFormation templates to install [UiPath Studio Community edition](https://www.uipath.com/product/studio) on the custom AMI. UiPath is an RPA tool that helps you set up robots to automate your tasks.

As part of this solution, EC2 Windows instances are launched by using the base AMI, and the UiPath Studio application is installed on the instances. The pattern uses the Microsoft System Preparation (Sysprep) tool to duplicate the customized Windows installation. After that, it removes the host information and creates a final AMI from the instance. You can then launch the instances on demand by using the final AMI with your own naming conventions and monitoring setup.


**Note:** This pattern doesn’t provide any information about using RPA bots. For that information, see the [UiPath documentation](https://docs.uipath.com/). You can also use this pattern to set up other RPA bot applications by customizing the installation steps based on your requirements.

This pattern provides the following automations and benefits:
+ Application deployment and sharing: You can build Amazon EC2 AMIs for application deployment and share them across multiple accounts through an EC2 Image Builder pipeline, which uses AWS CloudFormation templates as infrastructure as code (IaC) scripts.
+ Amazon EC2 provisioning and scaling: CloudFormation IaC templates provide custom computer name sequences and Active Directory join automation.
+ Observability and monitoring: The pattern sets up Amazon CloudWatch dashboards to help you monitor Amazon EC2 metrics (such as CPU and disk usage).
+ RPA benefits for your business: RPA improves accuracy because robots can perform assigned tasks automatically and consistently. RPA also increases speed and productivity because it removes operations that don’t add value and handles repetitious activities.

## Prerequisites and limitations
<a name="set-up-uipath-rpa-bots-automatically-on-amazon-ec2-by-using-aws-cloudformation-prereqs"></a>

**Prerequisites**
+ An active [AWS account](https://aws.amazon.com/free/)
+ [AWS Identity and Access Management (IAM) permissions](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-iam-template.html) for deploying CloudFormation templates
+ [IAM policies](https://docs.aws.amazon.com/imagebuilder/latest/userguide/cross-account-dist.html) to set up cross-account AMI distribution with EC2 Image Builder

## Architecture
<a name="set-up-uipath-rpa-bots-automatically-on-amazon-ec2-by-using-aws-cloudformation-architecture"></a>

![\[Target architecture for setting up RPA bots on Amazon EC2\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/5555a62d-91d4-4e81-9961-ff89faedd6ad/images/1893d2d3-8912-4473-adf1-6633b5badcd9.png)


1. The administrator provides the base Windows AMI in the `ec2-image-builder.yaml` file and deploys the stack in the CloudFormation console.

1. The CloudFormation stack deploys the EC2 Image Builder pipeline, which includes the following resources:
   + `Ec2ImageInfraConfiguration`
   + `Ec2ImageComponent`
   + `Ec2ImageRecipe`
   + `Ec2AMI`

1. The EC2 Image Builder pipeline launches a temporary Windows EC2 instance by using the base AMI and installs the required components (in this case, UiPath Studio).

1. The EC2 Image Builder removes all the host information and creates an AMI from Windows Server.

1. You update the `ec2-provisioning.yaml` file with the custom AMI and launch a number of EC2 instances based on your requirements.

1. You deploy the Count macro by using a CloudFormation template. This macro provides a **Count** property for CloudFormation resources so you can specify multiple resources of the same type easily.

1. You update the name of the macro in the CloudFormation `ec2-provisioning.yaml` file and deploy the stack.

1. The administrator updates the `ec2-provisioning.yaml` file based on requirements and launches the stack.

1. The template deploys EC2 instances with the UiPath Studio application.
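For teams that script deployments, the first step above (deploying `ec2-image-builder.yaml`) can be sketched with boto3 as follows. The stack name is illustrative, and the IAM capabilities are an assumption that applies only if the template creates IAM resources:

```python
# Sketch: deploying the ec2-image-builder.yaml stack with boto3.
# The stack name is illustrative; the Capabilities entries are needed
# only if the template creates IAM resources (an assumption here).
def build_stack_request(stack_name, template_path):
    """Read the template and assemble the create_stack request body."""
    with open(template_path) as f:
        template_body = f.read()
    return {
        "StackName": stack_name,
        "TemplateBody": template_body,
        "Capabilities": ["CAPABILITY_IAM", "CAPABILITY_NAMED_IAM"],
    }

def deploy_stack(request):
    import boto3  # requires valid AWS credentials and a configured Region
    return boto3.client("cloudformation").create_stack(**request)

# deploy_stack(build_stack_request("uipath-image-builder", "ec2-image-builder.yaml"))
```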

## Tools
<a name="set-up-uipath-rpa-bots-automatically-on-amazon-ec2-by-using-aws-cloudformation-tools"></a>

**AWS services**
+ [AWS CloudFormation](https://aws.amazon.com/cloudformation/) helps you model and manage infrastructure resources in an automated and secure manner.
+ [Amazon CloudWatch](https://aws.amazon.com/cloudwatch/) helps you observe and monitor resources and applications on AWS, on premises, and on other clouds.
+ [Amazon Elastic Compute Cloud (Amazon EC2)](https://aws.amazon.com/ec2/) provides secure and resizable compute capacity in the AWS Cloud. You can launch as many virtual servers as you need and quickly scale them up or down.
+ [EC2 Image Builder](https://aws.amazon.com/image-builder/) simplifies the building, testing, and deployment of virtual machines and container images for use on AWS or on premises.
+ [Amazon EventBridge](https://aws.amazon.com/eventbridge/) helps you build event-driven applications at scale across AWS, existing systems, or software as a service (SaaS) applications.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely control access to AWS resources. With IAM, you can centrally manage permissions that control which AWS resources users can access. You use IAM to control who is authenticated (signed in) and authorized (has permissions) to use resources.
+ [AWS Lambda](https://aws.amazon.com/lambda/) is a serverless, event-driven compute service that lets you run code for virtually any type of application or backend service without provisioning or managing servers. You can call Lambda functions from over 200 AWS services and SaaS applications, and pay only for what you use.
+ [Amazon Simple Storage Service (Amazon S3)](https://aws.amazon.com/s3/) is a cloud-based object storage service that helps you store, protect, and retrieve any amount of data.
+ [AWS Systems Manager Agent (SSM Agent)](https://docs.aws.amazon.com/systems-manager/latest/userguide/ssm-agent.html) helps Systems Manager update, manage, and configure EC2 instances, edge devices, on-premises servers, and virtual machines (VMs).

**Code repositories**

The code for this pattern is available in the GitHub [UiPath RPA bot setup using CloudFormation](https://github.com/aws-samples/uipath-rpa-setup-ec2-windows-ami-cloudformation) repository. The pattern also uses a macro that’s available from the [AWS CloudFormation Macros repository](https://github.com/aws-cloudformation/aws-cloudformation-macros/tree/master/Count).

## Best practices
<a name="set-up-uipath-rpa-bots-automatically-on-amazon-ec2-by-using-aws-cloudformation-best-practices"></a>
+ AWS releases new [Windows AMIs](https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/windows-ami-version-history.html) each month. These contain the latest OS patches, drivers, and launch agents. We recommend that you use the latest AMI when you launch new instances or when you build your own custom images.
+ Apply all available Windows or Linux security patches during image builds.

## Epics
<a name="set-up-uipath-rpa-bots-automatically-on-amazon-ec2-by-using-aws-cloudformation-epics"></a>

### Deploy an image pipeline for the base image
<a name="deploy-an-image-pipeline-for-the-base-image"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Set up an EC2 Image Builder pipeline. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-uipath-rpa-bots-automatically-on-amazon-ec2-by-using-aws-cloudformation.html) | AWS DevOps | 
| View EC2 Image Builder settings. | The EC2 Image Builder settings include infrastructure configuration, distribution settings, and security scanning settings. To view the settings: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-uipath-rpa-bots-automatically-on-amazon-ec2-by-using-aws-cloudformation.html) As a best practice, make any updates to EC2 Image Builder through the CloudFormation template only. | AWS DevOps | 
| View the image pipeline. | To view the deployed image pipeline: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-uipath-rpa-bots-automatically-on-amazon-ec2-by-using-aws-cloudformation.html) | AWS DevOps | 
| View Image Builder logs. | EC2 Image Builder logs are aggregated in CloudWatch log groups. To view the logs in CloudWatch: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-uipath-rpa-bots-automatically-on-amazon-ec2-by-using-aws-cloudformation.html) EC2 Image Builder logs are also stored in an S3 bucket. To view the logs in the bucket: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-uipath-rpa-bots-automatically-on-amazon-ec2-by-using-aws-cloudformation.html) | AWS DevOps | 
| Upload the UiPath file to an S3 bucket. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-uipath-rpa-bots-automatically-on-amazon-ec2-by-using-aws-cloudformation.html) | AWS DevOps | 
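
Beyond viewing the pipeline, you can trigger an image build on demand from the AWS CLI. A minimal sketch; the pipeline ARN is a placeholder that you would copy from the EC2 Image Builder console.

```shell
# Placeholder ARN; copy the real one from the EC2 Image Builder console.
PIPELINE_ARN="arn:aws:imagebuilder:us-east-1:111122223333:image-pipeline/example-pipeline"

# Start an EC2 Image Builder pipeline execution outside its schedule.
start_pipeline_build() {
  aws imagebuilder start-image-pipeline-execution \
    --image-pipeline-arn "$1"
}

# Uncomment to run:
# start_pipeline_build "$PIPELINE_ARN"
```

The build progress then appears in the pipeline's CloudWatch logs, as described in the tasks above.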

### Deploy and test the Count macro
<a name="deploy-and-test-the-count-macro"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Deploy the Count macro. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-uipath-rpa-bots-automatically-on-amazon-ec2-by-using-aws-cloudformation.html) If you want to use the console, follow the instructions in the previous epic or in the [CloudFormation documentation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-console-create-stack.html). | DevOps engineer | 
| Test the Count macro. | To test the macro's capabilities, try launching the example template that’s provided with the macro. <pre>aws cloudformation deploy \<br />    --stack-name Count-test \<br />    --template-file test.yaml \<br />    --capabilities CAPABILITY_IAM</pre> | DevOps engineer | 

### Deploy the CloudFormation stack to provision instances with the custom image
<a name="deploy-the-cloudformation-stack-to-provision-instances-with-the-custom-image"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Deploy the Amazon EC2 provisioning template. | To deploy the Amazon EC2 provisioning template by using CloudFormation: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-uipath-rpa-bots-automatically-on-amazon-ec2-by-using-aws-cloudformation.html) | AWS DevOps | 
| View Amazon EC2 settings. | Amazon EC2 settings include security, networking, storage, status check, monitoring, and tag configurations. To view these configurations: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-uipath-rpa-bots-automatically-on-amazon-ec2-by-using-aws-cloudformation.html) | AWS DevOps | 
| View the CloudWatch dashboard. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-uipath-rpa-bots-automatically-on-amazon-ec2-by-using-aws-cloudformation.html) After you provision the stack, it takes time to populate the dashboard with metrics. The dashboard provides these metrics: `CPUUtilization`, `DiskUtilization`, `MemoryUtilization`, `NetworkIn`, `NetworkOut`, `StatusCheckFailed`. | AWS DevOps | 
| View custom metrics for memory and disk usage.  | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-uipath-rpa-bots-automatically-on-amazon-ec2-by-using-aws-cloudformation.html) | AWS DevOps | 
| View alarms for memory and disk usage.  | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-uipath-rpa-bots-automatically-on-amazon-ec2-by-using-aws-cloudformation.html) | AWS DevOps | 
| Verify the snapshot lifecycle rule. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-uipath-rpa-bots-automatically-on-amazon-ec2-by-using-aws-cloudformation.html) | AWS DevOps | 
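
The metrics and alarms in the tasks above can also be inspected from the AWS CLI. A sketch, assuming the CloudWatch agent's default `CWAgent` namespace (your agent configuration may use a different one) and a placeholder alarm-name prefix.

```shell
# List the custom memory and disk metrics published by the CloudWatch agent.
# CWAgent is the agent's default namespace; adjust if your config overrides it.
list_custom_metrics() {
  aws cloudwatch list-metrics --namespace "CWAgent"
}

# List the alarms created by the provisioning stack.
# The prefix is a placeholder; use whatever prefix your stack applied.
list_stack_alarms() {
  aws cloudwatch describe-alarms --alarm-name-prefix "$1"
}

# Uncomment to run:
# list_custom_metrics
# list_stack_alarms "uipath-rpa"
```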

### Delete the environment (optional)
<a name="delete-the-environment-optional"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Delete the stacks. | When your PoC or pilot project is complete, we recommend that you delete the stacks you created to make sure that you aren’t charged for these resources. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-uipath-rpa-bots-automatically-on-amazon-ec2-by-using-aws-cloudformation.html) The stack deletion operation can't be stopped after it begins. The stack proceeds to the `DELETE_IN_PROGRESS` state. If the deletion fails, the stack will be in the `DELETE_FAILED` state. For solutions, see [Delete stack fails](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/troubleshooting.html#troubleshooting-errors-delete-stack-fails) in the AWS CloudFormation troubleshooting documentation. For information about protecting stacks from being accidentally deleted, see [Protecting a stack from being deleted](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-protect-stacks.html) in the AWS CloudFormation documentation. | AWS DevOps | 
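
The deletion step above can be scripted with the AWS CLI. A minimal sketch, assuming hypothetical stack names (use the names you chose at deployment); delete the provisioning stack before the image pipeline stack.

```shell
# Hypothetical stack names; replace with the names you used at deployment.
EC2_STACK="uipath-ec2-provisioning"
IMAGE_STACK="uipath-image-pipeline"

# Delete a stack and wait for DELETE_COMPLETE. The wait command returns
# a nonzero exit code if the stack ends up in DELETE_FAILED.
delete_stack() {
  aws cloudformation delete-stack --stack-name "$1"
  aws cloudformation wait stack-delete-complete --stack-name "$1"
}

# Uncomment to run; remember that deletion can't be stopped after it begins.
# delete_stack "$EC2_STACK"
# delete_stack "$IMAGE_STACK"
```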

## Troubleshooting
<a name="set-up-uipath-rpa-bots-automatically-on-amazon-ec2-by-using-aws-cloudformation-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| When you deploy the Amazon EC2 provisioning template, you get the error: *Received malformed response from transform 123xxxx::Count*. | This is a known issue. (See the custom solution and PR in the [AWS CloudFormation macros repository](https://github.com/aws-cloudformation/aws-cloudformation-macros/pull/20).) To fix this issue, open the AWS Lambda console and update `index.py` with the content from the [GitHub repository](https://raw.githubusercontent.com/aws-cloudformation/aws-cloudformation-macros/f1629c96477dcd87278814d4063c37877602c0c8/Count/src/index.py). | 
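
If you prefer the CLI to the Lambda console for this fix, the patched `index.py` can be deployed with `update-function-code`. A sketch; the function name is hypothetical — use the name that the Count macro stack actually created.

```shell
# Hypothetical function name; look up the real one that the macro stack created.
MACRO_FUNCTION="CountMacroFunction"

# Zip the patched index.py (downloaded from the GitHub repository) and
# replace the deployed function code with it.
update_macro_code() {
  zip macro-fix.zip index.py
  aws lambda update-function-code \
    --function-name "$1" \
    --zip-file fileb://macro-fix.zip
}

# Uncomment to run after downloading the fixed index.py into the current directory:
# update_macro_code "$MACRO_FUNCTION"
```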

## Related resources
<a name="set-up-uipath-rpa-bots-automatically-on-amazon-ec2-by-using-aws-cloudformation-resources"></a>

**GitHub repositories**
+ [UiPath RPA bot setup using CloudFormation](https://github.com/aws-samples/uipath-rpa-setup-ec2-windows-ami-cloudformation)
+ [Count CloudFormation Macro](https://github.com/aws-cloudformation/aws-cloudformation-macros/tree/master/Count)

**AWS references**
+ [Creating a stack on the AWS CloudFormation console](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-console-create-stack.html) (CloudFormation documentation)
+ [Troubleshooting CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/troubleshooting.html) (CloudFormation documentation)
+ [Monitor memory and disk metrics for Amazon EC2 instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/mon-scripts.html) (Amazon EC2 documentation)
+ [How can I use the CloudWatch agent to view metrics for Performance Monitor on a Windows server?](https://repost.aws/knowledge-center/cloudwatch-performance-monitor-windows) (AWS re:Post article)

**Additional references**
+ [UiPath documentation](https://docs.uipath.com/)
+ [Setting the Hostname in a SysPreped AMI](https://blog.brianbeach.com/2014/07/setting-hostname-in-syspreped-ami.html) (blog post by Brian Beach)
+ [How do I make Cloudformation reprocess a template using a macro when parameters change?](https://stackoverflow.com/questions/59828989/how-do-i-make-cloudformation-reprocess-a-template-using-a-macro-when-parameters) (Stack Overflow)

# Set up a highly available PeopleSoft architecture on AWS
<a name="set-up-a-highly-available-peoplesoft-architecture-on-aws"></a>

*Ramanathan Muralidhar, Amazon Web Services*

## Summary
<a name="set-up-a-highly-available-peoplesoft-architecture-on-aws-summary"></a>

When you migrate your PeopleSoft workloads to AWS, resiliency is an important objective. It ensures that your PeopleSoft application is always highly available and able to recover from failures quickly.

This pattern provides an architecture for your PeopleSoft applications on AWS to ensure high availability (HA) at the network, application, and database tiers. It uses an [Amazon Relational Database Service (Amazon RDS)](https://aws.amazon.com/rds/) for Oracle or Amazon RDS for SQL Server database for the database tier. This architecture also includes AWS services such as [Amazon Route 53](https://aws.amazon.com/route53/), [Amazon Elastic Compute Cloud (Amazon EC2)](https://aws.amazon.com/ec2/) Linux instances, [Amazon Elastic Block Store (Amazon EBS)](https://aws.amazon.com/ebs/), [Amazon Elastic File System (Amazon EFS)](https://aws.amazon.com/efs/), and an [Application Load Balancer](https://aws.amazon.com/elasticloadbalancing/application-load-balancer), and is scalable.

[Oracle PeopleSoft](https://www.oracle.com/applications/peoplesoft/) provides a suite of tools and applications for workforce management and other business operations.

## Prerequisites and limitations
<a name="set-up-a-highly-available-peoplesoft-architecture-on-aws-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ A PeopleSoft environment with the necessary licenses for setting it up on AWS
+ A virtual private cloud (VPC) set up in your AWS account with the following resources:
  + At least two Availability Zones
  + One public subnet and three private subnets in each Availability Zone
  + A NAT gateway and an internet gateway
  + Route tables for each subnet to route the traffic
  + Network access control lists (network ACLs) and security groups defined to help ensure the security of the PeopleSoft application in accordance with your organization’s standards
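
Before deploying, you can confirm the VPC prerequisites listed above from the AWS CLI. A sketch with a placeholder VPC ID.

```shell
# Placeholder VPC ID; replace with the VPC you prepared for PeopleSoft.
VPC_ID="vpc-0123456789abcdef0"

# List the subnets in the VPC with their Availability Zones and CIDR blocks,
# to confirm the public/private subnet layout across at least two zones.
list_subnets() {
  aws ec2 describe-subnets \
    --filters "Name=vpc-id,Values=$1" \
    --query "Subnets[].{Id:SubnetId,Az:AvailabilityZone,Cidr:CidrBlock}"
}

# Confirm that a NAT gateway exists in the VPC.
list_nat_gateways() {
  aws ec2 describe-nat-gateways --filter "Name=vpc-id,Values=$1"
}

# Uncomment to run:
# list_subnets "$VPC_ID"
# list_nat_gateways "$VPC_ID"
```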

**Limitations**
+ This pattern provides a high availability (HA) solution. It doesn’t support disaster recovery (DR) scenarios. In the rare event that the entire AWS Region for the HA implementation goes down, the application becomes unavailable.

**Product versions**
+ PeopleSoft applications running PeopleTools 8.52 and later

## Architecture
<a name="set-up-a-highly-available-peoplesoft-architecture-on-aws-architecture"></a>

**Target architecture**

Downtime or an outage of your PeopleSoft production application affects its availability and causes major disruptions to your business.

We recommend that you design your PeopleSoft production application so that it is always highly available. You can achieve this by eliminating single points of failure, adding reliable crossover or failover points, and detecting failures. The following diagram illustrates an HA architecture for PeopleSoft on AWS.

![\[Highly available architecture for PeopleSoft on AWS\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/0db96376-dadb-4545-b130-ebbe64acd4e9/images/5d585a8e-320a-495d-a049-97171633e90f.png)


This architecture deployment uses Amazon RDS for Oracle as the PeopleSoft database, and EC2 instances that are running on Red Hat Enterprise Linux (RHEL). You can also use Amazon RDS for SQL Server as the PeopleSoft database.

This architecture contains the following components: 
+ [Amazon Route 53](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/Welcome.html) is used as the Domain Name System (DNS) service for routing requests from the internet to the PeopleSoft application.
+ [AWS WAF](https://docs.aws.amazon.com/waf/latest/developerguide/waf-chapter.html) helps you protect against common web exploits and bots that can affect availability, compromise security, or consume excessive resources. [AWS Shield Advanced](https://docs.aws.amazon.com/waf/latest/developerguide/shield-chapter.html) (not illustrated) provides much broader protection.
+ An [Application Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html) load-balances HTTP and HTTPS traffic with advanced request routing targeted at the web servers.
+ The web servers, application servers, process scheduler servers, and Elasticsearch servers that support the PeopleSoft application run in multiple Availability Zones and use [Amazon EC2 Auto Scaling](https://docs.aws.amazon.com/autoscaling/ec2/userguide/what-is-amazon-ec2-auto-scaling.html).
+ The database used by the PeopleSoft application runs on [Amazon RDS](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Welcome.html) in a Multi-AZ configuration.
+ The file share used by the PeopleSoft application is configured on [Amazon EFS](https://docs.aws.amazon.com/efs/latest/ug/whatisefs.html) and is used to access files across instances.
+ [Amazon Machine Images (AMIs)](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html) are used by Amazon EC2 Auto Scaling to ensure that PeopleSoft components are cloned quickly when needed.
+ The [NAT gateways](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html) connect instances in a private subnet to services outside your VPC, and ensure that external services cannot initiate a connection with those instances.
+ The [internet gateway](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Internet_Gateway.html) is a horizontally scaled, redundant, and highly available VPC component that allows communication between your VPC and the internet.
+ The bastion hosts in the public subnet provide access to the servers in the private subnet from an external network, such as the internet or on-premises network. The bastion hosts provide controlled and secure access to the servers in the private subnets.

**Architecture details**

The PeopleSoft database is housed in an Amazon RDS for Oracle (or Amazon RDS for SQL Server) database in a Multi-AZ configuration. The [Amazon RDS Multi-AZ feature](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZ.html) replicates database updates across two Availability Zones to increase durability and availability. Amazon RDS automatically fails over to the standby database for planned maintenance and unplanned disruptions.

The PeopleSoft web and middle tier are installed on EC2 instances. These instances are spread across multiple Availability Zones and managed by an [Auto Scaling group](https://docs.aws.amazon.com/autoscaling/ec2/userguide/what-is-amazon-ec2-auto-scaling.html). This ensures that these components are always highly available. A minimum number of required instances is maintained to ensure that the application is always available and can scale when required.

We recommend that you use a current generation EC2 instance type for the PeopleSoft EC2 instances. Current generation instance types, such as [instances built on the AWS Nitro System](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html#ec2-nitro-instances), support hardware virtual machines (HVMs). The HVM AMIs are required to take advantage of [enhanced networking](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking.html), and they also offer increased security. The EC2 instances that are part of each Auto Scaling group use their own AMI when replacing or scaling up instances. We recommend that you select EC2 instance types based on the load you want your PeopleSoft application to handle and the minimum values recommended by Oracle for your PeopleSoft application and PeopleTools release. For more information about hardware and software requirements, see the [Oracle support website](https://support.oracle.com).

The PeopleSoft web and middle tier share an Amazon EFS mount to share reports, data files, and (if needed) the `PS_HOME` directory. Amazon EFS is configured with mount targets in each Availability Zone for performance and cost reasons.

An Application Load Balancer is provisioned to receive the traffic that accesses the PeopleSoft application and load-balance it among the web servers across different Availability Zones. An Application Load Balancer provides HA by operating in at least two Availability Zones. The web servers distribute the traffic to different application servers by using a load balancing configuration. Load balancing between the web servers and application servers distributes load evenly across the instances, and helps avoid bottlenecks and service disruptions due to overloaded instances.

Amazon Route 53 is used as the DNS service to route traffic to the Application Load Balancer from the internet. Route 53 is a highly available and scalable DNS web service.
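
The Route 53 setup above amounts to an alias record that points at the load balancer. A sketch with placeholder values; the ALB's DNS name and canonical hosted zone ID come from `aws elbv2 describe-load-balancers`.

```shell
# Create (or update) an alias A record that routes a DNS name to the ALB.
# All four arguments are placeholders supplied by the caller.
create_alias_record() {
  hosted_zone_id="$1"; record_name="$2"; alb_zone_id="$3"; alb_dns="$4"
  aws route53 change-resource-record-sets \
    --hosted-zone-id "$hosted_zone_id" \
    --change-batch "{
      \"Changes\": [{
        \"Action\": \"UPSERT\",
        \"ResourceRecordSet\": {
          \"Name\": \"$record_name\",
          \"Type\": \"A\",
          \"AliasTarget\": {
            \"HostedZoneId\": \"$alb_zone_id\",
            \"DNSName\": \"$alb_dns\",
            \"EvaluateTargetHealth\": true
          }
        }
      }]
    }"
}

# Example invocation (all values hypothetical); uncomment to run:
# create_alias_record "Z0000000EXAMPLE" "peoplesoft.example.com" \
#   "Z35SXDOTRQ7X7K" "my-alb-1234567890.us-east-1.elb.amazonaws.com"
```

Setting `EvaluateTargetHealth` to `true` lets Route 53 consider the load balancer's health when answering queries.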

**HA details**
+ Databases: The Multi-AZ feature of Amazon RDS operates two databases in multiple Availability Zones with synchronous replication. This creates a highly available environment with automatic failover. Amazon RDS has failover event detection and initiates automated failover when these events occur. You can also initiate manual failover through the Amazon RDS API. For a detailed explanation, see the blog post [Amazon RDS Under The Hood: Multi-AZ](https://aws.amazon.com/blogs/database/amazon-rds-under-the-hood-multi-az/). The failover is seamless and the application automatically reconnects to the database when it happens. However, any process scheduler jobs during the failover generate errors and have to be resubmitted.
+ PeopleSoft application servers: The application servers are spread across multiple Availability Zones and have an Auto Scaling group defined for them. If an instance fails, the Auto Scaling group immediately replaces it with a healthy instance that’s cloned from the AMI of the application server template. Specifically, *jolt pooling* is enabled, so when an application server instance goes down, the sessions automatically fail over to another application server, and the Auto Scaling group automatically spins up another instance, brings up the application server, and registers it in the Amazon EFS mount. The newly created application server is automatically added to the web servers by using the `PSSTRSETUP.SH` script in the web servers. This ensures that the application server is always highly available and recovers from failure quickly.
+ Process schedulers: The process scheduler servers are spread across multiple Availability Zones and have an Auto Scaling group defined for them. If an instance fails, the Auto Scaling group immediately replaces it with a healthy instance that’s cloned from the AMI of the process scheduler server template. Specifically, when a process scheduler instance goes down, the Auto Scaling group automatically spins up another instance and brings up the process scheduler. Any jobs that were running when the instance failed must be resubmitted. This ensures that the process scheduler is always highly available and recovers from failure quickly.
+ Elasticsearch servers: The Elasticsearch servers have an Auto Scaling group defined for them. If an instance fails, the Auto Scaling group immediately replaces it with a healthy instance that’s cloned from the AMI of the Elasticsearch server template. Specifically, when an Elasticsearch instance goes down, the Application Load Balancer that serves requests to it detects the failure and stops sending traffic to it. The Auto Scaling group automatically spins up another instance and brings up the Elasticsearch instance. When the Elasticsearch instance is back up, the Application Load Balancer detects that it’s healthy and starts sending requests to it again. This ensures that the Elasticsearch server is always highly available and recovers from failure quickly.
+ Web servers: The web servers have an Auto Scaling group defined for them. If an instance fails, the Auto Scaling group immediately replaces it with a healthy instance that’s cloned from the AMI of the web server template. Specifically, when a web server instance goes down, the Application Load Balancer that serves requests to it detects the failure and stops sending traffic to it. The Auto Scaling group automatically spins up another instance and brings up the web server instance. When the web server instance is back up, the Application Load Balancer detects that it’s healthy and starts sending requests to it again. This ensures that the web server is always highly available and recovers from failure quickly.
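
You can rehearse the database failover behavior described above by forcing a Multi-AZ failover during a test window. A sketch with a placeholder instance identifier; expect a brief interruption, and remember that in-flight process scheduler jobs must be resubmitted.

```shell
# Placeholder DB instance identifier; replace with your PeopleSoft database.
DB_INSTANCE="peoplesoft-prod-db"

# Reboot with failover, which promotes the standby in the other
# Availability Zone so you can observe how the application reconnects.
force_failover() {
  aws rds reboot-db-instance \
    --db-instance-identifier "$1" \
    --force-failover
}

# Uncomment to run in a test window only:
# force_failover "$DB_INSTANCE"
```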

## Tools
<a name="set-up-a-highly-available-peoplesoft-architecture-on-aws-tools"></a>

**AWS services**
+ [Application Load Balancers](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/) distribute incoming application traffic across multiple targets, such as EC2 instances, in multiple Availability Zones.
+ [Amazon Elastic Block Store (Amazon EBS)](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonEBS.html) provides block-level storage volumes for use with Amazon Elastic Compute Cloud (Amazon EC2) instances.
+ [Amazon Elastic Compute Cloud (Amazon EC2)](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html) provides scalable computing capacity in the AWS Cloud. You can launch as many virtual servers as you need and quickly scale them up or down.
+ [Amazon Elastic File System (Amazon EFS)](https://docs.aws.amazon.com/efs/latest/ug/whatisefs.html) helps you create and configure shared file systems in the AWS Cloud.
+ [Amazon Relational Database Service (Amazon RDS)](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Welcome.html) helps you set up, operate, and scale a relational database in the AWS Cloud.
+ [Amazon Route 53](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/Welcome.html) is a highly available and scalable DNS web service.

## Best practices
<a name="set-up-a-highly-available-peoplesoft-architecture-on-aws-best-practices"></a>

**Operational best practices**
+ When you run PeopleSoft on AWS, use Route 53 to route the traffic from the internet and locally. Use the [failover option](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover-configuring.html) to reroute traffic to the disaster recovery (DR) site if the primary DB instance isn’t available.
+ Always use an Application Load Balancer in front of the PeopleSoft environment. This ensures that traffic is load-balanced to the web servers in a secure fashion.
+ In the Application Load Balancer target group settings, make sure that [stickiness is turned on](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/sticky-sessions.html) with a load balancer-generated cookie. This ensures that connections are consistent across the web servers and application servers.
**Note**  
You might need to use an application-based cookie if you use external single sign-on (SSO).
+ For a PeopleSoft production application, the Application Load Balancer idle timeout must match what is set in the web profile you use. This prevents user sessions from expiring at the load balancer layer.
+ For a PeopleSoft production application, set the application server [recycle count](https://docs.oracle.com/cd/F28299_01/pt857pbr3/eng/pt/tsvt/concept_PSAPPSRVOptions-c07f06.html?pli=ul_d96e90_tsvt) to a value that minimizes memory leaks.
+ If you’re using an Amazon RDS database for your PeopleSoft production application, as described in this pattern, run it in [Multi-AZ format for high availability](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZ.html).
+ If your database is running on an EC2 instance for your PeopleSoft production application, make sure that a [standby database is running on another Availability Zone](https://docs.aws.amazon.com/prescriptive-guidance/latest/migration-oracle-database/ec2-oracle.html#ec2-oracle-ha) for high availability.
+ For DR, make sure that your Amazon RDS database or EC2 instance has a standby configured in a separate AWS Region from the production database. This ensures that in event of a disaster in the Region, you can switch the application over to another Region.
+ For DR, use [Amazon Elastic Disaster Recovery](https://aws.amazon.com/disaster-recovery/) to set up application-level components in a separate Region from production components. This ensures that in the event of a disaster in the Region, you can switch the application over to another Region.
+ Use Amazon EFS (for moderate I/O requirements) or [Amazon FSx](https://aws.amazon.com/fsx/) (for high I/O requirements) to store your PeopleSoft reports, attachments, and data files. This ensures that the content is stored in one central location and can be accessed from anywhere within the infrastructure.
+ Use [Amazon CloudWatch](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/WhatIsCloudWatch.html) (basic and detailed) to monitor the AWS Cloud resources that your PeopleSoft application is using in near real time. This ensures that you are alerted of issues instantly and can address them quickly before they affect the availability of the environment.
+ If you’re using an Amazon RDS database as the PeopleSoft database, use [Enhanced Monitoring](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Monitoring.OS.overview.html). This feature provides access to over 50 metrics, including CPU, memory, file system I/O, and disk I/O.
+ Use [AWS CloudTrail](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-user-guide.html) to monitor API calls on the AWS resources that your PeopleSoft application is using. This helps you perform security analysis, resource change tracking, and compliance auditing.
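
The Multi-AZ and Enhanced Monitoring recommendations above can be applied to an existing DB instance from the AWS CLI. A sketch; the instance identifier and the monitoring role ARN (an IAM role that allows RDS to publish to CloudWatch Logs) are placeholders, and converting to Multi-AZ is best scheduled in a maintenance window.

```shell
# Enable Multi-AZ and Enhanced Monitoring (60-second granularity) on an
# existing DB instance. Both arguments are placeholders supplied by the caller.
enable_multi_az_and_monitoring() {
  aws rds modify-db-instance \
    --db-instance-identifier "$1" \
    --multi-az \
    --monitoring-interval 60 \
    --monitoring-role-arn "$2" \
    --apply-immediately
}

# Example invocation (values hypothetical); uncomment to run:
# enable_multi_az_and_monitoring "peoplesoft-prod-db" \
#   "arn:aws:iam::111122223333:role/rds-monitoring-role"
```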

**Security best practices**
+ To protect your PeopleSoft application from common exploits such as SQL injection or cross-site scripting (XSS), use [AWS WAF](https://docs.aws.amazon.com/waf/latest/developerguide/waf-chapter.html). Consider using [AWS Shield Advanced](https://docs.aws.amazon.com/waf/latest/developerguide/shield-chapter.html) for tailored detection and mitigation services.
+ Add a rule to the Application Load Balancer to redirect traffic from HTTP to HTTPS automatically to help secure your PeopleSoft application.
+ Set up a separate security group for the Application Load Balancer. This security group should allow only HTTPS/HTTP inbound traffic and no outbound traffic. This ensures that only intended traffic is allowed and helps secure your application.
+ Use private subnets for the application servers, web servers, and database, and use [NAT gateways](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html) for outbound internet traffic. This ensures that the servers that support the application aren’t reachable publicly, while providing public access only to the servers that need it.
+ Use different VPCs to run your PeopleSoft production and non-production environments. Use [AWS Transit Gateway](https://aws.amazon.com/transit-gateway/), [VPC peering](https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html), [network ACLs](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-network-acls.html), and [security groups](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html) to control the traffic flow between the [VPC](https://aws.amazon.com/vpc/)s and, if necessary, your on-premises data center.
+ Follow the principle of least privilege. Grant access to the AWS resources used by the PeopleSoft application only to users who absolutely need it. Grant only the minimum privileges required to perform a task. For more information, see the [security pillar](https://docs.aws.amazon.com/wellarchitected/latest/security-pillar/sec_permissions_least_privileges.html) of the AWS Well-Architected Framework.
+ Wherever possible, use [AWS Systems Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/what-is-systems-manager.html) to access the EC2 instances that the PeopleSoft application uses.
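
The Application Load Balancer security group recommendations above can be sketched with the AWS CLI. The group name is hypothetical, and the web-tier security group ID is a placeholder; note that the final egress rule (scoped to the web servers' security group so that the load balancer can reach its targets and run health checks) is an assumption added here, not part of the original recommendation.

```shell
# Create a locked-down security group for the Application Load Balancer.
create_alb_security_group() {
  vpc_id="$1"; web_sg_id="$2"
  sg_id=$(aws ec2 create-security-group \
    --group-name "peoplesoft-alb-sg" \
    --description "PeopleSoft ALB - HTTPS/HTTP inbound only" \
    --vpc-id "$vpc_id" \
    --query "GroupId" --output text)

  # Inbound: only HTTPS, plus HTTP for the HTTP-to-HTTPS redirect rule.
  aws ec2 authorize-security-group-ingress --group-id "$sg_id" \
    --protocol tcp --port 443 --cidr 0.0.0.0/0
  aws ec2 authorize-security-group-ingress --group-id "$sg_id" \
    --protocol tcp --port 80 --cidr 0.0.0.0/0

  # Remove the default allow-all egress rule.
  aws ec2 revoke-security-group-egress --group-id "$sg_id" \
    --ip-permissions '[{"IpProtocol":"-1","IpRanges":[{"CidrIp":"0.0.0.0/0"}]}]'

  # Assumption: allow egress only to the web servers' security group.
  aws ec2 authorize-security-group-egress --group-id "$sg_id" \
    --ip-permissions "[{\"IpProtocol\":\"tcp\",\"FromPort\":443,\"ToPort\":443,\"UserIdGroupPairs\":[{\"GroupId\":\"$web_sg_id\"}]}]"

  echo "$sg_id"
}

# Uncomment to run (IDs are placeholders):
# create_alb_security_group "vpc-0123456789abcdef0" "sg-0aaaabbbbccccdddd"
```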

**Reliability best practices**
+ When you use an Application Load Balancer, register at least one target in each enabled Availability Zone. This makes the load balancer most effective.
+ We recommend that you have three distinct URLs for each PeopleSoft production environment: one URL to access the application, one to serve the integration broker, and one to view reports. If possible, each URL should have its own dedicated web servers and application servers. This design helps make your PeopleSoft application more secure, because each URL has a distinct functionality and controlled access. It also minimizes the scope of impact if the underlying services fail.
+ We recommend that you configure [health checks on the load balancer target groups](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/target-group-health-checks.html) for your PeopleSoft application. The health checks should be performed on the web servers instead of the EC2 instances running those servers. This ensures that if the web server crashes or the EC2 instance that hosts the web server goes down, the Application Load Balancer reflects that information accurately.
+ For a PeopleSoft production application, we recommend that you spread the web servers across at least three Availability Zones. This ensures that the PeopleSoft application is always highly available even if one of the Availability Zones goes down.
+ For a PeopleSoft production application, enable jolt pooling (`joltPooling=true`). This ensures that your application fails over to another application server if a server is down for patching purposes or because of a VM failure.
+ For a PeopleSoft production application, set `DynamicConfigReload` to `1`. This setting is supported in PeopleTools version 8.52 and later. It adds new application servers to the web server dynamically, without restarting the servers.
+ To minimize downtime when you apply PeopleTools patches, use the blue/green deployment method for your Auto Scaling group launch configurations for the web and application servers. For more information, see the [Overview of deployment options on AWS](https://docs.aws.amazon.com/whitepapers/latest/overview-deployment-options/bluegreen-deployments.html) whitepaper.
+ Use [AWS Backup](https://docs.aws.amazon.com/aws-backup/latest/devguide/whatisbackup.html) to back up your PeopleSoft application on AWS. AWS Backup is a cost-effective, fully managed, policy-based service that simplifies data protection at scale.
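
Both failover settings above live in the web server's `configuration.properties` file (the same file referenced in the *Epics* section). A minimal sketch for reference — the `psserver` host list is illustrative:

```properties
# Application servers that serve this web domain (host:jolt-port pairs)
psserver=appsrv1.example.internal:9000,appsrv2.example.internal:9000
# Fail over to another application server if one goes down
joltPooling=true
# Pick up new application servers without a web server restart (PeopleTools 8.52 and later)
DynamicConfigReload=1
```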

**Performance best practices**
+ Terminate SSL/TLS at the Application Load Balancer for optimal performance of the PeopleSoft environment, unless your business requires traffic to be encrypted throughout the environment.
+ Create [interface VPC endpoints](https://docs.aws.amazon.com/vpc/latest/privatelink/create-interface-endpoint.html) for AWS services such as [Amazon Simple Notification Service (Amazon SNS)](https://docs.aws.amazon.com/sns/latest/dg/welcome.html) and [CloudWatch](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/WhatIsCloudWatch.html) so that traffic always stays internal. This is cost-effective and helps keep your application secure.

**Cost optimization best practices**
+ Tag all the resources used by your PeopleSoft environment, and enable [cost allocation tags](https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/cost-alloc-tags.html). These tags help you view and manage your resource costs.
+ For a PeopleSoft production application, set up Auto Scaling groups for the web servers and the application servers. This maintains the minimum number of web and application servers needed to support your application. You can use [Auto Scaling group policies](https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scaling-simple-step.html) to scale the servers up and down as required.
+ Use [billing alarms](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/monitor_estimated_charges_with_cloudwatch.html) to get alerts when costs exceed a budget threshold that you specify.
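
The billing alarm above can also be created from the CLI. A hedged sketch — the threshold, SNS topic ARN, and alarm name are placeholders, and billing metrics are published only in the `us-east-1` Region:

```shell
#!/bin/sh
# Sketch: CloudWatch billing alarm on estimated charges.
# THRESHOLD_USD and SNS_TOPIC are placeholders -- substitute your own values.
THRESHOLD_USD=500
SNS_TOPIC="arn:aws:sns:us-east-1:111122223333:billing-alerts"

echo "alarm threshold: ${THRESHOLD_USD} USD"

# Billing metrics live in the AWS/Billing namespace in us-east-1:
# aws cloudwatch put-metric-alarm \
#   --region us-east-1 \
#   --alarm-name peoplesoft-billing-alarm \
#   --namespace AWS/Billing \
#   --metric-name EstimatedCharges \
#   --dimensions Name=Currency,Value=USD \
#   --statistic Maximum \
#   --period 21600 \
#   --evaluation-periods 1 \
#   --threshold "$THRESHOLD_USD" \
#   --comparison-operator GreaterThanThreshold \
#   --alarm-actions "$SNS_TOPIC"
```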

**Sustainability best practices**
+ Use [infrastructure as code](https://docs.aws.amazon.com/whitepapers/latest/introduction-devops-aws/infrastructure-as-code.html) (IaC) to maintain your PeopleSoft environments. This helps you build consistent environments and maintain change control.

## Epics
<a name="set-up-a-highly-available-peoplesoft-architecture-on-aws-epics"></a>

### Migrate your PeopleSoft database to Amazon RDS
<a name="migrate-your-peoplesoft-database-to-amazon-rds"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a DB subnet group. | On the [Amazon RDS console](https://console.aws.amazon.com/rds/), in the navigation pane, choose **Subnet groups**, and then create an Amazon RDS DB subnet group with subnets in multiple Availability Zones. This is required for the Amazon RDS database to run in a Multi-AZ configuration. | Cloud administrator | 
| Create the Amazon RDS database. | Create an Amazon RDS database in an Availability Zone of the AWS Region you selected for the PeopleSoft HA environment. When you create the Amazon RDS database, make sure to select the Multi-AZ option (**Create a standby instance**) and the database subnet group you created in the previous step. For more information, see the [Amazon RDS documentation](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_CreateDBInstance.html). | Cloud administrator, Oracle database administrator | 
| Migrate your PeopleSoft database to Amazon RDS. | Migrate your existing PeopleSoft database into the Amazon RDS database by using AWS Database Migration Service (AWS DMS). For more information, see the [AWS DMS documentation](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.Oracle.html) and the AWS blog post [Migrating Oracle databases with near-zero downtime using AWS DMS](https://aws.amazon.com/blogs/database/migrating-oracle-databases-with-near-zero-downtime-using-aws-dms/). | Cloud administrator, PeopleSoft DBA | 

### Set up your Amazon EFS file system
<a name="set-up-your-amazon-efs-file-system"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a file system. | On the [Amazon EFS console](https://console.aws.amazon.com/efs/), create a file system and mount targets for each Availability Zone. For instructions, see the [Amazon EFS documentation](https://docs.aws.amazon.com/efs/latest/ug/creating-using-create-fs.html#creating-using-fs-part1-console). When the file system has been created, note its DNS name. You will use this information when you mount the file system. | Cloud administrator | 

### Set up your PeopleSoft application and file system
<a name="set-up-your-peoplesoft-application-and-file-system"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Launch an EC2 instance. | Launch an EC2 instance for your PeopleSoft application. For instructions, see the [Amazon EC2 documentation](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-launch-instance-wizard.html#liw-quickly-launch-instance).[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-a-highly-available-peoplesoft-architecture-on-aws.html) | Cloud administrator, PeopleSoft administrator | 
| Install PeopleSoft on the instance. | Install your PeopleSoft application and PeopleTools on the EC2 instance you created. For instructions, see the [Oracle documentation](https://docs.oracle.com). | Cloud administrator, PeopleSoft administrator | 
| Create the application server. | Create the application server for the AMI template and make sure that it connects successfully to the Amazon RDS database. | Cloud administrator, PeopleSoft administrator | 
| Mount the Amazon EFS file system. | Log in to the EC2 instance and run the following commands as the root user to create a mount point called `/psftmnt` on the server.<pre>sudo su -<br />mkdir /psftmnt<br />cat /etc/fstab</pre>Append the following line to the `/etc/fstab` file, using the DNS name you noted when you created the file system, and then mount the file system.<pre>fs-09e064308f1145388.efs.us-east-1.amazonaws.com:/ /psftmnt nfs4 nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport,_netdev 0 0<br />mount -a</pre> | Cloud administrator, PeopleSoft administrator | 
| Check permissions. | Make sure that the `/psftmnt` mount point has the proper permissions so that the PeopleSoft user can access it. | Cloud administrator, PeopleSoft administrator | 
| Create additional instances. | Repeat the previous steps in this epic to create template instances for the process scheduler, web server, and Elasticsearch server. Name these instances `PRCS_TEMPLATE`, `WEB_TEMPLATE`, and `SRCH_TEMPLATE`. For the web server, set `joltPooling=true` and `DynamicConfigReload=1`. | Cloud administrator, PeopleSoft administrator | 
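
The mount steps above can be collected into a small script for reuse on each template instance. A sketch, assuming the file-system DNS name shown above (substitute your own):

```shell
#!/bin/sh
# Sketch: build and print the /etc/fstab entry for the EFS mount.
# EFS_DNS is the DNS name noted when the file system was created.
EFS_DNS="fs-09e064308f1145388.efs.us-east-1.amazonaws.com"
MOUNT_POINT="/psftmnt"
NFS_OPTS="nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport,_netdev"

FSTAB_LINE="${EFS_DNS}:/ ${MOUNT_POINT} nfs4 ${NFS_OPTS} 0 0"
echo "$FSTAB_LINE"

# On the instance, as root, you would append the line and mount:
#   mkdir -p "$MOUNT_POINT"
#   echo "$FSTAB_LINE" >> /etc/fstab
#   mount -a
```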

### Create scripts to set up servers
<a name="create-scripts-to-set-up-servers"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a script to start the application server. | In the Amazon EC2 `APP_TEMPLATE` instance, as the PeopleSoft user, create the following script. Name it `appstart.sh` and place it in the `PS_HOME` directory. You will use this script to bring up the application server and record the server name on the Amazon EFS mount.<pre>#!/bin/ksh<br />. /usr/homes/hcmdemo/.profile<br />psadmin -c configure -d HCMDEMO<br />psadmin -c parallelboot -d HCMDEMO<br />touch /psftmnt/`echo $HOSTNAME`</pre> | PeopleSoft administrator | 
| Create a script to start the process scheduler server. | In the Amazon EC2 `PRCS_TEMPLATE` instance, as the PeopleSoft user, create the following script. Name it `prcsstart.sh` and place it in the `PS_HOME` directory. You will use this script to bring up the process scheduler server.<pre>#!/bin/ksh<br />. /usr/homes/hcmdemo/.profile<br /># The following line ensures that the process scheduler always has a unique name during replacement or scaling activity.<br />sed -i "s/.*PrcsServerName.*/`hostname -I | awk -F. '{print "PrcsServerName=PSUNX"$3$4}'`/" $HOME/appserv/prcs/*/psprcs.cfg<br />psadmin -p configure -d HCMDEMO<br />psadmin -p start -d HCMDEMO</pre> | PeopleSoft administrator | 
| Create a script to start the Elasticsearch server. | In the Amazon EC2 `SRCH_TEMPLATE` instance, as the Elasticsearch user, create the following script. Name it `srchstart.sh` and place it in the `HOME` directory.<pre>#!/bin/ksh<br /># The following line ensures that the correct IP address is set in the elasticsearch.yaml file.<br />sed -i "s/.*network.host.*/`hostname -I | awk '{print "host:"$0}'`/" $ES_HOME_DIR/config/elasticsearch.yaml<br />nohup $ES_HOME_DIR/bin/elasticsearch &</pre> | PeopleSoft administrator | 
| Create scripts to start the web server. | In the Amazon EC2 `WEB_TEMPLATE` instance, as the web server user, create the following scripts in the `HOME` directory.`renip.sh`: This script ensures that the web server has the correct IP address when it is cloned from the AMI.<pre>#!/bin/ksh<br />hn=`hostname`<br /># On the following line, replace <hostname-of-the-web-template> with the host name of the web template instance.<br />for text_file in `find * -type f -exec grep -l '<hostname-of-the-web-template>' {} \;`<br />do<br />sed -e 's/<hostname-of-the-web-template>/'$hn'/g' $text_file > temp<br />mv -f temp $text_file<br />done</pre>`psstrsetup.sh`: This script ensures that the web server uses the correct IP addresses of the application servers that are currently running. It tries to connect to each application server on the jolt port and adds the server to the configuration file.<pre>#!/bin/ksh<br />c2=""<br />for ctr in `ls -1 /psftmnt/*.internal`<br />do<br />c1=`echo $ctr | awk -F "/" '{print $3}'`<br /># On the following line, 9000 is the jolt port. Change it if necessary.<br />if nc -z $c1 9000 2> /dev/null; then<br />if [[ $c2 = "" ]]; then<br />c2="psserver="`echo $c1`":9000"<br />else<br />c2=`echo $c2`","`echo $c1`":9000"<br />fi<br />fi<br />done</pre>`webstart.sh`: This script runs the two previous scripts and then starts the web server.<pre>#!/bin/ksh<br /># Change the path on the following line if necessary.<br />cd /usr/homes/hcmdemo<br />./renip.sh<br />./psstrsetup.sh<br />webserv/peoplesoft/bin/startPIA.sh</pre> | PeopleSoft administrator | 
| Add a crontab entry. | In the Amazon EC2 `WEB_TEMPLATE` instance, as the web server user, add the following line to **crontab**. Change the time and path to reflect the values you need. This entry ensures that your web server always has the correct application server entries in the `configuration.properties` file.<pre>* * * * * /usr/homes/hcmdemo/psstrsetup.sh</pre> | PeopleSoft administrator | 
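
The list-building logic in `psstrsetup.sh` can be factored into a function and exercised without a live jolt port. A sketch — the host names are illustrative, and 9000 matches the jolt port used above:

```shell
#!/bin/sh
# Sketch of the psserver-list logic from psstrsetup.sh, as a function
# that can be tested without contacting real application servers.
build_psserver_list() {
    # $1 = jolt port; remaining args = reachable application server hostnames
    port="$1"; shift
    list=""
    for host in "$@"; do
        if [ -z "$list" ]; then
            list="psserver=${host}:${port}"
        else
            list="${list},${host}:${port}"
        fi
    done
    echo "$list"
}

# Example: two application servers discovered under /psftmnt
build_psserver_list 9000 ip-10-0-1-11.ec2.internal ip-10-0-1-12.ec2.internal
# psserver=ip-10-0-1-11.ec2.internal:9000,ip-10-0-1-12.ec2.internal:9000
```

In the real script, only hosts that answer on the jolt port (the `nc -z` check) would be passed to this function.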

### Create AMIs and Auto Scaling group templates
<a name="create-amis-and-auto-scaling-group-templates"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an AMI for the application server template. | On the Amazon EC2 console, create an AMI image of the Amazon EC2 `APP_TEMPLATE` instance. Name the AMI `PSAPPSRV-SCG-VER1`. For instructions, see the [Amazon EC2 documentation](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/creating-an-ami-ebs.html). | Cloud administrator, PeopleSoft administrator | 
| Create AMIs for the other servers. | Repeat the previous step to create AMIs for the process scheduler, Elasticsearch server, and web server. | Cloud administrator, PeopleSoft administrator | 
| Create a launch template for the application server Auto Scaling group. | Create a launch template for the application server Auto Scaling group. Name the template `PSAPPSRV_TEMPLATE`. In the template, choose the AMI you created for the `APP_TEMPLATE` instance. For instructions, see the [Amazon EC2 documentation](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/create-launch-template.html#create-launch-template-from-instance).[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-a-highly-available-peoplesoft-architecture-on-aws.html) | Cloud administrator, PeopleSoft administrator | 
| Create a launch template for the process scheduler server Auto Scaling group. | Repeat the previous step to create a launch template for the process scheduler server Auto Scaling group. Name the template `PSPRCS_TEMPLATE`. In the template, choose the AMI you created for the process scheduler.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-a-highly-available-peoplesoft-architecture-on-aws.html) | Cloud administrator, PeopleSoft administrator | 
| Create a launch template for the Elasticsearch server Auto Scaling group. | Repeat the previous steps to create a launch template for the Elasticsearch server Auto Scaling group. Name the template `SRCH_TEMPLATE`. In the template, choose the AMI you created for the search server.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-a-highly-available-peoplesoft-architecture-on-aws.html) | Cloud administrator, PeopleSoft administrator | 
| Create a launch template for the web server Auto Scaling group. | Repeat the previous steps to create a launch template for the web server Auto Scaling group. Name the template `WEB_TEMPLATE`. In the template, choose the AMI you created for the web server.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-a-highly-available-peoplesoft-architecture-on-aws.html) | Cloud administrator, PeopleSoft administrator | 

### Create Auto Scaling groups
<a name="create-auto-scaling-groups"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an Auto Scaling group for the application server. | On the Amazon EC2 console, create an Auto Scaling group called `PSAPPSRV_ASG` for the application server by using the `PSAPPSRV_TEMPLATE` template. For instructions, see the [Amazon EC2 documentation](https://docs.aws.amazon.com/autoscaling/ec2/userguide/create-asg-launch-template.html).[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-a-highly-available-peoplesoft-architecture-on-aws.html) | Cloud administrator, PeopleSoft administrator | 
| Create Auto Scaling groups for the other servers. | Repeat the previous step to create Auto Scaling groups for the process scheduler, Elasticsearch server, and web server. | Cloud administrator, PeopleSoft administrator | 
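
The same Auto Scaling group can be created from the CLI. A sketch, assuming the `PSAPPSRV_TEMPLATE` launch template from the previous epic; the subnet IDs and sizing below are placeholders (three subnets, one per Availability Zone, per the reliability guidance earlier in this pattern):

```shell
#!/bin/sh
# Sketch: create the PSAPPSRV_ASG Auto Scaling group from the CLI.
# Subnet IDs and sizing are placeholders -- substitute your own.
MIN_SIZE=3          # one application server per Availability Zone
MAX_SIZE=6
SUBNETS="subnet-aaa111,subnet-bbb222,subnet-ccc333"

[ "$MIN_SIZE" -le "$MAX_SIZE" ] || { echo "min exceeds max" >&2; exit 1; }
echo "ASG spans subnets: $SUBNETS"

# aws autoscaling create-auto-scaling-group \
#   --auto-scaling-group-name PSAPPSRV_ASG \
#   --launch-template LaunchTemplateName=PSAPPSRV_TEMPLATE \
#   --min-size "$MIN_SIZE" --max-size "$MAX_SIZE" \
#   --vpc-zone-identifier "$SUBNETS"
```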

### Create and configure target groups
<a name="create-and-configure-target-groups"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a target group for the web server. | On the Amazon EC2 console, create a target group called `PSFTWEB` for the web server. For instructions, see the [Elastic Load Balancing documentation](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/create-target-group.html). Set the port to the port that the web server is listening on. | Cloud administrator | 
| Configure health checks. | Confirm that the health checks have the correct values to reflect your business requirements. For more information, see the [Elastic Load Balancing documentation](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/target-group-health-checks.html). | Cloud administrator | 
| Create a target group for the Elasticsearch server. | Repeat the previous steps to create a target group called `PSFTSRCH` for the Elasticsearch server, and set the correct Elasticsearch port. | Cloud administrator | 
| Add target groups to Auto Scaling groups. | Open the web server Auto Scaling group called `PSPIA_ASG` that you created earlier. On the **Load balancing** tab, choose **Edit**, and then add the `PSFTWEB` target group to the Auto Scaling group. Repeat this step for the Elasticsearch Auto Scaling group `PSSRCH_ASG` to add the `PSFTSRCH` target group you created earlier. | Cloud administrator | 
| Set session stickiness. | In the target group `PSFTWEB`, choose the **Attributes** tab, choose **Edit**, and set the session stickiness. For stickiness type, choose **Load balancer generated cookie**, and set the duration to 1. For more information, see the [Elastic Load Balancing documentation](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/sticky-sessions.html). Repeat this step for the target group `PSFTSRCH`. | Cloud administrator | 
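
The stickiness setting in the last task has a CLI equivalent. A hedged sketch — the target group ARN is a placeholder, and the sketch assumes the duration of 1 in the task above means 1 day (the console takes the duration in days, while the API attribute takes seconds):

```shell
#!/bin/sh
# Sketch: enable load-balancer-generated-cookie stickiness on PSFTWEB.
# TG_ARN is a placeholder -- substitute the real target group ARN.
TG_ARN="arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/PSFTWEB/0123456789abcdef"
DURATION_DAYS=1
DURATION_SECONDS=$((DURATION_DAYS * 24 * 60 * 60))
echo "$DURATION_SECONDS"   # 86400

# aws elbv2 modify-target-group-attributes \
#   --target-group-arn "$TG_ARN" \
#   --attributes Key=stickiness.enabled,Value=true \
#                Key=stickiness.type,Value=lb_cookie \
#                Key=stickiness.lb_cookie.duration_seconds,Value="$DURATION_SECONDS"
```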

### Create and configure application load balancers
<a name="create-and-configure-application-load-balancers"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a load balancer for the web servers. | Create an Application Load Balancer named `PSFTLB` to load-balance traffic to the web servers. For instructions, see the [Elastic Load Balancing documentation](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/create-application-load-balancer.html#configure-load-balancer).[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-a-highly-available-peoplesoft-architecture-on-aws.html) | Cloud administrator | 
| Create a load balancer for the Elasticsearch servers. | Create an Application Load Balancer named `PSFTSCH` to load-balance traffic to the Elasticsearch servers.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-a-highly-available-peoplesoft-architecture-on-aws.html) | Cloud administrator | 
| Configure Route 53. | On the [Amazon Route 53 console](https://console.aws.amazon.com/route53/), create a record in the hosted zone that serves the PeopleSoft application, and point it at the `PSFTLB` load balancer. For instructions, see the [Amazon Route 53 documentation](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resource-record-sets-creating.html). This ensures that all traffic passes through the load balancer. | Cloud administrator | 
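
The alias record in the last task can be sketched as a Route 53 change batch. The record name, hosted zone IDs, and load balancer DNS name below are all placeholders:

```json
{
  "Comment": "Point the PeopleSoft URL at the PSFTLB load balancer",
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "peoplesoft.example.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "<canonical-hosted-zone-id-of-the-load-balancer>",
          "DNSName": "<dns-name-of-PSFTLB>",
          "EvaluateTargetHealth": true
        }
      }
    }
  ]
}
```

You would apply a batch like this with `aws route53 change-resource-record-sets --hosted-zone-id <your-zone-id> --change-batch file://psft-alias.json`.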

## Related resources
<a name="set-up-a-highly-available-peoplesoft-architecture-on-aws-resources"></a>
+ [Oracle PeopleSoft website](https://www.oracle.com/applications/peoplesoft/)
+ [AWS documentation](https://docs.aws.amazon.com)

# Set up disaster recovery for Oracle JD Edwards EnterpriseOne with AWS Elastic Disaster Recovery
<a name="set-up-disaster-recovery-for-oracle-jd-edwards-enterpriseone-with-aws-elastic-disaster-recovery"></a>

*Thanigaivel Thirumalai, Amazon Web Services*

## Summary
<a name="set-up-disaster-recovery-for-oracle-jd-edwards-enterpriseone-with-aws-elastic-disaster-recovery-summary"></a>

Disasters that are triggered by natural catastrophes, application failures, or disruption of services harm revenue and cause downtime for corporate applications. To reduce the repercussions of such events, planning for disaster recovery (DR) is critical for firms that adopt JD Edwards EnterpriseOne enterprise resource planning (ERP) systems and other mission-critical and business-critical software. 

This pattern explains how businesses can use AWS Elastic Disaster Recovery as a DR option for their JD Edwards EnterpriseOne applications. It also outlines the steps for using Elastic Disaster Recovery failover and failback to construct a cross-Region DR strategy for databases hosted on an Amazon Elastic Compute Cloud (Amazon EC2) instance in the AWS Cloud.

**Note**  
This pattern requires the primary and secondary Regions for the cross-Region DR implementation to be hosted on AWS.

[Oracle JD Edwards EnterpriseOne](https://www.oracle.com/applications/jd-edwards-enterpriseone/) is an integrated ERP software solution for midsize to large companies in a wide range of industries.

AWS Elastic Disaster Recovery minimizes downtime and data loss with fast, reliable recovery of on-premises and cloud-based applications by using affordable storage, minimal compute, and point-in-time recovery.

AWS provides [four core DR architecture patterns](https://docs.aws.amazon.com/whitepapers/latest/disaster-recovery-workloads-on-aws/disaster-recovery-options-in-the-cloud.html). This document focuses on setup, configuration, and optimization by using the [pilot light strategy](https://docs.aws.amazon.com/whitepapers/latest/disaster-recovery-workloads-on-aws/disaster-recovery-options-in-the-cloud.html). This strategy helps you create a lower-cost DR environment where you initially provision a replication server for replicating data from the source database, and you provision the actual database server only when you start a DR drill and recovery. This strategy removes the expense of maintaining a database server in the DR Region. Instead, you pay for a smaller EC2 instance that serves as a replication server.

## Prerequisites and limitations
<a name="set-up-disaster-recovery-for-oracle-jd-edwards-enterpriseone-with-aws-elastic-disaster-recovery-prereqs"></a>

**Prerequisites**
+ An active AWS account.
+ A JD Edwards EnterpriseOne application that runs on a supported version of Oracle Database or Microsoft SQL Server, in a running state on a managed EC2 instance. The application should include all JD Edwards EnterpriseOne base components (Enterprise Server, HTML Server, and Database Server) installed in one AWS Region.
+ An AWS Identity and Access Management (IAM) role to set up the Elastic Disaster Recovery service.
+ The network for running Elastic Disaster Recovery configured according to the required [connectivity settings](https://docs.aws.amazon.com/drs/latest/userguide/Network-Requirements.html).

**Limitations**
+ You can use this pattern to replicate all tiers, unless the database is hosted on Amazon Relational Database Service (Amazon RDS), in which case we recommend that you use the [cross-Region copy functionality](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_CopySnapshot.html) of Amazon RDS.
+ Elastic Disaster Recovery isn’t compatible with CloudEndure Disaster Recovery, but you can upgrade from CloudEndure Disaster Recovery. For more information, see the [FAQ](https://docs.aws.amazon.com/drs/latest/userguide/cedr-to-drs.html) in the Elastic Disaster Recovery documentation.
+ Amazon Elastic Block Store (Amazon EBS) limits the rate at which you can take snapshots. You can replicate a maximum of 300 servers in a single AWS account by using Elastic Disaster Recovery. To replicate more servers, use multiple AWS accounts or multiple target AWS Regions. (You must set up Elastic Disaster Recovery separately for each account and Region.) For more information, see [Best practices](https://docs.aws.amazon.com/drs/latest/userguide/best_practices_drs.html) in the Elastic Disaster Recovery documentation.
+ The source workloads (the JD Edwards EnterpriseOne application and database) must be hosted on EC2 instances. This pattern doesn’t support workloads that are on premises or in other cloud environments.
+ This pattern focuses on the JD Edwards EnterpriseOne components. A full DR and business continuity plan (BCP) should include other core services, including:
  + Networking (virtual private cloud, subnets, and security groups)
  + Active Directory
  + Amazon WorkSpaces
  + Elastic Load Balancing
  + A managed database service such as Amazon Relational Database Service (Amazon RDS)

For additional information about prerequisites, configurations, and limitations, see the [Elastic Disaster Recovery documentation](https://docs.aws.amazon.com/drs/latest/userguide/what-is-drs.html).

**Product versions**
+ Oracle JD Edwards EnterpriseOne (Oracle and SQL Server supported versions based on Oracle Minimum Technical Requirements)

## Architecture
<a name="set-up-disaster-recovery-for-oracle-jd-edwards-enterpriseone-with-aws-elastic-disaster-recovery-architecture"></a>

**Target technology stack**
+ A single Region and single virtual private cloud (VPC) for production and non-production, and a second Region for DR
+ A single Availability Zone in each Region to ensure low latency between servers
+ An Application Load Balancer that distributes network traffic to improve the scalability and availability of your applications across multiple Availability Zones
+ Amazon Route 53 to provide Domain Name System (DNS) configuration
+ Amazon WorkSpaces to provide users with a desktop experience in the cloud
+ Amazon Simple Storage Service (Amazon S3) for storing backups, files, and objects
+ Amazon CloudWatch for application logging, monitoring, and alarms
+ AWS Elastic Disaster Recovery for disaster recovery

**Target architecture**

The following diagram shows the cross-Region disaster recovery architecture for JD Edwards EnterpriseOne using Elastic Disaster Recovery.

![\[Architecture for JD Edwards EnterpriseOne cross-Region DR on AWS\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/9b0de5f0-f211-4086-a044-321d081604f9/images/978b7219-e54e-4e31-b3ff-4885784e2971.png)


**Procedure**

Here is a high-level review of the process. For details, see the *Epics* section.
+ Elastic Disaster Recovery replication begins with an initial sync. During the initial sync, the AWS Replication Agent replicates all the data from the source disks to the appropriate resources in the staging area subnet.
+ After the initial sync is complete, continuous replication runs indefinitely.
+ After the agent has been installed and replication has started, you review the launch parameters, which include service-specific configurations and an Amazon EC2 launch template. When the source server is marked as ready for recovery, you can launch drill or recovery instances.
+ When you start a launch, Elastic Disaster Recovery issues a series of API calls, and the recovery instance is launched on AWS according to your launch settings. The service automatically spins up a conversion server during startup. The conversion process modifies the drivers, network settings, and operating system license so that the instance boots natively on AWS.
+ After the conversion is complete, the new instance is ready for use on AWS. The volumes associated with the launched instance represent the state of the source server at the time of launch.
+ After the launch, the newly created volumes are no longer kept in sync with the source servers. The AWS Replication Agent continues to routinely replicate changes made to your source servers to the staging area volumes, but the launched instances don't reflect those changes.
+ When you start a new drill or recovery instance, its data always reflects the most recent state that has been replicated from the source server to the staging area subnet.

**Note**  
The process works both ways: for failover from a primary AWS Region to a DR Region, and for failback to the primary site after it has been recovered. You can prepare for failback by reversing the direction of data replication from the target machine back to the source machine in a fully orchestrated way.

The benefits of the process described in this pattern include:
+ Flexibility: Replication servers scale out and scale in based on dataset and replication time, so you can perform DR tests without disrupting source workloads or replication.
+ Reliability: The replication is robust, non-disruptive, and continuous.
+ Automation: This solution provides a unified, automated process for test, recovery, and failback.
+ Cost optimization: You replicate, and pay for, only the volumes that you need, and you pay for compute resources at the DR site only when those resources are activated. You can use a cost-optimized replication instance (we recommend a compute-optimized instance type) for multiple sources or for a single source with a large EBS volume.

**Automation and scale**

When you perform disaster recovery at scale, the JD Edwards EnterpriseOne servers will have dependencies on other servers in the environment. For example:
+ JD Edwards EnterpriseOne application servers that connect to a JD Edwards EnterpriseOne supported database on boot have dependencies on that database.
+ JD Edwards EnterpriseOne servers that require authentication and need to connect to a domain controller on boot to start services have dependencies on the domain controller.

For this reason, we recommend that you automate failover tasks. For example, you can use AWS Lambda or AWS Step Functions to automate the JD Edwards EnterpriseOne startup scripts and load balancer changes to automate the end-to-end failover process. For more information, see the blog post [Creating a scalable disaster recovery plan with AWS Elastic Disaster Recovery](https://aws.amazon.com/blogs/storage/creating-a-scalable-disaster-recovery-plan-with-aws-elastic-disaster-recovery/).
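
As a sketch of that orchestration idea — the tier names are illustrative, the CLI calls are commented out, and a production version (for example, a Lambda function or a Step Functions state machine) would poll each recovery job for completion before starting the next tier:

```shell
#!/bin/sh
# Sketch: start recovery tier by tier, respecting the boot dependencies
# described above (database before application servers before web tier).
set -e
RECOVERY_ORDER="database enterprise-server html-server"

for tier in $RECOVERY_ORDER; do
    echo "recovering tier: $tier"
    # aws drs start-recovery --source-servers sourceServerID=<id-for-this-tier>
    # ...poll the recovery job here until the tier is healthy...
done

# Last step: repoint DNS at the recovered environment, for example:
# aws route53 change-resource-record-sets --hosted-zone-id <zone-id> \
#   --change-batch file://dr-failover.json
```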

## Tools
<a name="set-up-disaster-recovery-for-oracle-jd-edwards-enterpriseone-with-aws-elastic-disaster-recovery-tools"></a>

**AWS services**
+ [Amazon Elastic Block Store (Amazon EBS)](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonEBS.html) provides block-level storage volumes for use with EC2 instances.
+ [Amazon Elastic Compute Cloud (Amazon EC2)](https://aws.amazon.com/products/compute/) provides scalable computing capacity in the AWS Cloud. You can launch as many virtual servers as you need and quickly scale them up or down.
+ [AWS Elastic Disaster Recovery](https://aws.amazon.com/disaster-recovery/) minimizes downtime and data loss with fast, reliable recovery of on-premises and cloud-based applications using affordable storage, minimal compute, and point-in-time recovery.
+ [Amazon Virtual Private Cloud (Amazon VPC)](https://aws.amazon.com/vpc/) gives you full control over your virtual networking environment, including resource placement, connectivity, and security.

## Best practices
<a name="set-up-disaster-recovery-for-oracle-jd-edwards-enterpriseone-with-aws-elastic-disaster-recovery-best-practices"></a>

**General best practices**
+ Have a written plan of what to do in the event of a real recovery event.
+ After you set up Elastic Disaster Recovery correctly, create an AWS CloudFormation template that can create the configuration on demand, should the need arise. Determine the order in which servers and applications should be launched, and record this in the recovery plan.
+ Perform a regular drill (standard Amazon EC2 rates apply).
+ Monitor the health of the ongoing replication by using the Elastic Disaster Recovery console or programmatically.
+ Protect the point-in-time snapshots, and confirm that they are no longer needed before you terminate recovery instances.
+ Create an IAM role for AWS Replication Agent installation.
+ Enable termination protection for recovery instances in a real DR scenario.
+ Do not use the **Disconnect from AWS** action in the Elastic Disaster Recovery console for servers that you launched recovery instances for, even in the case of a real recovery event. Performing a disconnect terminates all replication resources related to these source servers, including your point-in-time (PIT) recovery points.
+ Adjust the PIT policy to set the number of days that snapshots are retained.
+ Edit the launch template in Elastic Disaster Recovery launch settings to set the correct subnet, security group, and instance type for your target server.
+ Automate the end-to-end failover process by using Lambda or Step Functions to run JD Edwards EnterpriseOne startup scripts and make load balancer changes.
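
As an illustration of the automation practice above, the following sketch starts a drill for a set of source servers by using the Elastic Disaster Recovery `StartRecovery` API through the AWS SDK for Python (Boto3). The server IDs and Region are hypothetical placeholders.

```python
# Sketch: initiate an Elastic Disaster Recovery drill for a list of
# source servers. Server IDs and the Region are hypothetical.

def build_recovery_request(source_server_ids, is_drill=True):
    """Build the StartRecovery request payload for the DRS API."""
    return {
        "isDrill": is_drill,
        "sourceServers": [{"sourceServerID": sid} for sid in source_server_ids],
    }

def start_drill(source_server_ids, region="us-west-2"):
    import boto3  # AWS SDK for Python
    drs = boto3.client("drs", region_name=region)
    # StartRecovery launches drill instances from the latest PIT snapshot.
    return drs.start_recovery(**build_recovery_request(source_server_ids))
```

A Lambda function wrapping `start_drill` can be scheduled or invoked from Step Functions as part of the recovery runbook.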

**JD Edwards EnterpriseOne optimization and considerations**
+ Move **PrintQueue** into the database.
+ Move **MediaObjects** into the database.
+ Exclude the logs and temp folder from batch and logic servers.
+ Exclude the temp folder from Oracle WebLogic.
+ Create scripts for startup after the failover.
+ Exclude the tempdb for SQL Server.
+ Exclude the temp file for Oracle.

## Epics
<a name="set-up-disaster-recovery-for-oracle-jd-edwards-enterpriseone-with-aws-elastic-disaster-recovery-epics"></a>

### Perform initial tasks and configuration
<a name="perform-initial-tasks-and-configuration"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Set up the replication network.  | Implement your JD Edwards EnterpriseOne system in the primary AWS Region and identify the AWS Region for DR. Follow the steps in the [Replication network requirements](https://docs.aws.amazon.com/drs/latest/userguide/preparing-environments.html) section of the Elastic Disaster Recovery documentation to plan and set up your replication and DR network. | AWS administrator | 
| Determine RPO and RTO. | Identify the recovery time objective (RTO) and recovery point objective (RPO) for your application servers and database. | Cloud architect, DR architect | 
| Enable replication for Amazon EFS. | If applicable, enable replication from the primary AWS Region to the DR Region for shared file systems such as Amazon Elastic File System (Amazon EFS) by using AWS DataSync, **rsync**, or another appropriate tool. | Cloud administrator | 
| Manage DNS in case of DR. | Identify the process to update the Domain Name System (DNS) during the DR drill or actual DR. | Cloud administrator | 
| Create an IAM role for setup. | Follow the instructions in the [Elastic Disaster Recovery initialization and permissions](https://docs.aws.amazon.com/drs/latest/userguide/getting-started-initializing.html) section of the Elastic Disaster Recovery documentation to create an IAM role to initialize and manage the AWS service. | Cloud administrator | 
| Set up VPC peering. | Make sure that the source and target VPCs are peered and accessible to each other. For configuration instructions, see the [Amazon VPC documentation](https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html). | AWS administrator | 
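
For the DNS update task above, a minimal sketch of the Route 53 change that repoints a record at the DR endpoint is shown below. The hosted zone ID, record name, and target are hypothetical; adapt the record type and TTL to your environment.

```python
# Sketch: repoint a DNS record at the DR endpoint during a drill or
# failover. Names and the hosted zone ID are hypothetical.

def build_change_batch(record_name, target, ttl=60):
    """Build a Route 53 UPSERT change batch for a CNAME record."""
    return {
        "Comment": "JD Edwards EnterpriseOne DR failover",
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": record_name,
                "Type": "CNAME",
                "TTL": ttl,
                "ResourceRecords": [{"Value": target}],
            },
        }],
    }

def update_dns(hosted_zone_id, record_name, target):
    import boto3  # AWS SDK for Python
    r53 = boto3.client("route53")
    return r53.change_resource_record_sets(
        HostedZoneId=hosted_zone_id,
        ChangeBatch=build_change_batch(record_name, target),
    )
```

Keeping the TTL low (for example, 60 seconds) shortens the time clients continue to resolve the old endpoint after failover.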

### Configure Elastic Disaster Recovery replication settings
<a name="configure-elastic-disaster-recovery-replication-settings"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Initialize Elastic Disaster Recovery. | Open the [Elastic Disaster Recovery console](https://console.aws.amazon.com/drs/home), choose the target AWS Region (where you will replicate data and launch recovery instances), and then choose **Set default replication settings**. | AWS administrator | 
| Set up replication servers. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-disaster-recovery-for-oracle-jd-edwards-enterpriseone-with-aws-elastic-disaster-recovery.html) | AWS administrator | 
| Configure volumes and security groups. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-disaster-recovery-for-oracle-jd-edwards-enterpriseone-with-aws-elastic-disaster-recovery.html) | AWS administrator | 
| Configure additional settings. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-disaster-recovery-for-oracle-jd-edwards-enterpriseone-with-aws-elastic-disaster-recovery.html) | AWS administrator | 

### Install the AWS Replication Agent
<a name="install-the-aws-replication-agent"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an IAM role. | Create an IAM role that contains the `AWSElasticDisasterRecoveryAgentInstallationPolicy` policy. In the **Select AWS access type** section, enable programmatic access. Note the access key ID and secret access key. You will need this information during the installation of the AWS Replication Agent. | AWS administrator | 
| Check requirements. | Check and complete the [prerequisites](https://docs.aws.amazon.com/drs/latest/userguide/installation-requiremets.html) in the Elastic Disaster Recovery documentation for installing the AWS Replication Agent. | AWS administrator | 
| Install the AWS Replication Agent. | Follow the [installation instructions](https://docs.aws.amazon.com/drs/latest/userguide/agent-installation-instructions.html) for your operating system and install the AWS Replication Agent. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-disaster-recovery-for-oracle-jd-edwards-enterpriseone-with-aws-elastic-disaster-recovery.html) Repeat these steps for the remaining servers. | AWS administrator | 
| Monitor the replication. | Return to the Elastic Disaster Recovery **Source servers** pane to monitor the replication status. The initial sync will take some time, depending on the size of the data transfer. When the source server is fully synced, the server status is updated to **Ready**. This means that a replication server has been created in the staging area, and the EBS volumes have been replicated from the source server to the staging area. | AWS administrator | 
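
The replication check in the last task can also be done programmatically. The following sketch lists source servers through the DRS `DescribeSourceServers` API and flags any whose replication is stalled; the Region is a hypothetical placeholder.

```python
# Sketch: flag source servers whose data replication is stalled, based on
# the response shape of the DRS DescribeSourceServers API.

def stalled_servers(source_servers):
    """Return IDs of servers whose data replication state is STALLED."""
    return [
        s["sourceServerID"]
        for s in source_servers
        if s.get("dataReplicationInfo", {}).get("dataReplicationState") == "STALLED"
    ]

def check_replication(region="us-west-2"):
    import boto3  # AWS SDK for Python
    drs = boto3.client("drs", region_name=region)
    servers = []
    # DescribeSourceServers is paginated; collect all pages.
    for page in drs.get_paginator("describe_source_servers").paginate(filters={}):
        servers.extend(page["items"])
    return stalled_servers(servers)
```

Running `check_replication` on a schedule (for example, from a Lambda function) and alerting when the returned list is nonempty implements the monitoring best practice noted earlier.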

### Configure launch settings
<a name="configure-launch-settings"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Edit launch settings. | To update the launch settings for the drill and recovery instances, on the [Elastic Disaster Recovery console](https://console.aws.amazon.com/drs/home), select the source server, and then choose **Actions**, **Edit launch settings**. Or you can choose your replicating source machines from the **Source servers** page, and then choose the **Launch Settings** tab. This tab has two sections: **General launch settings** and **EC2 launch template**. | AWS administrator | 
| Configure general launch settings. | Revise the general launch settings according to your requirements. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-disaster-recovery-for-oracle-jd-edwards-enterpriseone-with-aws-elastic-disaster-recovery.html) For more information, see [General launch settings](https://docs.aws.amazon.com/drs/latest/userguide/launch-general-settings.html) in the Elastic Disaster Recovery documentation. | AWS administrator | 
| Configure the Amazon EC2 launch template. | Elastic Disaster Recovery uses Amazon EC2 launch templates to launch drill and recovery instances for each source server. The launch template is created automatically for each source server that you add to Elastic Disaster Recovery after you install the AWS Replication Agent. You must set the Amazon EC2 launch template as the default launch template if you want to use it with Elastic Disaster Recovery. For more information, see [EC2 Launch Template](https://docs.aws.amazon.com/drs/latest/userguide/ec2-launch.html) in the Elastic Disaster Recovery documentation. | AWS administrator | 

### Initiate DR drill and failover
<a name="initiate-dr-drill-and-failover"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Initiate a drill. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-disaster-recovery-for-oracle-jd-edwards-enterpriseone-with-aws-elastic-disaster-recovery.html) For more information, see [Preparing for failover](https://docs.aws.amazon.com/drs/latest/userguide/failback-preparing.html) in the Elastic Disaster Recovery documentation. | AWS administrator | 
| Validate the drill. | In the previous step, you launched new target instances in the DR Region. The target instances are replicas of the source servers based on the snapshot taken when you initiated the launch. In this procedure, you connect to your Amazon EC2 target machines to confirm that they're running as expected. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-disaster-recovery-for-oracle-jd-edwards-enterpriseone-with-aws-elastic-disaster-recovery.html) |  | 
| Initiate a failover. | A failover is the redirection of traffic from a primary system to a secondary system. Elastic Disaster Recovery helps you perform a failover by launching recovery instances on AWS. When the recovery instances have been launched, you redirect the traffic from your primary systems to these instances. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-disaster-recovery-for-oracle-jd-edwards-enterpriseone-with-aws-elastic-disaster-recovery.html) For more information, see [Performing a failover](https://docs.aws.amazon.com/drs/latest/userguide/failback-preparing-failover.html) in the Elastic Disaster Recovery documentation. | AWS administrator | 
| Initiate a failback. | The process for initiating a failback is similar to the process for initiating a failover. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-disaster-recovery-for-oracle-jd-edwards-enterpriseone-with-aws-elastic-disaster-recovery.html) For more information, see [Performing a failback](https://docs.aws.amazon.com/drs/latest/userguide/failback-performing-main.html) in the Elastic Disaster Recovery documentation. | AWS administrator | 
| Start JD Edwards EnterpriseOne components. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-disaster-recovery-for-oracle-jd-edwards-enterpriseone-with-aws-elastic-disaster-recovery.html) You will need to incorporate the changes in Route 53 and the Application Load Balancer for the JD Edwards EnterpriseOne link to work. You can automate these steps by using Lambda, Step Functions, and Systems Manager (Run Command). Elastic Disaster Recovery performs block-level replication of the source EC2 instance EBS volumes that host the operating system and file systems. Shared file systems that were created by using Amazon EFS aren’t part of this replication. You can replicate shared file systems to the DR Region by using AWS DataSync, as noted in the first epic, and then mount these replicated file systems in the DR system. | JD Edwards EnterpriseOne CNC | 

## Troubleshooting
<a name="set-up-disaster-recovery-for-oracle-jd-edwards-enterpriseone-with-aws-elastic-disaster-recovery-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| Source server data replication status is **Stalled** and replication lags. If you check details, the data replication status displays **Agent not seen**. | Check to confirm that the stalled source server is running. If the source server goes down, the replication server is automatically terminated. For more information about lag issues, see [Replication lag issues](https://docs.aws.amazon.com/drs/latest/userguide/Other-Troubleshooting-Topics.html#Replication-Lag-Issues) in the Elastic Disaster Recovery documentation. | 
| Installation of the AWS Replication Agent on a source EC2 instance fails on RHEL 8.2 after scanning the disks. `aws_replication_agent_installer.log` reveals that kernel headers are missing. | Before you install the AWS Replication Agent on RHEL 8, CentOS 8, or Oracle Linux 8, run:<pre>sudo yum install elfutils-libelf-devel</pre>For more information, see [Linux installation requirements](https://docs.aws.amazon.com/mgn/latest/ug/installation-requirements.html#linux-requirements) in the Elastic Disaster Recovery documentation. | 
| On the Elastic Disaster Recovery console, you see the source server as **Ready** with a lag, and the data replication status is **Stalled**. Depending on how long the AWS Replication Agent has been unavailable, the status might indicate high lag, but the issue remains the same. | Use an operating system command to confirm that the AWS Replication Agent is running in the source EC2 instance, or confirm that the instance is running. After you correct any issues, Elastic Disaster Recovery will restart scanning. Wait until all data has been synced and the replication status is **Healthy** before you start a DR drill. | 
| Initial replication with high lag. On the Elastic Disaster Recovery console, you can see that the initial sync status is extremely slow for a source server. | Check for the replication lag issues documented in the [Replication lag issues](https://docs.aws.amazon.com/drs/latest/userguide/Other-Troubleshooting-Topics.html#Replication-Lag-Issues) section of the Elastic Disaster Recovery documentation. The replication server might be unable to handle the load because of intrinsic compute operations. In that case, try upgrading the instance type after consulting with the [AWS Technical Support team](https://support.console.aws.amazon.com/support/). | 

## Related resources
<a name="set-up-disaster-recovery-for-oracle-jd-edwards-enterpriseone-with-aws-elastic-disaster-recovery-resources"></a>
+ [AWS Elastic Disaster Recovery User Guide](https://docs.aws.amazon.com/drs/latest/userguide/what-is-drs.html)
+ [Creating a scalable disaster recovery plan with AWS Elastic Disaster Recovery](https://aws.amazon.com/blogs/storage/creating-a-scalable-disaster-recovery-plan-with-aws-elastic-disaster-recovery/) (AWS blog post)
+ [AWS Elastic Disaster Recovery - A Technical Introduction](https://explore.skillbuilder.aws/learn/course/internal/view/elearning/11123/aws-elastic-disaster-recovery-a-technical-introduction) (AWS Skill Builder course; requires login)
+ [AWS Elastic Disaster Recovery quick start guide](https://docs.aws.amazon.com/drs/latest/userguide/quick-start-guide-gs.html)

# Set up CloudFormation drift detection in a multi-Region, multi-account organization
<a name="set-up-aws-cloudformation-drift-detection-in-a-multi-region-multi-account-organization"></a>

*Ram Kandaswamy, Amazon Web Services*

## Summary
<a name="set-up-aws-cloudformation-drift-detection-in-a-multi-region-multi-account-organization-summary"></a>

Amazon Web Services (AWS) users often look for an efficient way to detect resource configuration mismatches, including drift in AWS CloudFormation stacks, and fix them as soon as possible. This is especially the case when AWS Control Tower is used.

This pattern provides a prescriptive solution that efficiently solves the problem by using consolidated resource configuration changes and acting on those changes to generate results. The solution is designed for scenarios where there are several CloudFormation stacks created in more than one AWS Region, or in more than one account, or a combination of both. The goals of the solution are the following:
+ Simplify the drift detection process
+ Set up notification and alerting
+ Set up consolidated reporting

## Prerequisites and limitations
<a name="set-up-aws-cloudformation-drift-detection-in-a-multi-region-multi-account-organization-prereqs"></a>

**Prerequisites**
+ AWS Config enabled in all the Regions and accounts that must be monitored

**Limitations**
+ The report generated supports only the comma-separated values (CSV) and JSON output formats.

## Architecture
<a name="set-up-aws-cloudformation-drift-detection-in-a-multi-region-multi-account-organization-architecture"></a>

The following diagram shows AWS Organizations set up with multiple accounts. AWS Config rules communicate between the accounts.  

![\[Five-step process for monitoring stacks in two AWS Organizations accounts.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/735d0987-b953-47f8-a9bc-b02a88957ee5/images/340cee9a-5a4e-49ea-bd73-d37dcea5e098.png)


 The workflow includes the following steps:

1. The AWS Config rule detects drift.

1. Drift detection results that are found in other accounts are sent to the management account.

1. The Amazon CloudWatch rule calls an AWS Lambda function.

1. The Lambda function queries the AWS Config rule for aggregated results.

1. The Lambda function notifies Amazon Simple Notification Service (Amazon SNS), which sends email notification of the drift.
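
Steps 3 through 5 can be sketched as a single Lambda function that runs the aggregated drift query and publishes the results to Amazon SNS. The aggregator name and topic ARN below are hypothetical placeholders.

```python
# Sketch of the Lambda function in steps 3-5: query the AWS Config
# aggregator for drifted stacks and notify an SNS topic.

QUERY = (
    "SELECT resourceId, configuration.driftInformation.stackDriftStatus "
    "WHERE resourceType = 'AWS::CloudFormation::Stack' "
    "AND configuration.driftInformation.stackDriftStatus IN ('DRIFTED')"
)

def format_report(results):
    """Turn the aggregated query results (JSON strings) into one message."""
    import json
    stacks = [json.loads(r)["resourceId"] for r in results]
    return "Drifted CloudFormation stacks:\n" + "\n".join(stacks)

def handler(event, context):
    import boto3  # AWS SDK for Python
    config = boto3.client("config")
    response = config.select_aggregate_resource_config(
        Expression=QUERY,
        ConfigurationAggregatorName="org-aggregator",  # hypothetical name
    )
    if response["Results"]:
        boto3.client("sns").publish(
            TopicArn="arn:aws:sns:us-east-1:111122223333:drift-alerts",  # hypothetical
            Subject="CloudFormation drift detected",
            Message=format_report(response["Results"]),
        )
```

The `select_aggregate_resource_config` API returns each matching resource as a JSON string, which is why the report helper parses every result before formatting the message.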

**Automation and scale**

The solution presented here can scale for both additional Regions and accounts.

## Tools
<a name="set-up-aws-cloudformation-drift-detection-in-a-multi-region-multi-account-organization-tools"></a>

**AWS services**
+ [AWS Config](https://docs.aws.amazon.com/config/latest/developerguide/WhatIsConfig.html) provides a detailed view of the configuration of AWS resources in your AWS account. This includes how the resources are related to one another and how they were configured in the past so that you can see how the configurations and relationships change over time.
+ [Amazon CloudWatch](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/WhatIsCloudWatch.html) helps you monitor the metrics of your AWS resources and the applications you run on AWS in real time.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a compute service that helps you run code without needing to provision or manage servers. It runs your code only when needed and scales automatically, so you pay only for the compute time that you use.
+ [Amazon Simple Notification Service (Amazon SNS)](https://docs.aws.amazon.com/sns/latest/dg/welcome.html) helps you coordinate and manage the exchange of messages between publishers and clients, including web servers and email addresses.

## Epics
<a name="set-up-aws-cloudformation-drift-detection-in-a-multi-region-multi-account-organization-epics"></a>

### Automate drift detection for CloudFormation
<a name="automate-drift-detection-for-cfn"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the aggregator. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-aws-cloudformation-drift-detection-in-a-multi-region-multi-account-organization.html) | Cloud architect | 
| Create an AWS managed rule. | Add the `cloudformation-stack-drift-detection-check` AWS managed rule. The rule needs one parameter value: `cloudformationRoleArn`. Enter the Amazon Resource Name (ARN) of the IAM role that has permissions to detect stack drift. The role must have a trust policy that enables AWS Config to assume the role. | Cloud architect | 
| Create the advanced query section of the aggregator. | To fetch drifted stacks from multiple sources, create the following query:<pre>SELECT resourceId, configuration.driftInformation.stackDriftStatus WHERE resourceType = 'AWS::CloudFormation::Stack'  AND configuration.driftInformation.stackDriftStatus IN ('DRIFTED')</pre> | Cloud architect, Developer | 
| Automate running the query and publish. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-aws-cloudformation-drift-detection-in-a-multi-region-multi-account-organization.html) | Cloud architect, Developer | 
| Create a CloudWatch rule. | Create a schedule-based CloudWatch rule to call the Lambda function, which is responsible for alerting. | Cloud architect | 
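
For the schedule-based rule in the last task, a minimal sketch using the CloudWatch Events (EventBridge) API is shown below. The rule name, rate, and function ARN are hypothetical; the Lambda function also needs a resource-based permission that allows the rule to invoke it.

```python
# Sketch: create the schedule that invokes the alerting Lambda function.
# Rule name, schedule, and function ARN are hypothetical.

def schedule_rule_params(rule_name, rate_minutes):
    """Build PutRule parameters for a rate-based schedule."""
    return {
        "Name": rule_name,
        "ScheduleExpression": f"rate({rate_minutes} minutes)",
        "State": "ENABLED",
    }

def create_schedule(rule_name, rate_minutes, function_arn):
    import boto3  # AWS SDK for Python
    events = boto3.client("events")
    events.put_rule(**schedule_rule_params(rule_name, rate_minutes))
    # Attach the Lambda function as the rule target.
    events.put_targets(
        Rule=rule_name,
        Targets=[{"Id": "drift-alert-lambda", "Arn": function_arn}],
    )
```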

## Related resources
<a name="set-up-aws-cloudformation-drift-detection-in-a-multi-region-multi-account-organization-resources"></a>

**Resources**
+ [What Is AWS Config?](https://docs.aws.amazon.com/config/latest/developerguide/WhatIsConfig.html)
+ [Multi-account multi-Region data aggregation](https://docs.aws.amazon.com/config/latest/developerguide/aggregate-data.html)
+ [Detecting unmanaged configuration changes to stacks and resources](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-stack-drift.html)
+ [IAM: Pass an IAM role to a specific AWS service](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_examples_iam-passrole-service.html)
+ [What is Amazon SNS?](https://docs.aws.amazon.com/sns/latest/dg/welcome.html)

## Additional information
<a name="set-up-aws-cloudformation-drift-detection-in-a-multi-region-multi-account-organization-additional"></a>

**Considerations**

We recommend using the solution presented in this pattern instead of custom solutions that make API calls at specific intervals to initiate drift detection on each CloudFormation stack or stack set. Custom solutions that make API calls at specific intervals can lead to a large number of API calls and affect performance. Because of the number of API calls, throttling can occur. Another potential issue is delayed detection if resource changes are identified on a schedule only.

Because stack sets are made up of stacks, you can use this solution for stack sets as well. Stack instance details are also available as part of the solution.

## Attachments
<a name="attachments-735d0987-b953-47f8-a9bc-b02a88957ee5"></a>

To access additional content that is associated with this document, unzip the following file: [attachment.zip](samples/p-attach/735d0987-b953-47f8-a9bc-b02a88957ee5/attachments/attachment.zip)

# Successfully import an S3 bucket as an AWS CloudFormation stack
<a name="successfully-import-an-s3-bucket-as-an-aws-cloudformation-stack"></a>

*Ram Kandaswamy, Amazon Web Services*

## Summary
<a name="successfully-import-an-s3-bucket-as-an-aws-cloudformation-stack-summary"></a>

If you use Amazon Web Services (AWS) resources, such as Amazon Simple Storage Service (Amazon S3) buckets, and want to use an infrastructure as code (IaC) approach, then you can import your resources into AWS CloudFormation and manage them as a stack.

This pattern provides steps to successfully import an S3 bucket as an AWS CloudFormation stack. By using this pattern's approach, you can avoid possible errors that might occur if you import your S3 bucket in a single action.
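
Under the hood, CloudFormation imports an existing resource through a change set of type `IMPORT` that maps the bucket to a logical ID in the template. The following sketch shows that call; the stack name, logical ID, and bucket name are hypothetical placeholders.

```python
# Sketch: create an IMPORT change set for an existing S3 bucket. The
# stack name, logical ID, and bucket name are hypothetical.

def resources_to_import(logical_id, bucket_name):
    """Describe the existing bucket for CloudFormation's import operation."""
    return [{
        "ResourceType": "AWS::S3::Bucket",
        "LogicalResourceId": logical_id,
        "ResourceIdentifier": {"BucketName": bucket_name},
    }]

def create_import_change_set(stack_name, template_body, bucket_name):
    import boto3  # AWS SDK for Python
    cfn = boto3.client("cloudformation")
    return cfn.create_change_set(
        StackName=stack_name,
        ChangeSetName=f"{stack_name}-import",
        ChangeSetType="IMPORT",
        TemplateBody=template_body,
        ResourcesToImport=resources_to_import("S3Bucket", bucket_name),
    )
```

After the change set is created and reviewed, running `execute_change_set` completes the import; the imported resource must carry a `DeletionPolicy` (such as `Retain`) in the template.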

## Prerequisites and limitations
<a name="successfully-import-an-s3-bucket-as-an-aws-cloudformation-stack-prereqs"></a>

**Prerequisites **
+ An active AWS account.
+ An existing S3 bucket and S3 bucket policy. For more information, see [What S3 bucket policy should I use to comply with the AWS Config rule s3-bucket-ssl-requests-only?](https://aws.amazon.com/premiumsupport/knowledge-center/s3-bucket-policy-for-config-rule/) in the AWS Knowledge Center.
+ An existing AWS Key Management Service (AWS KMS) key and its alias. For more information about this, see [Working with aliases](https://docs.aws.amazon.com/kms/latest/developerguide/programming-aliases.html) in the AWS KMS documentation.
+ The sample `CloudFormation-template-S3-bucket` AWS CloudFormation template (attached), downloaded to your local computer.

## Architecture
<a name="successfully-import-an-s3-bucket-as-an-aws-cloudformation-stack-architecture"></a>

![\[Workflow to use CloudFormation template to create a CloudFormation stack to import an S3 bucket.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/aea7f6fe-8e67-46c4-8b90-1ab06b879111/images/ee143374-a0a4-42d9-b7ca-16593a597a84.png)


 

The diagram shows the following workflow:

1. The user creates a JSON- or YAML-formatted AWS CloudFormation template.

1. The template creates an AWS CloudFormation stack to import the S3 bucket.

1. The AWS CloudFormation stack manages the S3 bucket that you specified in the template.

**Technology stack**
+ AWS CloudFormation
+ AWS Identity and Access Management (IAM)
+ AWS KMS
+ Amazon S3

 

**Tools**
+ [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) – AWS CloudFormation helps you to create and provision AWS infrastructure deployments predictably and repeatedly.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) – IAM is a web service for securely controlling access to AWS services.
+ [AWS KMS](https://docs.aws.amazon.com/kms/latest/developerguide/overview.html) – AWS Key Management Service (AWS KMS) is an encryption and key management service scaled for the cloud.
+ [Amazon S3](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) – Amazon Simple Storage Service (Amazon S3) is storage for the Internet.

## Epics
<a name="successfully-import-an-s3-bucket-as-an-aws-cloudformation-stack-epics"></a>

### Import an S3 bucket with AWS KMS key-based encryption as an AWS CloudFormation stack
<a name="import-an-s3-bucket-with-kms-key-long--based-encryption-as-an-aws-cloudformation-stack"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a template to import the S3 bucket and KMS key. | On your local computer, create a template to import your S3 bucket and KMS key by using the following sample template:<pre>AWSTemplateFormatVersion: 2010-09-09<br /><br />Parameters:<br /><br />  bucketName:<br /><br />    Type: String<br /><br />Resources:<br /><br />  S3Bucket:<br /><br />    Type: 'AWS::S3::Bucket'<br /><br />    DeletionPolicy: Retain<br /><br />    Properties:<br /><br />      BucketName: !Ref bucketName<br /><br />      BucketEncryption:<br /><br />        ServerSideEncryptionConfiguration:<br /><br />          - ServerSideEncryptionByDefault:<br /><br />              SSEAlgorithm: 'aws:kms'<br /><br />              KMSMasterKeyID: !GetAtt <br /><br />                - KMSS3Encryption<br /><br />                - Arn<br /><br />  KMSS3Encryption:<br /><br />    Type: 'AWS::KMS::Key'<br /><br />    DeletionPolicy: Retain<br /><br />    Properties:<br /><br />      Enabled: true<br /><br />      KeyPolicy: !Sub |-<br /><br />        {<br /><br />            "Id": "key-consolepolicy-3",<br /><br />            "Version": "2012-10-17",<br /><br />            "Statement": [<br /><br />                {<br /><br />                    "Sid": "Enable IAM User Permissions",<br /><br />                    "Effect": "Allow",<br /><br />                    "Principal": {<br /><br />                        "AWS": ["arn:aws:iam::${AWS::AccountId}:root"]<br /><br />                    },<br /><br />                    "Action": "kms:*",<br /><br />                    "Resource": "*"<br /><br />                }<br /><br />            ]<br /><br />        }<br /><br />      EnableKeyRotation: true</pre> | AWS DevOps | 
| Create the stack. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/successfully-import-an-s3-bucket-as-an-aws-cloudformation-stack.html) | AWS DevOps | 
| Create the KMS key alias. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/successfully-import-an-s3-bucket-as-an-aws-cloudformation-stack.html)<pre>KMSS3EncryptionAlias:<br /><br />    Type: 'AWS::KMS::Alias'<br /><br />    DeletionPolicy: Retain<br /><br />    Properties:<br /><br />      AliasName: alias/S3BucketKey<br /><br />      TargetKeyId: !Ref KMSS3Encryption</pre>For more information, see [AWS CloudFormation stack updates](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks.html) in the AWS CloudFormation documentation. | AWS DevOps | 
| Update the stack to include the S3 bucket policy. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/successfully-import-an-s3-bucket-as-an-aws-cloudformation-stack.html)<pre>S3BucketPolicy:<br /><br />  Type: 'AWS::S3::BucketPolicy'<br /><br />  Properties:<br /><br />    Bucket: !Ref S3Bucket<br /><br />    PolicyDocument: !Sub |-<br /><br />      {<br /><br />                  "Version": "2008-10-17",<br /><br />                  "Id": "restricthttp",<br /><br />                  "Statement": [<br /><br />                      {<br /><br />                          "Sid": "denyhttp",<br /><br />                          "Effect": "Deny",<br /><br />                          "Principal": {<br /><br />                              "AWS": "*"<br /><br />                          },<br /><br />                          "Action": "s3:*",<br /><br />                          "Resource": ["arn:aws:s3:::${S3Bucket}","arn:aws:s3:::${S3Bucket}/*"],<br /><br />                          "Condition": {<br /><br />                              "Bool": {<br /><br />                                  "aws:SecureTransport": "false"<br /><br />                              }<br /><br />                          }<br /><br />                      }<br /><br />                  ]<br /><br />              }</pre>This S3 bucket policy has a deny statement that blocks requests that don't use HTTPS. | AWS DevOps | 
| Update the key policy. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/successfully-import-an-s3-bucket-as-an-aws-cloudformation-stack.html)For more information, see [Key policies in AWS KMS](https://docs.aws.amazon.com/kms/latest/developerguide/key-policies.html) in the AWS KMS documentation. | AWS administrator | 
| Add resource-level tags. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/successfully-import-an-s3-bucket-as-an-aws-cloudformation-stack.html)<pre>Tags:<br /><br />  - Key: createdBy<br /><br />    Value: Cloudformation</pre> | AWS DevOps | 

## Related resources
<a name="successfully-import-an-s3-bucket-as-an-aws-cloudformation-stack-resources"></a>
+ [Bringing existing resources into AWS CloudFormation management ](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/resource-import.html)
+ [AWS re:Invent 2017: Deep dive on AWS CloudFormation](https://www.youtube.com/watch?v=01hy48R9Kr8) (video)

## Attachments
<a name="attachments-aea7f6fe-8e67-46c4-8b90-1ab06b879111"></a>

To access additional content that is associated with this document, unzip the following file: [attachment.zip](samples/p-attach/aea7f6fe-8e67-46c4-8b90-1ab06b879111/attachments/attachment.zip)

# Synchronize data between Amazon EFS file systems in different AWS Regions by using AWS DataSync
<a name="synchronize-data-between-amazon-efs-file-systems-in-different-aws-regions-by-using-aws-datasync"></a>

*Sarat Chandra Pothula and Aditya Ambati, Amazon Web Services*

## Summary
<a name="synchronize-data-between-amazon-efs-file-systems-in-different-aws-regions-by-using-aws-datasync-summary"></a>

This solution provides a robust framework for efficient and secure data synchronization between Amazon Elastic File System (Amazon EFS) instances in different AWS Regions. This approach is scalable and provides controlled, cross-Region data replication. This solution can enhance your disaster recovery and data redundancy strategies.

By using the AWS Cloud Development Kit (AWS CDK), this pattern takes an infrastructure as code (IaC) approach to deploying the solution resources. The AWS CDK application deploys the essential AWS DataSync, Amazon EFS, Amazon Virtual Private Cloud (Amazon VPC), and Amazon Elastic Compute Cloud (Amazon EC2) resources. This IaC approach provides a repeatable, version-controlled deployment process that is aligned with AWS best practices.

## Prerequisites and limitations
<a name="synchronize-data-between-amazon-efs-file-systems-in-different-aws-regions-by-using-aws-datasync-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ AWS Command Line Interface (AWS CLI) version 2.9.11 or later, [installed](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html) and [configured](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html)
+ AWS CDK version 2.114.1 or later, [installed](https://docs.aws.amazon.com/cdk/v2/guide/getting_started.html#getting_started_install) and [bootstrapped](https://docs.aws.amazon.com/cdk/v2/guide/getting_started.html#getting_started_bootstrap)
+ Node.js version 20.8.0 or later, [installed](https://nodejs.org/en/download)

**Limitations**
+ The solution inherits limitations from DataSync and Amazon EFS, such as data transfer rates, size limitations, and regional availability. For more information, see [AWS DataSync quotas](https://docs.aws.amazon.com/datasync/latest/userguide/datasync-limits.html) and [Amazon EFS quotas](https://docs.aws.amazon.com/efs/latest/ug/limits.html).
+ This solution supports Amazon EFS only. DataSync supports [other AWS services](https://docs.aws.amazon.com/datasync/latest/userguide/working-with-locations.html), such as Amazon Simple Storage Service (Amazon S3) and Amazon FSx for Lustre. However, this solution requires modification to synchronize data with these other services.

## Architecture
<a name="synchronize-data-between-amazon-efs-file-systems-in-different-aws-regions-by-using-aws-datasync-architecture"></a>

![\[Architecture diagram for replicating data to an EFS file system in a different Region\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/e28ba6c2-ab8b-4812-932e-f038106d5496/images/18b35ae9-a22e-43e7-b7a3-30e40321c44e.png)


This solution deploys the following AWS CDK stacks:
+ **Amazon VPC stack** – This stack sets up virtual private cloud (VPC) resources, including subnets, an internet gateway, and a NAT gateway in both the primary and secondary AWS Regions.
+ **Amazon EFS stack** – This stack deploys Amazon EFS file systems into the primary and secondary Regions and connects them to their respective VPCs.
+ **Amazon EC2 stack** – This stack launches EC2 instances in the primary and secondary Regions. These instances are configured to mount the Amazon EFS file system, which allows them to access the shared storage.
+ **DataSync location stack** – This stack uses a custom construct called `DataSyncLocationConstruct` to create DataSync location resources in the primary and secondary Regions. These resources define endpoints for data synchronization.
+ **DataSync task stack** – This stack uses a custom construct called `DataSyncTaskConstruct` to create a DataSync task in the primary Region. This task is configured to synchronize data between the primary and secondary Regions by using the DataSync source and destination locations.

## Tools
<a name="synchronize-data-between-amazon-efs-file-systems-in-different-aws-regions-by-using-aws-datasync-tools"></a>

**AWS services**
+ [AWS Cloud Development Kit (AWS CDK)](https://docs.aws.amazon.com/cdk/latest/guide/home.html) is a software development framework that helps you define and provision AWS Cloud infrastructure in code.
+ [AWS DataSync](https://docs.aws.amazon.com/datasync/latest/userguide/what-is-datasync.html) is an online data transfer and discovery service that helps you move files or object data to, from, and between AWS storage services.
+ [Amazon Elastic Compute Cloud (Amazon EC2)](https://docs.aws.amazon.com/ec2/) provides scalable computing capacity in the AWS Cloud. You can launch as many virtual servers as you need and quickly scale them up or down.
+ [Amazon Elastic File System (Amazon EFS)](https://docs.aws.amazon.com/efs/latest/ug/whatisefs.html) helps you create and configure shared file systems in the AWS Cloud.
+ [Amazon Virtual Private Cloud (Amazon VPC)](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html) helps you launch AWS resources into a virtual network that you’ve defined. This virtual network resembles a traditional network that you’d operate in your own data center, with the benefits of using the scalable infrastructure of AWS.

**Code repository**

The code for this pattern is available in the GitHub [Amazon EFS Cross-Region DataSync Project](https://github.com/aws-samples/aws-efs-crossregion-datasync/tree/main) repository.

## Best practices
<a name="synchronize-data-between-amazon-efs-file-systems-in-different-aws-regions-by-using-aws-datasync-best-practices"></a>

Follow the best practices described in [Best practices for using the AWS CDK in TypeScript to create IaC projects](https://docs.aws.amazon.com/prescriptive-guidance/latest/best-practices-cdk-typescript-iac/introduction.html).

## Epics
<a name="synchronize-data-between-amazon-efs-file-systems-in-different-aws-regions-by-using-aws-datasync-epics"></a>

### Deploy the AWS CDK app
<a name="deploy-the-aws-cdk-app"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clone the project repository. | Enter the following command to clone the [Amazon EFS Cross-Region DataSync Project](https://github.com/aws-samples/aws-efs-crossregion-datasync/tree/main) repository.<pre>git clone https://github.com/aws-samples/aws-efs-crossregion-datasync.git</pre> | AWS DevOps | 
| Install the npm dependencies. | Enter the following command.<pre>npm ci</pre> | AWS DevOps | 
| Choose the primary and secondary Regions. | In the cloned repository, navigate to the `src/infa` directory. In the `Launcher.ts` file, update the `PRIMARY_AWS_REGION` and `SECONDARY_AWS_REGION` values. Use the corresponding [Region codes](https://docs.aws.amazon.com/general/latest/gr/datasync.html#datasync-region).<pre>const primaryRegion = { account: account, region: '<PRIMARY_AWS_REGION>' };<br />const secondaryRegion = { account: account, region: '<SECONDARY_AWS_REGION>' };</pre> | AWS DevOps | 
| Bootstrap the environment. | Enter the following command to bootstrap the AWS account and AWS Region that you want to use.<pre>cdk bootstrap <aws_account>/<aws_region></pre>For more information, see [Bootstrapping](https://docs.aws.amazon.com/cdk/v2/guide/bootstrapping.html) in the AWS CDK documentation. | AWS DevOps | 
| List the AWS CDK stacks. | Enter the following command to view a list of the AWS CDK stacks in the app.<pre>cdk ls</pre> | AWS DevOps | 
| Synthesize the AWS CDK stacks. | Enter the following command to produce an AWS CloudFormation template for each stack defined in the AWS CDK app.<pre>cdk synth</pre> | AWS DevOps | 
| Deploy the AWS CDK app. | Enter the following command to deploy all of the stacks to your AWS account, without requiring manual approval for any changes.<pre>cdk deploy --all --require-approval never</pre> | AWS DevOps | 
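Because the stacks span two Regions, the bootstrap step above must be run once per target environment. The following is a minimal sketch of a helper that prints one `cdk bootstrap` command per Region; the account ID and Region codes shown are placeholders, not values from this pattern.

```shell
# Hypothetical helper: print a `cdk bootstrap` command for each target
# Region so that both the primary and secondary environments are prepared.
bootstrap_regions() {
  account=$1
  shift
  for region in "$@"; do
    echo "cdk bootstrap ${account}/${region}"
  done
}

# Dry run with placeholder values; pipe the output to `sh` to execute it.
bootstrap_regions 111122223333 us-east-1 eu-west-1
# → cdk bootstrap 111122223333/us-east-1
# → cdk bootstrap 111122223333/eu-west-1
```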

### Validate the deployment
<a name="validate-the-deployment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Log in to the EC2 instance in the primary Region. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/synchronize-data-between-amazon-efs-file-systems-in-different-aws-regions-by-using-aws-datasync.html) | AWS DevOps | 
| Create a temporary file. | Enter the following command to create a temporary file in the Amazon EFS mount path.<pre>sudo dd if=/dev/zero \<br />of=tmptst.dat \<br />bs=1G \<br />seek=5 \<br />count=0<br /><br />ls -lrt tmptst.dat</pre> | AWS DevOps | 
| Start the DataSync task. | Enter the following command to replicate the temporary file from the primary Region to the secondary Region, where `<ARN-task>` is the Amazon Resource Name (ARN) of your DataSync task.<pre>aws datasync start-task-execution \<br />    --task-arn <ARN-task></pre>The command returns the ARN of the task execution in the following format: `arn:aws:datasync:<region>:<account-ID>:task/task-execution/<exec-ID>`. | AWS DevOps | 
| Check the status of the data transfer. | Enter the following command to describe the DataSync execution task, where `<ARN-task-execution>` is the ARN of the task execution.<pre>aws datasync describe-task-execution \<br />    --task-execution-arn <ARN-task-execution></pre>The DataSync task is complete when `PrepareStatus`, `TransferStatus`, and `VerifyStatus` all have the value `SUCCESS`. | AWS DevOps | 
| Log in to the EC2 instance in the secondary Region. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/synchronize-data-between-amazon-efs-file-systems-in-different-aws-regions-by-using-aws-datasync.html) | AWS DevOps | 
| Validate the replication. | Enter the following command to verify that the temporary file exists in the Amazon EFS file system.<pre>ls -lrt tmptst.dat</pre> | AWS DevOps | 
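Rather than re-running `describe-task-execution` by hand, you can poll until the execution finishes. The following is a minimal sketch, assuming the AWS CLI is configured; the helper checks only the top-level `Status` field of the JSON response, and `EXEC_ARN` is a placeholder for your task execution ARN.

```shell
# Hypothetical helper: return success when a DataSync task execution has
# reached the SUCCESS status, given the JSON output of
# `aws datasync describe-task-execution`.
datasync_succeeded() {
  echo "$1" | grep -q '"Status": "SUCCESS"'
}

# Example polling loop (EXEC_ARN is a placeholder):
# while :; do
#   out=$(aws datasync describe-task-execution --task-execution-arn "$EXEC_ARN")
#   datasync_succeeded "$out" && break
#   sleep 30
# done
```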

## Related resources
<a name="synchronize-data-between-amazon-efs-file-systems-in-different-aws-regions-by-using-aws-datasync-resources"></a>

**AWS documentation**
+ [AWS CDK API Reference](https://docs.aws.amazon.com/cdk/api/v2/python/modules.html)
+ [Configuring AWS DataSync transfers with Amazon EFS](https://docs.aws.amazon.com/datasync/latest/userguide/create-efs-location.html)
+ [Troubleshooting issues with AWS DataSync transfers](https://docs.aws.amazon.com/datasync/latest/userguide/troubleshooting-datasync-locations-tasks.html)

**Other AWS resources**
+ [AWS DataSync FAQs](https://aws.amazon.com/datasync/faqs/)

# Test AWS infrastructure by using LocalStack and Terraform Tests
<a name="test-aws-infra-localstack-terraform"></a>

*Ivan Girardi and Ioannis Kalyvas, Amazon Web Services*

## Summary
<a name="test-aws-infra-localstack-terraform-summary"></a>

This pattern helps you locally test infrastructure as code (IaC) for AWS in Terraform without the need to provision infrastructure in your AWS environment. It integrates the [Terraform Tests framework](https://developer.hashicorp.com/terraform/language/tests) with [LocalStack](https://github.com/localstack/localstack). The LocalStack Docker container provides a local development environment that emulates various AWS services. This helps you test and iterate on infrastructure deployments without incurring costs in the AWS Cloud.

This solution provides the following benefits:
+ **Cost optimization** – Running tests against LocalStack eliminates the need to use AWS services. This prevents you from incurring costs that are associated with creating, operating, and modifying those AWS resources.
+ **Speed and efficiency** – Testing locally is also typically faster than deploying the AWS resources. This rapid feedback loop accelerates development and debugging. Because LocalStack runs locally, you can develop and test your Terraform configuration files without an internet connection. You can debug Terraform configuration files locally and receive immediate feedback, which streamlines the development process.
+ **Consistency and reproducibility** – LocalStack provides a consistent environment for testing. This consistency helps make sure that tests yield the same results, regardless of external AWS changes or network issues.
+ **Isolation** – Testing with LocalStack prevents you from accidentally affecting live AWS resources or production environments. This isolation makes it safe to experiment and test various configurations.
+ **Automation** – Integration with a continuous integration and continuous delivery (CI/CD) pipeline helps you automatically test Terraform [configuration files](https://developer.hashicorp.com/terraform/language/files). The pipeline thoroughly tests the IaC before deployment.
+ **Flexibility** – You can simulate different AWS Regions, AWS accounts, and service configurations to match your production environments more closely.

## Prerequisites and limitations
<a name="test-aws-infra-localstack-terraform-prereqs"></a>

**Prerequisites**
+ [Install](https://docs.docker.com/get-started/get-docker/) Docker
+ [Enable access](https://docs.docker.com/reference/cli/dockerd/#daemon-socket-option) to the default Docker socket (`/var/run/docker.sock`). For more information, see the [LocalStack documentation](https://docs.localstack.cloud/user-guide/aws/lambda/#migrating-to-lambda-v2).
+ [Install](https://docs.docker.com/compose/install/) Docker Compose
+ [Install](https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli) Terraform version 1.6.0 or later
+ [Install](https://developer.hashicorp.com/terraform/cli) Terraform CLI
+ [Configure](https://hashicorp.github.io/terraform-provider-aws/) the Terraform AWS Provider
+ (Optional) [Install](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) and [configure](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html) the AWS Command Line Interface (AWS CLI). For an example of how to use the AWS CLI with LocalStack, see the GitHub [Test AWS infrastructure using LocalStack and Terraform Tests](https://github.com/aws-samples/localstack-terraform-test) repository.

**Limitations**
+ This pattern provides explicit examples for testing Amazon Simple Storage Service (Amazon S3), AWS Lambda, AWS Step Functions, and Amazon DynamoDB resources. However, you can extend this solution to include additional AWS resources.
+ This pattern provides instructions to run Terraform Tests locally, but you can integrate testing into any CI/CD pipeline.
+ This pattern provides instructions for using the LocalStack Community image. If you're using the LocalStack Pro image, see the [LocalStack Pro documentation](https://hub.docker.com/r/localstack/localstack-pro).
+ LocalStack provides emulation services for different AWS APIs. For a complete list, see [AWS Service Feature Coverage](https://docs.localstack.cloud/user-guide/aws/feature-coverage/). Some advanced features might require a subscription for LocalStack Pro.

## Architecture
<a name="test-aws-infra-localstack-terraform-architecture"></a>

The following diagram shows the architecture for this solution. The primary components are a source code repository, a CI/CD pipeline, and a LocalStack Docker container. The LocalStack Docker container hosts the following AWS services locally:
+ An Amazon S3 bucket for storing files
+ Amazon CloudWatch for monitoring and logging
+ An AWS Lambda function for running serverless code
+ An AWS Step Functions state machine for orchestrating multi-step workflows
+ An Amazon DynamoDB table for storing NoSQL data

![\[A CI/CD pipeline builds and tests the LocalStack Docker container and AWS resources.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/34bfbdbf-14e7-42a0-9022-c85a9c30cdcd/images/dc61fac9-b92c-4841-9132-ff8bb865eed9.png)


The diagram shows the following workflow:

1. You add and commit a Terraform configuration file to the source code repository.

1. The CI/CD pipeline detects the changes and initiates a build process for static Terraform code analysis. The pipeline builds and runs the LocalStack Docker container. Then the pipeline starts the test process.

1. The pipeline uploads an object into an Amazon S3 bucket that is hosted in the LocalStack Docker container.

1. Uploading the object invokes an AWS Lambda function.

1. The Lambda function stores the Amazon S3 event notification in a CloudWatch log.

1. The Lambda function starts an AWS Step Functions state machine.

1. The state machine writes the name of the Amazon S3 object into a DynamoDB table.

1. The test process in the CI/CD pipeline verifies that the name of the uploaded object matches the entry in the DynamoDB table. It also verifies that the S3 bucket is deployed with the specified name and that the AWS Lambda function has been successfully deployed.
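To inspect the emulated resources while this workflow runs, you can point the AWS CLI at the LocalStack gateway. This is a sketch, assuming LocalStack is listening on its default port 4566; the bucket name in the example commands is a placeholder.

```shell
# LocalStack accepts any test credentials; it does not validate them.
export AWS_ACCESS_KEY_ID=test
export AWS_SECRET_ACCESS_KEY=test
export AWS_DEFAULT_REGION=us-east-1

# LocalStack's default gateway endpoint.
LOCALSTACK_ENDPOINT=http://localhost:4566

# Example commands (my-test-bucket is a placeholder name):
# aws --endpoint-url "$LOCALSTACK_ENDPOINT" s3 ls
# aws --endpoint-url "$LOCALSTACK_ENDPOINT" s3 cp README.md s3://my-test-bucket/
```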

## Tools
<a name="test-aws-infra-localstack-terraform-tools"></a>

**AWS services**
+ [Amazon CloudWatch](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/WhatIsCloudWatch.html) helps you monitor the metrics of your AWS resources and the applications you run on AWS in real time.
+ [Amazon DynamoDB](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Introduction.html) is a fully managed NoSQL database service that provides fast, predictable, and scalable performance.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a compute service that helps you run code without needing to provision or manage servers. It runs your code only when needed and scales automatically, so you pay only for the compute time that you use.
+ [Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) is a cloud-based object storage service that helps you store, protect, and retrieve any amount of data.
+ [AWS Step Functions](https://docs.aws.amazon.com/step-functions/latest/dg/welcome.html) is a serverless orchestration service that helps you combine AWS Lambda functions and other AWS services to build business-critical applications.

**Other tools**
+ [Docker](https://www.docker.com/) is a set of platform as a service (PaaS) products that use virtualization at the operating-system level to deliver software in containers.
+ [Docker Compose](https://docs.docker.com/compose/) is a tool for defining and running multi-container applications.
+ [LocalStack](https://localstack.cloud) is a cloud service emulator that runs in a single container. By using LocalStack, you can run workloads on your local machine that use AWS services, without connecting to the AWS Cloud.
+ [Terraform](https://www.terraform.io/) is an IaC tool from HashiCorp that helps you create and manage cloud and on-premises resources.
+ [Terraform Tests](https://developer.hashicorp.com/terraform/language/tests) helps you validate Terraform module configuration updates through tests that are analogous to integration or unit testing.

**Code repository**

The code for this pattern is available in the GitHub [Test AWS infrastructure using LocalStack and Terraform Tests](https://github.com/aws-samples/localstack-terraform-test) repository.

## Best practices
<a name="test-aws-infra-localstack-terraform-best-practices"></a>
+ This solution tests AWS infrastructure that is specified in Terraform configuration files, and it does not deploy those resources in the AWS Cloud. If you want to deploy the resources, follow the [principle of least-privilege](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#grant-least-privilege) (IAM documentation) and properly [configure the Terraform backend](https://developer.hashicorp.com/terraform/language/backend) (Terraform documentation).
+ When integrating LocalStack in a CI/CD pipeline, we recommend that you don't run the LocalStack Docker container in privileged mode. For more information, see [Runtime privilege and Linux capabilities](https://docs.docker.com/engine/containers/run/#runtime-privilege-and-linux-capabilities) (Docker documentation) and [Security for self-managed runners](https://docs.gitlab.com/runner/security/) (GitLab documentation).

## Epics
<a name="test-aws-infra-localstack-terraform-epics"></a>

### Deploy the solution
<a name="deploy-the-solution"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clone the repository. | In a bash shell, enter the following command. This clones the [Test AWS infrastructure using LocalStack and Terraform Tests](https://github.com/aws-samples/localstack-terraform-test) repository from GitHub:<pre>git clone https://github.com/aws-samples/localstack-terraform-test.git</pre> | DevOps engineer | 
| Run the LocalStack container. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/test-aws-infra-localstack-terraform.html) | DevOps engineer | 
| Initialize Terraform. | Enter the following command to initialize Terraform:<pre>terraform init</pre> | DevOps engineer | 
| Run Terraform Tests. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/test-aws-infra-localstack-terraform.html) | DevOps engineer | 
| Clean up resources. | Enter the following command to destroy the LocalStack container:<pre>docker-compose down</pre> | DevOps engineer | 
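The tasks above can be wrapped in a single helper. The sketch below assumes the repository's `docker-compose.yml` and Terraform files are in the current directory; passing `echo` as the first argument prints the commands instead of running them.

```shell
# Hypothetical wrapper around the local test loop: start LocalStack, run
# the Terraform Tests, and clean up the container afterward.
run_local_tests() {
  run=${1:-}                     # pass "echo" for a dry run
  $run docker compose up -d
  $run terraform init
  $run terraform test
  $run docker compose down
}

# Dry run: prints the four commands without executing them.
run_local_tests echo
```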

## Troubleshooting
<a name="test-aws-infra-localstack-terraform-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| `Error: reading DynamoDB Table Item (Files\|README.md): empty result` when running the `terraform test` command. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/test-aws-infra-localstack-terraform.html) | 

## Related resources
<a name="test-aws-infra-localstack-terraform-resources"></a>
+ [Getting started with Terraform: Guidance for AWS CDK and AWS CloudFormation experts](https://docs.aws.amazon.com/prescriptive-guidance/latest/getting-started-terraform/introduction.html) (AWS Prescriptive Guidance)
+ [Best practices for using the Terraform AWS Provider](https://docs.aws.amazon.com/prescriptive-guidance/latest/terraform-aws-provider-best-practices/introduction.html) (AWS Prescriptive Guidance)
+ [Terraform CI/CD and testing on AWS with the new Terraform Test Framework](https://aws.amazon.com/blogs/devops/terraform-ci-cd-and-testing-on-aws-with-the-new-terraform-test-framework/) (AWS blog post)
+ [Accelerating software delivery using LocalStack Cloud Emulator from AWS Marketplace](https://aws.amazon.com/blogs/awsmarketplace/accelerating-software-delivery-localstack-cloud-emulator-aws-marketplace/) (AWS blog post)

## Additional information
<a name="test-aws-infra-localstack-terraform-additional"></a>

**Integration with GitHub Actions**

You can integrate LocalStack and Terraform Tests in a CI/CD pipeline by using GitHub Actions. For more information, see the [GitHub Actions documentation](https://docs.github.com/en/actions). The following is a sample GitHub Actions configuration file:

```
name: LocalStack Terraform Test

on:
  push:
    branches:
      - '**'

  workflow_dispatch: {}

jobs:
  localstack-terraform-test:
    runs-on: ubuntu-latest

    steps:
    - uses: actions/checkout@v4

    - name: Build and Start LocalStack Container
      run: |
        docker compose up -d

    - name: Setup Terraform
      uses: hashicorp/setup-terraform@v3
      with:
        terraform_version: latest

    - name: Run Terraform Init and Validation
      run: |
        terraform init
        terraform validate
        terraform fmt --recursive --check
        terraform plan
        terraform show

    - name: Run Terraform Test
      run: |
        terraform test

    - name: Stop and Delete LocalStack Container
      if: always()
      run: docker compose down
```

# Upgrade SAP Pacemaker clusters from ENSA1 to ENSA2
<a name="upgrade-sap-pacemaker-clusters-from-ensa1-to-ensa2"></a>

*Gergely Cserdi and Balazs Sandor Skublics, Amazon Web Services*

## Summary
<a name="upgrade-sap-pacemaker-clusters-from-ensa1-to-ensa2-summary"></a>

This pattern explains the steps and considerations for upgrading an SAP Pacemaker cluster that is based on Standalone Enqueue Server (ENSA1) to ENSA2. The information in this pattern applies to both SUSE Linux Enterprise Server (SLES) and Red Hat Enterprise Linux (RHEL) operating systems.

Pacemaker clusters on SAP NetWeaver 7.52 or S/4HANA 1709 and earlier versions run on an ENSA1 architecture and are configured specifically for ENSA1. If you run your SAP workloads on Amazon Web Services (AWS) and you’re interested in moving to ENSA2, you might find that the SAP, SUSE, and RHEL documentation doesn’t provide comprehensive information. This pattern describes the technical steps required to reconfigure SAP parameters and Pacemaker clusters to upgrade from ENSA1 to ENSA2. It provides examples of SUSE systems, but the concept is the same for RHEL clusters.

**Note**  
ENSA1 and ENSA2 are concepts that pertain to SAP applications only, so the information in this pattern doesn’t apply to SAP HANA or other types of clusters.

**Note**  
Technically, ENSA2 can be used with or without Enqueue Replicator 2. However, high availability (HA) and failover automation (through a cluster solution) require Enqueue Replicator 2. This pattern uses the term *ENSA2 clusters* to refer to clusters with Standalone Enqueue Server 2 and Enqueue Replicator 2.

## Prerequisites and limitations
<a name="upgrade-sap-pacemaker-clusters-from-ensa1-to-ensa2-prereqs"></a>

**Prerequisites**
+ A working ENSA1-based cluster that uses Pacemaker and Corosync on SLES or RHEL.
+ At least two Amazon Elastic Compute Cloud (Amazon EC2) instances where the (ABAP) SAP Central Services (ASCS/SCS) and Enqueue Replication Server (ERS) instances are running.
+ Knowledge of managing SAP applications and clusters.
+ Access to the Linux environment as root user.

**Limitations**
+ ENSA1-based clusters support a two-node architecture only.
+ ENSA2-based clusters cannot be deployed to SAP NetWeaver versions before 7.52.
+ EC2 instances in clusters should be in different AWS Availability Zones.

**Product versions**
+ SAP NetWeaver version 7.52 or later
+ Starting with S/4HANA 2020, only ENSA2 clusters are supported
+ Kernel 7.53 or later, which supports ENSA2 and Enqueue Replicator 2
+ SLES for SAP Applications version 12 or later
+ RHEL for SAP with High Availability (HA) version 7.9 or later

## Architecture
<a name="upgrade-sap-pacemaker-clusters-from-ensa1-to-ensa2-architecture"></a>

**Source technology stack**
+ SAP NetWeaver 7.52 with SAP Kernel 7.53 or later
+ SLES or RHEL operating system

**Target technology stack**
+ SAP NetWeaver 7.52 with SAP Kernel 7.53 or later, including S/4HANA 2020 with ABAP platform
+ SLES or RHEL operating system

**Target architecture**

The following diagram shows an HA configuration of ASCS/SCS and ERS instances based on an ENSA2 cluster.

![\[HA architecture for ASCS/SCS and ERS instances on an ENSA2 cluster\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/c32560de-901f-4796-a6b3-c08c109b22c8/images/19501713-0ddf-4242-9ea3-90478200a19e.png)


**Comparison of ENSA1 and ENSA2 clusters**

SAP introduced ENSA2 as the successor to ENSA1. An ENSA1-based cluster supports a two-node architecture in which the ASCS/SCS instance fails over to the ERS node when an error occurs. This limitation stems from how the ASCS/SCS instance regains the lock table information from the shared memory of the ERS node after failover. ENSA2-based clusters with Enqueue Replicator 2 eliminate this limitation, because the ASCS/SCS instance can collect the lock information from the ERS instance over the network. ENSA2-based clusters can have more than two nodes, because the ASCS/SCS instance is no longer required to fail over to the ERS node. (However, in a two-node ENSA2 cluster, the ASCS/SCS instance still fails over to the ERS node, because there are no other nodes to fail over to.) ENSA2 is supported starting with SAP Kernel 7.50, with some limitations. For an HA setup that supports Enqueue Replicator 2, the minimum requirement is NetWeaver 7.52 (see [SAP OSS Note 2630416](https://launchpad.support.sap.com/#/notes/2630416)). S/4HANA 1809 uses the ENSA2 architecture by default, and starting with S/4HANA 2020, only ENSA2 is supported.

**Automation and scale**

The HA cluster in the target architecture automatically fails the ASCS/SCS instance over to another node.

**Scenarios for moving to ENSA2-based clusters**

There are two main scenarios for upgrading to ENSA2-based clusters: 
+ Scenario 1: You choose to upgrade to ENSA2 without an accompanying SAP upgrade or S/4HANA conversion, assuming that your SAP release and Kernel version support ENSA2.
+ Scenario 2: You move to ENSA2 as part of an upgrade or conversion (for example, to S/4HANA 1809 or later) by using the Software Update Manager (SUM).

The [Epics](#upgrade-sap-pacemaker-clusters-from-ensa1-to-ensa2-epics) section covers the steps for these two scenarios. The first scenario requires you to manually set up SAP-related parameters before you change the cluster configuration for ENSA2. In the second scenario, the binaries and SAP-related parameters are deployed by SUM, and your only remaining task is to update the cluster configuration for HA. We still recommend that you validate SAP parameters after you use SUM. In most cases, S/4HANA conversion is the main reason for a cluster upgrade.

## Tools
<a name="upgrade-sap-pacemaker-clusters-from-ensa1-to-ensa2-tools"></a>
+ For OS package managers, we recommend the Zypper (for SLES) or YUM (for RHEL) tools.
+ For cluster management, we recommend **crm** (for SLES) or **pcs** (for RHEL) shells.
+ SAP instance management tools such as SAPControl.
+ (Optional) SUM tool for S/4HANA conversion upgrade.

## Best practices
<a name="upgrade-sap-pacemaker-clusters-from-ensa1-to-ensa2-best-practices"></a>
+ For best practices for using SAP workloads on AWS, see the [SAP Lens](https://docs.aws.amazon.com/wellarchitected/latest/sap-lens/sap-lens.html) for the AWS Well-Architected Framework.
+ Consider the number of cluster nodes (odd or even) in your ENSA2 multi-node architecture.
+ Set up the ENSA2 cluster for SLES 15 in alignment with the SAP S/4-HA-CLU 1.0 certification standard.
+ Always save or back up your existing cluster and application state before upgrading to ENSA2.

## Epics
<a name="upgrade-sap-pacemaker-clusters-from-ensa1-to-ensa2-epics"></a>

### Configure SAP parameters manually for ENSA2 (scenario 1 only)
<a name="configure-sap-parameters-manually-for-ensa2-scenario-1-only"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Configure the parameters in the default profile. | If you want to upgrade to ENSA2 while staying on the same SAP release or if your target release defaults to ENSA1, set the parameters in the default profile (DEFAULT.PFL file) to the following values.<pre>enq/enable=TRUE<br />enq/serverhost=sapascsvirt<br />enq/serverinst=10        (instance number of ASCS/SCS instance)<br />enque/process_location=REMOTESA<br />enq/replicatorhost=sapersvirt<br />enq/replicatorinst=11    (instance number of ERS instance)<br />  </pre>where `sapascsvirt` is the virtual hostname for the ASCS instance, and `sapersvirt` is the virtual hostname for the ERS instance. You can change these to fit your target environment. To use this upgrade option, your SAP release and Kernel version must support ENSA2 and Enqueue Replicator 2. | SAP | 
| Configure the ASCS/SCS instance profile. | If you want to upgrade to ENSA2 while staying on the same SAP release or if your target release defaults to ENSA1, set the following parameters in the ASCS/SCS instance profile. The section of the profile where ENSA1 is defined looks something like the following.<pre>#--------------------------------------------------------------<br /># Start SAP enqueue server<br />#-------------------------------------------------------------- <br />_EN = en.sap$(SAPSYSTEMNAME)$(INSTANCE_NAME) <br />Execute_04 = local rm -f $(_EN) <br />Execute_05 = local ln -s -f $(DIR_EXECUTABLE)/enserver$(FT_EXE) $(_EN) <br />Start_Program_01 = local $(_EN) pf=$(_PF)<br />  </pre>To reconfigure this section for ENSA2:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/upgrade-sap-pacemaker-clusters-from-ensa1-to-ensa2.html)This profile section would look something like the following after your changes.<pre>#--------------------------------------------------------------<br /># Start SAP enqueue server<br />#-------------------------------------------------------------- <br />_ENQ = enq.sap$(SAPSYSTEMNAME)$(INSTANCE_NAME) <br />Execute_04 = local rm -f $(_ENQ) <br />Execute_05 = local ln -s -f $(DIR_EXECUTABLE)/enq_server$(FT_EXE) $(_ENQ) <br />Start_Program_01 = local $(_ENQ) pf=$(_PF) <br />... <br />enq/server/replication/enable = TRUE <br />Autostart = 0</pre>`_ENQ` must not have the restart option enabled. If `Restart_Program_01` is set for `_ENQ`, change it to `Start_Program_01`. This prevents SAP from restarting the service or interfering with cluster-managed resources. | SAP | 
| Configure the ERS profile. | If you want to upgrade to ENSA2 while staying on the same SAP release or if your target release defaults to ENSA1, set the following parameters in the ERS instance profile. Find the section where the enqueue replicator is defined. It will be similar to the following.<pre>#------------------------------------------------------<br /># Start enqueue replication server<br />#------------------------------------------------------ <br />_ER = er.sap$(SAPSYSTEMNAME)$(INSTANCE_NAME) <br />Execute_03 = local rm -f $(_ER) <br />Execute_04 = local ln -s -f $(DIR_EXECUTABLE)/enrepserver$(FT_EXE) $(_ER) <br />Start_Program_00 = local $(_ER) pf=$(_PF) NR=$(SCSID)<br />  </pre>To reconfigure this section for Enqueue Replicator 2: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/upgrade-sap-pacemaker-clusters-from-ensa1-to-ensa2.html) This profile section should look something like the following after your changes.<pre>#------------------------------------------------------<br /># Start enqueue replication server<br />#------------------------------------------------------ <br />_ENQR = enqr.sap$(SAPSYSTEMNAME)$(INSTANCE_NAME) <br />Execute_01 = local rm -f $(_ENQR) <br />Execute_02 = local ln -s -f $(DIR_EXECUTABLE)/enq_replicator$(FT_EXE) $(_ENQR) <br />Start_Program_00 = local $(_ENQR) pf=$(_PF) NR=$(SCSID) <br />... <br />Autostart = 0</pre>`_ENQR` must not have the restart option enabled. If `Restart_Program_01` is set for `_ENQR`, change it to `Start_Program_01`. This prevents SAP from restarting the service or interfering with cluster-managed services. | SAP | 
| Restart SAP Start Services. | After you change the profiles described previously in this epic, restart SAP Start Services for both ASCS/SCS and ERS: `sapcontrol -nr 10 -function RestartService SCT` and `sapcontrol -nr 11 -function RestartService SCT`, where `SCT` is the SAP system ID, and 10 and 11 are the instance numbers of the ASCS/SCS and ERS instances, respectively. | SAP | 
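Before restarting, you can sanity-check a profile against the target values. The following Python sketch is a hypothetical helper (not part of any SAP tooling): it parses `key = value` profile lines and reports the ENSA2 parameters from the default profile above that are missing or set to unexpected values. The host- and instance-specific parameters are checked only for presence, because their values depend on your environment.

```python
# Sketch: verify that a DEFAULT.PFL-style profile contains the ENSA2
# parameters described in this epic. Hypothetical helper, not SAP tooling.

# Parameters with fixed expected values.
EXPECTED_ENSA2 = {
    "enq/enable": "TRUE",
    "enque/process_location": "REMOTESA",
}
# Environment-specific parameters: check presence only.
REQUIRED_KEYS = (
    "enq/serverhost", "enq/serverinst",
    "enq/replicatorhost", "enq/replicatorinst",
)

def parse_profile(text):
    """Parse 'key = value' lines, ignoring comments and blank lines."""
    params = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        params[key.strip()] = value.strip()
    return params

def missing_ensa2_settings(profile_text):
    """Return the parameters that are absent or have an unexpected value."""
    params = parse_profile(profile_text)
    problems = [k for k, v in EXPECTED_ENSA2.items() if params.get(k) != v]
    problems += [k for k in REQUIRED_KEYS if k not in params]
    return problems
```

An empty result means the profile carries all the ENSA2 settings listed above.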

### Reconfigure the cluster for ENSA2 (required for both scenarios)
<a name="reconfigure-the-cluster-for-ensa2-required-for-both-scenarios"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Verify version numbers in SAP resource agents. | When you use SUM to upgrade SAP to S/4HANA 1809 or later, SUM handles the parameter changes in the SAP profiles. Only the cluster requires manual adjustment. However, we recommend that you verify the parameter settings before you make any changes to the cluster. The examples in this epic assume that you’re using the SUSE operating system. If you’re using RHEL, you will need to use tools such as YUM and the **pcs** shell instead of Zypper and **crm**. Check both nodes in the architecture to confirm that the `resource-agents` package matches the minimum version recommended by SAP. For SLES, check SAP OSS Note 2641019. For RHEL, check SAP OSS Note 2641322. (SAP Notes require an [SAP ONE Support Launchpad user account](https://support.sap.com/en/my-support/knowledge-base.html).)<pre>sapers:sctadm 23> zypper search -s -i resource-agents<br />Loading repository data...<br />Reading installed packages...<br />S | Name | Type | Version | Arch | Repository<br />--+-----------------+---------+------------------------------------+--------+-----------------------------<br />i | resource-agents | package | 4.8.0+git30.d0077df0-150300.8.28.1 | x86_64 | SLE-Product-HA15-SP3-Updates</pre>Update the `resource-agents` version if necessary. | AWS systems administrator | 
| Back up the cluster configuration. | Back up the CRM cluster configuration as follows: `crm configure show > /tmp/cluster_config_backup.txt` | AWS systems administrator | 
| Set maintenance mode. | Set the cluster to maintenance mode: `crm configure property maintenance-mode="true"` | AWS systems administrator | 
| Check cluster configuration. | Check the current cluster configuration by running `crm configure show`. Here is an excerpt from the full output:<pre>node 1: sapascs<br />node 2: sapers<br />...<br />primitive rsc_sap_SCT_ASCS10 SAPInstance \<br />operations $id=rsc_sap_SCT_ASCS10-operations \<br />op monitor interval=120 timeout=60 on-fail=restart \<br />params InstanceName=SCT_ASCS10_sapascsvirt START_PROFILE="/sapmnt/SCT/profile/SCT_ASCS10_sapascsvirt" \ <br />   AUTOMATIC_RECOVER=false \<br />meta resource-stickiness=5000 failure-timeout=60 migration-threshold=1 priority=10<br />primitive rsc_sap_SCT_ERS11 SAPInstance \<br />operations $id=rsc_sap_SCT_ERS11-operations \<br />op monitor interval=120 timeout=60 on-fail=restart \<br />params InstanceName=SCT_ERS11_sapersvirt START_PROFILE="/sapmnt/SCT/profile/SCT_ERS11_sapersvirt" \<br />   AUTOMATIC_RECOVER=false IS_ERS=true \<br />meta priority=1000<br />...<br />colocation col_sap_SCT_no_both -5000: grp_SCT_ERS11 grp_SCT_ASCS10<br />location loc_sap_SCT_failover_to_ers rsc_sap_SCT_ASCS10 \<br />rule 2000: runs_ers_SCT eq 1<br />order ord_sap_SCT_first_start_ascs Optional: rsc_sap_SCT_ASCS10:start rsc_sap_SCT_ERS11:stop symmetrical=false<br />...</pre>where `sapascsvirt` refers to the virtual hostname for the ASCS instances, `sapersvirt` refers to the virtual hostname for the ERS instances, and `SCT` refers to the SAP system ID. | AWS systems administrator | 
| Remove the failover location constraint. | In the previous example, the location constraint `loc_sap_SCT_failover_to_ers` implements the ENSA1 behavior in which the ASCS instance always follows the ERS instance upon failover. With ENSA2, ASCS can fail over freely to any participating node, so you can remove this constraint: `crm configure delete loc_sap_SCT_failover_to_ers` | AWS systems administrator | 
| Adjust primitives. | You will also need to make minor changes to the ASCS and ERS SAPInstance primitives. Here is an example of an ASCS SAPInstance primitive that is configured for ENSA1.<pre>primitive rsc_sap_SCT_ASCS10 SAPInstance \<br />operations $id=rsc_sap_SCT_ASCS10-operations \<br />op monitor interval=120 timeout=60 on-fail=restart \<br />params InstanceName=SCT_ASCS10_sapascsvirt START_PROFILE="/sapmnt/SCT/profile/SCT_ASCS10_sapascsvirt" \<br />   AUTOMATIC_RECOVER=false \<br />meta resource-stickiness=5000 failure-timeout=60 migration-threshold=1 priority=10</pre>To upgrade to ENSA2, change this configuration to the following.<pre>primitive rsc_sap_SCT_ASCS10 SAPInstance \<br />operations $id=rsc_sap_SCT_ASCS10-operations \<br />op monitor interval=120 timeout=60 on-fail=restart \<br />params InstanceName=SCT_ASCS10_sapascsvirt START_PROFILE="/sapmnt/SCT/profile/SCT_ASCS10_sapascsvirt" \<br />   AUTOMATIC_RECOVER=false \<br />meta resource-stickiness=3000</pre>Here is an example of an ERS SAPInstance primitive that is configured for ENSA1.<pre>primitive rsc_sap_SCT_ERS11 SAPInstance \<br />operations $id=rsc_sap_SCT_ERS11-operations \<br />op monitor interval=120 timeout=60 on-fail=restart \<br />params InstanceName=SCT_ERS11_sapersvirt START_PROFILE="/sapmnt/SCT/profile/SCT_ERS11_sapersvirt" \<br />   AUTOMATIC_RECOVER=false IS_ERS=true \<br />meta priority=1000</pre>To upgrade to ENSA2, change this configuration to the following.<pre>primitive rsc_sap_SCT_ERS11 SAPInstance \<br />operations $id=rsc_sap_SCT_ERS11-operations \<br />op monitor interval=120 timeout=60 on-fail=restart \<br />params InstanceName=SCT_ERS11_sapersvirt START_PROFILE="/sapmnt/SCT/profile/SCT_ERS11_sapersvirt" \<br />   AUTOMATIC_RECOVER=false IS_ERS=true</pre>You can change primitives in various ways. For example, you can revise them in an editor such as vi: `crm configure edit rsc_sap_SCT_ERS11` | AWS systems administrator | 
| Disable maintenance mode. | Disable maintenance mode on the cluster: `crm configure property maintenance-mode="false"` When the cluster is out of maintenance mode, it attempts to bring the ASCS and ERS instances online with the new ENSA2 settings. | AWS systems administrator | 
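When you compare the installed `resource-agents` version against the minimum in the SAP Notes, note that the package version string carries git and build suffixes (for example, `4.8.0+git30.d0077df0-150300.8.28.1` in the `zypper` output above). The following Python sketch compares only the leading numeric component, which is usually sufficient; the `4.3.0` minimum used in the example is a placeholder assumption — take the real minimum from SAP Note 2641019 (SLES) or 2641322 (RHEL).

```python
import re

def base_version(package_version):
    """Extract the leading numeric version as a tuple of ints, e.g.
    '4.8.0+git30.d0077df0-150300.8.28.1' -> (4, 8, 0)."""
    match = re.match(r"(\d+(?:\.\d+)*)", package_version)
    if not match:
        raise ValueError(f"unparsable version: {package_version!r}")
    return tuple(int(part) for part in match.group(1).split("."))

def meets_minimum(installed, minimum):
    """True if the installed base version is at least the minimum."""
    return base_version(installed) >= base_version(minimum)

# Placeholder minimum -- check the SAP Notes for the real value.
MINIMUM_RESOURCE_AGENTS = "4.3.0"
```

Tuple comparison handles multi-digit components correctly (`(4, 10, 0) > (4, 9, 0)`), which a plain string comparison would not.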

### (Optional) Add cluster nodes
<a name="optional-add-cluster-nodes"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Review best practices. | Before you add more nodes, make sure to understand best practices such as whether to use an odd or even number of nodes. | AWS systems administrator | 
| Add nodes. | Adding more nodes involves a series of tasks, such as updating the operating system, installing software packages that match the existing nodes, and making mounts available. You can use the **Prepare Additional Host** option in SAP Software Provisioning Manager (SWPM) to create an SAP-specific baseline of the host. For more information, see the SAP guides listed in the next section. | AWS systems administrator | 

## Related resources
<a name="upgrade-sap-pacemaker-clusters-from-ensa1-to-ensa2-resources"></a>

**SAP and SUSE references**

To access SAP Notes, you must have an SAP ONE Support Launchpad user account. For more information, see the [SAP Support website](https://support.sap.com/en/my-support/knowledge-base.html).
+ [SAP Note 2501860 ‒ Documentation for SAP NetWeaver Application Server for ABAP 7.52](https://launchpad.support.sap.com/#/notes/2501860)
+ [SAP Note 2641019 ‒ Installation of ENSA2 and update from ENSA1 to ENSA2 in SUSE HA environment](https://launchpad.support.sap.com/#/notes/2641019)
+ [SAP Note 2641322 ‒ Installation of ENSA2 and update from ENSA1 to ENSA2 when using the Red Hat HA solutions for SAP](https://launchpad.support.sap.com/#/notes/2641322)
+ [SAP Note 2711036 ‒ Usage of the Standalone Enqueue Server 2 in an HA Environment](https://launchpad.support.sap.com/#/notes/2711036)
+ [Standalone Enqueue Server 2](https://help.sap.com/docs/ABAP_PLATFORM/cff8531bc1d9416d91bb6781e628d4e0/902412f09e134f5bb875adb6db585c92.html) (SAP documentation)
+ [SAP S/4 HANA ‒ Enqueue Replication 2 High Availability Cluster - Setup Guide](https://documentation.suse.com/sbp/all/html/SAP_S4HA10_SetupGuide-SLE12/index.html) (SUSE documentation)

**AWS references**
+ [SAP HANA on AWS: High Availability Configuration Guide for SLES and RHEL](https://docs.aws.amazon.com/sap/latest/sap-hana/sap-hana-on-aws-ha-configuration.html)
+ [SAP Lens - AWS Well-Architected Framework](https://docs.aws.amazon.com/wellarchitected/latest/sap-lens/sap-lens.html)

# Use consistent Availability Zones in VPCs across different AWS accounts
<a name="use-consistent-availability-zones-in-vpcs-across-different-aws-accounts"></a>

*Adam Spicer, Amazon Web Services*

## Summary
<a name="use-consistent-availability-zones-in-vpcs-across-different-aws-accounts-summary"></a>

On the Amazon Web Services (AWS) Cloud, an Availability Zone has a name that can vary between your AWS accounts and an [Availability Zone ID (AZ ID)](https://docs.aws.amazon.com/ram/latest/userguide/working-with-az-ids.html) that identifies its physical location. If you use AWS CloudFormation to create virtual private clouds (VPCs), you must specify the Availability Zone's name or ID when creating the subnets. Because Availability Zone names are randomized per account, the same name can refer to different physical locations in different accounts, which means that subnets created with the same Availability Zone name can use different Availability Zones in each account. 

To use the same Availability Zone across your accounts, you must map the Availability Zone name in each account to the same AZ ID. For example, the following diagram shows that the `use1-az6` AZ ID is named `us-east-1a` in AWS account A and `us-east-1c` in AWS account Z.

![\[The use1-az6 AZ ID is named us-east-1a in AWS account A and us-east-1c in AWS account Z.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/9954e7f9-d6ce-44bd-af99-0c6bb7cd3cb0/images/23c8a37b-2408-4534-a1e0-bccfa4d7fbe3.png)


 

This pattern helps ensure zonal consistency by providing a cross-account, scalable solution for using the same Availability Zones in your subnets. Zonal consistency ensures that your cross-account network traffic avoids cross-Availability Zone network paths, which helps reduce data transfer costs and lower network latency between your workloads.

This pattern is an alternative approach to the AWS CloudFormation [AvailabilityZoneId property](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ec2-subnet.html#cfn-ec2-subnet-availabilityzoneid).

## Prerequisites and limitations
<a name="use-consistent-availability-zones-in-vpcs-across-different-aws-accounts-prereqs"></a>

**Prerequisites**
+ At least two active AWS accounts in the same AWS Region.
+ Evaluate how many Availability Zones are needed to support your VPC requirements in the Region.
+ Identify and record the AZ ID for each Availability Zone that you need to support. For more information, see [Availability Zone IDs for your AWS resources](https://docs.aws.amazon.com/ram/latest/userguide/working-with-az-ids.html) in the AWS Resource Access Manager documentation.
+ An ordered, comma-separated list of your AZ IDs. For example, the first Availability Zone on your list is mapped as `az1`, the second Availability Zone is mapped as `az2`, and so on until your comma-separated list is fully mapped. There is no maximum number of AZ IDs that can be mapped. 
+ The `az-mapping.yaml` file from the GitHub [Multi-account Availability Zone mapping](https://github.com/aws-samples/multi-account-az-mapping/) repository, copied to your local machine.
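The ordered mapping described above can be sketched as follows. The `/az-mapping/` parameter-name prefix is taken from the sample template's `{{resolve:ssm:/az-mapping/az1:1}}` reference shown later in this pattern.

```python
def map_az_ids(az_id_csv):
    """Map an ordered, comma-separated list of AZ IDs to az1, az2, ...
    keys, mirroring the Parameter Store names used by this pattern."""
    az_ids = [az.strip() for az in az_id_csv.split(",") if az.strip()]
    return {f"/az-mapping/az{i}": az_id
            for i, az_id in enumerate(az_ids, start=1)}
```

For example, `map_az_ids("use1-az6,use1-az1,use1-az2")` maps `use1-az6` to the `/az-mapping/az1` parameter in every account, regardless of which Availability Zone name that AZ ID carries locally.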

## Architecture
<a name="use-consistent-availability-zones-in-vpcs-across-different-aws-accounts-architecture"></a>

The following diagram shows the architecture that is deployed in an account and that creates AWS Systems Manager Parameter Store values. These Parameter Store values are consumed when you create a VPC in the account.

![\[Workflow to create Systems Manager Parameter Store values for each AZ ID and store AZ name.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/9954e7f9-d6ce-44bd-af99-0c6bb7cd3cb0/images/f1168464-55f8-4efc-9b28-6a0cda668b9e.png)


The diagram shows the following workflow:

1. This pattern’s solution is deployed to all accounts that require zonal consistency for a VPC. 

1. The solution creates Parameter Store values for each AZ ID and stores the new Availability Zone name. 

1. The AWS CloudFormation template uses the Availability Zone name stored in each Parameter Store value, which ensures zonal consistency.

The following diagram shows the workflow for creating a VPC with this pattern's solution.

 

![\[Workflow submits CloudFormation template to create a VPC with correct AZ IDs.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/9954e7f9-d6ce-44bd-af99-0c6bb7cd3cb0/images/cd859430-ac25-479f-b56a-21da24cddf21.png)


 

The diagram shows the following workflow:

1. Submit a template for creating a VPC to AWS CloudFormation.

1. AWS CloudFormation resolves the Parameter Store values for each Availability Zone and returns the Availability Zone name for each AZ ID.

1. A VPC is created with the correct AZ IDs required for zonal consistency.

After you deploy this pattern’s solution, you can create subnets that reference the Parameter Store values. If you use AWS CloudFormation, you can reference the Availability Zone mapping parameter values from the following YAML-formatted sample code:

```
Resources:
    PrivateSubnet1AZ1: 
        Type: AWS::EC2::Subnet 
        Properties: 
            VpcId: !Ref VPC
            CidrBlock: !Ref PrivateSubnetAZ1CIDR
            AvailabilityZone: 
                !Join 
                    - ''
                    - - '{{resolve:ssm:/az-mapping/az1:1}}'
```

This sample code is contained in the `vpc-example.yaml` file from the GitHub [Multi-account Availability Zone mapping](https://github.com/aws-samples/multi-account-az-mapping/) repository. It shows you how to create a VPC and subnets that align to the Parameter Store values for zonal consistency.
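Conceptually, the solution's Lambda function translates each AZ ID into the account-local Availability Zone name before storing it in Parameter Store. The following sketch illustrates that lookup against a canned `DescribeAvailabilityZones`-style response instead of a live API call; the actual implementation lives in the repository's Lambda code and may differ.

```python
def az_names_for_ids(describe_response, ordered_az_ids):
    """Given a DescribeAvailabilityZones-style response, return the
    account-local zone name for each requested AZ ID, in order."""
    by_id = {zone["ZoneId"]: zone["ZoneName"]
             for zone in describe_response["AvailabilityZones"]}
    return [by_id[az_id] for az_id in ordered_az_ids]

# Canned response resembling account Z from the Summary, where the
# use1-az6 AZ ID carries the name us-east-1c.
SAMPLE_RESPONSE = {"AvailabilityZones": [
    {"ZoneId": "use1-az6", "ZoneName": "us-east-1c"},
    {"ZoneId": "use1-az1", "ZoneName": "us-east-1a"},
]}
```

In another account, the same AZ IDs would resolve to different names, which is exactly the indirection the Parameter Store values capture.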

**Technology stack**
+ AWS CloudFormation
+ AWS Lambda
+ AWS Systems Manager Parameter Store

**Automation and scale**

You can deploy this pattern to all your AWS accounts by using AWS CloudFormation StackSets or the Customizations for AWS Control Tower solution. For more information, see [Working with AWS CloudFormation StackSets](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/what-is-cfnstacksets.html) in the AWS CloudFormation documentation and [Customizations for AWS Control Tower](https://aws.amazon.com/solutions/implementations/customizations-for-aws-control-tower/) in the AWS Solutions Library. 

After you deploy the AWS CloudFormation template, you can update it to use the Parameter Store values and deploy your VPCs in pipelines or according to your requirements. 

## Tools
<a name="use-consistent-availability-zones-in-vpcs-across-different-aws-accounts-tools"></a>

**AWS services**
+ [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) helps you model and set up your AWS resources, provision them quickly and consistently, and manage them throughout their lifecycle. You can use a template to describe your resources and their dependencies, and launch and configure them together as a stack, instead of managing resources individually. You can manage and provision stacks across multiple AWS accounts and AWS Regions.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a compute service that supports running code without provisioning or managing servers. Lambda runs your code only when needed and scales automatically, from a few requests per day to thousands per second. You pay only for the compute time that you consume—there is no charge when your code is not running.
+ [AWS Systems Manager Parameter Store](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-parameter-store.html) is a capability of AWS Systems Manager. It provides secure, hierarchical storage for configuration data management and secrets management.

**Code**

The code for this pattern is provided in the GitHub [Multi-account Availability Zone mapping](https://github.com/aws-samples/multi-account-az-mapping/) repository.

## Epics
<a name="use-consistent-availability-zones-in-vpcs-across-different-aws-accounts-epics"></a>

### Deploy the az-mapping.yaml file
<a name="deploy-the-az-mapping-yaml-file"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Determine the required Availability Zones for the Region. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/use-consistent-availability-zones-in-vpcs-across-different-aws-accounts.html) | Cloud architect | 
| Deploy the az-mapping.yaml file. | Use the `az-mapping.yaml` file to create an AWS CloudFormation stack in all required AWS accounts. In the `AZIds` parameter, use the comma-separated list that you created earlier. We recommend that you use [AWS CloudFormation StackSets](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/what-is-cfnstacksets.html) or the [Customizations for AWS Control Tower Solution](https://aws.amazon.com/solutions/implementations/customizations-for-aws-control-tower/). | Cloud architect | 

### Deploy the VPCs in your accounts
<a name="deploy-the-vpcs-in-your-accounts"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Customize the AWS CloudFormation templates. | When you create the subnets using AWS CloudFormation, customize the templates to use the Parameter Store values that you created earlier. For a sample template, see the `vpc-example.yaml` file in the GitHub [Multi-account Availability Zone mapping](https://github.com/aws-samples/multi-account-az-mapping/) repository. | Cloud architect | 
| Deploy the VPCs. | Deploy the customized AWS CloudFormation templates into your accounts. Each VPC in the Region then has zonal consistency in the Availability Zones used for the subnets. | Cloud architect | 

## Related resources
<a name="use-consistent-availability-zones-in-vpcs-across-different-aws-accounts-resources"></a>
+ [Availability Zone IDs for your AWS resources](https://docs.aws.amazon.com/ram/latest/userguide/working-with-az-ids.html) (AWS Resource Access Manager documentation)
+ [AWS::EC2::Subnet](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ec2-subnet.html) (AWS CloudFormation documentation)

# Use user IDs in IAM policies for access control and automation
<a name="use-user-ids-iam-policies-access-control-automation"></a>

*Srinivas Ananda Babu and Ram Kandaswamy, Amazon Web Services*

## Summary
<a name="use-user-ids-iam-policies-access-control-automation-summary"></a>

This pattern explains the potential pitfalls of using username-based policies in AWS Identity and Access Management (IAM), the benefits of using user IDs, and how to integrate this approach with AWS CloudFormation for automation.

In the AWS Cloud, the IAM service helps you manage user identities and access control with precision. However, relying on usernames in IAM policies can lead to unforeseen security risks and access control issues. For example, consider this scenario: A new employee, John Doe, joins your team. You create an IAM user account with the username `j.doe` and grant it permissions through IAM policies that reference the username. When John leaves the company, the account is deleted. The trouble begins when a new employee, Jane Doe, joins your team and the `j.doe` username is recreated. The existing policies now grant Jane Doe the same permissions that John Doe had. This creates a potential security and compliance nightmare.

Manually updating each policy to reflect new user details is a time-consuming, error-prone process, especially as your organization grows. The solution is to use a unique and immutable user ID. When you create an IAM user account, AWS assigns the IAM user a unique user ID (or principal ID). You can use these user IDs in your IAM policies to ensure consistent and reliable access control that isn't affected by username changes or reuse.

For example, an IAM policy that uses a user ID might look like the following. Instead of referencing a username, the policy allows the `s3:ListBucket` action only for the identity whose `aws:userid` condition key matches the unique ID:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::example-bucket",
            "Condition": {
                "StringEquals": { "aws:userid": "AIDACKCEVSQ6C2EXAMPLE" }
            }
        }
    ]
}
```

The benefits of using user IDs in IAM policies include:
+ **Uniqueness.** User IDs are unique across all AWS accounts, so permissions are applied consistently to exactly the identity you intended.
+ **Immutability.** User IDs cannot be changed, so they provide a stable identifier for referencing users in policies.
+ **Auditing and compliance.** AWS services often include user IDs in logs and audit trails, which makes it easy to trace actions back to specific users.
+ **Automation and integration.** Using user IDs in AWS APIs, SDKs, or automation scripts ensures that processes remain unaffected by username changes.
+ **Future-proofing.** Using user IDs in policies from the start can prevent potential access control issues or extensive policy updates.
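The username-reuse hazard described in this pattern can be made concrete with a small simulation. This is illustrative only — real grants live in IAM policies, not Python dictionaries, and the generated user IDs are fake:

```python
import itertools

_id_counter = itertools.count(1)

def create_user(username):
    """Simulate IAM user creation: a username can be reused over time,
    but each created identity gets a fresh, immutable user ID."""
    return {"UserName": username, "UserId": f"AIDAEXAMPLE{next(_id_counter):04d}"}

# John Doe joins; one grant is keyed by username, one by user ID.
john = create_user("j.doe")
grants_by_name = {john["UserName"]: "s3:ListBucket"}
grants_by_id = {john["UserId"]: "s3:ListBucket"}

# John leaves; later, Jane Doe is created with the same username.
jane = create_user("j.doe")

# The username-keyed grant silently applies to Jane...
assert grants_by_name.get(jane["UserName"]) == "s3:ListBucket"
# ...but the user-ID-keyed grant does not.
assert grants_by_id.get(jane["UserId"]) is None
```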

**Automation**

When you use infrastructure as code (IaC) tools such as AWS CloudFormation, the pitfalls of username-based IAM policies can still cause issues. The IAM user resource returns the username when you call the `Ref` intrinsic function. As your organization's infrastructure evolves, the cycle of creating and deleting resources, including IAM user accounts, can lead to unintended access control issues if you reuse usernames.

To address this issue, we recommend that you incorporate user IDs into your CloudFormation templates. However, obtaining user IDs for this purpose can be challenging. This is where custom resources can be helpful. You can use CloudFormation custom resources to extend the service's functionality by integrating with AWS APIs or external services. By creating a custom resource that fetches the user ID for a given IAM user, you can make the user ID available within your CloudFormation templates. This approach streamlines the process of referencing user IDs and ensures that your automation workflows remain robust and future-proof.
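Stripped of the boto3 call and the `cfnresponse` plumbing, the custom resource described here reduces to transforming a `get_user` response into the attributes that `!GetAtt` exposes. The following testable sketch isolates that transformation; the attribute names match the Lambda function shown in the Epics section.

```python
def user_attributes(get_user_response):
    """Extract the attributes that the custom resource returns to
    CloudFormation from an IAM get_user-style response."""
    user = get_user_response["User"]
    return {
        "NewIamUserId": user["UserId"],
        "NewIamUserPath": user["Path"],
        "NewIamUserArn": user["Arn"],
    }
```

In the deployed solution, the response comes from `boto3.client('iam').get_user(...)`, and the returned dictionary is passed to `cfnresponse.send` so that templates can reference, for example, `!GetAtt rCustomGetUniqueUserId.NewIamUserId`.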

## Prerequisites and limitations
<a name="use-user-ids-iam-policies-access-control-automation-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ An IAM role for a cloud administrator to run the CloudFormation template

**Limitations**
+ Some AWS services aren’t available in all AWS Regions. For Region availability, see [AWS services by Region](https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/). For specific endpoints, see the [Service endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/aws-service-information.html) page, and choose the link for the service.

## Architecture
<a name="use-user-ids-iam-policies-access-control-automation-architecture"></a>

**Target architecture**

The following diagram shows how CloudFormation uses a custom resource backed by AWS Lambda to retrieve the IAM user ID.

![\[Getting the IAM user ID by using a CloudFormation custom resource.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/71698647-274e-4911-92f0-549e444b53f6/images/7e507df4-f597-499e-bd5b-6d7a55e64146.png)


**Automation and scale**

You can use the CloudFormation template multiple times for different AWS Regions and accounts. You need to run it only once in each Region or account.

## Tools
<a name="use-user-ids-iam-policies-access-control-automation-tools"></a>

**AWS services**
+ [IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) – AWS Identity and Access Management (IAM) is a web service that helps you securely control access to AWS resources. You use IAM to control who is authenticated (signed in) and authorized (has permissions) to use resources.
+ [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) – AWS CloudFormation helps you model and set up your AWS resources so that you can spend less time managing those resources and more time focusing on your applications that run on AWS. You create a template that describes the AWS resources that you want, and CloudFormation takes care of provisioning and configuring those resources for you.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) – AWS Lambda is a compute service that supports running code without provisioning or managing servers. Lambda runs your code only when needed and scales automatically, from a few requests per day to thousands per second. 

## Best practices
<a name="use-user-ids-iam-policies-access-control-automation-best-practices"></a>

If you're starting from scratch or planning a greenfield deployment, we strongly recommend that you use [AWS IAM Identity Center](https://docs.aws.amazon.com/singlesignon/latest/userguide/what-is.html) for centralized user management. IAM Identity Center integrates with your existing identity providers (such as Active Directory or Okta) to federate user identities on AWS, which eliminates the need to create and manage IAM users directly. This approach not only ensures consistent access control but also simplifies user lifecycle management and helps enhance security and compliance across your AWS environment.

## Epics
<a name="use-user-ids-iam-policies-access-control-automation-epics"></a>

### Validate permissions
<a name="validate-permissions"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Validate your AWS account and IAM role. | Confirm that you have an IAM role with permissions to deploy CloudFormation templates in your AWS account. If you're planning to use the AWS CLI instead of the CloudFormation console to deploy the template in the last step of this procedure, you should also set up temporary credentials to run AWS CLI commands. For instructions, see the [IAM documentation](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_use-resources.html#using-temp-creds-sdk-cli). | Cloud architect | 

### Build a CloudFormation template
<a name="build-a-cfnshort-template"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a CloudFormation template. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/use-user-ids-iam-policies-access-control-automation.html) | AWS DevOps, Cloud architect | 
| Add an input parameter for the username. | Add the following code to the `Parameters` section of the CloudFormation template:<pre>Parameters:<br />  NewIamUserName:<br />    Type: String<br />    Description: Unique username for the new IAM user<br /></pre>This parameter prompts the user for the username. | AWS DevOps, Cloud architect | 
| Add a custom resource to create an IAM user. | Add the following code to the `Resources` section of the CloudFormation template:<pre>Resources:<br />  rNewIamUser:<br />    Type: 'AWS::IAM::User'<br />    Properties:<br />      UserName: !Ref NewIamUserName<br /></pre>This code adds a CloudFormation resource that creates an IAM user with the name provided by the `NewIamUserName` parameter. | AWS DevOps, Cloud architect | 
| Add an execution role for the Lambda function. | In this step, you create an IAM role that grants an AWS Lambda function permission to get the IAM `UserId`. Specify the following minimum required permissions for Lambda to run: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/use-user-ids-iam-policies-access-control-automation.html) For instructions on creating an execution role, see the [Lambda documentation](https://docs.aws.amazon.com/lambda/latest/dg/lambda-intro-execution-role.html). You will reference this role in the next step, when you create the Lambda function. | AWS administrator, Cloud architect | 
| Add a Lambda function to get the unique IAM `UserId`. | In this step, you define a Lambda function with a Python runtime to get the unique IAM `UserId`. To do this, add the following code to the `Resources` section of the CloudFormation template. Replace `<<ROLENAME>>` with the ARN of the execution role that you created in the last step.<pre>  GetUserLambdaFunction:<br />    Type: 'AWS::Lambda::Function'<br />    Properties:<br />      Handler: index.handler<br />      Role: <<ROLENAME>><br />      Timeout: 30<br />      Runtime: python3.11<br />      Code:<br />        ZipFile: |<br />          import cfnresponse, boto3<br />          def handler(event, context):<br />            try:<br />              print(event)<br />              user = boto3.client('iam').get_user(UserName=event['ResourceProperties']['NewIamUserName'])['User']<br />              cfnresponse.send(event, context, cfnresponse.SUCCESS, {'NewIamUserId': user['UserId'], 'NewIamUserPath': user['Path'], 'NewIamUserArn': user['Arn']})<br />            except Exception as e:<br />              cfnresponse.send(event, context, cfnresponse.FAILED, {'NewIamUser': str(e)})<br /></pre> | AWS DevOps, Cloud architect | 
| Add a custom resource. | Add the following code to the `Resources` section of the CloudFormation template:<pre>  rCustomGetUniqueUserId:<br />    Type: 'Custom::rCustomGetUniqueUserIdWithLambda'<br />    Properties:<br />      ServiceToken: !GetAtt GetUserLambdaFunction.Arn<br />      NewIamUserName: !Ref NewIamUserName<br /></pre>This custom resource calls the Lambda function to get the IAM `UserId`. | AWS DevOps, Cloud architect | 
| Define CloudFormation outputs. | Add the following code to the `Outputs` section of the CloudFormation template:<pre>Outputs:<br />  NewIamUserId:<br />    Value: !GetAtt rCustomGetUniqueUserId.NewIamUserId<br /></pre>This displays the IAM `UserId` for the new IAM user. | AWS DevOps, Cloud architect | 
| Save the template. | Save your changes to the CloudFormation template. | AWS DevOps, Cloud architect | 
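The unique ID that this template surfaces has a fixed four-character prefix that identifies the principal type, as described on the **Unique identifiers** page in the IAM documentation. The following sketch illustrates that convention; the `id_type` helper and the prefix table layout are ours, not part of the pattern:

```python
# Hypothetical helper: classify an IAM unique identifier by its prefix.
# Prefix meanings come from the IAM "Unique identifiers" documentation;
# the value returned by the custom resource for an IAM user starts with "AIDA".
PREFIXES = {
    "AIDA": "IAM user",
    "AROA": "IAM role",
    "AKIA": "long-term access key",
    "ASIA": "temporary (STS) access key",
}

def id_type(unique_id: str) -> str:
    """Return the principal type for a given IAM unique identifier."""
    return PREFIXES.get(unique_id[:4], "unknown")

print(id_type("AIDACKCEVSQ6C2EXAMPLE"))  # IAM user
```

The `AIDA...` prefix is what makes the `UserId` useful in `aws:userid` policy conditions: unlike a user name, the unique ID is never reused, even if a user is deleted and re-created with the same name.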

### Deploy the CloudFormation template
<a name="deploy-the-cfnshort-template"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Deploy the CloudFormation template. | To deploy the `get_unique_user_id.yaml` template by using the CloudFormation console, follow the instructions in the [CloudFormation documentation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-console-create-stack.html). Alternatively, you can run the following AWS CLI command to deploy the template:<pre>aws cloudformation create-stack \<br />--stack-name DemoNewUser \<br />--template-body file://get_unique_user_id.yaml \<br />--parameters ParameterKey=NewIamUserName,ParameterValue=demouser \<br />--capabilities CAPABILITY_NAMED_IAM</pre> | AWS DevOps, Cloud architect | 
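After the stack reaches `CREATE_COMPLETE`, you can read the `NewIamUserId` output back programmatically. The sketch below shows the lookup against a `DescribeStacks`-shaped response; in practice you would obtain `response` with `boto3.client("cloudformation").describe_stacks(StackName="DemoNewUser")`. The `stack_output` helper and sample values are illustrative, not part of the pattern:

```python
# Hypothetical helper: extract one output value from a DescribeStacks response.
def stack_output(response: dict, key: str) -> str:
    """Return the value of the stack output whose OutputKey matches `key`."""
    outputs = response["Stacks"][0]["Outputs"]
    return next(o["OutputValue"] for o in outputs if o["OutputKey"] == key)

# Sample response shaped like the CloudFormation DescribeStacks API result.
sample = {"Stacks": [{"Outputs": [{"OutputKey": "NewIamUserId",
                                   "OutputValue": "AIDACKCEVSQ6C2EXAMPLE"}]}]}
print(stack_output(sample, "NewIamUserId"))  # AIDACKCEVSQ6C2EXAMPLE
```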

## Related resources
<a name="use-user-ids-iam-policies-access-control-automation-resources"></a>
+ [Create a stack from the CloudFormation console](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-console-create-stack.html) (CloudFormation documentation)
+ [Lambda-backed custom resources](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-custom-resources-lambda.html) (CloudFormation documentation)
+ [Unique identifiers](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_identifiers.html#identifiers-unique-ids) (IAM documentation)
+ [Use temporary credentials with AWS resources](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_use-resources.html) (IAM documentation)

# Validate Account Factory for Terraform (AFT) code locally
<a name="validate-account-factory-for-terraform-aft-code-locally"></a>

*Alexandru Pop and Michal Gorniak, Amazon Web Services*

## Summary
<a name="validate-account-factory-for-terraform-aft-code-locally-summary"></a>

This pattern shows how to locally test HashiCorp Terraform code that’s managed by AWS Control Tower Account Factory for Terraform (AFT). Terraform is an infrastructure as code (IaC) tool that helps you use code to provision and manage cloud infrastructure and resources. AFT sets up a Terraform pipeline that helps you provision and customize multiple AWS accounts in AWS Control Tower.

During code development, it can be helpful to test your Terraform IaC locally, outside of the AFT pipeline. This pattern shows how to do the following:
+ Retrieve a local copy of the Terraform code that’s stored in the AWS CodeCommit repositories in your AFT management account.
+ Simulate the AFT pipeline locally by using the retrieved code.

This procedure can also be used to run Terraform commands that aren’t part of the normal AFT pipeline. For example, you can use this method to run commands such as `terraform validate`, `terraform plan`, `terraform destroy`, and `terraform import`.

## Prerequisites and limitations
<a name="validate-account-factory-for-terraform-aft-code-locally-prereqs"></a>

**Prerequisites**
+ An active AWS multi-account environment that uses [AWS Control Tower](https://aws.amazon.com/controltower)
+ A fully deployed [AFT environment](https://docs.aws.amazon.com/controltower/latest/userguide/taf-account-provisioning.html)
+ AWS Command Line Interface (AWS CLI), [installed](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) and [configured](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html)
+ [AWS CLI credential helper for AWS CodeCommit](https://docs.aws.amazon.com/codecommit/latest/userguide/setting-up-https-unixes.html), installed and configured
+ Python 3.x
+ [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git), installed and configured on your local machine
+ `git-remote-codecommit` utility, [installed and configured](https://docs.aws.amazon.com/codecommit/latest/userguide/setting-up-git-remote-codecommit.html#setting-up-git-remote-codecommit-install)
+ [Terraform](https://learn.hashicorp.com/collections/terraform/aws-get-started?utm_source=WEBSITE&utm_medium=WEB_IO&utm_offer=ARTICLE_PAGE&utm_content=DOCS), installed and configured (the local Terraform package version must match the version that’s used in the AFT deployment)

**Limitations**
+ This pattern doesn’t cover the deployment steps required for AWS Control Tower, AFT, or any specific Terraform modules.
+ The output that’s generated locally during this procedure isn’t saved in the AFT pipeline runtime logs.

## Architecture
<a name="validate-account-factory-for-terraform-aft-code-locally-architecture"></a>

**Target technology stack**
+ AFT infrastructure deployed within an AWS Control Tower deployment
+ Terraform
+ Git
+ AWS CLI version 2

**Automation and scale**

This pattern shows how to locally invoke Terraform code for AFT global account customizations in a single AFT-managed AWS account. After your Terraform code is validated, you can apply it to the remaining accounts in your multi-account environment. For more information, see [Re-invoke customizations](https://docs.aws.amazon.com/controltower/latest/userguide/aft-account-customization-options.html#aft-re-invoke-customizations) in the AWS Control Tower documentation.

You can also use a similar process to run AFT account customizations in a local terminal. To locally invoke Terraform code from AFT account customizations, clone the **aft-account-customizations** repository instead of the **aft-global-customizations** repository from CodeCommit in your AFT management account.

## Tools
<a name="validate-account-factory-for-terraform-aft-code-locally-tools"></a>

**AWS services**
+ [AWS Control Tower](https://docs.aws.amazon.com/controltower/latest/userguide/what-is-control-tower.html) helps you set up and govern an AWS multi-account environment, following prescriptive best practices.
+ [AWS Command Line Interface (AWS CLI)](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) is an open-source tool that helps you interact with AWS services through commands in your command-line shell.

**Other services**
+ [HashiCorp Terraform](https://www.terraform.io/docs) is an infrastructure as code (IaC) tool that helps you use code to provision and manage cloud infrastructure and resources.
+ [Git](https://git-scm.com/docs) is an open-source, distributed version control system.

**Code**

The following is an example bash script that can be used to locally run Terraform code that’s managed by AFT. To use the script, follow the instructions in the [Epics](#validate-account-factory-for-terraform-aft-code-locally-epics) section of this pattern.

```
#! /bin/bash
# Version: 1.1 2022-06-24 Unsetting AWS_PROFILE since, when set, it interferes with script operation
#          1.0 2022-02-02 Initial Version
#
# Purpose: For use with AFT: This script runs the local copy of TF code as if it were running within the AFT pipeline.
#          * Facilitates testing of what the AFT pipeline will do
#          * Provides the ability to run terraform with custom arguments (like 'plan' or 'move') which are currently not supported within the pipeline.
#
# © 2021 Amazon Web Services, Inc. or its affiliates. All Rights Reserved.
# This AWS Content is provided subject to the terms of the AWS Customer Agreement
# available at http://aws.amazon.com/agreement or other written agreement between
# Customer and either Amazon Web Services, Inc. or Amazon Web Services EMEA SARL or both.
#
# Note: Arguments to this script are passed directly to 'terraform' without parsing or validation by this script.
#
# Prerequisites:
#    1. Local copy of the AFT Git repositories
#    2. Local backend.tf and aft-providers.tf filled with data for the target account on which terraform is to be run
#       Hint: The contents of the above files can be obtained from the logs of a previous execution of the AFT pipeline for the target account.
#    3. 'terraform' binary is available in the local PATH
#    4. Recommended: .gitignore file containing 'backend.tf' and 'aft-providers.tf' so the local copies of these files are not pushed back to Git

readonly credentials=$(aws sts assume-role \
    --role-arn arn:aws:iam::$(aws sts get-caller-identity --query "Account" --output text):role/AWSAFTAdmin \
    --role-session-name AWSAFT-Session \
    --query Credentials )

unset AWS_PROFILE
export AWS_ACCESS_KEY_ID=$(echo $credentials | jq -r '.AccessKeyId')
export AWS_SECRET_ACCESS_KEY=$(echo $credentials | jq -r '.SecretAccessKey')
export AWS_SESSION_TOKEN=$(echo $credentials | jq -r '.SessionToken')
terraform "$@"
```
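For readers who prefer Python, the credential handoff that the bash script performs can be sketched as follows. The JSON here is a placeholder standing in for the `Credentials` object that `aws sts assume-role ... --query Credentials` returns; a real run would fetch it from STS and then launch `terraform` as a child process:

```python
import json
import os

# Placeholder for the Credentials JSON returned by `aws sts assume-role`;
# these are not real credentials.
sample = json.dumps({
    "AccessKeyId": "ASIAEXAMPLEKEYID",
    "SecretAccessKey": "examplesecret",
    "SessionToken": "exampletoken",
})

creds = json.loads(sample)
# Build the child-process environment, mirroring the `export` lines above.
env = dict(os.environ,
           AWS_ACCESS_KEY_ID=creds["AccessKeyId"],
           AWS_SECRET_ACCESS_KEY=creds["SecretAccessKey"],
           AWS_SESSION_TOKEN=creds["SessionToken"])
env.pop("AWS_PROFILE", None)  # mirrors `unset AWS_PROFILE` in the script
print(env["AWS_ACCESS_KEY_ID"])
# subprocess.run(["terraform", *sys.argv[1:]], env=env) would then run
# Terraform with the assumed-role credentials, as the bash script does.
```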

## Epics
<a name="validate-account-factory-for-terraform-aft-code-locally-epics"></a>

### Save the example code as a local file
<a name="save-the-example-code-as-a-local-file"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Save the example code as a local file. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/validate-account-factory-for-terraform-aft-code-locally.html) | AWS administrator | 
| Make the example code runnable. | Open a terminal window and authenticate to your AFT management account by doing one of the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/validate-account-factory-for-terraform-aft-code-locally.html) Your organization might also have a custom tool to provide authentication credentials to your AWS environment. | AWS administrator | 
| Verify access to the AFT management account in the correct AWS Region. | Make sure that you use the same terminal session with which you authenticated into your AFT management account.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/validate-account-factory-for-terraform-aft-code-locally.html) | AWS administrator | 
| Create a new, local directory to store the AFT repository code. | In the same terminal session, run the following commands:<pre>mkdir my_aft <br />cd my_aft</pre> | AWS administrator | 
| Clone the remote AFT repository code. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/validate-account-factory-for-terraform-aft-code-locally.html) | AWS administrator | 

### Create the Terraform configuration files required for the AFT pipeline to run locally
<a name="create-the-terraform-configuration-files-required-for-the-aft-pipeline-to-run-locally"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Open a previously run AFT pipeline and copy the Terraform configuration files to a local folder. | The `backend.tf` and `aft-providers.tf` configuration files that are created in this epic are needed for the AFT pipeline to run locally. These files are created automatically within the cloud-based AFT pipeline, but must be created manually for the pipeline to run locally. Running the AFT pipeline locally requires one set of files that represent running the pipeline within a single AWS account.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/validate-account-factory-for-terraform-aft-code-locally.html)**Example auto-generated backend.tf statement**<pre>## Autogenerated backend.tf ##<br />## Updated on: 2022-05-31 16:27:45 ##<br />terraform {<br />  required_version = ">= 0.15.0"<br />  backend "s3" {<br />    region         = "us-east-2"<br />    bucket         = "aft-backend-############-primary-region"<br />    key            = "############-aft-global-customizations/terraform.tfstate"<br />    dynamodb_table = "aft-backend-############"<br />    encrypt        = "true"<br />    kms_key_id     = "########-####-####-####-############"<br />    role_arn       = "arn:aws:iam::#############:role/AWSAFTExecution"<br />  }<br />}</pre>The `backend.tf` and `aft-providers.tf` files are tied to a specific AWS account, AFT deployment, and folder. These files also differ depending on whether they’re in the **aft-global-customizations** or the **aft-account-customizations** repository within the same AFT deployment. Make sure that you generate both files from the logs of the same pipeline run. | AWS administrator | 
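Because the auto-generated example masks account-specific values with runs of `#` characters, a quick local sanity check can confirm that your hand-created `backend.tf` contains real values rather than leftover placeholders. The `has_masked_values` helper below is hypothetical, not part of the pattern:

```python
import re

def has_masked_values(backend_tf: str) -> bool:
    """True if any run of 4+ '#' characters (a masked value) remains."""
    return bool(re.search(r"#{4,}", backend_tf))

# A still-masked line, as copied from documentation examples:
print(has_masked_values('bucket = "aft-backend-############-primary-region"'))  # True
# A line filled in with a real (sample) account ID:
print(has_masked_values('bucket = "aft-backend-123456789012-primary-region"'))  # False
```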

### Run the AFT pipeline locally by using the example bash script
<a name="run-the-aft-pipeline-locally-by-using-the-example-bash-script"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Implement the Terraform configuration changes that you want to validate. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/validate-account-factory-for-terraform-aft-code-locally.html) | AWS administrator | 
| Run the `ct_terraform.sh` script and review the output. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/validate-account-factory-for-terraform-aft-code-locally.html) [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/validate-account-factory-for-terraform-aft-code-locally.html) | AWS administrator | 

### Push your local code changes back to the AFT repository
<a name="push-your-local-code-changes-back-to-the-aft-repository"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Add references to the `backend.tf` and `aft-providers.tf` files to a `.gitignore` file. | Add the `backend.tf` and `aft-providers.tf` files that you created to a `.gitignore` file by running the following commands:<pre>echo backend.tf >> .gitignore<br />echo aft-providers.tf >> .gitignore</pre>Adding the file names to `.gitignore` ensures that the files don’t get committed and pushed back to the remote AFT repository. | AWS administrator | 
| Commit and push your code changes to the remote AFT repository. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/validate-account-factory-for-terraform-aft-code-locally.html) Up to this point, the code changes that you’ve introduced apply to one AWS account only. | AWS administrator | 
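Before committing, it can be worth verifying that both generated files are actually listed in `.gitignore`. The sketch below does that with a deliberately naive helper (`ignored` is hypothetical and handles only exact-name pattern lines, not full gitignore syntax; `git check-ignore` is the authoritative tool):

```python
def ignored(gitignore_text: str, name: str) -> bool:
    """Naive check: the exact file name appears as its own pattern line."""
    return name in (line.strip() for line in gitignore_text.splitlines())

# What the two `echo` commands in the previous task append to .gitignore:
gitignore = "backend.tf\naft-providers.tf\n"
print(all(ignored(gitignore, f) for f in ("backend.tf", "aft-providers.tf")))  # True
```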

### Roll out the changes to multiple accounts
<a name="roll-out-the-changes-to-multiple-accounts"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Roll out the changes to all of your accounts that are managed by AFT. | To roll out the changes to multiple AWS accounts that are managed by AFT, follow the instructions in [Re-invoke customizations](https://docs.aws.amazon.com/controltower/latest/userguide/aft-account-customization-options.html#aft-re-invoke-customizations) in the AWS Control Tower documentation. | AWS administrator | 

# More patterns
<a name="infrastructure-more-patterns-pattern-list"></a>

**Topics**
+ [Add HA to Oracle PeopleSoft on Amazon RDS Custom by using a read replica](add-ha-to-oracle-peoplesoft-on-amazon-rds-custom-by-using-a-read-replica.md)
+ [Automatically audit AWS security groups that allow access from public IP addresses](audit-security-groups-access-public-ip.md)
+ [Automate account creation by using the Landing Zone Accelerator on AWS](automate-account-creation-lza.md)
+ [Automate adding or updating Windows registry entries using AWS Systems Manager](automate-adding-or-updating-windows-registry-entries-using-aws-systems-manager.md)
+ [Automate AWS resource assessment](automate-aws-resource-assessment.md)
+ [Automate AWS Service Catalog portfolio and product deployment by using AWS CDK](automate-aws-service-catalog-portfolio-and-product-deployment-by-using-aws-cdk.md)
+ [Automate cross-Region failover and failback by using DR Orchestrator Framework](automate-cross-region-failover-and-failback-by-using-dr-orchestrator-framework.md)
+ [Automate deletion of AWS CloudFormation stacks and associated resources](automate-deletion-cloudformation-stacks-associated-resources.md)
+ [Automate ingestion and visualization of Amazon MWAA custom metrics on Amazon Managed Grafana by using Terraform](automate-ingestion-and-visualization-of-amazon-mwaa-custom-metrics.md)
+ [Automate RabbitMQ configuration in Amazon MQ](automate-rabbitmq-configuration-in-amazon-mq.md)
+ [Automate AWS Supply Chain data lakes deployment in a multi-repository setup](automate-the-deployment-of-aws-supply-chain-data-lakes.md)
+ [Automate the replication of Amazon RDS instances across AWS accounts](automate-the-replication-of-amazon-rds-instances-across-aws-accounts.md)
+ [Automatically attach an AWS managed policy for Systems Manager to EC2 instance profiles using Cloud Custodian and AWS CDK](automatically-attach-an-aws-managed-policy-for-systems-manager-to-ec2-instance-profiles-using-cloud-custodian-and-aws-cdk.md)
+ [Automatically build CI/CD pipelines and Amazon ECS clusters for microservices using AWS CDK](automatically-build-ci-cd-pipelines-and-amazon-ecs-clusters-for-microservices-using-aws-cdk.md)
+ [Automatically detect changes and initiate different CodePipeline pipelines for a monorepo in CodeCommit](automatically-detect-changes-and-initiate-different-codepipeline-pipelines-for-a-monorepo-in-codecommit.md)
+ [Build a data pipeline to ingest, transform, and analyze Google Analytics data using the AWS DataOps Development Kit](build-a-data-pipeline-to-ingest-transform-and-analyze-google-analytics-data-using-the-aws-dataops-development-kit.md)
+ [Build a Micro Focus Enterprise Server PAC with Amazon EC2 Auto Scaling and Systems Manager](build-a-micro-focus-enterprise-server-pac-with-amazon-ec2-auto-scaling-and-systems-manager.md)
+ [Build and push Docker images to Amazon ECR using GitHub Actions and Terraform](build-and-push-docker-images-to-amazon-ecr-using-github-actions-and-terraform.md)
+ [Build an AWS landing zone that includes MongoDB Atlas](build-aws-landing-zone-that-includes-mongodb-atlas.md)
+ [Centralize IAM access key management in AWS Organizations by using Terraform](centralize-iam-access-key-management-in-aws-organizations-by-using-terraform.md)
+ [Centralize software package distribution in AWS Organizations by using Terraform](centralize-software-package-distribution-in-aws-organizations-by-using-terraform.md)
+ [Configure model invocation logging in Amazon Bedrock by using AWS CloudFormation](configure-bedrock-invocation-logging-cloudformation.md)
+ [Configure read-only routing in Always On availability groups in SQL Server on AWS](configure-read-only-routing-in-an-always-on-availability-group-in-sql-server-on-aws.md)
+ [Create a portal for micro-frontends by using AWS Amplify, Angular, and Module Federation](create-amplify-micro-frontend-portal.md)
+ [Create an API-driven resource orchestration framework using GitHub Actions and Terragrunt](create-an-api-driven-resource-orchestration-framework-using-github-actions-and-terragrunt.md)
+ [Create a cross-account Amazon EventBridge connection in an organization](create-cross-account-amazon-eventbridge-connection-organization.md)
+ [Create dynamic CI pipelines for Java and Python projects automatically](create-dynamic-ci-pipelines-for-java-and-python-projects-automatically.md)
+ [Deploy an Amazon API Gateway API on an internal website using private endpoints and an Application Load Balancer](deploy-an-amazon-api-gateway-api-on-an-internal-website-using-private-endpoints-and-an-application-load-balancer.md)
+ [Deploy and manage AWS Control Tower controls by using AWS CDK and CloudFormation](deploy-and-manage-aws-control-tower-controls-by-using-aws-cdk-and-aws-cloudformation.md)
+ [Deploy and manage AWS Control Tower controls by using Terraform](deploy-and-manage-aws-control-tower-controls-by-using-terraform.md)
+ [Deploy CloudWatch Synthetics canaries by using Terraform](deploy-cloudwatch-synthetics-canaries-by-using-terraform.md)
+ [Deploy a CockroachDB cluster in Amazon EKS by using Terraform](deploy-cockroachdb-on-eks-using-terraform.md)
+ [Deploy a Lustre file system for high-performance data processing by using Terraform and DRA](deploy-lustre-file-system-for-high-performance-data-processing-with-terraform-dra.md)
+ [Deploy a RAG use case on AWS by using Terraform and Amazon Bedrock](deploy-rag-use-case-on-aws.md)
+ [Deploy resources in an AWS Wavelength Zone by using Terraform](deploy-resources-wavelength-zone-using-terraform.md)
+ [Deploy SQL Server failover cluster instances on Amazon EC2 and Amazon FSx by using Terraform](deploy-sql-server-failover-cluster-instances-on-amazon-ec2-and-amazon-fsx.md)
+ [Deploy the Security Automations for AWS WAF solution by using Terraform](deploy-the-security-automations-for-aws-waf-solution-by-using-terraform.md)
+ [Detect Amazon RDS and Aurora database instances that have expiring CA certificates](detect-rds-instances-expiring-certificates.md)
+ [Document your AWS landing zone design](document-your-aws-landing-zone-design.md)
+ [Export AWS Backup reports from across an organization in AWS Organizations as a CSV file](export-aws-backup-reports-from-across-an-organization-in-aws-organizations-as-a-csv-file.md)
+ [Generate personalized and re-ranked recommendations using Amazon Personalize](generate-personalized-and-re-ranked-recommendations-using-amazon-personalize.md)
+ [Govern permission sets for multiple accounts by using Account Factory for Terraform](govern-permission-sets-aft.md)
+ [Identify and alert when Amazon Data Firehose resources are not encrypted with an AWS KMS key](identify-and-alert-when-amazon-data-firehose-resources-are-not-encrypted-with-an-aws-kms-key.md)
+ [Implement Account Factory for Terraform (AFT) by using a bootstrap pipeline](implement-account-factory-for-terraform-aft-by-using-a-bootstrap-pipeline.md)
+ [Implement path-based API versioning by using custom domains in Amazon API Gateway](implement-path-based-api-versioning-by-using-custom-domains.md)
+ [Install SSM Agent on Amazon EKS worker nodes by using Kubernetes DaemonSet](install-ssm-agent-on-amazon-eks-worker-nodes-by-using-kubernetes-daemonset.md)
+ [Install the SSM Agent and CloudWatch agent on Amazon EKS worker nodes using preBootstrapCommands](install-the-ssm-agent-and-cloudwatch-agent-on-amazon-eks-worker-nodes-using-prebootstrapcommands.md)
+ [Manage AWS IAM Identity Center permission sets as code by using AWS CodePipeline](manage-aws-iam-identity-center-permission-sets-as-code-by-using-aws-codepipeline.md)
+ [Manage AWS permission sets dynamically by using Terraform](manage-aws-permission-sets-dynamically-by-using-terraform.md)
+ [Manage AWS Service Catalog products in multiple AWS accounts and AWS Regions](manage-aws-service-catalog-products-in-multiple-aws-accounts-and-aws-regions.md)
+ [Manage on-premises container applications by setting up Amazon ECS Anywhere with the AWS CDK](manage-on-premises-container-applications-by-setting-up-amazon-ecs-anywhere-with-the-aws-cdk.md)
+ [Manage AWS Organizations policies as code by using AWS CodePipeline and Amazon Bedrock](manage-organizations-policies-as-code.md)
+ [Migrate DNS records in bulk to an Amazon Route 53 private hosted zone](migrate-dns-records-in-bulk-to-an-amazon-route-53-private-hosted-zone.md)
+ [Migrate Oracle PeopleSoft to Amazon RDS Custom](migrate-oracle-peoplesoft-to-amazon-rds-custom.md)
+ [Migrate RHEL BYOL systems to AWS License-Included instances by using AWS MGN](migrate-rhel-byol-systems-to-aws-license-included-instances-by-using-aws-mgn.md)
+ [Monitor Amazon ElastiCache clusters for at-rest encryption](monitor-amazon-elasticache-clusters-for-at-rest-encryption.md)
+ [Monitor application activity by using CloudWatch Logs Insights](monitor-application-activity-by-using-cloudwatch-logs-insights.md)
+ [Monitor SAP RHEL Pacemaker clusters by using AWS services](monitor-sap-rhel-pacemaker-clusters-by-using-aws-services.md)
+ [Create a hierarchical, multi-Region IPAM architecture on AWS by using Terraform](multi-region-ipam-architecture.md)
+ [Optimize multi-account serverless deployments by using the AWS CDK and GitHub Actions workflows](optimize-multi-account-serverless-deployments.md)
+ [Provision AWS Service Catalog products based on AWS CloudFormation templates by using GitHub Actions](provision-aws-service-catalog-products-using-github-actions.md)
+ [Provision least-privilege IAM roles by deploying a role vending machine solution](provision-least-privilege-iam-roles-by-deploying-a-role-vending-machine-solution.md)
+ [Remove Amazon EC2 entries across AWS accounts from AWS Managed Microsoft AD by using AWS Lambda automation](remove-amazon-ec2-entries-across-aws-accounts-from-aws-managed-microsoft-ad.md)
+ [Remove Amazon EC2 entries in the same AWS account from AWS Managed Microsoft AD by using AWS Lambda automation](remove-amazon-ec2-entries-in-the-same-aws-account-from-aws-managed-microsoft-ad.md)
+ [Secure file transfers by using Transfer Family, Amazon Cognito, and GuardDuty](secure-file-transfers.md)
+ [Send a notification when an IAM user is created](send-a-notification-when-an-iam-user-is-created.md)
+ [Set up a serverless cell router for a cell-based architecture](serverless-cell-router-architecture.md)
+ [Set up a CI/CD pipeline for hybrid workloads on Amazon ECS Anywhere by using AWS CDK and GitLab](set-up-a-ci-cd-pipeline-for-hybrid-workloads-on-amazon-ecs-anywhere-by-using-aws-cdk-and-gitlab.md)
+ [Set up an HA/DR architecture for Oracle E-Business Suite on Amazon RDS Custom with an active standby database](set-up-an-ha-dr-architecture-for-oracle-e-business-suite-on-amazon-rds-custom-with-an-active-standby-database.md)
+ [Set up DNS resolution for hybrid networks in a multi-account AWS environment](set-up-dns-resolution-for-hybrid-networks-in-a-multi-account-aws-environment.md)
+ [Set up Multi-AZ infrastructure for a SQL Server Always On FCI by using Amazon FSx](set-up-multi-az-infrastructure-for-a-sql-server-always-on-fci-by-using-amazon-fsx.md)
+ [Set up Oracle UTL\_FILE functionality on Aurora PostgreSQL-Compatible](set-up-oracle-utl_file-functionality-on-aurora-postgresql-compatible.md)
+ [Simplify application authentication with mutual TLS in Amazon ECS by using Application Load Balancer](simplify-application-authentication-with-mutual-tls-in-amazon-ecs.md)
+ [Simplify private certificate management by using AWS Private CA and AWS RAM](simplify-private-certificate-management-by-using-aws-private-ca-and-aws-ram.md)
+ [Streamline machine learning workflows from local development to scalable experiments by using SageMaker AI and Hydra](streamline-machine-learning-workflows-by-using-amazon-sagemaker.md)
+ [Tag Transit Gateway attachments automatically using AWS Organizations](tag-transit-gateway-attachments-automatically-using-aws-organizations.md)
+ [Transition roles for an Oracle PeopleSoft application on Amazon RDS Custom for Oracle](transition-roles-for-an-oracle-peoplesoft-application-on-amazon-rds-custom-for-oracle.md)
+ [Use Amazon Q Developer as a coding assistant to increase your productivity](use-q-developer-as-coding-assistant-to-increase-productivity.md)