

# Security, identity & compliance
<a name="securityandcompliance-pattern-list"></a>

**Topics**
+ [Automate incident response and forensics](automate-incident-response-and-forensics.md)
+ [Automatically audit AWS security groups that allow access from public IP addresses](audit-security-groups-access-public-ip.md)
+ [Automatically remediate unencrypted Amazon RDS DB instances and clusters](automatically-remediate-unencrypted-amazon-rds-db-instances-and-clusters.md)
+ [Automatically validate and deploy IAM policies and roles by using CodePipeline, IAM Access Analyzer, and AWS CloudFormation macros](automatically-validate-and-deploy-iam-policies-and-roles-in-an-aws-account-by-using-codepipeline-iam-access-analyzer-and-aws-cloudformation-macros.md)
+ [Bidirectionally integrate AWS Security Hub CSPM with Jira Software](bidirectionally-integrate-aws-security-hub-with-jira-software.md)
+ [Build a pipeline for hardened container images using EC2 Image Builder and Terraform](build-a-pipeline-for-hardened-container-images-using-ec2-image-builder-and-terraform.md)
+ [Centralize IAM access key management in AWS Organizations by using Terraform](centralize-iam-access-key-management-in-aws-organizations-by-using-terraform.md)
+ [Check an Amazon CloudFront distribution for access logging, HTTPS, and TLS version](check-an-amazon-cloudfront-distribution-for-access-logging-https-and-tls-version.md)
+ [Choose an Amazon Cognito authentication flow for enterprise applications](choose-an-amazon-cognito-authentication-flow-for-enterprise-applications.md)
+ [Create AWS Config custom rules by using AWS CloudFormation Guard policies](create-aws-config-custom-rules-by-using-aws-cloudformation-guard-policies.md)
+ [Create a consolidated report of Prowler security findings from multiple AWS accounts](create-a-consolidated-report-of-prowler-security-findings-from-multiple-aws-accounts.md)
+ [Deploy and manage AWS Control Tower controls by using AWS CDK and CloudFormation](deploy-and-manage-aws-control-tower-controls-by-using-aws-cdk-and-aws-cloudformation.md)
+ [Deploy and manage AWS Control Tower controls by using Terraform](deploy-and-manage-aws-control-tower-controls-by-using-terraform.md)
+ [Deploy the Security Automations for AWS WAF solution by using Terraform](deploy-the-security-automations-for-aws-waf-solution-by-using-terraform.md)
+ [Deploy a pipeline that simultaneously detects security issues in multiple code deliverables](deploy-a-pipeline-that-simultaneously-detects-security-issues-in-multiple-code-deliverables.md)
+ [Deploy detective attribute-based access controls for public subnets by using AWS Config](deploy-detective-attribute-based-access-controls-for-public-subnets-by-using-aws-config.md)
+ [Deploy preventative attribute-based access controls for public subnets](deploy-preventative-attribute-based-access-controls-for-public-subnets.md)
+ [Detect Amazon RDS and Aurora database instances that have expiring CA certificates](detect-rds-instances-expiring-certificates.md)
+ [Dynamically generate an IAM policy with IAM Access Analyzer by using Step Functions](dynamically-generate-an-iam-policy-with-iam-access-analyzer-by-using-step-functions.md)
+ [Enable Amazon GuardDuty conditionally by using AWS CloudFormation templates](enable-amazon-guardduty-conditionally-by-using-aws-cloudformation-templates.md)
+ [Enable transparent data encryption in Amazon RDS for SQL Server](enable-transparent-data-encryption-in-amazon-rds-for-sql-server.md)
+ [Monitor and remediate scheduled deletion of AWS KMS keys](monitor-and-remediate-scheduled-deletion-of-aws-kms-keys.md)
+ [Identify public Amazon S3 buckets in AWS Organizations by using Security Hub CSPM](identify-public-s3-buckets-in-aws-organizations-using-security-hub.md)
+ [Ingest and analyze AWS security logs in Microsoft Sentinel](ingest-analyze-aws-security-logs-sentinel.md)
+ [Manage AWS Organizations policies as code by using AWS CodePipeline and Amazon Bedrock](manage-organizations-policies-as-code.md)
+ [Manage AWS IAM Identity Center permission sets as code by using AWS CodePipeline](manage-aws-iam-identity-center-permission-sets-as-code-by-using-aws-codepipeline.md)
+ [Manage credentials using AWS Secrets Manager](manage-credentials-using-aws-secrets-manager.md)
+ [Monitor Amazon ElastiCache clusters for at-rest encryption](monitor-amazon-elasticache-clusters-for-at-rest-encryption.md)
+ [Monitor IAM root user activity](monitor-iam-root-user-activity.md)
+ [Send a notification when an IAM user is created](send-a-notification-when-an-iam-user-is-created.md)
+ [Prevent internet access at the account level by using a service control policy](prevent-internet-access-at-the-account-level-by-using-a-service-control-policy.md)
+ [Export a report of AWS IAM Identity Center identities and their assignments by using PowerShell](export-a-report-of-aws-iam-identity-center-identities-and-their-assignments-by-using-powershell.md)
+ [Restrict access based on IP address or geolocation by using AWS WAF](aws-waf-restrict-access-geolocation.md)
+ [Scan Git repositories for sensitive information and security issues by using git-secrets](scan-git-repositories-for-sensitive-information-and-security-issues-by-using-git-secrets.md)
+ [Secure file transfers by using Transfer Family, Amazon Cognito, and GuardDuty](secure-file-transfers.md)
+ [Secure sensitive data in CloudWatch Logs by using Amazon Macie](secure-cloudwatch-logs-using-macie.md)
+ [Send alerts from AWS Network Firewall to a Slack channel](send-alerts-from-aws-network-firewall-to-a-slack-channel.md)
+ [Send custom attributes to Amazon Cognito and inject them into tokens](send-custom-attributes-cognito.md)
+ [Simplify private certificate management by using AWS Private CA and AWS RAM](simplify-private-certificate-management-by-using-aws-private-ca-and-aws-ram.md)
+ [Streamline Amazon EC2 compliance management with Amazon Bedrock agents and AWS Config](streamline-amazon-ec2-compliance-management-with-amazon-bedrock-agents-and-aws-config.md)
+ [Update AWS CLI credentials from AWS IAM Identity Center by using PowerShell](update-aws-cli-credentials-from-aws-iam-identity-center-by-using-powershell.md)
+ [Use Network Firewall to capture the DNS domain names from the Server Name Indication for outbound traffic](use-network-firewall-to-capture-the-dns-domain-names-from-the-server-name-indication-sni-for-outbound-traffic.md)
+ [Use Terraform to automatically enable Amazon GuardDuty for an organization](use-terraform-to-automatically-enable-amazon-guardduty-for-an-organization.md)
+ [Verify operational best practices for PCI DSS 4.0 by using AWS Config](verify-ops-best-practices-pci-dss-4.md)
+ [More patterns](securityandcompliance-more-patterns-pattern-list.md)

# Automate incident response and forensics
<a name="automate-incident-response-and-forensics"></a>

*Lucas Kauffman and Tomek Jakubowski, Amazon Web Services*

## Summary
<a name="automate-incident-response-and-forensics-summary"></a>

This pattern deploys a set of processes that use AWS Lambda functions to provide the following:
+ A way to initiate the incident-response process with minimum knowledge
+ Automated, repeatable processes that are aligned with the *AWS Security Incident Response Guide*
+ Separation of accounts to operate the automation steps, store artifacts, and create forensic environments

The Automated Incident Response and Forensics framework follows a standard digital forensic process consisting of the following phases:

1. Containment

1. Acquisition

1. Examination

1. Analysis

You can perform investigations on static data (for example, acquired memory or disk images) and on dynamic data that is live but on separated systems.

For more details, see the [Additional information](#automate-incident-response-and-forensics-additional) section.

## Prerequisites and limitations
<a name="automate-incident-response-and-forensics-prereqs"></a>

**Prerequisites**
+ Two AWS accounts:
  + Security account, which can be an existing account, but is preferably new
  + Forensics account, preferably new
+ AWS Organizations set up
+ In the Organizations member accounts:
  + The Amazon Elastic Compute Cloud (Amazon EC2) instance role must have Get and List access to Amazon Simple Storage Service (Amazon S3) and be accessible by AWS Systems Manager. We recommend using the `AmazonSSMManagedInstanceCore` AWS managed policy. Note that this role is automatically attached to the Amazon EC2 instance when incident response is initiated. After the response has finished, AWS Identity and Access Management (IAM) removes all rights to the instance.
  + Virtual private cloud (VPC) endpoints in the AWS member account and in the Incident Response and Analysis VPCs. Those endpoints are: S3 Gateway, EC2 Messages, SSM, and SSM Messages.
+ AWS Command Line Interface (AWS CLI) installed on the Amazon EC2 instances. If the Amazon EC2 instances don’t have AWS CLI installed, internet access will be required for the disk snapshot and memory acquisition to work. In this case, the scripts will reach out to the internet to download the AWS CLI installation files and will install them on the instances.

**Limitations**
+ This framework is not intended to generate artifacts that can be considered electronic evidence admissible in court.
+ Currently, this pattern supports only Linux-based instances running on the x86 architecture.

## Architecture
<a name="automate-incident-response-and-forensics-architecture"></a>

**Target architecture**

In addition to the member account, the target environment consists of two main accounts: a Security account and a Forensics account. Two accounts are used for the following reasons:
+ To separate them from any other customer accounts to reduce blast radius in case of a failed forensic analysis
+ To help ensure the isolation and protection of the integrity of the artifacts being analyzed
+ To keep the investigation confidential
+ To avoid situations where threat actors have exhausted the resources immediately available to your compromised AWS account by hitting service quotas, which would prevent you from launching an Amazon EC2 instance to perform investigations

Also, having separate Security and Forensics accounts allows for creating separate roles—a Responder for acquiring evidence and an Investigator for analyzing it. Each role would have access to its separate account.

The following diagram shows only the interaction between the accounts. Details of each account are shown in subsequent diagrams, and a complete diagram is attached.

![\[Interaction between member, security, and forensics accounts and users, the internet, and Slack.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/7fc94597-d82d-4f6d-9c8b-5e0060010c53/images/6ed33293-d198-4458-9e38-74f6d20629c9.png)


The following diagram shows the member account.

![\[Member account with AWS KMS key, IAM roles, Lambda functions, endpoints, VPC with two EC2 instances.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/7fc94597-d82d-4f6d-9c8b-5e0060010c53/images/464fcefa-1418-4c9e-9902-5050a76ba9b9.png)


1. An event is sent to the Slack Amazon Simple Notification Service (Amazon SNS) topic.

The following diagram shows the Security account.

![\[Security account with EC2DdCopyInstance in the incident response VPC and with LiME memory modules.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/7fc94597-d82d-4f6d-9c8b-5e0060010c53/images/89dda7a1-972a-403e-abf8-98fc422422b2.png)


2. The Amazon SNS topic in the Security account initiates Forensics events.

The following diagram shows the Forensics account.

![\[Forensics account with forensics and victim EC2 instances, an Analysis VPC, and a Maintenance VPC.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/7fc94597-d82d-4f6d-9c8b-5e0060010c53/images/da3bcfcc-cdca-4875-ada5-6131e8b666bc.png)


The Security account is where the two main AWS Step Functions workflows are created for memory and disk image acquisition. After the workflows are running, they access the member account that has the Amazon EC2 instances involved in an incident, and they initiate a set of Lambda functions that will gather a memory dump or a disk dump. Those artifacts are then stored in the Forensics account.

The Forensics account will hold the artifacts gathered by the Step Functions workflow in the Analysis artifacts Amazon S3 bucket. The Forensics account will also have an Amazon EC2 Image Builder pipeline that builds an Amazon Machine Image (AMI) of a Forensics instance. Currently, the image is based on SANS SIFT Workstation. 

The build process uses the Maintenance VPC, which has connectivity to the internet. The image can be later used for spinning up the Amazon EC2 instance for analysis of the gathered artifacts in the Analysis VPC. 

The Analysis VPC does not have internet connectivity. By default, the pattern creates three private analysis subnets. You can create up to 200 subnets, which is the quota for the number of subnets in a VPC. However, for AWS Systems Manager Session Manager to automate running commands in those subnets, you must add them to the VPC endpoints.

From a best-practices perspective, we recommend using AWS CloudTrail and AWS Config to do the following: 
+ Track changes made in your Forensics account
+ Monitor access and integrity of the artifacts that are stored and analyzed

**Workflow**

The following diagram shows the key steps of a workflow that includes the process and decision tree from when an instance is compromised until it is analyzed and contained.

1. Has the `SecurityIncidentStatus` tag been set with the value `Analyze`? If yes, do the following:

   1. Attach the correct IAM profiles for AWS Systems Manager and Amazon S3.

   1. Send a message to the Slack Amazon SNS topic.

   1. Send an Amazon SNS message to the `SecurityIncident` queue.

   1. Invoke the Memory and Disk Acquisition state machine.

1. Have memory and disk been acquired? If no, there is an error.

1. Tag the Amazon EC2 instance with the `Contain` tag.

1. Attach the IAM role and security group to fully isolate the instance.

![\[Workflow steps listed previously.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/7fc94597-d82d-4f6d-9c8b-5e0060010c53/images/b319bd9b-8cb4-4048-b5c8-6e39e72908b0.png)
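The tag-driven decision tree above can be sketched in Python. This is illustrative only; the actual logic lives in the framework's Lambda functions, and the function name and action strings here are assumptions, not the repository's code:

```python
# Illustrative sketch of the tag-driven workflow decision tree.
# Tag keys and values come from this pattern; the function itself
# is hypothetical and does not appear in the framework's code.

def next_actions(tag_value: str) -> list[str]:
    """Map the SecurityIncidentStatus tag value to the framework's next steps."""
    if tag_value == "Analyze":
        return [
            "attach IAM profiles for Systems Manager and Amazon S3",
            "notify the Slack SNS topic",
            "notify the SecurityIncident SNS queue",
            "invoke the memory and disk acquisition state machine",
        ]
    if tag_value == "Contain":
        return [
            "attach the isolation security group (no inbound/outbound)",
            "attach the IAM role that disallows all access",
        ]
    return []  # any other tag value is ignored by the framework
```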


**Automation and scale**

The intent of this pattern is to provide a scalable solution to perform incident response and forensics across several accounts within a single AWS Organizations organization.

## Tools
<a name="automate-incident-response-and-forensics-tools"></a>

**AWS services**
+ [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) helps you set up AWS resources, provision them quickly and consistently, and manage them throughout their lifecycle across AWS accounts and Regions.
+ [AWS Command Line Interface (AWS CLI)](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) is an open source tool for interacting with AWS services through commands in your command-line shell.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.
+ [AWS Key Management Service (AWS KMS)](https://docs.aws.amazon.com/kms/index.html) helps you create and control cryptographic keys to protect your data.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a compute service that helps you run code without needing to provision or manage servers. It runs your code only when needed and scales automatically, so you pay only for the compute time that you use.
+ [Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) is a cloud-based object storage service that helps you store, protect, and retrieve any amount of data.
+ [AWS Security Hub CSPM](https://docs.aws.amazon.com/securityhub/latest/userguide/what-is-securityhub.html) provides a comprehensive view of your security state in AWS. It also helps you check your AWS environment against security industry standards and best practices.
+ [Amazon Simple Notification Service (Amazon SNS)](https://docs.aws.amazon.com/sns/latest/dg/welcome.html) helps you coordinate and manage the exchange of messages between publishers and clients, including web servers and email addresses.
+ [AWS Step Functions](https://docs.aws.amazon.com/step-functions/latest/dg/welcome.html) is a serverless orchestration service that helps you combine AWS Lambda functions and other AWS services to build business-critical applications. 
+ [AWS Systems Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/what-is-systems-manager.html) helps you manage your applications and infrastructure running in the AWS Cloud. It simplifies application and resource management, shortens the time to detect and resolve operational problems, and helps you manage your AWS resources securely at scale.

**Code**

For the code and specific implementation and usage guidance, see the GitHub [Automated Incident Response and Forensics Framework](https://github.com/awslabs/aws-automated-incident-response-and-forensics) repository.

## Epics
<a name="automate-incident-response-and-forensics-epics"></a>

### Deploy the CloudFormation templates
<a name="deploy-the-cfnshort-templates"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Deploy CloudFormation templates. | The CloudFormation templates are numbered 1 through 7, and the first word of each script name indicates the account in which the template needs to be deployed. Note that the order in which you launch the CloudFormation templates is important. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-incident-response-and-forensics.html) To initiate the incident response framework for a specific Amazon EC2 instance, create a tag with the key `SecurityIncidentStatus` and the value `Analyze`. This initiates the member Lambda function, which automatically starts isolation as well as memory and disk acquisition. | AWS administrator | 
| Operate the framework. | The Lambda function also retags the asset at the end (or on failure) with `Contain`. This initiates the containment, which fully isolates the instance by using a security group with no inbound or outbound rules and an IAM role that disallows all access. Follow the steps in the [GitHub repository](https://github.com/awslabs/aws-automated-incident-response-and-forensics#operating-the-incident-response-framework). | AWS administrator | 
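Programmatically, initiating the framework for an instance comes down to setting the tag described above. The following boto3 sketch shows one way to do that; the helper function name and the example instance ID are placeholders, not part of the pattern's code:

```python
# Sketch: initiate incident response on an EC2 instance by setting
# the SecurityIncidentStatus=Analyze tag (key and value from this pattern).

INITIATION_TAGS = [{"Key": "SecurityIncidentStatus", "Value": "Analyze"}]

def initiate_incident_response(instance_id: str) -> None:
    """Tag the instance so the member Lambda function starts acquisition."""
    import boto3  # requires credentials for the member account

    ec2 = boto3.client("ec2")
    ec2.create_tags(Resources=[instance_id], Tags=INITIATION_TAGS)

# Example (placeholder instance ID, not run here):
# initiate_incident_response("i-0123456789abcdef0")
```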

### Deploy custom Security Hub CSPM actions
<a name="deploy-custom-ash-actions"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Deploy the custom Security Hub CSPM actions by using a CloudFormation template. | To create a custom action so that you can use the dropdown list from Security Hub CSPM, deploy the `Modules/SecurityHub Custom Actions/SecurityHubCustomActions.yaml` CloudFormation template. Then modify the `IRAutomation` role in each of the member accounts to allow the Lambda function that runs the action to assume the `IRAutomation` role. For more information, see the [GitHub repository](https://github.com/awslabs/aws-automated-incident-response-and-forensics#securityhub-actions). | AWS administrator | 

## Related resources
<a name="automate-incident-response-and-forensics-resources"></a>
+ [AWS Security Incident Response Guide](https://docs.aws.amazon.com/whitepapers/latest/aws-security-incident-response-guide/welcome.html)

## Additional information
<a name="automate-incident-response-and-forensics-additional"></a>

By using this environment, a Security Operations Center (SOC) team can improve their security incident response process through the following:
+ Having the ability to perform forensics in a segregated environment to avoid accidental compromise of production resources
+ Having a standardized, repeatable, automated process for containment and analysis
+ Giving any account owner or administrator the ability to initiate the incident-response process with minimal knowledge (they need to know only how to apply tags)
+ Having a standardized, clean environment for performing incident analysis and forensics without the noise of a larger environment
+ Having the ability to create multiple analysis environments in parallel
+ Focusing SOC resources on incident response instead of on maintenance and documentation of a cloud forensics environment
+ Moving away from a manual process toward an automated one to achieve scalability
+ Using CloudFormation templates for consistency and to avoid repetitive tasks

Additionally, you avoid using persistent infrastructure, and you pay for resources only when you need them.

## Attachments
<a name="attachments-7fc94597-d82d-4f6d-9c8b-5e0060010c53"></a>

To access additional content that is associated with this document, unzip the following file: [attachment.zip](samples/p-attach/7fc94597-d82d-4f6d-9c8b-5e0060010c53/attachments/attachment.zip)

# Automatically audit AWS security groups that allow access from public IP addresses
<a name="audit-security-groups-access-public-ip"></a>

*Eugene Shifer and Stephen DiCato, Amazon Web Services*

## Summary
<a name="audit-security-groups-access-public-ip-summary"></a>

As a security best practice, it's crucial to minimize the exposure of AWS resources to only what is absolutely necessary. For example, web servers that serve the general public need to allow inbound access from the internet, but access to other workloads should be restricted to specific networks to reduce unnecessary exposure. [Security groups](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-security-groups.html) in Amazon Virtual Private Cloud (Amazon VPC) are an effective control to help you limit resource access. However, evaluating security groups can be a cumbersome task, especially in multi-account architectures. [AWS Config rules](https://docs.aws.amazon.com/config/latest/developerguide/evaluate-config.html) and [AWS Security Hub CSPM controls](https://docs.aws.amazon.com/securityhub/latest/userguide/controls-view-manage.html) can help you identify security groups that permit access from the public internet (0.0.0.0/0) to specific network communication protocols, such as Secure Shell (SSH), HTTP, HTTPS, and Remote Desktop Protocol (RDP). However, these rules and controls are not applicable if services run on non-standard ports or if access is restricted to certain public IP addresses. For instance, this might occur when a web service is associated with TCP port 8443 instead of the standard TCP port 443. This might also occur when developers have access to the server from their home networks, such as for testing purposes.

To address this, you can use the infrastructure as code (IaC) solution provided in this pattern to identify security groups that allow access from any non-private ([RFC 1918](https://datatracker.ietf.org/doc/html/rfc1918) noncompliant) IP addresses to any workload in your AWS account or AWS organization. The [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) template provisions a custom AWS Config rule, an [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) function, and the necessary permissions. You can deploy it as a [stack](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacks.html) in a single account or as a [stack set](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/what-is-cfnstacksets.html) across the entire organization, managed through AWS Organizations.
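The core check described above, flagging source CIDRs that reach outside the RFC 1918 private ranges, can be expressed with Python's standard `ipaddress` module. This is a minimal sketch of the idea, not the repository's actual implementation, and it handles IPv4 only:

```python
import ipaddress

# RFC 1918 private ranges; any source outside these is treated as public.
PRIVATE_NETS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_public_source(cidr: str) -> bool:
    """Return True if the IPv4 CIDR is not contained in RFC 1918 space."""
    net = ipaddress.ip_network(cidr, strict=False)
    return not any(net.subnet_of(private) for private in PRIVATE_NETS)
```

For example, `is_public_source("0.0.0.0/0")` and `is_public_source("203.0.113.0/24")` are flagged, while `is_public_source("10.0.1.0/24")` is not.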

## Prerequisites and limitations
<a name="audit-security-groups-access-public-ip-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ Experience using [GitHub](https://github.com/skills/introduction-to-github?tab=readme-ov-file)
+ If you're deploying into a single AWS account:
  + [Permissions](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-iam-template.html) to create CloudFormation stacks
  + AWS Config [set up](https://docs.aws.amazon.com/config/latest/developerguide/getting-started.html) in the target account
  + (Optional) Security Hub CSPM [set up](https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-settingup.html#securityhub-manual-setup-overview) in the target account
+ If you're deploying into an AWS organization:
  + [Permissions](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-iam-template.html) to create CloudFormation stack sets
  + Security Hub CSPM [set up](https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-settingup.html#securityhub-orgs-setup-overview) with AWS Organizations integration
  + AWS Config [set up](https://docs.aws.amazon.com/config/latest/developerguide/getting-started.html) in the accounts where you are deploying this solution
  + A designated AWS account that acts as the delegated administrator for AWS Config and Security Hub CSPM

**Limitations**
+ If you're deploying to an individual account that doesn't have Security Hub CSPM enabled, you can use AWS Config to evaluate the findings.
+ If you're deploying to an organization that doesn't have a delegated administrator for AWS Config and Security Hub CSPM, you must log in to the individual member accounts to view the findings.
+ If you use AWS Control Tower to manage and govern the accounts in your organization, deploy the IaC in this pattern by using [Customizations for AWS Control Tower (CfCT)](https://docs.aws.amazon.com/controltower/latest/userguide/cfct-overview.html). Using the CloudFormation console would create configuration drift from AWS Control Tower guardrails and require that you re-enroll the organizational units (OUs) or managed accounts.
+ Some AWS services aren’t available in all AWS Regions. For Region availability, see [AWS services by Region](https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/). For specific endpoints, see the [Service endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/aws-service-information.html) page, and choose the link for the service.

## Architecture
<a name="audit-security-groups-access-public-ip-architecture"></a>

**Deploying into an individual AWS account**

The following architecture diagram shows the deployment of the AWS resources within a single AWS account. You provision the resources by using a CloudFormation template directly through the CloudFormation console. If Security Hub CSPM is enabled, you can view the results in either AWS Config or Security Hub CSPM. If Security Hub CSPM is not enabled, you can view the results only in AWS Config.

![\[Deployment of the IaC template as a CloudFormation stack in a single AWS account.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/caa8013b-3578-434b-b2c0-5ca7faf45d2d/images/01318e4c-49b5-415f-ac7a-e45451c374cf.png)


The diagram shows the following workflow:

1. You create a CloudFormation stack. This deploys a Lambda function and an AWS Config rule. Both the rule and function are set up with the AWS Identity and Access Management (IAM) permissions that are required to publish resource evaluations in AWS Config and logs.

1. The AWS Config rule operates in [detective evaluation mode](https://docs.aws.amazon.com/config/latest/developerguide/evaluate-config-rules.html#aws-config-rules-evaluation-modes) and invokes the Lambda function every 24 hours.

1. The Lambda function assesses the security groups and sends updates to AWS Config.

1. Security Hub CSPM receives all of the AWS Config findings.

1. You can view the findings in Security Hub CSPM or in AWS Config, depending on the services that you have set up in the account.
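The evaluation in steps 2 and 3 above can be sketched as follows. This is illustrative of how a custom AWS Config rule's Lambda function might classify a security group's ingress rules; the function name and verdict handling are assumptions, and the repository's real implementation differs. The input shape matches the `IpPermissions` structure returned by the EC2 `DescribeSecurityGroups` API:

```python
import ipaddress

# RFC 1918 private ranges; sources outside these count as public.
PRIVATE_NETS = [
    ipaddress.ip_network(n)
    for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")
]

def evaluate_security_group(ip_permissions: list[dict]) -> str:
    """Return an AWS Config-style verdict for a security group's ingress
    rules: NON_COMPLIANT if any IPv4 source CIDR reaches outside RFC 1918
    space, COMPLIANT otherwise."""
    for permission in ip_permissions:
        for ip_range in permission.get("IpRanges", []):
            net = ipaddress.ip_network(ip_range["CidrIp"], strict=False)
            if not any(net.subnet_of(private) for private in PRIVATE_NETS):
                return "NON_COMPLIANT"
    return "COMPLIANT"
```

A rule built this way catches non-standard ports and narrow public allowlists alike, because it inspects the source CIDR rather than the port.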

**Deploying into an AWS organization**

The following diagram shows deployment of the pattern across multiple accounts that are managed through AWS Organizations and AWS Control Tower. You deploy the CloudFormation template through CfCT. The assessment outcomes are centralized in Security Hub CSPM in the delegated administrator account. The AWS CodePipeline workflow section of the diagram shows the background steps that occur during CfCT deployment.

![\[Deployment of the IaC template as a CloudFormation stack set in an AWS organization.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/caa8013b-3578-434b-b2c0-5ca7faf45d2d/images/f4500347-a481-4cd3-ba14-25a034af7531.png)


The diagram shows the following workflow:

1. In the management account, you upload a compressed (ZIP) file of the IaC template to an Amazon Simple Storage Service (Amazon S3) bucket that is deployed by CfCT.

1. The CfCT pipeline unzips the file, runs [cfn-nag](https://github.com/stelligent/cfn_nag) (GitHub) checks, and deploys the template as a CloudFormation stack set.

1. Depending on the configuration you specify in the CfCT manifest file, CloudFormation StackSets deploys stacks into individual accounts or specified OUs. This deploys a Lambda function and an AWS Config rule in the target accounts. Both the rule and function are set up with the IAM permissions that are required to publish resource evaluations in AWS Config and logs.

1. The AWS Config rule operates in [detective evaluation mode](https://docs.aws.amazon.com/config/latest/developerguide/evaluate-config-rules.html#aws-config-rules-evaluation-modes) and invokes the Lambda function every 24 hours.

1. The Lambda function assesses the security groups and sends updates to AWS Config.

1. AWS Config forwards all of the findings to Security Hub CSPM.

1. The Security Hub CSPM findings are aggregated in the delegated administrator account.

1. You can view the aggregated findings in Security Hub CSPM in the delegated administrator account.
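A CfCT manifest entry for this deployment might look like the following sketch. The template file name, OU name, and Regions are placeholders that you would adapt to your landing zone:

```yaml
# Sketch of a CfCT manifest (version 2021-03-15 schema) entry that
# deploys the IaC template as a stack set. All names are placeholders.
region: us-east-1
version: 2021-03-15
resources:
  - name: audit-public-security-groups
    resource_file: templates/detect-public-security-groups.template
    deploy_method: stack_set
    deployment_targets:
      organizational_units:
        - Workloads
    regions:
      - us-east-1
```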

## Tools
<a name="audit-security-groups-access-public-ip-tools"></a>

**AWS services**
+ [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) helps you set up AWS resources, provision them quickly and consistently, and manage them throughout their lifecycle across AWS accounts and AWS Regions.
+ [AWS Config](https://docs.aws.amazon.com/config/latest/developerguide/WhatIsConfig.html) provides a detailed view of the resources in your AWS account and how they’re configured. It helps you identify how resources are related to one another and how their configurations have changed over time. An AWS Config [rule](https://docs.aws.amazon.com/config/latest/developerguide/evaluate-config.html) defines your ideal configuration settings for a resource, and AWS Config can evaluate whether your AWS resources comply with the conditions in the rule.
+ [AWS Control Tower](https://docs.aws.amazon.com/controltower/latest/userguide/what-is-control-tower.html) helps you set up and govern an AWS multi-account environment, following prescriptive best practices. [Customizations for AWS Control Tower (CfCT)](https://docs.aws.amazon.com/controltower/latest/userguide/cfct-overview.html) helps you customize your AWS Control Tower landing zone and stay aligned with AWS best practices. Customizations for this solution are implemented through CloudFormation templates and AWS Organizations [service control policies (SCPs)](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html).
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a compute service that helps you run code without needing to provision or manage servers. It runs your code only when needed and scales automatically, so you pay only for the compute time that you use.
+ [AWS Organizations](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_introduction.html) is an account management service that helps you consolidate multiple AWS accounts into an organization that you create and centrally manage.
+ [AWS Security Hub CSPM](https://docs.aws.amazon.com/securityhub/latest/userguide/what-is-securityhub.html) provides a comprehensive view of your security state in AWS. It also helps you check your AWS environment against security industry standards and best practices.

**Other tools**
+ [Python](https://www.python.org/) is a general-purpose computer programming language.

**Code repository**

The code for this pattern is available in the GitHub [Detect vulnerable security groups](https://github.com/aws-samples/detect-public-security-groups/tree/main) repository.

## Best practices
<a name="audit-security-groups-access-public-ip-best-practices"></a>

We recommend that you adhere to the best practices in the following resources:
+ [Best Practices for Organizational Units with AWS Organizations](https://aws.amazon.com/blogs/mt/best-practices-for-organizational-units-with-aws-organizations/) (AWS Cloud Operations & Migrations Blog)
+ [Guidance for Establishing an Initial Foundation using AWS Control Tower on AWS](https://aws.amazon.com/solutions/guidance/establishing-an-initial-foundation-using-control-tower-on-aws/) (AWS Solutions Library)
+ [Guidance for creating and modifying AWS Control Tower resources](https://docs.aws.amazon.com/controltower/latest/userguide/getting-started-guidance.html) (AWS Control Tower documentation)
+ [CfCT deployment considerations](https://docs.aws.amazon.com/controltower/latest/userguide/cfct-considerations.html) (AWS Control Tower documentation)
+ [Apply least-privilege permissions](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#grant-least-privilege) (IAM documentation)

## Epics
<a name="audit-security-groups-access-public-ip-epics"></a>

### Review the CloudFormation template
<a name="review-the-cfnshort-template"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Determine your deployment strategy. | Review the solution and code to determine the deployment strategy for your AWS environment. Determine whether you are deploying into a single account or an AWS organization. | App owner, General AWS | 
| Clone the repository. | Enter the following command to clone the [Detect vulnerable security groups](https://github.com/aws-samples/detect-public-security-groups.git) repository:<pre>git clone https://github.com/aws-samples/detect-public-security-groups.git</pre> | App developer, App owner | 
| Validate the Python version. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/audit-security-groups-access-public-ip.html) | AWS administrator, App developer | 

### Deploy the CloudFormation template
<a name="deploy-the-cfnshort-template"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Deploy the CloudFormation template. | Deploy the CloudFormation template into your AWS environment. Do one of the following: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/audit-security-groups-access-public-ip.html) | App developer, AWS administrator, General AWS | 
| Verify the deployment. | In the [CloudFormation console](https://console.aws.amazon.com/cloudformation/), verify that the stack or stack set has deployed successfully. | AWS administrator, App owner | 

### Review the findings
<a name="review-the-findings"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| View the AWS Config rule findings. | In Security Hub CSPM, do the following to view a list of individual findings: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/audit-security-groups-access-public-ip.html) In Security Hub CSPM, do the following to view a list of total findings grouped by AWS account: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/audit-security-groups-access-public-ip.html) In AWS Config, to view a list of findings, follow the instructions in [Viewing Compliance Information and Evaluation Results](https://docs.aws.amazon.com/config/latest/developerguide/evaluate-config_view-compliance.html) in the AWS Config documentation. | AWS administrator, AWS systems administrator, Cloud administrator | 

## Troubleshooting
<a name="audit-security-groups-access-public-ip-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| The CloudFormation stack set creation or deletion fails. | When AWS Control Tower is deployed, it enforces necessary guardrails and assumes control over AWS Config aggregators and rules. This includes preventing any direct alterations through CloudFormation. To properly deploy or remove this CloudFormation template, including all associated resources, you must use CfCT. | 
| CfCT fails to delete the CloudFormation template. | If the CloudFormation template persists even after you make the necessary changes in the manifest file and remove the template files, confirm that the manifest file contains the `enable_stack_set_deletion` parameter and that its value is set to `true`. For more information, see [Delete a stack set](https://docs.aws.amazon.com/controltower/latest/userguide/cfct-delete-stack.html) in the CfCT documentation. | 

## Related resources
<a name="audit-security-groups-access-public-ip-resources"></a>
+ [AWS Config Custom Rules](https://docs.aws.amazon.com/config/latest/developerguide/evaluate-config_develop-rules.html) (AWS Config documentation)

# Automatically remediate unencrypted Amazon RDS DB instances and clusters
<a name="automatically-remediate-unencrypted-amazon-rds-db-instances-and-clusters"></a>

*Ajay Rawat and Josh Joy, Amazon Web Services*

## Summary
<a name="automatically-remediate-unencrypted-amazon-rds-db-instances-and-clusters-summary"></a>

This pattern describes how to automatically remediate unencrypted Amazon Relational Database Service (Amazon RDS) DB instances and clusters on Amazon Web Services (AWS) by using AWS Config, AWS Systems Manager runbooks, and AWS Key Management Service (AWS KMS) keys.

Encrypted RDS DB instances provide an additional layer of data protection by securing your data from unauthorized access to the underlying storage. You can use Amazon RDS encryption to increase data protection of your applications deployed in the AWS Cloud, and to fulfill compliance requirements for encryption at rest. You can enable encryption for an RDS DB instance when you create it, but not after it's created. However, you can add encryption to an unencrypted RDS DB instance by creating a snapshot of your DB instance, and then creating an encrypted copy of that snapshot. You can then restore a DB instance from the encrypted snapshot to get an encrypted copy of your original DB instance.

This pattern uses AWS Config Rules to evaluate RDS DB instances and clusters. It applies remediation by using AWS Systems Manager runbooks, which define the actions to be performed on noncompliant Amazon RDS resources, and AWS KMS keys to encrypt the DB snapshots. It then enforces service control policies (SCPs) to prevent the creation of new DB instances and clusters without encryption.

The code for this pattern is provided in [GitHub](https://github.com/aws-samples/aws-system-manager-automation-unencrypted-to-encrypted-resources).

## Prerequisites and limitations
<a name="automatically-remediate-unencrypted-amazon-rds-db-instances-and-clusters-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ Files from the [GitHub source code repository](https://github.com/aws-samples/aws-system-manager-automation-unencrypted-to-encrypted-resources) for this pattern downloaded to your computer
+ An unencrypted RDS DB instance or cluster
+ An existing AWS KMS key for encrypting RDS DB instances and clusters
+ Access to update the KMS key resource policy
+ AWS Config enabled in your AWS account (see [Getting Started with AWS Config](https://docs.aws.amazon.com/config/latest/developerguide/getting-started.html) in the AWS documentation)

**Limitations**
+ You can enable encryption for an RDS DB instance only when you create it, not after it has been created.
+ You can't have an encrypted read replica of an unencrypted DB instance or an unencrypted read replica of an encrypted DB instance.
+ You can't restore an unencrypted backup or snapshot to an encrypted DB instance.
+ Amazon RDS encryption is available for most DB instance classes. For a list of exceptions, see [Encrypting Amazon RDS resources](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.Encryption.html) in the Amazon RDS documentation.
+ To copy an encrypted snapshot from one AWS Region to another, you must specify the KMS key in the destination AWS Region. This is because KMS keys are specific to the AWS Region that they are created in.
+ The source snapshot remains encrypted throughout the copy process. Amazon RDS uses envelope encryption to protect data during the copy process. For more information, see [Envelope encryption](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#enveloping) in the AWS KMS documentation.
+ You can't unencrypt an encrypted DB instance. However, you can export data from an encrypted DB instance and import the data into an unencrypted DB instance.
+ You should delete a KMS key only when you are sure that you don't need to use it any longer. If you aren't sure, consider [disabling the KMS key](https://docs.aws.amazon.com/kms/latest/developerguide/enabling-keys.html) instead of deleting it. You can reenable a disabled KMS key if you need to use it again later, but you cannot recover a deleted KMS key. 
+ If you don't choose to retain automated backups, your automated backups that are in the same AWS Region as the DB instance are deleted. They can't be recovered after you delete the DB instance.
+ Your automated backups are retained for the retention period that was set on the DB instance at the time you delete it, whether or not you choose to create a final DB snapshot.
+ If automatic remediation is enabled, this solution encrypts all noncompliant databases by using the same KMS key.

## Architecture
<a name="automatically-remediate-unencrypted-amazon-rds-db-instances-and-clusters-architecture"></a>

The following diagram illustrates the architecture for the CloudFormation implementation. Note that you can also implement this pattern by using the AWS Cloud Development Kit (AWS CDK).

![\[AWS CloudFormation implementation for remediating unencrypted Amazon RDS instances.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/7f7195e3-98c4-4b18-9192-c0400ac5b891/images/8c1466fa-15b3-44ef-aa7e-7958f80cb699.png)


## Tools
<a name="automatically-remediate-unencrypted-amazon-rds-db-instances-and-clusters-tools"></a>

**Tools**
+ [CloudFormation](https://aws.amazon.com/cloudformation/) helps you automatically set up your AWS resources. It enables you to use a template file to create and delete a collection of resources together as a single unit (a stack).
+ [AWS Cloud Development Kit (AWS CDK)](https://aws.amazon.com/cdk/) is a software development framework for defining your cloud infrastructure in code and provisioning it by using familiar programming languages.

**AWS services and features**
+ [AWS Config](https://aws.amazon.com/config/) keeps track of the configuration of your AWS resources and their relationships to your other resources. It can also evaluate those AWS resources for compliance. This service uses rules that can be configured to evaluate AWS resources against desired configurations. You can use a set of AWS Config managed rules for common compliance scenarios, or you can create your own rules for custom scenarios. When an AWS resource is found to be noncompliant, you can specify a remediation action through an AWS Systems Manager runbook and optionally send an alert through an Amazon Simple Notification Service (Amazon SNS) topic. In other words, you can associate remediation actions with AWS Config Rules and choose to run them automatically to address noncompliant resources without manual intervention. If a resource is still noncompliant after automatic remediation, you can set the rule to try automatic remediation again.
+ [Amazon Relational Database Service (Amazon RDS)](https://aws.amazon.com/rds/) makes it easier to set up, operate, and scale a relational database in the cloud. The basic building block of Amazon RDS is the DB instance, which is an isolated database environment in the AWS Cloud. Amazon RDS provides a [selection of instance types](https://aws.amazon.com/rds/instance-types/) that are optimized to fit different relational database use cases. Instance types comprise various combinations of CPU, memory, storage, and networking capacity and give you the flexibility to choose the appropriate mix of resources for your database. Each instance type includes several instance sizes, allowing you to scale your database to the requirements of your target workload.
+ [AWS Key Management Service (AWS KMS)](https://aws.amazon.com/kms/) is a managed service that makes it easy for you to create and control AWS KMS keys, which encrypt your data. A KMS key is a logical representation of a root key. The KMS key includes metadata, such as the key ID, creation date, description, and key state.
+ [AWS Identity and Access Management (IAM)](https://aws.amazon.com/iam/) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.
+ [Service control policies (SCPs)](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html) offer central control over the maximum available permissions for all accounts in your organization. SCPs help you ensure that your accounts stay within your organization’s access control guidelines. SCPs don't affect users or roles in the management account. They affect only the member accounts in your organization. We strongly recommend that you don't attach SCPs to the root of your organization without thoroughly testing the impact that the policy has on accounts. Instead, create an organizational unit (OU) that you can move your accounts into one at a time, or at least in small numbers, to ensure that you don't inadvertently lock users out of key services.
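As a sketch of the enforcement idea, an SCP of roughly the following shape denies creation of unencrypted DB instances. This is illustrative only and is not the repository's `rds_encrypted.json` policy; the `rds:StorageEncrypted` condition key is documented for the `rds:CreateDBInstance` action, and cluster-level enforcement may need additional statements:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyUnencryptedRDSInstances",
      "Effect": "Deny",
      "Action": "rds:CreateDBInstance",
      "Resource": "*",
      "Condition": {
        "Bool": {
          "rds:StorageEncrypted": "false"
        }
      }
    }
  ]
}
```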

**Code**

The source code and templates for this pattern are available in a [GitHub repository](https://github.com/aws-samples/aws-system-manager-automation-unencrypted-to-encrypted-resources/). The pattern provides two implementation options: you can deploy a CloudFormation template to create the remediation role that encrypts RDS DB instances and clusters, or you can use the AWS CDK. The repository has separate folders for these two options.

The [Epics](#automatically-remediate-unencrypted-amazon-rds-db-instances-and-clusters-epics) section provides step-by-step instructions for deploying the CloudFormation template. If you want to use the AWS CDK, follow the instructions in the `README.md` file in the GitHub repository.

## Best practices
<a name="automatically-remediate-unencrypted-amazon-rds-db-instances-and-clusters-best-practices"></a>
+ Enable data encryption both at rest and in transit.
+ Enable AWS Config in all accounts and AWS Regions.
+ Record configuration changes to all resource types.
+ Rotate your IAM credentials regularly.
+ Leverage tagging for AWS Config, which makes it easier to manage, search for, and filter resources.

## Epics
<a name="automatically-remediate-unencrypted-amazon-rds-db-instances-and-clusters-epics"></a>

### Create the IAM remediation role and Systems Manager runbook
<a name="create-the-iam-remediation-role-and-sys-runbook"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Download the CloudFormation template. | Download the `unencrypted-to-encrypted-rds.template.json` file from the [GitHub repository](https://github.com/aws-samples/aws-system-manager-automation-unencrypted-to-encrypted-resources/tree/main/rds/CloudFormation). | DevOps engineer | 
| Create the CloudFormation stack. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automatically-remediate-unencrypted-amazon-rds-db-instances-and-clusters.html) For more information about deploying templates, see the [CloudFormation documentation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-console-create-stack.html). | DevOps engineer | 
| Review CloudFormation parameters and values. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automatically-remediate-unencrypted-amazon-rds-db-instances-and-clusters.html) | DevOps engineer | 
| Review the resources. | When the stack has been created, its status changes to **CREATE\_COMPLETE**. Review the created resources (IAM role, Systems Manager runbook) in the CloudFormation console. | DevOps engineer | 

### Update the AWS KMS key policy
<a name="update-the-kms-key-policy"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Update your KMS key policy. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automatically-remediate-unencrypted-amazon-rds-db-instances-and-clusters.html)<pre>{<br />    "Sid": "Allow access through RDS for all principals in the account that are authorized to use RDS",<br />    "Effect": "Allow",<br />    "Principal": {<br />        "AWS": "arn:aws:iam::<your-AWS-account-ID>:role/<your-IAM-remediation-role>"<br />    },<br />    "Action": [<br />        "kms:Encrypt",<br />        "kms:Decrypt",<br />        "kms:ReEncrypt*",<br />        "kms:GenerateDataKey*",<br />        "kms:CreateGrant",<br />        "kms:ListGrants",<br />        "kms:DescribeKey"<br />    ],<br />    "Resource": "*",<br />    "Condition": {<br />        "StringEquals": {<br />            "kms:ViaService": "rds.us-east-1.amazonaws.com",<br />            "kms:CallerAccount": "<your-AWS-account-ID>"<br />        }<br />    }<br />}</pre> | DevOps engineer | 

### Find and remediate noncompliant resources
<a name="find-and-remediate-noncompliant-resources"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| View noncompliant resources. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automatically-remediate-unencrypted-amazon-rds-db-instances-and-clusters.html) The noncompliant resources listed in the AWS Config console are instances, not clusters. The remediation automation encrypts instances and clusters and creates either a newly encrypted instance or a newly encrypted cluster. However, be sure not to simultaneously remediate multiple instances that belong to the same cluster. Before you remediate any RDS DB instances or clusters, make sure that the RDS DB instance is not in use, and confirm that no write operations occur while the snapshot is being created, so that the snapshot contains the original data. Consider enforcing a maintenance window during which the remediation runs. | DevOps engineer | 
| Remediate noncompliant resources. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automatically-remediate-unencrypted-amazon-rds-db-instances-and-clusters.html) | DevOps engineer | 
| Verify that the RDS DB instance is available. | After the automation completes, the newly encrypted RDS DB instance will become available. The encrypted RDS DB instance will have the prefix `encrypted` followed by the original name. For example, if the unencrypted RDS DB instance name was `database-1`, the newly encrypted RDS DB instance would be `encrypted-database-1`. | DevOps engineer | 
| Terminate the unencrypted instance. | After remediation is complete and the newly encrypted resource has been validated, you can terminate the unencrypted instance. Make sure to confirm that the newly encrypted resource matches the unencrypted resource before you terminate any resources. | DevOps engineer | 
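The naming convention and the pre-termination check described above can be sketched as follows. The field names mirror the Amazon RDS `DescribeDBInstances` response, and `safe_to_terminate` is a hypothetical helper, not part of the pattern:

```python
def encrypted_name(original: str) -> str:
    """Apply the pattern's naming convention for the encrypted copy."""
    return f"encrypted-{original}"

def safe_to_terminate(original: dict, encrypted: dict) -> bool:
    """Minimal sanity check before deleting the unencrypted instance."""
    return (
        encrypted["DBInstanceIdentifier"] == encrypted_name(original["DBInstanceIdentifier"])
        and encrypted["Engine"] == original["Engine"]
        and encrypted["AllocatedStorage"] >= original["AllocatedStorage"]
        and encrypted["StorageEncrypted"]
    )

old = {"DBInstanceIdentifier": "database-1", "Engine": "postgres",
       "AllocatedStorage": 100, "StorageEncrypted": False}
new = {"DBInstanceIdentifier": "encrypted-database-1", "Engine": "postgres",
       "AllocatedStorage": 100, "StorageEncrypted": True}
print(safe_to_terminate(old, new))  # True
```

A real validation should also compare data, not just instance metadata, before anything is terminated.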

### Enforce SCPs
<a name="enforce-scps"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Enforce SCPs. | Enforce SCPs to prevent DB instances and clusters from being created without encryption in the future. Use the `rds_encrypted.json` file that’s provided in the [GitHub repository](https://github.com/aws-samples/aws-system-manager-automation-unencrypted-to-encrypted-resources/tree/main/rds/SCP) for this purpose, and follow the instructions in the [AWS documentation](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps_create.html).  | Security engineer | 

## Related resources
<a name="automatically-remediate-unencrypted-amazon-rds-db-instances-and-clusters-resources"></a>

**References**
+ [Setting up AWS Config](https://docs.aws.amazon.com/config/latest/developerguide/gs-console.html)
+ [AWS Config custom rules](https://docs.aws.amazon.com/config/latest/developerguide/evaluate-config_develop-rules.html)
+ [AWS KMS concepts](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html)
+ [AWS Systems Manager documents](https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-ssm-docs.html)
+ [Service control policies](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html)

**Tools**
+ [CloudFormation](https://aws.amazon.com/cloudformation/)
+ [AWS Cloud Development Kit (AWS CDK)](https://aws.amazon.com/cdk/)

**Guides and patterns**
+ [Automatically re-enable AWS CloudTrail by using a custom remediation rule in AWS Config](https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automatically-re-enable-aws-cloudtrail-by-using-a-custom-remediation-rule-in-aws-config.html)

## Additional information
<a name="automatically-remediate-unencrypted-amazon-rds-db-instances-and-clusters-additional"></a>

**How does AWS Config work?**

When you use AWS Config, it first discovers the supported AWS resources that exist in your account and generates a [configuration item](https://docs.aws.amazon.com/config/latest/developerguide/config-concepts.html#config-items) for each resource. AWS Config also generates configuration items when the configuration of a resource changes, and it maintains historical records of the configuration items of your resources from the time you start the configuration recorder. By default, AWS Config creates configuration items for every supported resource in the AWS Region. If you don't want AWS Config to create configuration items for all supported resources, you can specify the resource types that you want it to track.

**How are AWS Config and AWS Config Rules related to AWS Security Hub CSPM?**

AWS Security Hub CSPM provides security and compliance posture management as a service. It uses AWS Config and AWS Config Rules as its primary mechanism for evaluating the configuration of AWS resources. AWS Config Rules can also be used to evaluate resource configurations directly. Other AWS services, such as AWS Control Tower and AWS Firewall Manager, also use AWS Config Rules.

# Automatically validate and deploy IAM policies and roles by using CodePipeline, IAM Access Analyzer, and AWS CloudFormation macros
<a name="automatically-validate-and-deploy-iam-policies-and-roles-in-an-aws-account-by-using-codepipeline-iam-access-analyzer-and-aws-cloudformation-macros"></a>

*Helton Ribeiro and Guilherme Simoes, Amazon Web Services*

## Summary
<a name="automatically-validate-and-deploy-iam-policies-and-roles-in-an-aws-account-by-using-codepipeline-iam-access-analyzer-and-aws-cloudformation-macros-summary"></a>

This pattern describes the steps and provides code to create a deployment pipeline that allows your development teams to create AWS Identity and Access Management (IAM) policies and roles in your Amazon Web Services (AWS) accounts. This approach helps your organization reduce overhead for your operational teams and speed up the deployment process. It also helps your developers to create IAM roles and policies that are compatible with your existing governance and security controls.

This pattern’s approach uses [AWS Identity and Access Management Access Analyzer](https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-reference-policy-checks.html) to validate the IAM policies that you want to attach to IAM roles and uses AWS CloudFormation to deploy the IAM roles. However, instead of directly editing the AWS CloudFormation template file, your development team creates JSON-formatted IAM policies and roles. An AWS CloudFormation macro transforms these JSON-formatted policy files into AWS CloudFormation IAM resource types before beginning the deployment.
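As a sketch of what such a macro does, the transform turns a developer-authored JSON role definition into an `AWS::IAM::Role` resource. The input schema here is illustrative, not the repository's actual format, which is described in its `README.md`:

```python
import json

def to_cfn_resource(role_def: dict) -> dict:
    """Transform a JSON role definition into a CloudFormation IAM resource fragment."""
    # Derive a logical ID from the role name (CloudFormation logical IDs are alphanumeric).
    logical_id = role_def["RoleName"].replace("-", "").replace("_", "")
    return {
        logical_id: {
            "Type": "AWS::IAM::Role",
            "Properties": {
                "RoleName": role_def["RoleName"],
                "AssumeRolePolicyDocument": role_def["TrustPolicy"],
                "Policies": [
                    {"PolicyName": p["PolicyName"], "PolicyDocument": p["PolicyDocument"]}
                    for p in role_def.get("Policies", [])
                ],
            },
        }
    }

# Hypothetical developer-authored role definition.
role = {
    "RoleName": "app-read-role",
    "TrustPolicy": {"Version": "2012-10-17", "Statement": [
        {"Effect": "Allow", "Principal": {"Service": "ec2.amazonaws.com"},
         "Action": "sts:AssumeRole"}]},
    "Policies": [{"PolicyName": "s3-read",
                  "PolicyDocument": {"Version": "2012-10-17", "Statement": [
                      {"Effect": "Allow", "Action": "s3:GetObject", "Resource": "*"}]}}],
}
print(json.dumps(to_cfn_resource(role), indent=2))
```

In the pattern, this transform runs inside a Lambda-backed CloudFormation macro during deployment, so developers never edit the template directly.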

The deployment pipeline (`RolesPipeline`) has source, validation, and deployment stages. During the source stage, your development team pushes the JSON files that contain the definition of the IAM roles and policies to an AWS CodeCommit repository. AWS CodeBuild then runs a script to validate those files and copies them to an Amazon Simple Storage Service (Amazon S3) bucket. Because your development teams don’t have direct access to the AWS CloudFormation template file stored in a separate S3 bucket, they must follow the JSON file creation and validation process.

Finally, during the deployment phase, AWS CodeDeploy uses an AWS CloudFormation stack to update or delete the IAM policies and roles in an account.

**Important**  
This pattern’s workflow is a proof of concept (POC) and we recommend that you only use it in a test environment. If you want to use this pattern’s approach in a production environment, see [Security best practices in IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html) in the IAM documentation and make the required changes to your IAM roles and AWS services.

## Prerequisites and limitations
<a name="automatically-validate-and-deploy-iam-policies-and-roles-in-an-aws-account-by-using-codepipeline-iam-access-analyzer-and-aws-cloudformation-macros-prereqs"></a>

**Prerequisites**
+ An active AWS account.
+ A new or existing S3 bucket for the `RolesPipeline` pipeline. Make sure that the access credentials you’re using have permissions to upload objects to this bucket.
+ AWS Command Line Interface (AWS CLI), installed and configured. For more information about this, see [Installing, updating, and uninstalling the AWS CLI ](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html) in the AWS CLI documentation. 
+ AWS Serverless Application Model (AWS SAM) CLI, installed and configured. For more information about this, see [Installing the AWS SAM CLI](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-cli-install.html) in the AWS SAM documentation. 
+ Python 3, installed on your local machine. For more information about this, see the [Python documentation](https://www.python.org/).
+ A Git client, installed and configured.
+ The GitHub `IAM roles pipeline` repository, cloned to your local machine. 
+ Existing JSON-formatted IAM policies and roles. For more information about this, see the [ReadMe](https://github.com/aws-samples/iam-roles-pipeline/blob/main/README.md) file in the GitHub `IAM roles pipeline` repository.
+ Your developer team must not have permissions to edit this solution’s AWS CodePipeline, CodeBuild, and CodeDeploy resources.

**Limitations**
+ This pattern’s workflow is a proof of concept (POC) and we recommend that you only use it in a test environment. If you want to use this pattern’s approach in a production environment, see [Security best practices in IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html) in the IAM documentation and make the required changes to your IAM roles and AWS services.

## Architecture
<a name="automatically-validate-and-deploy-iam-policies-and-roles-in-an-aws-account-by-using-codepipeline-iam-access-analyzer-and-aws-cloudformation-macros-architecture"></a>

The following diagram shows you how to automatically validate and deploy IAM roles and policies to an account by using CodePipeline, IAM Access Analyzer, and AWS CloudFormation macros.

![\[Steps for validating and deploying IAM policies and roles in an AWS account.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/df1add4d-f211-43e3-8976-5314da75f627/images/832bebaf-27a0-4949-9c30-99fc4c9982b8.png)


The diagram shows the following workflow:

1. A developer writes JSON files that contain the definitions for the IAM policies and roles. The developer pushes the code to a CodeCommit repository and CodePipeline then initiates the `RolesPipeline` pipeline.

1. CodeBuild validates the JSON files by using IAM Access Analyzer. If there are any security or error-related findings, the deployment process is stopped.

1. If there are no security or error-related findings, the JSON files are sent to the `RolesBucket` S3 bucket.

1. An AWS CloudFormation macro implemented as an AWS Lambda function then reads the JSON files from the `RolesBucket` bucket and transforms them into AWS CloudFormation IAM resource types.

1. A predefined AWS CloudFormation stack installs, updates, or deletes the IAM policies and roles in the account. 
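The gate in step 2 can be sketched as follows. The findings are shaped like the IAM Access Analyzer `ValidatePolicy` response; treating the `ERROR` and `SECURITY_WARNING` finding types as blocking is an assumption about this pattern's validation script, and the `issueCode` values shown are examples only:

```python
# Finding types that should fail the build; WARNING and SUGGESTION pass.
BLOCKING = {"ERROR", "SECURITY_WARNING"}

def should_stop_pipeline(findings: list) -> bool:
    """Return True if any finding is severe enough to halt the deployment."""
    return any(f["findingType"] in BLOCKING for f in findings)

findings = [
    {"findingType": "SUGGESTION", "issueCode": "EMPTY_SID"},
    {"findingType": "SECURITY_WARNING", "issueCode": "PASS_ROLE_WITH_STAR_IN_RESOURCE"},
]
print(should_stop_pipeline(findings))  # True
```

In CodeBuild, returning a nonzero exit code when this function returns `True` is enough to stop the pipeline at the validation stage.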

**Automation and scale**

AWS CloudFormation templates that automatically deploy this pattern are provided in the GitHub [IAM roles pipeline](https://github.com/aws-samples/iam-roles-pipeline) repository.

## Tools
<a name="automatically-validate-and-deploy-iam-policies-and-roles-in-an-aws-account-by-using-codepipeline-iam-access-analyzer-and-aws-cloudformation-macros-tools"></a>
+ [AWS Command Line Interface (AWS CLI)](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) is an open-source tool that helps you interact with AWS services through commands in your command-line shell.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.
+ [IAM Access Analyzer](https://docs.aws.amazon.com/IAM/latest/UserGuide/what-is-access-analyzer.html) helps you identify the resources in your organization and accounts, such as S3 buckets or IAM roles, that are shared with an external entity. This helps you to identify unintended access to your resources and data.
+ [AWS Serverless Application Model (AWS SAM)](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/what-is-sam.html) is an open-source framework that helps you build serverless applications in the AWS Cloud.

**Code**

The source code and templates for this pattern are available in the GitHub [IAM roles pipeline](https://github.com/aws-samples/iam-roles-pipeline) repository.

## Epics
<a name="automatically-validate-and-deploy-iam-policies-and-roles-in-an-aws-account-by-using-codepipeline-iam-access-analyzer-and-aws-cloudformation-macros-epics"></a>

### Clone the repository
<a name="clone-the-repository"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
|  Clone the sample repository. | Clone the GitHub [IAM roles pipeline](https://github.com/aws-samples/iam-roles-pipeline) repository to your local machine. | App developer, General AWS | 

### Deploy the RolesPipeline pipeline
<a name="deploy-the-rolespipeline-pipeline"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Deploy the pipeline. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automatically-validate-and-deploy-iam-policies-and-roles-in-an-aws-account-by-using-codepipeline-iam-access-analyzer-and-aws-cloudformation-macros.html) | App developer, General AWS | 
| Clone the pipeline’s repository. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automatically-validate-and-deploy-iam-policies-and-roles-in-an-aws-account-by-using-codepipeline-iam-access-analyzer-and-aws-cloudformation-macros.html) | App developer, General AWS | 

### Test the RolesPipeline pipeline
<a name="test-the-rolespipeline-pipeline"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Test the RolesPipeline pipeline with valid IAM policies and roles. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automatically-validate-and-deploy-iam-policies-and-roles-in-an-aws-account-by-using-codepipeline-iam-access-analyzer-and-aws-cloudformation-macros.html) | App developer, General AWS | 
| Test the RolesPipeline pipeline with invalid IAM policies and roles. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automatically-validate-and-deploy-iam-policies-and-roles-in-an-aws-account-by-using-codepipeline-iam-access-analyzer-and-aws-cloudformation-macros.html) | App developer, General AWS | 

### Clean up your resources
<a name="clean-up-your-resources"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Prepare for cleanup. | Empty the S3 buckets and then run the `destroy` command. | App developer, General AWS | 
| Delete the RolesStack stack. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automatically-validate-and-deploy-iam-policies-and-roles-in-an-aws-account-by-using-codepipeline-iam-access-analyzer-and-aws-cloudformation-macros.html) | App developer, General AWS | 
| Delete the RolesPipeline stack. | To delete the `RolesPipeline` AWS CloudFormation stack, follow the instructions in the [README](https://github.com/aws-samples/iam-roles-pipeline/blob/main/README.md) file of the GitHub `IAM roles pipeline` repository. | App developer, General AWS | 

## Related resources
<a name="automatically-validate-and-deploy-iam-policies-and-roles-in-an-aws-account-by-using-codepipeline-iam-access-analyzer-and-aws-cloudformation-macros-resources"></a>
+ [IAM Access Analyzer - Policy validation](https://aws.amazon.com/blogs/aws/iam-access-analyzer-update-policy-validation/) (AWS News Blog)
+ [Using AWS CloudFormation macros to perform custom processing on templates](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-macros.html) (AWS CloudFormation documentation)
+ [Building Lambda functions with Python](https://docs.aws.amazon.com/lambda/latest/dg/lambda-python.html) (AWS Lambda documentation)

# Bidirectionally integrate AWS Security Hub CSPM with Jira software
<a name="bidirectionally-integrate-aws-security-hub-with-jira-software"></a>

*Joaquin Rinaudo, Amazon Web Services*

## Summary
<a name="bidirectionally-integrate-aws-security-hub-with-jira-software-summary"></a>

This solution supports a bidirectional integration between AWS Security Hub CSPM and Jira. Using this solution, you can automatically and manually create and update Jira tickets from Security Hub CSPM findings. Security teams can use this integration to notify developer teams of severe security findings that require action.

The solution allows you to:
+ Select which Security Hub CSPM controls automatically create or update tickets in Jira.
+ In the Security Hub CSPM console, use Security Hub CSPM custom actions to manually escalate tickets in Jira.
+ Automatically assign tickets in Jira based on an AWS account tag defined in AWS Organizations. If the tag is not defined, a default assignee is used.
+ Automatically suppress Security Hub CSPM findings that are marked as false positive or accepted risk in Jira.
+ Automatically close a Jira ticket when its related finding is archived in Security Hub CSPM.
+ Reopen Jira tickets when Security Hub CSPM findings reoccur.

**Jira workflow**

The solution uses a custom Jira workflow that allows developers to manage and document risks. As the issue moves through the workflow, bidirectional integration ensures that the status of the Jira ticket and the Security Hub CSPM finding stays synchronized across both services. This workflow is a derivative of the *SecDevOps Risk Workflow* by Dinis Cruz, licensed under [Apache License version 2.0](https://www.apache.org/licenses/LICENSE-2.0). We recommend adding a Jira workflow condition so that only members of your security team can change the ticket status.

![\[A workflow diagram of a Jira issue. You can fix the issue, accept the risk, or mark it as a false positive.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/206b9907-c2a3-4142-90bf-d4eabee534c0/images/10b08232-437e-4b0a-b6a5-b5ef4d415ac5.png)


For an example of a Jira ticket automatically generated by this solution, see the [Additional information](#bidirectionally-integrate-aws-security-hub-with-jira-software-additional) section of this pattern.

## Prerequisites and limitations
<a name="bidirectionally-integrate-aws-security-hub-with-jira-software-prereqs"></a>

**Prerequisites**
+ If you want to deploy this solution across a multi-account AWS environment:
  + Your multi-account environment is active and managed by AWS Organizations.
  + Security Hub CSPM is enabled on your AWS accounts.
  + In AWS Organizations, you have designated a Security Hub CSPM administrator account.
  + You have a cross-account AWS Identity and Access Management (IAM) role that has `AWSOrganizationsReadOnlyAccess` permissions to the AWS Organizations management account.
  + (Optional) You have tagged your AWS accounts with `SecurityContactID`. This tag is used to assign Jira tickets to the defined security contacts.
+ If you want to deploy this solution within a single AWS account:
  + You have an active AWS account.
  + Security Hub CSPM is enabled on your AWS account.
+ A Jira Data Center instance
**Important**  
This solution supports Jira Cloud. However, Jira Cloud does not support importing XML workflows, so you need to re-create the workflow manually in Jira. You can find images of the transitions and statuses in the GitHub repository.
+ Administrator permissions in Jira
+ One of the following Jira tokens:
  + For Jira Enterprise, a personal access token (PAT). For more information, see [Using Personal Access Tokens](https://confluence.atlassian.com/enterprise/using-personal-access-tokens-1026032365.html) (Atlassian support).
  + For Jira Cloud, a Jira API token. For more information, see [Manage API tokens](https://support.atlassian.com/atlassian-account/docs/manage-api-tokens-for-your-atlassian-account/) (Atlassian support).

## Architecture
<a name="bidirectionally-integrate-aws-security-hub-with-jira-software-architecture"></a>

This section illustrates the architecture of the solution in two scenarios: when the developer fixes the issue, and when the developer and security engineer decide to accept the risk.

*Scenario 1: Developer addresses the issue*

1. Security Hub CSPM generates a finding against a specified security control, such as those in the [AWS Foundational Security Best Practices standard](https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-standards-fsbp.html).

1. An Amazon CloudWatch event associated with the finding and the `CreateJIRA` action initiates an AWS Lambda function.

1. The Lambda function uses its configuration file and the finding's `GeneratorId` field to evaluate whether it should escalate the finding.

1. If the Lambda function determines that the finding should be escalated, it obtains the `SecurityContactID` account tag from AWS Organizations in the AWS management account. This ID is associated with the developer and is used as the assignee ID for the Jira ticket.

1. The Lambda function uses the credentials stored in AWS Secrets Manager to create a ticket in Jira. Jira notifies the developer.

1. The developer addresses the underlying security finding and, in Jira, changes the status of the ticket to `TEST FIX`.

1. Security Hub CSPM updates the finding as `ARCHIVED`, and a new event is generated. This event causes the Lambda function to automatically close the Jira ticket.

![\[An architecture diagram showing Jira and Security Hub integration when a developer fixes an issue.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/206b9907-c2a3-4142-90bf-d4eabee534c0/images/18d9a6ce-dd38-4d36-a95d-270fce776c30.png)
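Steps 3 and 4 above can be sketched as a small decision function. The configuration shape below mirrors the per-Region control list shown later in the Epics section, but the exact merge behavior (Region list versus default list) in the actual solution may differ, so treat this as an illustration.

```python
def should_escalate(finding: dict, config: dict, region: str) -> bool:
    """Return True when the finding's GeneratorId is selected for automation.

    Uses the Region-specific control list when one exists, otherwise the
    default list. The real solution's configuration handling may differ.
    """
    generator_id = finding.get("GeneratorId", "")
    controls = config.get("Controls", {})
    selected = controls.get(region, controls.get("default", []))
    return generator_id in selected

config = {"Controls": {"default": [
    "aws-foundational-security-best-practices/v/1.0.0/S3.1",
]}}
finding = {"GeneratorId": "aws-foundational-security-best-practices/v/1.0.0/S3.1"}
print(should_escalate(finding, config, "eu-west-1"))  # True
```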


*Scenario 2: Developer decides to accept the risk*

1. Security Hub CSPM generates a finding against a specified security control, such as those in the [AWS Foundational Security Best Practices standard](https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-standards-fsbp.html).

1. A CloudWatch event associated with the finding and the `CreateJIRA` action initiates a Lambda function.

1. The Lambda function uses its configuration file and the finding's `GeneratorId` field to evaluate whether it should escalate the finding.

1. If the Lambda function determines that the finding should be escalated, it obtains the `SecurityContactID` account tag from AWS Organizations in the AWS management account. This ID is associated with the developer and is used as the assignee ID for the Jira ticket.

1. The Lambda function uses the credentials stored in Secrets Manager to create a ticket in Jira. Jira notifies the developer.

1. The developer decides to accept the risk and, in Jira, changes the status of the ticket to `AWAITING RISK ACCEPTANCE`.

1. The security engineer reviews the request and finds the business justification appropriate. The security engineer changes the status of the Jira ticket to `ACCEPTED RISK`. This closes the Jira ticket.

1. A CloudWatch daily event initiates the refresh Lambda function, which identifies closed Jira tickets and updates their related Security Hub CSPM findings as `SUPPRESSED`.

![\[An architecture diagram showing Jira and Security Hub integration when a developer accepts the risk of a finding.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/206b9907-c2a3-4142-90bf-d4eabee534c0/images/d5a2f946-9c79-4661-96c1-74c813cbf406.png)
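The daily refresh in step 8 can be sketched as a mapping from a closed Jira ticket's final status to the Security Hub CSPM workflow update. The status names follow the workflow described earlier, but the exact transition names and statuses handled by the actual refresh Lambda function may differ.

```python
# Hypothetical mapping used by a refresh job: Jira tickets closed with a
# documented risk decision suppress the related Security Hub CSPM finding.
SUPPRESSING_STATUSES = {"ACCEPTED RISK", "MARK AS FALSE POSITIVE"}

def workflow_update_for(jira_status):
    """Return the Security Hub workflow status to set, or None for no change."""
    if jira_status.upper() in SUPPRESSING_STATUSES:
        return "SUPPRESSED"
    return None

print(workflow_update_for("Accepted Risk"))  # SUPPRESSED
```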


## Tools
<a name="bidirectionally-integrate-aws-security-hub-with-jira-software-tools"></a>

**AWS services**
+ [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) helps you set up AWS resources, provision them quickly and consistently, and manage them throughout their lifecycle across AWS accounts and Regions.
+ [Amazon CloudWatch Events](https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/WhatIsCloudWatchEvents.html) helps you monitor system events for your AWS resources by using rules to match events and route them to functions or streams.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a compute service that helps you run code without needing to provision or manage servers. It runs your code only when needed and scales automatically, so you pay only for the compute time that you use.
+ [AWS Organizations](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_introduction.html) is an account management service that helps you consolidate multiple AWS accounts into an organization that you create and centrally manage.
+ [AWS Secrets Manager](https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html) helps you replace hardcoded credentials in your code, including passwords, with an API call to Secrets Manager to retrieve the secret programmatically.
+ [AWS Security Hub CSPM](https://docs.aws.amazon.com/securityhub/latest/userguide/what-is-securityhub.html) provides a comprehensive view of your security state in AWS. It also helps you check your AWS environment against security industry standards and best practices.

**Code repository**

The code for this pattern is available on GitHub, in the [aws-securityhub-jira-software-integration](https://github.com/aws-samples/aws-securityhub-jira-software-integration/) repository. It includes the sample code and Jira workflow for this solution.

## Epics
<a name="bidirectionally-integrate-aws-security-hub-with-jira-software-epics"></a>

### Configure Jira
<a name="configure-jira"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Import the workflow. | As an administrator in Jira, import the `issue-workflow.xml` file to your Jira Data Center instance. If you use Jira Cloud, you need to create the workflow according to the `assets/jira-cloud-transitions.png` and `assets/jira-cloud-status.png` files. Files can be found in the [aws-securityhub-jira-software-integration](https://github.com/aws-samples/aws-securityhub-jira-software-integration/) repository in GitHub. For instructions, see [Using XML to create a workflow](https://confluence.atlassian.com/adminjiraserver/using-xml-to-create-a-workflow-938847525.html) (Jira documentation). | Jira administrator | 
| Activate and assign the workflow. | Workflows are inactive until you assign them to a workflow scheme. You then assign the workflow scheme to a project. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/bidirectionally-integrate-aws-security-hub-with-jira-software.html) | Jira administrator | 

### Set up the solution parameters
<a name="set-up-the-solution-parameters"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Configure the solution parameters. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/bidirectionally-integrate-aws-security-hub-with-jira-software.html) | AWS systems administrator | 
| Identify the findings you want to automate. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/bidirectionally-integrate-aws-security-hub-with-jira-software.html) |  | 
| Add the findings to the configuration file. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/bidirectionally-integrate-aws-security-hub-with-jira-software.html)The following code example shows automating the `aws-foundational-security-best-practices/v/1.0.0/SNS.1` and `aws-foundational-security-best-practices/v/1.0.0/S3.1` findings.<pre>{<br />    "Controls" : {<br />        "eu-west-1": [<br />            "arn:aws:securityhub:::ruleset/cis-aws-foundations-benchmark/v/1.2.0/rule/1.22"<br />        ],<br />        "default": [<br />            "aws-foundational-security-best-practices/v/1.0.0/SNS.1",<br />            "aws-foundational-security-best-practices/v/1.0.0/S3.1"<br />        ]<br />    }<br />}</pre>You can choose to automate different findings for each AWS Region. To help prevent duplicated findings, a good practice is to select a single Region in which to automate creation of controls related to IAM. | AWS systems administrator | 

### Deploy the integration
<a name="deploy-the-integration"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Deploy the integration. | In a command line terminal, enter the following command:<pre>./deploy.sh prod</pre> | AWS systems administrator | 
| Upload Jira credentials to Secrets Manager. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/bidirectionally-integrate-aws-security-hub-with-jira-software.html) | AWS systems administrator | 
| Create the Security Hub CSPM custom action. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/bidirectionally-integrate-aws-security-hub-with-jira-software.html) | AWS systems administrator | 

## Related resources
<a name="bidirectionally-integrate-aws-security-hub-with-jira-software-resources"></a>
+ [AWS Service Management Connector for Jira Service Management](https://docs.aws.amazon.com/servicecatalog/latest/adminguide/integrations-jiraservicedesk.html)
+ [AWS Foundational Security Best Practices standard](https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-standards-fsbp.html)

## Additional information
<a name="bidirectionally-integrate-aws-security-hub-with-jira-software-additional"></a>

**Example of a Jira ticket**

When a specified Security Hub CSPM finding occurs, this solution automatically creates a Jira ticket. The ticket includes the following information:
+ **Title** – The title identifies the security issue in the following format:

  ```
  AWS Security Issue :: <AWS account ID> :: <Security Hub finding title>
  ```
+ **Description** – The description section of the ticket describes the security control associated with the finding, includes a link to the finding in the Security Hub CSPM console, and provides a short description of how to handle the security issue in the Jira workflow.

The following is an example of an automatically generated Jira ticket.


| | |
| --- | --- |
| **Title** | AWS Security Issue :: 012345678912 :: Lambda.1 Lambda function policies should prohibit public access. | 
| **Description** | **What is the problem?** We detected a security finding within the AWS account 012345678912 that you are responsible for. This control checks whether the AWS Lambda function policy attached to the Lambda resource prohibits public access. If the Lambda function policy allows public access, the control fails. <Link to Security Hub CSPM finding> **What do I need to do with the ticket?** Access the account and verify the configuration. Acknowledge that you are working on the ticket by moving it to "Allocated for Fix". Once fixed, move it to "Test Fix" so that security validates that the issue is addressed. If you think the risk should be accepted, move it to "Awaiting Risk Acceptance". This will require review by a security engineer. If you think it is a false positive, transition it to "Mark as False Positive". It will be reviewed by a security engineer and reopened or closed accordingly. | 
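As a minimal sketch, the documented title format can be composed as follows (the function name is hypothetical; the solution's actual implementation may build the string differently):

```python
def ticket_title(account_id: str, finding_title: str) -> str:
    # Compose the Jira summary in the documented format:
    # AWS Security Issue :: <AWS account ID> :: <Security Hub finding title>
    return f"AWS Security Issue :: {account_id} :: {finding_title}"

print(ticket_title(
    "012345678912",
    "Lambda.1 Lambda function policies should prohibit public access.",
))
```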

 

# Build a pipeline for hardened container images using EC2 Image Builder and Terraform
<a name="build-a-pipeline-for-hardened-container-images-using-ec2-image-builder-and-terraform"></a>

*Mike Saintcross and Andrew Ranes, Amazon Web Services*

## Summary
<a name="build-a-pipeline-for-hardened-container-images-using-ec2-image-builder-and-terraform-summary"></a>

This pattern builds an [EC2 Image Builder pipeline](https://docs.aws.amazon.com/imagebuilder/latest/userguide/start-build-image-pipeline.html) that produces a hardened [Amazon Linux 2](https://aws.amazon.com/amazon-linux-2/) base container image. Terraform is used as an infrastructure as code (IaC) tool to configure and provision the infrastructure that is used to create hardened container images. The recipe helps you deploy a Docker-based Amazon Linux 2 container image that has been hardened according to Red Hat Enterprise Linux (RHEL) 7 STIG Version 3 Release 7 ‒ Medium. (See [STIG-Build-Linux-Medium version 2022.2.1](https://docs.aws.amazon.com/imagebuilder/latest/userguide/toe-stig.html#linux-os-stig) in the *Linux STIG components* section of the EC2 Image Builder documentation.) This is referred to as a *golden* container image.

The build includes two [Amazon EventBridge rules](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-rules.html). One rule starts the container image pipeline when the [Amazon Inspector finding](https://docs.aws.amazon.com/inspector/latest/user/findings-managing.html) is **High** or **Critical** so that non-secure images are replaced. This rule requires both Amazon Inspector and Amazon Elastic Container Registry (Amazon ECR) [enhanced scanning](https://docs.aws.amazon.com/AmazonECR/latest/userguide/image-scanning-enhanced.html) to be enabled. The other rule sends notifications to an Amazon Simple Queue Service (Amazon SQS) [queue](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-queue-types.html) after a successful image push to the Amazon ECR repository, to help you use the latest container images.
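As an illustration of the first rule, an EventBridge event pattern for high-severity Amazon Inspector findings might look like the following. It is expressed here as a Python dictionary together with a toy matcher; the field names follow the Amazon Inspector event format, but the exact pattern used by the repository may differ.

```python
# Illustrative EventBridge event pattern: match Amazon Inspector findings
# with High or Critical severity. Treat this as a sketch, not the
# repository's exact rule.
inspector_event_pattern = {
    "source": ["aws.inspector2"],
    "detail-type": ["Inspector2 Finding"],
    "detail": {"severity": ["HIGH", "CRITICAL"]},
}

def matches(pattern: dict, event: dict) -> bool:
    """Minimal matcher for the subset of pattern syntax used above:
    a nested dict recurses, and a list means 'value must be one of'."""
    for key, expected in pattern.items():
        value = event.get(key)
        if isinstance(expected, dict):
            if not isinstance(value, dict) or not matches(expected, value):
                return False
        elif value not in expected:
            return False
    return True

event = {
    "source": "aws.inspector2",
    "detail-type": "Inspector2 Finding",
    "detail": {"severity": "HIGH"},
}
print(matches(inspector_event_pattern, event))  # True
```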

**Note**  
Amazon Linux 2 is nearing end of support. For more information, see the [Amazon Linux 2 FAQs](http://aws.amazon.com/amazon-linux-2/faqs/).

## Prerequisites and limitations
<a name="build-a-pipeline-for-hardened-container-images-using-ec2-image-builder-and-terraform-prereqs"></a>

**Prerequisites**
+ An [AWS account](https://aws.amazon.com/premiumsupport/knowledge-center/create-and-activate-aws-account/) that you can deploy the infrastructure in.
+ [AWS Command Line Interface (AWS CLI) installed](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) for setting your AWS credentials for local deployment.
+ Terraform [downloaded](https://developer.hashicorp.com/terraform/downloads) and set up by following the [instructions](https://developer.hashicorp.com/terraform/tutorials/aws-get-started) in the Terraform documentation.
+ [Git](https://git-scm.com/) (if you’re provisioning from a local machine).
+ A [role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html) within the AWS account that you can use to create AWS resources.
+ All variables defined in the [.tfvars](https://developer.hashicorp.com/terraform/tutorials/configuration-language/variables) file. Alternatively, you can define all variables when you apply the Terraform configuration.

**Limitations**
+ This solution creates an Amazon Virtual Private Cloud (Amazon VPC) infrastructure that includes a [NAT gateway](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html) and an [internet gateway](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Internet_Gateway.html) for internet connectivity from its private subnet. You cannot use [VPC endpoints](https://docs.aws.amazon.com/whitepapers/latest/aws-privatelink/what-are-vpc-endpoints.html), because the [bootstrap process by AWS Task Orchestrator and Executor (AWSTOE)](https://aws.amazon.com/premiumsupport/knowledge-center/image-builder-pipeline-execution-error/) installs AWS CLI version 2 from the internet.

**Product versions**
+ Amazon Linux 2
+ AWS CLI version 1.1 or later

## Architecture
<a name="build-a-pipeline-for-hardened-container-images-using-ec2-image-builder-and-terraform-architecture"></a>

**Target technology stack**

This pattern creates 43 resources, including:
+ Two Amazon Simple Storage Service (Amazon S3) [buckets](https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingBucket.html): one for the pipeline component files and one for server access and Amazon VPC flow logs
+ An [Amazon ECR repository](https://docs.aws.amazon.com/AmazonECR/latest/userguide/repository-create.html)
+ A virtual private cloud (VPC) that contains a public subnet, a private subnet, route tables, a NAT gateway, and an internet gateway
+ An EC2 Image Builder pipeline, recipe, and components
+ A container image
+ An AWS Key Management Service (AWS KMS) [key](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#kms_keys) for image encryption
+ An SQS queue
+ Three roles: one to run the EC2 Image Builder pipeline, one instance profile for EC2 Image Builder, and one for EventBridge rules
+ Two EventBridge rules

**Terraform module structure**

For the source code, see the GitHub repository [Terraform EC2 Image Builder Container Hardening Pipeline](https://github.com/aws-samples/terraform-ec2-image-builder-container-hardening-pipeline).

```
├── components.tf
├── config.tf
├── dist-config.tf
├── files
│   └──assumption-policy.json
├── hardening-pipeline.tfvars
├── image.tf
├── infr-config.tf
├── infra-network-config.tf
├── kms-key.tf
├── main.tf
├── outputs.tf
├── pipeline.tf
├── recipes.tf
├── roles.tf
├── sec-groups.tf
├── trigger-build.tf
└── variables.tf
```

**Module details**
+ `components.tf` contains an Amazon S3 upload resource to upload the contents of the `/files` directory. You can also add custom component YAML files here.
+ `/files` contains the `.yml` files that define the components used in `components.tf`.
+ `image.tf` contains the definitions for the base image operating system. This is where you can modify the definitions for a different base image pipeline.
+ `infr-config.tf` and `dist-config.tf` contain the resources for the minimum AWS infrastructure needed to spin up and distribute the image.
+ `infra-network-config.tf` contains the minimum VPC infrastructure to deploy the container image into.
+ `hardening-pipeline.tfvars` contains the Terraform variables to be used at apply time.
+ `pipeline.tf` creates and manages an EC2 Image Builder pipeline in Terraform.
+ `recipes.tf` is where you can specify different mixtures of components to create container recipes.
+ `roles.tf` contains the AWS Identity and Access Management (IAM) policy definitions for the Amazon Elastic Compute Cloud (Amazon EC2) instance profile and pipeline deployment role.
+ `trigger-build.tf` contains the EventBridge rules and SQS queue resources.

**Target architecture**

![\[Architecture and workflow for building a pipeline for hardened container images\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/4b16bdfa-4f34-41e9-a69a-d023253c8585/images/23443eca-132f-46ac-98bd-32a9e9359a77.png)


The diagram illustrates the following workflow:

1. EC2 Image Builder builds a container image by using the defined recipe, which installs operating system updates and applies the RHEL Medium STIG to the Amazon Linux 2 base image.

1. The hardened image is published to a private Amazon ECR registry, and an EventBridge rule sends a message to an SQS queue when the image has been published successfully.

1. If Amazon Inspector is configured for enhanced scanning, it scans the Amazon ECR registry.

1. If Amazon Inspector generates a **Critical** or **High** severity finding for the image, an EventBridge rule triggers the EC2 Image Builder pipeline to run again and publish a newly hardened image.

**Automation and scale**
+ This pattern describes how to provision the infrastructure and build the pipeline on your computer. However, it is intended to be used at scale. Instead of deploying the Terraform modules locally, you can use them in a multi-account environment, such as an [AWS Control Tower](https://docs.aws.amazon.com/controltower/latest/userguide/what-is-control-tower.html) with [Account Factory for Terraform](https://aws.amazon.com/blogs/aws/new-aws-control-tower-account-factory-for-terraform/) environment. In that case, you should use a [backend state S3 bucket](https://developer.hashicorp.com/terraform/language/settings/backends/s3) to manage Terraform state files instead of managing the configuration state locally.
+ For scaled use, deploy the solution to one central account, such as a Shared Services or Common Services account, from a Control Tower or landing zone account model, and grant consumer accounts permission to access the Amazon ECR repository and AWS KMS key. For more information about the setup, see the re:Post article [How can I allow a secondary account to push or pull images in my Amazon ECR image repository?](https://repost.aws/knowledge-center/secondary-account-access-ecr) For example, in an [account vending machine](https://www.hashicorp.com/resources/terraform-landing-zones-for-self-service-multi-aws-at-eventbrite) or Account Factory for Terraform, add permissions to each account baseline or account customization baseline to provide access to that Amazon ECR repository and encryption key.
+ After the container image pipeline is deployed, you can modify it by using EC2 Image Builder features such as [components](https://docs.aws.amazon.com/imagebuilder/latest/userguide/manage-components.html), which help you package more components into the Docker build.
+ The AWS KMS key that is used to encrypt the container image should be shared across the accounts that the image is intended to be used in.
+ You can add support for other images by duplicating the entire Terraform module and modifying the following `recipes.tf` attributes:
  + Modify `parent_image = "amazonlinux:latest"` to another image type.
  + Modify `repository_name` to point to an existing Amazon ECR repository. This creates another pipeline that deploys a different parent image type to your existing Amazon ECR repository.
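The two modifications above could look roughly like the following fragment. This is illustrative only: the resource and repository names are assumptions, `ubuntu:latest` is just an example of another image type, and required arguments are omitted; see `recipes.tf` in the repository for the actual definition.

```
# Illustrative fragment only; see recipes.tf in the repository for the
# actual container recipe resource and its required arguments.
resource "aws_imagebuilder_container_recipe" "example" {
  # Change the parent image to build from a different base image type.
  parent_image = "ubuntu:latest"

  # Point the recipe at an existing Amazon ECR repository.
  target_repository {
    repository_name = "my-existing-repo"
    service         = "ECR"
  }

  # ... name, version, and other required arguments ...
}
```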

## Tools
<a name="build-a-pipeline-for-hardened-container-images-using-ec2-image-builder-and-terraform-tools"></a>

**Tools**
+ Terraform (IaC provisioning)
+ Git (if provisioning locally)
+ AWS CLI version 1 or version 2 (if provisioning locally)

**Code**

The code for this pattern is in the GitHub repository [Terraform EC2 Image Builder Container Hardening Pipeline](https://github.com/aws-samples/terraform-ec2-image-builder-container-hardening-pipeline). To use the sample code, follow the instructions in the next section.

## Epics
<a name="build-a-pipeline-for-hardened-container-images-using-ec2-image-builder-and-terraform-epics"></a>

### Provision the infrastructure
<a name="provision-the-infrastructure"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Set up local credentials. | Set up your AWS temporary credentials.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/build-a-pipeline-for-hardened-container-images-using-ec2-image-builder-and-terraform.html) | AWS DevOps | 
| Clone the repository. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/build-a-pipeline-for-hardened-container-images-using-ec2-image-builder-and-terraform.html) | AWS DevOps | 
| Update variables. | Update the variables in the `hardening-pipeline.tfvars` file to match your environment and your desired configuration. You must provide your own `account_id`. However, you should also modify the rest of the variables to fit your desired deployment. All variables are required.<pre>account_id     = "<DEPLOYMENT-ACCOUNT-ID>"<br />aws_region     = "us-east-1"<br />vpc_name       = "example-hardening-pipeline-vpc"<br />kms_key_alias = "image-builder-container-key"<br />ec2_iam_role_name = "example-hardening-instance-role"<br />hardening_pipeline_role_name = "example-hardening-pipeline-role"<br />aws_s3_ami_resources_bucket = "example-hardening-ami-resources-bucket-0123"<br />image_name = "example-hardening-al2-container-image"<br />ecr_name = "example-hardening-container-repo"<br />recipe_version = "1.0.0" <br />ebs_root_vol_size = 10</pre>Here’s a description of each variable:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/build-a-pipeline-for-hardened-container-images-using-ec2-image-builder-and-terraform.html) | AWS DevOps | 
| Initialize Terraform. | After you update your variable values, you can initialize the Terraform configuration directory. Initializing a configuration directory downloads and installs the AWS provider, which is defined in the configuration.<pre>terraform init</pre>You should see a message that says Terraform has been successfully initialized and identifies the version of the provider that was installed. | AWS DevOps | 
| Deploy the infrastructure and create a container image. | Use the following command to initialize, validate, and apply the Terraform modules to the environment by using the variables defined in your `.tfvars` file:<pre>terraform init && terraform validate && terraform apply -var-file *.tfvars -auto-approve</pre> | AWS DevOps | 
| Customize the container. | You can create a new version of a container recipe after EC2 Image Builder deploys the pipeline and initial recipe. You can add any of the 31+ components available within EC2 Image Builder to customize the container build. For more information, see the *Components* section of [Create a new version of a container recipe](https://docs.aws.amazon.com/imagebuilder/latest/userguide/create-container-recipes.html) in the EC2 Image Builder documentation. | AWS administrator | 

### Validate resources
<a name="validate-resources"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Validate AWS infrastructure provisioning. | After you have successfully completed your first Terraform `apply` command, if you’re provisioning locally, you should see this snippet in your local machine’s terminal:<pre>Apply complete! Resources: 43 added, 0 changed, 0 destroyed.</pre> | AWS DevOps | 
| Validate individual AWS infrastructure resources. | To validate the individual resources that were deployed, if you’re provisioning locally, you can run the following command:<pre>terraform state list</pre>This command returns a list of 43 resources. | AWS DevOps | 

### Remove resources
<a name="remove-resources"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Remove the infrastructure and container image. | When you’ve finished working with your Terraform configuration, you can run the following command to remove resources:<pre>terraform init && terraform validate && terraform destroy -var-file *.tfvars -auto-approve</pre> | AWS DevOps | 

## Troubleshooting
<a name="build-a-pipeline-for-hardened-container-images-using-ec2-image-builder-and-terraform-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| Error validating provider credentials | When you run the Terraform `apply` or `destroy` command from your local machine, you might encounter an error similar to the following:<pre>Error: configuring Terraform AWS Provider: error validating provider <br />credentials: error calling sts:GetCallerIdentity: operation error STS: <br />GetCallerIdentity, https response error StatusCode: 403, RequestID: <br />123456a9-fbc1-40ed-b8d8-513d0133ba7f, api error InvalidClientTokenId: <br />The security token included in the request is invalid.</pre>This error is caused by the expiration of the security token for the credentials used in your local machine’s configuration. To resolve the error, see [Set and view configuration settings](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html#cli-configure-files-methods) in the AWS CLI documentation. | 

## Related resources
<a name="build-a-pipeline-for-hardened-container-images-using-ec2-image-builder-and-terraform-resources"></a>
+ [Terraform EC2 Image Builder Container Hardening Pipeline](https://github.com/aws-samples/terraform-ec2-image-builder-container-hardening-pipeline) (GitHub repository)
+ [EC2 Image Builder documentation](https://docs.aws.amazon.com/imagebuilder/latest/userguide/what-is-image-builder.html)
+ [AWS Control Tower Account Factory for Terraform](https://aws.amazon.com/blogs/aws/new-aws-control-tower-account-factory-for-terraform/) (AWS blog post)
+ [Backend state S3 bucket](https://developer.hashicorp.com/terraform/language/settings/backends/s3) (Terraform documentation)
+ [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) (AWS CLI documentation)
+ [Download Terraform](https://developer.hashicorp.com/terraform/downloads)

# Centralize IAM access key management in AWS Organizations by using Terraform
<a name="centralize-iam-access-key-management-in-aws-organizations-by-using-terraform"></a>

*Aarti Rajput, Chintamani Aphale, T.V.R.L.Phani Kumar Dadi, Pratap Kumar Nanda, Pradip kumar Pandey, and Mayuri Shinde, Amazon Web Services*

## Summary
<a name="centralize-iam-access-key-management-in-aws-organizations-by-using-terraform-summary"></a>

Enforcing security rules for keys and passwords is an essential task for every organization. One important rule is to rotate AWS Identity and Access Management (IAM) keys at regular intervals. AWS access keys are generally created and configured locally whenever teams want to access AWS from the AWS Command Line Interface (AWS CLI) or from applications outside AWS. To maintain strong security across the organization, old security keys must be changed or deleted after the requirement has been met or at regular intervals. Managing key rotation across multiple accounts in an organization is time-consuming and tedious. This pattern helps you automate the rotation process by using Account Factory for Terraform (AFT) and AWS services.

The pattern provides these benefits:
+ Manages your access key IDs and secret access keys across all the accounts in your organization from a central location.
+ Automatically rotates the `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` environment variables.
+ Enforces renewal if user credentials are compromised.

The pattern uses Terraform to deploy AWS Lambda functions, Amazon EventBridge rules, and IAM roles. An EventBridge rule runs at regular intervals and calls a Lambda function that lists all user access keys based on when they were created. If a key is older than the rotation period you define (for example, 45 days), additional Lambda functions create a new access key ID and secret access key, and notify a security administrator by using Amazon Simple Notification Service (Amazon SNS) and Amazon Simple Email Service (Amazon SES). Secrets are created in AWS Secrets Manager for that user, the old secret access key is stored in Secrets Manager, and permissions for accessing the old key are configured. To ensure that the old access key is no longer used, it is disabled after an inactive period (for example, 60 days, which would be 15 days after the keys were rotated in our example). After an inactive buffer period (for example, 90 days, or 45 days after the keys were rotated in our example), the old access keys are deleted from AWS Secrets Manager. For a detailed architecture and workflow, see the [Architecture](#centralize-iam-access-key-management-in-aws-organizations-by-using-terraform-architecture) section.
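The key lifecycle described above boils down to an age-based decision. The following is a minimal Python sketch of that decision, assuming the example periods from this pattern (45, 60, and 90 days); in the deployed solution these thresholds are supplied as Lambda environment variables, and the function name here is hypothetical.

```python
from datetime import datetime, timedelta, timezone

# Illustrative thresholds matching the example periods in this pattern.
# The real solution reads these from Lambda environment variables.
ROTATION_DAYS = 45   # create a new key when the old one passes this age
INACTIVE_DAYS = 60   # disable the old key at this age
DELETION_DAYS = 90   # delete the stored old key at this age

def key_action(create_date: datetime, now: datetime) -> str:
    """Decide what to do with an access key based on its age in days."""
    age = (now - create_date).days
    if age >= DELETION_DAYS:
        return "delete"
    if age >= INACTIVE_DAYS:
        return "deactivate"
    if age >= ROTATION_DAYS:
        return "rotate"
    return "none"
```

For example, a key created 50 days ago falls between the rotation and inactive thresholds, so the sketch returns `"rotate"`.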

## Prerequisites and limitations
<a name="centralize-iam-access-key-management-in-aws-organizations-by-using-terraform-prereqs"></a>
+ A landing zone for your organization that’s built by using [AWS Control Tower](https://docs.aws.amazon.com/controltower/latest/userguide/what-is-control-tower.html) (version 3.1 or later)
+ [Account Factory for Terraform (AFT)](https://catalog.workshops.aws/control-tower/en-US/customization/aft) configured with three accounts:
  + [Organization management account](https://catalog.workshops.aws/control-tower/en-US/customization/aft/repositories/global-customizations) manages the entire organization from a central location.
  + [AFT management account](https://catalog.workshops.aws/control-tower/en-US/customization/aft/repositories/account-customizations) hosts the Terraform pipeline and deploys the infrastructure into the deployment account.
  + [Deployment account](https://catalog.workshops.aws/control-tower/en-US/customization/aft/repositories/provisioning-customizations) deploys this complete solution and manages IAM keys from a central location.
+ Terraform version 0.15.0 or later for provisioning the infrastructure in the deployment account.
+ An email address that’s configured in [Amazon Simple Email Service (Amazon SES)](https://aws.amazon.com/ses/).
+ (Recommended) To enhance security, deploy this solution inside a [private subnet](https://docs.aws.amazon.com/vpc/latest/userguide/create-subnets.html) (deployment account) within a [virtual private cloud (VPC)](https://registry.terraform.io/modules/terraform-aws-modules/vpc/aws/latest). You can provide the details of the VPC and subnet when you customize the variables (see *Customize parameters for the code pipeline* in the [Epics](#centralize-iam-access-key-management-in-aws-organizations-by-using-terraform-epics) section).

## Architecture
<a name="centralize-iam-access-key-management-in-aws-organizations-by-using-terraform-architecture"></a>

**AFT repositories**

This pattern uses Account Factory for Terraform (AFT) to create all required AWS resources and the code pipeline to deploy the resources in a deployment account. The code pipeline runs in two repositories:
+ **Global customization** contains Terraform code that will run across all accounts registered with AFT.
+ **Account customizations** contains Terraform code that will run in the deployment account.

**Resource details**

AWS CodePipeline jobs create the following resources in the deployment account:
+ Amazon EventBridge rule that runs on a schedule
+ `account-inventory` Lambda function
+ `IAM-access-key-auto-rotation` Lambda function
+ `Notification` Lambda function
+ Amazon Simple Storage Service (Amazon S3) bucket that contains an email template
+ Required IAM policy

**Architecture**

![\[Architecture for centralizing IAM access key management in AWS Organizations\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/0217275c-cb4c-4bdf-b105-ad9abfd4fded/images/844512f0-67b3-4d41-aaaa-fbd9e341c438.png)


The diagram illustrates the following:

1. An EventBridge rule calls the `account-inventory` Lambda function every 24 hours.

1. The `account-inventory` Lambda function queries AWS Organizations for a list of all AWS account IDs, account names, and account emails. 

1. The `account-inventory` Lambda function initiates an `IAM-access-key-auto-rotation` Lambda function for each AWS account and passes the metadata to it for additional processing.

1. The `IAM-access-key-auto-rotation` Lambda function uses an assumed IAM role to access the AWS account. The Lambda script runs an audit against all users and their IAM access keys in the account.

1. The IAM key rotation threshold (rotation period) is configured as an environment variable when the `IAM-access-key-auto-rotation` Lambda function is deployed. If the rotation period is modified, the `IAM-access-key-auto-rotation` Lambda function is redeployed with an updated environment variable. You can configure parameters to set the rotation period, the inactive period for old keys, and the inactive buffer after which old keys will be deleted (see *Customize parameters for the code pipeline* in the [Epics](#centralize-iam-access-key-management-in-aws-organizations-by-using-terraform-epics) section).

1. The `IAM-access-key-auto-rotation` Lambda function validates the age of the access key based on its configuration. If the IAM access key's age hasn’t exceeded the rotation period you defined, the Lambda function takes no further action.

1. If the IAM access key's age has exceeded the rotation period you defined, the `IAM-access-key-auto-rotation` Lambda function creates a new key and rotates the existing key.

1. The Lambda function saves the old key in Secrets Manager and limits permissions to the user whose access keys deviated from security standards. The Lambda function also creates a resource-based policy that allows only the specified IAM principal to access and retrieve the secret.

1. The `IAM-access-key-auto-rotation` Lambda function calls the `Notification` Lambda function.

1. The `Notification` Lambda function queries the S3 bucket for an email template and dynamically generates email messages with the relevant activity metadata.

1. The `Notification` Lambda function calls Amazon SES for further action.

1.  Amazon SES sends email to the account owner's email address with the relevant information.
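The resource-based policy that limits retrieval of the stored old key to a single IAM principal can be sketched as follows. This is an illustrative Python sketch based on the workflow description above; the exact statement in the sample repository may differ, and the action and ARN shown are assumptions.

```python
import json

def build_secret_resource_policy(user_arn: str) -> str:
    """Return a resource-based policy document that allows only the
    given IAM principal to retrieve the secret holding the old key.
    Illustrative sketch; the statement shape is an assumption."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": user_arn},
            "Action": "secretsmanager:GetSecretValue",
            "Resource": "*",  # scoped by attachment to one secret
        }],
    }
    return json.dumps(policy)
```

A policy string built this way would be attached to the secret with the Secrets Manager `PutResourcePolicy` API.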

## Tools
<a name="centralize-iam-access-key-management-in-aws-organizations-by-using-terraform-tools"></a>

**AWS services**
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them. This pattern requires IAM roles and permissions.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a compute service that helps you run code without needing to provision or manage servers. It runs your code only when needed and scales automatically, so you pay only for the compute time that you use.
+ [AWS Secrets Manager](https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html) helps you replace hardcoded credentials in your code, including passwords, with an API call to Secrets Manager to retrieve the secret programmatically.
+ [Amazon Simple Email Service (Amazon SES)](https://docs.aws.amazon.com/ses/latest/dg/Welcome.html) helps you send and receive emails by using your own email addresses and domains.

**Other tools**
+ [Terraform](https://www.terraform.io/) is an infrastructure as code (IaC) tool from HashiCorp that helps you create and manage cloud and on-premises resources.

**Code repository**

The instructions and code for this pattern are available in the GitHub [IAM access key rotation](https://github.com/aws-samples/centralized-iam-key-management-aws-organizations-terraform.git) repository. You can deploy the code in the AWS Control Tower central deployment account to manage key rotation from a central location.

## Best practices
<a name="centralize-iam-access-key-management-in-aws-organizations-by-using-terraform-best-practices"></a>
+ For IAM, see [security best practices](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html) in the IAM documentation.
+ For key rotation, see [guidelines for updating access keys](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#rotate-credentials) in the IAM documentation.

## Epics
<a name="centralize-iam-access-key-management-in-aws-organizations-by-using-terraform-epics"></a>

### Set up source files
<a name="set-up-source-files"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clone the repository. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/centralize-iam-access-key-management-in-aws-organizations-by-using-terraform.html) | DevOps engineer | 

### Configure accounts
<a name="configure-accounts"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Configure the bootstrapping account. | As part of the [AFT bootstrapping](https://catalog.workshops.aws/control-tower/en-US/customization/aft/deploy) process, you should have a folder called `aft-bootstrap` on your local machine.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/centralize-iam-access-key-management-in-aws-organizations-by-using-terraform.html) | DevOps engineer | 
| Configure global customizations. | As part of the [AFT folder](https://catalog.workshops.aws/control-tower/en-US/customization/aft/repositories/global-customizations) setup, you should have a folder called `aft-global-customizations` on your local machine.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/centralize-iam-access-key-management-in-aws-organizations-by-using-terraform.html) | DevOps engineer | 
| Configure account customizations. | As part of the [AFT folder setup](https://catalog.workshops.aws/control-tower/en-US/customization/aft/repositories/account-customizations), you should have a folder called `aft-account-customizations` on your local machine.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/centralize-iam-access-key-management-in-aws-organizations-by-using-terraform.html) | DevOps engineer | 

### Customize parameters for the code pipeline
<a name="customize-parameters-for-the-code-pipeline"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Customize non-Terraform code pipeline parameters for all accounts. | Create a file called `input.auto.tfvars` in the `aft-global-customizations/terraform/` folder and provide the required input data. See [the file in the GitHub repository](https://github.com/aws-samples/centralized-iam-key-management-aws-organizations-terraform/blob/main/global-account-customization/input.auto.tfvars) for default values. | DevOps engineer | 
| Customize code pipeline parameters for the deployment account. | Create a file called `input.auto.tfvars` in the `aft-account-customizations/<AccountName>/terraform/` folder and push the code to AWS CodeCommit. Pushing code to AWS CodeCommit automatically initiates the code pipeline. Specify values for parameters based on your organization’s requirements, including the following (see [the file in the GitHub repository](https://github.com/aws-samples/centralized-iam-key-management-aws-organizations-terraform/blob/main/account-customization/input.auto.tfvars) for default values): [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/centralize-iam-access-key-management-in-aws-organizations-by-using-terraform.html) | DevOps engineer | 

### Validate key rotation
<a name="validate-key-rotation"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Validate the solution. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/centralize-iam-access-key-management-in-aws-organizations-by-using-terraform.html) | DevOps engineer | 

### Extend the solution
<a name="extend-the-solution"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Customize the email notification date. | If you want to send email notifications on a specific day before you disable the access key, you can update the `IAM-access-key-auto-rotation` Lambda function with those changes:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/centralize-iam-access-key-management-in-aws-organizations-by-using-terraform.html) | DevOps engineer | 

## Troubleshooting
<a name="centralize-iam-access-key-management-in-aws-organizations-by-using-terraform-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| The `account-inventory` Lambda job fails with `AccessDenied` while listing accounts. | If you encounter this issue, you must validate permissions:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/centralize-iam-access-key-management-in-aws-organizations-by-using-terraform.html) | 

## Related resources
<a name="centralize-iam-access-key-management-in-aws-organizations-by-using-terraform-resources"></a>
+ [Terraform Recommended Practices](https://developer.hashicorp.com/terraform/cloud-docs/recommended-practices) (Terraform documentation)
+ [Security best practices in IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html) (IAM documentation)
+ [Best practices for key rotation](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#rotate-credentials) (IAM documentation)

# Check an Amazon CloudFront distribution for access logging, HTTPS, and TLS version
<a name="check-an-amazon-cloudfront-distribution-for-access-logging-https-and-tls-version"></a>

*SaiJeevan Devireddy and Bijesh Bal, Amazon Web Services*

## Summary
<a name="check-an-amazon-cloudfront-distribution-for-access-logging-https-and-tls-version-summary"></a>

This pattern checks an Amazon CloudFront distribution to make sure that it uses HTTPS, uses Transport Layer Security (TLS) version 1.2 or later, and has access logging enabled. CloudFront is a service provided by Amazon Web Services (AWS) that speeds up the distribution of your static and dynamic web content, such as .html, .css, .js, and image files, to your users. CloudFront delivers your content through a worldwide network of data centers called *edge locations*. When a user requests content that you're serving with CloudFront, the request is routed to the edge location that provides the lowest latency (time delay), so that content is delivered with the best possible performance.

This pattern provides an AWS Lambda function that is initiated when Amazon CloudWatch Events detects the CloudFront API call [CreateDistribution](https://docs.aws.amazon.com/cloudfront/latest/APIReference/API_CreateDistribution.html), [CreateDistributionWithTags](https://docs.aws.amazon.com/cloudfront/latest/APIReference/API_CreateDistributionWithTags.html), or [UpdateDistribution](https://docs.aws.amazon.com/cloudfront/latest/APIReference/API_UpdateDistribution.html). The custom logic in the Lambda function evaluates all CloudFront distributions that were created or updated in the AWS account. It sends a violation notification by using Amazon Simple Notification Service (Amazon SNS) if it detects the following violations:
+ Global checks:
  + Custom certificate doesn't use TLS version 1.2
  + Logging is disabled for distribution
+ Origin checks:
  + Origin isn't configured with TLS version 1.2
  + Communication with origin is allowed on a protocol other than HTTPS
+ Behavior checks:
  + Default behavior communication is allowed on a protocol other than HTTPS
  + Custom behavior communication is allowed on a protocol other than HTTPS
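As a rough sketch of these checks, the following Python function evaluates a distribution configuration dictionary (shaped like the CloudFront API response) and collects violations. Field names follow the public CloudFront API, but treat this as an illustrative approximation, not the attached Lambda code.

```python
def check_distribution(config: dict) -> list:
    """Collect violations from a CloudFront DistributionConfig-style dict.
    Illustrative sketch of the global, origin, and behavior checks above."""
    violations = []

    # Global checks: minimum TLS version and access logging
    cert = config.get("ViewerCertificate", {})
    if not cert.get("MinimumProtocolVersion", "").startswith("TLSv1.2"):
        violations.append("certificate does not enforce TLS 1.2")
    if not config.get("Logging", {}).get("Enabled", False):
        violations.append("access logging is disabled")

    # Origin checks: TLS 1.2 and HTTPS-only communication
    for origin in config.get("Origins", {}).get("Items", []):
        custom = origin.get("CustomOriginConfig")
        if custom is None:
            continue  # S3 origins have no custom SSL settings
        oid = origin.get("Id", "?")
        if "TLSv1.2" not in custom.get("OriginSslProtocols", {}).get("Items", []):
            violations.append(f"origin {oid} not configured with TLS 1.2")
        if custom.get("OriginProtocolPolicy") != "https-only":
            violations.append(f"origin {oid} allows non-HTTPS traffic")

    # Behavior checks: default and custom behaviors must require HTTPS
    https_ok = ("https-only", "redirect-to-https")
    if config.get("DefaultCacheBehavior", {}).get("ViewerProtocolPolicy") not in https_ok:
        violations.append("default behavior allows non-HTTPS viewers")
    for cb in config.get("CacheBehaviors", {}).get("Items", []):
        if cb.get("ViewerProtocolPolicy") not in https_ok:
            violations.append(f"behavior {cb.get('PathPattern')} allows non-HTTPS viewers")
    return violations
```

A compliant distribution returns an empty list; each string in the result would feed one line of the SNS violation notification.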

## Prerequisites and limitations
<a name="check-an-amazon-cloudfront-distribution-for-access-logging-https-and-tls-version-prerequisites-and-limitations"></a>

**Prerequisites**
+ An active AWS account
+ An email address where you want to receive the violation notifications

**Limitations**
+ This security control doesn't check for existing CloudFront distributions unless an update has been made to the distribution.
+ CloudFront is considered a global service and isn't tied to a specific AWS Region. However, Amazon CloudWatch Logs and AWS CloudTrail API logging for global services occur in the US East (N. Virginia) Region (`us-east-1`). Therefore, this security control for CloudFront must be deployed and maintained in `us-east-1`. This single deployment monitors all CloudFront distributions. Do not deploy the security control in any other AWS Region; in other Regions, CloudWatch Events won't initiate the Lambda function, and no SNS notifications will be sent.
+ This solution has gone through extensive testing with CloudFront web content distributions. It does not cover Real-Time Messaging Protocol (RTMP) streaming distributions.

## Architecture
<a name="check-an-amazon-cloudfront-distribution-for-access-logging-https-and-tls-version-architecture"></a>

**Target technology stack**
+ Lambda function
+ SNS topic
+ Amazon EventBridge rule

**Target architecture**

![\[Workflow diagram showing AWS services for distribution creation, event processing, and email notification.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/1ae60f8f-3eaf-40f5-b01f-06e30e5604ce/images/e1521c48-99f6-4ec6-9e53-8713f3cf5776.png)


**Automation and scale**
+ If you are using AWS Organizations, you can use [AWS CloudFormation StackSets](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/what-is-cfnstacksets.html) to deploy the attached template across multiple accounts that you want to monitor.

## Tools
<a name="check-an-amazon-cloudfront-distribution-for-access-logging-https-and-tls-version-tools"></a>

**AWS services**
+ [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) – CloudFormation is a service that helps you model and set up AWS resources by using infrastructure as code.
+ [Amazon EventBridge](https://docs.aws.amazon.com/eventbridge/latest/userguide/what-is-amazon-eventbridge.html) – EventBridge delivers a stream of real-time data from your own applications, software as a service (SaaS) applications, and AWS services, routing that data to targets such as Lambda functions.
+ [AWS Lambda ](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html)– Lambda supports running code without provisioning or managing servers.
+ [Amazon S3](https://docs.aws.amazon.com/AmazonS3/latest/dev/Welcome.html) – Amazon Simple Storage Service (Amazon S3) is a highly scalable object storage service that can be used for a wide range of storage solutions, including websites, mobile applications, backups, and data lakes.
+ [Amazon SNS](https://docs.aws.amazon.com/sns/latest/dg/welcome.html) – Amazon SNS coordinates and manages the delivery or sending of messages between publishers and clients, including web servers and email addresses. Subscribers receive all messages published to the topics to which they subscribe, and all subscribers to a topic receive the same messages.

**Code**

The attached code includes:
+ A .zip file that contains the Lambda code (index.py)
+ A CloudFormation template (.yml file) that you run to deploy the Lambda code

## Epics
<a name="check-an-amazon-cloudfront-distribution-for-access-logging-https-and-tls-version-epics"></a>

### Upload the security control
<a name="upload-the-security-control"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the S3 bucket for the Lambda code. | On the Amazon S3 console, create an S3 bucket with a unique name that does not contain leading slashes. An S3 bucket name is globally unique, and the namespace is shared by all AWS accounts. Your S3 bucket must be in the Region where you are planning to deploy the Lambda code. | Cloud architect | 
| Upload the Lambda code to the S3 bucket. | Upload the Lambda code (the cloudfront\_ssl\_log\_lambda.zip file) that's provided in the *Attachments* section to the S3 bucket you created in the previous step. | Cloud architect | 

### Deploy the CloudFormation template
<a name="deploy-the-cloudformation-template"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Deploy the CloudFormation template. | On the AWS CloudFormation console, in the same AWS Region as the S3 bucket, deploy the CloudFormation template (cloudfront-ssl-logging.yml) that's provided in the *Attachments* section. | Cloud architect | 
| Specify the S3 bucket name. | For the **S3 Bucket** parameter, specify the name of the S3 bucket that you created in the first epic. | Cloud architect | 
| Specify the Amazon S3 key name for the Lambda file. | For the **S3 Key** parameter, specify the Amazon S3 location of the Lambda code .zip file in your S3 bucket. Do not include leading slashes (for example, you can enter `lambda.zip` or `controls/lambda.zip`). | Cloud architect | 
| Provide a notification email address. | For the **Notification email** parameter, provide an email address where you would like to receive the violation notifications. | Cloud architect | 
| Define the logging level. | For the **Lambda Logging level** parameter, define the logging level for your Lambda function. Choose one of the following values: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/check-an-amazon-cloudfront-distribution-for-access-logging-https-and-tls-version.html) | Cloud architect | 

### Confirm the subscription
<a name="confirm-the-subscription"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Confirm the subscription. | When the CloudFormation template has been deployed successfully, a new SNS topic is created and a subscription message is sent to the email address you provided. You must confirm this email subscription to receive violation notifications. | Cloud architect | 

## Related resources
<a name="check-an-amazon-cloudfront-distribution-for-access-logging-https-and-tls-version-related-resources"></a>
+ [AWS CloudFormation information](https://aws.amazon.com/cloudformation/)
+ [Creating a stack on the AWS CloudFormation console](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-console-create-stack.html) (CloudFormation documentation)
+ [CloudFront logging](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/logging.html) (CloudFront documentation)
+ [Amazon S3 information](https://aws.amazon.com/s3/)
+ [AWS Lambda information](https://aws.amazon.com/lambda/)

## Attachments
<a name="attachments-1ae60f8f-3eaf-40f5-b01f-06e30e5604ce"></a>

To access additional content that is associated with this document, unzip the following file: [attachment.zip](samples/p-attach/1ae60f8f-3eaf-40f5-b01f-06e30e5604ce/attachments/attachment.zip)

# Choose an Amazon Cognito authentication flow for enterprise applications
<a name="choose-an-amazon-cognito-authentication-flow-for-enterprise-applications"></a>

*Michael Daehnert and Fabian Jahnke, Amazon Web Services*

## Summary
<a name="choose-an-amazon-cognito-authentication-flow-for-enterprise-applications-summary"></a>

[Amazon Cognito](https://docs.aws.amazon.com/cognito/latest/developerguide/what-is-amazon-cognito.html) provides authentication, authorization, and user management for web and mobile applications. It offers beneficial features for authentication of federated identities. To get it up and running, technical architects need to decide how they want to use those features.

Amazon Cognito supports multiple flows for authentication requests. These flows define how your users can verify their identity. The decision about which authentication flow to use depends on specific requirements of your application and can become complex. This pattern helps you decide which authentication flow is the best fit for your enterprise application. It assumes that you already have a basic knowledge of Amazon Cognito, OpenID Connect (OIDC), and federation, and it guides you through details about different federated authentication flows.

This solution is intended for technical decision makers. It helps you understand the different authentication flows and map them to your application requirements. Technical leads should gather the required insights to start the Amazon Cognito integrations. Because enterprise organizations mainly focus on SAML federation, this pattern includes descriptions for [Amazon Cognito user pools](https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-identity-pools.html) with SAML federation.

## Prerequisites and limitations
<a name="choose-an-amazon-cognito-authentication-flow-for-enterprise-applications-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ AWS Identity and Access Management (IAM) roles and permissions with full access to Amazon Cognito
+ (Optional) Access to your identity provider (IdP), such as Microsoft Entra ID, Active Directory Federation Services (AD FS), or Okta
+ Expert-level knowledge of your application
+ Basic knowledge of Amazon Cognito, OpenID Connect (OIDC), and federation

**Limitations**
+ This pattern focuses on Amazon Cognito user pools and identity providers. For information about Amazon Cognito identity pools, see the [Additional information](#choose-an-amazon-cognito-authentication-flow-for-enterprise-applications-additional) section.

## Architecture
<a name="choose-an-amazon-cognito-authentication-flow-for-enterprise-applications-architecture"></a>

Use the following table to help you choose an authentication flow. More information about each flow is provided in this section.


| Do you need machine-to-machine authentication? | Is your app a web-based application where the frontend is rendered on the server? | Is your app a single-page application (SPA) or mobile-based frontend application? | Does your application require refresh tokens for a "keep me signed in" feature? | Does the frontend offer a browser-based redirect mechanism? | Recommended Amazon Cognito flow | 
| --- |--- |--- |--- |--- |--- |
| Yes | No | No | No | No | Client Credentials flow | 
| No | Yes | No | Yes | Yes | Authorization Code flow | 
| No | No | Yes | Yes | Yes | Authorization Code flow with Proof Key for Code Exchange (PKCE) | 
| No | No | No | No | No | Resource Owner Password flow¹ | 

¹ The Resource Owner Password flow should be used only if absolutely necessary. For more information, see the *Resource Owner Password flow* section in this pattern.

**Client Credentials flow**

The Client Credentials flow is the shortest of the Amazon Cognito flows. It should be used if systems or services communicate with each other without any user interaction. The requesting system uses the client ID and the client secret to retrieve an access token. Because both systems work without user interaction, no additional consent step is required.

![\[Client Credentials flow for Amazon Cognito\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/7b5e567c-66a4-4386-a1f6-616ed77a6211/images/1138745d-69fa-4ecc-a9ec-c0b2a68ce7d2.png)


The diagram illustrates the following:

1. Application 1 sends an authentication request with the client ID and client secret to the Amazon Cognito endpoint, and it retrieves an access token.

1. Application 1 uses this access token for every subsequent call to Application 2.

1. Application 2 validates the access token with Amazon Cognito.

This flow should be used:
+ For communications between applications with no user interaction

This flow should not be used:
+ For any communication in which user interactions are possible
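To make step 1 concrete, the following Python sketch builds the pieces of the Client Credentials token request: a POST to the user pool's /oauth2/token endpoint with the client ID and secret in an HTTP Basic authentication header. The user pool domain (`auth.example.com`), client ID, secret, and scope are placeholder values; in a real application you would send this request with an HTTP client and read `access_token` from the JSON response.

```python
import base64

def build_client_credentials_request(domain, client_id, client_secret, scopes):
    """Build the URL, headers, and form body of an OAuth 2.0 Client
    Credentials token request for an Amazon Cognito app client."""
    # HTTP Basic authentication: base64("client_id:client_secret")
    creds = f"{client_id}:{client_secret}".encode("ascii")
    headers = {
        "Authorization": "Basic " + base64.b64encode(creds).decode("ascii"),
        "Content-Type": "application/x-www-form-urlencoded",
    }
    body = {"grant_type": "client_credentials", "scope": " ".join(scopes)}
    url = f"https://{domain}/oauth2/token"
    return url, headers, body

# Example with placeholder values:
url, headers, body = build_client_credentials_request(
    "auth.example.com", "app1-client-id", "app1-secret",
    ["resource-server/read"],
)
```

Because no user is involved, the app client's credentials are the only secret in play, which is why this flow is restricted to trusted server-side callers.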

**Authorization Code flow**

The Authorization Code flow is for classic web-based authentication. In this flow, the backend handles all of the token exchange and storage. The browser-based client does not see the actual tokens. This solution is used for applications written in frameworks such as .NET Core, Jakarta Faces, or Jakarta Server Pages (JSP).

The Authorization Code flow is a redirection-based flow. The client must be able to interact with the web browser or a similar client. The client is redirected to an authentication server and authenticates against this server. If the client authenticates successfully, it is redirected back to the server.

![\[Authorization Code flow for Amazon Cognito\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/7b5e567c-66a4-4386-a1f6-616ed77a6211/images/1008296c-d5b8-449d-99d4-f0b2b7cf5d80.png)


The diagram illustrates the following:

1. The client sends a request to the web server.

1. The web server redirects the client to Amazon Cognito by using an HTTP 302 status code. The client automatically follows this redirect to the configured IdP login.

1. The IdP checks for an existing browser session on the IdP side. If none exists, the user receives a prompt to authenticate by providing their username and password. The IdP responds with a SAML token to Amazon Cognito.

1. Amazon Cognito returns an authorization code to the web server. The web server calls the /oauth2/token endpoint to exchange the authorization code for an access token, sending the client ID and client secret to Amazon Cognito for validation.

1. The access token is used for every subsequent call to other applications.

1. Other applications validate the access token with Amazon Cognito.

This flow should be used:
+ If the user is able to interact with the web browser or client. The application code is run and rendered on the server to make sure that no secrets are exposed to the browser.

This flow should not be used:
+ For single-page applications (SPAs) or mobile apps because they're rendered on the client and shouldn't use client secrets.
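The redirect in step 2 points the browser at the user pool's /oauth2/authorize endpoint. As an illustration, the following Python sketch builds that URL; the domain, client ID, callback URI, and scopes are placeholder values, and the redirect URI must match what is configured on the app client.

```python
from urllib.parse import urlencode

def build_authorize_url(domain, client_id, redirect_uri, scopes):
    """Build the Amazon Cognito /oauth2/authorize URL that the web server
    redirects the browser to in the Authorization Code flow."""
    params = {
        "response_type": "code",       # request an authorization code
        "client_id": client_id,
        "redirect_uri": redirect_uri,  # must match the app client configuration
        "scope": " ".join(scopes),
    }
    return f"https://{domain}/oauth2/authorize?" + urlencode(params)

url = build_authorize_url(
    "auth.example.com", "example-client-id",
    "https://app.example.com/callback", ["openid", "email"],
)
```

After the user authenticates, the authorization code arrives as a query parameter on the callback URI, and only the server-side code exchange exposes the client secret.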

**Authorization Code flow with PKCE**

Authorization Code flow with Proof Key for Code Exchange (PKCE) should be used for single-page applications and mobile applications. It is the successor of the Implicit flow and is more secure because it uses PKCE. PKCE is an extension to the OAuth 2.0 authorization code grant for public clients. PKCE guards against the redemption of intercepted authorization codes.

![\[Authorization Code flow with PKCE for Amazon Cognito\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/7b5e567c-66a4-4386-a1f6-616ed77a6211/images/1609da4f-decd-4d43-afe0-31237238df6d.png)


The diagram illustrates the following:

1. The application creates a code verifier and a code challenge. The code verifier is a unique, random secret, and the code challenge is derived from it. The code challenge is sent to Amazon Cognito for later verification.

1. The application calls the /oauth2/authorize endpoint of Amazon Cognito. It automatically redirects the user to the configured IdP login.

1. The IdP checks for an existing session. If none exists, the user receives a prompt to authenticate by providing their username and password. The IdP responds with a SAML token to Amazon Cognito.

1. After Amazon Cognito returns an authorization code, the application calls the /oauth2/token endpoint to exchange the authorization code and the code verifier for an access token.

1. The access token is used for every subsequent call to other applications.

1. The other applications validate the access token with Amazon Cognito.

This flow should be used:
+ For SPAs or mobile applications

This flow should not be used:
+ If the application backend handles authentication

**Resource Owner Password flow**

The Resource Owner Password flow is intended for applications that have no redirect capabilities. You implement it by building a login form in your own application. The credentials are checked against Amazon Cognito through a CLI or SDK call instead of a redirect flow. Federation is not possible in this authentication flow because federation requires browser-based redirects.

![\[Resource Owner Password flow for Amazon Cognito\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/7b5e567c-66a4-4386-a1f6-616ed77a6211/images/d74bc596-08a3-40f4-a6a7-07f6610fe6b1.png)


The diagram illustrates the following:

1. The user enters their credentials on a login form provided by the application.

1. The AWS Command Line Interface (AWS CLI) makes an [admin-initiate-auth](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/cognito-idp/admin-initiate-auth.html) call to Amazon Cognito.
**Note**  
Alternatively, you can use AWS SDKs instead of the AWS CLI.

1. Amazon Cognito returns an access token.

1. The access token is used for every subsequent call to other applications.

1. The other applications validate the access token with Amazon Cognito.

This flow should be used:
+ When migrating existing clients that use direct authentication logic (such as basic access authentication or digest access authentication) to OAuth by converting the stored credentials to an access token

This flow should not be used:
+ If you want to use federated identities
+ If your application supports redirects
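If the backend uses an AWS SDK instead of the AWS CLI, the call in step 2 takes the same shape. The following Python sketch only builds the parameters for that call; the user pool ID, client ID, and credentials are placeholders, and the actual request would be something like `boto3.client("cognito-idp").admin_initiate_auth(**params)`.

```python
def build_admin_auth_params(user_pool_id, client_id, username, password):
    """Build the parameters for an admin-initiate-auth call that the
    application backend makes with the credentials collected by its own
    login form. ADMIN_USER_PASSWORD_AUTH is the server-side
    username/password flow."""
    return {
        "UserPoolId": user_pool_id,
        "ClientId": client_id,
        "AuthFlow": "ADMIN_USER_PASSWORD_AUTH",
        "AuthParameters": {"USERNAME": username, "PASSWORD": password},
    }

# Placeholder values for illustration only:
params = build_admin_auth_params(
    "us-east-1_EXAMPLE", "example-client-id", "jdoe", "correct-horse",
)
```

A successful call returns the ID, access, and refresh tokens directly in the response, which is exactly why this flow should be avoided when a redirect-based flow is possible: the application handles raw user credentials.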

## Tools
<a name="choose-an-amazon-cognito-authentication-flow-for-enterprise-applications-tools"></a>

**AWS services**
+ [Amazon Cognito](https://docs.aws.amazon.com/cognito/latest/developerguide/what-is-amazon-cognito.html) provides authentication, authorization, and user management for web and mobile apps.

**Other tools**
+ [JSON web token (JWT) debugger](https://jwt.io/) is a web-based JWT validation tool.

## Epics
<a name="choose-an-amazon-cognito-authentication-flow-for-enterprise-applications-epics"></a>

### Assess your application
<a name="assess-your-application"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Define authentication requirements. | Assess your application according to your specific authentication requirements. | App developer, App architect | 
| Align requirements with authentication flows. | In the [Architecture](#choose-an-amazon-cognito-authentication-flow-for-enterprise-applications-architecture) section, use the decision table and explanations of each flow to choose your Amazon Cognito authentication flow. | App developer, General AWS, App architect | 

### Set up the Amazon Cognito user pool
<a name="set-up-the-amazon-cognito-user-pool"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a user pool. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/choose-an-amazon-cognito-authentication-flow-for-enterprise-applications.html) | General AWS | 
| (Optional) Configure an identity provider. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/choose-an-amazon-cognito-authentication-flow-for-enterprise-applications.html) | General AWS, Federation administrator | 
| Create an app client. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/choose-an-amazon-cognito-authentication-flow-for-enterprise-applications.html) | General AWS | 

### Integrate the application with Amazon Cognito
<a name="integrate-the-application-with-amazon-cognito"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Exchange Amazon Cognito integration details. | Depending on your authentication flow, share Amazon Cognito information with the application, such as the user pool ID and app client ID. | App developer, General AWS | 
| Implement Amazon Cognito authentication. | This depends on your chosen authentication flow, your programming language, and the frameworks you're using. For some links to get started, see the [Related resources](#choose-an-amazon-cognito-authentication-flow-for-enterprise-applications-resources) section. | App developer | 

## Related resources
<a name="choose-an-amazon-cognito-authentication-flow-for-enterprise-applications-resources"></a>

**AWS documentation**
+ [User pool authentication flow](https://docs.aws.amazon.com/cognito/latest/developerguide/amazon-cognito-user-pools-authentication-flow.html)
+ [Verifying a JSON web token](https://docs.aws.amazon.com/cognito/latest/developerguide/amazon-cognito-user-pools-using-tokens-verifying-a-jwt.html)
+ [Access AWS services from an ASP.NET Core app using Amazon Cognito identity pools](https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-aws-services-from-an-asp-net-core-app-using-amazon-cognito-identity-pools.html?did=pg_card&trk=pg_card)
+ Frameworks and SDKs:
  + [AWS Amplify authentication](https://docs.amplify.aws/lib/auth/getting-started/q/platform/js)
  + [Amazon Cognito Identity Provider examples](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/java_cognito-identity-provider_code_examples.html) (AWS SDK for Java 2.x documentation)
  + [Authenticating users with Amazon Cognito](https://docs.aws.amazon.com/sdk-for-net/v3/developer-guide/cognito-apis-intro.html) (AWS SDK for .NET documentation)

**AWS blog posts**
+ [Authorization@Edge using cookies: Protect your Amazon CloudFront content from being downloaded by unauthenticated users](https://aws.amazon.com/blogs/networking-and-content-delivery/authorizationedge-using-cookies-protect-your-amazon-cloudfront-content-from-being-downloaded-by-unauthenticated-users/)
+ [Building AD FS Federation for your Web App using Amazon Cognito User Pools](https://aws.amazon.com/blogs/mobile/building-adfs-federation-for-your-web-app-using-amazon-cognito-user-pools/)

**Implementation partners**
+ [AWS Partners for authentication solutions](https://partners.amazonaws.com/search/partners?keyword=authentication)

## Additional information
<a name="choose-an-amazon-cognito-authentication-flow-for-enterprise-applications-additional"></a>

**FAQ**

*Why is the Implicit flow deprecated?*

Since the release of the [OAuth 2.1 framework](https://oauth.net/2.1/), the Implicit flow has been deprecated for security reasons. Use the Authorization Code flow with PKCE, described in the [Architecture](#choose-an-amazon-cognito-authentication-flow-for-enterprise-applications-architecture) section, instead.

*What if Amazon Cognito doesn’t offer some functionality I require?*

AWS Partners offer different integrations for authentication and authorization solutions. For more information, see [AWS Partners for authentication solutions](https://partners.amazonaws.com/search/partners?keyword=authentication).

*What about Amazon Cognito identity pool flows?*

Amazon Cognito user pools and federated identities handle authentication. Amazon Cognito identity pools handle authorization: they grant access to AWS resources by issuing temporary AWS credentials. The exchange of ID and access tokens for identity pool credentials isn't discussed in this pattern. For more information, see [What's the difference between Amazon Cognito user pools and identity pools](https://aws.amazon.com/premiumsupport/knowledge-center/cognito-user-pools-identity-pools/) and [Common Amazon Cognito scenarios](https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-scenarios.html).

**Next steps**

This pattern provides an overview of Amazon Cognito authentication flows. As a next step, choose the detailed implementation for your application’s programming language. Many languages offer SDKs and frameworks that you can use with Amazon Cognito. For helpful references, see the [Related resources](#choose-an-amazon-cognito-authentication-flow-for-enterprise-applications-resources) section.

# Create AWS Config custom rules by using AWS CloudFormation Guard policies
<a name="create-aws-config-custom-rules-by-using-aws-cloudformation-guard-policies"></a>

*Andrew Lok, Nicole Brown, Kailash Havildar, and Tanya Howell, Amazon Web Services*

## Summary
<a name="create-aws-config-custom-rules-by-using-aws-cloudformation-guard-policies-summary"></a>

[AWS Config](https://docs.aws.amazon.com/config/latest/developerguide/WhatIsConfig.html) rules help you evaluate your AWS resources against their target configuration state. There are two types of AWS Config rules: managed and custom. You can create custom rules with AWS Lambda functions or with [AWS CloudFormation Guard](https://github.com/aws-cloudformation/cloudformation-guard) (GitHub), a policy-as-code language.

Rules created with Guard provide more granular control than managed rules, and they are typically easier to configure than fully custom Lambda rules. This approach gives engineers and architects the ability to build rules without knowing Python, Node.js, or Java, which are required to deploy custom rules through Lambda.

This pattern provides workable templates, code samples, and deployment approaches to help you adopt custom rules with Guard. By using this pattern, an administrator can use AWS Config to build custom compliance rules that evaluate [configuration item](https://docs.aws.amazon.com/config/latest/developerguide/config-concepts.html#config-items) attributes. For example, developers can use Guard policies against AWS Config configuration items to continuously monitor the state of deployed AWS and non-AWS resources, detect rule violations, and automatically initiate remediation.

**Objectives**

After reading this pattern, you should be able to:
+ Understand how Guard policy code interacts with the AWS Config service.
+ Deploy *Scenario 1*, which is an AWS Config custom rule that uses Guard syntax to validate compliance for encrypted volumes. This rule verifies that the drive is in use and verifies that the drive type is [gp3](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/general-purpose.html#gp3-ebs-volume-type).
+ Deploy *Scenario 2*, which is an AWS Config custom rule that uses Guard syntax to validate Amazon GuardDuty compliance. This rule verifies that GuardDuty recorders have [Amazon Simple Storage Service (Amazon S3) Protection](https://docs.aws.amazon.com/guardduty/latest/ug/s3-protection.html) and [Amazon Elastic Kubernetes Service (Amazon EKS) Protection](https://docs.aws.amazon.com/guardduty/latest/ug/kubernetes-protection.html) enabled.

## Prerequisites and limitations
<a name="create-aws-config-custom-rules-by-using-aws-cloudformation-guard-policies-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ AWS Config, [set up](https://docs.aws.amazon.com/config/latest/developerguide/getting-started.html) in your AWS account

**Limitations**
+ Guard custom rules are only able to query key-value pairs in a target configuration item JSON record

## Architecture
<a name="create-aws-config-custom-rules-by-using-aws-cloudformation-guard-policies-architecture"></a>

You apply the Guard syntax to an AWS Config rule as a custom policy. AWS Config captures the hierarchical JSON of each of the resources specified. The JSON of the AWS Config configuration item contains key-value pairs. These attributes are used in the Guard syntax as variables that are assigned to their corresponding value. 

The following is an explanation of the Guard syntax. Variables are declared with `let` and are referenced later by prepending a `%` character to the variable name.

```
# declare variable
let <variable name> = <'value'>

# create rule and assign condition and policy
    rule <rule name> when 
        <CI json key> == <"CI json value"> {
            <top level CI json key>.<next level CI json key> == %<variable name>
        }
```

**Scenario 1: Amazon EBS volumes**

Scenario 1 deploys an AWS Config custom rule that uses Guard syntax to validate compliance for encrypted volumes. This rule verifies that the drive is in use and verifies that the drive type is gp3.

The following is an example of an AWS Config configuration item for scenario 1. There are three key-value pairs in this configuration item that are used as variables in the Guard policy: `volumestatus`, `volumeencryptionstatus`, and `volumetype`. Also, the `resourceType` key is used as a filter in the Guard policy.

```
{
  "version": "1.3",
  "accountId": "111111111111",
  "configurationItemCaptureTime": "2023-01-15T19:04:45.402Z",
  "configurationItemStatus": "ResourceDiscovered",
  "configurationStateId": "4444444444444",
  "configurationItemMD5Hash": "",
  "arn": "arn:aws:ec2:us-west-2:111111111111:volume/vol-222222222222",
  "resourceType": "AWS::EC2::Volume",
  "resourceId": "vol-222222222222",
  "awsRegion": "us-west-2",
  "availabilityZone": "us-west-2b",
  "resourceCreationTime": "2023-01-15T19:03:22.247Z",
  "tags": {},
  "relatedEvents": [],
  "relationships": [
    {
      "resourceType": "AWS::EC2::Instance",
      "resourceId": "i-33333333333333333",
      "relationshipName": "Is attached to Instance"
    }
  ],
  "configuration": {
    "attachments": [
      {
        "attachTime": "2023-01-15T19:03:22.000Z",
        "device": "/dev/xvda",
        "instanceId": "i-33333333333333333",
        "state": "attached",
        "volumeId": "vol-222222222222",
        "deleteOnTermination": true,
        "associatedResource": null,
        "instanceOwningService": null
      }
    ],
    "availabilityZone": "us-west-2b",
    "createTime": "2023-01-15T19:03:22.247Z",
    "encrypted": false,
    "kmsKeyId": null,
    "outpostArn": null,
    "size": 8,
    "snapshotId": "snap-55555555555555555",
    "state": "in-use",
    "volumeId": "vol-222222222222",
    "iops": 100,
    "tags": [],
    "volumeType": "gp2",
    "fastRestored": null,
    "multiAttachEnabled": false,
    "throughput": null,
    "sseType": null
  },
  "supplementaryConfiguration": {}
}
```

The following is an example of using Guard syntax to define the variables and rules in scenario 1. In the following example:
+ The first three lines define the variables by using the `let` keyword. Each variable is assigned a name and the target value that the corresponding configuration item attribute must match.
+ The `compliancecheck` rule block adds a `when` condition that looks for a `resourceType` key-value pair that matches `AWS::EC2::Volume`. If a match is found, the rule proceeds through the rest of the JSON attributes and checks the following three conditions: `state`, `encrypted`, and `volumeType`.

```
let volumestatus = 'available'
let volumetype = 'gp3'
let volumeencryptionstatus = true

    rule compliancecheck when 
        resourceType == "AWS::EC2::Volume" {
            configuration.state == %volumestatus
            configuration.encrypted == %volumeencryptionstatus
            configuration.volumeType == %volumetype
        }
```

For the complete Guard custom policy that implements this custom rule, see [awsconfig-guard-cft.yaml](https://github.com/aws-samples/aws-config-custom-rule-cloudformation-guard/blob/main/awsconfig-guard-cft.yaml) or [awsconfig-guard-tf-ec2vol.json](https://github.com/aws-samples/aws-config-custom-rule-cloudformation-guard/blob/main/awsconfig-guard-tf-ec2vol.json) in the GitHub code repository. For HashiCorp Terraform code that deploys this custom policy in Guard, see [awsconfig-guard-tf-example.json](https://github.com/aws-samples/aws-config-custom-rule-cloudformation-guard/blob/main/awsconfig-guard-tf-example.json) in the code repository.
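To see how this rule evaluates, the following Python sketch applies the same three checks to the sample configuration item above. This is only an illustration of the rule's logic, not how AWS Config actually executes Guard policies. The sample volume fails all three conditions (state `in-use`, `encrypted` false, type `gp2`), so the rule would report it as noncompliant.

```python
def evaluate_scenario1(ci):
    """Mimic the scenario 1 Guard rule: the `when` condition restricts the
    rule to EC2 volumes, then state, encryption, and volume type are
    compared against the target values from the `let` declarations."""
    if ci.get("resourceType") != "AWS::EC2::Volume":
        return "NOT_APPLICABLE"
    cfg = ci["configuration"]
    compliant = (
        cfg["state"] == "available"        # %volumestatus
        and cfg["encrypted"] is True       # %volumeencryptionstatus
        and cfg["volumeType"] == "gp3"     # %volumetype
    )
    return "COMPLIANT" if compliant else "NON_COMPLIANT"

# Condensed version of the sample configuration item shown earlier:
sample_ci = {
    "resourceType": "AWS::EC2::Volume",
    "configuration": {"state": "in-use", "encrypted": False, "volumeType": "gp2"},
}
result = evaluate_scenario1(sample_ci)
```

Walking through the checks this way is a quick sanity test of a policy's intent before you deploy it; for real validation of the Guard syntax itself, use the Guard CLI as described in the Troubleshooting section.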

**Scenario 2: GuardDuty compliance**

Scenario 2 deploys an AWS Config custom rule that uses Guard syntax to validate Amazon GuardDuty compliance. This rule verifies that GuardDuty recorders have Amazon S3 Protection and Amazon EKS Protection enabled. It also verifies that GuardDuty findings are published every 15 minutes. This scenario could be deployed across all AWS accounts and AWS Regions in an organization (in AWS Organizations).

The following is an example of an AWS Config configuration item for scenario 2. There are three key-value pairs in this configuration item that are used as variables in the Guard policy: `FindingPublishingFrequency`, `S3Logs`, and `Kubernetes`. Also, the `resourceType` key is used as a filter in the policy.

```
{
  "version": "1.3",
  "accountId": "111111111111",
  "configurationItemCaptureTime": "2023-11-27T13:34:28.888Z",
  "configurationItemStatus": "OK",
  "configurationStateId": "7777777777777",
  "configurationItemMD5Hash": "",
  "arn": "arn:aws:guardduty:us-west-2:111111111111:detector/66666666666666666666666666666666",
  "resourceType": "AWS::GuardDuty::Detector",
  "resourceId": "66666666666666666666666666666666",
  "resourceName": "66666666666666666666666666666666",
  "awsRegion": "us-west-2",
  "availabilityZone": "Regional",
  "resourceCreationTime": "2020-02-17T02:48:04.511Z",
  "tags": {},
  "relatedEvents": [],
  "relationships": [],
  "configuration": {
    "Enable": true,
    "FindingPublishingFrequency": "FIFTEEN_MINUTES",
    "DataSources": {
      "S3Logs": {
        "Enable": true
      },
      "Kubernetes": {
        "AuditLogs": {
          "Enable": true
        }
      }
    },
    
    "Id": "66666666666666666666666666666666",
    "Tags": []
  },
  "supplementaryConfiguration": {
    "CreatedAt": "2020-02-17T02:48:04.511Z"
  }
}
```

The following is an example of using Guard syntax to define the variables and rules in scenario 2. In the following example:
+ The first three lines define the variables by using the `let` keyword. Each variable is assigned a name and the target value that the corresponding configuration item attribute must match.
+ The `compliancecheck` rule block adds a `when` condition that looks for a `resourceType` key-value pair that matches `AWS::GuardDuty::Detector`. If a match is found, the rule proceeds through the rest of the JSON attributes and checks the following three conditions: `S3Logs.Enable`, `Kubernetes.AuditLogs.Enable`, and `FindingPublishingFrequency`.

```
let s3protection = true
let kubernetesprotection = true
let publishfrequency = 'FIFTEEN_MINUTES'

    rule compliancecheck when 
        resourceType == "AWS::GuardDuty::Detector" {
            configuration.DataSources.S3Logs.Enable == %s3protection
            configuration.DataSources.Kubernetes.AuditLogs.Enable == %kubernetesprotection
            configuration.FindingPublishingFrequency == %publishfrequency
        }
```

For the complete Guard custom policy that implements this custom rule, see [awsconfig-guard-cft-gd.yaml](https://github.com/aws-samples/aws-config-custom-rule-cloudformation-guard/blob/main/awsconfig-guard-cft-gd.yaml) in the GitHub code repository. For HashiCorp Terraform code that deploys this custom policy in Guard, see [awsconfig-guard-tf-gd.json](https://github.com/aws-samples/aws-config-custom-rule-cloudformation-guard/blob/main/awsconfig-guard-tf-gd.json) in the code repository.

## Tools
<a name="create-aws-config-custom-rules-by-using-aws-cloudformation-guard-policies-tools"></a>

**AWS services**
+ [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) helps you set up AWS resources, provision them quickly and consistently, and manage them throughout their lifecycle across AWS accounts and Regions.
+ [AWS Config](https://docs.aws.amazon.com/config/latest/developerguide/WhatIsConfig.html) provides a detailed view of the resources in your AWS account and how they’re configured. It helps you identify how resources are related to one another and how their configurations have changed over time.

**Other tools**
+ [HashiCorp Terraform](https://www.terraform.io/docs) is an infrastructure as code (IaC) tool that helps you use code to provision and manage cloud infrastructure and resources.

**Code repository**

The code for this pattern is available in the GitHub [AWS Config with AWS CloudFormation Guard](https://github.com/aws-samples/aws-config-custom-rule-cloudformation-guard/tree/main) repository. This code repository contains samples for both of the scenarios described in this pattern.

## Epics
<a name="create-aws-config-custom-rules-by-using-aws-cloudformation-guard-policies-epics"></a>

### Creating AWS Config custom rules
<a name="creating-cc-custom-rules"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| (Optional) Select key-value pairs for the rule. | Complete these steps if you are defining a custom Guard policy. If you are using one of the sample policies for scenario 1 or 2, skip these steps.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-aws-config-custom-rules-by-using-aws-cloudformation-guard-policies.html) | AWS administrator, Security engineer | 
| Create the custom rule. | Using the key-value pairs that you identified previously or using one of the provided sample Guard policies, follow the instructions in [Creating AWS Config Custom Policy Rules](https://docs.aws.amazon.com/config/latest/developerguide/evaluate-config_develop-rules_cfn-guard.html#create-cfn-guard-rule-console) to create a custom rule. | AWS administrator, Security engineer | 
| Validate the custom rule. | Do one of the following to validate the custom Guard rule:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-aws-config-custom-rules-by-using-aws-cloudformation-guard-policies.html) | AWS administrator, Security engineer | 

## Troubleshooting
<a name="create-aws-config-custom-rules-by-using-aws-cloudformation-guard-policies-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| Test the Guard policy outside of AWS Config | Unit testing can be done on your local device or in an integrated development environment (IDE), such as an AWS Cloud9 IDE. To perform unit testing, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-aws-config-custom-rules-by-using-aws-cloudformation-guard-policies.html) | 
| Debug an AWS Config custom rule | In the custom rule's configuration, set the `EnableDebugLogDelivery` value to `true`. The default value is `false`. The log messages are stored in Amazon CloudWatch Logs. | 
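If you deploy the custom rule through CloudFormation, `EnableDebugLogDelivery` is a property of the rule's `CustomPolicyDetails`. The following fragment is a sketch, not a complete template; the resource name, rule name, and policy text are placeholders.

```yaml
GuardCustomRule:
  Type: AWS::Config::ConfigRule
  Properties:
    ConfigRuleName: ec2-volume-compliance-check   # placeholder name
    Scope:
      ComplianceResourceTypes:
        - AWS::EC2::Volume
    Source:
      Owner: CUSTOM_POLICY
      SourceDetails:
        - EventSource: aws.config
          MessageType: ConfigurationItemChangeNotification
      CustomPolicyDetails:
        PolicyRuntime: guard-2.x.x
        EnableDebugLogDelivery: true   # default is false
        PolicyText: |
          # Guard policy from the Architecture section goes here
```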

## Related resources
<a name="create-aws-config-custom-rules-by-using-aws-cloudformation-guard-policies-resources"></a>

**AWS documentation**
+ [Creating AWS Config Custom Policy Rules](https://docs.aws.amazon.com/config/latest/developerguide/evaluate-config_develop-rules_cfn-guard.html) (AWS Config documentation)
+ [Writing AWS CloudFormation Guard rules](https://docs.aws.amazon.com/cfn-guard/latest/ug/writing-rules.html) (Guard documentation)

**AWS blog posts and workshops**
+ [Introducing AWS CloudFormation Guard 2.0](https://aws.amazon.com/blogs/mt/introducing-aws-cloudformation-guard-2-0/) (AWS blog post)

**Other resources**
+ [AWS CloudFormation Guard](https://github.com/aws-cloudformation/cloudformation-guard) (GitHub)
+ [AWS CloudFormation Guard CLI documentation](https://github.com/aws-cloudformation/cloudformation-guard#guard-cli) (GitHub)

# Create a consolidated report of Prowler security findings from multiple AWS accounts
<a name="create-a-consolidated-report-of-prowler-security-findings-from-multiple-aws-accounts"></a>

*Mike Virgilio, Jay Durga, and Andrea Di Fabio, Amazon Web Services*

## Summary
<a name="create-a-consolidated-report-of-prowler-security-findings-from-multiple-aws-accounts-summary"></a>

[Prowler](https://github.com/prowler-cloud/prowler) (GitHub) is an open-source command line tool that can help you assess, audit, and monitor your Amazon Web Services (AWS) accounts for adherence to security best practices. In this pattern, you deploy Prowler in a centralized AWS account in your organization, managed by AWS Organizations, and then use Prowler to perform a security assessment of all of the accounts in the organization.

Although there are many ways to deploy and use Prowler for an assessment, this solution is designed for rapid deployment, full analysis of all accounts in the organization (or of defined target accounts), and accessible reporting of the security findings. In this solution, when Prowler completes the security assessment of all accounts in the organization, it consolidates the results. It also filters out any expected error messages, such as errors related to restrictions that prevent Prowler from scanning Amazon Simple Storage Service (Amazon S3) buckets in accounts provisioned through AWS Control Tower. The filtered, consolidated results are reported in a Microsoft Excel template that is included with this pattern. You can use this report to identify potential improvements for the security controls in your organization.

This solution was designed with the following in mind:
+ The AWS CloudFormation templates reduce the effort required to deploy the AWS resources in this pattern.
+ You can adjust the parameters in the CloudFormation templates and the **prowler_scan.sh** script at the time of deployment to customize the solution for your environment.
+ Prowler assessment and reporting speeds are optimized through parallel processing of AWS accounts, aggregated results, consolidated reporting with recommended remediations, and automatically generated visualizations.
+ The user doesn’t need to monitor the scan progress. When the assessment is complete, the user is notified through an Amazon Simple Notification Service (Amazon SNS) topic so that they can retrieve the report.
+ The report template helps you read and assess only the relevant results for your entire organization.

## Prerequisites and limitations
<a name="create-a-consolidated-report-of-prowler-security-findings-from-multiple-aws-accounts-prereqs"></a>

**Prerequisites**
+ An AWS account for hosting security services and tools, managed as a member account of an organization in AWS Organizations. In this pattern, this account is referred to as the *security account*.
+ In the security account, you must have a private subnet with outbound internet access. For instructions, see [VPC with servers in private subnets and NAT](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-example-private-subnets-nat.html) in the Amazon Virtual Private Cloud (Amazon VPC) documentation. You can establish internet access by using an [NAT gateway](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html) that is provisioned in a public subnet.
+ Access to the AWS Organizations management account or an account that has delegated administrator permissions for CloudFormation. For instructions, see [Register a delegated administrator](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-orgs-delegated-admin.html) in the CloudFormation documentation.
+ Enable trusted access between AWS Organizations and CloudFormation. For instructions, see [Enable trusted access with AWS Organizations](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-orgs-enable-trusted-access.html) in the CloudFormation documentation.

**Limitations**
+ The target AWS accounts must be managed as an organization in AWS Organizations. If you are not using AWS Organizations, you can update the **IAM-ProwlerExecRole.yaml** CloudFormation template and the **prowler_scan.sh** script for your environment, and provide a list of the AWS account IDs and Regions where you want to run the script.
+ The CloudFormation template is designed to deploy the Amazon Elastic Compute Cloud (Amazon EC2) instance in a private subnet that has outbound internet access. The AWS Systems Manager Agent (SSM Agent) requires outbound access to reach the AWS Systems Manager service endpoint, and you need outbound access to clone the code repository and install dependencies. If you want to use a public subnet, you must modify the **prowler-resources.yaml** template to associate an [Elastic IP address](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/elastic-ip-addresses-eip.html) with the EC2 instance.

**Product versions**
+ Prowler version 4.0 or later

## Architecture
<a name="create-a-consolidated-report-of-prowler-security-findings-from-multiple-aws-accounts-architecture"></a>

![\[Architecture diagram with Prowler deployed in a centralized security account.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/81ba9037-9958-4e4a-95b7-d68896075a5b/images/4a3c281c-f108-4e35-9683-72783ceb3336.png)


The diagram shows the following process:

1. Using Session Manager, a capability of AWS Systems Manager, the user authenticates to the EC2 instance and runs the **prowler_scan.sh** script. This shell script performs steps 2–8.

1. The EC2 instance assumes the `ProwlerEC2Role` IAM role, which grants permissions to access the S3 bucket and to assume the `ProwlerExecRole` IAM roles in the other accounts in the organization.

1. The EC2 instance assumes the `ProwlerExecRole` IAM role in the organization’s management account and generates a list of the accounts in the organization.

1. The EC2 instance assumes the `ProwlerExecRole` IAM role in the organization’s member accounts (called *workload accounts* in the architecture diagram) and performs a security assessment in each account. The findings are stored as CSV and HTML files on the EC2 instance.
**Note**  
 HTML files are an output of the Prowler assessment. Due to the nature of HTML, they aren’t concatenated, processed, or used directly in this pattern. However, they might be useful when you review the report for an individual account.

1. The EC2 instance processes all of the CSV files to remove known, expected errors and consolidates the remaining findings into a single CSV file.

1. The EC2 instance packages the individual account results and aggregated results into a zip file.

1. The EC2 instance uploads the zip file to the S3 bucket.

1. An EventBridge rule detects the file upload and uses an Amazon SNS topic to send an email to the user notifying them that the assessment is complete.

1. The user downloads the zip file from the S3 bucket. The user imports the results into the Excel template and reviews the results.
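Steps 5 and 6 above (filtering known errors from the per-account CSV files and consolidating the remaining findings) can be sketched in a few lines of Python. This is a simplified illustration, not the actual logic in **prowler_scan.sh**; the file names and error patterns are assumptions you would adjust for your environment:

```python
import csv
import glob

# Known, expected error strings to drop from the findings (illustrative).
SKIP_PATTERNS = ["Access Denied getting bucket"]

def consolidate(csv_paths, out_path, skip_patterns=SKIP_PATTERNS):
    """Merge per-account Prowler CSV files into one file, keeping a single
    header row and dropping rows that contain any expected error string."""
    header_written = False
    with open(out_path, "w", newline="") as out:
        writer = csv.writer(out)
        for path in sorted(csv_paths):
            with open(path, newline="") as f:
                reader = csv.reader(f)
                header = next(reader, None)
                if header is None:
                    continue  # skip empty files
                if not header_written:
                    writer.writerow(header)
                    header_written = True
                for row in reader:
                    # Drop rows that match any expected, ignorable error
                    if any(p.lower() in cell.lower()
                           for p in skip_patterns for cell in row):
                        continue
                    writer.writerow(row)

if __name__ == "__main__":
    # Hypothetical output location; the real script defines its own paths.
    consolidate(glob.glob("output/prowler-*.csv"), "prowler-consolidated.csv")
```

Because every per-account file repeats the same header, only the first header is kept, which lets you import the consolidated file directly into the Excel template.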

## Tools
<a name="create-a-consolidated-report-of-prowler-security-findings-from-multiple-aws-accounts-tools"></a>

**AWS services**
+ [Amazon Elastic Compute Cloud (Amazon EC2)](https://docs.aws.amazon.com/ec2/) provides scalable computing capacity in the AWS Cloud. You can launch as many virtual servers as you need and quickly scale them up or down.
+ [Amazon EventBridge](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-what-is.html) is a serverless event bus service that helps you connect your applications with real-time data from a variety of sources, such as AWS Lambda functions, HTTP invocation endpoints using API destinations, or event buses in other AWS accounts.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.
+ [AWS Organizations](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_introduction.html) is an account management service that helps you consolidate multiple AWS accounts into an organization that you create and centrally manage.
+ [Amazon Simple Notification Service (Amazon SNS)](https://docs.aws.amazon.com/sns/latest/dg/welcome.html) helps you coordinate and manage the exchange of messages between publishers and clients, including web servers and email addresses.
+ [Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) is a cloud-based object storage service that helps you store, protect, and retrieve any amount of data.
+ [AWS Systems Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/what-is-systems-manager.html) helps you manage your applications and infrastructure running in the AWS Cloud. It simplifies application and resource management, shortens the time to detect and resolve operational problems, and helps you manage your AWS resources securely at scale. This pattern uses Session Manager, a capability of Systems Manager.

**Other tools**
+ [Prowler](https://github.com/prowler-cloud/prowler/#requirements-and-installation) is an open-source command-line tool that helps you assess, audit, and monitor your accounts for adherence to AWS security best practices and other security frameworks and standards.

**Code repository**

The code for this pattern is available in the GitHub [Multi-Account Security Assessment via Prowler](https://github.com/aws-samples/multi-account-security-assessment-via-prowler) repository. The code repository contains the following files:
+ **prowler_scan.sh** – This bash script starts a Prowler security assessment of multiple AWS accounts in parallel. As defined in the **prowler-resources.yaml** CloudFormation template, this script is automatically deployed to the `/usr/local/prowler` folder on the EC2 instance.
+ **prowler-resources.yaml** – You use this CloudFormation template to create a stack in the security account in the organization. This template deploys all of the resources that are required in this account to support the solution. This stack must be deployed before the **IAM-ProwlerExecRole.yaml** template. We do not recommend that you deploy these resources in an account that hosts critical production workloads.
**Note**  
If this stack is deleted and redeployed, you must rebuild the `ProwlerExecRole` stack set in order to rebuild the cross-account dependencies between the IAM roles.
+ **IAM-ProwlerExecRole.yaml** – You use this CloudFormation template to create a stack set that deploys the `ProwlerExecRole` IAM role in all accounts in the organization, including the management account.
+ **prowler-report-template.xlsm** – You use this Excel template to process the Prowler findings. The pivot tables in the report provide search capabilities, charts, and consolidated findings.

## Epics
<a name="create-a-consolidated-report-of-prowler-security-findings-from-multiple-aws-accounts-epics"></a>

### Prepare for deployment
<a name="prepare-for-deployment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clone the code repository. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-a-consolidated-report-of-prowler-security-findings-from-multiple-aws-accounts.html) | AWS DevOps | 
| Review the templates. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-a-consolidated-report-of-prowler-security-findings-from-multiple-aws-accounts.html) | AWS DevOps | 

### Create the CloudFormation stacks
<a name="create-the-cfnshort-stacks"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Provision resources in the security account. | Using the **prowler-resources.yaml** template, you create a CloudFormation stack that deploys all of the required resources in the security account. For instructions, see [Creating a stack](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-console-create-stack.html) in the CloudFormation documentation. Note the following when deploying this template:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-a-consolidated-report-of-prowler-security-findings-from-multiple-aws-accounts.html) | AWS DevOps | 
| Provision the IAM role in the member accounts. | In the AWS Organizations management account or an account with delegated administrator permissions for CloudFormation, use the **IAM-ProwlerExecRole.yaml** template to create a CloudFormation stack set. The stack set deploys the `ProwlerExecRole` IAM role in all member accounts in the organization. For instructions, see [Create a stack set with service-managed permissions](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-getting-started-create.html#stacksets-orgs-associate-stackset-with-org) in the CloudFormation documentation. Note the following when deploying this template:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-a-consolidated-report-of-prowler-security-findings-from-multiple-aws-accounts.html) | AWS DevOps | 
| Provision the IAM role in the management account. | Using the **IAM-ProwlerExecRole.yaml** template, you create a CloudFormation stack that deploys the `ProwlerExecRole` IAM role in the management account of the organization. The stack set you created previously doesn’t deploy the IAM role in the management account. For instructions, see [Creating a stack](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-console-create-stack.html) in the CloudFormation documentation. Note the following when deploying this template:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-a-consolidated-report-of-prowler-security-findings-from-multiple-aws-accounts.html) | AWS DevOps | 

### Perform the Prowler security assessment
<a name="perform-the-prowler-security-assessment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Run the scan. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-a-consolidated-report-of-prowler-security-findings-from-multiple-aws-accounts.html) | AWS administrator | 
| Retrieve the Prowler findings. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-a-consolidated-report-of-prowler-security-findings-from-multiple-aws-accounts.html) | General AWS | 
| Stop the EC2 instance. | To prevent billing while the instance is idle, stop the EC2 instance that runs Prowler. For instructions, see [Stop and start your instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Stop_Start.html#starting-stopping-instances) in the Amazon EC2 documentation. | AWS DevOps | 

### Create a report of the findings
<a name="create-a-report-of-the-findings"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Import the findings. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-a-consolidated-report-of-prowler-security-findings-from-multiple-aws-accounts.html) | General AWS | 
| Finalize the report. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-a-consolidated-report-of-prowler-security-findings-from-multiple-aws-accounts.html) | General AWS | 

### (Optional) Update Prowler or the resources in the code repository
<a name="optional-update-prowler-or-the-resources-in-the-code-repository"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Update Prowler. | If you want to update Prowler to the latest version, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-a-consolidated-report-of-prowler-security-findings-from-multiple-aws-accounts.html) | General AWS | 
| Update the prowler_scan.sh script. | If you want to update the **prowler_scan.sh** script to the latest version in the repository, do the following: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-a-consolidated-report-of-prowler-security-findings-from-multiple-aws-accounts.html) You might receive warnings related to locally generated files that are not in the GitHub repository, such as finding reports. You can ignore these warnings as long as the **prowler_scan.sh** output shows that the locally stashed changes were merged back in. | General AWS | 

### (Optional) Clean up
<a name="optional-clean-up"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Delete all deployed resources. | You can leave the resources deployed in the accounts. Shutting down the EC2 instance when it is not in use and keeping the S3 bucket empty reduces the cost of maintaining the resources for future scans. If you want to deprovision all resources, do the following: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-a-consolidated-report-of-prowler-security-findings-from-multiple-aws-accounts.html) | AWS DevOps | 

## Troubleshooting
<a name="create-a-consolidated-report-of-prowler-security-findings-from-multiple-aws-accounts-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| Unable to connect to the EC2 instance by using Session Manager. | The SSM Agent must be able to communicate with the Systems Manager endpoint. Do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-a-consolidated-report-of-prowler-security-findings-from-multiple-aws-accounts.html) | 
| When deploying the stack set, the CloudFormation console prompts you to `Enable trusted access with AWS Organizations to use service-managed permissions`. | This indicates that trusted access has not been enabled between AWS Organizations and CloudFormation. Trusted access is required to deploy the service-managed stack set. Choose the button to enable trusted access. For more information, see [Enable trusted access](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-orgs-enable-trusted-access.html) in the CloudFormation documentation. | 

## Related resources
<a name="create-a-consolidated-report-of-prowler-security-findings-from-multiple-aws-accounts-resources"></a>

**AWS documentation**
+ [Implementing security controls on AWS](https://docs.aws.amazon.com/prescriptive-guidance/latest/aws-security-controls/introduction.html) (AWS Prescriptive Guidance)

**Other resources**
+ [Prowler](https://github.com/prowler-cloud/prowler) (GitHub)

## Additional information
<a name="create-a-consolidated-report-of-prowler-security-findings-from-multiple-aws-accounts-additional"></a>

**Programmatically removing errors**

If the results contain `Access Denied` errors, you should remove them from the findings. These errors are typically caused by permissions outside of Prowler's control that prevent it from assessing a particular resource. For example, some checks fail when reviewing S3 buckets provisioned through AWS Control Tower. You can programmatically extract these results and save the filtered results as a new file.

The following commands remove rows that contain a single text string (a pattern) and then output the results to a new file.
+ For Linux or macOS (grep)

  ```
  grep -v -i "Access Denied getting bucket" myoutput.csv > myoutput_modified.csv
  ```
+ For Windows (PowerShell)

  ```
  Select-String -Path myoutput.csv -Pattern 'Access Denied getting bucket' -NotMatch | ForEach-Object { $_.Line } > myoutput_modified.csv
  ```

The following commands remove rows that match more than one text string and then output the results to a new file.
+ For Linux or macOS (uses an escaped pipe between strings)

  ```
  grep -v -i 'Access Denied getting bucket\|Access Denied Trying to Get' myoutput.csv > myoutput_modified.csv
  ```
+ For Windows (Uses a comma between strings)

  ```
  Select-String -Path myoutput.csv -Pattern 'Access Denied getting bucket', 'Access Denied Trying to Get' -NotMatch | ForEach-Object { $_.Line } > myoutput_modified.csv
  ```
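If you prefer a single cross-platform approach, the same case-insensitive row filtering can be done with a short Python script. This is a sketch; the file names and patterns are illustrative:

```python
def filter_rows(in_path, out_path, patterns):
    """Copy in_path to out_path, dropping any line that contains one of
    the given strings (case-insensitive), like grep -v -i."""
    lowered = [p.lower() for p in patterns]
    with open(in_path) as src, open(out_path, "w") as dst:
        for line in src:
            if not any(p in line.lower() for p in lowered):
                dst.write(line)

# Example (illustrative file names):
# filter_rows("myoutput.csv", "myoutput_modified.csv",
#             ["Access Denied getting bucket", "Access Denied Trying to Get"])
```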

**Report examples**

The following image is an example of the **Findings** worksheet in the report of consolidated Prowler findings.

![\[Example of the Findings tab in the report of Prowler scan results\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/81ba9037-9958-4e4a-95b7-d68896075a5b/images/70311fc4-b919-4848-b200-40b35ce81826.png)


The following image is an example of the **Pass Fail** worksheet in the report of consolidated Prowler findings. (By default, pass results are excluded from the output.)

![\[Example of the Pass Fail tab in the report of Prowler scan results\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/81ba9037-9958-4e4a-95b7-d68896075a5b/images/4823e2be-4d5e-4676-9fa3-d47b065dc6d8.png)


The following image is an example of the **Severity** worksheet in the report of consolidated Prowler findings.

![\[Example of the Severity tab in the report of Prowler scan results\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/81ba9037-9958-4e4a-95b7-d68896075a5b/images/b7cbbff1-bca3-4667-9a1e-cc92e2e4adcd.png)


# Deploy and manage AWS Control Tower controls by using AWS CDK and CloudFormation
<a name="deploy-and-manage-aws-control-tower-controls-by-using-aws-cdk-and-aws-cloudformation"></a>

*Iker Reina Fuente and Ivan Girardi, Amazon Web Services*

## Summary
<a name="deploy-and-manage-aws-control-tower-controls-by-using-aws-cdk-and-aws-cloudformation-summary"></a>

This pattern describes how to use AWS CloudFormation and AWS Cloud Development Kit (AWS CDK) to implement and administer preventive, detective, and proactive AWS Control Tower controls as infrastructure as code (IaC). A [control](https://docs.aws.amazon.com/controltower/latest/controlreference/controls.html) (also known as a *guardrail*) is a high-level rule that provides ongoing governance for your overall AWS Control Tower environment. For example, you can use controls to require logging for your AWS accounts and then configure automatic notifications if specific security-related events occur.

AWS Control Tower helps you implement preventive, detective, and proactive controls that govern your AWS resources and monitor compliance across multiple AWS accounts. Each control enforces a single rule. In this pattern, you use a provided IaC template to specify which controls you want to deploy in your environment.

AWS Control Tower controls apply to an entire [organizational unit (OU)](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_getting-started_concepts.html#organizationalunit), and each control affects every AWS account within the OU. Therefore, when users perform any action in any account in your landing zone, the action is subject to the controls that govern the OU.

Implementing AWS Control Tower controls helps establish a strong security foundation for your AWS landing zone. By using this pattern to deploy the controls as IaC through CloudFormation and AWS CDK, you can standardize the controls in your landing zone and more efficiently deploy and manage them. This solution uses [cdk-nag](https://github.com/cdklabs/cdk-nag#readme) to scan the AWS CDK application during deployment. This tool checks the application for adherence to AWS best practices.

To deploy AWS Control Tower controls as IaC, you can also use HashiCorp Terraform instead of AWS CDK. For more information, see [Deploy and manage AWS Control Tower controls by using Terraform](https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-and-manage-aws-control-tower-controls-by-using-terraform.html).

**Intended audience**

This pattern is recommended for users who have experience with AWS Control Tower, CloudFormation, AWS CDK, and AWS Organizations.

## Prerequisites and limitations
<a name="deploy-and-manage-aws-control-tower-controls-by-using-aws-cdk-and-aws-cloudformation-prereqs"></a>

**Prerequisites**
+ Active AWS accounts managed as an organization in AWS Organizations and an AWS Control Tower landing zone. For instructions, see [Getting started](https://docs.aws.amazon.com/controltower/latest/userguide/getting-started-with-control-tower.html) in the AWS Control Tower documentation.
+ AWS Command Line Interface (AWS CLI), [installed](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) and [configured](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html).
+ Node package manager (npm), [installed and configured](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) for the AWS CDK.
+ [Prerequisites](https://docs.aws.amazon.com/cdk/v2/guide/work-with.html#work-with-prerequisites) for AWS CDK.
+ Permissions to assume an existing AWS Identity and Access Management (IAM) role in a deployment account.
+ Permissions to assume an IAM role in the organization’s management account that can be used to bootstrap AWS CDK. The role must have permissions to modify and deploy CloudFormation resources. For more information, see [Bootstrapping](https://docs.aws.amazon.com/cdk/v2/guide/bootstrapping.html#bootstrapping-howto) in the AWS CDK documentation.
+ Permissions to create IAM roles and policies in the organization’s management account. For more information, see [Permissions required to access IAM resources](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_permissions-required.html) in the IAM documentation.

**Limitations**
+ This pattern provides instructions for deploying this solution across AWS accounts, from a deployment account to the organization’s management account. For testing purposes, you can deploy this solution directly in the management account, but instructions for this configuration are not explicitly provided.
+ For AWS Control Tower controls, this pattern requires the use of [global identifiers](https://docs.aws.amazon.com/controltower/latest/controlreference/all-global-identifiers.html) that are in the following format:

  ```
  arn:<PARTITION>:controlcatalog:::control/<CONTROL_CATALOG_OPAQUE_ID>
  ```

  Previous versions of this pattern used [regional identifiers](https://docs.aws.amazon.com/controltower/latest/controlreference/control-metadata-tables.html) that are no longer supported. We recommend that you migrate from regional identifiers to global identifiers. Global identifiers help you manage controls and expand the number of controls you can use.
**Note**  
In most cases, the value for `<PARTITION>` is `aws`.
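Before passing control identifiers to your IaC configuration, you can sanity-check that they follow the global identifier format shown above. The following sketch uses a hypothetical helper and a placeholder opaque ID (real IDs come from the AWS Control Tower controls reference):

```python
import re

# Matches the global control identifier format described above:
# arn:<PARTITION>:controlcatalog:::control/<CONTROL_CATALOG_OPAQUE_ID>
# Note the empty Region and account fields (:::), which distinguish
# global identifiers from the older regional ones.
GLOBAL_CONTROL_ARN = re.compile(
    r"^arn:[a-z0-9-]+:controlcatalog:::control/[a-zA-Z0-9]+$"
)

def is_global_control_id(arn: str) -> bool:
    """Return True if arn looks like a global control identifier."""
    return bool(GLOBAL_CONTROL_ARN.match(arn))
```

A regional identifier such as `arn:aws:controltower:us-east-1::control/...` would fail this check, which helps you catch identifiers that still need to be migrated.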

**Product versions**
+ AWS Control Tower version 3.2 or later
+ Python version 3.9 or later
+ npm version 8.9.0 or later

## Architecture
<a name="deploy-and-manage-aws-control-tower-controls-by-using-aws-cdk-and-aws-cloudformation-architecture"></a>

This section provides a high-level overview of this solution and the architecture established by the sample code. The following diagram shows controls deployed across the various accounts in the OU.

![\[Architecture diagram of controls deployed across all AWS accounts in the organizational unit.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/7d0d5e37-58ac-4621-b6b0-cb8c1c767ab0/images/47264166-3294-4a53-b0a4-5911086d636f.png)


AWS Control Tower controls are categorized according to their *behavior* and their *guidance*.

There are three primary types of control behaviors:

1. *Preventive controls* are designed to prevent actions from occurring. These are implemented with [service control policies (SCPs)](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html) or [resource control policies (RCPs)](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_rcps.html) in AWS Organizations. The status of a preventive control is either **enforced** or **not enabled**. Preventive controls are supported in all AWS Regions.

1. *Detective controls* are designed to detect specific events when they occur and log the action in AWS CloudTrail. These are implemented with [AWS Config rules](https://docs.aws.amazon.com/config/latest/developerguide/evaluate-config.html). The status of a detective control is either **clear**, **in violation**, or **not enabled**. Detective controls apply only in those AWS Regions supported by AWS Control Tower.

1. *Proactive controls* scan resources that would be provisioned by AWS CloudFormation and check whether they are compliant with your company policies and objectives. Resources that are not compliant will not be provisioned. These are implemented with [AWS CloudFormation hooks](https://docs.aws.amazon.com/cloudformation-cli/latest/userguide/hooks.html). The status of a proactive control is **PASS**, **FAIL**, or **SKIP**.

Control *guidance* refers to the recommended practice for how to apply each control to your OUs. AWS Control Tower provides three categories of guidance: *mandatory*, *strongly recommended*, and *elective*. The guidance of a control is independent of its behavior. For more information, see [Control behavior and guidance](https://docs.aws.amazon.com/controltower/latest/userguide/controls.html#control-behavior).
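The behavior taxonomy above can be captured in a small lookup table, which can be handy when you post-process exported control status reports. This sketch only restates the categories described in this section; the function name is illustrative:

```python
# Control behaviors, their implementation mechanism, and possible statuses,
# as described in this section.
CONTROL_BEHAVIORS = {
    "preventive": {
        "implemented_with": "SCPs or RCPs (AWS Organizations)",
        "statuses": {"enforced", "not enabled"},
    },
    "detective": {
        "implemented_with": "AWS Config rules",
        "statuses": {"clear", "in violation", "not enabled"},
    },
    "proactive": {
        "implemented_with": "AWS CloudFormation hooks",
        "statuses": {"PASS", "FAIL", "SKIP"},
    },
}

def is_valid_status(behavior: str, status: str) -> bool:
    """Check that a reported status is possible for the given behavior."""
    return status in CONTROL_BEHAVIORS.get(behavior, {}).get("statuses", set())
```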

## Tools
<a name="deploy-and-manage-aws-control-tower-controls-by-using-aws-cdk-and-aws-cloudformation-tools"></a>

**AWS services**
+ [AWS Cloud Development Kit (AWS CDK)](https://docs.aws.amazon.com/cdk/v2/guide/home.html) is a software development framework that helps you define and provision AWS Cloud infrastructure in code. The [AWS CDK Toolkit](https://docs.aws.amazon.com/cdk/v2/guide/cli.html) is the primary tool for interacting with your AWS CDK app.
+ [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) helps you set up AWS resources, provision them quickly and consistently, and manage them throughout their lifecycle across AWS accounts and AWS Regions.
+ [AWS Config](https://docs.aws.amazon.com/config/latest/developerguide/WhatIsConfig.html) provides a detailed view of the resources in your AWS account and how they’re configured. It helps you identify how resources are related to one another and how their configurations have changed over time.
+ [AWS Control Tower](https://docs.aws.amazon.com/controltower/latest/userguide/what-is-control-tower.html) helps you set up and govern an AWS multi-account environment, following prescriptive best practices.
+ [AWS Organizations](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_introduction.html) is an account management service that helps you consolidate multiple AWS accounts into an organization that you create and centrally manage.

**Other tools**
+ [cdk-nag](https://github.com/cdklabs/cdk-nag#readme) is an open-source tool that uses a combination of rule packs to check AWS CDK applications for adherence to best practices.
+ [npm](https://docs.npmjs.com/about-npm/) is a software registry that runs in a Node.js environment and is used to share or borrow packages and manage deployment of private packages.
+ [Python](https://www.python.org/) is a general-purpose computer programming language.

**Code repository**

The code for this pattern is available in the GitHub [Deploy AWS Control Tower controls using AWS CDK](https://github.com/aws-samples/aws-control-tower-controls-cdk) repository. You use the **cdk.json** file to interact with the AWS CDK app, and you use the **package.json** file to install the npm packages.

## Best practices
<a name="deploy-and-manage-aws-control-tower-controls-by-using-aws-cdk-and-aws-cloudformation-best-practices"></a>
+ Follow the [principle of least-privilege](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#grant-least-privilege) (IAM documentation). The sample IAM policy and trust policy provided in this pattern include the minimum permissions required, and the AWS CDK stacks created in the management account are restricted by these permissions.
+ Follow the [Best practices for AWS Control Tower administrators](https://docs.aws.amazon.com/controltower/latest/userguide/best-practices.html) (AWS Control Tower documentation).
+ Follow the [Best practices for developing and deploying cloud infrastructure with the AWS CDK](https://docs.aws.amazon.com/cdk/v2/guide/best-practices.html) (AWS CDK documentation).
+ When bootstrapping the AWS CDK, customize the bootstrap template to define policies and the trusted accounts that should have the ability to read and write to any resource in the management account. For more information, see [Customizing bootstrapping](https://docs.aws.amazon.com/cdk/v2/guide/bootstrapping.html#bootstrapping-customizing).
+ Use code analysis tools, such as [cfn_nag](https://github.com/stelligent/cfn_nag), to scan the generated CloudFormation templates. The cfn-nag tool looks for patterns in CloudFormation templates that might indicate the infrastructure is not secure. You can also use cdk-nag to check your CloudFormation templates by using the [cloudformation-include](https://docs.aws.amazon.com/cdk/v2/guide/use_cfn_template.html) module.

## Epics
<a name="deploy-and-manage-aws-control-tower-controls-by-using-aws-cdk-and-aws-cloudformation-epics"></a>

### Prepare to enable the controls
<a name="prepare-to-enable-the-controls"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the IAM role in the management account. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-and-manage-aws-control-tower-controls-by-using-aws-cdk-and-aws-cloudformation.html) | DevOps engineer, General AWS | 
| Bootstrap AWS CDK. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-and-manage-aws-control-tower-controls-by-using-aws-cdk-and-aws-cloudformation.html) | DevOps engineer, General AWS, Python | 
| Clone the repository. | In a bash shell, enter the following command. This clones the [Deploy AWS Control Tower controls using AWS CDK](https://github.com/aws-samples/aws-control-tower-controls-cdk) repository from GitHub.<pre>git clone https://github.com/aws-samples/aws-control-tower-controls-cdk.git</pre> | DevOps engineer, General AWS | 
| Edit the AWS CDK configuration file. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-and-manage-aws-control-tower-controls-by-using-aws-cdk-and-aws-cloudformation.html) | DevOps engineer, General AWS | 

### Enable controls in the management account
<a name="enable-controls-in-the-management-account"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Assume the IAM role in the deployment account. | In the deployment account, assume the IAM role that has permissions to deploy the AWS CDK stacks in the management account. For more information about assuming an IAM role in the AWS CLI, see [Use an IAM role in the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-role.html). | DevOps engineer, General AWS | 
| Activate the environment. | If you are using Linux or macOS: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-and-manage-aws-control-tower-controls-by-using-aws-cdk-and-aws-cloudformation.html) If you are using Windows: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-and-manage-aws-control-tower-controls-by-using-aws-cdk-and-aws-cloudformation.html) | DevOps engineer, General AWS | 
| Install the dependencies. | After the virtual environment is activated, enter the following command to run the **install_deps.sh** script. This script installs the required dependencies.<pre>$ ./scripts/install_deps.sh</pre> | DevOps engineer, General AWS, Python | 
| Deploy the stack. | Enter the following commands to synthesize and deploy the CloudFormation stack.<pre>$ npx cdk synth<br />$ npx cdk deploy</pre> | DevOps engineer, General AWS, Python | 
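Before the deployment steps above, the "Assume the IAM role" task can also be scripted. The following is a minimal sketch, not part of the pattern's repository: the `credentials_to_env` helper is hypothetical, and the role ARN in the comment is a placeholder.

```python
# Hypothetical helper (illustrative only): convert the Credentials block
# returned by sts.assume_role into the environment variables that the
# AWS CLI and the AWS CDK CLI read.
def credentials_to_env(credentials: dict) -> dict:
    return {
        "AWS_ACCESS_KEY_ID": credentials["AccessKeyId"],
        "AWS_SECRET_ACCESS_KEY": credentials["SecretAccessKey"],
        "AWS_SESSION_TOKEN": credentials["SessionToken"],
    }

# Typical use (requires boto3 and credentials that allow sts:AssumeRole;
# the role ARN below is a placeholder, not a value from this pattern):
#   import boto3, os
#   resp = boto3.client("sts").assume_role(
#       RoleArn="arn:aws:iam::<MANAGEMENT-ACCOUNT-ID>:role/<ROLE-NAME>",
#       RoleSessionName="controls-deploy",
#   )
#   os.environ.update(credentials_to_env(resp["Credentials"]))
```

With the three variables exported, subsequent `npx cdk synth` and `npx cdk deploy` commands in the same shell run under the assumed role.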

## Related resources
<a name="deploy-and-manage-aws-control-tower-controls-by-using-aws-cdk-and-aws-cloudformation-resources"></a>

**AWS documentation**
+ [About controls](https://docs.aws.amazon.com/controltower/latest/controlreference/controls.html) (AWS Control Tower documentation)
+ [Controls library](https://docs.aws.amazon.com/controltower/latest/controlreference/controls-reference.html) (AWS Control Tower documentation)
+ [AWS CDK Toolkit commands](https://docs.aws.amazon.com/cdk/v2/guide/cli.html#cli-commands) (AWS CDK documentation)
+ [Deploy and manage AWS Control Tower controls by using Terraform](https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-and-manage-aws-control-tower-controls-by-using-terraform.html) (AWS Prescriptive Guidance)

**Other resources**
+ [Python](https://www.python.org/)

## Additional information
<a name="deploy-and-manage-aws-control-tower-controls-by-using-aws-cdk-and-aws-cloudformation-additional"></a>

**Example constants.py file**

The following is an example of an updated **constants.py** file. This sample enables the **AWS-GR_DISALLOW_CROSS_REGION_NETWORKING** control (global ID: `dvuaav61i5cnfazfelmvn9m6k`) and the **AWS-GR_SUBNET_AUTO_ASSIGN_PUBLIC_IP_DISABLED** control (global ID: `50z1ot237wl8u1lv5ufau6qqo`). For a list of global IDs, see [All global identifiers](https://docs.aws.amazon.com/controltower/latest/controlreference/all-global-identifiers.html) in the AWS Control Tower documentation.

```
ACCOUNT_ID = "111122223333"
AWS_CONTROL_TOWER_REGION = "us-east-2"
ROLE_ARN = "arn:aws:iam::111122223333:role/CT-Controls-Role"
GUARDRAILS_CONFIGURATION = [
    {
        "Enable-Control": {
            "dvuaav61i5cnfazfelmvn9m6k": {  # AWS-GR_DISALLOW_CROSS_REGION_NETWORKING
                "Parameters": {
                    "ExemptedPrincipalArns": ["arn:aws:iam::111122223333:role/RoleName"]
                },
                "Tags": [{"key": "Environment", "value": "Production"}]
            },
            ...
        },
        "OrganizationalUnitIds": ["ou-1111-11111111", "ou-2222-22222222"...],
    },
    {
        "Enable-Control": {
            "50z1ot237wl8u1lv5ufau6qqo": {},  # AWS-GR_SUBNET_AUTO_ASSIGN_PUBLIC_IP_DISABLED
            ...
        },
        "OrganizationalUnitIds": ["ou-2222-22222222"...],
    },
]
```
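The structure above can be traversed programmatically, for example to preview what will be enabled where. The following sketch is illustrative only and is not part of the pattern's repository: the `enumerate_controls` helper is hypothetical, and the ARN format shown assumes the global-identifier format documented for AWS Control Tower controls (`arn:<PARTITION>:controlcatalog:::control/<ID>`).

```python
# Hypothetical sketch: enumerate every (organizational unit, control)
# pair that a GUARDRAILS_CONFIGURATION like the example above would
# enable, and build each control's global-identifier ARN.
GUARDRAILS_CONFIGURATION = [
    {
        "Enable-Control": {
            "dvuaav61i5cnfazfelmvn9m6k": {  # AWS-GR_DISALLOW_CROSS_REGION_NETWORKING
                "Parameters": {
                    "ExemptedPrincipalArns": ["arn:aws:iam::111122223333:role/RoleName"]
                },
            },
        },
        "OrganizationalUnitIds": ["ou-1111-11111111", "ou-2222-22222222"],
    },
]


def enumerate_controls(configuration, partition="aws"):
    """Yield (ou_id, control_arn, parameters) for every control/OU pairing."""
    for block in configuration:
        for control_id, config in block["Enable-Control"].items():
            arn = f"arn:{partition}:controlcatalog:::control/{control_id}"
            parameters = (config or {}).get("Parameters", {})
            for ou_id in block["OrganizationalUnitIds"]:
                yield ou_id, arn, parameters
```

A single control applied to two OUs yields two pairings, which matches how AWS Control Tower enables controls per OU.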

**IAM policy**

The following sample policy allows the minimum actions required to enable or disable AWS Control Tower controls when deploying AWS CDK stacks from a deployment account to the management account.

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "controltower:EnableControl",
                "controltower:DisableControl",
                "controltower:GetControlOperation",
                "controltower:ListEnabledControls",
                "organizations:AttachPolicy",
                "organizations:CreatePolicy",
                "organizations:DeletePolicy",
                "organizations:DescribeOrganization",
                "organizations:DescribeOrganizationalUnit",
                "organizations:DetachPolicy",
                "organizations:ListAccounts",
                "organizations:ListAWSServiceAccessForOrganization",
                "organizations:ListChildren",
                "organizations:ListOrganizationalUnitsForParent",
                "organizations:ListParents",
                "organizations:ListPoliciesForTarget",
                "organizations:ListRoots",
                "organizations:UpdatePolicy",
                "ssm:GetParameters"
            ],
            "Resource": "*"
        }
    ]
}
```

**Trust policy**

The following custom trust policy allows a specific IAM role in the deployment account to assume the IAM role in the management account. Replace the following:
+ `<DEPLOYMENT-ACCOUNT-ID>` with the ID of the deployment account
+ `<DEPLOYMENT-ROLE-NAME>` with the name of the role in the deployment account that is allowed to assume the role in the management account

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::<DEPLOYMENT-ACCOUNT-ID>:role/<DEPLOYMENT-ROLE-NAME>"
            },
            "Action": "sts:AssumeRole",
            "Condition": {}
        }
    ]
}
```
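If you attach this trust policy frequently, it can be rendered from its two placeholders. The following is a sketch only, not part of the pattern's repository; the `trust_policy` helper is hypothetical.

```python
import json


def trust_policy(deployment_account_id: str, deployment_role_name: str) -> str:
    """Render the custom trust policy above for a given deployment
    account ID and role name (hypothetical helper, illustrative only)."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {
                    "AWS": f"arn:aws:iam::{deployment_account_id}:role/{deployment_role_name}"
                },
                "Action": "sts:AssumeRole",
                "Condition": {},
            }
        ],
    }
    return json.dumps(policy, indent=4)
```

The output can be passed directly to `aws iam update-assume-role-policy` or embedded in IaC.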

# Deploy and manage AWS Control Tower controls by using Terraform
<a name="deploy-and-manage-aws-control-tower-controls-by-using-terraform"></a>

*Iker Reina Fuente and Ivan Girardi, Amazon Web Services*

## Summary
<a name="deploy-and-manage-aws-control-tower-controls-by-using-terraform-summary"></a>

This pattern describes how to use AWS Control Tower controls, HashiCorp Terraform, and infrastructure as code (IaC) to implement and administer preventive, detective, and proactive security controls. A [control](https://docs.aws.amazon.com/controltower/latest/userguide/controls.html) (also known as a *guardrail*) is a high-level rule that provides ongoing governance for your overall AWS Control Tower environment. For example, you can use controls to require logging for your AWS accounts and then configure automatic notifications if specific security-related events occur.

AWS Control Tower helps you implement preventive, detective, and proactive controls that govern your AWS resources and monitor compliance across multiple AWS accounts. Each control enforces a single rule. In this pattern, you use a provided IaC template to specify which controls you want to deploy in your environment.

AWS Control Tower controls apply to an entire [organizational unit (OU)](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_getting-started_concepts.html#organizationalunit), and the control affects every AWS account within the OU. Therefore, when users perform any action in any account in your landing zone, the action is subject to the controls that govern the OU.

Implementing AWS Control Tower controls helps establish a strong security foundation for your AWS landing zone. By using this pattern to deploy the controls as IaC through Terraform, you can standardize the controls in your landing zone and more efficiently deploy and manage them.

To deploy AWS Control Tower controls as IaC, you can also use AWS Cloud Development Kit (AWS CDK) instead of Terraform. For more information, see [Deploy and manage AWS Control Tower controls by using AWS CDK and AWS CloudFormation](https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-and-manage-aws-control-tower-controls-by-using-aws-cdk-and-aws-cloudformation.html).

**Intended audience**

This pattern is recommended for users who have experience with AWS Control Tower, Terraform, and AWS Organizations.

## Prerequisites and limitations
<a name="deploy-and-manage-aws-control-tower-controls-by-using-terraform-prereqs"></a>

**Prerequisites**
+ Active AWS accounts managed as an organization in AWS Organizations and an AWS Control Tower landing zone. For instructions, see [Getting started](https://docs.aws.amazon.com/controltower/latest/userguide/getting-started-with-control-tower.html) in the AWS Control Tower documentation.
+ AWS Command Line Interface (AWS CLI), [installed](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) and [configured](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html).
+ An AWS Identity and Access Management (IAM) role in the management account that has permissions to deploy this pattern. For more information about the required permissions and a sample policy, see *Least privilege permissions for the IAM role* in the [Additional information](#deploy-and-manage-aws-control-tower-controls-by-using-terraform-additional) section of this pattern.
+ Permissions to assume the IAM role in the management account.
+ Terraform CLI, [installed](https://developer.hashicorp.com/terraform/cli) (Terraform documentation).
+ Terraform AWS Provider, [configured](https://hashicorp.github.io/terraform-provider-aws/) (Terraform documentation).
+ Terraform backend, [configured](https://developer.hashicorp.com/terraform/language/backend) (Terraform documentation).

**Limitations**
+ For AWS Control Tower controls, this pattern requires the use of [global identifiers](https://docs.aws.amazon.com/controltower/latest/controlreference/all-global-identifiers.html) that are in the following format:

  ```
  arn:<PARTITION>:controlcatalog:::control/<CONTROL_CATALOG_OPAQUE_ID>
  ```

  Previous versions of this pattern used [regional identifiers](https://docs.aws.amazon.com/controltower/latest/controlreference/control-metadata-tables.html) that are no longer supported. We recommend that you migrate from regional identifiers to global identifiers. Global identifiers simplify control management and give you access to a larger number of controls.
**Note**  
In most cases, the value for `<PARTITION>` is `aws`.
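The global-identifier format above can be expressed as a simple format string. This is an illustrative sketch, not code from the pattern; the `control_arn` helper is hypothetical.

```python
def control_arn(control_catalog_opaque_id: str, partition: str = "aws") -> str:
    """Build a control's global-identifier ARN from its opaque ID
    (taken from the "All global identifiers" list) and the partition."""
    return f"arn:{partition}:controlcatalog:::control/{control_catalog_opaque_id}"
```

For example, `control_arn("50z1ot237wl8u1lv5ufau6qqo")` returns `arn:aws:controlcatalog:::control/50z1ot237wl8u1lv5ufau6qqo`.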

**Product versions**
+ AWS Control Tower version 3.2 or later
+ Terraform version 1.5 or later
+ Terraform AWS Provider version 4.67 or later

## Architecture
<a name="deploy-and-manage-aws-control-tower-controls-by-using-terraform-architecture"></a>

This section provides a high-level overview of this solution and the architecture established by the sample code. The following diagram shows controls deployed across the various accounts in the OU.

![\[Architecture diagram of controls deployed across all AWS accounts in the organizational unit.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/6e0d6c30-a539-44b7-8415-e669fb2ad26a/images/60407c0e-852e-4d5f-9a7d-8510316063aa.png)


AWS Control Tower controls are categorized according to their *behavior* and their *guidance*.

There are three primary types of control behaviors:

1. *Preventive controls* are designed to prevent actions from occurring. These are implemented with [service control policies (SCPs)](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html) or [resource control policies (RCPs)](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_rcps.html) in AWS Organizations. The status of a preventive control is either **enforced** or **not enabled**. Preventive controls are supported in all AWS Regions.

1. *Detective controls* are designed to detect specific events when they occur and log the action in AWS CloudTrail. These are implemented with [AWS Config rules](https://docs.aws.amazon.com/config/latest/developerguide/evaluate-config.html). The status of a detective control is either **clear**, **in violation**, or **not enabled**. Detective controls apply only in those AWS Regions supported by AWS Control Tower.

1. *Proactive controls* scan resources that would be provisioned by AWS CloudFormation and check whether they are compliant with your company policies and objectives. Resources that are not compliant will not be provisioned. These are implemented with [AWS CloudFormation hooks](https://docs.aws.amazon.com/cloudformation-cli/latest/userguide/hooks.html). The status of a proactive control is **PASS**, **FAIL**, or **SKIP**.

Control *guidance* is the recommended practice for how to apply each control to your OUs. AWS Control Tower provides three categories of guidance: *mandatory*, *strongly recommended*, and *elective*. The guidance of a control is independent of its behavior. For more information, see [Control behavior and guidance](https://docs.aws.amazon.com/controltower/latest/userguide/controls.html#control-behavior).

## Tools
<a name="deploy-and-manage-aws-control-tower-controls-by-using-terraform-tools"></a>

**AWS services**
+ [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) helps you set up AWS resources, provision them quickly and consistently, and manage them throughout their lifecycle across AWS accounts and Regions.
+ [AWS Config](https://docs.aws.amazon.com/config/latest/developerguide/WhatIsConfig.html) provides a detailed view of the resources in your AWS account and how they’re configured. It helps you identify how resources are related to one another and how their configurations have changed over time.
+ [AWS Control Tower](https://docs.aws.amazon.com/controltower/latest/userguide/what-is-control-tower.html) helps you set up and govern an AWS multi-account environment, following prescriptive best practices.
+ [AWS Organizations](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_introduction.html) is an account management service that helps you consolidate multiple AWS accounts into an organization that you create and centrally manage.

**Other tools**
+ [HashiCorp Terraform](https://www.terraform.io/docs) is an infrastructure as code (IaC) tool that helps you use code to provision and manage cloud infrastructure and resources.

**Code repository**

The code for this pattern is available in the GitHub [Deploy and manage AWS Control Tower controls by using Terraform](https://github.com/aws-samples/aws-control-tower-controls-terraform) repository.

## Best practices
<a name="deploy-and-manage-aws-control-tower-controls-by-using-terraform-best-practices"></a>
+ The IAM role used to deploy this solution should adhere to the [principle of least-privilege](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#grant-least-privilege) (IAM documentation).
+ Follow the [Best practices for AWS Control Tower administrators](https://docs.aws.amazon.com/controltower/latest/userguide/best-practices.html) (AWS Control Tower documentation).

## Epics
<a name="deploy-and-manage-aws-control-tower-controls-by-using-terraform-epics"></a>

### Enable controls in the management account
<a name="enable-controls-in-the-management-account"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clone the repository. | In a bash shell, enter the following command. This clones the [Deploy and manage AWS Control Tower controls by using Terraform](https://github.com/aws-samples/aws-control-tower-controls-terraform) repository from GitHub.<pre>git clone https://github.com/aws-samples/aws-control-tower-controls-terraform.git</pre> | DevOps engineer | 
| Edit the Terraform backend configuration file. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-and-manage-aws-control-tower-controls-by-using-terraform.html) | DevOps engineer, Terraform | 
| Edit the Terraform provider configuration file. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-and-manage-aws-control-tower-controls-by-using-terraform.html) | DevOps engineer, Terraform | 
| Edit the configuration file. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-and-manage-aws-control-tower-controls-by-using-terraform.html) | DevOps engineer, General AWS, Terraform | 
| Assume the IAM role in the management account. | In the management account, assume the IAM role that has permissions to deploy the Terraform configuration file. For more information about the permissions required and a sample policy, see *Least privilege permissions for the IAM role* in the [Additional information](#deploy-and-manage-aws-control-tower-controls-by-using-terraform-additional) section. For more information about assuming an IAM role in the AWS CLI, see [Use an IAM role in the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-role.html). | DevOps engineer, General AWS | 
| Deploy the configuration file. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-and-manage-aws-control-tower-controls-by-using-terraform.html) | DevOps engineer, General AWS, Terraform | 

### (Optional) Disable controls in the AWS Control Tower management account
<a name="optional-disable-controls-in-the-ctower-management-account"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Run the `destroy` command. | Enter the following command to remove the resources deployed by this pattern.<pre>$ terraform destroy -var-file="variables.tfvars"</pre> | DevOps engineer, General AWS, Terraform | 

## Troubleshooting
<a name="deploy-and-manage-aws-control-tower-controls-by-using-terraform-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| `Error: creating ControlTower Control ValidationException: Guardrail <control ID> is already enabled on organizational unit <OU ID>` error | The control you are trying to enable is already enabled in the target OU. This error can occur if a user manually enabled the control through the AWS Management Console, AWS Control Tower, or AWS Organizations. To deploy the Terraform configuration file, use either of the following options.**Option 1: Update the current Terraform state file**Import the resource into the current Terraform state file. When you rerun the `apply` command, Terraform skips this resource. To import the resource, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-and-manage-aws-control-tower-controls-by-using-terraform.html)**Option 2: Disable the control**If you are working in a non-production environment, you can disable the control in the console and then re-enable it by repeating the steps in *Deploy the configuration file* in the [Epics](#deploy-and-manage-aws-control-tower-controls-by-using-terraform-epics) section. This approach is not recommended for production environments because the control is disabled for a period of time. If you must use this option in a production environment, implement temporary compensating controls, such as temporarily attaching an SCP in AWS Organizations. | 

## Related resources
<a name="deploy-and-manage-aws-control-tower-controls-by-using-terraform-resources"></a>

**AWS documentation**
+ [About controls](https://docs.aws.amazon.com/controltower/latest/userguide/controls.html) (AWS Control Tower documentation)
+ [Controls library](https://docs.aws.amazon.com/controltower/latest/userguide/controls-reference.html) (AWS Control Tower documentation)
+ [Deploy and manage AWS Control Tower controls by using AWS CDK and AWS CloudFormation](https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-and-manage-aws-control-tower-controls-by-using-aws-cdk-and-aws-cloudformation.html) (AWS Prescriptive Guidance)

**Other resources**
+ [Terraform](https://www.terraform.io/)
+ [Terraform CLI documentation](https://www.terraform.io/cli)

## Additional information
<a name="deploy-and-manage-aws-control-tower-controls-by-using-terraform-additional"></a>

**Example variables.tfvars file**

The following is an example of an updated **variables.tfvars** file. This sample enables the **AWS-GR_ENCRYPTED_VOLUMES** control (global ID: `503uicglhjkokaajywfpt6ros`) and the **AWS-GR_SUBNET_AUTO_ASSIGN_PUBLIC_IP_DISABLED** control (global ID: `50z1ot237wl8u1lv5ufau6qqo`). For a list of global IDs, see [All global identifiers](https://docs.aws.amazon.com/controltower/latest/controlreference/all-global-identifiers.html) in the AWS Control Tower documentation.

The following example also enables controls that have parameters, such as **CT.S3.PV.5** (global ID: `7mo7a2h2ebsq71l8k6uzr96ou`) and **CT.SECRETSMANAGER.PV.1** (global ID: `dvhe47fxg5o6lryqrq9g6sxg4`). For a list of controls with parameters, see [Controls with parameters](https://docs.aws.amazon.com/controltower/latest/controlreference/control-parameter-concepts.html) in the AWS Control Tower documentation.

```
controls = [
    {
        control_names = [
            "503uicglhjkokaajywfpt6ros", # AWS-GR_ENCRYPTED_VOLUMES
            ...
        ],
        organizational_unit_ids = ["ou-1111-11111111", "ou-2222-22222222"...],
    },
    {
        control_names = [
            "50z1ot237wl8u1lv5ufau6qqo", # AWS-GR_SUBNET_AUTO_ASSIGN_PUBLIC_IP_DISABLED
            ...
        ],
        organizational_unit_ids = ["ou-1111-11111111"...],
    },
]

controls_with_params = [
  {
    control_names = [
      { "7mo7a2h2ebsq71l8k6uzr96ou" = { # CT.S3.PV.5
        parameters = {
          "ExemptedPrincipalArns" : ["arn:aws:iam::*:role/RoleName"],
          "ExemptedResourceArns" : [],
        }
      } },
      { "dvhe47fxg5o6lryqrq9g6sxg4" = { # CT.SECRETSMANAGER.PV.1
        parameters = {
          "ExemptedPrincipalArns" : ["arn:aws:iam::*:role/RoleName"],
        }
      } },
      ...
    ],
    organizational_unit_ids = ["ou-1111-11111111"...]
  },
  {
    control_names = [
      { "dvuaav61i5cnfazfelmvn9m6k" = { # AWS-GR_DISALLOW_CROSS_REGION_NETWORKING
        parameters = {
          "ExemptedPrincipalArns" : ["arn:aws:iam::*:role/RoleName"],
        }
      } },
      { "41ngl8m5c4eb1myoz0t707n7h" = { # AWS-GR_DISALLOW_VPC_INTERNET_ACCESS
        parameters = {
          "ExemptedPrincipalArns" : ["arn:aws:iam::*:role/RoleName"],
        }
      } },
      ...
    ],
    organizational_unit_ids = ["ou-2222-22222222"...]
  }
]
```
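To see the shape of the `controls_with_params` structure at a glance, it can be mirrored in a general-purpose language and flattened into one row per control/OU pairing. The following Python sketch is illustrative only (the repository itself uses HCL); the `flatten` helper is hypothetical.

```python
# Illustrative mirror of one controls_with_params entry from the
# example above, flattened into (ou_id, control_id, parameters) rows.
controls_with_params = [
    {
        "control_names": [
            {"7mo7a2h2ebsq71l8k6uzr96ou": {  # CT.S3.PV.5
                "parameters": {
                    "ExemptedPrincipalArns": ["arn:aws:iam::*:role/RoleName"],
                    "ExemptedResourceArns": [],
                }
            }},
        ],
        "organizational_unit_ids": ["ou-1111-11111111"],
    },
]


def flatten(entries):
    """Expand each entry into one row per (OU, control) pairing."""
    rows = []
    for entry in entries:
        for named in entry["control_names"]:
            for control_id, config in named.items():
                for ou_id in entry["organizational_unit_ids"]:
                    rows.append((ou_id, control_id, config["parameters"]))
    return rows
```

Each row corresponds to one control enablement that Terraform manages for the listed OU.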

**Least privilege permissions for the IAM role**

This pattern requires that you assume an IAM role in the management account. Best practice is to assume a role with temporary permissions and limit the permissions according to the principle of least privilege. The following sample policy allows the minimum actions required to enable or disable AWS Control Tower controls.

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "controltower:EnableControl",
                "controltower:DisableControl",
                "controltower:GetControlOperation",
                "controltower:ListEnabledControls",
                "organizations:AttachPolicy",
                "organizations:CreatePolicy",
                "organizations:DeletePolicy",
                "organizations:DescribeOrganization",
                "organizations:DetachPolicy",
                "organizations:ListAccounts",
                "organizations:ListAWSServiceAccessForOrganization",
                "organizations:ListChildren",
                "organizations:ListOrganizationalUnitsForParent",
                "organizations:ListParents",
                "organizations:ListPoliciesForTarget",
                "organizations:ListRoots",
                "organizations:UpdatePolicy"
            ],
            "Resource": "*"
        }
    ]
}
```

# Deploy the Security Automations for AWS WAF solution by using Terraform
<a name="deploy-the-security-automations-for-aws-waf-solution-by-using-terraform"></a>

*Dr. Rahul Sharad Gaikwad and Tamilselvan P, Amazon Web Services*

## Summary
<a name="deploy-the-security-automations-for-aws-waf-solution-by-using-terraform-summary"></a>

AWS WAF is a web application firewall that helps protect applications from common exploits by using customizable rules, which you define and deploy in *web access control lists* (ACLs). Configuring AWS WAF rules can be challenging, especially for organizations that do not have dedicated security teams. To simplify this process, Amazon Web Services (AWS) offers the [Security Automations for AWS WAF](https://aws.amazon.com/solutions/implementations/security-automations-for-aws-waf/) solution, which automatically deploys a single web ACL with a set of AWS WAF rules that filters web-based attacks. During Terraform deployment, you can specify which protective features to include. After you deploy this solution, AWS WAF inspects web requests to existing Amazon CloudFront distributions or Application Load Balancers and blocks any requests that match the solution's rules.

The Security Automations for AWS WAF solution can be deployed by using AWS CloudFormation according to the instructions in the [Security Automations for AWS WAF Implementation Guide](https://docs.aws.amazon.com/solutions/latest/security-automations-for-aws-waf/overview.html). This pattern provides an alternative deployment option for organizations that use HashiCorp Terraform as their preferred infrastructure as code (IaC) tool to provision and manage their cloud infrastructure. When you deploy this solution, Terraform automatically applies the changes in the cloud and deploys and configures the AWS WAF settings and protective features.

## Prerequisites and limitations
<a name="deploy-the-security-automations-for-aws-waf-solution-by-using-terraform-prereqs"></a>

**Prerequisites**
+ An active AWS account.
+ AWS Command Line Interface (AWS CLI) version 2.4.25 or later, installed and configured with necessary permissions. For more information, see [Getting started](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html) (AWS CLI documentation).
+ Terraform version 1.1.9 or later, installed and configured. For more information, see [Install Terraform](https://learn.hashicorp.com/tutorials/terraform/install-cli) (Terraform documentation).

## Architecture
<a name="deploy-the-security-automations-for-aws-waf-solution-by-using-terraform-architecture"></a>

**Target architecture**

This pattern deploys the Security Automations for AWS WAF solution. For more information about the target architecture, see [Architecture overview](https://docs.aws.amazon.com/solutions/latest/security-automations-for-aws-waf/overview.html) in the *Security Automations for AWS WAF Implementation Guide*. For more information about the AWS Lambda automations in this deployment (the Application log parser, the AWS WAF log parser, the IP lists parser, and the Access handler), see [Component details](https://docs.aws.amazon.com/solutions/latest/security-automations-for-aws-waf/appendix-b.html) in the *Security Automations for AWS WAF Implementation Guide*.

**Terraform deployment**

When you run `terraform apply`, Terraform does the following:

1. Terraform creates AWS Identity and Access Management (IAM) roles and Lambda functions based on the inputs from the **testing.tfvars** file.

1. Terraform creates AWS WAF ACL rules and IP sets based on the inputs from the **testing.tfvars** file.

1. Terraform creates the Amazon Simple Storage Service (Amazon S3) buckets, Amazon EventBridge rules, AWS Glue database tables, and Amazon Athena work groups based on the inputs from the **testing.tfvars** file.

1. Terraform deploys the AWS CloudFormation stack to provision the custom resources.

1. Terraform creates the Amazon API Gateway resources based on the inputs from the **testing.tfvars** file.

**Automation and scale**

You can use this pattern to create AWS WAF rules for multiple AWS accounts and AWS Regions to deploy the Security Automations for AWS WAF solution throughout your AWS Cloud environment.

## Tools
<a name="deploy-the-security-automations-for-aws-waf-solution-by-using-terraform-tools"></a>

**AWS services**
+ [AWS Command Line Interface (AWS CLI)](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) is an open-source tool that helps you interact with AWS services through commands in your command-line shell.
+ [AWS WAF](https://docs.aws.amazon.com/waf/latest/developerguide/what-is-aws-waf.html) is a web application firewall that helps you monitor HTTP and HTTPS requests that are forwarded to your protected web application resources.

**Other services**
+ [Git](https://git-scm.com/docs) is an open-source, distributed version control system.
+ [HashiCorp Terraform](https://www.terraform.io/docs) is a command-line interface application that helps you use code to provision and manage cloud infrastructure and resources.

**Code repository**

The code for this pattern is available in the GitHub [AWS WAF Automation Using Terraform](https://github.com/aws-samples/aws-waf-automation-terraform-samples) repository.

## Best practices
<a name="deploy-the-security-automations-for-aws-waf-solution-by-using-terraform-best-practices"></a>
+ Put static files in separate Amazon S3 buckets.
+ Avoid hardcoding variables.
+ Limit the use of custom scripts.
+ Adopt a naming convention.

## Epics
<a name="deploy-the-security-automations-for-aws-waf-solution-by-using-terraform-epics"></a>

### Set up your local workstation
<a name="set-up-your-local-workstation"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Install Git. | Follow the instructions in [Getting started](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) (Git website) to install Git on your local workstation. | DevOps engineer | 
| Clone the repository. | On your local workstation, enter the following command to clone the code repository:<pre>git clone https://github.com/aws-samples/aws-waf-automation-terraform-samples.git</pre> | DevOps engineer | 
| Update the variables. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-the-security-automations-for-aws-waf-solution-by-using-terraform.html) | DevOps engineer | 

### Provision the target architecture using Terraform
<a name="provision-the-target-architecture-using-terraform"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Initialize the Terraform configuration. | Enter the following command to initialize your working directory that contains the Terraform configuration files:<pre>terraform init</pre> | DevOps engineer | 
| Preview the Terraform plan. | Enter the following command. Terraform evaluates the configuration files to determine the target state for the declared resources. It then compares the target state against the current state and creates a plan:<pre>terraform plan -var-file="testing.tfvars"</pre> | DevOps engineer | 
| Verify the plan. | Review the plan and confirm that it configures the required architecture in your target AWS account. | DevOps engineer | 
| Deploy the solution. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-the-security-automations-for-aws-waf-solution-by-using-terraform.html) | DevOps engineer | 

### Validate and clean up
<a name="validate-and-clean-up"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Verify the changes. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-the-security-automations-for-aws-waf-solution-by-using-terraform.html) | DevOps engineer | 
| (Optional) Clean up the infrastructure. | If you want to remove all resources and configuration changes made by this solution, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-the-security-automations-for-aws-waf-solution-by-using-terraform.html) | DevOps engineer | 

## Troubleshooting
<a name="deploy-the-security-automations-for-aws-waf-solution-by-using-terraform-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| `WAFV2 IPSet: WAFOptimisticLockException` error | If you receive this error when you run the `terraform destroy` command, you must manually delete the IP sets. For instructions, see [Deleting an IP set](https://docs.aws.amazon.com/waf/latest/developerguide/waf-ip-set-deleting.html) (AWS WAF documentation). | 

## Related resources
<a name="deploy-the-security-automations-for-aws-waf-solution-by-using-terraform-resources"></a>

**AWS references**
+ [Security Automations for AWS WAF Implementation Guide](https://docs.aws.amazon.com/solutions/latest/security-automations-for-aws-waf/welcome.html)
+ [Security Automations for AWS WAF](https://aws.amazon.com/solutions/implementations/security-automations-for-aws-waf/) (AWS Solutions Library)
+ [Security Automations for AWS WAF FAQ](https://aws.amazon.com/solutions/implementations/security-automations-for-aws-waf/resources/#FAQ)

**Terraform references**
+ [Terraform backend configuration](https://developer.hashicorp.com/terraform/language/backend)
+ [Terraform AWS Provider - Documentation and Usage](https://registry.terraform.io/providers/hashicorp/aws/latest/docs)
+ [Terraform AWS Provider](https://github.com/hashicorp/terraform-provider-aws) (GitHub repository)

# Deploy a pipeline that simultaneously detects security issues in multiple code deliverables
<a name="deploy-a-pipeline-that-simultaneously-detects-security-issues-in-multiple-code-deliverables"></a>

*Benjamin Morris, Tim Hahn, Sapeksh Madan, Dina Odum, and Isaiah Schisler, Amazon Web Services*

## Summary
<a name="deploy-a-pipeline-that-simultaneously-detects-security-issues-in-multiple-code-deliverables-summary"></a>

The [Simple Code Scanning Pipeline (SCSP)](https://github.com/awslabs/simple-code-scanning-pipeline) provides two-click creation of a code analysis pipeline that runs industry-standard open-source security tools in parallel. This enables developers to check the quality and security of their code without having to install tools or even understand how to run them. This helps you reduce vulnerabilities and misconfigurations in code deliverables. It also reduces the amount of time your organization spends installing, researching, and configuring security tools.

Before SCSP, scanning code with this particular suite of tools required developers to locate, manually install, and configure each analysis tool. Even locally installed all-in-one tools, such as Automated Security Helper (ASH), require a configured Docker container to run. With SCSP, a suite of industry-standard code analysis tools runs automatically in the AWS Cloud. With this solution, you use Git to push your code deliverables, and then you receive a visual output with at-a-glance insights into which security checks failed.

## Prerequisites and limitations
<a name="deploy-a-pipeline-that-simultaneously-detects-security-issues-in-multiple-code-deliverables-prereqs"></a>
+ An active AWS account
+ One or more code deliverables that you want to scan for security issues
+ AWS Command Line Interface (AWS CLI), [installed](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) and [configured](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html)
+ Python version 3.0 or later and pip version 9.0.3 or later, [installed](https://www.python.org/downloads/windows/)
+ Git, [installed](https://github.com/git-guides/install-git)
+ [git-remote-codecommit](https://docs.aws.amazon.com/codecommit/latest/userguide/setting-up-git-remote-codecommit.html#setting-up-git-remote-codecommit-install), installed on your local workstation

## Architecture
<a name="deploy-a-pipeline-that-simultaneously-detects-security-issues-in-multiple-code-deliverables-architecture"></a>

**Target technology stack**
+ AWS CodeCommit repository
+ AWS CodeBuild project
+ AWS CodePipeline pipeline
+ Amazon Simple Storage Service (Amazon S3) bucket
+ AWS CloudFormation template

**Target architecture**

The SCSP for static code analysis is a DevOps project designed to give security feedback on deliverable code.

![\[The SCSP performing code analysis in an AWS Region.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/61fe4f99-7dfc-48a8-90e4-a25253cc140d/images/fbc13150-0970-48d6-87bc-84dfaed90d4b.png)


1. Sign in to the AWS Management Console of the target AWS account. Confirm that you are in the AWS Region where you want to deploy the pipeline.

1. Use the CloudFormation template in the code repository to deploy the SCSP stack. This creates a new CodeCommit repository and CodeBuild project.
**Note**  
As an alternative deployment option, you can use an existing CodeCommit repo by providing the Amazon Resource Name (ARN) of the repository as a parameter during stack deployment.

1. Clone the repository to your local workstation, and then add any files to their respective folders in the cloned repository.

1. Use Git to add, commit, and push the files to the CodeCommit repository.

1. Pushing to the CodeCommit repository initiates a CodeBuild job. The CodeBuild project uses the security tools to scan the code deliverables.

1. Review the output of the pipeline. Security tools that found error-level issues will result in failed actions in the pipeline. Fix these errors or suppress them as false positives. Review details of the tool output in the **Action details** in CodePipeline or in the pipeline’s S3 bucket.

## Tools
<a name="deploy-a-pipeline-that-simultaneously-detects-security-issues-in-multiple-code-deliverables-tools"></a>

**AWS services**
+ [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) helps you set up AWS resources, provision them quickly and consistently, and manage them throughout their lifecycle across AWS accounts and Regions.
+ [AWS CodeBuild](https://docs.aws.amazon.com/codebuild/latest/userguide/welcome.html) is a fully managed build service that helps you compile source code, run unit tests, and produce artifacts that are ready to deploy.
+ [AWS CodeCommit](https://docs.aws.amazon.com/codecommit/latest/userguide/welcome.html) is a version control service that helps you privately store and manage Git repositories, without needing to manage your own source control system.

**Other tools**

For a complete list of tools that SCSP uses to scan code deliverables, see the [SCSP readme](https://github.com/awslabs/simple-code-scanning-pipeline/blob/main/README.md) in GitHub.

**Code repository**

The code for this pattern is available in the [Simple Code Scanning Pipeline (SCSP)](https://github.com/awslabs/simple-code-scanning-pipeline) repository in GitHub.

## Epics
<a name="deploy-a-pipeline-that-simultaneously-detects-security-issues-in-multiple-code-deliverables-epics"></a>

### Deploy the SCSP
<a name="deploy-the-scsp"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the CloudFormation stack. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-a-pipeline-that-simultaneously-detects-security-issues-in-multiple-code-deliverables.html)This creates a CodeCommit repository, a CodePipeline pipeline, several CodeBuild job definitions, and an S3 bucket. Build runs and scanning results are copied into this bucket. After the CloudFormation stack has been completely deployed, SCSP is ready to use. | AWS DevOps, AWS administrator | 

### Use the pipeline
<a name="use-the-pipeline"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Examine the results of the scan. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-a-pipeline-that-simultaneously-detects-security-issues-in-multiple-code-deliverables.html) | App developer, AWS DevOps | 

## Troubleshooting
<a name="deploy-a-pipeline-that-simultaneously-detects-security-issues-in-multiple-code-deliverables-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| HashiCorp Terraform or AWS CloudFormation files aren’t being scanned. | Make sure that Terraform (.tf) and CloudFormation (.yml, .yaml, or .json) files are placed in the appropriate folders in the cloned CodeCommit repository. | 
| The `git clone` command is failing. | Make sure that you have installed `git-remote-codecommit` and that your CLI has access to AWS credentials that have permissions to read the CodeCommit repository. | 
| A concurrency error, such as `Project-level concurrent build limit cannot exceed the account-level concurrent build limit of 1`. | Rerun the pipeline by choosing the **Release Change** button in the [CodePipeline console](https://console.aws.amazon.com/codesuite/codepipeline/home). This is a known issue that seems to be most common during the first few times that the pipeline runs. | 

## Related resources
<a name="deploy-a-pipeline-that-simultaneously-detects-security-issues-in-multiple-code-deliverables-resources"></a>

[Provide feedback](https://github.com/awslabs/simple-code-scanning-pipeline/issues) on the SCSP project.

## Additional information
<a name="deploy-a-pipeline-that-simultaneously-detects-security-issues-in-multiple-code-deliverables-additional"></a>

**FAQ**

*Is the SCSP project the same as Automated Security Helper (ASH)?*

No. Use ASH when you want a CLI tool that runs code-scanning tools by using containers. [Automated Security Helper (ASH)](https://github.com/awslabs/automated-security-helper) is designed to reduce the probability of a security violation in new code, infrastructure, or IAM resource configurations. ASH is a command-line utility that can be run locally, but local use requires that a container environment is installed and operational on the system.

Use SCSP when you want a pipeline that is easier to set up than ASH. SCSP requires no local installations. SCSP is designed to run checks individually in a pipeline and display results by tool. SCSP also avoids much of the overhead of setting up Docker, and it is operating system (OS) agnostic.

*Is SCSP just for security teams?*

No, anyone can deploy the pipeline to determine which parts of their code are failing security checks. For example, non-security users can use SCSP to check their code before reviewing with their security teams.

*Can I use SCSP if I’m working with another type of repository, such as GitLab, GitHub, or Bitbucket?*

You can configure a local Git repository to point to two different remote repositories. For example, you could clone an existing GitLab repository, create an SCSP instance (specifying CloudFormation, Terraform, and AWS Config Rules Development Kit (AWS RDK) folders, if needed), and then use `git remote add upstream <SCSPGitLink>` to point the local repository at the SCSP CodeCommit repository as well. This allows code changes to be sent to SCSP first and validated; then, after any additional updates are made to address findings, the changes can be pushed to the GitLab, GitHub, or Bitbucket repository. For more information about multiple remotes, see [Push commits to an additional Git repository](https://docs.aws.amazon.com/codecommit/latest/userguide/how-to-mirror-repo-pushes.html) (AWS blog post).

**Note**  
Be careful of drift; for example, avoid making changes through web interfaces.

**Contributing and adding your own actions**

SCSP is maintained as a GitHub project that contains the source code for the SCSP AWS Cloud Development Kit (AWS CDK) application. To add checks to the pipeline, update the AWS CDK application, and then synthesize or deploy it into the target AWS account where the pipeline will run. To do this, start by cloning the SCSP [GitHub project](https://github.com/awslabs/simple-code-scanning-pipeline), and then find the stack definition file in the `lib` folder.

To add a check, use the `StandardizedCodeBuildProject` class in the AWS CDK code, which makes it straightforward to define actions: provide the name, description, and `install` or `build` commands, and AWS CDK creates the CodeBuild project by using sensible default values. In addition to creating the build project, you need to add it to the CodePipeline actions in the build stage. When you design a new check, the action should `FAIL` if the scanning tool detects problems or fails to run. The action should `PASS` if the scanning tool doesn't detect any problems. For an example of configuring a tool, review the code for the `Bandit` action.
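Conceptually, each action wraps a scanning tool and maps its outcome to a pipeline result. The following Python sketch illustrates that pass/fail contract; it is an illustration of the semantics described above, not code from the SCSP repository, and `run_check` is a hypothetical helper name.

```python
# Sketch of the pass/fail contract for a pipeline check: the action fails
# when the scanning tool reports findings OR when the tool fails to run.
# Illustrative only; not code from the SCSP repository.
import subprocess

def run_check(command: list) -> str:
    """Run a scanning tool; return 'PASS' only on a clean exit."""
    try:
        result = subprocess.run(command, capture_output=True, text=True)
    except (FileNotFoundError, OSError):
        # The tool is missing or not executable: treat the action as failed.
        return "FAIL"
    # A nonzero exit code (findings, or a tool error) also fails the action.
    return "PASS" if result.returncode == 0 else "FAIL"
```

For example, `run_check(["bandit", "-r", "src/"])` would fail the action both when Bandit reports findings and when Bandit isn't installed, which matches the behavior expected of a new SCSP check.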

For more information about expected input and outputs, see the [repository documentation](https://github.com/awslabs/simple-code-scanning-pipeline/blob/main/README.md).

If you add custom actions, you need to deploy SCSP by using `cdk deploy`, or by using `cdk synth` and then deploying the synthesized template with CloudFormation. This is because the **Quick create stack** CloudFormation template is maintained by the repository owners and doesn't include your custom actions.

# Deploy detective attribute-based access controls for public subnets by using AWS Config
<a name="deploy-detective-attribute-based-access-controls-for-public-subnets-by-using-aws-config"></a>

*Alberto Menendez, Amazon Web Services*

## Summary
<a name="deploy-detective-attribute-based-access-controls-for-public-subnets-by-using-aws-config-summary"></a>

Distributed edge network architectures rely on network edge security that runs alongside the workloads in their virtual private clouds (VPCs). This provides greater scalability than the more common, centralized approach. Although deploying public subnets in workload accounts can provide benefits, it also introduces new security risks because it increases the attack surface. We recommend that you deploy only Elastic Load Balancing resources, such as Application Load Balancers, or NAT gateways in the public subnets of these VPCs. Using load balancers and NAT gateways in dedicated public subnets helps you implement fine-grained control for inbound and outbound traffic.

We recommend that you implement both preventative and detective controls to limit the types of resources that can be deployed in public subnets. For more information about using attribute-based access control (ABAC) to deploy preventative controls for public subnets, see [Deploy preventative attribute-based access controls for public subnets](https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-preventative-attribute-based-access-controls-for-public-subnets.html). Although effective for most situations, these preventative controls might not address all possible use cases. Therefore, this pattern builds on the ABAC approach and helps you configure alerts about noncompliant resources that are deployed in public subnets. The solution checks whether elastic network interfaces belong to a resource that is not allowed in public subnets.

To achieve this, this pattern uses [AWS Config custom rules](https://docs.aws.amazon.com/config/latest/developerguide/evaluate-config_develop-rules.html) and [ABAC](https://aws.amazon.com/identity/attribute-based-access-control/). The custom rule processes the configuration of an elastic network interface whenever it is created or modified. At a high level, this rule performs two actions to determine whether the network interface is compliant:

1. To determine whether the network interface is in scope of the rule, the rule checks whether the subnet has specific [AWS tags](https://docs.aws.amazon.com/tag-editor/latest/userguide/tagging.html) that indicate it is a public subnet. For example, this tag might be `IsPublicFacing=True`.

1. If the network interface is deployed in a public subnet, the rule checks which AWS service created this resource. If the resource is not an Elastic Load Balancing resource or NAT gateway, it marks the resource as noncompliant.

## Prerequisites and limitations
<a name="deploy-detective-attribute-based-access-controls-for-public-subnets-by-using-aws-config-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ AWS Config, [set up](https://docs.aws.amazon.com/config/latest/developerguide/gs-console.html) in the workload account
+ Permissions to deploy the required resources in the workload account
+ A VPC with public subnets
+ Tags properly applied to identify the target public subnets
+ (Optional) An organization in AWS Organizations
+ (Optional) A central security account that is the delegated administrator for AWS Config and AWS Security Hub CSPM

## Architecture
<a name="deploy-detective-attribute-based-access-controls-for-public-subnets-by-using-aws-config-architecture"></a>

**Target architecture**

![\[Using an AWS Config custom rule to detect noncompliant resources in public subnets\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/85d54ead-7f00-4381-89fb-cffe307c4cfc/images/a8c19913-d260-4b70-96ba-732bb1b9881f.png)


The diagram illustrates the following:

1. When an elastic network interface resource (`AWS::EC2::NetworkInterface`) is deployed or modified, AWS Config captures the event and the configuration.

1. AWS Config matches this event against the custom rule used to evaluate the configuration.

1. The AWS Lambda function associated with this custom rule is invoked. The function evaluates the resource and applies the specified logic to determine whether the resource configuration is `COMPLIANT`, `NON_COMPLIANT`, or `NOT_APPLICABLE`.

1. If a resource is determined to be `NON_COMPLIANT`, AWS Config sends an alert through Amazon Simple Notification Service (Amazon SNS).    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-detective-attribute-based-access-controls-for-public-subnets-by-using-aws-config.html)

**Lambda function evaluation logic**

The following diagram shows the logic applied by the Lambda function to evaluate the compliance of the elastic network interface.

![\[Diagram of Lambda function logic\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/85d54ead-7f00-4381-89fb-cffe307c4cfc/images/9575e20f-142b-4eba-b34d-3b9bda163144.png)


**Automation and scale**

This pattern is a detective solution. You can also complement it with a remediation rule to automatically resolve any noncompliant resources. For more information, see [Remediating Noncompliant Resources with AWS Config Rules](https://docs.aws.amazon.com/config/latest/developerguide/remediation.html).

You can scale this solution by:
+ Enforcing application of the corresponding AWS tags that you establish to identify public-facing subnets. For more information, see [Tag policies](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_tag-policies.html) in the AWS Organizations documentation.
+ Configuring a central security account that applies the AWS Config custom rule to every workload account in the organization. For more information, see [Automate configuration compliance at scale in AWS](https://aws.amazon.com/blogs/mt/automate-configuration-compliance-at-scale-in-aws/) (AWS blog post).
+ Integrating AWS Config with AWS Security Hub CSPM in order to capture, centralize, and notify at scale. For more information, see [Configuring AWS Config](https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-prereq-config.html) in the AWS Security Hub CSPM documentation.

## Tools
<a name="deploy-detective-attribute-based-access-controls-for-public-subnets-by-using-aws-config-tools"></a>
+ [AWS Config](https://docs.aws.amazon.com/config/latest/developerguide/WhatIsConfig.html) provides a detailed view of the resources in your AWS account and how they’re configured. It helps you identify how resources are related to one another and how their configurations have changed over time.
+ [Elastic Load Balancing](https://docs.aws.amazon.com/elasticloadbalancing/latest/userguide/what-is-load-balancing.html) distributes incoming application or network traffic across multiple targets. For example, you can distribute traffic across Amazon Elastic Compute Cloud (Amazon EC2) instances, containers, and IP addresses in one or more Availability Zones.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a compute service that helps you run code without needing to provision or manage servers. It runs your code only when needed and scales automatically, so you pay only for the compute time that you use.
+ [Amazon Simple Notification Service (Amazon SNS)](https://docs.aws.amazon.com/sns/latest/dg/welcome.html) helps you coordinate and manage the exchange of messages between publishers and clients, including web servers and email addresses. 
+ [Amazon Virtual Private Cloud (Amazon VPC)](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html) helps you launch AWS resources into a virtual network that you’ve defined. This virtual network resembles a traditional network that you’d operate in your own data center, with the benefits of using the scalable infrastructure of AWS.

## Best practices
<a name="deploy-detective-attribute-based-access-controls-for-public-subnets-by-using-aws-config-best-practices"></a>

For more examples and best practices for developing custom AWS Config rules, see the official [AWS Config Rules Repository](https://github.com/awslabs/aws-config-rules) on GitHub.

## Epics
<a name="deploy-detective-attribute-based-access-controls-for-public-subnets-by-using-aws-config-epics"></a>

### Deploy the solution
<a name="deploy-the-solution"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the Lambda function. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-detective-attribute-based-access-controls-for-public-subnets-by-using-aws-config.html) | General AWS | 
| Add permissions to the Lambda function's execution role. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-detective-attribute-based-access-controls-for-public-subnets-by-using-aws-config.html) | General AWS | 
| Retrieve the Lambda function Amazon Resource Name (ARN). | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-detective-attribute-based-access-controls-for-public-subnets-by-using-aws-config.html) | General AWS | 
| Create the AWS Config custom rule. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-detective-attribute-based-access-controls-for-public-subnets-by-using-aws-config.html) | General AWS | 
| Configure notifications. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-detective-attribute-based-access-controls-for-public-subnets-by-using-aws-config.html) | General AWS | 

### Test the solution
<a name="test-the-solution"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a compliant resource. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-detective-attribute-based-access-controls-for-public-subnets-by-using-aws-config.html) | General AWS | 
| Create a noncompliant resource. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-detective-attribute-based-access-controls-for-public-subnets-by-using-aws-config.html) | General AWS | 
| Create a resource that is not applicable. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-detective-attribute-based-access-controls-for-public-subnets-by-using-aws-config.html) | General AWS | 

## Related resources
<a name="deploy-detective-attribute-based-access-controls-for-public-subnets-by-using-aws-config-resources"></a>

**AWS documentation**
+ [Setting up AWS Config](https://docs.aws.amazon.com/config/latest/developerguide/gs-console.html)
+ [AWS Config custom rules](https://docs.aws.amazon.com/config/latest/developerguide/evaluate-config_develop-rules.html)
+ [ABAC for AWS](https://aws.amazon.com/identity/attribute-based-access-control/)
+ [Deploy preventative attribute-based access controls for public subnets](https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-preventative-attribute-based-access-controls-for-public-subnets.html)

**Other AWS resources**
+ [Automate configuration compliance at scale in AWS](https://aws.amazon.com/blogs/mt/automate-configuration-compliance-at-scale-in-aws/)
+ [Distributed Inspection Architectures with Gateway Load Balancer](https://d1.awsstatic.com/architecture-diagrams/ArchitectureDiagrams/distributed-inspection-architectures-gwlb-ra.pdf)

## Additional information
<a name="deploy-detective-attribute-based-access-controls-for-public-subnets-by-using-aws-config-additional"></a>

The following is a sample Lambda function that is provided for demonstration purposes.

```
import boto3
import json
import os

# Init clients
config_client = boto3.client('config')
ec2_client = boto3.client('ec2')

def lambda_handler(event, context):

    # Init values
    compliance_value = 'NOT_APPLICABLE'
    invoking_event = json.loads(event['invokingEvent'])
    configuration_item = invoking_event['configurationItem']
    
    status = configuration_item['configurationItemStatus']
    eventLeftScope = event['eventLeftScope']

    # First check whether the event applies (for example, the resource wasn't deleted)
    if (status == 'OK' or status == 'ResourceDiscovered') and not eventLeftScope:
        compliance_value = evaluate_change_notification_compliance(configuration_item)
    
    
    config_client.put_evaluations(
       Evaluations=[
           {
               'ComplianceResourceType': invoking_event['configurationItem']['resourceType'],
               'ComplianceResourceId': invoking_event['configurationItem']['resourceId'],
               'ComplianceType': compliance_value,
               'OrderingTimestamp': invoking_event['configurationItem']['configurationItemCaptureTime']
           },
       ],
       ResultToken=event['resultToken'])
    
# Function with the logic to evaluate the resource
def evaluate_change_notification_compliance(configuration_item):
    is_in_scope = is_in_scope_subnet(configuration_item['configuration']['subnetId'])
    
    if (configuration_item['resourceType'] != 'AWS::EC2::NetworkInterface') or not is_in_scope:
        return 'NOT_APPLICABLE'

    else:
        alb_condition = configuration_item['configuration']['requesterId'] in ['amazon-elb']
        nlb_condition = configuration_item['configuration']['interfaceType'] in ['network_load_balancer']
        nat_gateway_condition = configuration_item['configuration']['interfaceType'] in ['nat_gateway']

        if alb_condition or nlb_condition or nat_gateway_condition:
            return 'COMPLIANT'
    return 'NON_COMPLIANT'

# Function to check if elastic network interface is in public subnet
def is_in_scope_subnet(eni_subnet):

    subnet_description = ec2_client.describe_subnets(
        SubnetIds=[eni_subnet]
    )

    for subnet in subnet_description['Subnets']:
        for tag in subnet.get('Tags', []):  # untagged subnets have no 'Tags' key
            if tag['Key'] == os.environ.get('TAG_KEY') and tag['Value'] == os.environ.get('TAG_VALUE'):
                return True
    
    return False
```
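You can exercise the decision logic locally, without AWS API calls, by distilling it into a pure function. The following sketch mirrors the checks in the sample function above; the function and constant names (`evaluate_eni`, `ALLOWED_INTERFACE_TYPES`) are illustrative and are not part of the sample Lambda code.

```python
# Distilled version of the sample function's decision logic, for local unit
# testing without AWS calls: a network interface in a tagged public subnet is
# compliant only when it belongs to an ELB resource or a NAT gateway.
ALLOWED_INTERFACE_TYPES = {"network_load_balancer", "nat_gateway"}
ALLOWED_REQUESTER_IDS = {"amazon-elb"}

def evaluate_eni(configuration: dict, subnet_is_public: bool) -> str:
    """Return COMPLIANT, NON_COMPLIANT, or NOT_APPLICABLE for one interface."""
    if not subnet_is_public:
        # Interfaces outside tagged public subnets are out of scope.
        return "NOT_APPLICABLE"
    if (configuration.get("requesterId") in ALLOWED_REQUESTER_IDS
            or configuration.get("interfaceType") in ALLOWED_INTERFACE_TYPES):
        return "COMPLIANT"
    return "NON_COMPLIANT"
```

Testing this logic in isolation, with plain dictionaries standing in for the AWS Config configuration item, lets you verify the rule before deploying the Lambda function.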

# Deploy preventative attribute-based access controls for public subnets
<a name="deploy-preventative-attribute-based-access-controls-for-public-subnets"></a>

*Joel Alfredo Nunez Gonzalez and Samuel Ortega Sancho, Amazon Web Services*

## Summary
<a name="deploy-preventative-attribute-based-access-controls-for-public-subnets-summary"></a>

In centralized network architectures, inspection and edge virtual private clouds (VPCs) concentrate all inbound and outbound traffic, such as traffic to and from the internet. However, this concentration can create bottlenecks or run up against AWS service quotas. Deploying network edge security alongside the workloads, in their VPCs, scales better than the more common centralized approach. This is called a *distributed edge* architecture.

Although deploying public subnets in workload accounts can provide benefits, it also introduces new security risks because it increases the attack surface. We recommend that you deploy only Elastic Load Balancing (ELB) resources, such as Application Load Balancers, or NAT gateways in the public subnets of these VPCs. Using load balancers and NAT gateways in dedicated public subnets helps you implement fine-grained control for inbound and outbound traffic.

*Attribute-based access control* (ABAC) is the practice of creating fine-grained permissions based on user attributes, such as department, job role, and team name. For more information, see [ABAC for AWS](https://aws.amazon.com/identity/attribute-based-access-control/). ABAC can provide guardrails for public subnets in workload accounts. This helps application teams be agile, without compromising the security of the infrastructure.

This pattern describes how to help secure public subnets by implementing ABAC through a [service control policy (SCP)](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html) in AWS Organizations and [policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html) in AWS Identity and Access Management (IAM). You apply the SCP to either a member account of an organization or to an organizational unit (OU). These ABAC policies permit users to deploy NAT gateways in the target subnets and prevent them from deploying other Amazon Elastic Compute Cloud (Amazon EC2) resources, such as EC2 instances and elastic network interfaces.  

## Prerequisites and limitations
<a name="deploy-preventative-attribute-based-access-controls-for-public-subnets-prereqs"></a>

**Prerequisites**
+ An organization in AWS Organizations
+ Administrative access to the organization's management account
+ In the organization, an active member account or OU for testing the SCP

**Limitations**
+ The SCP in this solution doesn’t prevent AWS services that use a service-linked role from deploying resources in the target subnets. Examples of these services are Elastic Load Balancing (ELB), Amazon Elastic Container Service (Amazon ECS), and Amazon Relational Database Service (Amazon RDS). For more information, see [SCP effects on permissions](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html#scp-effects-on-permissions) in the AWS Organizations documentation. Implement security controls to detect these exceptions.

## Architecture
<a name="deploy-preventative-attribute-based-access-controls-for-public-subnets-architecture"></a>

**Target technology stack**
+ SCP applied to an AWS account or OU in AWS Organizations
+ The following IAM roles:
  + `AutomationAdminRole` – Used to modify subnet tags and create VPC resources after implementing the SCP
  + `TestAdminRole` – Used to test whether the SCP is preventing other IAM principals, including those with administrative access, from performing the actions reserved for the `AutomationAdminRole`

**Target architecture**

![\[The tags prevent users from deploying resources other than NAT gateways in public subnets\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/25f22f67-5bb6-42ac-8fd6-836e00c436f1/images/b8345c8c-0fc5-46a3-be60-c171979cf979.png)


1. You create the `AutomationAdminRole` IAM role in the target account. This role has permissions to manage networking resources. Note the following permissions that are exclusive to this role:
   + This role can create VPCs and public subnets.
   + This role can modify the tag assignments for the target subnets.
   + This role can manage its own permissions.

1. In AWS Organizations, you apply the SCP to the target AWS account or OU. For a sample policy, see [Additional information](#deploy-preventative-attribute-based-access-controls-for-public-subnets-additional) in this pattern.

1. A user or a tool in the CI/CD pipeline can assume the `AutomationAdminRole` role to apply the `SubnetType` tag to the target subnets.

1. By assuming other IAM roles, authorized IAM principals in your organization can manage NAT gateways in the target subnets and other permitted networking resources in the AWS account, such as route tables. Use IAM policies to grant these permissions. For more information, see [Identity and access management for Amazon VPC](https://docs.aws.amazon.com/vpc/latest/userguide/security-iam.html).
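
The pattern leaves the exact permissions for these principals to you. As an illustration only, an identity-based policy along the following lines could let a workload role manage NAT gateways; the action list and the broad `Resource` scope are assumptions that you should tighten for your environment.

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ManageNatGateways",
      "Effect": "Allow",
      "Action": [
        "ec2:CreateNatGateway",
        "ec2:DeleteNatGateway",
        "ec2:AllocateAddress",
        "ec2:ReleaseAddress",
        "ec2:DescribeNatGateways",
        "ec2:DescribeSubnets"
      ],
      "Resource": "*"
    }
  ]
}
```

Remember that the SCP still applies: even with this identity policy, principals other than `AutomationAdminRole` can create NAT gateways only in subnets that carry the expected tag.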

**Automation and scale**

To help protect public subnets, the corresponding [AWS tags](https://docs.aws.amazon.com/general/latest/gr/aws_tagging.html) must be applied. After applying the SCP, NAT gateways are the only kind of Amazon EC2 resource that authorized users can create in subnets that have the `SubnetType:IFA` tag. (`IFA` means *internet-facing assets*.) The SCP prevents the creation of other Amazon EC2 resources, such as instances and elastic network interfaces. We recommend that you use a CI/CD pipeline that assumes the `AutomationAdminRole` role to create VPC resources so that these tags are properly applied to public subnets.
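
As a sketch of the tagging step (not part of the pattern's published code), a pipeline that has assumed the `AutomationAdminRole` role could apply the tag with boto3. The subnet ID and Region below are placeholders.

```python
def make_subnet_type_tag(value="IFA"):
    """Build the tag that the SCP matches on (SubnetType:IFA by default)."""
    return {"Key": "SubnetType", "Value": value}

def tag_public_subnets(subnet_ids, region="us-east-1"):
    """Apply the SubnetType:IFA tag to the given subnets.

    Assumes the caller's credentials come from AutomationAdminRole,
    because the SCP reserves tag changes on these subnets for that role.
    """
    import boto3  # requires AWS credentials at run time

    ec2 = boto3.client("ec2", region_name=region)
    ec2.create_tags(Resources=subnet_ids, Tags=[make_subnet_type_tag()])

if __name__ == "__main__":
    # Placeholder subnet ID; replace with your target public subnets.
    tag_public_subnets(["subnet-0123456789abcdef0"])
```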

## Tools
<a name="deploy-preventative-attribute-based-access-controls-for-public-subnets-tools"></a>

**AWS services**
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.
+ [AWS Organizations](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_introduction.html) is an account management service that helps you consolidate multiple AWS accounts into an organization that you create and centrally manage. In AWS Organizations, you can implement [service control policies (SCPs)](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html), which are a type of policy that you can use to manage permissions in your organization.
+ [Amazon Virtual Private Cloud (Amazon VPC)](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html) helps you launch AWS resources into a virtual network that you’ve defined. This virtual network resembles a traditional network that you’d operate in your own data center, with the benefits of using the scalable infrastructure of AWS.

## Epics
<a name="deploy-preventative-attribute-based-access-controls-for-public-subnets-epics"></a>

### Apply the SCP
<a name="apply-the-scp"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a test admin role.  | Create an IAM role named `TestAdminRole` in the target AWS account. Attach the **AdministratorAccess** AWS managed IAM policy to the new role. For instructions, see [Creating a role to delegate permissions to an IAM user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user.html) in the IAM documentation. | AWS administrator | 
| Create the automation admin role. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-preventative-attribute-based-access-controls-for-public-subnets.html) The following is an example of a trust policy that you could use to test the role from the `111122223333` account.<pre>{<br />    "Version": "2012-10-17",<br />    "Statement": [<br />        {<br />            "Effect": "Allow",<br />            "Principal": {<br />                "AWS": [<br />                    "arn:aws:iam::111122223333:root"<br />                ]<br />            },<br />            "Action": "sts:AssumeRole",<br />            "Condition": {}<br />        }<br />    ]<br />}</pre> | AWS administrator | 
| Create and attach the SCP. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-preventative-attribute-based-access-controls-for-public-subnets.html) | AWS administrator | 

### Test the SCP
<a name="test-the-scp"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a VPC or subnet. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-preventative-attribute-based-access-controls-for-public-subnets.html) | AWS administrator | 
| Manage tags. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-preventative-attribute-based-access-controls-for-public-subnets.html) | AWS administrator | 
| Deploy resources in the target subnets. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-preventative-attribute-based-access-controls-for-public-subnets.html) | AWS administrator | 
| Manage the AutomationAdminRole role. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-preventative-attribute-based-access-controls-for-public-subnets.html) | AWS administrator | 

### Clean up
<a name="clean-up"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clean up deployed resources. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-preventative-attribute-based-access-controls-for-public-subnets.html) | AWS administrator | 

## Related resources
<a name="deploy-preventative-attribute-based-access-controls-for-public-subnets-resources"></a>

**AWS documentation**
+ [Attaching and detaching SCPs](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps_attach.html)
+ [Creating, updating, and deleting SCPs](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps_create.html)
+ [Deploy detective attribute-based access controls for public subnets by using AWS Config](https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-detective-attribute-based-access-controls-for-public-subnets-by-using-aws-config.html)
+ [Detective controls](https://docs.aws.amazon.com/prescriptive-guidance/latest/aws-security-controls/detective-controls.html)
+ [Service authorization reference](https://docs.aws.amazon.com/service-authorization/latest/reference/reference.html)
+ [Tagging AWS resources](https://docs.aws.amazon.com/general/latest/gr/aws_tagging.html)
+ [What is ABAC for AWS?](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction_attribute-based-access-control.html)

**Additional AWS references**
+ [Securing resource tags used for authorization using a Service Control Policy in AWS Organizations](https://aws.amazon.com/es/blogs/security/securing-resource-tags-used-for-authorization-using-service-control-policy-in-aws-organizations/) (AWS blog post)

## Additional information
<a name="deploy-preventative-attribute-based-access-controls-for-public-subnets-additional"></a>

The following service control policy is an example that you can use to test this approach in your organization.

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyVPCActions",
      "Effect": "Deny",
      "Action": [
        "ec2:CreateVPC",
        "ec2:CreateRoute",
        "ec2:CreateSubnet",
        "ec2:CreateInternetGateway",
        "ec2:DeleteVPC",
        "ec2:DeleteRoute",
        "ec2:DeleteSubnet",
        "ec2:DeleteInternetGateway"
      ],
      "Resource": [
        "arn:aws:ec2:*:*:*"
      ],
      "Condition": {
        "StringNotLike": {
          "aws:PrincipalARN": ["arn:aws:iam::*:role/AutomationAdminRole"]
        }
      }
    },
    {
      "Sid": "AllowNATGWOnIFASubnet",
      "Effect": "Deny",
      "NotAction": [
        "ec2:CreateNatGateway",
        "ec2:DeleteNatGateway"
      ],
      "Resource": [
        "arn:aws:ec2:*:*:subnet/*"
      ],
      "Condition": {
        "ForAnyValue:StringEqualsIfExists": {
          "aws:ResourceTag/SubnetType": "IFA"
        },
        "StringNotLike": {
          "aws:PrincipalARN": ["arn:aws:iam::*:role/AutomationAdminRole"]
        }
      }
    },
    {
      "Sid": "DenyChangesToAdminRole",
      "Effect": "Deny",
      "NotAction": [
        "iam:GetContextKeysForPrincipalPolicy",
        "iam:GetRole",
        "iam:GetRolePolicy",
        "iam:ListAttachedRolePolicies",
        "iam:ListInstanceProfilesForRole",
        "iam:ListRolePolicies",
        "iam:ListRoleTags"
      ],
      "Resource": [
        "arn:aws:iam::*:role/AutomationAdminRole"
      ],
      "Condition": {
        "StringNotLike": {
          "aws:PrincipalARN": ["arn:aws:iam::*:role/AutomationAdminRole"]
        }
      }
    }
  ]
}
```

# Detect Amazon RDS and Aurora database instances that have expiring CA certificates
<a name="detect-rds-instances-expiring-certificates"></a>

*Stephen DiCato and Eugene Shifer, Amazon Web Services*

## Summary
<a name="detect-rds-instances-expiring-certificates-summary"></a>

As a security best practice, we recommend that you encrypt data in transit between application servers and relational databases. You can use SSL or TLS to encrypt a connection to a database (DB) instance or cluster. These protocols help provide confidentiality, integrity, and authenticity between an application and a database. The database presents a server certificate, which is issued by a [certificate authority (CA)](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.SSL.html#UsingWithRDS.SSL.RegionCertificateAuthorities) and is used to verify the server's identity. SSL or TLS verifies the authenticity of the certificate by validating its digital signature and ensuring that it hasn't expired.

In the AWS Management Console, [Amazon Relational Database Service (Amazon RDS)](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Welcome.html) and [Amazon Aurora](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/CHAP_AuroraOverview.html) provide notifications about DB instances that require certificate updates. However, to check for these notifications, you must sign in to each AWS account and open the service console in each AWS Region. This task becomes more complex if you need to assess certificate validity across many AWS accounts that are managed as an organization in [AWS Organizations](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_introduction.html).

By provisioning the infrastructure as code (IaC) provided in this pattern, you can detect expiring CA certificates for all Amazon RDS and Aurora DB instances in your AWS account or AWS organization. The [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) template provisions an AWS Config rule, an AWS Lambda function, and the necessary permissions. You can deploy it into a single account as a [stack](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacks.html), or you can deploy it across the entire AWS organization as a [stack set](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/what-is-cfnstacksets.html).
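
To illustrate what the detection amounts to, the following is a minimal sketch, not the code from the pattern's repository. It relies on the `CertificateDetails` field (with `CAIdentifier` and `ValidTill`) that Amazon RDS returns for each DB instance; the Region and expiry window are placeholders.

```python
from datetime import datetime, timedelta, timezone

def is_cert_expiring(valid_till, days=365):
    """Return True if the certificate expires within the given window."""
    cutoff = datetime.now(timezone.utc) + timedelta(days=days)
    return valid_till <= cutoff

def find_expiring_instances(region="us-east-1", days=365):
    """List DB instances whose CA certificate expires within `days` days."""
    import boto3  # requires AWS credentials at run time

    rds = boto3.client("rds", region_name=region)
    expiring = []
    for page in rds.get_paginator("describe_db_instances").paginate():
        for db in page["DBInstances"]:
            details = db.get("CertificateDetails", {})
            valid_till = details.get("ValidTill")
            if valid_till and is_cert_expiring(valid_till, days):
                expiring.append(
                    (db["DBInstanceIdentifier"], details.get("CAIdentifier"))
                )
    return expiring
```

The solution in this pattern wraps equivalent logic in a Lambda function and reports each instance's result to AWS Config instead of returning a list.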

## Prerequisites and limitations
<a name="detect-rds-instances-expiring-certificates-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ If you're deploying into a single AWS account:
  + Ensure that you have [permissions](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-iam-template.html) to create CloudFormation stacks.
  + [Enable](https://docs.aws.amazon.com/config/latest/developerguide/getting-started.html) AWS Config in the target account.
  + (Optional) [Enable](https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-settingup.html#securityhub-manual-setup-overview) AWS Security Hub CSPM in the target account.
+ If you're deploying into an AWS organization:
  + Ensure that you have [permissions](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-iam-template.html) to create CloudFormation stack sets.
  + [Enable](https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-settingup.html#securityhub-orgs-setup-overview) Security Hub CSPM with AWS Organizations integration.
  + [Enable](https://docs.aws.amazon.com/config/latest/developerguide/getting-started.html) AWS Config in the accounts where you are deploying this solution.
  + Designate an AWS account to be the delegated administrator for AWS Config and Security Hub CSPM.

**Limitations**
+ If you're deploying to an individual account that doesn't have Security Hub CSPM enabled, you can use AWS Config to evaluate the findings.
+ If you're deploying to an organization that doesn't have a delegated administrator for AWS Config and Security Hub CSPM, you must log into the individual member accounts to view the findings.
+ If you use AWS Control Tower to manage and govern the accounts in your organization, deploy the IaC in this pattern by using [Customizations for AWS Control Tower (CfCT)](https://docs.aws.amazon.com/controltower/latest/userguide/cfct-overview.html). Using the CloudFormation console will create configuration drift from AWS Control Tower guardrails and require that you re-enroll the organizational units (OUs) or managed accounts.
+ Some AWS services aren’t available in all AWS Regions. For Region availability, see the [Service endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/aws-service-information.html) page, and choose the link for the service.

## Architecture
<a name="detect-rds-instances-expiring-certificates-architecture"></a>

**Deploying into an individual AWS account**

The following architecture diagram shows the deployment of the AWS resources within a single AWS account. You deploy it by using a CloudFormation template directly in the CloudFormation console. If Security Hub CSPM is enabled, you can view the results in either AWS Config or Security Hub CSPM. If Security Hub CSPM is not enabled, you can view the results only in the AWS Config console.

![\[Deployment of the provided CloudFormation template in a single account.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/d34fe1f1-6764-4485-b7a7-04e5861f1e9b/images/0b07133a-d4f8-4d87-8d00-2b5e2c453ece.png)


The diagram shows the following steps:

1. You create a CloudFormation stack. This deploys a Lambda function and an AWS Config rule. Both the rule and the function are set up with the AWS Identity and Access Management (IAM) permissions required to publish resource evaluations to AWS Config and to write logs.

1. The AWS Config rule operates in [detective evaluation mode](https://docs.aws.amazon.com/config/latest/developerguide/evaluate-config-rules.html#aws-config-rules-evaluation-modes) and runs every 24 hours.

1. Security Hub CSPM receives all AWS Config findings.

1. You can view the findings in Security Hub CSPM or in AWS Config, depending on the account's configuration.

**Deploying into an AWS organization**

The following diagram shows the assessment of certificate expiration across multiple accounts that are managed through AWS Organizations and AWS Control Tower. You deploy the CloudFormation template through CfCT. The assessment outcomes are centralized in Security Hub CSPM in the delegated administrator account. The AWS CodePipeline workflow depicted in the diagram shows the background steps that occur during CfCT deployment.

![\[Deployment of the provided CloudFormation template to multiple accounts in an AWS Organization.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/d34fe1f1-6764-4485-b7a7-04e5861f1e9b/images/8d870cbb-54cf-43ec-96f2-00730e0134af.png)


The diagram shows the following steps:

1. Depending on the configuration for CfCT, in the management account, you push the IaC to an AWS CodeCommit repository or you upload a compressed (ZIP) file of the IaC to an Amazon Simple Storage Service (Amazon S3) bucket.

1. The CfCT pipeline unzips the file, runs [cfn-nag](https://github.com/stelligent/cfn_nag) (GitHub) checks, and deploys the template as a CloudFormation stack set.

1. Depending on the configuration specified in the CfCT manifest file, CloudFormation StackSets deploys stacks into individual accounts or specified OUs. This deploys a Lambda function and an AWS Config rule in the target accounts. Both the rule and the function are set up with the IAM permissions required to publish resource evaluations to AWS Config and to write logs.

1. The AWS Config rule operates in [detective evaluation mode](https://docs.aws.amazon.com/config/latest/developerguide/evaluate-config-rules.html#aws-config-rules-evaluation-modes) and runs every 24 hours.

1. AWS Config forwards all findings to Security Hub CSPM.

1. Security Hub CSPM findings are aggregated in the delegated administrator account.

1. You can view the findings in Security Hub CSPM in the delegated administrator account.
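
For reference, a CfCT manifest entry for this template might look like the following. The file path, OU name, and Region are placeholders, and the full schema is described in the CfCT documentation.

```
version: 2021-03-15
region: us-east-1
resources:
  - name: rds-ca-expiry-check
    resource_file: templates/config-rds-ca-expiry.yaml
    deploy_method: stack_set
    deployment_targets:
      organizational_units:
        - Workloads
    regions:
      - us-east-1
```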

## Tools
<a name="detect-rds-instances-expiring-certificates-tools"></a>

**AWS services**
+ [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) helps you set up AWS resources, provision them quickly and consistently, and manage them throughout their lifecycle across AWS accounts and Regions.
+ [AWS Config](https://docs.aws.amazon.com/config/latest/developerguide/WhatIsConfig.html) provides a detailed view of the resources in your AWS account and how they’re configured. It helps you identify how resources are related to one another and how their configurations have changed over time. An AWS Config [rule](https://docs.aws.amazon.com/config/latest/developerguide/evaluate-config.html) defines your ideal configuration settings for a resource, and AWS Config can evaluate whether your AWS resources comply with the conditions in your rules.
+ [AWS Control Tower](https://docs.aws.amazon.com/controltower/latest/userguide/what-is-control-tower.html) helps you set up and govern an AWS multi-account environment, following prescriptive best practices. [Customizations for AWS Control Tower (CfCT)](https://docs.aws.amazon.com/controltower/latest/userguide/cfct-overview.html) helps you customize your AWS Control Tower landing zone and stay aligned with AWS best practices. Customizations are implemented with CloudFormation templates and service control policies (SCPs).
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a compute service that helps you run code without needing to provision or manage servers. It runs your code only when needed and scales automatically, so you pay only for the compute time that you use.
+ [AWS Organizations](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_introduction.html) is an account management service that helps you consolidate multiple AWS accounts into an organization that you create and centrally manage.
+ [AWS Security Hub CSPM](https://docs.aws.amazon.com/securityhub/latest/userguide/what-is-securityhub.html) provides a comprehensive view of your security state in AWS. It also helps you check your AWS environment against security industry standards and best practices.

**Other tools**
+ [Python](https://www.python.org/) is a general-purpose computer programming language.

**Code repository**

The code for this pattern is available in the GitHub [Detect Amazon RDS instances with expiring CA certificates](https://github.com/aws-samples/config-rds-ca-expiry) repository.

## Best practices
<a name="detect-rds-instances-expiring-certificates-best-practices"></a>

We recommend that you adhere to the best practices in the following resources:
+ [Best Practices for Organizational Units with AWS Organizations](https://aws.amazon.com/blogs/mt/best-practices-for-organizational-units-with-aws-organizations/) (AWS Cloud Operations & Migrations Blog)
+ [Guidance for Establishing an Initial Foundation using AWS Control Tower on AWS](https://aws.amazon.com/solutions/guidance/establishing-an-initial-foundation-using-control-tower-on-aws/) (AWS Solutions Library)
+ [Guidance for creating and modifying AWS Control Tower resources](https://docs.aws.amazon.com/controltower/latest/userguide/getting-started-guidance.html) (AWS Control Tower documentation)
+ [CfCT deployment considerations](https://docs.aws.amazon.com/controltower/latest/userguide/cfct-considerations.html) (AWS Control Tower documentation)

## Epics
<a name="detect-rds-instances-expiring-certificates-epics"></a>

### Review the solution and code
<a name="review-the-solution-and-code"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Determine your deployment strategy. | Review the solution and code to determine how you will deploy it into your AWS environment. Determine if you will be deploying into a single account or an AWS organization. | App owner, General AWS | 
| Clone the repository. | Enter the following command to clone the [Detect Amazon RDS instances with expiring CA certificates](https://github.com/aws-samples/config-rds-ca-expiry) repository.<pre>git clone https://github.com/aws-samples/config-rds-ca-expiry.git</pre> | App developer, App owner | 
| Validate the Python version. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/detect-rds-instances-expiring-certificates.html) | App developer, App owner | 

### Deploy the solution
<a name="deploy-the-solution"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Deploy the CloudFormation template. | Deploy the CloudFormation template to your AWS environment. Do one of the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/detect-rds-instances-expiring-certificates.html) | App developer, AWS administrator, General AWS | 
| Verify the deployment. | In the [CloudFormation console](https://console.aws.amazon.com/cloudformation/), verify that the stack or stack set has deployed successfully. | AWS administrator, App owner | 

### Review the findings
<a name="review-the-findings"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| View the AWS Config rule findings. | In Security Hub CSPM, do the following to view a list of individual findings:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/detect-rds-instances-expiring-certificates.html)In Security Hub CSPM, do the following to view a list of total findings grouped by AWS account:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/detect-rds-instances-expiring-certificates.html)In AWS Config, to view a list of findings, follow the instructions in [Viewing Compliance Information and Evaluation Results](https://docs.aws.amazon.com/config/latest/developerguide/evaluate-config_view-compliance.html#evaluate-config_view-compliance-console) in the AWS Config documentation. | AWS administrator, AWS systems administrator, Cloud administrator | 

## Troubleshooting
<a name="detect-rds-instances-expiring-certificates-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| CloudFormation stack set creation or deletion fails | When AWS Control Tower is deployed, it enforces necessary guardrails and assumes control over AWS Config aggregators and rules. This includes preventing any direct alterations through CloudFormation. To properly deploy or remove this CloudFormation template, including all associated resources, you must use CfCT. | 
| CfCT fails to delete the CloudFormation template | If the CloudFormation template persists even after making the necessary changes in the manifest file and removing the template files, confirm that the manifest file contains the `enable_stack_set_deletion` parameter and that its value is set to `true`. For more information, see [Delete a stack set](https://docs.aws.amazon.com/controltower/latest/userguide/cfct-delete-stack.html) in the CfCT documentation. | 

## Related resources
<a name="detect-rds-instances-expiring-certificates-resources"></a>
+ [Using SSL/TLS to encrypt a connection to a DB instance or cluster](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.SSL.html) (Amazon RDS documentation)
+ [AWS Config Custom Rules](https://docs.aws.amazon.com/config/latest/developerguide/evaluate-config_develop-rules.html) (AWS Config documentation)

# Dynamically generate an IAM policy with IAM Access Analyzer by using Step Functions
<a name="dynamically-generate-an-iam-policy-with-iam-access-analyzer-by-using-step-functions"></a>

*Thomas Scott, Koen van Blijderveen, Adil El Kanabi, and Rafal Pawlaszek, Amazon Web Services*

## Summary
<a name="dynamically-generate-an-iam-policy-with-iam-access-analyzer-by-using-step-functions-summary"></a>

*Least privilege* is the security best practice of granting the minimum permissions required to perform a task. Implementing least-privilege access in an already active Amazon Web Services (AWS) account can be challenging because you don’t want to unintentionally block users from performing their job duties when you change their permissions. Before you implement AWS Identity and Access Management (IAM) policy changes, you need to understand which actions the account's users are performing and which resources they're using.

This pattern is designed to help you apply the principle of least-privilege access, without blocking or slowing down team productivity. It describes how to use IAM Access Analyzer and AWS Step Functions to dynamically generate an up-to-date IAM policy for your role, based on the actions that are currently being performed in the account. The new policy is designed to permit the current activity but remove any unnecessary, elevated privileges. You can customize the generated policy by defining allow and deny rules, and the solution integrates your custom rules.

This pattern includes options for implementing the solution with the AWS Cloud Development Kit (AWS CDK) or HashiCorp CDK for Terraform (CDKTF). You can then attach the new policy to the role by using a continuous integration and continuous delivery (CI/CD) pipeline. If you have a multi-account architecture, you can deploy this solution in any account where you want to generate updated IAM policies for the roles, increasing the security of your entire AWS Cloud environment.
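
At the core of this workflow is the IAM Access Analyzer policy-generation API. As a hedged sketch of that step (the Step Functions workflow in this pattern orchestrates it for you; the ARNs below are placeholders), you could start a generation job with boto3 like this:

```python
from datetime import datetime, timedelta, timezone

def policy_generation_request(principal_arn, trail_arn, access_role_arn, days=90):
    """Build the request that starts policy generation from CloudTrail activity.

    IAM Access Analyzer analyzes up to 90 days of CloudTrail events for
    the given principal; access_role_arn is a role that Access Analyzer
    can assume to read the trail's data.
    """
    start = datetime.now(timezone.utc) - timedelta(days=days)
    return {
        "policyGenerationDetails": {"principalArn": principal_arn},
        "cloudTrailDetails": {
            "trails": [{"cloudTrailArn": trail_arn, "allRegions": True}],
            "accessRole": access_role_arn,
            "startTime": start,
        },
    }

def start_generation(request):
    """Start the Access Analyzer job; poll the returned job ID with
    get_generated_policy to retrieve the draft policy."""
    import boto3  # requires AWS credentials at run time

    analyzer = boto3.client("accessanalyzer")
    return analyzer.start_policy_generation(**request)["jobId"]
```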

## Prerequisites and limitations
<a name="dynamically-generate-an-iam-policy-with-iam-access-analyzer-by-using-step-functions-prereqs"></a>

**Prerequisites**
+ An active AWS account with an AWS CloudTrail trail enabled.
+ IAM permissions for the following:
  + Create and deploy Step Functions workflows. For more information, see [Actions, resources, and condition keys for AWS Step Functions](https://docs.aws.amazon.com/service-authorization/latest/reference/list_awsstepfunctions.html) (Step Functions documentation).
  + Create AWS Lambda functions. For more information, see [Execution role and user permissions](https://docs.aws.amazon.com/lambda/latest/dg/configuration-vpc.html#vpc-permissions) (Lambda documentation).
  + Create IAM roles. For more information, see [Creating a role to delegate permissions to an IAM user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user.html) (IAM documentation).
+ npm installed. For more information, see [Downloading and installing Node.js and npm](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) (npm documentation).
+ If you are deploying this solution with AWS CDK (Option 1):
  + AWS CDK Toolkit, installed and configured. For more information, see [Install the AWS CDK](https://docs.aws.amazon.com/cdk/v2/guide/getting_started.html#getting_started_install) (AWS CDK documentation).
+ If you are deploying this solution with CDKTF (Option 2):
  + CDKTF, installed and configured. For more information, see [Install CDK for Terraform](https://learn.hashicorp.com/tutorials/terraform/cdktf-install?in=terraform/cdktf) (CDKTF documentation).
  + Terraform, installed and configured. For more information, see [Get Started](https://learn.hashicorp.com/collections/terraform/aws-get-started?utm_source=WEBSITE&utm_medium=WEB_IO&utm_offer=ARTICLE_PAGE&utm_content=DOCS) (Terraform documentation).
+ AWS Command Line Interface (AWS CLI) locally installed and configured for your AWS account. For more information, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) (AWS CLI documentation).

**Limitations**
+ This pattern does not apply the new IAM policy to the role. At the end of this solution, the new IAM policy is stored in an AWS CodeCommit repository. You can use a CI/CD pipeline to apply policies to the roles in your account.

## Architecture
<a name="dynamically-generate-an-iam-policy-with-iam-access-analyzer-by-using-step-functions-architecture"></a>

**Target architecture**

![\[The Step Functions workflow generating a new policy and storing it in CodeCommit.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/cb9ee0c9-3fe0-43d9-9dd2-1aedb705c78f/images/eb13a5db-f803-40b1-9a8c-4ef13d584cd4.png)


1. A regularly scheduled Amazon EventBridge event rule starts a Step Functions workflow. You define this regeneration schedule as part of setting up this solution.

1. In the Step Functions workflow, a Lambda function generates the date ranges to use when analyzing account activity in the CloudTrail logs.

1. The next workflow step calls the IAM Access Analyzer API to start generating the policy.

1. Using the Amazon Resource Name (ARN) of the role you specify during setup, IAM Access Analyzer analyzes the CloudTrail logs for activity within the specified date range. Based on that activity, IAM Access Analyzer generates an IAM policy that permits only the actions and services the role used during the date range. When the analysis is complete, this step returns a job ID.

1. The next workflow step checks for the job ID every 30 seconds. When the job ID is detected, this step uses the job ID to call the IAM Access Analyzer API and retrieve the new IAM policy. IAM Access Analyzer returns the policy as a JSON file.

1. The next workflow step puts the **<IAM role name>/policy.json** file in an Amazon Simple Storage Service (Amazon S3) bucket. You define this S3 bucket as part of setting up this solution.

1. An Amazon S3 event notification starts a Lambda function.

1. The Lambda function retrieves the policy from the S3 bucket, integrates the custom rules you define in the **allow.json** and **deny.json** files, and then pushes the updated policy to CodeCommit. You define the CodeCommit repository, branch, and folder path as part of setting up this solution.
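Steps 3 through 5 of the workflow use Step Functions AWS SDK service integrations rather than custom code. The equivalent logic is sketched below in Python against the IAM Access Analyzer API (the call shapes follow the `StartPolicyGeneration` and `GetGeneratedPolicy` actions; the client is injected so the polling loop can be exercised without AWS credentials, and the parameter values are placeholders):

```
import json
import time

def generate_policy(client, principal_arn, trail_details,
                    poll_seconds=30, max_polls=20):
    """Start an IAM Access Analyzer policy generation job and poll it
    until completion, mirroring steps 3-5 of the workflow. `client` is
    an accessanalyzer-style client, injected for testability."""
    job_id = client.start_policy_generation(
        policyGenerationDetails={"principalArn": principal_arn},
        cloudTrailDetails=trail_details,
    )["jobId"]
    for _ in range(max_polls):
        result = client.get_generated_policy(jobId=job_id)
        status = result["jobDetails"]["status"]
        if status == "SUCCEEDED":
            # The generated policy is returned as a JSON document string.
            policies = result["generatedPolicyResult"]["generatedPolicies"]
            return json.loads(policies[0]["policy"])
        if status == "FAILED":
            raise RuntimeError(f"Policy generation job {job_id} failed")
        time.sleep(poll_seconds)
    raise TimeoutError(f"Job {job_id} did not finish in time")
```

In the deployed solution, the 30-second interval is implemented as a Step Functions `Wait` state rather than an in-process sleep.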

## Tools
<a name="dynamically-generate-an-iam-policy-with-iam-access-analyzer-by-using-step-functions-tools"></a>

**AWS services**
+ [AWS Cloud Development Kit (AWS CDK)](https://docs.aws.amazon.com/cdk/latest/guide/home.html) is a software development framework that helps you define and provision AWS Cloud infrastructure in code.
+ [AWS CDK Toolkit](https://docs.aws.amazon.com/cdk/latest/guide/cli.html) is a command line cloud development kit that helps you interact with your AWS Cloud Development Kit (AWS CDK) app.
+ [AWS CloudTrail](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-user-guide.html) helps you audit the governance, compliance, and operational risk of your AWS account.
+ [AWS CodeCommit](https://docs.aws.amazon.com/codecommit/latest/userguide/welcome.html) is a version control service that helps you privately store and manage Git repositories, without needing to manage your own source control system.
+ [AWS Command Line Interface (AWS CLI)](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) is an open-source tool that helps you interact with AWS services through commands in your command-line shell.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them. This pattern uses [IAM Access Analyzer](https://docs.aws.amazon.com/IAM/latest/UserGuide/what-is-access-analyzer.html), a feature of IAM, to analyze your CloudTrail logs to identify actions and services that have been used by an IAM entity (user or role) and then generate an IAM policy that is based on that activity.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a compute service that helps you run code without needing to provision or manage servers. It runs your code only when needed and scales automatically, so you pay only for the compute time that you use.
+ [Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) is a cloud-based object storage service that helps you store, protect, and retrieve any amount of data.
+ [AWS Step Functions](https://docs.aws.amazon.com/step-functions/latest/dg/welcome.html) is a serverless orchestration service that helps you combine AWS Lambda functions and other AWS services to build business-critical applications. In this pattern, you use [AWS SDK service integrations](https://docs.aws.amazon.com/step-functions/latest/dg/supported-services-awssdk.html) in Step Functions to call service API actions from your workflow.

**Other tools**
+ [CDK for Terraform (CDKTF)](https://learn.hashicorp.com/collections/terraform/cdktf) helps you define infrastructure as code (IaC) by using common programming languages, such as Python and TypeScript.
+ [Lerna](https://lerna.js.org/docs/introduction) is a build system for managing and publishing multiple JavaScript or TypeScript packages from the same repository.
+ [Node.js](https://nodejs.org) is an event-driven JavaScript runtime environment designed for building scalable network applications.
+ [npm](https://docs.npmjs.com/about-npm) is a software registry that runs in a Node.js environment and is used to share or borrow packages and manage deployment of private packages.

**Code repository**

The code for this pattern is available in the GitHub [Automated IAM Access Analyzer Role Policy Generator](https://github.com/aws-samples/automated-iam-access-analyzer) repository.

## Epics
<a name="dynamically-generate-an-iam-policy-with-iam-access-analyzer-by-using-step-functions-epics"></a>

### Prepare for deployment
<a name="prepare-for-deployment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clone the repo. | The following command clones the [Automated IAM Access Analyzer Role Policy Generator](https://github.com/aws-samples/automated-iam-access-analyzer) (GitHub) repository.<pre>git clone https://github.com/aws-samples/automated-iam-access-analyzer.git</pre> | App developer | 
| Install Lerna. | The following command installs Lerna.<pre>npm i -g lerna</pre> | App developer | 
| Set up the dependencies. | The following command installs the dependencies for the repository.<pre>cd automated-iam-access-analyzer/<br />npm install && npm run bootstrap</pre> | App developer | 
| Build the code. | The following command tests, builds, and prepares the zip packages of the Lambda functions.<pre>npm run test:code<br />npm run build:code<br />npm run pack:code</pre> | App developer | 
| Build the constructs. | The following command builds the infrastructure-synthesizing applications for both AWS CDK and CDKTF.<pre>npm run build:infra</pre> | App developer | 
| Configure any custom permissions. | In the **repo** folder of the cloned repository, edit the **allow.json** and **deny.json** files to define any custom permissions for the role. If the **allow.json** and **deny.json** files contain the same permission, the deny permission is applied. | AWS administrator, App developer | 
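The repository implements its own merge logic; the following is only a minimal sketch of the deny-wins behavior described above (the function name and statement shapes are illustrative, not the repo's actual API):

```
def merge_custom_rules(generated_policy, allow_statements, deny_statements):
    """Append custom Allow and Deny statements to a generated policy.
    Because IAM evaluates an explicit Deny over any Allow, a permission
    listed in both allow.json and deny.json is effectively denied."""
    merged = dict(generated_policy)
    merged["Statement"] = (
        list(generated_policy.get("Statement", []))
        + [dict(s, Effect="Allow") for s in allow_statements]
        + [dict(s, Effect="Deny") for s in deny_statements]
    )
    return merged
```

No special conflict handling is needed for overlapping permissions: appending the deny statements is sufficient, because IAM policy evaluation gives an explicit `Deny` precedence over any `Allow`.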

### Option 1 – Deploy the solution using AWS CDK
<a name="option-1-deploy-the-solution-using-cdk"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Deploy the AWS CDK stack. | The following command deploys the infrastructure through AWS CloudFormation. Define the following parameters:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/dynamically-generate-an-iam-policy-with-iam-access-analyzer-by-using-step-functions.html)<pre>cd infra/cdk<br />cdk deploy --parameters roleArn=<ROLE_ARN> \<br />--parameters trailArn=<TRAIL_ARN> \<br />--parameters schedule=<CRON_EXPRESSION_TO_RUN_SOLUTION> \<br />[ --parameters trailLookBack=<TRAIL_LOOKBACK> ]</pre>The square brackets denote optional parameters. | App developer | 
| (Optional) Wait for the new policy. | If the trail does not contain a reasonable amount of historical activity for the role, wait until you are confident that there is enough logged activity for IAM Access Analyzer to generate an accurate policy. If the role has been active in the account for a sufficient period of time, this waiting period might not be necessary. | AWS administrator | 
| Manually review the generated policy. | In your CodeCommit repository, review the generated **<ROLE_ARN>.json** file to confirm that the allow and deny permissions are appropriate for the role. | AWS administrator | 

### Option 2 – Deploy the solution using CDKTF
<a name="option-2-ndash-deploy-the-solution-using-cdktf"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Synthesize the Terraform template. | The following command synthesizes the Terraform template.<pre>lerna exec cdktf synth --scope @aiaa/tfm</pre> | App developer | 
| Deploy the Terraform template. | The following command navigates to the directory that contains the CDKTF-defined infrastructure.<pre>cd infra/cdktf</pre>The following command deploys the infrastructure in the target AWS account. Define the following parameters:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/dynamically-generate-an-iam-policy-with-iam-access-analyzer-by-using-step-functions.html)<pre>TF_VAR_accountId=<account_ID> \<br /> TF_VAR_region=<region> \<br /> TF_VAR_roleArns=<selected_role_ARN> \<br /> TF_VAR_trailArn=<trail_ARN> \<br /> TF_VAR_schedule=<schedule_expression> \<br /> [ TF_VAR_trailLookBack=<trail_look_back> ] \<br />cdktf deploy</pre>The square brackets denote optional parameters. | App developer | 
| (Optional) Wait for the new policy. | If the trail does not contain a reasonable amount of historical activity for the role, wait until you are confident that there is enough logged activity for IAM Access Analyzer to generate an accurate policy. If the role has been active in the account for a sufficient period of time, this waiting period might not be necessary. | AWS administrator | 
| Manually review the generated policy. | In your CodeCommit repository, review the generated **<ROLE_ARN>.json** file to confirm that the allow and deny permissions are appropriate for the role. | AWS administrator | 

## Related resources
<a name="dynamically-generate-an-iam-policy-with-iam-access-analyzer-by-using-step-functions-resources"></a>

**AWS resources**
+ [IAM Access Analyzer endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/access-analyzer.html)
+ [Configuring the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html)
+ [Getting started with the AWS CDK](https://docs.aws.amazon.com/cdk/v2/guide/getting_started.html)
+ [Least-privilege permissions](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#grant-least-privilege)

**Other resources**
+ [CDK for Terraform](https://www.terraform.io/cdktf) (Terraform website)

# Enable Amazon GuardDuty conditionally by using AWS CloudFormation templates
<a name="enable-amazon-guardduty-conditionally-by-using-aws-cloudformation-templates"></a>

*Ram Kandaswamy, Amazon Web Services*

## Summary
<a name="enable-amazon-guardduty-conditionally-by-using-aws-cloudformation-templates-summary"></a>

[AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html), an infrastructure as code (IaC) tool, helps you manage AWS resources through template-based deployments. Although CloudFormation is typically used to create resources, using it to enable AWS services, such as [Amazon GuardDuty](https://docs.aws.amazon.com/guardduty/latest/ug/what-is-guardduty.html), can present unique challenges. GuardDuty is a threat detection service that continuously monitors your AWS accounts for malicious activity and unauthorized behavior. Unlike typical resources that can be created multiple times, GuardDuty must be enabled exactly once per AWS account and AWS Region. Traditional CloudFormation conditions support only static value comparisons, which makes it difficult to check the current state of services such as GuardDuty. If you attempt to enable GuardDuty through CloudFormation in an account where it's already active, the stack deployment fails. This can create operational challenges for DevOps teams that manage multi-account environments.

This pattern introduces a solution to this challenge. It uses CloudFormation custom resources that are backed by [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) functions to perform dynamic state checks. The conditional logic enables GuardDuty only if it isn’t already enabled. It uses the stack outputs to record the GuardDuty status for future reference.

By following this pattern, you can automate GuardDuty deployments across your AWS infrastructure while maintaining clean, predictable CloudFormation stack operations. This approach is particularly valuable for organizations that are:
+ Managing multiple AWS accounts through IaC
+ Implementing security services at scale
+ Requiring idempotent infrastructure deployments
+ Automating security service deployments

## Prerequisites and limitations
<a name="enable-amazon-guardduty-conditionally-by-using-aws-cloudformation-templates-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ An AWS Identity and Access Management (IAM) role that has permissions to create, update, and delete CloudFormation stacks
+ AWS Command Line Interface (AWS CLI), [installed](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html#getting-started-install-instructions) and [configured](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html)

**Limitations**

If GuardDuty has been manually disabled for an AWS account or AWS Region, this pattern does not enable GuardDuty for that target account or Region.

## Architecture
<a name="enable-amazon-guardduty-conditionally-by-using-aws-cloudformation-templates-architecture"></a>

**Target technology stack**

The pattern uses CloudFormation for infrastructure as code (IaC). You use a CloudFormation custom resource backed by a Lambda function to achieve the dynamic service-enablement capability.

**Target architecture**

The following high-level architecture diagram shows the process of enabling GuardDuty by deploying a CloudFormation template:

![\[Using a CloudFormation stack to enable GuardDuty in an AWS account.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/3abd7cb7-0937-41fe-8eaa-79aedb182732/images/71624052-eebc-474a-9aa3-8606d87fc51d.png)


1. You deploy a CloudFormation template to create a CloudFormation stack.

1. The stack creates an IAM role and a Lambda function.

1. The Lambda function assumes the IAM role.

1. If GuardDuty is not already enabled on the target AWS account, the Lambda function enables it.
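The conditional check in step 4 reduces to a few lines. The sketch below isolates that logic with an injected client so it can be tested without AWS credentials; the complete Lambda handler appears in the Additional information section.

```
def ensure_guardduty(client):
    """Enable GuardDuty only if no detector exists in this account and
    Region, and return the resulting status. `client` is a guardduty-style
    client (list_detectors and create_detector are real API actions)."""
    detector_ids = client.list_detectors().get("DetectorIds", [])
    if detector_ids:
        # An account/Region can have at most one detector; nothing to do.
        return "AlreadyEnabled"
    client.create_detector(Enable=True)
    return "Enabled"
```

Because the function is idempotent, rerunning the stack (or a stack set instance) never fails on an account where GuardDuty is already active.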

**Automation and scale**

You can use the AWS CloudFormation StackSet feature to extend this solution to multiple AWS accounts and AWS Regions. For more information, see [Working with AWS CloudFormation StackSets](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/what-is-cfnstacksets.html) in the CloudFormation documentation.

## Tools
<a name="enable-amazon-guardduty-conditionally-by-using-aws-cloudformation-templates-tools"></a>
+ [AWS Command Line Interface (AWS CLI)](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) is an open-source tool that helps you interact with AWS services through commands in your command-line shell.
+ [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) helps you set up AWS resources, provision them quickly and consistently, and manage them throughout their lifecycle across AWS accounts and Regions.
+ [Amazon GuardDuty](https://docs.aws.amazon.com/guardduty/latest/ug/what-is-guardduty.html) is a continuous security monitoring service that analyzes and processes logs to identify unexpected and potentially unauthorized activity in your AWS environment.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a compute service that helps you run code without needing to provision or manage servers. It runs your code only when needed and scales automatically, so you pay only for the compute time that you use.

## Epics
<a name="enable-amazon-guardduty-conditionally-by-using-aws-cloudformation-templates-epics"></a>

### Create the CloudFormation template and deploy the stack
<a name="create-the-cfnshort-template-and-deploy-the-stack"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Store the code in Amazon S3. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/enable-amazon-guardduty-conditionally-by-using-aws-cloudformation-templates.html) | AWS DevOps | 
| Create the CloudFormation template. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/enable-amazon-guardduty-conditionally-by-using-aws-cloudformation-templates.html) | AWS DevOps | 
| Create the CloudFormation stack. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/enable-amazon-guardduty-conditionally-by-using-aws-cloudformation-templates.html) | AWS DevOps | 
| Validate that GuardDuty is enabled for the AWS account. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/enable-amazon-guardduty-conditionally-by-using-aws-cloudformation-templates.html) | Cloud administrator, AWS administrator | 
| Configure additional accounts or Regions. | As needed for your use case, use the CloudFormation StackSet feature to extend this solution to multiple AWS accounts and AWS Regions. For more information, see [Working with AWS CloudFormation StackSets](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/what-is-cfnstacksets.html) in the CloudFormation documentation. | Cloud administrator, AWS administrator | 

## Related resources
<a name="enable-amazon-guardduty-conditionally-by-using-aws-cloudformation-templates-resources"></a>

**References**
+ [AWS CloudFormation documentation](https://docs.aws.amazon.com/cloudformation/index.html)
+ [AWS Lambda resource type reference](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/AWS_Lambda.html)
+ [CloudFormation resource type: AWS::IAM::Role](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iam-role.html)
+ [CloudFormation resource type: AWS::GuardDuty::Detector ](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-guardduty-detector.html)
+ [Four ways to retrieve any AWS service property using AWS CloudFormation](https://aws.amazon.com/blogs/mt/four-ways-to-retrieve-any-aws-service-property-using-aws-cloudformation-part-1/) (blog post)

**Tutorials and videos**
+ [Simplify Your Infrastructure Management Using AWS CloudFormation](https://www.youtube.com/watch?v=1h-GPXQrLZw) (Tutorial)
+ [Use Amazon GuardDuty and AWS Security Hub CSPM to secure multiple accounts](https://www.youtube.com/watch?v=Rg2ZzAAi1nY) (AWS re:Invent 2020)
+ [Best practices for authoring AWS CloudFormation](https://www.youtube.com/watch?v=bJHHQM7GGro) (AWS re:Invent 2019)
+ [Threat Detection on AWS: An Introduction to Amazon GuardDuty](https://www.youtube.com/watch?v=czsuZXQvD8E) (AWS re:Inforce 2019)

## Additional information
<a name="enable-amazon-guardduty-conditionally-by-using-aws-cloudformation-templates-additional"></a>

**Python code**

```
import boto3
import cfnresponse

guardduty = boto3.client('guardduty')
cfn = boto3.client('cloudformation')

# Minimal template that enables GuardDuty by creating a detector.
GUARDDUTY_TEMPLATE = (
    '{ "AWSTemplateFormatVersion": "2010-09-09", '
    '"Description": "GuardDuty creation template", '
    '"Resources": { "IRWorkshopGuardDutyDetector": { '
    '"Type": "AWS::GuardDuty::Detector", '
    '"Properties": { "Enable": true } } } }'
)

def lambda_handler(event, context):
    print('Event: ', event)
    if 'RequestType' not in event:
        return
    if event['RequestType'] in ('Create', 'Update'):
        status = 'Failed'
        try:
            response = guardduty.list_detectors()
            if response.get('DetectorIds'):
                # A detector already exists, so GuardDuty is already enabled.
                status = 'AlreadyEnabled'
            else:
                # No detector exists: enable GuardDuty by deploying a
                # nested stack that creates an AWS::GuardDuty::Detector.
                cfn.create_stack(
                    StackName='guardduty-cfn-stack',
                    TemplateBody=GUARDDUTY_TEMPLATE,
                )
                status = 'True'
        except Exception as e:
            print('Exception: ', e)
        # Report the result back to CloudFormation so the stack can record
        # the GuardDuty status in its outputs.
        cfnresponse.send(event, context, cfnresponse.SUCCESS,
                         {'status': status}, 'CustomResourcePhysicalID')
    elif event['RequestType'] == 'Delete':
        cfn.delete_stack(StackName='guardduty-cfn-stack')
        cfnresponse.send(event, context, cfnresponse.SUCCESS, {})
```



# Enable transparent data encryption in Amazon RDS for SQL Server
<a name="enable-transparent-data-encryption-in-amazon-rds-for-sql-server"></a>

*Ranga Cherukuri, Amazon Web Services*

## Summary
<a name="enable-transparent-data-encryption-in-amazon-rds-for-sql-server-summary"></a>

This pattern describes how to implement transparent data encryption (TDE) in Amazon Relational Database Service (Amazon RDS) for SQL Server to encrypt data at rest.

## Prerequisites and limitations
<a name="enable-transparent-data-encryption-in-amazon-rds-for-sql-server-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ An Amazon RDS for SQL Server DB instance

**Product versions**

Amazon RDS currently supports TDE for the following SQL Server versions and editions:
+ SQL Server 2016 Enterprise Edition
+ SQL Server 2017 Enterprise Edition
+ SQL Server 2019 Standard and Enterprise Editions
+ SQL Server 2022 Standard and Enterprise Editions

For the latest information about supported versions and editions, see [Support for Transparent Data Encryption in SQL Server](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.SQLServer.Options.TDE.html) in the Amazon RDS documentation.

## Architecture
<a name="enable-transparent-data-encryption-in-amazon-rds-for-sql-server-architecture"></a>

**Technology stack**
+ Amazon RDS for SQL Server

**Architecture**

![\[Architecture for enabling TDE for Amazon RDS for SQL Server databases\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/f513ea66-fd14-48d3-a576-8eb281e77b21/images/8a121e67-9a90-42d2-988e-3bcab0e6bc35.png)


## Tools
<a name="enable-transparent-data-encryption-in-amazon-rds-for-sql-server-tools"></a>
+ Microsoft SQL Server Management Studio (SSMS) is an integrated environment for managing a SQL Server infrastructure. It provides a user interface and a group of tools with rich script editors that interact with SQL Server.

## Epics
<a name="enable-transparent-data-encryption-in-amazon-rds-for-sql-server-epics"></a>

### Create an option group in the Amazon RDS console
<a name="create-an-option-group-in-the-amazon-rds-console"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Open the Amazon RDS console.  | Sign in to the AWS Management Console and open the [Amazon RDS console](https://console.aws.amazon.com/rds/). | Developer, DBA | 
| Create an option group. | In the navigation pane, choose **Option groups**, **Create group**. Choose **sqlserver-ee** as the DB engine, and then select the engine version. | Developer, DBA | 
| Add the TRANSPARENT_DATA_ENCRYPTION option. | Edit the option group you created and add the option called `TRANSPARENT_DATA_ENCRYPTION`. | Developer, DBA | 
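If you prefer to script the console steps above, the following sketch uses the equivalent boto3 RDS calls (`create_option_group` and `modify_option_group` are real API actions). The group name, description, and engine version shown are placeholders, and the client is injected so the logic can be tested without AWS credentials:

```
def create_tde_option_group(rds, name, major_engine_version):
    """Create an option group for SQL Server Enterprise Edition and add
    the TRANSPARENT_DATA_ENCRYPTION option to it."""
    rds.create_option_group(
        OptionGroupName=name,
        EngineName="sqlserver-ee",
        MajorEngineVersion=major_engine_version,
        OptionGroupDescription="TDE option group",  # placeholder text
    )
    rds.modify_option_group(
        OptionGroupName=name,
        OptionsToInclude=[{"OptionName": "TRANSPARENT_DATA_ENCRYPTION"}],
        ApplyImmediately=True,
    )
```

For a different edition, substitute the matching engine name (for example, `sqlserver-se` for Standard Edition) and a major engine version that supports TDE.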

### Associate the option group with the DB instance
<a name="associate-the-option-group-with-the-db-instance"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Choose the DB instance. | In the Amazon RDS console, in the navigation pane, choose **Databases**, and then choose the DB instance you want to associate with the option group. | Developer, DBA | 
| Associate the DB instance with the option group. | Choose **Modify**, and then use the **Option group** setting to associate the SQL Server DB instance with the option group you created earlier. | Developer, DBA | 
| Apply the changes. | Apply the changes immediately or during the next maintenance window, as desired. | Developer, DBA | 
| Get the certificate name. | Get the default certificate name by using the following query.<pre>USE [master]<br />GO<br />SELECT name FROM sys.certificates WHERE name LIKE 'RDSTDECertificate%'<br />GO</pre> | Developer, DBA | 

### Create the database encryption key
<a name="create-the-database-encryption-key"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Connect to the Amazon RDS for SQL Server DB instance using SSMS. | For instructions, see [Using SSMS](https://docs.microsoft.com/en-us/sql/ssms/sql-server-management-studio-ssms) in the Microsoft documentation. | Developer, DBA | 
| Create the database encryption key by using the default certificate. | Create a database encryption key by using the default certificate name you got earlier. Use the following T-SQL query to create a database encryption key. You can specify the AES_256 algorithm instead of AES_128.<pre>USE [Databasename]<br />GO<br />CREATE DATABASE ENCRYPTION KEY<br />WITH ALGORITHM = AES_128<br />ENCRYPTION BY SERVER CERTIFICATE [certificatename]<br />GO</pre> | Developer, DBA | 
| Enable the encryption on the database. | Use the following T-SQL query to enable database encryption.<pre>ALTER DATABASE [Database Name]<br />SET ENCRYPTION ON<br />GO</pre> | Developer, DBA | 
| Check the status of encryption. | Use the following T-SQL query to check the status of encryption.<pre>SELECT DB_NAME(database_id) AS DatabaseName, encryption_state, percent_complete FROM sys.dm_database_encryption_keys</pre> | Developer, DBA | 

## Related resources
<a name="enable-transparent-data-encryption-in-amazon-rds-for-sql-server-resources"></a>
+ [Support for Transparent Data Encryption in SQL Server](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.SQLServer.Options.TDE.html) (Amazon RDS documentation)
+ [Working with Option Groups](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithOptionGroups.html) (Amazon RDS documentation)
+ [Modifying an Amazon RDS DB Instance](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.DBInstance.Modifying.html) (Amazon RDS documentation)
+ [Transparent Data Encryption for SQL Server](https://docs.microsoft.com/en-us/sql/relational-databases/security/encryption/transparent-data-encryption) (Microsoft documentation)
+ [Using SSMS](https://docs.microsoft.com/en-us/sql/ssms/sql-server-management-studio-ssms) (Microsoft documentation)

# Monitor and remediate scheduled deletion of AWS KMS keys
<a name="monitor-and-remediate-scheduled-deletion-of-aws-kms-keys"></a>

*Mikesh Khanal and Ramya Pulipaka, Amazon Web Services*

## Summary
<a name="monitor-and-remediate-scheduled-deletion-of-aws-kms-keys-summary"></a>

On the Amazon Web Services (AWS) Cloud, deleting an AWS Key Management Service (AWS KMS) key can result in data loss. Deletion removes the key material and all metadata associated with the AWS KMS key, and it is irreversible. After an AWS KMS key is deleted, you can no longer decrypt the data that was encrypted under that key, so the data cannot be recovered.

This pattern sets up monitoring, with notifications when an application or a user schedules an AWS KMS key for deletion. If you receive a notification, you might want to cancel deletion of the AWS KMS key and reconsider your decision to delete it. The pattern uses the AWS Systems Manager automation runbook [AWSConfigRemediation-CancelKeyDeletion](https://docs.aws.amazon.com/systems-manager-automation-runbooks/latest/userguide/automation-aws-cancel-key-deletion.html) to facilitate canceling the deletion of an AWS KMS key.

**Note**  
The pattern's CloudFormation template must be deployed in all AWS Regions where you want to monitor deletion of AWS KMS keys.

## Prerequisites and limitations
<a name="monitor-and-remediate-scheduled-deletion-of-aws-kms-keys-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ Understanding of the following AWS services: 
  + Amazon EventBridge
  + AWS KMS
  + Amazon Simple Notification Service (Amazon SNS)
  + AWS Systems Manager

**Limitations**
+ Any customization of the solution requires knowledge of AWS CloudFormation templates and the AWS services used in this pattern.
+ This solution uses the default event bus. You can customize it to use a custom event bus according to your requirements. For more information, see the [AWS documentation](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-event-bus.html).

## Architecture
<a name="monitor-and-remediate-scheduled-deletion-of-aws-kms-keys-architecture"></a>

**Target technology stack**
+ Amazon EventBridge
+ AWS KMS
+ Amazon SNS
+ AWS Systems Manager
+ Automation using the following:
  + AWS Command Line Interface (AWS CLI) or AWS SDK
  + AWS CloudFormation stack

**Target architecture**

![\[Diagram of the five steps of the monitoring, alerting, and remediation process.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/56927ebc-bbf7-49cc-9ad2-b2e0dff1201c/images/32537a66-037a-45a1-af19-3bc7bc26eaa6.png)


1. Deletion of an AWS KMS key is scheduled.

1. The scheduled-deletion event is evaluated by an EventBridge rule.

1. The EventBridge rule engages the Amazon SNS topic.

1. The EventBridge rule initiates the Systems Manager automation and runbooks.

1. The runbooks cancel the deletion.
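The pattern matching in steps 2 through 4 can be sketched in a few lines. The following Python sketch is only an illustration of how an EventBridge-style pattern selects the scheduled-deletion event; it is not the EventBridge engine, and the simplified event shape is an assumption.

```python
# Minimal sketch of EventBridge-style pattern matching (illustration only).
def matches(event: dict, pattern: dict) -> bool:
    """Return True if every pattern field is present in the event with an allowed value."""
    for key, allowed in pattern.items():
        value = event.get(key)
        if isinstance(allowed, dict):
            # Nested pattern: recurse into the corresponding event object.
            if not isinstance(value, dict) or not matches(value, allowed):
                return False
        elif value not in allowed:
            return False
    return True

# Pattern used by this solution's EventBridge rule.
pattern = {
    "source": ["aws.kms"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {"eventSource": ["kms.amazonaws.com"],
               "eventName": ["ScheduleKeyDeletion"]},
}

# Simplified CloudTrail event for a scheduled key deletion (assumed shape).
sample_event = {
    "source": "aws.kms",
    "detail-type": "AWS API Call via CloudTrail",
    "detail": {"eventSource": "kms.amazonaws.com",
               "eventName": "ScheduleKeyDeletion"},
}

print(matches(sample_event, pattern))  # True
```

EventBridge itself supports richer matching (for example, prefix, anything-but, and numeric filters), so treat this as a mental model only.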

**Automation and scale**

The CloudFormation stack deploys all the necessary resources and services for this solution to work. The pattern can be run independently in a single account or run using AWS CloudFormation StackSets for multiple independent accounts or an organization.

```
aws cloudformation create-stack --stack-name <stack-name> \
    --template-body file://<Full-Path-of-file> \
    --parameters ParameterKey=<key>,ParameterValue=<value> \
    --capabilities CAPABILITY_NAMED_IAM
```

## Tools
<a name="monitor-and-remediate-scheduled-deletion-of-aws-kms-keys-tools"></a>

**Tools**
+ [AWS CloudFormation](https://aws.amazon.com/cloudformation/) – AWS CloudFormation is a service that helps you model and set up your Amazon Web Services resources so that you can spend less time managing those resources and more time focusing on your applications that run on AWS. You can use a CloudFormation template to create stacks in an AWS account in an AWS Region. The template describes all the AWS resources that you want, and CloudFormation provisions and configures those resources for you.
+ [AWS CLI](https://docs.aws.amazon.com/cli/?id=docs_gateway) – The AWS Command Line Interface (AWS CLI) is an open source tool that you can use to interact with AWS services using commands in your command line shell.
+ [Amazon EventBridge](https://docs.aws.amazon.com/eventbridge/latest/userguide/what-is-amazon-eventbridge.html) – Amazon EventBridge is a serverless event bus service connecting your applications with data from a variety of sources. EventBridge delivers a stream of real-time data from your own applications and AWS services, and it routes that data to targets such as AWS Lambda. EventBridge simplifies the process of building event-driven architectures.
+ [AWS KMS](https://aws.amazon.com/kms/) – AWS Key Management Service (AWS KMS) is a managed service for creating and controlling AWS KMS keys, the encryption keys used to encrypt your data.
+ [AWS SDKs](https://aws.amazon.com/tools/?id=docs_gateway) – AWS tools include SDKs so that you can develop and manage applications on AWS in the programming language of your choice.
+ [Amazon SNS](https://aws.amazon.com/sns/) – Amazon Simple Notification Service (Amazon SNS) is a managed service that provides message delivery from publishers to subscribers (also known as producers and consumers). Publishers communicate asynchronously with subscribers by sending messages to a topic, which is a logical access point and communication channel. 
+ [AWS Systems Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-automation.html) – AWS Systems Manager is an AWS service that you can use to view and control your infrastructure on AWS. Using the Systems Manager console, you can automate operational tasks across your AWS resources. Systems Manager helps you maintain security and compliance by scanning your managed instances and reporting on (or taking corrective action on) any policy violations it detects.  

**Code**
+ The `alerting_ct_logs.yaml` CloudFormation template for the project is attached.

## Epics
<a name="monitor-and-remediate-scheduled-deletion-of-aws-kms-keys-epics"></a>

### Prepare the AWS account
<a name="prepare-the-aws-account"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Install and configure AWS CLI. | Install AWS CLI version 2. Then configure the security credentials for an identity, the default output format, and the default AWS Region that the AWS CLI uses to interact with AWS. The identity must have the required permissions to perform the tasks. | Developer, Security engineer | 

### Deploy the AWS CloudFormation template
<a name="deploy-the-aws-cloudformation-template"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Download the CloudFormation template. | Download the attachment to a local path on your computer and extract the `alerting_ct_logs.yaml` template file. | Developer, Security engineer | 
| Deploy the template. | In the terminal window where the AWS account profile has been configured, run the following command.<pre>aws cloudformation create-stack --stack-name <stack_name> \<br />--capabilities <Value> \<br />--template-body file://<Full_Path> \<br />--parameters ParameterKey=DestinationEmailAddress,ParameterValue=<Value> \<br />ParameterKey=SNSTopicName,ParameterValue=<Value> \<br />ParameterKey=EnableRemediation,ParameterValue=<Value> \<br />ParameterKey=AutomationAssumeRole,ParameterValue=<Value></pre>In the next step, enter values for the template parameters. | Developer, Security engineer | 
| Complete the template parameters. | Enter the required values for the parameters.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/monitor-and-remediate-scheduled-deletion-of-aws-kms-keys.html) | Developer, Security engineer | 

### Confirm the subscription
<a name="confirm-the-subscription"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Confirm the subscription. | Check your email inbox and choose **Confirm subscription** in the email message that you receive from Amazon SNS. A web browser window will open and display a subscription confirmation and your subscription ID.  | Developer, Security engineer | 

## Related resources
<a name="monitor-and-remediate-scheduled-deletion-of-aws-kms-keys-resources"></a>

**References**
+ [Creating a rule for an AWS service](https://docs.aws.amazon.com/eventbridge/latest/userguide/create-eventbridge-rule.html)
+ [Creating an Amazon CloudWatch alarm to detect usage of an AWS KMS key that is pending deletion](https://docs.aws.amazon.com/kms/latest/developerguide/deleting-keys-creating-cloudwatch-alarm.html)

**Tutorials and videos**
+ [How to get started with Amazon EventBridge](https://www.youtube.com/watch?v=ea9SCYDJIm4)
+ [Deep dive on Amazon EventBridge](https://www.youtube.com/watch?v=28B4L1fnnGM) (AWS Online Tech Talks)

**AWS workshop**
+ [Working with EventBridge rules](https://event-driven-architecture.workshop.aws/2-event-bridge/2-rules/rules.html)

## Additional information
<a name="monitor-and-remediate-scheduled-deletion-of-aws-kms-keys-additional"></a>

The following code provides examples for extending the solution to monitor and notify you of any changes in any AWS service. The examples include predefined patterns and custom patterns. For more information, see [Events and event patterns in EventBridge](https://docs.aws.amazon.com/eventbridge/latest/userguide/eventbridge-and-event-patterns.html).

```
EventPattern:
  source:
  - aws.kms
  detail-type:
  - AWS API Call via CloudTrail
  detail:
    eventSource:
    - kms.amazonaws.com
    eventName:
    - ScheduleKeyDeletion
```
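The same pattern shape can be adapted to watch additional AWS KMS API calls. For example, the following hypothetical variation (not included in the attached template) also matches `DisableKey` events:

```
EventPattern:
  source:
  - aws.kms
  detail-type:
  - AWS API Call via CloudTrail
  detail:
    eventSource:
    - kms.amazonaws.com
    eventName:
    - ScheduleKeyDeletion
    - DisableKey
```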

## Attachments
<a name="attachments-56927ebc-bbf7-49cc-9ad2-b2e0dff1201c"></a>

To access additional content that is associated with this document, unzip the following file: [attachment.zip](samples/p-attach/56927ebc-bbf7-49cc-9ad2-b2e0dff1201c/attachments/attachment.zip)

# Identify public Amazon S3 buckets in AWS Organizations by using Security Hub CSPM
<a name="identify-public-s3-buckets-in-aws-organizations-using-security-hub"></a>

*Mourad Cherfaoui, Arun Chandapillai, and Parag Nagwekar, Amazon Web Services*

## Summary
<a name="identify-public-s3-buckets-in-aws-organizations-using-security-hub-summary"></a>

This pattern shows you how to build a mechanism for identifying public Amazon Simple Storage Service (Amazon S3) buckets in your AWS Organizations accounts. The mechanism works by using controls from the [AWS Foundational Security Best Practices (FSBP) standard](https://docs.aws.amazon.com/securityhub/latest/userguide/fsbp-standard.html) in AWS Security Hub CSPM to monitor Amazon S3 buckets. You can use Amazon EventBridge to process Security Hub CSPM [findings](https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-findings.html), and then post these findings to an Amazon Simple Notification Service (Amazon SNS) topic. Stakeholders in your organization can subscribe to the topic and get immediate email notifications about the findings.

New Amazon S3 buckets and their objects don't allow public access by default. You can use this pattern in scenarios where you must modify the default Amazon S3 configuration based on your organization's requirements. For example, you might have an Amazon S3 bucket that hosts a public-facing website or files that everyone on the internet must be able to read.
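When you relax those defaults, it helps to evaluate the four public access block settings together. The following Python sketch is an illustration only (it is not part of this pattern's mechanism); the key names mirror the `PublicAccessBlockConfiguration` structure that Amazon S3 returns.

```python
def may_allow_public_access(config: dict) -> bool:
    """Return True if any of the four public access block settings is disabled or absent."""
    settings = ("BlockPublicAcls", "IgnorePublicAcls",
                "BlockPublicPolicy", "RestrictPublicBuckets")
    return not all(config.get(name, False) for name in settings)

# A fully locked-down bucket is not flagged; turning any setting off flags it.
print(may_allow_public_access({"BlockPublicAcls": True, "IgnorePublicAcls": True,
                               "BlockPublicPolicy": True, "RestrictPublicBuckets": True}))  # False
print(may_allow_public_access({"BlockPublicAcls": True, "IgnorePublicAcls": True,
                               "BlockPublicPolicy": False, "RestrictPublicBuckets": True}))  # True
```

A flagged configuration does not prove the bucket is public; bucket policies and ACLs still decide actual access, which is what the S3.2 and S3.3 controls evaluate.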

Security Hub CSPM is often deployed as a central service to consolidate all security findings, including those related to security standards and compliance requirements. There are other AWS services that you can use to detect public Amazon S3 buckets, but this pattern uses an existing Security Hub CSPM deployment with minimal configuration.

## Prerequisites and limitations
<a name="identify-public-s3-buckets-in-aws-organizations-using-security-hub-prereqs"></a>

**Prerequisites**
+ An AWS multi-account setup with a dedicated [Security Hub CSPM administrator account](https://docs.aws.amazon.com/securityhub/latest/userguide/designate-orgs-admin-account.html)
+ Security Hub CSPM and AWS Config, enabled in the AWS Region that you want to monitor 
**Note**  
You must enable [cross-Region aggregation](https://docs.aws.amazon.com/securityhub/latest/userguide/finding-aggregation-enable.html) in Security Hub CSPM if you want to monitor multiple Regions from a single aggregation Region.
+ User permissions for accessing and updating the Security Hub CSPM administrator account, read access to all the Amazon S3 buckets in the organization, and permissions for turning off public access (if required)

## Architecture
<a name="identify-public-s3-buckets-in-aws-organizations-using-security-hub-architecture"></a>

The following diagram shows an architecture for using Security Hub CSPM to identify public Amazon S3 buckets.

![\[Diagram showing cross-account replication workflow\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/7e365290-e3e9-460a-b69f-669ba459cf4c/images/381d66ac-ec03-4458-9793-9d125cebdba6.png)


The diagram shows the following workflow:

1. Security Hub CSPM monitors the configuration of Amazon S3 buckets in all AWS Organizations accounts (including the administrator account) by using the S3.2 and S3.3 controls from the FSBP security standard, and detects a finding if a bucket is configured as public.

1. The Security Hub CSPM administrator account accesses the findings (including those for S3.2 and S3.3) from all member accounts.

1. Security Hub CSPM automatically sends all new findings and all updates to existing findings to EventBridge as **Security Hub CSPM Findings - Imported** events. This includes events for findings from both the administrator and member accounts.

1. An EventBridge rule filters on findings from S3.2 and S3.3 that have a `ComplianceStatus` of `FAILED`, a workflow status of `NEW`, and a `RecordState` of `ACTIVE`.

1. When an event matches the rule's pattern, EventBridge sends it to an Amazon SNS topic.

1. An Amazon SNS topic sends the events to its subscribers (through email, for example).

1. Security analysts designated to receive the email notifications review the Amazon S3 bucket in question.

1. If the bucket is approved for public access, the security analyst sets the workflow status of the corresponding finding in Security Hub CSPM to `SUPPRESSED`. Otherwise, the analyst sets the status to `NOTIFIED`. This eliminates future notifications for the Amazon S3 bucket and reduces notification noise.

1. If the workflow status is set to `NOTIFIED`, the security analyst reviews the finding with the bucket owner to determine if public access is justified and complies with privacy and data protection requirements. The investigation results in either removing public access to the bucket or approving public access. In the latter case, the security analyst sets the workflow status to `SUPPRESSED`.

**Note**  
The architecture diagram applies to both single Region and cross-Region aggregation deployments. In accounts A, B, and C in the diagram, Security Hub CSPM can belong to the same Region as the administrator account or belong to different Regions if cross-Region aggregation is enabled.
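The rule in steps 4 and 5 might be expressed with an event pattern along the following lines. This is a sketch, not the exact rule; the finding field names (for example, `ProductFields.ControlId`) are assumptions based on the AWS Security Finding Format, so verify them against the findings in your account. The literal `detail-type` string emitted by the service is `Security Hub Findings - Imported`.

```
source:
- aws.securityhub
detail-type:
- Security Hub Findings - Imported
detail:
  findings:
    Compliance:
      Status:
      - FAILED
    Workflow:
      Status:
      - NEW
    RecordState:
    - ACTIVE
    ProductFields:
      ControlId:
      - S3.2
      - S3.3
```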

## Tools
<a name="identify-public-s3-buckets-in-aws-organizations-using-security-hub-tools"></a>
+ [Amazon EventBridge](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-what-is.html) is a serverless event bus service that helps you connect your applications with real-time data from a variety of sources. EventBridge delivers a stream of real-time data from your own applications, software as a service (SaaS) applications, and AWS services. EventBridge routes that data to targets such as Amazon SNS topics and AWS Lambda functions if the data matches user-defined rules.
+ [Amazon Simple Notification Service (Amazon SNS)](https://docs.aws.amazon.com/sns/latest/dg/welcome.html) helps you coordinate and manage the exchange of messages between publishers and clients, including web servers and email addresses. Subscribers receive all messages published to the topics to which they subscribe, and all subscribers to a topic receive the same messages.
+ [Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) is a cloud-based object storage service that helps you store, protect, and retrieve any amount of data.
+ [AWS Security Hub CSPM](https://docs.aws.amazon.com/securityhub/latest/userguide/what-is-securityhub.html) provides a comprehensive view of your security state in AWS. Security Hub CSPM also helps you check your AWS environment against security industry standards and best practices. Security Hub CSPM collects security data from across AWS accounts, services, and supported third-party partner products, and then helps to analyze security trends and identify the highest priority security issues.

## Epics
<a name="identify-public-s3-buckets-in-aws-organizations-using-security-hub-epics"></a>

### Configure Security Hub CSPM accounts
<a name="configure-ash-accounts"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Enable Security Hub CSPM in AWS Organizations accounts. | To enable Security Hub CSPM in the organization accounts where you want to monitor Amazon S3 buckets, see the guidelines from [Designating a Security Hub CSPM administrator account (console)](https://docs.aws.amazon.com/securityhub/latest/userguide/designate-orgs-admin-account.html#:~:text=AWSSecurityHubOrganizationsAccess-,Designating%20a%20Security%20Hub%20administrator%20account%20(console),-The%20organization%20management) and [Managing member accounts that belong to an organization](https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-accounts-orgs.html) in the Security Hub CSPM documentation. | AWS administrator | 
| (Optional) Enable cross-Region aggregation. | If you want to monitor Amazon S3 buckets in multiple Regions from a single Region, set up [cross-Region aggregation](https://docs.aws.amazon.com/securityhub/latest/userguide/finding-aggregation.html). | AWS administrator | 
| Enable the S3.2 and S3.3 controls for the FSBP security standard. | You must enable S3.2 and S3.3 controls for the FSBP security standard.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/identify-public-s3-buckets-in-aws-organizations-using-security-hub.html) | AWS administrator | 

### Set up the environment
<a name="set-up-the-environment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Configure the Amazon SNS topic and email subscription. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/identify-public-s3-buckets-in-aws-organizations-using-security-hub.html) | AWS administrator | 
| Configure the EventBridge rule. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/identify-public-s3-buckets-in-aws-organizations-using-security-hub.html) | AWS administrator | 

## Troubleshooting
<a name="identify-public-s3-buckets-in-aws-organizations-using-security-hub-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| I have an Amazon S3 bucket with public access enabled, but I'm not getting email notifications for it. | This could be because the bucket was created in another Region and cross-Region aggregation is not enabled in the Security Hub CSPM administrator account. To resolve this issue, enable cross-Region aggregation or implement this pattern's solution in the Region where your Amazon S3 bucket currently resides. | 

## Related resources
<a name="identify-public-s3-buckets-in-aws-organizations-using-security-hub-resources"></a>
+ [What is AWS Security Hub CSPM?](https://docs.aws.amazon.com/securityhub/latest/userguide/what-is-securityhub.html) (Security Hub CSPM documentation)
+ [AWS Foundational Security Best Practices (FSBP) standard](https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-standards-fsbp.html) (Security Hub CSPM documentation)
+ [AWS Security Hub CSPM multi-account enable scripts](https://github.com/awslabs/aws-securityhub-multiaccount-scripts/tree/master/multiaccount-enable) (AWS Labs)
+ [Security best practices for Amazon S3](https://docs.aws.amazon.com/AmazonS3/latest/userguide/security-best-practices.html) (Amazon S3 documentation)

## Additional information
<a name="identify-public-s3-buckets-in-aws-organizations-using-security-hub-additional"></a>

*Workflow for monitoring public Amazon S3 buckets*

The following workflow illustrates how you can monitor the public Amazon S3 buckets in your organization. The workflow assumes that you completed the steps in the *Configure the Amazon SNS topic and email subscription* story of this pattern.

1. You receive an email notification when an Amazon S3 bucket is configured with public access.
   + If the bucket is approved for public access, set the workflow status of the corresponding finding to `SUPPRESSED` in the Security Hub CSPM administrator account. This prevents Security Hub CSPM from issuing further notifications for this bucket and can eliminate duplicate alerts.
   + If the bucket isn't approved for public access, set the workflow status of the corresponding finding in the Security Hub CSPM administrator account to `NOTIFIED`. This prevents Security Hub CSPM from issuing further notifications for this bucket and can eliminate noise.

1. If the bucket might contain sensitive data, turn off public access immediately until the review is completed. If you turn off public access, then Security Hub CSPM changes the workflow status to `RESOLVED`. Then, email notifications for the bucket stop.

1. Find the user who configured the bucket as public (for example, by using AWS CloudTrail) and start a review. The review results in either removing public access to the bucket or approving public access. If public access is approved, then set the workflow status of the corresponding finding to `SUPPRESSED`.
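Setting the workflow status in steps 1 and 3 can also be scripted. The following Python sketch builds the payload that such automation might pass to the Security Hub `BatchUpdateFindings` API; the finding ARNs, note text, and `UpdatedBy` value are illustrative.

```python
def suppress_finding_request(finding_id: str, product_arn: str) -> dict:
    """Build the kwargs for a Security Hub BatchUpdateFindings call that
    suppresses one finding after public access has been approved."""
    return {
        "FindingIdentifiers": [{"Id": finding_id, "ProductArn": product_arn}],
        "Workflow": {"Status": "SUPPRESSED"},
        # Illustrative audit note; adjust to your review process.
        "Note": {"Text": "Public access approved after review.",
                 "UpdatedBy": "security-analyst"},
    }

# Example identifiers (hypothetical values).
request = suppress_finding_request(
    "arn:aws:securityhub:us-east-1:111122223333:finding/example",
    "arn:aws:securityhub:us-east-1::product/aws/securityhub",
)
print(request["Workflow"]["Status"])  # SUPPRESSED
```

With the administrator account credentials, a call such as `boto3.client("securityhub").batch_update_findings(**request)` would then apply the change.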

# Ingest and analyze AWS security logs in Microsoft Sentinel
<a name="ingest-analyze-aws-security-logs-sentinel"></a>

*Ivan Girardi and Sebastian Wenzel, Amazon Web Services*

## Summary
<a name="ingest-analyze-aws-security-logs-sentinel-summary"></a>

This pattern describes how to automate the ingestion of AWS security logs, such as AWS CloudTrail logs, Amazon CloudWatch Logs data, Amazon VPC Flow Logs data, and Amazon GuardDuty findings, into Microsoft Sentinel. If your organization uses Microsoft Sentinel as a security information and event management (SIEM) system, this helps you centrally monitor and analyze logs in order to detect security-related events. As soon as the logs are available, they are automatically delivered to an Amazon Simple Storage Service (Amazon S3) bucket in less than 5 minutes. This can help you quickly detect security events in your AWS environment.

Microsoft Sentinel ingests CloudTrail logs in a tabular format that includes the original timestamp for when the event was recorded. The structure of the ingested logs enables query capabilities by using [Kusto Query Language](https://learn.microsoft.com/en-us/azure/sentinel/kusto-overview) in Microsoft Sentinel.

The pattern deploys a monitoring and alerting solution that detects ingestion failures in less than 1 minute. It also includes a notification system that the external SIEM can monitor. You use AWS CloudFormation to deploy the required resources in the logging account.

**Target audience**

This pattern is recommended for users who have experience with AWS Control Tower, AWS Organizations, CloudFormation, AWS Identity and Access Management (IAM), and AWS Key Management Service (AWS KMS).

## Prerequisites and limitations
<a name="ingest-analyze-aws-security-logs-sentinel-prereqs"></a>

**Prerequisites**

The following are the prerequisites for deploying this solution:
+ Active AWS accounts that are managed as an organization in AWS Organizations and are part of an AWS Control Tower landing zone. The organization should include a dedicated account for logging. For instructions, see [Creating and configuring an organization](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_tutorials_basic.html) in the AWS Organizations documentation.
+ A CloudTrail trail that logs events for the entire organization and stores logs in an Amazon S3 bucket in the logging account. For instructions, see [Creating a trail for an organization](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/creating-trail-organization.html).
+ In the logging account, permissions to assume an existing IAM role that can do the following:
  + Deploy the provided CloudFormation template and the resources defined in it.
  + Modify the AWS KMS key policy if the logs are encrypted with a customer managed key.
+ AWS Command Line Interface (AWS CLI), [installed](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) and [configured](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html).
+ A Microsoft Azure account with a subscription to use Microsoft Sentinel.
+ Enable and set up Microsoft Sentinel. For instructions, see [Enable Microsoft Sentinel and initial features and content](https://learn.microsoft.com/en-us/azure/sentinel/enable-sentinel-features-content) in the Microsoft Sentinel documentation.
+ Meet the prerequisites for setting up the Microsoft Sentinel S3 connector.

**Limitations**
+ This solution forwards the security logs from an Amazon S3 bucket in the logging account to Microsoft Sentinel. Instructions for how to send the logs to Amazon S3 are not explicitly provided.
+ This pattern provides instructions for deployment in an AWS Control Tower landing zone. However, use of AWS Control Tower is not required.
+ This solution is compatible with an environment where the Amazon S3 logging bucket is restricted with [service control policies (SCPs)](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html), such as [Disallow Changes to Bucket Policy for AWS Control Tower Created Amazon S3 Buckets in Log Archive](https://docs.aws.amazon.com/controltower/latest/controlreference/mandatory-controls.html#disallow-policy-changes-s3-buckets-created).
+ This pattern provides instructions for forwarding CloudTrail logs, but you can adapt this solution to send other logs that Microsoft Sentinel supports, such as logs from CloudWatch Logs, Amazon VPC Flow Logs, and GuardDuty.
+ The instructions use the AWS CLI to deploy the CloudFormation template, but you could also use the AWS Management Console. For instructions, see [Using the AWS CloudFormation console](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-using-console.html). If you use the console to deploy the stack, deploy the stack in the same AWS Region as the logging bucket.
+ This solution deploys an Amazon Simple Queue Service (Amazon SQS) queue to deliver Amazon S3 notifications. The queue contains messages with the paths of objects uploaded in the Amazon S3 bucket, not actual data. The queue uses SSE-SQS encryption to help protect the content of the messages. If you want to encrypt the SQS queue with SSE-KMS, you can use a customer managed KMS key. For more information, see [Encryption at rest in Amazon SQS](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-server-side-encryption.html).

## Architecture
<a name="ingest-analyze-aws-security-logs-sentinel-architecture"></a>

This section provides a high-level overview of the architecture that the sample code establishes. The following diagram shows the resources deployed in the logging account in order to ingest logs from an existing Amazon S3 bucket into Microsoft Sentinel.

![\[Microsoft Sentinel using an Amazon SQS queue to ingest logs from an S3 bucket\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/e8438b44-6bce-4863-8657-1d0a843ffb6f/images/38108d9d-88ad-4306-8ad2-01b66a6bf00f.png)


The architecture diagram shows the following resource interactions:

1. In the logging account, Microsoft Sentinel assumes an IAM role through OpenID Connect (OIDC) to access logs in a specific Amazon S3 bucket and Amazon SQS queue.

1. Amazon Simple Notification Service (Amazon SNS) and Amazon S3 use AWS KMS for encryption.

1. Amazon S3 sends notification messages to the Amazon SQS queue whenever it receives new logs.

1. Microsoft Sentinel checks Amazon SQS for new messages. The Amazon SQS queue uses SSE-SQS encryption. The message retention period is set to 14 days.

1. Microsoft Sentinel pulls messages from the Amazon SQS queue. The messages contain the path of the uploaded Amazon S3 objects. Microsoft Sentinel ingests those objects from the Amazon S3 bucket into the Microsoft Azure account.

1. A CloudWatch alarm monitors the Amazon SQS queue. If messages are not received and deleted from the Amazon SQS queue within 5 minutes, then it initiates an Amazon SNS notification that sends an email.
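The alarm in step 6 could be modeled in CloudFormation along these lines. This is a hypothetical fragment rather than the resource definition from this pattern's repository; the metric choice (`ApproximateAgeOfOldestMessage`), the 300-second threshold, and the `IngestionQueue` and `AlertTopic` logical names are assumptions.

```
IngestionAlarm:
  Type: AWS::CloudWatch::Alarm
  Properties:
    AlarmDescription: Messages are not being consumed from the ingestion queue.
    Namespace: AWS/SQS
    MetricName: ApproximateAgeOfOldestMessage
    Dimensions:
    - Name: QueueName
      Value: !GetAtt IngestionQueue.QueueName
    Statistic: Maximum
    Period: 60
    EvaluationPeriods: 5
    Threshold: 300
    ComparisonOperator: GreaterThanThreshold
    AlarmActions:
    - !Ref AlertTopic
```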

AWS Control Tower helps you set up the foundational organizational unit (OU) structure and centralizes CloudTrail logs in the logging account. It also implements mandatory SCPs to protect the logging bucket.

The target architecture is shown in an AWS Control Tower landing zone, but AWS Control Tower is not strictly required. In this diagram, the resources in the management account reflect an AWS Control Tower deployment and a CloudTrail trail that logs events for the entire organization.

This pattern focuses on the deployment of resources in the logging account. If the logs stored in Amazon S3 in your AWS Control Tower landing zone are encrypted with a customer managed KMS key, then you must update the key policy to allow Microsoft Sentinel to decrypt the logs. In an AWS Control Tower landing zone, you manage the key policy from the management account, which is where the key was created.

## Tools
<a name="ingest-analyze-aws-security-logs-sentinel-tools"></a>

**AWS services**
+ [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) helps you set up AWS resources, provision them quickly and consistently, and manage them throughout their lifecycle across AWS accounts and Regions.
+ [Amazon CloudWatch](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/WhatIsCloudWatch.html) helps you monitor the metrics of your AWS resources and the applications you run on AWS in real time. 
+ [AWS Control Tower](https://docs.aws.amazon.com/controltower/latest/userguide/what-is-control-tower.html) helps you set up and govern an AWS multi-account environment, following best practices.
+ [AWS Key Management Service (AWS KMS)](https://docs.aws.amazon.com/kms/latest/developerguide/overview.html) helps you create and control cryptographic keys to help protect your data.
+ [AWS Organizations](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_introduction.html) is an account management service that helps you consolidate multiple AWS accounts into an organization that you create and centrally manage.
+ [Amazon Simple Queue Service (Amazon SQS)](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/welcome.html) provides a secure, durable, and available hosted queue that helps you integrate and decouple distributed software systems and components.
+ [Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) is a cloud-based object storage service that helps you store, protect, and retrieve any amount of data.

**Other tools**
+ [Microsoft Sentinel](https://learn.microsoft.com/en-us/azure/sentinel/overview) is a cloud-native SIEM system that provides security orchestration, automation, and response (SOAR).

**Code repository**

The code for this pattern is available in the GitHub [Ingest and analyze AWS security logs in Microsoft Sentinel](https://github.com/aws-samples/ingest-and-analyze-aws-security-logs-in-microsoft-sentinel) repository.

## Best practices
<a name="ingest-analyze-aws-security-logs-sentinel-best-practices"></a>
+ Follow the [principle of least privilege](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#grant-least-privilege) (IAM documentation).
+ Follow the [Best practices for AWS Control Tower administrators](https://docs.aws.amazon.com/controltower/latest/userguide/best-practices.html) (AWS Control Tower documentation).
+ Follow the [AWS CloudFormation best practices](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/best-practices.html) (CloudFormation documentation).
+ Use code analysis tools, such as [cfn_nag](https://github.com/stelligent/cfn_nag), to scan the generated CloudFormation templates. The cfn_nag tool identifies potential security issues in CloudFormation templates by searching for patterns.

## Epics
<a name="ingest-analyze-aws-security-logs-sentinel-epics"></a>

### Connect Microsoft Sentinel to Amazon S3
<a name="connect-microsoft-sentinel-to-s3"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Prepare the Microsoft Sentinel S3 connector. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/ingest-analyze-aws-security-logs-sentinel.html) | DevOps engineer, General AWS | 

### Deploy the CloudFormation stack
<a name="deploy-the-cfnshort-stack"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clone the repository. | In a bash shell, enter the following command to clone the [Ingest and analyze AWS Security Logs in Microsoft Sentinel](https://github.com/aws-samples/ingest-and-analyze-aws-security-logs-in-microsoft-sentinel) repository: `git clone https://github.com/aws-samples/ingest-and-analyze-aws-security-logs-in-microsoft-sentinel.git` | DevOps engineer, General AWS | 
| Assume the IAM role in the logging account. | In the logging account, assume the IAM role that has permissions to deploy the CloudFormation stack. For more information about assuming an IAM role in the AWS CLI, see [Use an IAM role in the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-role.html). | DevOps engineer, General AWS | 
| Deploy the stack. | To deploy the CloudFormation stack, enter the following command, where:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/ingest-analyze-aws-security-logs-sentinel.html)<pre>aws cloudformation deploy --stack-name cloudtrail-sentinel-integration \<br />    --no-fail-on-empty-changeset \<br />    --template-file template.yml \<br />    --capabilities CAPABILITY_IAM CAPABILITY_NAMED_IAM CAPABILITY_AUTO_EXPAND \<br />    --parameter-overrides \<br />    ControlTowerS3BucketName="<Bucket name>" \<br />    AzureWorkspaceID="<Sentinel external ID>" \<br />    EmailAddress="<Email address>" \<br />    KMSKeyArn="<Customer managed key ARN>" \<br />    Suffix="<Suffix to avoid name conflicts>" \<br />    OIDCProviderArn="<ARN for the OIDC provider>"</pre> | DevOps engineer, General AWS | 
| Copy outputs. | From the output of the CloudFormation stack, copy the values for `SentinelRoleArn` and `SentinelSQS`. You use these values later to complete the configuration in Microsoft Sentinel. | DevOps engineer, General AWS | 
| Modify the key policy. | If you aren't using a customer managed KMS key to encrypt the logs in the Amazon S3 bucket, you can skip this step. If the logs are encrypted with a customer managed KMS key, modify the key policy to grant Microsoft Sentinel permission to decrypt the logs. The following is an example key policy. This example policy allows cross-account access if the KMS key is in another AWS account.<pre>{<br />    "Version": "2012-10-17",<br />    "Id": "key-policy",<br />    "Statement": [<br />        ...<br />        {<br />            "Sid": "Grant access to decrypt",<br />            "Effect": "Allow",<br />            "Principal": {<br />                "AWS": "<SentinelRoleArn>"<br />            },<br />            "Action": "kms:Decrypt",<br />            "Resource": "<KeyArn>"<br />        }<br />    ]<br />}</pre> | DevOps engineer, General AWS | 
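
If you manage the key policy with a script, the change in the last step amounts to appending one statement. The following sketch is not part of this pattern's repository: the helper name and the `GrantAccessToDecrypt` Sid are illustrative assumptions, and in practice you would fetch and write the policy with the AWS CLI (`aws kms get-key-policy` / `put-key-policy`) or the equivalent boto3 calls.

```python
import json

def add_sentinel_decrypt_statement(key_policy: dict, sentinel_role_arn: str, key_arn: str) -> dict:
    """Append a statement granting kms:Decrypt to the Sentinel role.

    Illustrative helper; the Sid "GrantAccessToDecrypt" is an assumption,
    not a name required by Microsoft Sentinel or AWS KMS.
    """
    statement = {
        "Sid": "GrantAccessToDecrypt",
        "Effect": "Allow",
        "Principal": {"AWS": sentinel_role_arn},
        "Action": "kms:Decrypt",
        "Resource": key_arn,
    }
    # Avoid duplicating the statement if the script runs twice.
    if not any(s.get("Sid") == statement["Sid"] for s in key_policy.get("Statement", [])):
        key_policy.setdefault("Statement", []).append(statement)
    return key_policy

policy = {"Version": "2012-10-17", "Id": "key-policy", "Statement": []}
updated = add_sentinel_decrypt_statement(
    policy,
    "arn:aws:iam::111122223333:role/SentinelRole",
    "arn:aws:kms:us-east-1:111122223333:key/example",
)
print(json.dumps(updated, indent=2))
```

The resulting document is what you would pass to `aws kms put-key-policy --policy file://policy.json`.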

### Configure the connector in Microsoft Sentinel
<a name="configure-the-connector-in-microsoft-sentinel"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Complete the configuration in Microsoft Sentinel. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/ingest-analyze-aws-security-logs-sentinel.html) | DevOps engineer | 
| Send Amazon S3 event notifications to Amazon SQS. | Follow the instructions in [Enabling and configuring event notifications using the Amazon S3 console](https://docs.aws.amazon.com/AmazonS3/latest/userguide/enable-event-notifications.html) to configure the Amazon S3 logging bucket to send event notifications to the Amazon SQS queue. If CloudTrail has been configured for the whole organization, logs in this bucket have the prefix `<OrgID>/AWSLogs/<OrgID>/`, where `<OrgID>` is the organization ID. For more information, see [Viewing details about your organization](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_org_details.html). | DevOps engineer, General AWS | 
| Confirm that the logs are ingested. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/ingest-analyze-aws-security-logs-sentinel.html) | DevOps engineer | 
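
The event-notification step can also be done programmatically rather than in the console. The sketch below builds the configuration document that boto3's `put_bucket_notification_configuration` accepts, filtering on the organization-trail prefix described above; the helper name and the `Id` value are illustrative assumptions, not part of this pattern's code.

```python
def build_sqs_notification_config(queue_arn: str, org_id: str) -> dict:
    """Build an S3 NotificationConfiguration that sends s3:ObjectCreated:*
    events under the organization-trail prefix to an SQS queue.

    The structure matches the NotificationConfiguration argument of
    put_bucket_notification_configuration; the "Id" is an illustrative
    assumption.
    """
    return {
        "QueueConfigurations": [
            {
                "Id": "sentinel-ingestion",
                "QueueArn": queue_arn,
                "Events": ["s3:ObjectCreated:*"],
                "Filter": {
                    "Key": {
                        "FilterRules": [
                            # Match only objects under the org trail prefix.
                            {"Name": "prefix", "Value": f"{org_id}/AWSLogs/{org_id}/"}
                        ]
                    }
                },
            }
        ]
    }

config = build_sqs_notification_config(
    "arn:aws:sqs:us-east-1:111122223333:sentinel-queue", "o-exampleorgid"
)
```

You would then pass `config` to `s3_client.put_bucket_notification_configuration(Bucket="<logging bucket>", NotificationConfiguration=config)` in the logging account.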

### Validate the solution
<a name="validate-the-solution"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Compare CloudWatch and Sentinel logs. | In the default configuration of AWS Control Tower, CloudTrail logs are sent to Amazon CloudWatch and stored in the AWS Control Tower management account. For more information, see [Logging and monitoring in AWS Control Tower](https://docs.aws.amazon.com/controltower/latest/userguide/logging-and-monitoring.html). Use the following steps to confirm that the logs are automatically ingested into Microsoft Sentinel:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/ingest-analyze-aws-security-logs-sentinel.html) | DevOps engineer, General AWS | 

## Related resources
<a name="ingest-analyze-aws-security-logs-sentinel-resources"></a>

**AWS documentation and resources**
+ [AWS CLI Command Reference](https://docs.aws.amazon.com/cli/latest/) (AWS CLI documentation)
+ [Optionally configure AWS KMS keys](https://docs.aws.amazon.com/controltower/latest/userguide/configure-kms-keys.html) (AWS Control Tower documentation)
+ [Encryption at rest in Amazon SQS](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-server-side-encryption.html) (Amazon SQS documentation)
+ [How do I keep mailing list recipients from unsubscribing everyone on the list from my Amazon SNS topic emails?](https://repost.aws/knowledge-center/prevent-unsubscribe-all-sns-topic) (AWS Knowledge Center)

**Microsoft documentation**
+ [Connect Microsoft Sentinel to Amazon Web Services to ingest AWS service log data](https://learn.microsoft.com/en-us/azure/sentinel/connect-aws?tabs=s3)
+ [Kusto Query Language in Microsoft Sentinel](https://learn.microsoft.com/en-us/azure/sentinel/kusto-overview)

# Manage AWS Organizations policies as code by using AWS CodePipeline and Amazon Bedrock
<a name="manage-organizations-policies-as-code"></a>

*Andre Cavalcante and Mariana Pessoa de Queiroz, Amazon Web Services*

## Summary
<a name="manage-organizations-policies-as-code-summary"></a>

You can use *authorization policies* in AWS Organizations to centrally configure and manage access for principals and resources in your member accounts. [Service control policies (SCPs)](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html) define the maximum available permissions for the AWS Identity and Access Management (IAM) roles and users in your organization. [Resource control policies (RCPs)](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_rcps.html) define the maximum available permissions for the resources in your organization.

This pattern helps you to manage SCPs and RCPs as infrastructure as code (IaC) that you deploy through a continuous integration and continuous deployment (CI/CD) pipeline. By using AWS CloudFormation or HashiCorp Terraform to manage these policies, you can reduce the burden associated with building and maintaining multiple authorization policies.

This pattern includes the following features:
+ You create, delete, and update the authorization policies by using *manifest files* (`scp-management.json` and `rcp-management.json`).
+ You work with guardrails instead of policies. You define your guardrails and their targets in the manifest files.
+ The pipeline, which uses AWS CodeBuild and AWS CodePipeline, merges and optimizes the guardrails in the manifest files. For each statement in the manifest file, the pipeline combines the guardrails into a single SCP or RCP and then applies it to the defined targets.
+ AWS Organizations applies the policies to your targets. A *target* can be an AWS account, an organizational unit (OU), an environment (which is a group of accounts or OUs that you define in the `environments.json` file), or a group of accounts that share an [AWS tag](https://docs.aws.amazon.com/whitepapers/latest/tagging-best-practices/what-are-tags.html).
+ Amazon Bedrock reads the pipeline logs and summarizes all policy changes.
+ The pipeline requires a manual approval. The approver can review the executive summary that Amazon Bedrock prepared, which helps them understand the changes.

## Prerequisites and limitations
<a name="manage-organizations-policies-as-code-prereqs"></a>

**Prerequisites**
+ Multiple AWS accounts that are managed as an organization in AWS Organizations. For more information, see [Creating an organization](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_org_create.html).
+ The SCP and RCP features are enabled in AWS Organizations. For more information, see [Enabling a policy type](https://docs.aws.amazon.com/organizations/latest/userguide/enable-policy-type.html).
+ Terraform version 1.9.8 or later is [installed](https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli).
+ If you are not deploying this solution through a Terraform pipeline, then the Terraform state file must be [stored](https://developer.hashicorp.com/terraform/language/backend/s3) in an Amazon Simple Storage Service (Amazon S3) bucket in the AWS account where you are deploying the policy management pipeline.
+ Python version 3.13.3 or later is [installed](https://www.python.org/downloads/).

**Limitations**
+ You cannot use this pattern to manage SCPs or RCPs that were created outside of this CI/CD pipeline. However, you can recreate existing policies through the pipeline. For more information, see *Migrating existing policies to the pipeline* in the [Additional information](#manage-organizations-policies-as-code-additional) section of this pattern.
+ The number of accounts, OUs, and policies in each account are subject to the [quotas and service limits](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_reference_limits.html) for AWS Organizations.
+ This pattern cannot be used to configure [management policies](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_management_policies.html) in AWS Organizations, such as backup policies, tag policies, chat applications policies, or declarative policies.

## Architecture
<a name="manage-organizations-policies-as-code-architecture"></a>

The following diagram shows the workflow of the policy management pipeline and its associated resources.

![\[Releasing SCPs and RCPs through a policy management pipeline.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/372a1ace-5b2e-4f93-9f88-b5b0519ded48/images/a2cceb99-2b93-48e0-b072-bc61a572201f.png)


The diagram shows the following workflow:

1. A user commits changes to the `scp-management.json` or `rcp-management.json` manifest files in the `main` branch of the remote repository.

1. The change to the `main` branch initiates the pipeline in AWS CodePipeline.

1. CodePipeline starts the `Validate-Plan` CodeBuild project. This project uses a Python script in the remote repository to validate policies and the policy manifest files. This CodeBuild project does the following:

   1. Checks that the SCP and RCP manifest files contain unique statement IDs (`Sid`).

   1. Uses the `scp-policy-processor/main.py` and `rcp-policy-processor/main.py` Python scripts to concatenate guardrails in the guardrails folder into a single RCP or SCP policy. It combines guardrails that have the same `Resource`, `Action`, and `Condition`.

   1. Uses AWS Identity and Access Management Access Analyzer to validate the final, optimized policy. If there are any findings, the pipeline stops.

   1. Creates `scps.json` and `rcps.json` files, which Terraform uses to create resources.

   1. Runs the `terraform plan` command, which creates a Terraform execution plan.

1. (Optional) The `Validate-Plan` CodeBuild project uses the `bedrock-prompt/prompt.py` script to send a prompt to Amazon Bedrock. You define the prompt in the `bedrock-prompt/prompt.txt` file. Amazon Bedrock uses Anthropic Claude 3.5 Sonnet to generate a summary of the proposed changes by analyzing the Terraform and Python logs.

1. CodePipeline uses an Amazon Simple Notification Service (Amazon SNS) topic to notify approvers that changes must be reviewed. If Amazon Bedrock generated a change summary, the notification includes this summary.

1. A policy approver approves the action in CodePipeline. If Amazon Bedrock generated a change summary, the approver can review the summary in CodePipeline prior to approving.

1. CodePipeline starts the `Apply` CodeBuild project. This project uses Terraform to apply the RCP and SCP changes in AWS Organizations.
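
The validation logic in step 3 can be pictured with a simplified sketch. The real implementation lives in `scp-policy-processor/main.py` and `rcp-policy-processor/main.py`; the functions below are illustrative stand-ins for the Sid-uniqueness check (step 3a) and the guardrail merge (step 3b), and the merge key shown here (same `Effect`, `Resource`, and `Condition`, with `Action` lists unioned) is a simplifying assumption about how the combination works.

```python
import json

def check_unique_sids(manifest: dict) -> None:
    """Raise if two statements in a manifest share a Sid (step 3a)."""
    sids = [s["Sid"] for s in manifest.get("Statement", [])]
    duplicates = {s for s in sids if sids.count(s) > 1}
    if duplicates:
        raise ValueError(f"Duplicate Sids: {sorted(duplicates)}")

def merge_guardrails(guardrails: list[dict]) -> list[dict]:
    """Combine guardrail statements that share Effect, Resource, and
    Condition by unioning their Action lists (step 3b, simplified).

    Assumes each statement's Action is a list of strings.
    """
    merged: dict[str, dict] = {}
    for stmt in guardrails:
        # Serialize the grouping fields into a stable dictionary key.
        key = json.dumps(
            [stmt.get("Effect"), stmt.get("Resource"), stmt.get("Condition")],
            sort_keys=True,
        )
        if key in merged:
            actions = set(merged[key]["Action"]) | set(stmt["Action"])
            merged[key]["Action"] = sorted(actions)
        else:
            merged[key] = {**stmt, "Action": sorted(set(stmt["Action"]))}
    return list(merged.values())
```

Fewer, larger statements matter here because SCP and RCP documents are subject to AWS Organizations size limits, so merging guardrails keeps the final policy compact.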

The IaC template associated with this architecture also deploys the following resources that support the policy management pipeline:
+ An Amazon S3 bucket for storing the CodePipeline artifacts and scripts, such as `scp-policy-processor/main.py` and `bedrock-prompt/prompt.py`
+ An AWS Key Management Service (AWS KMS) key that encrypts the resources created by this solution

## Tools
<a name="manage-organizations-policies-as-code-tools"></a>

**AWS services**
+ [Amazon Bedrock](https://docs.aws.amazon.com/bedrock/latest/userguide/what-is-bedrock.html) is a fully managed AI service that makes many high-performing foundation models available for use through a unified API.
+ [AWS CodeBuild](https://docs.aws.amazon.com/codebuild/latest/userguide/welcome.html) is a fully managed build service that helps you compile source code, run unit tests, and produce artifacts that are ready to deploy. 
+ [AWS CodePipeline](https://docs.aws.amazon.com/codepipeline/latest/userguide/welcome.html) helps you quickly model and configure the different stages of a software release and automate the steps required to release software changes continuously.
+ [AWS Organizations](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_introduction.html) is an account management service that helps you consolidate multiple AWS accounts into an organization that you create and centrally manage.
+ [AWS SDK for Python (Boto3)](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/quickstart.html) is a software development kit that helps you integrate your Python application, library, or script with AWS services.
+ [Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) is a cloud-based object storage service that helps you store, protect, and retrieve any amount of data.

**Other tools**
+ [HashiCorp Terraform](https://www.terraform.io/docs) is an IaC tool that helps you use code to provision and manage cloud infrastructure and resources.

**Code repository**

The code for this pattern is available in the [organizations-policy-pipeline](https://github.com/aws-samples/organizations-policy-pipeline) GitHub repository. The following are the key files that are contained in the `sample-repository` folder:
+ In the `environments` folder, `environments.json` contains a list of environments. An *environment* is a group of targets, and it can contain AWS account IDs or organizational units (OUs).
+ In the `rcp-management` folder:
  + The `guardrails` folder contains the individual guardrails for your RCPs.
  + The `policies` folder contains the individual RCPs.
  + The `rcp-management.json` manifest file helps you manage RCP guardrails, full RCPs, and their associated targets.
+ In the `scp-management` folder:
  + The `guardrails` folder contains the individual guardrails for your SCPs.
  + The `policies` folder contains the individual SCPs.
  + The `scp-management.json` manifest file helps you manage SCP guardrails, full SCPs, and their associated targets.
+ The `utils` folder contains scripts that can help you migrate your current SCPs and RCPs so that you can manage them through the pipeline. For more information, see the [Additional information](#manage-organizations-policies-as-code-additional) section of this pattern.

## Best practices
<a name="manage-organizations-policies-as-code-best-practices"></a>
+ Before you set up the pipeline, we recommend that you verify that you have not reached the limits of your AWS Organizations [quotas](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_reference_limits.html).
+ We recommend that you use the AWS Organizations management account only for tasks that must be performed in that account. For more information, see [Best practices for the management account](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_best-practices_mgmt-acct.html#bp_mgmt-acct_use-mgmt).

## Epics
<a name="manage-organizations-policies-as-code-epics"></a>

### Set up the target account
<a name="set-up-the-target-account"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a repository. | Create a repository from which your security operations team will manage the policies. Use one of the third-party repository providers that AWS CodeConnections [supports](https://docs.aws.amazon.com/dtconsole/latest/userguide/supported-versions-connections.html). | DevOps engineer | 
| Delegate policy administration. | Delegate administration of AWS Organizations policies to the member account where you are deploying the pipeline. For instructions, see [Create a resource-based delegation policy with AWS Organizations](https://docs.aws.amazon.com/organizations/latest/userguide/orgs-policy-delegate.html). For a sample policy, see *Sample resource-based delegation policy* in the [Additional information](#manage-organizations-policies-as-code-additional) section of this pattern. | AWS administrator | 
| (Optional) Enable the foundation model. | If you want to generate summaries of the policy changes, enable access to the Anthropic Claude 3.5 Sonnet foundation model in Amazon Bedrock in the AWS account where you are deploying the pipeline. For instructions, see [Add or remove access to Amazon Bedrock foundation models](https://docs.aws.amazon.com/bedrock/latest/userguide/model-access-modify.html). | General AWS | 

### Deploy the resources for the pipeline
<a name="deploy-the-resources-for-the-pipeline"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clone the repository. | Enter the following command to clone the [organizations-policy-pipeline](https://github.com/aws-samples/organizations-policy-pipeline) repository from GitHub: `git clone https://github.com/aws-samples/organizations-policy-pipeline.git` | DevOps engineer | 
| Define your deployment method. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/manage-organizations-policies-as-code.html) | DevOps engineer | 
| Deploy the pipeline. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/manage-organizations-policies-as-code.html) | DevOps engineer, Terraform | 
| Connect the remote repository. | In the previous step, Terraform created a CodeConnections connection to the third-party repository. In the [AWS Developer Tools console](https://console.aws.amazon.com/codesuite/settings/connections), change the status of the connection from `PENDING` to `AVAILABLE`. For instructions, see [Update a pending connection](https://docs.aws.amazon.com/dtconsole/latest/userguide/connections-update.html). | AWS DevOps | 
| Subscribe to the Amazon SNS topic. | Terraform created an Amazon SNS topic. Subscribe an endpoint to the topic and confirm the subscription so that the approvers receive notifications about pending approval actions in the pipeline. For instructions, see [Creating a subscription to an Amazon SNS topic](https://docs.aws.amazon.com/sns/latest/dg/sns-create-subscribe-endpoint-to-topic.html). | General AWS | 

### Define your guardrails and policies
<a name="define-your-guardrails-and-policies"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Populate the remote repository. | From the cloned repository, copy the contents of the `sample-repository` folder to your remote repository. This includes the `environments`, `rcp-management`, `scp-management`, and `utils` folders. | DevOps engineer | 
| Define your environments. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/manage-organizations-policies-as-code.html) | DevOps engineer | 
| Define your guardrails. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/manage-organizations-policies-as-code.html) | DevOps engineer | 
| Define your policies. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/manage-organizations-policies-as-code.html) | DevOps engineer | 

### Use the manifest file to manage the policies
<a name="use-the-manifest-file-to-manage-the-policies"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Configure the manifest files. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/manage-organizations-policies-as-code.html) | DevOps engineer | 
| Start the pipeline. | Commit and push the changes to the branch of the remote repository that you defined in the `variables.tf` file. Typically, this is the `main` branch. The CI/CD pipeline automatically starts. If there are any pipeline errors, see the [Troubleshooting](#manage-organizations-policies-as-code-troubleshooting) section of this pattern. | DevOps engineer | 
| Approve the changes. | When the `Validate-Plan` CodeBuild project is complete, the policy approvers receive a notification through the Amazon SNS topic that you previously configured. Do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/manage-organizations-policies-as-code.html) | General AWS, Policy approver | 
| Validate the deployment. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/manage-organizations-policies-as-code.html) | General AWS | 

## Troubleshooting
<a name="manage-organizations-policies-as-code-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| Manifest file errors in the `Validate-Plan` phase of the pipeline | A "Pipeline errors in the Validation & Plan phase for manifest files" message appears in the pipeline output if there are any errors in the `scp-management.json` or `rcp-management.json` files. Possible errors include an incorrect environment name, duplicate `Sid` values, or invalid fields or values. Do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/manage-organizations-policies-as-code.html) | 
| IAM Access Analyzer findings in the `Validate-Plan` phase of the pipeline | A "Findings in IAM Access Analyzer during Validation & Plan phase" message appears in the pipeline output if there are any errors in the guardrail or policy definitions. This pattern uses IAM Access Analyzer to validate the final policy. Do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/manage-organizations-policies-as-code.html) | 

## Related resources
<a name="manage-organizations-policies-as-code-resources"></a>
+ [JSON policy element reference](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements.html) (IAM documentation)
+ [Resource control policies](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_rcps.html) (AWS Organizations documentation)
+ [Service control policies](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html) (AWS Organizations documentation)
+ [Add or remove access to Amazon Bedrock foundation models](https://docs.aws.amazon.com/bedrock/latest/userguide/model-access-modify.html) (Amazon Bedrock documentation)
+ [Approve or reject an approval action in CodePipeline](https://docs.aws.amazon.com/codepipeline/latest/userguide/approvals-approve-or-reject.html) (CodePipeline documentation)

## Additional information
<a name="manage-organizations-policies-as-code-additional"></a>

**Sample resource-based delegation policy**

The following is a sample resource-based delegation policy for AWS Organizations. It allows the delegated administrator account to manage SCPs and RCPs for the organization. In the following sample policy, replace `<MEMBER_ACCOUNT_ID>` with the ID of the account where you are deploying the policy management pipeline.

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DelegationToAudit",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<MEMBER_ACCOUNT_ID>:root"
      },
      "Action": [
        "organizations:ListTargetsForPolicy",
        "organizations:CreatePolicy",
        "organizations:DeletePolicy",
        "organizations:AttachPolicy",
        "organizations:DetachPolicy",
        "organizations:DisablePolicyType",
        "organizations:EnablePolicyType",
        "organizations:UpdatePolicy",
        "organizations:DescribeEffectivePolicy",
        "organizations:DescribePolicy",
        "organizations:DescribeResourcePolicy"
      ],
      "Resource": "*"
    }
  ]
}
```
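
If you apply this delegation policy programmatically, the call is a single `PutResourcePolicy` operation from the management account. The sketch below is illustrative, not part of this pattern: the helper name is an assumption, and the client is injected so the function can be exercised with a stub; in practice you would pass `boto3.client("organizations")`.

```python
import json

def attach_delegation_policy(policy: dict, organizations_client) -> dict:
    """Create or update the organization's resource-based delegation policy.

    Illustrative helper. organizations_client is injected for testability;
    in a real run it would be boto3.client("organizations") authenticated
    in the AWS Organizations management account.
    """
    # PutResourcePolicy takes the policy document as a JSON string.
    return organizations_client.put_resource_policy(Content=json.dumps(policy))
```

The same result can be achieved with the AWS CLI: `aws organizations put-resource-policy --content file://delegation-policy.json`.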

**Migrating existing policies to the pipeline**

If you have existing SCPs or RCPs that you want to migrate and manage through this pipeline, you can use the Python scripts in the `sample-repository/utils` folder of the code repository. These scripts include:
+ `check-if-scp-exists-in-env.py` – This script checks whether a specified policy applies to any targets in a specific environment, which you define in the `environments.json` file. Enter the following command to run this script:

  ```
  python3 check-if-scp-exists-in-env.py \
     --policy-type <POLICY_TYPE> \
     --policy-name <POLICY_NAME> \
     --env-id <ENV_ID>
  ```

  Replace the following in this command:
  + `<POLICY_TYPE>` is `scp` or `rcp`
  + `<POLICY_NAME>` is the name of the SCP or RCP
  + `<ENV_ID>` is the ID of the environment that you defined in the `environments.json` file
+ `create-environments.py` – This script creates an `environments.json` file based on the current SCPs and RCPs in your environment. It excludes policies deployed through AWS Control Tower. Enter the following command to run this script, where `<POLICY_TYPE>` is `scp` or `rcp`:

  ```
  python create-environments.py --policy-type <POLICY_TYPE>
  ```
+ `verify-policies-capacity.py` – This script checks each environment that you define to determine how much capacity remains for each AWS Organizations policy-related quota. You define the environments to check in the `environments.json` file. Enter the following command to run this script, where `<POLICY_TYPE>` is `scp` or `rcp`:

  ```
  python verify-policies-capacity.py --policy-type <POLICY_TYPE>
  ```
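
As a rough illustration of the kind of check `verify-policies-capacity.py` performs, the sketch below computes remaining attachment slots per target. It is not the script's real implementation; the default of five attached policies per root, OU, or account reflects the documented AWS Organizations limit at the time of writing, so confirm it against the current quotas page before relying on it.

```python
def remaining_policy_slots(attached_counts: dict[str, int], max_per_target: int = 5) -> dict[str, int]:
    """Return how many more SCPs or RCPs can be attached to each target.

    attached_counts maps a target ID (account ID, OU ID, or root ID) to
    the number of policies currently attached. max_per_target defaults to
    the documented AWS Organizations limit of five policies per target;
    this value is an assumption to verify against the current quotas.
    """
    return {target: max(0, max_per_target - n) for target, n in attached_counts.items()}
```

For example, `remaining_policy_slots({"ou-ab12-example": 3, "111122223333": 5})` reports two free slots for the OU and none for the account, which would block attaching another pipeline-managed policy there.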

# Manage AWS IAM Identity Center permission sets as code by using AWS CodePipeline
<a name="manage-aws-iam-identity-center-permission-sets-as-code-by-using-aws-codepipeline"></a>

*Andre Cavalcante and Claison Amorim, Amazon Web Services*

## Summary
<a name="manage-aws-iam-identity-center-permission-sets-as-code-by-using-aws-codepipeline-summary"></a>

AWS IAM Identity Center helps you centrally manage single sign-on (SSO) access to all of your AWS accounts and applications. You can create and manage user identities in IAM Identity Center, or you can connect an existing identity source, such as a Microsoft Active Directory domain or an external identity provider (IdP). IAM Identity Center provides a unified administration experience to define, customize, and assign fine-grained access to your AWS environment by using [permission sets](https://docs.aws.amazon.com/singlesignon/latest/userguide/permissionsetsconcept.html). Permission sets apply to the federated users and groups from your IAM Identity Center identity store or your external IdP.

This pattern helps you to manage IAM Identity Center permission sets as code in your multi-account environment that is managed as an organization in AWS Organizations. With this pattern, you can achieve the following:
+ Create, delete, and update permission sets.
+ Create, update, or delete permission set assignments to target AWS accounts, organizational units (OUs), or your organization root.

To manage IAM Identity Center permissions and assignments as code, this solution deploys a continuous integration and continuous delivery (CI/CD) pipeline that uses AWS CodeBuild and AWS CodePipeline. You manage the permission sets and assignments in JSON templates that you store in a remote repository. When Amazon EventBridge rules detect a change to the repository or modifications to the accounts in the target OU, they start an AWS Lambda function. The Lambda function initiates the CI/CD pipeline that updates the permission sets and assignments in IAM Identity Center.
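
The EventBridge-to-Lambda hand-off described above can be pictured with a small sketch. The handler below is illustrative, not the pattern's actual function: the pipeline name and environment variable are assumptions, and the CodePipeline client is injected so the code can be exercised with a stub; in a real Lambda you would create it with `boto3.client("codepipeline")`.

```python
import os

def lambda_handler(event, context, codepipeline_client=None):
    """Start the permission-set pipeline when EventBridge delivers a
    repository-change or account-move event.

    PIPELINE_NAME and its default are illustrative assumptions;
    codepipeline_client is injected for testability.
    """
    if codepipeline_client is None:  # Real Lambda path.
        import boto3
        codepipeline_client = boto3.client("codepipeline")

    pipeline_name = os.environ.get("PIPELINE_NAME", "identity-center-permission-sets")
    # start_pipeline_execution kicks off a new run of the CI/CD pipeline.
    response = codepipeline_client.start_pipeline_execution(name=pipeline_name)
    return {"pipelineExecutionId": response["pipelineExecutionId"]}
```

Injecting the client keeps the handler unit-testable without AWS credentials, which is useful when the pipeline itself is the thing under change control.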

## Prerequisites and limitations
<a name="manage-aws-iam-identity-center-permission-sets-as-code-by-using-aws-codepipeline-prereqs"></a>

**Prerequisites**
+ A multi-account environment managed as an organization in AWS Organizations. For more information, see [Creating an organization](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_org_create.html).
+ IAM Identity Center, enabled and configured with an identity source. For more information, see [Getting Started](https://docs.aws.amazon.com/singlesignon/latest/userguide/getting-started.html) in the IAM Identity Center documentation.
+ A member account that is registered as the delegated administrator for the following AWS services:
  + IAM Identity Center – For instructions, see [Register a member account](https://docs.aws.amazon.com/singlesignon/latest/userguide/delegated-admin.html#delegated-admin-how-to-register) in the IAM Identity Center documentation.
  + AWS Organizations – For instructions, see [Delegated administrator for AWS Organizations](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_delegate_policies.html). This account must have permissions to list and describe accounts and OUs.
**Note**  
You must use the same account as the delegated administrator for both services.
+ Permissions to deploy AWS CloudFormation stacks in the IAM Identity Center delegated administrator account and in the organization’s management account. For more information, see [Controlling access](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-iam-template.html) in the CloudFormation documentation.
+ An Amazon Simple Storage Service (Amazon S3) bucket in the IAM Identity Center delegated administrator account. You upload the artifact code into this bucket. For instructions, see [Creating a bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/create-bucket-overview.html) in the Amazon S3 documentation.
+ The account ID of the organization’s management account. For instructions, see [Finding your AWS account ID](https://docs.aws.amazon.com/accounts/latest/reference/manage-acct-identifiers.html#FindAccountId).
+ A repository in your source code host, such as GitHub.

**Limitations**
+ This pattern cannot be used to manage or assign permission sets for single-account environments or for accounts that are not managed as an organization in AWS Organizations.
+ Permission set names, assignment IDs, and IAM Identity Center principal types and IDs cannot be modified after deployment.
+ This pattern helps you create and manage [custom permissions](https://docs.aws.amazon.com/singlesignon/latest/userguide/permissionsetcustom.html). You cannot use this pattern to manage or assign [predefined permissions](https://docs.aws.amazon.com/singlesignon/latest/userguide/permissionsetpredefined.html).
+ This pattern cannot be used to manage a permission set for the organization’s management account.

## Architecture
<a name="manage-aws-iam-identity-center-permission-sets-as-code-by-using-aws-codepipeline-architecture"></a>

**Target architecture**

![\[Using a CI/CD pipeline to manage permission sets in IAM Identity Center.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/419aaa80-1b97-402d-9c74-c1b8c1ddd1cb/images/1f143bc4-c2c6-4ab6-8615-742fec617f18.png)


The diagram shows the following workflow:

1. A user makes one of the following changes:
   + Commits one or more changes to the remote repository, such as GitHub
   + Modifies the accounts in the OU in AWS Organizations

1. If the user committed a change to the remote repository to the main branch, the pipeline starts. 

   If the user modified the accounts in the OU, then the `MoveAccount` EventBridge rule detects the change and starts a Lambda function in the organization’s management account.

1. The initiated Lambda function starts the CI/CD pipeline in CodePipeline.

1. CodePipeline starts the `TemplateValidation` CodeBuild project. The `TemplateValidation` CodeBuild project uses a Python script in the remote repository to validate the permission set templates. CodeBuild validates the following:
   + The permission set names are unique.
   + The assignment statement IDs (`Sid`) are unique.
   + The policy definitions in the `CustomPolicy` parameter are valid. (This validation uses AWS Identity and Access Management Access Analyzer.)
   + The Amazon Resource Names (ARNs) of the managed policies are valid.

1. The `PermissionSet` action group in the `Deploy` CodeBuild project uses AWS SDK for Python (Boto3) to delete, create, or update the permission sets in IAM Identity Center. Only permission sets with the `SSOPipeline:true` tag are affected. All permission sets that are managed through this pipeline have this tag.

1. The `Assignments` action group in the `Deploy` CodeBuild project uses Terraform to delete, create, or update the assignments in IAM Identity Center. The Terraform backend state files are stored in an Amazon S3 bucket in the same account.

1. CodeBuild updates the permission sets and assignments in IAM Identity Center.
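The uniqueness checks in step 4 can be sketched in a few lines of Python. This is an illustrative reimplementation, not the repository's actual validation script, and the template fields (`Name`, `Sid`) are assumptions:

```python
from collections import Counter

def find_duplicates(values):
    """Return the values that appear more than once, sorted for stable output."""
    return sorted(v for v, n in Counter(values).items() if n > 1)

def validate_templates(permission_sets, assignments):
    """Check that permission set names and assignment Sids are unique.

    permission_sets: list of dicts, each with a "Name" key
    assignments: list of dicts, each with a "Sid" key
    Returns a list of human-readable error strings (empty means valid).
    """
    errors = []
    for dup in find_duplicates(ps["Name"] for ps in permission_sets):
        errors.append(f"Duplicate permission set name: {dup}")
    for dup in find_duplicates(a["Sid"] for a in assignments):
        errors.append(f"Duplicate assignment Sid: {dup}")
    return errors

# Example: one duplicated permission set name, unique assignment Sids
errors = validate_templates(
    permission_sets=[{"Name": "Admin"}, {"Name": "Admin"}, {"Name": "ReadOnly"}],
    assignments=[{"Sid": "AssignAdmin"}, {"Sid": "AssignReadOnly"}],
)
print(errors)  # → ['Duplicate permission set name: Admin']
```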

**Automation and scale**

Because new accounts in a multi-account environment are moved into a specific organizational unit in AWS Organizations, this solution runs automatically and grants the required permission sets to all accounts that you specify in the assignment templates. No additional automation or scaling actions are necessary.

In large environments, the number of API requests to IAM Identity Center might cause this solution to run more slowly. Terraform and Boto3 automatically manage throttling to minimize any performance degradation.

## Tools
<a name="manage-aws-iam-identity-center-permission-sets-as-code-by-using-aws-codepipeline-tools"></a>

**AWS services**
+ [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) helps you set up AWS resources, provision them quickly and consistently, and manage them throughout their lifecycle across AWS accounts and AWS Regions.
+ [AWS CodeBuild](https://docs.aws.amazon.com/codebuild/latest/userguide/welcome.html) is a fully managed build service that helps you compile source code, run unit tests, and produce artifacts that are ready to deploy. 
+ [AWS CodePipeline](https://docs.aws.amazon.com/codepipeline/latest/userguide/welcome.html) helps you quickly model and configure the different stages of a software release and automate the steps required to release software changes continuously.
+ [Amazon EventBridge](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-what-is.html) is a serverless event bus service that helps you connect your applications with real-time data from a variety of sources, such as AWS Lambda functions, HTTP invocation endpoints using API destinations, or event buses in other AWS accounts.
+ [AWS IAM Identity Center](https://docs.aws.amazon.com/singlesignon/latest/userguide/what-is.html) helps you centrally manage single sign-on (SSO) access to all of your AWS accounts and cloud applications.
+ [AWS Organizations](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_introduction.html) is an account management service that helps you consolidate multiple AWS accounts into an organization that you create and centrally manage.
+ [AWS SDK for Python (Boto3)](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/quickstart.html) is a software development kit that helps you integrate your Python application, library, or script with AWS services.
+ [Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) is a cloud-based object storage service that helps you store, protect, and retrieve any amount of data.

**Code repository**

The code for this pattern is available in the [aws-iam-identity-center-pipeline](https://github.com/aws-samples/aws-iam-identity-center-pipeline) repository. The templates folder in the repository includes sample templates for both permission sets and assignments. It also includes AWS CloudFormation templates for deploying the CI/CD pipeline and AWS resources in the target accounts.

## Best practices
<a name="manage-aws-iam-identity-center-permission-sets-as-code-by-using-aws-codepipeline-best-practices"></a>
+ Before you start modifying the permission set and assignment templates, we recommend that you plan the permission sets for your organization. Consider what the permissions should be, which accounts or OUs each permission set should apply to, and which IAM Identity Center principals (users or groups) should be affected by the permission set. Permission set names, assignment IDs, and IAM Identity Center principal types and IDs cannot be modified after deployment.
+ Adhere to the principle of least privilege and grant the minimum permissions required to perform a task. For more information, see [Grant least privilege](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html#grant-least-priv) and [Security best practices](https://docs.aws.amazon.com/IAM/latest/UserGuide/IAMBestPracticesAndUseCases.html) in the AWS Identity and Access Management (IAM) documentation.

## Epics
<a name="manage-aws-iam-identity-center-permission-sets-as-code-by-using-aws-codepipeline-epics"></a>

### Plan permission sets and assignments
<a name="plan-permission-sets-and-assignments"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clone the repository. | In a bash shell, enter the following command. This clones the [aws-iam-identity-center-pipeline](https://github.com/aws-samples/aws-iam-identity-center-pipeline) repository from GitHub.<pre>git clone https://github.com/aws-samples/aws-iam-identity-center-pipeline.git</pre> | DevOps engineer | 
| Define the permission sets. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/manage-aws-iam-identity-center-permission-sets-as-code-by-using-aws-codepipeline.html) | DevOps engineer | 
| Define the assignments. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/manage-aws-iam-identity-center-permission-sets-as-code-by-using-aws-codepipeline.html) | DevOps engineer | 

### Deploy the permission sets and assignments
<a name="deploy-the-permission-sets-and-assignments"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Deploy resources in the IAM Identity Center delegated administrator account. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/manage-aws-iam-identity-center-permission-sets-as-code-by-using-aws-codepipeline.html) | DevOps engineer | 
| Deploy resources in the AWS Organizations management account. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/manage-aws-iam-identity-center-permission-sets-as-code-by-using-aws-codepipeline.html) | DevOps engineer | 
| Finish the remote repository setup. | Change the status of the AWS CodeConnections connection from `PENDING` to `AVAILABLE`. This connection was created when you deployed the CloudFormation stack. For instructions, see [Update a pending connection](https://docs.aws.amazon.com/dtconsole/latest/userguide/connections-update.html) in the CodeConnections documentation.  | DevOps engineer | 
| Upload files to the remote repository. | Upload all of the files that you downloaded from the `aws-samples` repository and edited in previous steps to the remote repository. Changes to the `main` branch start the pipeline, which creates or updates the permission sets and assignments. | DevOps engineer | 

### Update the permission sets and assignments
<a name="updating-the-permission-sets-and-assignments"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Update the permission sets and assignments. | When the `MoveAccount` Amazon EventBridge rule detects modifications to the accounts in the organization, the CI/CD pipeline automatically starts and updates the permission sets. For example, if you add an account to an OU specified in the assignments JSON file, the CI/CD pipeline applies the permission set to the new account. If you want to modify the deployed permission sets and assignments, update the JSON files and then commit them to the remote repository. Note the following when using the CI/CD pipeline to manage previously deployed permission sets and assignments:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/manage-aws-iam-identity-center-permission-sets-as-code-by-using-aws-codepipeline.html) | DevOps engineer | 

## Troubleshooting
<a name="manage-aws-iam-identity-center-permission-sets-as-code-by-using-aws-codepipeline-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| Access denied errors | Confirm that you have the permissions required to deploy the CloudFormation templates and the resources defined within them. For more information, see [Controlling access](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-iam-template.html) in the CloudFormation documentation. | 
| Pipeline errors in the validation phase | This error appears if there are any errors in the permission set or assignment templates.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/manage-aws-iam-identity-center-permission-sets-as-code-by-using-aws-codepipeline.html) | 

## Related resources
<a name="manage-aws-iam-identity-center-permission-sets-as-code-by-using-aws-codepipeline-resources"></a>
+ [Permission sets](https://docs.aws.amazon.com/singlesignon/latest/userguide/permissionsetsconcept.html) (IAM Identity Center documentation)

# Manage credentials using AWS Secrets Manager
<a name="manage-credentials-using-aws-secrets-manager"></a>

*Durga Prasad Cheepuri, Amazon Web Services*

## Summary
<a name="manage-credentials-using-aws-secrets-manager-summary"></a>

This pattern walks you through using AWS Secrets Manager to dynamically fetch database credentials for a Java Spring application.

In the past, when you created a custom application that retrieves information from a database, you typically had to embed the credentials (the secret) for accessing the database directly in the application. When it was time to rotate the credentials, you had to invest time to update the application to use the new credentials, and then distribute the updated application. If you had multiple applications that shared credentials and you missed updating one of them, the application would fail. Because of this risk, many users chose not to regularly rotate their credentials, which effectively substituted one risk for another.

Secrets Manager enables you to replace hard-coded credentials in your code (including passwords) with an API call to retrieve the secret programmatically. This helps ensure that the secret can't be compromised by someone who is examining your code, because the secret simply isn't there. You can also configure Secrets Manager to automatically rotate the secret according to a schedule that you specify. This enables you to replace long-term secrets with short-term ones, which helps significantly reduce the risk of compromise. For more information, see the [AWS Secrets Manager documentation](https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html). 

## Prerequisites and limitations
<a name="manage-credentials-using-aws-secrets-manager-prereqs"></a>

**Prerequisites**
+ An AWS account with access to Secrets Manager
+ A Java Spring application

## Architecture
<a name="manage-credentials-using-aws-secrets-manager-architecture"></a>

**Source technology stack**
+ A Java Spring application with code that accesses a database, with DB credentials managed from the application.properties file.

**Target technology stack**
+ A Java Spring application with code that accesses a database, with DB credentials managed in Secrets Manager. The application.properties file holds the details (secret name, endpoint, and Region) that the application uses to call Secrets Manager.

**Secrets Manager integration with an application**

![\[Diagram showing AWS Secrets Manager interaction with an admin, custom app, and personnel database.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/44d359f5-47d9-4228-ac14-a64b5dfa7972/images/fc4b44fd-d1bd-4564-9bc1-c42c896e305b.png)


## Tools
<a name="manage-credentials-using-aws-secrets-manager-tools"></a>
+ **Secrets Manager** – [AWS Secrets Manager](https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html) is an AWS service that makes it easier for you to manage secrets. Secrets can be database credentials, passwords, third-party API keys, and even arbitrary text. You can store and control access to these secrets centrally by using the Secrets Manager console, the Secrets Manager command-line interface (CLI), or the Secrets Manager API and SDKs.

## Epics
<a name="manage-credentials-using-aws-secrets-manager-epics"></a>

### Store secret in Secrets Manager
<a name="store-secret-in-secrets-manager"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Store the DB credentials as a secret in Secrets Manager. | Store Amazon Relational Database Service (Amazon RDS) or other DB credentials as a secret in Secrets Manager by following the steps in [Creating a secret](https://docs.aws.amazon.com/secretsmanager/latest/userguide/manage_create-basic-secret.html) in the Secrets Manager documentation. | Sys Admin | 
| Set permissions for the Spring application to access Secrets Manager. | Set the appropriate permissions based on how the Java Spring application uses Secrets Manager. To control access to the secret, create a policy based on the information provided in the Secrets Manager documentation, in the sections [Using identity-based policies (IAM Policies) and ABAC for Secrets Manager](https://docs.aws.amazon.com/secretsmanager/latest/userguide/auth-and-access_identity-based-policies.html) and [Using resource-based policies for Secrets Manager](https://docs.aws.amazon.com/secretsmanager/latest/userguide/auth-and-access_resource-based-policies.html). Follow the steps in the section [Retrieving the secret value](https://docs.aws.amazon.com/secretsmanager/latest/userguide/manage_retrieve-secret.html) in the Secrets Manager documentation. | Sys Admin | 

### Update the Spring application
<a name="update-the-spring-application"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Add JAR dependencies to use Secrets Manager. | See the *Additional information* section for details. | Java developer | 
| Add the details of the secret to the Spring application. | Update the application.properties file with the secret name, endpoints, and AWS Region. For an example, see the *Additional information* section. | Java developer | 
| Update the DB credentials retrieval code in Java. | In the application, update the Java code that fetches the DB credentials to fetch those details from Secrets Manager. For example code, see the *Additional information* section. | Java developer | 

## Related resources
<a name="manage-credentials-using-aws-secrets-manager-resources"></a>
+ [AWS Secrets Manager documentation](https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html)
+ [Using identity-based policies (IAM Policies) and ABAC for Secrets Manager](https://docs.aws.amazon.com/secretsmanager/latest/userguide/auth-and-access_identity-based-policies.html)
+ [Using resource-based policies for Secrets Manager](https://docs.aws.amazon.com/secretsmanager/latest/userguide/auth-and-access_resource-based-policies.html)
+ [Sample code](https://github.com/durgachamz/Spring-secrets-manager) 

## Additional information
<a name="manage-credentials-using-aws-secrets-manager-additional"></a>

**Adding JAR dependencies for using Secrets Manager**

*Maven:*

```
<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-java-sdk-secretsmanager</artifactId>
    <version>1.11.355</version>
</dependency>
```

*Gradle:*

```
compile group: 'com.amazonaws', name: 'aws-java-sdk-secretsmanager', version: '1.11.355'
```

**Updating the application.properties file with the details of the secret**

```
spring.aws.secretsmanager.secretName=postgres-local
spring.aws.secretsmanager.endpoint=secretsmanager.us-east-1.amazonaws.com
spring.aws.secretsmanager.region=us-east-1
```

**Updating the DB credentials retrieval code in Java**

```
String secretName = env.getProperty("spring.aws.secretsmanager.secretName");
String endpoint = env.getProperty("spring.aws.secretsmanager.endpoint");
String region = env.getProperty("spring.aws.secretsmanager.region");

// Build a Secrets Manager client for the configured endpoint and Region
AwsClientBuilder.EndpointConfiguration config =
        new AwsClientBuilder.EndpointConfiguration(endpoint, region);
AWSSecretsManagerClientBuilder clientBuilder = AWSSecretsManagerClientBuilder.standard();
clientBuilder.setEndpointConfiguration(config);
AWSSecretsManager client = clientBuilder.build();

ObjectMapper objectMapper = new ObjectMapper();
JsonNode secretsJson = null;

GetSecretValueRequest getSecretValueRequest =
        new GetSecretValueRequest().withSecretId(secretName);
GetSecretValueResult getSecretValueResponse = null;

try {
    getSecretValueResponse = client.getSecretValue(getSecretValueRequest);
} catch (ResourceNotFoundException e) {
    log.error("The requested secret " + secretName + " was not found");
} catch (InvalidRequestException e) {
    log.error("The request was invalid due to: " + e.getMessage());
} catch (InvalidParameterException e) {
    log.error("The request had invalid params: " + e.getMessage());
}

if (getSecretValueResponse == null) {
    return null;
}

// The secret is decrypted by using the associated KMS key. Depending on whether
// the secret was stored as a string or as binary, one of these fields is populated.
String secret = getSecretValueResponse.getSecretString();
if (secret != null) {
    try {
        secretsJson = objectMapper.readTree(secret);
    } catch (IOException e) {
        log.error("Exception while retrieving secret values: " + e.getMessage());
    }
} else {
    log.error("The secret string returned is null");
    return null;
}

String host = secretsJson.get("host").textValue();
String port = secretsJson.get("port").textValue();
String dbname = secretsJson.get("dbname").textValue();
String username = secretsJson.get("username").textValue();
String password = secretsJson.get("password").textValue();
```

# Monitor Amazon ElastiCache clusters for at-rest encryption
<a name="monitor-amazon-elasticache-clusters-for-at-rest-encryption"></a>

*Abhishek Agawane, Amazon Web Services*

## Summary
<a name="monitor-amazon-elasticache-clusters-for-at-rest-encryption-summary"></a>

Amazon ElastiCache is an Amazon Web Services (AWS) service that provides a high-performance, scalable, and cost-effective caching solution for distributing an in-memory data store or cache environment in the cloud. It serves data from high-throughput, low-latency, in-memory data stores. This functionality makes it a popular choice for real-time use cases such as caching, session stores, gaming, geospatial services, real-time analytics, and queuing. ElastiCache offers Redis and Memcached data stores, both of which provide sub-millisecond response times.

Data encryption helps prevent unauthorized users from reading sensitive data available on your Redis clusters and their associated cache storage systems. This includes data saved to persistent media, known as *data at rest*, and data that can be intercepted as it travels through the network between clients and cache servers, known as *data in transit*.

You can enable at-rest encryption for ElastiCache (Redis OSS) when you create a replication group by setting the `AtRestEncryptionEnabled` parameter to `true`. When this parameter is enabled, it encrypts the disk during sync, backup, and swap operations, and encrypts backups stored in Amazon Simple Storage Service (Amazon S3). You cannot enable at-rest encryption on an existing replication group. When you create a replication group, you can enable encryption at rest in one of two ways:
+ By choosing the **Default** option, which uses service-managed encryption at rest.
+ By using a customer managed key and providing the key ID or Amazon Resource Name (ARN) from AWS Key Management Service (AWS KMS).
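Because at-rest encryption can be set only at creation time, the `AtRestEncryptionEnabled` flag belongs in the `CreateReplicationGroup` request itself. The following is a minimal sketch of the request parameters for a hypothetical Boto3 call such as `elasticache.create_replication_group(**params)`; the replication group ID, node type, and KMS key ARN are placeholders:

```python
# AtRestEncryptionEnabled cannot be changed after the replication group exists.
params = {
    "ReplicationGroupId": "example-redis-group",
    "ReplicationGroupDescription": "Replication group with at-rest encryption",
    "Engine": "redis",
    "CacheNodeType": "cache.t4g.micro",
    "NumCacheClusters": 2,
    "AtRestEncryptionEnabled": True,
    # Omit KmsKeyId to use service-managed encryption (the Default option),
    # or pass a customer managed key from AWS KMS:
    "KmsKeyId": "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID",
}
print(params["AtRestEncryptionEnabled"])  # → True
```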

This pattern provides a security control that monitors for API calls and generates an Amazon EventBridge event on the `CreateReplicationGroup` operation. This event calls an AWS Lambda function, which runs a Python script. The function gets the replication group ID from the event JSON input and performs the following checks to determine whether there's an unencrypted cluster:
+ Checks whether the `AtRestEncryptionEnabled` key exists.
+ If `AtRestEncryptionEnabled` exists, checks whether its value is `true`.
+ If the `AtRestEncryptionEnabled` value is set to `false`, sets a variable that tracks violations and sends a violation message to an email address that you provide, by using an Amazon Simple Notification Service (Amazon SNS) notification.
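The decision logic above is simple enough to sketch. Given the event detail that EventBridge passes to the function, the check below flags a violation when `AtRestEncryptionEnabled` is missing or not `true`. This is an illustrative reimplementation, not the repository's `index.py`, and the event shapes and key casing are assumptions:

```python
def is_encryption_violation(event_detail):
    """Return True when the CreateReplicationGroup request did not enable
    at-rest encryption (key absent, or present with a value other than true)."""
    params = event_detail.get("requestParameters") or {}
    return params.get("AtRestEncryptionEnabled") is not True

# Example event details (illustrative shapes, not verbatim records)
assert is_encryption_violation({"requestParameters": {}})                                  # key missing
assert is_encryption_violation({"requestParameters": {"AtRestEncryptionEnabled": False}})  # explicitly off
assert not is_encryption_violation({"requestParameters": {"AtRestEncryptionEnabled": True}})
```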

## Prerequisites and limitations
<a name="monitor-amazon-elasticache-clusters-for-at-rest-encryption-prereqs"></a>

**Prerequisites**
+ An active AWS account.
+ An S3 bucket to upload the provided Lambda code.
+ An email address where you would like to receive violation notifications.
+ AWS CloudTrail logging enabled, to provide access to all of the ElastiCache API call logs.

**Limitations**
+ This detective control is regional and must be deployed in each AWS Region that you want to monitor.
+ This control supports only replication groups that are running in a virtual private cloud (VPC).
+ This control supports only replication groups that are running the following node types:
  + R7g, R6gd, R6g, R5, R4, R3
  + M7g, M6g, M5, M4, M3
  + T4g, T3, T2
  + C7gn

**Product versions**
+ Supports ElastiCache (Redis OSS) version 3.2.6 or later, and Valkey 7.2 or later

## Architecture
<a name="monitor-amazon-elasticache-clusters-for-at-rest-encryption-architecture"></a>

**Workflow architecture**

![\[Workflow for monitoring ElastiCache clusters.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/2917ebc2-3cfe-4530-887d-2c7eb7085453/images/59a36936-e9a8-4f12-a49d-776ff7959053.png)


1. The user launches an ElastiCache replication group through the AWS Management Console, the AWS Command Line Interface (AWS CLI), or an API call.

1. ElastiCache generates EventBridge events when the `CreateReplicationGroup` API is called.

1. An EventBridge rule matches the event and invokes the Lambda function for compliance checking.

1. The Lambda function processes the event and checks if at-rest encryption is enabled on the ElastiCache cluster.

1. If an encryption violation is detected, the Lambda function publishes a notification message to an SNS topic.

1. Amazon SNS delivers an email notification to administrators about the encryption compliance violation.
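Steps 5 and 6 amount to publishing a structured message to the SNS topic. The following is a hedged sketch of how the Lambda function might compose that notification; the message fields, subject format, and topic ARN are assumptions, not the repository's exact output:

```python
import json

def build_violation_message(replication_group_id, region, account_id):
    """Compose the subject and JSON body for an encryption-violation notification."""
    subject = f"ElastiCache at-rest encryption violation: {replication_group_id}"
    body = {
        "violation": "AtRestEncryptionEnabled is not true",
        "replicationGroupId": replication_group_id,
        "region": region,
        "accountId": account_id,
        "remediation": "Recreate the replication group with at-rest encryption "
                       "enabled; it cannot be enabled on an existing group.",
    }
    return subject, json.dumps(body)

# Inside the Lambda handler, this would be published with Boto3, for example:
#   boto3.client("sns").publish(TopicArn=topic_arn, Subject=subject, Message=message)
subject, message = build_violation_message("example-redis-group", "us-east-1", "111122223333")
print(subject)
```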

**Automation and scale**
+ If you are using AWS Organizations, you can use [AWS CloudFormation StackSets](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/what-is-cfnstacksets.html) to deploy this template in multiple accounts that you want to monitor.

## Tools
<a name="monitor-amazon-elasticache-clusters-for-at-rest-encryption-tools"></a>

**AWS services**
+ [Amazon ElastiCache](https://docs.aws.amazon.com/elasticache/) makes it easy to set up, manage, and scale distributed in-memory cache environments in the AWS Cloud. It provides a high-performance, resizable, and cost-effective in-memory cache while removing the complexity associated with deploying and managing a distributed cache environment. ElastiCache works with both the Redis and Memcached engines.
+ [AWS CloudFormation](https://aws.amazon.com/cloudformation/) helps you model and set up your AWS resources, provision them quickly and consistently, and manage them throughout their lifecycle. You can use a template to describe your resources and their dependencies, and launch and configure them together as a stack, instead of managing resources individually. You can manage and provision stacks across multiple AWS accounts and AWS Regions.
+ [Amazon EventBridge](https://docs.aws.amazon.com/eventbridge/) delivers a near real-time stream of system events that describe changes in AWS resources. EventBridge becomes aware of operational changes as they occur and takes corrective action as necessary, by sending messages to respond to the environment, activating functions, making changes, and capturing state information.
+ [AWS Lambda](https://aws.amazon.com/lambda/) is a compute service that supports running code without provisioning or managing servers. Lambda runs your code only when needed and scales automatically from a few requests per day to thousands per second. You pay only for the compute time that you consume—there is no charge when your code is not running. 
+ [Amazon SNS](https://aws.amazon.com/sns/) coordinates and manages the sending of messages between publishers and clients, including web servers and email addresses. Subscribers receive all messages published to the topics to which they subscribe, and all subscribers to a topic receive the same messages.

**Code**

The code for this pattern is available in the GitHub [Monitor Amazon ElastiCache clusters for at-rest encryption](https://github.com/aws-samples/sample-Monitor_Amazon_ElastiCache_clusters_for_at-rest_encryption) repository. See the [Epics section](#monitor-amazon-elasticache-clusters-for-at-rest-encryption-epics) for information about how to use the files in the repository.

## Best practices
<a name="monitor-amazon-elasticache-clusters-for-at-rest-encryption-best-practices"></a>

**Deployment**
+ Make sure that AWS CloudTrail is logging ElastiCache API calls before you deploy this control.
+ This is a regional control; deploy the control in each AWS Region where you use ElastiCache.
+ Validate the solution in dev/test environments before you deploy it to production.

**Security**
+ For enhanced control over encryption keys, use customer managed KMS keys. 
+ Review AWS Identity and Access Management (IAM) permissions to ensure least privilege access for the Lambda execution role.
+ Set up alerts for messages in the dead letter queue.

**Operations**
+ Set appropriate log retention to balance compliance needs with cost.
+ Tune the reserved concurrency of Lambda to adjust based on your ElastiCache creation frequency.
+ Subscribe multiple email addresses to Amazon SNS for team notifications.

**Monitoring**
+ Review Amazon CloudWatch alarms to make sure that alarm thresholds match your operational needs.
+ Monitor Lambda metrics execution duration and error rates regularly.
+ Audit violations regularly to review encryption compliance notifications.

## Epics
<a name="monitor-amazon-elasticache-clusters-for-at-rest-encryption-epics"></a>

### Deploy the security control
<a name="deploy-the-security-control"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Download the code from GitHub. | Clone or download the [code repository](https://github.com/aws-samples/sample-Monitor_Amazon_ElastiCache_clusters_for_at-rest_encryption) from GitHub. The repository contains the files  `index.py` and `elasticache_encryption_at_rest.yml`. | Cloud architect | 
| Create Lambda deployment packages. | Create two .zip files from the Python code:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/monitor-amazon-elasticache-clusters-for-at-rest-encryption.html) | Cloud architect | 
| Upload the code to an S3 bucket. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/monitor-amazon-elasticache-clusters-for-at-rest-encryption.html) | Cloud architect  | 
| Deploy the CloudFormation template. | Open the [CloudFormation console](https://console.aws.amazon.com/cloudformation/) in the same AWS Region as the S3 bucket, and deploy the `elasticache_encryption_at_rest.yml` file that's provided in the code repository. In the next epic, provide values for the template parameters. | Cloud architect  | 

### Complete the parameters in the CloudFormation template
<a name="complete-the-parameters-in-the-cloudformation-template"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Provide the S3 bucket name. | Enter the name of the S3 bucket that you created or selected in the first epic. This S3 bucket contains the .zip file for the Lambda code and must be in the same AWS Region as the CloudFormation template and the resource that will be evaluated.  | Cloud architect | 
| Provide the S3 key. | Provide the location of the Lambda code .zip file in your S3 bucket, without leading slashes (for example, `ElasticCache-EncryptionAtRest.zip` or `controls/ElasticCache-EncryptionAtRest.zip`). | Cloud architect  | 
| Provide an email address. | Provide an active email address where you want to receive violation notifications.  | Cloud architect | 
| Specify a logging level. | Specify the logging level and verbosity. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/monitor-amazon-elasticache-clusters-for-at-rest-encryption.html) | Cloud architect  | 

### Confirm the subscription
<a name="confirm-the-subscription"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Confirm the email subscription. | When the CloudFormation template deploys successfully, it sends a subscription message to the email address you provided. To receive notifications, you must confirm this email subscription. | Cloud architect | 

## Troubleshooting
<a name="monitor-amazon-elasticache-clusters-for-at-rest-encryption-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| Lambda function not triggered | **Symptom**: No logs in CloudWatch after you create or modify ElastiCache clusters. **Solutions**: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/monitor-amazon-elasticache-clusters-for-at-rest-encryption.html) | 
| No email notifications | **Symptom**: The Lambda function runs successfully, but you don’t receive any email notifications. **Solutions**: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/monitor-amazon-elasticache-clusters-for-at-rest-encryption.html) | 
| Permission issues | **Symptom**: *Access denied* errors in Lambda function CloudWatch logs. **Solutions**: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/monitor-amazon-elasticache-clusters-for-at-rest-encryption.html) | 

## Related resources
<a name="monitor-amazon-elasticache-clusters-for-at-rest-encryption-resources"></a>
+ [Create a stack from the CloudFormation console](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-console-create-stack.html) (CloudFormation documentation)
+ [At-rest encryption in ElastiCache (Redis OSS)](https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/at-rest-encryption.html) (ElastiCache documentation)

# Monitor IAM root user activity
<a name="monitor-iam-root-user-activity"></a>

*JJ Sung and Mostefa Brougui, Amazon Web Services*

## Summary
<a name="monitor-iam-root-user-activity-summary"></a>

Every Amazon Web Services (AWS) account has a root user. As a [security best practice](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html) for AWS Identity and Access Management (IAM), we recommend that you use the root user only to complete the tasks that only the root user can perform. For the complete list, see [Tasks that require root user credentials](https://docs.aws.amazon.com/accounts/latest/reference/root-user-tasks.html) in the *AWS Account Management Reference Guide*. Because the root user has full access to all of your AWS resources and billing information, we recommend that you avoid using this account otherwise and that you monitor it for any activity, which might indicate that the root user credentials have been compromised.

Using this pattern, you set up an [event-driven architecture](https://aws.amazon.com/event-driven-architecture/) that monitors the IAM root user. This pattern sets up a hub-and-spoke solution that monitors multiple AWS accounts, the *spoke* accounts, and centralizes management and reporting in a single account, the *hub* account.

When the IAM root user credentials are used, Amazon CloudWatch and AWS CloudTrail record the activity in the log and trail, respectively. In the spoke account, an Amazon EventBridge rule sends the event to the central [event bus](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-event-bus.html) in the hub account. In the hub account, an EventBridge rule sends the event to an AWS Lambda function. The function uses an Amazon Simple Notification Service (Amazon SNS) topic that notifies you of the root user activity.

In this pattern, you use an AWS CloudFormation template to deploy the monitoring and event-handling services in the spoke accounts. You use a HashiCorp Terraform template to deploy the event-management and notification services in the hub account.

## Prerequisites and limitations
<a name="monitor-iam-root-user-activity-prereqs"></a>

**Prerequisites**

1. Permissions to deploy AWS resources in your AWS environment.

1. Permissions to deploy CloudFormation stack sets. For more information, see [Prerequisites for stack set operations](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-prereqs.html) (CloudFormation documentation).

1. Terraform installed and ready to use. For more information, see [Get Started – AWS](https://learn.hashicorp.com/collections/terraform/aws-get-started) (Terraform documentation).

1. An existing trail in each spoke account. For more information, see [Getting started with AWS CloudTrail](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-tutorial.html) (CloudTrail documentation).

1. The trail is configured to send events to CloudWatch Logs. For more information, see [Sending events to CloudWatch Logs](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/send-cloudtrail-events-to-cloudwatch-logs.html) (CloudTrail documentation).

1. Your hub and spoke accounts must be managed by AWS Organizations.

## Architecture
<a name="monitor-iam-root-user-activity-architecture"></a>

The following diagram illustrates the building blocks of the implementation.

![\[An event in a spoke account creating an email notification in a hub account\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/175f356b-f9df-4d33-82fc-fe33b2c88b05/images/6147e5b5-616e-49a4-b330-dbb7e3381fe7.png)


1. When the IAM root user credentials are used, CloudWatch and CloudTrail record the activity in the log and trail, respectively.

1. In the spoke account, an EventBridge rule sends the event to the central [event bus](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-event-bus.html) in the hub account.

1. In the hub account, an EventBridge rule sends the event to a Lambda function.

1. The Lambda function uses an Amazon SNS topic that notifies you of the root user activity.
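
In the spoke accounts, the EventBridge rule typically matches CloudTrail events whose `userIdentity.type` is `Root`. The following Python sketch shows what such an event pattern might look like, together with a toy matcher that illustrates the semantics; the exact pattern in `spoke-stackset.yaml` may differ.

```python
# Hypothetical EventBridge event pattern for root user activity; the
# pattern shipped in spoke-stackset.yaml may differ in detail.
root_activity_pattern = {
    "detail-type": ["AWS API Call via CloudTrail", "AWS Console Sign In via CloudTrail"],
    "detail": {
        "userIdentity": {"type": ["Root"]},
    },
}

def matches(pattern: dict, event: dict) -> bool:
    """Tiny subset of EventBridge matching: every leaf list in the
    pattern must contain the corresponding value in the event."""
    for key, expected in pattern.items():
        value = event.get(key)
        if isinstance(expected, dict):
            if not isinstance(value, dict) or not matches(expected, value):
                return False
        else:
            if value not in expected:
                return False
    return True

signin_event = {
    "detail-type": "AWS Console Sign In via CloudTrail",
    "detail": {"userIdentity": {"type": "Root"}},
}
```

EventBridge's real pattern language supports more operators (prefix matching, anything-but, and so on); this toy matcher covers only the exact-value case used here.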

## Tools
<a name="monitor-iam-root-user-activity-tools"></a>

**AWS services**
+ [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) helps you set up AWS resources, provision them quickly and consistently, and manage them throughout their lifecycle across AWS accounts and Regions.
+ [AWS CloudTrail](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-user-guide.html) helps you audit the governance, compliance, and operational risk of your AWS account.
+ [Amazon CloudWatch Logs](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html) helps you centralize the logs from all your systems, applications, and AWS services so you can monitor them and archive them securely.
+ [Amazon EventBridge](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-what-is.html) is a serverless event bus service that helps you connect your applications with real-time data from a variety of sources, such as AWS Lambda functions, HTTP invocation endpoints using API destinations, or event buses in other AWS accounts.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a compute service that helps you run code without needing to provision or manage servers. It runs your code only when needed and scales automatically, so you pay only for the compute time that you use.
+ [Amazon Simple Notification Service (Amazon SNS)](https://docs.aws.amazon.com/sns/latest/dg/welcome.html) helps you coordinate and manage the exchange of messages between publishers and clients, including web servers and email addresses.

**Other tools and services**
+ [Terraform](https://www.terraform.io/docs) is a CLI application for provisioning and managing cloud infrastructure and resources by using code, in the form of configuration files.

**Code repository**

The source code and templates for this pattern are available in a [GitHub repository](https://github.com/aws-samples/aws-iam-root-user-activity-monitor). This pattern provides two templates:
+ A Terraform template containing the resources you deploy in the hub account
+ A CloudFormation template you deploy as a stack set instance in the spoke accounts

The repository has the following overall structure.

```
.
 |__README.md
 |__spoke-stackset.yaml
 |__hub.tf
 |__root-activity-monitor-module
     |__main.tf  # contains Terraform code to deploy resources in the Hub account
     |__iam      # contains IAM policies JSON files
         |__ lambda-assume-policy.json          # contains trust policy of the IAM role used by the Lambda function
         |__ lambda-policy.json                 # contains the IAM policy attached to the IAM role used by the Lambda function
     |__outputs  # contains Lambda function zip code
```

The *Epics* section provides step-by-step instructions for deploying the templates.

## Epics
<a name="monitor-iam-root-user-activity-epics"></a>

### Deploy resources to the hub account
<a name="deploy-resources-to-the-hub-account"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clone the sample code repository. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/monitor-iam-root-user-activity.html) | General AWS | 
| Update the Terraform template. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/monitor-iam-root-user-activity.html) | General AWS | 
| Deploy the resources to the AWS hub account. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/monitor-iam-root-user-activity.html) | General AWS | 

### Deploy resources to your spoke accounts
<a name="deploy-resources-to-your-spoke-accounts"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Deploy the CloudFormation template. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/monitor-iam-root-user-activity.html)For more information and instructions, see [Create a stack set](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-getting-started-create.html) (CloudFormation documentation). | General AWS | 

### (Optional) Test the notifications
<a name="optional-test-the-notifications"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Use the root user credentials. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/monitor-iam-root-user-activity.html) | General AWS | 

## Related resources
<a name="monitor-iam-root-user-activity-resources"></a>
+ [Security best practices](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html) (IAM documentation)
+ [Working with StackSets](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/what-is-cfnstacksets.html) (CloudFormation documentation)
+ [Get Started](https://learn.hashicorp.com/collections/terraform/aws-get-started) (Terraform documentation)

## Additional information
<a name="monitor-iam-root-user-activity-additional"></a>

[Amazon GuardDuty](https://docs.aws.amazon.com/guardduty/latest/ug/what-is-guardduty.html) is a continuous security monitoring service that analyzes and processes logs to identify unexpected and potentially unauthorized activity in your AWS environment. If you have enabled GuardDuty, it can serve as an alternative to this solution because it alerts you when the root user credentials have been used. The GuardDuty finding is `Policy:IAMUser/RootCredentialUsage`, and its default severity is **Low**. For more information, see [Managing Amazon GuardDuty findings](https://docs.aws.amazon.com/guardduty/latest/ug/findings_management.html).

# Send a notification when an IAM user is created
<a name="send-a-notification-when-an-iam-user-is-created"></a>

*Mansi Suratwala and Sergiy Shevchenko, Amazon Web Services*

## Summary
<a name="send-a-notification-when-an-iam-user-is-created-summary"></a>

You can use this pattern to deploy an AWS CloudFormation template on Amazon Web Services (AWS) that notifies you automatically when AWS Identity and Access Management (IAM) users are created.

Using IAM, you can manage access to AWS services and resources securely. You can create and manage AWS users and groups, and use permissions to allow and deny those users and groups access to AWS resources.

The CloudFormation template creates an Amazon CloudWatch Events rule and an AWS Lambda function. The rule uses AWS CloudTrail to monitor for any IAM user being created in the AWS account. If a user is created, the rule initiates the Lambda function, which sends you an Amazon Simple Notification Service (Amazon SNS) notification informing you of the new user creation event.
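
To illustrate, a minimal sketch of the notification logic might look like the following. The CloudTrail field names and the `SNS_TOPIC_ARN` environment variable are assumptions for this sketch, not a copy of the attached Lambda code.

```python
# Hypothetical sketch of the notification Lambda; the code attached to
# this pattern may differ. The rule delivers the CloudTrail CreateUser
# event; the function builds a readable message and publishes it to SNS.
import os

def build_message(event: dict) -> str:
    detail = event["detail"]
    user = detail["requestParameters"]["userName"]
    creator = detail["userIdentity"].get("arn", "unknown principal")
    return f"IAM user '{user}' was created by {creator} at {detail['eventTime']}."

def lambda_handler(event, context):
    import boto3  # provided by the Lambda runtime
    sns = boto3.client("sns")
    sns.publish(
        TopicArn=os.environ["SNS_TOPIC_ARN"],  # assumed environment variable
        Subject="New IAM user created",
        Message=build_message(event),
    )
```

Separating `build_message` from the handler keeps the message formatting testable without AWS credentials.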

## Prerequisites and limitations
<a name="send-a-notification-when-an-iam-user-is-created-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ An AWS CloudTrail trail created and deployed

**Limitations**
+ The AWS CloudFormation template monitors the `CreateUser` event only.

## Architecture
<a name="send-a-notification-when-an-iam-user-is-created-architecture"></a>

**Target technology stack**
+ IAM
+ AWS CloudTrail
+ Amazon CloudWatch Events
+ AWS Lambda
+ Amazon Simple Storage Service (Amazon S3)
+ Amazon SNS

**Target architecture**

![\[Process from user to IAM to CloudTrail to CloudWatch Events to Lambda and an S3 bucket, ending with SNS email notification.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/5487fbff-72e7-4da4-a970-a4542e89824d/images/c73532fd-8e95-45a5-843d-1864eb4df227.png)


**Automation and scale**

You can use the AWS CloudFormation template multiple times for different AWS Regions and accounts. You need to run it only once in each Region or account. To automate deployment to multiple accounts, use [AWS CloudFormation StackSets](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/what-is-cfnstacksets.html). The CloudFormation template deploys all the required resources in each account.

## Tools
<a name="send-a-notification-when-an-iam-user-is-created-tools"></a>

**Tools**
+ [IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) – AWS Identity and Access Management (IAM) is a web service that helps you securely control access to AWS resources. You use IAM to control who is authenticated (signed in) and authorized (has permissions) to use resources.
+ [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) – AWS CloudFormation helps you model and set up your Amazon Web Services resources so that you can spend less time managing those resources and more time focusing on your applications that run in AWS. You create a template that describes all the AWS resources that you want, and CloudFormation takes care of provisioning and configuring those resources for you.
+ [AWS CloudTrail](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-user-guide.html) – AWS CloudTrail helps you manage governance, compliance, and operational and risk auditing of your AWS account. Actions taken by a user, a role, or an AWS service are recorded as events in CloudTrail. Events include actions taken in the AWS Management Console, AWS Command Line Interface, and AWS SDKs and APIs.
+ [Amazon CloudWatch Events](https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/WhatIsCloudWatchEvents.html) – Amazon CloudWatch Events delivers a near-real-time stream of system events that describe changes in AWS resources. 
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) – AWS Lambda is a compute service that supports running code without provisioning or managing servers. Lambda runs your code only when needed and scales automatically, from a few requests per day to thousands per second. 
+ [Amazon S3](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) – Amazon Simple Storage Service (Amazon S3) is storage for the internet. You can use Amazon S3 to store and retrieve any amount of data at any time, from anywhere on the web.
+ [Amazon SNS](https://docs.aws.amazon.com/sns/latest/dg/welcome.html) – Amazon Simple Notification Service (Amazon SNS) is a managed service that provides message delivery using Lambda, HTTP, email, mobile push notifications, and mobile text messages (SMS).

**Code**

A .zip file of the project is available as an attachment.

## Epics
<a name="send-a-notification-when-an-iam-user-is-created-epics"></a>

### Create the S3 bucket for the Lambda script
<a name="create-the-s3-bucket-for-the-lambda-script"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Define the S3 bucket. | Open the Amazon S3 console, and choose or create an S3 bucket. This S3 bucket will host the Lambda code .zip file. The S3 bucket name cannot contain leading slashes. | Cloud architect | 

### Upload the Lambda code to the S3 bucket
<a name="upload-the-lambda-code-to-the-s3-bucket"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Upload the Lambda code. | Upload the Lambda code .zip file provided in the *Attachments* section to the S3 bucket that you defined. | Cloud architect | 

### Deploy the CloudFormation template
<a name="deploy-the-cloudformation-template"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Deploy the CloudFormation template. | On the CloudFormation console, deploy the CloudFormation `createIAMuser.yaml` template that's provided as an attachment to this pattern. In the next epic, provide values for the template parameters. | Cloud architect | 

### Complete the parameters in the CloudFormation template
<a name="complete-the-parameters-in-the-cloudformation-template"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Provide the S3 bucket name. | Enter the name of the S3 bucket that you created or chose in the first epic. | Cloud architect | 
| Provide the S3 key. | Provide the location of the Lambda code .zip file in your S3 bucket, without leading slashes (for example, `<directory>/<file-name>.zip`). | Cloud architect | 
| Provide an email address. | Provide an active email address to receive Amazon SNS notifications. | Cloud architect | 
| Define the logging level. | Define the logging level and frequency for your Lambda function. `Info` designates detailed informational messages on the application’s progress. `Error` designates error events that could still allow the application to continue running. `Warning` designates potentially harmful situations. | Cloud architect | 
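
As an aside, the three levels described in the last task map directly onto Python's standard `logging` module. The following sketch assumes the parameter reaches the function as a `LOGGING_LEVEL` environment variable, which is an illustrative name, not necessarily the one the template uses.

```python
import logging
import os

# Map the CloudFormation logging-level parameter (assumed to arrive as
# an environment variable) onto Python's standard logging levels.
LEVELS = {"Info": logging.INFO, "Warning": logging.WARNING, "Error": logging.ERROR}

logger = logging.getLogger()
logger.setLevel(LEVELS.get(os.environ.get("LOGGING_LEVEL", "Info"), logging.INFO))
```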

### Confirm the subscription
<a name="confirm-the-subscription"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Confirm the subscription. | When the template successfully deploys, it sends a subscription email message to the email address provided. To receive notifications, you must confirm this email subscription. | Cloud architect | 

## Related resources
<a name="send-a-notification-when-an-iam-user-is-created-resources"></a>
+ [Creating a trail](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-create-a-trail-using-the-console-first-time.html?icmpid=docs_console_unmapped)
+ [Creating an S3 bucket](https://docs.aws.amazon.com/AmazonS3/latest/user-guide/create-bucket.html)
+ [Uploading files to an S3 bucket](https://docs.aws.amazon.com/AmazonS3/latest/user-guide/upload-objects.html) 
+ [Deploying a CloudFormation template](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-console-create-stack.html)
+ [Creating an IAM user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html)
+ [Creating a CloudWatch Events rule that triggers on an AWS API call using AWS CloudTrail](https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/Create-CloudWatch-Events-CloudTrail-Rule.html)

## Attachments
<a name="attachments-5487fbff-72e7-4da4-a970-a4542e89824d"></a>

To access additional content that is associated with this document, unzip the following file: [attachment.zip](samples/p-attach/5487fbff-72e7-4da4-a970-a4542e89824d/attachments/attachment.zip)

# Prevent internet access at the account level by using a service control policy
<a name="prevent-internet-access-at-the-account-level-by-using-a-service-control-policy"></a>

*Sergiy Shevchenko, Sean O'Sullivan, and Victor Mazeo Whitaker, Amazon Web Services*

## Summary
<a name="prevent-internet-access-at-the-account-level-by-using-a-service-control-policy-summary"></a>

Organizations frequently want to limit internet access for account resources that should remain private. In these accounts, the resources in virtual private clouds (VPCs) should not access the internet by any means. Many organizations choose a [centralized inspection architecture](https://aws.amazon.com/blogs/networking-and-content-delivery/centralized-inspection-architecture-with-aws-gateway-load-balancer-and-aws-transit-gateway/). For the east-west (VPC-to-VPC) traffic in a centralized inspection architecture, you need to make sure that the spoke accounts and their resources do not have access to the internet. For north-south (internet egress and on-premises) traffic, you want to allow internet access only through the inspection VPC.

This pattern uses a [service control policy (SCP)](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html) to help prevent internet access. You can apply this SCP at the account or organizational unit (OU) level. The SCP limits internet connectivity by preventing the following:
+ Creating or attaching an IPv4 or IPv6 [internet gateway](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Internet_Gateway.html) that allows direct internet access to the VPC
+ Creating or accepting a [VPC peering connection](https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html) that might allow indirect internet access through another VPC
+ Creating or updating an [AWS Global Accelerator](https://docs.aws.amazon.com/global-accelerator/latest/dg/what-is-global-accelerator.html) configuration that might allow direct internet access to VPC resources
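
As an illustration only, an SCP for this purpose might deny the relevant API calls outright. The action list below is an assumption for the sketch, not the pattern's actual policy; you can build the policy document however you prefer, for example with a short Python script:

```python
import json

# Illustrative SCP only; the policy this pattern provides may differ.
# The statement denies the API calls that would create a direct or
# indirect internet path, matching the three bullets above.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyInternetPaths",
            "Effect": "Deny",
            "Action": [
                "ec2:CreateInternetGateway",
                "ec2:AttachInternetGateway",
                "ec2:CreateEgressOnlyInternetGateway",
                "ec2:CreateVpcPeeringConnection",
                "ec2:AcceptVpcPeeringConnection",
                "globalaccelerator:CreateAccelerator",
                "globalaccelerator:UpdateAccelerator",
            ],
            "Resource": "*",
        }
    ],
}

print(json.dumps(scp, indent=2))
```

Because SCPs set permission guardrails rather than grant permissions, a `Deny` like this applies to every IAM user and role in the member accounts it's attached to, regardless of their identity-based policies.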

## Prerequisites and limitations
<a name="prevent-internet-access-at-the-account-level-by-using-a-service-control-policy-prereqs"></a>

**Prerequisites**
+ One or multiple AWS accounts managed as an organization in AWS Organizations.
+ [All features are enabled](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_org_support-all-features.html) in AWS Organizations.
+ [SCPs are enabled](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_enable-disable.html) in the organization.
+ Permissions to:
  + Access the organization's management account.
  + Create SCPs. For more information about the minimum permissions, see [Creating an SCP](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps_create.html#create-an-scp).
  + Attach the SCP to the target accounts or organizational units (OUs). For more information about the minimum permissions, see [Attaching and detaching service control policies](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps_attach.html).

**Limitations**
+ SCPs don't affect users or roles in the management account. They affect only the member accounts in your organization.
+ SCPs affect only AWS Identity and Access Management (IAM) users and roles that are managed by accounts that are part of the organization. For more information, see [SCP effects on permissions](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html#scp-effects-on-permissions).

## Tools
<a name="prevent-internet-access-at-the-account-level-by-using-a-service-control-policy-tools"></a>

**AWS services**
+ [AWS Organizations](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_introduction.html) is an account management service that helps you consolidate multiple AWS accounts into an organization that you create and centrally manage. In this pattern, you use [service control policies (SCPs)](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html) in AWS Organizations.
+ [Amazon Virtual Private Cloud (Amazon VPC)](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html) helps you launch AWS resources into a virtual network that you’ve defined. This virtual network resembles a traditional network that you’d operate in your own data center, with the benefits of using the scalable infrastructure of AWS.

## Best practices
<a name="prevent-internet-access-at-the-account-level-by-using-a-service-control-policy-best-practices"></a>

After establishing this SCP in your organization, make sure to update it frequently to address any new AWS services or features that might affect internet access.

## Epics
<a name="prevent-internet-access-at-the-account-level-by-using-a-service-control-policy-epics"></a>

### Create and attach the SCP
<a name="create-and-attach-the-scp"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the SCP. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/prevent-internet-access-at-the-account-level-by-using-a-service-control-policy.html) | AWS administrator | 
| Attach the SCP. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/prevent-internet-access-at-the-account-level-by-using-a-service-control-policy.html) | AWS administrator | 

## Related resources
<a name="prevent-internet-access-at-the-account-level-by-using-a-service-control-policy-resources"></a>
+ [AWS Organizations documentation](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_introduction.html)
+ [Service control policies (SCPs)](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_introduction.html)
+ [Centralized inspection architecture with AWS Gateway Load Balancer and AWS Transit Gateway](https://aws.amazon.com/blogs/networking-and-content-delivery/centralized-inspection-architecture-with-aws-gateway-load-balancer-and-aws-transit-gateway/) (AWS blog post)

# Export a report of AWS IAM Identity Center identities and their assignments by using PowerShell
<a name="export-a-report-of-aws-iam-identity-center-identities-and-their-assignments-by-using-powershell"></a>

*Jorge Pava, Frank Allotta, Manideep Reddy Gillela, and Chad Miles, Amazon Web Services*

## Summary
<a name="export-a-report-of-aws-iam-identity-center-identities-and-their-assignments-by-using-powershell-summary"></a>

When you use AWS IAM Identity Center (successor to AWS Single Sign-On) to centrally manage single sign-on (SSO) access to all of your Amazon Web Services (AWS) accounts and cloud applications, reporting and auditing those assignments through the AWS Management Console can be tedious and time consuming. This is especially true if you’re reporting on permissions for a user or group across dozens or hundreds of AWS accounts.

For many people, the ideal way to view this information is in a spreadsheet application, such as Microsoft Excel. A spreadsheet can help you filter, search, and visualize the data for your entire organization, which is managed by AWS Organizations.

This pattern describes how to use AWS Tools for PowerShell to generate a report of SSO identity configurations in IAM Identity Center. The report is formatted as a CSV file, and it includes the identity name (principal), identity type (user or group), accounts the identity can access, and permission sets. After generating this report, you can open it in your preferred application to search, filter, and audit the data as needed. The following image shows sample data in a spreadsheet application.

![\[PowerShell script results viewed in spreadsheet application.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/46c7dc7f-c726-4754-b590-2f09d657b167/images/bbc21d8b-fc5d-4b5d-b159-81197a89823e.png)



**Important**: Because this report contains sensitive information, we highly recommend that you store it securely and share it only on a need-to-know basis.

## Prerequisites and limitations
<a name="export-a-report-of-aws-iam-identity-center-identities-and-their-assignments-by-using-powershell-prereqs"></a>

**Prerequisites**
+ IAM Identity Center and AWS Organizations, configured and enabled.
+ PowerShell, installed and configured. For more information, see [Installing PowerShell](https://learn.microsoft.com/en-us/powershell/scripting/install/installing-powershell?view=powershell-7.2) (Microsoft documentation).
+ AWS Tools for PowerShell, installed and configured. For performance reasons, we highly recommend that you install the modularized version of AWS Tools for PowerShell, called `AWS.Tools`. Each AWS service is supported by its own individual, small module. In the PowerShell shell, enter the following commands to install the modules needed for this pattern: `AWS.Tools.Installer`, `Organizations`, `SSOAdmin`, and `IdentityStore`.

  ```
  Install-Module AWS.Tools.Installer
  Install-AWSToolsModule -Name Organizations, SSOAdmin, IdentityStore
  ```

  For more information, see [Install AWS.Tools on Windows](https://docs.aws.amazon.com/powershell/latest/userguide/pstools-getting-set-up-windows.html#ps-installing-awstools) or [Install AWS.Tools on Linux or macOS](https://docs.aws.amazon.com/powershell/latest/userguide/pstools-getting-set-up-linux-mac.html#install-aws.tools-on-linux-macos) (AWS Tools for PowerShell documentation). If you receive an error when installing the modules, see the [Troubleshooting](#export-a-report-of-aws-iam-identity-center-identities-and-their-assignments-by-using-powershell-troubleshooting) section of this pattern.
+ AWS Command Line Interface (AWS CLI) or the AWS SDK, previously configured with working credentials by doing one of the following:
  + Use the AWS CLI `aws configure` command. For more information, see [Quick configuration](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html#cli-configure-quickstart-config) (AWS CLI documentation).
  + Configure AWS CLI or AWS Cloud Development Kit (AWS CDK) to get temporary access through an AWS Identity and Access Management (IAM) role. For more information, see [Getting IAM role credentials for CLI access](https://docs.aws.amazon.com/singlesignon/latest/userguide/howtogetcredentials.html) (IAM Identity Center documentation).
+ A named profile for the AWS CLI that has saved credentials for an IAM principal that:
  + Has access to the AWS Organizations management account or the delegated administrator account for IAM Identity Center
  + Has the `AWSSSOReadOnly` and `AWSSSODirectoryReadOnly` AWS managed policies applied to it

  For more information, see [Using named profiles](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html#cli-configure-files-using-profiles) (AWS CLI documentation) and [AWS managed policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_managed-vs-inline.html#aws-managed-policies) (IAM documentation).
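
  A named profile is stored in the AWS CLI shared `config` and `credentials` files. The following is a hypothetical example; the profile name `sso-report`, the Region, and the credential values are placeholders (the access key shown is the AWS documentation example key):

  ```
  # ~/.aws/config
  [profile sso-report]
  region = us-east-1
  output = json

  # ~/.aws/credentials
  [sso-report]
  aws_access_key_id = AKIAIOSFODNN7EXAMPLE
  aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
  ```

  You can then provide the profile name in the script's `$ProfileName` parameter.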

**Limitations**
+ The target AWS accounts must be managed as an organization in AWS Organizations.

**Product versions**
+ For all operating systems, it is recommended that you use [PowerShell version 7.0](https://github.com/powershell/powershell) or later.

## Architecture
<a name="export-a-report-of-aws-iam-identity-center-identities-and-their-assignments-by-using-powershell-architecture"></a>

**Target architecture**

![\[Script using AWS CLI named profile to create a report of SSO identities in IAM Identity Center.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/46c7dc7f-c726-4754-b590-2f09d657b167/images/ae5189aa-8197-4a05-88df-7c0294a679a1.png)


1. The user runs the script in a PowerShell command line.

1. The script uses the named profile for the AWS CLI, which grants access to IAM Identity Center.

1. The script retrieves the SSO identity configurations from IAM Identity Center.

1. The script generates a CSV file in the same directory on the local workstation where the script is saved.

## Tools
<a name="export-a-report-of-aws-iam-identity-center-identities-and-their-assignments-by-using-powershell-tools"></a>

**AWS services**
+ [AWS Command Line Interface (AWS CLI)](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) is an open-source tool that helps you interact with AWS services through commands in your command-line shell.
+ [AWS IAM Identity Center](https://docs.aws.amazon.com/singlesignon/latest/userguide/what-is.html) helps you centrally manage single sign-on (SSO) access to all of your AWS accounts and cloud applications.
+ [AWS Tools for PowerShell](https://docs.aws.amazon.com/powershell/latest/userguide/pstools-welcome.html) are a set of PowerShell modules that help you script operations on your AWS resources from the PowerShell command line.

**Other tools**
+ [PowerShell](https://learn.microsoft.com/en-us/powershell/) is a Microsoft automation and configuration management program that runs on Windows, Linux, and macOS.

## Epics
<a name="export-a-report-of-aws-iam-identity-center-identities-and-their-assignments-by-using-powershell-epics"></a>

### Generate the report
<a name="generate-the-report"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Prepare the script. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/export-a-report-of-aws-iam-identity-center-identities-and-their-assignments-by-using-powershell.html) | Cloud administrator | 
| Run the script. | It is recommended that you run your custom script in the PowerShell shell with the following command.<pre>.\SSO-Report.ps1</pre>Alternatively, you can run the script from another shell by entering the following command.<pre>pwsh .\SSO-Report.ps1</pre>The script generates a CSV file in the same directory as the script file. | Cloud administrator | 
| Analyze report data. | The output CSV file has the headers **AccountName**, **PermissionSet**, **Principal**, and **Type**. Open this file in your preferred spreadsheet application. You can create a data table to filter and sort the output. | Cloud administrator | 
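
For example, the generated CSV file might contain rows such as the following (the account, permission set, and principal names are hypothetical):

```
"AccountName","PermissionSet","Principal","Type"
"Workloads-Prod","AdministratorAccess","CloudAdmins","GROUP"
"Workloads-Prod","ReadOnlyAccess","jane.doe","USER"
"Sandbox","PowerUserAccess","jane.doe","USER"
```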

## Troubleshooting
<a name="export-a-report-of-aws-iam-identity-center-identities-and-their-assignments-by-using-powershell-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| `The term ‘Get-<parameter>’ is not recognized as the name of a cmdlet, function, script file, or operable program.` error | AWS Tools for PowerShell or its modules are not installed. In the PowerShell shell, enter the following commands to install AWS Tools for PowerShell and the modules needed for this pattern: `AWS.Tools.Installer`, `Organizations`, `SSOAdmin`, and `IdentityStore`.<pre>Install-Module AWS.Tools.Installer<br />Install-AWSToolsModule -Name Organizations, SSOAdmin, IdentityStore</pre> | 
| `No credentials specified or obtained from persisted/shell defaults` error | In *Prepare the script* in the [Epics](#export-a-report-of-aws-iam-identity-center-identities-and-their-assignments-by-using-powershell-epics) section, confirm that you have correctly entered the `ProfileName` and `Region` variables. Make sure that the settings and credentials in the named profile have sufficient permissions to administer IAM Identity Center. | 
| `Authenticode Issuer …` error when installing the AWS.Tools modules | Add the `-SkipPublisherCheck` parameter to the end of the `Install-AWSToolsModule` command. | 
| `Get-ORGAccountList : Assembly AWSSDK.SSO could not be found or loaded.` error | This error can occur when named AWS CLI profiles are specified, AWS CLI is configured to authenticate users with IAM Identity Center, and AWS CLI is configured to automatically retrieve refreshed authentication tokens. To resolve this error, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/export-a-report-of-aws-iam-identity-center-identities-and-their-assignments-by-using-powershell.html) | 

## Related resources
<a name="export-a-report-of-aws-iam-identity-center-identities-and-their-assignments-by-using-powershell-resources"></a>
+ [Where are configuration settings stored?](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html#cli-configure-files-where) (AWS CLI documentation)
+ [Configuring the AWS CLI to use AWS IAM Identity Center](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-sso.html) (AWS CLI documentation)
+ [Using named profiles](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html#cli-configure-files-using-profiles) (AWS CLI documentation)

## Additional information
<a name="export-a-report-of-aws-iam-identity-center-identities-and-their-assignments-by-using-powershell-additional"></a>

In the following script, determine whether you need to update the values for the following parameters:
+ If you’re using a named profile in AWS CLI to access the account in which IAM Identity Center is configured, update the `$ProfileName` value. 
+ If IAM Identity Center is deployed in a different AWS Region than the default Region for your AWS CLI or AWS SDK configuration, update the `$Region` value to use the Region where IAM Identity Center is deployed.
+ If neither of these situations apply, then no script update is required.

```
param (
    # The name of the output CSV file
    [String] $OutputFile  = "SSO-Assignments.csv",
    # The AWS CLI named profile
    [String] $ProfileName = "",
    # The AWS Region in which IAM Identity Center is configured
    [String] $Region      = ""
)
$Start = Get-Date; $OrgParams = @{}
If ($Region){ $OrgParams.Region = $Region}
if ($ProfileName){$OrgParams.ProfileName = $ProfileName}
$SSOParams   = $OrgParams.Clone(); $IdsParams = $OrgParams.Clone()
$AccountList = Get-ORGAccountList @OrgParams | Select-Object Id, Name
$SSOinstance = Get-SSOADMNInstanceList @OrgParams
$SSOParams['InstanceArn']       = $SSOinstance.InstanceArn
$IdsParams['IdentityStoreId']   = $SSOinstance.IdentityStoreId
$PSsets       = @{}; $Principals   = @{}
$Assignments  = @(); $AccountCount = 1; Write-Host ""
foreach ($Account in $AccountList) {
    $Duration = New-Timespan -Start $Start -End (Get-Date) | ForEach-Object {[Timespan]::New($_.Days, $_.Hours, $_.Minutes, $_.Seconds)}
    Write-Host "`r$Duration - Account $AccountCount of $($AccountList.Count) (Assignments:$($Assignments.Count))        " -NoNewline
    $AccountCount++
    foreach ($PS in Get-SSOADMNPermissionSetsProvisionedToAccountList -AccountId $Account.Id @SSOParams) {
        if (-not $PSsets[$PS]) {$PSsets[$PS] = (Get-SSOADMNPermissionSet @SSOParams -PermissionSetArn $PS).Name}
        $AssignmentsResponse = Get-SSOADMNAccountAssignmentList @SSOParams -PermissionSetArn $PS -AccountId $Account.Id
        if ($AssignmentsResponse.NextToken) {$AccountAssignments = $AssignmentsResponse.AccountAssignments}
        else {$AccountAssignments = $AssignmentsResponse}
        While ($AssignmentsResponse.NextToken) {
            $AssignmentsResponse = Get-SSOADMNAccountAssignmentList @SSOParams -PermissionSetArn $PS -AccountId $Account.Id -NextToken $AssignmentsResponse.NextToken
            $AccountAssignments += $AssignmentsResponse.AccountAssignments}
        foreach ($Assignment in $AccountAssignments) {
            if (-not $Principals[$Assignment.PrincipalId]) {
                $AssignmentType = $Assignment.PrincipalType.Value
                $Expression     = "Get-IDS"+$AssignmentType+" @IdsParams -"+$AssignmentType+"Id "+$Assignment.PrincipalId
                $Principal      = Invoke-Expression $Expression
                if ($Assignment.PrincipalType.Value -eq "GROUP") { $Principals[$Assignment.PrincipalId] = $Principal.DisplayName } 
                else { $Principals[$Assignment.PrincipalId] = $Principal.UserName }
            }
            $Assignments += [PSCustomObject]@{
                AccountName     = $Account.Name
                PermissionSet   = $PSsets[$PS]
                Principal       = $Principals[$Assignment.PrincipalId]
                Type            = $Assignment.PrincipalType.Value}
        }
    }
}
$Duration = New-Timespan -Start $Start -End (Get-Date) | ForEach-Object {[Timespan]::New($_.Days, $_.Hours, $_.Minutes, $_.Seconds)}
Write-Host "`r$($AccountList.Count) accounts done in $Duration. Outputting result to $OutputFile"
$Assignments | Sort-Object AccountName | Export-CSV -Path $OutputFile -Force
```

# Restrict access based on IP address or geolocation by using AWS WAF
<a name="aws-waf-restrict-access-geolocation"></a>

*Louis Hourcade, Amazon Web Services*

## Summary
<a name="aws-waf-restrict-access-geolocation-summary"></a>

[AWS WAF](https://docs.aws.amazon.com/waf/latest/developerguide/waf-chapter.html) is a web application firewall that helps protect web applications and APIs against common web exploits and bots that can affect availability, compromise security, or consume excessive resources. [Web access control lists (web ACLs)](https://docs.aws.amazon.com/waf/latest/developerguide/web-acl.html) in AWS WAF give you control over how traffic reaches your applications. In a web ACL, you add rules or rule groups that are designed to permit legitimate traffic, control bot traffic, and block common attack patterns. For more information, see [How AWS WAF works](https://docs.aws.amazon.com/waf/latest/developerguide/how-aws-waf-works.html).

You can associate the following types of rules to your AWS WAF web ACLs:
+ [Managed rule groups](https://docs.aws.amazon.com/waf/latest/developerguide/waf-managed-rule-groups.html) – AWS Managed Rules teams and AWS Marketplace sellers offer preconfigured sets of rules. Some managed rule groups are designed to help protect specific types of web applications. Others offer broad protection against known threats or common vulnerabilities.
+ [Custom rules](https://docs.aws.amazon.com/waf/latest/developerguide/waf-rules.html) and [custom rule groups](https://docs.aws.amazon.com/waf/latest/developerguide/waf-user-created-rule-groups.html) – You can also create rules and rule groups that customize access to your web applications and APIs. For example, you can restrict traffic based on a specific list of IP addresses or on a list of countries.

By using this pattern and the associated code repository, you can use the [AWS Cloud Development Kit (AWS CDK)](https://docs.aws.amazon.com/cdk/v2/guide/home.html) to deploy AWS WAF web ACLs with custom rules. These rules restrict access to web application resources based on the end user's IP address or geolocation. You can also optionally attach several managed rule groups.

## Prerequisites and limitations
<a name="aws-waf-restrict-access-geolocation-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ [Permissions](https://docs.aws.amazon.com/waf/latest/developerguide/security-iam.html) to deploy AWS WAF resources
+ AWS CDK, [installed and configured](https://docs.aws.amazon.com/cdk/latest/guide/getting_started.html) in your account
+ Git, [installed](https://github.com/git-guides/install-git)

**Limitations**
+ You can use this pattern only in AWS Regions where AWS WAF is available. For Region availability, see [AWS services by Region](https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/).

## Tools
<a name="aws-waf-restrict-access-geolocation-tools"></a>

**AWS services**
+ [AWS Cloud Development Kit (AWS CDK)](https://docs.aws.amazon.com/cdk/v2/guide/home.html) is a software development framework that helps you define and provision AWS Cloud infrastructure in code.
+ [AWS WAF](https://docs.aws.amazon.com/waf/latest/developerguide/what-is-aws-waf.html) is a web application firewall that helps you monitor HTTP and HTTPS requests that are forwarded to your protected web application resources.

**Code repository**

The code for this pattern is available in the GitHub [IP and geolocation restriction with AWS WAF](https://github.com/aws-samples/ip-and-geolocation-restriction-with-waf-cdk) repository. The code deploys two AWS WAF web ACLs. The first is a regional web ACL that is intended for [Amazon API Gateway](https://docs.aws.amazon.com/apigateway/latest/developerguide/welcome.html) resources. The second is a global web ACL for [Amazon CloudFront](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Introduction.html) resources. Both web ACLs contain the following custom rules:
+ `IPMatch` blocks requests from non-allowed IP addresses.
+ `GeoMatch` blocks requests from non-allowed countries.
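
In the AWS WAF (wafv2) API, such custom rules are expressed as rule statements. The following fragment is an illustrative sketch of how a rule like `GeoMatch` can be modeled; the priority, country codes, and metric name are assumptions, and the repository's actual rule definitions may differ:

```
{
  "Name": "GeoMatch",
  "Priority": 0,
  "Action": { "Block": {} },
  "Statement": {
    "NotStatement": {
      "Statement": {
        "GeoMatchStatement": { "CountryCodes": ["US", "FR"] }
      }
    }
  },
  "VisibilityConfig": {
    "SampledRequestsEnabled": true,
    "CloudWatchMetricsEnabled": true,
    "MetricName": "GeoMatch"
  }
}
```

Because the rule blocks any request whose country is *not* in the allowed list, the `GeoMatchStatement` is wrapped in a `NotStatement`.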

During deployment, you can optionally attach all of the following managed rule groups to your web ACLs:
+ [Core rule set (CRS)](https://docs.aws.amazon.com/waf/latest/developerguide/aws-managed-rule-groups-baseline.html#aws-managed-rule-groups-baseline-crs) – This rule group contains rules that are generally applicable to web applications. It helps protect against exploitation of a wide range of vulnerabilities, including some of the high-risk and commonly occurring vulnerabilities described in OWASP publications, such as [OWASP Top 10](https://owasp.org/www-project-top-ten/).
+ [Admin protection](https://docs.aws.amazon.com/waf/latest/developerguide/aws-managed-rule-groups-baseline.html#aws-managed-rule-groups-baseline-admin) – This rule group contains rules that help you block external access to exposed administrative pages.
+ [Known bad inputs](https://docs.aws.amazon.com/waf/latest/developerguide/aws-managed-rule-groups-baseline.html#aws-managed-rule-groups-baseline-known-bad-inputs) – This rule group helps block request patterns that are known to be invalid and are associated with the exploitation or discovery of vulnerabilities.
+ [Amazon IP reputation list](https://docs.aws.amazon.com/waf/latest/developerguide/aws-managed-rule-groups-ip-rep.html#aws-managed-rule-groups-ip-rep-amazon) – This rule group contains rules that are based on Amazon internal threat intelligence. It helps you block IP addresses that are typically associated with bots or other threats.
+ [Linux operating system managed rule group](https://docs.aws.amazon.com/waf/latest/developerguide/aws-managed-rule-groups-use-case.html#aws-managed-rule-groups-use-case-linux-os) – This rule group helps block request patterns that are associated with the exploitation of Linux vulnerabilities, including Linux-specific Local File Inclusion (LFI) attacks.
+ [SQL database managed rule group](https://docs.aws.amazon.com/waf/latest/developerguide/aws-managed-rule-groups-use-case.html#aws-managed-rule-groups-use-case-sql-db) – This rule group helps block request patterns that are associated with the exploitation of SQL databases, such as SQL injection attacks.

## Epics
<a name="aws-waf-restrict-access-geolocation-epics"></a>

### Configure the AWS WAF web ACLs
<a name="configure-the-waf-web-acls"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clone the repository. | Enter the following command to clone the [IP and geolocation restriction with AWS WAF](https://github.com/aws-samples/ip-and-geolocation-restriction-with-waf-cdk) repository to your local workstation:<pre>git clone https://github.com/aws-samples/ip-and-geolocation-restriction-with-waf-cdk.git</pre> | Git | 
| Configure the rules. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/aws-waf-restrict-access-geolocation.html) | General AWS, Python | 

### Bootstrap and deploy the code
<a name="bootstrap-and-deploy-the-code"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Bootstrap your AWS environment. | If not already done, you need to [bootstrap](https://docs.aws.amazon.com/cdk/v2/guide/bootstrapping-env.html) your AWS environment before you can deploy the AWS CDK application.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/aws-waf-restrict-access-geolocation.html) | General AWS | 
| Deploy the AWS CDK application. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/aws-waf-restrict-access-geolocation.html) | General AWS | 

### Validate the deployment
<a name="validate-the-deployment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Confirm that the web ACLs successfully deployed. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/aws-waf-restrict-access-geolocation.html) | General AWS | 
| (Optional) Associate the web ACLs with your resources. | Associate the AWS WAF web ACLs with your AWS resources, such as an Application Load Balancer, API Gateway, or CloudFront distribution. For instructions, see [Associating or disassociating a web ACL with an AWS resource](https://docs.aws.amazon.com/waf/latest/developerguide/web-acl-associating-aws-resource.html). For an example, see [class CfnWebACLAssociation (construct)](https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_wafv2.CfnWebACLAssociation.html) in the AWS CDK documentation. | General AWS | 

### Clean up resources
<a name="clean-up-resources"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Delete the stacks. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/aws-waf-restrict-access-geolocation.html) | General AWS | 

## Related resources
<a name="aws-waf-restrict-access-geolocation-resources"></a>
+ [API Reference](https://docs.aws.amazon.com/cdk/api/v2/docs/aws-construct-library.html) (AWS CDK documentation)
+ [aws-cdk-lib.aws\_wafv2 module](https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_wafv2-readme.html) (AWS CDK documentation)
+ [Working with web ACLs](https://docs.aws.amazon.com/waf/latest/developerguide/web-acl-working-with.html) (AWS WAF documentation)
+ [Managing your own rule groups](https://docs.aws.amazon.com/waf/latest/developerguide/waf-user-created-rule-groups.html) (AWS WAF documentation)
+ [Rules](https://docs.aws.amazon.com/waf/latest/developerguide/waf-rules.html) (AWS WAF documentation)

# Scan Git repositories for sensitive information and security issues by using git-secrets
<a name="scan-git-repositories-for-sensitive-information-and-security-issues-by-using-git-secrets"></a>

*Saurabh Singh, Amazon Web Services*

## Summary
<a name="scan-git-repositories-for-sensitive-information-and-security-issues-by-using-git-secrets-summary"></a>

This pattern describes how to use the open-source [git-secrets](https://github.com/awslabs/git-secrets) tool from AWS Labs to scan Git source repositories and find code that might potentially include sensitive information, such as user passwords or AWS access keys, or that has any other security issues.

`git-secrets` scans commits, commit messages, and merges to prevent sensitive information, such as secrets, from being added to your Git repositories. For example, if a commit, commit message, or any commit in a merge history matches one of your configured prohibited regular expression patterns, the commit is rejected.
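
Conceptually, each configured pattern is a prohibited regular expression that content is matched against. The following standalone sketch approximates that check with plain `grep`, using the AWS access key ID pattern that `git secrets --register-aws` installs (the file name and its contents are fabricated examples):

```
# Create a file containing a fabricated AWS access key ID.
echo 'aws_access_key_id = AKIAIOSFODNN7EXAMPLE' > example.sh

# Reject the content if it matches the prohibited pattern,
# similar to what the git-secrets pre-commit hook does.
if grep -E '(A3T[A-Z0-9]|AKIA|AGPA|AIDA|AROA|AIPA|ANPA|ANVA|ASIA)[A-Z0-9]{16}' example.sh; then
    echo '[ERROR] Matched one or more prohibited patterns' >&2
fi
```

In practice, `git-secrets` performs this matching in Git hooks on commits, commit messages, and merges rather than on a single file.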

## Prerequisites and limitations
<a name="scan-git-repositories-for-sensitive-information-and-security-issues-by-using-git-secrets-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ A Git repository that requires a security scan
+ A Git client (version 2.37.1 or later), installed

## Architecture
<a name="scan-git-repositories-for-sensitive-information-and-security-issues-by-using-git-secrets-architecture"></a>

**Target architecture**
+ Git
+ `git-secrets`

![\[Using the git-secrets tool to scan Git source repositories for sensitive information.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/4a18e0c8-0935-4ee2-86bf-c1dfcfbc1bcb/images/e4813a76-83c2-4254-b5f4-aafe2b8f2127.png)


 

## Tools
<a name="scan-git-repositories-for-sensitive-information-and-security-issues-by-using-git-secrets-tools"></a>
+ [git-secrets](https://github.com/awslabs/git-secrets) is a tool that prevents you from committing sensitive information into Git repositories.
+ [Git](https://git-scm.com/) is an open-source distributed version control system.

## Best practices
<a name="scan-git-repositories-for-sensitive-information-and-security-issues-by-using-git-secrets-best-practices"></a>
+ Always scan a Git repository by including all revisions:

  ```
  git secrets --scan-history
  ```

## Epics
<a name="scan-git-repositories-for-sensitive-information-and-security-issues-by-using-git-secrets-epics"></a>

### Connect to an EC2 instance
<a name="connect-to-an-ec2-instance"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Connect to an EC2 instance by using SSH. | Connect to an Amazon Elastic Compute Cloud (Amazon EC2) instance by using SSH and a key pair file. You can skip this step if you are scanning a repository on your local machine. | General AWS | 

### Install Git
<a name="install-git"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Install Git. | On an Amazon Linux EC2 instance, install Git by using the following command:<pre>yum install git -y</pre>If you are using your local machine, install a Git client for your operating system. For more information, see the [Git website](https://git-scm.com/downloads/guis). | General AWS | 

### Clone the source repository and install git-secrets
<a name="clone-the-source-repository-and-install-git-secrets"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clone the Git source repository. | To clone the Git repository that you want to scan, run the `git clone` command from your home directory. | General AWS | 
| Clone git-secrets. | Clone the `git-secrets` Git repository.<pre>git clone https://github.com/awslabs/git-secrets.git</pre>Place `git-secrets` somewhere in your `PATH` so that Git picks it up when you run `git secrets`. | General AWS | 
| Install git-secrets. | **For Unix and variants (Linux/macOS):** You can use the `install` target of the `Makefile` (provided in the `git-secrets` repository) to install the tool. You can customize the installation path by using the `PREFIX` and `MANPREFIX` variables.<pre>make install</pre>**For Windows:** Run the PowerShell `install.ps1` script provided in the `git-secrets` repository. This script copies the installation files to an installation directory (`%USERPROFILE%/.git-secrets` by default) and adds the directory to the current user `PATH`.<pre>PS > ./install.ps1</pre>**For Homebrew (macOS users):** Run:<pre>brew install git-secrets</pre> | General AWS | 

### Scan git code repository
<a name="scan-git-code-repository"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Go to the source repository. | Switch to the directory for the Git repository that you want to scan:<pre>cd my-git-repository</pre> | General AWS | 
| Register the AWS rule set (Git hooks). | To configure `git-secrets` to scan your Git repository on each commit, run the command: <pre>git secrets --register-aws</pre> | General AWS | 
| Scan the repository. | Run the following command to start scanning your repository:<pre>git secrets --scan</pre> | General AWS | 
| Review the output. | The tool generates output if it finds a prohibited pattern match in your Git repository. For example:<pre>example.sh:4:AWS_SECRET_ACCESS_KEY = *********<br /><br />[ERROR] Matched one or more prohibited patterns<br /><br />Possible mitigations:<br />- Mark false positives as allowed using: git config --add secrets.allowed ...<br />- Mark false positives as allowed by adding regular expressions to .gitallowed at repository's root directory<br />- List your configured patterns: git config --get-all secrets.patterns<br />- List your configured allowed patterns: git config --get-all secrets.allowed<br />- List your configured allowed patterns in .gitallowed at repository's root directory<br />- Use --no-verify if this is a one-time false positive</pre> | General AWS | 

## Related resources
<a name="scan-git-repositories-for-sensitive-information-and-security-issues-by-using-git-secrets-resources"></a>
+ [git-secrets tool](https://github.com/awslabs/git-secrets)

# Secure file transfers by using Transfer Family, Amazon Cognito, and GuardDuty
<a name="secure-file-transfers"></a>

*Manoj Kumar, Amazon Web Services*

## Summary
<a name="secure-file-transfers-summary"></a>

This solution helps you securely transfer files through an SFTP server by using AWS Transfer Family. It includes automated malware scanning capabilities through [Malware Protection for S3](https://docs.aws.amazon.com/guardduty/latest/ug/gdu-malware-protection-s3.html), a feature of Amazon GuardDuty. It is designed for organizations that need to securely exchange files with external parties and validate that all incoming files are scanned for malware before being processed.

The infrastructure as code (IaC) templates provided with this pattern help you deploy the following:
+ A secure SFTP server with Amazon Cognito authentication through AWS Lambda
+ Amazon Simple Storage Service (Amazon S3) buckets for uploads and incoming files that have been scanned for malware
+ A virtual private cloud (VPC)-based architecture with public and private subnets across multiple Availability Zones
+ IP-based access control for both ingress and egress traffic, with configurable allow and deny lists
+ Automated malware scanning through GuardDuty
+ Intelligent file routing based on scan results through Amazon EventBridge and Lambda
+ Real-time notifications for security incidents through Amazon Simple Notification Service (Amazon SNS)
+ Encryption for Amazon S3 buckets and Lambda environment variables through AWS Key Management Service (AWS KMS)
+ Amazon Virtual Private Cloud (Amazon VPC) endpoints for access without internet exposure
+ Comprehensive logging through Amazon CloudWatch integration
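
For the file-routing step, an EventBridge rule matches the scan-result events that Malware Protection for S3 publishes. A rule's event pattern might look like the following sketch; treat the field values as illustrative, because the deployed IaC templates define the actual rule:

```
{
  "source": ["aws.guardduty"],
  "detail-type": ["GuardDuty Malware Protection Object Scan Result"],
  "detail": {
    "scanStatus": ["COMPLETED"],
    "scanResultDetails": {
      "scanResultStatus": ["THREATS_FOUND"]
    }
  }
}
```

A rule with a pattern like this can target the Lambda function that routes or quarantines the scanned object.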

## Prerequisites and limitations
<a name="secure-file-transfers-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ Permissions in AWS Identity and Access Management (IAM) to perform the actions described in this pattern, including deploying AWS CloudFormation templates that provision IAM roles
+ GuardDuty, [enabled](https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_settingup.html) in the target account
+ Malware Protection for S3, [enabled](https://docs.aws.amazon.com/guardduty/latest/ug/malware-protection-s3-get-started-independent.html) in the target account
+ Service quotas that allow you to create the following in the target account:
  + One VPC
  + One private subnet
  + One public subnet
  + Three elastic IP addresses
  + Sufficient Lambda concurrency limits
+ A valid email address for security-related notifications
+ (Optional) A list of IP addresses or CIDR ranges that you want to allow or deny
+ (Optional) AWS Command Line Interface (AWS CLI), [installed](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html) and [configured](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html)

**Limitations**
+ Malware Protection for S3 is subject to quotas, such as maximum file sizes. For more information, see [Quotas in Malware Protection for S3](https://docs.aws.amazon.com/guardduty/latest/ug/malware-protection-s3-quotas-guardduty.html) and [Supportability of Amazon S3 features](https://docs.aws.amazon.com/guardduty/latest/ug/supported-s3-features-malware-protection-s3.html) in the GuardDuty documentation.
+ This solution uses Amazon Cognito username and password authentication only. Certificate-based or other authentication methods are not supported in this template. By default, this solution does not configure multi-factor authentication (MFA).
+ The solution implements IP-based access control through security groups only.
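The IP-based access control noted in the limitations is enforced through security group ingress rules. As a rough sketch (the parameter shape follows the Amazon EC2 `AuthorizeSecurityGroupIngress` API; the group ID and CIDR range are placeholders), a rule that restricts SFTP access to one allowed range might be assembled like this:

```python
# Sketch: build an EC2 authorize-security-group-ingress request that limits
# SFTP (TCP port 22) to a single allowed CIDR range. The group ID and CIDR
# are placeholders; in practice, pass the result to the EC2 API.
def sftp_ingress_rule(group_id: str, allowed_cidr: str) -> dict:
    return {
        "GroupId": group_id,
        "IpPermissions": [
            {
                "IpProtocol": "tcp",
                "FromPort": 22,
                "ToPort": 22,
                "IpRanges": [
                    {"CidrIp": allowed_cidr, "Description": "Allowed SFTP clients"}
                ],
            }
        ],
    }

rule = sftp_ingress_rule("sg-0123456789abcdef0", "203.0.113.0/24")
```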

## Architecture
<a name="secure-file-transfers-architecture"></a>

The following architecture diagram shows the resources that are deployed in this pattern. This solution uses Amazon Cognito for user authentication and authorization. An AWS Transfer Family SFTP server is used for file uploads. Files are stored in Amazon S3 buckets, and Amazon GuardDuty scans the files for malware. Amazon SNS sends an email notification if malware is detected.

![\[Using GuardDuty and Cognito to securely transfer files to Amazon S3 buckets.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/39d98ebe-2844-4ccd-a497-9b796b7da5e8/images/05567010-e189-40e7-acab-74e77c4f8525.png)


The diagram shows the following workflow:

1. A user connects to the SFTP server endpoint in AWS Transfer Family. This initiates the authentication process with the Amazon Cognito user pool.

1. A Lambda function initiates the authentication and authorization process and validates the user’s credentials with Amazon Cognito.

1. The Lambda function returns the `UploadBucket` Amazon S3 bucket as the home directory. The user assumes the IAM role for the Transfer Family server, and the Lambda function notifies the user that they have been successfully authenticated.

1. The user uploads a file to the Transfer Family SFTP server. The file is stored in the `UploadBucket` Amazon S3 bucket.

1. GuardDuty scans the file for malware. The potential scan results are `NO_THREATS_FOUND`, `THREATS_FOUND`, `UNSUPPORTED`, `ACCESS_DENIED`, and `FAILED`. For sample results, see [S3 object scan result](https://docs.aws.amazon.com/guardduty/latest/ug/monitor-with-eventbridge-s3-malware-protection.html#s3-object-scan-status-malware-protection-s3-ev) in the GuardDuty documentation.

1. An EventBridge rule detects the scan result event.

1. EventBridge initiates the file-routing Lambda function.

1. The Lambda function processes the event and routes the files based on the scan results as follows:
   + Files that have a `NO_THREATS_FOUND` scan result are sent to the `CleanBucket` Amazon S3 bucket.
   + Files that have a `THREATS_FOUND` scan result are sent to the `MalwareBucket` Amazon S3 bucket.
   + Files that have an `UNSUPPORTED`, `ACCESS_DENIED`, or `FAILED` scan result are sent to the `ErrorBucket` Amazon S3 bucket.

   All files are encrypted with an AWS KMS key.

1. If a file was sent to the `MalwareBucket` Amazon S3 bucket, the Lambda function publishes to an Amazon SNS topic, which sends an email notification to an email address that you configure.
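The routing decision in the preceding workflow can be sketched as a simple mapping from scan result to destination bucket. This is an illustrative sketch, not the repository's actual Lambda code, and the bucket names are placeholders:

```python
# Sketch of the file-routing decision: map a GuardDuty Malware Protection
# for S3 scan result to a destination bucket. Bucket names are placeholders.
ROUTES = {
    "NO_THREATS_FOUND": "CleanBucket",
    "THREATS_FOUND": "MalwareBucket",
    "UNSUPPORTED": "ErrorBucket",
    "ACCESS_DENIED": "ErrorBucket",
    "FAILED": "ErrorBucket",
}

def route_file(scan_result: str) -> tuple[str, bool]:
    """Return (destination bucket, whether to notify through Amazon SNS)."""
    bucket = ROUTES.get(scan_result, "ErrorBucket")  # treat unknown results as errors
    notify = scan_result == "THREATS_FOUND"
    return bucket, notify
```

In the deployed solution, the Lambda function would copy the object to the chosen bucket (encrypted with the AWS KMS key) and publish to the Amazon SNS topic when the result indicates malware.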

## Tools
<a name="secure-file-transfers-tools"></a>

**AWS services**
+ [Amazon CloudWatch](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/WhatIsCloudWatch.html) helps you monitor the metrics of your AWS resources and the applications you run on AWS in real time.
+ [Amazon Cognito](https://docs.aws.amazon.com/cognito/latest/developerguide/what-is-amazon-cognito.html) provides authentication, authorization, and user management for web and mobile apps.
+ [Amazon EventBridge](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-what-is.html) is a serverless event bus service that helps you connect your applications with real-time data from a variety of sources, such as AWS Lambda functions, HTTP invocation endpoints using API destinations, or event buses in other AWS accounts.
+ [Amazon GuardDuty](https://docs.aws.amazon.com/guardduty/latest/ug/what-is-guardduty.html) is a continuous security monitoring service that analyzes and processes logs to identify unexpected and potentially unauthorized activity in your AWS environment.
+ [AWS Key Management Service (AWS KMS)](https://docs.aws.amazon.com/kms/latest/developerguide/overview.html) helps you create and control cryptographic keys to help protect your data.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a compute service that helps you run code without needing to provision or manage servers. It runs your code only when needed and scales automatically, so you pay only for the compute time that you use.
+ [Amazon Simple Notification Service (Amazon SNS)](https://docs.aws.amazon.com/sns/latest/dg/welcome.html) helps you coordinate and manage the exchange of messages between publishers and clients, including web servers and email addresses.
+ [Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) is a cloud-based object storage service that helps you store, protect, and retrieve any amount of data.
+ [AWS Transfer Family](https://docs.aws.amazon.com/transfer/latest/userguide/what-is-aws-transfer-family.html) helps you transfer files into and out of AWS storage services over the SFTP, FTPS, or FTP protocols.
+ [Amazon Virtual Private Cloud (Amazon VPC)](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html) helps you launch AWS resources into a virtual network that you’ve defined. This virtual network resembles a traditional network that you’d operate in your own data center, with the benefits of using the scalable infrastructure of AWS.

**Code repository**

The code for this pattern is available in the GitHub [AWS Transfer Family and GuardDuty Malware Scanning Solution](https://github.com/aws-samples/sample-secure-transfer-family-code) repository.

## Best practices
<a name="secure-file-transfers-best-practices"></a>

The CloudFormation template provided is designed to incorporate many AWS best practices, such as least-privilege permissions for IAM roles and policies, encryption at rest and in transit, and automatic key rotation. For production environments, consider implementing the following additional recommendations:
+ Enable [MFA](https://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-settings-mfa.html) for Amazon Cognito users
+ Implement [AWS Shield](https://docs.aws.amazon.com/waf/latest/developerguide/shield-chapter.html) for distributed denial of service (DDoS) protection
+ Configure [AWS Config](https://docs.aws.amazon.com/config/latest/developerguide/WhatIsConfig.html) for continuous compliance monitoring
+ Implement [AWS CloudTrail](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-user-guide.html) for comprehensive API logging
+ Set up [Amazon GuardDuty](https://docs.aws.amazon.com/guardduty/latest/ug/what-is-guardduty.html) for threat detection beyond malware scanning
+ Implement [AWS Security Hub CSPM](https://docs.aws.amazon.com/securityhub/latest/userguide/what-is-securityhub-v2.html) for centralized security management
+ Use [AWS Secrets Manager](https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html) for credential management
+ Implement network traffic monitoring with [Traffic Mirroring](https://docs.aws.amazon.com/vpc/latest/mirroring/what-is-traffic-mirroring.html)
+ Configure [Amazon Macie](https://docs.aws.amazon.com/macie/latest/user/what-is-macie.html) for sensitive data discovery and protection in Amazon S3
+ Implement regular security assessments and penetration testing
+ Establish a formal incident response plan
+ Implement automated patching for all components
+ Conduct regular security training for administrators
+ Set up [AWS Organizations](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_introduction.html) for multi-account security management

## Epics
<a name="secure-file-transfers-epics"></a>

### Deploy the resources
<a name="deploy-the-resources"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clone the repository. | Enter the following command to clone the [AWS Transfer Family and GuardDuty malware scanning solution](https://github.com/aws-samples/sample-secure-transfer-family-code) repository to your local workstation:<pre>git clone https://github.com/aws-samples/sample-secure-transfer-family-code.git</pre> | App developer, DevOps engineer | 
| Create the CloudFormation stack. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/secure-file-transfers.html) | Cloud administrator, DevOps engineer | 

### Configure the resources
<a name="configure-the-resources"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Turn on malware protection. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/secure-file-transfers.html) | Cloud administrator, AWS administrator | 
| Add users to the user pool. | Add one or more users to the Amazon Cognito user pool. For instructions, see [Managing users in your user pool](https://docs.aws.amazon.com/cognito/latest/developerguide/managing-users.html) in the Amazon Cognito documentation. | Cloud administrator, AWS administrator | 

### Test the SFTP server
<a name="test-the-sftp-server"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Connect to the SFTP server endpoint. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/secure-file-transfers.html) | App developer, Cloud administrator, Cloud architect, DevOps engineer | 

## Troubleshooting
<a name="secure-file-transfers-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| User authentication fails | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/secure-file-transfers.html) For a list of AWS CLI commands that can help you perform these troubleshooting steps, see *Useful commands for troubleshooting* in the [Additional information](#secure-file-transfers-additional) section. | 
| SFTP authentication fails | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/secure-file-transfers.html) For a list of AWS CLI commands that can help you perform these troubleshooting steps, see *Useful commands for troubleshooting* in the [Additional information](#secure-file-transfers-additional) section. | 
| File upload access denied | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/secure-file-transfers.html) For a list of AWS CLI commands that can help you perform these troubleshooting steps, see *Useful commands for troubleshooting* in the [Additional information](#secure-file-transfers-additional) section. | 
| No malware scanning | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/secure-file-transfers.html) For a list of AWS CLI commands that can help you perform these troubleshooting steps, see *Useful commands for troubleshooting* in the [Additional information](#secure-file-transfers-additional) section. | 
| Lambda function errors | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/secure-file-transfers.html) For a list of AWS CLI commands that can help you perform these troubleshooting steps, see *Useful commands for troubleshooting* in the [Additional information](#secure-file-transfers-additional) section. | 

## Related resources
<a name="secure-file-transfers-resources"></a>
+ [Transfer Family web apps](https://docs.aws.amazon.com/transfer/latest/userguide/web-app.html) (Transfer Family documentation)

## Additional information
<a name="secure-file-transfers-additional"></a>

**Useful commands for troubleshooting**

Check the status of a CloudFormation stack:

```
aws cloudformation describe-stacks \
  --stack-name <STACK_NAME>
```

List all users in an Amazon Cognito user pool:

```
aws cognito-idp list-users \
  --user-pool-id <USER_POOL_ID>
```

View logs for Lambda functions:

```
aws logs describe-log-groups \
  --log-group-name-prefix /aws/lambda/
```

Check the status of GuardDuty:

```
aws guardduty list-detectors
```

Check security group rules:

```
aws ec2 describe-security-groups \
  --group-ids <SECURITY_GROUP_ID> \
  --output table
```

Check the status of the AWS Transfer Family server:

```
aws transfer describe-server \
  --server-id <SERVER_ID>
```

List all files in an Amazon S3 bucket:

```
aws s3 ls s3://<BUCKET_NAME>/ \
  --recursive
```

Check the status of an EventBridge rule:

```
aws events describe-rule \
  --name <RULE_NAME>
```

# Secure sensitive data in CloudWatch Logs by using Amazon Macie
<a name="secure-cloudwatch-logs-using-macie"></a>

*Anisha Salunkhe, Omar Franco, and David Guardiola, Amazon Web Services*

## Summary
<a name="secure-cloudwatch-logs-using-macie-summary"></a>

This pattern shows you how to use Amazon Macie to automatically detect sensitive data in an Amazon CloudWatch Logs log group by implementing a comprehensive security monitoring workflow. The solution uses Amazon Data Firehose to stream CloudWatch Logs entries to Amazon Simple Storage Service (Amazon S3). Macie periodically scans this bucket for personally identifiable information (PII), financial data, and other sensitive content. The infrastructure is deployed through an AWS CloudFormation template that provisions all necessary AWS services and configurations.

CloudWatch Logs often contains application data that can inadvertently include sensitive user information. This can create compliance and security risks. Traditional log monitoring approaches lack automated sensitive data detection capabilities. This can make it difficult to identify and respond to potential data exposures in real time.

This pattern helps security teams and compliance officers maintain data confidentiality by providing automated detection and alerting for sensitive data in logging systems. This solution enables proactive incident response through Amazon Simple Notification Service (Amazon SNS) notifications, and it automatically isolates sensitive data to a secure Amazon S3 bucket. You can customize the detection patterns and integrate the workflow with your existing security operations processes.

## Prerequisites and limitations
<a name="secure-cloudwatch-logs-using-macie-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ Permissions to create a CloudFormation stack
+ A CloudWatch Logs log group that you want to monitor
+ An active email address to receive notifications from Amazon SNS
+ Access to [AWS CloudShell](https://docs.aws.amazon.com/cloudshell/latest/userguide/welcome.html)
+ (Optional) Access to the AWS Command Line Interface (AWS CLI), [installed](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) and [configured](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html)

**Limitations**
+ Macie is subject to service quotas. For more information, see [Quotas for Macie](https://docs.aws.amazon.com/macie/latest/user/macie-quotas.html) in the Macie documentation.

## Architecture
<a name="secure-cloudwatch-logs-using-macie-architecture"></a>

**Target architecture**

The following diagram shows the workflow for using Macie to examine CloudWatch Logs log entries for sensitive data.

 

![\[alt text not found\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/c9979070-09ab-4331-b969-5eff26fb2baa/images/d02f34ce-a7d1-4f96-a430-22975224eb9d.png)


The workflow shows the following steps:

1. The CloudWatch Logs log group generates the logs, which are subject to the subscription filter.

1. The subscription filter forwards the logs to Amazon Data Firehose.

1. The logs are encrypted with an AWS Key Management Service (AWS KMS) key when they pass through the Amazon Data Firehose delivery stream.

1. The delivery stream delivers the logs to the exported logs bucket in Amazon S3.

1. At 4 AM each day, Amazon EventBridge initiates an AWS Lambda function that starts a Macie scan for sensitive data in the exported logs bucket.

1. If Macie identifies sensitive data in the bucket, a Lambda function removes the log from the exported logs bucket and encrypts it with an AWS KMS key.

1. The Lambda function isolates the logs that contain sensitive data in the data isolation bucket.

1. The identification of sensitive data initiates an Amazon SNS topic.

1. Amazon SNS sends an email notification to an email address that you configure with information about the logs that contain sensitive data.
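As a rough illustration of the scheduled scan in this workflow, the Lambda function could assemble a one-time Macie classification job for the exported logs bucket along these lines. The parameter shape follows Macie's `CreateClassificationJob` API; the account ID and bucket name are placeholders, and the repository's actual code may differ:

```python
import uuid

# Sketch: build the request parameters for a one-time Macie classification
# job that scans the exported logs bucket. Account ID and bucket name are
# placeholders; in practice, pass the result to the Macie CreateClassificationJob API.
def build_macie_job(account_id: str, bucket_name: str) -> dict:
    return {
        "jobType": "ONE_TIME",
        "name": f"cloudwatch-logs-scan-{uuid.uuid4().hex[:8]}",
        "clientToken": uuid.uuid4().hex,  # idempotency token
        "s3JobDefinition": {
            "bucketDefinitions": [
                {"accountId": account_id, "buckets": [bucket_name]}
            ]
        },
    }

job = build_macie_job("111122223333", "exported-logs-bucket")
```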

**Deployed resources**

The CloudFormation template deploys the following resources in your target AWS account and AWS Region:
+ Two Amazon S3 [buckets](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html#BasicsBucket):
  + An exported logs bucket for storing the CloudWatch Logs data
  + A data isolation bucket to store the sensitive information
+ An Amazon EventBridge [rule](https://docs.aws.amazon.com/macie/latest/user/findings-monitor-events-eventbridge.html) that responds to Macie findings
+ AWS Lambda [functions](https://docs.aws.amazon.com/lambda/latest/dg/concepts-basics.html#gettingstarted-concepts-function) that initiate events and export logs to Amazon S3 buckets
+ An Amazon SNS [topic](https://docs.aws.amazon.com/sns/latest/dg/sns-create-topic.html) and [subscription](https://docs.aws.amazon.com/sns/latest/dg/sns-create-subscribe-endpoint-to-topic.html)
+ An Amazon Data Firehose [stream](https://docs.aws.amazon.com/firehose/latest/dev/what-is-this-service.html#key-concepts)
+ A Macie [session](https://docs.aws.amazon.com/macie/latest/user/macie-terms.html#macie-terms-session)
+ A Macie [custom data identifier](https://docs.aws.amazon.com/macie/latest/user/macie-terms.html#macie-terms-cdi)
+ A CloudWatch Logs [subscription filter](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/SubscriptionFilters.html)
+ AWS KMS [keys](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html) to encrypt the logs stored in the buckets
+ The necessary AWS Identity and Access Management (IAM) [roles](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html) and [policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html) for the solution
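A custom data identifier is essentially a regular expression that Macie evaluates against object contents. As a hypothetical example (not necessarily the identifier shipped in the repository), a pattern that flags US Social Security number–like strings could look like the following, shown in Python only to demonstrate matching behavior:

```python
import re

# Hypothetical custom data identifier pattern: US SSN-like strings
# (three digits, two digits, four digits, separated by hyphens).
# Macie accepts the raw regex; Python is used here only for illustration.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def contains_ssn(text: str) -> bool:
    """Return True if the text contains an SSN-like string."""
    return SSN_PATTERN.search(text) is not None
```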

## Tools
<a name="secure-cloudwatch-logs-using-macie-tools"></a>

**AWS services**
+ [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) helps you set up AWS resources, provision them quickly and consistently, and manage them throughout their lifecycle across AWS accounts and AWS Regions.
+ [Amazon CloudWatch Logs](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html) helps you centralize the logs from all your systems, applications, and AWS services so you can monitor them and archive them securely.
+ [Amazon Data Firehose](https://docs.aws.amazon.com/firehose/latest/dev/what-is-this-service.html) helps you deliver real-time streaming data to other AWS services, custom HTTP endpoints, and HTTP endpoints owned by supported third-party service providers.
+ [Amazon EventBridge](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-what-is.html) is a serverless event bus service that helps you connect your applications with real-time data from a variety of sources, such as AWS Lambda functions, HTTP invocation endpoints using API destinations, or event buses in other AWS accounts.
+ [AWS Key Management Service (AWS KMS)](https://docs.aws.amazon.com/kms/latest/developerguide/overview.html) helps you create and control cryptographic keys to help protect your data.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a compute service that helps you run code without needing to provision or manage servers. It runs your code only when needed and scales automatically, so you pay only for the compute time that you use.
+ [Amazon Macie](https://docs.aws.amazon.com/macie/latest/user/what-is-macie.html) helps you discover sensitive data, provides visibility into data security risks, and enables automated protection against those risks.
+ [Amazon Simple Notification Service (Amazon SNS)](https://docs.aws.amazon.com/sns/latest/dg/welcome.html) helps you coordinate and manage the exchange of messages between publishers and clients, including web servers and email addresses.
+ [Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) is a cloud-based object storage service that helps you store, protect, and retrieve any amount of data.

**Code repository**

The code for this pattern is available in the GitHub [sample-macie-for-securing-cloudwatch-logs](https://github.com/aws-samples/sample-macie-for-securing-cloudwatch-logs) repository.

## Best practices
<a name="secure-cloudwatch-logs-using-macie-best-practices"></a>

Follow the [CloudFormation best practices](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/best-practices.html) in the CloudFormation documentation.

## Epics
<a name="secure-cloudwatch-logs-using-macie-epics"></a>

### Deploy the solution
<a name="deploy-the-solution"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clone the code repository. | Enter the following command to clone the repository to your local workstation:<pre>git clone https://github.com/aws-samples/sample-macie-for-securing-cloudwatch-logs</pre> | App developer | 
| (Optional) Edit the CloudFormation template. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/secure-cloudwatch-logs-using-macie.html) | App developer | 
| Option 1 – Deploy using script with command-line parameters. | Enter the following command to deploy the solution by using command line parameters, where the value for `enable-macie` is `true` only if Amazon Macie is not already enabled:<pre>./scripts/test-macie-solution.sh --deploy-stack \<br />  --stack-name <stack name> \<br />  --email <email address> \<br />  --enable-macie <true or false> \<br />  --region <region> \<br />  --resource-name <prefix for all resources> \<br />  --bucket-name <bucket name></pre> | General AWS | 
| Option 2 – Deploy using script with environment variables. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/secure-cloudwatch-logs-using-macie.html) | General AWS | 
| Option 3 – Deploy using the AWS CLI. | Enter the following command to deploy the solution by using the AWS CLI, where the value for `EnableMacie` is `true` only if Amazon Macie is not already enabled:<pre>aws cloudformation create-stack \<br />  --region us-east-1 \<br />  --stack-name macie-for-securing-cloudwatch-logs \<br />  --template-body file://app/main.yml \<br />  --capabilities CAPABILITY_IAM \<br />  --parameters \<br />    ParameterKey=ResourceName,ParameterValue=<prefix for all resources> \<br />    ParameterKey=BucketName,ParameterValue=<bucket name> \<br />    ParameterKey=LogGroupName,ParameterValue=<path for log group> \<br />    ParameterKey=SNSTopicEndpointEmail,ParameterValue=<email address> \<br />    ParameterKey=EnableMacie,ParameterValue=<true or false></pre> | General AWS | 
| Option 4 – Deploy through the AWS Management Console. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/secure-cloudwatch-logs-using-macie.html) | General AWS | 
| Monitor the deployment status and confirm deployment. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/secure-cloudwatch-logs-using-macie.html) | General AWS | 
| Confirm the Amazon SNS subscription. | Follow the instructions in [Confirm your Amazon SNS subscription](https://docs.aws.amazon.com/sns/latest/dg/SendMessageToHttp.confirm.html) in the Amazon SNS documentation to confirm your Amazon SNS subscription. | App developer | 

### Test the solution
<a name="test-the-solution"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Option 1 – Test with automated reporting. | If you used the default stack name, enter the following command to test the solution:<pre>./scripts/test-macie-solution.sh \<br />   --full-test</pre>If you used a custom stack name, enter the following command to test the solution:<pre>./scripts/test-macie-solution.sh \<br />   --full-test \<br />   --stack-name <stack name></pre>If you used a custom stack name and custom parameters, enter the following command to test the solution:<pre>./scripts/test-macie-solution.sh --full-test \<br />  --stack-name <stack name> \<br />  --region <region> \<br />  --log-group <log group path></pre> | General AWS | 
| Option 2 – Test with targeted validation. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/secure-cloudwatch-logs-using-macie.html) | General AWS | 

### Clean up
<a name="clean-up"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Option 1 – Perform automated cleanup. | If you used the default stack name, enter the following command to delete the stack:<pre>./scripts/cleanup-macie-solution.sh \<br />  --full-cleanup</pre>If you used a custom stack name, enter the following command to delete the stack:<pre>./scripts/cleanup-macie-solution.sh \<br />  --full-cleanup \<br />  --stack-name <stack name></pre>If you used a custom stack name and custom parameters, enter the following command to delete the stack:<pre>./scripts/cleanup-macie-solution.sh \<br />  --full-cleanup \<br />  --stack-name <stack name> \<br />  --region <region> \<br />  --disable-macie <true or false></pre> | General AWS | 
| Option 2 – Perform step-by-step cleanup. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/secure-cloudwatch-logs-using-macie.html) | General AWS | 
| Verify cleanup. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/secure-cloudwatch-logs-using-macie.html) | General AWS | 

## Troubleshooting
<a name="secure-cloudwatch-logs-using-macie-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| CloudFormation stack status shows **CREATE\_FAILED**. | The CloudFormation template is configured to publish logs to CloudWatch Logs, so you can view the deployment logs in the AWS Management Console. For more information, see [View CloudFormation logs in the console](https://aws.amazon.com/blogs/devops/view-cloudformation-logs-in-the-console/) (AWS blog post). | 
| CloudFormation `delete-stack` command fails. | Some resources must be empty before they can be deleted. For example, you must delete all objects in an Amazon S3 bucket or remove all instances in an Amazon EC2 security group before you can delete the bucket or security group. For more information, see [Delete stack fails](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/troubleshooting.html#troubleshooting-errors-delete-stack-fails) in the CloudFormation documentation. | 
| Error when parsing a parameter. | When you pass in a value by using the AWS CLI or the CloudFormation console, enclose the value in quotation marks. | 

## Related resources
<a name="secure-cloudwatch-logs-using-macie-resources"></a>
+ [Architecture best practices for storage](https://aws.amazon.com/architecture/storage/?docs3_bp1&cards-all.sort-by=item.additionalFields.sortDate&cards-all.sort-order=desc&awsf.content-type=*all&awsf.methodology=*all) (AWS website)
+ [Filter pattern syntax for metric filters, subscription filters, filter log events, and Live Tail](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/FilterAndPatternSyntax.html) (CloudWatch Logs documentation)
+ [Designing and implementing logging and monitoring with Amazon CloudWatch](https://docs.aws.amazon.com/prescriptive-guidance/latest/implementing-logging-monitoring-cloudwatch/welcome.html) (AWS Prescriptive Guidance)
+ [Troubleshooting CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/troubleshooting.html) (CloudFormation documentation)

# Send alerts from AWS Network Firewall to a Slack channel
<a name="send-alerts-from-aws-network-firewall-to-a-slack-channel"></a>

*Venki Srivatsav and Aromal Raj Jayarajan, Amazon Web Services*

## Summary
<a name="send-alerts-from-aws-network-firewall-to-a-slack-channel-summary"></a>

This pattern describes how to deploy a firewall by using Amazon Web Services (AWS) Network Firewall with the distributed deployment model and how to propagate the alerts generated by AWS Network Firewall to a configurable Slack channel. 

Compliance standards such as Payment Card Industry Data Security Standard (PCI DSS) require that you install and maintain a firewall to protect customer data. In the AWS Cloud, a virtual private cloud (VPC) is considered the same as a physical network in the context of these compliance requirements. You can use Network Firewall to monitor network traffic between VPCs and to protect your workloads that run in VPCs governed by a compliance standard. Network Firewall blocks access or generates alerts when it detects unauthorized access from other VPCs in the same account. However, Network Firewall supports a limited number of destinations for delivering the alerts. These destinations include Amazon Simple Storage Service (Amazon S3) buckets, Amazon CloudWatch log groups, and Amazon Data Firehose delivery streams. Any further action on these notifications requires offline analysis by using either Amazon Athena or Amazon Kinesis. 

This pattern provides a method for propagating alerts that are generated by Network Firewall to a configurable Slack channel for further action in near real time. You can also extend the functionality to other alerting mechanisms such as PagerDuty, Jira, and email. (Those customizations are outside the scope of this pattern.) 

## Prerequisites and limitations
<a name="send-alerts-from-aws-network-firewall-to-a-slack-channel-prereqs"></a>

**Prerequisites**
+ A Slack channel (see [Getting started](https://slack.com/help/articles/206845317-Create-a-Slack-workspace) in the Slack help center)
+ Required privileges to send a message to the channel
+ The Slack endpoint URL with an API token ([select your app](https://api.slack.com/apps) and choose an incoming webhook to see its URL; for more information, see [Creating an Incoming Webhook](https://api.slack.com/messaging/webhooks#create_a_webhook) in the Slack API documentation) 
+ An Amazon Elastic Compute Cloud (Amazon EC2) test instance in the workload subnets
+ Test rules in Network Firewall
+ Actual or simulated traffic to trigger the test rules
+ An S3 bucket to hold the source files to be deployed

**Limitations**
+ Currently, this solution supports only a single Classless Inter-Domain Routing (CIDR) range each for the source and destination IP filters.

## Architecture
<a name="send-alerts-from-aws-network-firewall-to-a-slack-channel-architecture"></a>

**Target technology stack**
+ One VPC
+ Four subnets (two for the firewall and two for workloads) 
+ Internet gateway
+ Four route tables with rules 
+ S3 bucket used as an alert destination, configured with a bucket policy and event settings to run a Lambda function
+ Lambda function with an execution role, to send Slack notifications
+ AWS Secrets Manager secret for storing the Slack URL
+ Network firewall with alert configuration
+ Slack channel

All components except for the Slack channel are provisioned by the CloudFormation templates and the Lambda function that are provided with this pattern (see the [Code](#send-alerts-from-aws-network-firewall-to-a-slack-channel-tools) section).

**Target architecture**

This pattern sets up a distributed network firewall with Slack integration. The architecture consists of a VPC with two Availability Zones. The VPC includes two protected subnets and two firewall subnets with Network Firewall endpoints. All traffic going into and out of the protected subnets can be monitored by [creating firewall policies](https://docs.aws.amazon.com/waf/latest/developerguide/network-firewall-policies.html) and rules. The network firewall is configured to place all alerts in an S3 bucket. This S3 bucket is configured to invoke a Lambda function when it receives a `put` event. The Lambda function fetches the configured Slack URL from Secrets Manager and sends the notification message to the Slack workspace.

![\[Target architecture for a distributed network firewall with Slack integration.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/7207fd56-094e-4af4-9ecd-75b122b82275/images/b1320776-c010-49b9-96bf-15e97ebe09ba.png)


For more information about this architecture, see the AWS blog post [Deployment models for AWS Network Firewall](https://aws.amazon.com/blogs/networking-and-content-delivery/deployment-models-for-aws-network-firewall/).
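The final step of this flow, in which the Lambda function turns an S3 `put` event into a Slack message, can be sketched as follows. This is a minimal sketch, not the repository's `slack-lambda.py`: the alert fields (`src_ip`, `dest_ip`, `alert.signature`) follow the Suricata-style alert log format that Network Firewall writes, and in the deployed function the webhook URL is fetched from Secrets Manager rather than passed in as an argument.

```python
import json
import urllib.request

def build_slack_payload(alert: dict, channel: str, username: str) -> dict:
    """Build a Slack incoming-webhook payload from one Network Firewall
    alert record (Suricata-style JSON)."""
    event = alert.get("event", {})
    signature = event.get("alert", {}).get("signature", "unknown")
    text = (f"Network Firewall alert: {signature} "
            f"src={event.get('src_ip')} dst={event.get('dest_ip')}")
    return {"channel": channel, "username": username, "text": text}

def post_to_slack(webhook_url: str, payload: dict) -> int:
    """POST the payload to the Slack incoming webhook and return the HTTP
    status. In the deployed Lambda function, webhook_url comes from AWS
    Secrets Manager."""
    request = urllib.request.Request(
        webhook_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return response.status
```

For the authoritative implementation, see `slack-lambda.py` in the repository listed in the Code section.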

## Tools
<a name="send-alerts-from-aws-network-firewall-to-a-slack-channel-tools"></a>

**AWS services**
+ [AWS Network Firewall](https://docs.aws.amazon.com/network-firewall/latest/developerguide/what-is-aws-network-firewall.html) is a stateful, managed network firewall and intrusion detection and prevention service for VPCs in the AWS Cloud. You can use Network Firewall to filter traffic at the perimeter of your VPC and protect your workloads on AWS.
+ [AWS Secrets Manager](https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html) is a service for credential storage and retrieval. Using Secrets Manager, you can replace hardcoded credentials in your code, including passwords, with an API call to Secrets Manager to retrieve the secret programmatically. This pattern uses Secrets Manager to store the Slack URL.
+ [Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) is an object storage service. You can use Amazon S3 to store and retrieve any amount of data at any time, from anywhere on the web. This pattern uses Amazon S3 to store the CloudFormation templates and Python script for the Lambda function. It also uses an S3 bucket as the network firewall alert destination.
+ [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) helps you model and set up your AWS resources, provision them quickly and consistently, and manage them throughout their lifecycle. You can use a template to describe your resources and their dependencies, and launch and configure them together as a stack, instead of managing resources individually. This pattern uses AWS CloudFormation to automatically deploy a distributed architecture for Network Firewall.

**Code**

The code for this pattern is available on GitHub, in the [Network Firewall Slack Integration](https://github.com/aws-samples/aws-network-firewall-automation-examples/tree/main/NfwSlackIntegration/src) repository. In the `src` folder of the repository, you’ll find:
+ A set of CloudFormation files in YAML format. You use these templates to provision the components for this pattern.
+ A Python source file (`slack-lambda.py`) to create the Lambda function.
+ A .zip archive deployment package (`slack-lambda.py.zip`) to upload your Lambda function code.

To use these files, follow the instructions in the next section.

## Epics
<a name="send-alerts-from-aws-network-firewall-to-a-slack-channel-epics"></a>

### Set up the S3 bucket
<a name="set-up-the-s3-bucket"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an S3 bucket. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/send-alerts-from-aws-network-firewall-to-a-slack-channel.html)For more information, see [Creating a bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/create-bucket-overview.html) in the Amazon S3 documentation.  | App developer, App owner, Cloud administrator | 
| Upload the CloudFormation templates and Lambda code. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/send-alerts-from-aws-network-firewall-to-a-slack-channel.html) | App developer, App owner, Cloud administrator | 

### Deploy the CloudFormation template
<a name="deploy-the-cloudformation-template"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Launch the CloudFormation template. | Open the [AWS CloudFormation console](https://console.aws.amazon.com/cloudformation/) in the same AWS Region as your S3 bucket and deploy the template `base.yml`. This template creates the required AWS resources and Lambda functions for the alerts to be transmitted to the Slack channel. For more information about deploying CloudFormation templates, see [Creating a stack on the AWS CloudFormation console](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-console-create-stack.html) in the CloudFormation documentation. | App developer, App owner, Cloud administrator | 
| Complete the parameters in the template. | Specify the stack name and configure the parameter values. For a list of parameters, their descriptions, and default values, see *CloudFormation parameters* in the [Additional information](#send-alerts-from-aws-network-firewall-to-a-slack-channel-additional) section.  | App developer, App owner, Cloud administrator | 
| Create the stack. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/send-alerts-from-aws-network-firewall-to-a-slack-channel.html) | App developer, App owner, Cloud administrator | 

### Verify the solution
<a name="verify-the-solution"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Test the deployment. | Use the AWS CloudFormation console or the AWS Command Line Interface (AWS CLI) to verify that the resources listed in the [Target technology stack](#send-alerts-from-aws-network-firewall-to-a-slack-channel-architecture) section have been created. If the CloudFormation template fails to deploy, check the values you provided for the `pAvailabilityZone1` and `pAvailabilityZone2` parameters. These should be appropriate for the AWS Region you’re deploying the solution in. For a list of Availability Zones for each Region, see [Regions and Zones](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html#concepts-availability-zones) in the Amazon EC2 documentation. | App developer, App owner, Cloud administrator | 
| Test functionality. | 1. Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/). 2. Create an EC2 instance in one of the protected subnets. Choose an Amazon Linux 2 AMI (HVM) to use as an HTTP server. For instructions, see [Launch an instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html#ec2-launch-instance) in the Amazon EC2 documentation. (Amazon Linux 2 is nearing end of support. For more information, see the [Amazon Linux 2 FAQs](http://aws.amazon.com/amazon-linux-2/faqs/).) 3. Use the following user data to install a web server on the EC2 instance:<pre>#!/bin/bash<br />yum install httpd -y<br />systemctl start httpd<br />systemctl stop firewalld<br />cd /var/www/html<br />echo "Hello!! this is a NFW alert test page, 200 OK" > index.html</pre>4. Create the following network firewall rules. *Stateless rule:*<pre>Source: 0.0.0.0/0<br />Destination: 10.0.3.65/32 (private IP of the EC2 instance)<br />Action: Forward</pre>*Stateful rule:*<pre>Protocol: HTTP<br />Source IP/port: Any / Any<br />Destination IP/port: Any / Any</pre>5. Get the public IP address of the web server you created in steps 2 and 3. 6. Access the public IP in a browser. You should see the following message:<pre>Hello!! this is a NFW alert test page, 200 OK</pre>You will also get a notification in the Slack channel. The notification might be delayed, depending on the size of the message. For testing purposes, provide a CIDR filter that is neither too narrow nor too broad (for example, /32 would be too narrow, and /8 too broad). For more information, see *Filter behavior* in the [Additional information](#send-alerts-from-aws-network-firewall-to-a-slack-channel-additional) section. | App developer, App owner, Cloud administrator | 

## Related resources
<a name="send-alerts-from-aws-network-firewall-to-a-slack-channel-resources"></a>
+ [Deployment models for AWS Network Firewall](https://aws.amazon.com/blogs/networking-and-content-delivery/deployment-models-for-aws-network-firewall/) (AWS blog post)
+ [AWS Network Firewall policies](https://docs.aws.amazon.com/waf/latest/developerguide/network-firewall-policies.html) (AWS documentation)
+ [Network Firewall Slack Integration](https://github.com/aws-samples/aws-network-firewall-automation-examples/tree/main/NfwSlackIntegration/src) (GitHub repository)
+ [Create a Slack workspace](https://slack.com/help/articles/206845317-Create-a-Slack-workspace) (Slack help center)

## Additional information
<a name="send-alerts-from-aws-network-firewall-to-a-slack-channel-additional"></a>

**CloudFormation parameters**


| Parameter | Description | Default or sample value | 
| --- | --- | --- | 
| `pVpcName` | The name of the VPC to create. | Inspection | 
| `pVpcCidr` | The CIDR range for the VPC to create. | 10.0.0.0/16 | 
| `pVpcInstanceTenancy` | How EC2 instances are distributed across physical hardware. Options are `default` (shared tenancy) or `dedicated` (single tenancy). | default | 
| `pAvailabilityZone1` | The first Availability Zone for the infrastructure.  | us-east-2a  | 
| `pAvailabilityZone2` | The second Availability Zone for the infrastructure. | us-east-2b | 
| `pNetworkFirewallSubnet1Cidr` | The CIDR range for the first firewall subnet (minimum /28). | 10.0.1.0/24 | 
| `pNetworkFirewallSubnet2Cidr` | The CIDR range for the second firewall subnet (minimum /28). | 10.0.2.0/24 | 
| `pProtectedSubnet1Cidr` | The CIDR range for the first protected (workload) subnet. | 10.0.3.0/24 | 
| `pProtectedSubnet2Cidr` | The CIDR range for the second protected (workload) subnet. | 10.0.4.0/24 | 
| `pS3BucketName` | The name of the existing S3 bucket where you uploaded the Lambda source code. | us-w2-yourname-lambda-functions | 
| `pS3KeyPrefix` | The S3 key prefix (folder) under which you uploaded the Lambda source code. | aod-test | 
| `pAWSSecretName4Slack` | The name of the secret that holds the Slack URL. | SlackEnpoint-Cfn | 
| `pSlackChannelName` | The name of the Slack channel you created. | somename-notifications | 
| `pSlackUserName` | Slack user name. | Slack User | 
| `pSecretKey` | This can be any key. We recommend that you use the default. | webhookUrl | 
| `pWebHookUrl` | The value of the Slack URL. | https://hooks.slack.com/services/T???9T??/A031885JRM7/9D4Y?????? | 
| `pAlertS3Bucket` | The name of the S3 bucket to be used as the network firewall alert destination. This bucket will be created for you. | us-w2-yourname-security-aod-alerts | 
| `pSecretTagName` | The tag name for the secret. | AppName | 
| `pSecretTagValue` | The tag value for the specified tag name. | LambdaSlackIntegration | 
| `pdestCidr` | The filter for the destination CIDR range. For more information, see the next section, *Filter behavior*. | 10.0.0.0/16 | 
| `pdestCondition` | A flag to indicate whether to exclude or include the destination match. For more information, see the next section. Valid values are `include` and `exclude`. | include | 
| `psrcCidr` | The filter for the source CIDR range to alert. For more information, see the next section.  | 118.2.0.0/16 | 
| `psrcCondition` | The flag to exclude or include the source match. For more information, see the next section. | include | 

**Filter behavior**

If you haven’t configured any filters in the Lambda function, all generated alerts are sent to your Slack channel. The source and destination IPs of each alert are matched against the CIDR ranges that you configured when you deployed the CloudFormation template, and the configured condition (`include` or `exclude`) is applied to each match. An alert is sent to Slack when at least one of the source or destination either matches its configured CIDR range with the condition `include` or has no CIDR range configured. The following tables provide examples of CIDR values, conditions, and results.
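One reading of this filter behavior, consistent with the example tables that follow, can be sketched in Python. This is an inference from the documented examples, not the repository's actual filter code; in particular, the behavior of an `exclude` side whose IP falls outside its range is not specified by the pattern, so treat that edge case as an assumption.

```python
import ipaddress

def should_alert(src_ip, dest_ip, src_cidr=None, src_cond="include",
                 dest_cidr=None, dest_cond="include"):
    """Return True if an alert should be forwarded to Slack.

    A side with no configured CIDR places no restriction; a configured
    side passes only when the IP falls inside the CIDR and that side's
    condition is 'include'.
    """
    def side_passes(ip, cidr, condition):
        if cidr is None:  # no filter configured for this side
            return True
        in_range = ipaddress.ip_address(ip) in ipaddress.ip_network(cidr)
        return in_range and condition == "include"

    return (side_passes(src_ip, src_cidr, src_cond)
            or side_passes(dest_ip, dest_cidr, dest_cond))
```

With both CIDR parameters unset, every alert is forwarded, which matches the no-filter behavior described above.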


|  | Configured CIDR | Alert IP | Condition | Alert | 
| --- | --- | --- | --- | --- | 
| **Source** | 10.0.0.0/16 | 10.0.0.25 | include | Yes | 
| **Destination** | 100.0.0.0/16 | 202.0.0.13 | include |  | 


|  | Configured CIDR | Alert IP | Condition | Alert | 
| --- | --- | --- | --- | --- | 
| **Source** | 10.0.0.0/16 | 10.0.0.25 | exclude | No | 
| **Destination** | 100.0.0.0/16 | 202.0.0.13 | include |  | 


|  | Configured CIDR | Alert IP | Condition | Alert | 
| --- | --- | --- | --- | --- | 
| **Source** | 10.0.0.0/16 | 10.0.0.25 | include | Yes | 
| **Destination** | 100.0.0.0/16 | 100.0.0.13 | include |  | 


|  | Configured CIDR | Alert IP | Condition | Alert | 
| --- | --- | --- | --- | --- | 
| **Source** | 10.0.0.0/16 | 90.0.0.25 | include | Yes | 
| **Destination** | Null | 202.0.0.13 | include |  | 


|  | Configured CIDR | Alert IP | Condition | Alert | 
| --- | --- | --- | --- | --- | 
| **Source** | 10.0.0.0/16 | 90.0.0.25 | include | No | 
| **Destination** | 100.0.0.0/16 | 202.0.0.13 | include |  | 

# Send custom attributes to Amazon Cognito and inject them into tokens
<a name="send-custom-attributes-cognito"></a>

*Carlos Alessandro Ribeiro and Mauricio Mendoza, Amazon Web Services*

## Summary
<a name="send-custom-attributes-cognito-summary"></a>

Sending custom attributes to an Amazon Cognito authentication process can provide additional context to an application, enable more granular access controls, and make it easier to manage user profiles and authentication requirements. These features are useful in a wide range of applications and scenarios, and they can help you improve the overall security and functionality of an application.

This pattern shows how to send custom attributes to an Amazon Cognito authentication process when an application needs to provide additional context to the [access token](https://docs.aws.amazon.com/cognito/latest/developerguide/amazon-cognito-user-pools-using-the-access-token.html) or [identity (ID) token](https://docs.aws.amazon.com/cognito/latest/developerguide/amazon-cognito-user-pools-using-the-id-token.html). You use Node.js as the backend application. The application authenticates a user from an Amazon Cognito [user pool](https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-pools.html) and passes custom attributes that are needed for token generation. You can use [AWS Lambda triggers](https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-pools-working-with-lambda-triggers.html) for Amazon Cognito to customize your authentication process without major code customization or significant effort.

**Important**  
The code and samples in this pattern are not recommended for production workloads because they are intended for demonstration purposes only. For production workloads, additional configuration is required on the client side. Use this pattern as a reference for pilot or proof-of-concept purposes only.

## Prerequisites and limitations
<a name="send-custom-attributes-cognito-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ Permissions to create and manage Amazon Cognito user pools and AWS Lambda functions
+ AWS Command Line Interface (AWS CLI), [installed](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) and [configured](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html)
+ An integrated development environment (IDE) that supports Node.js
+ Node.js version 18 or later, [installed](https://nodejs.org/en/download/)
+ npm version 8 or later, [installed](https://docs.npmjs.com/getting-started)
+ TypeScript, [installed](https://www.typescriptlang.org/download/)

**Limitations**
+ This pattern is not applicable for application integration through the Client Credentials authentication flow.
+ The pre-token generation trigger can add or change only some attributes of the access token and identity token. For more information, see [Pre token generation Lambda trigger](https://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-lambda-pre-token-generation.html) in the Amazon Cognito documentation.

## Architecture
<a name="send-custom-attributes-cognito-architecture"></a>

**Target architecture**

The following diagram shows the target architecture for this pattern. It also shows how the Node.js application might work with a backend to update databases. However, the backend database updates are outside the scope of this pattern.

![\[A Node.js application sending custom attributes to an Amazon Cognito user pool for token generation.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/9f0855e6-77f9-48c2-846e-f9c317127e1f/images/8c52c88b-8954-4b4c-aed3-fd8c22f84c1d.png)


The diagram shows the following workflow:

1. The Node.js application sends an authentication request with custom attributes to the Amazon Cognito user pool.

1. The Amazon Cognito user pool initiates the pre-token generation Lambda function, which customizes the access and ID tokens.

1. The Node.js application makes an API call through Amazon API Gateway.

**Note**  
The other architectural components shown in the diagram are examples only and are outside the scope of this pattern.
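The pre-token generation trigger in step 2 can be sketched as a Lambda handler like the following. This is an illustrative sketch, not the pattern's sample function: it assumes the V1 trigger event shape (`claimsOverrideDetails`), and the claim names mirror the `ClientMetadata` keys used by the sample application in this pattern.

```python
def lambda_handler(event, context):
    """Pre-token generation trigger: copy the ClientMetadata sent by the
    application into the token claims. Claim names are illustrative."""
    metadata = event["request"].get("clientMetadata") or {}
    event["response"]["claimsOverrideDetails"] = {
        "claimsToAddOrOverride": {
            "customGroup": metadata.get("customGroup", ""),
            "customApplicationData": metadata.get("customApplicationData", ""),
        }
    }
    return event
```

Note that which token (access or ID) a claim lands in depends on the trigger event version that your user pool is configured to send; see the pre-token generation trigger documentation linked in the Limitations section.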

**Automation and scale**

You can automate the provisioning of Amazon Cognito user pools, AWS Lambda functions, database instances, and other resources by using [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html), the [AWS Cloud Development Kit (AWS CDK)](https://docs.aws.amazon.com/cdk/v2/guide/home.html), [HashiCorp Terraform](https://www.terraform.io/docs), or any supported infrastructure as code (IaC) tool. If you want to scale your deployments, use continuous integration and continuous delivery (CI/CD) pipelines, which help prevent errors associated with manual deployments.

## Tools
<a name="send-custom-attributes-cognito-tools"></a>

**AWS services**
+ [Amazon API Gateway](https://docs.aws.amazon.com/apigateway/latest/developerguide/welcome.html) helps you create, publish, maintain, monitor, and secure REST, HTTP, and WebSocket APIs at any scale.
+ [Amazon Cognito](https://docs.aws.amazon.com/cognito/latest/developerguide/what-is-amazon-cognito.html) provides authentication, authorization, and user management for web and mobile apps.
+ [Amazon Elastic Container Service (Amazon ECS)](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html) is a fast and scalable container management service that helps you run, stop, and manage containers on a cluster.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a compute service that helps you run code without needing to provision or manage servers. It runs your code only when needed and scales automatically, so you pay only for the compute time that you use.
+ [AWS SDK for JavaScript](https://docs.aws.amazon.com/sdk-for-javascript/v3/developer-guide/welcome.html) provides a JavaScript API for AWS services. You can use it to build libraries or applications for Node.js or the browser.

**Other tools**
+ [Node.js](https://nodejs.org/en/docs/) is an event-driven JavaScript runtime environment that is designed for building scalable network applications.
+ [npm](https://docs.npmjs.com/about-npm) is a software registry that runs in a Node.js environment and is used to share or borrow packages and manage deployment of private packages.

## Best practices
<a name="send-custom-attributes-cognito-best-practices"></a>

We recommend that you implement the following best practices:
+ **Secrets and sensitive data** – Do not store secrets or sensitive data within the application. Use an external system that the application can pull the data from, such as [AWS AppConfig](https://docs.aws.amazon.com/appconfig/latest/userguide/what-is-appconfig.html), [AWS Secrets Manager](https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html), or [AWS Systems Manager Parameter Store](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-parameter-store.html).
+ **Standardized deployment** – Use CI/CD pipelines to deploy your applications. You can use services such as [AWS CodeBuild](https://docs.aws.amazon.com/codebuild/latest/userguide/welcome.html) and [AWS CodePipeline](https://docs.aws.amazon.com/codepipeline/latest/userguide/welcome.html).
+ **Token expiration** – Set a short expiration date for the access token.
+ **Use a secure connection** – All communication between the client application and the backend should be encrypted by using SSL/TLS. Use [AWS Certificate Manager (ACM)](https://docs.aws.amazon.com/acm/latest/userguide/acm-overview.html) to generate and manage SSL/TLS certificates, and use [Amazon CloudFront](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Introduction.html) or [Elastic Load Balancing ](https://docs.aws.amazon.com/elasticloadbalancing/latest/userguide/what-is-load-balancing.html)to handle SSL/TLS termination.
+ **Validate user input** – Make sure that all user input is validated to prevent injection attacks and other security vulnerabilities. Use input validation libraries and services such as Amazon API Gateway and [AWS WAF](https://docs.aws.amazon.com/waf/latest/developerguide/what-is-aws-waf.html#waf-intro) to prevent common attack vectors.
+ **Use IAM roles** – Use [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) roles to control access to AWS resources and make sure that only authorized users have access. Follow the principle of least privilege and make sure that each user has only the necessary permissions to perform their role.
+ **Use a password policy** – Configure a password policy that meets your security requirements, such as minimum length, complexity, and expiration. Use Secrets Manager or AWS Systems Manager Parameter Store to store and manage passwords securely.
+ **Enable multi-factor authentication (MFA)** – Enable MFA for all users to provide an additional layer of security and reduce the risk of unauthorized access. Use [AWS IAM Identity Center](https://docs.aws.amazon.com/singlesignon/latest/userguide/what-is.html) or Amazon Cognito to enable MFA and other authentication methods.
+ **Store sensitive information securely** – Store sensitive information, such as passwords and access tokens, securely by using [AWS Key Management Service (AWS KMS)](https://docs.aws.amazon.com/kms/latest/developerguide/overview.html) or other encryption services.
+ **Use strong authentication methods** – To increase the security of the authentication process, use strong authentication methods, such as biometric authentication or multi-factor authentication.
+ **Monitor for suspicious activity** – Use [AWS CloudTrail](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-user-guide.html) and other monitoring tools to monitor for suspicious activity and potential security threats. Set up automated alerts for unusual activity, and use [Amazon GuardDuty](https://docs.aws.amazon.com/guardduty/latest/ug/what-is-guardduty.html) or [AWS Security Hub CSPM](https://docs.aws.amazon.com/securityhub/latest/userguide/what-is-securityhub.html) to detect potential threats.
+ **Regularly review and update security policies** – Regularly review and update your security policies and procedures to make sure that they meet your changing security requirements and best practices. Use AWS Config to track and audit changes to your security policies and procedures.
+ **Automated sign-up** – Do not enable automated sign-up to an Amazon Cognito user pool. For more information, see [Reduce risks of user sign-up fraud and SMS pumping with Amazon Cognito user pools](https://aws.amazon.com/blogs/security/reduce-risks-of-user-sign-up-fraud-and-sms-pumping-with-amazon-cognito-user-pools/) (AWS blog post).

For additional best practices, see [Security best practices for Amazon Cognito user pools](https://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-security-best-practices.html) in the Amazon Cognito documentation.

## Epics
<a name="send-custom-attributes-cognito-epics"></a>

### Set up the AWS resources
<a name="set-up-the-aws-resources"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a user pool. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/send-custom-attributes-cognito.html)For more information and instructions about how to set up a user pool in the AWS Management Console, see [Getting started with user pools](https://docs.aws.amazon.com/cognito/latest/developerguide/getting-started-user-pools.html) and [Add more features and security options to your user pool](https://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-next-steps.html). To reduce costs, use the Essentials plan or Lite plan to test this pattern. For more information, see [Amazon Cognito pricing](https://aws.amazon.com/cognito/pricing/). | App developer, AWS DevOps | 
| Add a user to the user pool. | Enter the following command to create one user in the Amazon Cognito user pool:<pre>aws cognito-idp sign-up \<br />   --client-id <ClientID> \<br />   --username <jane@example.com> \<br />   --password <PASSWORD> \<br />   --user-attributes Name="email",Value="<jane@example.com>" Name="name",Value="<Jane>"</pre> | App developer, AWS DevOps | 
| Add the app client to the user pool. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/send-custom-attributes-cognito.html) | AWS systems administrator, AWS administrator, AWS DevOps, App developer | 
| Create a Lambda trigger for pre-token generation. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/send-custom-attributes-cognito.html) | AWS DevOps, App developer | 
| Customize the user pool workflow. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/send-custom-attributes-cognito.html)For more information, see [Customizing user pool workflows with Lambda triggers](https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-pools-working-with-lambda-triggers.html) in the Amazon Cognito documentation. | AWS DevOps, App developer | 

### Create the Node.js application
<a name="create-the-node-js-application"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the Node.js application. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/send-custom-attributes-cognito.html) | App developer | 
| Implement the authentication logic. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/send-custom-attributes-cognito.html)You can create your own TypeScript file or modify the sample provided as needed for your use case. | App developer | 
| Configure the environment variables and configuration file. | In a terminal, enter the following commands to create the environment variables:<pre>export USERNAME="<COGNITO_USER_NAME>"<br />export PASSWORD="<COGNITO_USER_PASSWORD>"<br />export USER_POOL_ID="<COGNITO_USER_ID>"<br />export CLIENT_ID="<COGNITO_CLIENT_ID>"</pre>Do not hardcode secrets or expose your credentials. | App developer | 
| Run the application. | Enter the following commands to run the application and confirm that it is working:<pre>npm run build<br />npm start</pre> | App developer | 
| Confirm that the custom attributes are injected into the tokens. | Use the debugging features for your IDE to view the access and ID tokens. Confirm that the custom attributes were added. For sample tokens, see the [Additional information](#send-custom-attributes-cognito-additional) section of this pattern. | App developer | 

## Troubleshooting
<a name="send-custom-attributes-cognito-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| Invalid client ID when trying to authenticate the user | This error typically occurs when you are using a client ID with a generated client secret. You must create a client ID without a secret attached to it. For more information, see [Application-specific settings with app clients](https://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-settings-client-apps.html). | 

## Related resources
<a name="send-custom-attributes-cognito-resources"></a>
+ [Customizing user pool workflows with Lambda triggers](https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-identity-pools-working-with-aws-lambda-triggers.html) (Amazon Cognito documentation)
+ [Pre token generation Lambda trigger](https://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-lambda-pre-token-generation.html) (Amazon Cognito documentation)
+ [CognitoIdentityProviderClient](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/client/cognito-identity-provider/) (AWS SDK for JavaScript documentation)
+ [cognito-idp](https://awscli.amazonaws.com/v2/documentation/api/2.0.34/reference/cognito-idp/index.html#cli-aws-cognito-idp) (AWS CLI documentation)

## Additional information
<a name="send-custom-attributes-cognito-additional"></a>

**Sample TypeScript file**

The following code sample is a TypeScript file that invokes the authentication process by using an AWS SDK to send custom attributes to Amazon Cognito:

```typescript
import * as AmazonCognitoIdentity from "amazon-cognito-identity-js";

const userPoolId: string = process.env.USER_POOL_ID ?? '';
const clientId: string = process.env.CLIENT_ID ?? '';

const poolData = {
  UserPoolId: userPoolId,
  ClientId: clientId
};
const userPool = new AmazonCognitoIdentity.CognitoUserPool(poolData);

export const loginWithCognitoSDK = function (userName: string, password: string) {
  const authenticationDetails = new AmazonCognitoIdentity.AuthenticationDetails({
    Username: userName,
    Password: password,
    ClientMetadata: {
        customGroup: "MyCustomGroup",
        customApplicationData: "Custom data from a custom application"
    }
  });
  const userData = {
    Username: userName,
    Pool: userPool
  };

  const cognitoUser = new AmazonCognitoIdentity.CognitoUser(userData);

  // Authenticate the user using the authenticationDetails object
  cognitoUser.authenticateUser(authenticationDetails, {
    onSuccess: function (result: any) {},
    onFailure: function (err: any) {},
  });
}
loginWithCognitoSDK(process.env.USERNAME ?? '', process.env.PASSWORD ?? '');
```

The sample uses the `AuthenticationDetails` model from the SDK for JavaScript to provide the username, the password, and the `ClientMetadata`. After Amazon Cognito authenticates the user, the client metadata can be retrieved from the access and ID tokens.
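
To confirm that the metadata actually lands in the tokens, you can decode a token payload in code instead of using an IDE debugger. The following minimal sketch (an assumption for illustration, not part of the pattern's sample code) decodes a JWT payload without verifying the signature; never skip signature verification in production:

```typescript
// Decode the payload segment of a JWT (base64url-encoded JSON).
// For inspection only - this does NOT verify the token signature.
function decodeJwtPayload(token: string): Record<string, unknown> {
  const payloadPart = token.split(".")[1];
  return JSON.parse(Buffer.from(payloadPart, "base64url").toString("utf8"));
}

// Build a hypothetical unsigned token for illustration:
const header = Buffer.from(JSON.stringify({ alg: "none" })).toString("base64url");
const payload = Buffer.from(
  JSON.stringify({ customApplicationData: "Custom data from a custom application" })
).toString("base64url");
const sampleToken = `${header}.${payload}.`;

console.log(decodeJwtPayload(sampleToken)["customApplicationData"]);
// → Custom data from a custom application
```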

**Sample Lambda function**

The following code sample is a Lambda function that is attached to the [pre token generation Lambda trigger](https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-identity-pools-working-with-aws-lambda-triggers.html) in Amazon Cognito. It helps you customize the access and ID tokens that Amazon Cognito issues. The tokens are then passed through the integrations in your architecture. This sample adds a custom claim called `customApplicationData` and a custom group named `MyCustomGroup`:

```javascript
export const handler = async (event) => {
    // clientMetadata is present only when the client sends it at sign-in
    const metadata = event.request.clientMetadata ?? {};
    event.response = {
        claimsOverrideDetails: {
            claimsToAddOrOverride: { customApplicationData: metadata.customApplicationData },
            groupOverrideDetails: { groupsToOverride: [metadata.customGroup] }
        }
    };
    // Async handlers return the modified event instead of using a callback
    return event;
};
```

**Sample access token**

You can decode the access token to visualize the custom attributes that were added. The following is a sample access token:

```json
{
  "sub": "6daf331f-4451-48b4-abde-774579299204",
  "cognito:groups": [
    "MyCustomGroup"
  ],
  "iss": "https://cognito-idp.<REGION>.amazonaws.com/<USERPOOL_ID>",
  "client_id": "<YOUR_CLIENT_ID>",
  "origin_jti": "acff7e91-09f9-4fde-8eec-38b0f8c47cdc",
  "event_id": "c5113a9c-1f01-435b-9b73-a5cd3e88514e",
  "token_use": "access",
  "scope": "aws.cognito.signin.user.admin",
  "auth_time": 1677979246,
  "exp": 1677982846,
  "iat": 1677979246,
  "jti": "5c9c2708-a871-4428-bd9b-18ad261bea90",
  "username": "<USER_NAME>"
}
```

**Sample ID token**

You can decode the ID token to visualize the custom attributes that were added. The following is a sample ID token:

```json
{
  "sub": "6daf331f-4451-48b4-abde-774579299204",
  "cognito:groups": [
    "MyCustomGroup"
  ],
  "iss": "https://cognito-idp.<REGION>.amazonaws.com/<USERPOOL_ID>",
  "cognito:username": "<USER_NAME>",
  "origin_jti": "acff7e91-09f9-4fde-8eec-38b0f8c47cdc",
  "customApplicationData": "Custom data from a custom application",
  "aud": "<YOUR_CLIENT_ID>",
  "event_id": "c5113a9c-1f01-435b-9b73-a5cd3e88514e",
  "token_use": "id",
  "auth_time": 1677979246,
  "exp": 1677982846,
  "iat": 1677979246,
  "jti": "f7ca006b-f25b-44d2-a7a4-6e6423f4201f",
  "email": "<USER_EMAIL>"
}
```

# Simplify private certificate management by using AWS Private CA and AWS RAM
<a name="simplify-private-certificate-management-by-using-aws-private-ca-and-aws-ram"></a>

*Everett Hinckley and Vivek Goyal, Amazon Web Services*

## Summary
<a name="simplify-private-certificate-management-by-using-aws-private-ca-and-aws-ram-summary"></a>

You can use AWS Private Certificate Authority (AWS Private CA) to issue private certificates for authenticating internal resources and signing computer code. This pattern provides an AWS CloudFormation template for the rapid deployment of a multi-level CA hierarchy and consistent provisioning experience. Optionally, you can use AWS Resource Access Manager (AWS RAM) to securely share the CA within your organizations or organizational units (OUs) in AWS Organizations, and centralize the CA while using AWS RAM to manage permissions. There is no need for a private CA in every account, so this approach saves you money. Additionally, you can use Amazon Simple Storage Service (Amazon S3) to store the certificate revocation list (CRL) and access logs.

This implementation provides the following features and benefits:
+ Centralizes and simplifies the management of the private CA hierarchy by using AWS Private CA.
+ Exports certificates and keys to customer-managed devices on AWS and on premises.
+ Uses an AWS CloudFormation template for a rapid deployment and consistent provisioning experience.
+ Creates a private root CA along with a subordinate CA hierarchy of 1, 2, 3, or 4 levels.
+ Optionally, uses AWS RAM to share the end-entity subordinate CA with other accounts at the organization or OU level.
+ Saves money by removing the need for a private CA in every account by using AWS RAM.
+ Creates an optional S3 bucket for the CRL.
+ Creates an optional S3 bucket for CRL access logs.

## Prerequisites and limitations
<a name="simplify-private-certificate-management-by-using-aws-private-ca-and-aws-ram-prereqs"></a>

**Prerequisites**

If you want to share the CA within an AWS Organizations structure, identify or set up the following:
+ A security account for creating the CA hierarchy and share.
+ A separate OU or account for testing.
+ Sharing enabled within the AWS Organizations management account. For more information, see [Enable resource sharing within AWS Organizations](https://docs.aws.amazon.com/ram/latest/userguide/getting-started-sharing.html#getting-started-sharing-orgs) in the AWS RAM documentation.

**Limitations**
+ CAs are regional resources. All CAs reside in a single AWS account and in a single AWS Region.
+ User-generated certificates and keys are not supported. For this use case, we recommend that you customize this solution to use an external root CA. 
+ A public CRL bucket is not supported. We recommend that you keep the CRL private. If internet access to the CRL is required, see the section on using Amazon CloudFront to serve CRLs in [Enabling the S3 Block Public Access (BPA) feature](https://docs.aws.amazon.com/privateca/latest/userguide/crl-planning.html#s3-bpa) in the AWS Private CA documentation.
+ This pattern implements a single-Region approach. If you require a multi-Region certificate authority, you can implement subordinates in a second AWS Region or on premises. That complexity is outside the scope of this pattern, because the implementation depends on your specific use case, workload volume, dependencies, and requirements.

## Architecture
<a name="simplify-private-certificate-management-by-using-aws-private-ca-and-aws-ram-architecture"></a>

**Target technology stack**
+ AWS Private CA
+ AWS RAM
+ Amazon S3
+ AWS Organizations
+ AWS CloudFormation

**Target architecture**

This pattern provides two options for sharing to AWS Organizations:

**Option 1** ─ Create the share at the organization level. All accounts in the organization can issue private certificates by using the shared CA, as shown in the following diagram.

![\[Share a CA at the organization level\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/34701b79-c670-4c5d-8c8b-00c7fbd12d06/images/3765d327-3097-4134-a701-28753e1abb14.png)


**Option 2** ─ Create the share at the organizational unit (OU) level. Only the accounts in the specified OU can issue private certificates by using the shared CA. For example, in the following diagram, if the share is created at the Sandbox OU level, both Developer 1 and Developer 2 can issue private certificates by using the shared CA.

![\[Share a CA at the OU level\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/34701b79-c670-4c5d-8c8b-00c7fbd12d06/images/b8385d18-42d1-4924-aa69-cc4a3e96bf56.png)


## Tools
<a name="simplify-private-certificate-management-by-using-aws-private-ca-and-aws-ram-tools"></a>

**AWS services**
+ [AWS Private CA](https://docs.aws.amazon.com/privateca/latest/userguide/PcaWelcome.html) – AWS Private Certificate Authority (AWS Private CA) is a hosted private CA service for issuing and revoking private digital certificates. It helps you create private CA hierarchies, including root and subordinate CAs, without the investment and maintenance costs of operating an on-premises CA.
+ [AWS RAM](https://docs.aws.amazon.com/ram/latest/userguide/what-is.html) – AWS Resource Access Manager (AWS RAM) helps you securely share your resources across AWS accounts and within your organization or OUs in AWS Organizations. To reduce operational overhead in a multi-account environment, you can create a resource and use AWS RAM to share that resource across accounts.
+ [AWS Organizations](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_introduction.html) – AWS Organizations is an account management service that enables you to consolidate multiple AWS accounts into an organization that you create and centrally manage.
+ [Amazon S3](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) – Amazon Simple Storage Service (Amazon S3) is an object storage service. You can use Amazon S3 to store and retrieve any amount of data at any time, from anywhere on the web. This pattern uses Amazon S3 to store the certificate revocation list (CRL) and access logs.
+ [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) – AWS CloudFormation helps you model and set up your AWS resources, provision them quickly and consistently, and manage them throughout their lifecycle. You can use a template to describe your resources and their dependencies, and launch and configure them together as a stack, instead of managing resources individually. This pattern uses AWS CloudFormation to automatically deploy a multi-level CA hierarchy.

**Code**

The source code for this pattern is available on GitHub, in the [AWS Private CA hierarchy](https://github.com/aws-samples/acmpca-hierarchy) repository. The repository includes:
+ The AWS CloudFormation template `ACMPCA-RootCASubCA.yaml`. You can use this template to deploy the CA hierarchy for this implementation. 
+ Test files for use cases such as requesting, exporting, describing, and deleting a certificate.

To use these files, follow the instructions in the *Epics* section.

## Epics
<a name="simplify-private-certificate-management-by-using-aws-private-ca-and-aws-ram-epics"></a>

### Architect the CA hierarchy
<a name="architect-the-ca-hierarchy"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Collect certificate subject information. | Gather certificate subject information about the certificate owner: organization name, organization unit, country, state, locality, and common name. | Cloud architect, Security architect, PKI engineer | 
| Collect optional information about AWS Organizations. | If the CA will be part of an AWS Organizations structure and you want to share the CA hierarchy inside that structure, collect the management account number, the organization ID, and optionally the OU ID (if you want to share the CA hierarchy only with a specific OU). Also, determine the AWS Organizations accounts or OUs, if any, that you want to share the CA with. | Cloud architect, Security architect, PKI engineer | 
| Design the CA hierarchy. | Determine which account will house the root and subordinate CAs. Determine how many subordinate levels the hierarchy requires between the root and the end-entity certificates. For more information, see [Designing a CA hierarchy](https://docs.aws.amazon.com/privateca/latest/userguide/ca-hierarchy.html) in the AWS Private CA documentation. | Cloud architect, Security architect, PKI engineer | 
| Determine naming and tagging conventions for the CA hierarchy. | Determine the names for the AWS resources: the root CA and each subordinate CA. Determine which tags should be assigned to each CA. | Cloud architect, Security architect, PKI engineer | 
| Determine required encryption and signing algorithms. | Determine the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/simplify-private-certificate-management-by-using-aws-private-ca-and-aws-ram.html) | Cloud architect, Security architect, PKI engineer | 
| Determine certificate revocation requirements for the CA hierarchy. | If certificate revocation capabilities are required, establish a naming convention for the S3 bucket that contains the certificate revocation list (CRL). | Cloud architect, Security architect, PKI engineer | 
| Determine the logging requirements for the CA hierarchy. | If access logging capabilities are required, establish a naming convention for the S3 bucket that contains the access logs. | Cloud architect, Security architect, PKI engineer | 
| Determine certificate expiration periods. | Determine the expiration date for the root certificate (the default is 10 years), end-entity certificates (the default is 13 months), and subordinate CA certificates (the default is 3 years). Subordinate CA certificates should expire earlier than the CA certificates at higher levels in the hierarchy. For more information, see [Managing the private CA lifecycle](https://docs.aws.amazon.com/privateca/latest/userguide/ca-lifecycle.html) in the AWS Private CA documentation. | Cloud architect, Security architect, PKI engineer | 
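
The expiration ordering in the last task above — each certificate must expire before the certificate of its issuing CA — can be sketched as a quick validation. This is a hypothetical helper (the names, day counts, and structure are assumptions for illustration, not part of the pattern's code):

```typescript
// Validate that each level in a CA hierarchy has a shorter validity
// period than its issuer, ordered root -> subordinates -> end-entity.
interface CaLevel {
  name: string;
  validityDays: number;
}

function validateLifetimes(hierarchy: CaLevel[]): boolean {
  for (let i = 1; i < hierarchy.length; i++) {
    // A certificate must expire before its issuer's certificate
    if (hierarchy[i].validityDays >= hierarchy[i - 1].validityDays) {
      return false;
    }
  }
  return true;
}

// The default periods described in the task above:
const defaults: CaLevel[] = [
  { name: "root", validityDays: 3650 },        // 10 years
  { name: "subordinate", validityDays: 1095 }, // 3 years
  { name: "end-entity", validityDays: 395 },   // 13 months
];

console.log(validateLifetimes(defaults)); // true
```

A check like this is useful as a guardrail in deployment tooling before the expiration parameters are passed to the CloudFormation template.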

### Deploy the CA hierarchy
<a name="deploy-the-ca-hierarchy"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Complete prerequisites. | Complete the steps in the [Prerequisites](#simplify-private-certificate-management-by-using-aws-private-ca-and-aws-ram-prereqs) section of this pattern. | Cloud administrator, Security engineers, PKI engineers | 
| Create CA roles for various personas. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/simplify-private-certificate-management-by-using-aws-private-ca-and-aws-ram.html) | Cloud administrator, Security engineers, PKI engineers | 
| Deploy the CloudFormation stack. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/simplify-private-certificate-management-by-using-aws-private-ca-and-aws-ram.html) | Cloud administrator, Security engineers, PKI engineers | 
| Architect a solution for updating certificates used by user-managed resources. | Resources of integrated AWS services, such as Elastic Load Balancing, update certificates automatically before expiration. However, user-managed resources, such as web servers that are running on Amazon Elastic Compute Cloud (Amazon EC2) instances, require another mechanism. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/simplify-private-certificate-management-by-using-aws-private-ca-and-aws-ram.html) | Cloud administrator, Security engineers, PKI engineers | 

### Validate and document the CA hierarchy
<a name="validate-and-document-the-ca-hierarchy"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Validate optional AWS RAM sharing. | If the CA hierarchy is shared with other accounts in AWS Organizations, log in to one of those accounts from the AWS Management Console, navigate to the [AWS Private CA console](https://console.aws.amazon.com/acm-pca/home), and confirm that the newly created CA is shared to this account. Only the lowest-level CA in the hierarchy will be visible, because that is the CA that generates the end-entity certificates. Repeat for a sampling of the accounts that the CA is shared with. | Cloud administrator, Security engineers, PKI engineers | 
| Validate the CA hierarchy with certificate lifecycle tests. | In the [GitHub repository](https://github.com/aws-samples/acmpca-hierarchy) for this pattern, locate the lifecycle tests. Run the tests from the AWS CLI to request a certificate, export a certificate, describe a certificate, and delete a certificate. | Cloud administrator, Security engineers, PKI engineers | 
| Import the certificate chain into trust stores. | For browsers and other applications to trust a certificate, the certificate’s issuer must be included in the trust store, which is a list of trusted CAs. Add the certificate chain for the new CA hierarchy to your browser and application trust stores. Confirm that the end-entity certificates are trusted. | Cloud administrator, Security engineers, PKI engineers | 
| Create a runbook to document the CA hierarchy. | Create a runbook document to describe the architecture of the CA hierarchy, the account structure that can request end-entity certificates, the build process, and basic management tasks such as issuing end-entity certificates (unless you want to allow self-service by child accounts), usage, and tracking. | Cloud administrator, Security engineers, PKI engineers | 

## Related resources
<a name="simplify-private-certificate-management-by-using-aws-private-ca-and-aws-ram-resources"></a>
+ [Designing a CA hierarchy](https://docs.aws.amazon.com/privateca/latest/userguide/ca-hierarchy.html) (AWS Private CA documentation)
+ [Creating a private CA](https://docs.aws.amazon.com/privateca/latest/userguide/create-CA.html) (AWS Private CA documentation)
+ [How to use AWS RAM to share your AWS Private CA cross-account](https://aws.amazon.com/blogs/security/how-to-use-aws-ram-to-share-your-acm-private-ca-cross-account/) (AWS blog post)
+ [AWS Private CA best practices](https://docs.aws.amazon.com/acm-pca/latest/userguide/ca-best-practices.html) (AWS Private CA documentation)
+ [Enable resource sharing within AWS Organizations](https://docs.aws.amazon.com/ram/latest/userguide/getting-started-sharing.html#getting-started-sharing-orgs) (AWS RAM documentation)
+ [Managing the private CA lifecycle](https://docs.aws.amazon.com/privateca/latest/userguide/ca-lifecycle.html) (AWS Private CA documentation)
+ [acm-certificate-expiration-check for AWS Config](https://docs.aws.amazon.com/config/latest/developerguide/acm-certificate-expiration-check.html) (AWS Config documentation)
+ [AWS Certificate Manager now provides certificate expiry monitoring through Amazon CloudWatch](https://aws.amazon.com/about-aws/whats-new/2021/03/aws-certificate-manager-provides-certificate-expiry-monitoring-through-amazon-cloudwatch/) (AWS announcement)
+ [Services integrated with AWS Certificate Manager](https://docs.aws.amazon.com/acm/latest/userguide/acm-services.html) (ACM documentation)

## Additional information
<a name="simplify-private-certificate-management-by-using-aws-private-ca-and-aws-ram-additional"></a>

When you export certificates, use a passphrase that is cryptographically strong and aligns with your organization’s data loss prevention strategy.

# Streamline Amazon EC2 compliance management with Amazon Bedrock agents and AWS Config
<a name="streamline-amazon-ec2-compliance-management-with-amazon-bedrock-agents-and-aws-config"></a>

*Anand Bukkapatnam Tirumala, Amazon Web Services*

## Summary
<a name="streamline-amazon-ec2-compliance-management-with-amazon-bedrock-agents-and-aws-config-summary"></a>

This pattern describes how to integrate Amazon Bedrock with AWS Config rules to facilitate compliance management for Amazon Elastic Compute Cloud (Amazon EC2) instances. The approach uses advanced generative AI capabilities to provide tailored recommendations that are aligned with the [AWS Well-Architected Framework](https://docs.aws.amazon.com/wellarchitected/latest/framework/welcome.html), to ensure optimal instance type selection and system efficiency. Key features of this pattern include:
+ Automated compliance monitoring: AWS Config rules continuously assess EC2 instances against predefined criteria for desired instance types.
+ AI-driven recommendations: The generative AI models in Amazon Bedrock analyze infrastructure patterns. These models provide intelligent suggestions for improvements based on best practices that are outlined in the AWS Well-Architected Framework.
+ Remediation: Amazon Bedrock action groups enable automated remediation steps to swiftly address non-compliant instances and minimize potential performance or cost inefficiencies.
+ Scalability and adaptability: The solution is designed to scale with your infrastructure and adapt to your evolving cloud architecture needs.
+ Enhanced security recommendations: Compliance with AWS Well-Architected principles contributes to improved security posture and system performance.

You can use this pattern as a blueprint to deploy your own generative AI-based infrastructure into multiple environments with minimal changes, using DevOps practices as necessary.

## Prerequisites and limitations
<a name="streamline-amazon-ec2-compliance-management-with-amazon-bedrock-agents-and-aws-config-prereqs"></a>

**Prerequisites**
+ An active AWS account.
+ An AWS Identity and Access Management (IAM) role with permissions to create and manage resources in Amazon Simple Storage Service (Amazon S3) buckets, AWS Config, AWS Lambda functions, Amazon Bedrock, IAM, Amazon CloudWatch Logs, and Amazon EC2.
+ An EC2 instance to flag as non-compliant. Do not use the `t2.small` type for this instance.
+ [Amazon Titan Text Embeddings V2](https://docs.aws.amazon.com/bedrock/latest/userguide/titan-embedding-models.html) and Anthropic Claude 3 Haiku models enabled in your AWS account. To enable model access for the AWS Region where you are deploying the solution, see [Add or remove access to Amazon Bedrock foundation models](https://docs.aws.amazon.com/bedrock/latest/userguide/model-access-modify.html) in the Amazon Bedrock documentation.
+ [Terraform](https://developer.hashicorp.com/terraform/install), installed and configured.
+ The [AWS Command Line Interface (AWS CLI)](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) v2 installed and configured in the deployment environment.
+ Completed review of the [Amazon Responsible AI policy](https://aws.amazon.com/ai/responsible-ai/policy/).

**Limitations**
+ Some AWS services aren’t available in all AWS Regions. For Region availability, see [AWS services by Region](https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/). For specific endpoints, see [Service endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/aws-service-information.html), and choose the link for the service.
+ This solution has been tested by using the Amazon Titan Text Embeddings V2 and Claude 3 Haiku models. If you prefer to use other models, you can customize the Terraform code, which is parameterized for easy changes.
+ This solution does not include a chat history feature, and the chat isn't stored.

## Architecture
<a name="streamline-amazon-ec2-compliance-management-with-amazon-bedrock-agents-and-aws-config-architecture"></a>

The following diagram shows the workflow and architecture components for this pattern.

![\[Architecture and workflow for streamlining Amazon EC2 compliance management with Amazon Bedrock agents.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/f43ae2bd-209e-412b-9364-e73996360992/images/4ebf4bce-4927-4d78-841e-95c44b8d780f.png)


The workflow consists of these steps:

1. The user interacts with the model through the Amazon Bedrock chat console. The user asks questions such as:
   + `What can you help me with?`
   + `List non-compliant resources`
   + `Suggest security best practices`

1. If the model is pre-trained, it responds to the prompts directly from its existing knowledge. Otherwise, the prompt goes to the Amazon Bedrock action group.

1. The action group reaches the [VPC endpoints](https://docs.aws.amazon.com/whitepapers/latest/aws-privatelink/what-are-vpc-endpoints.html) by using [AWS PrivateLink](https://aws.amazon.com/privatelink/) for secure service communication.

1. The request reaches the Lambda function through the VPC endpoints for Amazon Bedrock services.

1. The Lambda function is the primary execution engine. Based on the request, the function calls the API to perform actions on the AWS services. It also handles operation routing and execution.

1. The Lambda function calls AWS Config to determine non-compliant resources (the non-compliant EC2 instance that you created as a prerequisite).

1. AWS Config flags the non-compliant resource. This pattern deploys the AWS Config [desired-instance-type](https://docs.aws.amazon.com/config/latest/developerguide/desired-instance-type.html) rule to find the ideal EC2 instance size.

1. The user is prompted to pause or remediate the non-compliant instance, and the corresponding action is taken on the EC2 instance. Amazon Bedrock interprets the returned payload.

1. The user receives a response on the Amazon Bedrock chat console.
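
In step 6, the Lambda function extracts the non-compliant resources from the AWS Config evaluation results. The filtering step can be sketched as follows; the payload shape follows the AWS Config `GetComplianceDetailsByConfigRule` API response, and the sample data and function name are assumptions for illustration (the real function would fetch the results with the AWS SDK):

```typescript
// Shape of an entry in the EvaluationResults array returned by the
// AWS Config GetComplianceDetailsByConfigRule API.
interface EvaluationResult {
  EvaluationResultIdentifier: {
    EvaluationResultQualifier: { ResourceId: string; ResourceType: string };
  };
  ComplianceType: "COMPLIANT" | "NON_COMPLIANT";
}

// Keep only the resource IDs of non-compliant evaluations.
function nonCompliantResourceIds(results: EvaluationResult[]): string[] {
  return results
    .filter((r) => r.ComplianceType === "NON_COMPLIANT")
    .map((r) => r.EvaluationResultIdentifier.EvaluationResultQualifier.ResourceId);
}

// Hypothetical sample payload:
const sample: EvaluationResult[] = [
  {
    EvaluationResultIdentifier: {
      EvaluationResultQualifier: { ResourceId: "i-0abc", ResourceType: "AWS::EC2::Instance" },
    },
    ComplianceType: "NON_COMPLIANT",
  },
  {
    EvaluationResultIdentifier: {
      EvaluationResultQualifier: { ResourceId: "i-0def", ResourceType: "AWS::EC2::Instance" },
    },
    ComplianceType: "COMPLIANT",
  },
];

console.log(nonCompliantResourceIds(sample)); // [ 'i-0abc' ]
```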

**Automation and scale**

This solution uses Terraform as an infrastructure as code (IaC) tool to enable easy deployment to AWS accounts and to function as a standalone utility across multiple accounts. This approach simplifies management and improves consistency in deployments.

## Tools
<a name="streamline-amazon-ec2-compliance-management-with-amazon-bedrock-agents-and-aws-config-tools"></a>

**AWS services**
+ [AWS Config](https://docs.aws.amazon.com/config/latest/developerguide/WhatIsConfig.html) enables you to assess, audit, and evaluate the configurations of your AWS resources for compliance and desired settings.
+ [Amazon Bedrock](https://docs.aws.amazon.com/bedrock/latest/userguide/what-is-bedrock.html) is a fully managed AI service that provides access to many high-performing foundation models through a unified API.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a compute service that helps you run code without needing to provision or manage servers. It runs your code only when needed and scales automatically, so you pay only for the compute time that you use.

**Other tools**
+ [Git](https://git-scm.com/docs) is an open source, distributed version control system.
+ [Terraform](https://www.terraform.io/) is an infrastructure as code (IaC) tool from HashiCorp that helps you create and manage cloud and on-premises resources.

**Code repository**

The code for this pattern is available in the GitHub [sample-awsconfig-bedrock-compliance-manager](https://github.com/aws-samples/sample-awsconfig-bedrock-compliance-manager) repository.

## Best practices
<a name="streamline-amazon-ec2-compliance-management-with-amazon-bedrock-agents-and-aws-config-best-practices"></a>
+ Follow the principle of least privilege and grant the minimum permissions required to perform a task. For more information, see [Grant least privilege](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html#grant-least-priv) and [Security best practices and use cases](https://docs.aws.amazon.com/IAM/latest/UserGuide/IAMBestPracticesAndUseCases.html) in the IAM documentation.
+ Monitor Lambda execution logs regularly. For more information, see [Monitoring, debugging, and troubleshooting Lambda functions](https://docs.aws.amazon.com/lambda/latest/dg/lambda-monitoring.html) and [Best practices for working with AWS Lambda functions](https://docs.aws.amazon.com/lambda/latest/dg/best-practices.html) in the Lambda documentation.

## Epics
<a name="streamline-amazon-ec2-compliance-management-with-amazon-bedrock-agents-and-aws-config-epics"></a>

### Deploy the solution
<a name="deploy-the-solution"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clone the repository. | To clone the repository for this pattern, use the following command:<pre>git clone "git@github.com:aws-samples/sample-awsconfig-bedrock-compliance-manager.git"</pre> | AWS DevOps, Build lead, DevOps engineer, Cloud administrator | 
| Edit the environment variables. | In the root directory of the cloned repository on your local machine, edit the `terraform.tfvars` file. Review the placeholders that are marked with `[XXXXX]`, and edit them based on your environment. | AWS systems administrator, AWS DevOps, DevOps engineer, AWS administrator | 
| Create the infrastructure. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/streamline-amazon-ec2-compliance-management-with-amazon-bedrock-agents-and-aws-config.html) | AWS DevOps, DevOps engineer, AWS systems administrator, Cloud administrator | 

### Use the agent
<a name="use-the-agent"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Chat with the agent. | Deploying the solution in the previous step deploys `security-bot-agent`, which is an Amazon Bedrock agent with a chat console. To use the agent: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/streamline-amazon-ec2-compliance-management-with-amazon-bedrock-agents-and-aws-config.html) | AWS DevOps, DevOps engineer, AWS systems administrator, Cloud administrator | 

### Clean up resources
<a name="clean-up-resources"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Delete the infrastructure and resources. | When you’ve completed your work with this solution, you can delete the infrastructure created by this pattern by running the command:<pre>terraform destroy --auto-approve</pre> | AWS DevOps, DevOps engineer, AWS systems administrator, Cloud administrator | 

## Troubleshooting
<a name="streamline-amazon-ec2-compliance-management-with-amazon-bedrock-agents-and-aws-config-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| Agent behavior issues | For troubleshooting information, see [Test and troubleshoot agent behavior](https://docs.aws.amazon.com/bedrock/latest/userguide/agents-test.html) in the Amazon Bedrock documentation. | 
| AWS Lambda network issues | For more information, see [Troubleshoot networking issues in Lambda](https://docs.aws.amazon.com/lambda/latest/dg/troubleshooting-networking.html) in the Lambda documentation. | 
| IAM permissions | For more information, see [Troubleshoot IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/troubleshoot.html) in the IAM documentation. | 

## Related resources
<a name="streamline-amazon-ec2-compliance-management-with-amazon-bedrock-agents-and-aws-config-resources"></a>
+ [Amazon Bedrock agents](https://aws.amazon.com/bedrock/agents/)
+ [Use action groups to define actions for your agent to perform](https://docs.aws.amazon.com/bedrock/latest/userguide/agents-action-create.html) (Amazon Bedrock documentation)
+ [desired-instance-type rule](https://docs.aws.amazon.com/config/latest/developerguide/desired-instance-type.html) (AWS Config documentation)
+ [How AWS Config works](https://docs.aws.amazon.com/config/latest/developerguide/how-does-config-work.html) (AWS Config documentation)

# Update AWS CLI credentials from AWS IAM Identity Center by using PowerShell
<a name="update-aws-cli-credentials-from-aws-iam-identity-center-by-using-powershell"></a>

*Chad Miles and Andy Bowen, Amazon Web Services*

## Summary
<a name="update-aws-cli-credentials-from-aws-iam-identity-center-by-using-powershell-summary"></a>

If you want to use AWS IAM Identity Center (successor to AWS Single Sign-On) credentials with AWS Command Line Interface (AWS CLI), AWS SDKs, or AWS Cloud Development Kit (AWS CDK), you typically have to copy and paste the credentials from the IAM Identity Center console into the command line interface. This process can take a considerable amount of time and has to be repeated for each account that requires access.

One common solution is to use the AWS CLI `aws sso configure` command. This command adds an IAM Identity Center enabled profile to your AWS CLI or AWS SDK. However, the disadvantage of this solution is that you must run the command `aws sso login` for each AWS CLI profile or account that you have configured this way.
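For reference, a profile created by `aws sso configure` is stored in the shared `~/.aws/config` file in roughly the following shape (all values shown here are placeholders):

```
[profile dev-account]
sso_start_url = https://d-12345abcde.awsapps.com/start/
sso_region = us-west-2
sso_account_id = 111122223333
sso_role_name = AWSAdministratorAccess
region = us-west-2
```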

As an alternative solution, this pattern describes how to use AWS CLI [named profiles](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html#cli-configure-files-using-profiles) and AWS Tools for PowerShell to store and refresh credentials for multiple accounts from a single IAM Identity Center instance simultaneously. The script also stores IAM Identity Center session data in memory for refreshing credentials without logging into IAM Identity Center again.
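The refreshed credentials are written as named profiles in the shared `~/.aws/credentials` file, in the standard format; for example (the profile name and values here are placeholders):

```
[Account1]
aws_access_key_id     = ASIAEXAMPLEACCESSKEY
aws_secret_access_key = wJalrEXAMPLESECRETKEY
aws_session_token     = EXAMPLESESSIONTOKEN
```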

## Prerequisites and limitations
<a name="update-aws-cli-credentials-from-aws-iam-identity-center-by-using-powershell-prereqs"></a>

**Prerequisites**
+ PowerShell, installed and configured. For more information, see [Installing PowerShell](https://learn.microsoft.com/en-us/powershell/scripting/install/installing-powershell?view=powershell-7.3) (Microsoft documentation).
+ AWS Tools for PowerShell, installed and configured. For performance reasons, we highly recommend that you install the modularized version of AWS Tools for PowerShell, called `AWS.Tools`. Each AWS service is supported by its own individual, small module. At the PowerShell prompt, enter the following commands to install the modules needed for this pattern: `AWS.Tools.Installer`, `SSO`, and `SSOOIDC`.

  ```
  Install-Module AWS.Tools.Installer
  Install-AWSToolsModule SSO, SSOOIDC
  ```

  For more information, see [Install AWS.Tools on Windows](https://docs.aws.amazon.com/powershell/latest/userguide/pstools-getting-set-up-windows.html#ps-installing-awstools) or [Install AWS.Tools on Linux or macOS](https://docs.aws.amazon.com/powershell/latest/userguide/pstools-getting-set-up-linux-mac.html#install-aws.tools-on-linux-macos).
+ AWS CLI or the AWS SDK must be previously configured with working credentials by doing one of the following:
  + Use the AWS CLI `aws configure` command. For more information, see [Quick configuration](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html#cli-configure-quickstart-config) (AWS CLI documentation).
  + Configure AWS CLI or AWS CDK to get temporary access through an IAM role. For more information, see [Getting IAM role credentials for CLI access](https://docs.aws.amazon.com/singlesignon/latest/userguide/howtogetcredentials.html) (IAM Identity Center documentation).

**Limitations**
+ This script can’t be used in a pipeline or fully automated solution. When you run this script, you must manually authorize access in IAM Identity Center. The script then continues automatically.

**Product versions**
+ For all operating systems, it is recommended that you use [PowerShell version 7.0](https://github.com/powershell/powershell) or later.

## Architecture
<a name="update-aws-cli-credentials-from-aws-iam-identity-center-by-using-powershell-architecture"></a>

You can use the script in this pattern to simultaneously refresh multiple IAM Identity Center credentials, and you can create a credential file for use with AWS CLI, AWS SDKs, or AWS CDK.

![\[Using a PowerShell script to update credentials in AWS CLI, AWS CDK, or AWS SDKs.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/6d54a6bb-01ac-4736-9b78-40921fcc9056/images/01e0fcb6-3b48-422c-8868-07a7de83b3e3.png)


## Tools
<a name="update-aws-cli-credentials-from-aws-iam-identity-center-by-using-powershell-tools"></a>

**AWS services**
+ [AWS Command Line Interface (AWS CLI)](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) is an open-source tool that helps you interact with AWS services through commands in your command-line shell.
+ [AWS IAM Identity Center](https://docs.aws.amazon.com/singlesignon/latest/userguide/what-is.html) helps you centrally manage single sign-on (SSO) access to all of your AWS accounts and cloud applications.
+ [AWS Tools for PowerShell](https://docs.aws.amazon.com/powershell/latest/userguide/pstools-welcome.html) are a set of PowerShell modules that help you script operations on your AWS resources from the PowerShell command line.

**Other tools**
+ [PowerShell](https://learn.microsoft.com/en-us/powershell/) is a Microsoft automation and configuration management program that runs on Windows, Linux, and macOS.

## Best practices
<a name="update-aws-cli-credentials-from-aws-iam-identity-center-by-using-powershell-best-practices"></a>

Keep one copy of this script for each IAM Identity Center instance. Using one script for multiple instances is not supported.

## Epics
<a name="update-aws-cli-credentials-from-aws-iam-identity-center-by-using-powershell-epics"></a>

### Run the SSO script
<a name="run-the-sso-script"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Customize the SSO script. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/update-aws-cli-credentials-from-aws-iam-identity-center-by-using-powershell.html) | Cloud administrator | 
| Run the SSO script. | It is recommended that you run your custom script in the PowerShell shell with the following command.<pre>./Set-AwsCliSsoCredentials.ps1</pre>Alternatively, you can run the script from another shell by entering the following command.<pre>pwsh Set-AwsCliSsoCredentials.ps1</pre> | Cloud administrator | 

## Troubleshooting
<a name="update-aws-cli-credentials-from-aws-iam-identity-center-by-using-powershell-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| `No Access` error | The IAM role that you are using doesn’t have permissions to access the role or permission set that you defined in the `RoleName` parameter. Update the permissions for the role that you are using, or define a different role or permission set in the script. | 

## Related resources
<a name="update-aws-cli-credentials-from-aws-iam-identity-center-by-using-powershell-resources"></a>
+ [Where are configuration settings stored?](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html#cli-configure-files-where) (AWS CLI documentation)
+ [Configuring the AWS CLI to use AWS IAM Identity Center](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-sso.html) (AWS CLI documentation)
+ [Using named profiles](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html#cli-configure-files-using-profiles) (AWS CLI documentation)

## Additional information
<a name="update-aws-cli-credentials-from-aws-iam-identity-center-by-using-powershell-additional"></a>

**SSO script**

In the following script, replace placeholders in angle brackets (<>) with your own information and remove the angle brackets.

```
Set-AwsCliSsoCredentials.ps1
Param(
    $DefaultRoleName = '<AWSAdministratorAccess>',
    $Region          = '<us-west-2>',
    $StartUrl        = "<https://d-12345abcde.awsapps.com/start/>",
    $EnvironmentName = "<CompanyName>"
) 
Try {$SsoAwsAccounts = (Get-Variable -name "$($EnvironmentName)SsoAwsAccounts" -Scope Global -ErrorAction 'SilentlyContinue').Value.Clone()} 
Catch {$SsoAwsAccounts = $False}
if (-not $SsoAwsAccounts) { $SsoAwsAccounts = @(    
# Add your account information in the list of hash tables below, expand as necessary, and do not forget the commas 
    @{Profile = "<Account1>"      ; AccountId = "<012345678901>"; RoleName = $DefaultRoleName },
    @{Profile = "<Account2>"      ; AccountId = "<123456789012>"; RoleName = "<AWSReadOnlyAccess>" }
)}
$ErrorActionPreference = "Stop"
if (-not (Test-Path ~\.aws))      { New-Item ~\.aws -type Directory }
if (-not (Test-Path ~\.aws\credentials)) { New-Item ~\.aws\credentials -type File }
$CredentialFile = Resolve-Path ~\.aws\credentials 
$PsuedoCreds    = @{AccessKey = 'AKAEXAMPLE123ACCESS';SecretKey='PsuedoS3cret4cceSSKey123PsuedoS3cretKey'} # Pseudo Creds, do not edit.
Try {$SSOTokenExpire = (Get-Variable -Scope Global -Name "$($EnvironmentName)SSOTokenExpire" -ErrorAction 'SilentlyContinue').Value} Catch {$SSOTokenExpire = $False}
Try {$SSOToken       = (Get-Variable -Scope Global -Name "$($EnvironmentName)SSOToken" -ErrorAction 'SilentlyContinue').Value }      Catch {$SSOToken       = $False}
if ( $SSOTokenExpire -lt (Get-Date) ) {
    $SSOToken = $Null
    $Client   = Register-SSOOIDCClient -ClientName cli-sso-client -ClientType public -Region $Region @PsuedoCreds
    $Device   = $Client | Start-SSOOIDCDeviceAuthorization -StartUrl $StartUrl -Region $Region @PsuedoCreds
    Write-Host "A browser window should open. Please log in there and choose ALLOW." -NoNewline
    Start-Process $Device.VerificationUriComplete
    While (-Not $SSOToken){
        Try {$SSOToken = $Client | New-SSOOIDCToken -DeviceCode $Device.DeviceCode -GrantType "urn:ietf:params:oauth:grant-type:device_code" -Region $Region @PsuedoCreds}
        Catch {If ($_.Exception.Message -notlike "*AuthorizationPendingException*"){Write-Error $_.Exception} ; Start-Sleep 1}
    }
    $SSOTokenExpire = (Get-Date).AddSeconds($SSOToken.ExpiresIn)
    Set-Variable -Name "$($EnvironmentName)SSOToken" -Value $SSOToken -Scope Global
    Set-Variable -Name "$($EnvironmentName)SSOTokenExpire" -Value $SSOTokenExpire -Scope Global
}
$CredsTime     = $SSOTokenExpire - (Get-Date)
$CredsTimeText = ('{0:D2}:{1:D2}:{2:D2} left on SSO Token' -f $CredsTime.Hours, $CredsTime.Minutes, $CredsTime.Seconds).TrimStart("0 :")
for ($i = 0; $i -lt $SsoAwsAccounts.Count; $i++) {
    if (([DateTimeOffset]::FromUnixTimeSeconds($SsoAwsAccounts[$i].CredsExpiration / 1000)).DateTime -lt (Get-Date).ToUniversalTime()) {
        Write-host "`r                                                                     `rRegistering Profile $($SsoAwsAccounts[$i].Profile)" -NoNewline
        $TempCreds = $SSOToken | Get-SSORoleCredential -AccountId $SsoAwsAccounts[$i].AccountId -RoleName $SsoAwsAccounts[$i].RoleName -Region $Region @PsuedoCreds
        [PSCustomObject]@{AccessKey = $TempCreds.AccessKeyId; SecretKey = $TempCreds.SecretAccessKey; SessionToken = $TempCreds.SessionToken
        } | Set-AWSCredential -StoreAs $SsoAwsAccounts[$i].Profile -ProfileLocation $CredentialFile 
        $SsoAwsAccounts[$i].CredsExpiration = $TempCreds.Expiration
    }
} 
Set-Variable -name "$($EnvironmentName)SsoAwsAccounts" -Value $SsoAwsAccounts.Clone() -Scope Global
Write-Host "`r$($SsoAwsAccounts.Profile) Profiles registered, $CredsTimeText"
```

# Use Network Firewall to capture the DNS domain names from the Server Name Indication for outbound traffic
<a name="use-network-firewall-to-capture-the-dns-domain-names-from-the-server-name-indication-sni-for-outbound-traffic"></a>

*Kirankumar Chandrashekar, Amazon Web Services*

## Summary
<a name="use-network-firewall-to-capture-the-dns-domain-names-from-the-server-name-indication-sni-for-outbound-traffic-summary"></a>

This pattern shows you how to use AWS Network Firewall to collect the DNS domain names that are provided by the Server Name Indication (SNI) in the HTTPS header of your outbound network traffic. Network Firewall is a managed service that makes it easy to deploy critical network protections for Amazon Virtual Private Cloud (Amazon VPC), including the ability to secure outbound traffic with a firewall that blocks packets that fail to meet certain security requirements. Securing outbound traffic to specific DNS domain names is called egress filtering, which is the practice of monitoring and potentially restricting the flow of outbound information from one network to another.

After you capture the SNI data that passes through Network Firewall, you can use Amazon CloudWatch Logs and AWS Lambda to publish the data to an Amazon Simple Notification Service (Amazon SNS) topic that generates email notifications. The email notifications include the server name and other relevant SNI information. Additionally, you can use the output of this pattern to allow or restrict outbound traffic by domain name in the SNI by using firewall rules. For more information, see [Working with stateful rule groups in AWS Network Firewall](https://docs.aws.amazon.com/network-firewall/latest/developerguide/stateful-rule-groups-ips.html) in the Network Firewall documentation.
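To make the parsing concrete, the following minimal Python sketch pulls the domain name out of a single alert log entry shaped like the ones Network Firewall writes (the sample entry and its values are hypothetical, and this is an illustration rather than the pattern's actual Lambda code):

```
import json

def extract_sni(entry):
    """Return the domain name from a Network Firewall alert log entry, or None."""
    event = entry.get("event", {})
    if "tls" in event:
        # HTTPS traffic: the domain name comes from the TLS SNI field
        return event["tls"].get("sni")
    if "http" in event:
        # Plain HTTP traffic: fall back to the Host header
        return event["http"].get("hostname")
    return None

# Hypothetical entry shaped like a Network Firewall TLS alert
sample = json.loads(
    '{"firewall_name": "demo", "event": {"app_proto": "tls", "tls": {"sni": "aws.amazon.com"}}}'
)
print(extract_sni(sample))  # aws.amazon.com
```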

## Prerequisites and limitations
<a name="use-network-firewall-to-capture-the-dns-domain-names-from-the-server-name-indication-sni-for-outbound-traffic-prereqs"></a>

**Prerequisites**
+ An active AWS account.
+ [AWS Command Line Interface (AWS CLI)](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html) version 2, installed and configured on Linux, macOS, or Windows.
+ [Network Firewall](https://docs.aws.amazon.com/network-firewall/latest/developerguide/getting-started.html), set up and configured in Amazon VPC and in use for inspecting outbound traffic. You can configure Network Firewall to use any of the following VPC configurations:
  + [Simple single zone architecture with an internet gateway](https://docs.aws.amazon.com/network-firewall/latest/developerguide/arch-single-zone-igw.html)
  + [Multi zone architecture with an internet gateway](https://docs.aws.amazon.com/network-firewall/latest/developerguide/arch-two-zone-igw.html)
  + [Architecture with an internet gateway and a NAT gateway](https://docs.aws.amazon.com/network-firewall/latest/developerguide/arch-igw-ngw.html)

## Architecture
<a name="use-network-firewall-to-capture-the-dns-domain-names-from-the-server-name-indication-sni-for-outbound-traffic-architecture"></a>

The following diagram shows how to use Network Firewall to collect SNI data from outbound network traffic, and then publish that data to an SNS topic by using CloudWatch Logs and Lambda.

![\[Workflow between Network Firewall, CloudWatch Logs, Lambda, and Amazon SNS.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/9eb1e9e3-f459-4ea3-8e6d-60fec6b7ea98/images/1094b5f6-33e3-42bc-8fb8-7409b5b826b0.png)


The diagram shows the following workflow:

1. Network Firewall collects domain names from the SNI data in the HTTPS header of your outbound network traffic.

1. CloudWatch Logs monitors the SNI data and invokes a Lambda function whenever outbound network traffic passes through Network Firewall.

1. The Lambda function reads the SNI data captured by CloudWatch Logs and then publishes that data to an SNS topic.

1. The SNS topic sends you an email notification that includes the SNI data.
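Step 3 depends on the shape of the payload that CloudWatch Logs delivers to the Lambda function: it arrives base64-encoded and gzip-compressed under `event['awslogs']['data']`. The following minimal sketch round-trips a synthetic payload to show the decode step (the log group name and payload here are made up for illustration):

```
import base64
import gzip
import json

def decode_cwl_event(event):
    """Decode the payload that CloudWatch Logs delivers to a Lambda function."""
    # The subscription payload is base64-encoded and gzip-compressed
    payload = base64.b64decode(event["awslogs"]["data"])
    return json.loads(gzip.decompress(payload))

# Build a synthetic event to demonstrate the round trip
log_data = {"logGroup": "/nfw/alerts", "logEvents": [{"message": "{}"}]}
encoded = base64.b64encode(gzip.compress(json.dumps(log_data).encode())).decode()
decoded = decode_cwl_event({"awslogs": {"data": encoded}})
print(decoded["logGroup"])  # /nfw/alerts
```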

**Automation and scale**
+ You can use [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) to create this pattern by using [infrastructure as code](https://docs.aws.amazon.com/whitepapers/latest/introduction-devops-aws/infrastructure-as-code.html).

**Technology stack**
+ Amazon CloudWatch Logs
+ Amazon SNS
+ Amazon VPC
+ AWS Lambda 
+ AWS Network Firewall

## Tools
<a name="use-network-firewall-to-capture-the-dns-domain-names-from-the-server-name-indication-sni-for-outbound-traffic-tools"></a>

**AWS services**
+ [Amazon CloudWatch Logs](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html) – You can use Amazon CloudWatch Logs to monitor, store, and access your log files from Amazon Elastic Compute Cloud (Amazon EC2) instances, AWS CloudTrail, Amazon Route 53, and other sources.
+ [Amazon SNS](https://docs.aws.amazon.com/sns/latest/dg/welcome.html) – Amazon Simple Notification Service (Amazon SNS) is a managed service that provides message delivery from publishers to subscribers (also known as producers and consumers).
+ [Amazon VPC](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html) – Amazon Virtual Private Cloud (Amazon VPC) provisions a logically isolated section of the AWS Cloud where you can launch AWS resources in a virtual network that you've defined. This virtual network closely resembles a traditional network that you'd operate in your own data center, with the benefits of using the scalable infrastructure of AWS.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) – AWS Lambda is a compute service that lets you run code without provisioning or managing servers.
+ [AWS Network Firewall](https://docs.aws.amazon.com/network-firewall/latest/developerguide/what-is-aws-network-firewall.html) – AWS Network Firewall is a managed service that makes it easy to deploy essential network protections for all of your Amazon VPCs.

## Epics
<a name="use-network-firewall-to-capture-the-dns-domain-names-from-the-server-name-indication-sni-for-outbound-traffic-epics"></a>

### Create a CloudWatch log group for Network Firewall
<a name="create-a-cloudwatch-log-group-for-network-firewall"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a CloudWatch log group. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/use-network-firewall-to-capture-the-dns-domain-names-from-the-server-name-indication-sni-for-outbound-traffic.html)For more information, see [Working with log groups and log streams](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Working-with-log-groups-and-streams.html) in the CloudWatch documentation. | Cloud administrator | 

### Create an SNS topic and subscription
<a name="create-an-sns-topic-and-subscription"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an SNS topic. | To create an SNS topic, follow the instructions in the [Amazon SNS documentation](https://docs.aws.amazon.com/sns/latest/dg/sns-create-topic.html#create-topic-aws-console). | Cloud administrator | 
| Subscribe an endpoint to the SNS topic. | To subscribe an email address as an endpoint to the SNS topic that you created, follow the instructions in the [Amazon SNS documentation](https://docs.aws.amazon.com/sns/latest/dg/sns-create-subscribe-endpoint-to-topic.html). For **Protocol**, choose [Email/Email-JSON](https://docs.aws.amazon.com/sns/latest/dg/sns-email-notifications.html). You can also choose a different endpoint based on your requirements. | Cloud administrator | 

### Set up logging in Network Firewall
<a name="set-up-logging-in-network-firewall"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Enable firewall logging. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/use-network-firewall-to-capture-the-dns-domain-names-from-the-server-name-indication-sni-for-outbound-traffic.html)For more information about using CloudWatch Logs as a log destination for Network Firewall, see [Amazon CloudWatch Logs](https://docs.aws.amazon.com/network-firewall/latest/developerguide/logging-cw-logs.html) in the Network Firewall documentation.  | Cloud administrator | 

### Set up a stateful rule in Network Firewall
<a name="set-up-a-stateful-rule-in-network-firewall"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a stateful rule. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/use-network-firewall-to-capture-the-dns-domain-names-from-the-server-name-indication-sni-for-outbound-traffic.html) | Cloud administrator | 
| Associate the stateful rule to Network Firewall. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/use-network-firewall-to-capture-the-dns-domain-names-from-the-server-name-indication-sni-for-outbound-traffic.html) | Cloud administrator | 

### Create a Lambda function to read the logs
<a name="create-a-lambda-function-to-read-the-logs"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the code for the Lambda function. | In an integrated development environment (IDE) that can read the CloudWatch Logs event from Network Firewall for outbound traffic, paste in the following Python 3 code and replace `<SNS-topic-ARN>` with your value:<pre>import json<br />import gzip<br />import base64<br />import boto3<br />sns_client = boto3.client('sns')<br />def lambda_handler(event, context):<br />    decoded_event = json.loads(gzip.decompress(base64.b64decode(event['awslogs']['data'])))<br />    body = '''<br />    {filtermatch}<br />    '''.format(<br />        loggroup=decoded_event['logGroup'],<br />        logstream=decoded_event['logStream'],<br />        filtermatch=decoded_event['logEvents'][0]['message'],<br />    )<br />    print(body)<br />    filterMatch = json.loads(body)<br />    data = []<br />    if 'http' in filterMatch['event']:<br />        data.append(filterMatch['event']['http']['hostname'])<br />    elif 'tls' in filterMatch['event']:<br />        data.append(filterMatch['event']['tls']['sni'])<br />    result = 'Domain accessed ' + 1*' ' + (data[0]) + 1*' ' 'via AWS Network Firewall ' + 1*' '  + (filterMatch['firewall_name'])<br />    print(result)<br />    message = {'ServerName': result}<br />    send_to_sns = sns_client.publish(<br />        TargetArn=<SNS-topic-ARN>,          #Replace with the SNS topic ARN<br />        Message=json.dumps({'default': json.dumps(message),<br />                        'sms': json.dumps(message),<br />                        'email': json.dumps(message)}),<br />        Subject='Server Name passed through the Network Firewall',<br />        MessageStructure='json'<br />    )</pre>This code sample parses the CloudWatch Logs content and captures the server name provided by the SNI in the HTTPS header. | App developer | 
| Create the Lambda function. | To create the Lambda function, follow the instructions in the [Lambda documentation](https://docs.aws.amazon.com/lambda/latest/dg/getting-started.html#getting-started-create-function) and choose **Python 3.9** for **Runtime**. | Cloud administrator | 
| Add the code to the Lambda function. | To add your Python code to the Lambda function that you created earlier, follow the instructions in the [Lambda documentation](https://docs.aws.amazon.com/lambda/latest/dg/configuration-function-zip.html#configuration-function-update). | Cloud administrator | 
| Add CloudWatch Logs as a trigger to the Lambda function. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/use-network-firewall-to-capture-the-dns-domain-names-from-the-server-name-indication-sni-for-outbound-traffic.html)For more information, see [Using Lambda with CloudWatch Logs](https://docs.aws.amazon.com/lambda/latest/dg/services-cloudwatchlogs.html) in the Lambda documentation. | Cloud administrator | 
| Add SNS publish permissions. | Add the **sns:Publish** permission to the Lambda execution role, so that Lambda can make API calls to publish messages to SNS.  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/use-network-firewall-to-capture-the-dns-domain-names-from-the-server-name-indication-sni-for-outbound-traffic.html)<pre>{<br />    "Version": "2012-10-17",		 	 	 <br />    "Statement": [<br />        {<br />            "Sid": "AllowSNSPublish",<br />            "Effect": "Allow",<br />            "Action": [<br />                "sns:GetTopicAttributes",<br />                "sns:Subscribe",<br />                "sns:Unsubscribe",<br />                "sns:Publish"<br />            ],<br />            "Resource": "*"<br />        }<br />    ]<br />}</pre> | Cloud administrator | 

### Test the functionality of your SNS notification
<a name="test-the-functionality-of-your-sns-notification"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Send traffic through Network Firewall. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/use-network-firewall-to-capture-the-dns-domain-names-from-the-server-name-indication-sni-for-outbound-traffic.html)<pre>{<br />    "Type": "Notification",<br />    "MessageId": "<messageID>",<br />    "TopicArn": "arn:aws:sns:us-west-2:123456789:testSNSTopic",<br />    "Subject": "Server Name passed through the Network Firewall",<br />    "Message": "{\"ServerName\": \"Domain 'aws.amazon.com' accessed via AWS Network Firewall 'AWS-Network-Firewall-Multi-AZ-firewall\"}",<br />    "Timestamp": "2022-03-22T04:10:04.217Z",<br />    "SignatureVersion": "1",<br />    "Signature": "<Signature>",<br />    "SigningCertURL": "<SigningCertUrl>",<br />    "UnsubscribeURL": "<UnsubscribeURL>"<br />}</pre>Then, check the Network Firewall alert log in Amazon CloudWatch by following the instructions in the [Amazon CloudWatch documentation](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/SearchDataFilterPattern.html). The alert log shows the following output:<pre>{<br />    "firewall_name": "AWS-Network-Firewall-Multi-AZ-firewall",<br />    "availability_zone": "us-east-2b",<br />    "event_timestamp": "<event timestamp>",<br />    "event": {<br />        "timestamp": "2021-03-22T04:10:04.214222+0000",<br />        "flow_id": <flow ID>,<br />        "event_type": "alert",<br />        "src_ip": "10.1.3.76",<br />        "src_port": 22761,<br />        "dest_ip": "99.86.59.73",<br />        "dest_port": 443,<br />        "proto": "TCP",<br />        "alert": {<br />            "action": "allowed",<br />            "signature_id": 2,<br />            "rev": 0,<br />            "signature": "",<br />            "category": "",<br />            "severity": 3<br />        },<br />        "tls": {<br />            "subject": "CN=aws.amazon.com",<br />            "issuerdn": "C=US, O=Amazon, OU=Server CA 1B, CN=Amazon",<br />            "serial": "<serial number>",<br />            "fingerprint": "<fingerprint ID>",<br />            "sni": "aws.amazon.com",<br />            "version": "TLS 1.2",<br />            "notbefore": "2020-09-30T00:00:00",<br />            "notafter": "2021-09-23T12:00:00",<br />            "ja3": {},<br />            "ja3s": {}<br />        },<br />        "app_proto": "tls"<br />    }<br />}</pre> | Test engineer | 

# Use Terraform to automatically enable Amazon GuardDuty for an organization
<a name="use-terraform-to-automatically-enable-amazon-guardduty-for-an-organization"></a>

*Aarthi Kannan, Amazon Web Services*

## Summary
<a name="use-terraform-to-automatically-enable-amazon-guardduty-for-an-organization-summary"></a>

Amazon GuardDuty continuously monitors your Amazon Web Services (AWS) accounts and uses threat intelligence to identify unexpected and potentially malicious activity within your AWS environment. Manually enabling GuardDuty for multiple accounts or organizations, across multiple AWS Regions, or through the AWS Management Console can be cumbersome. You can automate the process by using an infrastructure as code (IaC) tool, such as Terraform, which can provision and manage multi-account, multi-Region services and resources in the cloud.

AWS recommends using AWS Organizations to set up and manage multiple accounts in GuardDuty. This pattern adheres to that recommendation. One benefit of this approach is that, when new accounts are created or added to the organization, GuardDuty will be auto-enabled in these accounts for all supported Regions, without the need for manual intervention.

This pattern demonstrates how to use HashiCorp Terraform to enable Amazon GuardDuty for three or more AWS accounts in an organization. The sample code provided with this pattern does the following:
+ Enables GuardDuty for all AWS accounts that are current members of the target organization in AWS Organizations
+ Turns on the *Auto-Enable* feature in GuardDuty, which automatically enables GuardDuty for any accounts that are added to the target organization in the future
+ Allows you to select the Regions where you want to enable GuardDuty
+ Uses the organization’s security account as the GuardDuty delegated administrator
+ Creates an Amazon Simple Storage Service (Amazon S3) bucket in the logging account and configures GuardDuty to publish the aggregated findings from all accounts in this bucket
+ Assigns a lifecycle policy that transitions findings from the S3 bucket to Amazon S3 Glacier Flexible Retrieval storage after 365 days, by default
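The default lifecycle behavior in the last bullet corresponds to an S3 lifecycle configuration along these lines (a sketch only; the rule ID is hypothetical, and the sample code's actual rule may differ). Note that the `GLACIER` storage class in the S3 API corresponds to S3 Glacier Flexible Retrieval:

```
{
  "Rules": [
    {
      "ID": "archive-guardduty-findings",
      "Status": "Enabled",
      "Filter": {},
      "Transitions": [
        { "Days": 365, "StorageClass": "GLACIER" }
      ]
    }
  ]
}
```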

You can manually run this sample code, or you can integrate it into your continuous integration and continuous delivery (CI/CD) pipeline.

**Target audience**

This pattern is recommended for users who have experience with Terraform, Python, GuardDuty, and AWS Organizations.

## Prerequisites and limitations
<a name="use-terraform-to-automatically-enable-amazon-guardduty-for-an-organization-prereqs"></a>

**Prerequisites**
+ An active AWS account.
+ An organization is set up in AWS Organizations, and it contains at least the following three accounts:
  + **A management account** – This is the account from which you deploy the Terraform code, either standalone or as part of the CI/CD pipeline. The Terraform state is also stored in this account.
  + **A security account** – This account is used as the GuardDuty delegated administrator. For more information, see [Important considerations for GuardDuty delegated administrators](https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_organizations.html#delegated_admin_important) (GuardDuty documentation).
  + **A logging account** – This account contains the S3 bucket where GuardDuty publishes the aggregated findings from all member accounts.

  For more information about how to set up the organization with the required configuration, see [Create an account structure](https://www.wellarchitectedlabs.com/cost/100_labs/100_1_aws_account_setup/2_account_structure/) (AWS Well-Architected Labs).
+ An Amazon S3 bucket and an Amazon DynamoDB table that serve as a remote backend to store Terraform’s state in the management account. For more information about using remote backends for the Terraform state, see [S3 Backends](https://www.terraform.io/language/settings/backends/s3) (Terraform documentation). For a code sample that sets up remote state management with an S3 backend, see [remote-state-s3-backend](https://registry.terraform.io/modules/nozaq/remote-state-s3-backend/aws/latest) (Terraform Registry). Note the following requirements:
  + The S3 bucket and DynamoDB table must be in the same Region.
  + When creating the DynamoDB table, the partition key must be `LockID` (case-sensitive), and the partition key type must be **String**. All other table settings must be at their default values. For more information, see [About primary keys](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.CoreComponents.html#HowItWorks.CoreComponents.PrimaryKey) and [Create a table](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/getting-started-step-1.html) (DynamoDB documentation).
+ An S3 bucket that will be used to store access logs for the S3 bucket in which GuardDuty will publish findings. For more information, see [Enabling Amazon S3 server access logging](https://docs.aws.amazon.com/AmazonS3/latest/userguide/enable-server-access-logging.html) (Amazon S3 documentation). If you’re deploying to an AWS Control Tower landing zone, you can reuse the S3 bucket in the **log archive** account for this purpose. 
+ Terraform version 0.14.6 or later is installed and configured. For more information, see [Get Started – AWS](https://learn.hashicorp.com/collections/terraform/aws-get-started) (Terraform documentation).
+ Python version 3.9.6 or later is installed and configured. For more information, see [Source releases](https://www.python.org/downloads/source/) (Python website).
+ AWS SDK for Python (Boto3) is installed. For more information, see [Installation](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/quickstart.html#installation) (Boto3 documentation).
+ jq is installed and configured. For more information, see [Download jq](https://stedolan.github.io/jq/download/) (jq documentation).
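As a sketch of the DynamoDB requirement above, the lock table can be created with a call like the following. This is not part of the pattern's sample code: `dynamodb_client` is assumed to be a Boto3 DynamoDB client, the table name is illustrative, and on-demand billing is used so that no throughput settings need to be specified.

```python
def create_lock_table(dynamodb_client, table_name="terraform-state-lock"):
    """Create a Terraform state-lock table with the required LockID (String) partition key.

    dynamodb_client is assumed to behave like boto3.client("dynamodb");
    the table name is illustrative.
    """
    return dynamodb_client.create_table(
        TableName=table_name,
        # The partition key must be LockID (case-sensitive) with type String
        AttributeDefinitions=[{"AttributeName": "LockID", "AttributeType": "S"}],
        KeySchema=[{"AttributeName": "LockID", "KeyType": "HASH"}],
        BillingMode="PAY_PER_REQUEST",
    )
```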

**Limitations**
+ This pattern supports macOS and Amazon Linux 2 operating systems. This pattern has not been tested for use in Windows operating systems.
**Note**  
Amazon Linux 2 is nearing end of support. For more information, see the [Amazon Linux 2 FAQs](https://aws.amazon.com/amazon-linux-2/faqs/).
+ GuardDuty must not already be enabled in any of the accounts, in any of the target Regions.
+ The IaC solution in this pattern does not deploy the prerequisites.
+ This pattern is designed for an AWS landing zone that adheres to the following best practices:
  + The landing zone was created by using AWS Control Tower.
  + Separate AWS accounts are used for security and logging.

**Product versions**
+ Terraform version 0.14.6 or later. The sample code has been tested for version 1.2.8.
+ Python version 3.9.6 or later.
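The minimum-version requirements above can be checked with a small, hypothetical helper that compares dotted numeric versions; it is not part of the sample code.

```python
def _as_tuple(version: str):
    """Split a dotted numeric version string into a comparable tuple of ints."""
    return tuple(int(part) for part in version.split("."))

def version_ge(installed: str, required: str) -> bool:
    """Return True if an installed version meets a minimum (e.g., 1.2.8 >= 0.14.6)."""
    return _as_tuple(installed) >= _as_tuple(required)
```

For example, `version_ge("1.2.8", "0.14.6")` confirms that the tested Terraform version meets the pattern's minimum.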

## Architecture
<a name="use-terraform-to-automatically-enable-amazon-guardduty-for-an-organization-architecture"></a>

This section gives a high-level overview of this solution and the architecture established by the sample code. The following diagram shows the resources deployed across the various accounts in the organization, within a single AWS Region.

![\[Architecture diagram showing resources in management, security, logging, and member accounts.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/c9b68627-b68e-47a6-9933-d0f36ea10ae2/images/86193749-eef9-4d63-8a7f-daa0cd03fbfe.png)


1. Terraform creates the **GuardDutyTerraformOrgRole** AWS Identity and Access Management (IAM) role in the security account and the logging account.

1. Terraform creates an S3 bucket in the default AWS Region in the logging account. This bucket is used as the publishing destination to aggregate all GuardDuty findings across all Regions and from all accounts in the organization. Terraform also creates an AWS Key Management Service (AWS KMS) key in the security account that is used to encrypt the findings in the S3 bucket and configures automatic archiving of findings from the S3 bucket into S3 Glacier Flexible Retrieval storage.

1. From the management account, Terraform designates the security account as the delegated administrator for GuardDuty. This means that the security account now manages the GuardDuty service for all member accounts, including the management account. Individual member accounts cannot suspend or disable GuardDuty by themselves.

1. Terraform creates the GuardDuty detector in the security account, for the GuardDuty delegated administrator.

1. If it is not already enabled, Terraform enables S3 protection in GuardDuty. For more information, see [Amazon S3 protection in Amazon GuardDuty](https://docs.aws.amazon.com/guardduty/latest/ug/s3-protection.html) (GuardDuty documentation).

1. Terraform enrolls all current, active member accounts in the organization as GuardDuty members.

1. Terraform configures the GuardDuty delegated administrator to publish the aggregated findings from all member accounts to the S3 bucket in the logging account.

1. Terraform repeats steps 3 through 7 for each AWS Region you choose.
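Steps 3 and 8 can be sketched as a per-Region loop over the GuardDuty API. This is an illustration, not the pattern's actual Terraform code: `client_factory` stands in for something like `lambda service, region: boto3.client(service, region_name=region)`, and error handling is omitted.

```python
def delegate_guardduty_admin(client_factory, regions, security_account_id):
    """Designate the security account as the GuardDuty delegated administrator
    in each selected Region (a sketch of steps 3 and 8; error handling omitted).
    """
    for region in regions:
        guardduty = client_factory("guardduty", region)
        # EnableOrganizationAdminAccount must be called from the management account;
        # it fails if a delegated administrator is already set in that Region
        guardduty.enable_organization_admin_account(AdminAccountId=security_account_id)
```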

**Automation and scale**

The sample code provided is modularized so that you can integrate it into your CI/CD pipeline for automated deployment.

## Tools
<a name="use-terraform-to-automatically-enable-amazon-guardduty-for-an-organization-tools"></a>

**AWS services**
+ [Amazon DynamoDB](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Introduction.html) is a fully managed NoSQL database service that provides fast, predictable, and scalable performance.
+ [Amazon GuardDuty](https://docs.aws.amazon.com/guardduty/latest/ug/what-is-guardduty.html) is a continuous security monitoring service that analyzes and processes logs to identify unexpected and potentially unauthorized activity in your AWS environment.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.
+ [AWS Key Management Service (AWS KMS)](https://docs.aws.amazon.com/kms/latest/developerguide/overview.html) helps you create and control cryptographic keys to protect your data.
+ [AWS Organizations](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_introduction.html) is an account management service that helps you consolidate multiple AWS accounts into an organization that you create and centrally manage.
+ [Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) is a cloud-based object storage service that helps you store, protect, and retrieve any amount of data.
+ [AWS SDK for Python (Boto3)](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/quickstart.html) is a software development kit that helps you integrate your Python application, library, or script with AWS services.

**Other tools and services**
+ [HashiCorp Terraform](https://www.terraform.io/docs) is a command-line interface application that helps you use code to provision and manage cloud infrastructure and resources.
+ [Python](https://www.python.org/) is a general-purpose programming language.
+ [jq](https://stedolan.github.io/jq/download/) is a command-line processor that helps you work with JSON files.

**Code repository**

The code for this pattern is available on GitHub, in the [amazon-guardduty-for-aws-organizations-with-terraform](https://github.com/aws-samples/amazon-guardduty-for-aws-organizations-with-terraform) repository.

## Epics
<a name="use-terraform-to-automatically-enable-amazon-guardduty-for-an-organization-epics"></a>

### Enable GuardDuty in the organization
<a name="enable-guardduty-in-the-organization"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clone the repository. | In a Bash shell, run the following command to clone the [amazon-guardduty-for-aws-organizations-with-terraform](https://github.com/aws-samples/amazon-guardduty-for-aws-organizations-with-terraform) repository from GitHub. You can copy the full command, including the repository URL, from *Clone the repository* in the [Additional information](#use-terraform-to-automatically-enable-amazon-guardduty-for-an-organization-additional) section.<pre>git clone <github-repository-url></pre> | DevOps engineer | 
| Edit the Terraform configuration file. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/use-terraform-to-automatically-enable-amazon-guardduty-for-an-organization.html) | DevOps engineer, General AWS, Terraform, Python | 
| Generate CloudFormation templates for new IAM roles.  | This pattern includes an IaC solution to create two CloudFormation templates. These templates create two IAM roles that Terraform uses during the setup process. These templates adhere to the security best practice of [least-privilege permissions](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#grant-least-privilege).[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/use-terraform-to-automatically-enable-amazon-guardduty-for-an-organization.html) | DevOps engineer, General AWS | 
| Create the IAM roles. | Following the instructions in [Creating a stack](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-console-create-stack.html) (CloudFormation documentation), do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/use-terraform-to-automatically-enable-amazon-guardduty-for-an-organization.html) | DevOps engineer, General AWS | 
| Assume the IAM role in the management account. | As a security best practice, we recommend that you assume the new **management-account-role** IAM role before proceeding. In the AWS Command Line Interface (AWS CLI), enter the command in *Assume the management account IAM role* in the [Additional Information](#use-terraform-to-automatically-enable-amazon-guardduty-for-an-organization-additional) section. | DevOps engineer, General AWS | 
| Run the setup script. | In the repository `root` folder, run the following command to start the setup script.<pre>bash scripts/full-setup.sh</pre>The **full-setup.sh** script performs the following actions:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/use-terraform-to-automatically-enable-amazon-guardduty-for-an-organization.html) | DevOps engineer, Python | 

### (Optional) Disable GuardDuty in the organization
<a name="optional-disable-guardduty-in-the-organization"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Run the clean-up script. | If you used this pattern to enable GuardDuty for the organization and want to disable it, in the repository `root` folder, run the following command to start the **cleanup-gd.sh** script.<pre>bash scripts/cleanup-gd.sh</pre>This script disables GuardDuty in the target organization, removes the deployed resources, and restores the organization to the state it was in before you used Terraform to enable GuardDuty. The script does not remove the Terraform state files or lock files from the local and remote backends; if you need to remove them, you must do so manually. It also does not delete the imported organization or the accounts managed by it, and it does not disable trusted access for GuardDuty. | DevOps engineer, General AWS, Terraform, Python | 
| Remove IAM roles. | Delete the stacks that were created with the **role-to-assume-for-role-creation.yaml** and **management-account-role.yaml** CloudFormation templates. For more information, see [Deleting a stack](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-console-delete-stack.html) (CloudFormation documentation). | DevOps engineer, General AWS | 

## Related resources
<a name="use-terraform-to-automatically-enable-amazon-guardduty-for-an-organization-resources"></a>

*AWS documentation*
+ [Managing multiple accounts](https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_accounts.html) (GuardDuty documentation)
+ [Granting least privilege](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#grant-least-privilege) (IAM documentation)

*AWS marketing*
+ [Amazon GuardDuty](https://aws.amazon.com/guardduty/)
+ [AWS Organizations](https://aws.amazon.com/organizations/)

*Other resources*
+ [Terraform](https://www.terraform.io/)
+ [Terraform CLI Documentation](https://www.terraform.io/cli)

## Additional information
<a name="use-terraform-to-automatically-enable-amazon-guardduty-for-an-organization-additional"></a>

**Clone the repository**

Run the following command to clone the GitHub repository.

```
git clone https://github.com/aws-samples/amazon-guardduty-for-aws-organizations-with-terraform
```

**Assume the management account IAM role**

To assume the IAM role in the management account, run the following command. Replace `<IAM role ARN>` with the ARN of the IAM role.

```
export ROLE_CREDENTIALS=$(aws sts assume-role --role-arn <IAM role ARN> --role-session-name AWSCLI-Session --output json)
export AWS_ACCESS_KEY_ID=$(echo "$ROLE_CREDENTIALS" | jq -r .Credentials.AccessKeyId)
export AWS_SECRET_ACCESS_KEY=$(echo "$ROLE_CREDENTIALS" | jq -r .Credentials.SecretAccessKey)
export AWS_SESSION_TOKEN=$(echo "$ROLE_CREDENTIALS" | jq -r .Credentials.SessionToken)
```
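If you prefer Boto3 over the AWS CLI, the same steps can be sketched in Python. This is an illustration, not part of the pattern's sample code: `sts_client` is assumed to behave like `boto3.client("sts")`.

```python
import os

def assume_role_env(sts_client, role_arn, session_name="AWSCLI-Session"):
    """Export temporary credentials for an assumed role, mirroring the shell commands above.

    sts_client is assumed to behave like boto3.client("sts").
    """
    response = sts_client.assume_role(RoleArn=role_arn, RoleSessionName=session_name)
    creds = response["Credentials"]
    # Subsequent AWS SDK and CLI calls in this process pick up these variables
    os.environ["AWS_ACCESS_KEY_ID"] = creds["AccessKeyId"]
    os.environ["AWS_SECRET_ACCESS_KEY"] = creds["SecretAccessKey"]
    os.environ["AWS_SESSION_TOKEN"] = creds["SessionToken"]
    return creds
```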

# Verify operational best practices for PCI DSS 4.0 by using AWS Config
<a name="verify-ops-best-practices-pci-dss-4"></a>

*Tala Qraitem and Alex Goff, Amazon Web Services*

## Summary
<a name="verify-ops-best-practices-pci-dss-4-summary"></a>

The [Payment Card Industry Data Security Standard (PCI DSS)](https://www.pcisecuritystandards.org/standards/pci-dss/) outlines essential technical and operational protocols to help safeguard payment data. PCI DSS was developed to encourage and enhance data security for payment card accounts. It also facilitates the global adoption of consistent security measures. Although it’s specifically designed for environments with payment card account data, you can use PCI DSS to help protect against threats and secure other elements in the payment ecosystem.

PCI DSS version 4.0 was released to address evolving requirements, provide clarification or additional guidance, and improve the structure and format of the standard. For more information about the changes, see [Summary of changes from PCI DSS version 3.2.1 to 4.0](https://listings.pcisecuritystandards.org/documents/PCI-DSS-Summary-of-Changes-v3_2_1-to-v4_0.pdf).

An AWS Config [conformance pack](https://docs.aws.amazon.com/config/latest/developerguide/conformance-packs.html) is a collection of AWS Config rules and remediation actions that help you create security, operational, or cost-optimization governance checks. You can deploy a conformance pack as a single entity in an AWS account and AWS Region, or you can deploy across an organization in AWS Organizations.

The conformance packs for PCI DSS version 4.0 augment and build upon the conformance pack for version 3.2.1. The rules in the conformance pack map to the rules in the standard. For more information, see the mapping provided in the *Attachments* section. You can choose between two versions of this conformance pack: one that includes [global resource types](https://docs.aws.amazon.com/config/latest/developerguide/select-resources.html#select-resources-all) and one that excludes them. 

**Important**  
Conformance packs are not designed to fully ensure compliance with a specific governance or compliance standard. You are responsible for making your own assessment of whether usage meets applicable legal and regulatory requirements.

## Prerequisites and limitations
<a name="verify-ops-best-practices-pci-dss-4-prereqs"></a>

**Prerequisites**
+ Have an active AWS account.
+ [Set up AWS Config](https://docs.aws.amazon.com/config/latest/developerguide/gs-console.html).
+ Meet the [prerequisites for conformance packs](https://docs.aws.amazon.com/config/latest/developerguide/cpack-prerequisites.html).
+ Deploy the [PCI DSS version 3.2.1 conformance pack](https://github.com/awslabs/aws-config-rules/blob/master/aws-config-conformance-packs/Operational-Best-Practices-for-PCI-DSS.yaml).
+ Have permissions to access AWS Config and manage conformance packs. For an example policy, see the [Additional information](#verify-ops-best-practices-pci-dss-4-additional) section of this pattern.

**Limitations**
+ Your AWS account has default quotas, formerly referred to as *limits*, for each AWS service. Unless otherwise noted, each quota is Region-specific. You can request increases for some quotas, but not all quotas can be increased. Make sure that you are familiar with the [AWS Config service limits](https://docs.aws.amazon.com/config/latest/developerguide/configlimits.html), including the limits for single account conformance packs and organization conformance packs.
+ The version of this conformance pack that includes global resource types is intended for deployment only in the `us-east-1` Region.
+ The version of this conformance pack that excludes global resource types is intended for deployment only in the following Regions:
  + `ap-east-1`
  + `ap-south-1`
  + `ap-northeast-2`
  + `ap-southeast-1`
  + `ap-southeast-2`
  + `ap-northeast-1`
  + `ca-central-1`
  + `eu-central-1`
  + `eu-west-1`
  + `eu-west-2`
  + `eu-west-3`
  + `eu-north-1`
  + `sa-east-1`
  + `us-east-2`
  + `us-west-1`
  + `us-west-2`
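If you script the deployment, the Region restrictions above can be encoded in a small helper. The Region lists are copied from the limitations in this pattern; the helper itself is a sketch, not part of the conformance pack repository.

```python
# Region lists copied from the limitations above.
GLOBAL_PACK_REGIONS = {"us-east-1"}
REGIONAL_PACK_REGIONS = {
    "ap-east-1", "ap-south-1", "ap-northeast-2", "ap-southeast-1",
    "ap-southeast-2", "ap-northeast-1", "ca-central-1", "eu-central-1",
    "eu-west-1", "eu-west-2", "eu-west-3", "eu-north-1",
    "sa-east-1", "us-east-2", "us-west-1", "us-west-2",
}

def pci_dss_v4_template(region):
    """Return the PCI DSS v4.0 template variant to deploy in a Region, or None if unsupported."""
    if region in GLOBAL_PACK_REGIONS:
        return "Operational-Best-Practices-for-PCI-DSS-v4.0-including-global-resourcetypes.yaml"
    if region in REGIONAL_PACK_REGIONS:
        return "Operational-Best-Practices-for-PCI-DSS-v4.0-excluding-global-resourcetypes.yaml"
    return None
```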

## Tools
<a name="verify-ops-best-practices-pci-dss-4-tools"></a>

**AWS services**
+ [AWS Config](https://docs.aws.amazon.com/config/latest/developerguide/WhatIsConfig.html) provides a detailed view of the resources in your AWS account and how they’re configured. It helps you identify how resources are related to one another and how their configurations have changed over time.
+ [AWS Systems Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/what-is-systems-manager.html) helps you manage your applications and infrastructure running in the AWS Cloud. It simplifies application and resource management, shortens the time to detect and resolve operational problems, and helps you manage your AWS resources securely at scale.

**Code repository**

The conformance packs are located in the [AWS Config conformance packs](https://github.com/awslabs/aws-config-rules/tree/master/aws-config-conformance-packs) GitHub repository. This repository contains the following templates related to PCI DSS version 4.0:
+ [Operational-Best-Practices-for-PCI-DSS-v4.0-including-global-resourcetypes.yaml](https://github.com/awslabs/aws-config-rules/blob/master/aws-config-conformance-packs/Operational-Best-Practices-for-PCI-DSS-v4.0-including-global-resourcetypes.yaml)
+ [Operational-Best-Practices-for-PCI-DSS-v4.0-excluding-global-resourcetypes.yaml](https://github.com/awslabs/aws-config-rules/blob/master/aws-config-conformance-packs/Operational-Best-Practices-for-PCI-DSS-v4.0-excluding-global-resourcetypes.yaml)

## Epics
<a name="verify-ops-best-practices-pci-dss-4-epics"></a>

### Deploy and manage the conformance pack
<a name="deploy-and-manage-the-conformance-pack"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Download the conformance pack. | If you're deploying the conformance pack in the `us-east-1` Region, download the [Operational-Best-Practices-for-PCI-DSS-v4.0-including-global-resourcetypes.yaml](https://github.com/awslabs/aws-config-rules/blob/master/aws-config-conformance-packs/Operational-Best-Practices-for-PCI-DSS-v4.0-including-global-resourcetypes.yaml) template. If you're deploying the conformance pack in a different Region, download the [Operational-Best-Practices-for-PCI-DSS-v4.0-excluding-global-resourcetypes.yaml](https://github.com/awslabs/aws-config-rules/blob/master/aws-config-conformance-packs/Operational-Best-Practices-for-PCI-DSS-v4.0-excluding-global-resourcetypes.yaml) template. | DevOps engineer | 
| (Optional) Modify the conformance pack. | You can modify the conformance pack template for the unique needs of your organization. For example, you can create custom remediation actions. For more information about how to create and modify templates, see [Creating templates for custom conformance packs](https://docs.aws.amazon.com/config/latest/developerguide/custom-conformance-pack.html) in the AWS Config documentation. | General AWS | 
| Deploy the conformance pack. | If you're deploying in a target AWS account or AWS Region, follow the instructions in [Deploying conformance packs](https://docs.aws.amazon.com/config/latest/developerguide/conformance-pack-deploy.html) in the AWS Config documentation. You can use the AWS Management Console or the AWS Command Line Interface (AWS CLI). If you're deploying the conformance pack across an organization in AWS Organizations, follow the instructions in [Deploy AWS Config conformance pack using Quick Setup](https://docs.aws.amazon.com/systems-manager/latest/userguide/quick-setup-cpack.html) in the AWS Systems Manager documentation. | General AWS | 
| (Optional) Edit the conformance pack. | If you want to edit the conformance pack, follow the instructions in [Editing conformance packs](https://docs.aws.amazon.com/config/latest/developerguide/conformance-pack-edit.html) in the AWS Config documentation. You can use the AWS Management Console or the AWS CLI. | General AWS | 
| (Optional) Delete the conformance pack. | If you want to delete the conformance pack, follow the instructions in [Deleting conformance packs](https://docs.aws.amazon.com/config/latest/developerguide/conformance-pack-delete.html) in the AWS Config documentation. You can use the AWS Management Console or the AWS CLI. | General AWS | 
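For a single account, the console-based deployment step above can also be scripted against the AWS Config `PutConformancePack` API. The following is a minimal sketch, not part of the conformance pack repository: `config_client` is assumed to behave like `boto3.client("config")`, and the pack name is illustrative.

```python
def deploy_conformance_pack(config_client, name, template_path):
    """Deploy a conformance pack from a downloaded template file (single-account case).

    config_client is assumed to behave like boto3.client("config");
    the pack name is illustrative.
    """
    with open(template_path) as f:
        template_body = f.read()
    # PutConformancePack creates or updates the pack in the current account and Region
    return config_client.put_conformance_pack(
        ConformancePackName=name,
        TemplateBody=template_body,
    )
```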

## Related resources
<a name="verify-ops-best-practices-pci-dss-4-resources"></a>

**AWS resources**
+ [Conformance packs for AWS Config](https://docs.aws.amazon.com/config/latest/developerguide/conformance-packs.html) (AWS Config documentation)
+ [Deploy AWS Config conformance pack using Quick Setup](https://docs.aws.amazon.com/systems-manager/latest/userguide/quick-setup-cpack.html) (Systems Manager documentation)
+ [PCI DSS compliance on AWS](https://aws.amazon.com/compliance/pci-dss-level-1-faqs/) (AWS website)
+ [PCI DSS version 4.0 on AWS](https://d1.awsstatic.com/whitepapers/compliance/pci-dss-compliance-on-aws-v4-102023.pdf) (Compliance guide)

**PCI DSS resources**
+ [PCI DSS version 4.0 Resource Hub](https://blog.pcisecuritystandards.org/pci-dss-v4-0-resource-hub)
+ [PCI Security Standards Council Document Library](https://www.pcisecuritystandards.org/document_library/)
+ [Summary of changes from PCI DSS version 3.2.1 to 4.0](https://listings.pcisecuritystandards.org/documents/PCI-DSS-Summary-of-Changes-v3_2_1-to-v4_0.pdf)

## Additional information
<a name="verify-ops-best-practices-pci-dss-4-additional"></a>

The following is a sample AWS Identity and Access Management (IAM) policy that allows the user to access AWS Config and manage conformance packs:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "config:PutConfigRule",
                "config:PutConformancePack",
                "config:DeleteConfigRule",
                "config:DeleteRemediationConfiguration",
                "config:DeleteConformancePack",
                "config:PutRemediationConfigurations",
                "config:BatchGetAggregateResourceConfig",
                "config:BatchGetResourceConfig",
                "config:Get*",
                "config:Describe*",
                "config:Deliver*",
                "config:List*",
                "config:Select*"
            ],
            "Resource": "*"
        }
    ]
}
```

## Attachments
<a name="attachments-7f4b4311-2606-44e9-b9a2-8c2472643008"></a>

To access additional content that is associated with this document, unzip the following file: [attachment.zip](samples/p-attach/7f4b4311-2606-44e9-b9a2-8c2472643008/attachments/attachment.zip)

# More patterns
<a name="securityandcompliance-more-patterns-pattern-list"></a>

**Topics**
+ [Access a bastion host by using Session Manager and Amazon EC2 Instance Connect](access-a-bastion-host-by-using-session-manager-and-amazon-ec2-instance-connect.md)
+ [Access container applications privately on Amazon ECS by using AWS Fargate, AWS PrivateLink, and a Network Load Balancer](access-container-applications-privately-on-amazon-ecs-by-using-aws-fargate-aws-privatelink-and-a-network-load-balancer.md)
+ [Access container applications privately on Amazon ECS by using AWS PrivateLink and a Network Load Balancer](access-container-applications-privately-on-amazon-ecs-by-using-aws-privatelink-and-a-network-load-balancer.md)
+ [Access container applications privately on Amazon EKS using AWS PrivateLink and a Network Load Balancer](access-container-applications-privately-on-amazon-eks-using-aws-privatelink-and-a-network-load-balancer.md)
+ [Allow EC2 instances write access to S3 buckets in AMS accounts](allow-ec2-instances-write-access-to-s3-buckets-in-ams-accounts.md)
+ [Associate an AWS CodeCommit repository in one AWS account with Amazon SageMaker AI Studio Classic in another account](associate-an-aws-codecommit-repository-in-one-aws-account-with-sagemaker-studio-in-another-account.md)
+ [Authenticate existing React application users by using Amazon Cognito and AWS Amplify UI](authenticate-react-app-users-cognito-amplify-ui.md)
+ [Automate adding or updating Windows registry entries using AWS Systems Manager](automate-adding-or-updating-windows-registry-entries-using-aws-systems-manager.md)
+ [Automate encryption enforcement in AWS Glue using an AWS CloudFormation template](automate-encryption-enforcement-in-aws-glue-using-an-aws-cloudformation-template.md)
+ [Automatically attach an AWS managed policy for Systems Manager to EC2 instance profiles using Cloud Custodian and AWS CDK](automatically-attach-an-aws-managed-policy-for-systems-manager-to-ec2-instance-profiles-using-cloud-custodian-and-aws-cdk.md)
+ [Automatically encrypt existing and new Amazon EBS volumes](automatically-encrypt-existing-and-new-amazon-ebs-volumes.md)
+ [Block public access to Amazon RDS by using Cloud Custodian](block-public-access-to-amazon-rds-by-using-cloud-custodian.md)
+ [Centralize DNS resolution by using AWS Managed Microsoft AD and on-premises Microsoft Active Directory](centralize-dns-resolution-by-using-aws-managed-microsoft-ad-and-on-premises-microsoft-active-directory.md)
+ [Implement centralized custom Checkov scanning to enforce policy before deploying AWS infrastructure](centralized-custom-checkov-scanning.md)
+ [Check EC2 instances for mandatory tags at launch](check-ec2-instances-for-mandatory-tags-at-launch.md)
+ [Configure cross-account access to Amazon DynamoDB](configure-cross-account-access-to-amazon-dynamodb.md)
+ [Configure HTTPS encryption for Oracle JD Edwards EnterpriseOne on Oracle WebLogic by using an Application Load Balancer](configure-https-encryption-for-oracle-jd-edwards-enterpriseone-on-oracle-weblogic-by-using-an-application-load-balancer.md)
+ [Configure mutual TLS authentication for applications running on Amazon EKS](configure-mutual-tls-authentication-for-applications-running-on-amazon-eks.md)
+ [Configure Windows authentication for Amazon RDS for Microsoft SQL Server using AWS Managed Microsoft AD](configure-windows-authentication-for-amazon-rds-using-microsoft-ad.md)
+ [Connect by using an SSH tunnel in pgAdmin](connect-by-using-an-ssh-tunnel-in-pgadmin.md)
+ [Create a React app by using AWS Amplify and add authentication with Amazon Cognito](create-a-react-app-by-using-aws-amplify-and-add-authentication-with-amazon-cognito.md)
+ [Create a report of Network Access Analyzer findings for inbound internet access in multiple AWS accounts](create-a-report-of-network-access-analyzer-findings-for-inbound-internet-access-in-multiple-aws-accounts.md)
+ [Customize Amazon CloudWatch alerts for AWS Network Firewall](customize-amazon-cloudwatch-alerts-for-aws-network-firewall.md)
+ [Deploy a ChatOps solution to manage SAST scan results by using Amazon Q Developer in chat applications custom actions and CloudFormation](deploy-chatops-solution-to-manage-sast-scan-results.md)
+ [Deploy real-time coding security validation by using an MCP server with Kiro and other coding assistants](deploy-real-time-coding-security-validation-by-using-an-mcp-server-with-kiro-and-other-coding-assistants.md)
+ [Document your AWS landing zone design](document-your-aws-landing-zone-design.md)
+ [Enable encrypted connections for PostgreSQL DB instances in Amazon RDS](enable-encrypted-connections-for-postgresql-db-instances-in-amazon-rds.md)
+ [Encrypt an existing Amazon RDS for PostgreSQL DB instance](encrypt-an-existing-amazon-rds-for-postgresql-db-instance.md)
+ [Enforce automatic tagging of Amazon RDS databases at launch](enforce-automatic-tagging-of-amazon-rds-databases-at-launch.md)
+ [Enforce tagging of Amazon EMR clusters at launch](enforce-tagging-of-amazon-emr-clusters-at-launch.md)
+ [Ensure Amazon EMR logging to Amazon S3 is enabled at launch](ensure-amazon-emr-logging-to-amazon-s3-is-enabled-at-launch.md)
+ [Generate an AWS CloudFormation template containing AWS Config managed rules using Troposphere](generate-an-aws-cloudformation-template-containing-aws-config-managed-rules-using-troposphere.md)
+ [Get Amazon SNS notifications when the key state of an AWS KMS key changes](get-amazon-sns-notifications-when-the-key-state-of-an-aws-kms-key-changes.md)
+ [Help enforce DynamoDB tagging](help-enforce-dynamodb-tagging.md)
+ [Identify and alert when Amazon Data Firehose resources are not encrypted with an AWS KMS key](identify-and-alert-when-amazon-data-firehose-resources-are-not-encrypted-with-an-aws-kms-key.md)
+ [Implement Microsoft Entra ID-based authentication in an AWS Blu Age modernized mainframe application](implement-entra-id-authentication-in-aws-blu-age-modernized-mainframe-application.md)
+ [Implement SAML 2.0 authentication for Amazon WorkSpaces by using Auth0 and AWS Managed Microsoft AD](implement-saml-authentication-for-amazon-workspaces-by-using-auth0-and-aws-managed-microsoft-ad.md)
+ [Implement SHA1 hashing for PII data when migrating from SQL Server to PostgreSQL](implement-sha1-hashing-for-pii-data-when-migrating-from-sql-server-to-postgresql.md)
+ [Improve operational performance by enabling Amazon DevOps Guru across multiple AWS Regions, accounts, and OUs with the AWS CDK](improve-operational-performance-by-enabling-amazon-devops-guru-across-multiple-aws-regions-accounts-and-ous-with-the-aws-cdk.md)
+ [Ingest and migrate EC2 Windows instances into an AWS Managed Services account](ingest-and-migrate-ec2-windows-instances-into-an-aws-managed-services-account.md)
+ [Migrate Amazon RDS for Oracle to Amazon RDS for PostgreSQL in SSL mode by using AWS DMS](migrate-amazon-rds-for-oracle-to-amazon-rds-for-postgresql-in-ssl-mode-by-using-aws-dms.md)
+ [Migrate an ELK Stack to Elastic Cloud on AWS](migrate-an-elk-stack-to-elastic-cloud-on-aws.md)
+ [Migrate an F5 BIG-IP workload to F5 BIG-IP VE on the AWS Cloud](migrate-an-f5-big-ip-workload-to-f5-big-ip-ve-on-the-aws-cloud.md)
+ [Monitor Amazon Aurora for instances without encryption](monitor-amazon-aurora-for-instances-without-encryption.md)
+ [Provision least-privilege IAM roles by deploying a role vending machine solution](provision-least-privilege-iam-roles-by-deploying-a-role-vending-machine-solution.md)
+ [Secure and streamline user access in a Db2 federation database on AWS by using trusted contexts](secure-and-streamline-user-access-in-a-db2-federation-database-on-aws-by-using-trusted-contexts.md)
+ [Send AWS WAF logs to Splunk by using AWS Firewall Manager and Amazon Data Firehose](send-aws-waf-logs-to-splunk-by-using-aws-firewall-manager-and-amazon-data-firehose.md)
+ [Serve static content in an Amazon S3 bucket through a VPC by using Amazon CloudFront](serve-static-content-in-an-amazon-s3-bucket-through-a-vpc-by-using-amazon-cloudfront.md)
+ [Set up end-to-end encryption for applications on Amazon EKS using cert-manager and Let's Encrypt](set-up-end-to-end-encryption-for-applications-on-amazon-eks-using-cert-manager-and-let-s-encrypt.md)
+ [Use user IDs in IAM policies for access control and automation](use-user-ids-iam-policies-access-control-automation.md)
+ [Verify that ELB load balancers require TLS termination](verify-that-elb-load-balancers-require-tls-termination.md)
+ [View AWS Network Firewall logs and metrics by using Splunk](view-aws-network-firewall-logs-and-metrics-by-using-splunk.md)
+ [Visualize IAM credential reports for all AWS accounts using Amazon QuickSight](visualize-iam-credential-reports-for-all-aws-accounts-using-amazon-quicksight.md)