

# Modernization
<a name="modernization-pattern-list"></a>

**Topics**
+ [Automatically archive items to Amazon S3 using DynamoDB TTL](automatically-archive-items-to-amazon-s3-using-dynamodb-ttl.md)
+ [Build a multi-tenant serverless architecture in Amazon OpenSearch Service](build-a-multi-tenant-serverless-architecture-in-amazon-opensearch-service.md)
+ [Deploy multiple-stack applications using AWS CDK with TypeScript](deploy-multiple-stack-applications-using-aws-cdk-with-typescript.md)
+ [Automate deployment of nested applications using AWS SAM](automate-deployment-of-nested-applications-using-aws-sam.md)
+ [Implement SaaS tenant isolation for Amazon S3 by using an AWS Lambda token vending machine](implement-saas-tenant-isolation-for-amazon-s3-by-using-an-aws-lambda-token-vending-machine.md)
+ [Implement the serverless saga pattern by using AWS Step Functions](implement-the-serverless-saga-pattern-by-using-aws-step-functions.md)
+ [Manage on-premises container applications by setting up Amazon ECS Anywhere with the AWS CDK](manage-on-premises-container-applications-by-setting-up-amazon-ecs-anywhere-with-the-aws-cdk.md)
+ [Modernize ASP.NET Web Forms applications on AWS](modernize-asp-net-web-forms-applications-on-aws.md)
+ [Tenant onboarding in SaaS architecture for the silo model using C\# and AWS CDK](tenant-onboarding-in-saas-architecture-for-the-silo-model-using-c-and-aws-cdk.md)
+ [Decompose monoliths into microservices by using CQRS and event sourcing](decompose-monoliths-into-microservices-by-using-cqrs-and-event-sourcing.md)
+ [More patterns](modernization-more-patterns-pattern-list.md)

# Automatically archive items to Amazon S3 using DynamoDB TTL
<a name="automatically-archive-items-to-amazon-s3-using-dynamodb-ttl"></a>

*Tabby Ward, Amazon Web Services*

## Summary
<a name="automatically-archive-items-to-amazon-s3-using-dynamodb-ttl-summary"></a>

This pattern provides steps to remove older data from an Amazon DynamoDB table and archive it to an Amazon Simple Storage Service (Amazon S3) bucket on Amazon Web Services (AWS) without having to manage a fleet of servers. 

This pattern uses Amazon DynamoDB Time to Live (TTL) to automatically delete old items and Amazon DynamoDB Streams to capture the TTL-expired items. It then connects DynamoDB Streams to AWS Lambda, which runs the code without provisioning or managing any servers. 

When new items are added to the DynamoDB stream, the Lambda function is initiated and writes the data to an Amazon Data Firehose delivery stream. Firehose provides a simple, fully managed solution to load the data as an archive into Amazon S3.

DynamoDB is often used to store time series data, such as webpage click-stream data or Internet of Things (IoT) data from sensors and connected devices. Rather than deleting less frequently accessed items, many customers want to archive them for auditing purposes. TTL simplifies this archiving by automatically deleting items based on the timestamp attribute. 

Items deleted by TTL can be identified in DynamoDB Streams, which captures a time-ordered sequence of item-level modifications and stores the sequence in a log for up to 24 hours. This data can be consumed by a Lambda function and archived in an Amazon S3 bucket to reduce the storage cost. To reduce costs further, [Amazon S3 lifecycle rules](https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lifecycle-mgmt.html) can be created to automatically transition the data, as soon as it is created, to the lowest-cost [storage classes](https://aws.amazon.com/s3/storage-classes/).

## Prerequisites and limitations
<a name="automatically-archive-items-to-amazon-s3-using-dynamodb-ttl-prereqs"></a>

**Prerequisites**
+ An active AWS account.
+ [AWS Command Line Interface (AWS CLI) 1.7 or later](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv1.html), installed and configured on macOS, Linux, or Windows.
+ [Python 3.7](https://www.python.org/downloads/release/python-370/) or later.
+ [Boto3](https://boto3.amazonaws.com/v1/documentation/api/latest/index.html), installed and configured. If Boto3 is not already installed, run the `python -m pip install boto3` command to install it.

## Architecture
<a name="automatically-archive-items-to-amazon-s3-using-dynamodb-ttl-architecture"></a>

**Technology stack**
+ Amazon DynamoDB
+ Amazon DynamoDB Streams
+ Amazon Data Firehose
+ AWS Lambda
+ Amazon S3

![\[Four-step process from DynamoDB to the S3 bucket.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/9dbc833f-cf3c-4574-8f09-d0b81134fe41/images/50d9da65-5398-4a99-bc8f-58afc80e9d7b.png)


1. Items are deleted by TTL.

1. The DynamoDB stream trigger invokes the Lambda stream processor function.

1. The Lambda function puts records in the Firehose delivery stream in batch format.

1. Data records are archived in the S3 bucket.

## Tools
<a name="automatically-archive-items-to-amazon-s3-using-dynamodb-ttl-tools"></a>
+ [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html) – The AWS Command Line Interface (AWS CLI) is a unified tool to manage your AWS services.
+ [Amazon DynamoDB](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Introduction.html) – Amazon DynamoDB is a key-value and document database that delivers single-digit millisecond performance at any scale.
+ [Amazon DynamoDB Time to Live (TTL)](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/howitworks-ttl.html) – Amazon DynamoDB TTL helps you define a per-item timestamp to determine when an item is no longer required.
+ [Amazon DynamoDB Streams](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_Types_Amazon_DynamoDB_Streams.html) – Amazon DynamoDB Streams captures a time-ordered sequence of item-level modifications in any DynamoDB table and stores this information in a log for up to 24 hours.
+ [Amazon Data Firehose](https://docs.aws.amazon.com/firehose/latest/dev/what-is-this-service.html) – Amazon Data Firehose is the easiest way to reliably load streaming data into data lakes, data stores, and analytics services.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) – AWS Lambda runs code without the need to provision or manage servers. You pay only for the compute time you consume.
+ [Amazon S3](https://docs.aws.amazon.com/AmazonS3/latest/dev/Welcome.html) – Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance.

**Code**

The code for this pattern is available in the GitHub [Archive items to S3 using DynamoDB TTL](https://github.com/aws-samples/automatically-archive-items-to-s3-using-dynamodb-ttl) repository.

## Epics
<a name="automatically-archive-items-to-amazon-s3-using-dynamodb-ttl-epics"></a>

### Set up a DynamoDB table, TTL, and a DynamoDB stream
<a name="set-up-a-dynamodb-table-ttl-and-a-dynamodb-stream"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a DynamoDB table. | Use the AWS CLI to create a table in DynamoDB called `Reservation`. Choose read capacity units (RCUs) and write capacity units (WCUs) that suit your workload (this example uses 100 each), and give your table two attributes: `ReservationID` and `ReservationDate`. <pre>aws dynamodb create-table \<br />--table-name Reservation \<br />--attribute-definitions AttributeName=ReservationID,AttributeType=S AttributeName=ReservationDate,AttributeType=N \<br />--key-schema AttributeName=ReservationID,KeyType=HASH AttributeName=ReservationDate,KeyType=RANGE \<br />--provisioned-throughput ReadCapacityUnits=100,WriteCapacityUnits=100 </pre>`ReservationDate` is an epoch timestamp that will be used to turn on TTL. | Cloud architect, App developer | 
| Turn on DynamoDB TTL. | Use the AWS CLI to turn on DynamoDB TTL for the `ReservationDate` attribute.<pre>aws dynamodb update-time-to-live \<br />--table-name Reservation \<br />  --time-to-live-specification Enabled=true,AttributeName=ReservationDate</pre> | Cloud architect, App developer | 
| Turn on a DynamoDB stream. | Use the AWS CLI to turn on a DynamoDB stream for the `Reservation` table by using the `NEW_AND_OLD_IMAGES` stream type. <pre>aws dynamodb update-table \<br />--table-name Reservation \<br />  --stream-specification StreamEnabled=true,StreamViewType=NEW_AND_OLD_IMAGES</pre>This stream will contain records for new items, updated items, deleted items, and items that are deleted by TTL. The records for items that are deleted by TTL contain an additional metadata attribute to distinguish them from items that were deleted manually. The `userIdentity` field for TTL deletions indicates that the DynamoDB service performed the delete action. In this pattern, only the items deleted by TTL are archived; to identify them, archive only the records where `eventName` is `REMOVE` and `userIdentity` contains a `principalId` equal to `dynamodb.amazonaws.com`. | Cloud architect, App developer | 
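
The TTL-delete filter described above can be sketched in Python. The record shape follows the DynamoDB Streams event format; this is an illustration, not the repository's actual code:

```python
def is_ttl_deleted(record):
    """Return True if a DynamoDB Streams record describes an item
    deleted by the TTL process rather than by a user or application."""
    identity = record.get("userIdentity") or {}
    return (
        record.get("eventName") == "REMOVE"
        and identity.get("type") == "Service"
        and identity.get("principalId") == "dynamodb.amazonaws.com"
    )

# A TTL-expired record carries a service userIdentity; a manual delete does not.
ttl_record = {
    "eventName": "REMOVE",
    "userIdentity": {"type": "Service", "principalId": "dynamodb.amazonaws.com"},
}
manual_delete = {"eventName": "REMOVE"}
```

A Lambda function can apply this check to each record in the stream event and archive only the matches.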

### Create and configure an S3 bucket
<a name="create-and-configure-an-s3-bucket"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an S3 bucket. | Use the AWS CLI to create a destination S3 bucket in your AWS Region, replacing `us-east-1` with your Region and `amzn-s3-demo-destination-bucket` with the name of your bucket. <pre>aws s3api create-bucket \<br />--bucket amzn-s3-demo-destination-bucket \<br />--region us-east-1</pre>Make sure that your S3 bucket's name is globally unique, because the namespace is shared by all AWS accounts. | Cloud architect, App developer | 
| Create a 30-day lifecycle policy for the S3 bucket. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automatically-archive-items-to-amazon-s3-using-dynamodb-ttl.html) | Cloud architect, App developer | 
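
The 30-day lifecycle policy can be expressed as a lifecycle configuration document. The following is a minimal sketch; the rule ID, the prefix, and the choice of the S3 Glacier storage class are assumptions, not values taken from this pattern:

```python
import json

# Hypothetical 30-day rule: transition archived objects to S3 Glacier.
lifecycle_configuration = {
    "Rules": [
        {
            "ID": "archive-after-30-days",
            "Status": "Enabled",
            "Filter": {"Prefix": "firehosetos3example/"},
            "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
        }
    ]
}

# Written to a file, this could be applied with:
#   aws s3api put-bucket-lifecycle-configuration \
#     --bucket amzn-s3-demo-destination-bucket \
#     --lifecycle-configuration file://lifecycle.json
print(json.dumps(lifecycle_configuration, indent=2))
```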

### Create a Firehose delivery stream
<a name="create-a-akf-delivery-stream"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create and configure a Firehose delivery stream. | Download and edit the `CreateFireHoseToS3.py` code example from the GitHub repository. This code is written in Python and shows you how to create a Firehose delivery stream and an AWS Identity and Access Management (IAM) role. The IAM role will have a policy that Firehose can use to write to the destination S3 bucket. To run the script, use the following command and command line arguments: argument 1 is `<Your_S3_bucket_ARN>`, the Amazon Resource Name (ARN) for the bucket that you created earlier; argument 2 is your Firehose name (this pilot uses `firehose_to_s3_stream`); argument 3 is your IAM role name (this pilot uses `firehose_to_s3`).<pre>python CreateFireHoseToS3.py <Your_S3_Bucket_ARN> firehose_to_s3_stream firehose_to_s3</pre>If the specified IAM role does not exist, the script creates the role with a trust relationship policy, as well as a policy that grants sufficient Amazon S3 permissions. For examples of these policies, see the *Additional information* section. | Cloud architect, App developer | 
| Verify the Firehose delivery stream. | Describe the Firehose delivery stream by using the AWS CLI to verify that the delivery stream was successfully created.<pre>aws firehose describe-delivery-stream --delivery-stream-name firehose_to_s3_stream </pre> | Cloud architect, App developer | 
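
What `CreateFireHoseToS3.py` sets up can be sketched as the request a Boto3 client would pass to the Firehose `CreateDeliveryStream` API. The ARNs below are placeholders, and the exact options in the repository script may differ:

```python
def build_delivery_stream_request(stream_name, bucket_arn, role_arn):
    """Sketch of the parameters for firehose_client.create_delivery_stream(**request).
    The role must allow Firehose to write to the destination bucket."""
    return {
        "DeliveryStreamName": stream_name,
        "DeliveryStreamType": "DirectPut",  # records are put directly by Lambda
        "ExtendedS3DestinationConfiguration": {
            "BucketARN": bucket_arn,
            "RoleARN": role_arn,
        },
    }

request = build_delivery_stream_request(
    "firehose_to_s3_stream",
    "arn:aws:s3:::amzn-s3-demo-destination-bucket",   # assumed bucket ARN
    "arn:aws:iam::111122223333:role/firehose_to_s3",  # assumed role ARN
)
```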

### Create a Lambda function to process the Firehose delivery stream
<a name="create-a-lambda-function-to-process-the-akf-delivery-stream"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a trust policy for the Lambda function. | Create a trust policy file with the following information.<pre> {<br />     "Version": "2012-10-17",<br />     "Statement": [<br />      {<br />          "Effect": "Allow",<br />          "Principal": {<br />              "Service": "lambda.amazonaws.com"<br />           },<br />           "Action": "sts:AssumeRole"<br />      }<br />    ]<br />  } </pre>This allows the Lambda service to assume the role and access AWS resources on your function's behalf. | Cloud architect, App developer | 
| Create an execution role for the Lambda function. | To create the execution role, run the following code.<pre>aws iam create-role --role-name lambda-ex --assume-role-policy-document file://TrustPolicy.json</pre> | Cloud architect, App developer | 
| Add permission to the role. | To add permission to the role, use the `attach-role-policy` command.<pre>aws iam attach-role-policy --role-name lambda-ex --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole<br />aws iam attach-role-policy --role-name lambda-ex --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaDynamoDBExecutionRole<br />aws iam attach-role-policy --role-name lambda-ex --policy-arn arn:aws:iam::aws:policy/AmazonKinesisFirehoseFullAccess<br />aws iam attach-role-policy --role-name lambda-ex --policy-arn arn:aws:iam::aws:policy/IAMFullAccess </pre> | Cloud architect, App developer | 
| Create a Lambda function. | Compress the `LambdaStreamProcessor.py` file from the code repository by running the following command.<pre>zip function.zip LambdaStreamProcessor.py</pre>When you create the Lambda function, you will need the Lambda execution role ARN. To get the ARN, run the following code.<pre>aws iam get-role \<br />--role-name lambda-ex </pre>To create the Lambda function, run the following code.<pre># Review the environment variables and replace them with your values.<br /><br />aws lambda create-function --function-name LambdaStreamProcessor \<br />--zip-file fileb://function.zip --handler LambdaStreamProcessor.handler --runtime python3.8 \<br />--role <Your_Lambda_Execution_Role_ARN> \<br />  --environment Variables="{firehose_name=firehose_to_s3_stream,bucket_arn=<Your_S3_bucket_ARN>,iam_role_name=firehose_to_s3,batch_size=400}"</pre> | Cloud architect, App developer | 
| Configure the Lambda function trigger. | Use the AWS CLI to configure the trigger (DynamoDB Streams), which invokes the Lambda function. The batch size of 400 is to avoid running into Lambda concurrency issues.<pre>aws lambda create-event-source-mapping --function-name LambdaStreamProcessor \<br />--batch-size 400 --starting-position LATEST \<br />--event-source-arn <Your Latest Stream ARN From DynamoDB Console></pre> | Cloud architect, App developer | 
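
The stream-processing flow configured above can be sketched as: filter the event records, convert them to Firehose entries, and send them in batches (this pattern uses a batch size of 400). A simplified, hypothetical version of the handler logic, not the repository's `LambdaStreamProcessor.py`:

```python
import json

def chunk(records, size=400):
    """Split records into batches no larger than `size` (the pattern's batch size)."""
    for i in range(0, len(records), size):
        yield records[i : i + size]

def to_firehose_records(stream_records):
    """Convert deleted-item DynamoDB Streams records into Firehose batch entries.
    OldImage is available because the stream uses NEW_AND_OLD_IMAGES."""
    return [
        {"Data": (json.dumps(r["dynamodb"]["OldImage"]) + "\n").encode()}
        for r in stream_records
        if r.get("eventName") == "REMOVE"
    ]

# Each batch would then be sent with:
#   firehose_client.put_record_batch(DeliveryStreamName=..., Records=batch)
```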

### Test the functionality
<a name="test-the-functionality"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Add items with expired timestamps to the Reservation table. | To test the functionality, add items with expired epoch timestamps to the `Reservation` table. TTL will automatically delete items based on the timestamp. The Lambda function is initiated by DynamoDB stream activity, and it filters the events to identify `REMOVE` activity or deleted items. It then puts records in the Firehose delivery stream in batch format. The Firehose delivery stream transfers items to a destination S3 bucket with the `firehosetos3example/year=current year/month=current month/day=current day/hour=current hour/` prefix. To optimize data retrieval, configure Amazon S3 with the `Prefix` and `ErrorOutputPrefix` that are detailed in the *Additional information* section. | Cloud architect  | 

### Clean up the resources
<a name="clean-up-the-resources"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Delete all resources. | Delete all the resources to ensure that you aren't charged for any services that you aren't using.   | Cloud architect, App developer | 

## Related resources
<a name="automatically-archive-items-to-amazon-s3-using-dynamodb-ttl-resources"></a>
+ [Managing your storage lifecycle](https://docs.aws.amazon.com/AmazonS3/latest/user-guide/create-lifecycle.html)
+ [Amazon S3 Storage Classes](https://aws.amazon.com/s3/storage-classes/)
+ [AWS SDK for Python (Boto3) documentation](https://boto3.amazonaws.com/v1/documentation/api/latest/index.html)

## Additional information
<a name="automatically-archive-items-to-amazon-s3-using-dynamodb-ttl-additional"></a>

**Create and configure a Firehose delivery stream – Policy examples**

*Firehose trusted relationship policy example document*

```
firehose_assume_role = {
        'Version': '2012-10-17',
        'Statement': [
            {
                'Sid': '',
                'Effect': 'Allow',
                'Principal': {
                    'Service': 'firehose.amazonaws.com'
                },
                'Action': 'sts:AssumeRole'
            }
        ]
    }
```

*S3 permissions policy example*

```
s3_access = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "",
                "Effect": "Allow",
                "Action": [
                    "s3:AbortMultipartUpload",
                    "s3:GetBucketLocation",
                    "s3:GetObject",
                    "s3:ListBucket",
                    "s3:ListBucketMultipartUploads",
                    "s3:PutObject"
                ],
                "Resource": [
                    "{your_s3_bucket_ARN}/*",
                    "{your_s3_bucket_ARN}"
                ]
            }
        ]
    }
```

**Test the functionality – Amazon S3 configuration**

The Amazon S3 configuration with the following `Prefix` and `ErrorOutputPrefix` is chosen to optimize data retrieval. 

*Prefix*

```
firehosetos3example/year=!{timestamp:yyyy}/month=!{timestamp:MM}/day=!{timestamp:dd}/hour=!{timestamp:HH}/
```

Firehose first creates a base folder called `firehosetos3example` directly under the S3 bucket. It then evaluates the expressions `!{timestamp:yyyy}`, `!{timestamp:MM}`, `!{timestamp:dd}`, and `!{timestamp:HH}` to year, month, day, and hour using the Java [DateTimeFormatter](https://docs.oracle.com/javase/8/docs/api/java/time/format/DateTimeFormatter.html) format.

For example, an approximate arrival timestamp of 1604683577 in Unix epoch time evaluates to `year=2020`, `month=11`, `day=06`, and `hour=05`. Therefore, the location in Amazon S3, where data records are delivered, evaluates to `firehosetos3example/year=2020/month=11/day=06/hour=05/`.
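
The expansion of these expressions can be reproduced with Python's `strftime` (a sketch; Firehose itself evaluates the prefix server-side in UTC against each record's approximate arrival time):

```python
from datetime import datetime, timezone

def evaluate_prefix(epoch_seconds):
    """Mimic how Firehose expands the !{timestamp:...} expressions (UTC)."""
    t = datetime.fromtimestamp(epoch_seconds, tz=timezone.utc)
    return t.strftime("firehosetos3example/year=%Y/month=%m/day=%d/hour=%H/")

# Epoch 0 (1970-01-01T00:00:00Z) evaluates to:
# firehosetos3example/year=1970/month=01/day=01/hour=00/
```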

*ErrorOutputPrefix*

```
firehosetos3erroroutputbase/!{firehose:random-string}/!{firehose:error-output-type}/!{timestamp:yyyy/MM/dd}/
```

The `ErrorOutputPrefix` results in a base folder called `firehosetos3erroroutputbase` directly under the S3 bucket. The expression `!{firehose:random-string}` evaluates to an 11-character random string such as `ztWxkdg3Thg`. The location for an Amazon S3 object where failed records are delivered could evaluate to `firehosetos3erroroutputbase/ztWxkdg3Thg/processing-failed/2020/11/06/`.

# Build a multi-tenant serverless architecture in Amazon OpenSearch Service
<a name="build-a-multi-tenant-serverless-architecture-in-amazon-opensearch-service"></a>

*Tabby Ward and Nisha Gambhir, Amazon Web Services*

## Summary
<a name="build-a-multi-tenant-serverless-architecture-in-amazon-opensearch-service-summary"></a>

Amazon OpenSearch Service is a managed service that makes it easy to deploy, operate, and scale Elasticsearch, which is a popular open-source search and analytics engine. OpenSearch Service provides free-text search as well as near real-time ingestion and dashboarding for streaming data such as logs and metrics. 

Software as a service (SaaS) providers frequently use OpenSearch Service to address a broad range of use cases, such as gaining customer insights in a scalable and secure way while reducing complexity and downtime.

Using OpenSearch Service in a multi-tenant environment introduces a series of considerations that affect partitioning, isolation, deployment, and management of your SaaS solution. SaaS providers have to consider how to effectively scale their Elasticsearch clusters with continually shifting workloads. They also need to consider how tiering and noisy neighbor conditions could impact their partitioning model.

This pattern reviews the models that are used to represent and isolate tenant data with Elasticsearch constructs. In addition, the pattern focuses on a simple serverless reference architecture as an example to demonstrate indexing and searching using OpenSearch Service in a multi-tenant environment. It implements the pool data partitioning model, which shares the same index among all tenants while maintaining a tenant's data isolation. This pattern uses the following AWS services: Amazon API Gateway, AWS Lambda, Amazon Simple Storage Service (Amazon S3), and OpenSearch Service.

For more information about the pool model and other data partitioning models, see the [Additional information](#build-a-multi-tenant-serverless-architecture-in-amazon-opensearch-service-additional) section.
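
As a sketch of the pool model described above (the field names are illustrative, not taken from the attached sample code): every document carries a tenant identifier, and every query is constrained to that identifier, so tenants share one index without seeing each other's data.

```python
def make_document(tenant_id, body):
    """Pool model: tenant data shares one index, tagged with a tenant ID."""
    return {"tenant_id": tenant_id, **body}

def make_tenant_query(tenant_id, search_text):
    """Elasticsearch query DSL that restricts results to a single tenant."""
    return {
        "query": {
            "bool": {
                "must": [{"match": {"content": search_text}}],
                # The term filter enforces tenant isolation on the shared index.
                "filter": [{"term": {"tenant_id": tenant_id}}],
            }
        }
    }
```

The same isolation can additionally be enforced server-side, for example with tenant-scoped IAM roles, rather than relying on the query alone.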

## Prerequisites and limitations
<a name="build-a-multi-tenant-serverless-architecture-in-amazon-opensearch-service-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ [AWS Command Line Interface (AWS CLI) version 2.x](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html), installed and configured on macOS, Linux, or Windows
+ [Python version 3.9](https://www.python.org/downloads/release/python-3921/)
+ [pip3](https://pip.pypa.io/en/stable/) – The Python source code is provided as a .zip file to be deployed in a Lambda function. If you want to use the code locally or customize it, follow these steps to develop and recompile the source code:

  1. Generate the `requirements.txt` file by running the following command in the same directory as the Python scripts: `pip3 freeze > requirements.txt`

  1. Install the dependencies: `pip3 install -r requirements.txt`

**Limitations**
+ This code runs in Python, and doesn’t currently support other programming languages. 
+ The sample application doesn’t include AWS cross-Region or disaster recovery (DR) support. 
+ This pattern is intended for demonstration purposes only. It is not intended to be used in a production environment.

## Architecture
<a name="build-a-multi-tenant-serverless-architecture-in-amazon-opensearch-service-architecture"></a>

The following diagram illustrates the high-level architecture of this pattern. The architecture includes the following:
+ Lambda to index and query the content 
+ OpenSearch Service to perform search 
+ API Gateway to provide an API interaction with the user
+ Amazon S3 to store raw (non-indexed) data
+ Amazon CloudWatch to monitor logs
+ AWS Identity and Access Management (IAM) to create tenant roles and policies

![\[High-level multi-tenant serverless architecture.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/750196bb-03f6-4b6e-92cd-eb7141602547/images/1a8501e7-0776-4aca-aed3-28e3ada1d15d.png)


**Automation and scale**

For simplicity, the pattern uses the AWS CLI to provision the infrastructure and to deploy the sample code. You can create an AWS CloudFormation template or AWS Cloud Development Kit (AWS CDK) scripts to automate the pattern.

## Tools
<a name="build-a-multi-tenant-serverless-architecture-in-amazon-opensearch-service-tools"></a>

**AWS services**
+ [AWS CLI](https://aws.amazon.com/cli/) is a unified tool for managing AWS services and resources by using commands in your command-line shell.
+ [Lambda](https://aws.amazon.com/lambda/) is a compute service that lets you run code without provisioning or managing servers. Lambda runs your code only when needed and scales automatically, from a few requests per day to thousands per second.
+ [API Gateway](https://aws.amazon.com/api-gateway/) is an AWS service for creating, publishing, maintaining, monitoring, and securing REST, HTTP, and WebSocket APIs at any scale.
+ [Amazon S3](https://aws.amazon.com/s3/) is an object storage service that lets you store and retrieve any amount of information at any time, from anywhere on the web.
+ [OpenSearch Service](https://aws.amazon.com/opensearch-service/) is a fully managed service that makes it easy for you to deploy, secure, and run Elasticsearch cost-effectively at scale.

**Code**

The attachment provides sample files for this pattern. These include:
+ `index_lambda_package.zip` – The Lambda function for indexing data in OpenSearch Service by using the pool model.
+ `search_lambda_package.zip` – The Lambda function for searching for data in OpenSearch Service.
+ `Tenant-1-data` – Sample raw (non-indexed) data for Tenant-1.
+ `Tenant-2-data` – Sample raw (non-indexed) data for Tenant-2.

**Important**  
The stories in this pattern include AWS CLI command examples that are formatted for Unix, Linux, and macOS. For Windows, replace the backslash (\\) Unix continuation character at the end of each line with a caret (^).

**Note**  
In AWS CLI commands, replace all values within the angle brackets (<>) with correct values.

## Epics
<a name="build-a-multi-tenant-serverless-architecture-in-amazon-opensearch-service-epics"></a>

### Create and configure an S3 bucket
<a name="create-and-configure-an-s3-bucket"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an S3 bucket. | Create an S3 bucket in your AWS Region. This bucket will hold the non-indexed tenant data for the sample application. Make sure that the S3 bucket's name is globally unique, because the namespace is shared by all AWS accounts. To create an S3 bucket, you can use the AWS CLI [create-bucket](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/create-bucket.html) command as follows:<pre>aws s3api create-bucket \<br />  --bucket <tenantrawdata> \<br />  --region <your-AWS-Region></pre>where `tenantrawdata` is the S3 bucket name. (You can use any unique name that follows [the bucket naming guidelines](https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucketnamingrules.html).) | Cloud architect, Cloud administrator | 

### Create and configure an Elasticsearch cluster
<a name="create-and-configure-an-elasticsearch-cluster"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an OpenSearch Service domain. | Run the AWS CLI [create-elasticsearch-domain](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/es/create-elasticsearch-domain.html) command to create an OpenSearch Service domain:<pre>aws es create-elasticsearch-domain \<br />  --domain-name vpc-cli-example \<br />  --elasticsearch-version 7.10 \<br />  --elasticsearch-cluster-config InstanceType=t3.medium.elasticsearch,InstanceCount=1 \<br />  --ebs-options EBSEnabled=true,VolumeType=gp2,VolumeSize=10 \<br />  --domain-endpoint-options "{\"EnforceHTTPS\": true}" \<br />  --encryption-at-rest-options "{\"Enabled\": true}" \<br />  --node-to-node-encryption-options "{\"Enabled\": true}" \<br />  --advanced-security-options "{\"Enabled\": true, \"InternalUserDatabaseEnabled\": true, \<br />    \"MasterUserOptions\": {\"MasterUserName\": \"KibanaUser\", \<br />    \"MasterUserPassword\": \"NewKibanaPassword@123\"}}" \<br />  --vpc-options "{\"SubnetIds\": [\"<subnet-id>\"], \"SecurityGroupIds\": [\"<sg-id>\"]}" \<br />  --access-policies "{\"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \<br />    \"Principal\": {\"AWS\": \"*\" }, \"Action\":\"es:*\", \<br />    \"Resource\": \"arn:aws:es:<region>:<account-id>:domain\/vpc-cli-example\/*\" } ] }"</pre>The instance count is set to 1 because the domain is for testing purposes. You need to enable fine-grained access control by using the `advanced-security-options` parameter, because these details cannot be changed after the domain has been created. This command creates a master user name (`KibanaUser`) and a password that you can use to log in to the Kibana console. Because the domain is part of a virtual private cloud (VPC), you have to make sure that you can reach the Elasticsearch instance by specifying the access policy to use. For more information, see [Launching your Amazon OpenSearch Service domains within a VPC](https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-vpc.html) in the AWS documentation. | Cloud architect, Cloud administrator | 
| Set up a bastion host. | Set up an Amazon Elastic Compute Cloud (Amazon EC2) Windows instance as a bastion host to access the Kibana console. The Elasticsearch security group must allow traffic from the Amazon EC2 security group. For instructions, see the blog post [Controlling Network Access to EC2 Instances Using a Bastion Server](https://aws.amazon.com/blogs/security/controlling-network-access-to-ec2-instances-using-a-bastion-server/). When the bastion host has been set up, and you have the security group that is associated with the instance available, use the AWS CLI [authorize-security-group-ingress](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/ec2/authorize-security-group-ingress.html) command to add permission to the Elasticsearch security group to allow port 443 from the Amazon EC2 (bastion host) security group.<pre>aws ec2 authorize-security-group-ingress \<br />  --group-id <SecurityGroupIdOfElasticsearch> \<br />  --protocol tcp \<br />  --port 443 \<br />  --source-group <SecurityGroupIdOfBastionHostEC2></pre> | Cloud architect, Cloud administrator | 

### Create and configure the Lambda index function
<a name="create-and-configure-the-lam-index-function"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the Lambda execution role. | Run the AWS CLI [create-role](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/iam/create-role.html) command to grant the Lambda index function access to AWS services and resources:<pre>aws iam create-role \<br />  --role-name index-lambda-role \<br />  --assume-role-policy-document file://lambda_assume_role.json</pre>where `lambda_assume_role.json` is a JSON document that grants `AssumeRole` permissions to the Lambda function, as follows:<pre>{<br />     "Version": "2012-10-17",<br />     "Statement": [<br />         {<br />             "Effect": "Allow",<br />             "Principal": {<br />                 "Service": "lambda.amazonaws.com"<br />               },<br />             "Action": "sts:AssumeRole"<br />         }<br />     ]<br /> }</pre> | Cloud architect, Cloud administrator | 
| Attach managed policies to the Lambda role. | Run the AWS CLI [attach-role-policy](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/iam/attach-role-policy.html) command to attach managed policies to the role created in the previous step. These two policies give the role permissions to create an elastic network interface and to write logs to CloudWatch Logs.<pre>aws iam attach-role-policy \<br />  --role-name index-lambda-role \<br />  --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole<br /><br />aws iam attach-role-policy \<br />  --role-name index-lambda-role \<br />  --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole </pre> | Cloud architect, Cloud administrator | 
| Create a policy to give the Lambda index function permission to read the S3 objects. | Run the AWS CLI [create-policy](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/iam/create-policy.html) command to give the Lambda index function `s3:GetObject` permission to read the objects in the S3 bucket:<pre>aws iam create-policy \<br />  --policy-name s3-permission-policy \<br />  --policy-document file://s3-policy.json</pre>The file `s3-policy.json` is a JSON document, shown below, that grants `s3:GetObject` permissions to allow read access to S3 objects. If you used a different name when you created the S3 bucket, provide the correct bucket name in the `Resource` section:<pre>{<br />    "Version": "2012-10-17",<br />    "Statement": [<br />        {<br />           "Effect": "Allow",<br />           "Action": "s3:GetObject",<br />           "Resource": "arn:aws:s3:::<tenantrawdata>/*"<br />        }<br />    ]<br />}</pre> | Cloud architect, Cloud administrator | 
| Attach the Amazon S3 permission policy to the Lambda execution role. | Run the AWS CLI [attach-role-policy](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/iam/attach-role-policy.html) command to attach the Amazon S3 permission policy you created in the previous step to the Lambda execution role:<pre>aws iam attach-role-policy \<br />  --role-name index-lambda-role \<br />  --policy-arn <PolicyARN></pre>where `PolicyARN` is the Amazon Resource Name (ARN) of the Amazon S3 permission policy. You can get this value from the output of the previous command. | Cloud architect, Cloud administrator | 
| Create the Lambda index function. | Run the AWS CLI [create-function](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/lambda/create-function.html) command to create the Lambda index function, which will access OpenSearch Service:<pre>aws lambda create-function \<br />  --function-name index-lambda-function \<br />  --zip-file fileb://index_lambda_package.zip \<br />  --handler lambda_index.lambda_handler \<br />  --runtime python3.9 \<br />  --role "arn:aws:iam::<account-id>:role/index-lambda-role" \<br />  --timeout 30 \<br />  --vpc-config "{\"SubnetIds\": [\"<subnet-id1>\", \"<subnet-id2>\"], \<br />    \"SecurityGroupIds\": [\"<sg-1>\"]}"</pre> | Cloud architect, Cloud administrator | 
| Allow Amazon S3 to call the Lambda index function. | Run the AWS CLI [add-permission](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/lambda/add-permission.html) command to give Amazon S3 the permission to call the Lambda index function:<pre>aws lambda add-permission \<br />  --function-name index-lambda-function \<br />  --statement-id s3-permissions \<br />  --action lambda:InvokeFunction \<br />  --principal s3.amazonaws.com \<br />  --source-arn "arn:aws:s3:::<tenantrawdata>" \<br />  --source-account "<account-id>" </pre> | Cloud architect, Cloud administrator | 
| Add a Lambda trigger for the Amazon S3 event. | Run the AWS CLI [put-bucket-notification-configuration](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/put-bucket-notification-configuration.html) command to send notifications to the Lambda index function when the Amazon S3 `ObjectCreated` event is detected. The index function runs whenever an object is uploaded to the S3 bucket.<pre>aws s3api put-bucket-notification-configuration \<br />  --bucket <tenantrawdata> \<br />  --notification-configuration file://s3-trigger.json</pre>The file `s3-trigger.json` is a JSON document in the current folder that configures the bucket to invoke the Lambda index function when the Amazon S3 `ObjectCreated` event occurs. | Cloud architect, Cloud administrator | 
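
The `s3-trigger.json` document itself isn’t shown in this step. A minimal sketch of what it might contain, generated here in Python (the function ARN is a placeholder you would replace with the ARN of `index-lambda-function`):

```python
import json

def build_s3_trigger(function_arn: str) -> str:
    """Build a bucket notification configuration that invokes the given
    Lambda function whenever any s3:ObjectCreated:* event occurs."""
    config = {
        "LambdaFunctionConfigurations": [
            {
                "LambdaFunctionArn": function_arn,
                "Events": ["s3:ObjectCreated:*"],
            }
        ]
    }
    return json.dumps(config, indent=2)

# Placeholder ARN for illustration only
print(build_s3_trigger(
    "arn:aws:lambda:us-east-1:123456789012:function:index-lambda-function"
))
```

Redirect the output to `s3-trigger.json` and pass it to `put-bucket-notification-configuration` as shown in the preceding step.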

### Create and configure the Lambda search function
<a name="create-and-configure-the-lam-search-function"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the Lambda execution role. | Run the AWS CLI [create-role](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/iam/create-role.html) command to grant the Lambda search function access to AWS services and resources:<pre>aws iam create-role \<br />  --role-name search-lambda-role \<br />  --assume-role-policy-document file://lambda_assume_role.json</pre>where `lambda_assume_role.json` is a JSON document in the current folder that allows the Lambda service to assume the role, as follows:<pre>{<br />     "Version": "2012-10-17",<br />     "Statement": [<br />         {<br />             "Effect": "Allow",<br />             "Principal": {<br />                 "Service": "lambda.amazonaws.com"<br />             },<br />             "Action": "sts:AssumeRole"<br />         }<br />     ]<br /> }</pre> | Cloud architect, Cloud administrator | 
| Attach managed policies to the Lambda role. | Run the AWS CLI [attach-role-policy](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/iam/attach-role-policy.html) command to attach managed policies to the role created in the previous step. These two policies give the role permissions to create an elastic network interface and to write logs to CloudWatch Logs.<pre>aws iam attach-role-policy \<br />  --role-name search-lambda-role \<br />  --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole<br /><br />aws iam attach-role-policy \<br />  --role-name search-lambda-role \<br />  --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole </pre> | Cloud architect, Cloud administrator | 
| Create the Lambda search function. | Run the AWS CLI [create-function](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/lambda/create-function.html) command to create the Lambda search function, which will access OpenSearch Service:<pre>aws lambda create-function \<br />  --function-name search-lambda-function \<br />  --zip-file fileb://search_lambda_package.zip \<br />  --handler lambda_search.lambda_handler \<br />  --runtime python3.9 \<br />  --role "arn:aws:iam::<account-id>:role/search-lambda-role" \<br />  --timeout 30 \<br />  --vpc-config "{\"SubnetIds\": [\"<subnet-id1>\", \"<subnet-id2>\"], \<br />    \"SecurityGroupIds\": [\"<sg-1>\"]}"</pre> | Cloud architect, Cloud administrator | 
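
At run time, the search function assumes a tenant-scoped IAM role (created in the next section) through AWS STS before signing its Elasticsearch request. A small helper that derives the role ARN from a tenant identifier — the `Tenant-<n>-role` naming matches the roles created in the next section, but how the tenant ID reaches the function is an assumption for illustration:

```python
def tenant_role_arn(account_id: str, tenant_id: str) -> str:
    """Map a tenant identifier (for example, "Tenant-1") to the ARN of the
    tenant role that the Lambda function passes to sts:AssumeRole."""
    return f"arn:aws:iam::{account_id}:role/{tenant_id}-role"

# With a hypothetical account ID:
print(tenant_role_arn("123456789012", "Tenant-1"))
# arn:aws:iam::123456789012:role/Tenant-1-role
```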

### Create and configure tenant roles
<a name="create-and-configure-tenant-roles"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create tenant IAM roles. | Run the AWS CLI [create-role](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/iam/create-role.html) command to create two tenant roles that will be used to test the search functionality:<pre>aws iam create-role \<br />  --role-name Tenant-1-role \<br />  --assume-role-policy-document file://assume-role-policy.json</pre><pre>aws iam create-role \<br />  --role-name Tenant-2-role \<br />  --assume-role-policy-document file://assume-role-policy.json</pre>The file `assume-role-policy.json` is a JSON document in the current folder that allows the Lambda execution roles to assume the tenant roles:<pre>{<br />    "Version": "2012-10-17",<br />    "Statement": [<br />        {<br />            "Effect": "Allow",<br />            "Principal": {<br />                "AWS": [<br />                    "<Lambda execution role ARN for index function>",<br />                    "<Lambda execution role ARN for search function>"<br />                ]<br />            },<br />            "Action": "sts:AssumeRole"<br />        }<br />    ]<br />}</pre> | Cloud architect, Cloud administrator | 
| Create a tenant IAM policy. | Run the AWS CLI [create-policy](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/iam/create-policy.html) command to create a tenant policy that grants access to Elasticsearch operations:<pre>aws iam create-policy \<br />  --policy-name tenant-policy \<br />  --policy-document file://policy.json</pre>The file `policy.json` is a JSON document in the current folder that grants permissions on Elasticsearch:<pre>{<br />    "Version": "2012-10-17",		 	 	 <br />    "Statement": [<br />        {<br />            "Effect": "Allow",<br />            "Action": [<br />                "es:ESHttpDelete",<br />                "es:ESHttpGet",<br />                "es:ESHttpHead",<br />                "es:ESHttpPost",<br />                "es:ESHttpPut",<br />                "es:ESHttpPatch"<br />            ],<br />            "Resource": [<br />                "<ARN of Elasticsearch domain created earlier>"<br />            ]<br />        }<br />    ]<br />}</pre> | Cloud architect, Cloud administrator | 
| Attach the tenant IAM policy to the tenant roles. | Run the AWS CLI [attach-role-policy](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/iam/attach-role-policy.html) command to attach the tenant IAM policy to the two tenant roles you created in the earlier step:<pre>aws iam attach-role-policy \<br />  --policy-arn arn:aws:iam::account-id:policy/tenant-policy \<br />  --role-name Tenant-1-role<br /><br />aws iam attach-role-policy \<br />  --policy-arn arn:aws:iam::account-id:policy/tenant-policy \<br />  --role-name Tenant-2-role</pre>The policy ARN is from the output of the previous step. | Cloud architect, Cloud administrator | 
| Create an IAM policy to give Lambda permissions to assume role. | Run the AWS CLI [create-policy](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/iam/create-policy.html) command to create a policy for Lambda to assume the tenant role:<pre>aws iam create-policy \<br />  --policy-name assume-tenant-role-policy \<br />  --policy-document file://lambda_policy.json</pre>The file `lambda_policy.json` is a JSON document in the current folder that grants permissions to `AssumeRole`:<pre>{<br />    "Version": "2012-10-17",		 	 	 <br />    "Statement": [<br />       {<br />            "Effect": "Allow",<br />            "Action":  "sts:AssumeRole",<br />            "Resource": "<ARN of tenant role created earlier>"<br />       }<br />    ]<br />}</pre>For `Resource`, you can use a wildcard character to avoid creating a new policy for each tenant. | Cloud architect, Cloud administrator | 
| Create an IAM policy to give the Lambda index role permission to access Amazon S3. | Run the AWS CLI [create-policy](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/iam/create-policy.html) command to give the Lambda index role permission to access the objects in the S3 bucket:<pre>aws iam create-policy \<br />  --policy-name s3-permission-policy \<br />  --policy-document file://s3_lambda_policy.json</pre>The file `s3_lambda_policy.json` is the following JSON policy document in the current folder:<pre>{<br />    "Version": "2012-10-17",		 	 	 <br />    "Statement": [<br />        {<br />            "Effect": "Allow",<br />            "Action": "s3:GetObject",<br />            "Resource": "arn:aws:s3:::tenantrawdata/*"<br />        }<br />    ]<br />}</pre> | Cloud architect, Cloud administrator | 
| Attach the policy to the Lambda execution role. | Run the AWS CLI [attach-role-policy](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/iam/attach-role-policy.html) command to attach the policy created in the previous step to the Lambda index and search execution roles you created earlier:<pre>aws iam attach-role-policy \<br />  --policy-arn arn:aws:iam::account-id:policy/assume-tenant-role-policy \<br />  --role-name index-lambda-role<br /><br />aws iam attach-role-policy \<br />  --policy-arn arn:aws:iam::account-id:policy/assume-tenant-role-policy \<br />  --role-name search-lambda-role<br /><br />aws iam attach-role-policy \<br />  --policy-arn arn:aws:iam::account-id:policy/s3-permission-policy \<br />  --role-name index-lambda-role</pre>The policy ARN is from the output of the previous step. | Cloud architect, Cloud administrator | 
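
As noted in the `assume-tenant-role-policy` step, a wildcard in `Resource` avoids creating a new policy per tenant. A sketch of such a wildcard policy, with `fnmatch` used to approximate how IAM’s `*` wildcard would scope it (the account ID and the `Tenant-<n>-role` naming convention are assumptions):

```python
import fnmatch

# Hypothetical assume-tenant-role-policy with a wildcard Resource that
# covers every role following the Tenant-<n>-role naming convention.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": "arn:aws:iam::123456789012:role/Tenant-*-role",
        }
    ],
}

# Shell-style globbing approximates IAM's * wildcard matching here:
pattern = policy["Statement"][0]["Resource"]
print(fnmatch.fnmatchcase("arn:aws:iam::123456789012:role/Tenant-1-role", pattern))      # True
print(fnmatch.fnmatchcase("arn:aws:iam::123456789012:role/index-lambda-role", pattern))  # False
```

The wildcard keeps onboarding simple, at the cost of granting the Lambda roles the ability to assume any role that matches the naming convention, so keep tenant role names in a dedicated namespace.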

### Create and configure a search API
<a name="create-and-configure-a-search-api"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a REST API in API Gateway. | Run the AWS CLI [create-rest-api](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/apigateway/create-rest-api.html) command to create a REST API resource:<pre>aws apigateway create-rest-api \<br />  --name Test-Api \<br />  --endpoint-configuration "{ \"types\": [\"REGIONAL\"] }"</pre>For the endpoint configuration type, you can specify `EDGE` instead of `REGIONAL` to use edge locations instead of a particular AWS Region. Note the value of the `id` field from the command output. This is the API ID that you will use in subsequent commands. | Cloud architect, Cloud administrator | 
| Create a resource for the search API. | The search API resource starts the Lambda search function with the resource name `search`. (You don’t have to create an API for the Lambda index function, because it runs automatically when objects are uploaded to the S3 bucket.)[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/build-a-multi-tenant-serverless-architecture-in-amazon-opensearch-service.html) | Cloud architect, Cloud administrator | 
| Create a GET method for the search API. | Run the AWS CLI [put-method](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/apigateway/put-method.html) command to create a `GET` method for the search API:<pre>aws apigateway put-method \<br />  --rest-api-id <API-ID> \<br />  --resource-id <ID from the previous command output> \<br />  --http-method GET \<br />  --authorization-type "NONE" \<br />  --no-api-key-required</pre>For `resource-id`, specify the ID from the output of the `create-resource` command. | Cloud architect, Cloud administrator | 
| Create a method response for the search API. | Run the AWS CLI [put-method-response](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/apigateway/put-method-response.html) command to add a method response for the search API:<pre>aws apigateway put-method-response \<br />  --rest-api-id <API-ID> \<br />  --resource-id  <ID from the create-resource command output> \<br />  --http-method GET \<br />  --status-code 200 \<br />  --response-models "{\"application/json\": \"Empty\"}"</pre>For `resource-id`, specify the ID from the output of the earlier `create-resource` command. | Cloud architect, Cloud administrator | 
| Set up a proxy Lambda integration for the search API. | Run the AWS CLI [put-integration](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/apigateway/put-integration.html) command to set up an integration with the Lambda search function:<pre>aws apigateway put-integration \<br />  --rest-api-id <API-ID> \<br />  --resource-id  <ID from the create-resource command output> \<br />  --http-method GET \<br />  --type AWS_PROXY \<br />  --integration-http-method POST \<br />  --uri arn:aws:apigateway:<region>:lambda:path/2015-03-31/functions/arn:aws:lambda:<region>:<account-id>:function:<function-name>/invocations</pre>For `resource-id`, specify the ID from the earlier `create-resource` command. Lambda integrations always use `POST` as the integration request method, even though the method request uses `GET`. | Cloud architect, Cloud administrator | 
| Grant API Gateway permission to call the Lambda search function. | Run the AWS CLI [add-permission](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/lambda/add-permission.html) command to give API Gateway permission to use the search function:<pre>aws lambda add-permission \<br />  --function-name <function-name> \<br />  --statement-id apigateway-get \<br />  --action lambda:InvokeFunction \<br />  --principal apigateway.amazonaws.com \<br />  --source-arn "arn:aws:execute-api:<region>:<account-id>:<API-ID>/*/GET/search"</pre>Change the `source-arn` path if you used a different API resource name instead of `search`. | Cloud architect, Cloud administrator | 
| Deploy the search API. | Run the AWS CLI [create-deployment](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/apigateway/create-deployment.html) command to create a stage resource named `dev`:<pre>aws apigateway create-deployment \<br />  --rest-api-id <API-ID> \<br />  --stage-name dev</pre>If you update the API, you can use the same AWS CLI command to redeploy it to the same stage. | Cloud architect, Cloud administrator | 
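
After deployment, the stage is reachable at the standard `execute-api` hostname. A helper that assembles the invoke URL for the `dev` stage created above (the API ID shown is a placeholder):

```python
def search_invoke_url(api_id: str, region: str, stage: str = "dev") -> str:
    """Assemble the invoke URL of the deployed search API stage."""
    return f"https://{api_id}.execute-api.{region}.amazonaws.com/{stage}/search"

print(search_invoke_url("a1b2c3d4e5", "us-east-1"))
# https://a1b2c3d4e5.execute-api.us-east-1.amazonaws.com/dev/search
```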

### Create and configure Kibana roles
<a name="create-and-configure-kibana-roles"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Log in to the Kibana console. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/build-a-multi-tenant-serverless-architecture-in-amazon-opensearch-service.html) | Cloud architect, Cloud administrator | 
| Create and configure Kibana roles. | To provide data isolation and to make sure that one tenant cannot retrieve the data of another tenant, you need to use document security, which allows tenants to access only documents that contain their tenant ID.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/build-a-multi-tenant-serverless-architecture-in-amazon-opensearch-service.html) | Cloud architect, Cloud administrator | 
| Map users to roles. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/build-a-multi-tenant-serverless-architecture-in-amazon-opensearch-service.html)We recommend that you automate the creation of the tenant and Kibana roles at the time of tenant onboarding. | Cloud architect, Cloud administrator | 
| Create the tenant-data index. | In the navigation pane, under **Management**, choose **Dev Tools**, and then run the following command. This command creates the `tenant-data` index to define the mapping for the `TenantId` property.<pre>PUT /tenant-data<br />{<br />  "mappings": {<br />    "properties": {<br />      "TenantId": { "type": "keyword"}<br />    }<br />  }<br />}</pre> | Cloud architect, Cloud administrator | 
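
Because the pattern recommends automating tenant and Kibana role creation at onboarding time, it can help to generate the document-level security (DLS) query for each tenant programmatically. A sketch, assuming only the `TenantId` keyword field defined in the mapping above:

```python
import json

def dls_query(tenant_id: str) -> str:
    """Build the document-level security query for a Kibana role so that
    the role can read only documents whose TenantId matches."""
    return json.dumps({"bool": {"must": {"match": {"TenantId": tenant_id}}}})

print(dls_query("Tenant-1"))
```

The returned string is what you would paste (or send through a role-management API call) as the DLS query when creating each tenant’s Kibana role.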

### Create VPC endpoints for Amazon S3 and AWS STS
<a name="create-vpc-endpoints-for-s3-and-sts"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a VPC endpoint for Amazon S3. | Run the AWS CLI [create-vpc-endpoint](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/ec2/create-vpc-endpoint.html) command to create a VPC endpoint for Amazon S3. The endpoint enables the Lambda index function in the VPC to access Amazon S3.<pre>aws ec2 create-vpc-endpoint \<br />  --vpc-id <VPC-ID> \<br />  --service-name com.amazonaws.us-east-1.s3 \<br />  --route-table-ids <route-table-ID></pre>For `vpc-id`, specify the VPC that you’re using for the Lambda index function. For `service-name`, use the Amazon S3 service name for your Region (replace `us-east-1` if you work in a different Region). For `route-table-ids`, specify the route table that should receive the route to the endpoint, typically the route table associated with the subnets of the Lambda index function. | Cloud architect, Cloud administrator | 
| Create a VPC endpoint for AWS STS. | Run the AWS CLI [create-vpc-endpoint](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/ec2/create-vpc-endpoint.html) command to create a VPC endpoint for AWS Security Token Service (AWS STS). The endpoint enables the Lambda index and search functions in the VPC to access AWS STS. The functions use AWS STS when they assume the IAM role.<pre>aws ec2 create-vpc-endpoint \<br />  --vpc-id <VPC-ID> \<br />  --vpc-endpoint-type Interface \<br />  --service-name com.amazonaws.us-east-1.sts \<br />  --subnet-id <subnet-ID> \<br />  --security-group-id <security-group-ID></pre>For `vpc-id`, specify the VPC that you’re using for the Lambda index and search functions. For `subnet-id`, provide the subnet in which this endpoint should be created. For `security-group-id`, specify the security group to associate this endpoint with. (It could be the same as the security group Lambda uses.) | Cloud architect, Cloud administrator | 
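
Both commands hard-code `us-east-1`. VPC endpoint service names follow the pattern `com.amazonaws.<region>.<service>`, so a small helper keeps the Region in one place:

```python
def vpc_endpoint_service(region: str, service: str) -> str:
    """Build the regional service name passed to create-vpc-endpoint."""
    return f"com.amazonaws.{region}.{service}"

print(vpc_endpoint_service("eu-west-1", "s3"))   # com.amazonaws.eu-west-1.s3
print(vpc_endpoint_service("eu-west-1", "sts"))  # com.amazonaws.eu-west-1.sts
```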

### Test multi-tenancy and data isolation
<a name="test-multi-tenancy-and-data-isolation"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Update the Python files for the index and search functions. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/build-a-multi-tenant-serverless-architecture-in-amazon-opensearch-service.html)You can get the Elasticsearch endpoint from the **Overview** tab of the OpenSearch Service console. Its hostname ends with `<AWS-Region>.es.amazonaws.com`. | Cloud architect, App developer | 
| Update the Lambda code. | Use the AWS CLI [update-function-code](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/lambda/update-function-code.html) command to update the Lambda code with the changes you made to the Python files:<pre>aws lambda update-function-code \<br />  --function-name index-lambda-function \<br />  --zip-file fileb://index_lambda_package.zip<br /><br />aws lambda update-function-code \<br />  --function-name search-lambda-function \<br />  --zip-file fileb://search_lambda_package.zip</pre> | Cloud architect, App developer | 
| Upload raw data to the S3 bucket. | Use the AWS CLI [cp](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3/cp.html) command to upload data for the Tenant-1 and Tenant-2 objects to the `tenantrawdata` bucket (specify the name of the S3 bucket you created for this purpose):<pre>aws s3 cp tenant-1-data s3://tenantrawdata<br />aws s3 cp tenant-2-data s3://tenantrawdata</pre>The S3 bucket is set up to run the Lambda index function whenever data is uploaded so that the document is indexed in Elasticsearch. | Cloud architect, Cloud administrator | 
| Search data from the Kibana console. | On the Kibana console, run the following query:<pre>GET tenant-data/_search</pre>This query displays all the documents indexed in Elasticsearch. In this case, you should see two separate documents for Tenant-1 and Tenant-2. | Cloud architect, Cloud administrator | 
| Test the search API from API Gateway. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/build-a-multi-tenant-serverless-architecture-in-amazon-opensearch-service.html)For screen illustrations, see the [Additional information](#build-a-multi-tenant-serverless-architecture-in-amazon-opensearch-service-additional) section. | Cloud architect, App developer | 
| Clean up resources. | Clean up all the resources you created to prevent additional charges to your account. | AWS DevOps, Cloud architect, Cloud administrator | 
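
The contents of the `tenant-1-data` and `tenant-2-data` files aren’t shown in this excerpt. A plausible minimal record, assuming only that each document carries the `TenantId` property that the index function and the Kibana DLS roles key off (the other fields are purely illustrative):

```python
import json

# Hypothetical contents of the tenant-1-data file uploaded in the step above.
tenant_1_doc = {
    "TenantId": "Tenant-1",
    "Title": "Sample record for Tenant-1",
    "CreatedAt": "2021-01-01T00:00:00Z",
}

print(json.dumps(tenant_1_doc, indent=2))
```

With one such document uploaded per tenant, the `GET tenant-data/_search` query above should return two hits, and each tenant role should be able to see only its own.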

## Related resources
<a name="build-a-multi-tenant-serverless-architecture-in-amazon-opensearch-service-resources"></a>
+ [AWS SDK for Python (Boto)](https://aws.amazon.com/sdk-for-python/)
+ [AWS Lambda documentation](https://docs.aws.amazon.com/lambda/)
+ [API Gateway documentation](https://docs.aws.amazon.com/apigateway/)
+ [Amazon S3 documentation](https://docs.aws.amazon.com/s3/)
+ [Amazon OpenSearch Service documentation](https://docs.aws.amazon.com/elasticsearch-service/)
  + [Fine-grained access control in Amazon OpenSearch Service](https://docs.amazonaws.cn/en_us/elasticsearch-service/latest/developerguide/fgac.html)
  + [Creating a search application with Amazon OpenSearch Service](https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/search-example.html)
  + [Launching your Amazon OpenSearch Service domains within a VPC](https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-vpc.html)

## Additional information
<a name="build-a-multi-tenant-serverless-architecture-in-amazon-opensearch-service-additional"></a>

**Data partitioning models**

There are three common data partitioning models used in multi-tenant systems: silo, pool, and hybrid. The model you choose depends on the compliance, noisy neighbor, operations, and isolation needs of your environment.

*Silo model*

In the silo model, each tenant’s data is stored in a distinct storage area where there is no commingling of tenant data. You can use two approaches to implement the silo model with OpenSearch Service: domain per tenant and index per tenant.
+ **Domain per tenant** – You can use a separate OpenSearch Service domain (synonymous with an Elasticsearch cluster) per tenant. Placing each tenant in its own domain provides all the benefits associated with having data in a standalone construct. However, this approach introduces management and agility challenges. Its distributed nature makes it harder to aggregate and assess the operational health and activity of tenants. It is also a costly option: for production workloads, each OpenSearch Service domain needs at a minimum three dedicated master nodes and two data nodes.

![\[Domain per tenant silo model for multi-tenant serverless architectures.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/750196bb-03f6-4b6e-92cd-eb7141602547/images/c2195f82-e5ed-40bb-b76a-3b0210bf1254.png)


 
+ **Index per tenant** – You can place tenant data in separate indexes within an OpenSearch Service cluster. With this approach, you use a tenant identifier when you create and name the index, by prepending the tenant identifier to the index name. The index per tenant approach helps you achieve your silo goals without introducing a completely separate cluster for each tenant. However, you might encounter memory pressure if the number of indexes grows, because this approach requires more shards, and the master node has to handle more allocation and rebalancing.

![\[Index per tenant silo model for multi-tenant serverless architectures.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/750196bb-03f6-4b6e-92cd-eb7141602547/images/354a9463-25bb-422b-84de-d4875a7c8ea2.png)


 

**Isolation in the silo model** – In the silo model, you use IAM policies to isolate the domains or indexes that hold each tenant’s data. These policies prevent one tenant from accessing another tenant’s data. To implement your silo isolation model, you can create a resource-based policy that controls access to your tenant resource. This is often a domain access policy that specifies which actions a principal can perform on the domain’s sub-resources, including Elasticsearch indexes and APIs. With IAM identity-based policies, you can specify *allowed* or *denied* actions on the domain, indexes, or APIs within OpenSearch Service. The `Action` element of an IAM policy describes the specific action or actions that are allowed or denied by the policy, and the `Principal` element specifies the affected accounts, users, or roles.

The following sample policy grants Tenant-1 full access (as specified by `es:*`) to the sub-resources on the `tenant-1` domain only. The trailing `/*` in the `Resource` element indicates that this policy applies to the domain’s sub-resources, not to the domain itself. When this policy is in effect, tenants are not allowed to create a new domain or modify settings on an existing domain.

```
{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Effect": "Allow",
         "Principal": {
            "AWS": "arn:aws:iam::<aws-account-id>:user/Tenant-1"
         },
         "Action": "es:*",
         "Resource": "arn:aws:es:<Region>:<account-id>:domain/tenant-1/*"
      }
   ]
}
```

To implement the index per tenant silo model, you would need to modify this sample policy to further restrict Tenant-1 to the specified index or indexes, by specifying the index name. The following sample policy restricts Tenant-1 to the `tenant-index-1` index. 

```
{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Effect": "Allow",
         "Principal": {
            "AWS": "arn:aws:iam::123456789012:user/Tenant-1"
         },
         "Action": "es:*",
         "Resource": "arn:aws:es:<Region>:<account-id>:domain/test-domain/tenant-index-1/*"
      }
   ]
}
```

*Pool model*

In the pool model, all tenant data is stored in an index within the same domain. The tenant identifier is included in the data (document) and used as the partition key, so you can determine which data belongs to which tenant. This model reduces the management overhead. Operating and managing the pooled index is easier and more efficient than managing multiple indexes. However, because tenant data is commingled within the same index, you lose the natural tenant isolation that the silo model provides. This approach might also degrade performance because of the noisy neighbor effect.

![\[Pool model for multi-tenant serverless architectures.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/750196bb-03f6-4b6e-92cd-eb7141602547/images/c2c3bb0f-6ccd-47a7-ab67-e7f3f8c7f289.png)


 

**Tenant isolation in the pool model** – In general, tenant isolation is challenging to implement in the pool model. The IAM mechanism used with the silo model doesn’t allow you to describe isolation based on the tenant ID stored in your document.

An alternative approach is to use the [fine-grained access control](https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/fgac.html) (FGAC) support provided by the Open Distro for Elasticsearch. FGAC allows you to control permissions at an index, document, or field level. With each request, FGAC evaluates the user credentials and either authenticates the user or denies access. If FGAC authenticates the user, it fetches all roles mapped to that user and uses the complete set of permissions to determine how to handle the request. 

To achieve the required isolation in the pooled model, you can use [document-level security](https://opendistro.github.io/for-elasticsearch-docs/docs/security/access-control/document-level-security/), which lets you restrict a role to a subset of documents in an index. The following sample role restricts queries to Tenant-1. By applying this role to Tenant-1, you can achieve the necessary isolation. 

```
{
   "bool": {
     "must": {
       "match": {
         "TenantId": "Tenant-1"
       }
     }
   }
 }
```

*Hybrid model*

The hybrid model uses a combination of the silo and pool models in the same environment to offer unique experiences to each tenant tier (such as free, standard, and premium tiers). Each tier follows the same security profile that was used in the pool model.

 

![\[Hybrid model for multi-tenant serverless architectures.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/750196bb-03f6-4b6e-92cd-eb7141602547/images/e7def98a-38ef-435a-9881-7e95ae4d4940.png)


**Tenant isolation in the hybrid model** – In the hybrid model, you follow the same security profile as in the pool model, where the FGAC security model at the document level provides tenant isolation. Although this strategy simplifies cluster management and offers agility, it complicates other aspects of the architecture. For example, your code requires additional complexity to determine which model is associated with each tenant. You also have to ensure that single-tenant queries don’t saturate the entire domain and degrade the experience for other tenants. 

**Testing in API Gateway**

*Test window for Tenant-1 query*

![\[Test window for Tenant-1 query.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/750196bb-03f6-4b6e-92cd-eb7141602547/images/a6757d3f-977a-4ecc-90cb-83ab7f1c3588.png)


*Test window for Tenant-2 query*

 

![\[Test window for Tenant-2 query.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/750196bb-03f6-4b6e-92cd-eb7141602547/images/31bfd656-33ca-4750-b6e6-da4d703c2071.png)


## Attachments
<a name="attachments-750196bb-03f6-4b6e-92cd-eb7141602547"></a>

To access additional content that is associated with this document, unzip the following file: [attachment.zip](samples/p-attach/750196bb-03f6-4b6e-92cd-eb7141602547/attachments/attachment.zip)

# Deploy multiple-stack applications using AWS CDK with TypeScript
<a name="deploy-multiple-stack-applications-using-aws-cdk-with-typescript"></a>

*Dr. Rahul Sharad Gaikwad, Amazon Web Services*

## Summary
<a name="deploy-multiple-stack-applications-using-aws-cdk-with-typescript-summary"></a>

This pattern provides a step-by-step approach for application deployment on Amazon Web Services (AWS) using AWS Cloud Development Kit (AWS CDK) with TypeScript. As an example, the pattern deploys a serverless real-time analytics application.

The pattern builds and deploys nested stack applications. The parent AWS CloudFormation stack calls the child, or nested, stacks. Each child stack builds and deploys the AWS resources that are defined in the CloudFormation stack. The AWS CDK Toolkit, through its command line interface (CLI) command `cdk`, is the primary interface for the CloudFormation stacks.

## Prerequisites and limitations
<a name="deploy-multiple-stack-applications-using-aws-cdk-with-typescript-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ Existing virtual private cloud (VPC) and subnets
+ AWS CDK Toolkit installed and configured
+ A user with administrator permissions and a set of access keys
+ Node.js
+ AWS Command Line Interface (AWS CLI)

**Limitations**
+ Because AWS CDK uses AWS CloudFormation, AWS CDK applications are subject to CloudFormation service quotas. For more information, see [AWS CloudFormation quotas](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cloudformation-limits.html).

**Product versions**

This pattern has been built and tested using the following tools and versions.
+ AWS CDK Toolkit 1.83.0
+ Node.js 14.13.0
+ npm 7.0.14

The pattern should work with later versions of AWS CDK and npm. Note that Node.js versions 13.0.0 through 13.6.0 are not compatible with the AWS CDK.

## Architecture
<a name="deploy-multiple-stack-applications-using-aws-cdk-with-typescript-architecture"></a>

**Target technology stack**
+ AWS Amplify Console
+ Amazon API Gateway
+ AWS CDK
+ Amazon CloudFront
+ Amazon Cognito
+ Amazon DynamoDB
+ Amazon Data Firehose
+ Amazon Kinesis Data Streams
+ AWS Lambda
+ Amazon Simple Storage Service (Amazon S3)

**Target architecture**

The following diagram shows multiple-stack application deployment using AWS CDK with TypeScript.

![\[Stack architecture in the VPC, with a parent stack and two child stacks that contain resources.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/0ac29a11-1362-4084-92ed-6b85205763ca/images/8f92e86a-aa3d-4f8a-9b11-b92c52a7226c.png)


 

The following diagram shows the architecture of the example serverless real-time application.

![\[Application architecture in the Region.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/0ac29a11-1362-4084-92ed-6b85205763ca/images/2df00faf-f871-4aec-9655-19ba2eb14cf8.png)


 

## Tools
<a name="deploy-multiple-stack-applications-using-aws-cdk-with-typescript-tools"></a>

**Tools**
+ [AWS Amplify Console](https://docs.aws.amazon.com/amplify/latest/userguide/welcome.html) is the control center for fullstack web and mobile application deployments in AWS. Amplify Console hosting provides a git-based workflow for hosting fullstack serverless web apps with continuous deployment. The Admin UI is a visual interface for frontend web and mobile developers to create and manage app backends outside the AWS console.
+ [Amazon API Gateway](https://docs.aws.amazon.com/apigateway/latest/developerguide/welcome.html) is an AWS service for creating, publishing, maintaining, monitoring, and securing REST, HTTP, and WebSocket APIs at any scale.
+ [AWS Cloud Development Kit (AWS CDK)](https://docs.aws.amazon.com/cdk/latest/guide/home.html) is a software development framework that helps you define and provision AWS Cloud infrastructure in code.
+ [AWS CDK Toolkit](https://docs.aws.amazon.com/cdk/latest/guide/cli.html) is a command line cloud development kit that helps you interact with your AWS CDK app. The `cdk` CLI command is the primary tool for interacting with your AWS CDK app. It runs your app, interrogates the application model you defined, and produces and deploys the AWS CloudFormation templates generated by the AWS CDK.
+ [Amazon CloudFront](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Introduction.html) is a web service that speeds up distribution of static and dynamic web content, such as .html, .css, .js, and image files. CloudFront delivers your content through a worldwide network of data centers called edge locations for lower latency and improved performance.
+ [Amazon Cognito](https://docs.aws.amazon.com/cognito/latest/developerguide/what-is-amazon-cognito.html) provides authentication, authorization, and user management for your web and mobile apps. Your users can sign in directly or through a third party.
+ [Amazon DynamoDB](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Introduction.html) is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability.
+ [Amazon Data Firehose](https://docs.aws.amazon.com/firehose/latest/dev/what-is-this-service.html) is a fully managed service for delivering real-time [streaming data](https://aws.amazon.com/streaming-data/) to destinations such as Amazon S3, Amazon Redshift, Amazon OpenSearch Service, Splunk, and any custom HTTP endpoint or HTTP endpoints owned by supported third-party service providers.
+ [Amazon Kinesis Data Streams](https://docs.aws.amazon.com/streams/latest/dev/introduction.html) is a service for collecting and processing large streams of data records in real time.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a compute service that supports running code without provisioning or managing servers. Lambda runs your code only when needed and scales automatically, from a few requests per day to thousands per second. You pay only for the compute time that you consume—there is no charge when your code is not running.
+ [Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) is a cloud-based object storage service that helps you store, protect, and retrieve any amount of data.

**Code**

The code for this pattern is attached.

## Epics
<a name="deploy-multiple-stack-applications-using-aws-cdk-with-typescript-epics"></a>

### Install AWS CDK Toolkit
<a name="install-aws-cdk-toolkit"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Install AWS CDK Toolkit. | To install AWS CDK Toolkit globally, run the following command: `npm install -g aws-cdk` | DevOps | 
| Verify the version. | To verify the AWS CDK Toolkit version, run the following command. `cdk --version` | DevOps | 

### Set up AWS credentials
<a name="set-up-aws-credentials"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Set up credentials. | To set up credentials, run the `aws configure` command and follow the prompts.<pre>$ aws configure<br />AWS Access Key ID [None]: <your_access_key_id><br />AWS Secret Access Key [None]: <your_secret_access_key><br />Default region name [None]:<br />Default output format [None]:</pre> | DevOps | 

### Download the project code
<a name="download-the-project-code"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Download the attached project code. | For more information about the directory and file structure, see the *Additional information* section. | DevOps | 

### Bootstrap the AWS CDK environment
<a name="bootstrap-the-aws-cdk-environment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Bootstrap the environment. | To deploy the AWS CloudFormation template to the account and AWS Region that you want to use, run the following command: `cdk bootstrap <account>/<Region>`. For more information, see the [AWS documentation](https://docs.aws.amazon.com/cdk/latest/guide/bootstrapping.html). | DevOps | 

### Build and deploy the project
<a name="build-and-deploy-the-project"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Build the project. | To build the project code, run the `npm run build` command. | DevOps | 
| Deploy the project. | To deploy the project code, run the `cdk deploy` command. | DevOps | 

### Verify outputs
<a name="verify-outputs"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Verify stack creation. | On the AWS Management Console, choose **CloudFormation**. In the stacks for the project, verify that a parent stack and two child stacks have been created. | DevOps | 

### Test the application
<a name="test-the-application"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Send data to Kinesis Data Streams. | Configure your AWS account to send data to Kinesis Data Streams by using Amazon Kinesis Data Generator (KDG). For more information, see [Amazon Kinesis Data Generator](https://awslabs.github.io/amazon-kinesis-data-generator/web/help.html). | DevOps | 
| Create an Amazon Cognito user. | To create an Amazon Cognito user, download the `cognito-setup.json` CloudFormation template from the *Create an Amazon Cognito User* section on the [Kinesis Data Generator help page](https://awslabs.github.io/amazon-kinesis-data-generator/web/help.html). Launch the template, and then enter your Amazon Cognito **Username** and **Password**. The **Outputs** tab lists the Kinesis Data Generator URL. | DevOps | 
| Log in to Kinesis Data Generator. | To log in to KDG, use the Amazon Cognito credentials that you provided and the Kinesis Data Generator URL. | DevOps | 
| Test the application. | In KDG, in **Record template**, **Template 1**, paste the test code from the *Additional information* section, and choose **Send data**. | DevOps | 
| Test API Gateway. | After the data has been ingested, test API Gateway by using the `GET` method to retrieve data. | DevOps | 

## Related resources
<a name="deploy-multiple-stack-applications-using-aws-cdk-with-typescript-resources"></a>

**References**
+ [AWS Cloud Development Kit](https://aws.amazon.com/cdk/)
+ [AWS CDK on GitHub](https://github.com/aws/aws-cdk)
+ [Working with nested stacks](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-nested-stacks.html)
+ [AWS sample example - Serverless real-time analytics](https://github.com/aws-samples/serverless-realtime-analytics)

## Additional information
<a name="deploy-multiple-stack-applications-using-aws-cdk-with-typescript-additional"></a>

**Directory and file details**

This pattern sets up the following three stacks.
+ `parent-cdk-stack.ts` – This stack acts as the parent stack and calls the two child applications as nested stacks. 
+ `real-time-analytics-poc-stack.ts` – This nested stack contains the infrastructure and application code.
+ `real-time-analytics-web-stack.ts` – This nested stack contains only the static web application code.

*Important files and their functionality*
+ `bin/real-time-analytics-poc.ts` – Entry point of the AWS CDK application. It loads all stacks defined under `lib/`.
+ `lib/real-time-analytics-poc-stack.ts` – Definition of the AWS CDK application’s stack (`real-time-analytics-poc`).
+ `lib/real-time-analytics-web-stack.ts` – Definition of the AWS CDK application’s stack (`real-time-analytics-web-stack`).
+ `lib/parent-cdk-stack.ts` – Definition of the AWS CDK application’s stack (`parent-cdk`).
+ `package.json` – npm module manifest, which includes the application name, version, and dependencies.
+ `package-lock.json` – Maintained by npm.
+ `cdk.json` – Configuration file that tells the AWS CDK Toolkit how to run the application.
+ `tsconfig.json` – The project’s TypeScript configuration.
+ `.gitignore` – List of files that Git should exclude from source control.
+ `node_modules` – Maintained by npm; includes the project’s dependencies.

The following section of code in the parent stack calls the child applications as nested AWS CDK stacks.

```typescript
import * as cdk from '@aws-cdk/core';
import { Construct, Stack, StackProps } from '@aws-cdk/core';
import { RealTimeAnalyticsPocStack } from './real-time-analytics-poc-stack';
import { RealTimeAnalyticsWebStack } from './real-time-analytics-web-stack';


export class CdkParentStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);


    new RealTimeAnalyticsPocStack(this, 'RealTimeAnalyticsPocStack');
    new RealTimeAnalyticsWebStack(this, 'RealTimeAnalyticsWebStack');
  }
}
```

**Code for testing**

```
session={{date.now('YYYYMMDD')}}|sequence={{date.now('x')}}|reception={{date.now('x')}}|instrument={{random.number(9)}}|l={{random.number(20)}}|price_0={{random.number({"min":10000, "max":30000})}}|price_1={{random.number({"min":10000, "max":30000})}}|price_2={{random.number({"min":10000, "max":30000})}}|price_3={{random.number({"min":10000, "max":30000})}}|price_4={{random.number({"min":10000, "max":30000})}}|price_5={{random.number({"min":10000, "max":30000})}}|price_6={{random.number({"min":10000, "max":30000})}}|price_7={{random.number({"min":10000, "max":30000})}}|price_8={{random.number({"min":10000, "max":30000})}}|
```
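Each line that the template above produces is a single pipe-delimited record of `key=value` fields. As a sketch of how a consumer might split such a record back into fields (the function name is illustrative, not part of the attached code):

```typescript
// Split one pipe-delimited key=value record into an object.
// A trailing "|" produces an empty segment, which is skipped.
function parseRecord(record: string): Record<string, string> {
  const fields: Record<string, string> = {};
  for (const part of record.split("|")) {
    const eq = part.indexOf("=");
    if (eq === -1) continue; // skip empty or malformed segments
    fields[part.slice(0, eq)] = part.slice(eq + 1);
  }
  return fields;
}
```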

**Testing API Gateway**

On the API Gateway console, test API Gateway by using the `GET` method. 

![\[API Gateway Console with GET chosen under OPTIONS.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/0ac29a11-1362-4084-92ed-6b85205763ca/images/452e5b8f-6d61-401d-8484-e5a436cb6f1b.png)


 

## Attachments
<a name="attachments-0ac29a11-1362-4084-92ed-6b85205763ca"></a>

To access additional content that is associated with this document, unzip the following file: [attachment.zip](samples/p-attach/0ac29a11-1362-4084-92ed-6b85205763ca/attachments/attachment.zip)

# Automate deployment of nested applications using AWS SAM
<a name="automate-deployment-of-nested-applications-using-aws-sam"></a>

*Dr. Rahul Sharad Gaikwad, Ishwar Chauthaiwale, Dmitry Gulin, and Tabby Ward, Amazon Web Services*

## Summary
<a name="automate-deployment-of-nested-applications-using-aws-sam-summary"></a>

On Amazon Web Services (AWS), AWS Serverless Application Model (AWS SAM) is an open-source framework that provides shorthand syntax to express functions, APIs, databases, and event source mappings. With just a few lines for each resource, you can define the application you want and model it by using YAML. During deployment, AWS SAM transforms and expands the SAM syntax into AWS CloudFormation syntax, which helps you build serverless applications faster.

AWS SAM simplifies the development, deployment, and management of serverless applications on AWS. It provides a standardized framework, faster deployment, local testing capabilities, resource management, seamless integration with development tools, and a supportive community. These features make it a valuable tool for building serverless applications efficiently.

This pattern uses AWS SAM templates to automate the deployment of nested applications. A nested application is an application within another application. Parent applications call their child applications. These are loosely coupled components of a serverless architecture. 

Using nested applications, you can rapidly build highly sophisticated serverless architectures by reusing services or components that are independently authored and maintained but are composed using AWS SAM and the Serverless Application Repository. Nested applications help you to build applications that are more powerful, avoid duplicated work, and ensure consistency and best practices across your teams and organizations. To demonstrate nested applications, the pattern deploys an [example AWS serverless shopping cart application](https://github.com/aws-samples/aws-sam-nested-stack-sample).
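The nesting itself is declared in the parent template: each child application is an `AWS::Serverless::Application` resource whose `Location` property points at the child template. The following fragment is a sketch only; the logical IDs and file paths are illustrative, not taken from the example repository.

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31

Resources:
  AuthApp:                           # illustrative logical ID
    Type: AWS::Serverless::Application
    Properties:
      Location: auth/template.yaml   # path to the child template

  ShoppingApp:
    Type: AWS::Serverless::Application
    Properties:
      Location: shopping-cart-service/template.yaml
```

During `sam deploy`, each `AWS::Serverless::Application` resource becomes a nested CloudFormation stack.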

## Prerequisites and limitations
<a name="automate-deployment-of-nested-applications-using-aws-sam-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ An existing virtual private cloud (VPC) and subnets
+ An integrated development environment such as Visual Studio Code (for more information, see [Tools to Build on AWS](https://aws.amazon.com/getting-started/tools-sdks/#IDE_and_IDE_Toolkits))
+ The Python `wheel` library, installed by using `pip install wheel`, if it isn’t already installed

**Limitations**
+ The maximum number of applications that can be nested in a serverless application is 200.
+ A nested application can have a maximum of 60 parameters.

**Product versions**
+ This solution is built on AWS SAM command line interface (AWS SAM CLI) version 1.21.1, but this architecture should work with later AWS SAM CLI versions.

## Architecture
<a name="automate-deployment-of-nested-applications-using-aws-sam-architecture"></a>

**Target technology stack**
+ Amazon API Gateway
+ AWS SAM
+ Amazon Cognito
+ Amazon DynamoDB
+ AWS Lambda
+ Amazon Simple Queue Service (Amazon SQS) queue

**Target architecture**

The following diagram shows how user requests are made to the shopping services by calling APIs. The user's request, including all necessary information, is sent to Amazon API Gateway and the Amazon Cognito authorizer, which handles authentication and authorization for the APIs.

When an item is added, deleted, or updated in DynamoDB, an event is put onto DynamoDB Streams, which in turn initiates a Lambda function. To avoid deleting old items immediately as part of a synchronous workflow, messages are put onto an SQS queue, which initiates a worker function that performs the deletions.

![\[POST and PUT operations from API Gateway to Lambda functions to DynamoDB and Product Service.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/218adecc-b5b8-4193-9012-b5d584e2e128/images/5b454bae-5fd4-405d-a37d-6bafc3fcf889.png)
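The asynchronous deletion step can be sketched as a pure function: the worker Lambda function receives a standard SQS event whose record bodies identify the items to delete. The `itemKey` field name is an assumption for illustration, and the actual delete call (for example, a DynamoDB `BatchWriteItem`) is omitted.

```typescript
// Sketch of the SQS-triggered worker's input handling. Each SQS message
// body is assumed to be JSON that carries the key of one item to delete.
interface SqsRecord { body: string }
interface SqsEvent  { Records: SqsRecord[] }

function keysToDelete(event: SqsEvent): string[] {
  return event.Records.map((r) => JSON.parse(r.body).itemKey);
}
```

Because the queue decouples deletion from the request path, the user's synchronous call returns without waiting for the cleanup to finish.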


In this solution setup, AWS SAM CLI serves as the interface for AWS CloudFormation stacks. AWS SAM templates automatically deploy nested applications. The parent SAM template calls the child templates, and the parent CloudFormation stack deploys the child stacks. Each child stack builds the AWS resources that are defined in the AWS SAM CloudFormation templates.

![\[Four-step process using AWS SAM CLI with a parent and three child CloudFormation stacks.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/218adecc-b5b8-4193-9012-b5d584e2e128/images/5828026e-72ad-4a3f-a5f2-bffac0f13e42.png)


1. Build and deploy the stacks.

1. The Auth CloudFormation stack contains Amazon Cognito.

1. The Product CloudFormation stack contains a Lambda function and Amazon API Gateway.

1. The Shopping CloudFormation stack contains a Lambda function, Amazon API Gateway, the SQS queue, and the Amazon DynamoDB database.

## Tools
<a name="automate-deployment-of-nested-applications-using-aws-sam-tools"></a>

**Tools**
+ [Amazon API Gateway](https://docs.aws.amazon.com/apigateway/latest/developerguide/welcome.html) helps you create, publish, maintain, monitor, and secure REST, HTTP, and WebSocket APIs at any scale.
+ [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) helps you set up AWS resources, provision them quickly and consistently, and manage them throughout their lifecycle across AWS accounts and Regions.
+ [Amazon Cognito](https://docs.aws.amazon.com/cognito/latest/developerguide/what-is-amazon-cognito.html) provides authentication, authorization, and user management for web and mobile apps.
+ [Amazon DynamoDB](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Introduction.html) is a fully managed NoSQL database service that provides fast, predictable, and scalable performance.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a compute service that helps you run code without needing to provision or manage servers. It runs your code only when needed and scales automatically, so you pay only for the compute time that you use.
+ [AWS Serverless Application Model (AWS SAM)](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/what-is-sam.html) is an open-source framework that helps you build serverless applications in the AWS Cloud.
+ [Amazon Simple Queue Service (Amazon SQS)](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/welcome.html) provides a secure, durable, and available hosted queue that helps you integrate and decouple distributed software systems and components.

**Code**

The code for this pattern is available in the GitHub [AWS SAM Nested Stack Sample](https://github.com/aws-samples/aws-sam-nested-stack-sample) repository.

## Epics
<a name="automate-deployment-of-nested-applications-using-aws-sam-epics"></a>

### Install AWS SAM CLI
<a name="install-aws-sam-cli"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Install AWS SAM CLI. | To install AWS SAM CLI, see the instructions in the [AWS SAM documentation](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-cli-install.html). | DevOps engineer | 
| Set up AWS credentials. | To set AWS credentials so that the AWS SAM CLI can make calls to AWS services on your behalf, run the `aws configure` command and follow the prompts.<pre>$ aws configure<br />AWS Access Key ID [None]: <your_access_key_id><br />AWS Secret Access Key [None]: <your_secret_access_key><br />Default region name [None]:<br />Default output format [None]:</pre>For more information about setting up your credentials, see [Authentication and access credentials](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-authentication.html). | DevOps engineer | 

### Initialize the AWS SAM project
<a name="initialize-the-aws-sam-project"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clone the AWS SAM code repository. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-deployment-of-nested-applications-using-aws-sam.html) | DevOps engineer | 
| Deploy templates to initialize the project. | To initialize the project, run the `sam init` command. When prompted to choose a template source, choose `Custom Template Location`. | DevOps engineer | 

### Compile and build the SAM template code
<a name="compile-and-build-the-sam-template-code"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Review the AWS SAM application templates. | Review the templates for the nested applications. This example uses the following nested application templates:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-deployment-of-nested-applications-using-aws-sam.html) | DevOps engineer | 
| Review the parent template. | Review the template that will invoke the nested application templates. In this example, the parent template is `template.yml`. All separate applications are nested in the single parent template `template.yml`. | DevOps engineer | 
| Compile and build the AWS SAM template code.  | Using the AWS SAM CLI, run the following command.<pre>sam build</pre> | DevOps engineer | 

### Deploy the AWS SAM template
<a name="deploy-the-aws-sam-template"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Deploy the applications. | To launch the SAM template code that creates the nested application CloudFormation stacks and deploys code in the AWS environment, run the following command.<pre>sam deploy --guided --stack-name shopping-cart-nested-stack --capabilities CAPABILITY_IAM CAPABILITY_AUTO_EXPAND</pre>The command will prompt with a few questions. Answer all questions with `y`. | DevOps engineer | 

### Verify the deployment
<a name="verify-the-deployment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Verify the stacks. | To review the AWS CloudFormation stacks and AWS resources that were defined in the AWS SAM templates, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-deployment-of-nested-applications-using-aws-sam.html) | DevOps engineer | 

## Related resources
<a name="automate-deployment-of-nested-applications-using-aws-sam-resources"></a>

**References**
+ [AWS Serverless Application Model (AWS SAM)](https://aws.amazon.com/serverless/sam/#:~:text=The%20AWS%20Serverless%20Application%20Model,and%20model%20it%20using%20YAML.)
+ [AWS SAM on GitHub](https://github.com/aws/serverless-application-model)
+ [Serverless Shopping Cart Microservice](https://github.com/aws-samples/aws-serverless-shopping-cart) (AWS example application)

**Tutorials and videos**
+ [Build a Serverless App](https://youtu.be/Hv3YrP8G4ag)
+ [AWS Online Tech Talks: Serverless Application Building and Deployments with AWS SAM](https://youtu.be/1NU7vyJw9LU)

## Additional information
<a name="automate-deployment-of-nested-applications-using-aws-sam-additional"></a>

After all the code is in place, the example has the following directory structure:
+ [sam\$1stacks](https://docs.aws.amazon.com/lambda/latest/dg/chapter-layers.html) – This folder contains the `shared.py` layer. A layer is a file archive that contains libraries, a custom runtime, or other dependencies. With layers, you can use libraries in your function without needing to include them in a deployment package.
+ *product-mock-service* – This folder contains all product-related Lambda functions and files.
+ *shopping-cart-service* – This folder contains all shopping-related Lambda functions and files.

# Implement SaaS tenant isolation for Amazon S3 by using an AWS Lambda token vending machine
<a name="implement-saas-tenant-isolation-for-amazon-s3-by-using-an-aws-lambda-token-vending-machine"></a>

*Tabby Ward, Thomas Davis, and Sravan Periyathambi, Amazon Web Services*

## Summary
<a name="implement-saas-tenant-isolation-for-amazon-s3-by-using-an-aws-lambda-token-vending-machine-summary"></a>

Multitenant SaaS applications must implement systems to ensure that tenant isolation is maintained. When you store tenant data on the same AWS resource—such as when multiple tenants store data in the same Amazon Simple Storage Service (Amazon S3) bucket—you must ensure that cross-tenant access cannot occur. Token vending machines (TVMs) are one way to provide tenant data isolation. TVMs provide a mechanism for obtaining tokens while abstracting the complexity of how the tokens are generated. Developers can use a TVM without detailed knowledge of how it produces tokens.

This pattern implements a TVM by using AWS Lambda. The TVM generates a token that consists of temporary AWS Security Token Service (AWS STS) credentials that limit access to a single SaaS tenant's data in an S3 bucket.

TVMs, and the code that’s provided with this pattern, are typically used with claims that are derived from JSON Web Tokens (JWTs) to associate requests for AWS resources with a tenant-scoped AWS Identity and Access Management (IAM) policy. You can use the code in this pattern as a basis to implement a SaaS application that generates scoped, temporary STS credentials based on the claims provided in a JWT.
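The scoping step can be sketched as plain policy construction: given a tenant identifier derived from a JWT claim, build an IAM session policy that limits Amazon S3 access to that tenant's key prefix. The function name, bucket layout, and actions below are illustrative assumptions; the attached Java library may structure this differently.

```typescript
// Illustrative only: a session policy that scopes S3 access to one
// tenant's key prefix. Passed as the Policy parameter of an
// sts:AssumeRole call, it further restricts the temporary
// credentials that AWS STS returns.
function tenantScopedS3Policy(bucket: string, tenantId: string): string {
  return JSON.stringify({
    Version: "2012-10-17",
    Statement: [
      {
        Effect: "Allow",
        Action: ["s3:GetObject", "s3:PutObject"],
        Resource: `arn:aws:s3:::${bucket}/${tenantId}/*`,
      },
    ],
  });
}
```

Because a session policy can only narrow the assumed role's permissions, a caller holding these credentials cannot reach another tenant's prefix even if the underlying application role is broader.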

## Prerequisites and limitations
<a name="implement-saas-tenant-isolation-for-amazon-s3-by-using-an-aws-lambda-token-vending-machine-prereqs"></a>

**Prerequisites**
+ An active AWS account.
+ AWS Command Line Interface (AWS CLI) [version 1.19.0 or later](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv1.html), installed and configured on macOS, Linux, or Windows. Alternatively, you can use AWS CLI [version 2.1 or later](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html).

**Limitations**
+ The code for this pattern is written in Java; other programming languages aren’t currently supported. 
+ The sample application doesn’t include AWS cross-Region or disaster recovery (DR) support. 
+ This pattern demonstrates how a Lambda TVM for a SaaS application can provide scoped tenant access. This pattern is not intended to be used in production environments without additional security testing as a part of your specific application or use case.

## Architecture
<a name="implement-saas-tenant-isolation-for-amazon-s3-by-using-an-aws-lambda-token-vending-machine-architecture"></a>

**Target technology stack**
+ AWS Lambda
+ Amazon S3
+ IAM
+ AWS Security Token Service (AWS STS)

**Target architecture**

![\[Generating a token to gain temporary STS credentials to access data in an S3 bucket.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/97a34c8e-d04e-40b6-acbf-1baa176d22a9/images/14d0508a-703b-4229-85e6-c5094de7fe01.png)


 

## Tools
<a name="implement-saas-tenant-isolation-for-amazon-s3-by-using-an-aws-lambda-token-vending-machine-tools"></a>

**AWS services**
+ [AWS Command Line Interface (AWS CLI)](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) is an open-source tool that helps you interact with AWS services through commands in your command-line shell.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a compute service that helps you run code without needing to provision or manage servers. It runs your code only when needed and scales automatically, so you pay only for the compute time that you use.
+ [AWS Security Token Service (AWS STS)](https://docs.aws.amazon.com/STS/latest/APIReference/welcome.html) helps you request temporary, limited-privilege credentials for users.
+ [Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) is a cloud-based object storage service that helps you store, protect, and retrieve any amount of data.

**Code**

The source code for this pattern is available as an attachment and includes the following files:
+ `s3UploadSample.jar` provides the source code for a Lambda function that uploads a JSON document to an S3 bucket.
+ `tvm-layer.zip` provides a reusable Java library that supplies a token (STS temporary credentials) for the Lambda function to access the S3 bucket and upload the JSON document.
+ `token-vending-machine-sample-app.zip` provides the source code used to create these artifacts and compilation instructions.

To use these files, follow the instructions in the next section.

## Epics
<a name="implement-saas-tenant-isolation-for-amazon-s3-by-using-an-aws-lambda-token-vending-machine-epics"></a>

### Determine variable values
<a name="determine-variable-values"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Determine variable values. | The implementation of this pattern includes several variable names that must be used consistently. Determine the values that should be used for each variable, and provide that value when requested in subsequent steps.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/implement-saas-tenant-isolation-for-amazon-s3-by-using-an-aws-lambda-token-vending-machine.html) | Cloud administrator | 

### Create an S3 bucket
<a name="create-an-s3-bucket"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an S3 bucket for the sample application. | Use the following AWS CLI command to create an S3 bucket. Provide the `<sample-app-bucket-name>` value in the code snippet:<pre>aws s3api create-bucket --bucket <sample-app-bucket-name></pre>The Lambda sample application uploads JSON files to this bucket. | Cloud administrator | 

### Create the IAM TVM role and policy
<a name="create-the-iam-tvm-role-and-policy"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a TVM role. | Use one of the following AWS CLI commands to create an IAM role. Provide the `<sample-tvm-role-name>` value in the command. For macOS or Linux shells:<pre>aws iam create-role \<br />--role-name <sample-tvm-role-name> \<br />--assume-role-policy-document '{<br />    "Version": "2012-10-17",<br />    "Statement": [<br />        {<br />            "Effect": "Allow",<br />            "Action": [<br />                "sts:AssumeRole"<br />            ],<br />            "Principal": {<br />                "Service": [<br />                    "lambda.amazonaws.com"<br />                ]<br />            },<br />            "Condition": {<br />                "StringEquals": {<br />                    "aws:SourceAccount": "<AWS Account ID>"<br />                }<br />            }<br />        }<br />    ]<br />}'</pre>For the Windows command line:<pre>aws iam create-role ^<br />--role-name <sample-tvm-role-name> ^<br />--assume-role-policy-document "{\"Version\": \"2012-10-17\", \"Statement\": [{\"Effect\": \"Allow\", \"Action\": [\"sts:AssumeRole\"], \"Principal\": {\"Service\": [\"lambda.amazonaws.com\"]}, \"Condition\": {\"StringEquals\": {\"aws:SourceAccount\": \"<AWS Account ID>\"}}}]}"</pre>The Lambda sample application assumes this role when the application is invoked. From this role, the code can assume the application role with a scoped policy, which yields tenant-scoped access to the S3 bucket. | Cloud administrator | 
| Create an inline TVM role policy. | Use one of the following AWS CLI commands to create an IAM policy. Provide the `<sample-tvm-role-name>`, `<AWS Account ID>`, and `<sample-app-role-name>` values in the command. For macOS or Linux shells:<pre>aws iam put-role-policy \<br />--role-name <sample-tvm-role-name> \<br />--policy-name assume-app-role \<br />--policy-document '{<br />    "Version": "2012-10-17",<br />    "Statement": [<br />        {<br />            "Effect": "Allow", <br />            "Action": "sts:AssumeRole", <br />            "Resource": "arn:aws:iam::<AWS Account ID>:role/<sample-app-role-name>"<br />        }<br />    ]}'</pre>For the Windows command line:<pre>aws iam put-role-policy ^<br />--role-name <sample-tvm-role-name> ^<br />--policy-name assume-app-role ^<br />--policy-document "{\"Version\": \"2012-10-17\", \"Statement\": [{\"Effect\": \"Allow\", \"Action\": \"sts:AssumeRole\", \"Resource\": \"arn:aws:iam::<AWS Account ID>:role/<sample-app-role-name>\"}]}"</pre>This policy is attached to the TVM role. It gives the code the capability to assume the application role, which has broader permissions to access the S3 bucket. | Cloud administrator | 
| Attach the managed Lambda policy. | Use one of the following AWS CLI commands to attach the `AWSLambdaBasicExecutionRole` IAM policy. Provide the `<sample-tvm-role-name>` value in the command. For macOS or Linux shells:<pre>aws iam attach-role-policy \<br />--role-name <sample-tvm-role-name> \<br />--policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole</pre>For the Windows command line:<pre>aws iam attach-role-policy ^<br />--role-name <sample-tvm-role-name> ^<br />--policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole</pre>This managed policy is attached to the TVM role to permit Lambda to send logs to Amazon CloudWatch. | Cloud administrator | 

### Create the IAM application role and policy
<a name="create-the-iam-application-role-and-policy"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the application role. | Use one of the following AWS CLI commands to create an IAM role. Provide the `<sample-app-role-name>`, `<AWS Account ID>`, and `<sample-tvm-role-name>` values in the command. For macOS or Linux shells:<pre>aws iam create-role \<br />--role-name <sample-app-role-name> \<br />--assume-role-policy-document '{<br />    "Version": "2012-10-17",<br />    "Statement": [<br />        {<br />            "Effect": "Allow",<br />            "Principal": {<br />                "AWS": "arn:aws:iam::<AWS Account ID>:role/<sample-tvm-role-name>"<br />            },<br />            "Action": "sts:AssumeRole"<br />        }<br />    ]}'</pre>For the Windows command line:<pre>aws iam create-role ^<br />--role-name <sample-app-role-name> ^<br />--assume-role-policy-document "{\"Version\": \"2012-10-17\", \"Statement\": [{\"Effect\": \"Allow\",\"Principal\": {\"AWS\": \"arn:aws:iam::<AWS Account ID>:role/<sample-tvm-role-name>\"},\"Action\": \"sts:AssumeRole\"}]}"</pre>The Lambda sample application assumes this role with a scoped policy to get tenant-based access to an S3 bucket. | Cloud administrator | 
| Create an inline application role policy. | Use one of the following AWS CLI commands to create an IAM policy. Provide the `<sample-app-role-name>` and `<sample-app-bucket-name>` values in the command. For macOS or Linux shells:<pre>aws iam put-role-policy \<br />--role-name <sample-app-role-name> \<br />--policy-name s3-bucket-access \<br />--policy-document '{<br />    "Version": "2012-10-17",<br />    "Statement": [<br />        {<br />            "Effect": "Allow", <br />            "Action": [<br />                "s3:PutObject", <br />                "s3:GetObject", <br />                "s3:DeleteObject"<br />            ], <br />            "Resource": "arn:aws:s3:::<sample-app-bucket-name>/*"<br />        }, <br />        {<br />            "Effect": "Allow", <br />            "Action": ["s3:ListBucket"], <br />            "Resource": "arn:aws:s3:::<sample-app-bucket-name>"<br />        }<br />    ]}'</pre>For the Windows command line:<pre>aws iam put-role-policy ^<br />--role-name <sample-app-role-name> ^<br />--policy-name s3-bucket-access ^<br />--policy-document "{\"Version\": \"2012-10-17\", \"Statement\": [{\"Effect\": \"Allow\", \"Action\": [\"s3:PutObject\", \"s3:GetObject\", \"s3:DeleteObject\"], \"Resource\": \"arn:aws:s3:::<sample-app-bucket-name>/*\"}, {\"Effect\": \"Allow\", \"Action\": [\"s3:ListBucket\"], \"Resource\": \"arn:aws:s3:::<sample-app-bucket-name>\"}]}"</pre>This policy is attached to the application role. It provides broad access to objects in the S3 bucket. When the sample application assumes the role, these permissions are scoped to a specific tenant with the TVM’s dynamically generated policy. | Cloud administrator | 

### Create the Lambda sample application with TVM
<a name="create-the-lam-sample-application-with-tvm"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Download the compiled source files. | Download the `s3UploadSample.jar` and `tvm-layer.zip` files, which are included as attachments. The source code used to create these artifacts and compilation instructions are provided in `token-vending-machine-sample-app.zip`. | Cloud administrator | 
| Create the Lambda layer. | Use one of the following AWS CLI commands to create a Lambda layer, which makes the TVM accessible to Lambda. If you aren’t running this command from the location where you downloaded `tvm-layer.zip`, provide the correct path to `tvm-layer.zip` in the `--zip-file` parameter. For macOS or Linux shells:<pre>aws lambda publish-layer-version \<br />--layer-name sample-token-vending-machine \<br />--compatible-runtimes java11 \<br />--zip-file fileb://tvm-layer.zip</pre>For the Windows command line:<pre>aws lambda publish-layer-version ^<br />--layer-name sample-token-vending-machine ^<br />--compatible-runtimes java11 ^<br />--zip-file fileb://tvm-layer.zip</pre>This command creates a Lambda layer that contains the reusable TVM library. | Cloud administrator, App developer | 
| Create the Lambda function. | Use the following AWS CLI command to create a Lambda function. Provide the `<sample-app-function-name>`, `<AWS Account ID>`, `<AWS Region>`, `<sample-tvm-role-name>`, `<sample-app-bucket-name>`, and `<sample-app-role-name>` values in the command. If you aren’t running this command from the location where you downloaded `s3UploadSample.jar`, provide the correct path to `s3UploadSample.jar` in the `--zip-file` parameter. <pre>aws lambda create-function \<br />--function-name <sample-app-function-name>  \<br />--timeout 30 \<br />--memory-size 256 \<br />--runtime java11 \<br />--role arn:aws:iam::<AWS Account ID>:role/<sample-tvm-role-name> \<br />--handler com.amazon.aws.s3UploadSample.App \<br />--zip-file fileb://s3UploadSample.jar \<br />--layers arn:aws:lambda:<AWS Region>:<AWS Account ID>:layer:sample-token-vending-machine:1 \<br />--environment "Variables={S3_BUCKET=<sample-app-bucket-name>,<br />ROLE=arn:aws:iam::<AWS Account ID>:role/<sample-app-role-name>}"</pre>For the Windows command line:<pre>aws lambda create-function ^<br />--function-name <sample-app-function-name>  ^<br />--timeout 30 ^<br />--memory-size 256 ^<br />--runtime java11 ^<br />--role arn:aws:iam::<AWS Account ID>:role/<sample-tvm-role-name> ^<br />--handler com.amazon.aws.s3UploadSample.App ^<br />--zip-file fileb://s3UploadSample.jar ^<br />--layers arn:aws:lambda:<AWS Region>:<AWS Account ID>:layer:sample-token-vending-machine:1 ^<br />--environment "Variables={S3_BUCKET=<sample-app-bucket-name>,ROLE=arn:aws:iam::<AWS Account ID>:role/<sample-app-role-name>}"</pre>This command creates a Lambda function with the sample application code and the TVM layer attached. It also sets two environment variables: `S3_BUCKET` and `ROLE`. The sample application uses these variables to determine the role to assume and the S3 bucket to upload JSON documents to. | Cloud administrator, App developer | 

### Test the sample application and TVM
<a name="test-the-sample-application-and-tvm"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Invoke the Lambda sample application. | Use one of the following AWS CLI commands to start the Lambda sample application with its expected payload. Provide the `<sample-app-function-name>` and `<sample-tenant-name>` values in the command.For macOS and Linux shells:<pre>aws lambda invoke \<br />--function <sample-app-function-name> \<br />--invocation-type RequestResponse \<br />--payload '{"tenant": "<sample-tenant-name>"}' \<br />--cli-binary-format raw-in-base64-out response.json</pre>For the Windows command line:<pre>aws lambda invoke ^<br />--function <sample-app-function-name> ^<br />--invocation-type RequestResponse ^<br />--payload "{\"tenant\": \"<sample-tenant-name>\"}" ^<br />--cli-binary-format raw-in-base64-out response.json</pre>This command calls the Lambda function and returns the result in a `response.json` document. On many Unix-based systems, you can change `response.json` to `/dev/stdout` to output the results directly to your shell without creating another file. Changing the `<sample-tenant-name>` value in subsequent invocations of this Lambda function alters the location of the JSON document and the permissions the token provides. | Cloud administrator, App developer | 
| View the S3 bucket to see created objects. | Browse to the S3 bucket (`<sample-app-bucket-name>`) that you created earlier. This bucket contains an S3 object prefix with the value of `<sample-tenant-name>`. Under that prefix, you will find a JSON document named with a UUID. Invoking the sample application multiple times adds more JSON documents. | Cloud administrator | 
| View the logs for the sample application in CloudWatch Logs. | View the logs that are associated with the Lambda function named `<sample-app-function-name>` in CloudWatch Logs. For instructions, see [Sending Lambda function logs to CloudWatch Logs](https://docs.aws.amazon.com/lambda/latest/dg/monitoring-cloudwatchlogs.html) in the Lambda documentation. You can view the tenant-scoped policy generated by the TVM in these logs. This tenant-scoped policy gives the sample application permissions to call the Amazon S3 **PutObject**, **GetObject**, **DeleteObject**, and **ListBucket** APIs, but only for the object prefix that’s associated with `<sample-tenant-name>`. If you change `<sample-tenant-name>` in subsequent invocations of the sample application, the TVM updates the scoped policy to correspond to the tenant provided in the invocation payload. This dynamically generated policy shows how tenant-scoped access can be maintained with a TVM in SaaS applications. The TVM functionality is provided in a Lambda layer so that it can be attached to other Lambda functions used by an application without having to replicate the code. For an illustration of the dynamically generated policy, see the [Additional information](#implement-saas-tenant-isolation-for-amazon-s3-by-using-an-aws-lambda-token-vending-machine-additional) section. | Cloud administrator | 

## Related resources
<a name="implement-saas-tenant-isolation-for-amazon-s3-by-using-an-aws-lambda-token-vending-machine-resources"></a>
+ [Isolating Tenants with Dynamically Generated IAM Policies](https://aws.amazon.com/blogs/apn/isolating-saas-tenants-with-dynamically-generated-iam-policies/) (blog post)
+ [Applying Dynamically Generated Isolation Policies in SaaS Environments](https://aws.amazon.com/blogs/apn/applying-dynamically-generated-isolation-policies-in-saas-environments/) (blog post)
+ [SaaS on AWS](https://aws.amazon.com/saas/)

## Additional information
<a name="implement-saas-tenant-isolation-for-amazon-s3-by-using-an-aws-lambda-token-vending-machine-additional"></a>

The following log shows the dynamically generated policy produced by the TVM code in this pattern. In this screenshot, the `<sample-app-bucket-name>` is `DOC-EXAMPLE-BUCKET` and the `<sample-tenant-name>` is `test-tenant-1`. The STS credentials returned by this scoped policy are unable to perform any actions on objects in the S3 bucket except for objects that are associated with the object key prefix `test-tenant-1`.

![\[Log showing a dynamically generated policy produced by the TVM code.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/97a34c8e-d04e-40b6-acbf-1baa176d22a9/images/d4776ebe-fb8f-41ac-b8c5-b4f97a821c8c.png)


## Attachments
<a name="attachments-97a34c8e-d04e-40b6-acbf-1baa176d22a9"></a>

To access additional content that is associated with this document, unzip the following file: [attachment.zip](samples/p-attach/97a34c8e-d04e-40b6-acbf-1baa176d22a9/attachments/attachment.zip)

# Implement the serverless saga pattern by using AWS Step Functions
<a name="implement-the-serverless-saga-pattern-by-using-aws-step-functions"></a>

*Tabby Ward, Joe Kern, and Rohan Mehta, Amazon Web Services*

## Summary
<a name="implement-the-serverless-saga-pattern-by-using-aws-step-functions-summary"></a>

In a microservices architecture, the main goal is to build decoupled and independent components to promote agility, flexibility, and faster time to market for your applications. As a result of decoupling, each microservice component has its own data persistence layer. In a distributed architecture, business transactions can span multiple microservices. Because these microservices cannot use a single atomicity, consistency, isolation, durability (ACID) transaction, you might end up with partial transactions. In this case, some control logic is needed to undo the transactions that have already been processed. The distributed saga pattern is typically used for this purpose. 

The saga pattern is a failure management pattern that helps establish consistency in distributed applications and coordinates transactions between multiple microservices to maintain data consistency. When you use the saga pattern, every service that performs a transaction publishes an event that triggers subsequent services to perform the next transaction in the chain. This continues until the last transaction in the chain is complete. If a business transaction fails, saga orchestrates a series of compensating transactions that undo the changes that were made by the preceding transactions.

This pattern demonstrates how to automate the setup and deployment of a sample application (which handles travel reservations) with serverless technologies such as AWS Step Functions, AWS Lambda, and Amazon DynamoDB. The sample application also uses Amazon API Gateway and Amazon Simple Notification Service (Amazon SNS) to implement a saga execution coordinator. The pattern can be deployed with an infrastructure as code (IaC) framework such as the AWS Cloud Development Kit (AWS CDK), the AWS Serverless Application Model (AWS SAM), or Terraform.

## Prerequisites and limitations
<a name="implement-the-serverless-saga-pattern-by-using-aws-step-functions-prereqs"></a>

**Prerequisites**
+ An active AWS account.
+ Permissions to create an AWS CloudFormation stack. For more information, see [Controlling access](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-iam-template.html) in the CloudFormation documentation.
+ IaC framework of your choice (AWS CDK, AWS SAM, or Terraform) configured with your AWS account so that you can use the framework CLI to deploy the application.
+ Node.js, used to build the application and run it locally.
+ A code editor of your choice (such as Visual Studio Code, Sublime, or Atom).

**Product versions**
+ [Node.js version 14](https://nodejs.org/en/download/)
+ [AWS CDK version 2.37.1](https://docs.aws.amazon.com/cdk/v2/guide/getting_started.html#getting_started_install)
+ [AWS SAM version 1.71.0](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/install-sam-cli.html)
+ [Terraform version 1.3.7](https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli)

**Limitations**

Event sourcing is a natural way to implement the saga orchestration pattern in a microservices architecture where all components are loosely coupled and don’t have direct knowledge of one another. If your transaction involves a small number of steps (three to five), the saga pattern might be a great fit. However, complexity increases with the number of microservices and the number of steps.

Testing and debugging can become difficult when you’re using this design, because you have to have all services running in order to simulate the transaction pattern.

## Architecture
<a name="implement-the-serverless-saga-pattern-by-using-aws-step-functions-architecture"></a>

**Target architecture**

The proposed architecture uses AWS Step Functions to build a saga pattern to book flights, book car rentals, and process payments for a vacation.

The following workflow diagram illustrates the typical flow of the travel reservation system. The workflow consists of reserving air travel ("ReserveFlight"), reserving a car ("ReserveCarRental"), processing payments ("ProcessPayment"), confirming flight reservations ("ConfirmFlight"), and confirming car rentals ("ConfirmCarRental") followed by a success notification when these steps are complete. However, if the system encounters any errors in running any of these transactions, it starts to fail backward. For example, an error with payment processing ("ProcessPayment") triggers a refund ("RefundPayment"), which then triggers a cancellation of the rental car and flight ("CancelRentalReservation" and "CancelFlightReservation"), which ends the entire transaction with a failure message.

This pattern deploys separate Lambda functions for each task that is highlighted in the diagram as well as three DynamoDB tables for flights, car rentals, and payments. Each Lambda function creates, updates, or deletes the rows in the respective DynamoDB tables, depending on whether a transaction is confirmed or rolled back. The pattern uses Amazon SNS to send text (SMS) messages to subscribers, notifying them of failed or successful transactions. 
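The fail-backward behavior described above can be sketched independently of Step Functions. The following minimal Python sketch is an illustration only (the sample application expresses this logic as a Step Functions state machine, and the confirmation steps are omitted for brevity); it pairs each forward transaction with its compensating transaction and, on failure, compensates newest-first:

```python
# Each saga step pairs a forward transaction with its compensating transaction.
SAGA_STEPS = [
    ("ReserveFlight",    "CancelFlightReservation"),
    ("ReserveCarRental", "CancelRentalReservation"),
    ("ProcessPayment",   "RefundPayment"),
]

def run_saga(execute, steps=SAGA_STEPS):
    """Run forward transactions in order; on failure, fail backward by
    running the compensating transactions, newest first."""
    completed = []
    for step, compensation in steps:
        try:
            execute(step)
            completed.append((step, compensation))
        except Exception:
            # Fail backward: a ProcessPayment failure runs RefundPayment,
            # then CancelRentalReservation, then CancelFlightReservation.
            for _, comp in [(step, compensation)] + completed[::-1]:
                execute(comp)
            return "failure"
    return "success"
```

In the real state machine, the same pairing is expressed with `Catch` rules on each task state that route execution into the compensation chain.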

![\[Workflow for a travel reservation system based on the saga pattern.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/fec0789c-d9b1-4d80-b179-dd9a7ecbec07/images/daad3e8e-6e6b-41c2-95c1-ca79d53ead64.png)


 

**Automation and scale**

You can create the configuration for this architecture by using one of the IaC frameworks. Use one of the following links for your preferred IaC.
+ [Deploy with AWS CDK](https://serverlessland.com/workflows/saga-pattern-cdk)
+ [Deploy with AWS SAM](https://serverlessland.com/workflows/saga-pattern-sam)
+ [Deploy with Terraform](https://serverlessland.com/workflows/saga-pattern-tf)

## Tools
<a name="implement-the-serverless-saga-pattern-by-using-aws-step-functions-tools"></a>

**AWS services**
+ [AWS Step Functions](https://aws.amazon.com/step-functions/) is a serverless orchestration service that lets you combine AWS Lambda functions and other AWS services to build business-critical applications. Through the Step Functions graphical console, you see your application’s workflow as a series of event-driven steps.
+ [Amazon DynamoDB](https://aws.amazon.com/dynamodb/) is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. You can use DynamoDB to create a database table that can store and retrieve any amount of data, and serve any level of request traffic.
+ [AWS Lambda](https://aws.amazon.com/lambda/) is a compute service that lets you run code without provisioning or managing servers. Lambda runs your code only when needed and scales automatically, from a few requests per day to thousands per second.
+ [Amazon API Gateway](https://aws.amazon.com/api-gateway/) is an AWS service for creating, publishing, maintaining, monitoring, and securing REST, HTTP, and WebSocket APIs at any scale.
+ [Amazon Simple Notification Service (Amazon SNS)](https://aws.amazon.com/sns/) is a managed service that provides message delivery from publishers to subscribers.
+ [AWS Cloud Development Kit (AWS CDK)](https://aws.amazon.com/cdk/) is a software development framework for defining your cloud application resources by using familiar programming languages such as TypeScript, JavaScript, Python, Java, and C#/.NET.
+ [AWS Serverless Application Model (AWS SAM)](https://aws.amazon.com/serverless/sam/) is an open-source framework for building serverless applications. It provides shorthand syntax to express functions, APIs, databases, and event source mappings. 

**Code**

The code for a sample application that demonstrates the saga pattern, including the IaC template (AWS CDK, AWS SAM, or Terraform), the Lambda functions, and the DynamoDB tables, is available from the following links. Follow the instructions in the first epic to install the application.
+ [Deploy with AWS CDK](https://serverlessland.com/workflows/saga-pattern-cdk)
+ [Deploy with AWS SAM](https://serverlessland.com/workflows/saga-pattern-sam)
+ [Deploy with Terraform](https://serverlessland.com/workflows/saga-pattern-tf)

## Epics
<a name="implement-the-serverless-saga-pattern-by-using-aws-step-functions-epics"></a>

### Install packages, compile, and build
<a name="install-packages-compile-and-build"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Install the NPM packages. | Create a new directory, navigate to that directory in a terminal, and clone the GitHub repository of your choice from the *Code* section earlier in this pattern. In the root folder that has the `package.json` file, run the following command to download and install all Node Package Manager (NPM) packages:<pre>npm install</pre> | Developer, Cloud architect | 
| Compile scripts. | In the root folder, run the following command to instruct the TypeScript transpiler to create all necessary JavaScript files:<pre>npm run build</pre> | Developer, Cloud architect | 
| Watch for changes and recompile. | In the root folder, run the following command in a separate terminal window to watch for code changes, and compile the code when it detects a change:<pre>npm run watch</pre> | Developer, Cloud architect | 
| Run unit tests (AWS CDK only).  | If you’re using the AWS CDK, in the root folder, run the following command to perform the Jest unit tests:<pre>npm run test</pre> | Developer, Cloud architect | 

### Deploy resources to the target AWS account
<a name="deploy-resources-to-the-target-aws-account"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Deploy the demo stack to AWS. | The application is AWS Region-agnostic. If you use a profile, you must declare the Region explicitly in either the [AWS Command Line Interface (AWS CLI) profile](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html) or through [AWS CLI environment variables](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-envvars.html). In the root folder, run the following command to create a deployment assembly and to deploy it to the default AWS account and Region. AWS CDK:<pre>cdk bootstrap<br />cdk deploy</pre>AWS SAM:<pre>sam build<br />sam deploy --guided</pre>Terraform:<pre>terraform init<br />terraform apply</pre>This step might take several minutes to complete. The command uses the default credentials that were configured for the AWS CLI. Note the API Gateway URL that is displayed on the console after deployment is complete. You will need this URL to test the saga execution flow. | Developer, Cloud architect | 
| Compare the deployed stack with the current state. | After you make changes to the source code, run the following command in the root folder to compare the deployed stack with the current state. AWS CDK:<pre>cdk diff</pre>AWS SAM:<pre>sam deploy</pre>Terraform:<pre>terraform plan</pre> | Developer, Cloud architect | 

### Test the execution flow
<a name="test-the-execution-flow"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Test the saga execution flow. | Navigate to the API Gateway URL that you noted in the earlier step, when you deployed the stack. This URL triggers the state machine to start. For more information about how to manipulate the flow of the state machine by passing different URL parameters, see the [Additional information](#implement-the-serverless-saga-pattern-by-using-aws-step-functions-additional) section. To view the results, sign in to the AWS Management Console and navigate to the Step Functions console. Here, you can see every step of the saga state machine. You can also view the DynamoDB tables to see the records inserted, updated, or deleted. If you refresh the screen frequently, you can watch the transaction status change from `pending` to `confirmed`. You can subscribe to the SNS topic by updating the code in the `stateMachine.ts` file with your cell phone number to receive SMS messages upon successful or failed reservations. For more information, see *Amazon SNS* in the [Additional information](#implement-the-serverless-saga-pattern-by-using-aws-step-functions-additional) section. | Developer, Cloud architect | 

### Clean up
<a name="clean-up"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clean up resources. | To clean up the resources deployed for this application, use one of the following commands. AWS CDK:<pre>cdk destroy</pre>AWS SAM:<pre>sam delete</pre>Terraform:<pre>terraform destroy</pre> | App developer, Cloud architect | 

## Related resources
<a name="implement-the-serverless-saga-pattern-by-using-aws-step-functions-resources"></a>

**Technical papers**
+ [Implementing Microservices on AWS](https://docs.aws.amazon.com/pdfs/whitepapers/latest/microservices-on-aws/microservices-on-aws.pdf)
+ [Serverless Application Lens](https://docs.aws.amazon.com/wellarchitected/latest/serverless-applications-lens/welcome.html)

**AWS service documentation**
+ [Getting started with the AWS CDK](https://docs.aws.amazon.com/cdk/latest/guide/getting_started.html)
+ [Getting started with AWS SAM](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-getting-started.html)
+ [AWS Step Functions](https://docs.aws.amazon.com/step-functions/)
+ [Amazon DynamoDB](https://docs.aws.amazon.com/dynamodb/)
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/)
+ [Amazon API Gateway](https://docs.aws.amazon.com/apigateway/)
+ [Amazon SNS](https://docs.aws.amazon.com/sns/)

**Tutorials**
+ [Hands-on Workshops for Serverless Computing](https://aws.amazon.com/serverless-workshops/)

## Additional information
<a name="implement-the-serverless-saga-pattern-by-using-aws-step-functions-additional"></a>

**Code**

For testing purposes, this pattern deploys API Gateway and a test Lambda function that triggers the Step Functions state machine. With Step Functions, you can control the functionality of the travel reservation system by passing a `run_type` parameter to mimic failures in "ReserveFlight," "ReserveCarRental," "ProcessPayment," "ConfirmFlight," and "ConfirmCarRental."

The `saga` Lambda function (`sagaLambda.ts`) takes input from the query parameters in the API Gateway URL, creates the following JSON object, and passes it to Step Functions for execution:

```
let input = {
"trip_id": tripID, //  value taken from query parameter, default is AWS request ID
"depart_city": "Detroit",
"depart_time": "2021-07-07T06:00:00.000Z",
"arrive_city": "Frankfurt",
"arrive_time": "2021-07-09T08:00:00.000Z",
"rental": "BMW",
"rental_from": "2021-07-09T00:00:00.000Z",
"rental_to": "2021-07-17T00:00:00.000Z",
"run_type": runType // value taken from query parameter, default is "success"
};
```
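Inside each task, the `run_type` check is a simple guard: when the value names that step, the handler raises an error, which forces the state machine down its compensation path. A minimal Python sketch of the idea (the sample implements the handlers in TypeScript; `StepFailure` and `reserve_flight` are illustrative names, not the sample's code):

```python
class StepFailure(Exception):
    """Raised to simulate a failed transaction for a given run_type."""

def reserve_flight(event: dict) -> dict:
    # Mimic a reservation failure when the caller requests it via run_type.
    if event.get("run_type") == "failFlightsReservation":
        raise StepFailure("ReserveFlight failed (simulated)")
    # Otherwise behave normally: record a pending reservation.
    return {"trip_id": event["trip_id"], "transaction_status": "pending"}
```

Each of the other steps carries the same guard against its own `run_type` value, which is what makes every branch of the state machine testable from a URL parameter.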

You can experiment with different flows of the Step Functions state machine by passing the following URL parameters:
+ **Successful Execution** ─ https://<api gateway url>
+ **Reserve Flight Fail** ─ https://<api gateway url>?**runType=failFlightsReservation**
+ **Confirm Flight Fail** ─ https://<api gateway url>?**runType=failFlightsConfirmation**
+ **Reserve Car Rental Fail** ─ https://<api gateway url>?**runType=failCarRentalReservation**
+ **Confirm Car Rental Fail** ─ https://<api gateway url>?**runType=failCarRentalConfirmation**
+ **Process Payment Fail** ─ https://<api gateway url>?**runType=failPayment**
+ **Pass a Trip ID** ─ https://<api gateway url>?**tripID=**<trip ID> (by default, the trip ID is the AWS request ID)

**IaC templates**

The linked repositories include IaC templates that you can use to create the entire sample travel reservation application.
+ [Deploy with AWS CDK](https://serverlessland.com/workflows/saga-pattern-cdk)
+ [Deploy with AWS SAM](https://serverlessland.com/workflows/saga-pattern-sam)
+ [Deploy with Terraform](https://serverlessland.com/workflows/saga-pattern-tf)

**DynamoDB tables**

Here are the data models for the flights, car rentals, and payments tables.

```
Flight Data Model:
 var params = {
      TableName: process.env.TABLE_NAME,
      Item: {
        'pk' : {S: event.trip_id},
        'sk' : {S: flightReservationID},
        'trip_id' : {S: event.trip_id},
        'id': {S: flightReservationID},
        'depart_city' : {S: event.depart_city},
        'depart_time': {S: event.depart_time},
        'arrive_city': {S: event.arrive_city},
        'arrive_time': {S: event.arrive_time},
        'transaction_status': {S: 'pending'}
      }
    };

Car Rental Data Model:
var params = {
      TableName: process.env.TABLE_NAME,
      Item: {
        'pk' : {S: event.trip_id},
        'sk' : {S: carRentalReservationID},
        'trip_id' : {S: event.trip_id},
        'id': {S: carRentalReservationID},
        'rental': {S: event.rental},
        'rental_from': {S: event.rental_from},
        'rental_to': {S: event.rental_to},
        'transaction_status': {S: 'pending'}
      }
    };

Payment Data Model:
var params = {
      TableName: process.env.TABLE_NAME,
      Item: {
        'pk' : {S: event.trip_id},
        'sk' : {S: paymentID},
        'trip_id' : {S: event.trip_id},
        'id': {S: paymentID},
        'amount': {S: "750.00"}, // hard coded for simplicity as implementing any monetary transaction functionality is beyond the scope of this pattern
        'currency': {S: "USD"},
        'transaction_status': {S: "confirmed"}
      }
    };
```
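
Because all three tables share the same key schema (`pk` holds the trip ID, `sk` holds the reservation or payment ID), a single DynamoDB `Query` can retrieve every record that belongs to a trip. The following sketch builds such a request object; it is a hypothetical helper that only constructs the low-level parameters and makes no SDK call.

```typescript
// Hypothetical sketch: builds low-level DynamoDB Query parameters that
// fetch all items for one trip. No SDK call is made here.
function buildTripQuery(tableName: string, tripId: string) {
  return {
    TableName: tableName,
    KeyConditionExpression: "pk = :tripId",
    ExpressionAttributeValues: { ":tripId": { S: tripId } },
  };
}

// Example: query the Flights table for every record under one trip ID.
const tripQuery = buildTripQuery("Flights", "example-trip-id");
```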

**Lambda functions**

The following Lambda functions support the state machine flow and execution in Step Functions:
+ **Reserve Flights**: Inserts a record into the DynamoDB Flights table with a `transaction_status` of `pending`, to book a flight.
+ **Confirm Flight**: Updates the record in the DynamoDB Flights table, to set `transaction_status` to `confirmed`, to confirm the flight.
+ **Cancel Flights Reservation**: Deletes the record from the DynamoDB Flights table, to cancel the pending flight.
+ **Reserve Car Rentals**: Inserts a record into the DynamoDB CarRentals table with a `transaction_status` of `pending`, to book a car rental.
+ **Confirm Car Rentals**: Updates the record in the DynamoDB CarRentals table, to set `transaction_status` to `confirmed`, to confirm the car rental.
+ **Cancel Car Rentals Reservation:** Deletes the record from the DynamoDB CarRentals table, to cancel the pending car rental.
+ **Process Payment**: Inserts a record into the DynamoDB Payment table for the payment.
+ **Cancel Payment**: Deletes the record from the DynamoDB Payments table for the payment.

**Amazon SNS**

The sample application creates the following topic and subscription for sending SMS messages and notifying the customer about successful or failed reservations. If you want to receive text messages while testing the sample application, update the SMS subscription with your valid phone number in the state machine definition file.

AWS CDK snippet (add the phone number in the second line of the following code):

```
const topic = new sns.Topic(this, 'Topic');
topic.addSubscription(new subscriptions.SmsSubscription('+11111111111'));

const snsNotificationFailure = new tasks.SnsPublish(this, 'SendingSMSFailure', {
  topic: topic,
  integrationPattern: sfn.IntegrationPattern.REQUEST_RESPONSE,
  message: sfn.TaskInput.fromText('Your Travel Reservation Failed'),
});

const snsNotificationSuccess = new tasks.SnsPublish(this, 'SendingSMSSuccess', {
  topic: topic,
  integrationPattern: sfn.IntegrationPattern.REQUEST_RESPONSE,
  message: sfn.TaskInput.fromText('Your Travel Reservation is Successful'),
});
```

AWS SAM snippet (replace the `+11111111111` strings with your valid phone number):

```
  StateMachineTopic11111111111:
    Type: 'AWS::SNS::Subscription'
    Properties:
      Protocol: sms
      TopicArn:
        Ref: StateMachineTopic
      Endpoint: '+11111111111'
    Metadata:
      'aws:sam:path': SamServerlessSagaStack/StateMachine/Topic/+11111111111/Resource
```

Terraform snippet (replace the `+11111111111` string with your valid phone number):

```
resource "aws_sns_topic_subscription" "sms-target" {
  topic_arn = aws_sns_topic.topic.arn
  protocol  = "sms"
  endpoint  = "+11111111111"
}
```

**Successful reservations**

The following flow illustrates a successful reservation with "ReserveFlight," "ReserveCarRental," and "ProcessPayment" followed by "ConfirmFlight" and "ConfirmCarRental." The customer is notified about the successful booking through SMS messages that are sent to the subscriber of the SNS topic.

![\[Example of a successful reservation implemented by Step Functions by using the saga pattern.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/fec0789c-d9b1-4d80-b179-dd9a7ecbec07/images/f58c894e-7721-4bc7-8f7d-29f23faa5dc1.png)


**Failed reservations**

This flow is an example of failure in the saga pattern. If "ProcessPayment" fails after the flights and car rentals have been booked, the completed steps are canceled in reverse order. The reservations are released, and the customer is notified of the failure through SMS messages that are sent to the subscriber of the SNS topic.

![\[Example of a failed reservation implemented by Step Functions by using the saga pattern.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/fec0789c-d9b1-4d80-b179-dd9a7ecbec07/images/7c64d326-be27-42c3-b03f-d677efedb9a7.png)
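
The cancel-in-reverse behavior is the heart of the saga pattern. The following TypeScript sketch is illustrative only; in the sample application the ordering is defined by the Step Functions state machine, not by application code.

```typescript
// Illustrative mapping of each saga step to its compensating action.
// In the sample application this ordering lives in the Step Functions
// state machine definition, not in application code.
const compensations: Record<string, string> = {
  ReserveFlight: "CancelFlightsReservation",
  ReserveCarRental: "CancelCarRentalsReservation",
  ProcessPayment: "CancelPayment",
};

// Compensations run in reverse order of the steps that completed.
function rollback(completedSteps: string[]): string[] {
  return [...completedSteps].reverse().map((step) => compensations[step]);
}

// If ProcessPayment fails after both reservations succeeded:
const toRun = rollback(["ReserveFlight", "ReserveCarRental"]);
// toRun: ["CancelCarRentalsReservation", "CancelFlightsReservation"]
```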


# Manage on-premises container applications by setting up Amazon ECS Anywhere with the AWS CDK
<a name="manage-on-premises-container-applications-by-setting-up-amazon-ecs-anywhere-with-the-aws-cdk"></a>

*Dr. Rahul Sharad Gaikwad, Amazon Web Services*

## Summary
<a name="manage-on-premises-container-applications-by-setting-up-amazon-ecs-anywhere-with-the-aws-cdk-summary"></a>

[Amazon ECS Anywhere](https://aws.amazon.com/ecs/anywhere/) is an extension of Amazon Elastic Container Service (Amazon ECS). You can use ECS Anywhere to deploy native Amazon ECS tasks in an on-premises or customer-managed environment. This feature helps reduce costs and avoid complex local container orchestration and operations. You can use ECS Anywhere to deploy and run container applications in both on-premises and cloud environments. It removes the need for your team to learn multiple domains and skill sets, or to manage complex software on their own.

This pattern demonstrates the steps to set up ECS Anywhere by using [AWS Cloud Development Kit (AWS CDK)](https://aws.amazon.com/cdk/) stacks.

## Prerequisites and limitations
<a name="manage-on-premises-container-applications-by-setting-up-amazon-ecs-anywhere-with-the-aws-cdk-prereqs"></a>

**Prerequisites**
+ An active AWS account.
+ AWS Command Line Interface (AWS CLI), installed and configured. (See [Installing, updating, and uninstalling the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html) in the AWS CLI documentation.) 
+ AWS CDK Toolkit, installed and configured. (See [AWS CDK Toolkit](https://docs.aws.amazon.com/cdk/v2/guide/cli.html) in the AWS CDK documentation, and follow the instructions to install version 2 globally.)
+ Node package manager (npm), installed and configured for the AWS CDK in TypeScript. (See [Downloading and installing Node.js and npm](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) in the npm documentation.)

**Limitations**
+ For limitations and considerations, see [External instances (Amazon ECS Anywhere)](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-anywhere.html#ecs-anywhere-considerations) in the Amazon ECS documentation.

**Product versions**
+ AWS CDK Toolkit version 2
+ npm version 7.20.3 or later
+ Node.js version 16.6.1 or later

## Architecture
<a name="manage-on-premises-container-applications-by-setting-up-amazon-ecs-anywhere-with-the-aws-cdk-architecture"></a>

**Target technology stack**
+ AWS CloudFormation
+ AWS CDK
+ Amazon ECS Anywhere
+ AWS Identity and Access Management (IAM)

**Target architecture**

The following diagram illustrates the high-level system architecture of an ECS Anywhere setup that uses the AWS CDK with TypeScript, as implemented by this pattern.

1. When you deploy the AWS CDK stack, it creates a CloudFormation stack on AWS.

1. The CloudFormation stack provisions an Amazon ECS cluster and related AWS resources.

1. To register an external instance with an Amazon ECS cluster, you must install AWS Systems Manager Agent (SSM Agent) on your virtual machine (VM) and register the VM as an AWS Systems Manager managed instance. 

1. You must also install the Amazon ECS container agent and Docker on your VM to register it as an external instance with the Amazon ECS cluster.

1. After the VM is registered and configured as an external instance, the Amazon ECS cluster can run multiple containers on it.

![\[ECS Anywhere setup using the AWS CDK with TypeScript.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/3ed63c00-40e7-4831-bb9d-63049c3490aa/images/ff7dc774-830d-4b9f-8262-7314afe7a033.png)
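
As a rough idea of what the deployed stack provisions (steps 1 and 2 above), a minimal AWS CDK (TypeScript) sketch might look like the following. This is an assumption-laden outline, not the repository's actual stack definition: the construct IDs and attached policies are illustrative, although the cluster name and role name match the commands used later in this pattern.

```typescript
import { Stack, StackProps } from "aws-cdk-lib";
import * as ecs from "aws-cdk-lib/aws-ecs";
import * as iam from "aws-cdk-lib/aws-iam";
import { Construct } from "constructs";

// Hypothetical sketch of the resources that EcsAnywhereStack provisions;
// see the linked GitHub repository for the actual stack definition.
export class EcsAnywhereStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Cluster that the on-premises VM registers with as an external instance.
    new ecs.Cluster(this, "Cluster", { clusterName: "test-ecs-anywhere" });

    // Role assumed by the on-premises VM through the SSM activation.
    new iam.Role(this, "EcsAnywhereInstanceRole", {
      roleName: "EcsAnywhereInstanceRole",
      assumedBy: new iam.ServicePrincipal("ssm.amazonaws.com"),
      managedPolicies: [
        iam.ManagedPolicy.fromAwsManagedPolicyName("AmazonSSMManagedInstanceCore"),
        iam.ManagedPolicy.fromAwsManagedPolicyName(
          "service-role/AmazonEC2ContainerServiceforEC2Role"
        ),
      ],
    });
  }
}
```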


 

**Automation and scale**

The [GitHub repository](https://github.com/aws-samples/amazon-ecs-anywhere-cdk-samples/) that is provided with this pattern uses the AWS CDK as an infrastructure as code (IaC) tool to create the configuration for this architecture. AWS CDK helps you orchestrate resources and set up ECS Anywhere.

## Tools
<a name="manage-on-premises-container-applications-by-setting-up-amazon-ecs-anywhere-with-the-aws-cdk-tools"></a>
+ [AWS Cloud Development Kit (AWS CDK)](https://docs.aws.amazon.com/cdk/latest/guide/home.html) is a software development framework that helps you define and provision AWS Cloud infrastructure in code.
+ [AWS Command Line Interface (AWS CLI)](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) is an open-source tool that helps you interact with AWS services through commands in your command-line shell.

**Code**

The source code for this pattern is available on GitHub, in the [Amazon ECS Anywhere CDK Samples](https://github.com/aws-samples/amazon-ecs-anywhere-cdk-samples) repository. To clone and use the repository, follow the instructions in the next section.

## Epics
<a name="manage-on-premises-container-applications-by-setting-up-amazon-ecs-anywhere-with-the-aws-cdk-epics"></a>

### Verify AWS CDK configuration
<a name="verify-aws-cdk-configuration"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Verify the AWS CDK version. | Verify the version of the AWS CDK Toolkit by running the following command:<pre>cdk --version</pre>This pattern requires AWS CDK version 2. If you have an earlier version of the AWS CDK, follow the instructions in the [AWS CDK documentation](https://docs.aws.amazon.com/cdk/v2/guide/cli.html) to update it. | DevOps engineer | 
| Set up AWS credentials. | To set up credentials, run the `aws configure` command and follow the prompts:<pre>$ aws configure<br />AWS Access Key ID [None]: <your-access-key-ID><br />AWS Secret Access Key [None]: <your-secret-access-key><br />Default region name [None]: <your-Region-name><br />Default output format [None]:</pre> | DevOps engineer | 

### Bootstrap the AWS CDK environment
<a name="bootstrap-the-aws-cdk-environment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clone the AWS CDK code repository. | Clone the GitHub code repository for this pattern by using the command:<pre>git clone https://github.com/aws-samples/amazon-ecs-anywhere-cdk-samples.git</pre> | DevOps engineer | 
| Bootstrap the environment. | To deploy the AWS CloudFormation template to the account and AWS Region that you want to use, run the following command:<pre>cdk bootstrap <account-number>/<Region></pre>For more information, see [Bootstrapping](https://docs.aws.amazon.com/cdk/latest/guide/bootstrapping.html) in the AWS CDK documentation. | DevOps engineer | 

### Build and deploy the project
<a name="build-and-deploy-the-project"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Install package dependencies and compile TypeScript files. | Install the package dependencies and compile the TypeScript files by running the following commands:<pre>$ cd amazon-ecs-anywhere-cdk-samples<br />$ npm install<br />$ npm fund</pre>These commands install all the packages from the sample repository. If you get any errors about missing packages, use one of the following commands:<pre>$ npm ci</pre>—or—<pre>$ npm install -g @aws-cdk/<package_name></pre>For more information, see [npm ci](https://docs.npmjs.com/cli/v7/commands/npm-ci) and [npm install](https://docs.npmjs.com/cli/v7/commands/npm-install) in the npm documentation. | DevOps engineer | 
| Build the project. | To build the project code, run the command:<pre>npm run build</pre>For more information about building and deploying the project, see [Your first AWS CDK app](https://docs.aws.amazon.com/cdk/latest/guide/hello_world.html#:~:text=the%20third%20parameter.-,Synthesize%20an%20AWS%20CloudFormation%20template,-Synthesize%20an%20AWS) in the AWS CDK documentation. | DevOps engineer | 
| Deploy the project. | To deploy the project code, run the command:<pre>cdk deploy</pre> | DevOps engineer | 
| Verify stack creation and output. | Open the AWS CloudFormation console at [https://console.aws.amazon.com/cloudformation](https://console.aws.amazon.com/cloudformation/), and choose the `EcsAnywhereStack` stack. The **Outputs** tab shows the commands to run on your external VM. | DevOps engineer | 

### Set up an on-premises machine
<a name="set-up-an-on-premises-machine"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Set up your VM by using Vagrant. | For demonstration purposes, you can use [HashiCorp Vagrant](https://www.vagrantup.com/) to create a VM. Vagrant is an open-source utility for building and maintaining portable virtual software development environments. Create a Vagrant VM by running the `vagrant up` command from the root directory where the Vagrantfile is located. For more information, see the [Vagrant documentation](https://www.vagrantup.com/docs/cli/up). | DevOps engineer | 
| Register your VM as an external instance. | 1. Log in to the Vagrant VM by using the `vagrant ssh` command. For more information, see the [Vagrant documentation](https://www.vagrantup.com/docs/cli/ssh).<br />2. Create an activation code and ID that you can use to register your VM with AWS Systems Manager and to activate your external instance. The output from this command includes `ActivationId` and `ActivationCode` values: <pre>aws ssm create-activation --iam-role EcsAnywhereInstanceRole \| tee ssm-activation.json</pre>3. Export the activation ID and code values:<pre>export ACTIVATION_ID=<activation-ID><br />export ACTIVATION_CODE=<activation-code></pre>4. Download the installation script to your on-premises server or VM:<pre>curl -o "ecs-anywhere-install.sh" "https://amazon-ecs-agent.s3.amazonaws.com/ecs-anywhere-install-latest.sh" && sudo chmod +x ecs-anywhere-install.sh</pre>5. Run the installation script on your on-premises server or VM:<pre>sudo ./ecs-anywhere-install.sh \<br />    --cluster test-ecs-anywhere \<br />    --activation-id $ACTIVATION_ID \<br />    --activation-code $ACTIVATION_CODE \<br />    --region <Region></pre>For more information about setting up and registering your VM, see [Registering an external instance to a cluster](https://docs.amazonaws.cn/en_us/AmazonECS/latest/developerguide/ecs-anywhere-registration.html) in the Amazon ECS documentation. | DevOps engineer | 
| Verify the status of ECS Anywhere and the external VM. | To verify whether your virtual box is connected to the Amazon ECS control plane and running, use the following commands:<pre>aws ssm describe-instance-information<br />aws ecs list-container-instances --cluster $CLUSTER_NAME</pre> | DevOps engineer | 

### Clean up
<a name="clean-up"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clean up and delete resources. | After you walk through this pattern, you should remove the resources you created to avoid incurring any further charges. To clean up, run the command:<pre>cdk destroy</pre> | DevOps engineer | 

## Related resources
<a name="manage-on-premises-container-applications-by-setting-up-amazon-ecs-anywhere-with-the-aws-cdk-resources"></a>
+ [Amazon ECS Anywhere Documentation](https://aws.amazon.com/ecs/anywhere/) 
+ [Amazon ECS Anywhere Demo](https://www.youtube.com/watch?v=-eud6yUXsJM)
+ [Amazon ECS Anywhere Workshop Samples](https://github.com/aws-samples/aws-ecs-anywhere-workshop-samples)

# Modernize ASP.NET Web Forms applications on AWS
<a name="modernize-asp-net-web-forms-applications-on-aws"></a>

*Vijai Anand Ramalingam and Sreelaxmi Pai, Amazon Web Services*

## Summary
<a name="modernize-asp-net-web-forms-applications-on-aws-summary"></a>

This pattern describes the steps for modernizing a legacy, monolithic ASP.NET Web Forms application by porting it to ASP.NET Core on AWS.

Porting ASP.NET Web Forms applications to ASP.NET Core helps you take advantage of the performance, cost savings, and robust ecosystem of Linux. However, it can be a significant manual effort. In this pattern, the legacy application is modernized incrementally by using a phased approach, and then containerized in the AWS Cloud.

Consider a legacy, monolithic shopping cart application. Let’s assume that it was created as an ASP.NET Web Forms application and consists of .aspx pages, each with a code-behind (`.aspx.cs`) file. The modernization process consists of these steps:

1. Break the monolith into microservices by using the appropriate decomposition patterns. For more information, see the guide [Decomposing monoliths into microservices](https://docs.aws.amazon.com/prescriptive-guidance/latest/modernization-decomposing-monoliths/) on the AWS Prescriptive Guidance website.

1. Port your legacy ASP.NET Web Forms (.NET Framework) application to ASP.NET Core in .NET 5 or later. In this pattern, you use Porting Assistant for .NET to scan your ASP.NET Web Forms application and identify incompatibilities with ASP.NET Core. This reduces the manual porting effort.

1. Redevelop the Web Forms UI layer by using React. This pattern doesn’t cover UI redevelopment. For instructions, see [Create a New React App](https://reactjs.org/docs/create-a-new-react-app.html) in the React documentation.

1. Redevelop the Web Forms code-behind file (business interface) as an ASP.NET Core web API. This pattern uses NDepend reports to help identify required files and dependencies.

1. Upgrade shared/common projects, such as Business Logic and Data Access, in your legacy application to .NET 5 or later by using Porting Assistant for .NET. 

1. Add AWS services to complement your application. For example, you can use [Amazon CloudWatch Logs](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html) to monitor, store, and access your application’s logs, and [AWS Systems Manager](https://aws.amazon.com/systems-manager/) to store your application settings.

1. Containerize the modernized ASP.NET Core application. This pattern creates a Docker file that targets Linux in Visual Studio and uses Docker Desktop to test it locally. This step assumes that your legacy application is already running on an on-premises or Amazon Elastic Compute Cloud (Amazon EC2) Windows instance. For more information, see the pattern [Run an ASP.NET Core web API Docker container on an Amazon EC2 Linux instance](https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/run-an-asp-net-core-web-api-docker-container-on-an-amazon-ec2-linux-instance.html).

1. Deploy the modernized ASP.NET core application to Amazon Elastic Container Service (Amazon ECS). This pattern doesn’t cover the deployment step. For instructions, see the [Amazon ECS Workshop](https://ecsworkshop.com/).

**Note**  
This pattern doesn’t cover UI development, database modernization, or container deployment steps.

## Prerequisites and limitations
<a name="modernize-asp-net-web-forms-applications-on-aws-prereqs"></a>

**Prerequisites**
+ [Visual Studio](https://visualstudio.microsoft.com/downloads/) or [Visual Studio Code](https://code.visualstudio.com/download), downloaded and installed.
+ Access to an AWS account using the AWS Management Console and the AWS Command Line Interface (AWS CLI) version 2. (See the [instructions for configuring the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html).)
+ The AWS Toolkit for Visual Studio (see [setup instructions](https://docs.aws.amazon.com/toolkit-for-visual-studio/latest/user-guide/setup.html)).
+ Docker Desktop, [downloaded](https://www.docker.com/products/docker-desktop) and installed.
+ .NET SDK, [downloaded](https://download.visualstudio.microsoft.com/download/pr/4263dc3b-dc67-4f11-8d46-cc0ae86a232e/66782bbd04c53651f730b2e30a873f18/dotnet-sdk-5.0.203-win-x64.exe) and installed.
+ NDepend tool, [downloaded](https://www.ndepend.com/download) and installed. To install the NDepend extension for Visual Studio, run `NDepend.VisualStudioExtension.Installer` ([see instructions](https://www.ndepend.com/docs/getting-started-with-ndepend#Part1)). You can select Visual Studio 2019 or 2022, depending on your requirements. 
+ Porting Assistant for .NET, [downloaded](https://aws.amazon.com/porting-assistant-dotnet/) and installed.

## Architecture
<a name="modernize-asp-net-web-forms-applications-on-aws-architecture"></a>

**Modernizing the shopping cart application**

The following diagram illustrates the modernization process for a legacy ASP.NET shopping cart application.

![\[Modernizing a legacy shopping cart application\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/36cda8e6-f2cb-4f1a-b37f-fa3045cc5ba1/images/4367e259-9bb3-4eb6-a54d-1c1e2dece7d4.png)


**Target architecture**

The following diagram illustrates the architecture of the modernized shopping cart application on AWS. ASP.NET Core web APIs are deployed to an Amazon ECS cluster. Logging and configuration services are provided by Amazon CloudWatch Logs and AWS Systems Manager.

![\[Target architecture for ASP.NET Web Forms application on AWS\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/36cda8e6-f2cb-4f1a-b37f-fa3045cc5ba1/images/ed6d65ec-0dc9-43ab-ac07-1f172e089399.png)


## Tools
<a name="modernize-asp-net-web-forms-applications-on-aws-tools"></a>

**AWS services**
+ [Amazon ECS](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html) – Amazon Elastic Container Service (Amazon ECS) is a highly scalable, fast container management service for running, stopping, and managing containers on a cluster. You can run your tasks and services on a serverless infrastructure that is managed by AWS Fargate. Alternatively, for more control over your infrastructure, you can run your tasks and services on a cluster of EC2 instances that you manage.
+ [Amazon CloudWatch Logs](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html) – Amazon CloudWatch Logs centralizes the logs from all your systems, applications, and AWS services that you use. You can view and monitor the logs, search them for specific error codes or patterns, filter them based on specific fields, or archive them securely for future analysis.
+ [AWS Systems Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/what-is-systems-manager.html) ─ AWS Systems Manager is an AWS service that you can use to view and control your infrastructure on AWS. Using the Systems Manager console, you can view operational data from multiple AWS services and automate operational tasks across your AWS resources. Systems Manager helps you maintain security and compliance by scanning your managed instances and reporting (or taking corrective action) on any policy violations it detects.

**Tools**
+ [Visual Studio](https://visualstudio.microsoft.com/) or [Visual Studio Code](https://code.visualstudio.com/) – Tools for building .NET applications, web APIs, and other programs.
+ [AWS Toolkit for Visual Studio](https://docs.aws.amazon.com/toolkit-for-visual-studio/latest/user-guide/welcome.html) – An extension for Visual Studio that helps develop, debug, and deploy .NET applications that use AWS services.
+ [Docker Desktop](https://www.docker.com/products/docker-desktop) – A tool that simplifies building and deploying containerized applications.
+ [NDepend](https://www.ndepend.com/features/) – An analyzer that monitors .NET code for dependencies, quality issues, and code changes.
+ [Porting Assistant for .NET](https://aws.amazon.com/porting-assistant-dotnet/) – An analysis tool that scans .NET code to identify incompatibilities with .NET Core and to estimate the migration effort.

## Epics
<a name="modernize-asp-net-web-forms-applications-on-aws-epics"></a>

### Port your legacy application to .NET 5 or later version
<a name="port-your-legacy-application-to-net-5-or-later-version"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Upgrade your legacy .NET Framework application to .NET 5. | You can use Porting Assistant for .NET to convert your legacy ASP.NET Web Forms application to .NET 5 or later. Follow the instructions in the [Porting Assistant for .NET documentation](https://docs.aws.amazon.com/portingassistant/latest/userguide/porting-assistant-getting-started.html). | App developer | 
| Generate NDepend reports. | When you modernize your ASP.NET Web Forms application by decomposing it into microservices, you might not need all the .cs files from the legacy application. You can use NDepend to generate a report for any code-behind (.cs) file, to get all the callers and callees. This report helps you identify and use only the required files in your microservices. After you install NDepend (see the [Prerequisites](#modernize-asp-net-web-forms-applications-on-aws-prereqs) section), open the solution (.sln file) for your legacy application in Visual Studio and follow these steps:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-asp-net-web-forms-applications-on-aws.html)This process generates a report for the code-behind file that lists all callers and callees. For more information about the dependency graph, see the [NDepend documentation](https://www.ndepend.com/docs/visual-studio-dependency-graph). | App developer | 
| Create a new .NET 5 solution. | To create a new .NET 5 (or later) structure for your modernized ASP.NET Core web APIs:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-asp-net-web-forms-applications-on-aws.html)For more information about creating projects and solutions, see the [Visual Studio documentation](https://docs.microsoft.com/en-us/visualstudio/ide/creating-solutions-and-projects).As you build the solution and verify functionality, you might identify several additional files to be added to the solution, in addition to the files that NDepend identified. | App developer | 

### Update your application code
<a name="update-your-application-code"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Implement web APIs with ASP.NET Core. | Let’s assume that one of the microservices that you identified in your legacy monolith shopping cart application is *Products*. You created a new ASP.NET Core web API project for *Products* in the previous epic. In this step, you identify and modernize all the web forms (.aspx pages) that are related to *Products*. Let’s assume that *Products* consists of four web forms, as illustrated earlier in the [Architecture](#modernize-asp-net-web-forms-applications-on-aws-architecture) section:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-asp-net-web-forms-applications-on-aws.html)You should analyze each web form, identify all the requests that are sent to the database to perform some logic, and get responses. You can implement each request as a web API endpoint. Given its web forms, *Products* can have the following possible endpoints:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-asp-net-web-forms-applications-on-aws.html)As mentioned previously, you can also reuse all the other projects that you upgraded to .NET 5, including Business Logic, Data Access, and shared/common projects. | App developer | 
| Configure Amazon CloudWatch Logs. | You can use [Amazon CloudWatch Logs](http://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html) to monitor, store, and access your application’s logs. You can log data into Amazon CloudWatch Logs by using an AWS SDK. You can also integrate .NET applications with CloudWatch Logs by using popular .NET logging frameworks such as [NLog](https://www.nuget.org/packages/AWS.Logger.NLog/), [Log4Net](https://www.nuget.org/packages/AWS.Logger.Log4net/), and [ASP.NET Core logging framework](https://www.nuget.org/packages/AWS.Logger.AspNetCore/).For more information about this step, see the blog post [Amazon CloudWatch Logs and .NET Logging Frameworks](https://aws.amazon.com/blogs/developer/amazon-cloudwatch-logs-and-net-logging-frameworks/). | App developer | 
| Configure AWS Systems Manager Parameter Store. | You can use [AWS Systems Manager Parameter Store](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-paramstore.html) to store application settings such as connection strings separately from your application’s code. The NuGet package [Amazon.Extensions.Configuration.SystemsManager](https://www.nuget.org/packages/Amazon.Extensions.Configuration.SystemsManager/) simplifies how your application loads these settings from the AWS Systems Manager Parameter Store into the .NET Core configuration system. For more information about this step, see the blog post [.NET Core configuration provider for AWS Systems Manager](https://aws.amazon.com/blogs/developer/net-core-configuration-provider-for-aws-systems-manager/). | App developer | 

### Add authentication and authorization
<a name="add-authentication-and-authorization"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Use a shared cookie for authentication. | Modernizing a legacy monolith application is an iterative process and requires the monolith and its modernized version to co-exist. You can use a shared cookie to achieve seamless authentication between the two versions. The legacy ASP.NET application continues to validate user credentials and issues the cookie while the modernized ASP.NET Core application validates the cookie. For instructions and sample code, see the [sample GitHub project](https://github.com/aws-samples/dotnet-share-auth-cookie-between-monolith-and-modernized-apps). | App developer | 

### Build and run the container locally
<a name="build-and-run-the-container-locally"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a Docker image by using Visual Studio. | In this step, you create a Docker file by using the Visual Studio for .NET Core web API.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-asp-net-web-forms-applications-on-aws.html)Visual Studio creates a Docker file for your project. For a sample Docker file, see [Visual Studio Container Tools for Docker](https://docs.microsoft.com/en-us/visualstudio/containers/overview) on the Microsoft website. | App developer | 
| Build and run the container by using Docker Desktop. | Now you can build and run the container in Docker Desktop.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-asp-net-web-forms-applications-on-aws.html) | App developer | 

## Related resources
<a name="modernize-asp-net-web-forms-applications-on-aws-resources"></a>
+ [Run an ASP.NET Core web API Docker container on an Amazon EC2 Linux instance](https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/run-an-asp-net-core-web-api-docker-container-on-an-amazon-ec2-linux-instance.html) (AWS Prescriptive Guidance)
+ [Amazon ECS Workshop](https://ecsworkshop.com/)
+ [Perform ECS blue/green deployments through CodeDeploy using AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/blue-green.html) (AWS CloudFormation documentation)
+ [Getting started with NDepend](https://www.ndepend.com/docs/getting-started-with-ndepend) (NDepend documentation)
+ [Porting Assistant for .NET](https://aws.amazon.com/porting-assistant-dotnet/)

## Additional information
<a name="modernize-asp-net-web-forms-applications-on-aws-additional"></a>

The following tables provide examples of sample projects for a legacy shopping cart application and the equivalent projects in your modernized ASP.NET Core application.

**Legacy solution:**


| Project name | Project template | Target framework | 
| --- |--- |--- |
| Business Interface  | Class Library  | .NET Framework  | 
| BusinessLogic  | Class Library  | .NET Framework  | 
| WebApplication  | ASP.NET Framework Web Application  | .NET Framework  | 
| UnitTests  | NUnit Test Project  | .NET Framework  | 
| Shared ->Common  | Class Library  | .NET Framework  | 
| Shared ->Framework  | Class Library  | .NET Framework  | 

**New solution:**


| Project name | Project template | Target framework | 
| --- |--- |--- |
| BusinessLogic  | Class Library  | .NET 5.0  | 
| <WebAPI>  | ASP.NET Core Web API  | .NET 5.0  | 
| <WebAPI>.UnitTests  | NUnit 3 Test Project  | .NET 5.0  | 
| Shared ->Common  | Class Library  | .NET 5.0  | 
| Shared ->Framework  | Class Library  | .NET 5.0  | 

# Tenant onboarding in SaaS architecture for the silo model using C# and AWS CDK
<a name="tenant-onboarding-in-saas-architecture-for-the-silo-model-using-c-and-aws-cdk"></a>

*Tabby Ward, Susmitha Reddy Gankidi, and Vijai Anand Ramalingam, Amazon Web Services*

## Summary
<a name="tenant-onboarding-in-saas-architecture-for-the-silo-model-using-c-and-aws-cdk-summary"></a>

Software as a service (SaaS) applications can be built with a variety of different architectural models. The *silo model* refers to an architecture where tenants are provided dedicated resources.

SaaS applications rely on a frictionless model for introducing new tenants into their environment. This often requires orchestrating a number of components to successfully provision and configure all the elements needed to create a new tenant. In SaaS architecture, this process is referred to as tenant onboarding. Onboarding should be fully automated in every SaaS environment by using infrastructure as code in your onboarding process.

This pattern guides you through an example of creating a tenant and provisioning a basic infrastructure for the tenant on Amazon Web Services (AWS). The pattern uses C# and the AWS Cloud Development Kit (AWS CDK).

Because this pattern creates a billing alarm, we recommend deploying the stack in the US East (N. Virginia) Region (us-east-1). For more information, see the [AWS documentation](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/monitor_estimated_charges_with_cloudwatch.html).

## Prerequisites and limitations
<a name="tenant-onboarding-in-saas-architecture-for-the-silo-model-using-c-and-aws-cdk-prereqs"></a>

**Prerequisites**
+ An active [AWS account](https://aws.amazon.com/account/).
+ An AWS Identity and Access Management (IAM) principal with sufficient IAM access to create AWS resources for this pattern. For more information, see [IAM roles](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html).
+ [Install the AWS Command Line Interface (AWS CLI)](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) and [configure the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html) to perform the AWS CDK deployment.
+ [Visual Studio 2022](https://visualstudio.microsoft.com/downloads/) downloaded and installed or [Visual Studio Code](https://code.visualstudio.com/download) downloaded and installed.
+ [AWS Toolkit for Visual Studio](https://docs.aws.amazon.com/toolkit-for-visual-studio/latest/user-guide/setup.html) set up.
+ [.NET Core 3.1 or later](https://dotnet.microsoft.com/download/dotnet-core/3.1) (required for C# AWS CDK applications).
+ [Amazon.Lambda.Tools](https://github.com/aws/aws-extensions-for-dotnet-cli#aws-lambda-amazonlambdatools) installed.

**Limitations**
+ AWS CDK uses [AWS CloudFormation](https://aws.amazon.com/cloudformation/), so AWS CDK applications are subject to CloudFormation service quotas. For more information, see [AWS CloudFormation quotas](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cloudformation-limits.html). 
+ The tenant CloudFormation stack is created with a CloudFormation service role `infra-cloudformation-role` with wildcard characters on actions (`sns*` and `sqs*`) but with resources locked down to the `tenant-cluster` prefix. For a production use case, evaluate this setting and provide only required access to this service role. The `InfrastructureProvision` Lambda function also uses a wildcard character (`cloudformation*`) to provision the CloudFormation stack but with resources locked down to the `tenant-cluster` prefix.
+ This example's Docker build uses `--platform=linux/amd64` to force `linux/amd64`-based images. This ensures that the final image artifacts are suitable for Lambda, which uses the x86-64 architecture by default. If you need to change the target Lambda architecture, be sure to change both the Dockerfiles and the AWS CDK code. For more information, see the blog post [Migrating AWS Lambda functions to Arm-based AWS Graviton2 processors](https://aws.amazon.com/blogs/compute/migrating-aws-lambda-functions-to-arm-based-aws-graviton2-processors/).
+ The stack deletion process does not clean up the CloudWatch Logs resources (log groups and logs) generated by the stack. You must manually clean up the logs through the Amazon CloudWatch console or through the API.

This pattern is set up as an example. For production use, evaluate the following setups and make changes based on your business requirements:
+ The [Amazon Simple Storage Service (Amazon S3)](https://aws.amazon.com/s3/) bucket in this example does not have versioning enabled, for simplicity. Evaluate and update the setup as needed.
+ This example sets up [Amazon API Gateway](https://aws.amazon.com/api-gateway/) REST API endpoints without authentication, authorization, or throttling for simplicity. For production use, we recommend integrating the system with the business security infrastructure. Evaluate this setting and add required security settings as needed.
+ For this tenant infrastructure example, [Amazon Simple Notification Service (Amazon SNS)](https://aws.amazon.com/sns/) and [Amazon Simple Queue Service (Amazon SQS)](https://aws.amazon.com/sqs/) have only minimal setups. The [AWS Key Management Service (AWS KMS)](https://aws.amazon.com/kms/) key for each tenant is open to the [Amazon CloudWatch](https://aws.amazon.com/cloudwatch/) and Amazon SNS services in the account, based on the [AWS KMS key policy](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-key-management.html#compatibility-with-aws-services). This setup is only an example placeholder. Adjust it as needed based on your business use case.
+ The entire setup, which includes but isn’t limited to API endpoints and backend tenant provisioning and deletion by using AWS CloudFormation, covers only the basic happy path case. Evaluate and update the setup with the necessary retry logic, additional error handling logic, and security logic based on your business needs.
+ The example code is tested with up-to-date [cdk-nag](https://github.com/cdklabs/cdk-nag) to check for policies at the time of this writing. New policies might be enforced in the future. These new policies might require you to manually modify the stack based on the recommendations before the stack can be deployed. Review the existing code to ensure that it aligns with your business requirements.
+ The code relies on the AWS CDK to generate a random suffix instead of relying on static assigned physical names for most created resources. This setup is to ensure that these resources are unique and do not conflict with other stacks. For more information, see the [AWS CDK documentation](https://docs.aws.amazon.com/cdk/v2/guide/resources.html#resources_physical_names). Adjust this based on your business requirements.
+ This example code packages the .NET Lambda artifacts into Docker-based images and runs them with the Lambda-provided [container image runtime](https://docs.aws.amazon.com/lambda/latest/dg/csharp-image.html). The container image runtime has advantages in standard transfer and storage mechanisms (container registries) and more accurate local test environments (through the container image). You can switch the project to use [Lambda-provided .NET runtimes](https://docs.aws.amazon.com/lambda/latest/dg/lambda-csharp.html) to reduce the Docker image build time, but you will then need to set up transfer and storage mechanisms and ensure that the local setup matches the Lambda setup. Adjust the code to align with your business requirements.
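As an illustration of the platform pinning described above, a hypothetical Dockerfile fragment might look like the following. The base image tag, copy paths, and handler string here are assumptions for illustration; see the repository's actual Dockerfiles for the real values.

```dockerfile
# Pin the build platform so the resulting image matches Lambda's
# default x86-64 architecture, regardless of the host machine.
FROM --platform=linux/amd64 public.ecr.aws/lambda/dotnet:6

# Copy the published .NET artifacts into the Lambda task root
# (paths are illustrative placeholders).
COPY bin/Release/net6.0/publish/ ${LAMBDA_TASK_ROOT}/

# Handler string in assembly::type::method form (placeholder names).
CMD ["TenantOnboardingFunction::TenantOnboardingFunction.Function::FunctionHandler"]
```

If you switch the target to Graviton2 (arm64), the `--platform` value and the matching architecture setting in the AWS CDK code must change together.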

**Product versions**
+ AWS CDK version 2.45.0 or later
+ Visual Studio 2022

## Architecture
<a name="tenant-onboarding-in-saas-architecture-for-the-silo-model-using-c-and-aws-cdk-architecture"></a>

**Technology stack**
+ Amazon API Gateway
+ AWS CloudFormation
+ Amazon CloudWatch
+ Amazon DynamoDB
+ AWS Identity and Access Management (IAM)
+ AWS KMS
+ AWS Lambda
+ Amazon S3
+ Amazon SNS
+ Amazon SQS

**Architecture**

The following diagram shows the tenant stack creation flow. For more information about the control-plane and tenant technology stacks, see the *Additional information* section.

![\[Workflow to create a tenant and provision a basic infrastructure for the tenant on AWS.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/5baef800-fe39-4eb8-b11d-2c23eb3175fc/images/0b579484-b87c-4acb-8c60-8c33c18370e3.png)


**Tenant stack creation flow**

1. User sends a POST API request with a new tenant payload (tenant name, tenant description) in JSON to a REST API hosted by Amazon API Gateway. API Gateway processes the request and forwards it to the backend Lambda Tenant On-boarding function. In this example, there is no authorization or authentication. In a production setup, this API should be integrated with the SaaS infrastructure security system.

1. The Tenant On-boarding function verifies the request. Then it attempts to store the tenant record, which includes the tenant name, generated tenant universally unique identifier (UUID), and tenant description, into the Amazon DynamoDB Tenant On-boarding table. 

1. After DynamoDB stores the record, a DynamoDB stream initiates the downstream Lambda Tenant Infrastructure function.

1. The Tenant Infrastructure Lambda function acts based on the received DynamoDB stream record. If the stream record is for the INSERT event, the function uses the record's NewImage section (the latest update, including the Tenant Name field) to invoke CloudFormation to create a new tenant infrastructure from the template that is stored in the S3 bucket. The CloudFormation template requires the Tenant Name parameter. 

1. AWS CloudFormation creates the tenant infrastructure based on the CloudFormation template and input parameters.

1. Each tenant infrastructure setup has a CloudWatch alarm, a billing alarm, and an alarm event.

1. The alarm event becomes a message to an SNS topic, which is encrypted by the tenant's AWS KMS key.

1. The SNS topic forwards the received alarm message to the SQS queue, which is also encrypted by the tenant's AWS KMS key.

Other systems can be integrated with Amazon SQS to perform actions based on messages in the queue. In this example, to keep the code generic, incoming messages remain in the queue and require manual deletion.
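The duplicate-name guard behind step 2 relies on a DynamoDB conditional write. The following sketch shows the shape of that request, written in Python for brevity (the repository implements the function in C#); the table and attribute names here are assumptions, not the repository's actual identifiers.

```python
import uuid

TABLE_NAME = "TenantOnboarding"  # assumed table name


def build_put_request(name: str, description: str) -> dict:
    """Build a PutItem request that fails if the tenant name already exists.

    The ConditionExpression makes DynamoDB reject the write with a
    ConditionalCheckFailedException when a record with the same
    partition key (TenantName) is already present, which the onboarding
    function translates into an HTTP BadRequest response.
    """
    return {
        "TableName": TABLE_NAME,
        "Item": {
            "TenantName": {"S": name},             # partition key
            "TenantId": {"S": str(uuid.uuid4())},  # generated UUID
            "Description": {"S": description},
        },
        "ConditionExpression": "attribute_not_exists(TenantName)",
    }

# The request would then be sent with the AWS SDK, for example:
#   boto3.client("dynamodb").put_item(**build_put_request("Tenant123", "demo"))
```

The same conditional-write idea applies regardless of SDK language; only the request-building syntax differs.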

**Tenant stack deletion flow**

1. User sends a DELETE API request with the target tenant name to the REST API hosted by Amazon API Gateway, which processes the request and forwards it to the Tenant On-boarding function. In this example, there is no authorization or authentication. In a production setup, this API would be integrated with the SaaS infrastructure security system.

1. The Tenant On-boarding function verifies the request and then attempts to delete the tenant record (tenant name) from the Tenant On-boarding table. 

1. After DynamoDB deletes the record successfully (the record exists in the table and is deleted), a DynamoDB stream initiates the downstream Lambda Tenant Infrastructure function.

1. The Tenant Infrastructure Lambda function acts based on the received DynamoDB stream record. If the stream record is for the REMOVE event, the function uses the record's OldImage section (the record information, including the Tenant Name field, as it was before the deletion) to initiate deletion of the existing stack for that tenant.

1. AWS CloudFormation deletes the target tenant stack according to the input.
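Both flows converge on the Tenant Infrastructure function, which branches on the stream record's event name. A minimal Python sketch of that dispatch logic follows (the repository implements it in C#; the stack-name prefix and field names are assumptions for illustration):

```python
def route_stream_record(record):
    """Decide which CloudFormation action a DynamoDB stream record triggers.

    INSERT events read the tenant name from NewImage (the record after
    the change); REMOVE events read it from OldImage (the record before
    the deletion). Other events, such as MODIFY, are ignored.
    Returns a (action, stack_name) tuple, or None for ignored events.
    """
    event = record.get("eventName")
    ddb = record.get("dynamodb", {})
    if event == "INSERT":
        name = ddb["NewImage"]["TenantName"]["S"]
        return ("create_stack", f"tenantcluster-{name}")
    if event == "REMOVE":
        name = ddb["OldImage"]["TenantName"]["S"]
        return ("delete_stack", f"tenantcluster-{name}")
    return None
```

In the real function, the returned action would map to a CloudFormation `CreateStack` or `DeleteStack` call made with the custom service role.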

## Tools
<a name="tenant-onboarding-in-saas-architecture-for-the-silo-model-using-c-and-aws-cdk-tools"></a>

**AWS services**
+ [Amazon API Gateway](https://docs.aws.amazon.com/apigateway/latest/developerguide/welcome.html) helps you create, publish, maintain, monitor, and secure REST, HTTP, and WebSocket APIs at any scale.
+ [AWS Cloud Development Kit (AWS CDK)](https://docs.aws.amazon.com/cdk/v2/guide/home.html) is a software development framework that helps you define and provision AWS Cloud infrastructure in code.
+ [AWS CDK Toolkit](https://docs.aws.amazon.com/cdk/v2/guide/cli.html) is a command line cloud development kit that helps you interact with your AWS Cloud Development Kit (AWS CDK) app.
+ [AWS Command Line Interface (AWS CLI)](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) is an open-source tool that helps you interact with AWS services through commands in your command-line shell.
+ [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) helps you set up AWS resources, provision them quickly and consistently, and manage them throughout their lifecycle across AWS accounts and Regions.
+ [Amazon DynamoDB](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Introduction.html) is a fully managed NoSQL database service that provides fast, predictable, and scalable performance.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.
+ [AWS Key Management Service (AWS KMS)](https://docs.aws.amazon.com/kms/latest/developerguide/overview.html) helps you create and control cryptographic keys to help protect your data.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a compute service that helps you run code without needing to provision or manage servers. It runs your code only when needed and scales automatically, so you pay only for the compute time that you use.
+ [Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) is a cloud-based object storage service that helps you store, protect, and retrieve any amount of data.
+ [Amazon Simple Notification Service (Amazon SNS)](https://docs.aws.amazon.com/sns/latest/dg/welcome.html) helps you coordinate and manage the exchange of messages between publishers and clients, including web servers and email addresses.
+ [Amazon Simple Queue Service (Amazon SQS)](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/welcome.html) provides a secure, durable, and available hosted queue that helps you integrate and decouple distributed software systems and components.
+ [AWS Toolkit for Visual Studio](https://docs.aws.amazon.com/toolkit-for-visual-studio/latest/user-guide/welcome.html) is a plugin for the Visual Studio integrated development environment (IDE). The Toolkit for Visual Studio supports developing, debugging, and deploying .NET applications that use AWS services.

**Other tools**
+ [Visual Studio](https://docs.microsoft.com/en-us/visualstudio/ide/whats-new-visual-studio-2022?view=vs-2022) is an IDE that includes compilers, code completion tools, graphical designers, and other features that support software development.

**Code**

The code for this pattern is in the [Tenant onboarding in SaaS Architecture for Silo Model APG Example](https://github.com/aws-samples/tenant-onboarding-in-saas-architecture-for-silo-model-apg-example) repository.

## Epics
<a name="tenant-onboarding-in-saas-architecture-for-the-silo-model-using-c-and-aws-cdk-epics"></a>

### Set up AWS CDK
<a name="set-up-aws-cdk"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Verify Node.js installation. | To verify that Node.js is installed on your local machine, run the following command.<pre>node --version</pre> | AWS administrator, AWS DevOps | 
| Install AWS CDK Toolkit. | To install AWS CDK Toolkit on your local machine, run the following command.<pre>npm install -g aws-cdk</pre>If npm is not installed, you can install it from the [Node.js site](https://nodejs.org/en/download/package-manager/). | AWS administrator, AWS DevOps | 
| Verify the AWS CDK Toolkit version. | To verify that the AWS CDK Toolkit version is installed correctly on your machine, run the following command.  <pre>cdk --version</pre> | AWS administrator, AWS DevOps | 

### Review the code for the tenant onboarding control plane
<a name="review-the-code-for-the-tenant-onboarding-control-plane"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clone the repository. | Clone the [repository](https://github.com/aws-samples/tenant-onboarding-in-saas-architecture-for-silo-model-apg-example), and navigate to the `\tenant-onboarding-in-saas-architecture-for-silo-model-apg-example` folder. In Visual Studio 2022, open the `\src\TenantOnboardingInfra.sln` solution. Open the `TenantOnboardingInfraStack.cs` file and review the code. The following resources are created as part of this stack:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/tenant-onboarding-in-saas-architecture-for-the-silo-model-using-c-and-aws-cdk.html) | AWS administrator, AWS DevOps | 
| Review the CloudFormation template. | In the `\tenant-onboarding-in-saas-architecture-for-silo-model-apg-example\template` folder, open `infra.yaml`, and review the CloudFormation template. This template will be hydrated with the tenant name retrieved from the tenant onboarding DynamoDB table. The template provisions the tenant-specific infrastructure. In this example, it provisions the AWS KMS key, Amazon SNS, Amazon SQS, and the CloudWatch alarm. | App developer, AWS DevOps | 
| Review the tenant onboarding function. | Open `Function.cs`, and review the code for the tenant onboarding function, which is created with the Visual Studio AWS Lambda Project (.NET Core - C#) template with the .NET 6 (Container Image) blueprint. Open the `Dockerfile`, and review the code. The `Dockerfile` is a text file that contains the instructions for building the Lambda container image. Note that the following NuGet packages are added as dependencies to the `TenantOnboardingFunction` project:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/tenant-onboarding-in-saas-architecture-for-the-silo-model-using-c-and-aws-cdk.html) | App developer, AWS DevOps | 
| Review the Tenant InfraProvisioning function. | Navigate to `\tenant-onboarding-in-saas-architecture-for-silo-model-apg-example\src\InfraProvisioningFunction`. Open `Function.cs`, and review the code for the tenant infrastructure provisioning function, which is created with the Visual Studio AWS Lambda Project (.NET Core - C#) template with the .NET 6 (Container Image) blueprint. Open the `Dockerfile`, and review the code. Note that the following NuGet packages are added as dependencies to the `InfraProvisioningFunction` project:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/tenant-onboarding-in-saas-architecture-for-the-silo-model-using-c-and-aws-cdk.html) | App developer, AWS DevOps | 

### Deploy the AWS resources
<a name="deploy-the-aws-resources"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Build the solution. | To build the solution, perform the following steps:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/tenant-onboarding-in-saas-architecture-for-the-silo-model-using-c-and-aws-cdk.html) Make sure that you update the `Amazon.CDK.Lib` NuGet package to the latest version in the `\tenant-onboarding-in-saas-architecture-for-silo-model-apg-example\src\TenantOnboardingInfra` project before you build the solution. | App developer | 
| Bootstrap the AWS CDK environment. | Open the Windows command prompt and navigate to the AWS CDK app root folder where the `cdk.json` file is available (`\tenant-onboarding-in-saas-architecture-for-silo-model-apg-example`). Run the following command for bootstrapping.<pre>cdk bootstrap</pre>If you have created an AWS profile for the credentials, use the command with your profile.<pre>cdk bootstrap --profile <profile name></pre> | AWS administrator, AWS DevOps | 
| List the AWS CDK stacks. | To list all the stacks to be created as part of this project, run the following command.<pre>cdk ls</pre>If you have created an AWS profile for the credentials, use the command with your profile.<pre>cdk ls --profile <profile name></pre> | AWS administrator, AWS DevOps | 
| Review which AWS resources will be created. | To review all the AWS resources that will be created as part of this project, run the following command.<pre>cdk diff</pre>If you have created an AWS profile for the credentials, use the command with your profile.<pre>cdk diff --profile <profile name></pre> | AWS administrator, AWS DevOps | 
| Deploy all the AWS resources by using AWS CDK. | To deploy all the AWS resources run the following command.<pre>cdk deploy --all --require-approval never</pre>If you have created an AWS profile for the credentials, use the command with your profile.<pre>cdk deploy --all --require-approval never --profile <profile name></pre>After the deployment is complete, copy the API URL from the outputs section in the command prompt, which is shown in the following example.<pre>Outputs:<br />TenantOnboardingInfraStack.TenantOnboardingAPIEndpoint42E526D7 = https://j2qmp8ds21i1i.execute-api.us-west-2.amazonaws.com/prod/</pre> | AWS administrator, AWS DevOps | 

### Verify the functionality
<a name="verify-the-functionality"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a new tenant. | To create the new tenant, send the following curl request.<pre>curl -X POST <TenantOnboardingAPIEndpoint* from CDK Output>tenant -d '{"Name":"Tenant123", "Description":"Stack for Tenant123"}'</pre>Change the placeholder `<TenantOnboardingAPIEndpoint* from CDK Output>` to the actual value from AWS CDK, as shown in the following example.<pre>curl -X POST https://j2qmp8ds21i1i.execute-api.us-west-2.amazonaws.com/prod/tenant -d '{"Name":"Tenant123", "Description":"test12"}'</pre>The following example shows the output.<pre>{"message": "A new tenant added - 5/4/2022 7:11:30 AM"}</pre> | App developer, AWS administrator, AWS DevOps | 
| Verify the newly created tenant details in DynamoDB. | To verify the newly created tenant details in DynamoDB, perform the following steps.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/tenant-onboarding-in-saas-architecture-for-the-silo-model-using-c-and-aws-cdk.html) | App developer, AWS administrator, AWS DevOps | 
| Verify the stack creation for the new tenant. | Verify that the new stack was successfully created and provisioned with infrastructure for the newly created tenant according to the CloudFormation template.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/tenant-onboarding-in-saas-architecture-for-the-silo-model-using-c-and-aws-cdk.html) | App developer, AWS administrator, AWS DevOps | 
| Delete the tenant stack. | To delete the tenant stack, send the following curl request.<pre>curl -X DELETE <TenantOnboardingAPIEndpoint* from CDK Output>tenant/<Tenant Name from previous step></pre>Change the placeholder `<TenantOnboardingAPIEndpoint* from CDK Output>` to the actual value from AWS CDK, and change `<Tenant Name from previous step>` to the actual value from the previous tenant creation step, as shown in the following example.<pre>curl -X DELETE https://j2qmp8ds21i1i.execute-api.us-west-2.amazonaws.com/prod/tenant/Tenant123</pre>The following example shows the output.<pre>{"message": "Tenant destroyed - 5/4/2022 7:14:48 AM"}</pre> | App developer, AWS DevOps, AWS administrator | 
| Verify the stack deletion for the existing tenant. | To verify that the existing tenant stack got deleted, perform the following steps:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/tenant-onboarding-in-saas-architecture-for-the-silo-model-using-c-and-aws-cdk.html) | App developer, AWS administrator, AWS DevOps | 

### Clean up
<a name="clean-up"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Destroy the environment. | Before cleaning up the stack, ensure the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/tenant-onboarding-in-saas-architecture-for-the-silo-model-using-c-and-aws-cdk.html)After testing is done, you can use the AWS CDK to destroy all the stacks and related resources by running the following command.<pre>cdk destroy --all</pre>If you created an AWS profile for the credentials, use the command with your profile. Confirm the stack deletion prompt to delete the stacks. | AWS administrator, AWS DevOps | 
| Clean up Amazon CloudWatch Logs. | The stack deletion process will not clean up CloudWatch Logs (log groups and logs) that were generated by the stack. Manually clean up the CloudWatch resources by using the CloudWatch console or the API. | App developer, AWS DevOps, AWS administrator | 

## Related resources
<a name="tenant-onboarding-in-saas-architecture-for-the-silo-model-using-c-and-aws-cdk-resources"></a>
+ [AWS CDK .NET Workshop](https://cdkworkshop.com/40-dotnet.html)
+ [Working with the AWS CDK in C#](https://docs.aws.amazon.com/cdk/v2/guide/work-with-cdk-csharp.html)
+ [CDK .NET Reference](https://docs.aws.amazon.com/cdk/api/v2/dotnet/api/index.html)

## Additional information
<a name="tenant-onboarding-in-saas-architecture-for-the-silo-model-using-c-and-aws-cdk-additional"></a>

**Control-plane technology stack**

The CDK code written in .NET is used to provision the control-plane infrastructure, which consists of the following resources:

1. **API Gateway**

   Serves as the REST API entry point for the control-plane stack.

1. **Tenant on-boarding Lambda function**

   This Lambda function is invoked by API Gateway for both the POST and DELETE methods.

   A POST method API request results in the tenant record (`tenant name`, `tenant description`) being inserted into the DynamoDB `Tenant Onboarding` table.

   In this code example, the tenant name is also used as part of the tenant stack name and the names of resources within that stack. This is to make these resources easier to identify. This tenant name must be unique across the setup to avoid conflicts or errors. Detailed input validation setup is explained in the [IAM roles](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html) documentation and the *Limitations* section.

   The persistence process to the DynamoDB table will succeed only if the tenant name is not used in any other record in the table.

   The tenant name in this case is the partition key for this table, because only the partition key can be used as a `PutItem` condition expression.

   If the tenant name was never recorded before, the record will be saved into the table successfully.

   However, if the tenant name is already used by an existing record in the table, the operation will fail and initiate a DynamoDB `ConditionalCheckFailedException` exception. The exception will be used to return a failure message (`HTTP BadRequest`) indicating that the tenant name already exists.

   A `DELETE` method API request will remove the record for a specific tenant name from the `Tenant Onboarding` table.

   The DynamoDB record deletion in this example will succeed even if the record does not exist.

   If the target record exists and is deleted, it will create a DynamoDB stream record. Otherwise, no downstream record will be created.

1. **Tenant on-boarding DynamoDB, with Amazon DynamoDB Streams enabled**

   This table records the tenant metadata, and any record save or deletion sends a stream record downstream to the `Tenant Infrastructure` Lambda function. 

1. **Tenant infrastructure Lambda function**

   This Lambda function is initiated by the DynamoDB stream record from the previous step. If the record is for an `INSERT` event, it invokes AWS CloudFormation to create a new tenant infrastructure with the CloudFormation template that is stored in an S3 bucket. If the record is for `REMOVE`, it initiates deletion of an existing stack based on the stream record's `Tenant Name` field.

1. **S3 bucket**

   This is for storing the CloudFormation template.

1. **IAM roles for each Lambda function and a service role for CloudFormation**

   Each Lambda function has its unique IAM role with [least-privilege permissions](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#grant-least-privilege) to achieve its task. For example, the `Tenant On-boarding` Lambda function has read/write access to DynamoDB, and the `Tenant Infrastructure` Lambda function can only read the DynamoDB stream.

   A custom CloudFormation service role is created for tenant stack provisioning. This service role contains additional permissions for CloudFormation stack provisioning (for example, for the AWS KMS key). Dividing responsibilities between Lambda and CloudFormation avoids concentrating all permissions in a single role (the Tenant Infrastructure Lambda role).

   Permissions that allow powerful actions (such as creating and deleting CloudFormation stacks) are locked down and allowed only on resources whose names start with `tenantcluster-`. The exception is AWS KMS, because of its resource naming convention. The tenant name ingested from the API is prepended with `tenantcluster-`, along with other validation checks (alphanumeric with dashes only, and limited to fewer than 30 characters to fit most AWS resource naming limits). This ensures that the tenant name cannot accidentally disrupt core infrastructure stacks or resources.
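The name checks described above might look like the following sketch, written in Python for brevity (the repository implements them in C#); the exact limit and regular expression are assumptions based on the description, not the repository's actual code.

```python
import re

STACK_PREFIX = "tenantcluster-"
_NAME_RE = re.compile(r"^[A-Za-z0-9-]+$")  # alphanumeric with dashes only


def stack_name_for(tenant_name):
    """Validate a tenant name and prepend the stack prefix.

    Raises ValueError for names that fail validation, so a malformed
    tenant name can never become a CloudFormation stack name that
    collides with core infrastructure resources.
    """
    if not tenant_name or len(tenant_name) >= 30:
        raise ValueError("tenant name must be 1-29 characters")
    if not _NAME_RE.match(tenant_name):
        raise ValueError("tenant name must be alphanumeric with dashes only")
    return STACK_PREFIX + tenant_name
```

Because every stack name carries the prefix, the IAM policies that allow stack creation and deletion can safely scope their resource patterns to that prefix.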

**Tenant technology stack**

A CloudFormation template is stored in the S3 bucket. The template provisions the tenant-specific AWS KMS key, a CloudWatch alarm, an SNS topic, an SQS queue, and an [SQS policy](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-using-identity-based-policies.html).

The AWS KMS key is used for data encryption by Amazon SNS and Amazon SQS for their messages. The security practices for [AwsSolutions-SNS2 and AwsSolutions-SQS2](https://github.com/cdklabs/cdk-nag/blob/main/RULES.md) recommend that you set up Amazon SNS and Amazon SQS with encryption. However, CloudWatch alarms don’t work with Amazon SNS when using an AWS managed key, so you must use a customer managed key in this case. For more information, see the [AWS Knowledge Center](https://aws.amazon.com/premiumsupport/knowledge-center/cloudwatch-receive-sns-for-alarm-trigger/).

The SQS policy is used on the Amazon SQS queue to allow the created SNS topic to deliver the message to the queue. Without the SQS policy, the access will be denied. For more information, see the [Amazon SNS documentation](https://docs.aws.amazon.com/sns/latest/dg/subscribe-sqs-queue-to-sns-topic.html#SendMessageToSQS.sqs.permissions).
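For illustration, an SQS policy that grants an SNS topic permission to deliver messages to a queue typically takes the following shape. The ARNs here are placeholders, not values from this pattern; see the linked Amazon SNS documentation for the authoritative form.

```
{
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "sns.amazonaws.com" },
      "Action": "sqs:SendMessage",
      "Resource": "arn:aws:sqs:us-east-1:123456789012:tenantcluster-example-queue",
      "Condition": {
        "ArnEquals": {
          "aws:SourceArn": "arn:aws:sns:us-east-1:123456789012:tenantcluster-example-topic"
        }
      }
    }
  ]
}
```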

# Decompose monoliths into microservices by using CQRS and event sourcing
<a name="decompose-monoliths-into-microservices-by-using-cqrs-and-event-sourcing"></a>

*Rodolfo Jr. Cerrada, Dmitry Gulin, and Tabby Ward, Amazon Web Services*

## Summary
<a name="decompose-monoliths-into-microservices-by-using-cqrs-and-event-sourcing-summary"></a>

This pattern combines two patterns, using both the command query responsibility separation (CQRS) pattern and the event sourcing pattern. The CQRS pattern separates responsibilities of the command and query models. The event sourcing pattern takes advantage of asynchronous event-driven communication to improve the overall user experience.

You can use CQRS and Amazon Web Services (AWS) services to maintain and scale each data model independently while refactoring your monolith application into microservices architecture. Then you can use the event sourcing pattern to synchronize data from the command database to the query database.

This pattern uses example code that includes a solution (.sln) file that you can open using the latest version of Visual Studio. The example contains Reward API code to showcase how CQRS and event sourcing work in AWS serverless and traditional or on-premises applications.

To learn more about CQRS and event sourcing, see the [Additional information](#decompose-monoliths-into-microservices-by-using-cqrs-and-event-sourcing-additional) section.

## Prerequisites and limitations
<a name="decompose-monoliths-into-microservices-by-using-cqrs-and-event-sourcing-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ Amazon CloudWatch
+ Amazon DynamoDB tables
+ Amazon DynamoDB Streams
+ AWS Identity and Access Management (IAM) access key and secret key; for more information, see the video in the *Related resources* section
+ AWS Lambda
+ Familiarity with Visual Studio
+ Familiarity with AWS Toolkit for Visual Studio; for more information, see the *AWS Toolkit for Visual Studio demo* video in the *Related resources* section

**Product versions**
+ [Visual Studio 2019 Community Edition](https://visualstudio.microsoft.com/downloads/).
+ [AWS Toolkit for Visual Studio 2019](https://aws.amazon.com/visualstudio/).
+ .NET Core 3.1. This component is an option in the Visual Studio installation. To include .NET Core during installation, select **.NET Core cross-platform development**.

**Limitations**
+ The example code for a traditional on-premises application (ASP.NET Core Web API and data access objects) does not come with a database. However, it comes with the `CustomerData` in-memory object, which acts as a mock database. The code provided is enough for you to test the pattern.

## Architecture
<a name="decompose-monoliths-into-microservices-by-using-cqrs-and-event-sourcing-architecture"></a>

**Source technology stack**
+ ASP.NET Core Web API project
+ IIS Web Server
+ Data access object
+ CRUD model

**Source architecture**

In the source architecture, the CRUD model contains both command and query interfaces in one application. For example code, see `CustomerDAO.cs` (attached).

![\[Connections between application, service interface, customer CRUD model, and database.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/9f1bc700-def4-4201-bb2d-f1fa27404f15/images/1cd3a84c-12c7-4306-99aa-23f2c53d3cd3.png)


**Target technology stack**
+ Amazon DynamoDB
+ Amazon DynamoDB Streams
+ AWS Lambda
+ (Optional) Amazon API Gateway
+ (Optional) Amazon Simple Notification Service (Amazon SNS)

**Target architecture**

In the target architecture, the command and query interfaces are separated. The architecture shown in the following diagram can be extended with API Gateway and Amazon SNS. For more information, see the [Additional information](#decompose-monoliths-into-microservices-by-using-cqrs-and-event-sourcing-additional) section.

![\[Application connecting with serverless Customer Command and Customer Query microservices.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/9f1bc700-def4-4201-bb2d-f1fa27404f15/images/1c665697-e3ac-4ef4-98d0-86c2cbf164c1.png)


1. Command Lambda functions perform write operations, such as create, update, or delete, on the database.

1. Query Lambda functions perform read operations, such as get or select, on the database.

1. This Lambda function processes the DynamoDB streams from the Command database and updates the Query database for the changes.

## Tools
<a name="decompose-monoliths-into-microservices-by-using-cqrs-and-event-sourcing-tools"></a>

**Tools**
+ [Amazon DynamoDB](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Introduction.html) – Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability.
+ [Amazon DynamoDB Streams](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.html) – DynamoDB Streams captures a time-ordered sequence of item-level modifications in any DynamoDB table. It then stores this information in a log for up to 24 hours. Encryption at rest encrypts the data in DynamoDB streams.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) – AWS Lambda is a compute service that supports running code without provisioning or managing servers. Lambda runs your code only when needed and scales automatically, from a few requests per day to thousands per second. You pay only for the compute time that you consume—there is no charge when your code is not running.
+ [AWS Management Console](https://docs.aws.amazon.com/awsconsolehelpdocs/latest/gsg/learn-whats-new.html) – The AWS Management Console is a web application that comprises a broad collection of service consoles for managing AWS services.
+ [Visual Studio 2019 Community Edition](https://visualstudio.microsoft.com/downloads/) – Visual Studio 2019 is an integrated development environment (IDE). The Community Edition is free for open-source contributors. In this pattern, you will use Visual Studio 2019 Community Edition to open, compile, and run example code. For viewing only, you can use any text editor or [Visual Studio Code](https://docs.aws.amazon.com/toolkit-for-vscode/latest/userguide/welcome.html).
+ [AWS Toolkit for Visual Studio](https://docs.aws.amazon.com/toolkit-for-visual-studio/latest/user-guide/welcome.html) – The AWS Toolkit for Visual Studio is a plugin for the Visual Studio IDE. The AWS Toolkit for Visual Studio makes it easier for you to develop, debug, and deploy .NET applications that use AWS services.

**Code**

The example code is attached. For instructions on deploying the example code, see the *Epics* section.

## Epics
<a name="decompose-monoliths-into-microservices-by-using-cqrs-and-event-sourcing-epics"></a>

### Open and build the solution
<a name="open-and-build-the-solution"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Open the solution. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/decompose-monoliths-into-microservices-by-using-cqrs-and-event-sourcing.html) | App developer | 
| Build the solution. | Open the context (right-click) menu for the solution, and then choose **Build Solution**. This will build and compile all the projects in the solution. It should compile successfully. Visual Studio Solution Explorer should show the directory structure.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/decompose-monoliths-into-microservices-by-using-cqrs-and-event-sourcing.html) | App developer | 

### Build the DynamoDB tables
<a name="build-the-dynamodb-tables"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Provide credentials. | If you don't have an access key yet, see the video in the *Related resources* section.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/decompose-monoliths-into-microservices-by-using-cqrs-and-event-sourcing.html) | App developer, Data engineer, DBA | 
| Build the project. | To build the project, open the context (right-click) menu for the **AwS.APG.CQRSES.Build** project, and then choose **Build**. | App developer, Data engineer, DBA | 
| Build and populate the tables. | To build the tables and populate them with seed data, open the context (right-click) menu for the **AwS.APG.CQRSES.Build** project, and then choose **Debug**, **Start New Instance**. | App developer, Data engineer, DBA | 
| Verify the table construction and the data. | To verify, navigate to **AWS Explorer**, and expand **Amazon DynamoDB**. It should display the tables. Open each table to display the example data. | App developer, Data engineer, DBA | 

### Run local tests
<a name="run-local-tests"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Build the CQRS project. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/decompose-monoliths-into-microservices-by-using-cqrs-and-event-sourcing.html) | App developer, Test engineer | 
| Build the event-sourcing project. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/decompose-monoliths-into-microservices-by-using-cqrs-and-event-sourcing.html) | App developer, Test engineer | 
| Run the tests. | To run all tests, choose **View**, **Test Explorer**, and then choose **Run All Tests In View**. All tests should pass, which is indicated by a green check mark icon.  | App developer, Test engineer | 

### Publish the CQRS Lambda functions to AWS
<a name="publish-the-cqrs-lambda-functions-to-aws"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Publish the first Lambda function. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/decompose-monoliths-into-microservices-by-using-cqrs-and-event-sourcing.html) | App developer, DevOps engineer | 
| Verify the function upload. | (Optional) You can verify that the function was successfully loaded by navigating to AWS Explorer and expanding **AWS Lambda**. To open the test window, choose the Lambda function (double-click). | App developer, DevOps engineer | 
| Test the Lambda function. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/decompose-monoliths-into-microservices-by-using-cqrs-and-event-sourcing.html)All CQRS Lambda projects are found under the `CQRS AWS Serverless\CQRS\Command Microservice` and `CQRS AWS Serverless\CQRS\Query Microservice` solution folders. For the solution directory and projects, see **Source code directory** in the [Additional information](#decompose-monoliths-into-microservices-by-using-cqrs-and-event-sourcing-additional) section. | App developer, DevOps engineer | 
| Publish the remaining functions. | Repeat the previous steps for the following projects:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/decompose-monoliths-into-microservices-by-using-cqrs-and-event-sourcing.html) | App developer, DevOps engineer | 

### Set up the Lambda function as an event listener
<a name="set-up-the-lambda-function-as-an-event-listener"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Publish the Customer and Reward Lambda event handlers. | To publish each event handler, follow the steps in the preceding epic.The projects are under the `CQRS AWS Serverless\Event Source\Customer Event` and `CQRS AWS Serverless\Event Source\Reward Event` solution folders. For more information, see *Source code directory* in the [Additional information](#decompose-monoliths-into-microservices-by-using-cqrs-and-event-sourcing-additional) section. | App developer | 
| Attach the event-sourcing Lambda event listener. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/decompose-monoliths-into-microservices-by-using-cqrs-and-event-sourcing.html)After the listener is successfully attached to the DynamoDB table, it will be displayed on the Lambda designer page. | App developer | 
| Publish and attach the EventSourceReward Lambda function. | To publish and attach the `EventSourceReward` Lambda function, repeat the steps in the previous two stories, selecting **cqrses-reward-cmd** from the **DynamoDB table** dropdown list. | App developer | 

### Test and validate the DynamoDB streams and Lambda trigger
<a name="test-and-validate-the-dynamodb-streams-and-lambda-trigger"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Test the stream and the Lambda trigger. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/decompose-monoliths-into-microservices-by-using-cqrs-and-event-sourcing.html) | App developer | 
| Validate, using the DynamoDB reward query table. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/decompose-monoliths-into-microservices-by-using-cqrs-and-event-sourcing.html) | App developer | 
| Validate, using CloudWatch Logs. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/decompose-monoliths-into-microservices-by-using-cqrs-and-event-sourcing.html) | App developer | 
| Validate the EventSourceCustomer trigger. | To validate the `EventSourceCustomer` trigger, repeat the steps in this epic, using the `EventSourceCustomer` trigger's respective customer table and CloudWatch logs. | App developer | 

## Related resources
<a name="decompose-monoliths-into-microservices-by-using-cqrs-and-event-sourcing-resources"></a>

**References**
+ [Visual Studio 2019 Community Edition downloads](https://visualstudio.microsoft.com/downloads/)
+ [AWS Toolkit for Visual Studio download](https://aws.amazon.com/visualstudio/)
+ [AWS Toolkit for Visual Studio User Guide](https://docs.aws.amazon.com/toolkit-for-visual-studio/latest/user-guide/welcome.html)
+ [Serverless on AWS](https://aws.amazon.com/serverless/)
+ [DynamoDB Use Cases and Design Patterns](https://aws.amazon.com/blogs/database/dynamodb-streams-use-cases-and-design-patterns/)
+ [Martin Fowler CQRS](https://martinfowler.com/bliki/CQRS.html)
+ [Martin Fowler Event Sourcing](https://martinfowler.com/eaaDev/EventSourcing.html)

**Videos**
+ [AWS Toolkit for Visual Studio demo](https://www.youtube.com/watch?v=B190tcu1ERk)
+ [How do I create an access key ID for a new IAM user?](https://www.youtube.com/watch?v=665RYobRJDY)

## Additional information
<a name="decompose-monoliths-into-microservices-by-using-cqrs-and-event-sourcing-additional"></a>

**CQRS and event sourcing**

*CQRS*

The CQRS pattern separates a single conceptual operations model, such as a data access object single CRUD (create, read, update, delete) model, into command and query operations models. The command model refers to any operation, such as create, update, or delete, that changes the state. The query model refers to any operation that returns a value.

![\[Architecture with service interface, CRUD model, and database.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/9f1bc700-def4-4201-bb2d-f1fa27404f15/images/3f64756d-681e-4f0e-8034-746263d857b2.png)


1. The Customer CRUD model includes the following interfaces:
   + `CreateCustomer()`
   + `UpdateCustomer()`
   + `DeleteCustomer()`
   + `AddPoints()`
   + `RedeemPoints()`
   + `GetVIPCustomers()`
   + `GetCustomerList()`
   + `GetCustomerPoints()`

As your requirements become more complex, you can move away from this single-model approach. CQRS uses a command model and a query model to separate the responsibility for writing and reading data, so that the data can be maintained and managed independently. With a clear separation of responsibilities, enhancements to each model do not affect the other. This separation improves maintenance and performance, and it reduces the complexity of the application as it grows.

![\[The application separated into command and query models, sharing a single database.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/9f1bc700-def4-4201-bb2d-f1fa27404f15/images/12db023c-eb81-4c27-bbb9-b085b13176ae.png)


 

1. Interfaces in the Customer Command model:
   + `CreateCustomer()`
   + `UpdateCustomer()`
   + `DeleteCustomer()`
   + `AddPoints()`
   + `RedeemPoints()`

1. Interfaces in the Customer Query model:
   + `GetVIPCustomers()`
   + `GetCustomerList()`
   + `GetCustomerPoints()`
   + `GetMonthlyStatement()`

For example code, see *Source code directory*.
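As a minimal sketch of this separation (in Python rather than the pattern's C#, with an in-memory dictionary standing in for the database), the command and query responsibilities can live in separate classes:

```python
# Illustrative sketch of the CQRS split: the command side mutates state,
# the query side only reads it. Names mirror the Customer model in this
# pattern; the dict stands in for the database.

class CustomerCommandService:
    """Write side: operations that change state."""

    def __init__(self, store):
        self.store = store

    def create_customer(self, customer_id, name, points=0):
        self.store[customer_id] = {"name": name, "points": points}

    def add_points(self, customer_id, points):
        self.store[customer_id]["points"] += points


class CustomerQueryService:
    """Read side: operations that return values and never mutate."""

    def __init__(self, store):
        self.store = store

    def get_customer_points(self, customer_id):
        return self.store[customer_id]["points"]
```

In the pattern itself, each side is a separate Lambda project; the sketch only illustrates the division of responsibilities that makes independent scaling and maintenance possible.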

The CQRS pattern then decouples the database. This decoupling leads to the total independence of each service, which is a key ingredient of microservices architecture.

![\[Separate databases for command and query models.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/9f1bc700-def4-4201-bb2d-f1fa27404f15/images/016dbfa8-3bd8-42ee-afa1-38a98986c7d5.png)


Using CQRS in the AWS Cloud, you can further optimize each service. For example, you can set different compute settings or choose between a serverless and a container-based microservice. You can replace on-premises caching with Amazon ElastiCache, and on-premises publish/subscribe messaging with Amazon Simple Notification Service (Amazon SNS). Additionally, you can take advantage of pay-as-you-go pricing across the wide array of AWS services, paying only for what you use.

CQRS includes the following benefits:
+ Independent scaling – Each model can have its own scaling strategy, adjusted to the requirements and demand of the service. As in high-performance applications, separating read and write operations enables each model to scale independently to address its own demand. You can also add or reduce compute resources for one model without affecting the other.
+ Independent maintenance – Separation of query and command models improves the maintainability of the models. You can make code changes and enhancements to one model without affecting the other.
+ Security – It's easier to apply the permissions and policies to separate models for read and write.
+ Optimized reads – You can define a schema that is optimized for queries. For example, you can define a schema for the aggregated data and a separate schema for the fact tables.
+ Integration – CQRS fits well with event-based programming models.
+ Managed complexity – The separation into query and command models is suited to complex domains.

When using CQRS, keep in mind the following caveats:
+ The CQRS pattern applies only to a specific portion of an application and not the whole application. If implemented on a domain that does not fit the pattern, it can reduce productivity, increase risk, and introduce complexity.
+ The pattern works best for frequently used models that have an imbalance between read and write operations.
+ For read-heavy applications, such as large reports that take time to process, CQRS gives you the option to select the right database and create a schema to store your aggregated data. This improves the response time for reading and viewing the report, because the report data is processed only once and stored in the aggregated table.
+ For write-heavy applications, you can configure the database for write operations and allow the command microservice to scale independently when write demand increases. For examples, see the `AWS.APG.CQRSES.CommandRedeemRewardLambda` and `AWS.APG.CQRSES.CommandAddRewardLambda` microservices.

*Event sourcing*

The next step is to use event sourcing to synchronize the query database when a command is run. For example, consider the following events:
+ Reward points are added for a customer, which requires the customer's total (aggregated) reward points in the query database to be updated.
+ A customer's last name is updated in the command database, which requires the surrogate customer information in the query database to be updated.

In the traditional CRUD model, you ensure data consistency by locking the data until a transaction finishes. In event sourcing, data is synchronized by publishing a series of events that subscribers consume to update their own data.

The event sourcing pattern records the full series of actions taken on the data and publishes them as a sequence of events. These events represent a set of changes that subscribers must process to keep their own records updated. By consuming the events, a subscriber synchronizes the data in its own database, which in this case is the query database.
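The idea can be sketched in a few lines of Python (illustrative names, not from the example code): state changes are appended to an event log, and a subscriber folds the log into the query-side view.

```python
# Sketch of event sourcing: every state change is appended to an
# append-only event log, and a subscriber replays the log to keep a
# query-side view in sync. Event names and fields are illustrative.

events = []  # the event store: an append-only log

def publish(event):
    events.append(event)

def rebuild_points_view(log):
    """Subscriber: fold the event log into the query model."""
    view = {}
    for event in log:
        cid = event["customer_id"]
        if event["type"] == "PointsAdded":
            view[cid] = view.get(cid, 0) + event["points"]
        elif event["type"] == "PointsRedeemed":
            view[cid] = view.get(cid, 0) - event["points"]
    return view
```

Because the log is the source of truth, the same fold can also reconstruct past states, which is one of the benefits listed later in this section.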

The following diagram shows event sourcing used with CQRS on AWS.

![\[Microservice architecture for the CQRS and event sourcing patterns using AWS serverless services.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/9f1bc700-def4-4201-bb2d-f1fa27404f15/images/cc9bc84a-60b4-4459-9a5c-2334c69dbb4e.png)


1. Command Lambda functions perform write operations, such as create, update, or delete, on the database.

1. Query Lambda functions perform read operations, such as get or select, on the database.

1. This Lambda function processes the DynamoDB streams from the Command database and updates the Query database for the changes. You can also use this function to publish a message to Amazon SNS so that its subscribers can process the data.

1. (Optional) The Lambda event subscriber processes the message published by Amazon SNS and updates the Query database.

1. (Optional) Amazon SNS sends email notification of the write operation.

On AWS, the query database can be synchronized by DynamoDB Streams. DynamoDB captures a time-ordered sequence of item-level modifications in a DynamoDB table in near-real time and durably stores the information for up to 24 hours.

Activating DynamoDB Streams enables the database to publish a sequence of events that makes the event sourcing pattern possible. The event sourcing pattern adds the event subscriber. The event subscriber application consumes the event and processes it depending on the subscriber's responsibility. In the previous diagram, the event subscriber pushes the changes to the Query DynamoDB database to keep the data synchronized. The use of Amazon SNS, the message broker, and the event subscriber application keeps the architecture decoupled.
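As a hedged sketch of what handlers such as `EventSourceCustomer` and `EventSourceReward` do, the following Python applies one DynamoDB stream record to an in-memory stand-in for the query table. A real handler would receive a batch of records and write to DynamoDB with an AWS SDK; the attribute names here are illustrative.

```python
# Sketch of an event-subscriber handler: apply a DynamoDB stream record
# (in the DynamoDB Streams wire format) to a dict standing in for the
# query table. INSERT/MODIFY upsert the item; REMOVE deletes it.

def apply_record(record, query_table):
    event_name = record["eventName"]
    if event_name in ("INSERT", "MODIFY"):
        image = record["dynamodb"]["NewImage"]
        query_table[image["Id"]["N"]] = {"Points": int(image["Points"]["N"])}
    elif event_name == "REMOVE":
        query_table.pop(record["dynamodb"]["OldImage"]["Id"]["N"], None)
    return query_table
```
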

Event sourcing includes the following benefits:
+ Consistency for transactional data
+ A reliable audit trail and history of actions, which can be used to monitor actions taken on the data
+ Synchronization of data across distributed applications such as microservices
+ Reliable publication of events whenever the state changes
+ Reconstruction or replay of past states
+ Loosely coupled entities that exchange events, which eases migration from a monolithic application to microservices
+ Reduction of conflicts caused by concurrent updates, because event sourcing avoids the requirement to update objects directly in the data store
+ Flexibility and extensibility from decoupling the task and the event
+ External system updates
+ Management of multiple tasks in a single event

When using event sourcing, keep in mind the following caveats:
+ Because there is some delay in updating data between the source and subscriber databases, the only way to undo a change is to add a compensating event to the event store.
+ Implementing event sourcing has a learning curve, because it is a different style of programming.

**Test data**

Use the following test data to test the Lambda function after successful deployment.

**CommandCreate Customer**

```
{
  "Id": 1501,
  "Firstname": "John",
  "Lastname": "Done",
  "CompanyName": "AnyCompany",
  "Address": "USA",
  "VIP": true
}
```

**CommandUpdate Customer**

```
{
  "Id": 1501,
  "Firstname": "John",
  "Lastname": "Doe",
  "CompanyName": "Example Corp.",
  "Address": "Seattle, USA",
  "VIP": true
}
```

**CommandDelete Customer**

Enter the customer ID as request data. For example, if the customer ID is 151, enter 151 as request data.

```
151
```

**QueryCustomerList**

Leave the request data blank. When the function is invoked, it returns all customers.

**CommandAddReward**

This adds 40 points to the customer with ID 1 (Richard).

```
{
  "Id":10101,
  "CustomerId":1,
  "Points":40
}
```

**CommandRedeemReward**

This deducts 15 points from the customer with ID 1 (Richard).

```
{
  "Id":10110,
  "CustomerId":1,
  "Points":15
}
```

**QueryReward**

Enter the ID of the customer. For example, enter 1 for Richard, 2 for Arnav, and 3 for Shirley.

```
2 
```

**Source code directory**

Use the following table as a guide to the directory structure of the Visual Studio solution. 

*CQRS On-Premises Code Sample solution directory*

![\[Solution directory with Command and Query services expanded.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/9f1bc700-def4-4201-bb2d-f1fa27404f15/images/4811c2c0-643b-410f-bb87-0b86ec5e194c.png)


**Customer CRUD model**

`CQRS On-Premises Code Sample\CRUD Model\AWS.APG.CQRSES.DAL` project

**CQRS version of the Customer CRUD model**
+ Customer command: `CQRS On-Premises Code Sample\CQRS Model\Command Microservice\AWS.APG.CQRSES.Command` project
+ Customer query: `CQRS On-Premises Code Sample\CQRS Model\Query Microservice\AWS.APG.CQRSES.Query` project

**Command and Query microservices**

The Command microservice is under the solution folder `CQRS On-Premises Code Sample\CQRS Model\Command Microservice`:
+ `AWS.APG.CQRSES.CommandMicroservice` ASP.NET Core API project acts as the entry point where consumers interact with the service.
+ `AWS.APG.CQRSES.Command` .NET Core project is an object that hosts command-related objects and interfaces.

The query microservice is under the solution folder `CQRS On-Premises Code Sample\CQRS Model\Query Microservice`:
+ `AWS.APG.CQRSES.QueryMicroservice` ASP.NET Core API project acts as the entry point where consumers interact with the service.
+ `AWS.APG.CQRSES.Query` .NET Core project is an object that hosts query-related objects and interfaces.

*CQRS AWS Serverless code solution directory*

![\[Solution directory showing both microservices and the event source expanded.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/9f1bc700-def4-4201-bb2d-f1fa27404f15/images/23f8655c-95ad-422c-b20a-e29dc145e995.png)


 

This code is the AWS version of the on-premises code using AWS serverless services.

In C# .NET Core, each Lambda function is represented by one .NET Core project. In this pattern's example code, there is a separate project for each interface in the command and query models.

**CQRS using AWS services**

The root solution directory for CQRS using AWS serverless services is the `CQRS AWS Serverless\CQRS` folder. The example includes two models: Customer and Reward.

The command Lambda functions for Customer and Reward are under the `CQRS\Command Microservice\Customer` and `CQRS\Command Microservice\Reward` folders. They contain the following Lambda projects:
+ Customer command: `CommandCreateLambda`, `CommandDeleteLambda`, and `CommandUpdateLambda`
+ Reward command: `CommandAddRewardLambda` and `CommandRedeemRewardLambda`

The query Lambda functions for Customer and Reward are under the `CQRS\Query Microservice\Customer` and `CQRS\Query Microservice\Reward` folders. They contain the `QueryCustomerListLambda` and `QueryRewardLambda` Lambda projects.

**CQRS test project**

The test project is under the `CQRS\Tests` folder. This project contains a test script to automate testing the CQRS Lambda functions.

**Event sourcing using AWS services**

The following Lambda event handlers are initiated by the Customer and Reward DynamoDB streams to process and synchronize the data in query tables.
+ The `EventSourceCustomer` Lambda function is mapped to the Customer table (`cqrses-customer-cmd`) DynamoDB stream.
+ The `EventSourceReward` Lambda function is mapped to the Reward table (`cqrses-reward-cmd`) DynamoDB stream.

## Attachments
<a name="attachments-9f1bc700-def4-4201-bb2d-f1fa27404f15"></a>

To access additional content that is associated with this document, unzip the following file: [attachment.zip](samples/p-attach/9f1bc700-def4-4201-bb2d-f1fa27404f15/attachments/attachment.zip)

# More patterns
<a name="modernization-more-patterns-pattern-list"></a>

**Topics**
+ [Access container applications privately on Amazon EKS using AWS PrivateLink and a Network Load Balancer](access-container-applications-privately-on-amazon-eks-using-aws-privatelink-and-a-network-load-balancer.md)
+ [Automate adding or updating Windows registry entries using AWS Systems Manager](automate-adding-or-updating-windows-registry-entries-using-aws-systems-manager.md)
+ [Automate cross-Region failover and failback by using DR Orchestrator Framework](automate-cross-region-failover-and-failback-by-using-dr-orchestrator-framework.md)
+ [Automatically build and deploy a Java application to Amazon EKS using a CI/CD pipeline](automatically-build-and-deploy-a-java-application-to-amazon-eks-using-a-ci-cd-pipeline.md)
+ [Automatically build CI/CD pipelines and Amazon ECS clusters for microservices using AWS CDK](automatically-build-ci-cd-pipelines-and-amazon-ecs-clusters-for-microservices-using-aws-cdk.md)
+ [Back up and archive mainframe data to Amazon S3 using BMC AMI Cloud Data](back-up-and-archive-mainframe-data-to-amazon-s3-using-bmc-ami-cloud-data.md)
+ [Build a Micro Focus Enterprise Server PAC with Amazon EC2 Auto Scaling and Systems Manager](build-a-micro-focus-enterprise-server-pac-with-amazon-ec2-auto-scaling-and-systems-manager.md)
+ [Build an enterprise data mesh with Amazon DataZone, AWS CDK, and AWS CloudFormation](build-enterprise-data-mesh-amazon-data-zone.md)
+ [Containerize mainframe workloads that have been modernized by Blu Age](containerize-mainframe-workloads-that-have-been-modernized-by-blu-age.md)
+ [Convert and unpack EBCDIC data to ASCII on AWS by using Python](convert-and-unpack-ebcdic-data-to-ascii-on-aws-by-using-python.md)
+ [Convert mainframe data files with complex record layouts using Micro Focus](convert-mainframe-data-files-with-complex-record-layouts-using-micro-focus.md)
+ [Create a portal for micro-frontends by using AWS Amplify, Angular, and Module Federation](create-amplify-micro-frontend-portal.md)
+ [Deploy containers by using Elastic Beanstalk](deploy-containers-by-using-elastic-beanstalk.md)
+ [Emulate Oracle DR by using a PostgreSQL-compatible Aurora global database](emulate-oracle-dr-by-using-a-postgresql-compatible-aurora-global-database.md)
+ [Generate data insights by using AWS Mainframe Modernization and Amazon Q in QuickSight](generate-data-insights-by-using-aws-mainframe-modernization-and-amazon-q-in-quicksight.md)
+ [Generate Db2 z/OS data insights by using AWS Mainframe Modernization and Amazon Q in QuickSight](generate-db2-zos-data-insights-aws-mainframe-modernization-amazon-q-in-quicksight.md)
+ [Identify duplicate container images automatically when migrating to an Amazon ECR repository](identify-duplicate-container-images-automatically-when-migrating-to-ecr-repository.md)
+ [Implement AI-powered Kubernetes diagnostics and troubleshooting with K8sGPT and Amazon Bedrock integration](implement-ai-powered-kubernetes-diagnostics-and-troubleshooting-with-k8sgpt-and-amazon-bedrock-integration.md)
+ [Implement Microsoft Entra ID-based authentication in an AWS Blu Age modernized mainframe application](implement-entra-id-authentication-in-aws-blu-age-modernized-mainframe-application.md)
+ [Implement path-based API versioning by using custom domains in Amazon API Gateway](implement-path-based-api-versioning-by-using-custom-domains.md)
+ [Incrementally migrate from Amazon RDS for Oracle to Amazon RDS for PostgreSQL using Oracle SQL Developer and AWS SCT](incrementally-migrate-from-amazon-rds-for-oracle-to-amazon-rds-for-postgresql-using-oracle-sql-developer-and-aws-sct.md)
+ [Integrate Stonebranch Universal Controller with AWS Mainframe Modernization](integrate-stonebranch-universal-controller-with-aws-mainframe-modernization.md)
+ [Manage AWS Service Catalog products in multiple AWS accounts and AWS Regions](manage-aws-service-catalog-products-in-multiple-aws-accounts-and-aws-regions.md)
+ [Migrate an AWS member account from AWS Organizations to AWS Control Tower](migrate-an-aws-member-account-from-aws-organizations-to-aws-control-tower.md)
+ [Migrate and replicate VSAM files to Amazon RDS or Amazon MSK using Connect from Precisely](migrate-and-replicate-vsam-files-to-amazon-rds-or-amazon-msk-using-connect-from-precisely.md)
+ [Migrate from SAP ASE to Amazon RDS for SQL Server using AWS DMS](migrate-from-sap-ase-to-amazon-rds-for-sql-server-using-aws-dms.md)
+ [Migrate Oracle external tables to Amazon Aurora PostgreSQL-Compatible](migrate-oracle-external-tables-to-amazon-aurora-postgresql-compatible.md)
+ [Modernize the CardDemo mainframe application by using AWS Transform](modernize-carddemo-mainframe-app.md)
+ [Modernize and deploy mainframe applications using AWS Transform and Terraform](modernize-mainframe-app-transform-terraform.md)
+ [Modernize mainframe batch printing workloads on AWS by using Rocket Enterprise Server and LRS VPSX/MFI](modernize-mainframe-batch-printing-workloads-on-aws-by-using-rocket-enterprise-server-and-lrs-vpsx-mfi.md)
+ [Modernize mainframe online printing workloads on AWS by using Micro Focus Enterprise Server and LRS VPSX/MFI](modernize-mainframe-online-printing-workloads-on-aws-by-using-micro-focus-enterprise-server-and-lrs-vpsx-mfi.md)
+ [Modernize mainframe output management on AWS by using Rocket Enterprise Server and LRS PageCenterX](modernize-mainframe-output-management-on-aws-by-using-rocket-enterprise-server-and-lrs-pagecenterx.md)
+ [Move mainframe files directly to Amazon S3 using Transfer Family](move-mainframe-files-directly-to-amazon-s3-using-transfer-family.md)
+ [Optimize multi-account serverless deployments by using the AWS CDK and GitHub Actions workflows](optimize-multi-account-serverless-deployments.md)
+ [Optimize the performance of your AWS Blu Age modernized application](optimize-performance-aws-blu-age-modernized-application.md)
+ [Automate blue/green deployments of Amazon Aurora global databases by using IaC principles](p-automate-blue-green-deployments-aurora-global-databases-iac.md)
+ [Replicate mainframe databases to AWS by using Precisely Connect](replicate-mainframe-databases-to-aws-by-using-precisely-connect.md)
+ [Run Amazon ECS tasks on Amazon WorkSpaces with Amazon ECS Anywhere](run-amazon-ecs-tasks-on-amazon-workspaces-with-amazon-ecs-anywhere.md)
+ [Send telemetry data from AWS Lambda to OpenSearch for real-time analytics and visualization](send-telemetry-data-from-lambda-to-opensearch-for-analytics-visualization.md)
+ [Set up CloudFormation drift detection in a multi-Region, multi-account organization](set-up-aws-cloudformation-drift-detection-in-a-multi-region-multi-account-organization.md)
+ [Structure a Python project in hexagonal architecture using AWS Lambda](structure-a-python-project-in-hexagonal-architecture-using-aws-lambda.md)
+ [Test AWS infrastructure by using LocalStack and Terraform Tests](test-aws-infra-localstack-terraform.md)
+ [Transform Easytrieve to modern languages by using AWS Transform custom](transform-easytrieve-modern-languages.md)
+ [Upgrade SAP Pacemaker clusters from ENSA1 to ENSA2](upgrade-sap-pacemaker-clusters-from-ensa1-to-ensa2.md)
+ [Use Amazon Q Developer as a coding assistant to increase your productivity](use-q-developer-as-coding-assistant-to-increase-productivity.md)
+ [Validate Account Factory for Terraform (AFT) code locally](validate-account-factory-for-terraform-aft-code-locally.md)