

# Mainframes
<a name="mainframe-pattern-list"></a>

**Topics**
+ [Access AWS services from IBM z/OS by installing the AWS CLI](access-aws-services-from-ibm-z-os-by-installing-aws-cli.md)
+ [Back up and archive mainframe data to Amazon S3 using BMC AMI Cloud Data](back-up-and-archive-mainframe-data-to-amazon-s3-using-bmc-ami-cloud-data.md)
+ [Build COBOL Db2 programs by using AWS Mainframe Modernization and AWS CodeBuild](build-cobol-db2-programs-mainframe-modernization-codebuild.md)
+ [Build a Micro Focus Enterprise Server PAC with Amazon EC2 Auto Scaling and Systems Manager](build-a-micro-focus-enterprise-server-pac-with-amazon-ec2-auto-scaling-and-systems-manager.md)
+ [Build an advanced mainframe file viewer in the AWS Cloud](build-an-advanced-mainframe-file-viewer-in-the-aws-cloud.md)
+ [Containerize mainframe workloads that have been modernized by Blu Age](containerize-mainframe-workloads-that-have-been-modernized-by-blu-age.md)
+ [Convert and unpack EBCDIC data to ASCII on AWS by using Python](convert-and-unpack-ebcdic-data-to-ascii-on-aws-by-using-python.md)
+ [Convert mainframe files from EBCDIC format to character-delimited ASCII format in Amazon S3 using AWS Lambda](convert-mainframe-files-from-ebcdic-format-to-character-delimited-ascii-format-in-amazon-s3-using-aws-lambda.md)
+ [Convert mainframe data files with complex record layouts using Micro Focus](convert-mainframe-data-files-with-complex-record-layouts-using-micro-focus.md)
+ [Deploy an environment for containerized Blu Age applications by using Terraform](deploy-an-environment-for-containerized-blu-age-applications-by-using-terraform.md)
+ [Generate Db2 z/OS data insights by using AWS Mainframe Modernization and Amazon Q in QuickSight](generate-db2-zos-data-insights-aws-mainframe-modernization-amazon-q-in-quicksight.md)
+ [Generate data insights by using AWS Mainframe Modernization and Amazon Q in QuickSight](generate-data-insights-by-using-aws-mainframe-modernization-and-amazon-q-in-quicksight.md)
+ [Implement Microsoft Entra ID-based authentication in an AWS Blu Age modernized mainframe application](implement-entra-id-authentication-in-aws-blu-age-modernized-mainframe-application.md)
+ [Integrate Stonebranch Universal Controller with AWS Mainframe Modernization](integrate-stonebranch-universal-controller-with-aws-mainframe-modernization.md)
+ [Migrate and replicate VSAM files to Amazon RDS or Amazon MSK using Connect from Precisely](migrate-and-replicate-vsam-files-to-amazon-rds-or-amazon-msk-using-connect-from-precisely.md)
+ [Modernize the CardDemo mainframe application by using AWS Transform](modernize-carddemo-mainframe-app.md)
+ [Modernize and deploy mainframe applications using AWS Transform and Terraform](modernize-mainframe-app-transform-terraform.md)
+ [Modernize mainframe output management on AWS by using Rocket Enterprise Server and LRS PageCenterX](modernize-mainframe-output-management-on-aws-by-using-rocket-enterprise-server-and-lrs-pagecenterx.md)
+ [Modernize mainframe batch printing workloads on AWS by using Rocket Enterprise Server and LRS VPSX/MFI](modernize-mainframe-batch-printing-workloads-on-aws-by-using-rocket-enterprise-server-and-lrs-vpsx-mfi.md)
+ [Mainframe modernization: DevOps on AWS with Rocket Software Enterprise Suite](mainframe-modernization-devops-on-aws-with-micro-focus.md)
+ [Modernize mainframe online printing workloads on AWS by using Micro Focus Enterprise Server and LRS VPSX/MFI](modernize-mainframe-online-printing-workloads-on-aws-by-using-micro-focus-enterprise-server-and-lrs-vpsx-mfi.md)
+ [Move mainframe files directly to Amazon S3 using Transfer Family](move-mainframe-files-directly-to-amazon-s3-using-transfer-family.md)
+ [Optimize the performance of your AWS Blu Age modernized application](optimize-performance-aws-blu-age-modernized-application.md)
+ [Secure and streamline user access in a Db2 federation database on AWS by using trusted contexts](secure-and-streamline-user-access-in-a-db2-federation-database-on-aws-by-using-trusted-contexts.md)
+ [Transfer large-scale Db2 z/OS data to Amazon S3 in CSV files](transfer-large-scale-db2-z-os-data-to-amazon-s3-in-csv-files.md)
+ [Transform Easytrieve to modern languages by using AWS Transform custom](transform-easytrieve-modern-languages.md)
+ [More patterns](mainframe-more-patterns-pattern-list.md)

# Access AWS services from IBM z/OS by installing the AWS CLI
<a name="access-aws-services-from-ibm-z-os-by-installing-aws-cli"></a>

*Souma Ghosh, Paulo Vitor Pereira, and Phil de Valence, Amazon Web Services*

## Summary
<a name="access-aws-services-from-ibm-z-os-by-installing-aws-cli-summary"></a>

The [AWS Command Line Interface (AWS CLI)](https://aws.amazon.com/cli/) is an open source tool for managing multiple AWS services by using commands in a command line shell. With minimal configuration, you can run commands from command line sessions such as the command prompt, terminal, and bash shell to implement functionality that's equivalent to that provided by the browser-based AWS Management Console.

All AWS infrastructure as a service (IaaS) administration, management, and access functions in the AWS Management Console are available in the AWS API and AWS CLI. You can install the AWS CLI on an IBM z/OS mainframe to directly access, manage, and interact with AWS services from z/OS. The AWS CLI enables users and applications to perform various tasks, such as:
+ Transferring files or datasets between z/OS and Amazon Simple Storage Service (Amazon S3) object storage and viewing the contents of buckets
+ Starting and stopping different AWS resources; for example, starting a batch job in an AWS Mainframe Modernization environment
+ Calling an AWS Lambda function to implement common business logic
+ Integrating with artificial intelligence and machine learning (AI/ML) and analytics services

This pattern describes how to install, configure, and use the AWS CLI on z/OS. You can install it globally, so that it's available to all z/OS users, or at the individual user level. The pattern also details how to use the AWS CLI in an interactive command line session from z/OS UNIX System Services (USS) or as a batch job.

## Prerequisites and limitations
<a name="access-aws-services-from-ibm-z-os-by-installing-aws-cli-prereqs"></a>

**Prerequisites**
+ **Network communication from z/OS to AWS**

  By default, the AWS CLI sends requests to AWS services by using HTTPS on TCP port 443. To use the AWS CLI successfully, you must be able to make outbound connections on TCP port 443. You can use any of the following z/OS USS commands (some of these might not be installed in your environment) to test network connectivity from z/OS to AWS:

  ```
  ping amazonaws.com
  dig amazonaws.com
  traceroute amazonaws.com
  curl -k https://docs.aws.amazon.com/cli/v1/userguide/cli-chap-welcome.html
  ```
+ **AWS credentials**

  To communicate with AWS services from z/OS, the AWS CLI requires credentials that have privileges to access the target AWS account. For programmatic commands, you can use access keys, which consist of an access key ID and a secret access key. If you don't have access keys, you can create them from the AWS Management Console. As a best practice, do not use the access keys for the AWS account root user for any task unless the root user is required. Instead, [create a new administrator IAM user](https://docs.aws.amazon.com/IAM/latest/UserGuide/getting-set-up.html#create-an-admin) and [apply least-privilege permissions](https://docs.aws.amazon.com/IAM/latest/UserGuide/getting-set-up.html#LeastPrivilege) when you set up the user with access keys. After you create the user, you can [create an access key ID and secret access key](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html) for this user.
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-aws-services-from-ibm-z-os-by-installing-aws-cli.html)
+ **IBM Python for z/OS**

  The AWS CLI requires Python 3.8 or later. IBM has enabled Python to run on z/OS with [IBM Open Enterprise Python for z/OS](https://www.ibm.com/products/open-enterprise-python-zos). IBM Open Enterprise Python is available at no charge through Shopz SMP/E, or you can download the PAX file from the [IBM website](https://www.ibm.com/account/reg/signup?formid=urx-49465). For instructions, see the [installation and configuration documentation](https://www.ibm.com/docs/en/python-zos) for IBM Open Enterprise Python for z/OS.

**Limitations**
+ The installation instructions provided in this pattern are applicable to **AWS CLI version 1 only**. The latest version of the AWS CLI is version 2. However, this pattern uses the older version because the installation methods are different for version 2, and the binary executables available for version 2 aren't compatible with the z/OS system.

**Product versions**
+ AWS CLI version 1
+ Python 3.8 or later

## Architecture
<a name="access-aws-services-from-ibm-z-os-by-installing-aws-cli-architecture"></a>

**Technology stack**
+ Mainframe running z/OS
+ Mainframe z/OS UNIX System Services (USS)
+ Mainframe Open MVS (OMVS) – z/OS UNIX shell environment command interface
+ Mainframe disk, such as a direct-access storage device (DASD)
+ AWS CLI

**Target architecture**

The following diagram shows an AWS CLI deployment on IBM z/OS. You can invoke the AWS CLI from an interactive user session, such as an SSH or Telnet session. You can also invoke it from a batch job by using job control language (JCL), or from any program that can call a z/OS UNIX shell command.

![\[AWS CLI on an IBM z/OS mainframe accessing AWS services.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/4e3188d8-287f-4ced-8c29-80a01cbbdf50/images/c3883500-bd00-4c56-982a-26d5e0b8b093.png)


The AWS CLI communicates with AWS service endpoints over a TCP/IP network. This network connection can happen over the internet or through a private AWS Direct Connect connection from the customer data center to AWS Cloud data centers. The communication is authenticated with AWS credentials and encrypted. 

**Automation and scale**

You can explore the capabilities of an AWS service with the AWS CLI and develop USS shell scripts to manage your AWS resources from z/OS. You can also run AWS CLI commands and shell scripts from the z/OS batch environment, and you can automate batch jobs to run on a specific schedule by integrating with mainframe schedulers. AWS CLI commands or scripts can be coded inside parameters (PARMs) and procedures (PROCs), and can be scaled by following the standard approach of calling the PARM or PROC from different batch jobs with different parameters.
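As a sketch of that approach, the following USS shell script wraps AWS CLI calls in a small `run` helper so the commands can be previewed before they are executed. The bucket name, file path, and `DRY_RUN` convention are examples for illustration, not part of the AWS CLI itself.

```shell
#!/bin/sh
# Sketch: a USS shell script that copies a file to Amazon S3 and lists
# the bucket. BUCKET and FILE are placeholder values; DRY_RUN is a
# hypothetical convention for previewing commands before running them.
BUCKET="DOC-EXAMPLE-BUCKET"
FILE="/tmp/report.csv"
DRY_RUN="${DRY_RUN:-yes}"   # set DRY_RUN=no to call the AWS CLI for real

run() {
  # Echo the command in dry-run mode; otherwise execute it.
  if [ "$DRY_RUN" = "yes" ]; then
    echo "$*"
  else
    "$@"
  fi
}

run aws s3 cp "$FILE" "s3://$BUCKET/$(basename "$FILE")"
run aws s3 ls "s3://$BUCKET/"
```

A scheduler-driven batch job could call this script through BPXBATCH, passing different file and bucket values as parameters.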

## Tools
<a name="access-aws-services-from-ibm-z-os-by-installing-aws-cli-tools"></a>
+ [AWS Command Line Interface (AWS CLI)](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) is an open source tool that helps you interact with AWS services through commands in your command-line shell.

## Best practices
<a name="access-aws-services-from-ibm-z-os-by-installing-aws-cli-best-practices"></a>
+ For security reasons, restrict the access permissions to the USS directory where the AWS access key details are stored. Allow access to only the users or programs that use the AWS CLI.
+ Do not use the AWS account root user access keys for any task. Instead, [create a new administrator IAM user](https://docs.aws.amazon.com/IAM/latest/UserGuide/getting-set-up.html#create-an-admin) for yourself and set it up with access keys.


**Warning:** IAM users have long-term credentials, which presents a security risk. To help mitigate this risk, we recommend that you provide these users with only the permissions they require to perform the task and that you remove these users when they are no longer needed.

## Epics
<a name="access-aws-services-from-ibm-z-os-by-installing-aws-cli-epics"></a>

### Install AWS CLI version 1 on z/OS USS
<a name="install-cli-version-1-on-z-os-uss"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Install Python 3.8 or later. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-aws-services-from-ibm-z-os-by-installing-aws-cli.html) | Mainframe z/OS administrator | 
| Set USS environment variables. | Add environment variables to the profile. You can add these either to the `/u/cliuser/.profile` file for an individual user (`cliuser`) or to the `/etc/profile` file for all users. This pattern assumes that Python has been installed in the `/u/cliuser/python` directory. If your installation directory is different, update the code accordingly.<pre># Python configuration<br />export BPXKAUTOCVT='ON'<br />export CEERUNOPTS='FILETAG(AUTOCVT,AUTOTAG) POSIX(ON)'<br />export TAGREDIR_ERR=txt<br />export TAGREDIR_IN=txt<br />export TAGREDIR_OUT=txt<br /><br /># AWS CLI configuration<br />export PATH=/u/cliuser/python/bin:$PATH<br />export PYTHONPATH=/u/cliuser/python:$PYTHONPATH</pre> | Mainframe z/OS administrator | 
| Test the Python installation. | Run the **python** command:<pre>python --version</pre>The output should confirm that you have Python 3.8 or later installed correctly. | Mainframe z/OS administrator | 
| Verify or install **pip**. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-aws-services-from-ibm-z-os-by-installing-aws-cli.html) | Mainframe z/OS administrator | 
| Install AWS CLI version 1. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-aws-services-from-ibm-z-os-by-installing-aws-cli.html) | Mainframe z/OS administrator | 

### Configure AWS CLI access from z/OS
<a name="configure-cli-access-from-z-os"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Configure the AWS access keys, default Region, and output. | The [AWS CLI documentation](https://docs.aws.amazon.com/cli/v1/userguide/cli-configure-files.html) describes different options for setting up AWS access. You can choose a configuration according to your organization's standards. This example uses the short-term credential configuration.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-aws-services-from-ibm-z-os-by-installing-aws-cli.html) | AWS administrator, Mainframe z/OS administrator, Mainframe z/OS developer | 
| Test the AWS CLI. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-aws-services-from-ibm-z-os-by-installing-aws-cli.html) | Mainframe z/OS administrator, Mainframe z/OS developer | 
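For reference, a short-term credential configuration like the one described above results in two plain-text files in the `.aws` directory of the user's USS home directory. This is a sketch with placeholder values (the access key pair shown is the standard AWS documentation example, and the Region and output format are assumptions); your values will differ:

```ini
# ~/.aws/credentials (placeholder values)
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
aws_session_token = <session token for short-term credentials>

# ~/.aws/config (example settings)
[default]
region = us-east-1
output = json
```

Because these files contain secrets, restrict their USS file permissions to the owning user, as noted in the best practices for this pattern.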

### Option 1 ‒ Transfer data from USS to Amazon S3 interactively from a USS session
<a name="option-1-transfer-data-from-uss-to-s3-interactively-from-a-uss-session"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Download and transfer the sample CSV file. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-aws-services-from-ibm-z-os-by-installing-aws-cli.html) | App developer, Mainframe z/OS developer | 
| Create an S3 bucket and upload the CSV file. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-aws-services-from-ibm-z-os-by-installing-aws-cli.html) | App developer, Mainframe z/OS developer | 
| View the S3 bucket and uploaded file. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-aws-services-from-ibm-z-os-by-installing-aws-cli.html)For more information about uploading objects, see [Getting started with Amazon S3](https://docs.aws.amazon.com/AmazonS3/latest/userguide/GetStartedWithS3.html) in the Amazon S3 documentation. | General AWS | 
| Run a SQL query on an Amazon Athena table. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-aws-services-from-ibm-z-os-by-installing-aws-cli.html)The output of the SQL query will display the contents of your CSV file. | General AWS, App developer | 

### Option 2 ‒ Transfer data from USS to Amazon S3 by using batch JCL
<a name="option-2-transfer-data-from-uss-to-s3-by-using-batch-jcl"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Upload the sample file. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-aws-services-from-ibm-z-os-by-installing-aws-cli.html) | Mainframe z/OS developer | 
| Create batch JCL. | Code the batch JCL as follows to create the destination S3 bucket, upload the dataset, and list the bucket content. Make sure to replace the directory name, file names, and bucket name with your own values.<pre>//AWSCLICP JOB ACTINFO1,'IBMUSER',CLASS=A,MSGCLASS=H,MSGLEVEL=(1,1), <br />// NOTIFY=&SYSUID,TIME=1440 <br />//*---------------------------------------------------------<br />//* Sample job for AWS CLI <br />//*--------------------------------------------------------- <br />//USSCMD EXEC PGM=BPXBATCH<br />//STDERR  DD SYSOUT=*<br />//STDOUT  DD SYSOUT=*<br />//STDENV  DD *<br /> export PATH=/u/cliuser/python/bin:$PATH<br />//STDPARM DD *<br />SH<br /> export _BPXK_AUTOCVT=ON;<br /> aws s3 mb s3://DOC-EXAMPLE-BUCKET2;<br /> cp "//'USER.DATA.FIXED'" /tmp/tmpfile;<br /> aws s3 cp /tmp/tmpfile s3://DOC-EXAMPLE-BUCKET2/USER.DATA.FIXED; <br /> rm /tmp/tmpfile;<br /> aws s3 ls s3://DOC-EXAMPLE-BUCKET2;<br />/*</pre> | Mainframe z/OS developer | 
| Submit the batch JCL job. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-aws-services-from-ibm-z-os-by-installing-aws-cli.html) | Mainframe z/OS developer | 
| View the dataset uploaded to the S3 bucket. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-aws-services-from-ibm-z-os-by-installing-aws-cli.html) | General AWS | 

## Related resources
<a name="access-aws-services-from-ibm-z-os-by-installing-aws-cli-resources"></a>
+ [AWS CLI version 1 documentation](https://docs.aws.amazon.com/cli/v1/userguide/cli-chap-welcome.html)
+ [AWS Mainframe Modernization CLI Command Reference](https://docs.aws.amazon.com/cli/latest/reference/m2/)
+ [AWS Mainframe Modernization](https://aws.amazon.com/mainframe-modernization/)

## Additional information
<a name="access-aws-services-from-ibm-z-os-by-installing-aws-cli-additional"></a>

**USER.DATA.FIXED in ISPF option 3.4 (dataset list utility)**

![\[Viewing the contents of the dataset in z/OS.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/4e3188d8-287f-4ced-8c29-80a01cbbdf50/images/96c25145-3d4d-4007-99f6-5eeb9e88642d.png)


**SYSOUT of the submitted batch job**

![\[Standard output from job log.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/4e3188d8-287f-4ced-8c29-80a01cbbdf50/images/03fffbd2-7d2b-43b2-bf14-736b3d150e38.png)


## Attachments
<a name="attachments-4e3188d8-287f-4ced-8c29-80a01cbbdf50"></a>

To access additional content that is associated with this document, unzip the following file: [attachment.zip](samples/p-attach/4e3188d8-287f-4ced-8c29-80a01cbbdf50/attachments/attachment.zip)

# Back up and archive mainframe data to Amazon S3 using BMC AMI Cloud Data
<a name="back-up-and-archive-mainframe-data-to-amazon-s3-using-bmc-ami-cloud-data"></a>

*Santosh Kumar Singh, Gilberto Biondo, and Maggie Li, Amazon Web Services*

*Mikhael Liberman, Model9 Mainframe Software*

## Summary
<a name="back-up-and-archive-mainframe-data-to-amazon-s3-using-bmc-ami-cloud-data-summary"></a>

This pattern demonstrates how to back up and archive mainframe data directly to Amazon Simple Storage Service (Amazon S3), and then recall and restore that data to the mainframe by using BMC AMI Cloud Data (previously known as Model9 Manager). If you are looking for a way to modernize your backup and archive solution as part of a mainframe modernization project or to meet compliance requirements, this pattern can help meet those goals.

Typically, organizations that run core business applications on mainframes use a virtual tape library (VTL) to back up data stores such as files and logs. This method can be expensive because it consumes billable MIPS, and the data stored on tapes outside the mainframe is inaccessible. To avoid these issues, you can use BMC AMI Cloud Data to quickly and cost-effectively transfer operational and historical mainframe data directly to Amazon S3. You can use BMC AMI Cloud Data to back up and archive data over TCP/IP to AWS while taking advantage of IBM z Integrated Information Processor (zIIP) engines to reduce costs and, through parallel transfers, transfer times.

## Prerequisites and limitations
<a name="back-up-and-archive-mainframe-data-to-amazon-s3-using-bmc-ami-cloud-data-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ BMC AMI Cloud Data with a valid license key
+ TCP/IP connectivity between the mainframe and AWS
+ An AWS Identity and Access Management (IAM) role for read/write access to an S3 bucket
+ Mainframe security product (RACF) access in place to run BMC AMI Cloud processes
+ A BMC AMI Cloud z/OS agent (Java version 8 64-bit SR5 FP16 or later) that has available network ports, firewall rules permitting access to S3 buckets, and a dedicated zFS file system
+ [Requirements](https://docs.bmc.com/docs/cdacv27/management-server-requirements-1245343255.html) met for the BMC AMI Cloud management server

**Limitations**
+ BMC AMI Cloud Data stores its operational data in a PostgreSQL database that runs as a Docker container on the same Amazon Elastic Compute Cloud (Amazon EC2) instance as the management server. Amazon Relational Database Service (Amazon RDS) is not currently supported as a backend for BMC AMI Cloud Data. For more information about the latest product updates, see [What's New?](https://docs.bmc.com/docs/cdacv27/what-s-new-1245343246.html) in the BMC documentation.
+ This pattern backs up and archives z/OS mainframe data only. BMC AMI Cloud Data backs up and archives only mainframe files.
+ This pattern doesn’t convert data into standard open formats such as JSON or CSV. Use an additional transformation service such as [BMC AMI Cloud Analytics](https://www.bmc.com/it-solutions/bmc-ami-cloud-analytics.html) (previously known as Model9 Gravity) to convert the data into standard open formats. Cloud-native applications and data analytics tools can access the data after it's written to the cloud.

**Product versions**
+ BMC AMI Cloud Data version 2.x

## Architecture
<a name="back-up-and-archive-mainframe-data-to-amazon-s3-using-bmc-ami-cloud-data-architecture"></a>

**Source technology stack**
+ Mainframe running z/OS
+ Mainframe files such as datasets and z/OS UNIX System Services (USS) files
+ Mainframe disk, such as a direct-access storage device (DASD)
+ Mainframe tape (virtual or physical tape library)

**Target technology stack**
+ Amazon S3
+ Amazon EC2 instance in a virtual private cloud (VPC)
+ AWS Direct Connect
+ Amazon Elastic File System (Amazon EFS)

**Target architecture**

The following diagram shows a reference architecture where BMC AMI Cloud Data software agents on a mainframe drive the legacy data backup and archive processes that store the data in Amazon S3.

![\[BMC AMI Cloud Data software agents on a mainframe driving legacy data backup and archive processes\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/bde3b029-184e-4eb0-933b-f8caf6cc40ab/images/a24cd6c1-b131-49ea-8238-f3aea5ab8134.png)


The diagram shows the following workflow:

1. BMC AMI Cloud Data software agents run on mainframe logical partitions (LPARs). The software agents read and write mainframe data from DASD or tape directly to Amazon S3 over TCP/IP.

1. AWS Direct Connect sets up a physical, isolated connection between the on-premises network and AWS. For enhanced security, run a site-to-site VPN on top of Direct Connect to encrypt data in transit.

1. The S3 bucket stores mainframe files as object storage data, and BMC AMI Cloud Data agents directly communicate with the S3 buckets. Certificates are used for HTTPS encryption of all communications between the agent and Amazon S3. Amazon S3 data encryption is used to encrypt and protect the data at rest.

1. BMC AMI Cloud Data management servers run as Docker containers on EC2 instances. The instances communicate with agents running on mainframe LPARs and S3 buckets.

1. Amazon EFS is mounted on both the active and passive EC2 instances to share Network File System (NFS) storage. This ensures that metadata for the policies created on the management server isn't lost during a failover. If the active server fails, the passive server can take over without any data loss, and vice versa.

## Tools
<a name="back-up-and-archive-mainframe-data-to-amazon-s3-using-bmc-ami-cloud-data-tools"></a>

**AWS services**
+ [Amazon Elastic Compute Cloud (Amazon EC2)](https://docs.aws.amazon.com/ec2/) provides scalable computing capacity in the AWS Cloud. You can launch as many virtual servers as you need and quickly scale them up or down.
+ [Amazon Elastic File System (Amazon EFS)](https://docs.aws.amazon.com/efs/latest/ug/whatisefs.html) helps you create and configure shared file systems in the AWS Cloud.
+ [Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) is a cloud-based object storage service that helps you store, protect, and retrieve nearly any amount of data.
+ [Amazon Virtual Private Cloud (Amazon VPC)](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html) helps you launch AWS resources into a virtual network that you’ve defined. This virtual network resembles a traditional network that you’d operate in your own data center, with the benefits of using the scalable infrastructure of AWS.
+ [AWS Direct Connect](https://docs.aws.amazon.com/directconnect/latest/UserGuide/Welcome.html) links your internal network to an AWS Direct Connect location over a standard Ethernet fiber-optic cable. With this connection, you can create virtual interfaces directly to public AWS services while bypassing internet service providers in your network path.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.

**BMC tools**
+ [BMC AMI Cloud management server](https://docs.bmc.com/docs/cdacv27/bmc-ami-cloud-overview-1245343249.html) is a GUI application that runs as a Docker container on an Amazon Linux Amazon Machine Image (AMI) for Amazon EC2. The management server provides the functionality to manage BMC AMI Cloud activities such as reporting, creating and managing policies, running archives, and performing backups, recalls, and restores.
+ [BMC AMI Cloud agent](https://docs.bmc.com/docs/cdacv27/bmc-ami-cloud-overview-1245343249.html) runs on an on-premises mainframe LPAR that reads and writes files directly to object storage by using TCP/IP. A started task runs on a mainframe LPAR and is responsible for reading and writing backup and archive data to and from Amazon S3.
+ [BMC AMI Cloud Mainframe Command Line Interface (M9CLI)](https://docs.bmc.com/docs/cdacv27/command-line-interface-cli-reference-1245343519.html) provides you with a set of commands to perform BMC AMI Cloud actions directly from TSO/E or in batch operations, without the dependency on the management server.

## Epics
<a name="back-up-and-archive-mainframe-data-to-amazon-s3-using-bmc-ami-cloud-data-epics"></a>

### Create an S3 bucket and IAM policy
<a name="create-an-s3-bucket-and-iam-policy"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an S3 bucket. | [Create an S3 bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/create-bucket-overview.html) to store the files and volumes that you want to back up and archive from your mainframe environment. | General AWS | 
| Create an IAM policy. | All BMC AMI Cloud management servers and agents require access to the S3 bucket that you created in the previous step. To grant the required access, create the following IAM policy:<pre>{<br />    "Version": "2012-10-17",<br />    "Statement": [<br />        {<br />            "Sid": "Listfolder",<br />            "Action": [<br />                "s3:ListBucket",<br />                "s3:GetBucketLocation",<br />                "s3:ListBucketVersions"<br />            ],<br />            "Effect": "Allow",<br />            "Resource": [<br />                "arn:aws:s3:::<Bucket Name>"<br />            ]<br />        },<br />        {<br />            "Sid": "Objectaccess",<br />            "Effect": "Allow",<br />            "Action": [<br />                "s3:PutObject",<br />                "s3:GetObjectAcl",<br />                "s3:GetObject",<br />                "s3:DeleteObjectVersion",<br />                "s3:DeleteObject",<br />                "s3:PutObjectAcl",<br />                "s3:GetObjectVersion"<br />            ],<br />            "Resource": [<br />                "arn:aws:s3:::<Bucket Name>/*"<br />            ]<br />        }<br />    ]<br />}</pre> | General AWS | 
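As a sketch, the policy can also be created and attached to a role with the AWS CLI. The policy name, role name, account ID, and policy file name below are placeholders, not values from this pattern; the commands are prefixed with `echo` so they print for review, and you can remove `echo` to run them with a configured AWS CLI.

```shell
#!/bin/sh
# Hypothetical names; replace with your own values.
POLICY_NAME="bmc-ami-cloud-s3-access"
ROLE_NAME="bmc-ami-cloud-role"
ACCOUNT_ID="111122223333"

# 'echo' prints the commands instead of executing them.
echo aws iam create-policy \
  --policy-name "$POLICY_NAME" \
  --policy-document file://bmc-s3-policy.json
echo aws iam attach-role-policy \
  --role-name "$ROLE_NAME" \
  --policy-arn "arn:aws:iam::$ACCOUNT_ID:policy/$POLICY_NAME"
```

The role would then be associated with the EC2 instances that run the BMC AMI Cloud management server, so that no long-term access keys need to be stored on the instances.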

### Get the BMC AMI Cloud software license and download the software
<a name="get-the-bmc-ami-cloud-software-license-and-download-the-software"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Get a BMC AMI Cloud software license. | To get a software license key, contact the [BMC AMI Cloud team](https://www.bmc.com/it-solutions/bmc-ami-cloud.html?vd=model9-io). The output of the z/OS `D M=CPU` command is required for generating a license. | Build lead | 
| Download the BMC AMI Cloud software and license key. | Obtain the installation files and license key by following the instructions in the [BMC documentation](https://docs.bmc.com/docs/cdacv27/preparing-to-install-the-bmc-ami-cloud-agent-1245343285.html). | Mainframe infrastructure administrator | 

### Install the BMC AMI Cloud software agent on the mainframe
<a name="install-the-bmc-ami-cloud-software-agent-on-the-mainframe"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Install the BMC AMI Cloud software agent. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/back-up-and-archive-mainframe-data-to-amazon-s3-using-bmc-ami-cloud-data.html) | Mainframe infrastructure administrator | 

### Set up a BMC AMI Cloud management server on an EC2 instance
<a name="set-up-a-bmc-ami-cloud-management-server-on-an-ec2-instance"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create Amazon EC2 Linux 2 instances. | Launch two Amazon EC2 Linux 2 instances in different Availability Zones by following the instructions from [Step 1: Launch an instance](https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/EC2_GetStarted.html#ec2-launch-instance) in the Amazon EC2 documentation.The instance must meet the following recommended hardware and software requirements:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/back-up-and-archive-mainframe-data-to-amazon-s3-using-bmc-ami-cloud-data.html)For more information, see the [BMC documentation](https://docs.bmc.com/docs/cdacv27/preparing-to-install-the-management-server-on-linux-1245343268.html). | Cloud architect, Cloud administrator | 
| Create an Amazon EFS file system. | Create an Amazon EFS file system by following the instructions from [Step 1: Create your Amazon EFS file system](https://docs.aws.amazon.com/efs/latest/ug/gs-step-two-create-efs-resources.html) in the Amazon EFS documentation. When creating the file system, do the following: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/back-up-and-archive-mainframe-data-to-amazon-s3-using-bmc-ami-cloud-data.html) | Cloud administrator, Cloud architect | 
| Install Docker and configure the management server. | **Connect to your EC2 instances:** Connect to your EC2 instances by following the instructions from [Connect to your Linux instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstances.html) in the Amazon EC2 documentation. **Configure your EC2 instances:** For each EC2 instance, do the following: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/back-up-and-archive-mainframe-data-to-amazon-s3-using-bmc-ami-cloud-data.html) | Cloud architect, Cloud administrator | 
| Install the management server software. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/back-up-and-archive-mainframe-data-to-amazon-s3-using-bmc-ami-cloud-data.html) To troubleshoot issues, review the logs stored in the `/data/model9/logs/` folder. For more information, see the [BMC documentation](https://docs.bmc.com/docs/cdacv27/performing-the-management-server-installation-on-linux-1245343272.html). | Cloud architect, Cloud administrator | 
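
The server setup in this epic pairs Docker on each EC2 instance with a shared Amazon EFS file system. The following sketch shows the shape of that bootstrap; the file-system ID and Region are hypothetical, the `/data/model9` mount point follows the logs path mentioned in this table, and the Docker commands are left as comments because they require a real instance.

```shell
#!/usr/bin/env bash
# Sketch: prepare an Amazon Linux 2 instance for the BMC AMI Cloud
# management server. The EFS file-system ID, Region, and mount point
# below are hypothetical examples.
set -euo pipefail

# Build the NFS mount command for an Amazon EFS file system.
efs_mount_cmd() {
  local fs_id="$1" region="$2" mount_point="$3"
  printf 'sudo mount -t nfs4 -o nfsvers=4.1 %s.efs.%s.amazonaws.com:/ %s\n' \
    "$fs_id" "$region" "$mount_point"
}

# On the real instance you would also run (not executed here):
#   sudo yum install -y docker
#   sudo systemctl enable --now docker
mount_cmd=$(efs_mount_cmd fs-0123456789abcdef0 us-east-1 /data/model9)
echo "$mount_cmd"
```

Running the same mount command on both instances gives them a shared view of the management server data.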

### Add an agent and define a backup or archive policy on the BMC AMI Cloud management server
<a name="add-an-agent-and-define-a-backup-or-archive-policy-on-the-bmc-ami-cloud-management-server"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Add a new agent. | Before you add a new agent, confirm the following: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/back-up-and-archive-mainframe-data-to-amazon-s3-using-bmc-ami-cloud-data.html) You must create an agent on the management server before you define any backup and archive policies. To create the agent, do the following: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/back-up-and-archive-mainframe-data-to-amazon-s3-using-bmc-ami-cloud-data.html) After the agent is created, the table displays a **Connected** status for both the object storage and the mainframe agent. | Mainframe storage administrator or developer | 
| Create a backup or archive policy. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/back-up-and-archive-mainframe-data-to-amazon-s3-using-bmc-ami-cloud-data.html) | Mainframe storage administrator or developer | 

### Run the backup or archive policy from the management server
<a name="run-the-backup-or-archive-policy-from-the-management-server"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Run the backup or archive policy. | Run the data backup or archive policy that you created earlier from the management server, either manually or automatically (based on a schedule). To run the policy manually: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/back-up-and-archive-mainframe-data-to-amazon-s3-using-bmc-ami-cloud-data.html) | Mainframe storage administrator or developer | 
| Restore the backup or archive policy. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/back-up-and-archive-mainframe-data-to-amazon-s3-using-bmc-ami-cloud-data.html) | Mainframe storage administrator or developer | 

### Run the backup or archive policy from the mainframe
<a name="run-the-backup-or-archive-policy-from-the-mainframe"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Run the backup or archive policy by using M9CLI. | Use the M9CLI to perform backup and restore processes from TSO/E, REXX, or JCL without setting up rules on the BMC AMI Cloud management server. **Using TSO/E:** If you use TSO/E, make sure that `M9CLI REXX` is concatenated to `TSO`. To back up a dataset through TSO/E, use the `TSO M9CLI BACKDSN <DSNAME>` command. For more information about M9CLI commands, see [CLI reference](https://docs.bmc.com/docs/cdacv27/command-line-interface-cli-reference-1245343519.html) in the BMC documentation. **Using JCLs:** To run the backup and archive policy by using JCL, run the `M9CLI` command. **Using batch operations:** The following example shows you how to archive a dataset by running the `M9CLI` command in batch:<pre>//JOBNAME JOB …<br />//M9CLI EXEC PGM=IKJEFT01<br />//STEPLIB DD DISP=SHR,DSN=<MODEL9 LOADLIB><br />//SYSEXEC DD DISP=SHR,DSN=<MODEL9 EXEC LIB><br />//SYSTSPRT DD SYSOUT=*<br />//SYSPRINT DD SYSOUT=*<br />//SYSTSIN DD *<br /> M9CLI ARCHIVE <DSNNAME OR DSN PATTERN><br />/*</pre> | Mainframe storage administrator or developer | 
| Run the backup or archive policy in JCL batch. | BMC AMI Cloud provides a sample JCL routine called **M9SAPIJ**. You can customize **M9SAPIJ** to run a specific policy created on the management server with a JCL. This job can also be part of a batch scheduler for running backup and restore processes automatically. The batch job expects the following mandatory values: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/back-up-and-archive-mainframe-data-to-amazon-s3-using-bmc-ami-cloud-data.html) You can also change other values by following the instructions in the sample job. | Mainframe storage administrator or developer | 

## Related resources
<a name="back-up-and-archive-mainframe-data-to-amazon-s3-using-bmc-ami-cloud-data-resources"></a>
+ [Mainframe Modernization with AWS](https://aws.amazon.com/mainframe/) (AWS documentation)
+ [How Cloud Backup for Mainframes Cuts Costs with Model9 and AWS](https://aws.amazon.com/blogs/apn/how-cloud-backup-for-mainframes-cuts-costs-with-model9-and-aws/) (AWS Partner Network Blog)
+ [How to Enable Mainframe Data Analytics on AWS Using Model9](https://aws.amazon.com/blogs/apn/how-to-enable-mainframe-data-analytics-on-aws-using-model9/) (AWS Partner Network Blog)
+ [AWS Direct Connect Resiliency Recommendations](https://aws.amazon.com/directconnect/resiliency-recommendation/?nc=sn&loc=4&dn=2) (AWS documentation)
+ [BMC AMI Cloud documentation](https://docs.bmc.com/docs/cdacv27/getting-started-1245343248.html) (BMC website)

# Build COBOL Db2 programs by using AWS Mainframe Modernization and AWS CodeBuild
<a name="build-cobol-db2-programs-mainframe-modernization-codebuild"></a>

*Luis Gustavo Dantas and Eduardo Zimelewicz, Amazon Web Services*

## Summary
<a name="build-cobol-db2-programs-mainframe-modernization-codebuild-summary"></a>

**Note**  
AWS Mainframe Modernization Service (Managed Runtime Environment experience) is no longer open to new customers. For capabilities similar to AWS Mainframe Modernization Service (Managed Runtime Environment experience), explore AWS Mainframe Modernization Service (Self-Managed Experience). Existing customers can continue to use the service as normal. For more information, see [AWS Mainframe Modernization availability change](https://docs.aws.amazon.com/m2/latest/userguide/mainframe-modernization-availability-change.html).

This pattern explains how to create a simple AWS CodeBuild project to precompile and bind COBOL Db2 programs by using the AWS Mainframe Modernization Replatform tools. This enables the deployment and execution of these programs in the AWS Mainframe Modernization Replatform runtime environment.

COBOL, a business-oriented programming language, powers many critical applications due to its reliability and readability. IBM Db2, a relational database management system, manages large volumes of data efficiently and integrates with COBOL programs through SQL. Together, COBOL and Db2 form the backbone of mission-critical operations in industries such as finance and government, despite the emergence of newer technologies.

Migrating COBOL and Db2 components from the mainframe environment to other platforms leads to challenges such as platform compatibility, integration complexity, data migration, and performance optimization. Moving these critical components requires careful planning, technical expertise, and resources to ensure a smooth migration while maintaining reliability and functionality.

The AWS Mainframe Modernization service provides tools and resources to replatform mainframe applications and databases to run on AWS infrastructure, such as Amazon Elastic Compute Cloud (Amazon EC2) instances. This involves moving mainframe workloads to the cloud without major code changes.

The Db2 precompile and bind process is essential for optimizing the performance and reliability of database applications. Precompilation transforms embedded SQL statements into executable code, which reduces runtime overhead and enhances efficiency. The bind process links the precompiled code with database structures, facilitating access paths and query optimization. This process ensures data integrity, improves application responsiveness, and guards against security vulnerabilities. Properly precompiled and bound applications minimize resource consumption, enhance scalability, and mitigate the risks of SQL injection attacks.

## Prerequisites and limitations
<a name="build-cobol-db2-programs-mainframe-modernization-codebuild-prereqs"></a>

**Prerequisites**
+ An AWS account and administrative-level console access.
+ An IBM Db2 database system, such as IBM Db2 for z/OS or Db2 for Linux, UNIX, and Windows (LUW).
+ The IBM Data Server Client software, which is available for download from the [IBM website](https://www.ibm.com/support/pages/download-initial-version-115-clients-and-drivers). For more information, see [IBM Data Server Client and Data Server Driver types](https://www.ibm.com/docs/en/db2/11.5?topic=overviews-data-server-clients).
+ A COBOL Db2 program to be compiled and bound. Alternatively, this pattern provides a basic sample program that you can use.
+ A virtual private cloud (VPC) on AWS with a private network. For information about creating a VPC, see the [Amazon Virtual Private Cloud (Amazon VPC) documentation](https://docs.aws.amazon.com/vpc/latest/userguide/create-vpc.html).
+ A source control repository such as GitHub or GitLab.

**Limitations**
+ For AWS CodeBuild quotas, see [Quotas for AWS CodeBuild](https://docs.aws.amazon.com/codebuild/latest/userguide/limits.html).
+ Some AWS services aren’t available in all AWS Regions. For Region availability, see [AWS services by Region](https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/). For specific endpoints, see the [Service endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/aws-service-information.html) page, and choose the link for the service.

## Architecture
<a name="build-cobol-db2-programs-mainframe-modernization-codebuild-architecture"></a>

**Source technology stack**

The source stack includes:
+ COBOL programs that use a Db2 database to store data
+ IBM COBOL compiler and Db2 for z/OS precompiler
+ Other parts of the mainframe setup, such as the file system, transaction manager, and spool

**Target technology stack**

This pattern's approach works for two options: moving data from Db2 for z/OS to Db2 for LUW, or staying on Db2 for z/OS. The target architecture includes:
+ COBOL programs that use a Db2 database to store data
+ AWS Mainframe Modernization Replatform compilation tools
+ AWS CodeBuild as the infrastructure to build the application
+ Other AWS Cloud resources such as Amazon Linux

**Target architecture**

![\[Architecture for building COBOL Db2 programs on AWS.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/5895fa34-f05b-4cc3-a59f-a596f9116c66/images/0dda414a-21a7-41d1-b86b-7ff3b1c6fbda.png)


The diagram illustrates the following:

1. The user uploads their code to a source control repository such as GitHub or GitLab.

1. AWS CodePipeline detects the change and retrieves the code from the repository.

1. CodePipeline starts AWS CodeBuild and sends the code.

1. CodeBuild follows the instructions in the `buildspec.yml` template (provided in the [Additional information](#build-cobol-db2-programs-mainframe-modernization-codebuild-additional) section) to:

   1. Get the IBM Data Server Client from an Amazon Simple Storage Service (Amazon S3) bucket.

   1. Install and set up the IBM Data Server Client.

   1. Retrieve Db2 credentials from AWS Secrets Manager.

   1. Connect to the Db2 server.

   1. Precompile, compile, and bind the COBOL program.

   1. Save the finished products in an S3 bucket for AWS CodeDeploy to use.

1. CodePipeline starts CodeDeploy.

1. CodeDeploy coordinates its agents, which are already installed in the runtime environments. The agents fetch the application from Amazon S3 and install it based on the instructions in `appspec.yml`.

To keep things simple and focused on the build, the instructions in this pattern cover steps 1 through 4 but don't include the deployment of the COBOL Db2 program.

**Automation and scale**

For simplicity, this pattern describes how to provision resources manually. However, you can automate these tasks by using tools such as AWS CloudFormation, the AWS Cloud Development Kit (AWS CDK), or HashiCorp Terraform. For more information, see the [CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) and [AWS CDK](https://docs.aws.amazon.com/cdk/v2/guide/home.html) documentation.

## Tools
<a name="build-cobol-db2-programs-mainframe-modernization-codebuild-tools"></a>

**AWS services**
+ [AWS CodeBuild](https://docs.aws.amazon.com/codebuild/latest/userguide/welcome.html) is a fully managed build service that helps you compile source code, run unit tests, and produce artifacts that are ready to deploy.
+ [AWS CodeDeploy](https://docs.aws.amazon.com/codedeploy/latest/userguide/welcome.html) automates deployments to Amazon EC2 or on-premises instances, AWS Lambda functions, or Amazon Elastic Container Service (Amazon ECS) services.
+ [AWS CodePipeline](https://docs.aws.amazon.com/codepipeline/latest/userguide/welcome.html) helps you quickly model and configure the different stages of a software release and automate the steps required to release software changes continuously.
+ [AWS Mainframe Modernization](https://docs.aws.amazon.com/m2/latest/userguide/what-is-m2.html) provides tools and resources to help you plan and implement migration and modernization from mainframes to AWS managed runtime environments.

**Other tools**
+ **Amazon ECR image for AWS Mainframe Modernization Replatform tools**. To compile a COBOL application, you'll need to initiate CodeBuild by using an Amazon Elastic Container Registry (Amazon ECR) image that contains the AWS Mainframe Modernization Replatform tools:

  `673918848628.dkr.ecr.<your-region>.amazonaws.com/m2-enterprise-build-tools:9.0.7.R1`

  For more information about the available ECR image, see the [tutorial](https://docs.aws.amazon.com/m2/latest/userguide/tutorial-build-mf.html) in the *AWS Mainframe Modernization User Guide*.
+ [IBM Data Server Client](https://www.ibm.com/docs/en/db2/11.5?topic=overviews-data-server-clients) software is essential for precompiling and binding COBOL Db2 programs in CodeBuild. It acts as a bridge between the COBOL compiler and Db2.
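
The Region placeholder in the image URI is the only variable part; the account number and tag come from this section. The following sketch derives the full URI for a given Region, with the Docker authentication step left as a comment because it requires AWS credentials.

```shell
#!/usr/bin/env bash
# Sketch: construct the Region-specific URI of the AWS Mainframe
# Modernization build-tools image named in the Tools section.
set -euo pipefail

build_tools_image() {
  local region="$1"
  echo "673918848628.dkr.ecr.${region}.amazonaws.com/m2-enterprise-build-tools:9.0.7.R1"
}

# Authenticating Docker to the registry would look like (not executed here):
#   aws ecr get-login-password --region us-east-1 | \
#     docker login --username AWS --password-stdin 673918848628.dkr.ecr.us-east-1.amazonaws.com
image_uri=$(build_tools_image us-east-1)
echo "$image_uri"
```

You point the CodeBuild project's environment at this URI when you create the project later in this pattern.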

## Best practices
<a name="build-cobol-db2-programs-mainframe-modernization-codebuild-best-practices"></a>
+ Not every COBOL program relies on Db2 as its data persistence layer. Make sure that compilation directives for accessing Db2 are applied only to COBOL programs that are specifically designed to interact with Db2. Implement logic to distinguish between COBOL Db2 programs and COBOL programs that do not use Db2.
+ We recommend that you avoid compiling programs that haven't been modified. Implement a process to identify which programs require compilation.

## Epics
<a name="build-cobol-db2-programs-mainframe-modernization-codebuild-epics"></a>

### Create the cloud infrastructure
<a name="create-the-cloud-infrastructure"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an S3 bucket to host the IBM Data Server Client and pipeline artifacts. | You need to set up an S3 bucket to (a) upload the IBM Data Server Client, (b) store your code from the repository, and (c) store the results of the build process. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/build-cobol-db2-programs-mainframe-modernization-codebuild.html) For ways to create an S3 bucket, see the [Amazon S3 documentation](https://docs.aws.amazon.com/AmazonS3/latest/userguide/create-bucket-overview.html). | General AWS | 
| Upload the IBM Data Server Client to the S3 bucket. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/build-cobol-db2-programs-mainframe-modernization-codebuild.html) | General AWS | 
| Create an AWS Secrets Manager secret for your Db2 credentials. | To create a secret to securely store your Db2 credentials: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/build-cobol-db2-programs-mainframe-modernization-codebuild.html) For more information about creating secrets, see the [Secrets Manager documentation](https://docs.aws.amazon.com/secretsmanager/latest/userguide/create_secret.html). | General AWS | 
| Verify that Db2 is accessible from the VPC subnet. | AWS CodeBuild needs a connection to the Db2 server so that the Data Server Client can perform precompilation and bind operations. Make sure that CodeBuild can reach the Db2 server through a secure connection.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/build-cobol-db2-programs-mainframe-modernization-codebuild.html) | Network administrator, General AWS | 
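
To make the Secrets Manager task concrete, the following sketch shows the shape of a secret value whose keys match what the `buildspec.yml` in the Additional information section later reads (`username`, `password`, `db2node`, `db2host`, `db2port`, `db2name`, `qualifier`). Every value here is a placeholder, and the `aws secretsmanager` call is shown only as a comment.

```shell
#!/usr/bin/env bash
# Sketch: shape of the Db2 credentials secret. The key names must match
# the jq filter in buildspec.yml; all values are placeholders.
set -euo pipefail

secret_json='{"username":"db2cli","password":"example-password","db2node":"DB2NODE","db2host":"db2.example.com","db2port":"50000","db2name":"SAMPLE","qualifier":"DB2AWSDB"}'

# To store it under the name the buildspec expects (not executed here):
#   aws secretsmanager create-secret --name dev-db2-cred --secret-string "$secret_json"
echo "$secret_json"
```

If you rename a key, update the jq filter in the buildspec to match.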

### Create the application artifacts
<a name="create-the-application-artifacts"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the COBOL Db2 asset. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/build-cobol-db2-programs-mainframe-modernization-codebuild.html) | App developer | 
| Create the `buildspec.yml` file. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/build-cobol-db2-programs-mainframe-modernization-codebuild.html) | AWS DevOps | 
| Connect your repository to CodePipeline. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/build-cobol-db2-programs-mainframe-modernization-codebuild.html)You will need the Amazon Resource Name (ARN) for the connection when you create the AWS Identity and Access Management (IAM) policy for CodePipeline in a later step. | AWS DevOps | 

### Configure permissions
<a name="configure-permissions"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an IAM policy for CodeBuild. | The CodeBuild project requires access to some resources, including Secrets Manager and Amazon S3.To set up the necessary permissions:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/build-cobol-db2-programs-mainframe-modernization-codebuild.html)For more information about creating IAM policies, see the [IAM documentation](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create.html). | General AWS | 
| Create an IAM role for CodeBuild. | To make the security policies available to CodeBuild, you need to configure an IAM role. To create this role: 1. On the [IAM console](https://console.aws.amazon.com/iam), in the navigation pane, choose **Roles**, and then choose **Create role**. 2. For **Trusted entity type**, keep the default **AWS service** setting. 3. For **Use case**, select the CodeBuild service, and then choose **Next**. 4. In the list of available IAM policies, locate the policy that you created for CodeBuild, and then choose **Next** to attach it to the role. 5. Specify a name for the role, and then choose **Create role** to save it for future reference in CodeBuild. For more information about creating an IAM role for an AWS service, see the [IAM documentation](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-service.html). | General AWS | 
| Create an IAM policy for CodePipeline. | The AWS CodePipeline pipeline requires access to some resources, including your code repository and Amazon S3. Repeat the steps provided previously for CodeBuild to create an IAM policy for CodePipeline (in step 2, choose **CodePipeline** instead of **CodeBuild**). | AWS DevOps | 
| Create an IAM role for CodePipeline. | To make the security policies available for CodePipeline, you need to configure an IAM role.To create this role:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/build-cobol-db2-programs-mainframe-modernization-codebuild.html) | AWS DevOps | 

### Compile and bind the COBOL Db2 program
<a name="compile-and-bind-the-cobol-db2-program"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a CodePipeline pipeline and CodeBuild project. | To create a CodePipeline pipeline and the CodeBuild project that compiles and binds the COBOL Db2 program:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/build-cobol-db2-programs-mainframe-modernization-codebuild.html) | AWS DevOps | 
| Review the output. | Verify the success of the build by reviewing the CodePipeline build logs. | AWS DevOps | 
| Check results in Db2. | Verify the package version in the `SYSIBM.SYSPLAN` catalog table:<pre>SELECT CAST(NAME AS VARCHAR(10)) AS NAME, VALIDATE, LAST_BIND_TIME, LASTUSED,<br />       CAST(PKGVERSION AS VARCHAR(10)) AS PKGVERSION<br />  FROM SYSIBM.SYSPLAN<br /> WHERE NAME = 'CDB2SMP'<br /> ORDER BY LAST_BIND_TIME DESC<br /></pre>The `PKGVERSION` value must match the CodeBuild build number. In this example, the program name is `CDB2SMP` and the bound version is `19`:<pre>NAME       VALIDATE LAST_BIND_TIME             LASTUSED   PKGVERSION<br />---------- -------- -------------------------- ---------- ----------<br />CDB2SMP    B        2024-05-18-11.53.11.503738 01/01/0001 19</pre> |  | 

## Troubleshooting
<a name="build-cobol-db2-programs-mainframe-modernization-codebuild-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| Occasionally, the AWS console switches Regions when you move between services. | Make sure to verify the selected AWS Region whenever you switch between services. The AWS Region selector is in the upper-right corner of the console window. | 
| It can be difficult to identify Db2 connectivity issues from CodeBuild. | To troubleshoot connectivity problems, add the following `db2 connect` command to the `buildspec.yml` file. This addition helps you debug and resolve connectivity issues.<pre>db2 connect to $DB_NAME user $DB2USER using $DB2PASS</pre> | 
| Occasionally, the role pane in the IAM console doesn't immediately show the IAM policy you've created. | If you encounter a delay, refresh the screen to display the latest information. | 
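
If the `db2 connect` test fails, a plain TCP probe helps separate a network problem (security group, routing) from a Db2 configuration problem. A minimal sketch, using bash's `/dev/tcp`; the host and port below are placeholders, and in CodeBuild you would probe `$DB_HOST` and `$DB_PORT` instead.

```shell
#!/usr/bin/env bash
# Sketch: probe TCP reachability of the Db2 listener before attempting
# `db2 connect`. Host and port are placeholders.

port_state() {
  local host="$1" port="$2"
  if timeout 2 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo open
  else
    echo closed
  fi
}

# Port 1 on localhost is almost never listening, so this prints "closed".
state=$(port_state 127.0.0.1 1)
echo "$state"
```

An `open` result against your Db2 host narrows the problem down to the Db2 catalog or credentials rather than the network path.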

## Related resources
<a name="build-cobol-db2-programs-mainframe-modernization-codebuild-resources"></a>

**IBM documentation**
+ [IBM Data Server Client and driver types](https://www.ibm.com/docs/en/db2/11.5?topic=overviews-data-server-clients)
+ [Download IBM Data Server Client and driver types](https://www.ibm.com/support/pages/download-initial-version-115-clients-and-drivers)

**AWS documentation**
+ [Amazon S3 User Guide](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html)
+ [AWS CodeBuild User Guide](https://docs.aws.amazon.com/codebuild/latest/userguide/welcome.html)
+ [AWS Mainframe Modernization User Guide](https://docs.aws.amazon.com/m2/latest/userguide/what-is-m2.html)
+ [AWS Secrets Manager User Guide](https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html)
+ [AWS CodePipeline User Guide](https://docs.aws.amazon.com/codepipeline/latest/userguide/welcome.html)
+ [AWS CodeDeploy User Guide](https://docs.aws.amazon.com/codedeploy/latest/userguide/getting-started-codedeploy.html)

## Additional information
<a name="build-cobol-db2-programs-mainframe-modernization-codebuild-additional"></a>

**CodeBuild policy**

Replace the placeholders `<RegionId>`, `<AccountId>`, `<SubnetARN>`, `<BucketARN>`, and `<DB2CredSecretARN>` with your values.

```
{
    "Version": "2012-10-17",
    "Statement": [
        {"Action": "ecr:GetAuthorizationToken", "Effect": "Allow", "Resource": "*" },
        {"Action": ["ecr:GetDownloadUrlForLayer", "ecr:BatchGetImage", 
                    "ecr:BatchCheckLayerAvailability"],
         "Effect": "Allow", 
         "Resource": "arn:aws:ecr:*:673918848628:repository/m2-enterprise-build-tools"},
        {"Action": "s3:PutObject", "Effect": "Allow", "Resource": "arn:aws:s3:::aws-m2-repo-*/*"},
        {"Action": ["logs:PutLogEvents", "logs:CreateLogStream", "logs:CreateLogGroup"],
         "Effect": "Allow", "Resource": "arn:aws:logs:<RegionId>:<AccountId>:*"},
        {"Action": ["ec2:DescribeVpcs", "ec2:DescribeSubnets", 
                    "ec2:DescribeSecurityGroups", "ec2:DescribeNetworkInterfaces", 
                    "ec2:DescribeDhcpOptions", "ec2:DeleteNetworkInterface", 
                    "ec2:CreateNetworkInterface"],
         "Effect": "Allow", "Resource": "*"},
        {"Action": "ec2:CreateNetworkInterfacePermission", 
         "Effect": "Allow", "Resource": ["<SubnetARN>"]},
        {"Action": "s3:*", "Effect": "Allow", "Resource": ["<BucketARN>/*","<BucketARN>"]},
        {"Action": "secretsmanager:GetSecretValue", 
         "Effect": "Allow", "Resource": "<DB2CredSecretARN>"}
    ]
}
```

**CodePipeline policy**

Replace the placeholders `<BucketARN>` and `<ConnectionARN>` with your values.

```
{
    "Version": "2012-10-17",
    "Statement": [
        {"Action": ["s3:List*", "s3:GetObjectVersion", "s3:GetObject", "s3:GetBucketVersioning" ], 
        "Effect": "Allow",
        "Resource": ["<BucketARN>/*", "<BucketARN>"]},
        {"Action": ["codebuild:StartBuild", "codebuild:BatchGetBuilds"], 
         "Effect": "Allow", "Resource": "*"},
        {"Action": ["codestar-connections:UseConnection"],
         "Effect": "Allow", "Resource": "<ConnectionARN>"}
        ]
}
```
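
One way to fill in the placeholders in either policy before creating it is a simple `sed` pass over the template. A sketch on a shortened fragment (the ARN value is illustrative):

```shell
#!/usr/bin/env bash
# Sketch: substitute policy placeholders such as <BucketARN> with real
# values before calling `aws iam create-policy`. The ARN is illustrative.
set -euo pipefail

template='{"Action": "s3:*", "Effect": "Allow", "Resource": ["<BucketARN>/*", "<BucketARN>"]}'
bucket_arn='arn:aws:s3:::my-m2-artifacts'

# Use | as the sed delimiter because ARNs contain colons and slashes.
policy=$(printf '%s' "$template" | sed "s|<BucketARN>|${bucket_arn}|g")
echo "$policy"
```

Repeating the `sed` expression for each placeholder keeps the templates in this section reusable across accounts and Regions.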

**`buildspec.yml`**

Replace the `<your-bucket-name>` placeholder with your actual S3 bucket name.

```
version: 0.2
phases:
  pre_build:
    commands:
      - /var/microfocuslicensing/bin/mfcesd -no > /var/microfocuslicensing/logs/mfcesd_startup.log 2>&1 &
      - |
        mkdir $CODEBUILD_SRC_DIR/db2client
        aws s3 cp s3://<your-bucket-name>/v11.5.8_linuxx64_client.tar.gz $CODEBUILD_SRC_DIR/db2client/ >> /dev/null 2>&1
        tar -xf $CODEBUILD_SRC_DIR/db2client/v11.5.8_linuxx64_client.tar.gz -C $CODEBUILD_SRC_DIR/db2client/
        cd $CODEBUILD_SRC_DIR/db2client/
        ./client/db2_install -f sysreq -y -b /opt/ibm/db2/V11.5 >> /dev/null 2>&1        
        useradd db2cli
        /opt/ibm/db2/V11.5/instance/db2icrt -s client -u db2cli db2cli
        DB2CRED=$(aws secretsmanager get-secret-value --secret-id dev-db2-cred | jq -r '.SecretString | fromjson')
        read -r DB2USER DB2PASS DB_NODE DB_HOST DB_PORT DB_NAME DB_QUAL <<<$(echo $DB2CRED | jq -r '.username, .password, .db2node, .db2host, .db2port, .db2name, .qualifier')
        . /home/db2cli/sqllib/db2profile
        db2 catalog tcpip node $DB_NODE remote $DB_HOST server $DB_PORT
        db2 catalog db $DB_NAME as $DB_NAME at node $DB_NODE authentication server
  build:
    commands:
      - |
        revision=$CODEBUILD_SRC_DIR/loadlib
        mkdir -p $revision; cd $revision
        . /opt/microfocus/EnterpriseDeveloper/bin/cobsetenv
        cob -zU $CODEBUILD_SRC_DIR/CDB2SMP.cbl -C "DB2(DB==${DB_NAME} PASS==${DB2USER}.${DB2PASS} VERSION==${CODEBUILD_BUILD_NUMBER} COLLECTION==DB2AWSDB)"
artifacts:
  files:
    - "**/*"
  base-directory: $revision
```
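
The credential-parsing step in the `pre_build` phase can be tried locally. The sketch below inlines a placeholder secret instead of calling Secrets Manager, and pipes the jq output through `tr` so all seven values land on one line for `read` to split (the buildspec itself relies on `jq` being available in the build image).

```shell
#!/usr/bin/env bash
# Sketch: how pre_build turns the Secrets Manager JSON into shell
# variables. The secret value is inlined here; in CodeBuild it comes from
# `aws secretsmanager get-secret-value`. Assumes jq is installed.
set -euo pipefail

DB2CRED='{"username":"db2cli","password":"example-password","db2node":"DB2NODE","db2host":"db2.example.com","db2port":"50000","db2name":"SAMPLE","qualifier":"DB2AWSDB"}'

# jq prints one value per line; tr joins them so read can split on spaces.
read -r DB2USER DB2PASS DB_NODE DB_HOST DB_PORT DB_NAME DB_QUAL \
  <<<"$(printf '%s' "$DB2CRED" | jq -r '.username, .password, .db2node, .db2host, .db2port, .db2name, .qualifier' | tr '\n' ' ')"

echo "$DB2USER connects to $DB_NAME on $DB_HOST:$DB_PORT"
```

The variables populated this way feed the `db2 catalog` and `cob` commands in the rest of the buildspec.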

# Build a Micro Focus Enterprise Server PAC with Amazon EC2 Auto Scaling and Systems Manager
<a name="build-a-micro-focus-enterprise-server-pac-with-amazon-ec2-auto-scaling-and-systems-manager"></a>

*Kevin Yung and Krithika Palani Selvam, Amazon Web Services*

*Peter Woods, None*

*Abraham Rondon, Micro Focus*

## Summary
<a name="build-a-micro-focus-enterprise-server-pac-with-amazon-ec2-auto-scaling-and-systems-manager-summary"></a>

This pattern introduces a scalable architecture for mainframe applications using [Micro Focus Enterprise Server in Scale-Out Performance and Availability Cluster (PAC)](https://www.microfocus.com/documentation/enterprise-developer/ed60/ES-WIN/GUID-F6E1BBB7-AEC2-45B1-9E36-1D86B84D2B85.html) and an Amazon Elastic Compute Cloud (Amazon EC2) Auto Scaling group on Amazon Web Services (AWS). The solution is fully automated with AWS Systems Manager and Amazon EC2 Auto Scaling lifecycle hooks. By using this pattern, you can set up your mainframe online and batch applications to achieve high resiliency by automatically scaling in and out based on your capacity demands. 

**Note**  
This pattern was tested with Micro Focus Enterprise Server version 6.0. For version 8, see [Set up Micro Focus Runtime (on Amazon EC2)](https://docs.aws.amazon.com/m2/latest/userguide/mf-runtime-setup.html).

## Prerequisites and limitations
<a name="build-a-micro-focus-enterprise-server-pac-with-amazon-ec2-auto-scaling-and-systems-manager-prereqs"></a>

**Prerequisites**
+ An active AWS account.
+ Micro Focus Enterprise Server software and license. For details, contact [Micro Focus sales](https://www.microfocus.com/en-us/contact/contactme).
+ An understanding of the concept of rebuilding and delivering a mainframe application to run in Micro Focus Enterprise Server. For a high-level overview, see [Micro Focus Enterprise Server Data Sheet](https://www.microfocus.com/media/data-sheet/enterprise_server_ds.pdf).
+ An understanding of the concepts in Micro Focus Enterprise Server scale-out Performance and Availability Cluster. For more information, see the [Micro Focus Enterprise Server documentation](https://www.microfocus.com/documentation/enterprise-developer/ed60/ES-WIN/GUID-F6E1BBB7-AEC2-45B1-9E36-1D86B84D2B85.html).
+ An understanding of the overall concept of mainframe application DevOps with continuous integration (CI). For an AWS Prescriptive Guidance pattern that was developed by AWS and Micro Focus, see [Mainframe modernization: DevOps on AWS with Micro Focus](https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/mainframe-modernization-devops-on-aws-with-micro-focus.html).

**Limitations**
+ For a list of platforms that are supported by Micro Focus Enterprise Server, see the [Micro Focus Enterprise Server Data Sheet](https://www.microfocus.com/media/data-sheet/enterprise_server_ds.pdf).
+ The scripts and tests used in this pattern are based on Amazon EC2 Windows Server 2019; other Windows Server versions and operating systems were not tested for this pattern.
+ The pattern is based on Micro Focus Enterprise Server 6.0 for Windows; earlier or later releases were not tested in the development of this pattern.

**Product versions**
+ Micro Focus Enterprise Server 6.0
+ Windows Server 2019

## Architecture
<a name="build-a-micro-focus-enterprise-server-pac-with-amazon-ec2-auto-scaling-and-systems-manager-architecture"></a>

In a conventional mainframe environment, you must provision hardware to host your applications and corporate data. To meet seasonal, monthly, quarterly, or unexpected spikes in demand, mainframe users must *scale out* by purchasing additional storage and compute capacity. Adding storage and compute resources improves overall performance, but the scaling is not linear.

This is not the case when you adopt an on-demand consumption model on AWS by using Amazon EC2 Auto Scaling and Micro Focus Enterprise Server. The following sections explain in detail how to build a fully automated, scalable mainframe application architecture by using a Micro Focus Enterprise Server Scale-Out Performance and Availability Cluster (PAC) with an Amazon EC2 Auto Scaling group. 

**Micro Focus Enterprise Server automatic scaling architecture**

First, it is important to understand the basic concepts of Micro Focus Enterprise Server. Enterprise Server provides a mainframe-compatible, x86 deployment environment for applications that have traditionally run on IBM mainframes. It delivers an online and batch run-time and transaction environment that supports the following:
+ IBM COBOL
+ IBM PL/I
+ IBM JCL batch jobs
+ IBM CICS and IMS TM transactions
+ Web services
+ Common batch utilities, including SORT

Micro Focus Enterprise Server enables mainframe applications to run with minimal changes. Existing mainframe workloads can be moved to x86 platforms and modernized to take advantage of AWS Cloud native extensions for rapid expansion to new markets or geographies. 

The AWS Prescriptive Guidance pattern [Mainframe modernization: DevOps on AWS with Micro Focus](https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/mainframe-modernization-devops-on-aws-with-micro-focus.html) introduced the architecture to accelerate the development and testing of mainframe applications on AWS using Micro Focus Enterprise Developer and Enterprise Test Server with AWS CodePipeline and AWS CodeBuild. This pattern focuses on the deployment of mainframe applications to the AWS production environment to achieve high availability and resiliency.

In a mainframe production environment, you might have set up IBM Parallel Sysplex in the mainframe to achieve high performance and high availability. To create a scale-out architecture similar to Sysplex, Micro Focus introduced the Performance and Availability Cluster (PAC) to Enterprise Server. PACs support mainframe application deployment onto multiple Enterprise Server regions managed as a single image and scaled out in Amazon EC2 instances. PACs also support predictable application performance and system throughput on demand. 

In a PAC, multiple Enterprise Server instances work together as a single logical entity. Failure of one Enterprise Server instance, therefore, will not interrupt business continuity because capacity is shared with other regions while new instances are automatically started using industry standard functionality such as an Amazon EC2 Auto Scaling group. This removes single points of failure, improving resilience to hardware, network, and application issues. Scaled-out Enterprise Server instances can be operated and managed by using the Enterprise Server Common Web Administration (ESCWA) APIs, simplifying the operational maintenance and serviceability of Enterprise Servers. 

**Note**  
Micro Focus recommends that the [Performance and Availability Cluster (PAC)](https://www.microfocus.com/documentation/enterprise-developer/ed60/ES-WIN/GUID-C06DC883-8A67-44DB-8553-8F0DD2062DAB.html) should consist of at least three Enterprise Server regions so that availability is not compromised in the event an Enterprise Server region fails or requires maintenance.

PAC configuration requires a supported relational database management system (RDBMS) to manage the region database, a cross-region database, and optional data store databases. A data store database should be used to manage Virtual Storage Access Method (VSAM) files by using Micro Focus Database File Handler support to improve availability and scalability. Supported RDBMSs include the following:
+ Microsoft SQL Server 2008 R2 and later
+ PostgreSQL 10.x, including Amazon Aurora PostgreSQL-Compatible Edition
+ Db2 10.5 and later

For details of supported RDBMS and PAC requirements, see [Micro Focus Enterprise Server - Prerequisites](https://www.microfocus.com/documentation/enterprise-developer/ed60/ES-WIN/GUID-486C5A4B-E3CD-4B17-81F3-32F9DE970EA5.html) and [Micro Focus Enterprise Server - Recommended PAC Configuration](https://www.microfocus.com/documentation/enterprise-developer/ed60/ES-WIN/GUID-7038DB6F-E89F-4B5F-BCAA-BD1456F6CCA3.html).

The following diagram shows a typical AWS architecture setup for a Micro Focus PAC. 

![\[A three-Availability Zone architecture with five steps described in a table after the diagram.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/64e3c22b-1058-4ab8-855f-18bbbed5dc13/images/df291568-a442-454f-80bf-49e4ffff4f6d.png)


 


|  | **Component** | **Description** |
| --- | --- | --- |
| 1 | Enterprise Server instances automatic scaling group | Set up an automatic scaling group deployed with Enterprise Server instances in a PAC. The number of instances can be scaled out or in, initiated by Amazon CloudWatch alarms that use CloudWatch metrics. | 
| 2 | Enterprise Server ESCWA instances automatic scaling group  | Set up an automatic scaling group deployed with Enterprise Server Common Web Administration (ESCWA). ESCWA provides cluster management APIs.   The ESCWA servers act as a control plane to add or remove Enterprise Servers and start or stop Enterprise Server regions in the PAC during the Enterprise Server instance automatic scaling events.   Because the ESCWA instance is used only for the PAC management, its traffic pattern is predictable, and its automatic scaling desired capacity requirement can be set to 1.  | 
| 3 | Amazon Aurora instance in a Multi-AZ setup | Set up a relational database management system (RDBMS) to host both user and system data files to be shared across the Enterprise Server instances. | 
| 4 | Amazon ElastiCache (Redis OSS) instance and replica | Set up an ElastiCache (Redis OSS) primary instance and at least one replica to host user data and act as a scale-out repository (SOR) for the Enterprise Server instances. You can configure one or more [scale-out repositories](https://www.microfocus.com/documentation/enterprise-developer/ed60/ES-WIN/GUID-3840E10F-80AA-4109-AF2C-894237D3AD00.html) to store specific types of user data.   Enterprise Server uses a Redis NoSQL database as an SOR, [a requirement to maintain PAC integrity](https://www.microfocus.com/documentation/enterprise-developer/ed60/ES-WIN/GUID-176B97CA-4F9F-4CE1-952F-C3F4FB0ADD25.html). | 
| 5 | Network Load Balancer | Set up a load balancer, providing a hostname for applications to connect to the services provided by Enterprise Server instances (for example, accessing the application through a 3270 emulator). | 

These components form the minimum requirement for a Micro Focus Enterprise Server PAC cluster. The next section covers cluster management automation.

**Using AWS Systems Manager Automation for scaling**

After the PAC cluster is deployed on AWS, the PAC is managed through the Enterprise Server Common Web Administration (ESCWA) APIs. 

To automate the cluster management tasks during automatic scaling events, you can use Systems Manager Automation runbooks and Amazon EC2 Auto Scaling with Amazon EventBridge. The architecture of these automations is shown in the following diagram.

![\[AWS architecture diagram showing EventBridge, Systems Manager, and EC2 Auto Scaling integration.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/64e3c22b-1058-4ab8-855f-18bbbed5dc13/images/6f9e4035-fafd-4aee-a6cc-d5e95d6514c2.png)


 


|  | **Component** | **Description** |
| --- | --- | --- |
| 1 | Automatic scaling lifecycle hook | Set up automatic scaling lifecycle hooks and send notifications to Amazon EventBridge when new instances are launched and existing instances are terminated in the automatic scaling group. | 
| 2 | Amazon EventBridge | Set up an Amazon EventBridge rule to route automatic scaling events to Systems Manager Automation runbook targets. | 
| 3 | Automation runbooks | Set up Systems Manager Automation runbooks to run Windows PowerShell scripts and invoke ESCWA APIs to manage the PAC. For examples, see the *Additional information* section. | 
| 4 | Enterprise Server ESCWA instance in an automatic scaling group | Set up an Enterprise Server ESCWA instance in an automatic scaling group. The ESCWA instance provides APIs to manage the PAC.  | 

## Tools
<a name="build-a-micro-focus-enterprise-server-pac-with-amazon-ec2-auto-scaling-and-systems-manager-tools"></a>
+ [Micro Focus Enterprise Server](https://www.microfocus.com/documentation/enterprise-developer/ed60/ES-WIN/GUID-A2F23243-962B-440A-A071-480082DF47E7.html) – Micro Focus Enterprise Server provides the run environment for applications created with any integrated development environment (IDE) variant of Enterprise Developer.
+ [Amazon EC2 Auto Scaling](https://docs.aws.amazon.com/autoscaling/ec2/userguide/what-is-amazon-ec2-auto-scaling.html) – Amazon EC2 Auto Scaling helps you ensure that you have the correct number of Amazon EC2 instances available to handle the load for your application. You create collections of EC2 instances, called Auto Scaling groups, and specify minimum and maximum numbers of instances.
+ [Amazon ElastiCache (Redis OSS)](https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/WhatIs.html) – Amazon ElastiCache is a web service for setting up, managing, and scaling a distributed in-memory data store or cache environment in the cloud. It provides a high-performance, scalable, and cost-effective caching solution.
+ [Amazon RDS](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Welcome.html) – Amazon Relational Database Service (Amazon RDS) is a web service that makes it easier to set up, operate, and scale a relational database in the AWS Cloud. It provides cost-efficient, resizable capacity for a relational database and manages common database administration tasks. 
+ [AWS Systems Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/what-is-systems-manager.html) – AWS Systems Manager is an AWS service that you can use to view and control your infrastructure on AWS. Using the Systems Manager console, you can view operational data from multiple AWS services and automate operational tasks across your AWS resources. Systems Manager helps you maintain security and compliance by scanning your managed instances and reporting on (or taking corrective action on) any policy violations it detects.

## Epics
<a name="build-a-micro-focus-enterprise-server-pac-with-amazon-ec2-auto-scaling-and-systems-manager-epics"></a>

### Create an Amazon Aurora instance
<a name="create-an-amazon-aurora-instance"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an AWS CloudFormation template for an Amazon Aurora instance. | Use the [AWS example code snippet](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/AWS_RDS.html) to make a CloudFormation template that will create an Amazon Aurora PostgreSQL-Compatible Edition instance. | Cloud architect | 
| Deploy a CloudFormation stack to create the Amazon Aurora instance. | Use the CloudFormation template to create an Aurora PostgreSQL-Compatible instance that has Multi-AZ replication enabled for production workloads. | Cloud architect | 
| Configure database connection settings for Enterprise Server. | Follow the instructions in the [Micro Focus documentation](https://www.microfocus.com/documentation/enterprise-developer/ed60/ES-WIN/GUID-40748F62-84B3-4B7B-8E96-5484ADEDFB5F.html) to prepare the connection strings and database configuration for Micro Focus Enterprise Server. | Data engineer, DevOps engineer | 

### Create an Amazon ElastiCache cluster for the Redis instance
<a name="create-an-elclong-cluster-for-the-redis-instance"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a CloudFormation template for the Amazon ElastiCache cluster for the Redis instance. | Use the [AWS example code snippet](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/AWS_ElastiCache.html) to make a CloudFormation template that will create an Amazon ElastiCache cluster for the Redis instance. | Cloud architect | 
| Deploy the CloudFormation stack to create an Amazon ElastiCache cluster for the Redis instance. | Create the Amazon ElastiCache cluster for the Redis instance that has Multi-AZ replication enabled for production workloads. | Cloud architect | 
| Configure Enterprise Server PSOR connection settings. | Follow the instructions in the [Micro Focus documentation](https://www.microfocus.com/documentation/enterprise-developer/ed60/ES-WIN/GUID-2A420ADD-4CA6-472D-819F-371C037C0653.html) to prepare the PAC Scale-Out Repository (PSOR) connection configuration for Micro Focus Enterprise Server PAC. | DevOps engineer | 

### Create a Micro Focus Enterprise Server ESCWA automatic scaling group
<a name="create-a-micro-focus-enterprise-server-escwa-automatic-scaling-group"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a Micro Focus Enterprise Server AMI. | Create an Amazon EC2 Windows Server instance and install the Micro Focus Enterprise Server binary in the EC2 instance. Create an Amazon Machine Image (AMI) of the EC2 instance. For more information, see the [Enterprise Server installation documentation](https://www.microfocus.com/documentation/enterprise-developer/ed60/ES-WIN/GUID-FACEF60F-BAE3-446C-B2B4-4379A5DF6D9F.html). | Cloud architect | 
| Create a CloudFormation template for Enterprise Server ESCWA.  | Use the [AWS example code snippet](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/AWS_AutoScaling.html) to make a template for creating a custom stack of Enterprise Server ESCWA in an automatic scaling group. | Cloud architect | 
| Deploy the CloudFormation stack to create an Amazon EC2 scaling group for Enterprise Server ESCWA. | Use the CloudFormation template to deploy the automatic scaling group with the Micro Focus Enterprise Server ESCWA AMI created in the previous story. | Cloud architect | 

### Create an AWS Systems Manager Automation runbook
<a name="create-an-aws-systems-manager-automation-runbook"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a CloudFormation template for a Systems Manager Automation runbook. | Use the example code snippets in the *Additional information* section to make a CloudFormation template that will create a Systems Manager Automation runbook for automating PAC creation, Enterprise Server scale in, and Enterprise Server scale out. | Cloud architect | 
| Deploy the CloudFormation stack that contains the Systems Manager Automation runbook. | Use the CloudFormation template to deploy a stack that contains the Automation runbook for PAC creation, Enterprise Server scale in, and Enterprise Server scale out. | Cloud architect | 

### Create an automatic scaling group for Micro Focus Enterprise Server
<a name="create-an-automatic-scaling-group-for-micro-focus-enterprise-server"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a CloudFormation template for setting up an automatic scaling group for Micro Focus Enterprise Server. | Use the [AWS example code snippet](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/AWS_AutoScaling.html) to make a CloudFormation template that will create an automatic scaling group. This template will reuse the same AMI that was created for the Micro Focus Enterprise Server ESCWA instance. Then use an [AWS example code snippet](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-events-rule.html) to create the automatic scaling lifecycle event and set up Amazon EventBridge to filter for scale-out and scale-in events in the same CloudFormation template. | Cloud architect | 
| Deploy the CloudFormation stack for the automatic scaling group for Micro Focus Enterprise Servers. | Deploy the CloudFormation stack that contains the automatic scaling group for Micro Focus Enterprise Servers. | Cloud architect | 

## Related resources
<a name="build-a-micro-focus-enterprise-server-pac-with-amazon-ec2-auto-scaling-and-systems-manager-resources"></a>
+ [Micro Focus Enterprise Server Performance and Availability Cluster (PAC)](https://www.microfocus.com/documentation/enterprise-developer/ed60/ES-WIN/GUID-613F5E2D-2FBC-47AE-9327-48CA4FF84C5B.html) 
+ [Amazon EC2 Auto Scaling lifecycle hooks](https://docs.aws.amazon.com/autoscaling/ec2/userguide/lifecycle-hooks.html)
+ [Running automations with triggers using EventBridge](https://docs.aws.amazon.com/systems-manager/latest/userguide/automation-cwe-target.html)

## Additional information
<a name="build-a-micro-focus-enterprise-server-pac-with-amazon-ec2-auto-scaling-and-systems-manager-additional"></a>

The following scenarios must be automated for scaling in or scaling out the PAC clusters.

**Automation for starting or recreating a PAC**

When a PAC cluster starts, Enterprise Server requires ESCWA to invoke APIs that create the PAC configuration, and then start and add Enterprise Server regions to the PAC. To create or re-create a PAC, use the following steps: 

1. Configure a [PAC Scale-Out Repository (PSOR)](https://www.microfocus.com/documentation/enterprise-developer/ed60/ES-WIN/GUID-2A420ADD-4CA6-472D-819F-371C037C0653.html) in ESCWA with a given name.

   ```
   POST /server/v1/config/groups/sors
   ```

1. Create a PAC with a given name and attach the PSOR to it.

   ```
   POST /server/v1/config/groups/pacs
   ```

1. Configure the region database and cross-region database if this is the first time you are setting up a PAC.
**Note**  
This step uses SQL queries and the Micro Focus Enterprise Suite command-line **dbfhadmin** tool to create the database and import initial data.

1. Install the PAC definition into the Enterprise Server regions.

   ```
   POST /server/v1/config/mfds 
   POST /native/v1/config/groups/pacs/${pac_uid}/install
   ```

1. Start Enterprise Server regions in the PAC.

   ```
   POST /native/v1/regions/${host_ip}/${port}/${region_name}/start
   ```

The previous steps can be implemented by using a Windows PowerShell script. 
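
For illustration only, the call sequence above can also be sketched in Python. The pattern itself uses PowerShell; the helper names below are hypothetical, and only the ESCWA path-building logic from steps 1-5 is shown (issuing the authenticated HTTP POST calls is left out).

```python
# Sketch of the ESCWA endpoint paths used in the PAC creation steps above.

PSOR_PATH = "/server/v1/config/groups/sors"   # step 1: create the PSOR
PAC_PATH = "/server/v1/config/groups/pacs"    # step 2: create the PAC

def pac_install_path(pac_uid: str) -> str:
    """Step 4: install the PAC definition into the Enterprise Server regions."""
    return f"/native/v1/config/groups/pacs/{pac_uid}/install"

def region_start_path(host_ip: str, port: int, region_name: str) -> str:
    """Step 5: start an Enterprise Server region in the PAC."""
    return f"/native/v1/regions/{host_ip}/{port}/{region_name}/start"
```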

The following steps explain how to build an automation for creating a PAC by reusing the Windows PowerShell script.

1. Create an Amazon EC2 launch template that downloads or creates the Windows PowerShell script as part of the bootstrap process. For example, you can use EC2 user data to download the script from an Amazon Simple Storage Service (Amazon S3) bucket.

1. Create an AWS Systems Manager Automation runbook to invoke the Windows PowerShell script.

1. Associate the runbook to the ESCWA instance by using the instance tag.

1. Create an ESCWA automatic scaling group by using the launch template. 

You can use the following example AWS CloudFormation snippet to create the Automation runbook.

*Example CloudFormation snippet for a Systems Manager Automation runbook used for PAC creation*

```
  PACInitDocument:
    Type: AWS::SSM::Document
    Properties:
      DocumentType: Command
      Content:
        schemaVersion: '2.2'
        description: Operation Runbook to create Enterprise Server PAC
        mainSteps:
          - action: aws:runPowerShellScript
            name: CreatePAC
            inputs:
              onFailure: Abort
              timeoutSeconds: "1200"
              runCommand:
                - |
                  C:\Scripts\PAC-Init.ps1
  PacInitAutomation:
    Type: AWS::SSM::Document
    Properties:
      DocumentType: Automation
      Content:
        description: Prepare Micro Focus PAC Cluster via ESCWA Server
        schemaVersion: '0.3'
        assumeRole: !GetAtt SsmAssumeRole.Arn
        mainSteps:
          - name: RunPACInitDocument
            action: aws:runCommand
            timeoutSeconds: 300
            onFailure: Abort
            inputs:
              DocumentName: !Ref PACInitDocument
              Targets:
                - Key: tag:Enterprise Server - ESCWA
                  Values:
                    - "true"
  PacInitDocumentAssociation:
    Type: AWS::SSM::Association
    Properties:
      DocumentVersion: "$LATEST"
      Name: !Ref PACInitDocument
      Targets:
        - Key: tag:Enterprise Server - ESCWA
          Values:
            - "true"
```

For more information, see [Micro Focus Enterprise Server - Configuring a PAC](https://www.microfocus.com/documentation/enterprise-developer/ed60/ES-WIN/GUID-2B15EBA5-84AF-47C3-9F8E-EE57EB17245F.html).

**Automation for scaling out with a new Enterprise Server instance**

When an Enterprise Server instance is scaled out, its Enterprise Server region must be added to the PAC. The following steps explain how to invoke ESCWA APIs and add the Enterprise Server region into the PAC. 

1. Install the PAC definition into the Enterprise Server regions.

   ```
   POST /server/v1/config/mfds
   POST /native/v1/config/groups/pacs/${pac_uid}/install
   ```

1. Warm-start the region in the PAC.

   ```
   POST /native/v1/regions/${host_ip}/${port}/${region_name}/start
   ```

1. Add the Enterprise Server instance to the load balancer by associating the automatic scaling group to the load balancer.

The previous steps can be implemented by using a Windows PowerShell script. For more information, see [Micro Focus Enterprise Server - Configuring a PAC](https://www.microfocus.com/documentation/enterprise-developer/ed60/ES-WIN/GUID-2B15EBA5-84AF-47C3-9F8E-EE57EB17245F.html).
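
As a sketch only, the launch event that drives this automation can be mapped to the runbook's input parameters in Python; the parameter names match the ScaleOutDocument runbook shown later in this section, and the default MFDS port value here is an assumption (use your region's port). SSM Automation parameters are passed as lists of strings.

```python
# Translate an "EC2 Instance Launch Successful" Auto Scaling event into the
# parameter map for StartAutomationExecution. The EC2InstanceId field is part
# of the standard Auto Scaling event detail.

def scale_out_parameters(event: dict, mfds_port: str = "86") -> dict:
    detail = event["detail"]
    return {
        "InstanceId": [detail["EC2InstanceId"]],
        "MfdsPort": [mfds_port],
    }
```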

The following steps can be used to build an event-driven automation that adds a newly launched Enterprise Server instance to a PAC by reusing the Windows PowerShell script. 

1. Create an Amazon EC2 launch template for the Enterprise Server instance that provisions an Enterprise Server region during bootstrap. For example, you can use the Micro Focus Enterprise Server `mfds` command to import a region configuration. For details and options for this command, see the [Enterprise Server Reference](https://www.microfocus.com/documentation/enterprise-developer/ed60/ES-WIN/HRADRHCOMM06.html).

1. Create an Enterprise Server automatic scaling group that uses the launch template created in the previous step.

1. Create a Systems Manager Automation runbook to invoke the Windows PowerShell script. 

1. Associate the runbook to the ESCWA instance by using the instance tag.

1. Create an Amazon EventBridge rule that filters for the EC2 Instance Launch Successful event for the Enterprise Server automatic scaling group, and set the Automation runbook as the rule's target.
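
For reference, the rule in the last step can be expressed in CloudFormation along the following lines. The logical IDs (`EnterpriseServerAutoScalingGroup`, `EventBridgeInvokeRole`) are assumptions for illustration; `PacScaleOutAutomation` refers to the Automation document defined later in this section.

```
ScaleOutEventRule:
  Type: AWS::Events::Rule
  Properties:
    State: ENABLED
    EventPattern:
      source:
        - aws.autoscaling
      detail-type:
        - EC2 Instance Launch Successful
      detail:
        AutoScalingGroupName:
          - !Ref EnterpriseServerAutoScalingGroup
    Targets:
      - Id: PacScaleOutTarget
        Arn: !Sub arn:${AWS::Partition}:ssm:${AWS::Region}:${AWS::AccountId}:automation-definition/${PacScaleOutAutomation}:$DEFAULT
        RoleArn: !GetAtt EventBridgeInvokeRole.Arn
```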

You can use the following example CloudFormation snippet to create the Automation runbook and the EventBridge rule.

*Example CloudFormation snippet for Systems Manager used for scaling out Enterprise Server instances*

```
  ScaleOutDocument:
    Type: AWS::SSM::Document
    Properties:
      DocumentType: Command
      Content:
        schemaVersion: '2.2'
        description: Operation Runbook to add an MFDS Server into an existing PAC
        parameters:
          MfdsPort:
            type: String
          InstanceIpAddress:
            type: String
            default: "Not-Available"
          InstanceId:
            type: String
            default: "Not-Available"
        mainSteps:
          - action: aws:runPowerShellScript
            name: Add_MFDS
            inputs:
              onFailure: Abort
              timeoutSeconds: "300"
              runCommand:
                - |
                  $ip = "{{InstanceIpAddress}}"
                  if ( ${ip} -eq "Not-Available" ) {
                    $ip = aws ec2 describe-instances --instance-id {{InstanceId}} --output text --query "Reservations[0].Instances[0].PrivateIpAddress"
                  }
                  C:\Scripts\Scale-Out.ps1 -host_ip ${ip} -port {{MfdsPort}}
  PacScaleOutAutomation:
    Type: AWS::SSM::Document
    Properties:
      DocumentType: Automation
      Content:
        parameters:
          MfdsPort:
            type: String
          InstanceIpAddress:
            type: String
            default: "Not-Available"
          InstanceId:
            type: String
            default: "Not-Available"
        description: Scale out 1 new server in the Micro Focus PAC Cluster via ESCWA Server
        schemaVersion: '0.3'
        assumeRole: !GetAtt SsmAssumeRole.Arn
        mainSteps:
          - name: RunScaleOutCommand
            action: aws:runCommand
            timeoutSeconds: 300
            onFailure: Abort
            inputs:
              DocumentName: !Ref ScaleOutDocument
              Parameters:
                InstanceIpAddress: "{{InstanceIpAddress}}"
                InstanceId: "{{InstanceId}}"
                MfdsPort: "{{MfdsPort}}"
              Targets:
                - Key: tag:Enterprise Server - ESCWA
                  Values:
                    - "true"
```

**Automation for scaling in an Enterprise Server instance**

Similar to scaling out, when an Enterprise Server instance is *scaled in*, the EC2 Instance-terminate Lifecycle Action event is initiated, and the following process and API calls are needed to remove the Micro Focus Enterprise Server instance from the PAC. 

1. Stop the region in the terminating Enterprise Server instance.

   ```
   POST "/native/v1/regions/${host_ip}/${port}/${region_name}/stop"
   ```

1. Remove the Enterprise Server instance from the PAC.

   ```
   DELETE "/server/v1/config/mfds/${uid}"
   ```

1. Send a signal to continue terminating the Enterprise Server instance.

The previous steps can be implemented in a Windows PowerShell script. For more details about this process, see the Micro Focus Enterprise Server documentation, [Administering a PAC](https://www.microfocus.com/documentation/enterprise-developer/ed60/ES-WIN/GUID-E864E2E9-EB49-43BF-9AAD-7FE334749441.html).
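
As a sketch only, step 3 can be expressed in Python by building the arguments for the Auto Scaling `CompleteLifecycleAction` API from the incoming lifecycle event; the event fields used here are part of the standard EC2 Instance-terminate Lifecycle Action payload, and the helper name is hypothetical.

```python
# Build the CompleteLifecycleAction arguments from an
# "EC2 Instance-terminate Lifecycle Action" event so the instance is allowed
# to finish terminating once its region has been removed from the PAC.

def continue_termination_args(event: dict) -> dict:
    detail = event["detail"]
    return {
        "LifecycleHookName": detail["LifecycleHookName"],
        "AutoScalingGroupName": detail["AutoScalingGroupName"],
        "LifecycleActionToken": detail["LifecycleActionToken"],
        "InstanceId": detail["EC2InstanceId"],
        "LifecycleActionResult": "CONTINUE",
    }
```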

The following steps explain how to build an event-driven automation that removes a terminating Enterprise Server instance from a PAC by reusing the Windows PowerShell script. 

1. Create a Systems Manager Automation runbook to invoke the Windows PowerShell script.

1. Associate the runbook to the ESCWA instance by using the instance tag.

1. Create an automatic scaling group lifecycle hook for EC2 instance termination.

1. Create an Amazon EventBridge rule that filters for the EC2 Instance-terminate Lifecycle Action event for the Enterprise Server automatic scaling group, and set the Automation runbook as the rule's target. 
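
The termination lifecycle hook in step 3 can be declared in CloudFormation roughly as follows; the logical IDs match the `!Ref` names used in the scale-in snippet later in this section, and the timeout value is an assumption to adjust for how long PAC removal takes in your environment.

```
ScaleInLifeCycleHook:
  Type: AWS::AutoScaling::LifecycleHook
  Properties:
    AutoScalingGroupName: !Ref AutoScalingGroup
    LifecycleTransition: autoscaling:EC2_INSTANCE_TERMINATING
    HeartbeatTimeout: 600      # seconds allowed for PAC removal to complete
    DefaultResult: ABANDON
```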

You can use the following example CloudFormation template for creating a Systems Manager Automation runbook, lifecycle hook, and EventBridge rule.

*Example CloudFormation snippet for a Systems Manager Automation runbook used for scaling in an Enterprise Server instance*

```
  ScaleInDocument:
    Type: AWS::SSM::Document
    Properties:
      DocumentType: Command
      Content:
        schemaVersion: '2.2'
        description: Operation Runbook to remove an MFDS Server from the PAC
        parameters:
          MfdsPort:
            type: String
          InstanceIpAddress:
            type: String
            default: "Not-Available"
          InstanceId:
            type: String
            default: "Not-Available"
        mainSteps:
          - action: aws:runPowerShellScript
            name: Remove_MFDS
            inputs:
              onFailure: Abort
              runCommand:
                - |
                  $ip = "{{InstanceIpAddress}}"
                  if ( ${ip} -eq "Not-Available" ) {
                    $ip = aws ec2 describe-instances --instance-id {{InstanceId}} --output text --query "Reservations[0].Instances[0].PrivateIpAddress"
                  }
                  C:\Scripts\Scale-In.ps1 -host_ip ${ip} -port {{MfdsPort}}
  PacScaleInAutomation:
    Type: AWS::SSM::Document
    Properties:
      DocumentType: Automation
      Content:
        parameters:
          MfdsPort:
            type: String
          InstanceIpAddress:
            type: String
            default: "Not-Available"
          InstanceId:
            type: String
            default: "Not-Available"
        description: Scale in 1 server in the Micro Focus PAC Cluster via ESCWA Server
        schemaVersion: '0.3'
        assumeRole: !GetAtt SsmAssumeRole.Arn
        mainSteps:
          - name: RunScaleInCommand
            action: aws:runCommand
            timeoutSeconds: "600"
            onFailure: Abort
            inputs:
              DocumentName: !Ref ScaleInDocument
              Parameters:
                InstanceIpAddress: "{{InstanceIpAddress}}"
                MfdsPort: "{{MfdsPort}}"
                InstanceId: "{{InstanceId}}"
              Targets:
                - Key: tag:Enterprise Server - ESCWA
                  Values:
                    - "true"
          - name: TerminateTheInstance
            action: aws:executeAwsApi
            inputs:
              Service: autoscaling
              Api: CompleteLifecycleAction
              AutoScalingGroupName: !Ref AutoScalingGroup
              InstanceId: "{{ InstanceId }}"
              LifecycleActionResult: CONTINUE
              LifecycleHookName: !Ref ScaleInLifeCycleHook
```

**Automation for an Amazon EC2 automatic scaling trigger**

The process of setting up a scaling policy for Enterprise Server instances requires an understanding of the application behavior. In most cases, you can set up target tracking scaling policies. For example, you can use the average CPU utilization as the Amazon CloudWatch metric to set for the automatic scaling policy. For more information, see [Target tracking scaling policies for Amazon EC2 Auto Scaling](https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scaling-target-tracking.html). For applications that have regular traffic patterns, consider using a predictive scaling policy. For more information, see [Predictive scaling for Amazon EC2 Auto Scaling](https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-predictive-scaling.html). 
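
As a sketch, a target tracking policy on average CPU utilization might look like the following in CloudFormation; the 60 percent target and the `AutoScalingGroup` logical ID are assumptions to tune for your workload.

```
CpuTargetTrackingPolicy:
  Type: AWS::AutoScaling::ScalingPolicy
  Properties:
    AutoScalingGroupName: !Ref AutoScalingGroup
    PolicyType: TargetTrackingScaling
    TargetTrackingConfiguration:
      PredefinedMetricSpecification:
        PredefinedMetricType: ASGAverageCPUUtilization
      TargetValue: 60
```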

# Build an advanced mainframe file viewer in the AWS Cloud
<a name="build-an-advanced-mainframe-file-viewer-in-the-aws-cloud"></a>

*Boopathy GOPALSAMY and Jeremiah O'Connor, Amazon Web Services*

## Summary
<a name="build-an-advanced-mainframe-file-viewer-in-the-aws-cloud-summary"></a>

This pattern provides code samples and steps to help you build an advanced tool for browsing and reviewing your mainframe fixed-format files by using AWS serverless services. The pattern provides an example of how to convert a mainframe input file to an Amazon OpenSearch Service document for browsing and searching. The file viewer tool can help you achieve the following:
+ Retain the same mainframe file structure and layout for consistency in your AWS target migration environment (for example, you can maintain the same layout for files in a batch application that transmits files to external parties)
+ Speed up development and testing during your mainframe migration
+ Support maintenance activities after the migration

## Prerequisites and limitations
<a name="build-an-advanced-mainframe-file-viewer-in-the-aws-cloud-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ A virtual private cloud (VPC) with a subnet that’s reachable by your legacy platform
+ An input file and its corresponding common business-oriented language (COBOL) copybook

**Note**  
For input file and COBOL copybook examples, see the [gfs-mainframe-solutions](https://github.com/aws-samples/gfs-mainframe-patterns.git) repository on GitHub. For more information about COBOL copybooks, see the [Enterprise COBOL for z/OS 6.3](https://publibfp.boulder.ibm.com/epubs/pdf/igy6pg30.pdf) Programming Guide on the IBM website.

**Limitations**
+ Copybook parsing is limited to no more than two nested levels (OCCURS)

## Architecture
<a name="build-an-advanced-mainframe-file-viewer-in-the-aws-cloud-architecture"></a>

**Source technology stack**
+ Input files in [FB (Fixed Blocked)](https://www.ibm.com/docs/en/zos-basic-skills?topic=set-data-record-formats) format
+ COBOL copybook layout

**Target technology stack**
+ Amazon Athena
+ Amazon OpenSearch Service
+ Amazon Simple Storage Service (Amazon S3)
+ AWS Lambda
+ AWS Step Functions

**Target architecture**

The following diagram shows the process of parsing and converting a mainframe input file to an OpenSearch Service document for browsing and searching.

![\[Process to parse and convert mainframe input file to OpenSearch Service.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/36d72b00-d163-455f-9e59-e2c872e7c28a/images/cce68438-bcf2-48c1-b86b-01242235ec76.png)


The diagram shows the following workflow:

1. An admin user or application pushes input files to one S3 bucket and COBOL copybooks to another S3 bucket.

1. The S3 bucket with the input files invokes a Lambda function that starts a serverless Step Functions workflow.

**Note**  
The use of an S3 event trigger and Lambda function to drive the Step Functions workflow in this pattern is optional. The GitHub code samples in this pattern don’t include the use of these services, but you can use these services based on your requirements.

1. The Step Functions workflow coordinates all the batch processes from the following Lambda functions:
   + The `s3copybookparser.py` function parses the copybook layout and extracts field attributes, data types, and offsets (required for input data processing).
   + The `s3toathena.py` function creates an Athena table layout and converts the processed input data to a CSV file that Athena can parse.
   + The `s3toelasticsearch.py` function ingests the results file from the S3 bucket and pushes the file to OpenSearch Service.

1. Users access OpenSearch Dashboards with OpenSearch Service to retrieve the data in various table and column formats and then run queries against the indexed data.
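
The copybook-parsing step can be illustrated with a minimal sketch. The real `s3copybookparser.py` handles more of the copybook grammar (including nested OCCURS clauses); limiting the sketch to simple `PIC X(n)` and `PIC 9(n)` clauses is a deliberate simplification.

```python
import re

# Matches simple elementary items such as "05 ACCT-ID PIC 9(8)." -- COMP-3,
# REDEFINES, and OCCURS clauses are deliberately out of scope for this sketch.
PIC_CLAUSE = re.compile(r"^\s*\d+\s+(\S+)\s+PIC\s+([X9])\((\d+)\)", re.IGNORECASE)


def parse_copybook(lines):
    """Return (field name, offset, length, type) for simple PIC clauses."""
    fields, offset = [], 0
    for line in lines:
        match = PIC_CLAUSE.match(line.rstrip("."))
        if not match:
            continue  # group items and unsupported clauses are skipped
        name, kind, length = match.group(1), match.group(2).upper(), int(match.group(3))
        fields.append((name, offset, length, "numeric" if kind == "9" else "char"))
        offset += length
    return fields
```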

## Tools
<a name="build-an-advanced-mainframe-file-viewer-in-the-aws-cloud-tools"></a>

**AWS services**
+ [Amazon Athena](https://docs.aws.amazon.com/athena/latest/ug/what-is.html) is an interactive query service that helps you analyze data directly in Amazon Simple Storage Service (Amazon S3) using standard SQL.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a compute service that helps you run code without needing to provision or manage servers. It runs your code only when needed and scales automatically, so you pay only for the compute time that you use. In this pattern, you use Lambda to implement core logic, such as parsing files, converting data, and loading data into OpenSearch Service for interactive file access.
+ [Amazon OpenSearch Service](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/what-is.html) is a managed service that helps you deploy, operate, and scale OpenSearch Service clusters in the AWS Cloud. In this pattern, you use OpenSearch Service to index the converted files and provide interactive search capabilities for users.
+ [Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) is a cloud-based object storage service that helps you store, protect, and retrieve any amount of data.
+ [AWS Command Line Interface (AWS CLI)](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) is an open-source tool that helps you interact with AWS services through commands in your command-line shell.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.
+ [AWS Step Functions](https://docs.aws.amazon.com/step-functions/latest/dg/welcome.html) is a serverless orchestration service that helps you combine Lambda functions and other AWS services to build business-critical applications. In this pattern, you use Step Functions to orchestrate Lambda functions.

**Other tools**
+ [GitHub](https://github.com/) is a code-hosting service that provides collaboration tools and version control.
+ [Python](https://www.python.org/) is a high-level programming language.

**Code**

The code for this pattern is available in the GitHub [gfs-mainframe-patterns](https://github.com/aws-samples/gfs-mainframe-patterns.git) repository.

## Epics
<a name="build-an-advanced-mainframe-file-viewer-in-the-aws-cloud-epics"></a>

### Prepare the target environment
<a name="prepare-the-target-environment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the S3 bucket. | [Create an S3 bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/creating-bucket.html) for storing the copybooks, input files, and output files. We recommend the following folder structure for your S3 bucket:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/build-an-advanced-mainframe-file-viewer-in-the-aws-cloud.html) | General AWS | 
| Create the s3copybookparser function. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/build-an-advanced-mainframe-file-viewer-in-the-aws-cloud.html) | General AWS | 
| Create the s3toathena function. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/build-an-advanced-mainframe-file-viewer-in-the-aws-cloud.html) | General AWS | 
| Create the s3toelasticsearch function. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/build-an-advanced-mainframe-file-viewer-in-the-aws-cloud.html) | General AWS | 
| Create the OpenSearch Service cluster. | **Create the cluster**[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/build-an-advanced-mainframe-file-viewer-in-the-aws-cloud.html)**Grant access to the IAM role**To provide fine-grained access to the Lambda function’s IAM role (`arn:aws:iam::**:role/service-role/s3toelasticsearch-role-**`), do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/build-an-advanced-mainframe-file-viewer-in-the-aws-cloud.html) | General AWS | 
| Create Step Functions for orchestration. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/build-an-advanced-mainframe-file-viewer-in-the-aws-cloud.html) | General AWS | 

### Deploy and run
<a name="deploy-and-run"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Upload the input files and copybooks to the S3 bucket. | Download sample files from the sample folder of the [GitHub repository](https://github.com/aws-samples/gfs-mainframe-patterns.git) and upload the files to the S3 bucket that you created earlier.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/build-an-advanced-mainframe-file-viewer-in-the-aws-cloud.html) | General AWS | 
| Invoke the Step Functions. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/build-an-advanced-mainframe-file-viewer-in-the-aws-cloud.html)<pre>{<br />  "s3_copybook_bucket_name": "<BUCKET NAME>",<br />  "s3_copybook_bucket_key": "<COPYBOOK PATH>",<br />  "s3_source_bucket_name": "<BUCKET NAME>",<br />  "s3_source_bucket_key": "<INPUT FILE PATH>"<br />}</pre>For example:<pre>{<br />  "s3_copybook_bucket_name": "fileaidtest",<br />  "s3_copybook_bucket_key": "copybook/acctix.cpy",<br />  "s3_source_bucket_name": "fileaidtest",<br />  "s3_source_bucket_key": "input/acctindex"<br />}</pre> | General AWS | 
| Validate the workflow execution in Step Functions. | In the [Step Functions console](https://console.aws.amazon.com/states/home), review the workflow execution in the **Graph inspector**. The execution run states are color coded to represent execution status. For example, blue indicates **In Progress**, green indicates **Succeeded**, and red indicates **Failed**. You can also review the table in the **Execution event history** section for more detailed information about the execution events.For an example of a graphical workflow execution, see *Step Functions graph* in the *Additional information* section of this pattern. | General AWS | 
| Validate the delivery logs in Amazon CloudWatch. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/build-an-advanced-mainframe-file-viewer-in-the-aws-cloud.html)For an example of successful delivery logs, see *CloudWatch delivery logs* in the *Additional information* section of this pattern. | General AWS | 
| Validate the formatted file in OpenSearch Dashboards and perform file operations. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/build-an-advanced-mainframe-file-viewer-in-the-aws-cloud.html) | General AWS | 
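
The manual invocation in the table above can also be scripted. In the following sketch, the state machine ARN is a placeholder you must supply, and the payload keys mirror the example input shown in the table; the function names are illustrative, not part of the pattern's code.

```python
import json


def build_sfn_input(bucket, copybook_key, input_key):
    """Assemble the workflow input payload used by this pattern."""
    return {
        "s3_copybook_bucket_name": bucket,
        "s3_copybook_bucket_key": copybook_key,
        "s3_source_bucket_name": bucket,
        "s3_source_bucket_key": input_key,
    }


def start_conversion(state_machine_arn, payload):
    """Start the workflow; requires AWS credentials, so it is not run here."""
    import boto3  # deferred import keeps the sketch readable offline

    return boto3.client("stepfunctions").start_execution(
        stateMachineArn=state_machine_arn, input=json.dumps(payload)
    )
```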

## Related resources
<a name="build-an-advanced-mainframe-file-viewer-in-the-aws-cloud-resources"></a>

**References**
+ [Example COBOL copybook](https://www.ibm.com/docs/en/record-generator/3.0?topic=SSMQ4D_3.0.0/documentation/cobol_rcg_examplecopybook.html) (IBM documentation)
+ [BMC Compuware File-AID](https://www.bmc.com/it-solutions/bmc-compuware-file-aid.html) (BMC documentation)

**Tutorials**
+ [Tutorial: Using an Amazon S3 trigger to invoke a Lambda function](https://docs.aws.amazon.com/lambda/latest/dg/with-s3-example.html) (AWS Lambda documentation)
+ [How do I create a serverless workflow with AWS Step Functions and AWS Lambda](https://aws.amazon.com/getting-started/hands-on/create-a-serverless-workflow-step-functions-lambda/) (AWS documentation)
+ [Using OpenSearch Dashboards with Amazon OpenSearch Service](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/dashboards.html) (AWS documentation)

## Additional information
<a name="build-an-advanced-mainframe-file-viewer-in-the-aws-cloud-additional"></a>

**Step Functions graph**

The following example shows a Step Functions graph. The graph shows the execution run status for the Lambda functions used in this pattern.

![\[Step Functions graph shows execution run status for the Lambda functions used in this pattern.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/36d72b00-d163-455f-9e59-e2c872e7c28a/images/11093e5d-2f9e-4bbf-8abc-f3b2980dd550.png)


**CloudWatch delivery logs**

The following example shows successful delivery logs for the execution of the `s3toelasticsearch` function.


```
2022-08-10T15:53:33.033-05:00  Number of processing documents: 100
2022-08-10T15:53:33.171-05:00  [INFO] 2022-08-10T20:53:33.171Z a1b2c3d4-5678-90ab-cdef-EXAMPLE11111 POST https://search-essearch-3h4uqclifeqaj2vg4mphe7ffle.us-east-2.es.amazonaws.com:443/_bulk [status:200 request:0.100s]
2022-08-10T15:53:33.172-05:00  Bulk write succeed: 100 documents
```

# Containerize mainframe workloads that have been modernized by Blu Age
<a name="containerize-mainframe-workloads-that-have-been-modernized-by-blu-age"></a>

*Richard Milner-Watts, Amazon Web Services*

## Summary
<a name="containerize-mainframe-workloads-that-have-been-modernized-by-blu-age-summary"></a>

This pattern provides a sample container environment for running mainframe workloads that have been modernized by using the [Blu Age](https://www.bluage.com/) tool. Blu Age converts legacy mainframe workloads into modern Java code. This pattern provides a wrapper around the Java application so you can run it by using container orchestration services such as [Amazon Elastic Container Service (Amazon ECS)](https://aws.amazon.com/ecs/) or [Amazon Elastic Kubernetes Service (Amazon EKS)](https://aws.amazon.com/eks/).

For more information about modernizing your workloads by using Blu Age and AWS services, see these AWS Prescriptive Guidance publications:
+ [Running modernized Blu Age mainframe workloads on serverless AWS infrastructure](https://docs.aws.amazon.com/prescriptive-guidance/latest/run-bluage-modernized-mainframes/)
+ [Deploy an environment for containerized Blu Age applications by using Terraform](https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-an-environment-for-containerized-blu-age-applications-by-using-terraform.html)

For assistance with using Blu Age to modernize your mainframe workloads, contact the Blu Age team by choosing **Contact our experts** on the [Blu Age website](https://www.bluage.com/). For assistance with migrating your modernized workloads to AWS, integrating them with AWS services, and moving them into production, contact your AWS account manager or fill out the [AWS Professional Services form](https://pages.awscloud.com/AWS-Professional-Services.html).

## Prerequisites and limitations
<a name="containerize-mainframe-workloads-that-have-been-modernized-by-blu-age-prereqs"></a>

**Prerequisites**
+ A modernized Java application that was created by Blu Age. For testing purposes, this pattern provides a sample Java application that you can use as a proof of concept.
+ A [Docker](https://aws.amazon.com/docker/) environment that you can use to build the container.

**Limitations**

Depending on the container orchestration platform that you use, the resources that can be made available to the container (such as CPU, RAM, and storage) might be limited. For example, if you’re using Amazon ECS with AWS Fargate, see the [Amazon ECS documentation](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/AWS_Fargate.html) for limits and considerations.

## Architecture
<a name="containerize-mainframe-workloads-that-have-been-modernized-by-blu-age-architecture"></a>

**Source technology stack**
+ Blu Age
+ Java

**Target technology stack**
+ Docker

**Target architecture**

The following diagram shows the architecture of the Blu Age application within a Docker container.

![\[Blu Age application in Docker container\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/c1747094-357b-4222-b4eb-b1336d810f83/images/0554332d-eff5-49ca-9789-da39b5a10045.png)


1. The entry point for the container is the wrapper script. This bash script is responsible for preparing the runtime environment for the Blu Age application and processing outputs.

1. Environment variables within the container are used to configure variables in the wrapper script, such as the Amazon Simple Storage Service (Amazon S3) bucket names and database credentials. Environment variables are supplied by either AWS Secrets Manager or Parameter Store, a capability of AWS Systems Manager. If you’re using Amazon ECS as your container orchestration service, you can also hardcode the environment variables in the Amazon ECS task definition.

1. The wrapper script is responsible for pulling any input files from the S3 bucket into the container before you run the Blu Age application. The AWS Command Line Interface (AWS CLI) is installed within the container. This provides a mechanism for accessing objects that are stored in Amazon S3 through the gateway virtual private cloud (VPC) endpoint.

1. The Java Archive (JAR) file for the Blu Age application might need to communicate with other data sources, such as Amazon Aurora.

1. After completion, the wrapper script delivers the resulting output files into an S3 bucket for further processing (for example, by Amazon CloudWatch logging services). The pattern also supports delivering zipped log files to Amazon S3, if you’re using an alternative to standard CloudWatch logging.

## Tools
<a name="containerize-mainframe-workloads-that-have-been-modernized-by-blu-age-tools"></a>

**AWS services**
+ [Amazon Elastic Container Registry (Amazon ECR)](https://docs.aws.amazon.com/AmazonECR/latest/userguide/what-is-ecr.html) is a managed container image registry service that’s secure, scalable, and reliable.
+ [Amazon Elastic Container Service (Amazon ECS)](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html) is a fast and scalable container management service that helps you run, stop, and manage containers on a cluster.

**Tools**
+ [Docker](https://aws.amazon.com/docker/) is a software platform for building, testing, and deploying applications. Docker packages software into standardized units called [containers](https://aws.amazon.com/containers/), which have everything the software needs to run, including libraries, system tools, code, and runtime. You can use Docker to deploy and scale applications into any environment.
+ [Bash](https://www.gnu.org/software/bash/manual/) is a command language interpreter (shell) for the GNU operating system.
+ [Java](https://www.java.com/) is the programming language and development environment used in this pattern.
+ [Blu Age](https://www.bluage.com/) is an AWS mainframe modernization tool that converts legacy mainframe workloads, including application code, dependencies, and infrastructure, into modern workloads for the cloud.

**Code repository**

The code for this pattern is available in the GitHub [Blu Age sample container repository](https://github.com/aws-samples/aws-blu-age-sample-container).

## Best practices
<a name="containerize-mainframe-workloads-that-have-been-modernized-by-blu-age-best-practices"></a>
+ Externalize the variables for altering your application’s behavior by using environment variables. These variables enable the container orchestration solution to alter the runtime environment without rebuilding the container. This pattern includes examples of environment variables that can be useful for Blu Age applications.
+ Validate any application dependencies before you run your Blu Age application. For example, verify that the database is available and credentials are valid. Write tests in the wrapper script to verify dependencies, and fail early if they are not met.
+ Use verbose logging within the wrapper script. Interacting directly with a running container can be challenging, depending on the orchestration platform and how long the job takes. Make sure that useful output is written to `STDOUT` to help diagnose any issues. For example, output might include the contents of the application’s working directory both before and after you run the application.
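
The sample wrapper is a bash script, but the fail-fast idea behind these practices can be sketched in Python; the variable names below are illustrative assumptions, not the ones the sample container actually reads.

```python
import os

# Illustrative settings a Blu Age wrapper might need; adjust to your container.
REQUIRED_VARS = ("INPUT_S3_BUCKET", "OUTPUT_S3_BUCKET", "DB_SECRET_ARN")


def load_config(environ=os.environ):
    """Collect required settings, failing early if any are missing."""
    missing = [name for name in REQUIRED_VARS if not environ.get(name)]
    if missing:
        # Fail before the Blu Age application starts, with a verbose message.
        raise RuntimeError(f"missing required environment variables: {missing}")
    return {name: environ[name] for name in REQUIRED_VARS}
```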

## Epics
<a name="containerize-mainframe-workloads-that-have-been-modernized-by-blu-age-epics"></a>

### Obtain a Blu Age application JAR file
<a name="obtain-a-blu-age-application-jar-file"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Option 1 - Work with Blu Age to obtain your application's JAR file. | The container in this pattern requires a Blu Age application. Alternatively, you can use the sample Java application that’s provided with this pattern for a prototype. Work with the Blu Age team to obtain a JAR file for your application that can be baked into the container. If the JAR file isn’t available, see the next task to use the sample application instead. | Cloud architect | 
| Option 2 - Build or use the supplied sample application JAR file. | This pattern provides a prebuilt sample JAR file. This file outputs the application’s environment variables to `STDOUT` before sleeping for 30 seconds and exiting. This file is named `bluAgeSample.jar` and is located in the [docker folder](https://github.com/aws-samples/aws-blu-age-sample-container/tree/main/docker) of the GitHub repository. If you want to alter the code and build your own version of the JAR file, use the source code located at [./java_sample/src/sample_java_app.java](https://github.com/aws-samples/aws-blu-age-sample-container/tree/main/java_sample/src) in the GitHub repository. You can use the build script at [./java_sample/build.sh](https://github.com/aws-samples/aws-blu-age-sample-container/tree/main/java_sample) to compile the Java source and build a new JAR file. | App developer | 

### Build the Blu Age container
<a name="build-the-blu-age-container"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clone the GitHub repository. | Clone the sample code repository by using the command:<pre>git clone https://github.com/aws-samples/aws-blu-age-sample-container</pre> | AWS DevOps | 
| Use Docker to build the container. | Use Docker to build the container before you push it to a Docker registry such as Amazon ECR:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/containerize-mainframe-workloads-that-have-been-modernized-by-blu-age.html) | AWS DevOps | 
| Test the Blu Age container. | (Optional) If necessary, test the container locally by using the command:<pre>docker run -it <tag> /bin/bash</pre> | AWS DevOps | 
| Authenticate to your Docker repository. | If you plan to use Amazon ECR, follow the instructions in the [Amazon ECR documentation](https://docs.aws.amazon.com/AmazonECR/latest/userguide/getting-started-cli.html) to install and configure the AWS CLI and authenticate the Docker CLI to your default registry. We recommend that you use the [get-login-password command](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/ecr/get-login-password.html) for authentication. The [Amazon ECR console](https://console.aws.amazon.com/ecr/) provides a pre-populated version of this command if you use the **View push commands** button. For more information, see the [Amazon ECR documentation](https://docs.aws.amazon.com/AmazonECR/latest/userguide/getting-started-console.html).<pre>aws ecr get-login-password --region <region> | docker login --username AWS --password-stdin <account>.dkr.ecr.<region>.amazonaws.com</pre>If you don’t plan to use Amazon ECR, follow the instructions provided for your container registry system. | AWS DevOps | 
| Create a container repository. | Create a repository in Amazon ECR. For instructions, see the pattern [Deploy an environment for containerized Blu Age applications by using Terraform](https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-an-environment-for-containerized-blu-age-applications-by-using-terraform.html).If you’re using another container registry system, follow the instructions provided for that system. | AWS DevOps | 
| Tag and push your container to the target repository. | If you're using Amazon ECR:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/containerize-mainframe-workloads-that-have-been-modernized-by-blu-age.html)For more information, see [Pushing a Docker image](https://docs.aws.amazon.com/AmazonECR/latest/userguide/docker-push-ecr-image.html) in the *Amazon ECR User Guide*. | AWS DevOps | 

## Related resources
<a name="containerize-mainframe-workloads-that-have-been-modernized-by-blu-age-resources"></a>

**AWS resources**
+ [AWS Blu Age sample container repository](https://github.com/aws-samples/aws-blu-age-sample-container)
+ [Running modernized Blu Age mainframe workloads on serverless AWS infrastructure](https://docs.aws.amazon.com/prescriptive-guidance/latest/run-bluage-modernized-mainframes/)
+ [Deploy an environment for containerized Blu Age applications by using Terraform](https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-an-environment-for-containerized-blu-age-applications-by-using-terraform.html)
+ [Using Amazon ECR with the AWS CLI](https://docs.aws.amazon.com/AmazonECR/latest/userguide/getting-started-cli.html) (*Amazon ECR User Guide*)
+ [Private registry authentication](https://docs.aws.amazon.com/AmazonECR/latest/userguide/registry_auth.html) (*Amazon ECR User Guide*)
+ [Amazon ECS documentation](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html)
+ [Amazon EKS documentation](https://docs.aws.amazon.com/eks/latest/userguide/what-is-eks.html)

**Additional resources**
+ [Blu Age website](https://www.bluage.com/)
+ [Docker website](https://docker.com/)

# Convert and unpack EBCDIC data to ASCII on AWS by using Python
<a name="convert-and-unpack-ebcdic-data-to-ascii-on-aws-by-using-python"></a>

*Luis Gustavo Dantas, Amazon Web Services*

## Summary
<a name="convert-and-unpack-ebcdic-data-to-ascii-on-aws-by-using-python-summary"></a>

Because mainframes typically host critical business data, modernizing data is one of the most important tasks when migrating data to the Amazon Web Services (AWS) Cloud or another American Standard Code for Information Interchange (ASCII) environment. On mainframes, data is typically encoded in extended binary-coded decimal interchange code (EBCDIC) format. Exporting database, Virtual Storage Access Method (VSAM), or flat files generally produces packed, binary EBCDIC files, which are more complex to migrate. The most commonly used database migration solution is change data capture (CDC), which, in most cases, automatically converts data encoding. However, CDC mechanisms might not be available for these database, VSAM, or flat files. For these files, an alternative approach is required to modernize the data.

This pattern describes how to modernize EBCDIC data by converting it to ASCII format. After conversion, you can load the data into distributed databases or have applications in the cloud process the data directly. The pattern uses the conversion script and sample files in the [mainframe-data-utilities](https://github.com/aws-samples/mainframe-data-utilities) GitHub repository.

## Prerequisites and limitations
<a name="convert-and-unpack-ebcdic-data-to-ascii-on-aws-by-using-python-prereqs"></a>

**Prerequisites**
+ An active AWS account.
+ An EBCDIC input file and its corresponding common business-oriented language (COBOL) copybook. A sample EBCDIC file and COBOL copybook are included in the [mainframe-data-utilities](https://github.com/aws-samples/mainframe-data-utilities) GitHub repository. For more information about COBOL copybooks, see [Enterprise COBOL for z/OS 6.4 Programming Guide](https://publibfp.dhe.ibm.com/epubs/pdf/igy6pg40.pdf) on the IBM website.

**Limitations**
+ File layouts defined inside COBOL programs are not supported. They must be made available separately.

**Product versions**
+ Python version 3.8 or later

## Architecture
<a name="convert-and-unpack-ebcdic-data-to-ascii-on-aws-by-using-python-architecture"></a>

**Source technology stack**
+ EBCDIC data on a mainframe
+ COBOL copybook

**Target technology stack**
+ Amazon Elastic Compute Cloud (Amazon EC2) instance in a virtual private cloud (VPC)
+ Amazon Elastic Block Store (Amazon EBS)
+ Python and its required modules: `json`, `sys`, and `datetime`
+ ASCII flat file ready to be read by a modern application or loaded in a relational database table

**Target architecture**

![\[EBCDIC data converted to ASCII on an EC2 instance by using Python scripts and a COBOL copybook\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/f5907bfe-7dff-4cd0-8523-57015ad48c4b/images/4f97b1dd-3f20-4966-a291-22180680ea99.png)


The architecture diagram shows the process of converting an EBCDIC file to an ASCII file on an EC2 instance:

1. Using the `parse_copybook_to_json.py` script, you convert the COBOL copybook to a JSON file.

1. Using the JSON file and the `extract_ebcdic_to_ascii.py` script, you convert the EBCDIC data to an ASCII file.
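
The two conversions can be illustrated with a short sketch: plain EBCDIC character data decodes with Python's built-in `cp037` codec, and packed-decimal (COMP-3) fields unpack nibble by nibble. This is a simplified illustration of what the repository scripts do, not their actual code.

```python
import codecs


def ebcdic_to_ascii(raw: bytes) -> str:
    """Decode EBCDIC display (character) data with the cp037 code page."""
    return codecs.decode(raw, "cp037")


def unpack_comp3(raw: bytes) -> int:
    """Unpack a COMP-3 (packed decimal) field; the last nibble is the sign."""
    nibbles = []
    for byte in raw:
        nibbles.append(byte >> 4)
        nibbles.append(byte & 0x0F)
    sign = nibbles.pop()  # 0xD means negative; 0xC and 0xF mean positive
    value = int("".join(str(n) for n in nibbles))
    return -value if sign == 0x0D else value
```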

**Automation and scale**

After the resources needed for the first manual file conversions are in place, you can automate file conversion. This pattern doesn’t include instructions for automation. There are multiple ways to automate the conversion. The following is an overview of one possible approach:

1. Encapsulate the AWS Command Line Interface (AWS CLI) and Python script commands into a shell script.

1. Create an AWS Lambda function that asynchronously submits the shell script job into an EC2 instance. For more information, see [Scheduling SSH jobs using AWS Lambda](https://aws.amazon.com/blogs/compute/scheduling-ssh-jobs-using-aws-lambda/).

1. Create an Amazon Simple Storage Service (Amazon S3) trigger that invokes the Lambda function every time a legacy file is uploaded. For more information, see [Using an Amazon S3 trigger to invoke a Lambda function](https://docs.aws.amazon.com/lambda/latest/dg/with-s3-example.html).
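
A sketch of the Lambda function from steps 2 and 3: it extracts each uploaded object's location from the S3 event and hands it to the job-submission mechanism. `submit_conversion` is a hypothetical placeholder for the shell-script submission over SSH described above.

```python
def lambda_handler(event, context, submit_conversion=print):
    """Route each uploaded S3 object to the EBCDIC conversion job."""
    converted = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        submit_conversion(f"s3://{bucket}/{key}")  # placeholder submission
        converted.append((bucket, key))
    return converted
```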

## Tools
<a name="convert-and-unpack-ebcdic-data-to-ascii-on-aws-by-using-python-tools"></a>

**AWS services**
+ [Amazon Elastic Compute Cloud (Amazon EC2)](https://docs.aws.amazon.com/ec2/?id=docs_gateway) provides scalable computing capacity in the AWS Cloud. You can launch as many virtual servers as you need, and quickly scale them up or down.
+ [Amazon Elastic Block Store (Amazon EBS)](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonEBS.html) provides block-level storage volumes for use with Amazon Elastic Compute Cloud (Amazon EC2) instances.
+ [AWS Command Line Interface (AWS CLI)](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) is an open-source tool that helps you interact with AWS services through commands in your command-line shell.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.

**Other tools**
+ [GitHub](https://github.com/) is a code-hosting service that provides collaboration tools and version control.
+ [Python](https://www.python.org/) is a high-level programming language.

**Code repository**

The code for this pattern is available in the [mainframe-data-utilities](https://github.com/aws-samples/mainframe-data-utilities) GitHub repository.

## Epics
<a name="convert-and-unpack-ebcdic-data-to-ascii-on-aws-by-using-python-epics"></a>

### Prepare the EC2 instance
<a name="prepare-the-ec2-instance"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Launch an EC2 instance. | The EC2 instance must have outbound internet access. This allows the instance to access the Python source code available on GitHub. To create the instance:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/convert-and-unpack-ebcdic-data-to-ascii-on-aws-by-using-python.html) | General AWS | 
| Install Git. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/convert-and-unpack-ebcdic-data-to-ascii-on-aws-by-using-python.html) | General AWS, Linux | 
| Install Python. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/convert-and-unpack-ebcdic-data-to-ascii-on-aws-by-using-python.html) | General AWS, Linux | 
| Clone the GitHub repository. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/convert-and-unpack-ebcdic-data-to-ascii-on-aws-by-using-python.html) | General AWS, GitHub | 

### Create the ASCII file from the EBCDIC data
<a name="create-the-ascii-file-from-the-ebcdic-data"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Parse the COBOL copybook into the JSON layout file. | Inside the `mainframe-data-utilities` folder, run the **parse\_copybook\_to\_json.py** script. This automation module reads the file layout from a COBOL copybook and creates the JSON metadata file. The JSON file contains the information required to interpret and extract the data from the source file. The following command converts the COBOL copybook to a JSON file.<pre>python3 parse_copybook_to_json.py \<br />-copybook LegacyReference/COBPACK2.cpy \<br />-output sample-data/cobpack2-list.json \<br />-dict sample-data/cobpack2-dict.json \<br />-ebcdic sample-data/COBPACK.OUTFILE.txt \<br />-ascii sample-data/COBPACK.ASCII.txt \<br />-print 10000</pre>The script prints the received arguments.<pre>-----------------------------------------------------------------------<br />Copybook file...............| LegacyReference/COBPACK2.cpy<br />Parsed copybook (JSON List).| sample-data/cobpack2-list.json<br />JSON Dict (documentation)...| sample-data/cobpack2-dict.json<br />ASCII file..................| sample-data/COBPACK.ASCII.txt<br />EBCDIC file.................| sample-data/COBPACK.OUTFILE.txt<br />Print each..................| 10000<br />-----------------------------------------------------------------------</pre>For more information about the arguments, see the [README file](https://github.com/aws-samples/mainframe-data-utilities/blob/main/README.md) in the GitHub repository. | General AWS, Linux | 
| Inspect the JSON layout file. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/convert-and-unpack-ebcdic-data-to-ascii-on-aws-by-using-python.html)<pre> "input": "extract-ebcdic-to-ascii/COBPACK.OUTFILE.txt",<br /> "output": "extract-ebcdic-to-ascii/COBPACK.ASCII.txt",<br /> "max": 0,<br /> "skip": 0,<br /> "print": 10000,<br /> "lrecl": 150,<br /> "rem-low-values": true,<br /> "separator": "|",<br /> "transf": [<br /> {<br /> "type": "ch",<br /> "bytes": 19,<br /> "name": "OUTFILE-TEXT"<br /> } </pre>The most important attributes of the JSON layout file are:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/convert-and-unpack-ebcdic-data-to-ascii-on-aws-by-using-python.html)For more information about the JSON layout file, see the [README file](https://github.com/aws-samples/mainframe-data-utilities/blob/main/README.md) in the GitHub repository. | General AWS, JSON | 
| Create the ASCII file. | Run the **extract\_ebcdic\_to\_ascii.py** script, which is included in the cloned GitHub repository. This script reads the EBCDIC file and writes a converted and readable ASCII file.<pre>python3 extract_ebcdic_to_ascii.py -local-json sample-data/cobpack2-list.json</pre>As the script processes the EBCDIC data, it prints a message for every batch of 10,000 records. See the following example.<pre>------------------------------------------------------------------<br />2023-05-15 21:21:46.322253 | Local Json file   | -local-json | sample-data/cobpack2-list.json<br />2023-05-15 21:21:47.034556 | Records processed | 10000<br />2023-05-15 21:21:47.736434 | Records processed | 20000<br />2023-05-15 21:21:48.441696 | Records processed | 30000<br />2023-05-15 21:21:49.173781 | Records processed | 40000<br />2023-05-15 21:21:49.874779 | Records processed | 50000<br />2023-05-15 21:21:50.705873 | Records processed | 60000<br />2023-05-15 21:21:51.609335 | Records processed | 70000<br />2023-05-15 21:21:52.292989 | Records processed | 80000<br />2023-05-15 21:21:52.938366 | Records processed | 89280<br />2023-05-15 21:21:52.938448 Seconds 6.616232</pre>For information about how to change the print frequency, see the [README file](https://github.com/aws-samples/mainframe-data-utilities/blob/main/README.md) in the GitHub repository. | General AWS | 
| Examine the ASCII file. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/convert-and-unpack-ebcdic-data-to-ascii-on-aws-by-using-python.html)If you used the sample EBCDIC file provided, the following is the first record in the ASCII file.<pre>00000000: 2d30 3030 3030 3030 3030 3130 3030 3030  -000000000100000<br />00000010: 3030 307c 3030 3030 3030 3030 3031 3030  000|000000000100<br />00000020: 3030 3030 3030 7c2d 3030 3030 3030 3030  000000|-00000000<br />00000030: 3031 3030 3030 3030 3030 7c30 7c30 7c31  0100000000|0|0|1<br />00000040: 3030 3030 3030 3030 7c2d 3130 3030 3030  00000000|-100000<br />00000050: 3030 307c 3130 3030 3030 3030 307c 2d31  000|100000000|-1<br />00000060: 3030 3030 3030 3030 7c30 3030 3030 7c30  00000000|00000|0<br />00000070: 3030 3030 7c31 3030 3030 3030 3030 7c2d  0000|100000000|-<br />00000080: 3130 3030 3030 3030 307c 3030 3030 3030  100000000|000000<br />00000090: 3030 3030 3130 3030 3030 3030 307c 2d30  0000100000000|-0<br />000000a0: 3030 3030 3030 3030 3031 3030 3030 3030  0000000001000000<br />000000b0: 3030 7c41 7c41 7c0a                      00|A|A|.</pre> | General AWS, Linux | 
| Evaluate the EBCDIC file. | In a terminal session on the EC2 instance, enter the following command. This displays the first record (150 bytes) of the EBCDIC file.<pre>head sample-data/COBPACK.OUTFILE.txt -c 150 | xxd</pre>If you used the sample EBCDIC file, the following is the result.<pre> 00000000: 60f0 f0f0 f0f0 f0f0 f0f0 f1f0 f0f0 f0f0 `...............<br /> 00000010: f0f0 f0f0 f0f0 f0f0 f0f0 f0f0 f1f0 f0f0 ................<br /> 00000020: f0f0 f0f0 f0f0 f0f0 f0f0 f0f0 f0f0 f1f0 ................<br /> 00000030: f0f0 f0f0 f0f0 d000 0000 0005 f5e1 00fa ................<br /> 00000040: 0a1f 0000 0000 0005 f5e1 00ff ffff fffa ................<br /> 00000050: 0a1f 0000 000f 0000 0c10 0000 000f 1000 ................<br /> 00000060: 0000 0d00 0000 0000 1000 0000 0f00 0000 ................<br /> 00000070: 0000 1000 0000 0dc1 c100 0000 0000 0000 ................<br /> 00000080: 0000 0000 0000 0000 0000 0000 0000 0000 ................<br /> 00000090: 0000 0000 0000 ......</pre>To evaluate the equivalence between the source and target files, comprehensive knowledge of EBCDIC is required. For example, the first character of the sample EBCDIC file is a hyphen (`-`). In the hexadecimal notation of the EBCDIC file, this character is represented by `60`, and in the hexadecimal notation of the ASCII file, this character is represented by `2D`. For an EBCDIC-to-ASCII conversion table, see [EBCDIC to ASCII](https://www.ibm.com/docs/en/iis/11.3?topic=tables-ebcdic-ascii) on the IBM website. | General AWS, Linux, EBCDIC | 
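The byte-level comparison above can also be reproduced in Python. The standard library ships the `cp037` codec for US EBCDIC character data, and packed-decimal (COMP-3) fields, which no codec covers, can be unpacked nibble by nibble. This is an illustrative sketch, independent of the repository's scripts.

```python
def unpack_comp3(data: bytes) -> int:
    """Unpack a COBOL packed-decimal (COMP-3) field: two decimal digits per
    byte, with the sign carried in the low nibble of the last byte."""
    digits = "".join(f"{b >> 4}{b & 0x0F}" for b in data[:-1])
    digits += str(data[-1] >> 4)                    # high nibble of last byte
    sign = -1 if (data[-1] & 0x0F) == 0x0D else 1   # 0xD marks negative
    return sign * int(digits)

# EBCDIC 0x60 is the hyphen that appears as ASCII 0x2D after conversion.
assert bytes([0x60]).decode("cp037") == "-"
assert bytes([0xF0, 0xF1]).decode("cp037") == "01"
# 0x12 0x3D packs the digits 1, 2, 3 with a negative sign nibble.
assert unpack_comp3(bytes([0x12, 0x3D])) == -123
```

The character mapping explains the `60` → `2D` and `f0` → `30` pairs visible in the hex dumps; the packed-decimal helper shows why binary regions of the EBCDIC dump have no printable equivalent until they are expanded into digits.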

## Related resources
<a name="convert-and-unpack-ebcdic-data-to-ascii-on-aws-by-using-python-resources"></a>

**References**
+ [The EBCDIC character set](https://www.ibm.com/docs/en/zos-basic-skills?topic=mainframe-ebcdic-character-set) (IBM documentation)
+ [EBCDIC to ASCII](https://www.ibm.com/docs/en/iis/11.3?topic=tables-ebcdic-ascii) (IBM documentation)
+ [COBOL](https://www.ibm.com/docs/en/i/7.1?topic=languages-cobol) (IBM documentation)
+ [Basic JCL concepts](https://www.ibm.com/docs/en/zos-basic-skills?topic=collection-basic-jcl-concepts) (IBM documentation)
+ [Connect to your Linux instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstances.html) (Amazon EC2 documentation)

**Tutorials**
+ [Scheduling SSH jobs using AWS Lambda](https://aws.amazon.com/blogs/compute/scheduling-ssh-jobs-using-aws-lambda/) (AWS blog post)
+ [Using an Amazon S3 trigger to invoke a Lambda function](https://docs.aws.amazon.com/lambda/latest/dg/with-s3-example.html) (AWS Lambda documentation)

# Convert mainframe files from EBCDIC format to character-delimited ASCII format in Amazon S3 using AWS Lambda
<a name="convert-mainframe-files-from-ebcdic-format-to-character-delimited-ascii-format-in-amazon-s3-using-aws-lambda"></a>

*Luis Gustavo Dantas, Amazon Web Services*

## Summary
<a name="convert-mainframe-files-from-ebcdic-format-to-character-delimited-ascii-format-in-amazon-s3-using-aws-lambda-summary"></a>

This pattern shows you how to launch an AWS Lambda function that automatically converts mainframe Extended Binary Coded Decimal Interchange Code (EBCDIC) files to character-delimited American Standard Code for Information Interchange (ASCII) files. The Lambda function runs after the EBCDIC files are uploaded to an Amazon Simple Storage Service (Amazon S3) bucket. After the file conversion, you can read the ASCII files on x86-based workloads or load the files into modern databases.

The file conversion approach demonstrated in this pattern can help you overcome the challenges of working with EBCDIC files in modern environments. Files encoded in EBCDIC often contain data represented in binary or packed-decimal formats, and their fields are fixed length. These characteristics create obstacles because modern x86-based workloads and distributed environments generally work with ASCII-encoded data and can't process EBCDIC files.

## Prerequisites and limitations
<a name="convert-mainframe-files-from-ebcdic-format-to-character-delimited-ascii-format-in-amazon-s3-using-aws-lambda-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ An Amazon S3 bucket
+ An AWS Identity and Access Management (IAM) user with administrative permissions
+ AWS CloudShell
+ [Python 3.8.0](https://www.python.org/downloads/release/python-380/) or later
+ A flat file encoded in EBCDIC and its corresponding data structure in a common business-oriented language (COBOL) copybook

**Note**  
This pattern uses a sample EBCDIC file ([CLIENT.EBCDIC.txt](https://github.com/aws-samples/mainframe-data-utilities/blob/main/sample-data/CLIENT.EBCDIC.txt)) and its corresponding COBOL copybook ([COBKS05.cpy](https://github.com/aws-samples/mainframe-data-utilities/blob/main/LegacyReference/COBKS05.cpy)). Both files are available in the GitHub [mainframe-data-utilities](https://github.com/aws-samples/mainframe-data-utilities) repository.

**Limitations**
+ COBOL copybooks usually hold multiple layout definitions. The [mainframe-data-utilities](https://github.com/aws-samples/mainframe-data-utilities) project can parse this kind of copybook but can't infer which layout to consider on data conversion. This is because copybooks don't hold this logic (which remains on COBOL programs instead). Consequently, you must manually configure the rules for selecting layouts after you parse the copybook.
+ This pattern is subject to [Lambda quotas](https://docs.aws.amazon.com/lambda/latest/dg/gettingstarted-limits.html).

## Architecture
<a name="convert-mainframe-files-from-ebcdic-format-to-character-delimited-ascii-format-in-amazon-s3-using-aws-lambda-architecture"></a>

**Source technology stack**
+ IBM z/OS, IBM i, and other EBCDIC systems
+ Sequential files with data encoded in EBCDIC (such as IBM Db2 unloads)
+ COBOL copybook

**Target technology stack**
+ Amazon S3
+ Amazon S3 event notification
+ IAM
+ Lambda function
+ Python 3.8 or later
+ Mainframe Data Utilities
+ JSON metadata
+ Character-delimited ASCII files

**Target architecture**

The following diagram shows an architecture for converting mainframe EBCDIC files to ASCII files.

![\[Architecture for converting mainframe EBCDIC files to ASCII files\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/97ab4129-2639-4733-86cb-962d91526df4/images/3ca7ca44-373a-434f-8c40-09e7c2abf5ec.png)


The diagram shows the following workflow:

1. The user runs the copybook parser script, which converts the COBOL copybook into a JSON file.

1. The user uploads the JSON metadata to an Amazon S3 bucket. This makes the metadata readable by the data conversion Lambda function.

1. The user or an automated process uploads the EBCDIC file to the Amazon S3 bucket.

1. The Amazon S3 event notification invokes the data conversion Lambda function.

1. IAM verifies that the Lambda function has read-write permissions for the Amazon S3 bucket.

1. Lambda reads the file from the Amazon S3 bucket and locally converts the file from EBCDIC to ASCII.

1. Lambda logs the process status in Amazon CloudWatch.

1. Lambda writes the ASCII file back to Amazon S3.

**Note**  
The copybook parser script runs a single time to perform the metadata conversion to JSON format, which is subsequently stored in an Amazon S3 bucket. After the initial conversion, all subsequent EBCDIC files that reference the same JSON file in the Amazon S3 bucket will use the existing metadata configuration.
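To make the metadata-driven conversion concrete, the following Python sketch shows how a JSON layout of the kind the parser emits (an `lrecl` record length, a `separator`, and a `transf` field list) drives the slicing of a fixed-length EBCDIC file. It handles only character (`"ch"`) fields and is a simplified illustration, not the repository's actual converter.

```python
def convert_file(ebcdic: bytes, layout: dict) -> str:
    """Split the file into fixed-length records of `lrecl` bytes, decode each
    "ch" field with the cp037 (US EBCDIC) codec, and join the fields with the
    configured separator."""
    lrecl, sep = layout["lrecl"], layout["separator"]
    lines = []
    for start in range(0, len(ebcdic), lrecl):
        record, pos, fields = ebcdic[start:start + lrecl], 0, []
        for field in layout["transf"]:
            chunk = record[pos:pos + field["bytes"]]
            pos += field["bytes"]
            if field["type"] == "ch":   # packed/binary field types omitted here
                fields.append(chunk.decode("cp037").rstrip())
        lines.append(sep.join(fields))
    return "\n".join(lines)
```

Because the layout alone determines where each field starts and ends, the same metadata object in Amazon S3 can convert any number of files that share that record structure.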

## Tools
<a name="convert-mainframe-files-from-ebcdic-format-to-character-delimited-ascii-format-in-amazon-s3-using-aws-lambda-tools"></a>

**AWS services**
+ [Amazon CloudWatch](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/WhatIsCloudWatch.html) helps you monitor the metrics of your AWS resources and the applications that you run on AWS in real time.
+ [Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) is a cloud-based object storage service that helps you store, protect, and retrieve any amount of data.
+ [AWS CloudShell](https://docs.aws.amazon.com/cloudshell/latest/userguide/welcome.html) is a browser-based shell that you can use to manage AWS services by using the AWS Command Line Interface (AWS CLI) and a range of preinstalled development tools.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a compute service that helps you run code without needing to provision or manage servers. Lambda runs your code only when needed and scales automatically, so you pay only for the compute time that you use.

**Other tools**
+ [GitHub](https://github.com/) is a code-hosting service that provides collaboration tools and version control.
+ [Python](https://www.python.org/) is a high-level programming language.

**Code**

The code for this pattern is available in the GitHub [mainframe-data-utilities](https://github.com/aws-samples/mainframe-data-utilities) repository.

## Best practices
<a name="convert-mainframe-files-from-ebcdic-format-to-character-delimited-ascii-format-in-amazon-s3-using-aws-lambda-best-practices"></a>

Consider the following best practices:
+ Set the required permissions at the Amazon Resource Name (ARN) level.
+ Always grant least-privilege permissions for IAM policies. For more information, see [Security best practices in IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html) in the IAM documentation.

## Epics
<a name="convert-mainframe-files-from-ebcdic-format-to-character-delimited-ascii-format-in-amazon-s3-using-aws-lambda-epics"></a>

### Create environment variables and a working folder
<a name="create-environment-variables-and-a-working-folder"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the environment variables. | Copy the following environment variables to a text editor, and then replace the `<placeholder>` values in the following example with your resource values:<pre>bucket=<your_bucket_name><br />account=<your_account_number><br />region=<your_region_code></pre>Later commands reference your Amazon S3 bucket, AWS account, and AWS Region through these variables. To define the environment variables, open the [CloudShell console](https://console.aws.amazon.com/cloudshell/), and then copy and paste your updated environment variables onto the command line. You must repeat this step every time the CloudShell session restarts. | General AWS | 
| Create a working folder. | To simplify the resource clean-up process later on, create a working folder in CloudShell by running the following command:<pre>mkdir workdir; cd workdir</pre>You must change the directory to the working directory (`workdir`) every time you lose a connection to your CloudShell session. | General AWS | 

### Define an IAM role and policy
<a name="define-an-iam-role-and-policy"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a trust policy for the Lambda function. | The EBCDIC converter runs in a Lambda function. The function must have an IAM role. Before you create the IAM role, you must define a trust policy document that allows the Lambda service to assume the role. From the CloudShell working folder, create a policy document by running the following command:<pre>E2ATrustPol=$(cat <<EOF<br />{<br />    "Version": "2012-10-17",<br />    "Statement": [<br />        {<br />            "Effect": "Allow",<br />            "Principal": {<br />                "Service": "lambda.amazonaws.com"<br />            },<br />            "Action": "sts:AssumeRole"<br />        }<br />    ]<br />}<br />EOF<br />)<br />printf "$E2ATrustPol" > E2ATrustPol.json</pre> | General AWS | 
| Create the IAM role for Lambda conversion. | To create an IAM role, run the following AWS CLI command from the CloudShell working folder:<pre>aws iam create-role --role-name E2AConvLambdaRole --assume-role-policy-document file://E2ATrustPol.json</pre> | General AWS | 
| Create the IAM policy document for the Lambda function. | The Lambda function must have read-write access to the Amazon S3 bucket and write permissions for Amazon CloudWatch Logs.To create an IAM policy, run the following command from the CloudShell working folder:<pre>E2APolicy=$(cat <<EOF<br />{<br />    "Version": "2012-10-17",		 	 	 <br />    "Statement": [<br />        {<br />            "Sid": "Logs",<br />            "Effect": "Allow",<br />            "Action": [<br />                "logs:PutLogEvents",<br />                "logs:CreateLogStream",<br />                "logs:CreateLogGroup"<br />            ],<br />            "Resource": [<br />                "arn:aws:logs:*:*:log-group:*",<br />                "arn:aws:logs:*:*:log-group:*:log-stream:*"<br />            ]<br />        },<br />        {<br />            "Sid": "S3",<br />            "Effect": "Allow",<br />            "Action": [<br />                "s3:GetObject",<br />                "s3:PutObject",<br />                "s3:GetObjectVersion"<br />            ],<br />            "Resource": [<br />                "arn:aws:s3:::%s/*",<br />                "arn:aws:s3:::%s"<br />            ]<br />        }<br />    ]<br />}<br />EOF<br />)<br />printf "$E2APolicy" "$bucket" "$bucket" > E2AConvLambdaPolicy.json</pre> | General AWS | 
| Attach the IAM policy document to the IAM role. | To attach the IAM policy to the IAM role, enter the following command from your CloudShell working folder:<pre>aws iam put-role-policy --role-name E2AConvLambdaRole --policy-name E2AConvLambdaPolicy --policy-document file://E2AConvLambdaPolicy.json</pre> | General AWS | 

### Create the Lambda function for EBCDIC conversion
<a name="create-the-lam-function-for-ebcdic-conversion"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Download the EBCDIC conversion source code. | From the CloudShell working folder, run the following command to download the mainframe-data-utilities source code from GitHub:<pre>git clone https://github.com/aws-samples/mainframe-data-utilities.git mdu</pre> | General AWS | 
| Create the ZIP package. | From the CloudShell working folder, enter the following command to create the ZIP package that creates the Lambda function for EBCDIC conversion:<pre>cd mdu; zip ../mdu.zip *.py; cd ..</pre> | General AWS | 
| Create the Lambda function. | From the CloudShell working folder, enter the following command to create the Lambda function for EBCDIC conversion:<pre>aws lambda create-function \<br />--function-name E2A \<br />--runtime python3.9 \<br />--zip-file fileb://mdu.zip \<br />--handler extract_ebcdic_to_ascii.lambda_handler \<br />--role arn:aws:iam::$account:role/E2AConvLambdaRole \<br />--timeout 10 \<br />--environment "Variables={layout=$bucket/layout/}"</pre> The `layout` environment variable tells the Lambda function where the JSON metadata resides. | General AWS | 
| Create the resource-based policy for the Lambda function. | From the CloudShell working folder, enter the following command to allow your Amazon S3 event notification to invoke the Lambda function for EBCDIC conversion:<pre>aws lambda add-permission \<br />--function-name E2A \<br />--action lambda:InvokeFunction \<br />--principal s3.amazonaws.com \<br />--source-arn arn:aws:s3:::$bucket \<br />--source-account $account \<br />--statement-id 1</pre> | General AWS | 
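Conceptually, the function created in this epic receives the S3 event, finds its JSON metadata through the `layout` environment variable, and converts the uploaded object. The following Python sketch outlines that flow. The naming rule in `layout_key_for` is an assumption made for illustration; the repository's `extract_ebcdic_to_ascii.py` defines the actual behavior.

```python
import json
import os

def layout_key_for(input_key: str, layout_prefix: str) -> str:
    """Map an uploaded data file to a JSON metadata key by base name, for
    example input/CLIENT.EBCDIC.txt -> layout/CLIENT.json. This naming rule
    is hypothetical, not the repository's documented contract."""
    base = os.path.basename(input_key).split(".")[0]
    return f"{layout_prefix}{base}.json"

def lambda_handler(event, context):
    import boto3  # provided by the Lambda Python runtime
    s3 = boto3.client("s3")
    record = event["Records"][0]["s3"]
    bucket, key = record["bucket"]["name"], record["object"]["key"]
    # The layout environment variable ("<bucket>/layout/") locates the metadata.
    layout_bucket, _, layout_prefix = os.environ["layout"].partition("/")
    meta = json.loads(
        s3.get_object(Bucket=layout_bucket,
                      Key=layout_key_for(key, layout_prefix))["Body"].read()
    )
    # ...read the EBCDIC object, convert it per `meta`, and write the ASCII
    # result back to the bucket...
```

Splitting the key derivation into a pure function keeps the S3 and conversion side effects at the edge of the handler, which makes the mapping easy to test locally.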

### Create the Amazon S3 event notification
<a name="create-the-s3-event-notification"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the configuration document for the Amazon S3 event notification. | The Amazon S3 event notification initiates the EBCDIC conversion Lambda function when files are placed in the input folder.From the CloudShell working folder, run the following command to create the JSON document for the Amazon S3 event notification:<pre>S3E2AEvent=$(cat <<EOF<br />{<br />"LambdaFunctionConfigurations": [<br />    {<br />      "Id": "E2A",<br />      "LambdaFunctionArn": "arn:aws:lambda:%s:%s:function:E2A",<br />      "Events": [ "s3:ObjectCreated:Put" ],<br />      "Filter": {<br />        "Key": {<br />          "FilterRules": [<br />            {<br />              "Name": "prefix",<br />              "Value": "input/"<br />            }<br />          ]<br />        }<br />      }<br />    }<br />  ]<br />}<br />EOF<br />)<br />printf "$S3E2AEvent" "$region" "$account" > S3E2AEvent.json</pre> | General AWS | 
| Create the Amazon S3 event notification. | From the CloudShell working folder, enter the following command to create the Amazon S3 event notification:<pre>aws s3api put-bucket-notification-configuration --bucket $bucket --notification-configuration file://S3E2AEvent.json</pre> | General AWS | 

### Create and upload the JSON metadata
<a name="create-and-upload-the-json-metadata"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Parse the COBOL copybook. | From the CloudShell working folder, enter the following command to parse a sample COBOL copybook into a JSON file (which defines how to read and slice the data file properly):<pre>python3       mdu/parse_copybook_to_json.py \<br />-copybook     mdu/LegacyReference/COBKS05.cpy \<br />-output       CLIENT.json \<br />-output-s3key CLIENT.ASCII.txt \<br />-output-s3bkt $bucket \<br />-output-type  s3 \<br />-print        25</pre> | General AWS | 
| Add the transformation rule. | The sample data file and its corresponding COBOL copybook define a multi-layout file. This means that the conversion must slice the data based on certain rules. In this case, the bytes at positions 3 and 4 of each row determine the layout. From the CloudShell working folder, edit the `CLIENT.json` file and change the contents from `"transf-rule": [],` to the following:<pre>"transf-rule": [<br />{<br />"offset": 4,<br />"size": 2,<br />"hex": "0002",<br />"transf": "transf1"<br />},<br />{<br />"offset": 4,<br />"size": 2,<br />"hex": "0000",<br />"transf": "transf2"<br />}<br />],</pre> | General AWS, IBM Mainframe, COBOL | 
| Upload the JSON metadata to the Amazon S3 bucket. | From the CloudShell working folder, enter the following AWS CLI command to upload the JSON metadata to your Amazon S3 bucket:<pre>aws s3 cp CLIENT.json s3://$bucket/layout/CLIENT.json</pre> | General AWS | 
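The rule matching behind `transf-rule` can be pictured in a few lines of Python. This sketch assumes that `offset` is a 0-based byte position and that `hex` is the expected value of the bytes at that position; check the repository README for the authoritative semantics.

```python
def pick_layout(record: bytes, rules: list) -> str:
    """Return the name of the transformation whose rule matches the record.
    Assumes `offset` is a 0-based byte position into the record and `hex`
    is the expected hexadecimal value of the bytes there (an assumption;
    the repository README defines the actual semantics)."""
    for rule in rules:
        window = record[rule["offset"]:rule["offset"] + rule["size"]]
        if window.hex() == rule["hex"].lower():
            return rule["transf"]
    return "default"

# The two rules configured in CLIENT.json:
rules = [
    {"offset": 4, "size": 2, "hex": "0002", "transf": "transf1"},
    {"offset": 4, "size": 2, "hex": "0000", "transf": "transf2"},
]
```

Each record is therefore routed to the field list (`transf1` or `transf2`) that knows how to slice it, which is how a single conversion pass can handle a multi-layout file.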

### Convert the EBCDIC file
<a name="convert-the-ebcdic-file"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Send the EBCDIC file to the Amazon S3 bucket. | From the CloudShell working folder, enter the following command to send the EBCDIC file to the Amazon S3 bucket:<pre>aws s3 cp mdu/sample-data/CLIENT.EBCDIC.txt s3://$bucket/input/</pre> We recommend that you set different folders for input (EBCDIC) and output (ASCII) files to avoid calling the Lambda conversion function again when the ASCII file is uploaded to the Amazon S3 bucket. | General AWS | 
| Check the output. | From the CloudShell working folder, enter the following command to check if the ASCII file is generated in your Amazon S3 bucket:<pre>aws s3 ls s3://$bucket/</pre> The data conversion can take several seconds to happen. We recommend that you check for the ASCII file a few times.After the ASCII file is available, enter the following command to view the contents of the converted file in the Amazon S3 bucket. As needed, you can download it or use it directly from the Amazon S3 bucket:<pre>aws s3 cp s3://$bucket/CLIENT.ASCII.txt - | head</pre>Check the ASCII file content:<pre>0|0|220|<br />1|1|HERBERT MOHAMED|1958-08-31|BACHELOR|0010000.00|<br />1|2|36|THE ROE AVENUE|<br />2|1|JAYLEN GEORGE|1969-05-29|ELEMENTARY|0020000.00|<br />2|2|365|HEATHFIELD ESPLANADE|<br />3|1|MIKAEEL WEBER|1982-02-17|MASTER|0030000.00|<br />3|2|4555|MORRISON STRAND|<br />4|1|APRIL BARRERA|1967-01-12|DOCTOR|0030000.00|<br />4|2|1311|MARMION PARK|<br />5|1|ALEEZA PLANT|1985-03-01|BACHELOR|0008000.00|</pre> | General AWS | 

### Clean the environment
<a name="clean-the-environment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| (Optional) Prepare the variables and folder. | If you lose connection with CloudShell, reconnect and then enter the following command to change the directory to the working folder:<pre>cd workdir</pre>Ensure that the environment variables are defined:<pre>bucket=<your_bucket_name><br />account=<your_account_number><br />region=<your_region_code></pre> | General AWS | 
| Remove the notification configuration for the bucket. | From the CloudShell working folder, run the following command to remove the Amazon S3 event notification configuration:<pre>aws s3api put-bucket-notification-configuration \<br />--bucket=$bucket \<br />--notification-configuration="{}"</pre> | General AWS | 
| Delete the Lambda function. | From the CloudShell working folder, enter the following command to delete the Lambda function for the EBCDIC converter:<pre>aws lambda delete-function \<br />--function-name E2A</pre> | General AWS | 
| Delete the IAM role and policy. | From the CloudShell working folder, enter the following command to remove the EBCDIC converter role and policy:<pre>aws iam delete-role-policy \<br />--role-name E2AConvLambdaRole \<br />--policy-name E2AConvLambdaPolicy<br /><br />aws iam delete-role \<br />--role-name E2AConvLambdaRole</pre> | General AWS | 
| Delete the files generated in the Amazon S3 bucket. | From the CloudShell working folder, enter the following command to delete the files generated in the Amazon S3 bucket:<pre>aws s3 rm s3://$bucket/layout --recursive<br />aws s3 rm s3://$bucket/input --recursive<br />aws s3 rm s3://$bucket/CLIENT.ASCII.txt</pre> | General AWS | 
| Delete the working folder. | From the CloudShell working folder, enter the following command to remove `workdir` and its contents:<pre>cd ..; rm -Rf workdir</pre> | General AWS | 

## Related resources
<a name="convert-mainframe-files-from-ebcdic-format-to-character-delimited-ascii-format-in-amazon-s3-using-aws-lambda-resources"></a>
+ [Mainframe Data Utilities README](https://github.com/aws-samples/mainframe-data-utilities/blob/main/README.md) (GitHub)
+ [The EBCDIC character set](https://www.ibm.com/docs/en/zos-basic-skills?topic=mainframe-ebcdic-character-set) (IBM documentation)
+ [EBCDIC to ASCII](https://www.ibm.com/docs/en/iis/11.7.0?topic=tables-ebcdic-ascii) (IBM documentation)
+ [COBOL](https://www.ibm.com/docs/en/i/7.6.0?topic=languages-cobol) (IBM documentation)
+ [Using an Amazon S3 trigger to invoke a Lambda function](https://docs.aws.amazon.com/lambda/latest/dg/with-s3-example.html) (AWS Lambda documentation)

# Convert mainframe data files with complex record layouts using Micro Focus
<a name="convert-mainframe-data-files-with-complex-record-layouts-using-micro-focus"></a>

*Peter West, Amazon Web Services*

## Summary
<a name="convert-mainframe-data-files-with-complex-record-layouts-using-micro-focus-summary"></a>

Note: AWS Mainframe Modernization Service (Managed Runtime Environment experience) is no longer open to new customers. For similar capabilities, explore AWS Mainframe Modernization Service (Self-Managed Experience). Existing customers can continue to use the service as normal. For more information, see [AWS Mainframe Modernization availability change](https://docs.aws.amazon.com/m2/latest/userguide/mainframe-modernization-availability-change.html).

This pattern shows you how to convert mainframe data files with non-text data and complex record layouts from EBCDIC (Extended Binary Coded Decimal Interchange Code) character encoding to ASCII (American Standard Code for Information Interchange) character encoding by using a Micro Focus structure file. To complete the file conversion, you must do the following:

1. Prepare a single source file that describes all the data items and record layouts in your mainframe environment.

1. Create a structure file that contains the record layout of the data by using the Micro Focus Data File Editor as part of the Micro Focus Classic Data File Tools or Data File Tools. The structure file identifies the non-text data so that you can correctly convert your mainframe files from EBCDIC to ASCII.

1. Test the structure file by using the Classic Data File Tools or Data File Tools.

## Prerequisites and limitations
<a name="convert-mainframe-data-files-with-complex-record-layouts-using-micro-focus-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ Micro Focus Enterprise Developer for Windows, available through [AWS Mainframe Modernization](https://aws.amazon.com/mainframe-modernization/)

**Product versions**
+ Micro Focus Enterprise Server 7.0 and later

## Tools
<a name="convert-mainframe-data-files-with-complex-record-layouts-using-micro-focus-tools"></a>
+ [Micro Focus Enterprise Developer](https://www.microfocus.com/documentation/enterprise-developer/ed70/ED-Eclipse/GUID-8D6B7358-AC35-4DAF-A445-607D8D97EBB2.html) provides the run environment for applications created with any integrated development environment (IDE) variant of Enterprise Developer.
+ Micro Focus [Classic Data File Tools](https://www.microfocus.com/documentation/enterprise-developer/ed70/ED-Eclipse/GUID-06115324-0FBC-4CB7-BE9D-04BCFEA5821A.html) help you convert, navigate, edit, and create data files. The Classic Data File Tools include [Data File Converter](https://www.microfocus.com/documentation/visual-cobol/vc60/VS2017/BKFHFHDFCV.html), [Record Layout Editor](https://www.microfocus.com/documentation/enterprise-developer/ed70/ED-Eclipse/BKFHFHRLMF.html), and [Data File Editor](https://www.microfocus.com/documentation/visual-cobol/vc60/VS2017/BKFHFHDFED.html).
+ Micro Focus [Data File Tools](https://www.microfocus.com/documentation/enterprise-developer/ed70/ED-Eclipse/GUID-B1BCB613-6947-451C-8F71-72FB8254076A.html) help you create, edit, and move data files. The Data File Tools include [Data File Editor](https://www.microfocus.com/documentation/visual-cobol/vc60/VS2017/BKFHFHDFED.html), [File Conversion Utilities](https://www.microfocus.com/documentation/enterprise-developer/ed70/ED-Eclipse/BKFHFHCONV.html), and the [Data File Structure Command Line Utility](https://www.microfocus.com/documentation/enterprise-developer/ed70/ED-Eclipse/GUID-E84348EB-A93A-481A-A47C-61B0E1C076E6.html).

## Epics
<a name="convert-mainframe-data-files-with-complex-record-layouts-using-micro-focus-epics"></a>

### Prepare the source file
<a name="prepare-the-source-file"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Identify source components. | Identify all possible record layouts for the file, including any redefinitions that contain non-text data. If you have layouts that contain redefinitions, you must factor these layouts down to unique layouts that describe each possible permutation of the data structure. Typically, a data file’s record layouts can be described by the following archetypes: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/convert-mainframe-data-files-with-complex-record-layouts-using-micro-focus.html) For more information about creating flattened record layouts for files that contain complex record layouts, see [Rehosting EBCDIC applications on ASCII environments for mainframe migrations](https://docs.aws.amazon.com/prescriptive-guidance/latest/mainframe-rehost-ebcdic-ascii/introduction.html). | App developer | 
| Identify record layout conditions. | For files with multiple record layouts or files that contain complex layouts with a REDEFINES clause, identify the data and conditions within a record that you can use to define which layout to use during conversion. We recommend that you discuss this task with a subject matter expert (SME) who understands the programs that process these files. For example, a file might contain two record types that contain non-text data. You can inspect the source and possibly find code similar to the following:<pre>MOVE "M" TO PART-TYPE<br /> MOVE "MAIN ASSEMBLY" TO PART-NAME<br />MOVE "S" TO PART-TYPE<br /> MOVE "SUB ASSEMBLY 1" TO PART-NAME</pre>The code helps you identify the following: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/convert-mainframe-data-files-with-complex-record-layouts-using-micro-focus.html) You can document the values that are used by this field to associate the record layouts with the correct data records in the file. | App developer | 
| Build the source file. | If the file is described across multiple source files or if the record layout contains non-text data that’s subordinate to a REDEFINES clause, then create a new source file that contains the record layouts. The new program doesn’t need to describe the file using SELECT and FD statements. The program can simply contain the record descriptions as 01 levels within Working-Storage. You can create a source file for each data file or create a master source file that describes all the data files. | App developer | 
| Compile the source file. | Compile the source file to build the data dictionary. We recommend that you compile the source file by using the EBCDIC character set. If the IBMCOMP or ODOSLIDE directives are used, then you must use these directives in the source file too. IBMCOMP affects the byte storage of COMP fields, and ODOSLIDE affects padding on OCCURS VARYING structures. If these directives are set incorrectly, then the conversion tool won’t read the data record correctly, which results in bad data in the converted file. | App developer | 
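To see why a wrong IBMCOMP setting corrupts data, consider a PIC S9(4) COMP field. Under the IBM convention it occupies a big-endian two-byte halfword; if the conversion tool assumes a different byte order, the field (and potentially everything after it) is misread. This Python sketch illustrates the mismatch; the sample bytes are purely illustrative:

```python
import struct

# Two bytes representing 300 (0x012C) in a big-endian halfword,
# as an IBM-style PIC S9(4) COMP field would store it.
raw = bytes([0x01, 0x2C])

big_endian = struct.unpack(">h", raw)[0]     # read with the correct layout
little_endian = struct.unpack("<h", raw)[0]  # read with the wrong byte order

print(big_endian, little_endian)             # 300 11265
```

The same principle applies to ODOSLIDE: if the tool disagrees with the compiler about padding on OCCURS VARYING structures, field offsets shift and every subsequent field is decoded from the wrong bytes.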

### (Option A) Create the structure file using Classic Data File Tools
<a name="option-a-create-the-structure-file-using-classic-data-file-tools"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Start the tool and load the dictionary. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/convert-mainframe-data-files-with-complex-record-layouts-using-micro-focus.html) | App developer | 
| Create the default record layout. | Use the default record layout for all records that don’t match any conditional layouts. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/convert-mainframe-data-files-with-complex-record-layouts-using-micro-focus.html) The default layout appears in the **Layouts** pane and can be identified by the red folder icon. | App developer | 
| Create a conditional record layout. | Use the conditional record layout when there is more than one record layout in a file. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/convert-mainframe-data-files-with-complex-record-layouts-using-micro-focus.html) | App developer | 

### (Option B) Create the structure file using Data File Tools
<a name="option-b-create-the-structure-file-using-data-file-tools"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Start the tool and load the dictionary. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/convert-mainframe-data-files-with-complex-record-layouts-using-micro-focus.html) | App developer | 
| Create the default record layout. | Use the default record layout for all records that don’t match any conditional layouts. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/convert-mainframe-data-files-with-complex-record-layouts-using-micro-focus.html) The default layout appears in the **Layouts** pane and can be identified by the blue "D" icon. | App developer | 
| Create a conditional record layout. | Use the conditional record layout when there is more than one record layout in a file. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/convert-mainframe-data-files-with-complex-record-layouts-using-micro-focus.html) | App developer | 

### (Option A) Test the structure file using Classic Data File Tools
<a name="option-a-test-the-structure-file-using-classic-data-file-tools"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Test an EBCDIC data file. | Confirm that you can use your structure file to view an EBCDIC test data file correctly. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/convert-mainframe-data-files-with-complex-record-layouts-using-micro-focus.html) | App developer | 

### (Option B) Test the structure file using Data File Tools
<a name="option-b-test-the-structure-file-using-data-file-tools"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Test an EBCDIC data file. | Confirm that you can use your structure file to view an EBCDIC test data file correctly. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/convert-mainframe-data-files-with-complex-record-layouts-using-micro-focus.html) | App developer | 

### Test data file conversion
<a name="test-data-file-conversion"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Test the conversion of an EBCDIC file. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/convert-mainframe-data-files-with-complex-record-layouts-using-micro-focus.html) | App developer | 

## Related resources
<a name="convert-mainframe-data-files-with-complex-record-layouts-using-micro-focus-resources"></a>
+ [Micro Focus](https://www.microfocus.com/en-us/products/enterprise-suite/overview) (Micro Focus documentation)
+ [Mainframe and legacy code](https://aws.amazon.com/blogs/?awsf.blog-master-category=category%23mainframe-and-legacy) (AWS Blog posts)
+ [AWS Prescriptive Guidance](https://docs.aws.amazon.com/prescriptive-guidance/) (AWS documentation)
+ [AWS Documentation](https://docs.aws.amazon.com/index.html) (AWS documentation)
+ [AWS General Reference](https://docs.aws.amazon.com/general/latest/gr/Welcome.html) (AWS documentation)
+ [AWS glossary](https://docs.aws.amazon.com/general/latest/gr/glos-chap.html) (AWS documentation)

# Deploy an environment for containerized Blu Age applications by using Terraform
<a name="deploy-an-environment-for-containerized-blu-age-applications-by-using-terraform"></a>

*Richard Milner-Watts, Amazon Web Services*

## Summary
<a name="deploy-an-environment-for-containerized-blu-age-applications-by-using-terraform-summary"></a>

Migrating legacy mainframe workloads into modern cloud architectures can eliminate the costs of maintaining a mainframe—costs that only increase as the environment ages. However, migrating jobs from a mainframe can pose unique challenges. Internal resources might not be familiar with the job logic, and the high performance of mainframes at these specialized tasks can be difficult to replicate on commodity, general-purpose CPUs. Rewriting these jobs can be a significant undertaking.

Blu Age converts legacy mainframe workloads into modern Java code, which you can then run as a container.

This pattern provides a sample serverless architecture for running a containerized application that has been modernized with the Blu Age tool. The included HashiCorp Terraform files will build a secure architecture for the orchestration of Blu Age containers, supporting both batch tasks and real-time services.

For more information about modernizing your workloads by using Blu Age and AWS services, see these AWS Prescriptive Guidance publications:
+ [Running mainframe workloads that have been modernized with Blu Age on AWS serverless infrastructure](https://docs.aws.amazon.com/prescriptive-guidance/latest/run-bluage-modernized-mainframes/)
+ [Containerize mainframe workloads that have been modernized by Blu Age](https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/containerize-mainframe-workloads-that-have-been-modernized-by-blu-age.html)

For assistance with using Blu Age to modernize your mainframe workloads, contact the Blu Age team by choosing **Contact our experts** on the [Blu Age website](https://www.bluage.com/). For assistance with migrating your modernized workloads to AWS, integrating them with AWS services, and moving them into production, contact your AWS account manager or fill out the [AWS Professional Services form](https://pages.awscloud.com/AWS-Professional-Services.html).

## Prerequisites and limitations
<a name="deploy-an-environment-for-containerized-blu-age-applications-by-using-terraform-prereqs"></a>

**Prerequisites**
+ The sample containerized Blu Age application provided by the [Containerize mainframe workloads that have been modernized by Blu Age](https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/containerize-mainframe-workloads-that-have-been-modernized-by-blu-age.html) pattern. The sample application provides the logic to handle the processing of input and output for the modernized application, and it can integrate with this architecture.
+ Terraform is required to deploy these resources.

**Limitations**
+ Amazon Elastic Container Service (Amazon ECS) places limits on the task resources that can be made available to the container. These resources include CPU, RAM, and storage. For example, when using Amazon ECS with AWS Fargate, the [task resource limits apply](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/AWS_Fargate.html).

**Product versions**

This solution was tested with the following versions:
+ Terraform 1.3.6
+ Terraform AWS Provider 4.46.0

## Architecture
<a name="deploy-an-environment-for-containerized-blu-age-applications-by-using-terraform-architecture"></a>

**Source technology stack**
+ Blu Age
+ Terraform

**Target technology stack**
+ Amazon Aurora PostgreSQL-Compatible Edition
+ AWS Backup
+ Amazon Elastic Container Registry (Amazon ECR)
+ Amazon ECS
+ AWS Identity and Access Management (IAM)
+ AWS Key Management Service (AWS KMS)
+ AWS Secrets Manager
+ Amazon Simple Notification Service (Amazon SNS)
+ Amazon Simple Storage Service (Amazon S3)
+ AWS Step Functions
+ AWS Systems Manager

**Target architecture**

The following diagram shows the solution architecture.

![\[The description follows the diagram.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/12825490-2622-4f0b-80c9-2c5076d50fa3/images/c0708b0a-aa36-458a-8d6c-d42e3dec7727.png)


1. The solution deploys the following IAM roles:
   + Batch task role
   + Batch task execution role
   + Service task role
   + Service task execution role
   + Step Functions role
   + AWS Backup role
   + RDS Enhanced Monitoring role

   The roles conform to least-privileged access principles.

1. Amazon ECR is used to store the container image that is orchestrated by this pattern.

1. AWS Systems Manager Parameter Store provides configuration data about each environment to the Amazon ECS task definition at runtime.

1. AWS Secrets Manager provides sensitive configuration data about the environment to the Amazon ECS task definition at runtime. The data has been encrypted by AWS KMS.

1. The Terraform modules create Amazon ECS task definitions for all real-time and batch tasks.

1. Amazon ECS runs a batch task by using AWS Fargate as the compute engine. This is a short-lived task, initiated as required by AWS Step Functions.

1. Amazon Aurora PostgreSQL-Compatible provides a database to support the modernized application. This replaces mainframe databases such as IBM Db2 or IBM IMS DB.

1. Amazon ECS runs a long-lived service to deliver a modernized real-time workload. These stateless applications run permanently with containers spread across Availability Zones.

1. A Network Load Balancer is used to grant access to the real-time workload. The Network Load Balancer supports earlier protocols, such as IBM CICS. Alternatively, you can use an Application Load Balancer with HTTP-based workloads.

1. Amazon S3 provides object storage for job inputs and outputs. The container should handle pull and push operations into Amazon S3 to prepare the working directory for the Blu Age application.

1. The AWS Step Functions service is used to orchestrate running the Amazon ECS tasks to process batch workloads.

1. SNS topics for each batch workload are used to integrate the modernized application with other systems, such as email, or to initiate additional actions, such as delivering output objects from Amazon S3 into FTP.

**Note**  
By default, the solution has no access to the internet. This pattern assumes that the virtual private cloud (VPC) will be connected to other networks using a service such as [AWS Transit Gateway](https://aws.amazon.com/transit-gateway/). As such, multiple interface VPC endpoints are deployed to grant access to the AWS services used by the solution. To turn on direct internet access, you can use the toggle in the Terraform module to replace the VPC endpoints with an internet gateway and the associated resources.
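For orientation, a batch workload orchestrated by Step Functions (step 10 above) typically reduces to a single ECS RunTask state in Amazon States Language. The Python sketch below builds a minimal definition of that shape; the cluster, task definition, and subnet identifiers are placeholders, not values produced by this pattern's Terraform:

```python
import json

# Placeholder identifiers -- substitute the values created by the Terraform.
CLUSTER = "bluage-batch-cluster"
TASK_DEF = "bluage-batch-task"
SUBNETS = ["subnet-aaaa1111"]

definition = {
    "Comment": "Run one Blu Age batch task on Fargate and wait for completion",
    "StartAt": "RunBatchTask",
    "States": {
        "RunBatchTask": {
            "Type": "Task",
            # The .sync suffix makes Step Functions wait for the task to finish.
            "Resource": "arn:aws:states:::ecs:runTask.sync",
            "Parameters": {
                "Cluster": CLUSTER,
                "TaskDefinition": TASK_DEF,
                "LaunchType": "FARGATE",
                "NetworkConfiguration": {
                    "AwsvpcConfiguration": {"Subnets": SUBNETS}
                },
            },
            "End": True,
        }
    },
}

print(json.dumps(definition, indent=2))
```

A real state machine for this pattern would add retry and catch configuration and publish success or failure to the per-workload SNS topics described in step 11.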

**Automation and scale**

The use of serverless resources throughout this pattern helps ensure that the design can scale out with few limits. This reduces *noisy neighbor* concerns, such as the competition for compute resources that might be experienced on the original mainframe. Batch tasks can be scheduled to run simultaneously as needed.

Individual containers are limited by the maximum task sizes that Fargate supports. For more information, see the [task size](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/AWS_Fargate.html#fargate-tasks-size) section in the Amazon ECS documentation.

To [scale real-time workloads horizontally](https://nathanpeck.com/amazon-ecs-scaling-best-practices/), you can add containers.

## Tools
<a name="deploy-an-environment-for-containerized-blu-age-applications-by-using-terraform-tools"></a>

**AWS services**
+ [Amazon Aurora PostgreSQL-Compatible Edition](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.AuroraPostgreSQL.html) is a fully managed, ACID-compliant relational database engine that helps you set up, operate, and scale PostgreSQL deployments.
+ [AWS Backup](https://docs.aws.amazon.com/aws-backup/latest/devguide/whatisbackup.html) is a fully managed service that helps you centralize and automate data protection across AWS services, in the cloud, and on premises.
+ [Amazon Elastic Container Registry (Amazon ECR)](https://docs.aws.amazon.com/AmazonECR/latest/userguide/what-is-ecr.html) is a managed container image registry service that’s secure, scalable, and reliable.
+ [Amazon Elastic Container Service (Amazon ECS)](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html) is a fast and scalable container management service that helps you run, stop, and manage containers on a cluster.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.
+ [AWS Key Management Service (AWS KMS)](https://docs.aws.amazon.com/kms/latest/developerguide/overview.html) helps you create and control cryptographic keys to help protect your data.
+ [AWS Secrets Manager](https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html) helps you replace hardcoded credentials in your code, including passwords, with an API call to Secrets Manager to retrieve the secret programmatically.
+ [Amazon Simple Notification Service (Amazon SNS)](https://docs.aws.amazon.com/sns/latest/dg/welcome.html) helps you coordinate and manage the exchange of messages between publishers and clients, including web servers and email addresses.
+ [Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) is a cloud-based object storage service that helps you store, protect, and retrieve any amount of data.
+ [AWS Step Functions](https://docs.aws.amazon.com/step-functions/latest/dg/welcome.html) is a serverless orchestration service that helps you combine AWS Lambda functions and other AWS services to build business-critical applications.
+ [AWS Systems Manager Parameter Store](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-parameter-store.html) provides secure, hierarchical storage for configuration data management and secrets management.

**Other services**
+ [HashiCorp Terraform](https://www.terraform.io/docs) is an infrastructure as code (IaC) tool that helps you use code to provision and manage cloud infrastructure and resources. This pattern uses Terraform to create the sample architecture.

**Code repository**

The source code for this pattern is available in the GitHub [Blu Age Sample ECS Infrastructure (Terraform)](https://github.com/aws-samples/aws-blu-age-sample-ecs-infrastructure-using-terraform#aws-blu-age-sample-ecs-infrastructure-terraform) repository.

## Best practices
<a name="deploy-an-environment-for-containerized-blu-age-applications-by-using-terraform-best-practices"></a>
+ For test environments, use features such as the `forceDate` option to configure the modernized application to generate consistent test results by always running for a known time period.
+ Tune each task individually to consume the optimal amount of resources. You can use [Amazon CloudWatch Container Insights](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/ContainerInsights.html) to obtain guidance on potential bottlenecks.

## Epics
<a name="deploy-an-environment-for-containerized-blu-age-applications-by-using-terraform-epics"></a>

### Prepare the environment for deployment
<a name="prepare-the-environment-for-deployment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clone the solution source code. | Clone the solution code from the [GitHub project](https://github.com/aws-samples/aws-blu-age-sample-ecs-infrastructure-using-terraform). | DevOps engineer | 
| Bootstrap the environment by deploying resources to store the Terraform state. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-an-environment-for-containerized-blu-age-applications-by-using-terraform.html) | DevOps engineer | 

### Deploy the solution infrastructure
<a name="deploy-the-solution-infrastructure"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Review and update the Terraform configuration. | In the root directory, open the file `main.tf`, review the contents, and consider making the following updates: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-an-environment-for-containerized-blu-age-applications-by-using-terraform.html) | DevOps engineer | 
| Deploy the Terraform file. | From your terminal, run the `terraform apply` command to deploy all resources. Review the changes generated by Terraform, and enter **yes** to initiate the build. Note that it can take over 15 minutes to deploy this infrastructure. | DevOps engineer | 

### (Optional) Deploy a valid Blu Age containerized application
<a name="optional-deploy-a-valid-blu-age-containerized-application"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Push the Blu Age container image to Amazon ECR. | Push the container into the Amazon ECR repository that you created in the previous epic. For instructions, see the [Amazon ECR documentation](https://docs.aws.amazon.com/AmazonECR/latest/userguide/docker-push-ecr-image.html). Make a note of the container image URI. | DevOps engineer | 
| Update the Terraform to reference the Blu Age container image. | Update the file `main.tf` to reference the container image that you uploaded. | DevOps engineer | 
| Redeploy the Terraform file. | From your terminal, run `terraform apply` to deploy all resources. Review the suggested updates from Terraform, and then enter **yes** to proceed with the deployment. | DevOps engineer | 

## Related resources
<a name="deploy-an-environment-for-containerized-blu-age-applications-by-using-terraform-resources"></a>
+ [Blu Age](https://www.bluage.com/)
+ [Running mainframe workloads that have been modernized with Blu Age on AWS serverless infrastructure](https://docs.aws.amazon.com/prescriptive-guidance/latest/run-bluage-modernized-mainframes/)
+ [Containerize mainframe workloads that have been modernized by Blu Age](https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/containerize-mainframe-workloads-that-have-been-modernized-by-blu-age.html)

# Generate Db2 z/OS data insights by using AWS Mainframe Modernization and Amazon Q in Quick Sight
<a name="generate-db2-zos-data-insights-aws-mainframe-modernization-amazon-q-in-quicksight"></a>

*Shubham Roy, Roshna Razack, and Santosh Kumar Singh, Amazon Web Services*

## Summary
<a name="generate-db2-zos-data-insights-aws-mainframe-modernization-amazon-q-in-quicksight-summary"></a>

Note: AWS Mainframe Modernization Service (Managed Runtime Environment experience) is no longer open to new customers. For similar capabilities, explore AWS Mainframe Modernization Service (Self-Managed Experience). Existing customers can continue to use the service as normal. For more information, see [AWS Mainframe Modernization availability change](https://docs.aws.amazon.com/m2/latest/userguide/mainframe-modernization-availability-change.html).

If your organization is hosting business-critical data in an IBM Db2 mainframe environment, gaining insights from that data is crucial for driving growth and innovation. By unlocking mainframe data, you can build faster, secure, and scalable business intelligence to accelerate data-driven decision-making, growth, and innovation in the Amazon Web Services (AWS) Cloud.

This pattern presents a solution for generating business insights and creating sharable narratives from mainframe data in IBM Db2 for z/OS tables. Mainframe data changes are streamed to an [Amazon Managed Streaming for Apache Kafka (Amazon MSK)](https://docs.aws.amazon.com/msk/latest/developerguide/what-is-msk.html) topic by using [AWS Mainframe Modernization Data Replication with Precisely](https://docs.aws.amazon.com/m2/latest/userguide/precisely.html). Using [Amazon Redshift streaming ingestion](https://docs.aws.amazon.com/redshift/latest/dg/materialized-view-streaming-ingestion.html), the Amazon MSK topic data is stored in [Amazon Redshift Serverless](https://docs.aws.amazon.com/redshift/latest/mgmt/serverless-whatis.html) data warehouse tables for analytics in Amazon Quick Sight.

After the data is available in Quick Sight, you can use natural language prompts with [Amazon Q in Quick Sight](https://docs.aws.amazon.com/quicksight/latest/user/quicksight-gen-bi.html) to create summaries of the data, ask questions, and generate data stories. You don't have to write SQL queries or learn a business intelligence (BI) tool.

**Business context**

This pattern presents a solution for mainframe data analytics and data insights use cases. Using the pattern, you build a visual dashboard for your company's data. To demonstrate the solution, this pattern uses a health care company that provides medical, dental, and vision plans to its members in the US. In this example, member demographics and plan information are stored in the IBM Db2 for z/OS data tables. The visual dashboard shows the following:
+ Member distribution by region
+ Member distribution by gender
+ Member distribution by age
+ Member distribution by plan type
+ Members who have not completed preventive immunization

For examples of member distribution by region and members who have not completed preventive immunization, see the Additional information section.

After you create the dashboard, you generate a data story that explains the insights from the previous analysis. The data story provides recommendations for increasing the number of members who have completed preventive immunizations.

## Prerequisites and limitations
<a name="generate-db2-zos-data-insights-aws-mainframe-modernization-amazon-q-in-quicksight-prereqs"></a>

**Prerequisites**
+ An active AWS account. This solution was built and tested on Amazon Linux 2 on Amazon Elastic Compute Cloud (Amazon EC2).
+ A virtual private cloud (VPC) with a subnet that can be accessed by your mainframe system.
+ A mainframe database with business data. For the example data used to build and test this solution, see the *Attachments* section.
+ Change data capture (CDC) enabled on the Db2 z/OS tables. To enable CDC on Db2 z/OS, see the [IBM documentation](https://www.ibm.com/docs/en/daafz/7.5?topic=cdc-enabling-data-capture-changes).
+ Precisely Connect CDC for z/OS installed on the z/OS system that's hosting the source databases. The Precisely Connect CDC for z/OS image is provided as a zip file within the [AWS Mainframe Modernization - Data Replication for IBM z/OS](https://aws.amazon.com/marketplace/pp/prodview-doe2lroefogia?applicationId=AWSMPContessa&ref_=beagle&sr=0-1) Amazon Machine Image (AMI). To install Precisely Connect CDC for z/OS on the mainframe, see the [Precisely installation documentation](https://help.precisely.com/r/AWS-Mainframe-Modernization/Latest/en-US/AWS-Mainframe-Modernization-Data-Replication-for-IBM-z/OS/Install-Precisely-Connect-CDC-z/OS).

**Limitations**
+ Your mainframe Db2 data should be in a data type that's supported by Precisely Connect CDC. For a list of supported data types, see the [Precisely Connect CDC documentation](https://help.precisely.com/r/AWS-Mainframe-Modernization/Latest/en-US/AWS-Mainframe-Modernization-Data-Replication-for-IBM-z/OS/Data-replication-overview/Supported-source-data-types).
+ Your data at Amazon MSK should be in a data type that's supported by Amazon Redshift. For a list of supported data types, see the [Amazon Redshift documentation](https://docs.aws.amazon.com/redshift/latest/dg/c_Supported_data_types.html).
+ Amazon Redshift has different behaviors and size limits for different data types. For more information, see the [Amazon Redshift documentation](https://docs.aws.amazon.com/redshift/latest/dg/materialized-view-streaming-ingestion.html#materialized-view-streaming-ingestion-limitations).
+ The near real-time data in Quick Sight depends on the refresh interval set for the Amazon Redshift database.
+ Some AWS services aren’t available in all AWS Regions. For Region availability, see [AWS services by Region](https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/). Amazon Q in Quick Sight is currently not available in every Region that supports Quick Sight. For specific endpoints, see the [Service endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/aws-service-information.html) page, and choose the link for the service.

**Product versions**
+ AWS Mainframe Modernization Data Replication with Precisely version 4.1.44
+ Python version 3.6 or later
+ Apache Kafka version 3.5.1

## Architecture
<a name="generate-db2-zos-data-insights-aws-mainframe-modernization-amazon-q-in-quicksight-architecture"></a>

**Target architecture**

The following diagram shows an architecture for generating business insights from mainframe data by using [AWS Mainframe Modernization Data Replication with Precisely](https://aws.amazon.com/mainframe-modernization/capabilities/data-replication/) and Amazon Q in Quick Sight.

![\[Seven-step process from z/OS mainframe to Amazon QuickSight.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/18e72bcb-1b9a-406a-8220-83aca7743ad2/images/cddb6d20-14ae-4276-90d8-14df435db824.png)


The diagram shows the following workflow:

1. The Precisely Log Reader Agent reads data from Db2 logs and writes the data into transient storage on an OMVS file system on the mainframe.

1. The Publisher Agent reads the raw Db2 logs from transient storage.

1. The on-premises controller daemon authenticates, authorizes, monitors, and manages operations.

1. The Apply Agent is deployed on Amazon EC2 by using the preconfigured AMI. It connects with the Publisher Agent through the controller daemon by using TCP/IP. The Apply Agent pushes data to Amazon MSK by using multiple workers for high throughput.

1. The workers write the data to the Amazon MSK topic in JSON format. As the intermediate target for the replicated messages, Amazon MSK provides high availability and automated failover.

1. Amazon Redshift streaming ingestion provides low-latency, high-speed data ingestion from Amazon MSK to an Amazon Redshift Serverless database. A stored procedure in Amazon Redshift reconciles the mainframe change data (inserts, updates, and deletes) into Amazon Redshift tables. These Amazon Redshift tables serve as the data analytics source for Quick Sight.

1. Users access the data in Quick Sight for analytics and insights. You can use Amazon Q in Quick Sight to interact with the data by using natural language prompts.
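
Conceptually, each replicated change that reaches the Amazon MSK topic carries an operation code (the `'I'`, `'U'`, and `'D'` codes that the Connect CDC scripts capture) together with the row data. The following Python sketch illustrates that idea with a *hypothetical* message shape — the actual JSON schema emitted by the Apply Engine is defined by Precisely, not by this example:

```python
import json

# Hypothetical CDC event shape. The real JSON layout produced by the Precisely
# Apply Engine is product-specific; this only illustrates an operation code
# ('I', 'U', 'D') travelling alongside the row image.
def make_cdc_event(table, op, row):
    return json.dumps({"table": table, "op": op, "data": row})

def classify(event_json):
    """Map a CDC operation code to the SQL action that reconciliation applies."""
    op = json.loads(event_json)["op"]
    return {"I": "INSERT", "U": "UPDATE", "D": "DELETE"}[op]

event = make_cdc_event("MEMBER_DTLS", "U", {"memberid": 101, "region": "Midwest"})
```

The downstream stored procedure performs the equivalent classification in SQL when it folds these events into the target tables.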

## Tools
<a name="generate-db2-zos-data-insights-aws-mainframe-modernization-amazon-q-in-quicksight-tools"></a>

**AWS services**
+ [Amazon Elastic Compute Cloud (Amazon EC2)](https://docs.aws.amazon.com/ec2/) provides scalable computing capacity in the AWS Cloud. You can launch as many virtual servers as you need and quickly scale them out or in.
+ [AWS Key Management Service (AWS KMS)](https://docs.aws.amazon.com/kms/latest/developerguide/overview.html) helps you create and control cryptographic keys to help protect your data.
+ [Amazon Managed Streaming for Apache Kafka (Amazon MSK)](https://docs.aws.amazon.com/msk/latest/developerguide/what-is-msk.html) is a fully managed service that helps you build and run applications that use Apache Kafka to process streaming data.
+ [Amazon Quick Sight](https://docs.aws.amazon.com/quicksight/latest/user/welcome.html) is a cloud-scale business intelligence (BI) service that helps you visualize, analyze, and report your data in a single dashboard. This pattern uses the generative BI capabilities of Amazon Q in Quick Sight.
+ [Amazon Redshift Serverless](https://aws.amazon.com/redshift/redshift-serverless/) is a serverless option of Amazon Redshift that makes it more efficient to run and scale analytics in seconds without the need to set up and manage data warehouse infrastructure.
+ [AWS Secrets Manager](https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html) helps you replace hardcoded credentials in your code, including passwords, with an API call to Secrets Manager to retrieve the secret programmatically.

**Other tools**
+ [Precisely Connect CDC](https://support.precisely.com/products/connect-cdc-formerly-sqdata/) collects and integrates data from legacy systems into cloud and data platforms.

**Code repository**

The code for this pattern is available in the GitHub [Mainframe\_DataInsights\_change\_data\_reconcilition](https://github.com/aws-samples/Mainframe_DataInsights_change_data_reconcilition) repository. The code is a stored procedure in Amazon Redshift. This stored procedure reconciles mainframe data changes (inserts, updates, and deletes) from Amazon MSK into the Amazon Redshift tables. These Amazon Redshift tables serve as the data analytics source for Quick Sight.

## Best practices
<a name="generate-db2-zos-data-insights-aws-mainframe-modernization-amazon-q-in-quicksight-best-practices"></a>
+ Follow [best practices](https://docs.aws.amazon.com/msk/latest/developerguide/bestpractices.html) while setting up your Amazon MSK cluster.
+ Follow Amazon Redshift [data parsing best practices](https://docs.aws.amazon.com/redshift/latest/dg/materialized-view-streaming-ingestion.html#materialized-view-streaming-recommendations) for improving performance.
+ When you create the AWS Identity and Access Management (IAM) roles for the Precisely setup, follow the principle of least privilege and grant the minimum permissions required to perform a task. For more information, see [Grant least privilege](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html#grant-least-priv) and [Security best practices](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html) in the IAM documentation.

## Epics
<a name="generate-db2-zos-data-insights-aws-mainframe-modernization-amazon-q-in-quicksight-epics"></a>

### Set up AWS Mainframe Modernization Data Replication with Precisely on Amazon EC2
<a name="set-up-m2long-data-replication-with-precisely-on-ec2"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Set up a security group. | To connect to the controller daemon and the Amazon MSK cluster, [create a security group](https://docs.aws.amazon.com/vpc/latest/userguide/creating-security-groups.html) for the EC2 instance. Add the following inbound and outbound rules:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/generate-db2-zos-data-insights-aws-mainframe-modernization-amazon-q-in-quicksight.html)Note the name of the security group. You will need to reference the name when you launch the EC2 instance and configure the Amazon MSK cluster. | DevOps engineer, AWS DevOps | 
| Create an IAM policy and an IAM role. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/generate-db2-zos-data-insights-aws-mainframe-modernization-amazon-q-in-quicksight.html) | DevOps engineer, AWS systems administrator | 
| Provision an EC2 instance. | To provision an EC2 instance to run Precisely CDC and connect to Amazon MSK, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/generate-db2-zos-data-insights-aws-mainframe-modernization-amazon-q-in-quicksight.html) | AWS administrator, DevOps engineer | 

### Set up Amazon MSK
<a name="set-up-msk"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the Amazon MSK cluster. | To create an Amazon MSK cluster, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/generate-db2-zos-data-insights-aws-mainframe-modernization-amazon-q-in-quicksight.html)A typical provisioned cluster takes up to 15 minutes to create. After the cluster is created, its status changes from **Creating** to **Active**. | AWS DevOps, Cloud administrator | 
| Set up SASL/SCRAM authentication. | To set up SASL/SCRAM authentication for an Amazon MSK cluster, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/generate-db2-zos-data-insights-aws-mainframe-modernization-amazon-q-in-quicksight.html) | Cloud architect | 
| Create the Amazon MSK topic. | To create the Amazon MSK topic, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/generate-db2-zos-data-insights-aws-mainframe-modernization-amazon-q-in-quicksight.html) | Cloud administrator | 

### Configure the Precisely Apply Engine on Amazon EC2
<a name="configure-the-precisely-apply-engine-on-ec2"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Set up the Precisely scripts to replicate data changes. | To set up the Precisely Connect CDC scripts to replicate changed data from the mainframe to the Amazon MSK topic, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/generate-db2-zos-data-insights-aws-mainframe-modernization-amazon-q-in-quicksight.html)For example .ddl files, see the [Additional information](#generate-db2-zos-data-insights-aws-mainframe-modernization-amazon-q-in-quicksight-additional) section. | App developer, Cloud architect | 
| Generate the network ACL key. | To generate the network access control list (network ACL) key, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/generate-db2-zos-data-insights-aws-mainframe-modernization-amazon-q-in-quicksight.html) | Cloud architect, AWS DevOps | 

### Prepare the mainframe source environment
<a name="prepare-the-mainframe-source-environment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Configure defaults in the ISPF screen. | To configure default settings in the Interactive System Productivity Facility (ISPF), follow the instructions in the [Precisely documentation](https://help.precisely.com/r/AWS-Mainframe-Modernization/Latest/en-US/AWS-Mainframe-Modernization-Data-Replication-for-IBM-z/OS/Install-Precisely-Connect-CDC-z/OS/Start-ISPF-Panel-Interface). | Mainframe system administrator | 
| Configure the controller daemon. | To configure the controller daemon, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/generate-db2-zos-data-insights-aws-mainframe-modernization-amazon-q-in-quicksight.html) | Mainframe system administrator | 
| Configure the publisher. | To configure the publisher, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/generate-db2-zos-data-insights-aws-mainframe-modernization-amazon-q-in-quicksight.html) | Mainframe system administrator | 
| Update the daemon configuration file. | To update the publisher details in the controller daemon configuration file, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/generate-db2-zos-data-insights-aws-mainframe-modernization-amazon-q-in-quicksight.html) | Mainframe system administrator | 
| Create the job to start the controller daemon. | To create the job, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/generate-db2-zos-data-insights-aws-mainframe-modernization-amazon-q-in-quicksight.html) | Mainframe system administrator | 
| Generate the capture publisher JCL file. | To generate the capture publisher JCL file, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/generate-db2-zos-data-insights-aws-mainframe-modernization-amazon-q-in-quicksight.html) | Mainframe system administrator | 
| Check and update CDC. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/generate-db2-zos-data-insights-aws-mainframe-modernization-amazon-q-in-quicksight.html) | Mainframe system administrator | 
| Submit the JCL files. | Submit the following JCL files that you configured in the previous steps:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/generate-db2-zos-data-insights-aws-mainframe-modernization-amazon-q-in-quicksight.html)After you submit the JCL files, you can start the Apply Engine in Precisely on the EC2 instance. | Mainframe system administrator | 

### Run and validate CDC
<a name="run-and-validate-cdc"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Start the Apply Engine and validate the CDC. | To start the Apply Engine on the EC2 instance and validate the CDC, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/generate-db2-zos-data-insights-aws-mainframe-modernization-amazon-q-in-quicksight.html) | Cloud architect, App developer | 
| Validate the records on the Amazon MSK topic. | To read the message from the Kafka topic, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/generate-db2-zos-data-insights-aws-mainframe-modernization-amazon-q-in-quicksight.html) | App developer, Cloud architect | 

### Store mainframe change data in an Amazon Redshift Serverless data warehouse
<a name="store-mainframe-change-data-in-an-rsslong-data-warehouse"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Set up Amazon Redshift Serverless. | To create an Amazon Redshift Serverless data warehouse, follow the instructions in the [AWS documentation](https://docs.aws.amazon.com/redshift/latest/gsg/new-user-serverless.html). On the Amazon Redshift Serverless dashboard, validate that the namespace and workgroup were created and are available. For this example pattern, the process might take 2‒5 minutes. | Data engineer | 
| Set up the IAM role and trust policy required for streaming ingestion. | To set up Amazon Redshift Serverless streaming ingestion from Amazon MSK, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/generate-db2-zos-data-insights-aws-mainframe-modernization-amazon-q-in-quicksight.html) | Data engineer | 
| Connect Amazon Redshift Serverless to Amazon MSK. | To connect to the Amazon MSK topic, create an external schema in Amazon Redshift Serverless. In Amazon Redshift query editor v2, run the following SQL command, replacing `'iam_role_arn'` with the role that you created previously and replacing `'MSK_cluster_arn'` with the ARN for your cluster.<pre>CREATE EXTERNAL SCHEMA member_schema<br />FROM MSK<br />IAM_ROLE 'iam_role_arn'<br />AUTHENTICATION iam<br />URI 'MSK_cluster_arn';</pre> | Migration engineer | 
| Create a materialized view. | To consume the data from the Amazon MSK topic in Amazon Redshift Serverless, create a materialized view. In Amazon Redshift query editor v2, run the following SQL commands, replacing `<MSK_Topic_name>` with the name of your Amazon MSK topic.<pre>CREATE MATERIALIZED VIEW member_view<br />AUTO REFRESH YES<br />AS SELECT<br />kafka_partition, <br />kafka_offset, <br />refresh_time, <br />json_parse(kafka_value) AS Data<br />FROM member_schema.<MSK_Topic_name><br />WHERE CAN_JSON_PARSE(kafka_value); <br /></pre> | Migration engineer | 
| Create target tables in Amazon Redshift. | Amazon Redshift tables provide the input for Quick Sight. This pattern uses the tables `members_dtls` and `member_plans`, which match the source Db2 tables on the mainframe. To create the two tables in Amazon Redshift, run the following SQL commands in Amazon Redshift query editor v2:<pre>-- Table 1: members_dtls<br />CREATE TABLE members_dtls (<br /> memberid INT ENCODE AZ64,<br /> member_name VARCHAR(100) ENCODE ZSTD,<br /> member_type VARCHAR(50) ENCODE ZSTD,<br /> age INT ENCODE AZ64,<br /> gender CHAR(1) ENCODE BYTEDICT,<br /> email VARCHAR(100) ENCODE ZSTD,<br /> region VARCHAR(50) ENCODE ZSTD<br />) DISTSTYLE AUTO;<br /><br />-- Table 2: member_plans<br />CREATE TABLE member_plans (<br /> memberid INT ENCODE AZ64,<br /> medical_plan CHAR(1) ENCODE BYTEDICT,<br /> dental_plan CHAR(1) ENCODE BYTEDICT,<br /> vision_plan CHAR(1) ENCODE BYTEDICT,<br /> preventive_immunization VARCHAR(50) ENCODE ZSTD<br />) DISTSTYLE AUTO;</pre> | Migration engineer | 
| Create a stored procedure in Amazon Redshift. | This pattern uses a stored procedure to synchronize change data (`INSERT`, `UPDATE`, `DELETE`) from the source mainframe into the target Amazon Redshift data warehouse tables for analytics in Quick Sight. To create the stored procedure in Amazon Redshift, use query editor v2 to run the stored procedure code that's in the GitHub repository. | Migration engineer | 
| Read from the streaming materialized view and load to the target tables. | The stored procedure reads data changes from the streaming materialized view and loads them into the target tables. To run the stored procedure, use the following command:<pre>call SP_Members_Load();</pre>You can use [Amazon EventBridge](https://aws.amazon.com/eventbridge/) to schedule the jobs in your Amazon Redshift data warehouse to call this stored procedure based on your data latency requirements. EventBridge runs jobs at fixed intervals. To monitor whether the previous call to the procedure completed, you might need to use a mechanism such as an [AWS Step Functions](https://aws.amazon.com/step-functions/) state machine. For more information, see the following resources:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/generate-db2-zos-data-insights-aws-mainframe-modernization-amazon-q-in-quicksight.html)Another option is to use Amazon Redshift query editor v2 to schedule the refresh. For more information, see [Scheduling a query with query editor v2](https://docs.aws.amazon.com/redshift/latest/mgmt/query-editor-v2-schedule-query.html). | Migration engineer | 
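
If you trigger the stored procedure from code (for example, from a Lambda function on an EventBridge schedule), one option is the Amazon Redshift Data API's `ExecuteStatement` operation. The following Python sketch shows the shape of such a call; the workgroup and database names are placeholders, not values from this pattern:

```python
# Minimal sketch of calling the stored procedure through the Amazon Redshift
# Data API (boto3 "redshift-data" client). Workgroup and database names below
# are placeholders -- substitute the ones from your Amazon Redshift Serverless
# setup.
def build_call(workgroup="my-serverless-workgroup", database="dev"):
    return {
        "WorkgroupName": workgroup,   # Redshift Serverless workgroup
        "Database": database,
        "Sql": "call SP_Members_Load();",
    }

def run(client, **kwargs):
    # client = boto3.client("redshift-data")
    return client.execute_statement(**build_call(**kwargs))
```

Because `ExecuteStatement` is asynchronous, a scheduler that must wait for completion (such as a Step Functions state machine) would poll the statement status before starting the next run.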

### Connect Quick Sight to data in Amazon Redshift
<a name="connect-quick-sight-to-data-in-rs"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Set up Quick Sight. | To set up Quick Sight, follow the instructions in the [AWS documentation](https://docs.aws.amazon.com/quicksight/latest/user/setting-up.html). | Migration engineer | 
| Set up a secure connection between Quick Sight and Amazon Redshift. | To set up a secure connection between Quick Sight and Amazon Redshift, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/generate-db2-zos-data-insights-aws-mainframe-modernization-amazon-q-in-quicksight.html) | Migration engineer | 
| Create a dataset for Quick Sight. | To create a dataset for Quick Sight from Amazon Redshift, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/generate-db2-zos-data-insights-aws-mainframe-modernization-amazon-q-in-quicksight.html) | Migration engineer | 
| Join the dataset. | To create analytics in Quick Sight, join the two tables by following the instructions in the [AWS documentation](https://docs.aws.amazon.com/quicksight/latest/user/joining-data.html#create-a-join).In the **Join Configuration** pane, choose **Left** for **Join type**. Under **Join clauses**, use `memberid from member_plans = memberid from members_details`. | Migration engineer | 

### Get business insights from the mainframe data by using Amazon Q in Quick Sight
<a name="get-business-insights-from-the-mainframe-data-by-using-qdev-in-quick-sight"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Set up Amazon Q in Quick Sight. | To set up the Amazon Q in Quick Sight generative BI capability, follow the instructions in the [AWS documentation](https://docs.aws.amazon.com/quicksight/latest/user/generative-bi-get-started.html). | Migration engineer | 
| Analyze mainframe data and build a visual dashboard. | To analyze and visualize your data in Quick Sight, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/generate-db2-zos-data-insights-aws-mainframe-modernization-amazon-q-in-quicksight.html)When you're finished, you can publish your dashboard to share with others in your organization. For examples, see *Mainframe visual dashboard* in the [Additional information](https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/generate-data-insights-by-using-aws-mainframe-modernization-and-amazon-q-in-quicksight.html#generate-data-insights-by-using-aws-mainframe-modernization-and-amazon-q-in-quicksight-additional) section. | Migration engineer | 

### Create a data story with Amazon Q in Quick Sight from mainframe data
<a name="create-a-data-story-with-qdev-in-quick-sight-from-mainframe-data"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a data story. | Create a data story to explain insights from the previous analysis, and generate a recommendation to increase preventive immunization for members:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/generate-db2-zos-data-insights-aws-mainframe-modernization-amazon-q-in-quicksight.html) | Migration engineer | 
| View the generated data story. | To view the generated data story, choose that story on the **Data stories** page. | Migration engineer | 
| Edit a generated data story. | To change the formatting, layout, or visuals in a data story, follow the instructions in the [AWS documentation](https://docs.aws.amazon.com/quicksight/latest/user/working-with-stories-edit.html). | Migration engineer | 
| Share a data story. | To share a data story, follow the instructions in the [AWS documentation](https://docs.aws.amazon.com/quicksight/latest/user/working-with-stories-share.html). | Migration engineer | 

## Troubleshooting
<a name="generate-db2-zos-data-insights-aws-mainframe-modernization-amazon-q-in-quicksight-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| When you create a Quick Sight dataset from Amazon Redshift, `Validate Connection` fails. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/generate-db2-zos-data-insights-aws-mainframe-modernization-amazon-q-in-quicksight.html) | 
| Trying to start the Apply Engine on the EC2 instance returns the following error:`-bash: sqdeng: command not found` | Export the `sqdata` installation path by running the following command:<pre>export PATH=$PATH:/usr/sbin:/opt/precisely/di/sqdata/bin</pre> | 
| Trying to start the Apply Engine returns one of the following connection errors:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/generate-db2-zos-data-insights-aws-mainframe-modernization-amazon-q-in-quicksight.html) | Check the mainframe spool to make sure that the controller daemon jobs are running. | 

## Related resources
<a name="generate-db2-zos-data-insights-aws-mainframe-modernization-amazon-q-in-quicksight-resources"></a>
+ [Generate insights by using AWS Mainframe Modernization and Amazon Q in Quick Sight](https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/generate-data-insights-by-using-aws-mainframe-modernization-and-amazon-q-in-quicksight.html?did=pg_card&trk=pg_card) (pattern)
+ [Generate data insights by using AWS Mainframe Modernization and Amazon Q in Quick Sight](https://youtu.be/F8b7l79p6TM?si=gASuQtFbMVuEm7IJ) (demo)
+ [AWS Mainframe Modernization - Data Replication for IBM z/OS](https://aws.amazon.com/marketplace/pp/prodview-doe2lroefogia?sr=0-4&ref_=beagle&applicationId=AWSMPContessa)
+ [Amazon Redshift streaming ingestion to a materialized view](https://docs.aws.amazon.com/redshift/latest/dg/materialized-view-streaming-ingestion.html)

## Additional information
<a name="generate-db2-zos-data-insights-aws-mainframe-modernization-amazon-q-in-quicksight-additional"></a>

**Example .ddl files**

*members\_details.ddl*

```
CREATE TABLE MEMBER_DTLS (
memberid INTEGER NOT NULL,
member_name VARCHAR(50),
member_type VARCHAR(20),
age INTEGER,
gender CHAR(1),
email VARCHAR(100),
region VARCHAR(20)
);
```

*member\_plans.ddl*

```
CREATE TABLE MEMBER_PLANS (
memberid INTEGER NOT NULL,
medical_plan CHAR(1),
dental_plan CHAR(1),
vision_plan CHAR(1),
preventive_immunization VARCHAR(20)
);
```

**Example .sqd file**

Replace `<kafka topic name>` with your Amazon MSK topic name.

*script.sqd*

```
-- Name: DB2ZTOMSK: DB2z To MSK
JOBNAME DB2ZTOMSK;
REPORT EVERY 1;
OPTIONS CDCOP('I','U','D');

-- Source Descriptions 
BEGIN GROUP DB2_SOURCE; 
DESCRIPTION DB2SQL /var/precisely/di/sqdata/apply/DB2ZTOMSK/ddl/mem_details.ddl AS MEMBER_DTLS;
DESCRIPTION DB2SQL /var/precisely/di/sqdata/apply/DB2ZTOMSK/ddl/mem_plans.ddl AS MEMBER_PLANS; 
END GROUP;
-- Source Datastore 
DATASTORE cdc://<zos_host_name>/DB2ZTOMSK/DB2ZTOMSK
OF UTSCDC 
AS CDCIN 
DESCRIBED BY GROUP DB2_SOURCE ;
-- Target Datastore(s)
DATASTORE 'kafka:///<kafka topic name>/key'
OF JSON
AS TARGET
DESCRIBED BY GROUP DB2_SOURCE;
PROCESS INTO TARGET
SELECT
{
REPLICATE(TARGET)
}
FROM CDCIN;
```

**Mainframe visual dashboard**

The following data visual was created by Amazon Q in Quick Sight for the analysis question `show member distribution by region`.

![\[Northeast and Southwest have 8 members, Southeast has 6 members, Midwest has 4 members.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/18e72bcb-1b9a-406a-8220-83aca7743ad2/images/b40a784c-c1fc-444b-b6df-8bd1f7a6abaa.png)


The following data visual was created by Amazon Q in Quick Sight for the question `show member distribution by Region who have not completed preventive immunization, in pie chart`.

![\[Southeast shows 6, Southwest shows 5, and Midwest shows 4.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/18e72bcb-1b9a-406a-8220-83aca7743ad2/images/8a95da3c-df4a-458b-9cfe-44e34f80a235.png)


**Data story output**

The following screenshots show sections of the data story created by Amazon Q in Quick Sight for the prompt `Build a data story about Region with most numbers of members. Also show the member distribution by age, member distribution by gender. Recommend how to motivate members to complete immunization. Include 4 points of supporting data for this pattern`.

In the introduction, the data story recommends choosing the region with the most members to gain the greatest impact from immunization efforts.

![\[Introduction screen for analysis based on geographic, demographic, and age of the member base.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/18e72bcb-1b9a-406a-8220-83aca7743ad2/images/40f13957-2db4-42b7-b7a4-a0dd3dad6899.png)


The data story provides an analysis of member numbers for the four regions. The Northeast, Southwest, and Southeast regions have the most members.

![\[Northeast and Southwest regions have 8 members, Southeast has 6 members, and Midwest has 4 members.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/18e72bcb-1b9a-406a-8220-83aca7743ad2/images/fc6ed0a0-b79c-4397-95ac-a2fc4c87482a.png)


The data story presents an analysis of members by age.

![\[Chart showing that the member base skews toward younger and middle-aged adults.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/18e72bcb-1b9a-406a-8220-83aca7743ad2/images/8c56f1ec-3a2e-47a6-bbc4-3631782aa333.png)


The data story focuses on immunization efforts in the Midwest.

![\[Recommendation for personal outreach campaign and regional challenges.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/18e72bcb-1b9a-406a-8220-83aca7743ad2/images/84a647e8-c7d5-4637-94f0-03a611f899b3.png)


![\[Continuation of data story analysis, with anticipated outcomes and conclusion.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/18e72bcb-1b9a-406a-8220-83aca7743ad2/images/fc9094fc-2a20-485d-b238-e5e4ec70f1d3.png)


## Attachments
<a name="attachments-18e72bcb-1b9a-406a-8220-83aca7743ad2"></a>

To access additional content that is associated with this document, unzip the following file: [attachment.zip](samples/p-attach/18e72bcb-1b9a-406a-8220-83aca7743ad2/attachments/attachment.zip)

# Generate data insights by using AWS Mainframe Modernization and Amazon Q in Quick Sight
<a name="generate-data-insights-by-using-aws-mainframe-modernization-and-amazon-q-in-quicksight"></a>

*Shubham Roy, Roshna Razack, and Santosh Kumar Singh, Amazon Web Services*

## Summary
<a name="generate-data-insights-by-using-aws-mainframe-modernization-and-amazon-q-in-quicksight-summary"></a>

Note: AWS Mainframe Modernization Service (Managed Runtime Environment experience) is no longer open to new customers. For capabilities similar to AWS Mainframe Modernization Service (Managed Runtime Environment experience), explore AWS Mainframe Modernization Service (Self-Managed Experience). Existing customers can continue to use the service as normal. For more information, see [AWS Mainframe Modernization availability change](https://docs.aws.amazon.com/m2/latest/userguide/mainframe-modernization-availability-change.html).

If your organization is hosting business-critical data in a mainframe environment, gaining insights from that data is crucial for driving growth and innovation. By unlocking mainframe data, you can build fast, secure, and scalable business intelligence to accelerate data-driven decision-making, growth, and innovation in the Amazon Web Services (AWS) Cloud.

This pattern presents a solution for generating business insights and creating sharable narratives from mainframe data by using [AWS Mainframe Modernization file transfer](https://docs.aws.amazon.com/m2/latest/userguide/filetransfer.html) with BMC and [Amazon Q in Quick Sight](https://docs.aws.amazon.com/quicksight/latest/user/quicksight-gen-bi.html). Mainframe datasets are transferred to [Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) by using AWS Mainframe Modernization file transfer with BMC. An AWS Lambda function formats and prepares the mainframe data file for loading into Quick Sight.

After the data is available in Quick Sight, you can use natural language prompts with [Amazon Q](https://docs.aws.amazon.com/quicksight/latest/user/quicksight-gen-bi.html) in Quick Sight to create summaries of the data, ask questions, and generate data stories. You don't have to write SQL queries or learn a business intelligence (BI) tool.

**Business context**

This pattern presents a solution for mainframe data analytics and data insights use cases. Using the pattern, you build a visual dashboard for your company's data. To demonstrate the solution, this pattern uses a health care company that provides medical, dental, and vision plans to its members in the US. In this example, member demographics and plan information are stored in the mainframe datasets. The visual dashboard shows the following:
+ Member distribution by region
+ Member distribution by gender
+ Member distribution by age
+ Member distribution by plan type
+ Members who have not completed preventive immunization

After you create the dashboard, you generate a data story that explains the insights from the previous analysis. The data story provides recommendations for increasing the number of members who have completed preventive immunizations.

## Prerequisites and limitations
<a name="generate-data-insights-by-using-aws-mainframe-modernization-and-amazon-q-in-quicksight-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ Mainframe datasets with business data
+ Access to install a file transfer agent on the mainframe

**Limitations**
+ Your mainframe data file should be in one of the file formats supported by Quick Sight. For a list of supported file formats, see [Supported data sources](https://docs.aws.amazon.com/quicksuite/latest/userguide/supported-data-sources.html).
+ This pattern uses a Lambda function to convert the mainframe file into a format supported by Quick Sight.

## Architecture
<a name="generate-data-insights-by-using-aws-mainframe-modernization-and-amazon-q-in-quicksight-architecture"></a>

The following diagram shows an architecture for generating business insights from mainframe data by using AWS Mainframe Modernization file transfer with BMC and Amazon Q in Quick Sight.

![\[Architecture diagram description follows the diagram.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/53572abb-06c6-4dd7-add4-8fad7e9bfa68/images/6fe0f1d9-961c-4089-a746-e5b8d5fd6c1e.png)


The diagram shows the following workflow:

1. A mainframe dataset containing business data is transferred to Amazon S3 by using AWS Mainframe Modernization file transfer with BMC.

1. The Lambda function converts the file that's in the file-transfer destination S3 bucket to comma-separated values (CSV) format.

1. The Lambda function sends the converted file to the source dataset S3 bucket.

1. The data in the file is ingested by Quick Sight.

1. Users access the data in Quick Sight. They can use Amazon Q in Quick Sight to interact with the data by using natural language prompts.

## Tools
<a name="generate-data-insights-by-using-aws-mainframe-modernization-and-amazon-q-in-quicksight-tools"></a>

**AWS services**
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a compute service that helps you run code without needing to provision or manage servers. It runs your code only when needed and scales automatically, so you pay only for the compute time that you use.
+ [AWS Mainframe Modernization file transfer with BMC](https://docs.aws.amazon.com/m2/latest/userguide/filetransfer.html) converts and transfers mainframe datasets to Amazon S3 for mainframe modernization, migration, and augmentation use cases.
+ [Amazon Quick Sight](https://docs.aws.amazon.com/quicksight/latest/user/welcome.html) is a cloud-scale BI service that helps you visualize, analyze, and report your data in a single dashboard. This pattern uses the generative BI capabilities of [Amazon Q in Quick Sight](https://docs.aws.amazon.com/quicksight/latest/user/working-with-quicksight-q.html).
+ [Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) is a cloud-based object storage service that helps you store, protect, and retrieve any amount of data.

## Best practices
<a name="generate-data-insights-by-using-aws-mainframe-modernization-and-amazon-q-in-quicksight-best-practices"></a>
+ When you create the AWS Identity and Access Management (IAM) roles for AWS Mainframe Modernization file transfer with BMC and the Lambda function, follow the principle of [least privilege](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#grant-least-privilege).
+ Ensure that your source dataset has [supported data types](https://docs.aws.amazon.com/quicksight/latest/user/supported-data-types-and-values.html) for Quick Sight. If your source dataset contains unsupported data types, convert them to supported data types. For information about unsupported mainframe data types and how to convert them to data types supported by Amazon Q in Quick Sight, see the [Related resources](#generate-data-insights-by-using-aws-mainframe-modernization-and-amazon-q-in-quicksight-resources) section.
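
As an illustration of one such conversion, the following minimal Python sketch (a demonstration only, not part of this pattern's Lambda code) unpacks a COBOL PACKED-DECIMAL (COMP-3) field into a numeric value that Quick Sight can ingest; the field width and decimal scale are assumptions you would take from your copybook:

```python
def unpack_comp3(field: bytes, scale: int = 0) -> float:
    """Decode a COBOL PACKED-DECIMAL (COMP-3) field.

    Each byte holds two decimal digits (one per nibble); the final
    nibble is the sign (0xD = negative, 0xC or 0xF = positive).
    """
    nibbles = []
    for byte in field:
        nibbles.append(byte >> 4)
        nibbles.append(byte & 0x0F)
    sign = -1 if nibbles.pop() == 0x0D else 1
    value = 0
    for digit in nibbles:
        value = value * 10 + digit
    return sign * value / (10 ** scale)

# A PIC S9(3)V99 COMP-3 value of +123.45 is stored as 0x12 0x34 0x5C.
print(unpack_comp3(b"\x12\x34\x5C", scale=2))  # 123.45
```

The patterns listed in the [Related resources](#generate-data-insights-by-using-aws-mainframe-modernization-and-amazon-q-in-quicksight-resources) section cover production-grade approaches to this conversion.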

## Epics
<a name="generate-data-insights-by-using-aws-mainframe-modernization-and-amazon-q-in-quicksight-epics"></a>

### Set up AWS Mainframe Modernization file transfer with BMC
<a name="set-up-m2long-file-transfer-with-bmc"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Install the file transfer agent. | To install the AWS Mainframe Modernization file transfer agent, follow the instructions in the [AWS documentation](https://docs.aws.amazon.com/m2/latest/userguide/m2-agent-installation.html). | Mainframe system administrator | 
| Create an S3 bucket for mainframe file transfer. | [Create an S3 bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/create-bucket-overview.html) to store the output file from AWS Mainframe Modernization file transfer with BMC. In the architecture diagram, this is the file-transfer destination bucket. | Migration engineer | 
| Create the data transfer endpoint. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/generate-data-insights-by-using-aws-mainframe-modernization-and-amazon-q-in-quicksight.html) | AWS Mainframe Modernization specialist | 

### Convert the mainframe file name extension for Quick Sight integration
<a name="convert-the-mainframe-file-name-extension-for-quick-sight-integration"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an S3 bucket. | [Create an S3 bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/create-bucket-overview.html) to receive the converted mainframe file from the Lambda function. In the architecture diagram, this is the source dataset bucket. | Migration engineer | 
| Create a Lambda function. | To create a Lambda function that changes the file extension and copies the mainframe file to the destination bucket, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/generate-data-insights-by-using-aws-mainframe-modernization-and-amazon-q-in-quicksight.html) | Migration engineer | 
| Create an Amazon S3 trigger to invoke the Lambda function. | To configure a trigger that invokes the Lambda function, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/generate-data-insights-by-using-aws-mainframe-modernization-and-amazon-q-in-quicksight.html) For more information, see [Tutorial: Using an Amazon S3 trigger to invoke a Lambda function](https://docs.aws.amazon.com/lambda/latest/dg/with-s3-example.html). | Migration lead | 
| Provide IAM permissions for the Lambda function. | The Lambda function needs IAM permissions to access the file-transfer destination and source dataset S3 buckets. Update the policy associated with the Lambda function execution role to allow `s3:GetObject` and `s3:DeleteObject` permissions for the file-transfer destination S3 bucket and `s3:PutObject` access for the source dataset S3 bucket. For more information, see the [Create a permissions policy](https://docs.aws.amazon.com/lambda/latest/dg/with-s3-example.html#with-s3-example-create-policy) section in *Tutorial: Using an Amazon S3 trigger to invoke a Lambda function*. | Migration lead | 
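
The execution-role policy for the Lambda function might look like the following sketch. The bucket names are placeholders; substitute your own file-transfer destination and source dataset bucket names:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::file-transfer-destination-bucket/*"
    },
    {
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::source-dataset-bucket/*"
    }
  ]
}
```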

### Define a mainframe data transfer task
<a name="define-a-mainframe-data-transfer-task"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a transfer task to copy the mainframe file to the S3 bucket. | To create a mainframe file transfer task, follow the instructions in the [AWS Mainframe Modernization documentation](https://docs.aws.amazon.com/m2/latest/userguide/filetransfer-transfer-tasks.html). Specify **Source code page** encoding as **IBM1047** and **Target code page** encoding as **UTF-8**. | Migration engineer | 
| Verify the transfer task. | To verify that the data transfer is successful, follow the instructions in the [AWS Mainframe Modernization documentation](https://docs.aws.amazon.com/m2/latest/userguide/filetransfer-transfer-tasks.html#filetransfer-ts-view-console). Confirm that the mainframe file is in the file-transfer destination S3 bucket. | Migration lead | 
| Verify the Lambda copy function. | Verify that the Lambda function is initiated and that the file is copied with a .csv extension to the source dataset S3 bucket. The .csv file created by the Lambda function is the input data file for Quick Sight. For example data, see the `Sample-data-member-healthcare-APG` file in the [Attachments](#attachments-53572abb-06c6-4dd7-add4-8fad7e9bfa68) section. | Migration lead | 

### Connect Quick Sight to the mainframe data
<a name="connect-quick-sight-to-the-mainframe-data"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Set up Quick Sight. | To set up Quick Sight, follow the instructions in the [AWS documentation](https://docs.aws.amazon.com/quicksight/latest/user/setting-up.html). | Migration lead | 
| Create a dataset for Quick Sight. | To create a dataset for Quick Sight, follow the instructions in the [AWS documentation](https://docs.aws.amazon.com/quicksight/latest/user/create-a-data-set-s3.html). The input data file is the converted mainframe file that was created when you defined the mainframe data transfer task. | Migration lead | 

### Get business insights from the mainframe data by using Amazon Q in Quick Sight
<a name="get-business-insights-from-the-mainframe-data-by-using-qdev-in-quick-sight"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Set up Amazon Q in Quick Sight. | This capability requires Enterprise Edition. To set up Amazon Q in Quick Sight, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/generate-data-insights-by-using-aws-mainframe-modernization-and-amazon-q-in-quicksight.html) | Migration lead | 
| Analyze mainframe data and build a visual dashboard. | To analyze and visualize your data in Quick Sight, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/generate-data-insights-by-using-aws-mainframe-modernization-and-amazon-q-in-quicksight.html) When you're finished, you can publish your dashboard to share with others in your organization. For examples, see *Mainframe visual dashboard* in the [Additional information](#generate-data-insights-by-using-aws-mainframe-modernization-and-amazon-q-in-quicksight-additional) section. | Migration engineer | 

### Create a data story with Amazon Q in Quick Sight from the mainframe data
<a name="create-a-data-story-with-qdev-in-quick-sight-from-the-mainframe-data"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a data story. | Create a data story to explain insights from the previous analysis, and generate a recommendation to increase preventive immunization for members:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/generate-data-insights-by-using-aws-mainframe-modernization-and-amazon-q-in-quicksight.html) | Migration engineer | 
| View the generated data story. | To view the generated data story, follow the instructions in the [AWS documentation](https://docs.aws.amazon.com/quicksight/latest/user/working-with-stories-view.html). | Migration lead | 
| Edit a generated data story. | To change the formatting, layout, or visuals in a data story, follow the instructions in the [AWS documentation](https://docs.aws.amazon.com/quicksight/latest/user/working-with-stories-edit.html). | Migration lead | 
| Share a data story. | To share a data story, follow the instructions in the [AWS documentation](https://docs.aws.amazon.com/quicksight/latest/user/working-with-stories-share.html). | Migration engineer | 

## Troubleshooting
<a name="generate-data-insights-by-using-aws-mainframe-modernization-and-amazon-q-in-quicksight-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| Unable to discover the mainframe files or datasets entered in **Data sets search criteria** for **Create transfer task** in AWS Mainframe Modernization file transfer with BMC. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/generate-data-insights-by-using-aws-mainframe-modernization-and-amazon-q-in-quicksight.html) | 

## Related resources
<a name="generate-data-insights-by-using-aws-mainframe-modernization-and-amazon-q-in-quicksight-resources"></a>

To convert mainframe data types such as [PACKED-DECIMAL (COMP-3)](https://www.ibm.com/docs/en/cobol-zos/6.3?topic=v6-packed-decimal-comp-3) or [BINARY (COMP or COMP-4)](https://www.ibm.com/docs/en/cobol-zos/6.3?topic=v6-binary-comp-comp-4) to a [data type](https://docs.aws.amazon.com/quicksight/latest/user/supported-data-types-and-values.html) supported by Quick Sight, see the following patterns:
+ [Convert and unpack EBCDIC data to ASCII on AWS by using Python](https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/convert-and-unpack-ebcdic-data-to-ascii-on-aws-by-using-python.html)
+ [Convert mainframe files from EBCDIC format to character-delimited ASCII format in Amazon S3 using AWS Lambda](https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/convert-mainframe-files-from-ebcdic-format-to-character-delimited-ascii-format-in-amazon-s3-using-aws-lambda.html)
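
As a small illustration of the character-encoding side of this conversion, the following Python sketch decodes a fixed-width EBCDIC record by using the standard library's cp037 codec. This is an assumption for demonstration: IBM1047 itself isn't shipped with CPython, and cp037 is a close relative that maps the same code points for basic Latin letters and digits.

```python
# Decode a fixed-width EBCDIC record to Unicode text with the built-in
# cp037 codec (IBM1047 is not in CPython's standard codec set).
record = b"\xc8\xc5\xd3\xd3\xd6\x40\xe6\xd6\xd9\xd3\xc4"
text = record.decode("cp037")
print(text)  # HELLO WORLD
```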

## Additional information
<a name="generate-data-insights-by-using-aws-mainframe-modernization-and-amazon-q-in-quicksight-additional"></a>

**S3CopyLambda.py**

The following Python code was generated by using a prompt with Amazon Q in an IDE:

```
#Create a lambda function triggered by S3. display the S3 bucket name and key
import boto3

s3 = boto3.client('s3')

def lambda_handler(event, context):
    print(event)
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = event['Records'][0]['s3']['object']['key']
    print(bucket, key)
    #If key starts with object_created, skip copy, print "copy skipped". Return lambda with key value.
    if key.startswith('object_created'):
        print("copy skipped")
        return {
            'statusCode': 200,
            'body': key
        }
    #Copy the file from the source bucket to the destination bucket 'm2-filetransfer-final-opt-bkt' as 'healthdata.csv'.
    copy_source = {'Bucket': bucket, 'Key': key}
    s3.copy_object(Bucket='m2-filetransfer-final-opt-bkt',
                   Key='healthdata.csv', CopySource=copy_source)
    print("file copied")
    #Delete the file from the source bucket.
    s3.delete_object(Bucket=bucket, Key=key)
    return {
        'statusCode': 200,
        'body': 'Copy Successful'
    }
```

**Mainframe visual dashboard**

The following data visual was created by Amazon Q in Quick Sight for the analysis question `show member distribution by region`.

![\[Chart showing numbers of members for southwest, midwest, northeast, and southeast.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/53572abb-06c6-4dd7-add4-8fad7e9bfa68/images/e5c1d049-407d-42ff-bc51-28f9d2b24d4f.png)


The following data visual was created by Amazon Q in Quick Sight for the question `show member distribution by Region who have not completed preventive immunization, in pie chart`.

![\[Pie chart showing preventive immunization incompletion by region: Southeast 40%, Southwest 33%, Midwest 27%.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/53572abb-06c6-4dd7-add4-8fad7e9bfa68/images/47efa1c1-54c9-47cc-b668-416090021d34.png)


**Data story output**

The following screenshots show sections of the data story created by Amazon Q in Quick Sight for the prompt `Build a data story about Region with most numbers of members. Also show the member distribution by medical plan, vision plan, dental plan. Recommend how to motivate members to complete immunization. Include 4 points of supporting data.`

In the introduction, the data story recommends choosing the region with the most members to gain the greatest impact from immunization efforts.

![\[Introduction page for data story focusing on immunization completion rates.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/53572abb-06c6-4dd7-add4-8fad7e9bfa68/images/4612fcc7-51fd-48a5-bc58-b6b0aa9b0ef3.png)


The data story provides an analysis of member numbers for the top three regions, and names the Southwest as the leading region for focusing on immunization efforts.

![\[Pie chart showing member distribution by region, with Southwest and Northeast leading at 31% each.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/53572abb-06c6-4dd7-add4-8fad7e9bfa68/images/30d3b56b-3b92-4748-9cef-a73ff9339fee.png)


**Note**  
The Southwest and Northeast regions each have eight members. However, the Southwest has more members that aren't fully vaccinated, so it has more potential to benefit from initiatives to increase immunization rates.

## Attachments
<a name="attachments-53572abb-06c6-4dd7-add4-8fad7e9bfa68"></a>

To access additional content that is associated with this document, unzip the following file: [attachment.zip](samples/p-attach/53572abb-06c6-4dd7-add4-8fad7e9bfa68/attachments/attachment.zip)

# Implement Microsoft Entra ID-based authentication in an AWS Blu Age modernized mainframe application
<a name="implement-entra-id-authentication-in-aws-blu-age-modernized-mainframe-application"></a>

*Vishal Jaswani and Rimpy Tewani, Amazon Web Services*

## Summary
<a name="implement-entra-id-authentication-in-aws-blu-age-modernized-mainframe-application-summary"></a>

**Note**  
AWS Mainframe Modernization Service (Managed Runtime Environment experience) is no longer open to new customers. For capabilities similar to AWS Mainframe Modernization Service (Managed Runtime Environment experience), explore AWS Mainframe Modernization Service (Self-Managed Experience). Existing customers can continue to use the service as normal. For more information, see [AWS Mainframe Modernization availability change](https://docs.aws.amazon.com/m2/latest/userguide/mainframe-modernization-availability-change.html).

Mainframe applications that are modernized by using refactoring patterns, such as those by [AWS Mainframe Modernization Refactor with AWS Blu Age](https://docs.aws.amazon.com/m2/latest/userguide/refactoring-m2.html), require careful integration of authentication mechanisms into the new application architecture. This integration is typically addressed as a post-modernization activity. The task can be complex and often involves the migration or externalization of existing authentication systems to align with modern security standards and cloud-native practices. Developers need to consider how to implement authentication effectively while they work within the constraints of the modernized application's runtime environment and libraries. After modernization, AWS provides ways to help you integrate your modernized AWS Blu Age code with identity and access management systems such as [Amazon Cognito](https://docs.aws.amazon.com/cognito/latest/developerguide/what-is-amazon-cognito.html) and [Microsoft Entra ID](https://www.microsoft.com/en-us/security/business/identity-access/microsoft-entra-id) (formerly known as Azure AD).

This pattern explains how to implement an authentication mechanism in your modernized application when the authentication provider is Microsoft Entra ID, without spending time on research and trials. The pattern provides:
+ Field-tested and relevant Angular libraries from the Microsoft Authentication Library (MSAL) and other Microsoft Entra ID documentation that are essential to the authentication implementation. 
+ Configurations required on the AWS Blu Age Runtime to enable Spring Security by using OAuth 2.0.
+ A library that captures authenticated users’ identities and passes them to the AWS Blu Age Runtime.
+ Security measures that we recommend implementing.
+ Troubleshooting tips for commonly encountered problems with the Microsoft Entra ID setup.

**Note**  
This pattern uses the AWS Blu Age OAuth extension library, which is provided to customers as part of their [AWS Professional Services](https://aws.amazon.com/professional-services/) engagement. This library isn’t part of the AWS Blu Age Runtime.

## Prerequisites and limitations
<a name="implement-entra-id-authentication-in-aws-blu-age-modernized-mainframe-application-prereqs"></a>

**Prerequisites**
+ A modernized application that was produced by AWS Blu Age mainframe modernization refactoring tools. This pattern uses [CardDemo](https://github.com/aws-samples/aws-mainframe-modernization-carddemo) as a sample open source mainframe application.
+ The AWS Blu Age OAuth extension library, which is provided by the AWS Blu Age team during your engagement with [AWS Professional Services](https://aws.amazon.com/professional-services/).
+ An active AWS account to deploy and test the modernized application.
+ Familiarity with AWS Blu Age configuration files and Microsoft Entra ID fundamentals.

**Limitations**
+ This pattern covers OAuth 2.0 authentication and basic token-based authorization flows. Advanced authorization scenarios and fine-grained access control mechanisms are not in scope.
+ Some AWS services aren’t available in all AWS Regions. For Region availability, see [AWS services by Region](https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/). For specific endpoints, see [Service endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/aws-service-information.html), and choose the link for the service.

**Product versions**

This pattern was developed by using:
+ AWS Blu Age Runtime version 4.1.0 (the pattern also works with later versions that are backward compatible)
+ MSAL library version 3.0.23
+ Java Development Kit (JDK) version 17
+ Angular version 16.1

## Architecture
<a name="implement-entra-id-authentication-in-aws-blu-age-modernized-mainframe-application-architecture"></a>

**Source technology stack**

In typical mainframe environments, authentication is implemented through user profiles. These profiles identify users to the system, define who can sign in, and specify which functions users can perform on system resources. User profiles are managed by security officers or security administrators.

**Target technology stack**
+ Microsoft Entra ID
+ Modernized Java Spring Boot-based backend
+ AWS Blu Age Runtime
+ Spring Security with OAuth 2.0
+ Angular single-page application (SPA)

**Target architecture**

The AWS Blu Age Runtime supports OAuth 2.0-based authentication by default, so this pattern uses that standard to protect the backend APIs.

The following diagram illustrates the process flow.

**Note**  
The diagram includes Amazon Aurora as an example of database modernization although Aurora isn’t included in the steps for this pattern.

![\[Process flow for Entra ID-based authentication for an AWS Blu Age application.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/e51f24b8-178f-4974-aae9-23a0cc8540f5/images/0fdcdb22-9e46-4b02-86b2-395cba3e2f81.png)


The diagram shows the following workflow:

1. A user tries to authenticate with Microsoft Entra ID.

1. Microsoft Entra ID returns refresh, access, and ID tokens that the application uses in subsequent calls.

1. The MSAL interceptor includes the access token in the `Authorization` header of an HTTPS request to call the AWS Blu Age Runtime.

1. The AWS Blu Age `extension-oauth` library extracts the user information from the header by using an AWS Blu Age Runtime configuration file (`application-main.yml`) and places this information in a `SharedContext` object so that the business logic can consume it.
**Note**  
`SharedContext` is a runtime component provided by AWS Blu Age that manages application context and state information across the modernized application. For more information about AWS Blu Age Runtime components and updates, see [AWS Blu Age release notes](https://docs.aws.amazon.com/m2/latest/userguide/ba-release-notes.html) in the AWS Mainframe Modernization documentation. For more information about the `application-main.yml` file, see [Set up configuration for AWS Blu Age Runtime](https://docs.aws.amazon.com/m2/latest/userguide/ba-runtime-config.html) in the AWS Mainframe Modernization documentation.

1. The AWS Blu Age Runtime checks if the token is present. 

   1. If the token is present, it checks the validity of the token by communicating with Microsoft Entra ID. 

   1. If the token isn’t present, the AWS Blu Age Runtime returns an error with HTTP status code 403.

1. If the token is valid, the AWS Blu Age Runtime allows the business logic to continue. If the token is invalid, the AWS Blu Age Runtime returns an error with HTTP status code 403.

**OAuth 2.0 workflow**

For a high-level diagram of the OAuth 2.0 workflow, see the [Microsoft Entra documentation](https://learn.microsoft.com/en-us/entra/identity-platform/v2-oauth2-auth-code-flow#protocol-details).

## Tools
<a name="implement-entra-id-authentication-in-aws-blu-age-modernized-mainframe-application-tools"></a>

**AWS services**

[AWS Mainframe Modernization](https://docs.aws.amazon.com/m2/latest/userguide/what-is-m2.html) provides tools and resources to help you plan and implement migration and modernization from mainframes to AWS managed runtime environments. You can use this service’s refactoring features, which are provided by AWS Blu Age, to convert and modernize your legacy mainframe applications.

**Note**  
AWS Mainframe Modernization Service (Managed Runtime Environment experience) is no longer open to new customers. For capabilities similar to AWS Mainframe Modernization Service (Managed Runtime Environment experience), explore AWS Mainframe Modernization Service (Self-Managed Experience). Existing customers can continue to use the service as normal. For more information, see [AWS Mainframe Modernization availability change](https://docs.aws.amazon.com/m2/latest/userguide/mainframe-modernization-availability-change.html).

**Code repository**

The CardDemo application has been updated to demonstrate integration with Microsoft Entra ID. You can access the code from the [GitHub repository for this pattern](https://github.com/aws-samples/sample-microsoft-entra-id-based-auth-in-aws-bluage-modernized-mainframe-app).

**Backend configuration**

This pattern requires changes to the `application-main.yml` configuration file to enable Spring Security by using OAuth 2.0 on the backend application. The `.yml` file looks like this:

```
gapwalk-application.security: enabled
gapwalk-application:
  security: 
    identity: oauth
    issuerUri: ${issuerUrl}
    claim:
      claims:
        -
          claimName: upn
          claimMapValue: username
spring:
  autoconfigure:
    exclude:
     - org.springframework.boot.autoconfigure.security.oauth2.client.servlet.OAuth2ClientAutoConfiguration
     - org.springframework.boot.autoconfigure.security.oauth2.resource.servlet.OAuth2ResourceServerAutoConfiguration
  security:
    oauth2:
      client:
        registration: 
          azure:
            client-id: ${clientId}
            client-secret: ${clientSecret}
            provider: azure
            authorization-grant-type: authorization_code
            redirect-uri: ${redirectUri}
            scope: openid
           
        provider:
          azure:
            authorization-uri: ${gapwalk-application.security.issuerUri}/oauth2/v2.0/authorize
            token-uri:  ${gapwalk-application.security.issuerUri}/oauth2/v2.0/token
            jwk-set-uri: ${gapwalk-application.security.issuerUri}/discovery/v2.0/keys
      resourceserver:
        jwt:
          jwk-set-uri: ${gapwalk-application.security.issuerUri}/discovery/v2.0/keys
```

**AWS Blu Age OAuth extension filter library**

The AWS Blu Age OAuth extension library is provided by the AWS Blu Age team during your engagement with [AWS Professional Services](https://aws.amazon.com/professional-services/).

This library reads the `claim.claims` configuration in the `application-main.yml` file that’s shown in the previous code block. This configuration is a list. Each item in the list provides two values: `claimName` and `claimMapValue`. `claimName` represents a key name in a JSON Web Token (JWT) sent by the frontend, and `claimMapValue` is the name of the key in `SharedContext`. For example, if you want to capture the user ID on the backend, set `claimName` to the key name in the JWT that holds the user ID provided by Microsoft Entra ID, and set `claimMapValue` to the key name that the backend code uses to fetch the user ID.

For example, if you set `claimMapValue` to `UserId`, you can use the following code to extract the user ID:

```
SharedContext.get().getValue("userId", [UserId]);
```
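
Conceptually, the claim extraction reads one named key out of the JWT's Base64URL-encoded payload. The following Python sketch (an illustration only, not AWS Blu Age code) shows that mechanic with a toy unsigned token; in the real flow, the runtime also validates the token's signature against the issuer:

```python
import base64
import json

def claim_from_jwt(token: str, claim_name: str):
    """Read one claim from a JWT payload.

    No signature check is done here; the runtime validates the token
    against Microsoft Entra ID separately.
    """
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore Base64 padding
    payload = json.loads(base64.urlsafe_b64decode(payload_b64))
    return payload.get(claim_name)

# Build a toy unsigned token to demonstrate.
header = base64.urlsafe_b64encode(b'{"alg":"none"}').decode().rstrip("=")
body = base64.urlsafe_b64encode(b'{"upn":"jdoe@example.com"}').decode().rstrip("=")
token = f"{header}.{body}."
print(claim_from_jwt(token, "upn"))  # jdoe@example.com
```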

## Best practices
<a name="implement-entra-id-authentication-in-aws-blu-age-modernized-mainframe-application-best-practices"></a>

In your implementation of this pattern, take the following important security considerations into account.

**Important**  
This pattern provides a foundation for authentication integration. We recommend that you implement security measures in addition to those discussed in this section based on your business requirements before you deploy it to production.
+ **AWS configuration security.** Move sensitive configuration values from `application-main.yml` to AWS Secrets Manager. For example, configure the following properties by using Secrets Manager:

  ```
  security:
      oauth2:
        client:
          registration: 
            azure:
              client-id: ${clientId}
              client-secret: ${clientSecret}
  ```

  For more information about how you can use Secrets Manager to configure AWS Blu Age parameters, see [AWS Blu Age Runtime secrets](https://docs.aws.amazon.com/m2/latest/userguide/ba-runtime-config-app-secrets.html) in the AWS Mainframe Modernization documentation.
+ **Runtime environment protection.** Configure the modernized application environment with proper AWS security controls:

  ```
  server: 
    tomcat: 
      remoteip: 
       protocol-header: X-Forwarded-Proto 
       remote-ip-header: X-Forwarded-For 
    forward-headers-strategy: NATIVE
  ```
+ **Amazon CloudWatch logging.** Consider adding the file `logback-spring.xml` to `src/main/resources`:

  ```
  <configuration>
    <appender name="CLOUDWATCH" class="com.amazonaws.services.logs.logback.CloudWatchAppender">
      <logGroup>/aws/bluage/application</logGroup>
      <logStream>${AWS_REGION}-${ENVIRONMENT}</logStream>
      <layout>
        <pattern>%d{yyyy-MM-dd HH:mm:ss} [%thread] %-5level %logger{36} - %msg%n</pattern>
      </layout>
    </appender>

    <root level="INFO">
      <appender-ref ref="CLOUDWATCH"/>
    </root>
  </configuration>
  ```

  For information about enabling tracing with CloudWatch, see [Enable trace to log correlation](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Application-Signals-TraceLogCorrelation.html) in the CloudWatch documentation.
+ **Token configuration and handling.** Configure token lifetimes in Microsoft Entra ID to align with your security requirements. Set access tokens to expire within 1 hour and refresh tokens to expire within 24 hours. In the AWS Blu Age Runtime configuration (`application-main.yml`), make sure that JWT validation is configured properly with the exact issuer URI and audience values from your Entra ID application registration.

  When a token expires and is refreshed:

  1. The Angular application's error interceptor handles the 401 response by obtaining a new token through MSAL.

  1. The new token is sent with the subsequent request.

  1. The AWS Blu Age Runtime's OAuth filter validates the new token and automatically updates `SharedContext` with the current user information. This ensures that business logic continues to have access to valid user context through `SharedContext.get().getValue()` calls.

  For more information about the AWS Blu Age Runtime components and their updates, see [AWS Blu Age release notes](https://docs.aws.amazon.com/m2/latest/userguide/ba-release-notes.html).
+ **AWS Blu Age Runtime security.** The `oauth2-ext` library provided by AWS Blu Age must be placed in the correct shared directory location (`{app-server-home}/shared/`) with proper file permissions. Verify that the library successfully extracts user information from JWTs by checking the `SharedContext` object population in your logs.
+ **Specific claims configuration.** In `application-main.yml`, define the claims that you need from Microsoft Entra ID explicitly. For example, to capture the user's email and roles, specify:

  ```
  gapwalk-application:
    security:
      claim:
        claims:
          - claimName: upn
            claimMapValue: username
          - claimName: roles
            claimMapValue: userRoles
          - claimName: email
            claimMapValue: userEmail
  ```
+ **Error handling.** Add error handling to address authentication failures in your Angular application; for example:

  ```
  import { Injectable } from '@angular/core';
  import {
    HttpInterceptor, HttpRequest, HttpHandler, HttpEvent, HttpErrorResponse
  } from '@angular/common/http';
  import { Router } from '@angular/router';
  import { Observable, throwError } from 'rxjs';
  import { catchError } from 'rxjs/operators';
  // Import path is an example; adjust to your application's auth wrapper around MSAL
  import { AuthService } from './auth.service';
  
  @Injectable()
  export class AuthErrorInterceptor implements HttpInterceptor {
    constructor(private authService: AuthService, private router: Router) {}
  
    intercept(request: HttpRequest<any>, next: HttpHandler): Observable<HttpEvent<any>> {
      return next.handle(request).pipe(
        catchError((error: HttpErrorResponse) => {
          if (error.status === 401) {
            // Token expired: trigger a new login through MSAL
            this.authService.login();
          }
          if (error.status === 403) {
            // Authenticated but not authorized for this resource
            this.router.navigate(['/unauthorized']);
          }
          return throwError(() => error);
        })
      );
    }
  }
  ```
+ **Session timeout configuration.** Configure session timeout settings in both the AWS Blu Age Runtime and Microsoft Entra ID. For example, add the following code to your `application-main.yml` file:

  ```
  server:
    servlet:
      session:
        timeout: 3600 # 1 hour in seconds
  ```
+ **MsalGuard.** You must implement the MsalGuard feature for all protected routes to prevent unauthorized access. For example:

  ```
  const routes: Routes = [
      { path: '', redirectTo: '/transaction-runner', pathMatch: 'full' },
      { path: 'transaction-runner', component: TransactionRunnerComponent, canActivate: guards },
      { path: 'user-info', component: UserInfoComponent, canActivate: guards },
      { path: 'term/:transid/:commarea', component: TermComponent, canActivate: guards },
      { path: 'code', component: TransactionRunnerComponent }
  ];
  ```

  Routes that don’t have MsalGuard protection will be accessible without authentication, potentially exposing sensitive functionality. Make sure that all routes that require authentication include the guards in their configuration.

## Epics
<a name="implement-entra-id-authentication-in-aws-blu-age-modernized-mainframe-application-epics"></a>

### Set up Microsoft Entra ID
<a name="set-up-a-microsoft-entra-id"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Set up a Microsoft Azure account to create a Microsoft Entra tenant. | For options and instructions, see the [Microsoft Azure website](https://azure.microsoft.com/en-us/free/). | App developer | 
| Set up a Microsoft Entra ID in your application. | To learn how to add Microsoft Entra ID B2C (Azure AD B2C) authentication to your Angular SPA, see the [Microsoft documentation](https://learn.microsoft.com/en-us/azure/active-directory-b2c/enable-authentication-angular-spa-app#add-the-authentication-components). Specifically: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/implement-entra-id-authentication-in-aws-blu-age-modernized-mainframe-application.html) | App developer | 

### Clone the repository and deploy your AWS Blu Age code
<a name="clone-the-repository-and-deploy-your-aws-blu-age-code"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clone the GitHub repository to get the Angular code required for authentication. | Run the following command to clone the [GitHub repository](https://github.com/aws-samples/sample-microsoft-entra-id-based-auth-in-aws-bluage-modernized-mainframe-app) that’s provided with this pattern into your local current working directory:<pre>git clone https://github.com/aws-samples/sample-microsoft-entra-id-based-auth-in-aws-bluage-modernized-mainframe-app.git</pre> | App developer | 
| Deploy the AWS Blu Age modernized code on a Tomcat server to implement authentication. | To set up the local environment that includes Tomcat and the Angular development server, follow the installation steps provided by the AWS Blu Age team as part of your customer engagement with AWS Professional Services. | App developer | 

### Build the authentication solution
<a name="build-the-authentication-solution"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Enable AWS Blu Age Runtime security to protect the AWS Blu Age REST API endpoints. | Configure the `application-main.yml` file that the AWS Blu Age Runtime uses as follows. For an example of this file, see the [Code repository](#implement-entra-id-authentication-in-aws-blu-age-modernized-mainframe-application-tools) section earlier in this pattern.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/implement-entra-id-authentication-in-aws-blu-age-modernized-mainframe-application.html) | App developer | 
| Incorporate the example code from your local environment into your Blu Age modernized Angular code base. | For information about how to incorporate the example into your AWS Blu Age modernized Angular code base, see the [Code repository](#implement-entra-id-authentication-in-aws-blu-age-modernized-mainframe-application-tools) section earlier in this pattern. | App developer | 
| Place the `oauth2-ext` library in the shared directory. | Place the `oauth2-ext` library in the shared directory of the application server so that your AWS Blu Age modernized application can use it. Run the following commands:<pre>cd oauth2-ext/target<br />cp extension-oauth-filter-<version>.jar /{app-server-home}/shared/</pre> | App developer | 

### Deploy the authentication solution
<a name="deploy-the-authentication-solution"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Deploy the frontend application. | Run the following commands to start the frontend application locally:<pre>npm install <br />ng serve --ssl<br />npm start</pre>Adding the `--ssl` flag to the `ng serve` command ensures that the development server uses HTTPS, which is more secure than other protocols and provides a better simulation of a production environment. | App developer | 
| Start the backend application. | Start Tomcat server in Eclipse. | App developer | 

### Test the application
<a name="test-the-application"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Test login functionality. | Access the locally deployed application at `http://localhost:4200` to verify that users are asked to confirm their identity. HTTP is used here for demonstration purposes. In a production or other publicly accessible environment, you must use HTTPS for security. Even for local development, we recommend that you set up HTTPS when possible. The Microsoft login prompt should appear, and users who are configured in Microsoft Entra ID should be allowed to access the application. | App developer | 
| Test the authorization header in the request. | The following steps use the [CardDemo](https://github.com/aws-samples/aws-mainframe-modernization-carddemo) application as an example. Testing steps for other modern applications will vary.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/implement-entra-id-authentication-in-aws-blu-age-modernized-mainframe-application.html) | App developer | 
| Test the logout functionality. | Choose **Quit** to log out, and try to access the application again. It should present a new login prompt. | App developer | 

## Troubleshooting
<a name="implement-entra-id-authentication-in-aws-blu-age-modernized-mainframe-application-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| The token issued by Microsoft Entra ID isn’t compatible with Spring Boot OAuth 2.0 security. | For a resolution to the issue, see [Microsoft Entra ID OAuth Flow](https://authguidance.com/azure-ad-troubleshooting/) on the OAuth blog. | 
| General token-related questions. | To decode and view the contents of a JWT token, use the [https://jwt.io/](https://jwt.io/) website. | 
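
As an offline alternative to the jwt.io website, you can decode a token's payload locally. A JWT is three base64url-encoded segments joined by dots, and the middle segment carries the claims. The following sketch decodes that segment without validating the signature, so use it for inspection only:

```python
import base64
import json

def decode_jwt_payload(token):
    """Return the claims from a JWT's payload segment (no signature verification)."""
    payload_b64 = token.split(".")[1]
    # Restore the base64 padding that JWT encoding strips
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))
```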

## Related resources
<a name="implement-entra-id-authentication-in-aws-blu-age-modernized-mainframe-application-resources"></a>
+ For information about refactoring your application by using AWS Blu Age, see the [AWS Mainframe Modernization documentation](https://docs.aws.amazon.com/m2/latest/userguide/refactoring-m2.html).
+ To understand how OAuth 2.0 works, see the [OAuth 2.0 website](https://oauth.net/2/).
+ For an overview of the Microsoft Authentication Library (MSAL), see the [Microsoft Entra documentation](https://learn.microsoft.com/en-us/azure/active-directory/develop/msal-overview).
+ For information about user profiles on an AS/400 system, see the [IBM i (AS400) tutorial](https://www.go4as400.com/subsystem-jobs-user-profile-in-as400/jobs.aspx?cid=14).
+ For the OAuth 2.0 and OpenID Connect (OIDC) authentication flow in the Microsoft identity platform, see the [Microsoft Entra documentation](https://learn.microsoft.com/en-us/entra/identity-platform/v2-protocols).

# Integrate Stonebranch Universal Controller with AWS Mainframe Modernization
<a name="integrate-stonebranch-universal-controller-with-aws-mainframe-modernization"></a>

*Vaidy Sankaran and Pablo Alonso Prieto, Amazon Web Services*

*Robert Lemieux and Huseyin Gomleksizoglu, Stonebranch*

## Summary
<a name="integrate-stonebranch-universal-controller-with-aws-mainframe-modernization-summary"></a>

Note: AWS Mainframe Modernization Service (Managed Runtime Environment experience) is no longer open to new customers. For capabilities similar to AWS Mainframe Modernization Service (Managed Runtime Environment experience), explore AWS Mainframe Modernization Service (Self-Managed Experience). Existing customers can continue to use the service as normal. For more information, see [AWS Mainframe Modernization availability change](https://docs.aws.amazon.com/m2/latest/userguide/mainframe-modernization-availability-change.html).

This pattern explains how to integrate [Stonebranch Universal Automation Center (UAC) workload orchestration](https://www.stonebranch.com/stonebranch-platform/universal-automation-center) with [Amazon Web Services (AWS) Mainframe Modernization service](https://aws.amazon.com/mainframe-modernization/). AWS Mainframe Modernization service migrates and modernizes mainframe applications to the AWS Cloud. It offers two patterns: [AWS Mainframe Modernization Replatform](https://aws.amazon.com/mainframe-modernization/patterns/replatform/) with Micro Focus Enterprise technology and [AWS Mainframe Modernization Automated Refactor](https://aws.amazon.com/mainframe-modernization/patterns/refactor/?mainframe-blogs.sort-by=item.additionalFields.createdDate&mainframe-blogs.sort-order=desc) with AWS Blu Age.  

Stonebranch UAC is a real-time IT automation and orchestration platform. UAC is designed to automate and orchestrate jobs, activities, and workflows across hybrid IT systems, from on-premises to AWS. Enterprise clients using mainframe systems are transitioning to cloud-centric modernized infrastructures and applications. Stonebranch’s tools and professional services facilitate the migration of existing schedulers and automation capabilities to the AWS Cloud.

When you migrate or modernize your mainframe programs to the AWS Cloud using AWS Mainframe Modernization service, you can use this integration to automate batch scheduling, increase agility, improve maintenance, and decrease costs.

This pattern provides instructions for integrating [Stonebranch scheduler](https://www.stonebranch.com/) with mainframe applications migrated to the AWS Mainframe Modernization service Micro Focus Enterprise runtime. This pattern is for solutions architects, developers, consultants, migration specialists, and others working in migrations, modernizations, operations, or DevOps.

**Targeted outcome**

This pattern focuses on providing the following target outcomes:
+ Schedule, automate, and run mainframe batch jobs in AWS Mainframe Modernization service (Micro Focus runtime) from Stonebranch Universal Controller.
+ Monitor the application’s batch processes from Stonebranch Universal Controller.
+ Start, restart, rerun, or stop batch processes automatically or manually from Stonebranch Universal Controller.
+ Retrieve the results of the AWS Mainframe Modernization batch processes.
+ Capture the [Amazon CloudWatch](https://aws.amazon.com/cloudwatch/) logs of the batch jobs in Stonebranch Universal Controller.
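
The last outcome, capturing CloudWatch logs of the batch jobs, amounts to reading the jobs' log groups. As a rough, hypothetical illustration of the underlying API call (not the Stonebranch integration's actual code), the following Python sketch pulls recent events from a log group; the log group name is a placeholder, and result pagination is omitted for brevity.

```python
import time

def fetch_recent_log_messages(logs_client, log_group, minutes=60):
    """Return messages written to a CloudWatch Logs group in the last N minutes."""
    start_ms = int((time.time() - minutes * 60) * 1000)
    response = logs_client.filter_log_events(
        logGroupName=log_group, startTime=start_ms
    )
    # Only the first page of results; follow nextToken for complete output.
    return [event["message"] for event in response["events"]]

# Usage (requires AWS credentials; the log group name is a placeholder):
#   import boto3
#   for line in fetch_recent_log_messages(boto3.client("logs"), "/m2/<app-id>/batch"):
#       print(line)
```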

## Prerequisites and limitations
<a name="integrate-stonebranch-universal-controller-with-aws-mainframe-modernization-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ A Micro Focus [Bankdemo](https://d1vi4vxke6c2hu.cloudfront.net/demo/bankdemo_runtime.zip) application with job control language (JCL) files, and a batch process deployed in an AWS Mainframe Modernization service (Micro Focus runtime) environment
+ Basic knowledge of how to build and deploy a mainframe application that runs on Micro Focus [Enterprise Server](https://www.microfocus.com/media/data-sheet/enterprise_server_ds.pdf)
+ Basic knowledge of Stonebranch Universal Controller
+ Stonebranch trial license (contact [Stonebranch](https://www.stonebranch.com/))
+ Windows or Linux Amazon Elastic Compute Cloud (Amazon EC2) instances (for example, xlarge) with a minimum of four cores, 8 GB memory, and 2 GB disk space
+ Apache Tomcat version 8.5.x or 9.0.x
+ Oracle Java Runtime Environment (JRE) or OpenJDK version 8 or 11
+ [Amazon Aurora MySQL–Compatible Edition](https://aws.amazon.com/rds/aurora/)
+ [Amazon Simple Storage Service (Amazon S3)](https://aws.amazon.com/s3/) bucket for export repository
+ [Amazon Elastic File System (Amazon EFS)](https://aws.amazon.com/efs/) for Stonebranch Universal Message Service (OMS) agent connections for high availability (HA)
+ Stonebranch Universal Controller 7.2 and Universal Agent 7.2 installation files
+ AWS Mainframe Modernization [task scheduling template](https://github.com/aws-samples/aws-mainframe-modernization-stonebranch-integration/releases) (latest released version of the .zip file)

**Limitations**
+ The product and solution have been tested and validated for compatibility only with OpenJDK 8 and 11.
+ The [aws-mainframe-modernization-stonebranch-integration](https://github.com/aws-samples/aws-mainframe-modernization-stonebranch-integration/releases) task scheduling template will work only with AWS Mainframe Modernization service.
+ This task scheduling template works only with the Unix, Linux, or Windows editions of Stonebranch agents.
+ Some AWS services aren’t available in all AWS Regions. For Region availability, see [AWS services by Region](https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/). For specific endpoints, see the [Service endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/aws-service-information.html) page, and choose the link for the service.

## Architecture
<a name="integrate-stonebranch-universal-controller-with-aws-mainframe-modernization-architecture"></a>

**Target state architecture**

The following diagram shows the example AWS environment that is required for this pilot.

![\[Stonebranch UAC interacting with AWS Mainframe Modernization environment.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/01c6f9fa-87e6-459a-b694-5e03dd7f7952/images/4a7bea37-0a5b-4663-902b-9b051e92f0cb.png)


1. Stonebranch Universal Automation Center (UAC) includes two main components: Universal Controller and Universal Agents. Stonebranch OMS is used as a message bus between the controller and individual agents.

1. Stonebranch UAC Database is used by Universal Controller. The database can be MySQL, Microsoft SQL Server, Oracle, or Aurora MySQL–Compatible.

1. AWS Mainframe Modernization service – Micro Focus runtime environment with the [BankDemo application deployed](https://aws.amazon.com/blogs/aws/modernize-your-mainframe-applications-deploy-them-in-the-cloud/). The BankDemo application files will be stored in an S3 bucket. This bucket also contains the mainframe JCL files.

1. Stonebranch UAC can run the following functions for the batch run:

   1. Start a batch job using the JCL file name that exists in the S3 bucket linked to the AWS Mainframe Modernization service.

   1. Get the status of the batch job run.

   1. Wait until the batch job run is completed.

   1. Fetch logs of the batch job run.

   1. Rerun the failed batch jobs.

   1. Cancel the batch job while the job is running.

1. Stonebranch UAC can run the following functions for the application:

   1. Start Application

   1. Get Status of the Application

   1. Wait until the Application is started or stopped

   1. Stop Application

   1. Fetch Logs of Application operation
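
The batch and application operations listed above map onto AWS Mainframe Modernization API actions that the Stonebranch integration drives. As an illustrative sketch (not the integration's actual code), the following Python snippet starts a batch job by JCL file name and polls it until completion; the application ID and JCL file name are placeholders, and the snippet assumes the caller supplies a boto3 `m2` client.

```python
import time

def run_batch_job(m2_client, application_id, jcl_name, poll_seconds=30):
    """Start a batch job by JCL file name and poll until it reaches a final state."""
    response = m2_client.start_batch_job(
        applicationId=application_id,
        batchJobIdentifier={"fileBatchJobIdentifier": {"fileName": jcl_name}},
    )
    execution_id = response["executionId"]
    while True:
        execution = m2_client.get_batch_job_execution(
            applicationId=application_id, executionId=execution_id
        )
        if execution["status"] in ("Succeeded", "Failed", "Cancelled"):
            return execution["status"]
        time.sleep(poll_seconds)

# Usage (requires AWS credentials and a deployed application; IDs are placeholders):
#   import boto3
#   m2 = boto3.client("m2")
#   print(run_batch_job(m2, "<application-id>", "BANKDEMO.JCL"))
# A running job can be stopped with m2.cancel_batch_job_execution(), and the
# application itself with m2.start_application() and m2.stop_application().
```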

**Stonebranch jobs conversion**

The following diagram represents Stonebranch’s job conversion process during the modernization journey. It describes how job schedules and task definitions are converted into a compatible format that can run AWS Mainframe Modernization batch tasks.

![\[Process from the mainframe to conversion to job scheduler on Amazon EC2 with JCL files in Amazon S3.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/01c6f9fa-87e6-459a-b694-5e03dd7f7952/images/4d2ed890-f143-455e-8180-4d967b71c494.png)


1. For the conversion process, the job definitions are exported from the existing mainframe system.

1. JCL files can be uploaded to the S3 bucket for the Mainframe Modernization application so that these JCL files can be deployed by the AWS Mainframe Modernization service.

1. The conversion tool converts the exported job definitions to UAC tasks.

1. After all the task definitions and job schedules are created, these objects will be imported to the Universal Controller. The converted tasks then run the processes in the AWS Mainframe Modernization service instead of running them on the mainframe.
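
Step 2 above, staging JCL files in the application's S3 bucket, can be scripted. The following is a minimal sketch, assuming boto3 credentials are configured; the bucket name and key prefix are placeholders that must match the S3 layout your AWS Mainframe Modernization application definition expects.

```python
import pathlib

def upload_jcl_files(s3_client, local_dir, bucket, prefix):
    """Upload every .jcl file in local_dir to s3://bucket/prefix/."""
    uploaded = []
    for path in sorted(pathlib.Path(local_dir).glob("*.jcl")):
        key = f"{prefix}/{path.name}"
        s3_client.upload_file(str(path), bucket, key)
        uploaded.append(key)
    return uploaded

# Usage:
#   import boto3
#   upload_jcl_files(boto3.client("s3"), "./jcl", "<bucket>", "<application>/jcl")
```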

**Stonebranch UAC architecture**

The following architecture diagram represents an active-active-passive model of high availability (HA) Universal Controller. Stonebranch UAC is deployed in multiple Availability Zones to provide high availability and support disaster recovery (DR).

![\[Multi-AZ environment with DR and controllers, Amazon EFS, Aurora, and an S3 bucket for backups.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/01c6f9fa-87e6-459a-b694-5e03dd7f7952/images/3f94b855-c146-4fcb-902c-5d343438a558.png)


*Universal Controller*

Two Linux servers are provisioned as Universal Controllers. Both connect to the same database endpoint. Each server hosts a Universal Controller application and OMS. The most recent version of Universal Controller that is available at provisioning time is used.

The Universal Controllers are deployed in the Tomcat webapp as the document ROOT and are served on port 80. This deployment eases the configuration of the frontend load balancer.

HTTP over TLS (HTTPS) is enabled by using the Stonebranch wildcard certificate (for example, `https://customer.stonebranch.cloud`). This secures communication between the browser and the application.

*OMS*

A Universal Agent and OMS (Opswise Message Service) reside on each Universal Controller server. All deployed Universal Agents from the customer end are set up to connect to both OMS services. OMS acts as a common messaging service between the Universal Agents and the Universal Controller.

Amazon EFS mounts a spool directory on each server. OMS uses this shared spool directory to keep the connection and task information from controllers and agents. OMS works in a high-availability mode. If the active OMS goes down, the passive OMS has access to all the data, and it resumes active operations automatically. Universal Agents detect this change and automatically connect to the new active OMS.

*Database*

Amazon Relational Database Service (Amazon RDS) hosts the UAC database, with Amazon Aurora MySQL–Compatible as its engine. Amazon RDS helps manage the database and provides scheduled backups at regular intervals. Both Universal Controller instances connect to the same database endpoint.

*Load balancer*

An Application Load Balancer is set up for each instance. The load balancer directs traffic to the active controller at any given moment. Your instance domain names point to the respective load balancer endpoints.

*URLs*

Each of your instances has a URL, as shown in the following example.


| Environment | Instance | 
| --- | --- | 
| **Production** | `customer.stonebranch.cloud` | 
| **Development (non-production)** | `customerdev.stonebranch.cloud` | 
| **Testing (non-production)** | `customertest.stonebranch.cloud` | 

**Note**  
  Non-production instance names can be set based on your needs.

*High availability*

High availability (HA) is the ability of a system to operate continuously without failure for a designated period of time. Such failures include, but are not limited to, storage, server communication response delays caused by CPU or memory issues, and networking connectivity.

To meet HA requirements:
+ All EC2 instances, databases, and other configurations are mirrored across two separate Availability Zones within the same AWS Region.
+ The controller is provisioned through an Amazon Machine Image (AMI) on two Linux servers in the two Availability Zones. For example, if you are provisioned in the Europe eu-west-1 Region, you have a Universal Controller in Availability Zone eu-west-1a and Availability Zone eu-west-1c.
+ No jobs are allowed to run directly on the application servers and no data is allowed to be stored on these servers.
+ The Application Load Balancer runs health checks on each Universal Controller to identify the active one and direct traffic to it. In the event that one server incurs issues, the load balancer automatically promotes the passive Universal Controller to an active state. The load balancer then identifies the new active Universal Controller instance from the health checks and starts directing traffic. The failover happens within four minutes with no job loss, and the frontend URL remains the same.
+ The Aurora MySQL–Compatible database service stores Universal Controller data. For production environments, a database cluster is built with two database instances in two different Availability Zones within a single AWS Region. Both Universal Controllers use a Java Database Connectivity (JDBC) interface that points to a single database cluster endpoint. In the event that one database instance incurs issues, the database cluster endpoint dynamically points to the healthy instance. No manual intervention is required.

*Backup and purge*

Stonebranch Universal Controller is set to back up and purge old data following the schedule shown in the table.


| Type | Schedule | 
| --- | --- | 
| **Activity** | 7 days | 
| **Audit** | 90 days | 
| **History** | 60 days | 

Backup data older than the dates shown is exported to .xml format and stored in the file system. After the backup process is complete, older data is purged from the database and archived in an S3 bucket for up to one year for production instances.

You can adjust this schedule in your Universal Controller interface. However, increasing these timeframes might cause longer downtime during maintenance.

## Tools
<a name="integrate-stonebranch-universal-controller-with-aws-mainframe-modernization-tools"></a>

**AWS services**
+ [AWS Mainframe Modernization](https://docs.aws.amazon.com/m2/latest/userguide/what-is-m2.html) is an AWS cloud-native platform that helps you modernize your mainframe applications to AWS managed runtime environments. It provides tools and resources to help you plan and implement migration and modernization.
+ [Amazon Elastic Block Store (Amazon EBS)](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonEBS.html) provides block-level storage volumes for use with Amazon EC2 instances.
+ [Amazon Elastic File System (Amazon EFS)](https://docs.aws.amazon.com/efs/latest/ug/whatisefs.html) helps you create and configure shared file systems in the AWS Cloud.
+ [Amazon Relational Database Service (Amazon RDS)](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Welcome.html) helps you set up, operate, and scale a relational database in the AWS Cloud. This pattern uses Amazon Aurora MySQL–Compatible Edition.
+ [Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) is a cloud-based object storage service that helps you store, protect, and retrieve any amount of data.
+ [Elastic Load Balancing (ELB)](https://docs.aws.amazon.com/elasticloadbalancing/latest/userguide/what-is-load-balancing.html) distributes incoming application or network traffic across multiple targets. For example, you can distribute traffic across Amazon EC2 instances, containers, and IP addresses in one or more Availability Zones. This pattern uses an Application Load Balancer.

**Stonebranch**
+ [Universal Automation Center (UAC)](https://stonebranchdocs.atlassian.net/wiki/spaces/SD/pages/239239169/Universal+Automation+Center) is a system of enterprise workload automation products. This pattern uses the following UAC components:
  + [Universal Controller](https://www.stonebranch.com/documentation-universal-controller), a Java web application running in a Tomcat web container, is the enterprise job scheduler and workload automation broker solution of Universal Automation Center. The Controller presents a user interface for creating, monitoring, and configuring Controller information; handles the scheduling logic; processes all messages to and from Universal Agents; and synchronizes much of the high availability operation of Universal Automation Center.
  + [Universal Agent](https://www.stonebranch.com/documentation-universal-agent) is a vendor-independent scheduling agent that collaborates with existing job schedulers on all major computing platforms, both legacy and distributed. All schedulers that run on z/Series, i/Series, Unix, Linux, or Windows are supported.
+ [Stonebranch aws-mainframe-modernization-stonebranch-integration AWS Mainframe Modernization Universal Extension](https://github.com/aws-samples/aws-mainframe-modernization-stonebranch-integration/releases) is the integration template to run, monitor, and rerun batch jobs in the AWS Mainframe Modernization platform.

**Code**

The code for this pattern is available in the [aws-mainframe-modernization-stonebranch-integration](https://github.com/aws-samples/aws-mainframe-modernization-stonebranch-integration/releases/) GitHub repository.

## Epics
<a name="integrate-stonebranch-universal-controller-with-aws-mainframe-modernization-epics"></a>

### Install Universal Controller and Universal Agent on Amazon EC2
<a name="install-universal-controller-and-universal-agent-on-amazon-ec2"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Download the installation files. | Download the installation files from the Stonebranch servers. To get the installation files, contact Stonebranch. | Cloud architect | 
| Launch the EC2 instance. | You will need about 3 GB of extra space for the Universal Controller and Universal Agent installations, so provide at least 30 GB of disk space for the instance. Add port 8080 to the security group so that the instance is accessible. | Cloud architect | 
| Check prerequisites. | Before the installation, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/integrate-stonebranch-universal-controller-with-aws-mainframe-modernization.html) | Cloud administrator, Linux administrator | 
| Install Universal Controller. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/integrate-stonebranch-universal-controller-with-aws-mainframe-modernization.html) | Cloud architect, Linux administrator | 
| Install Universal Agent. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/integrate-stonebranch-universal-controller-with-aws-mainframe-modernization.html) | Cloud administrator, Linux administrator | 
| Add OMS to Universal Controller. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/integrate-stonebranch-universal-controller-with-aws-mainframe-modernization.html) | Universal Controller administrator | 

### Import AWS Mainframe Modernization Universal Extension and create a task
<a name="import-aws-mainframe-modernization-universal-extension-and-create-a-task"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Import the Integration Template. | For this step, you need the [AWS Mainframe Modernization Universal Extension](https://github.com/aws-samples/aws-mainframe-modernization-stonebranch-integration/releases). Make sure that the latest released version of the .zip file is downloaded.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/integrate-stonebranch-universal-controller-with-aws-mainframe-modernization.html) After the Integration Template is imported, you will see **AWS Mainframe Modernization Tasks** under **Available Services**. | Universal Controller administrator | 
| Enable resolvable credentials. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/integrate-stonebranch-universal-controller-with-aws-mainframe-modernization.html) | Universal Controller administrator | 
| Launch the task. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/integrate-stonebranch-universal-controller-with-aws-mainframe-modernization.html) | Universal Controller administrator | 

### Test starting a batch job
<a name="test-starting-a-batch-job"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a task for the batch job. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/integrate-stonebranch-universal-controller-with-aws-mainframe-modernization.html) | Universal Controller administrator | 
| Launch the task. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/integrate-stonebranch-universal-controller-with-aws-mainframe-modernization.html) | Universal Controller administrator | 
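
Under the hood, the batch-job task drives the AWS Mainframe Modernization (`m2`) `StartBatchJob` API. The following sketch shows the equivalent direct call with boto3; the application ID and JCL file name are placeholder values, not values from this pattern:

```python
def build_batch_job_request(application_id: str, jcl_name: str) -> dict:
    """Build the StartBatchJob request payload. Both arguments are placeholders."""
    return {
        "applicationId": application_id,
        "batchJobIdentifier": {
            "fileBatchJobIdentifier": {"fileName": jcl_name},
        },
    }


def start_batch_job(request: dict) -> str:
    """Submit the batch job and return its execution ID (requires AWS credentials)."""
    import boto3  # imported here so the payload helper above stays dependency-free

    m2 = boto3.client("m2")  # AWS Mainframe Modernization service
    return m2.start_batch_job(**request)["executionId"]
```

You can poll the returned execution ID with `get_batch_job_execution` to track the job status, which is the state that the Universal Controller task instance ultimately reflects.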

### Create a workflow for multiple tasks
<a name="create-a-workflow-for-multiple-tasks"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Copy the tasks. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/integrate-stonebranch-universal-controller-with-aws-mainframe-modernization.html) | Universal Controller administrator | 
| Update tasks. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/integrate-stonebranch-universal-controller-with-aws-mainframe-modernization.html) | Universal Controller administrator | 
| Create a workflow. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/integrate-stonebranch-universal-controller-with-aws-mainframe-modernization.html) | Universal Controller administrator | 
| Check the status of the workflow. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/integrate-stonebranch-universal-controller-with-aws-mainframe-modernization.html) | Universal Controller administrator | 

### Troubleshoot failed batch jobs and rerun
<a name="troubleshoot-failed-batch-jobs-and-rerun"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Fix the failed task and rerun. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/integrate-stonebranch-universal-controller-with-aws-mainframe-modernization.html) | Universal Controller administrator | 

### Create Start Application and Stop Application tasks
<a name="create-start-application-and-stop-application-tasks"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the Start Application action. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/integrate-stonebranch-universal-controller-with-aws-mainframe-modernization.html) | Universal Controller administrator | 

### Create a Cancel Batch Execution task
<a name="create-a-cancel-batch-execution-task"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the Cancel Batch action. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/integrate-stonebranch-universal-controller-with-aws-mainframe-modernization.html) |  | 

## Related resources
<a name="integrate-stonebranch-universal-controller-with-aws-mainframe-modernization-resources"></a>
+ [Universal Controller](https://stonebranchdocs.atlassian.net/wiki/spaces/UC77/overview)
+ [Universal Agent](https://stonebranchdocs.atlassian.net/wiki/spaces/UA77/overview)
+ [LDAP Settings](https://stonebranchdocs.atlassian.net/wiki/spaces/UC77/pages/794552355/LDAP+Settings)
+ [SAML Single Sign-On](https://stonebranchdocs.atlassian.net/wiki/spaces/UC77/pages/794553130/SAML+Single+Sign-On)
+ [Xpress Conversion Tool](https://www.stonebranch.com/resources/xpress-conversion-windows)

## Additional information
<a name="integrate-stonebranch-universal-controller-with-aws-mainframe-modernization-additional"></a>

**Icons in the Workflow Editor**

![\[RUNHELLO task at the top, FOOBAR in the middle, and the remaining tasks at the third level.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/01c6f9fa-87e6-459a-b694-5e03dd7f7952/images/837430ee-3159-4fe2-8e17-65168294ef1e.png)


**All tasks connected**

![\[RUNHELLO connects to FOOBAR, which connects to the three remaining tasks.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/01c6f9fa-87e6-459a-b694-5e03dd7f7952/images/fe483348-9a6f-450b-87e6-ceae6b2bdaad.png)


**Workflow status**

![\[FOOBAR task fails and the remaining three tasks are waiting.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/01c6f9fa-87e6-459a-b694-5e03dd7f7952/images/5ea4e239-fbbe-4fa4-9ffa-b7a9443b7975.png)


# Migrate and replicate VSAM files to Amazon RDS or Amazon MSK using Connect from Precisely
<a name="migrate-and-replicate-vsam-files-to-amazon-rds-or-amazon-msk-using-connect-from-precisely"></a>

*Prachi Khanna and Boopathy GOPALSAMY, Amazon Web Services*

## Summary
<a name="migrate-and-replicate-vsam-files-to-amazon-rds-or-amazon-msk-using-connect-from-precisely-summary"></a>

This pattern shows you how to migrate and replicate Virtual Storage Access Method (VSAM) files from a mainframe to a target environment in the AWS Cloud by using [Connect](https://www.precisely.com/product/precisely-connect/connect) from Precisely. The target environments covered in this pattern include Amazon Relational Database Service (Amazon RDS) and Amazon Managed Streaming for Apache Kafka (Amazon MSK). Connect uses [change data capture (CDC)](https://www.precisely.com/resource-center/productsheets/change-data-capture-with-connect) to continuously monitor updates to your source VSAM files and then transfer these updates to one or more of your AWS target environments. You can use this pattern to meet your application modernization or data analytics goals. For example, you can use Connect to migrate your VSAM application files to the AWS Cloud with low latency, or migrate your VSAM data to an AWS data warehouse or data lake for analytics that can tolerate synchronization latencies that are higher than required for application modernization.

## Prerequisites and limitations
<a name="migrate-and-replicate-vsam-files-to-amazon-rds-or-amazon-msk-using-connect-from-precisely-prereqs"></a>

**Prerequisites**
+ [IBM z/OS V2R1](https://www-40.ibm.com/servers/resourcelink/svc00100.nsf/pages/zosv2r1-pdf-download?OpenDocument) or later
+ [CICS Transaction Server for z/OS (CICS TS) V5.1](https://www.ibm.com/support/pages/cics-transaction-server-zos-51-detailed-system-requirements) or later (CICS/VSAM data capture)
+ [IBM MQ 8.0](https://www.ibm.com/support/pages/downloading-ibm-mq-80) or later
+ Compliance with [z/OS security requirements](https://help.precisely.com/r/Connect-CDC-SQData/4.1.43/en-US/Connect-CDC-SQData-Help/Installation/Install-Connect-CDC-SQData-on-zOS/Prerequisites-for-z/OS/Security-authorization-requirements-for-z/OS) (for example, APF authorization for SQData load libraries)
+ VSAM recovery logs turned on
+ (Optional) [CICS VSAM Recovery Version (CICS VR)](https://www.ibm.com/docs/en/cics-vr/5.1?topic=started-introducing-cics-vr) to automatically capture CDC logs
+ An active AWS account
+ An [Amazon Virtual Private Cloud (VPC)](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-getting-started.html) with a subnet that’s reachable by your legacy platform
+ A VSAM Connect license from Precisely

**Limitations**
+ Connect doesn’t support automatic target table creation based on source VSAM schemas or copybooks. You must define the target table structure before the first run.
+ For non-streaming targets such as Amazon RDS, you must specify the conversion source to target mapping in the Apply Engine configuration script.
+ Logging, monitoring, and alerting functions are implemented through APIs and require external components (such as Amazon CloudWatch) to be fully operational.

**Product versions**
+ SQData 4.0.43 for z/OS
+ SQData 4.0.43 for the Amazon Linux Amazon Machine Image (AMI) on Amazon Elastic Compute Cloud (Amazon EC2)

## Architecture
<a name="migrate-and-replicate-vsam-files-to-amazon-rds-or-amazon-msk-using-connect-from-precisely-architecture"></a>

**Source technology stack**
+ Job Control Language (JCL)
+ z/OS Unix shell and Interactive System Productivity Facility (ISPF)
+ VSAM utilities (IDCAMS)

**Target technology stack**
+ Amazon EC2
+ Amazon MSK
+ Amazon RDS
+ Amazon VPC

**Target architecture**

*Migrating VSAM files to Amazon RDS*

The following diagram shows how to migrate VSAM files to a relational database, such as Amazon RDS, in real time or near real time by using the CDC agent/publisher in the source environment (on-premises mainframe) and the [Apply Engine](https://help.precisely.com/r/Connect-CDC-SQData/4.1.43/en-US/Connect-CDC-SQData-Help/Source-and-Target-Configuration/Apply-engine) in the target environment (AWS Cloud).

![\[Diagram showing VSAM file migration from on-premises mainframe to AWS Cloud using CDC and Apply Engine.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/4ee183bd-1c0d-449d-8cdc-eb6e2c41a695/images/47cefbde-e0c8-4c36-ba48-cccc2c443074.png)


The diagram shows the following batch workflow:

1. Connect captures changes to a file by comparing VSAM files against backup copies, and then sends the changes to the logstream.

1. The publisher consumes data from the system logstream.

1. The publisher communicates captured data changes to a target engine through TCP/IP. The Controller Daemon authenticates communication between the source and target environments.

1. The Apply Engine in the target environment receives the changes from the Publisher agent and applies them to a relational or non-relational database.
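
The source-to-target mapping itself is declared in the Apply Engine configuration script, not written by hand in code. As an illustration only of what applying a change means, the sketch below maps a CDC record by its change op code ('I', 'U', 'D') onto parameterized SQL for a hypothetical ACCOUNT table (the table and column names are invented):

```python
def cdc_to_sql(change: dict) -> tuple:
    """Map a CDC change record to a parameterized SQL statement.

    `change` carries the op code plus before/after images of the record.
    The account table and its columns are hypothetical examples.
    """
    op = change["op"]
    if op == "I":
        return (
            "INSERT INTO account (account_id, balance) VALUES (%s, %s)",
            (change["after"]["account_id"], change["after"]["balance"]),
        )
    if op == "U":
        return (
            "UPDATE account SET balance = %s WHERE account_id = %s",
            (change["after"]["balance"], change["after"]["account_id"]),
        )
    if op == "D":
        return (
            "DELETE FROM account WHERE account_id = %s",
            (change["before"]["account_id"],),
        )
    raise ValueError(f"unknown change op: {op}")
```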

The diagram shows the following online workflow:

1. Connect captures changes in the online file by using a log replicate and then streams captured changes to a logstream.

1. The publisher consumes data from the system logstream.

1. The publisher communicates captured data changes to the target engine through TCP/IP. The Controller Daemon authenticates communication between the source and target environments.

1. The Apply Engine in the target environment receives the changes from the Publisher agent and then applies them to a relational or non-relational database.

*Migrating VSAM files to Amazon MSK*

The following diagram shows how to stream VSAM data structures from a mainframe to Amazon MSK in high-performance mode and automatically generate JSON or AVRO schema conversions that integrate with Amazon MSK.

![\[Diagram showing data flow from on-premises mainframe to AWS Cloud services via Amazon VPC.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/4ee183bd-1c0d-449d-8cdc-eb6e2c41a695/images/13eb27ad-c0d2-489b-91e1-5b2a729fb8dd.png)


The diagram shows the following batch workflow:

1. Connect captures changes to a file by using CICS VR or by comparing VSAM files from backup files to identify changes. Captured changes are sent to the logstream.

1. The publisher consumes data from the system logstream.

1. The publisher communicates captured data changes to the target engine through TCP/IP. The Controller Daemon authenticates communication between the source and target environments.

1. The Replicator Engine that’s operating in parallel processing mode splits the data to a unit of work cache.

1. Worker threads capture the data from the cache.

1. Data is published to Amazon MSK topics from the worker threads.

1. Users apply changes from Amazon MSK to targets such as Amazon DynamoDB, Amazon Simple Storage Service (Amazon S3), or Amazon OpenSearch Service by using [connectors](https://docs.aws.amazon.com/msk/latest/developerguide/msk-connect-connectors.html).

The diagram shows the following online workflow:

1. Changes in the online file are captured by using a log replicate. Captured changes are streamed to the logstream.

1. The publisher consumes data from the system logstream.

1. The publisher communicates captured data changes to the target engine through TCP/IP. The Controller Daemon authenticates communication between the source and target environments.

1. The Replicator Engine that’s operating in parallel processing mode splits the data to a unit of work cache.

1. Worker threads capture the data from the cache.

1. Data is published to Amazon MSK topics from the worker threads.

1. Users apply changes from Amazon MSK to targets such as DynamoDB, Amazon S3, or OpenSearch Service by using [connectors](https://docs.aws.amazon.com/msk/latest/developerguide/msk-connect-connectors.html).
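
Downstream consumers read the replicated records from the MSK topic as JSON documents. The field names below are hypothetical (real payloads follow your copybook layout), but the sketch shows how a connector-style consumer might turn one message into a DynamoDB-style write request:

```python
import json


def to_dynamodb_request(message_value: bytes) -> dict:
    """Convert one JSON CDC message into a DynamoDB batch-write request entry.

    `change_op` and `account_id` are hypothetical field names used for
    illustration; deletes become DeleteRequest, everything else a PutRequest.
    """
    record = json.loads(message_value)
    if record["change_op"] == "D":
        return {"DeleteRequest": {"Key": {"account_id": record["account_id"]}}}
    return {"PutRequest": {"Item": record}}
```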

## Tools
<a name="migrate-and-replicate-vsam-files-to-amazon-rds-or-amazon-msk-using-connect-from-precisely-tools"></a>
+ [Amazon Managed Streaming for Apache Kafka (Amazon MSK)](https://docs.aws.amazon.com/msk/latest/developerguide/what-is-msk.html) is a fully managed service that helps you build and run applications that use Apache Kafka to process streaming data.
+ [Amazon Relational Database Service (Amazon RDS)](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Welcome.html) helps you set up, operate, and scale a relational database in the AWS Cloud.

## Epics
<a name="migrate-and-replicate-vsam-files-to-amazon-rds-or-amazon-msk-using-connect-from-precisely-epics"></a>

### Prepare the source environment (mainframe)
<a name="prepare-the-source-environment-mainframe"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Install Connect CDC 4.1. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-and-replicate-vsam-files-to-amazon-rds-or-amazon-msk-using-connect-from-precisely.html) | IBM Mainframe Developer/Admin | 
| Set up the zFS directory. | To set up a zFS directory, follow the instructions from [zFS variable directories](https://help.precisely.com/r/Connect-CDC-SQData/4.1.43/en-US/Connect-CDC-SQData-Help/Installation/Install-Connect-CDC-SQData-on-zOS/Prerequisites-for-z/OS/Security-authorization-requirements-for-z/OS/zFS-variable-directories) in the Precisely documentation. Controller Daemon and Capture/Publisher agent configurations are stored in the z/OS UNIX System Services file system (referred to as zFS). The Controller Daemon, Capture, Storage, and Publisher agents require a predefined zFS directory structure for storing a small number of files. | IBM Mainframe Developer/Admin | 
| Configure TCP/IP ports. | To configure TCP/IP ports, follow the instructions from [TCP/IP ports](https://help.precisely.com/r/Connect-CDC-SQData/4.1.43/en-US/Connect-CDC-SQData-Help/Installation/Install-Connect-CDC-SQData-on-UNIX/Prerequisites-for-UNIX/Security-authorization-requirements-for-UNIX/TCP/IP-ports) in the Precisely documentation.The Controller Daemon requires TCP/IP ports on source systems. The ports are referenced by the engines on the target systems (where captured change data is processed). | IBM Mainframe Developer/Admin | 
| Create a z/OS logstream. | To create a [z/OS logstream](https://www.ibm.com/docs/en/was/8.5.5?topic=SSEQTP_8.5.5/com.ibm.websphere.installation.zseries.doc/ae/cins_logstrm.html), follow the instructions from [Create z/OS system logStreams](https://help.precisely.com/r/Connect-CDC-SQData/4.1.43/en-US/Connect-CDC-SQData-Help/Source-and-Target-Configuration/Setup-and-configure-sources/IMS-z/OS/IMS-TM-EXIT-capture/Prepare-environment/Create-z/OS-system-logStreams?tocId=wy6243SXlIiEczwR8JE8WA) in the Precisely documentation. Connect uses the logstream to capture and stream data between your source environment and target environment during migration. The same topic includes an example JCL that creates a z/OS logstream. | IBM Mainframe Developer | 
| Identify and authorize IDs for zFS users and started tasks. | Use RACF to grant access to the OMVS zFS file system. For an example JCL, see [Identify and authorize zFS user and started task IDs](https://help.precisely.com/r/Connect-CDC-SQData/4.1.43/en-US/Connect-CDC-SQData-Help/Source-and-Target-Configuration/Setup-and-configure-sources/IMS-z/OS/IMS-log-reader-capture/Prepare-environment/Identify-and-authorize-zFS-user-and-started-task-IDs?tocId=MrBXpFu~N0iAy~8VTrH0tQ) in the Precisely documentation. | IBM Mainframe Developer/Admin | 
| Generate z/OS public/private keys and the authorized key file. | Run the JCL to generate the key pair. For an example, see *Key pair example* in the *Additional information* section of this pattern.For instructions, see [Generate z/OS public and private keys and authorized key file](https://help.precisely.com/r/Connect-CDC-SQData/4.1.43/en-US/Connect-CDC-SQData-Help/Source-and-Target-Configuration/Setup-and-configure-sources/Db2-z/OS/Prepare-the-environment/Generate-z/OS-public-and-private-keys-and-authorized-key-file?tocId=fceE77dWT8smZsSaE~FeMQ) in the Precisely documentation. | IBM Mainframe Developer/Admin | 
| Activate the CICS VSAM Log Replicate and attach it to the logstream. | Run the following JCL script:<pre> //STEP1 EXEC PGM=IDCAMS<br /> //SYSPRINT DD SYSOUT=*<br /> //SYSIN DD *<br />   ALTER SQDATA.CICS.FILEA -<br />   LOGSTREAMID(SQDATA.VSAMCDC.LOG1) -<br />   LOGREPLICATE</pre> | IBM Mainframe Developer/Admin | 
| Activate the VSAM File Recovery Log through an FCT. | Modify the File Control Table (FCT) to reflect the following parameter changes:<pre> Configure FCT Parms<br />   CEDA ALT FILE(name) GROUP(groupname)<br />   DSNAME(data set name)<br />   RECOVERY(NONE|BACKOUTONLY|ALL)<br />   FWDRECOVLOG(NO|1–99)<br />   BACKUPTYPE(STATIC|DYNAMIC)<br />   RECOVERY PARAMETERS<br />   RECOVery : None | Backoutonly | All<br />   Fwdrecovlog : No | 1-99<br />   BAckuptype : Static | Dynamic</pre> | IBM Mainframe Developer/Admin | 
| Set up CDCzLog for the Publisher agent. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-and-replicate-vsam-files-to-amazon-rds-or-amazon-msk-using-connect-from-precisely.html) | IBM Mainframe Developer/Admin | 
| Activate the Controller Daemon. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-and-replicate-vsam-files-to-amazon-rds-or-amazon-msk-using-connect-from-precisely.html) | IBM Mainframe Developer/Admin | 
| Activate the publisher. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-and-replicate-vsam-files-to-amazon-rds-or-amazon-msk-using-connect-from-precisely.html) | IBM Mainframe Developer/Admin | 
| Activate the logstream. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-and-replicate-vsam-files-to-amazon-rds-or-amazon-msk-using-connect-from-precisely.html) | IBM Mainframe Developer/Admin | 

### Prepare the target environment (AWS)
<a name="prepare-the-target-environment-aws"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Install Precisely on an EC2 instance. | To install Connect from Precisely on the Amazon Linux AMI for Amazon EC2, follow the instructions from [Install Connect CDC (SQData) on UNIX](https://help.precisely.com/r/Connect-CDC-SQData/4.1.43/en-US/Connect-CDC-SQData-Help/Installation/Install-Connect-CDC-SQData-on-UNIX) in the Precisely documentation. | General AWS | 
| Open TCP/IP ports. | To modify the security group to include the Controller Daemon ports for inbound and outbound access, follow the instructions from [TCP/IP](https://help.precisely.com/r/Connect-CDC-SQData/4.1.43/en-US/Connect-CDC-SQData-Help/Source-and-Target-Configuration/Setup-and-configure-sources/Change-data-capture/Transient-storage-and-publishing/TCP/IP) in the Precisely documentation. | General AWS | 
| Create file directories. | To create file directories, follow the instructions from [Prepare target apply environment](https://help.precisely.com/r/Connect-CDC-SQData/4.1.43/en-US/Connect-CDC-SQData-Help/Source-and-Target-Configuration/Setup-and-configure-targets/Kafka/Prepare-target-apply-environment) in the Precisely documentation. | General AWS | 
| Create the Apply Engine configuration file. | Create the Apply Engine configuration file in the working directory of the Apply Engine. The following example configuration file shows Apache Kafka as the target:<pre>builtin.features=SASL_SCRAM<br />  security.protocol=SASL_SSL<br />  sasl.mechanism=SCRAM-SHA-512<br />  sasl.username=<br />  sasl.password=<br />  metadata.broker.list=</pre>For more information, see [Security](https://kafka.apache.org/documentation/#security) in the Apache Kafka documentation. | General AWS | 
| Create scripts for Apply Engine processing. | Create the scripts for the Apply Engine to process source data and replicate source data to the target. For more information, see [Create an apply engine script](https://help.precisely.com/r/Connect-CDC-SQData/4.1.43/en-US/Connect-CDC-SQData-Help/Source-and-Target-Configuration/Apply-engine/Apply-engine-script-development/Create-an-apply-engine-script) in the Precisely documentation. | General AWS | 
| Run the scripts. | Use the `SQDPARSE` and `SQDENG` commands to run the script. For more information, see [Parse a script for zOS](https://help.precisely.com/r/Connect-CDC-SQData/4.1.43/en-US/Connect-CDC-SQData-Help/Source-and-Target-Configuration/Apply-engine/Apply-engine-script-development/Parse-a-script/Parse-a-script-for-zOS) in the Precisely documentation. | General AWS | 

### Validate the environment
<a name="validate-the-environment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Validate the list of VSAM files and target tables for CDC processing. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-and-replicate-vsam-files-to-amazon-rds-or-amazon-msk-using-connect-from-precisely.html) | General AWS, Mainframe | 
| Verify that the Connect CDC SQData product is linked. | Run a test job and verify that the return code from the job is 0 (successful). Connect CDC SQData Apply Engine status messages should show active connection messages. | General AWS, Mainframe | 

### Run and validate test cases (Batch)
<a name="run-and-validate-test-cases-batch"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Run the batch job in the mainframe. | Run the batch application job using a modified JCL. Include steps in the modified JCL that do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-and-replicate-vsam-files-to-amazon-rds-or-amazon-msk-using-connect-from-precisely.html) | General AWS, Mainframe | 
| Check the logstream. | Check the logstream to confirm that you can see the change data for the completed mainframe batch job. | General AWS, Mainframe | 
| Validate the counts for the source delta changes and target table. | To confirm the records are tallied, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-and-replicate-vsam-files-to-amazon-rds-or-amazon-msk-using-connect-from-precisely.html) | General AWS, Mainframe | 

### Run and validate test cases (Online)
<a name="run-and-validate-test-cases-online"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Run the online transaction in a CICS region. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-and-replicate-vsam-files-to-amazon-rds-or-amazon-msk-using-connect-from-precisely.html) | IBM Mainframe Developer | 
| Check the logstream. | Confirm that the logstream is populated with specific record level changes. | IBM Mainframe Developer | 
| Validate the count in the target database. | Monitor the Apply Engine for record level counts. | Precisely, Linux | 
| Validate the record counts and data records in the target database. | Query the target database to validate the record counts and data records. | General AWS | 
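
The two count-validation tasks lend themselves to scripting. Assuming you have already gathered per-table counts from the Apply Engine report and from `COUNT(*)` queries against the target database (the structure below is illustrative, not part of Connect), a reconciliation helper might look like this:

```python
def reconcile_counts(source_counts: dict, target_counts: dict) -> list:
    """Return (table, expected, actual) tuples for every count mismatch.

    source_counts: per-table record counts from the source delta.
    target_counts: per-table record counts from the target database.
    """
    mismatches = []
    for table, expected in source_counts.items():
        actual = target_counts.get(table, 0)  # a missing table counts as 0 rows
        if actual != expected:
            mismatches.append((table, expected, actual))
    return mismatches
```

An empty result means the source delta changes and the target table tally, which is the success condition for this epic.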

## Related resources
<a name="migrate-and-replicate-vsam-files-to-amazon-rds-or-amazon-msk-using-connect-from-precisely-resources"></a>
+ [VSAM z/OS](https://help.precisely.com/r/Connect-CDC-SQData/4.1.43/en-US/Connect-CDC-SQData-Help/Source-and-Target-Configuration/Setup-and-configure-sources/VSAM-z/OS) (Precisely documentation)
+ [Apply engine](https://help.precisely.com/r/Connect-CDC-SQData/4.1.43/en-US/Connect-CDC-SQData-Help/Source-and-Target-Configuration/Apply-engine) (Precisely documentation)
+ [Replicator engine](https://help.precisely.com/r/Connect-CDC-SQData/4.1.43/en-US/Connect-CDC-SQData-Help/Source-and-Target-Configuration/Replicator-engine) (Precisely documentation)
+ [The log stream](https://www.ibm.com/docs/en/zos/2.3.0?topic=logger-log-stream) (IBM documentation)

## Additional information
<a name="migrate-and-replicate-vsam-files-to-amazon-rds-or-amazon-msk-using-connect-from-precisely-additional"></a>

**Configuration file example**

This is an example configuration file for a logstream where the source environment is a mainframe and the target environment is Amazon MSK:

```
 
  -- JOBNAME -- PASS THE SUBSCRIBER NAME
  -- REPORT  -- a progress report is produced after every "n" (number) source records are processed.
  
  JOBNAME VSMTOKFK;
  --REPORT EVERY 100;
  -- Change Op is 'I' for insert, 'D' for delete, and 'R' for replace. For RDS it is 'U' for update.
  -- Character encoding on z/OS is code page 1047; on Linux and UNIX it is code page 819; on Windows, code page 1252.
  OPTIONS
  CDCOP('I', 'U', 'D'),
  PSEUDO NULL = NO,
  USE AVRO COMPATIBLE NAMES,
  APPLICATION ENCODING SCHEME = 1208;
  
  --       SOURCE DESCRIPTIONS
  
  BEGIN GROUP VSAM_SRC;
  DESCRIPTION COBOL ../copybk/ACCOUNT AS account_file;
  END GROUP;
  
  --       TARGET DESCRIPTIONS
  
  BEGIN GROUP VSAM_TGT;
  DESCRIPTION COBOL ../copybk/ACCOUNT AS account_file;
  END GROUP;
  
  --       SOURCE DATASTORE (IP & Publisher name)
  
  DATASTORE cdc://10.81.148.4:2626/vsmcdct/VSMTOKFK
  OF VSAMCDC
  AS CDCIN
  DESCRIBED BY GROUP VSAM_SRC ACCEPT ALL;
  
  --       TARGET DATASTORE(s) - Kafka and topic name
  
  DATASTORE 'kafka:///MSKTutorialTopic/key'
  OF JSON
  AS CDCOUT
  DESCRIBED BY GROUP VSAM_TGT FOR INSERT;
  
  --       MAIN SECTION
  
  PROCESS INTO
  CDCOUT
  SELECT
  {
  SETURL(CDCOUT, 'kafka:///MSKTutorialTopic/key')
  REMAP(CDCIN, account_file, GET_RAW_RECORD(CDCIN, AFTER), GET_RAW_RECORD(CDCIN, BEFORE))
  REPLICATE(CDCOUT, account_file)
  }
  FROM CDCIN;
```

**Key pair example**

This is an example of the JCL that generates the key pair:

```
//SQDUTIL  EXEC PGM=SQDUTIL
//SQDPUBL  DD DSN=&USER..NACL.PUBLIC,
//         DCB=(RECFM=FB,LRECL=80,BLKSIZE=21200),
//         DISP=(,CATLG,DELETE),UNIT=SYSDA,
//         SPACE=(TRK,(1,1))
//SQDPKEY  DD DSN=&USER..NACL.PRIVATE,
//         DCB=(RECFM=FB,LRECL=80,BLKSIZE=21200),
//         DISP=(,CATLG,DELETE),UNIT=SYSDA,
//         SPACE=(TRK,(1,1))
//SQDPARMS DD keygen
//SYSPRINT DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//SQDLOG   DD SYSOUT=*
//*SQDLOG8 DD DUMMY
```

# Modernize the CardDemo mainframe application by using AWS Transform
<a name="modernize-carddemo-mainframe-app"></a>

*Santosh Kumar Singh and Cheryl du Preez, Amazon Web Services*

## Summary
<a name="modernize-carddemo-mainframe-app-summary"></a>

[AWS Transform](https://docs.aws.amazon.com/transform/latest/userguide/what-is-service.html) is designed to accelerate the modernization of mainframe applications. It uses generative AI to automate complex tasks such as legacy code analysis, mainframe documentation, business rules extraction, decomposition of monolithic applications into business domains, code refactoring, and migration sequence planning. When decomposing monolithic applications, AWS Transform intelligently sequences the mainframe application transformation, which helps you transform business functions in parallel. AWS Transform can accelerate decision making and enhance operational agility and migration efficiency.

This pattern offers step-by-step instructions to help you test the mainframe modernization capabilities of AWS Transform by using [CardDemo](https://github.com/aws-samples/aws-mainframe-modernization-carddemo), which is a sample open source mainframe application.

## Prerequisites and limitations
<a name="modernize-carddemo-mainframe-app-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ AWS IAM Identity Center, [enabled](https://docs.aws.amazon.com/singlesignon/latest/userguide/enable-identity-center.html)
+ [Permissions](https://docs.aws.amazon.com/transform/latest/userguide/security_iam_id-based-policy-examples.html#id-based-policy-examples-admin-enable-transform) that allow administrators to enable AWS Transform
+ [Permissions](https://docs.aws.amazon.com/transform/latest/userguide/security_iam_id-based-policy-examples.html#id-based-policy-examples-admin-connector) that allow administrators to accept Amazon Simple Storage Service (Amazon S3) connection requests for the AWS Transform web application

**Limitations**
+ AWS Transform is available only in some AWS Regions. For a complete list of supported Regions, see [Supported Regions for AWS Transform](https://docs.aws.amazon.com/transform/latest/userguide/regions.html).
+ AWS Transform supports code analysis, document generation, business rules extraction, decomposition, and refactoring from Common Business-Oriented Language (COBOL) to Java. For more information, see [Capabilities and key features](https://docs.aws.amazon.com/transform/latest/userguide/transform-app-mainframe.html#transform-app-mainframe-features) and [Supported file types for transformation of mainframe applications](https://docs.aws.amazon.com/transform/latest/userguide/transform-app-mainframe.html#transform-app-mainframe-supported-files).
+ There is a service quota for mainframe transformation capabilities in AWS Transform. For more information, see [Quotas for AWS Transform](https://docs.aws.amazon.com/transform/latest/userguide/transform-limits.html).
+ To collaborate in a shared workspace, all users must be registered in the same AWS IAM Identity Center instance that is associated with your instance of the AWS Transform web application.
+ The Amazon S3 bucket and AWS Transform must be in the same AWS account and Region.

## Architecture
<a name="modernize-carddemo-mainframe-app-architecture"></a>

The following diagram shows the architecture that you set up in this pattern.

![\[Using AWS Transform to modernize a mainframe application that is stored in an Amazon S3 bucket.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/0e539474-b733-452d-b0fb-6b3f4cbd5075/images/75be6d78-5b43-448c-ad07-bf74b9ae14ad.png)


The diagram shows the following workflow:

1. AWS Transform uses a connector to access the CardDemo mainframe application, which is stored in an Amazon S3 bucket.

1. AWS Transform uses AWS IAM Identity Center to manage user access and authentication. The system implements multiple layers of security controls for authentication, authorization, encryption, and access management to help protect code and artifacts during processing. Users interact with the AWS Transform agent through a chat interface. You can provide instructions to the AI agent for specific tasks in English. For more information, see [Human in the loop (HITL)](https://docs.aws.amazon.com/transform/latest/userguide/transform-app-mainframe.html#transform-app-mainframe-hitl) in the AWS Transform documentation.

1. The AI agent interprets the user's instructions, creates a job plan, divides the job into executable tasks, and executes them autonomously. Users can review and approve the transformation. Transformation tasks include the following:
   + **Code analysis** – AWS Transform analyzes the code in each file for details such as file name, file type, lines of code, and their paths. The agent analyzes the source code, runs classifications, creates dependency mappings, and identifies any missing artifacts. It also identifies duplicate components.
   + **Document generation** – AWS Transform generates documentation for the mainframe application. By analyzing the code, it can automatically create detailed documentation of the application programs, including descriptions of the business logic, flows, integrations, and dependencies present in your legacy systems.
   + **Business logic extraction** – AWS Transform analyzes COBOL programs to document their core business logic, which helps you understand the fundamental behavior of the application.
   + **Code decomposition** – AWS Transform decomposes the code into domains that account for dependencies between programs and components. Grouping related files and programs within the same domain improves organization and helps preserve the application's logical structure when breaking it down into smaller components.
   + **Migration wave planning** – Based on the domains you created during the decomposition phase, AWS Transform generates a migration wave plan with recommended modernization order.
   + **Code refactoring** – AWS Transform refactors the code in all or selected domain files into Java code. The goal of this step is to preserve the critical business logic of the application while refactoring it to a modernized, cloud-optimized Java application.

1. AWS Transform stores the refactored code, generated documents, associated artifacts, and runtime libraries in your Amazon S3 bucket. You can do the following:
   + Access the runtime folder in your Amazon S3 bucket.
   + Build and deploy the application by following the instructions in [Build and deploy your modernized application post-refactoring](https://docs.aws.amazon.com/transform/latest/userguide/transform-app-mainframe-workflow-build-deploy.html) in the AWS Transform documentation.
   + Through the chat interface, request and download a sample AWS CloudFormation, AWS Cloud Development Kit (AWS CDK), or HashiCorp Terraform template. These templates can help you deploy the AWS resources that are necessary to support the refactored application.
   + Use [Reforge](https://docs.aws.amazon.com/transform/latest/userguide/transform-app-mainframe-workflow.html#transform-app-mainframe-workflow-refactor-code-reforge) to improve the quality of the refactored code. The refactoring engine preserves the functional equivalence of the COBOL while transforming it into Java code. Reforge is an optional post-transformation step that uses large language models (LLMs) to restructure the code so that it more closely resembles native Java, which can improve readability and maintainability. Reforge also adds human-readable comments to help you understand the code, and it implements modern coding patterns and best practices.
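The retrieval in step 4 can also be scripted. The following sketch is illustrative only, not part of the pattern's official tooling: the bucket name, output prefix, and use of boto3 are assumptions. It mirrors everything under the job's output prefix to a local directory.

```python
# Illustrative sketch only: download the artifacts that AWS Transform wrote
# to the connector's S3 bucket (refactored code, generated documents, runtime
# libraries). Bucket name and prefix are hypothetical placeholders.
import pathlib


def local_target(dest: str, key: str) -> pathlib.Path:
    """Map an S3 object key to a local path under dest, preserving the key layout."""
    return pathlib.Path(dest).joinpath(*key.split("/"))


def download_artifacts(bucket: str, prefix: str, dest: str) -> list[str]:
    """Download every object under prefix into dest; returns the keys copied."""
    import boto3  # assumed to be installed where you run this

    s3 = boto3.client("s3")
    downloaded = []
    for page in s3.get_paginator("list_objects_v2").paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            target = local_target(dest, obj["Key"])
            target.parent.mkdir(parents=True, exist_ok=True)
            s3.download_file(bucket, obj["Key"], str(target))
            downloaded.append(obj["Key"])
    return downloaded
```

For example, `download_artifacts("my-transform-bucket", "carddemo/output/", "./artifacts")` would mirror the hypothetical job output locally, including the runtime folder mentioned above.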

## Tools
<a name="modernize-carddemo-mainframe-app-tools"></a>

**AWS services**
+ [AWS Transform](https://docs.aws.amazon.com/transform/latest/userguide/what-is-service.html) uses agentic AI to help you accelerate the modernization of legacy workloads, such as .NET, mainframe, and VMware workloads.
+ [AWS IAM Identity Center](https://docs.aws.amazon.com/singlesignon/latest/userguide/what-is.html) helps you centrally manage single sign-on (SSO) access to your AWS accounts and cloud applications.
+ [Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) is a cloud-based object storage service that helps you store, protect, and retrieve any amount of data.

**Code repository**

You can use the open source AWS [CardDemo](https://github.com/aws-samples/aws-mainframe-modernization-carddemo) mainframe application as a sample application to get started with mainframe modernization.

## Best practices
<a name="modernize-carddemo-mainframe-app-best-practices"></a>
+ **Start small** – Begin with small, less complex code (15,000–20,000 lines of code) to get an understanding of how AWS Transform analyzes and transforms mainframe applications.
+ **Combine with human expertise** – Use AWS Transform as an accelerator while applying human expertise for optimal results.
+ **Review and test thoroughly** – Always review the transformed code carefully and run comprehensive tests to validate the functional equivalency after transformation.
+ **Provide feedback** – To provide feedback and suggestions for improvement, use the **Send feedback** button in the AWS Management Console or create a case with [AWS Support](https://support.console.aws.amazon.com/). For more information, see [Creating a support case](https://docs.aws.amazon.com/awssupport/latest/user/case-management.html). Your input is valuable for service enhancements and future development.

## Epics
<a name="modernize-carddemo-mainframe-app-epics"></a>

### Prepare the mainframe application
<a name="prepare-the-mainframe-application"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a bucket. | Create an Amazon S3 bucket in the same AWS account and Region where AWS Transform is enabled. You use this bucket to store the mainframe application code, and AWS Transform uses this bucket to store the generated documents, refactored code, and other files associated with the transformation. For instructions, see [Creating a bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/create-bucket-overview.html) in the Amazon S3 documentation. | General AWS | 
| Prepare the sample mainframe application. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-carddemo-mainframe-app.html) | App developer, DevOps engineer | 
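As a hedged illustration of the bucket-creation task above (the bucket name, Region, and use of boto3 are assumptions, not requirements of the pattern), note that the S3 `CreateBucket` API must omit `CreateBucketConfiguration` when the target Region is us-east-1:

```python
# Illustrative sketch only: create the pattern's S3 bucket in the same Region
# where AWS Transform is enabled. Bucket name and Region are placeholders.
def create_bucket_kwargs(bucket: str, region: str) -> dict:
    """Build create_bucket arguments; us-east-1 must omit LocationConstraint."""
    kwargs = {"Bucket": bucket}
    if region != "us-east-1":
        kwargs["CreateBucketConfiguration"] = {"LocationConstraint": region}
    return kwargs


def create_transform_bucket(bucket: str, region: str) -> None:
    """Create the bucket in the Region where AWS Transform is enabled."""
    import boto3  # assumed to be installed where you run this

    boto3.client("s3", region_name=region).create_bucket(
        **create_bucket_kwargs(bucket, region)
    )
```

For example, `create_transform_bucket("my-carddemo-bucket", "eu-west-1")` would place the bucket in the same Region that you chose for AWS Transform, satisfying the same-account, same-Region limitation noted earlier.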

### Configure IAM Identity Center and AWS Transform
<a name="configure-sso-and-trn"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Add users to IAM Identity Center. | Add your prospective users to IAM Identity Center. Follow the instructions in [Adding users in IAM Identity Center](https://docs.aws.amazon.com/transform/latest/userguide/transform-user-management.html#transform-add-idc-users) in the AWS Transform documentation. | AWS administrator | 
| Enable AWS Transform and add users. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-carddemo-mainframe-app.html) | AWS administrator | 
| Configure user access to the AWS Transform web application. | Each user must accept the invitation to access the AWS Transform web application. Follow the instructions in [Accepting the invitation](https://docs.aws.amazon.com/transform/latest/userguide/transform-user-onboarding.html#transform-user-invitation) in the AWS Transform documentation. | App developer, App owner | 
| Log in to the AWS Transform web application. | Follow the instructions in [Signing in to AWS Transform](https://docs.aws.amazon.com/transform/latest/userguide/transform-user-onboarding.html#transform-user-signin). | App developer, App owner | 
| Set up a workspace. | Set up a workspace where users can collaborate in the AWS Transform web application. Follow the instructions in [Start your project](https://docs.aws.amazon.com/transform/latest/userguide/transform-environment.html#start-workflow) in the AWS Transform documentation. | AWS administrator | 

### Transform the mainframe application
<a name="transform-the-mainframe-application"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a transformation job. | Create a transformation job to modernize the CardDemo mainframe application. For instructions, see [Create and start a job](https://docs.aws.amazon.com/transform/latest/userguide/transform-app-mainframe-workflow.html#transform-app-mainframe-workflow-start-job) in the AWS Transform documentation. When you are asked to set the objectives in the AWS Transform chat interface, choose **Perform mainframe modernization (IBM z/OS to AWS)** and then choose **Analyze code, Generate technical documentation, Business logic, Decompose code, Plan Migration sequence and Transform code to Java**. | App developer, App owner | 
| Set up the connector. | Establish a connector to the Amazon S3 bucket that contains the CardDemo mainframe application. This connector allows AWS Transform to access resources in the bucket and perform the subsequent transformation tasks. For instructions, see [Set up a connector](https://docs.aws.amazon.com/transform/latest/userguide/transform-app-mainframe-workflow.html#transform-app-mainframe-workflow-setup-connector) in the AWS Transform documentation. | AWS administrator | 
| Perform code analysis. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-carddemo-mainframe-app.html) For more information, see [Code analysis](https://docs.aws.amazon.com/transform/latest/userguide/transform-app-mainframe-workflow.html#transform-app-mainframe-workflow-code-analysis) in the AWS Transform documentation. | App developer, App owner | 
| Generate technical documentation. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-carddemo-mainframe-app.html) For more information, see [Generate technical documentation](https://docs.aws.amazon.com/transform/latest/userguide/transform-app-mainframe-workflow.html#transform-app-mainframe-workflow-generate-documentation) in the AWS Transform documentation. | App developer, App owner | 
| Extract the business logic. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-carddemo-mainframe-app.html) For more information, see [Extract business logic](https://docs.aws.amazon.com/transform/latest/userguide/transform-app-mainframe-workflow.html#transform-app-mainframe-workflow-extract-business-logic) in the AWS Transform documentation. | App developer, App owner | 
| Decompose the code. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-carddemo-mainframe-app.html) For more information about decomposition and seeds, see [Decomposition](https://docs.aws.amazon.com/transform/latest/userguide/transform-app-mainframe-workflow.html#transform-app-mainframe-workflow-decomposition) in the AWS Transform documentation. | App developer, App owner | 
| Plan the migration waves. | Plan the migration waves for the CardDemo application. Follow the instructions in [Migration wave planning](https://docs.aws.amazon.com/transform/latest/userguide/transform-app-mainframe-workflow.html#transform-app-mainframe-workflow-wave-planning) in the AWS Transform documentation to review and edit the wave plan. | App developer, App owner | 
| Refactor the code. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-carddemo-mainframe-app.html) | App developer, App owner | 
| (Optional) Use Reforge to improve the Java code. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-carddemo-mainframe-app.html) For more information, see [Reforge](https://docs.aws.amazon.com/transform/latest/userguide/transform-app-mainframe-workflow.html#transform-app-mainframe-workflow-refactor-code-reforge) in the AWS Transform documentation. | App developer, App owner | 
| Streamline the deployment. | AWS Transform can provide infrastructure as code (IaC) templates for CloudFormation, AWS CDK, or Terraform. These templates help you deploy core components, including compute, database, storage, and security resources. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-carddemo-mainframe-app.html) For more information, see [Deployment capabilities](https://docs.aws.amazon.com/transform/latest/userguide/transform-app-mainframe-workflow.html#transform-app-mainframe-features-deployment) in the AWS Transform documentation. | App developer, App owner | 

## Troubleshooting
<a name="modernize-carddemo-mainframe-app-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| You are unable to view the source code or generated document in the AWS Transform web application. | Add a policy to the CORS permission for the Amazon S3 bucket to allow AWS Transform as an origin. For more information, see [S3 bucket CORS permissions](https://docs.aws.amazon.com/transform/latest/userguide/transform-app-mainframe-workflow.html#transform-app-mainframe-workflow-setup-connector-s3) in the AWS Transform documentation. | 

## Related resources
<a name="modernize-carddemo-mainframe-app-resources"></a>

**AWS documentation**
+ [Transformation of mainframe applications](https://docs.aws.amazon.com/transform/latest/userguide/transform-app-mainframe-workflow.html) (AWS Transform documentation)

**Other AWS resources**
+ [Accelerate Your Mainframe Modernization Journey using AI Agents with AWS Transform](https://aws.amazon.com/blogs/migration-and-modernization/accelerate-your-mainframe-modernization-journey-using-ai-agents-with-aws-transform/) (AWS blog post)
+ [AWS Transform FAQs](https://aws.amazon.com/transform/faq/)
+ [AWS IAM Identity Center FAQs](https://aws.amazon.com/iam/identity-center/faqs/)

**Videos and tutorials**
+ [Introduction to Amazon Q Developer: Transform](https://explore.skillbuilder.aws/learn/courses/21893/aws-flash-introduction-to-amazon-q-developer-transform) (AWS Skill Builder)
+ [AWS re:Invent 2024 - Modernize mainframe applications faster using Amazon Q Developer](https://www.youtube.com/watch?v=pSi0XtYfY4o) (YouTube)
+ [AWS re:Invent 2024 - Automating migration and modernization to accelerate transformation](https://www.youtube.com/watch?v=9FjxnEoH5wg) (YouTube)
+ [AWS re:Invent 2024 - Toyota drives innovation & enhances operational efficiency with gen AI](https://www.youtube.com/watch?v=_NXc1MJenw4) (YouTube)

**Note**  
AWS Transform was previously known as *Amazon Q Developer transform for mainframe*.

# Modernize and deploy mainframe applications using AWS Transform and Terraform
<a name="modernize-mainframe-app-transform-terraform"></a>

*Mason Cahill, Polaris Jhandi, Prachi Khanna, Sivasubramanian Ramani, and Santosh Kumar Singh, Amazon Web Services*

## Summary
<a name="modernize-mainframe-app-transform-terraform-summary"></a>

[AWS Transform](https://docs.aws.amazon.com/transform/latest/userguide/what-is-service.html) can accelerate large-scale modernization of .NET, mainframe, and VMware workloads. It deploys specialized AI agents that automate complex tasks like assessments, code analysis, refactoring, decomposition, dependency mapping, validation, and transformation planning. This pattern demonstrates how to use AWS Transform to modernize a mainframe application and then deploy it to AWS infrastructure by using [HashiCorp Terraform](https://developer.hashicorp.com/terraform/intro). These step-by-step instructions help you transform [CardDemo](https://github.com/aws-samples/aws-mainframe-modernization-carddemo), which is a sample open source mainframe application, from COBOL to a modern Java application.

## Prerequisites and limitations
<a name="modernize-mainframe-app-transform-terraform-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ Administrative permissions to create AWS resources and deploy applications
+ Terraform version 1.5.7 or higher, [configured](https://developer.hashicorp.com/terraform/tutorials/aws-get-started?utm_source=WEBSITE&utm_medium=WEB_IO&utm_offer=ARTICLE_PAGE&utm_content=DOCS)
+ AWS Provider for Terraform, [configured](https://registry.terraform.io/providers/hashicorp/aws/2.36.0/docs#authentication)
+ AWS IAM Identity Center, [enabled](https://docs.aws.amazon.com/singlesignon/latest/userguide/enable-identity-center.html)
+ AWS Transform, [enabled](https://docs.aws.amazon.com/transform/latest/userguide/getting-started.html)
+ A user, [onboarded](https://docs.aws.amazon.com/transform/latest/userguide/transform-user-management.html) to an AWS Transform workspace with a contributor role that can run transformation jobs

**Limitations**
+ AWS Transform is available only in some AWS Regions. For a complete list of supported Regions, see [Supported Regions for AWS Transform](https://docs.aws.amazon.com/transform/latest/userguide/regions.html).
+ There is a service quota for mainframe transformation capabilities in AWS Transform. For more information, see [Quotas for AWS Transform](https://docs.aws.amazon.com/transform/latest/userguide/transform-limits.html).
+ To collaborate on a shared workspace, all users must be registered users of the same instance of AWS IAM Identity Center that is associated with your instance of the AWS Transform web application.
+ The Amazon Simple Storage Service (Amazon S3) bucket and AWS Transform must be in the same AWS account and Region.

## Architecture
<a name="modernize-mainframe-app-transform-terraform-architecture"></a>

The following diagram shows the end-to-end modernization of the legacy application and deployment to the AWS Cloud. Application and database credentials are stored in AWS Secrets Manager, and Amazon CloudWatch provides monitoring and logging capabilities.

![\[AWS Transform modernizing a mainframe application and deployment through Terraform.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/78bc1e6e-cd3d-4c6d-ae4b-0675a6898fd9/images/332ccf35-f55a-449e-a05d-7e321b3867b7.png)


The diagram shows the following workflow:

1. Through AWS IAM Identity Center, the user authenticates and accesses AWS Transform in the AWS account.

1. The user uploads the COBOL mainframe code to the Amazon S3 bucket and initiates the transformation in AWS Transform.

1. AWS Transform modernizes the COBOL code into cloud-native Java code and stores the modernized code in the Amazon S3 bucket.

1. Terraform creates the AWS infrastructure to deploy the modernized application, including an Application Load Balancer, Amazon Elastic Compute Cloud (Amazon EC2) instance, and Amazon Relational Database Service (Amazon RDS) database. Terraform deploys the modernized code to the Amazon EC2 instance.

1. The VSAM files are uploaded to Amazon EC2 and are migrated from Amazon EC2 to the Amazon RDS database.
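The data migration in step 5 typically starts from fixed-width files exported from VSAM data sets, with record layouts defined by the application's copybooks. As a hedged sketch only (the field names and widths below are invented for illustration and are not CardDemo's actual copybook layout), each record can be sliced into named columns before it is loaded into the Amazon RDS database:

```python
# Illustrative sketch only: slice one fixed-width record exported from a VSAM
# data set into named fields before loading it into Amazon RDS. The layout
# below is a hypothetical example, not the actual CardDemo copybook.
LAYOUT = [("account_id", 11), ("card_number", 16), ("balance", 12)]


def parse_record(line: str, layout=LAYOUT) -> dict:
    """Return a field-name -> value dict for one fixed-width line."""
    fields, pos = {}, 0
    for name, width in layout:
        fields[name] = line[pos:pos + width].strip()
        pos += width
    return fields
```

Rows produced this way can then be batch-inserted with whichever database driver matches your Amazon RDS engine.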

## Tools
<a name="modernize-mainframe-app-transform-terraform-tools"></a>

**AWS services**
+ [Amazon Elastic Compute Cloud (Amazon EC2)](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html) provides scalable computing capacity in the AWS Cloud. You can launch as many virtual servers as you need and quickly scale them up or down. In this pattern, the modernized Java application is deployed on an Amazon EC2 instance.
+ [AWS IAM Identity Center](https://docs.aws.amazon.com/singlesignon/latest/userguide/what-is.html) helps you centrally manage single sign-on (SSO) access to your AWS accounts and cloud applications.
+ [Amazon Relational Database Service (Amazon RDS)](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Welcome.html) helps you set up, operate, and scale a relational database in the AWS Cloud.
+ [AWS Secrets Manager](https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html) helps you replace hardcoded credentials in your code, including passwords, with an API call to Secrets Manager to retrieve the secret programmatically.
+ [Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) is a cloud-based object storage service that helps you store, protect, and retrieve any amount of data.
+ [AWS Transform](https://docs.aws.amazon.com/transform/latest/userguide/what-is-service.html) uses agentic AI to help you accelerate the modernization of legacy workloads, such as .NET, mainframe, and VMware workloads.

**Other tools**
+ [Apache Maven](https://maven.apache.org/) is an open source software project management and build automation tool for Java projects.
+ [Apache Tomcat](https://tomcat.apache.org/) is an open source Servlet container and web server for Java code.
+ [HashiCorp Terraform](https://www.terraform.io/docs) is an infrastructure as code (IaC) tool that helps you use code to provision and manage cloud infrastructure and resources.
+ [Spring Boot](https://spring.io/projects/spring-boot) is an open source framework built on top of the Spring Framework in Java.

**Code repository**

The code for this pattern is available in the GitHub [Mainframe Transformation E2E](https://github.com/aws-samples/sample-mainframe-transformation-e2e) repository. This pattern uses the open source AWS [CardDemo](https://github.com/aws-samples/aws-mainframe-modernization-carddemo) mainframe application as a sample application.

## Best practices
<a name="modernize-mainframe-app-transform-terraform-best-practices"></a>
+ Assign full ownership of code and resources targeted for migration.
+ Develop and test a proof of concept before scaling to a full migration.
+ Secure commitment from all stakeholders.
+ Establish clear communication channels.
+ Define and document minimum viable product (MVP) requirements.
+ Set clear success criteria.

## Epics
<a name="modernize-mainframe-app-transform-terraform-epics"></a>

### Prepare and upload the mainframe application code
<a name="prepare-and-upload-the-mainframe-application-code"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a bucket. | Create an Amazon S3 bucket in the same AWS account and Region where AWS Transform is enabled. You use this bucket to store the mainframe application code, data, and additional scripts that are required to build and run the application. AWS Transform uses this bucket to store the refactored code and other files associated with the transformation. For instructions, see [Creating a bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/create-bucket-overview.html) in the Amazon S3 documentation. | General AWS, AWS administrator | 
| Set the CORS permissions for the bucket. | When setting up your bucket for AWS Transform access, you must configure cross-origin resource sharing (CORS) for the bucket. If CORS is not set up correctly, you might not be able to use the inline viewing or file comparison features of AWS Transform. For instructions about how to configure CORS for a bucket, see [Using cross-origin resource sharing](https://docs.aws.amazon.com/AmazonS3/latest/userguide/cors.html) in the Amazon S3 documentation. For the policy, see [S3 bucket CORS permissions](https://docs.aws.amazon.com/transform/latest/userguide/transform-app-mainframe-workflow.html#transform-app-mainframe-workflow-setup-connector-s3) in the AWS Transform documentation. | General AWS, AWS administrator | 
| Prepare the sample mainframe application code. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-app-transform-terraform.html) | General AWS, App developer | 
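The CORS task above can also be scripted. The sketch below is an assumption-laden illustration (boto3, the single-origin rule shape, and the placeholder origin are all examples, not the documented policy); always take the exact rule from S3 bucket CORS permissions in the AWS Transform documentation.

```python
# Illustrative sketch only: apply a CORS rule that allows the AWS Transform
# web application to read objects in the bucket. The origin is a placeholder;
# use the origin from the AWS Transform documentation's published policy.
def cors_configuration(origin: str) -> dict:
    """Build a put_bucket_cors payload for a single allowed origin."""
    return {
        "CORSRules": [
            {
                "AllowedOrigins": [origin],
                "AllowedMethods": ["GET"],
                "AllowedHeaders": ["*"],
                "ExposeHeaders": ["ETag"],
            }
        ]
    }


def apply_cors(bucket: str, origin: str) -> None:
    """Attach the CORS rule to the bucket so inline viewing works."""
    import boto3  # assumed to be installed where you run this

    boto3.client("s3").put_bucket_cors(
        Bucket=bucket, CORSConfiguration=cors_configuration(origin)
    )
```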

### Transform the mainframe application
<a name="transform-the-mainframe-application"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Set up the AWS Transform job. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-app-transform-terraform.html) | App developer, App owner | 
| Set up a connector. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-app-transform-terraform.html) | App developer, App owner | 
| Transform the code. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-app-transform-terraform.html) | App developer, App owner | 

### Deploy the infrastructure through Terraform
<a name="deploy-the-infrastructure-through-terraform"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Update the templates. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-app-transform-terraform.html) For production or production-like environments, configure additional security components. For example, enable [AWS WAF protections for your Application Load Balancer](https://aws.amazon.com/about-aws/whats-new/2024/02/aws-application-load-balancer-one-click-waf-integrations/). | General AWS, AWS administrator | 
| Deploy the infrastructure. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-app-transform-terraform.html) | Terraform | 

### Install and configure Apache Tomcat on the Amazon EC2 instance
<a name="install-and-configure-apache-tomcat-on-the-ec2-instance"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Install the required software. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-app-transform-terraform.html) | App developer, Migration engineer | 
| Verify software installation. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-app-transform-terraform.html) | App developer, Migration engineer | 

### Compile and package the modernized application code
<a name="compile-and-package-the-modernized-application-code"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Download and extract the generated code. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-app-transform-terraform.html) | App developer, Migration engineer | 
| Build the modernized application. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-app-transform-terraform.html) | App developer, Migration engineer | 

### Migrate the database
<a name="migrate-the-database"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the database and JICS schemas. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-app-transform-terraform.html) | App developer, Migration engineer | 
| Validate database creation. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-app-transform-terraform.html) | App developer, Migration engineer | 
| Migrate data to the JICS database. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-app-transform-terraform.html) | App developer, Migration engineer | 

### Install the modernized application
<a name="install-the-modernized-application"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Install the modernized application on the Amazon EC2 instance. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-app-transform-terraform.html) | App developer, Cloud architect | 
| Restart the Tomcat server. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-app-transform-terraform.html) | App developer, Cloud architect | 
| Migrate the VSAM dataset. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-app-transform-terraform.html) | App developer, Migration engineer | 
| Update the parameters in the Groovy scripts. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-app-transform-terraform.html) | App developer | 

### Test the application
<a name="test-the-application"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Test the modernized application. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-app-transform-terraform.html) | App developer, Test engineer | 
| Verify the batch scripts. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-app-transform-terraform.html) | App developer, Test engineer | 

### Clean up
<a name="clean-up"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Prepare to delete the infrastructure. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-app-transform-terraform.html) | General AWS | 
| Delete the infrastructure. | These steps will permanently delete your resources. Make sure that you have backed up any important data before proceeding. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-app-transform-terraform.html) | General AWS | 

## Troubleshooting
<a name="modernize-mainframe-app-transform-terraform-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| Terraform authentication | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-app-transform-terraform.html) | 
| Tomcat-related errors | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-app-transform-terraform.html) | 
| Application URL doesn't load | Make sure that the inbound rules of the Application Load Balancer security group include your IP address as a source. | 
| Authentication issue in Tomcat log | Confirm that the database secret password in AWS Secrets Manager and the password in **server.xml** match. | 
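
The last check can be scripted. The following sketch uses illustrative sample data only: in practice the secret value would come from AWS Secrets Manager (for example, via `aws secretsmanager get-secret-value`), and the resource name and attribute layout in **server.xml** depend on your application's datasource configuration:

```python
import json
import xml.etree.ElementTree as ET

# Sample stand-ins for the real sources: the JSON string returned by a
# Secrets Manager secret, and the datasource Resource entry in server.xml.
secret_string = '{"username": "appuser", "password": "s3cret"}'
server_xml = """
<Server>
  <GlobalNamingResources>
    <Resource name="jdbc/AppDB" username="appuser" password="s3cret"/>
  </GlobalNamingResources>
</Server>
"""

secret_password = json.loads(secret_string)["password"]
xml_password = ET.fromstring(server_xml).find(".//Resource").get("password")

if secret_password == xml_password:
    print("Passwords match")
else:
    print("Mismatch: update server.xml or rotate the secret")
```

If the two values differ, align them and restart Tomcat so that the new credentials take effect.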

## Related resources
<a name="modernize-mainframe-app-transform-terraform-resources"></a>

**AWS Prescriptive Guidance**
+ [Modernize the CardDemo mainframe application by using AWS Transform](https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-carddemo-mainframe-app.html)

**AWS service documentation**
+ [AWS Blu Age Blusam Administration Console](https://docs.aws.amazon.com/m2/latest/userguide/ba-shared-bac-userguide.html)
+ [Infrastructure setup requirements for AWS Blu Age Runtime (non-managed)](https://docs.aws.amazon.com/m2/latest/userguide/ba-infrastructure-setup.html)
+ [Onboarding AWS Blu Age Runtime](https://docs.aws.amazon.com/m2/latest/userguide/ba-runtime-setup-onboard.html)
+ [Modernization of mainframe applications](https://docs.aws.amazon.com/amazonq/latest/qdeveloper-ug/transform-app-mainframe.html)
+ [Set up configuration for AWS Blu Age Runtime](https://docs.aws.amazon.com/m2/latest/userguide/ba-runtime-config.html)

**AWS blog posts**
+ [Accelerate Your Mainframe Modernization Journey using AI Agents with AWS Transform](https://aws.amazon.com/blogs/migration-and-modernization/accelerate-your-mainframe-modernization-journey-using-ai-agents-with-aws-transform/)

# Modernize mainframe output management on AWS by using Rocket Enterprise Server and LRS PageCenterX
<a name="modernize-mainframe-output-management-on-aws-by-using-rocket-enterprise-server-and-lrs-pagecenterx"></a>

*Shubham Roy, Amazon Web Services*

*Abraham Rondon, Micro Focus*

*Guy Tucker, Levi, Ray & Shoup, Inc.*

## Summary
<a name="modernize-mainframe-output-management-on-aws-by-using-rocket-enterprise-server-and-lrs-pagecenterx-summary"></a>

By modernizing your mainframe output management, you can achieve cost savings, mitigate the technical debt of maintaining legacy systems, and improve resiliency and agility through DevOps and Amazon Web Services (AWS) cloud-native technologies. This pattern shows you how to modernize your business-critical mainframe output-management workloads on the AWS Cloud. The pattern uses [Rocket Enterprise Server](https://www.rocketsoftware.com/en-us/products/enterprise-suite/enterprise-server) as a runtime for a modernized mainframe application, with Levi, Ray & Shoup, Inc. (LRS) VPSX/MFI (Micro Focus Interface) as a print server and LRS PageCenterX as an archive server. LRS PageCenterX provides output-management solutions for viewing, indexing, searching, archiving, and securing access to business outputs.

The pattern is based on the [replatform](https://aws.amazon.com/blogs/apn/demystifying-legacy-migration-options-to-the-aws-cloud/) mainframe modernization approach. Mainframe applications are migrated by using [AWS Mainframe Modernization](https://docs.aws.amazon.com/m2/latest/userguide/what-is-m2.html) to Amazon Elastic Compute Cloud (Amazon EC2). Mainframe output-management workloads are migrated to Amazon EC2, and a mainframe database, such as IBM Db2 for z/OS, is migrated to Amazon Relational Database Service (Amazon RDS). The LRS Directory Integration Server (LRS/DIS) works with AWS Directory Service for Microsoft Active Directory for output-management workflow authentication and authorization.

## Prerequisites and limitations
<a name="modernize-mainframe-output-management-on-aws-by-using-rocket-enterprise-server-and-lrs-pagecenterx-prereqs"></a>

**Prerequisites**
+ An active AWS account.
+ A mainframe output-management workload.
+ Basic knowledge of how to rebuild and deliver a mainframe application that runs on Rocket Enterprise Server. For more information, see the [Rocket Enterprise Server](https://www.rocketsoftware.com/sites/default/files/resource_files/enterprise-server.pdf) data sheet in the Rocket Software documentation.
+ Basic knowledge of LRS cloud printing solutions and concepts. For more information, see *Output Modernization* in the LRS documentation.
+ Rocket Enterprise Server software and license. For more information, contact [Rocket Software](https://www.rocketsoftware.com/products/enterprise-suite/request-contact).
+ LRS VPSX/MFI, LRS PageCenterX, LRS/Queue, and LRS/DIS software and licenses. For more information, [contact LRS](https://www.lrsoutputmanagement.com/about-us/contact-us/). You must provide the hostnames of the EC2 instances where the LRS products will be installed.


**Note**  
For more information about configuration considerations for mainframe output-management workloads, see *Considerations* in the [Additional information](#modernize-mainframe-output-management-on-aws-by-using-rocket-enterprise-server-and-lrs-pagecenterx-additional) section of this pattern.

**Product versions**
+ [Rocket Enterprise Server 10.0](https://www.rocketsoftware.com/products/enterprise-suite/enterprise-test-server)
+ [LRS VPSX/MFI](https://www.lrsoutputmanagement.com/products/modernization-products/)
+ [LRS PageCenterX](https://www.lrsoutputmanagement.com/products/content-management/pagecenterx-for-open-systems/) V1R3 or later

## Architecture
<a name="modernize-mainframe-output-management-on-aws-by-using-rocket-enterprise-server-and-lrs-pagecenterx-architecture"></a>

**Source technology stack**
+ Operating system – IBM z/OS
+ Programming language – Common business-oriented language (COBOL), job control language (JCL), and Customer Information Control System (CICS)
+ Database – IBM Db2 for z/OS, IBM Information Management System (IMS) database, and Virtual Storage Access Method (VSAM)
+ Security – Resource Access Control Facility (RACF), CA Top Secret for z/OS, and Access Control Facility 2 (ACF2)
+ Print and archive solutions – IBM mainframe z/OS output and printing products (IBM Infoprint Server for z/OS, LRS, and CA Deliver) and archiving solutions (CA Deliver, ASG Mobius, or CA Bundle)

**Source architecture**

The following diagram shows a typical current state architecture for a mainframe output-management workload.

![\[Mainframe output process in seven steps.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/f9ad041d-b9f0-4a9a-aba7-40fdc3088b27/images/d170394a-c9b2-43c0-a3d4-677b5f7c2473.png)


The diagram shows the following workflow:

1. Users perform business transactions on a system of engagement (SoE) that’s built on an IBM CICS application written in COBOL.

1. The SoE invokes the mainframe service, which records the business transaction data in a system-of-records (SoR) database such as IBM Db2 for z/OS.

1. The SoR persists the business data from the SoE.

1. The batch job scheduler initiates a batch job to generate print output.

1. The batch job extracts data from the database, formats it according to business requirements, and then generates business output such as billing statements, ID cards, or loan statements. Finally, the batch job routes the output to output management for formatting, publishing, and storage based on business requirements.

1. Output management receives output from the batch job. Output management indexes, arranges, and publishes the output to a specified destination in the output-management system, such as LRS PageCenterX solutions (as demonstrated in this pattern) or CA View.

1. Users can view, search, and retrieve the output.
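
To make step 5 concrete, the following minimal sketch shows the kind of formatting such a batch job performs: raw records become fixed-width statement lines with an ANSI carriage-control character in column 1, a convention used in mainframe print streams. The record fields and layout here are invented for illustration:

```python
# Hypothetical records extracted from the SoR database.
records = [
    {"account": "1000234", "name": "J. DOE", "balance": 1523.75},
    {"account": "1000235", "name": "A. SMITH", "balance": 89.10},
]

def format_statement(recs):
    """Render records as fixed-width print lines with ANSI carriage control."""
    # '1' in column 1 = skip to a new page before printing this line.
    lines = ["1" + "BILLING STATEMENT".center(40)]
    for r in recs:
        # ' ' in column 1 = single-space before printing this line.
        lines.append(f" {r['account']:<10}{r['name']:<20}{r['balance']:>10.2f}")
    return "\n".join(lines)

print(format_statement(records))
```

In the actual workflow, output like this is what the batch job hands to the output-management system for indexing and publishing.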

**Target technology stack**
+ Operating system – Windows Server running on Amazon EC2
+ Compute – Amazon EC2
+ Storage – Amazon Elastic Block Store (Amazon EBS) and Amazon FSx for Windows File Server
+ Programming language – COBOL, JCL, and CICS
+ Database – Amazon RDS
+ Security – AWS Managed Microsoft AD
+ Printing and archiving – LRS printing (VPSX) and archiving (PageCenterX) solution on AWS
+ Mainframe runtime environment – Rocket Enterprise Server

**Target architecture**

The following diagram shows an architecture for a mainframe output-management workload that’s deployed in the AWS Cloud.

![\[Target architecture for batch app and output management in seven steps.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/f9ad041d-b9f0-4a9a-aba7-40fdc3088b27/images/3e25ab03-bf3a-4fea-b5eb-38cea9e50138.png)


The diagram shows the following workflow:

1. The batch job scheduler initiates a batch job to create output, such as billing statements, ID cards, or loan statements.

1. The mainframe batch job ([replatformed to Amazon EC2](https://aws.amazon.com/blogs/apn/demystifying-legacy-migration-options-to-the-aws-cloud/)) uses the Rocket Enterprise Server runtime to extract data from the application database, apply business logic to the data, and format the data. It then sends the data to an output destination by using the [Rocket Software printer exit module](https://www.microfocus.com/documentation/enterprise-developer/ed100/ED-Eclipse/HCOMCMJCLOU020.html) (OpenText Micro Focus documentation).

1. The application database (an SoR that runs on Amazon RDS) persists data for print output.

1. The LRS VPSX/MFI printing solution is deployed on Amazon EC2, and its operational data is stored in Amazon EBS. LRS VPSX/MFI uses the TCP/IP-based LRS/Queue transmission agent to collect output data through the Rocket Software JES Print Exit API.

   LRS VPSX/MFI does data preprocessing, such as EBCDIC to ASCII translation. It also does more complex tasks, including converting mainframe-exclusive data streams such as IBM Advanced Function Presentation (AFP) and Xerox Line Conditioned Data Stream (LCDS) into more common viewing and printing data streams such as Printer Command Language (PCL) and PDF.

   During the maintenance window of LRS PageCenterX, LRS VPSX/MFI persists the output queue and serves as a backup for it. LRS VPSX/MFI connects and sends output to LRS PageCenterX by using the LRS/Queue protocol. LRS/Queue exchanges readiness and completion signals for each job to help ensure that the data transfer completes.

   **Notes:**

   For more information about the print data that the Rocket Software Print Exit passes to LRS/Queue, and about the mainframe batch mechanisms that LRS VPSX/MFI supports, see *Print data capture* in the [Additional information](#modernize-mainframe-output-management-on-aws-by-using-rocket-enterprise-server-and-lrs-pagecenterx-additional) section.

   LRS VPSX/MFI can perform health checks at the printer-fleet level. For more information, see *Printer-fleet health checks* in the [Additional information](#modernize-mainframe-output-management-on-aws-by-using-rocket-enterprise-server-and-lrs-pagecenterx-additional) section of this pattern.

1. The LRS PageCenterX output-management solution is deployed on Amazon EC2, and its operational data is stored in Amazon FSx for Windows File Server. LRS PageCenterX provides a central report-management system that tracks all files imported into it and the users who can access them. Users can view specific file content or search across multiple files for matching criteria.

   The LRS/NetX component is a multi-threaded web application server that provides a common runtime environment for the LRS PageCenterX application and other LRS applications. The LRS/Web Connect component is installed on your web server and provides a connector from the web server to the LRS/NetX web application server.

1. LRS PageCenterX provides storage for file system objects. The operational data of LRS PageCenterX is stored in Amazon FSx for Windows File Server.

1. Output-management authentication and authorization are performed by AWS Managed Microsoft AD with LRS/DIS.
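
The EBCDIC-to-ASCII translation that LRS VPSX/MFI performs in step 4 can be illustrated with Python's built-in `cp037` codec (US EBCDIC); the actual product also converts AFP and LCDS data streams, which this sketch doesn't attempt:

```python
# A print line as it would arrive from the mainframe, encoded in EBCDIC (cp037).
ascii_line = "STATEMENT FOR ACCOUNT 1000234"
ebcdic_bytes = ascii_line.encode("cp037")

# The byte values differ from ASCII: EBCDIC 'S' is 0xE2, ASCII 'S' is 0x53.
assert ebcdic_bytes != ascii_line.encode("ascii")

# Translating back yields the original text.
decoded = ebcdic_bytes.decode("cp037")
print(decoded)  # STATEMENT FOR ACCOUNT 1000234
```

Real mainframe files also carry fixed-length records and packed fields, so production conversion is driven by the record layout, not a simple codec call.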

**Note**  
The target solution typically doesn’t require application changes to accommodate mainframe formatting languages, such as IBM AFP or Xerox LCDS.

**AWS infrastructure architecture**

The following diagram shows a highly available and secure AWS infrastructure architecture for a mainframe output-management workload.

![\[Multi-AZ AWS infrastructure with a workflow in seven steps.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/f9ad041d-b9f0-4a9a-aba7-40fdc3088b27/images/8d8aa995-b576-4ecd-8a7c-5f566740a515.png)


The diagram shows the following workflow:

1. The batch scheduler initiates the batch process and is deployed on Amazon EC2 across multiple [Availability Zones](https://aws.amazon.com/about-aws/global-infrastructure/regions_az/) for high availability (HA).
**Note**  
This pattern doesn’t cover the implementation of the batch scheduler. For more information about implementation, see the software vendor documentation for your scheduler.

1. The mainframe batch job (written in a programming language such as JCL or COBOL) uses core business logic to process and generate print output, such as billing statements, ID cards, and loan statements. The batch job is deployed on Amazon EC2 across two Availability Zones for HA. It uses the Rocket Software Print Exit API to route print output to LRS VPSX/MFI for data preprocessing.

1. The LRS VPSX/MFI print server is deployed on Amazon EC2 across two Availability Zones for HA (active-standby redundant pair). It uses [Amazon EBS](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonEBS.html) as an operational data store. The Network Load Balancer performs a health check on the LRS VPSX/MFI EC2 instances. If an active instance is in an unhealthy state, the load balancer routes traffic to hot standby instances in the other Availability Zone. The print requests are persisted in the LRS Job Queue locally on each of the EC2 instances. If an instance fails, it must be restarted before the LRS services can resume processing print requests.
**Note**  
LRS VPSX/MFI can also perform health checks at the printer-fleet level. For more information, see *Printer-fleet health checks* in the [Additional information](#modernize-mainframe-output-management-on-aws-by-using-rocket-enterprise-server-and-lrs-pagecenterx-additional) section of this pattern.

1. LRS PageCenterX output management is deployed on Amazon EC2 across two Availability Zones for HA (active-standby redundant pair). It uses [Amazon FSx for Windows File Server](https://docs.aws.amazon.com/fsx/latest/WindowsGuide/what-is.html) as an operational data store. The load balancer performs a health check on the LRS PageCenterX EC2 instances. If an active instance is in an unhealthy state, the load balancer routes traffic to the standby instance in the other Availability Zone.

1. A [Network Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/introduction.html) provides a DNS name to integrate the LRS VPSX/MFI Server with LRS PageCenterX.
**Note**  
LRS PageCenterX supports a Layer 4 load balancer.

1. LRS PageCenterX uses Amazon FSx for Windows File Server as an operational data store deployed across two Availability Zones for HA. LRS PageCenterX reads only files in the file share; it doesn't use an external database.

1. [AWS Managed Microsoft AD](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/directory_microsoft_ad.html) is used with LRS/DIS to perform output-management workflow authentication and authorization. For more information, see *Print output authentication and authorization* in the [Additional information](#modernize-mainframe-output-management-on-aws-by-using-rocket-enterprise-server-and-lrs-pagecenterx-additional) section.
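
At the TCP level, the Network Load Balancer health checks in steps 3 and 4 are connection attempts against each instance's listener port. The following minimal sketch mimics that probe (the hostname and port in the commented example are hypothetical):

```python
import socket

def tcp_health_check(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection succeeds, as an NLB TCP health check would."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: probe a hypothetical LRS VPSX/MFI listener.
# tcp_health_check("vpsx-host.example.com", 5500)
```

The real load balancer repeats this probe on an interval and marks the target unhealthy only after several consecutive failures, which you tune in the target group's health-check settings.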

## Tools
<a name="modernize-mainframe-output-management-on-aws-by-using-rocket-enterprise-server-and-lrs-pagecenterx-tools"></a>

**AWS services**
+ [AWS Directory Service for Microsoft Active Directory](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/directory_microsoft_ad.html) enables your directory-aware workloads and AWS resources to use Microsoft Active Directory in the AWS Cloud.
+ [Amazon Elastic Block Store (Amazon EBS)](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonEBS.html) provides block-level storage volumes for use with Amazon Elastic Compute Cloud (Amazon EC2) instances.
+ [Amazon Elastic Compute Cloud (Amazon EC2)](https://docs.aws.amazon.com/ec2/) provides scalable computing capacity in the AWS Cloud. You can launch as many virtual servers as you need and quickly scale them up or down.
+ [Elastic Load Balancing (ELB)](https://docs.aws.amazon.com/elasticloadbalancing/latest/userguide/what-is-load-balancing.html) distributes incoming application or network traffic across multiple targets. For example, you can distribute traffic across Amazon EC2 instances, containers, and IP addresses in one or more Availability Zones. This pattern uses a Network Load Balancer.
+ [Amazon FSx](https://docs.aws.amazon.com/fsx/?id=docs_gateway) provides file systems that support industry-standard connectivity protocols and offer high availability and replication across AWS Regions. This pattern uses Amazon FSx for Windows File Server.
+ [Amazon Relational Database Service (Amazon RDS)](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Welcome.html) helps you set up, operate, and scale a relational database in the AWS Cloud.

**Other tools**
+ [LRS PageCenterX](https://www.lrsoutputmanagement.com/products/content-management/pagecenterx-for-open-systems/) software provides a scalable document and report content management solution that helps users obtain maximum value from information through automated indexing, encryption, and advanced search features.
+ [LRS VPSX/MFI (Micro Focus Interface)](https://www.lrsoutputmanagement.com/products/modernization-products/), codeveloped by LRS and Rocket Software, captures output from a Rocket Software JES spool and reliably delivers it to a specified print destination.
+ LRS/Queue is a TCP/IP-based transmission agent. LRS VPSX/MFI uses LRS/Queue to collect or capture print data through the Rocket Software JES Print Exit programming interface.
+ LRS Directory Integration Server (LRS/DIS) is used for authentication and authorization during the print workflow.
+ [Rocket Enterprise Server](https://www.microfocus.com/documentation/enterprise-developer/ed80/ES-WIN/GUID-F7D8FD6E-BDE0-4169-8D8C-96DDFFF6B495.html) is an application deployment environment for mainframe applications. It provides the runtime environment for mainframe applications that are migrated or created by using any version of Rocket Enterprise Developer.

## Epics
<a name="modernize-mainframe-output-management-on-aws-by-using-rocket-enterprise-server-and-lrs-pagecenterx-epics"></a>

### Set up the Rocket runtime and deploy a mainframe batch application
<a name="set-up-the-rocket-runtime-and-deploy-a-mainframe-batch-application"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Set up the runtime and deploy a demo application. | To set up Rocket Enterprise Server on Amazon EC2 and deploy the Rocket Software BankDemo demonstration application, follow the instructions in the [AWS Mainframe Modernization user guide](https://docs.aws.amazon.com/m2/latest/userguide/mf-runtime-setup.html). The BankDemo application is a mainframe batch application that creates and then initiates print output. | Cloud architect | 

### Set up an LRS print server on Amazon EC2
<a name="set-up-an-lrs-print-server-on-amazon-ec2"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an Amazon EC2 Windows instance. | To launch an Amazon EC2 Windows instance, follow the instructions in [Launch an Amazon EC2 instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/LaunchingAndUsingInstances.html) in the Amazon EC2 documentation. Use the same hostname that you used for your LRS product license. Your instance must meet the following hardware and software requirements for LRS VPSX/MFI: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-output-management-on-aws-by-using-rocket-enterprise-server-and-lrs-pagecenterx.html) The preceding hardware and software requirements are intended for a small printer fleet (around 500–1000 printers). To get the full requirements, consult with your LRS and AWS contacts. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-output-management-on-aws-by-using-rocket-enterprise-server-and-lrs-pagecenterx.html) | Cloud architect | 
| Install LRS VPSX/MFI on the EC2 instance. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-output-management-on-aws-by-using-rocket-enterprise-server-and-lrs-pagecenterx.html) | Cloud architect | 
| Install LRS/Queue. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-output-management-on-aws-by-using-rocket-enterprise-server-and-lrs-pagecenterx.html) | Cloud architect | 
| Install LRS/DIS. | The LRS/DIS product is often included in the LRS VPSX installation. However, if LRS/DIS wasn't installed along with LRS VPSX, use the following steps to install it: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-output-management-on-aws-by-using-rocket-enterprise-server-and-lrs-pagecenterx.html) | Cloud architect | 
| Create a target group. | Create a target group by following the instructions in [Create a target group for your Network Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/create-target-group.html). When you create the target group, register the LRS VPSX/MFI EC2 instance as the target:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-output-management-on-aws-by-using-rocket-enterprise-server-and-lrs-pagecenterx.html) | Cloud architect | 
| Create a Network Load Balancer. | To create the Network Load Balancer, follow the instructions in the [Elastic Load Balancing documentation](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/create-network-load-balancer.html). Your Network Load Balancer routes traffic from Rocket Enterprise Server to the LRS VPSX/MFI EC2 instance. When you create the Network Load Balancer, choose the following values on the **Listeners and Routing** page: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-output-management-on-aws-by-using-rocket-enterprise-server-and-lrs-pagecenterx.html) | Cloud architect | 

### Integrate Rocket Enterprise Server with LRS/Queue and LRS VPSX/MFI
<a name="integrate-rocket-enterprise-server-with-lrs-queue-and-lrs-vpsx-mfi"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Configure Rocket Enterprise Server for LRS/Queue integration. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-output-management-on-aws-by-using-rocket-enterprise-server-and-lrs-pagecenterx.html) | Cloud architect | 
| Configure Rocket Enterprise Server for LRS VPSX/MFI integration. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-output-management-on-aws-by-using-rocket-enterprise-server-and-lrs-pagecenterx.html) | Cloud architect | 

### Set up the print queue and the print users
<a name="set-up-the-print-queue-and-the-print-users"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Associate the Rocket Software Print Exit module with the Rocket Enterprise Server batch printer Server Execution Process. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-output-management-on-aws-by-using-rocket-enterprise-server-and-lrs-pagecenterx.html) | Cloud architect | 
| Create a print output queue in LRS VPSX/MFI and integrate it with LRS PageCenterX. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-output-management-on-aws-by-using-rocket-enterprise-server-and-lrs-pagecenterx.html) | Cloud architect | 
| Create a print user in LRS VPSX/MFI. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-output-management-on-aws-by-using-rocket-enterprise-server-and-lrs-pagecenterx.html) | Cloud architect | 

### Set up an LRS PageCenterX server on Amazon EC2
<a name="set-up-an-lrs-pagecenterx-server-on-amazon-ec2"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an Amazon EC2 Windows instance. | Launch an Amazon EC2 Windows instance by following the instructions from [Step 1: Launch an instance](https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/EC2_GetStarted.html#ec2-launch-instance) in the Amazon EC2 documentation. Use the same hostname that you used for your LRS product license. Your instance must meet the following hardware and software requirements for LRS PageCenterX: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-output-management-on-aws-by-using-rocket-enterprise-server-and-lrs-pagecenterx.html) The preceding hardware and software requirements are intended for a small printer fleet (around 500–1000 printers). To get the full requirements, consult with your LRS and AWS contacts. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-output-management-on-aws-by-using-rocket-enterprise-server-and-lrs-pagecenterx.html) | Cloud architect | 
| Install LRS PageCenterX on the EC2 instance. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-output-management-on-aws-by-using-rocket-enterprise-server-and-lrs-pagecenterx.html) | Cloud architect | 
| Install LRS/DIS. | The LRS/DIS product is often included in the LRS VPSX installation. However, if LRS/DIS wasn't installed along with LRS VPSX, use the following steps to install it: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-output-management-on-aws-by-using-rocket-enterprise-server-and-lrs-pagecenterx.html) | Cloud architect | 
| Create a target group. | Create a target group by following the instructions in [Create a target group for your Network Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/create-target-group.html). When you create the target group, register the LRS PageCenterX EC2 instance as the target:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-output-management-on-aws-by-using-rocket-enterprise-server-and-lrs-pagecenterx.html) | Cloud architect | 
| Create a Network Load Balancer. | To create the Network Load Balancer, follow the instructions in the [Elastic Load Balancing documentation](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/create-network-load-balancer.html). Your Network Load Balancer routes traffic from LRS VPSX/MFI to the LRS PageCenterX EC2 instance. When you create the Network Load Balancer, choose the following values on the **Listeners and Routing** page: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-output-management-on-aws-by-using-rocket-enterprise-server-and-lrs-pagecenterx.html) | Cloud architect | 

### Set up output-management features in LRS PageCenterX
<a name="set-up-output-management-features-in-lrs-pagecenterx"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Enable the Import function in LRS PageCenterX. | You can use the LRS PageCenterX Import function to recognize output that lands on LRS PageCenterX by criteria such as job name or form ID, and then route it to specific folders in LRS PageCenterX. To enable the Import function, do the following: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-output-management-on-aws-by-using-rocket-enterprise-server-and-lrs-pagecenterx.html) | Cloud architect | 
| Configure the document retention policy. | LRS PageCenterX uses a document retention policy to determine how long to keep a document. To configure the document retention policy, do the following: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-output-management-on-aws-by-using-rocket-enterprise-server-and-lrs-pagecenterx.html) | Cloud architect | 
| Create a rule to route the output document to a specific folder in LRS PageCenterX. | In LRS PageCenterX, **Destination** determines the folder path where output is sent when that destination is invoked by a **Report Definition**. For this example, create a folder based on the **Form ID** folder in the report definition, and save the output to that folder. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-output-management-on-aws-by-using-rocket-enterprise-server-and-lrs-pagecenterx.html) | Cloud architect | 
| Create a report definition. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-output-management-on-aws-by-using-rocket-enterprise-server-and-lrs-pagecenterx.html) | Cloud architect | 
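
Retention decisions like the one configured above come down to date arithmetic: a document imported on a given date expires after the retention period elapses. The sketch below only illustrates that arithmetic; LRS PageCenterX's own retention engine is configured through its interface, and the field names here are invented:

```python
from datetime import date, timedelta

def is_expired(import_date: date, retention_days: int, today: date) -> bool:
    """A document expires once retention_days have elapsed since import."""
    return today > import_date + timedelta(days=retention_days)

# A statement imported on 2025-01-01 with 90-day retention:
print(is_expired(date(2025, 1, 1), 90, date(2025, 3, 1)))  # False: within retention
print(is_expired(date(2025, 1, 1), 90, date(2025, 6, 1)))  # True: past retention
```

Choose retention periods per report type to balance compliance requirements against FSx for Windows File Server storage costs.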

### Set up authentication and authorization for output management
<a name="set-up-authentication-and-authorization-for-output-management"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an AWS Managed Microsoft AD domain with users and groups. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-output-management-on-aws-by-using-rocket-enterprise-server-and-lrs-pagecenterx.html) | Cloud architect | 
| Join the EC2 instances to an AWS Managed Microsoft AD domain. | Join the LRS VPSX/MFI and LRS PageCenterX EC2 instances to your AWS Managed Microsoft AD domain [automatically](https://aws.amazon.com/premiumsupport/knowledge-center/ec2-systems-manager-dx-domain/) (AWS Knowledge Center documentation) or [manually](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/join_windows_instance.html) (AWS Directory Service documentation). | Cloud architect | 
| Configure and integrate LRS/DIS with AWS Managed Microsoft AD for the LRS PageCenterX EC2 instance. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-output-management-on-aws-by-using-rocket-enterprise-server-and-lrs-pagecenterx.html) | Cloud architect | 
| Configure an Import group to import output from LRS VPSX to LRS PageCenterX. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-output-management-on-aws-by-using-rocket-enterprise-server-and-lrs-pagecenterx.html) | Cloud architect | 
| Add a security rule to the Import group. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-output-management-on-aws-by-using-rocket-enterprise-server-and-lrs-pagecenterx.html) | Cloud architect | 
| Create a user in LRS PageCenterX to perform output import from LRS VPSX/MFI.  | When you create a user in LRS PageCenterX to perform output import, the username should be the same as the **VPSX ID** of the print output queue in LRS VPSX/MFI. In this example, the VPSX ID is **VPS1**.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-output-management-on-aws-by-using-rocket-enterprise-server-and-lrs-pagecenterx.html) | Cloud architect | 
| Add the LRS PageCenterX Import user to the Import only group. | To provide the necessary permissions for document import from LRS VPSX to LRS PageCenterX, do the following: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-output-management-on-aws-by-using-rocket-enterprise-server-and-lrs-pagecenterx.html) | Cloud architect | 
| Configure LRS/DIS with AWS Managed Microsoft AD for the LRS VPSX/MFI EC2 instance. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-output-management-on-aws-by-using-rocket-enterprise-server-and-lrs-pagecenterx.html) | Cloud architect | 

### Configure Amazon FSx for Windows File Server as the operational data store for LRS PageCenterX
<a name="configure-amazon-fsx-for-windows-file-server-as-the-operational-data-store-for-lrs-pagecenterx"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a file system for LRS PageCenterX. | To use Amazon FSx for Windows File Server as an operational data store for LRS PageCenterX in a Multi-AZ environment, follow the instructions in [Step 1: Create your file system](https://docs.aws.amazon.com/fsx/latest/WindowsGuide/getting-started-step1.html). | Cloud architect | 
| Map the file share to the LRS PageCenterX EC2 instance. | To map the file share created in previous step to the LRS PageCenterX EC2 instance, follow the instructions in [Step 2: Map your file share to an EC2 instance running Windows Server](https://docs.aws.amazon.com/fsx/latest/WindowsGuide/getting-started-step2.html). | Cloud architect | 
| Map LRS PageCenterX Control Directory and Master Folder Directory to the Amazon FSx network share drive. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-output-management-on-aws-by-using-rocket-enterprise-server-and-lrs-pagecenterx.html) | Cloud architect | 
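The two directory mappings in the last task can be sanity-checked by spelling out the UNC paths on the FSx file share. The following sketch is illustrative only: the file-system DNS name, the share name, and the folder layout are placeholders, not values prescribed by this pattern.

```python
# Hypothetical layout check: builds the UNC paths for the LRS PageCenterX
# Control Directory and Master Folder Directory on an Amazon FSx for Windows
# File Server share. The DNS name, share name, and folder names below are
# placeholders; substitute the values from your own FSx file system.
FSX_DNS = "amznfsxexample.corp.example.com"  # placeholder FSx DNS name
SHARE = "share"                              # placeholder share name

def unc_path(*parts: str) -> str:
    """Join path components into a Windows UNC path on the FSx share."""
    return "\\\\" + "\\".join((FSX_DNS, SHARE) + parts)

control_dir = unc_path("PageCenterX", "Control")
master_dir = unc_path("PageCenterX", "Master")
print(control_dir)  # \\amznfsxexample.corp.example.com\share\PageCenterX\Control
print(master_dir)   # \\amznfsxexample.corp.example.com\share\PageCenterX\Master
```

Because the share lives on a Multi-AZ FSx file system, both paths stay valid across an Availability Zone failover without reconfiguring LRS PageCenterX.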

### Test an output-management workflow
<a name="test-an-output-management-workflow"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Initiate a batch print request from the Rocket Software BankDemo app. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-output-management-on-aws-by-using-rocket-enterprise-server-and-lrs-pagecenterx.html) | Test engineer | 
| Check the print output in LRS PageCenterX. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-output-management-on-aws-by-using-rocket-enterprise-server-and-lrs-pagecenterx.html) | Test engineer | 

## Related resources
<a name="modernize-mainframe-output-management-on-aws-by-using-rocket-enterprise-server-and-lrs-pagecenterx-resources"></a>
+ [LRS](https://www.lrsoutputmanagement.com/products/modernization-products)
+ [Advanced Function Presentation data stream](https://www.ibm.com/docs/en/i/7.4?topic=streams-advanced-function-presentation-data-stream) (IBM documentation)
+ [Line Conditioned Data Stream (LCDS)](https://www.compart.com/en/lcds) (Compart documentation)
+ [Empowering Enterprise Mainframe Workloads on AWS with Micro Focus](https://aws.amazon.com/blogs/apn/empowering-enterprise-grade-mainframe-workloads-on-aws-with-micro-focus/) (blog post)
+ [Modernize your mainframe online printing workloads on AWS](https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-online-printing-workloads-on-aws-by-using-micro-focus-enterprise-server-and-lrs-vpsx-mfi.html) (AWS Prescriptive Guidance)
+ [Modernize your mainframe batch printing workloads on AWS](https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-batch-printing-workloads-on-aws-by-using-micro-focus-enterprise-server-and-lrs-vpsx-mfi.html) (AWS Prescriptive Guidance)

## Additional information
<a name="modernize-mainframe-output-management-on-aws-by-using-rocket-enterprise-server-and-lrs-pagecenterx-additional"></a>

**Considerations**

During your modernization journey, you might consider a wide variety of configurations for mainframe batch and online processes and the output they generate. The mainframe platform has been customized by every customer and vendor that uses it with particular requirements that directly affect print. For example, your current platform might incorporate the IBM AFP data stream or Xerox LCDS into the current workflow. Additionally, [mainframe carriage control characters](https://www.ibm.com/docs/en/cmofz/10.5.0?topic=tips-ansi-machine-carriage-controls) and [channel command words](https://www.ibm.com/docs/en/zos/3.1.0?topic=devices-channel-command-words) can affect the look of the printed page and might need special handling. As part of the modernization planning process, we recommend that you assess and understand the configurations in your specific print environment.  

**Print data capture**

Rocket Software Print Exit passes the necessary information for LRS VPSX/MFI to effectively process the spool file. The information consists of fields passed in the relevant control blocks, such as the following:
+ JOBNAME
+ OWNER (USERID)
+ DESTINATION
+ FORM
+ FILENAME
+ WRITER
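The control-block fields above can be pictured as a small record that a downstream queue agent consumes. The following sketch is purely illustrative: the record layout and method names are hypothetical, not the actual Print Exit or LRS/Queue interface.

```python
# Illustrative sketch only: models the control-block fields that Print Exit
# passes downstream for spool-file processing. The field names mirror the
# list above; the class and its methods are hypothetical, not the real
# LRS/Queue or Rocket Software Print Exit API.
from dataclasses import dataclass

@dataclass
class PrintExitRecord:
    jobname: str
    owner: str        # USERID
    destination: str
    form: str
    filename: str
    writer: str

    def to_queue_args(self) -> dict:
        """Flatten the record into key-value pairs a queue agent could consume."""
        return {
            "JOBNAME": self.jobname,
            "OWNER": self.owner,
            "DESTINATION": self.destination,
            "FORM": self.form,
            "FILENAME": self.filename,
            "WRITER": self.writer,
        }

record = PrintExitRecord(
    jobname="BANKPRT1", owner="VPS1", destination="PRT001",
    form="STMT", filename="ACCTSTMT", writer="LRSQ",
)
print(record.to_queue_args()["DESTINATION"])  # PRT001
```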

LRS VPSX/MFI supports the following mainframe batch mechanisms for capturing data from Rocket Enterprise Server:
+ BATCH COBOL print/spool processing using standard z/OS JCL SYSOUT DD/OUTPUT statements.
+ BATCH COBOL print/spool processing using standard z/OS JCL CA-SPOOL SUBSYS DD statements.
+ IMS/COBOL print/spool processing using the CBLTDLI interface.

For a full list of supported methods and programming examples, see the LRS documentation that’s included with your product license.

**Printer-fleet health checks**

LRS VPSX/MFI (LRS LoadX) can perform deep-dive health checks, including device management and operational optimization. Device management can detect a failure in a printer device and route the print request to a healthy printer. For more information about deep-dive health checks for printer fleets, see the LRS documentation that’s included with your product license.

**Print authentication and authorization**

LRS/DIS enables LRS applications to authenticate user IDs and passwords by using Microsoft Active Directory or a Lightweight Directory Access Protocol (LDAP) server. In addition to basic print authorization, LRS/DIS can also apply granular-level print security controls in the following use cases:
+ Manage who can browse a print job.
+ Manage the level at which users can browse other users' jobs.
+ Manage operational tasks, such as command-level security for hold or release, purge, modify, copy, and reroute. Security can be set up by either user ID or group, similar to an Active Directory security group or an LDAP group.
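The group-based command security described above can be illustrated with a few lines of code. This is a rough model of the idea only; the group names, command names, and rule format are hypothetical and do not reflect LRS/DIS configuration syntax.

```python
# Hypothetical sketch of command-level print security: each directory group
# (Active Directory or LDAP) grants a set of operator commands, and a user's
# effective permissions are the union of the grants from all their groups.
# Group and command names below are illustrative only.
GROUP_COMMANDS = {
    "PrintOperators": {"hold", "release", "purge", "modify", "copy", "reroute"},
    "PrintUsers": {"hold", "release"},
    "ImportOnly": set(),  # may import documents but run no operator commands
}

def allowed_commands(user_groups):
    """Return the union of command grants across all of the user's groups."""
    allowed = set()
    for group in user_groups:
        allowed |= GROUP_COMMANDS.get(group, set())
    return allowed

def can_run(user_groups, command):
    return command in allowed_commands(user_groups)

print(can_run(["PrintUsers"], "purge"))                    # False
print(can_run(["PrintUsers", "PrintOperators"], "purge"))  # True
```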

## Attachments
<a name="attachments-f9ad041d-b9f0-4a9a-aba7-40fdc3088b27"></a>

To access additional content that is associated with this document, unzip the following file: [attachment.zip](samples/p-attach/f9ad041d-b9f0-4a9a-aba7-40fdc3088b27/attachments/attachment.zip)

# Modernize mainframe batch printing workloads on AWS by using Rocket Enterprise Server and LRS VPSX/MFI
<a name="modernize-mainframe-batch-printing-workloads-on-aws-by-using-rocket-enterprise-server-and-lrs-vpsx-mfi"></a>

*Shubham Roy and Kevin Yung, Amazon Web Services*

*Abraham Rondon, Micro Focus*

*Guy Tucker, Levi, Ray & Shoup, Inc.*

## Summary
<a name="modernize-mainframe-batch-printing-workloads-on-aws-by-using-rocket-enterprise-server-and-lrs-vpsx-mfi-summary"></a>

This pattern shows you how to modernize your business-critical mainframe batch printing workloads on the Amazon Web Services (AWS) Cloud by using Rocket Enterprise Server as a runtime for a modernized mainframe application and LRS VPSX/MFI (Micro Focus Interface) as a print server. The pattern is based on the [replatform](https://aws.amazon.com/blogs/apn/demystifying-legacy-migration-options-to-the-aws-cloud/) mainframe modernization approach. In this approach, you migrate your mainframe batch jobs to Amazon Elastic Compute Cloud (Amazon EC2) and migrate your mainframe database, such as IBM DB2 for z/OS, to Amazon Relational Database Service (Amazon RDS). The authentication and authorization for the modernized print workflow is performed by AWS Directory Service for Microsoft Active Directory, also known as AWS Managed Microsoft AD. The LRS Directory Information Server (LRS/DIS) is integrated with AWS Managed Microsoft AD. By modernizing your batch printing workloads, you can reduce IT infrastructure costs, mitigate the technical debt of maintaining legacy systems, remove data silos, increase agility and efficiency with a DevOps model, and take advantage of on-demand resources and automation in the AWS Cloud.

## Prerequisites and limitations
<a name="modernize-mainframe-batch-printing-workloads-on-aws-by-using-rocket-enterprise-server-and-lrs-vpsx-mfi-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ A mainframe printing or output management workload
+ Basic knowledge of how to rebuild and deliver a mainframe application that runs on Rocket Enterprise Server (For more information, see the [Rocket Enterprise Server](https://www.rocketsoftware.com/sites/default/files/resource_files/enterprise-server.pdf) data sheet in the Rocket documentation.)
+ Basic knowledge of [LRS cloud printing](https://www.lrsoutputmanagement.com/solutions/solutions-cloud-printing/) solutions and concepts
+ Rocket Enterprise Server software and license (For more information, contact [Rocket sales](https://www.rocketsoftware.com/en-us/products/enterprise-suite/request-contact).)
+ LRS VPSX/MFI, LRS/Queue, and LRS/DIS software and licenses (For more information, contact [LRS sales](https://www.lrsoutputmanagement.com/about-us/contact-us/).)

**Note**  
For more information about configuration considerations for mainframe batch printing workloads, see *Considerations* in the [Additional information](#modernize-mainframe-batch-printing-workloads-on-aws-by-using-rocket-enterprise-server-and-lrs-vpsx-mfi-additional) section of this pattern.

**Product versions**
+ [Rocket Enterprise Server](https://www.microfocus.com/en-us/products/enterprise-server/overview?utm_campaign=7018e000000PgfnAAC&utm_content=SCH-BR-AMC-AppM-AMS&gclid=EAIaIQobChMIoZCQ6fvS9wIVxQN9Ch2MzAOlEAAYASAAEgKx2fD_BwE) 6.0 (product update 7)
+ [LRS VPSX/MFI](https://www.lrsoutputmanagement.com/products/vpsx-enterprise/) V1R3 or higher

## Architecture
<a name="modernize-mainframe-batch-printing-workloads-on-aws-by-using-rocket-enterprise-server-and-lrs-vpsx-mfi-architecture"></a>

**Source technology stack**
+ Operating system – IBM z/OS
+ Programming language – Common Business-Oriented Language (COBOL), job control language (JCL), and Customer Information Control System (CICS)
+ Database – IBM DB2 for z/OS and Virtual Storage Access Method (VSAM)
+ Security – Resource Access Control Facility (RACF), CA Top Secret for z/OS, and Access Control Facility 2 (ACF2)
+ Printing and output management – IBM mainframe z/OS printing products (IBM Tivoli Output Manager for z/OS, LRS, and CA View)

**Target technology stack**
+ Operating system – Microsoft Windows Server running on Amazon EC2
+ Compute – Amazon EC2
+ Programming language – COBOL, JCL, and CICS
+ Database – Amazon RDS
+ Security – AWS Managed Microsoft AD
+ Printing and output management – LRS printing solution on AWS
+ Mainframe runtime environment – Rocket Enterprise Server

**Source architecture**

The following diagram shows a typical current-state architecture for a mainframe batch printing workload:

![\[From user to mainframe service, Db2 for z/OS, job scheduler, batch job, and output in six steps.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/36de7312-4860-4702-a325-c01cf74c4f33/images/83d82435-0aa6-4eb8-a5c8-0920102afb09.png)


The diagram shows the following workflow:

1. Users perform business transactions on a system of engagement (SoE) that’s built on an IBM CICS application written in COBOL.

1. The SoE invokes the mainframe service, which records the business transaction data in a system-of-records (SoR) database such as IBM DB2 for z/OS.

1. The SoR persists the business data from the SoE.

1. The batch job scheduler initiates a batch job to generate print output.

1. The batch job extracts data from the database, formats the data according to business requirements, and then generates business output such as billing statements, ID cards, or loan statements. Finally, the batch job routes the output to printing output management for processing and output delivery, based on the business requirements.

1. Printing output management receives print output from the batch job, and then delivers that output to a specified destination, such as email, a file share that uses secure FTP, a physical printer that uses LRS printing solutions (as demonstrated in this pattern), or IBM Tivoli.

**Target architecture**

The following diagram shows an architecture for a mainframe batch printing workload that’s deployed in the AWS Cloud:

![\[Batch application on AWS with scheduler, Rocket Enterprise Server, and database in four steps.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/36de7312-4860-4702-a325-c01cf74c4f33/images/8cdd4ef7-3cbd-476a-9aa4-c1c0924f17c6.png)


The diagram shows the following workflow:

1. The batch job scheduler initiates a batch job to create print output, such as billing statements, ID cards, or loan statements.

1. The mainframe batch job ([replatformed to Amazon EC2](https://aws.amazon.com/blogs/apn/demystifying-legacy-migration-options-to-the-aws-cloud/)) uses the Rocket Enterprise Server runtime to extract data from the application database, apply business logic to the data, format the data, and then send the data to a print destination by using [Rocket Software Print Exit](https://www.microfocus.com/documentation/enterprise-developer/ed70/ED-Eclipse/HCOMCMJCLOU020.html) (Micro Focus documentation).

1. The application database (an SoR that runs on Amazon RDS) persists data for print output.

1. The LRS VPSX/MFI printing solution is deployed on Amazon EC2 and its operational data is stored in Amazon Elastic Block Store (Amazon EBS). LRS VPSX/MFI uses the TCP/IP-based LRS/Queue transmission agent to collect print data through the Rocket Software JES Print Exit API and deliver the data to a specified printer destination.

**Note**  
The target solution typically doesn’t require application changes to accommodate mainframe formatting languages, such as IBM Advanced Function Presentation (AFP) or Xerox Line Condition Data Stream (LCDS). For more information about using Rocket Software for mainframe application migration and modernization on AWS, see the [Empowering Enterprise Mainframe Workloads on AWS with Micro Focus](https://aws.amazon.com/blogs/apn/empowering-enterprise-grade-mainframe-workloads-on-aws-with-micro-focus/) blog post.

**AWS infrastructure architecture**

The following diagram shows a highly available and secure AWS infrastructure architecture for a mainframe batch printing workload:

![\[Multi-AZ deployment on AWS with Rocket Software and LRS components in seven steps.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/36de7312-4860-4702-a325-c01cf74c4f33/images/287dd143-338c-4d83-a9b2-8e39214a81b0.png)


The diagram shows the following workflow:

1. The batch scheduler initiates the batch process and is deployed on Amazon EC2 across multiple [Availability Zones](https://aws.amazon.com/about-aws/global-infrastructure/regions_az/) for high availability (HA). 
**Note**  
This pattern doesn’t cover the implementation of the batch scheduler. For more information about implementation, see the software vendor documentation for your scheduler.

1. The mainframe batch job (written in a language such as JCL or COBOL) uses core business logic to process and generate print output, such as billing statements, ID cards, and loan statements. The job is deployed on Amazon EC2 across two Availability Zones for HA and uses Rocket Software Print Exit to route print output to LRS VPSX/MFI for end-user printing.

1. LRS VPSX/MFI uses a TCP/IP-based LRS/Queue transmission agent to collect or capture print data from the Rocket Software JES Print Exit programming interface. Print Exit passes the necessary information to enable LRS VPSX/MFI to effectively process the spool file and dynamically build LRS/Queue commands. The commands are then run using a standard built-in function from Rocket Software. 
**Note**  
For more information on print data passed from Rocket Software Print Exit to LRS/Queue and LRS VPSX/MFI supported mainframe batch mechanisms, see *Print data capture* in the [Additional information](#modernize-mainframe-batch-printing-workloads-on-aws-by-using-rocket-enterprise-server-and-lrs-vpsx-mfi-additional) section of this pattern.

1. A [Network Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/introduction.html) provides a DNS name to integrate Rocket Enterprise Server with LRS VPSX/MFI. LRS VPSX/MFI supports a Layer 4 load balancer. The Network Load Balancer also performs a basic health check on LRS VPSX/MFI and routes traffic to the registered targets that are healthy.

1. The LRS VPSX/MFI print server is deployed on Amazon EC2 across two Availability Zones for HA and uses [Amazon EBS](https://docs.aws.amazon.com/ebs/latest/userguide/what-is-ebs.html) as an operational data store. LRS VPSX/MFI supports both active-active and active-passive service modes. This architecture uses multiple Availability Zones in an active-passive pair, with an active instance and a hot standby. The Network Load Balancer performs a health check on the LRS VPSX/MFI EC2 instances and routes traffic to the hot standby instance in the other Availability Zone if the active instance is unhealthy. Print requests are persisted locally in the LRS Job Queue on each EC2 instance. To recover, a failed instance must be restarted so that the LRS services can resume processing the print requests. 
**Note**  
LRS VPSX/MFI can also perform health checks at the printer fleet level. For more information, see *Printer fleet health checks* in the [Additional information](#modernize-mainframe-batch-printing-workloads-on-aws-by-using-rocket-enterprise-server-and-lrs-vpsx-mfi-additional) section of this pattern.

1. [AWS Managed Microsoft AD](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/directory_microsoft_ad.html) integrates with LRS/DIS to perform print workflow authentication and authorization. For more information, see *Print authentication and authorization* in the [Additional information](#modernize-mainframe-batch-printing-workloads-on-aws-by-using-rocket-enterprise-server-and-lrs-vpsx-mfi-additional) section of this pattern.

1. LRS VPSX/MFI uses Amazon EBS for block storage. You can back up Amazon EBS data from active EC2 instances to Amazon S3 as point-in-time snapshots and restore them to hot standby EBS volumes. To automate the creation, retention, and deletion of Amazon EBS volume snapshots, you can use [Amazon Data Lifecycle Manager](https://aws.amazon.com/blogs/aws/new-lifecycle-management-for-amazon-ebs-snapshots/) to set the frequency of automated snapshots and restore them based on your [RTO/RPO requirements](https://docs.aws.amazon.com/whitepapers/latest/disaster-recovery-workloads-on-aws/disaster-recovery-options-in-the-cloud.html).
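The active-passive behavior described in steps 5 and 7 can be modeled with a few lines of code. This is a simplified stand-in for what the Network Load Balancer does, not AWS behavior you configure directly; instance names and health states are illustrative.

```python
# Simplified model of the active-passive routing described above: the load
# balancer health-checks both targets and sends traffic to the active
# instance, falling back to the hot standby in the other Availability Zone
# when the active instance is unhealthy. Names and states are illustrative.

def route(targets):
    """Return the first healthy target, preferring the active instance.

    `targets` is an ordered list of (name, healthy) pairs, active first.
    Raises RuntimeError when no target is healthy; in that case print
    requests stay queued locally until an instance is restarted.
    """
    for name, healthy in targets:
        if healthy:
            return name
    raise RuntimeError("no healthy LRS VPSX/MFI target; restart an instance")

# Normal operation: traffic goes to the active instance in AZ 1.
print(route([("lrs-az1-active", True), ("lrs-az2-standby", True)]))   # lrs-az1-active
# AZ 1 fails its health check: traffic fails over to the hot standby.
print(route([("lrs-az1-active", False), ("lrs-az2-standby", True)]))  # lrs-az2-standby
```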

## Tools
<a name="modernize-mainframe-batch-printing-workloads-on-aws-by-using-rocket-enterprise-server-and-lrs-vpsx-mfi-tools"></a>

**AWS services**
+ [Amazon Elastic Block Store (Amazon EBS)](https://docs.aws.amazon.com/ebs/latest/userguide/what-is-ebs.html) provides block level storage volumes for use with EC2 instances. EBS volumes behave like raw, unformatted block devices. You can mount these volumes as devices on your instances.
+ [Amazon Elastic Compute Cloud (Amazon EC2)](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html) provides scalable computing capacity in the AWS Cloud. You can use Amazon EC2 to launch as many or as few virtual servers as you need, and you can scale out or scale in.
+ [Amazon Relational Database Service (Amazon RDS)](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Welcome.html) is a web service that makes it easier to set up, operate, and scale a relational database in the AWS Cloud. It provides cost-efficient, resizable capacity for a relational database and manages common database administration tasks.
+ [AWS Directory Service for Microsoft Active Directory](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/directory_microsoft_ad.html), also known as AWS Managed Microsoft AD, enables your directory-aware workloads and AWS resources to use Microsoft Active Directory in the AWS Cloud.

**Other tools**
+ [LRS VPSX/MFI (Micro Focus Interface)](https://www.lrsoutputmanagement.com/products/vpsx-enterprise/), co-developed by LRS and Rocket Software, captures output from a Rocket Enterprise Server JES spool and reliably delivers it to a specified print destination.
+ LRS Directory Information Server (LRS/DIS) is used for authentication and authorization during the print workflow.
+ TCP/IP-based LRS/Queue transmission agent is used by LRS VPSX/MFI to collect or capture print data through the Rocket Software JES Print Exit programming interface.
+ [Rocket Enterprise Server](https://www.microfocus.com/documentation/enterprise-developer/ed60/ES-WIN/GUID-A2F23243-962B-440A-A071-480082DF47E7.html) is an application deployment environment for mainframe applications. It provides the execution environment for mainframe applications that are migrated or created by using any version of Rocket Software Enterprise Developer.

## Epics
<a name="modernize-mainframe-batch-printing-workloads-on-aws-by-using-rocket-enterprise-server-and-lrs-vpsx-mfi-epics"></a>

### Set up Rocket Enterprise Server on Amazon EC2 and deploy a mainframe batch application
<a name="set-up-rocket-enterprise-server-on-amazon-ec2-and-deploy-a-mainframe-batch-application"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Set up Rocket Enterprise Server and deploy a demo application. | Set up Rocket Enterprise Server on Amazon EC2, and then deploy the Rocket Software BankDemo demonstration application on Amazon EC2. The BankDemo application is a mainframe batch application that creates and then initiates print output. | Cloud architect | 

### Set up an LRS print server on Amazon EC2
<a name="set-up-an-lrs-print-server-on-amazon-ec2"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Get an LRS product license for printing. | To get an LRS product license for LRS VPSX/MFI, LRS/Queue, and LRS/DIS, contact the [LRS Output Management team](https://www.lrsoutputmanagement.com/about-us/contact-us/). You must provide the host names of the EC2 instances where the LRS products will be installed. | Build lead | 
| Create an Amazon EC2 Windows instance to install LRS VPSX/MFI. | Launch an Amazon EC2 Windows instance by following the instructions from [Launch an Amazon EC2 instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/LaunchingAndUsingInstances.html) in the Amazon EC2 documentation. Your instance must meet the following hardware and software requirements for LRS VPSX/MFI: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-batch-printing-workloads-on-aws-by-using-rocket-enterprise-server-and-lrs-vpsx-mfi.html) The preceding hardware and software requirements are intended for a small printer fleet (around 500–1000 printers). To get the full requirements, consult with your LRS and AWS contacts. When you create your Windows instance, do the following: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-batch-printing-workloads-on-aws-by-using-rocket-enterprise-server-and-lrs-vpsx-mfi.html) | Cloud architect | 
| Install LRS VPSX/MFI on the EC2 instance. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-batch-printing-workloads-on-aws-by-using-rocket-enterprise-server-and-lrs-vpsx-mfi.html) | Cloud architect | 
| Install LRS/Queue. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-batch-printing-workloads-on-aws-by-using-rocket-enterprise-server-and-lrs-vpsx-mfi.html) | Cloud architect | 
| Install LRS/DIS. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-batch-printing-workloads-on-aws-by-using-rocket-enterprise-server-and-lrs-vpsx-mfi.html) | Cloud architect | 
| Create a target group and register LRS VPSX/MFI EC2 as the target. | Create a target group by following the instructions from [Create a target group for your Network Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/create-target-group.html) in the Elastic Load Balancing documentation.When you create the target group, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-batch-printing-workloads-on-aws-by-using-rocket-enterprise-server-and-lrs-vpsx-mfi.html) | Cloud architect | 
| Create a Network Load Balancer. | Follow the instructions from [Create a Network Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/create-network-load-balancer.html) in the Elastic Load Balancing documentation. Your Network Load Balancer routes traffic from Rocket Enterprise Server to LRS VPSX/MFI EC2.When you create the Network Load Balancer, do the following on the **Listeners and Routing** page:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-batch-printing-workloads-on-aws-by-using-rocket-enterprise-server-and-lrs-vpsx-mfi.html) | Cloud architect | 

### Integrate Rocket Enterprise Server with LRS VPSX/MFI and LRS/Queue
<a name="integrate-rocket-enterprise-server-with-lrs-vpsx-mfi-and-lrs-queue"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Configure Rocket Enterprise Server for LRS/Queue integration. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-batch-printing-workloads-on-aws-by-using-rocket-enterprise-server-and-lrs-vpsx-mfi.html) LRS currently supports a maximum of 50 characters for DNS names, but this limit is subject to change. If your DNS name is longer than 50 characters, you can use the IP address of the Network Load Balancer instead. | Cloud architect | 
| Configure Rocket Enterprise Server for LRS VPSX/MFI integration. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-batch-printing-workloads-on-aws-by-using-rocket-enterprise-server-and-lrs-vpsx-mfi.html) | Cloud architect | 
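The 50-character DNS name limit noted in the first task lends itself to a quick preflight check. The limit value comes from the task description above; the helper below is an illustrative sketch, not part of any LRS or AWS tooling, and the sample names are placeholders.

```python
# Illustrative preflight check for the LRS DNS-name length limit described
# above: if the Network Load Balancer DNS name exceeds 50 characters, fall
# back to the load balancer's IP address. The limit is subject to change;
# confirm the current value with your LRS contacts. Sample values are
# placeholders.
LRS_DNS_NAME_LIMIT = 50

def printer_endpoint(nlb_dns_name: str, nlb_ip_address: str) -> str:
    """Prefer the DNS name; use the IP address when the name is too long."""
    if len(nlb_dns_name) <= LRS_DNS_NAME_LIMIT:
        return nlb_dns_name
    return nlb_ip_address

short = "lrs-nlb-1234.elb.us-east-1.amazonaws.com"  # 40 characters, fits
long_ = "lrs-print-nlb-internal-abcdef123456.elb.us-east-1.amazonaws.com"
print(printer_endpoint(short, "10.0.1.25"))  # prints the DNS name
print(printer_endpoint(long_, "10.0.1.25"))  # 10.0.1.25
```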

### Set up printers and print users in Rocket Enterprise Server and LRS VPSX/MFI
<a name="set-up-printers-and-print-users-in-rocket-enterprise-server-and-lrs-vpsx-mfi"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Associate the Rocket Software Print Exit module to the Rocket Enterprise Server batch printer Server Execution Process. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-batch-printing-workloads-on-aws-by-using-rocket-enterprise-server-and-lrs-vpsx-mfi.html)For more information about configuration, see [Using the Exit](https://www.microfocus.com/documentation/enterprise-developer/ed70/ED-Eclipse/HCOMCMJCLOS025.html) in the Micro Focus documentation. | Cloud architect | 
| Add a printer in LRS VPSX/MFI. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-batch-printing-workloads-on-aws-by-using-rocket-enterprise-server-and-lrs-vpsx-mfi.html) | Cloud architect | 
| Create a print user in LRS VPSX/MFI. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-batch-printing-workloads-on-aws-by-using-rocket-enterprise-server-and-lrs-vpsx-mfi.html) | Cloud architect | 

### Set up print authentication and authorization
<a name="set-up-print-authentication-and-authorization"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an AWS Managed Microsoft AD domain with users and groups. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-batch-printing-workloads-on-aws-by-using-rocket-enterprise-server-and-lrs-vpsx-mfi.html) | Cloud architect | 
| Join LRS VPSX/MFI EC2 to an AWS Managed Microsoft AD domain. | Join LRS VPSX/MFI EC2 to your AWS Managed Microsoft AD domain [automatically](https://repost.aws/knowledge-center/ec2-systems-manager-dx-domain) (AWS Knowledge Center documentation) or [manually](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/launching_instance.html) (AWS Directory Service documentation). | Cloud architect | 
| Configure and integrate LRS/DIS with AWS Managed Microsoft AD. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-batch-printing-workloads-on-aws-by-using-rocket-enterprise-server-and-lrs-vpsx-mfi.html) | Cloud architect | 

### Test a print workflow
<a name="test-a-print-workflow"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Initiate a batch print request from the Rocket Software BankDemo app. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-batch-printing-workloads-on-aws-by-using-rocket-enterprise-server-and-lrs-vpsx-mfi.html) | Test engineer | 
| Check the print output in LRS VPSX/MFI. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-batch-printing-workloads-on-aws-by-using-rocket-enterprise-server-and-lrs-vpsx-mfi.html) You can now see the print output of an account statement with columns for **Account No.**, **Description**, **Date**, **Amount**, and **Balance**. For an example, see the **batch print output** attachment for this pattern. | Test engineer | 

## Related resources
<a name="modernize-mainframe-batch-printing-workloads-on-aws-by-using-rocket-enterprise-server-and-lrs-vpsx-mfi-resources"></a>
+ [LRS Output Modernization](https://www.lrsoutputmanagement.com/) (LRS documentation)
+ [ANSI and machine carriage controls](https://www.ibm.com/docs/en/cmofz/9.5.0?topic=tips-ansi-machine-carriage-controls) (IBM documentation)
+ [Channel command words](https://www.ibm.com/docs/en/zos/2.3.0?topic=devices-channel-command-words) (IBM documentation)
+ [Empowering Enterprise Mainframe Workloads on AWS with Micro Focus](https://aws.amazon.com/blogs/apn/empowering-enterprise-grade-mainframe-workloads-on-aws-with-micro-focus/) (AWS Partner Network Blog)
+ [Build a Micro Focus Enterprise Server PAC with Amazon EC2 Auto Scaling and Systems Manager](https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/build-a-micro-focus-enterprise-server-pac-with-amazon-ec2-auto-scaling-and-systems-manager.html) (AWS Prescriptive Guidance documentation)
+ [Advanced Function Presentation (AFP) data stream](https://www.ibm.com/docs/en/i/7.4?topic=streams-advanced-function-presentation-data-stream) (IBM documentation)
+ [Line Conditioned Data Stream (LCDS)](https://www.compart.com/en/lcds) (Compart documentation)

## Additional information
<a name="modernize-mainframe-batch-printing-workloads-on-aws-by-using-rocket-enterprise-server-and-lrs-vpsx-mfi-additional"></a>

**Considerations**

During your modernization journey, you might consider a wide variety of configurations for both mainframe batch processes and the output they generate. Every customer and vendor customizes the mainframe platform to particular requirements that directly affect printing. For example, your current platform might incorporate IBM Advanced Function Presentation (AFP) or the Xerox Line Conditioned Data Stream (LCDS) into the current workflow. Additionally, [mainframe carriage control characters](https://www.ibm.com/docs/en/cmofz/9.5.0?topic=tips-ansi-machine-carriage-controls) and [channel command words](https://www.ibm.com/docs/en/zos/2.3.0?topic=devices-channel-command-words) can affect the look of the printed page and might need special handling. As part of the modernization planning process, we recommend that you assess and understand the configurations in your specific print environment.
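To illustrate why carriage controls need special handling, the following sketch maps the common ANSI (ASA) carriage control characters to plain-text spacing actions. The mapping covers only the standard ASA codes; machine (CCW) carriage controls and the sample data are outside this sketch's scope.

```python
# Minimal sketch: interpreting ANSI (ASA) carriage control characters when
# rendering mainframe print records to plain text. The first byte of each
# record is the control character; the rest is the print line.
ASA_ACTIONS = {
    " ": "\n",      # single space before printing
    "0": "\n\n",    # double space before printing
    "-": "\n\n\n",  # triple space before printing
    "+": "\r",      # suppress spacing (overprint the previous line)
    "1": "\f",      # skip to channel 1 (start a new page)
}

def render_asa(records):
    """Convert records whose first byte is an ASA control character."""
    out = []
    for record in records:
        control, text = record[:1], record[1:]
        out.append(ASA_ACTIONS.get(control, "\n") + text)
    return "".join(out)

# Invented sample records for illustration.
sample = ["1ACCOUNT STATEMENT", " Account No.  Balance", "0TOTAL"]
print(render_asa(sample))
```

A converter that ignores these bytes would print the control characters as data and lose page breaks, which is why an assessment of your print environment should note which control convention each job uses.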

**Print data capture**

Rocket Software Print Exit passes the necessary information to enable LRS VPSX/MFI to effectively process the spool file. The information consists of fields passed in the relevant control blocks, such as:
+ JOBNAME
+ OWNER (USERID)
+ DESTINATION
+ FORM
+ FILENAME
+ WRITER

LRS VPSX/MFI supports the following mainframe batch mechanisms for capturing data from Rocket Enterprise Server.
+ BATCH COBOL print/spool processing using standard z/OS JCL SYSOUT DD/OUTPUT statements
+ BATCH COBOL print/spool processing using standard z/OS JCL CA-SPOOL SUBSYS DD statements
+ IMS/COBOL print/spool processing using the CBLTDLI interface (For a full list of supported methods and programming examples, see the LRS documentation that’s included with your product license.)

**Printer fleet health checks**

LRS VPSX/MFI (LRS LoadX) can perform deep-dive health checks, including device management and operational optimization. Device management can detect a failure in a printer device and route the print request to a healthy printer. For more information about deep-dive health checks for printer fleets, see the LRS documentation that’s included with your product license.

**Print authentication and authorization**

LRS/DIS enables LRS applications to authenticate user IDs and passwords by using Microsoft Active Directory or an LDAP server. In addition to basic print authorization, LRS/DIS can also apply granular print security controls in the following use cases:
+ Manage who can browse a print job.
+ Manage the level at which users can browse other users' jobs.
+ Manage operational tasks, such as command-level security for hold/release, purge, modify, copy, and reroute operations. Security can be set up by either user ID or group (similar to an AD or LDAP group).

## Attachments
<a name="attachments-36de7312-4860-4702-a325-c01cf74c4f33"></a>

To access additional content that is associated with this document, unzip the following file: [attachment.zip](samples/p-attach/36de7312-4860-4702-a325-c01cf74c4f33/attachments/attachment.zip)

# Mainframe modernization: DevOps on AWS with Rocket Software Enterprise Suite
<a name="mainframe-modernization-devops-on-aws-with-micro-focus"></a>

*Kevin Yung, Amazon Web Services*

## Summary
<a name="mainframe-modernization-devops-on-aws-with-micro-focus-summary"></a>

**Customer challenges**

Organizations that run core applications on mainframe hardware usually encounter a few challenges when the hardware needs to scale up to meet the demands of digital innovation. These challenges include the following constraints.
+ Mainframe development and test environments are unable to scale because of the inflexibility of mainframe hardware components and the high cost of change.
+ Mainframe development faces skill shortages, because new developers are unfamiliar with, and often uninterested in, traditional mainframe development tools. Modern technologies such as containers, continuous integration and continuous delivery (CI/CD) pipelines, and modern test frameworks are not available for mainframe development.

**Pattern outcomes**

To address these challenges, Amazon Web Services (AWS) and Rocket Software (formerly Micro Focus), an AWS Partner Network (APN) Partner, have collaborated to create this pattern. The solution is designed to help you achieve the following outcomes.
+ Improved developer productivity. Developers can be given new mainframe development instances within minutes.
+ Use of the AWS Cloud to create new mainframe test environments with virtually unlimited capacity.
+ Rapid provisioning of new mainframe CI/CD infrastructure. Provisioning on AWS can be completed within an hour by using AWS CloudFormation and AWS Systems Manager.
+ Native use of AWS DevOps tools for mainframe development, including AWS CodeBuild, AWS CodeCommit, AWS CodePipeline, AWS CodeDeploy, and Amazon Elastic Container Registry (Amazon ECR).
+ Transformation of traditional waterfall development into agile development in mainframe projects.

**Technologies summary**

In this pattern, the target stack contains the following components.


| Logical components | Implementation solutions | Description | 
| --- |--- |--- |
| Source code repositories | Rocket Software AccuRev Server, CodeCommit, Amazon ECR  | Source code management – The solution uses two types of source code: mainframe source code (for example, COBOL and JCL), and AWS infrastructure templates and automation scripts. Both types need version control, but they are managed in different SCMs. Source code that is deployed to the mainframe or to Rocket Software Enterprise Server is managed in Rocket Software AccuRev Server. AWS templates and automation scripts are managed in CodeCommit. Amazon ECR is used for the Docker image repositories.  | 
| Enterprise developer instances | Amazon Elastic Compute Cloud (Amazon EC2), Rocket Software Enterprise Developer for Eclipse | Mainframe developers can develop code in Amazon EC2 by using Rocket Software Enterprise Developer for Eclipse. This eliminates the need to rely on mainframe hardware to write and test code.  | 
| Rocket Software Enterprise Suite license management | Rocket Software Enterprise Suite License Manager | For centralized Rocket Software Enterprise Suite license management and governance, the solution uses Rocket Software Enterprise Suite License Manager to host the required license. | 
| CI/CD pipelines | CodePipeline, CodeBuild, CodeDeploy, Rocket Software Enterprise Developer in a container, Rocket Software Enterprise Test Server in a container, Rocket Software Enterprise Server | Mainframe development teams need CI/CD pipelines to perform code compilation, integration tests, and regression tests. In AWS, CodePipeline and CodeBuild can work natively with Rocket Software Enterprise Developer and Enterprise Test Server in containers. | 

## Prerequisites and limitations
<a name="mainframe-modernization-devops-on-aws-with-micro-focus-prereqs"></a>

**Prerequisites**


| Name | Description | 
| --- |--- |
| py3270 | py3270 is a Python interface to x3270, an IBM 3270 terminal emulator. It provides an API to an x3270 or s3270 subprocess. | 
| x3270 | x3270 is an IBM 3270 terminal emulator for the X Window System and Windows. Developers can use it for local unit testing. | 
| Robot-Framework-Mainframe-3270-Library | Mainframe3270 is a library for Robot Framework that is based on the py3270 project. | 
| Rocket Software Verastream | Rocket Software Verastream is an integration platform that enables testing mainframe assets the same way that mobile apps, web applications, and SOA web services are tested. | 
| Rocket Software Unified Functional Testing (UFT) installer and license | Rocket Software Unified Functional Testing is software that provides functional and regression test automation for software applications and environments. | 
| Rocket Software Enterprise Server installer and license | Enterprise Server provides the runtime environment for mainframe applications. | 
| Rocket Software Enterprise Test Server installer and license | Rocket Software Enterprise Test Server provides an IBM mainframe application test environment. | 
| Rocket Software AccuRev installers and licenses for the server and for the Windows and Linux clients | AccuRev provides source code management (SCM). The AccuRev system is designed for use by a team of people who are developing a set of files. | 
| Rocket Software Enterprise Developer for Eclipse installer, patches, and license | Enterprise Developer provides mainframe developers a platform to develop and maintain core mainframe online and batch applications. | 

**Limitations**
+ Building a Windows Docker image is not supported in CodeBuild. This [reported issue](https://github.com/docker-library/docker/issues/49) needs support from the Windows Kernel/HCS and Docker teams. The workaround is to create a Docker image build runbook by using Systems Manager. This pattern uses the workaround to build the Rocket Software Enterprise Developer and Rocket Software Enterprise Test Server container images. 
+ Virtual private cloud (VPC) connectivity from CodeBuild is not supported on Windows yet, so the pattern does not use Rocket Software License Manager to manage licenses in the Rocket Software Enterprise Developer and Rocket Software Enterprise Test Server containers.

**Product versions**
+ Rocket Software Enterprise Developer 5.5 or later
+ Rocket Software Enterprise Test Server 5.5 or later
+ Rocket Software Enterprise Server 5.5 or later
+ Rocket Software AccuRev 7.x or later
+ Windows Docker base image for Rocket Software Enterprise Developer and Enterprise Test Server: **microsoft/dotnet-framework-4.7.2-runtime**
+ Linux Docker base image for AccuRev client: **amazonlinux:2**

## Architecture
<a name="mainframe-modernization-devops-on-aws-with-micro-focus-architecture"></a>

**Mainframe environment**

In conventional mainframe development, developers need to use mainframe hardware to develop and test programs. They face capacity limitations, such as restricted million instructions per second (MIPS) capacity for the dev/test environment, and they must rely on the tools that are available on the mainframe computers.

In many organizations, mainframe development follows the waterfall development methodology, with teams relying on long cycles to release changes. These release cycles are usually longer than those of digital product development.

The following diagram shows multiple mainframe projects sharing mainframe hardware for their development. With mainframe hardware, it is expensive to scale out development and test environments for additional projects.

![\[Diagram showing mainframe architecture with z/OS, databases, programming languages, and user groups.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/2359db4c-f351-45a6-8516-88a3b62e61f9/images/84e717fc-5aea-41a6-977a-d7e7a7ca5da7.png)


 

 

**AWS architecture**

This pattern extends mainframe development to the AWS Cloud. First, it uses AccuRev SCM to host the mainframe source code on AWS. Then it makes Enterprise Developer and Enterprise Test Server available for building and testing the mainframe code on AWS. 

The following sections describe the pattern's three major components.

**1. SCM **

In AWS, the pattern uses AccuRev to create a set of SCM workspaces and version control for the mainframe source code. Its stream-based architecture enables parallel mainframe development for multiple teams. To merge a change, AccuRev uses the promote operation. To propagate that change to other workspaces, AccuRev uses the update operation.

At the project level, each team can create one or more streams in AccuRev to track project-level changes. These are called project streams. The project streams inherit from the same parent stream, which is used to merge the changes from different project streams.

Each project stream can promote code to AccuRev, and a post-promotion trigger is set up to initiate the AWS CI/CD pipeline. A successfully built project stream change can be promoted to its parent stream for more regression tests.

Usually, the parent stream is called the system integration stream. When there is a promotion from a project stream to the system integration stream, a post-promotion trigger initiates another CI/CD pipeline to run regression tests.
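The stream hierarchy described above can be sketched as a toy model. The class and stream names here are illustrative only and are not AccuRev's API; the model just shows how promoting a change up makes it visible to sibling project streams.

```python
# Toy model of the AccuRev stream flow: project streams inherit from a
# parent (system integration) stream. "Promote" moves a change into a
# stream; children see parent changes through inheritance (an "update").
class Stream:
    def __init__(self, name, parent=None):
        self.name, self.parent = name, parent
        self.changes = []

    def promote(self, change):
        """Promote a change into this stream (this is what fires the
        post-promotion trigger and starts the associated pipeline)."""
        self.changes.append(change)

    def promote_to_parent(self, change):
        """Move a change up: the parent now carries it, children inherit it."""
        self.changes.remove(change)
        self.parent.promote(change)

    def visible_changes(self):
        inherited = self.parent.visible_changes() if self.parent else []
        return inherited + self.changes

integration = Stream("system-integration")
team_a = Stream("team-a", parent=integration)
team_b = Stream("team-b", parent=integration)

team_a.promote("fix-cobol-copybook")            # developer promotes into team stream
team_a.promote_to_parent("fix-cobol-copybook")  # after project tests pass, promote up
print(team_b.visible_changes())                 # team-b inherits it after an update
```

In the real system each `promote` call at either level would also start the corresponding project or system integration pipeline.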

In addition to mainframe code, this pattern includes AWS CloudFormation templates, Systems Manager Automation documents, and scripts. Following infrastructure-as-code best practices, they are version-controlled in CodeCommit. 

If you need to synchronize mainframe code back to a mainframe environment for deployment, Rocket Software provides the Enterprise Sync solution, which synchronizes code from the AccuRev SCM back to the mainframe SCM.

**2. Developer and test environments**

In a large organization, scaling to hundreds or even thousands of mainframe developers is challenging. To address this constraint, the pattern uses Amazon EC2 Windows instances for development. Enterprise Developer for Eclipse tools are installed on the instances, so developers can perform all mainframe code testing and debugging locally.

AWS Systems Manager State Manager and Automation documents are used to automate developer instance provisioning. A developer instance is typically created in less than 15 minutes. The following software and configurations are prepared:
+ AccuRev Windows client for checking out and committing source code into AccuRev
+ Enterprise Developer for Eclipse, for writing, testing, and debugging mainframe code locally
+ Open source testing frameworks: the Python behavior-driven development (BDD) test framework Behave, py3270, and the x3270 emulator for creating scripts to test applications
+ Docker developer tools for building the Enterprise Test Server Docker image and testing the application in an Enterprise Test Server Docker container 

In the development cycle, developers use the EC2 instance to develop and test mainframe code locally. When the local changes are tested successfully, developers promote the change into the AccuRev server.  

**3. CI/CD pipelines**

In the pattern, CI/CD pipelines are used for integration tests and regression tests before deployment to the production environment. 

As explained in the SCM section, AccuRev uses two types of streams: project streams and an integration stream. Each stream is connected to a CI/CD pipeline. To integrate the AccuRev server with AWS CodePipeline, the pattern uses an AccuRev post-promotion script to create an event that initiates CI/CD.

For example, when a developer promotes a change to a project stream in AccuRev, a post-promotion script runs on the AccuRev server. The script uploads the metadata of the change to an Amazon Simple Storage Service (Amazon S3) bucket, which creates an Amazon S3 event. That event initiates the configured CodePipeline pipeline. 
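As a minimal sketch of what such a post-promotion script might produce, the following builds a promotion-metadata XML document and an S3 object key. The element names, bucket, and key layout are illustrative assumptions; the metadata that your AccuRev trigger actually emits may differ.

```python
import xml.etree.ElementTree as ET
from datetime import datetime, timezone

def build_promotion_metadata(stream, transaction_id, elements):
    """Build an illustrative XML payload describing an AccuRev promotion."""
    root = ET.Element("promotion")
    ET.SubElement(root, "stream").text = stream
    ET.SubElement(root, "transaction").text = str(transaction_id)
    ET.SubElement(root, "timestamp").text = datetime.now(timezone.utc).isoformat()
    files = ET.SubElement(root, "elements")
    for path in elements:
        ET.SubElement(files, "element").text = path
    return ET.tostring(root, encoding="unicode")

def s3_key_for(stream, transaction_id):
    # One object key per promotion, so each upload raises a distinct S3 event.
    return f"accurev-promotions/{stream}/{transaction_id}.xml"

xml_doc = build_promotion_metadata("team-stream-1", 42, ["SRC/BANKDEMO.cbl"])
key = s3_key_for("team-stream-1", 42)
# In the real trigger, the script would upload the document with boto3:
# boto3.client("s3").put_object(Bucket="ci-source-bucket", Key=key, Body=xml_doc)
print(key)
```

Keying each promotion separately keeps the pipeline trigger idempotent per transaction and leaves an audit trail of promotions in the bucket.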

The same event-initiating mechanism is used for the integration stream and its associated pipelines. 

In the CI/CD pipeline, CodePipeline uses CodeBuild with the AccuRev Linux client container to check out the latest code from the AccuRev streams. Then the pipeline starts CodeBuild, which uses the Enterprise Developer Windows container to compile the source code and the Enterprise Test Server Windows container to test the mainframe applications.

The CI/CD pipelines are built by using CloudFormation templates, and the templates serve as a blueprint for new projects. By using the templates, a project team can create a new CI/CD pipeline in AWS in less than an hour.

To scale your mainframe test capability on AWS, the pattern builds out the Rocket Software DevOps test suite: Verastream and the UFT server. By using these modern DevOps tools, you can run as many tests on AWS as you need.

An example mainframe development environment with Rocket Software on AWS is shown in the following diagram.

![\[AWS development pipeline with shared components for multiple project teams.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/2359db4c-f351-45a6-8516-88a3b62e61f9/images/27da6a52-4573-44cb-8716-1ac49430f618.png)


 

**Target technology stack**

This section provides a closer look at the architecture of each component in the pattern.

**1. Source code repository – AccuRev SCM **

AccuRev SCM is set up to manage mainframe source code versions. For high availability, AccuRev supports primary and replica modes. Operators can fail over to the replica when performing maintenance on the primary node. 

To speed up the response of the CI/CD pipeline, the pattern uses Amazon CloudWatch Events to detect source code changes and initiate the start of the pipeline.

1. The pipeline is set up to use an Amazon S3 source.

1. A CloudWatch Events rule is set up to capture S3 events from a source S3 bucket.

1. The CloudWatch Events rule sets a target to the pipeline.

1. AccuRev SCM is configured to run a post promotion script locally after promotion is complete.

1. AccuRev SCM generates an XML file that contains the metadata of the promotion, and the script uploads the XML file to the source S3 bucket.

1. After the upload, the source S3 bucket sends events to match the CloudWatch Events rule, and the CloudWatch Events rule initiates the pipeline to run. 
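The matching step in this flow can be sketched as follows. The rule pattern below mirrors the shape of a CloudWatch Events (EventBridge) pattern for an S3 `PutObject` API call, and the matcher is a simplified stand-in for the service's own matching logic; the bucket name is illustrative, and matching S3 API calls this way assumes CloudTrail data events are enabled for the bucket.

```python
# Illustrative rule pattern: match CloudTrail-recorded PutObject calls
# against a hypothetical source bucket.
RULE_PATTERN = {
    "source": ["aws.s3"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["s3.amazonaws.com"],
        "eventName": ["PutObject"],
        "requestParameters": {"bucketName": ["ci-source-bucket"]},
    },
}

def matches(pattern, event):
    """Simplified event-pattern matching: every pattern field must be
    present in the event with one of the listed values."""
    for field, expected in pattern.items():
        value = event.get(field)
        if isinstance(expected, dict):
            if not isinstance(value, dict) or not matches(expected, value):
                return False
        elif value not in expected:
            return False
    return True

# A sample event like the one the metadata upload would generate.
upload_event = {
    "source": "aws.s3",
    "detail-type": "AWS API Call via CloudTrail",
    "detail": {
        "eventSource": "s3.amazonaws.com",
        "eventName": "PutObject",
        "requestParameters": {"bucketName": "ci-source-bucket"},
    },
}
print(matches(RULE_PATTERN, upload_event))
```

When the rule matches, CloudWatch Events invokes its target, which in this pattern is the CodePipeline pipeline.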

When the pipeline runs, it starts a CodeBuild project that uses an AccuRev Linux client container to check out the latest mainframe code from the associated AccuRev stream.

The following diagram shows an AccuRev Server setup.

![\[AWS Cloud diagram showing AccuRev setup with primary and replica instances across availability zones.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/2359db4c-f351-45a6-8516-88a3b62e61f9/images/e60345cc-2283-4b03-8f57-e3dac1770978.png)


**2. Enterprise Developer template** 

The pattern uses Amazon EC2 templates to simplify creation of the developer instance. By using State Manager, it can apply software and license settings to EC2 instances consistently.

The Amazon EC2 template builds in its VPC context settings and default instance settings, and it follows enterprise tagging requirements. By using a template, a team can create their own new development instances. 

When a developer instance starts, by associating with tags, Systems Manager uses State Manager to apply automation. The automation includes the following general steps.

1. Install Enterprise Developer software and install patches.

1. Install the AccuRev client for Windows.

1. Install the pre-configured script for developers to join the AccuRev stream. Initialize Eclipse workspaces.

1. Install development tools, including x3270, py3270, and Docker.

1. Configure license settings to point to a License Manager load balancer.
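The steps above might be expressed in a Systems Manager Automation document shaped roughly like the following sketch. The step names and actions are illustrative assumptions, not the pattern's actual document content.

```python
# Illustrative shape of an SSM Automation document mirroring the
# developer-instance provisioning steps. A real document would add
# inputs, document references, and per-step parameters.
AUTOMATION_DOC = {
    "schemaVersion": "0.3",
    "description": "Provision an Enterprise Developer instance (sketch)",
    "mainSteps": [
        {"name": "InstallEnterpriseDeveloper", "action": "aws:runCommand"},
        {"name": "InstallAccuRevClient", "action": "aws:runCommand"},
        {"name": "ConfigureAccuRevStream", "action": "aws:runCommand"},
        {"name": "InstallDevTools", "action": "aws:runCommand"},  # x3270, py3270, Docker
        {"name": "PointToLicenseManager", "action": "aws:runCommand"},
    ],
}

# State Manager associations target instances by tag, so a newly
# launched instance with the matching tag picks up this document.
print([step["name"] for step in AUTOMATION_DOC["mainSteps"]])
```

Keeping each installation as its own step makes partial failures visible in the Automation execution history, which simplifies troubleshooting a half-provisioned instance.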

The following diagram shows an Enterprise Developer instance created from the Amazon EC2 template, with software and configuration applied to the instance by State Manager. Enterprise Developer instances connect to License Manager to activate their licenses.

![\[AWS Cloud diagram showing Enterprise Developer Instance setup with License Manager and Systems Manager components.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/2359db4c-f351-45a6-8516-88a3b62e61f9/images/7ca8f538-8362-4a11-a842-7ecff6fa0248.png)


 

**3.  CI/CD pipelines**

As explained in the AWS architecture section, the pattern includes project-level CI/CD pipelines and system integration pipelines. Each mainframe project team creates one or more CI/CD pipelines for building the programs that they are developing in a project. These project CI/CD pipelines check out source code from an associated AccuRev stream. 

In a project team, developers promote their code in the associated AccuRev stream. Then the promotion initiates the project pipeline to build the code and run integration tests. 

Each project CI/CD pipeline uses CodeBuild projects with the Enterprise Developer tool Amazon ECR image and Enterprise Test Server tool Amazon ECR image. 

CodePipeline and CodeBuild are used to create the CI/CD pipelines. Because CodeBuild and CodePipeline have no upfront fees or commitments, you pay only for what you use. Compared to mainframe hardware, the AWS solution greatly reduces hardware provisioning lead time and lowers the cost of your testing environment.

In modern development, multiple test methodologies are used, such as test-driven development (TDD), BDD, and Robot Framework. With this pattern, developers can use these modern tools for mainframe testing. For example, by using x3270, py3270, and the Behave Python test tool, you can define an online application's behavior. You can also build Mainframe3270 Robot Framework tests into these CI/CD pipelines.
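To show the kind of screen assertion such a test makes without requiring a live emulator, here is a sketch that mimics py3270's `string_get` against an in-memory 24x80 screen buffer. The screen content, coordinates, and app name are invented for illustration; a real test would read the screen through a connected py3270 `Emulator`.

```python
# Sketch: asserting on 3270 screen content, using a simulated 24x80
# buffer in place of a live x3270/s3270 session.
COLS = 80

def string_get(screen, row, col, length):
    """Mimic a 3270 string_get: rows and columns are 1-based."""
    line = screen[row - 1].ljust(COLS)
    return line[col - 1 : col - 1 + length]

# Hypothetical inquiry screen as the application might render it.
screen = ["" for _ in range(24)]
screen[0] = "BANKDEMO -- ACCOUNT INQUIRY".ljust(COLS)
screen[4] = "ACCOUNT NO: 00012345   BALANCE:     1,250.00".ljust(COLS)

# A Behave-style "then" step boils down to an assertion like this:
assert string_get(screen, 5, 13, 8) == "00012345"
print(string_get(screen, 1, 1, 8))
```

In a pipeline, the same assertion runs against the Enterprise Test Server container's live screen, so a regression that shifts or corrupts a field fails the build.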

The following diagram shows the team stream CI/CD pipeline. 

![\[AWS Cloud CI/CD pipeline showing CodeCommit, CodePipeline, and CodeBuild with Micro Focus tools integration.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/2359db4c-f351-45a6-8516-88a3b62e61f9/images/da59f837-2f23-404f-948b-41402cc6fe0c.png)


The following diagram shows the project CI/CD test report produced by CodePipeline in Mainframe3270 Robot Framework.

![\[Test report summary showing 100% pass rate for 3 test cases in 2.662 seconds.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/2359db4c-f351-45a6-8516-88a3b62e61f9/images/4752321a-c60d-455c-ac2f-6f0e2bc3dca0.png)


The following diagram shows the project CI/CD test report produced by CodePipeline in Py3270 and Behave BDD.

![\[Test report summary showing 100% pass rate for 2 test cases in a pipeline.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/2359db4c-f351-45a6-8516-88a3b62e61f9/images/d005466e-aeb8-4fd6-8342-743ed049f98a.png)


After project-level tests pass, the tested code is manually promoted to the integration stream in AccuRev SCM. You can automate this step after the teams are confident in the test coverage of their project pipeline.

When code is promoted, the system integration CI/CD pipeline checks out the merged code, which is promoted from all parallel project streams, and performs regression tests.

Depending on how fine-grained a test environment is required, customers can run additional system integration CI/CD pipelines in different environments, such as user acceptance testing (UAT) and pre-production. 

In the pattern, the tools used in the system integration pipeline are Enterprise Test Server, UFT Server, and Verastream. All of these tools can be deployed in Docker containers and used with CodeBuild.

After the mainframe programs are successfully tested, the artifact is stored, with version control, in an S3 bucket. 

The following diagram shows a system integration CI/CD pipeline.

![\[CI/CD pipeline showing AWS services and Micro Focus tools for source, build, test, and promote stages.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/2359db4c-f351-45a6-8516-88a3b62e61f9/images/693212e5-1cd0-4f82-a910-39b00d977c38.png)


 

After the artifact has been successfully tested in the system integration CI/CD pipelines, it can be promoted for production deployment. 

If you need to deploy source code back to the mainframe, Rocket Software offers the Enterprise Sync solution to synchronize source code from AccuRev back to the mainframe SCM, such as Endevor.

The following diagram shows a production CI/CD pipeline deploying the artifact into Enterprise Servers. In this example, CodeDeploy orchestrates the deployment of the tested mainframe artifact into Enterprise Server.

![\[CI/CD pipeline diagram showing CodePipeline, CodeBuild, and CodeDeploy stages for artifact deployment.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/2359db4c-f351-45a6-8516-88a3b62e61f9/images/56749c2a-e038-4e56-9487-b2ff83894725.png)


For more information and best practices for testing mainframe applications in CodeBuild and CodePipeline, see the AWS DevOps blog post [Automate thousands of mainframe tests on AWS with the Micro Focus Enterprise Suite](https://aws.amazon.com/blogs/devops/automate-mainframe-tests-on-aws-with-micro-focus/). (Micro Focus is now Rocket Software.)

## Tools
<a name="mainframe-modernization-devops-on-aws-with-micro-focus-tools"></a>

**AWS automation tools**
+ [CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html)
+ [Amazon CloudWatch Events](https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/WhatIsCloudWatchEvents.html)
+ [AWS CodeBuild](https://docs.aws.amazon.com/codebuild/latest/userguide/welcome.html)
+ [AWS CodeDeploy](https://docs.aws.amazon.com/codedeploy/latest/userguide/welcome.html)
+ [AWS CodePipeline](https://docs.aws.amazon.com/codepipeline/latest/userguide/welcome.html)
+ [Amazon ECR](https://docs.aws.amazon.com/AmazonECR/latest/userguide/what-is-ecr.html)
+ [Amazon S3](https://docs.aws.amazon.com/AmazonS3/latest/dev/Welcome.html)
+ [AWS Secrets Manager](https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html)
+ [AWS Systems Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/what-is-systems-manager.html)

**Rocket Software tools**
+ [Rocket Enterprise Developer for Eclipse](https://www.microfocus.com/documentation/enterprise-developer/ed60/ED-Eclipse/GUID-8D6B7358-AC35-4DAF-A445-607D8D97EBB2.html)
+ [Rocket Enterprise Test Server](https://www.microfocus.com/documentation/enterprise-developer/ed60/ETS-help/GUID-ECA56693-D9FE-4590-8798-133257BFEBE7.html)
+ [Rocket Enterprise Server](https://www.microfocus.com/documentation/enterprise-developer/es_60/) (production deployment)
+ [Rocket Software AccuRev](https://supportline.microfocus.com/documentation/books/AccuRev/AccuRev/6.2/webhelp/wwhelp/wwhimpl/js/html/wwhelp.htm)
+ [Rocket Software Enterprise Suite License Manager](https://www.microfocus.com/documentation/slm/)
+ [Rocket Software Verastream Host Integrator](https://www.microfocus.com/documentation/verastream-host-integrator/)
+ [Rocket Software UFT One](https://admhelp.microfocus.com/uft/en/24.4/UFT_Help/Content/User_Guide/Ch_UFT_Intro.htm)

**Other tools**
+ x3270
+ [py3270](https://pypi.org/project/py3270/)
+ [Robot-Framework-Mainframe-3270-Library](https://github.com/Altran-PT-GDC/Robot-Framework-Mainframe-3270-Library)

## Epics
<a name="mainframe-modernization-devops-on-aws-with-micro-focus-epics"></a>

### Create the AccuRev SCM infrastructure
<a name="create-the-accurev-scm-infrastructure"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Deploy a primary AccuRev SCM server by using CloudFormation. |  | AWS CloudFormation | 
| Create the AccuRev Administrator user. | Log in to AccuRev SCM Server, and run the CLI command to create an Administrator user. | AccuRev SCM Server Administrator | 
| Create AccuRev streams. | Create AccuRev streams that inherit from upper streams in sequence: Production, System Integration, Team streams. | AccuRev SCM Administrator | 
| Create the developer AccuRev login accounts. | Use AccuRev SCM CLI commands to create AccuRev login accounts for mainframe developers. | AccuRev SCM Administrator | 

### Create the Enterprise Developer Amazon EC2 launch template
<a name="create-the-enterprise-developer-ec2-launch-template"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Deploy the Amazon EC2 launch template by using CloudFormation. | Use CloudFormation to deploy an Amazon EC2 launch template for Enterprise Developer instances. The template includes a Systems Manager Automation document for the Rocket Enterprise Developer instance. | AWS CloudFormation | 
| Create the Enterprise Developer instance from the Amazon EC2 template. |  | AWS Console Login and Mainframe Developer Skills | 

### Create the Enterprise Developer tool Docker image
<a name="create-the-enterprise-developer-tool-docker-image"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the Enterprise Developer tool Docker image. | Use the Docker command and the Enterprise Developer tool Dockerfile to create the Docker image. | Docker | 
| Create the Docker repository in Amazon ECR. | On the Amazon ECR console, create the repository for the Enterprise Developer Docker image. | Amazon ECR | 
| Push the Enterprise Developer tool Docker image to Amazon ECR. | Run the Docker push command to push the Enterprise Developer tool Docker image to save it in the Docker repository in Amazon ECR. | Docker | 

### Create the Enterprise Test Server Docker image
<a name="create-the-enterprise-test-server-docker-image"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the Enterprise Test Server Docker image. | Use the Docker command and the Enterprise Test Server Dockerfile to create the Docker image. | Docker | 
| Create the Docker repository in Amazon ECR. | On the Amazon ECR console, create the Amazon ECR repository for the Enterprise Test Server Docker image. | Amazon ECR | 
| Push the Enterprise Test Server Docker image to Amazon ECR. | Run the Docker push command to push and save the Enterprise Test Server Docker image in Amazon ECR. | Docker | 

### Create the team stream CI/CD pipeline
<a name="create-the-team-stream-ci-cd-pipeline"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the CodeCommit repository. | On the CodeCommit console, create a Git-based repository for infrastructure and CloudFormation code. | AWS CodeCommit | 
| Upload the CloudFormation template and the automation code into the CodeCommit repository. | Run the Git push command to upload the CloudFormation template and automation code into the repository. | Git | 
| Deploy the team stream CI/CD pipeline by using CloudFormation. | Use the prepared CloudFormation template to deploy a team stream CI/CD pipeline. | AWS CloudFormation | 
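The repository setup and pipeline deployment tasks above can be sketched as CLI commands composed in Python. The repository, stack, and template names below are hypothetical placeholders; the commands assume that Git and the AWS CLI are installed and that credentials are configured.

```python
# Sketch of the team stream pipeline bootstrap: create the CodeCommit repository,
# push the CloudFormation template and automation code, and deploy the pipeline
# stack. All names below are hypothetical placeholders.

def pipeline_bootstrap_commands(region: str, repo: str, stack: str, template: str) -> list[str]:
    """Return the ordered CLI commands for the three tasks in this epic."""
    # CodeCommit HTTPS clone URLs follow this fixed format.
    clone_url = f"https://git-codecommit.{region}.amazonaws.com/v1/repos/{repo}"
    return [
        f"aws codecommit create-repository --repository-name {repo} --region {region}",
        f"git clone {clone_url}",
        # Run inside the clone after copying in the template and automation code.
        "git add -A && git commit -m 'Add pipeline template and automation code' && git push",
        f"aws cloudformation deploy --stack-name {stack} --template-file {template} "
        f"--capabilities CAPABILITY_NAMED_IAM --region {region}",
    ]

if __name__ == "__main__":
    for cmd in pipeline_bootstrap_commands("us-east-1", "team-stream-infra",
                                           "team-stream-pipeline", "pipeline.yaml"):
        print(cmd)
```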

### Create the system integration CI/CD pipeline
<a name="create-the-system-integration-ci-cd-pipeline"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the UFT Docker image. | Use the Docker command and the UFT Dockerfile to create the Docker image. | Docker | 
| Create the Docker repository in Amazon ECR for the UFT image. | On the Amazon ECR console, create the Docker repository for the UFT image. | Amazon ECR | 
| Push the UFT Docker image to Amazon ECR. | Run the Docker push command to push and save the UFT Docker image in Amazon ECR. | Docker | 
| Create the Verastream Docker image. | Use the Docker command and the Verastream Dockerfile to create the Docker image. | Docker | 
| Create the Docker repository in Amazon ECR for the Verastream image. | On the Amazon ECR console, create the Docker repository for the Verastream image. | Amazon ECR | 
| Deploy the system integration CI/CD pipeline by using CloudFormation. | Use the prepared CloudFormation template to deploy a system integration CI/CD pipeline. | AWS CloudFormation | 

### Create the production deployment CI/CD pipeline
<a name="create-production-deployment-ci-cd-pipeline"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Deploy Enterprise Server by using the AWS Quick Start. | To deploy Enterprise Server by using CloudFormation, launch the Enterprise Server on AWS Quick Start. | AWS CloudFormation | 
| Deploy a production deployment CI/CD pipeline. | On the CloudFormation console, use the CloudFormation template to deploy a production deployment CI/CD pipeline. | AWS CloudFormation | 

## Related resources
<a name="mainframe-modernization-devops-on-aws-with-micro-focus-resources"></a>

**References**
+ [AWS DevOps Blog - Automate thousands of mainframe tests on AWS with the Micro Focus Enterprise Suite](https://aws.amazon.com/blogs/devops/automate-mainframe-tests-on-aws-with-micro-focus/) (Micro Focus is now Rocket Software.)
+ [py3270/py3270 GitHub repository](https://github.com/py3270/py3270)
+ [Altran-PT-GDC/Robot-Framework-Mainframe-3270-Library GitHub repository](https://github.com/Altran-PT-GDC/Robot-Framework-Mainframe-3270-Library)
+ [Welcome to behave!](https://behave.readthedocs.io/en/latest/index.html)
+ [APN Partner Blog - Tag: Micro Focus](https://aws.amazon.com/blogs/apn/tag/micro-focus/) (Micro Focus is now Rocket Software.)
+ [Launching an instance from a launch template](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-launch-templates.html)

**AWS Marketplace**
+ [Rocket Software UFT One](https://aws.amazon.com/marketplace/pp/B01EGCA5OS?ref_=srh_res_product_title)

**AWS Quick Start**
+ [Rocket Enterprise Server on AWS](https://aws.amazon.com/quickstart/architecture/micro-focus-enterprise-server/)

# Modernize mainframe online printing workloads on AWS by using Micro Focus Enterprise Server and LRS VPSX/MFI
<a name="modernize-mainframe-online-printing-workloads-on-aws-by-using-micro-focus-enterprise-server-and-lrs-vpsx-mfi"></a>

*Shubham Roy and Kevin Yung, Amazon Web Services*

*Abraham Rondon, Micro Focus*

*Guy Tucker, Levi, Ray & Shoup, Inc.*

## Summary
<a name="modernize-mainframe-online-printing-workloads-on-aws-by-using-micro-focus-enterprise-server-and-lrs-vpsx-mfi-summary"></a>

This pattern shows you how to modernize your business-critical mainframe online printing workloads on the Amazon Web Services (AWS) Cloud by using Micro Focus Enterprise Server as a runtime for a modernized mainframe application and LRS VPSX/MFI (Micro Focus Interface) as a print server. The pattern is based on the [replatform](https://aws.amazon.com/blogs/apn/demystifying-legacy-migration-options-to-the-aws-cloud/) mainframe modernization approach. In this approach, you migrate your mainframe online application to Amazon Elastic Compute Cloud (Amazon EC2) and migrate your mainframe database, such as IBM DB2 for z/OS, to Amazon Relational Database Service (Amazon RDS). The authentication and authorization for the modernized print workflow is performed by AWS Directory Service for Microsoft Active Directory, also known as AWS Managed Microsoft AD. The LRS Directory Information Server (LRS/DIS) is integrated with AWS Managed Microsoft AD for print workflow authentication and authorization. By modernizing your online printing workloads, you can reduce IT infrastructure costs, mitigate the technical debt of maintaining legacy systems, remove data silos, increase agility and efficiency with a DevOps model, and take advantage of on-demand resources and automation in the AWS Cloud.

## Prerequisites and limitations
<a name="modernize-mainframe-online-printing-workloads-on-aws-by-using-micro-focus-enterprise-server-and-lrs-vpsx-mfi-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ A mainframe online printing or output management workload
+ Basic knowledge of how to rebuild and deliver a mainframe application that runs on Micro Focus Enterprise Server (For more information, see the [Enterprise Server](https://www.microfocus.com/media/data-sheet/enterprise_server_ds.pdf) data sheet in the Micro Focus documentation.)
+ Basic knowledge of LRS cloud printing solutions and concepts (For more information, see [Output Modernization](https://www.lrsoutputmanagement.com/products/modernization-products) in the LRS documentation.)
+ Micro Focus Enterprise Server software and license (For more information, contact [Micro Focus sales](https://www.microfocus.com/en-us/contact/contactme).)
+ LRS VPSX/MFI, LRS/Queue, and LRS/DIS software and licenses (For more information, contact [LRS sales](https://www.lrsoutputmanagement.com/about-us/contact-us/).)

**Note**  
For more information about configuration considerations for mainframe online printing workloads, see *Considerations* in the *Additional information* section of this pattern.

**Product versions**
+ [Micro Focus Enterprise Server](https://www.microfocus.com/en-us/products/enterprise-server/overview?utm_campaign=7018e000000PgfnAAC&utm_content=SCH-BR-AMC-AppM-AMS&gclid=EAIaIQobChMIoZCQ6fvS9wIVxQN9Ch2MzAOlEAAYASAAEgKx2fD_BwE) 8.0 or later
+ [LRS VPSX/MFI](https://www.lrsoutputmanagement.com/products/modernization-products/) V1R3 or later

## Architecture
<a name="modernize-mainframe-online-printing-workloads-on-aws-by-using-micro-focus-enterprise-server-and-lrs-vpsx-mfi-architecture"></a>

**Source technology stack**
+ Operating system – IBM z/OS
+ Programming language – Common Business-Oriented Language (COBOL) and Customer Information Control System (CICS) 
+ Database – IBM DB2 for z/OS, IBM Information Management System (IMS), and Virtual Storage Access Method (VSAM)
+ Security – Resource Access Control Facility (RACF), CA Top Secret for z/OS, and Access Control Facility 2 (ACF2)
+ Printing and output management – IBM mainframe z/OS printing products (IBM Infoprint Server for z/OS, LRS, and CA View)

**Target technology stack**
+ Operating system – Microsoft Windows Server running on Amazon EC2
+ Compute – Amazon EC2
+ Programming language – COBOL and CICS
+ Database – Amazon RDS
+ Security – AWS Managed Microsoft AD
+ Printing and output management – LRS printing solution on AWS
+ Mainframe runtime environment – Micro Focus Enterprise Server

**Source architecture**

The following diagram shows a typical current state architecture for a mainframe online printing workload.

![\[Six-step process to produce viewable output.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/924cdae7-9265-4fc9-8e5e-bb2da5368e7e/images/293368f5-d102-4f4e-b290-71da4aeff347.png)


The diagram shows the following workflow:

1. Users perform business transactions on a system of engagement (SoE) that’s built on an IBM CICS application written in COBOL.

1. The SoE invokes the mainframe service, which records the business transaction data in a system-of-records (SoR) database such as IBM DB2 for z/OS.

1. The SoR persists the business data from the SoE.

1. A user initiates a request to generate print output from the CICS SoE, which initiates a print transaction application to process the print request. 

1. The print transaction application (such as a CICS and COBOL program) extracts data from the database, formats the data according to business requirements, and generates business output (print data) such as billing statements, ID cards, or loan statements. Then, the application sends a print request by using Virtual Telecommunications Access Method (VTAM). A z/OS print server (such as IBM Infoprint Server) uses NetSpool or a similar VTAM component to intercept the print requests, and then creates print output datasets on the JES spool by using JES output parameters. The JES output parameters specify routing information that the print server uses to transmit the output to a particular network printer. The term *VTAM* refers to the z/OS Communications Server and the System Network Architecture (SNA) services element of z/OS.

1. The printing output transmission component transmits the output print datasets from the JES spool to remote printers or print servers, such as LRS (as demonstrated in this pattern), IBM Infoprint Server, or email destinations.

**Target architecture**

The following diagram shows an architecture for a mainframe online printing workload that’s deployed in the AWS Cloud:

![\[Four-step process from initiate print request to processing on AWS to LRS printing.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/924cdae7-9265-4fc9-8e5e-bb2da5368e7e/images/07c97b6f-1a86-493d-a4e0-b8321b46f9b7.png)


The diagram shows the following workflow:

1. A user initiates a print request from an online (CICS) user interface to create print output, such as billing statements, ID cards, or loan statements.

1. The mainframe online application ([replatformed to Amazon EC2](https://aws.amazon.com/blogs/apn/demystifying-legacy-migration-options-to-the-aws-cloud/)) uses the Micro Focus Enterprise Server runtime to extract data from the application database, apply business logic to the data, format the data, and then send the data to a print destination by using [Micro Focus CICS Print Exit](https://www.microfocus.com/documentation/enterprise-developer/ed70/ED-Eclipse/HCOMCMJCLOU020.html) (DFHUPRNT). 

1. The application database (an SoR that runs on Amazon RDS) persists data for print output.

1. The LRS VPSX/MFI printing solution is deployed on Amazon EC2, and its operational data is stored in Amazon Elastic Block Store (Amazon EBS). LRS VPSX/MFI uses a TCP/IP-based LRS/Queue transmission agent to collect print data through the Micro Focus CICS Print Exit API (DFHUPRNT) and deliver the data to a specified printer destination. The original TERMID (TERM) that’s used in the modernized CICS application is used as the VPSX/MFI Queue name. 

**Note**  
The target solution typically doesn’t require application changes to accommodate mainframe formatting languages, such as IBM Advanced Function Presentation (AFP) or Xerox Line Condition Data Stream (LCDS). For more information about using Micro Focus for mainframe application migration and modernization on AWS, see [Empowering Enterprise Mainframe Workloads on AWS with Micro Focus](https://aws.amazon.com/blogs/apn/empowering-enterprise-grade-mainframe-workloads-on-aws-with-micro-focus/) on the AWS Partner Network Blog.

**AWS infrastructure architecture**

The following diagram shows a highly available and secure AWS infrastructure architecture for a mainframe online printing workload:

![\[Two Availability Zones with Micro Focus Enterprise server on EC2, Amazon RDS, and LRS printing.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/924cdae7-9265-4fc9-8e5e-bb2da5368e7e/images/093555a1-342c-420c-bb90-e9440d2e8650.png)


The diagram shows the following workflow:

1. The mainframe online application (written in a programming language such as COBOL with CICS) uses core business logic to process and generate print output, such as billing statements, ID cards, and loan statements. The online application is deployed on Amazon EC2 across two [Availability Zones](https://aws.amazon.com/about-aws/global-infrastructure/regions_az/) (AZs) for high availability (HA) and uses Micro Focus CICS Print Exit to route print output to LRS VPSX/MFI for end-user printing.

1. LRS VPSX/MFI uses a TCP/IP-based LRS/Queue transmission agent to collect or capture print data from the Micro Focus online Print Exit programming interface. Online Print Exit passes the necessary information to enable LRS VPSX/MFI to effectively process the print file and dynamically build LRS/Queue commands. 
**Note**  
For more information about the CICS application programming methods for print and how they are supported in Micro Focus Enterprise Server and LRS VPSX/MFI, see *Print data capture* in the *Additional information* section of this pattern.

1. A [Network Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/introduction.html) provides a DNS name to integrate Micro Focus Enterprise Server with LRS VPSX/MFI. LRS VPSX/MFI supports a Layer 4 load balancer. The Network Load Balancer also performs a basic health check on LRS VPSX/MFI and routes traffic to the registered targets that are healthy.

1. The LRS VPSX/MFI print server is deployed on Amazon EC2 across two Availability Zones for HA and uses [Amazon EBS](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonEBS.html) as an operational data store. LRS VPSX/MFI supports both active-active and active-passive service modes. This architecture uses multiple Availability Zones in an active-passive pair: one active instance and one hot standby. The Network Load Balancer performs a health check on the LRS VPSX/MFI EC2 instances and routes traffic to the hot standby instance in the other Availability Zone if the active instance is unhealthy. Print requests are persisted locally in the LRS Job Queue on each EC2 instance. To recover, the failed instance must be restarted so that the LRS services resume processing print requests. 
**Note**  
LRS VPSX/MFI can also perform health checks at the printer fleet level. For more information, see *Printer fleet health checks* in the *Additional information* section of this pattern.

1. [AWS Managed Microsoft AD](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/directory_microsoft_ad.html) integrates with LRS/DIS to perform print workflow authentication and authorization. For more information, see *Print authentication and authorization* in the *Additional information* section of this pattern.

1. LRS VPSX/MFI uses Amazon EBS for block storage. You can back up Amazon EBS data from active EC2 instances to Amazon S3 as point-in-time snapshots and restore them to hot standby EBS volumes. To automate the creation, retention, and deletion of Amazon EBS volume snapshots, you can use [Amazon Data Lifecycle Manager](https://aws.amazon.com/blogs/aws/new-lifecycle-management-for-amazon-ebs-snapshots/) to set the frequency of automated snapshots and restore them based on your [RTO/RPO requirements](https://docs.aws.amazon.com/whitepapers/latest/disaster-recovery-workloads-on-aws/disaster-recovery-options-in-the-cloud.html).
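As a sketch of the snapshot automation described above, the following builds an Amazon Data Lifecycle Manager policy document for the LRS VPSX/MFI EBS volumes. The tag key/value, role ARN, schedule, and retention count are example assumptions; adjust them to your own RTO/RPO requirements.

```python
# Sketch of an Amazon Data Lifecycle Manager policy that snapshots the LRS
# VPSX/MFI EBS volumes on a schedule. Tag values and timings are examples.

def build_dlm_policy(tag_key: str = "Backup", tag_value: str = "lrs-vpsx",
                     interval_hours: int = 12, retain_count: int = 14) -> dict:
    """Return a PolicyDetails document for dlm create-lifecycle-policy."""
    return {
        "ResourceTypes": ["VOLUME"],
        # Snapshot every EBS volume that carries this tag.
        "TargetTags": [{"Key": tag_key, "Value": tag_value}],
        "Schedules": [{
            "Name": "lrs-operational-data",
            "CreateRule": {"Interval": interval_hours,
                           "IntervalUnit": "HOURS",
                           "Times": ["03:00"]},
            # Keep the most recent snapshots; older ones are deleted automatically.
            "RetainRule": {"Count": retain_count},
            "CopyTags": True,
        }],
    }

# To create the policy (requires credentials and an IAM role for DLM):
#   import boto3
#   dlm = boto3.client("dlm")
#   dlm.create_lifecycle_policy(
#       ExecutionRoleArn="arn:aws:iam::111122223333:role/AWSDataLifecycleManagerDefaultRole",
#       Description="LRS VPSX/MFI EBS snapshots",
#       State="ENABLED",
#       PolicyDetails=build_dlm_policy(),
#   )
```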

## Tools
<a name="modernize-mainframe-online-printing-workloads-on-aws-by-using-micro-focus-enterprise-server-and-lrs-vpsx-mfi-tools"></a>

**AWS services**
+ [Amazon Elastic Block Store (Amazon EBS)](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonEBS.html) provides block-level storage volumes for use with Amazon EC2 instances. EBS volumes behave like raw, unformatted block devices. You can mount these volumes as devices on your instances.
+ [Amazon Elastic Compute Cloud (Amazon EC2)](https://docs.aws.amazon.com/ec2/) provides scalable computing capacity in the AWS Cloud. You can launch as many virtual servers as you need and quickly scale them up or down.
+ [Amazon Relational Database Service (Amazon RDS)](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Welcome.html) helps you set up, operate, and scale a relational database in the AWS Cloud.
+ [AWS Directory Service for Microsoft Active Directory](https://aws.amazon.com/directoryservice/active-directory/), also known as AWS Managed Microsoft AD, enables your directory-aware workloads and AWS resources to use managed Active Directory in AWS.

**Other tools**
+ [LRS VPSX/MFI (Micro Focus Interface)](https://www.lrsoutputmanagement.com/products/modernization-products/), co-developed by LRS and Micro Focus, captures output from a Micro Focus Enterprise Server JES spool and reliably delivers it to a specified print destination.
+ LRS Directory Information Server (LRS/DIS) is used for authentication and authorization during the print workflow.
+ LRS/Queue is a TCP/IP-based transmission agent that LRS VPSX/MFI uses to collect or capture print data through the Micro Focus online Print Exit programming interface.
+ [Micro Focus Enterprise Server](https://www.microfocus.com/documentation/enterprise-developer/ed60/ES-WIN/GUID-A2F23243-962B-440A-A071-480082DF47E7.html) is an application deployment environment for mainframe applications. It provides the execution environment for mainframe applications that are migrated or created by using any version of Micro Focus Enterprise Developer.

## Epics
<a name="modernize-mainframe-online-printing-workloads-on-aws-by-using-micro-focus-enterprise-server-and-lrs-vpsx-mfi-epics"></a>

### Set up Micro Focus Enterprise Server on Amazon EC2 and deploy a mainframe online application
<a name="set-up-micro-focus-enterprise-server-on-amazon-ec2-and-deploy-a-mainframe-online-application"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Set up Micro Focus Enterprise Server and deploy a demo online application. | Set up Micro Focus Enterprise Server on Amazon EC2, and then deploy the Micro Focus Account Demo application (ACCT Demo) on Amazon EC2 by following the instructions from [Tutorial: CICS Support](https://www.microfocus.com/documentation/enterprise-developer/ed70/ED-Eclipse/GMWALK00.html) in the Micro Focus documentation. The ACCT Demo application is a mainframe online (CICS) application that creates and then initiates print output. | Cloud architect | 

### Set up an LRS print server on Amazon EC2
<a name="set-up-an-lrs-print-server-on-amazon-ec2"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Get an LRS product license for printing. | To get an LRS product license for LRS VPSX/MFI, LRS/Queue, and LRS/DIS, contact the [LRS Output Management team](https://www.lrsoutputmanagement.com/about-us/contact-us/). You must provide the host names of the EC2 instances where the LRS products will be installed. | Build lead | 
| Create an Amazon EC2 Windows instance to install LRS VPSX/MFI. | Launch an Amazon EC2 Windows instance by following the instructions from [Step 1: Launch an instance](https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/EC2_GetStarted.html#ec2-launch-instance) in the Amazon EC2 documentation. Your instance must meet the following hardware and software requirements for LRS VPSX/MFI: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-online-printing-workloads-on-aws-by-using-micro-focus-enterprise-server-and-lrs-vpsx-mfi.html) The preceding hardware and software requirements are intended for a small printer fleet (around 500–1,000 printers). To get the full requirements, consult with your LRS and AWS contacts. When you create your Windows instance, do the following: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-online-printing-workloads-on-aws-by-using-micro-focus-enterprise-server-and-lrs-vpsx-mfi.html) | Cloud architect | 
| Install LRS VPSX/MFI on the EC2 instance. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-online-printing-workloads-on-aws-by-using-micro-focus-enterprise-server-and-lrs-vpsx-mfi.html) | Cloud architect | 
| Install LRS/Queue. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-online-printing-workloads-on-aws-by-using-micro-focus-enterprise-server-and-lrs-vpsx-mfi.html) | Cloud architect | 
| Install LRS/DIS. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-online-printing-workloads-on-aws-by-using-micro-focus-enterprise-server-and-lrs-vpsx-mfi.html) | Cloud architect | 
| Create a target group and register LRS VPSX/MFI EC2 as the target. | Create a target group by following the instructions from [Create a target group for your Network Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/create-target-group.html) in the Elastic Load Balancing documentation. When you create the target group, do the following: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-online-printing-workloads-on-aws-by-using-micro-focus-enterprise-server-and-lrs-vpsx-mfi.html) | Cloud architect | 
| Create a Network Load Balancer. | Follow the instructions from [Create a Network Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/create-network-load-balancer.html) in the Elastic Load Balancing documentation. Your Network Load Balancer routes traffic from Micro Focus Enterprise Server to LRS VPSX/MFI EC2. When you create the Network Load Balancer, do the following on the **Listeners and Routing** page: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-online-printing-workloads-on-aws-by-using-micro-focus-enterprise-server-and-lrs-vpsx-mfi.html) | Cloud architect | 

### Integrate Micro Focus Enterprise Server with LRS VPSX/MFI and LRS/Queue
<a name="integrate-micro-focus-enterprise-server-with-lrs-vpsx-mfi-and-lrs-queue"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Configure Micro Focus Enterprise Server for LRS/Queue integration. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-online-printing-workloads-on-aws-by-using-micro-focus-enterprise-server-and-lrs-vpsx-mfi.html) | Cloud architect | 
| Make CICS Print Exit (DFHUPRNT) available to Micro Focus Enterprise Server initialization. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-online-printing-workloads-on-aws-by-using-micro-focus-enterprise-server-and-lrs-vpsx-mfi.html) **Validate that Micro Focus Enterprise Server has detected CICS Print Exit (DFHUPRNT)** [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-online-printing-workloads-on-aws-by-using-micro-focus-enterprise-server-and-lrs-vpsx-mfi.html) | Cloud architect | 
| Define the CICS printer's terminal IDs (TERMIDs) in Micro Focus Enterprise Server. | **Enable 3270 printing in Micro Focus Enterprise Server** [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-online-printing-workloads-on-aws-by-using-micro-focus-enterprise-server-and-lrs-vpsx-mfi.html) **Define the CICS printer's terminal in Micro Focus Enterprise Server** [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-online-printing-workloads-on-aws-by-using-micro-focus-enterprise-server-and-lrs-vpsx-mfi.html) | Cloud architect | 

### Set up printers and print users in Micro Focus Enterprise Server and LRS VPSX/MFI
<a name="set-up-printers-and-print-users-in-micro-focus-enterprise-server-and-lrs-vpsx-mfi"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a print queue in LRS VPSX/MFI. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-online-printing-workloads-on-aws-by-using-micro-focus-enterprise-server-and-lrs-vpsx-mfi.html) The print queue must be equivalent to the print TERMIDs created in Micro Focus Enterprise Server. | Cloud architect | 
| Create a print user in LRS VPSX/MFI. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-online-printing-workloads-on-aws-by-using-micro-focus-enterprise-server-and-lrs-vpsx-mfi.html) | Cloud architect | 

### Set up print authentication and authorization
<a name="set-up-print-authentication-and-authorization"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an AWS Managed Microsoft AD domain with users and groups. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-online-printing-workloads-on-aws-by-using-micro-focus-enterprise-server-and-lrs-vpsx-mfi.html) | Cloud architect | 
| Join LRS VPSX/MFI EC2 to an AWS Managed Microsoft AD domain. | Join LRS VPSX/MFI EC2 to your AWS Managed Microsoft AD domain [automatically](https://aws.amazon.com/premiumsupport/knowledge-center/ec2-systems-manager-dx-domain/) (AWS Knowledge Center documentation) or [manually](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/join_windows_instance.html) (AWS Directory Service documentation). | Cloud architect | 
| Configure and integrate LRS/DIS with AWS Managed Microsoft AD. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-online-printing-workloads-on-aws-by-using-micro-focus-enterprise-server-and-lrs-vpsx-mfi.html) | Cloud architect | 

### Test an online print workflow
<a name="test-an-online-print-workflow"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Initiate an online print request from the Micro Focus ACCT Demo app. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-online-printing-workloads-on-aws-by-using-micro-focus-enterprise-server-and-lrs-vpsx-mfi.html) The "Print Request Scheduled" message appears at the bottom of the screen. This confirms that an online print request was generated from the ACCT Demo application and sent to LRS VPSX/MFI for print processing. | Cloud architect | 
| Check the print output in LRS VPSX/MFI. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/modernize-mainframe-online-printing-workloads-on-aws-by-using-micro-focus-enterprise-server-and-lrs-vpsx-mfi.html) You can now see the print output of an account statement with columns for Account No., SURNAME, FIRST, ADDRESS, TELEPHONE, No. Cards Issued, Date issued, Amount, and Balance. For an example, see the **online print output** attachment for this pattern. | Test engineer | 

## Related resources
<a name="modernize-mainframe-online-printing-workloads-on-aws-by-using-micro-focus-enterprise-server-and-lrs-vpsx-mfi-resources"></a>
+ [LRS Output Modernization](https://www.lrsoutputmanagement.com/products/modernization-products) (LRS documentation)
+ [VTAM networking concepts](https://www.ibm.com/docs/en/zos/2.1.0?topic=guide-vtam-networking-concepts) (IBM documentation)
+ [Summary of logical unit (LU) types](https://www.ibm.com/docs/en/wsfz-and-o/1.1?topic=installation-summary-logical-unit-lu-types) (IBM documentation)
+ [ANSI and machine carriage controls](https://www.ibm.com/docs/en/cmofz/9.5.0?topic=tips-ansi-machine-carriage-controls) (IBM documentation)
+ [Empowering Enterprise Mainframe Workloads on AWS with Micro Focus](https://aws.amazon.com/blogs/apn/empowering-enterprise-grade-mainframe-workloads-on-aws-with-micro-focus/) (AWS Partner Network Blog)
+ [Build a Micro Focus Enterprise Server PAC with Amazon EC2 Auto Scaling and Systems Manager](https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/build-a-micro-focus-enterprise-server-pac-with-amazon-ec2-auto-scaling-and-systems-manager.html) (AWS Prescriptive Guidance documentation)
+ [Advanced Function Presentation (AFP) data stream](https://www.ibm.com/docs/en/i/7.4?topic=streams-advanced-function-presentation-data-stream) (IBM documentation)
+ [Line Conditioned Data Stream (LCDS)](https://www.compart.com/en/lcds) (Compart documentation)

## Additional information
<a name="modernize-mainframe-online-printing-workloads-on-aws-by-using-micro-focus-enterprise-server-and-lrs-vpsx-mfi-additional"></a>

**Considerations**

During your modernization journey, you might encounter a wide variety of configurations for mainframe online processes and the output they generate. Every customer and vendor customizes the mainframe platform to particular requirements that directly affect printing. For example, your current platform might incorporate IBM Advanced Function Presentation (AFP) or the Xerox Line Condition Data Stream (LCDS) into the current workflow. Additionally, [mainframe carriage control characters](https://www.ibm.com/docs/en/cmofz/9.5.0?topic=tips-ansi-machine-carriage-controls) and [channel command words](https://www.ibm.com/docs/en/zos/2.3.0?topic=devices-channel-command-words) can affect the look of the printed page and might need special handling. As part of the modernization planning process, we recommend that you assess and understand the configurations in your specific print environment.
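As an illustration of why carriage controls need assessment, the following minimal sketch interprets the ANSI (ASA) control character in column 1 of each print record. It is a simplified model for analysis only, not a substitute for a print server's handling; overprint, for example, cannot be represented in plain text and is only approximated.

```python
# Minimal model of ANSI (ASA) carriage control: the first byte of each print
# record says how to advance the paper before printing the rest of the record.

def split_asa_record(record: str) -> tuple[str, str]:
    """Split a print record into its ASA control character and its data."""
    return (record[0], record[1:]) if record else (" ", "")

def render(records: list[str]) -> str:
    """Render ASA-controlled records as plain text, using "\\f" for page ejects."""
    out: list[str] = []
    for rec in records:
        control, data = split_asa_record(rec)
        if control == "1":          # skip to a new page before printing
            out.append("\f")
        elif control == "0":        # double space: one extra blank line
            out.append("\n")
        elif control == "-":        # triple space: two extra blank lines
            out.append("\n\n")
        elif control == "+":        # overprint: approximate by joining lines,
            if out:                 # because plain text cannot overstrike
                out[-1] = out[-1].rstrip("\n")
        # " " (single space) needs no extra advance; each line ends with "\n".
        out.append(data + "\n")
    return "".join(out)

print(render(["1ACCOUNT STATEMENT", " line one", "0line two"]))
```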

**Print data capture**

This section summarizes the CICS application programming methods that you can use in an IBM mainframe environment for printing. LRS VPSX/MFI components provide techniques that allow the same application programs to create print data in the same way. The following table describes how each application programming method is supported in a modernized CICS application that runs on AWS in Micro Focus Enterprise Server with an LRS VPSX/MFI print server.


| Method | Description | Support for the method in a modernized environment | 
| --- |--- |--- |
| EXEC CICS SEND TEXT..    or EXEC CICS SEND MAP..  | These CICS and VTAM methods are responsible for creating and delivering 3270/SCS print data streams to LUTYPE0, LUTYPE1, and LUTYPE3 print devices.  | A Micro Focus online Print Exit (DFHUPRNT) application program interface (API) enables print data to be processed by VPSX/MFI when 3270/SCS print data streams are created by using either of these methods.  | 
| EXEC CICS SEND TEXT..    or EXEC CICS SEND MAP.. (with third-party IBM mainframe software) | The CICS and VTAM methods are responsible for creating and delivering 3270/SCS print data streams to LUTYPE0, LUTYPE1, and LUTYPE3 print devices. Third-party software products intercept the print data, convert the data to standard print format data with an ASA/MCH control character, and place the data on the JES spool to be processed by mainframe-based printing systems that use JES.  | A Micro Focus online Print Exit (DFHUPRNT) API enables print data to be processed by VPSX/MFI when 3270/SCS print data streams are created by using either of these methods.  | 
| EXEC CICS SPOOLOPEN  | This method is used by CICS application programs to write data directly to the JES spool. The data then becomes available to be processed by mainframe-based printing systems that use JES.  | Micro Focus Enterprise Server spools the data to the Enterprise Server spool where it can be processed by the VPSX/MFI Batch Print Exit (LRSPRTE6) that spools the data to VPSX.  | 
| DRS/API | An LRS-supplied programmatic interface is used for writing print data to JES.  | VPSX/MFI supplies a replacement interface that spools the print data directly to VPSX.  | 

**Printer fleet health checks**

LRS VPSX/MFI (LRS LoadX) can perform deep dive health checks, including device management and operational optimization. Device management can detect failure in a printer device and route the print request to a healthy printer. For more information about deep dive health checks for printer fleets, see the LRS documentation that’s included with your product license.

**Print authentication and authorization**

LRS/DIS enables LRS applications to authenticate user IDs and passwords by using Microsoft Active Directory or an LDAP server. In addition to basic print authorization, LRS/DIS can also apply granular-level print security controls in the following use cases:
+ Manage who can browse the printer job.
+ Manage the browsing level of other users' jobs.
+ Manage operational tasks by applying command-level security to actions such as hold/release, purge, modify, copy, and reroute. Security can be set up at either the user ID or group level (similar to an Active Directory or LDAP group).

## Attachments
<a name="attachments-924cdae7-9265-4fc9-8e5e-bb2da5368e7e"></a>

To access additional content that is associated with this document, unzip the following file: [attachment.zip](samples/p-attach/924cdae7-9265-4fc9-8e5e-bb2da5368e7e/attachments/attachment.zip)

# Move mainframe files directly to Amazon S3 using Transfer Family
<a name="move-mainframe-files-directly-to-amazon-s3-using-transfer-family"></a>

*Luis Gustavo Dantas, Amazon Web Services*

## Summary
<a name="move-mainframe-files-directly-to-amazon-s3-using-transfer-family-summary"></a>

As part of the modernization journey, you might face the challenge of transferring files between your on-premises servers and the Amazon Web Services (AWS) Cloud. Transferring data from mainframes can be a significant challenge because mainframes typically can’t access modern data stores such as Amazon Simple Storage Service (Amazon S3), Amazon Elastic Block Store (Amazon EBS), or Amazon Elastic File System (Amazon EFS).

Many customers use intermediate staging resources, such as on-premises Linux, Unix, or Windows servers, to transfer files to the AWS Cloud. You can avoid this indirect method by using AWS Transfer Family with the Secure Shell (SSH) File Transfer Protocol (SFTP) to upload mainframe files directly to Amazon S3.

## Prerequisites and limitations
<a name="move-mainframe-files-directly-to-amazon-s3-using-transfer-family-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ A virtual private cloud (VPC) with a subnet that’s reachable by your legacy platform
+ A Transfer Family endpoint for your VPC
+ Mainframe Virtual Storage Access Method (VSAM) files converted to sequential, [fixed-length files](https://www.ibm.com/docs/en/zos/2.1.0?topic=reports-converting-vb-fb) (IBM documentation)

**Limitations**
+ SFTP transfers files in binary mode by default, which means that files are uploaded to Amazon S3 with EBCDIC encoding preserved. If your file doesn't contain binary or packed data, then you can use the **sftp** [ascii subcommand](https://www.ibm.com/docs/en/zos/2.3.0?topic=version-what-zos-openssh-supports) (IBM documentation) to convert your files to text during the transfer.
+ You must [unpack mainframe files](https://apg-library.amazonaws.com/content/f5907bfe-7dff-4cd0-8523-57015ad48c4b) (AWS Prescriptive Guidance) that contain packed and binary content to use these files in your target environment.
+ Amazon S3 objects can range in size from a minimum of 0 bytes to a maximum of 5 TB. For more information about Amazon S3 capabilities, see [Amazon S3 FAQs](https://aws.amazon.com/s3/faqs/?nc1=h_ls).
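For example, when you transfer a plain-text dataset, the **sftp** client can run the `ascii` subcommand before the `put` from a batch file supplied with `sftp -b`. The following fragment is a minimal sketch with a placeholder file name:

```
ascii
put report.txt
```

Running the client with `sftp -b <batch-file> user@host` executes both commands in one session, so the file is translated from EBCDIC to ASCII during the upload.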

## Architecture
<a name="move-mainframe-files-directly-to-amazon-s3-using-transfer-family-architecture"></a>

**Source technology stack**
+ Job control language (JCL)
+ z/OS Unix shell and ISPF
+ SFTP
+ VSAM and flat files

**Target technology stack**
+ Transfer Family
+ Amazon S3
+ Amazon Virtual Private Cloud (Amazon VPC)

**Target architecture**

The following diagram shows a reference architecture for using Transfer Family with SFTP to upload mainframe files directly to an S3 bucket.

![\[Using Transfer Family with SFTP to upload mainframe files directly to an S3 bucket\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/1f4fa1fd-b681-41bc-81d8-d556426b14c2/images/110491d5-b58d-4451-8de9-e742756bb192.png)


The diagram shows the following workflow:

1. You use a JCL job to transfer your mainframe files from the legacy mainframe to the AWS Cloud through Direct Connect.

1. Direct Connect enables your network traffic to remain on the AWS global network and bypass the public internet. Direct Connect also provides consistent, high network bandwidth, with connections ranging from 50 Mbps to 100 Gbps.

1. The VPC endpoint enables connections between your VPC resources and the supported services without using the public internet. For high availability, access to Transfer Family and Amazon S3 takes place through elastic network interfaces in private subnets across two Availability Zones.

1. Transfer Family authenticates users and uses SFTP to receive your files from the legacy environment and move them to an S3 bucket.

**Automation and scale**

After the Transfer Family service is in place, you can transfer an unlimited number of files from the mainframe to Amazon S3 by using a JCL job as the SFTP client. You can also automate the file transfer by using a mainframe batch job scheduler to run the SFTP jobs when you’re ready to transfer the mainframe files.

## Tools
<a name="move-mainframe-files-directly-to-amazon-s3-using-transfer-family-tools"></a>
+ [Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) is a cloud-based object storage service that helps you store, protect, and retrieve any amount of data.
+ [Amazon Virtual Private Cloud (Amazon VPC)](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html) helps you launch AWS resources into a virtual network that you’ve defined. This virtual network resembles a traditional network that you’d operate in your own data center, with the benefits of using the scalable infrastructure of AWS.
+ [AWS Transfer Family](https://docs.aws.amazon.com/transfer/latest/userguide/what-is-aws-transfer-family.html) enables you to securely scale your recurring business-to-business file transfers to Amazon S3 and Amazon EFS by using SFTP, FTPS, and FTP protocols.

## Epics
<a name="move-mainframe-files-directly-to-amazon-s3-using-transfer-family-epics"></a>

### Create the S3 bucket and the access policy
<a name="create-the-s3-bucket-and-the-access-policy"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the S3 bucket. | [Create an S3 bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/create-bucket-overview.html) to host the files that you transfer from your legacy environment. | General AWS | 
| Create the IAM role and policy. | Transfer Family uses your AWS Identity and Access Management (IAM) role to grant access to the S3 bucket that you created earlier. [Create an IAM role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-service.html) that includes the following [IAM policy](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create.html):<pre>{<br />    "Version": "2012-10-17",<br />    "Statement": [<br />        {<br />            "Sid": "UserFolderListing",<br />            "Action": [<br />                "s3:ListBucket",<br />                "s3:GetBucketLocation"<br />            ],<br />            "Effect": "Allow",<br />            "Resource": [<br />                "arn:aws:s3:::<your-bucket-name>"<br />            ]<br />        },<br />        {<br />            "Sid": "HomeDirObjectAccess",<br />            "Effect": "Allow",<br />            "Action": [<br />                "s3:PutObject",<br />                "s3:GetObjectAcl",<br />                "s3:GetObject",<br />                "s3:DeleteObjectVersion",<br />                "s3:DeleteObject",<br />                "s3:PutObjectAcl",<br />                "s3:GetObjectVersion"<br />            ],<br />            "Resource": "arn:aws:s3:::<your-bucket-name>/*"<br />        }<br />    ]<br />}</pre>You must choose the Transfer use case when you create the IAM role. | General AWS | 

### Define the transfer service
<a name="define-the-transfer-service"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the SFTP server. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/move-mainframe-files-directly-to-amazon-s3-using-transfer-family.html) For more information about how to set up an SFTP server, see [Create an SFTP-enabled server](https://docs.aws.amazon.com/transfer/latest/userguide/create-server-sftp.html) (AWS Transfer Family User Guide). | General AWS | 
| Get the server address. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/move-mainframe-files-directly-to-amazon-s3-using-transfer-family.html) | General AWS | 
| Create the SFTP client key pair. | Create an SSH key pair for either [Microsoft Windows](https://docs.aws.amazon.com/transfer/latest/userguide/key-management.html#windows-ssh) or [macOS/Linux/UNIX](https://docs.aws.amazon.com/transfer/latest/userguide/key-management.html#macOS-linux-unix-ssh). | General AWS, SSH | 
| Create the SFTP user. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/move-mainframe-files-directly-to-amazon-s3-using-transfer-family.html) | General AWS | 

### Transfer the mainframe file
<a name="transfer-the-mainframe-file"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Send the SSH private key to the mainframe. | Use SFTP or SCP to send the SSH private key to the legacy environment. SFTP example:<pre>sftp [USERNAME@mainframeIP]<br />[password]<br />cd [/u/USERNAME]<br />put [your-key-pair-file]</pre>SCP example:<pre>scp [your-key-pair-file] [USERNAME@MainframeIP]:/[u/USERNAME]</pre>Next, store the SSH key in the z/OS Unix file system under the user name that will later run the file transfer batch job (for example, `/u/CONTROLM`). For more information about the z/OS Unix shell, see [An introduction to the z/OS shells](https://www.ibm.com/docs/en/zos/2.2.0?topic=shells-introduction-zos) (IBM documentation). | Mainframe, z/OS Unix shell, FTP, SCP | 
| Create the JCL SFTP client. | Because mainframes don't have a native SFTP client, you must use the BPXBATCH utility to run the SFTP client from the z/OS Unix shell. In the ISPF editor, create the JCL SFTP client. For example:<pre>//JOBNAM JOB ...<br />//**********************************************************************<br />//SFTP EXEC PGM=BPXBATCH,REGION=0M <br />//STDPARM DD * <br />SH cp "//'MAINFRAME.FILE.NAME'" filename.txt; <br />echo 'put filename.txt' > uplcmd; <br />sftp -b uplcmd -i ssh_private_key_file ssh_username@<transfer service ip or DNS>; <br />//SYSPRINT DD SYSOUT=* <br />//STDOUT DD SYSOUT=* <br />//STDENV DD * <br />//STDERR DD SYSOUT=*</pre>For more information about how to run a command in the z/OS Unix shell, see [The BPXBATCH utility](https://www.ibm.com/docs/en/zos/2.2.0?topic=ispf-bpxbatch-utility) (IBM documentation). For more information about how to create or edit JCL jobs in z/OS, see [What is ISPF?](https://www.ibm.com/docs/en/zos-basic-skills?topic=interfaces-what-is-ispf) and [The ISPF editor](https://www.ibm.com/docs/en/zos-basic-skills?topic=ispf-editor) (IBM documentation). | JCL, Mainframe, z/OS Unix shell | 
| Run the JCL SFTP client. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/move-mainframe-files-directly-to-amazon-s3-using-transfer-family.html) For more information about how to check the activity of batch jobs, see [z/OS SDSF User's Guide](https://www.ibm.com/docs/en/zos/2.4.0?topic=sdsf-zos-users-guide) (IBM documentation). | Mainframe, JCL, ISPF | 
| Validate the file transfer. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/move-mainframe-files-directly-to-amazon-s3-using-transfer-family.html) | General AWS | 
| Automate the JCL SFTP client. | Use a job scheduler to automatically start the JCL SFTP client. You can use mainframe job schedulers, such as [BMC Control-M](https://www.bmcsoftware.pt/it-solutions/control-m.html) or [CA Workload Automation](https://www.broadcom.com/products/mainframe/workload-automation/ca7), to automate batch jobs for file transfers based on time and other batch job dependencies. | Job scheduler | 

## Related resources
<a name="move-mainframe-files-directly-to-amazon-s3-using-transfer-family-resources"></a>
+ [How AWS Transfer Family works](https://docs.aws.amazon.com/transfer/latest/userguide/how-aws-transfer-works.html)

# Optimize the performance of your AWS Blu Age modernized application
<a name="optimize-performance-aws-blu-age-modernized-application"></a>

*Vishal Jaswani, Manish Roy, and Himanshu Sah, Amazon Web Services*

## Summary
<a name="optimize-performance-aws-blu-age-modernized-application-summary"></a>

Mainframe applications that are modernized with AWS Blu Age require functional and performance equivalence testing before they’re deployed to production. In performance tests, modernized applications can perform more slowly than legacy systems, particularly in complex batch jobs. This disparity exists because mainframe applications are monolithic, whereas modern applications use multitier architectures. This pattern presents optimization techniques to address these performance gaps for applications that are modernized by using [automated refactoring with AWS Blu Age](https://docs.aws.amazon.com/m2/latest/userguide/refactoring-m2.html).

The pattern uses the AWS Blu Age modernization framework with native Java and database tuning capabilities to identify and resolve performance bottlenecks. The pattern describes how you can use profiling and monitoring to identify performance issues with metrics such as SQL execution times, memory utilization, and I/O patterns. It then explains how you can apply targeted optimizations, including database query restructuring, caching, and business logic refinement.

The improvements in batch processing times and system resource utilization help you match mainframe performance levels in your modernized systems. This approach maintains functional equivalence during transition to modern cloud-based architectures.

To use this pattern, set up your system and identify performance hotspots by following the instructions in the [Epics](#optimize-performance-aws-blu-age-modernized-application-epics) section, and apply the optimization techniques that are covered in detail in the [Architecture](#optimize-performance-aws-blu-age-modernized-application-architecture) section.

## Prerequisites and limitations
<a name="optimize-performance-aws-blu-age-modernized-application-prereqs"></a>

**Prerequisites**
+ An AWS Blu Age modernized application
+ A [JProfiler license](https://www.ej-technologies.com/store/jprofiler)
+ Administrative privileges to install database client and profiling tools
+ AWS Blu Age [Level 3 certification](https://bluinsights.aws/certification/)
+ Intermediate-level understanding of the AWS Blu Age framework, generated code structure, and Java programming

**Limitations**

The following optimization capabilities and features are outside the scope of this pattern:
+ Network latency optimization between application tiers
+ Infrastructure-level optimizations through Amazon Elastic Compute Cloud (Amazon EC2) instance types and storage optimization
+ Concurrent user load testing and stress testing

**Product versions**
+ JProfiler version 13.0 or later (we recommend the latest version)
+ pgAdmin version 8.14 or later

## Architecture
<a name="optimize-performance-aws-blu-age-modernized-application-architecture"></a>

This pattern sets up a profiling environment for an AWS Blu Age application by using tools such as JProfiler and pgAdmin. It supports optimization through the DAOManager and SQLExecutionBuilder APIs provided by AWS Blu Age.

The remainder of this section provides detailed information and examples for identifying performance hotspots and optimization strategies for your modernized applications. The steps in the [Epics](#optimize-performance-aws-blu-age-modernized-application-epics) section refer back to this information for further guidance.

**Identifying performance hotspots in modernized mainframe applications**

In modernized mainframe applications, *performance hotspots* are specific areas in the code that cause significant slowdowns or inefficiencies. These hotspots are often caused by the architectural differences between mainframe and modernized applications. To identify these performance bottlenecks and optimize the performance of your modernized application, you can use three techniques: SQL logging, a query `EXPLAIN` plan, and JProfiler analysis.

*Hotspot identification technique: SQL logging*

Modern Java applications, including those that have been modernized by using AWS Blu Age, have built-in capabilities to log SQL queries. You can enable specific loggers in AWS Blu Age projects to track and analyze the SQL statements executed by your application. This technique is particularly useful for identifying inefficient database access patterns, such as excessive individual queries or poorly structured database calls, that could be optimized through batching or query refinement.

To implement SQL logging in your AWS Blu Age modernized application, set the log level to `DEBUG` for SQL statements in the `application.properties` file to capture query execution details:

```
level.org.springframework.beans.factory.support.DefaultListableBeanFactory : WARN
level.com.netfective.bluage.gapwalk.runtime.sort.internal: WARN
level.org.springframework.jdbc.core.StatementCreatorUtils: DEBUG
level.com.netfective.bluage.gapwalk.rt.blu4iv.dao: DEBUG
level.com.fiserv.signature: DEBUG
level.com.netfective.bluage.gapwalk.database.support.central: DEBUG
level.com.netfective.bluage.gapwalk.rt.db.configuration.DatabaseConfiguration: DEBUG
level.com.netfective.bluage.gapwalk.rt.db.DatabaseInteractionLoggerUtils: DEBUG
level.com.netfective.bluage.gapwalk.database.support.AbstractDatabaseSupport: DEBUG
level.com.netfective.bluage.gapwalk.rt: DEBUG
```

Monitor high-frequency and slow-performing queries by using the logged data to identify optimization targets. Focus on queries within batch processes because they typically have the highest performance impact.
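After logging is enabled, even a short script can surface the high-frequency statements. The following sketch assumes a hypothetical log-line format (`Executing SQL: …`); it is not the exact AWS Blu Age log output, so adjust the regular expression to match your logs:

```python
import re
from collections import Counter

# Sample DEBUG log lines (hypothetical format and table names).
log_lines = [
    "2025-01-01 10:00:00 DEBUG Executing SQL: SELECT * FROM ACCT WHERE ID = ?",
    "2025-01-01 10:00:01 DEBUG Executing SQL: SELECT * FROM ACCT WHERE ID = ?",
    "2025-01-01 10:00:02 DEBUG Executing SQL: UPDATE ACCT SET BAL = ? WHERE ID = ?",
]

# Extract the SQL text after the marker and count occurrences per statement.
pattern = re.compile(r"Executing SQL: (.+)$")
counts = Counter(
    m.group(1) for line in log_lines if (m := pattern.search(line))
)

# The most frequent statements are the first optimization candidates.
for sql, n in counts.most_common():
    print(n, sql)
```

In practice you would read the lines from the application log file instead of a list, but the frequency count is the same.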

*Hotspot identification technique: Query EXPLAIN plan*

This method uses the query planning capabilities of relational database management systems. You can use commands such as `EXPLAIN` in PostgreSQL or MySQL, or `EXPLAIN PLAN` in Oracle, to examine how your database intends to run a given query. The output of these commands provides valuable insights into the query execution strategy, including whether indexes will be used or full table scans will be performed. This information is critical for optimizing query performance, especially in cases where proper indexing can significantly reduce execution time.

Extract the most repetitive SQL queries from the application logs and analyze the execution path of slow-performing queries by using the `EXPLAIN` command that’s specific to your database. Here’s an example for a PostgreSQL database.

Query:

```
SELECT * FROM tenk1 WHERE unique1 < 100;
```

`EXPLAIN` command:

```
EXPLAIN SELECT * FROM tenk1 WHERE unique1 < 100;
```

Output:

```
Bitmap Heap Scan on tenk1  (cost=5.06..224.98 rows=100 width=244)
  Recheck Cond: (unique1 < 100)
  ->  Bitmap Index Scan on tenk1_unique1  (cost=0.00..5.04 rows=100 width=0)
        Index Cond: (unique1 < 100)
```

You can interpret the `EXPLAIN` output as follows:
+ Read the `EXPLAIN` plan from the innermost to the outermost (bottom to top) operations.
+ Look for key terms. For example, `Seq Scan` indicates full table scan and `Index Scan` shows index usage.
+ Check cost values: The first number is the startup cost, and the second number is the total cost.
+ See the `rows` value for the estimated number of output rows.

In this example, the query engine uses an index scan to find the matching rows, and then fetches only those rows (`Bitmap Heap Scan`). This is more efficient than scanning the entire table, despite the higher cost of individual row access.

A sequential (full) table scan in the output of an `EXPLAIN` plan often indicates a missing index. In that case, the optimization is to create an appropriate index on the filtered column.
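You can observe the effect of an index on a plan by experimenting with SQLite from the Python standard library. This is an illustration only: the plan text differs from PostgreSQL's `EXPLAIN` output, but the scan-versus-index distinction is the same. The table and column names mirror the example above:

```python
import sqlite3

# Build a small table with 10,000 rows in an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tenk1 (unique1 INTEGER, payload TEXT)")
conn.executemany(
    "INSERT INTO tenk1 VALUES (?, ?)",
    [(i, f"row-{i}") for i in range(10000)],
)

def plan(sql):
    # Each EXPLAIN QUERY PLAN row's last column is the human-readable step.
    return [row[-1] for row in conn.execute("EXPLAIN QUERY PLAN " + sql)]

query = "SELECT * FROM tenk1 WHERE unique1 < 100"

before = plan(query)   # without an index: a full table scan, e.g. 'SCAN tenk1'
conn.execute("CREATE INDEX tenk1_unique1 ON tenk1 (unique1)")
after = plan(query)    # with the index: a search that uses tenk1_unique1

print(before)
print(after)
```

The same before-and-after comparison works against your application database with the `EXPLAIN` syntax of that engine.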

*Hotspot identification technique: JProfiler analysis*

JProfiler is a comprehensive Java profiling tool that helps you resolve performance bottlenecks by identifying slow database calls and CPU-intensive calls. This tool is particularly effective in identifying slow SQL queries and inefficient memory usage.

Example analysis for query:

```
select evt. com.netfective.bluage.gapwalk.rt.blu4iv.dao.Blu4ivTableManager.queryNonTrasactional
```

The JProfiler Hot Spots view provides the following information:
+ **Time** column
  + Shows total execution duration (for example, 329 seconds)
  + Displays percentage of total application time (for example, 58.7%)
  + Helps identify most time-consuming operations
+ **Average Time** column
  + Shows per-execution duration (for example, 2,692 microseconds)
  + Indicates individual operation performance
  + Helps spot slow individual operations
+ **Events** column
  + Shows execution count (for example, 122,387 times)
  + Indicates operation frequency
  + Helps identify frequently called methods

For the example results:
+ High frequency: 122,387 executions indicate potential for optimization
+ Performance concern: 2,692 microseconds for average time suggests inefficiency
+ Critical impact: 58.7% of total time indicates major bottleneck
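The three columns are related by simple arithmetic, which is a useful sanity check when you read a Hot Spots view:

```python
# Total time ≈ events × average time, using the example figures above.
events = 122_387            # Events column
avg_us = 2_692              # Average Time column, in microseconds
total_s = events * avg_us / 1_000_000
print(round(total_s))       # ≈ 329 seconds, matching the Time column
```

If the product of the Events and Average Time columns doesn't roughly match the Time column, you are likely reading aggregated or filtered data.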

JProfiler can analyze your application's runtime behavior to reveal hotspots that might not be apparent through static code analysis or SQL logging. These metrics help you identify the operations that need optimization and determine the optimization strategy that would be most effective. For more information about JProfiler features, see the [JProfiler documentation](https://www.ej-technologies.com/resources/jprofiler/help/doc/main/introduction.html).

When you use these three techniques (SQL logging, query `EXPLAIN` plan, and JProfiler) in combination, you can gain a holistic view of your application's performance characteristics. By identifying and addressing the most critical performance hotspots, you can bridge the performance gap between your original mainframe application and your modernized cloud-based system.

After you identify your application’s performance hotspots, you can apply optimization strategies, which are explained in the next section.

**Optimization strategies for mainframe modernization**

This section outlines key strategies for optimizing applications that have been modernized from mainframe systems. It focuses on three strategies: using existing APIs, implementing effective caching, and optimizing business logic.

*Optimization strategy: Using existing APIs*

AWS Blu Age provides several powerful APIs in DAO interfaces that you can use to optimize performance. Two primary interfaces—DAOManager and SQLExecutionBuilder—offer capabilities for enhancing application performance.

**DAOManager**

DAOManager serves as the primary interface for database operations in modernized applications. It offers multiple methods to enhance database operations and improve application performance, particularly for straightforward create, read, update, and delete (CRUD) operations and batch processing.
+ **Use SetMaxResults.** In the DAOManager API, you can use the **SetMaxResults** method to specify the maximum number of records to retrieve in a single database operation. By default, DAOManager retrieves only 10 records at a time, which can lead to multiple database calls when processing large datasets. Use this optimization when your application needs to process a large number of records and is currently making multiple database calls to retrieve them. This is particularly useful in batch processing scenarios where you're iterating through a large dataset. In the following example, the code on the left (before optimization) uses the default data retrieval value of 10 records. The code on the right (after optimization) sets **setMaxResults** to retrieve 100,000 records at a time.  
![\[Example of using SetMaxResults to avoid multiple database calls.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/6b42fafd-1535-416d-8abd-1a5f9007ddba/images/beb9623e-e7a8-45ef-adc6-19a249224b05.png)
**Note**  
Choose larger batch sizes carefully and check object size, because this optimization increases the memory footprint.
+ **Replace SetOnGreatorOrEqual with SetOnEqual.** This optimization involves changing the method you use to set the condition for retrieving records. The **SetOnGreatorOrEqual** method retrieves records that are greater than or equal to a specified value, whereas **SetOnEqual** retrieves only records that exactly match the specified value.

  Use **SetOnEqual** as illustrated in the following code example, when you know that you need exact matches and you're currently using the **SetOnGreatorOrEqual** method followed by **readNextEqual()**. This optimization reduces unnecessary data retrieval.  
![\[Example of using SetOnEqual to retrieve records based on an exact match.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/6b42fafd-1535-416d-8abd-1a5f9007ddba/images/5ce0dac9-f281-4862-a71f-1614493a83f0.png)
+ **Use batch write and update operations.** You can use batch operations to group multiple write or update operations into a single database transaction. This reduces the number of database calls and can significantly improve performance for operations that involve multiple records.

  In the following example, the code on the left performs write operations in a loop, which slows down the application’s performance. You can optimize this code by using a batch write operation: During each iteration of the `WHILE` loop, you add records to a batch until the batch size reaches a predetermined size of 100. You can then flush the batch when it reaches the predetermined batch size, and then flush any remaining records to the database. This is particularly useful in scenarios where you process large datasets that require updates.  
![\[Example of grouping multiple operations into a single database transaction.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/6b42fafd-1535-416d-8abd-1a5f9007ddba/images/e3bd60d4-06f5-4c1c-9cbd-463f6835a1ba.png)
+ **Add indexes.** Adding indexes is a database-level optimization that can significantly improve query performance. An index allows the database to quickly locate rows with a specific column value without scanning the entire table. Use indexing on columns that are frequently used in `WHERE` clauses, `JOIN` conditions, or `ORDER BY` statements. This is particularly important for large tables or when quick data retrieval is crucial.
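The batch write pattern described above (accumulate records, then flush every 100) can be sketched with SQLite from the Python standard library. This illustrates the batching idea only; it is not the AWS Blu Age DAOManager API, and the table name is hypothetical:

```python
import sqlite3

BATCH_SIZE = 100

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tx (id INTEGER, amount REAL)")

pending = []
round_trips = 0   # number of database calls actually issued

def flush():
    # One batched INSERT stands in for a single database round trip.
    global round_trips
    if pending:
        conn.executemany("INSERT INTO tx VALUES (?, ?)", pending)
        pending.clear()
        round_trips += 1

# Add records to the batch; flush whenever it reaches BATCH_SIZE.
for i in range(250):
    pending.append((i, i * 1.5))
    if len(pending) >= BATCH_SIZE:
        flush()

flush()   # flush the remaining records after the loop

count = conn.execute("SELECT COUNT(*) FROM tx").fetchone()[0]
print(count, round_trips)   # 250 rows written in 3 round trips, not 250
```

The same flush-at-threshold structure applies whether the batched call is a JDBC batch, a DAOManager batch operation, or `executemany` as shown here.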

**SQLExecutionBuilder**

SQLExecutionBuilder is a flexible API that you can use to control the SQL queries that are executed, fetch only certain columns, perform `INSERT` by using `SELECT`, and use dynamic table names. In the following example, SQLExecutionBuilder uses a custom query that you define.

![\[Example of using SQLExecutorBuilder with a custom query.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/6b42fafd-1535-416d-8abd-1a5f9007ddba/images/364e9fb1-0cbc-47d0-936d-46fb3b48b608.png)


**Choosing between DAOManager and SQLExecutionBuilder**

The choice between these APIs depends on your specific use case:
+ Use DAOManager when you want AWS Blu Age Runtime to generate the SQL queries instead of writing them yourself.
+ Choose SQLExecutionBuilder when you need to write SQL queries to take advantage of database-specific features or write optimal SQL queries.

*Optimization strategy: Caching*

In modernized applications, implementing effective caching strategies can significantly reduce database calls and improve response times. This helps bridge the performance gap between mainframe and cloud environments.

In AWS Blu Age applications, simple caching implementations use internal data structures such as hash maps or array lists, so you don’t have to set up an external caching solution, which would add cost and require code restructuring. This approach is particularly effective for data that is accessed frequently but changes infrequently. When you implement caching, consider memory constraints and update patterns to ensure that the cached data remains consistent and provides actual performance benefits.

The key to successful caching is identifying the right data to cache. In the following example, the code on the left always reads data from the table, whereas the code on the right reads data from the table when the local hash map doesn’t have a value for a given key. `cacheMap` is a hash map object that’s created in the context of the program and cleared in the cleanup method of the program context.

Caching with DAOManager:

![\[Example of caching optimizations with DAOManager.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/6b42fafd-1535-416d-8abd-1a5f9007ddba/images/4efd3d22-c694-4f7d-a543-2bed341d1651.png)


Caching with SQLExecutionBuilder:

![\[Example of caching optimizations with SQLExecutionBuilder.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/6b42fafd-1535-416d-8abd-1a5f9007ddba/images/c8964804-96eb-4e26-b2bf-8742e62b4c33.png)
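The same read-through caching idea, reduced to a minimal standard-library sketch: the database read is simulated, and in a real AWS Blu Age program the miss path would call DAOManager or SQLExecutionBuilder instead:

```python
db_reads = 0

def read_from_db(key):
    # Simulated database read; stands in for a DAO call.
    global db_reads
    db_reads += 1
    return f"record-for-{key}"

# Analogous to the cacheMap hash map: created in the program context
# and cleared in the program context's cleanup method.
cache_map = {}

def lookup(key):
    # Read from the table only when the cache has no value for this key.
    if key not in cache_map:
        cache_map[key] = read_from_db(key)
    return cache_map[key]

lookup("CUST001")
lookup("CUST001")   # served from the cache, no database read
lookup("CUST002")
print(db_reads)     # 2 database reads for 3 lookups
```

For frequently read, rarely changed reference data, this pattern trades a bounded amount of memory for a large reduction in database calls.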


*Optimization strategy: Business logic optimization*

Business logic optimization focuses on restructuring automatically generated code from AWS Blu Age to better align with modern architecture capabilities. This becomes necessary when the generated code maintains the same logic structure as the legacy mainframe code, which may not be optimal for modern systems. The goal is to improve performance while maintaining functional equivalence with the original application.

This optimization approach goes beyond simple API tweaks and caching strategies. It involves changes to how the application processes data and interacts with the database. Common optimizations include avoiding unnecessary read operations for simple updates, removing redundant database calls, and restructuring data access patterns to better align with modern application architecture. Here are a few examples:
+ **Updating data directly in the database.** Restructure your business logic by using direct SQL updates instead of multiple DAOManager operations with loops. For example, the following code (left side) makes multiple database calls and uses excessive memory: it performs database read and write operations within loops, issues individual updates instead of batch processing, and creates unnecessary objects on each iteration.

  The following optimized code (right side) uses a single direct SQL update operation: one database call instead of many, and no loops, because all rows are updated in a single statement. This optimization improves performance and resource utilization and reduces complexity. When combined with parameterized queries, it also helps prevent SQL injection and benefits from better query plan caching.  
![\[Restructuring code by using direct SQL updates instead of DAOManager operations with loops.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/6b42fafd-1535-416d-8abd-1a5f9007ddba/images/7d0a7879-8db2-4cc5-b41c-ee370b3f22e5.png)
**Note**  
Always use parameterized queries to prevent SQL injection and ensure proper transaction management.
+ **Reducing redundant database calls.** Redundant database calls can significantly impact application performance, particularly when they occur within loops. A simple but effective optimization technique is to avoid repeating the same database query multiple times. The following code comparison demonstrates how moving the `retrieve()` database call outside the loop prevents redundant execution of identical queries, which improves efficiency.  
![\[Moving the retrieve() database call outside the loop to prevent redundant queries.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/6b42fafd-1535-416d-8abd-1a5f9007ddba/images/da9c15f4-bcf1-4827-b91a-73212fe35cca.png)
+ **Reducing database calls by using the SQL `JOIN` clause.** Implement SQLExecutionBuilder to minimize calls to the database. SQLExecutionBuilder provides more control over SQL generation and is particularly useful for complex queries that DAOManager cannot handle efficiently. For example, the following code uses multiple DAOManager calls:

  ```
  List<Employee> employees = daoManager.readAll();
  for(Employee emp : employees) {
      Department dept = deptManager.readById(emp.getDeptId());  // Additional call for each employee
      Project proj = projManager.readById(emp.getProjId());     // Another call for each employee
      processEmployeeData(emp, dept, proj);
  }
  ```

  The optimized code uses a single database call in SQLExecutionBuilder:

  ```
  SQLExecutionBuilder builder = new SQLExecutionBuilder();
  builder.append("SELECT e.*, d.name as dept_name, p.name as proj_name ");  // trailing spaces keep the concatenated fragments valid SQL
  builder.append("FROM employee e ");
  builder.append("JOIN department d ON e.dept_id = d.id ");
  builder.append("JOIN project p ON e.proj_id = p.id ");
  builder.append("WHERE e.status = ?", "ACTIVE");
  
  List<Map<String, Object>> results = builder.execute();  // Single database call
  for(Map<String, Object> result : results) {
      processComplexData(result);
  }
  ```
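The loop-hoisting fix for redundant calls can be sketched as follows. This is illustrative Java, not AWS Blu Age generated code; the `Supplier` stands in for the `retrieve()` database call and is an assumption.

```java
import java.util.List;
import java.util.function.Supplier;

// Sketch of hoisting an invariant database call out of a loop.
// retrieve() returns the same value on every iteration, so calling it
// once before the loop removes N-1 identical queries.
class HoistedRetrieve {

    public static int process(List<String> records, Supplier<String> retrieve) {
        String lookupValue = retrieve.get();   // single call, hoisted out of the loop
        int matches = 0;
        for (String record : records) {
            if (record.startsWith(lookupValue)) {
                matches++;
            }
        }
        return matches;
    }
}
```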

*Using optimization strategies together*

These three strategies work synergistically: APIs provide the tools for efficient data access, caching reduces the need for repeated data retrieval, and business logic optimization ensures that these APIs are used in the most effective way possible. Regular monitoring and adjustment of these optimizations ensure continued performance improvements while maintaining the reliability and functionality of the modernized application. The key to success lies in understanding when and how to apply each strategy based on your application’s characteristics and performance goals.

## Tools
<a name="optimize-performance-aws-blu-age-modernized-application-tools"></a>
+ [JProfiler](https://www.ej-technologies.com/jprofiler) is a Java profiling tool that’s designed for developers and performance engineers. It analyzes Java applications and helps identify performance bottlenecks, memory leaks, and threading issues. JProfiler offers CPU, memory, and thread profiling, as well as database and Java virtual machine (JVM) monitoring, to provide insights into application behavior.
**Note**  
As an alternative to JProfiler, you can use [Java VisualVM](https://visualvm.github.io/). This is a free, open source performance profiling and monitoring tool for Java applications that offers real-time monitoring of CPU usage, memory consumption, thread management, and garbage collection statistics. Because Java VisualVM is a built-in JDK tool, it is more cost-effective than JProfiler for basic profiling needs.
+ [pgAdmin](https://www.pgadmin.org/) is an open source administration and development tool for PostgreSQL. It provides a graphical interface that helps you create, maintain, and use database objects. You can use pgAdmin to perform a wide range of tasks, from writing simple SQL queries to developing complex databases. Its features include a syntax highlighting SQL editor, a server-side code editor, a scheduling agent for SQL, shell, and batch tasks, and support for all PostgreSQL features for both novice and experienced PostgreSQL users.

## Best practices
<a name="optimize-performance-aws-blu-age-modernized-application-best-practices"></a>

Identifying performance hotspots:
+ Document baseline performance metrics before you start optimizations.
+ Set clear performance improvement targets based on business requirements.
+ When benchmarking, disable verbose logging, because it can affect performance.
+ Set up a performance test suite and run it periodically.
+ Use the latest version of pgAdmin. (Older versions don't support the `EXPLAIN` query plan.)
+ For benchmarking, detach JProfiler after your optimizations are complete, because the profiler adds latency.
+ For benchmarking, run the server in start mode instead of debug mode, because debug mode adds latency.

Optimization strategies:
+ Configure **SetMaxResults** values in the `application.yaml` file to specify right-sized batches based on data volume, memory constraints, and your system specifications.
+ Change **SetOnGreaterOrEqual** to **SetOnEqual** only when subsequent calls are `.readNextEqual()`.
+ In batch write or update operations, handle the last batch separately, because it might be smaller than the configured batch size and could be missed by the write or update operation.
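The final-batch rule above can be sketched as follows. This is a hypothetical Java sketch: the `writer` callback stands in for the batch write or update call and is not a Blu Age API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Sketch of batch writing with an explicit final flush. The writer
// callback models the batch write/update call (assumption).
class BatchWriter<T> {

    private final int batchSize;
    private final List<T> buffer = new ArrayList<>();
    private final Consumer<List<T>> writer;

    public BatchWriter(int batchSize, Consumer<List<T>> writer) {
        this.batchSize = batchSize;
        this.writer = writer;
    }

    public void add(T record) {
        buffer.add(record);
        if (buffer.size() == batchSize) {
            flush();
        }
    }

    // Call this once after the loop: without it, a final batch smaller
    // than batchSize would never be written.
    public void flush() {
        if (!buffer.isEmpty()) {
            writer.accept(new ArrayList<>(buffer));
            buffer.clear();
        }
    }
}
```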

Caching:
+ Fields that are introduced for caching in `processImpl`, which mutate with each execution, should always be defined in the context of that `processImpl`. The fields should also be cleared by using the `doReset()` or `cleanUp()` method.
+ When you implement in-memory caching, right-size the cache. Very large caches that are stored in memory can take up all the resources, which might affect the overall performance of your application.
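One way to right-size an in-memory cache is to bound it with least-recently-used (LRU) eviction. A minimal sketch using the JDK’s `LinkedHashMap` (the capacity value is illustrative):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Bounded LRU cache: the least recently used entry is evicted once the
// cache exceeds maxEntries, which caps memory use.
class BoundedCache<K, V> extends LinkedHashMap<K, V> {

    private final int maxEntries;

    public BoundedCache(int maxEntries) {
        super(16, 0.75f, true);  // accessOrder=true enables LRU ordering
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries;
    }
}
```

Because eviction is automatic, a runaway key space can no longer consume all available memory.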

SQLExecutionBuilder:
+ For queries that you are planning to use in SQLExecutionBuilder, use key names such as `PROGRAMNAME_STATEMENTNUMBER`.
+ When you use SQLExecutionBuilder, always check for the `Sqlcod` field. This field contains a value that specifies whether the query was executed correctly or encountered any errors.
+ Use parameterized queries to prevent SQL injection.

Business logic optimization:
+ Maintain functional equivalence when restructuring code, and run regression testing and database comparison for the relevant subset of programs.
+ Maintain profiling snapshots for comparison.

## Epics
<a name="optimize-performance-aws-blu-age-modernized-application-epics"></a>

### Install JProfiler and pgAdmin
<a name="install-jprofiler-and-pgadmin"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Install and configure JProfiler. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/optimize-performance-aws-blu-age-modernized-application.html) | App developer | 
| Install and configure pgAdmin. | In this step, you install and configure a DB client to query your database. This pattern uses a PostgreSQL database and pgAdmin as a database client. If you are using another database engine, follow the documentation for the corresponding DB client.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/optimize-performance-aws-blu-age-modernized-application.html) | App developer | 

### Identify hotspots
<a name="identify-hotspots"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Enable SQL query logging in your AWS Blu Age application. | Enable the loggers for SQL query logging in the `application.properties` file of your AWS Blu Age application, as explained in the [Architecture](#optimize-performance-aws-blu-age-modernized-application-architecture) section. | App developer | 
| Generate and analyze query `EXPLAIN` plans to identify database performance hotspots. | For details, see the [Architecture](#optimize-performance-aws-blu-age-modernized-application-architecture) section. | App developer | 
| Create a JProfiler snapshot to analyze a slow-performing test case. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/optimize-performance-aws-blu-age-modernized-application.html) | App developer | 
| Analyze the JProfiler snapshot to identify performance bottlenecks. | Follow these steps to analyze the JProfiler snapshot.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/optimize-performance-aws-blu-age-modernized-application.html)For more information about using JProfiler, see the [Architecture](#optimize-performance-aws-blu-age-modernized-application-architecture) section and the [JProfiler documentation](https://www.ej-technologies.com/jprofiler/docs). | App developer | 

### Establish a baseline
<a name="establish-a-baseline"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Establish a performance baseline before you implement optimizations. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/optimize-performance-aws-blu-age-modernized-application.html) | App developer | 

### Apply optimization strategies
<a name="apply-optimization-strategies"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Optimize read calls. | Optimize data retrieval by using the DAOManager **SetMaxResults** method. For more information about this approach, see the [Architecture](#optimize-performance-aws-blu-age-modernized-application-architecture) section. | App developer, DAOManager | 
| Refactor the business logic to avoid multiple calls to the database. | Reduce database calls by using a SQL `JOIN` clause. For details and examples, see *Business logic optimization* in the [Architecture](#optimize-performance-aws-blu-age-modernized-application-architecture) section. | App developer, SQLExecutionBuilder | 
| Refactor the code to use caching to reduce the latency of read calls. | For information about this technique, see *Caching* in the [Architecture](#optimize-performance-aws-blu-age-modernized-application-architecture) section. | App developer | 
| Rewrite inefficient code that uses multiple DAOManager operations for simple update operations. | For more information about updating data directly in the database, see *Business logic optimization* in the [Architecture](#optimize-performance-aws-blu-age-modernized-application-architecture) section. | App developer | 

### Test optimization strategies
<a name="test-optimization-strategies"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Validate each optimization change iteratively while maintaining functional equivalence. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/optimize-performance-aws-blu-age-modernized-application.html)Using baseline metrics as a reference ensures accurate measurement of each optimization's impact while maintaining system reliability. | App developer | 

## Troubleshooting
<a name="optimize-performance-aws-blu-age-modernized-application-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| When you run the modern application, you see an exception with the error `Query_ID not found`. | To resolve this issue:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/optimize-performance-aws-blu-age-modernized-application.html) | 
| You’ve added indexes, but you don’t see any performance improvements. | Follow these steps to ensure that the query engine is using the index:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/optimize-performance-aws-blu-age-modernized-application.html) | 
| You encounter an out-of-memory exception. | Verify that the code releases the memory held by the data structure. | 
| Batch write operations result in missing records in the table. | Review the code to ensure that an additional write operation is performed when the batch count isn’t zero. | 
| SQL logging doesn’t appear in application logs. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/optimize-performance-aws-blu-age-modernized-application.html) | 

## Related resources
<a name="optimize-performance-aws-blu-age-modernized-application-resources"></a>
+ [Refactoring applications automatically with AWS Blu Age](https://docs.aws.amazon.com/m2/latest/userguide/refactoring-m2.html) (*AWS Mainframe Modernization User Guide*)
+ [pgAdmin documentation](https://www.pgadmin.org/docs/)
+ [JProfiler documentation](https://www.ej-technologies.com/jprofiler/docs)

# Secure and streamline user access in a Db2 federation database on AWS by using trusted contexts
<a name="secure-and-streamline-user-access-in-a-db2-federation-database-on-aws-by-using-trusted-contexts"></a>

*Sai Parthasaradhi, Amazon Web Services*

## Summary
<a name="secure-and-streamline-user-access-in-a-db2-federation-database-on-aws-by-using-trusted-contexts-summary"></a>

Many companies are migrating their legacy mainframe workloads to Amazon Web Services (AWS). This migration includes shifting IBM Db2 for z/OS databases to Db2 for Linux, Unix, and Windows (LUW) on Amazon Elastic Compute Cloud (Amazon EC2). During a phased migration from on premises to AWS, users might need to access data in IBM Db2 z/OS and in Db2 LUW on Amazon EC2 until all applications and databases are fully migrated to Db2 LUW. In such remote data-access scenarios, user authentication can be challenging because different platforms use different authentication mechanisms.

This pattern covers how to set up a federation server on Db2 for LUW with Db2 for z/OS as a remote database. The pattern uses a trusted context to propagate a user’s identity from Db2 LUW to Db2 z/OS without re-authenticating on the remote database. For more information about trusted contexts, see the [Additional information](#secure-and-streamline-user-access-in-a-db2-federation-database-on-aws-by-using-trusted-contexts-additional) section.

## Prerequisites and limitations
<a name="secure-and-streamline-user-access-in-a-db2-federation-database-on-aws-by-using-trusted-contexts-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ A Db2 instance running on an Amazon EC2 instance
+ A remote Db2 for z/OS database running on premises
+ The on-premises network connected to AWS through [AWS Site-to-Site VPN](https://aws.amazon.com/vpn/) or [AWS Direct Connect](https://aws.amazon.com/directconnect/)

## Architecture
<a name="secure-and-streamline-user-access-in-a-db2-federation-database-on-aws-by-using-trusted-contexts-architecture"></a>

**Target architecture**

![\[On-premises mainframe connects through on-premises Db2 server and VPN to the Db2 DB on EC2.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/9e04f0fe-bae2-412a-93ac-83da50222017/images/0a384695-7907-4fb8-bb7e-d170dcc114af.png)


## Tools
<a name="secure-and-streamline-user-access-in-a-db2-federation-database-on-aws-by-using-trusted-contexts-tools"></a>

**AWS services**
+ [Amazon Elastic Compute Cloud (Amazon EC2)](https://docs.aws.amazon.com/ec2/) provides scalable computing capacity in the AWS Cloud. You can launch as many virtual servers as you need and quickly scale them up or down.
+ [AWS Site-to-Site VPN](https://docs.aws.amazon.com/vpn/latest/s2svpn/VPC_VPN.html) helps you pass traffic between instances that you launch on AWS and your own remote network.

**Other tools**
+ [db2cli](https://www.ibm.com/docs/en/db2/11.5?topic=commands-db2cli-db2-interactive-cli) is the Db2 interactive command line interface (CLI) command.

## Epics
<a name="secure-and-streamline-user-access-in-a-db2-federation-database-on-aws-by-using-trusted-contexts-epics"></a>

### Enable federation on the Db2 LUW database running on AWS
<a name="enable-federation-on-the-db2-luw-database-running-on-aws"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Enable federation on the Db2 LUW database. | To enable federation on Db2 LUW, run the following command.<pre>update dbm cfg using federated YES</pre> | DBA | 
| Restart the Db2 instance. | To restart the instance, run the following commands.<pre>db2stop force;<br />db2start;</pre> | DBA | 

### Catalog the remote database
<a name="catalog-the-remote-database"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Catalog the remote Db2 z/OS subsystem. | To catalog the remote Db2 z/OS database on Db2 LUW running on AWS, use the following example command.<pre>catalog TCPIP NODE tcpnode REMOTE mainframehost SERVER mainframeport</pre> | DBA | 
| Catalog the remote database. | To catalog the remote database, use the following example command.<pre>catalog db dbnam1 as ndbnam1 at node tcpnode</pre> | DBA | 

### Create the remote server definition
<a name="create-the-remote-server-definition"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Collect user credentials for the remote Db2 z/OS database. | Before proceeding with the next steps, gather the following information:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/secure-and-streamline-user-access-in-a-db2-federation-database-on-aws-by-using-trusted-contexts.html) | DBA | 
| Create the DRDA wrapper. | To create the DRDA wrapper, run the following command.<pre>CREATE WRAPPER DRDA;</pre> | DBA | 
| Create the server definition. | To create the server definition, run the following example command.<pre>CREATE SERVER ndbserver<br />TYPE DB2/ZOS VERSION 12<br />WRAPPER DRDA<br />AUTHORIZATION "dbuser1" PASSWORD "dbpasswd" OPTIONS ( DBNAME 'ndbnam1',FED_PROXY_USER 'ZPROXY' );</pre>In this definition, `FED_PROXY_USER` specifies the proxy user that will be used for establishing trusted connections to the Db2 z/OS database. The authorization user ID and password are required only for creating the remote server object in the Db2 LUW database. They won’t be used later during runtime. | DBA | 

### Create user mappings
<a name="create-user-mappings"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a user mapping for the proxy user. | To create a user mapping for proxy user, run the following command.<pre>CREATE USER MAPPING FOR ZPROXY SERVER ndbserver OPTIONS (REMOTE_AUTHID 'ZPROXY', REMOTE_PASSWORD 'zproxy');</pre> | DBA | 
| Create user mappings for each user on Db2 LUW. | Create user mappings for all the users on the Db2 LUW database on AWS who need to access remote data through the proxy user. To create the user mappings, run the following command.<pre>CREATE USER MAPPING FOR PERSON1 SERVER ndbserver OPTIONS (REMOTE_AUTHID 'USERZID', USE_TRUSTED_CONTEXT 'Y');</pre>The statement specifies that a user on Db2 LUW (`PERSON1`) can establish a trusted connection to the remote Db2 z/OS database (`USE_TRUSTED_CONTEXT 'Y'`). After the connection is established through the proxy user, the user can access the data by using the Db2 z/OS user ID (`REMOTE_AUTHID 'USERZID'`). | DBA | 

### Create the trusted context object
<a name="create-the-trusted-context-object"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the trusted context object. | To create the trusted context object on the remote Db2 z/OS database, use the following example command.<pre>CREATE TRUSTED CONTEXT CTX_LUW_ZOS<br />BASED UPON CONNECTION USING SYSTEM AUTHID ZPROXY<br />ATTRIBUTES (<br />ADDRESS '10.10.10.10'<br />)<br />NO DEFAULT ROLE<br />ENABLE<br />WITH USE FOR PUBLIC WITHOUT AUTHENTICATION;</pre>In this definition, `CTX_LUW_ZOS` is an arbitrary name for the trusted context object. The object contains the proxy user ID and the IP address of the server from which the trusted connection must originate. In this example, the server is the Db2 LUW database on AWS. You can use the domain name instead of the IP address. The clause `WITH USE FOR PUBLIC WITHOUT AUTHENTICATION` indicates that switching the user ID on a trusted connection is allowed for every user ID. A password doesn't need to be provided. | DBA | 

## Related resources
<a name="secure-and-streamline-user-access-in-a-db2-federation-database-on-aws-by-using-trusted-contexts-resources"></a>
+ [IBM Resource Access Control Facility (RACF)](https://www.ibm.com/products/resource-access-control-facility)
+ [IBM Db2 LUW Federation](https://www.ibm.com/docs/en/db2/11.5?topic=federation)
+ [Trusted contexts](https://www.ibm.com/docs/en/db2-for-zos/13?topic=contexts-trusted)

## Additional information
<a name="secure-and-streamline-user-access-in-a-db2-federation-database-on-aws-by-using-trusted-contexts-additional"></a>

**Db2 trusted contexts**

A trusted context is a Db2 database object that defines a trust relationship between a federated server and a remote database server. To define a trusted relationship, the trusted context specifies trust attributes. There are three types of trust attributes:
+ The system authorization ID that makes the initial database connection request
+ The IP address or domain name from which the connection is made
+ The encryption setting for data communications between the database server and the database client

A trusted connection is established when all attributes of a connection request match the attributes specified in any trusted context object defined on the server. There are two types of trusted connections: implicit and explicit. After an implicit trusted connection is established, a user inherits a role that isn't available to them outside the scope of that trusted connection definition. After an explicit trusted connection is established, users can be switched on the same physical connection, with or without authentication. In addition, Db2 users can be granted roles that specify privileges that are for use only within the trusted connection. This pattern uses an explicit trusted connection.

*Trusted context in this pattern*

After the pattern is complete, PERSON1 on Db2 LUW accesses remote data from Db2 z/OS by using a federated trusted context. The connection for PERSON1 is established through a proxy user if the connection originates from the IP address or domain name that is specified in the trusted context definition. After the connection is established, PERSON1's corresponding Db2 z/OS user ID is switched without re-authentication, and the user can access the data or objects based on the Db2 privileges set up for that user.

*Benefits of federated trusted contexts*
+ This approach maintains the principle of least privilege by eliminating the use of a common user ID or application ID that would need a superset of all the privileges required by all users.
+ The real identity of the user who performs the transaction on both the federated and remote database is always known and can be audited.
+ Performance improves because the physical connection is reused across users, without the federated server having to re-authenticate each one.

# Transfer large-scale Db2 z/OS data to Amazon S3 in CSV files
<a name="transfer-large-scale-db2-z-os-data-to-amazon-s3-in-csv-files"></a>

*Bruno Sahinoglu, Abhijit Kshirsagar, and Ivan Schuster, Amazon Web Services*

## Summary
<a name="transfer-large-scale-db2-z-os-data-to-amazon-s3-in-csv-files-summary"></a>

A mainframe is still the system of record in many enterprises, containing massive amounts of data, including master data entities with records of current as well as historical business transactions. This data is often siloed and not easily accessed by distributed systems within the same enterprise. With the emergence of cloud technology and big data democratization, enterprises want to use the insights hidden in mainframe data to develop new business capabilities.

With that objective, enterprises are looking to open their mainframe Db2 data to their Amazon Web Services (AWS) Cloud environment. The business reasons vary, and the transfer methods differ from case to case. You might prefer to connect your application directly to the mainframe, or you might prefer to replicate your data in near real time. If the use case is to feed a data warehouse or a data lake, an up-to-the-second copy is not a concern, and the procedure described in this pattern might suffice, especially if you want to avoid third-party product licensing costs. Another use case is transferring mainframe data for a migration project, where the data is required for functional equivalence testing. The approach described in this pattern is a cost-effective way to transfer Db2 data to the AWS Cloud environment.

Because Amazon Simple Storage Service (Amazon S3) is one of the most integrated AWS services, you can access the data from there and gather insights directly by using other AWS services such as Amazon Athena, AWS Lambda functions, or Amazon QuickSight. You can also load the data into Amazon Aurora or Amazon DynamoDB by using AWS Glue or AWS Database Migration Service (AWS DMS). With that aim in mind, this pattern describes how to unload Db2 data into CSV files in ASCII format on the mainframe and transfer the files to Amazon S3.

For this purpose, [mainframe scripts](https://github.com/aws-samples/unloaddb2-samples) have been developed to help you generate job control language (JCL) jobs that unload and transfer as many Db2 tables as you need.

## Prerequisites and limitations
<a name="transfer-large-scale-db2-z-os-data-to-amazon-s3-in-csv-files-prereqs"></a>

**Prerequisites**
+ An IBM z/OS operating system user with authorization to run Restructured Extended Executor (REXX) and JCL scripts.
+ Access to z/OS Unix System Services (USS) to generate SSH (Secure Shell) private and public keys.
+ A writable S3 bucket. For more information, see [Create your first S3 bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/creating-bucket.html) in the Amazon S3 documentation.
+ An AWS Transfer Family SSH File Transfer Protocol (SFTP)-enabled server using **Service managed** as the identity provider and Amazon S3 as the AWS storage service. For more information, see [Create an SFTP-enabled server](https://docs.aws.amazon.com/transfer/latest/userguide/create-server-sftp.html) in the AWS Transfer Family documentation.

**Limitations**
+ This approach isn’t suitable for near real-time or real-time data synchronization.
+ Data can be moved only from Db2 z/OS to Amazon S3, not the other way around.

## Architecture
<a name="transfer-large-scale-db2-z-os-data-to-amazon-s3-in-csv-files-architecture"></a>

**Source technology stack**
+ Mainframe running Db2 on z/OS

**Target technology stack**
+ AWS Transfer Family
+ Amazon S3
+ Amazon Athena
+ Amazon QuickSight
+ AWS Glue
+ Amazon Relational Database Service (Amazon RDS)
+ Amazon Aurora
+ Amazon Redshift

**Source and target architecture**

The following diagram shows the process for generating, extracting, and transferring Db2 z/OS data in ASCII CSV format to an S3 bucket.

![\[Data flow from corporate data center to AWS Cloud, showing mainframe extraction and cloud processing steps.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/66e6fa1a-1c7d-4b7a-8404-9ba85e433b24/images/87b13e0d-0be9-4462-bdbf-67342334416c.png)


1. A list of tables is selected for data migration from the Db2 catalog.

1. The list is used to drive the generation of unload jobs with the numeric and date columns in external format.

1. The data is then transferred over to Amazon S3 by using AWS Transfer Family.

1. An AWS Glue extract, transform, and load (ETL) job can transform the data and load it to a processed bucket in the specified format, or AWS Glue can feed the data directly into the database.

1. Amazon Athena and Amazon QuickSight can be used to query and render the data to drive analytics.

The following diagram shows a logical flow of the entire process.

![\[Flowchart showing JCL process with TABNAME, REXXEXEC, and JCL decks steps, including inputs and outputs.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/66e6fa1a-1c7d-4b7a-8404-9ba85e433b24/images/d72f2572-10c9-43f9-b6c9-7e57c9a69d52.png)


1. The first JCL, called TABNAME, will use the Db2 utility DSNTIAUL to extract and generate the list of tables that you plan to unload from Db2. To choose your tables, you must manually adapt the SQL input to select and add filter criteria to include one or more Db2 schemas.

1. The second JCL, called REXXEXEC, will use the provided JCL skeleton and REXX program to process the table list created by the TABNAME JCL and generate one JCL per table name. Each JCL will contain one step for unloading the table and another step for sending the file to the S3 bucket by using the SFTP protocol.

1. The last step consists of running the JCL to unload the table and transferring the file to AWS. The entire process can be automated by using a scheduler on premises or on AWS.
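The generation in step 2 can be pictured as template substitution, one output JCL per table name. This Java sketch is illustrative only: the real pattern uses the provided REXX program and JCL skeleton from the sample repository, and the skeleton text and `#SEQ`/`#TABLE` placeholders below are assumptions, not valid JCL.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative stand-in for the REXXEXEC step: fill a skeleton once
// per table from the TABNAME list. Placeholder JCL text, not runnable
// on z/OS.
class UnloadJclGenerator {

    private static final String SKELETON =
          "//UNLD#SEQ JOB (ACCT),'UNLOAD #TABLE'\n"
        + "//* step 1: unload #TABLE, step 2: SFTP #TABLE.csv to the S3 bucket\n";

    public static String generate(String table, int seq) {
        return SKELETON.replace("#SEQ", String.format("%03d", seq))
                       .replace("#TABLE", table);
    }

    public static List<String> generateAll(List<String> tables) {
        List<String> jcls = new ArrayList<>();
        for (int i = 0; i < tables.size(); i++) {
            jcls.add(generate(tables.get(i), i + 1));  // one JCL per table
        }
        return jcls;
    }
}
```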

## Tools
<a name="transfer-large-scale-db2-z-os-data-to-amazon-s3-in-csv-files-tools"></a>

**AWS services**
+ [Amazon Athena](https://docs.aws.amazon.com/athena/latest/ug/what-is.html) is an interactive query service that helps you analyze data directly in Amazon Simple Storage Service (Amazon S3) by using standard SQL.
+ [Amazon Aurora](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/CHAP_AuroraOverview.html) is a fully managed relational database engine that's built for the cloud and compatible with MySQL and PostgreSQL.
+ [AWS Glue](https://docs.aws.amazon.com/glue/latest/dg/what-is-glue.html) is a fully managed extract, transform, and load (ETL) service. It helps you reliably categorize, clean, enrich, and move data between data stores and data streams.
+ [Amazon QuickSight](https://docs.aws.amazon.com/quicksight/latest/user/welcome.html) is a cloud-scale business intelligence (BI) service that helps you visualize, analyze, and report your data in a single dashboard.
+ [Amazon Redshift](https://docs.aws.amazon.com/redshift/latest/gsg/getting-started.html) is a managed petabyte-scale data warehouse service in the AWS Cloud.
+ [Amazon Relational Database Service (Amazon RDS)](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Welcome.html) helps you set up, operate, and scale a relational database in the AWS Cloud.
+ [Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) is a cloud-based object storage service that helps you store, protect, and retrieve any amount of data.
+ [AWS Transfer Family](https://docs.aws.amazon.com/transfer/latest/userguide/what-is-aws-transfer-family.html) is a secure transfer service that enables you to transfer files into and out of AWS storage services.

**Mainframe tools**
+ [SSH File Transfer Protocol (SFTP)](https://www.ssh.com/academy/ssh/sftp-ssh-file-transfer-protocol) is a secure file transfer protocol that allows remote login to and file transfer between servers. SSH provides security by encrypting all traffic.
+ [DSNTIAUL](https://www.ibm.com/docs/en/db2-for-zos/11?topic=dpasp-dsntiaul-sample-program) is a sample program provided by IBM for unloading data.
+ [DSNUTILB](https://www.ibm.com/docs/en/db2-for-zos/11?topic=sharing-recommendations-utilities-in-coexistence) is a utilities batch program provided by IBM for unloading data with different options from DSNTIAUL.
+ [z/OS OpenSSH](https://www.ibm.com/docs/en/zos/2.4.0?topic=zbed-zos-openssh) is a port of the open source OpenSSH software that runs on UNIX System Services under the IBM z/OS operating system. SSH is a secure, encrypted connection program between two computers on a TCP/IP network. z/OS OpenSSH provides multiple utilities, including ssh-keygen.
+ A [REXX (Restructured Extended Executor)](https://www.ibm.com/docs/en/zos/2.1.0?topic=guide-learning-rexx-language) script is used to automate JCL generation with the Db2 unload and SFTP steps.

**Code**

The code for this pattern is available in the GitHub [unloaddb2](https://github.com/aws-samples/unloaddb2-samples) repository.

## Best practices
<a name="transfer-large-scale-db2-z-os-data-to-amazon-s3-in-csv-files-best-practices"></a>

For the first unload, the generated JCLs should unload the entire table data.

After the first full unload, perform incremental unloads for better performance and cost savings. Update the SQL query in the template JCL deck to accommodate any changes to the unload process.

You can convert the schema manually or by using a script on Lambda with the Db2 SYSPUNCH as an input. For an industrialized process, [AWS Schema Conversion Tool (AWS SCT)](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Source.DB2zOS.html) is the preferred option.
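
As an illustration of the scripted option, the sketch below maps a simplified list of Db2 column definitions to target DDL. The column list, the `columns_to_ddl` helper, and the type mapping are all hypothetical; a real converter (or AWS SCT) must parse the full Db2 LOAD statement syntax found in the SYSPUNCH dataset.

```python
# Toy sketch: derive target DDL from simplified Db2 column definitions.
# TYPE_MAP is an illustrative, non-exhaustive Db2-to-PostgreSQL mapping.
TYPE_MAP = {
    "CHAR": "CHAR", "VARCHAR": "VARCHAR", "DECIMAL": "NUMERIC",
    "INTEGER": "INTEGER", "DATE": "DATE", "TIMESTAMP": "TIMESTAMP",
}

def columns_to_ddl(table, columns):
    """columns: list of (name, db2_type) pairs such as ('EMPNO', 'CHAR(6)')."""
    cols = []
    for name, db2_type in columns:
        base, _, size = db2_type.partition("(")   # split 'DECIMAL(9,2)'
        target = TYPE_MAP[base]
        cols.append(f"  {name} {target}" + (f"({size}" if size else ""))
    return f"CREATE TABLE {table} (\n" + ",\n".join(cols) + "\n);"

ddl = columns_to_ddl("DSN8C10.EMP",
                     [("EMPNO", "CHAR(6)"), ("SALARY", "DECIMAL(9,2)")])
print(ddl)
```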

Finally, use a mainframe-based scheduler or a scheduler on AWS with an agent on the mainframe to help manage and automate the entire process.

## Epics
<a name="transfer-large-scale-db2-z-os-data-to-amazon-s3-in-csv-files-epics"></a>

### Set up the S3 bucket
<a name="set-up-the-s3-bucket"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the S3 bucket. | For instructions, see [Create your first S3 bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/creating-bucket.html). | General AWS | 

### Set up the Transfer Family server
<a name="set-up-the-transfer-family-server"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an SFTP-enabled server. | To create an SFTP-enabled server in the [AWS Transfer Family console](https://console.aws.amazon.com/transfer/), do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/transfer-large-scale-db2-z-os-data-to-amazon-s3-in-csv-files.html) | General AWS | 
| Create an IAM role for Transfer Family. | To create an AWS Identity and Access Management (IAM) role for Transfer Family to access Amazon S3, follow the instructions in [Create an IAM role and policy](https://docs.aws.amazon.com/transfer/latest/userguide/requirements-roles.html).  | AWS administrator | 
| Add an Amazon S3 service-managed user. | To add the Amazon S3 service-managed user, follow the instructions in the [AWS documentation](https://docs.aws.amazon.com/transfer/latest/userguide/service-managed-users.html#add-s3-user), and use your mainframe user ID. | General AWS | 

### Secure the communication protocol
<a name="secure-the-communication-protocol"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the SSH key. | Under your mainframe USS environment, run the following command.<pre>ssh-keygen -t rsa</pre>When prompted for a passphrase, keep it empty. | Mainframe developer | 
| Give the right authorization levels to the SSH folder and key files. | By default, the public and private keys are stored in the user directory `/u/home/username/.ssh`. You must give authorization 644 to the key files and 700 to the folder.<pre>chmod 644 .ssh/id_rsa<br />chmod 700 .ssh</pre> | Mainframe developer | 
| Copy the public key content to your Amazon S3 service-managed user. | To copy the USS-generated public key content, open the [AWS Transfer Family console](https://console.aws.amazon.com/transfer/).[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/transfer-large-scale-db2-z-os-data-to-amazon-s3-in-csv-files.html) | Mainframe developer | 

### Generate the JCLs
<a name="generate-the-jcls"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Generate the in-scope Db2 table list. | Provide input SQL to create a list of the tables that are scoped for data migration. This step requires you to specify selection criteria that query the Db2 catalog table SYSIBM.SYSTABLES by using a SQL WHERE clause. You can customize the filters to include a specific schema, table names that start with a particular prefix, or a timestamp for incremental unloads. The output is captured in a physical sequential (PS) dataset on the mainframe. This dataset acts as input for the next phase of JCL generation. Before you use the JCL TABNAME (you can rename it if necessary), make the following changes:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/transfer-large-scale-db2-z-os-data-to-amazon-s3-in-csv-files.html)**Db2 table list extraction job**<pre><Jobcard><br />//* <br />//* UNLOAD ALL THE TABLE NAMES FOR A PARTICULAR SCHEMA<br />//* <br />//STEP01  EXEC PGM=IEFBR14<br />//* <br />//DD1      DD  DISP=(MOD,DELETE,DELETE),<br />//         UNIT=SYSDA,<br />//         SPACE=(1000,(1,1)),<br />//         DSN=<HLQ1>.DSN81210.TABLIST<br />//* <br />//DD2      DD  DISP=(MOD,DELETE,DELETE),<br />//         UNIT=SYSDA,<br />//         SPACE=(1000,(1,1)),<br />//         DSN=<HLQ1>.DSN81210.SYSPUNCH <br />//* <br />//UNLOAD  EXEC PGM=IKJEFT01,DYNAMNBR=20 <br />//SYSTSPRT DD  SYSOUT=* <br />//STEPLIB  DD  DISP=SHR,DSN=DSNC10.DBCG.SDSNEXIT<br />//         DD  DISP=SHR,DSN=DSNC10.SDSNLOAD<br />//         DD  DISP=SHR,DSN=CEE.SCEERUN <br />//         DD  DISP=SHR,DSN=DSNC10.DBCG.RUNLIB.LOAD <br />//SYSTSIN  DD  *<br />  DSN SYSTEM(DBCG) <br />  RUN  PROGRAM(DSNTIAUL) PLAN(DSNTIB12) PARMS('SQL') - <br />       LIB('DSNC10.DBCG.RUNLIB.LOAD')<br />  END<br />//SYSPRINT DD SYSOUT=*<br />//* <br />//SYSUDUMP DD SYSOUT=*<br />//* <br />//SYSREC00 DD DISP=(NEW,CATLG,DELETE),<br />//            UNIT=SYSDA,SPACE=(32760,(1000,500)),<br />//            DSN=<HLQ1>.DSN81210.TABLIST 
<br />//* <br />//SYSPUNCH DD DISP=(NEW,CATLG,DELETE), <br />//            UNIT=SYSDA,SPACE=(32760,(1000,500)),<br />//            VOL=SER=SCR03,RECFM=FB,LRECL=120,BLKSIZE=12 <br />//            DSN=<HLQ1>.DSN81210.SYSPUNCH <br />//* <br />//SYSIN    DD * <br />   SELECT CHAR(CREATOR), CHAR(NAME)<br />     FROM SYSIBM.SYSTABLES <br />    WHERE OWNER = '<Schema>' <br />      AND NAME LIKE '<Prefix>%' <br />      AND TYPE = 'T'; <br />/* </pre> | Mainframe developer | 
| Modify the JCL templates. | The JCL templates that are provided with this pattern contain a generic job card and library names. However, most mainframe sites will have their own naming standards for dataset names, library names, and job cards. For example, a specific job class might be required to run Db2 jobs. The Job Entry Subsystem implementations JES2 and JES3 can impose additional changes. Standard load libraries might have a different first qualifier than `SYS1`, which is the IBM default. Therefore, customize the templates to account for your site-specific standards before you run them. Make the following changes in the skeleton JCL UNLDSKEL:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/transfer-large-scale-db2-z-os-data-to-amazon-s3-in-csv-files.html)**Unload and SFTP JCL skeleton**<pre>//&USRPFX.U JOB (DB2UNLOAD),'JOB',CLASS=A,MSGCLASS=A, <br />//         TIME=1440,NOTIFY=&USRPFX<br />//* DELETE DATASETS<br />//STEP01   EXEC PGM=IEFBR14<br />//DD01     DD DISP=(MOD,DELETE,DELETE),<br />//            UNIT=SYSDA,<br />//            SPACE=(TRK,(1,1)),<br />// DSN=&USRPFX..DB2.PUNCH.&JOBNAME<br />//DD02     DD DISP=(MOD,DELETE,DELETE),<br />//            UNIT=SYSDA,<br />//            SPACE=(TRK,(1,1)),<br />// DSN=&USRPFX..DB2.UNLOAD.&JOBNAME<br />//*<br />//* RUNNING DB2 EXTRACTION BATCH JOB FOR AWS DEMO<br />//*<br />//UNLD01   EXEC PGM=DSNUTILB,REGION=0M,<br />// PARM='<DSN>,UNLOAD'<br />//STEPLIB  DD  DISP=SHR,DSN=DSNC10.DBCG.SDSNEXIT<br />//         DD  DISP=SHR,DSN=DSNC10.SDSNLOAD<br />//SYSPRINT DD  SYSOUT=*<br />//UTPRINT  DD  SYSOUT=*<br />//SYSOUT   DD  SYSOUT=*<br />//SYSPUN01 DD  DISP=(NEW,CATLG,DELETE),<br />//             SPACE=(CYL,(1,1),RLSE),<br />// DSN=&USRPFX..DB2.PUNCH.&JOBNAME<br />//SYSREC01 DD  DISP=(NEW,CATLG,DELETE),<br />//             SPACE=(CYL,(10,50),RLSE),<br />// DSN=&USRPFX..DB2.UNLOAD.&JOBNAME<br />//SYSPRINT DD SYSOUT=*<br />//SYSIN    DD *<br />  
UNLOAD<br />  DELIMITED COLDEL ','<br />  FROM TABLE &TABNAME<br />  UNLDDN SYSREC01<br />  PUNCHDDN SYSPUN01<br />  SHRLEVEL CHANGE ISOLATION UR;<br /> /*<br />//*<br />//* FTP TO AMAZON S3 BACKED FTP SERVER IF UNLOAD WAS SUCCESSFUL<br />//*<br />//SFTP EXEC PGM=BPXBATCH,COND=(4,LE),REGION=0M<br />//STDPARM DD *<br /> SH cp "//'&USRPFX..DB2.UNLOAD.&JOBNAME'"<br />   &TABNAME..csv;<br /> echo "ascii             " >> uplcmd;<br /> echo "PUT &TABNAME..csv " >> uplcmd;<br /> sftp -b uplcmd -i .ssh/id_rsa &FTPUSER.@&FTPSITE;<br /> rm &TABNAME..csv;<br /> //SYSPRINT DD SYSOUT=*<br /> //STDOUT DD SYSOUT=*<br /> //STDENV DD *<br /> //STDERR DD SYSOUT=*                                                </pre>  | Mainframe developer | 
| Generate the Mass Unload JCL. | This step involves running a REXX script under an ISPF environment by using JCL. Provide the list of in-scope tables created in the first step as input for mass JCL generation against the `TABLIST DD` name. The JCL will generate one new JCL per table name in a user-specified partitioned dataset specified against the `ISPFILE DD` name. Allocate this library beforehand. Each new JCL will have two steps: one step to unload the Db2 table into a file, and one step to send the file to the S3 bucket. Make the following changes in the JCL REXXEXEC (you can change the name):[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/transfer-large-scale-db2-z-os-data-to-amazon-s3-in-csv-files.html)**Mass JCL generation job**<pre>//RUNREXX JOB (CREATEJCL),'RUNS ISPF TABLIST',CLASS=A,MSGCLASS=A,      <br />//         TIME=1440,NOTIFY=&SYSUID<br />//* Most of the values required can be updated to your site specific<br />//* values using the command 'TSO ISRDDN' in your ISPF session. <br />//* Update all the lines tagged with //update marker to desired<br />//* site specific values. 
<br />//ISPF EXEC PGM=IKJEFT01,REGION=2048K,DYNAMNBR=25<br />//SYSPROC   DD DISP=SHR,DSN=USER.Z23D.CLIST<br />//SYSEXEC   DD DISP=SHR,DSN=<HLQ1>.TEST.REXXLIB<br />//ISPPLIB   DD DISP=SHR,DSN=ISP.SISPPENU<br />//ISPSLIB   DD DISP=SHR,DSN=ISP.SISPSENU<br />//          DD DISP=SHR,DSN=<HLQ1>.TEST.ISPSLIB<br />//ISPMLIB   DD DSN=ISP.SISPMENU,DISP=SHR<br />//ISPTLIB   DD DDNAME=ISPTABL<br />//          DD DSN=ISP.SISPTENU,DISP=SHR<br />//ISPTABL   DD LIKE=ISP.SISPTENU,UNIT=VIO<br />//ISPPROF   DD LIKE=ISP.SISPTENU,UNIT=VIO<br />//ISPLOG    DD SYSOUT=*,RECFM=VA,LRECL=125<br />//SYSPRINT  DD SYSOUT=*<br />//SYSTSPRT  DD SYSOUT=*<br />//SYSUDUMP  DD SYSOUT=*<br />//SYSDBOUT  DD SYSOUT=*<br />//SYSTSPRT  DD SYSOUT=*<br />//SYSUDUMP  DD SYSOUT=*<br />//SYSDBOUT  DD SYSOUT=*<br />//SYSHELP   DD DSN=SYS1.HELP,DISP=SHR <br />//SYSOUT    DD SYSOUT=*<br />//* Input list of tablenames<br />//TABLIST   DD DISP=SHR,DSN=<HLQ1>.DSN81210.TABLIST<br />//* Output pds<br />//ISPFILE   DD DISP=SHR,DSN=<HLQ1>.TEST.JOBGEN<br />//SYSTSIN   DD *<br />ISPSTART CMD(ZSTEPS <MFUSER> <FTPUSER> <AWS TransferFamily IP>)<br />/*</pre>Before you use the REXX script, make the following changes:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/transfer-large-scale-db2-z-os-data-to-amazon-s3-in-csv-files.html)**ZSTEPS REXX script**<pre>/*REXX - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - */<br />/* 10/27/2021 - added new parms to accommodate ftp */<br />Trace "o" <br />    parse arg usrpfx ftpuser ftpsite<br />    Say "Start"<br />    Say "Ftpuser: " ftpuser "Ftpsite:" ftpsite<br />    Say "Reading table name list"<br />    "EXECIO * DISKR TABLIST (STEM LINE. FINIS"<br />    DO I = 1 TO LINE.0<br />      Say I<br />      suffix = I<br />      Say LINE.i<br />      Parse var LINE.i schema table rest<br />      tabname = schema !! "." !! table<br />      Say tabname<br />      tempjob= "LOD" !! RIGHT("0000" !! 
i, 5) <br />      jobname=tempjob<br />      Say tempjob<br />      ADDRESS ISPEXEC "FTOPEN "<br />      ADDRESS ISPEXEC "FTINCL UNLDSKEL"<br />      /* member will be saved in ISPDSN library allocated in JCL */<br />      ADDRESS ISPEXEC "FTCLOSE NAME("tempjob")"<br />    END<br /><br />    ADDRESS TSO "FREE F(TABLIST) "<br />    ADDRESS TSO "FREE F(ISPFILE) "<br /><br />exit 0</pre> | Mainframe developer | 

### Run the JCLs
<a name="run-the-jcls"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Perform the Db2 Unload step. | After the JCL generation, you will have as many JCLs as you have tables that need to be unloaded. This story uses a generated JCL example to explain the structure and the most important steps. No action is required on your part. The following information is for reference only. If your intention is to submit the JCLs that you have generated in the previous step, skip to the *Submit the LODnnnnn JCLs* task. When unloading Db2 data by using a JCL with the IBM-provided DSNUTILB Db2 utility, you must make sure that the unloaded data does not contain compressed numeric data. To accomplish this, use the DSNUTILB `DELIMITED` parameter. The `DELIMITED` parameter supports unloading the data in CSV format by adding a character as the delimiter and double quotation marks for the text fields, removing the padding in VARCHAR columns, and converting all the numeric fields into EXTERNAL FORMAT, including the DATE fields. The following example shows what the unload step in the generated JCL looks like, using the comma character as a delimiter.<pre>                            <br /> UNLOAD<br /> DELIMITED COLDEL ',' <br /> FROM TABLE SCHEMA_NAME.TBNAME<br /> UNLDDN SYSREC01<br /> PUNCHDDN SYSPUN01<br /> SHRLEVEL CHANGE ISOLATION UR;</pre> | Mainframe developer, System engineer | 
| Perform the SFTP step. | To use the SFTP protocol from a JCL, use the BPXBATCH utility. The SFTP utility can’t access the MVS datasets directly. You can use the copy command (`cp`) to copy the sequential file `&USRPFX..DB2.UNLOAD.&JOBNAME` to the USS directory, where it becomes `&TABNAME..csv`. Run the `sftp` command by using the private key (`id_rsa`) and the RACF user ID as the user name to connect to the AWS Transfer Family IP address.<pre>SH cp "//'&USRPFX..DB2.UNLOAD.&JOBNAME'"<br />   &TABNAME..csv;<br /> echo "ascii             " >> uplcmd;<br /> echo "PUT &TABNAME..csv " >> uplcmd;<br /> sftp -b uplcmd -i .ssh/id_rsa &FTPUSER.@&FTP_TF_SITE;<br /> rm &TABNAME..csv; </pre> | Mainframe developer, System engineer | 
| Submit the LODnnnnn JCLs. | The prior JCL generated one LODnnnnn JCL for each table that needs to be unloaded, transformed into CSV, and transferred to the S3 bucket. Run the `submit` command on all the JCLs that have been generated. | Mainframe developer, System engineer | 
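
After a transferred file lands in the S3 bucket, downstream code (for example, a Glue or Lambda job) can consume it with a standard CSV reader, because the `DELIMITED` unload produces comma-delimited rows with double-quoted character columns and numeric and date columns in external text format. The sample record below is hypothetical.

```python
import csv
import io

# Sketch of parsing a row produced by the DSNUTILB DELIMITED unload after
# the ascii-mode SFTP transfer. The sample record is illustrative only.
sample = '"000010","HAAS",52750.00,1965-01-01\n'

reader = csv.reader(io.StringIO(sample), delimiter=",", quotechar='"')
row = next(reader)
empno, name, salary, hiredate = row
print(empno, float(salary))  # numeric fields parse directly from external format
```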

## Related resources
<a name="transfer-large-scale-db2-z-os-data-to-amazon-s3-in-csv-files-resources"></a>

For more information about the different tools and solutions used in this document, see the following:
+ [z/OS OpenSSH User’s Guide](https://www-01.ibm.com/servers/resourcelink/svc00100.nsf/pages/zOSV2R4sc276806/$file/foto100_v2r4.pdf)
+ [Db2 z/OS – Sample UNLOAD control statements](https://www.ibm.com/docs/en/db2-for-zos/11?topic=unload-sample-control-statements)
+ [Db2 z/OS – Unloading delimited files](https://www.ibm.com/docs/en/db2-for-zos/11?topic=unload-unloading-delimited-files)
+ [Transfer Family – Create an SFTP-enabled server](https://docs.aws.amazon.com/transfer/latest/userguide/create-server-sftp.html)
+ [Transfer Family – Working with service-managed users](https://docs.aws.amazon.com/transfer/latest/userguide/service-managed-users.html)

## Additional information
<a name="transfer-large-scale-db2-z-os-data-to-amazon-s3-in-csv-files-additional"></a>

After you have your Db2 data on Amazon S3, you have many ways to develop new insights. Because Amazon S3 integrates with AWS data analytics services, you can freely consume or expose this data on the distributed side. For example, you can do the following:
+ Build a [data lake on Amazon S3](https://aws.amazon.com/products/storage/data-lake-storage/), and extract valuable insights by using query-in-place, analytics, and machine learning tools without moving the data.
+ Initiate a [Lambda function](https://aws.amazon.com/lambda/) by setting up a post-upload processing workflow that is integrated with AWS Transfer Family.
+ Develop new microservices for accessing the data in Amazon S3 or in [fully managed database](https://aws.amazon.com/free/database/?trk=ps_a134p000007CdNEAA0&trkCampaign=acq_paid_search_brand&sc_channel=PS&sc_campaign=acquisition_FR&sc_publisher=Google&sc_category=Database&sc_country=FR&sc_geo=EMEA&sc_outcome=acq&sc_detail=amazon%20relational%20database%20service&sc_content=Relational%20Database_e&sc_matchtype=e&sc_segment=548727697660&sc_medium=ACQ-P|PS-GO|Brand|Desktop|SU|Database|Solution|FR|EN|Text&s_kwcid=AL!4422!3!548727697660!e!!g!!amazon%20relational%20database%20service&ef_id=CjwKCAjwzt6LBhBeEiwAbPGOgcGbQIl1-QsbHfWTgMZSSHEXzSG377R9ZyK3tCcbnHuT45L230FufxoCeEkQAvD_BwE:G:s&s_kwcid=AL!4422!3!548727697660!e!!g!!amazon%20relational%20database%20service) by using [AWS Glue](https://aws.amazon.com/glue/), which is a serverless data integration service that makes it easy to discover, prepare, and combine data for analytics, machine learning, and application development.
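
For the second option, a post-upload Lambda handler can be sketched as follows. Only the S3 event parsing is shown and testable here; the bucket name, the object key, and the commented-out AWS Glue call are illustrative assumptions.

```python
import urllib.parse

# Hypothetical sketch of a Lambda handler fired by an S3 event notification
# after AWS Transfer Family writes an uploaded file to the bucket.
def handler(event, context=None):
    uploads = []
    for rec in event.get("Records", []):
        bucket = rec["s3"]["bucket"]["name"]
        # S3 event keys are URL-encoded; decode before use.
        key = urllib.parse.unquote_plus(rec["s3"]["object"]["key"])
        # Downstream processing would go here, for example starting an
        # AWS Glue ETL job on the newly arrived CSV file.
        uploads.append((bucket, key))
    return uploads

result = handler({"Records": [{"s3": {"bucket": {"name": "unload-bucket"},
                                      "object": {"key": "DSN8C10.EMP.csv"}}}]})
print(result)  # [('unload-bucket', 'DSN8C10.EMP.csv')]
```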

In a migration use case, because you can transfer any data from the mainframe to S3, you can do the following:
+ Retire physical infrastructure, and create a cost-effective data archival strategy with Amazon S3 Glacier and S3 Glacier Deep Archive. 
+ Build scalable, durable, and secure backup and restore solutions with Amazon S3 and other AWS services, such as S3 Glacier and Amazon Elastic File System (Amazon EFS), to augment or replace existing on-premises capabilities.

# Transform Easytrieve to modern languages by using AWS Transform custom
<a name="transform-easytrieve-modern-languages"></a>

*Shubham Roy, Subramanyam Malisetty, and Harshitha Shashidhar, Amazon Web Services*

## Summary
<a name="transform-easytrieve-modern-languages-summary"></a>

This pattern provides prescriptive guidance for faster and lower-risk transformation of mainframe Broadcom [Easytrieve Report Generator](https://techdocs.broadcom.com/us/en/ca-mainframe-software/devops/ca-easytrieve-report-generator/11-6.html) (EZT) workloads using [AWS Transform custom](https://aws.amazon.com/transform/custom/) language-to-language transformation. It addresses the challenges of modernizing niche and proprietary mainframe EZT workloads that are commonly used for batch data processing and report generation. The pattern replaces expensive, lengthy, and error-prone migration approaches that rely on proprietary tooling and rare mainframe expertise with an automated agentic AI solution that you create in AWS Transform.

This pattern provides a ready-to-use custom transformation definition for EZT transformation. The definition uses multiple transformation inputs:
+ EZT business rules extracted using [AWS Transform for mainframe](https://aws.amazon.com/transform/mainframe/)
+ EZT programming reference documentation
+ EZT source code
+ Mainframe input and output datasets

AWS Transform custom uses these inputs to generate functionally equivalent applications in modern target languages, such as Java or Python.

The transformation process uses intelligent test execution, automated debugging, and iterative fix capabilities to validate functional equivalence against expected outputs. It also supports continual learning, enabling the custom transformation definition to improve accuracy and consistency across successive transformations. Using this pattern, organizations can reduce migration effort and risk, address niche mainframe technical debt, and modernize EZT workloads on AWS to improve agility, reliability, security, and innovation.

## Prerequisites and limitations
<a name="transform-easytrieve-modern-languages-prereqs"></a>

**Prerequisites**
+ An active AWS account 
+ A mainframe EZT workload with input and output data 

**Limitations**

*Scope limitations*
+ **Language support** – Only EZT-to-Java transformation is supported for this specific transformation pattern. 
+ **Out of scope** – Transformation of other mainframe programming languages requires a new custom transformation definition in AWS Transform custom.

*Process limitations*
+ **Validation dependency** – Without baseline output data, the transformation can't be validated.
+ **Proprietary logic** – Highly specific, custom-developed utilities require additional user documentation and reference materials to be interpreted correctly by the AI agent.

*Technical limitations*
+ **Service limits** – For AWS Transform custom service limits and quotas, see [AWS Transform User Guide - Quotas](https://docs.aws.amazon.com/transform/latest/userguide/transform-limits.html) and the [AWS General Reference - Transform Quotas](https://docs.aws.amazon.com/general/latest/gr/aws-transform.html).

**Product versions**
+ AWS Transform CLI – Latest version
+ Node.js – version 20 or later
+ Git – Latest version
+ Target environment
  + Java – version 17 or later
  + Spring Boot – version 3.x is the primary target for refactored applications
  + Maven – version 3.6 or later

## Architecture
<a name="transform-easytrieve-modern-languages-architecture"></a>

**Source technology stack**
+ **Operating system** – IBM z/OS
+ **Programming language** – Easytrieve, Job Control Language (JCL)
+ **Database** – IBM Db2 for z/OS, Virtual Storage Access Method (VSAM), mainframe flat files

**Target technology stack**
+ **Operating system** – Amazon Linux
+ **Compute** – Amazon Elastic Compute Cloud (Amazon EC2)
+ **Programming language** – Java
+ **Database** – Amazon Relational Database Service (Amazon RDS)

**Target architecture**

![\[target architecture diagram for using AWS Transform custom to transform EZT to modern code.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/71f15422-42cb-4c7e-94fa-051a4f130445/images/eb89eed0-dd55-485c-a433-9869162eaad9.png)


**Workflow**

This solution uses an AWS Transform custom language-to-language migration transformation pattern to modernize mainframe Easytrieve (EZT) applications to Java through a four-step automated workflow.

*Step 1 – Provide your legacy code to AWS Transform for mainframe, which:*
+ Analyzes the code
+ Extracts the high-level business logic
+ Extracts the detailed business logic

*Step 2 – Create a folder with the required inputs:*
+ EZT business rules extracted using AWS Transform for mainframe 
+ EZT programming reference documentation 
+ EZT source code
+ Mainframe input and output datasets
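
A quick pre-flight check of that input folder can be sketched as follows. The four folder names come from the standardized structure recommended in this pattern's best practices; the helper function and sample layout are illustrative.

```python
import pathlib
import tempfile

# Sketch: verify the standardized four-folder input layout before invoking
# the AWS Transform CLI. Folder names follow this pattern's best practices.
REQUIRED = ("source-code", "bre-doc", "input-data", "output-data")

def missing_folders(root):
    root = pathlib.Path(root)
    return [name for name in REQUIRED if not (root / name).is_dir()]

with tempfile.TemporaryDirectory() as tmp:
    for name in REQUIRED[:3]:                 # deliberately omit output-data
        pathlib.Path(tmp, name).mkdir()
    gaps = missing_folders(tmp)
print(gaps)  # ['output-data']
```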

*Step 3 – Create and run a custom transformation definition*

1. Use the AWS Transform CLI to describe transformation objectives in natural language. AWS Transform custom analyzes the BRE, source code, and EZT programming guides to generate a custom transformation definition for developer review and approval.

1. Then, invoke the AWS Transform CLI with the project source code. AWS Transform custom creates transformation plans, converts EZT to Java upon approval, generates supporting files, builds the executable JAR, and validates exit criteria.

1. Use the validation agent to test the functional equivalence against the mainframe output. The Self-Debugger Agent autonomously fixes issues. Final deliverables include validated Java code and HTML validation reports.

**Automation and scale**
+ Agentic AI multi-mode execution architecture – AWS Transform custom uses agentic AI with three execution modes (conversational, interactive, and full automation) to automate complex transformation tasks, including code analysis, refactoring, transformation planning, and testing.
+ Adaptive learning feedback system – The platform implements continuous learning mechanisms through code sample analysis, documentation parsing, and developer feedback integration with versioned transformation definitions.
+ Concurrent application processing architecture – The system enables distributed parallel execution of multiple application transformation operations simultaneously across scalable infrastructure.

## Tools
<a name="transform-easytrieve-modern-languages-tools"></a>

**AWS services**
+ [AWS Transform custom](https://docs.aws.amazon.com/transform/latest/userguide/custom.html) is an agentic AI service that is used to transform legacy EZT applications into modern programming languages.
+ [AWS Transform](https://docs.aws.amazon.com/transform/latest/userguide/what-is-service.html) uses agentic AI to help you accelerate the modernization of legacy workloads, such as .NET, mainframe, and VMware workloads.
+ [AWS Transform for mainframe](https://docs.aws.amazon.com/transform/latest/userguide/transform-app-mainframe.html) is used to analyze legacy EZT applications to extract embedded business logic and generate comprehensive business rule documentation, including logic summaries, acronym definitions, and structured knowledge bases. These serve as input data for AWS Transform custom.
+ [Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) is a cloud-based object storage service that helps you store, protect, and retrieve any amount of data. Amazon S3 serves as the primary storage service for AWS Transform custom for storing transformation definitions, code repositories, and processing results. 
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them. IAM provides the security framework for AWS Transform custom, managing permissions and access control for transformation operations.

**Other tools**
+ [AWS Transform CLI](https://docs.aws.amazon.com/transform/latest/userguide/custom-command-reference.html) is the command-line interface for AWS Transform custom, enabling developers to define, execute, and manage custom code transformations through natural language conversations and automated execution modes. AWS Transform custom supports both interactive sessions (`atx custom def exec`) and autonomous transformations for scalable modernization of codebases.
+ [Git](https://git-scm.com/doc) version control system used for branch protection, change tracking, and rollback capabilities during automated fix application. 
+ [Java](https://www.java.com/en/) is the programming language and development environment used in this pattern. 

**Code repository**

The code for this pattern is available in [Easytrieve to Modern Languages Transformation with AWS Transform Custom](https://github.com/aws-samples/sample-mainframe-easytrieve-transform?tab=readme-ov-file#easytrieve-to-modern-languages-transformation-with-aws-transform-custom) on GitHub.

## Best practices
<a name="transform-easytrieve-modern-languages-best-practices"></a>
+ Establish standardized project structure – Create a four-folder structure (source-code, bre-doc, input-data, output-data), validate completeness, and document contents before transformation.
+ Use baseline files for validation – Use production baseline input files, perform a byte-by-byte comparison with the baseline output, and accept zero tolerance for deviations.
+ Use all available reference documents – To increase the accuracy of the transformation, provide all available reference documents, such as business requirements and coding checklists.
+ Provide input for quality improvement – AWS Transform custom automatically extracts learnings from transformation executions (developer feedback, code issues) and creates knowledge items for them. After each successful transformation, review the knowledge items and approve the ones that you want to be used in future executions. This improves the quality of future transformations.
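
The zero-tolerance baseline check recommended above can be sketched as a byte-for-byte file comparison; the file names and contents below are illustrative.

```python
import pathlib
import tempfile

# Sketch: compare the transformed application's output with the mainframe
# baseline byte for byte; any deviation fails validation.
def identical(path_a, path_b):
    return pathlib.Path(path_a).read_bytes() == pathlib.Path(path_b).read_bytes()

with tempfile.TemporaryDirectory() as tmp:
    baseline = pathlib.Path(tmp, "baseline.out")
    candidate = pathlib.Path(tmp, "candidate.out")
    baseline.write_bytes(b"REPORT LINE 1\x15")   # 0x15 is the EBCDIC newline
    candidate.write_bytes(b"REPORT LINE 1\x15")
    match = identical(baseline, candidate)
print(match)  # True
```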

## Epics
<a name="transform-easytrieve-modern-languages-epics"></a>

### Generate a business rule extract (BRE)
<a name="generate-a-business-rule-extract-bre"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Configure AWS Transform for mainframe. | Set up the environment and required AWS Identity and Access Management (IAM) permissions to support mainframe modernization workflows. For more information, see [Transformation of mainframe applications](https://docs.aws.amazon.com/transform/latest/userguide/transform-app-mainframe-workflow.html) in the AWS documentation. | App developer | 
| Generate Business Rule Extract (BRE) documentation. | Extract business logic from source EZT or COBOL code to generate functional documentation. For instructions on how to initiate the extraction process and review the output, see [Extract business logic](https://docs.aws.amazon.com/transform/latest/userguide/transform-app-mainframe-workflow.html#transform-app-mainframe-workflow-extract-business-logic) in the AWS Transform documentation. | App developer | 

### Set up AWS Transform custom
<a name="set-up-trn-custom"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Provision the infrastructure for AWS Transform custom. | Deploy the production-ready infrastructure required to host a secure transformation environment. This includes a private Amazon EC2 instance configured with the necessary tools, IAM permissions, and network settings for converting Easytrieve code. To provision the environment using infrastructure as code (IaC), follow the deployment instructions in the [Easytrieve to Modern Languages Transformation with AWS Transform Custom](https://github.com/aws-samples/sample-mainframe-easytrieve-transform) GitHub repository. | App developer, AWS administrator | 
| Prepare input materials for transformation. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/transform-easytrieve-modern-languages.html) | App developer | 

### Create a custom transformation definition
<a name="create-a-custom-transformation-definition"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the transformation definition. | Follow these steps to create the custom transformation definition for the Easytrieve (EZT) to Java transformation with functional validation. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/transform-easytrieve-modern-languages.html) | App developer | 
| Publish the transformation definition. | After you review and validate the transformation definition, you can publish it to the AWS Transform custom registry with a natural language prompt, providing a definition name such as *Easytrieve-to-Java-Migration*. | App developer | 

### Prepare baseline data for validation
<a name="prepare-baseline-data-for-validation"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Validate the baseline data folders. | Before running the AWS Transform custom transformation, validate that the `input-data` folder contains the required data files captured before execution of the mainframe batch job, and that the `output-data` folder contains the resulting files captured after the job ran. All files should be in sequential, text, or DB2 format with EBCDIC encoding, depending on execution requirements. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/transform-easytrieve-modern-languages.html) | App developer | 
| Run the custom transformation job. | Execute the AWS Transform CLI command, choosing the non-interactive or the interactive option:<pre># Non-interactive execution (fully autonomous)<br />atx custom def exec \<br />  --transformation-name "Easytrieve-to-Java-Migration" \<br />  --code-repository-path ~/root/transform-workspace/mainframe-source/source-code \<br />  --build-command "mvn clean install" \<br />  --non-interactive \<br />  --trust-all-tools<br /><br /># Interactive execution (with human oversight)<br />atx custom def exec \<br />  -n "Easytrieve-to-Java-Migration" \<br />  -p ~/root/transform-workspace/mainframe-source/source-code \<br />  -c "mvn clean install"<br /><br /># Resume an interrupted execution<br />atx -resume<br /># OR<br />atx --conversation-id <conversation-id><br /></pre>AWS Transform automatically validates the code through build and test commands during transformation execution. | App developer | 
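
Because the baseline files use EBCDIC encoding, a quick spot check before the run can confirm that a file really is EBCDIC. This sketch uses POSIX `dd`, whose `conv=ascii` option converts EBCDIC input to ASCII; the file name and the 80-byte record length are illustrative assumptions:

```shell
# Preview the first record of an assumed EBCDIC file as ASCII.
# Readable text indicates the file is EBCDIC; garbage suggests it is
# already ASCII or uses a different record layout.
dd if=input-data/CUSTFILE.DAT bs=80 count=1 conv=ascii 2>/dev/null
```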

### Validate and deliver tested code
<a name="validate-and-deliver-tested-code"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Review the transformation validation summary. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/transform-easytrieve-modern-languages.html) | App developer | 
| Access validation reports. | Enter these commands to review the detailed validation artifacts:<pre># Full validation report<br />cat ~/.aws/atx/custom/$LATEST_SESSION/artifacts/validation_report.html<br /><br /># Generated code location<br />ls ~/.aws/atx/custom/$LATEST_SESSION/generated/<br /><br /># Execution logs<br />cat ~/.aws/atx/custom/$LATEST_SESSION/logs/execution.log</pre> | App developer | 
| Enable knowledge items for continuous learning. | Improve future transformation accuracy by promoting suggested knowledge items to your persistent configuration. After a transformation, the agent stores identified patterns and mapping rules in your local session directory. To review and apply these learned items, run these commands on your Amazon EC2 instance:<pre># List all knowledge items for a specific transformation definition<br />atx custom def list-ki -n <transformation-name><br /><br /># Retrieve the details of a specific knowledge item<br />atx custom def get-ki -n <transformation-name> --id <id><br /><br /># Update the status of a knowledge item (ENABLED or DISABLED)<br />atx custom def update-ki-status -n <transformation-name> --id <id> --status ENABLED<br /><br /># Update the knowledge item configuration to enable auto-approval<br />atx custom def update-ki-config -n <transformation-name> --auto-enabled TRUE</pre> | App developer | 

## Troubleshooting
<a name="transform-easytrieve-modern-languages-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| *Input and output path configuration*<br />Input files are not being read, or output files are not being written correctly. | Specify the complete directory path where input files are stored, and clearly indicate the location where output should be written. Ensure that proper access permissions are configured for these directories. As best practices, use absolute paths rather than relative paths to avoid ambiguity, and verify that all specified paths exist with appropriate read/write permissions. | 
| *Resuming interrupted executions*<br />Execution was interrupted or needs to be continued from where it stopped. | You can resume execution from where you left off by providing the conversation ID in the CLI command. Find the conversation ID in the logs of your previous execution attempt. | 
| *Resolving memory constraints*<br />An out-of-memory error occurs during execution. | You can ask AWS Transform to share the current in-memory JVM size and then increase the memory allocation based on this information. This adjustment helps accommodate larger processing requirements. Consider breaking large jobs into smaller batches if memory constraints persist after adjustments. | 
| *Addressing output file discrepancies*<br />Output files don't match expectations, and AWS Transform indicates that no further changes are possible. | Provide specific feedback and technical reasons explaining why the current output is incorrect. Include additional technical or business documentation to support your requirements. This detailed context helps AWS Transform correct the code to generate the proper output files. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/transform-easytrieve-modern-languages.html) | 
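
For the memory-constraint case, when the build command is `mvn clean install` as shown earlier, one common adjustment is to raise the JVM heap available to the build before re-running the transformation. This is a sketch; the 8 GB value is an assumption that you should size against your EC2 instance memory:

```shell
# If a JVM is available, inspect the default heap ceiling it reports.
if command -v java >/dev/null 2>&1; then
  java -XX:+PrintFlagsFinal -version 2>/dev/null | grep -w MaxHeapSize
fi

# Raise the heap available to Maven builds before re-running the job.
# 8g is an illustrative value; size it against your instance memory.
export MAVEN_OPTS="-Xmx8g"
```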

## Related resources
<a name="transform-easytrieve-modern-languages-resources"></a>
+ [AWS Transform custom documentation](https://docs.aws.amazon.com/transform/latest/userguide/custom.html)
+ [Easytrieve Report Generator 11.6](https://techdocs.broadcom.com/us/en/ca-mainframe-software/devops/ca-easytrieve-report-generator/11-6/getting-started.html)

# More patterns
<a name="mainframe-more-patterns-pattern-list"></a>

**Topics**
+ [Deploy the Security Automations for AWS WAF solution by using Terraform](deploy-the-security-automations-for-aws-waf-solution-by-using-terraform.md)
+ [Replicate mainframe databases to AWS by using Precisely Connect](replicate-mainframe-databases-to-aws-by-using-precisely-connect.md)