

AWS Mainframe Modernization Service (Managed Runtime Environment experience) is no longer open to new customers. For capabilities similar to AWS Mainframe Modernization Service (Managed Runtime Environment experience), explore AWS Mainframe Modernization Service (Self-Managed Experience). Existing customers can continue to use the service as normal. For more information, see [AWS Mainframe Modernization availability change](https://docs.aws.amazon.com/m2/latest/userguide/mainframe-modernization-availability-change.html).

# AWS Transform for mainframe Blusam Administration Console
<a name="ba-shared-bac-userguide"></a>

The Blusam Administration Console (BAC) is a secured web application for handling Blusam data sets. This guide covers the BAC user interface. For remote management through REST endpoints, see [Blusam application console REST endpoints](ba-endpoints-bac.md).

**Topics**
+ [Deploying the BAC](bac-deployment.md)
+ [Using the BAC](bac-usage.md)
+ [LISTCAT JSON format](ba-shared-bac-listcat-json-format.md)

# Deploying the BAC
<a name="bac-deployment"></a>

The BAC is available as a secured single web application, using the web archive format (.war). It is intended to be deployed alongside the AWS Transform for mainframe Gapwalk-Application, in an Apache Tomcat application server, but it can also be deployed as a standalone application. The BAC inherits access to the Blusam storage from the Gapwalk-Application configuration, if present.

The BAC has its own dedicated configuration file, named `application-bac.yml`. For configuration details, see [BAC dedicated configuration file](#ba-shared-bac-configuration-file).

The BAC is secured. For details about security configuration, see [Configuring security for the BAC](#ba-shared-bac-securing).

## BAC dedicated configuration file
<a name="ba-shared-bac-configuration-file"></a>

Standalone deployment: If the BAC is deployed without the Gapwalk-Application, the connection to the Blusam storage must be configured in the `application-bac.yml` configuration file.
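As a sketch only: because the BAC ships as a .war, it is plausibly a Spring Boot application, in which case a standalone connection could be declared with standard Spring datasource properties in `application-bac.yml`. The host, database, and credential values below are placeholders, and the exact property keys are an assumption — verify them against your Gapwalk-Application configuration before use.

```
# Hypothetical standalone Blusam storage connection (PostgreSQL assumed).
# Verify the actual property names against your Gapwalk-Application setup.
spring:
  datasource:
    url: jdbc:postgresql://blusam-db.example.com:5432/blusam
    username: bac_user
    password: ${BAC_DB_PASSWORD}
```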

Default values for the data set configuration used to browse data set records must be set in the configuration file. See [Browsing records from a data set](bac-usage.md#ba-shared-bac-read-dataset). The records browsing page can use an optional mask mechanism that makes it possible to show a structured view of a record's content. Some properties affect the records view when masks are used.

The following configurable properties must be set in the configuration file. The BAC application does not assume any default value for these properties.


| Key | Type | Description | 
| --- | --- | --- | 
| bac.crud.limit | integer | A positive integer value giving the maximum number of records returned when browsing records. Using 0 means unlimited. Recommended value: 10 (then adjust the value data set by data set on the browsing page, to fit your needs). | 
| bac.crud.encoding | string | The default character set name, used to decode record bytes as alphanumeric content. The provided charset name must be Java compatible (see the Java documentation for supported charsets). Recommended value: the charset used on the legacy platform the data sets come from; this is an EBCDIC variant most of the time. | 
| bac.crud.initCharacter | string | The default character (byte) used to init data items. Two special values can be used: "LOW-VALUE", the 0x00 byte (recommended value) and "HI-VALUE", the 0xFF byte. Used when masks are applied. | 
| bac.crud.defaultCharacter | string | The default character (byte), as a one character string, used for padding records (on the right). Recommended value: " " (space). Used when masks are applied. | 
| bac.crud.blankCharacter | string | The default character (byte), as a one character string, used to represent blanks in records. Recommended value: " " (space). Used when masks are applied. | 
| bac.crud.strictZoned | boolean | A flag to indicate which zoned mode is used for the record. If true, the Strict zoned mode is used; if false, the Modified zoned mode is used. Recommended value: true. Used when masks are applied. | 
| bac.crud.decimalSeparator | string | The character used as decimal separator in numeric edited fields (used when masks are applied). | 
| bac.crud.currencySign | string | The default character, as a one character string, used to represent currency in numeric edited fields, when formatting is applied (used when masks are applied). | 
| bac.crud.pictureCurrencySign | string | The default character, as a one character string, used to represent currency in numeric edited fields pictures (used when masks are applied). | 

The following sample is a configuration file snippet.

```
bac.crud.limit: 10
bac.crud.encoding: ascii
bac.crud.initCharacter: "LOW-VALUE"
bac.crud.defaultCharacter: " "
bac.crud.blankCharacter: " "
bac.crud.strictZoned: true
bac.crud.decimalSeparator: "."
bac.crud.currencySign: "$"
bac.crud.pictureCurrencySign: "$"
```
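To illustrate how `bac.crud.encoding` affects the records view, the following standalone Python sketch (an illustration, not BAC code) decodes the same record bytes with an EBCDIC charset and with a mismatched one. Python's `cp037` corresponds to Java's `IBM037`; charset names differ slightly between the two platforms.

```python
# A record's bytes as they might come from an EBCDIC mainframe data set.
raw = bytes([0xC1, 0xC2, 0xC3, 0xF1, 0xF2])  # EBCDIC for "ABC12"

# Decoding with the matching EBCDIC charset yields the intended text...
print(raw.decode("cp037"))    # ABC12

# ...while a mismatched charset such as latin-1 produces garbage,
# which is what you would see in the BAC records view.
print(raw.decode("latin-1"))
```

This is why the recommended value for `bac.crud.encoding` is the charset of the legacy platform the data sets come from.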

## Configuring security for the BAC
<a name="ba-shared-bac-securing"></a>

Security for the BAC relies on the OAuth2 authentication scheme; configuration details are provided for both Amazon Cognito and Keycloak.

While the general setup applies, some BAC specifics need to be detailed here. Access to BAC features is protected by a role-based policy that relies on the following roles.
+ ROLE\_USER:
  + Basic user role
  + No import, export, creation, or deletion of data sets allowed
  + No control over caching policies
  + No administration features allowed
+ ROLE\_ADMIN:
  + Inherits ROLE\_USER permissions
  + All data set operations allowed
  + Caching policies administration allowed

## Installing the masks
<a name="ba-shared-bac-masks"></a>

In Blusam storage, data set records are stored in a byte array column in the database, for versatility and performance reasons. A convenient feature of the BAC is a structured, field-based view of business records, reflecting the application's point of view. This view relies on the SQL masks produced during the AWS Transform for mainframe driven modernization process.

For the SQL masks to be generated, make sure to set the relevant option (`export.sql.masks`) to true in the configuration of the AWS Transform for mainframe refactor Transformation Center:

![\[Property set configuration with export.sql.masks option set to true and boolean type.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/ba-bac-bluinsights-generate-masks-option.png)


The masks are part of the modernization artifacts that can be downloaded from AWS Transform for mainframe refactor for a given project. They are SQL scripts, organized by modernized program, giving the applicative point of view on data set records.

For example, using the [AWS CardDemo sample application](https://github.com/aws-samples/aws-mainframe-modernization-carddemo/tree/main/app/cbl), you can find the following SQL masks for the program CBACT04C.cbl in the artifacts downloaded from the modernization of this application:

![\[List of SQL mask files for CBACT04C program, including account, discrep, and transaction records.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/ba-bac-sample-masks.png)


Each SQL mask name is the concatenation of the program name and the record structure name for a given data set within the program.

For example, in the [CBACT04C.cbl](https://github.com/aws-samples/aws-mainframe-modernization-carddemo/blob/main/app/cbl/CBACT04C.cbl) program, the following file control entry:

```
    FILE-CONTROL.      
        SELECT TCATBAL-FILE ASSIGN TO TCATBALF   
               ORGANIZATION IS INDEXED
               ACCESS MODE  IS SEQUENTIAL
               RECORD KEY   IS FD-TRAN-CAT-KEY
               FILE STATUS  IS TCATBALF-STATUS.
```

is associated with the following FD record definition:

```
       FILE SECTION. 
       FD  TCATBAL-FILE.  
       01  FD-TRAN-CAT-BAL-RECORD.  
           05 FD-TRAN-CAT-KEY.  
              10 FD-TRANCAT-ACCT-ID             PIC 9(11).  
              10 FD-TRANCAT-TYPE-CD             PIC X(02).
              10 FD-TRANCAT-CD                  PIC 9(04).  
           05 FD-FD-TRAN-CAT-DATA               PIC X(33).
```

The matching SQL mask, named `cbact04c_fd_tran_cat_bal_record.SQL`, gives the point of view of the program CBACT04C.cbl on the FD record named `FD-TRAN-CAT-BAL-RECORD`.

Its content is:

```
-- Generated by AWS Transform for mainframe Velocity
-- Mask : cbact04c_fd_tran_cat_bal_record

INSERT INTO mask (name, length) VALUES ('cbact04c_fd_tran_cat_bal_record', 50);
  INSERT INTO mask_item (name, c_offset, length, skip, type, options, mask_fk) VALUES ('fd_trancat_acct_id', 1, 11, false, 'zoned', 'integerSize=11!fractionalSize=0!signed=false', (SELECT MAX(id) FROM mask));
  INSERT INTO mask_item (name, c_offset, length, skip, type, options, mask_fk) VALUES ('fd_trancat_type_cd', 12, 2, false, 'alphanumeric', 'length=2', (SELECT MAX(id) FROM mask));
  INSERT INTO mask_item (name, c_offset, length, skip, type, options, mask_fk) VALUES ('fd_trancat_cd', 14, 4, false, 'zoned', 'integerSize=4!fractionalSize=0!signed=false', (SELECT MAX(id) FROM mask));
  INSERT INTO mask_item (name, c_offset, length, skip, type, options, mask_fk) VALUES ('fd_fd_tran_cat_data', 18, 33, false, 'alphanumeric', 'length=33', (SELECT MAX(id) FROM mask));
```
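To make the mask contents concrete, the following standalone Python sketch (an illustration of how mask items describe a record, not BAC code) applies the mask items above to a 50-byte EBCDIC record, slicing each field by its 1-based offset and length, and interpreting unsigned zoned fields as integers:

```python
# Mask items copied from the SQL above: (name, 1-based offset, length, type).
MASK_ITEMS = [
    ("fd_trancat_acct_id", 1, 11, "zoned"),
    ("fd_trancat_type_cd", 12, 2, "alphanumeric"),
    ("fd_trancat_cd", 14, 4, "zoned"),
    ("fd_fd_tran_cat_data", 18, 33, "alphanumeric"),
]

def apply_mask(record: bytes, encoding: str = "cp037") -> dict:
    """Slice a record according to the mask and decode each field."""
    fields = {}
    for name, offset, length, ftype in MASK_ITEMS:
        chunk = record[offset - 1 : offset - 1 + length]  # offsets are 1-based
        text = chunk.decode(encoding)
        # Unsigned zoned decimal stores one EBCDIC digit per byte,
        # so the decoded text is directly parseable as an integer.
        fields[name] = int(text) if ftype == "zoned" else text
    return fields

# Build a sample 50-byte record in EBCDIC (cp037), space-padded on the right.
record = "00000000123AB0042".ljust(50).encode("cp037")
print(apply_mask(record))
```

This mirrors what the records browsing page does when a mask is selected: each mask item becomes a typed column over a sub-portion of the raw record bytes.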

Masks are stored in the Blusam storage using two tables:
+ mask: used to identify masks. The columns of the mask table are: 
  + name: used to store the mask identification (used as primary key, so must be unique)
  + length: size in bytes of the record mask
+ mask\_item: used to store mask details. Every elementary field from an FD record definition produces a row in the mask\_item table, with details on how to interpret the given record part. The columns of the mask\_item table are: 
  + name: name of the record field, based on the elementary field name, lowercased and with dashes replaced by underscores
  + c\_offset: 1-based offset of the record sub-part used for the field content
  + length: length in bytes of the record sub-part used for the field content
  + skip: flag that indicates whether the given record part should be skipped in the view presentation
  + type: the field kind (based on its legacy picture clause)
  + options: additional type options (type-dependent)
  + mask\_fk: reference to the mask identifier to attach this item to

Note the following:
+ SQL masks represent one program's point of view on the records of a data set: different programs might have different points of view on the same data set. Only install the masks that are relevant for your purpose.
+ A SQL mask can also represent the point of view of a program based on a 01 data structure from the WORKING-STORAGE section, not only on an FD record. The SQL masks are organized into sub-folders according to their nature:
  + FD record based masks will be located in the sub-folder named `file`
  + 01 data structure based masks will be located in the sub-folder named `working` 

  While FD record definitions always match the record content of a data set, 01 data structures might not be aligned or might only represent a subset of a data set record. Before you use them, inspect the code and understand the possible shortcomings.

# Using the BAC
<a name="bac-usage"></a>

Because the BAC is secured and delivers permissions to use features based on the user role, the first step to access the application is to authenticate yourself. After the authentication step, you'll be redirected to the home page. The home page presents the paginated list of data sets found in the Blusam storage:

![\[Blusam Administration Console showing configuration settings and a table of data sets.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/ba-bac-list-datasets.png)


To return to the home page with the data sets listing, choose the AWS Transform for mainframe logo in the upper left corner of any page of the application. The following image shows the logo.

![\[Blu Age logo with stylized blue text and orange hyphen.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/logo_blu_age_aws_console_s.png)


The foldable header, labelled "Blusam configuration", contains information about the Blusam storage configuration in use:
+ `Persistence`: the persistent storage engine (PostgreSQL)
+ `Cache Enabled`: whether the storage cache is enabled

On the right side of the header are two drop-down lists, each listing operations related to data sets:
+ **Bulk actions**
+ **Create actions**

To learn about the detailed contents of these lists, see [Existing data set operations](#ba-shared-bac-usage-datasets).

The **Bulk Actions** button is disabled when no data set selection has been made.

You can use the search field to filter the list based on data set names:

![\[Search field and table showing KSDS data sets with details like keys, records, and dates.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/ba-bac-filtered-list-datasets.png)


The paginated list that follows shows one data set per table row, with the following columns:
+ Selection checkbox: A checkbox to select the current data set.
+ Name: The name of the data set.
+ Type: The type of the data set, one of the following:
  + KSDS
  + ESDS
  + RRDS
+ Keys: A link to show or hide details about the keys (if any). For example, the given KSDS has the mandatory primary key and one alternative key.   
![\[Key details table showing primary and alternative keys with their names, uniqueness, offsets, and lengths.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/ba-shared-bac-keys-details.png)

  There is one row per key, with the following columns. None of the fields are editable.
  + Key nature: either a primary key or an alternative key
  + Name: the name of the key
  + Unique: whether key values must be unique (the key does not accept duplicate entries)
  + Offset: offset of the key start within the record
  + Length: length in bytes of the key portion in the record
+ Records: The total number of records in the data set.
+ Record size max: The maximum size for records, expressed in bytes.
+ Fixed record length: A checkbox that indicates whether the records are fixed length (selected) or variable length (unselected).
+ Compression: A checkbox that indicates whether compression is applied (selected) or not (unselected) to stored indexes.
+ Creation date: The date when the data set was created in the Blusam storage.
+ Last modification date: The date when the data set was last updated in the Blusam storage.
+ Cache: A link to show or hide details about the caching strategy applied to this dataset.   
![\[Cache details section with options to enable cache at startup and warm up cache.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/ba-shared-bac-cache-details.png)
  + Enable cache at startup: A checkbox to specify the startup caching strategy for this data set. If selected, the data set will be loaded into cache at startup time.
  + Warm up cache: A button to load the given data set into cache, starting immediately (but hydrating the cache takes some time, depending on the data set size and number of keys). After the data set gets loaded into cache, a notification like the following one appears.  
![\[Green box indicating successful achievement of DataSet AWS.M2.CARDDEMO.CUSTDATA.V SAM.KSDS cache warm up.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/ba-bac-warmed-up-cache-notification.png)
+ Actions: A drop-down list of possible data sets operations. For details, see [Existing data set operations](#ba-shared-bac-usage-datasets).

At the bottom of the page, there is a regular paginated navigation widget for browsing through the pages of the list of data sets.

## Existing data set operations
<a name="ba-shared-bac-usage-datasets"></a>

For each data set in the paginated list, there is an **Actions** drop-down list with the following content:

![\[Dropdown menu showing options: Read, Load, Export, Clear, and Delete.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/ba-bac-dataset-actions-dropdown.png)


Each item in the list is an active link that performs the specified action on the data set:
+ Read: browse records from the data set
+ Load: import records from a legacy data set file
+ Export: export records to a flat file (compatible with legacy systems)
+ Clear: remove all records from the data set
+ Delete: remove the data set from the storage

Details for each action are provided in the following sections.

### Browsing records from a data set
<a name="ba-shared-bac-read-dataset"></a>

When you choose the **Read** action for a given data set, you get the following page.

![\[Blusam Administration Console interface for dataset management with search and filter options.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/ba-bac-dataset-browse-empty.png)


The page is made of:
+ a header, with: 
  + Dataset: the data set name
  + Record size: the fixed record length, expressed in bytes
  + Total Records: the total number of records stored for this data set
  + Show configuration button (on the right side): a toggle button to show/hide the data set configuration. At first, the configuration is hidden. When you choose the button, the configuration appears, as shown in the following image.  
![\[Dataset configuration panel with fields for encoding, characters, separators, and currency signs.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/ba-bac-dataset-configuration.png)

    When the configuration is shown, two new buttons, Save and Reset, appear. They are used respectively to:
    + save the configuration for this data set and the current work session
    + reset the configuration of all fields to their default values.
  + A list of configurable properties to tailor the browsing experience for the given data set.

The configurable properties match the configuration properties described in [BAC dedicated configuration file](bac-deployment.md#ba-shared-bac-configuration-file). Refer to that section to understand the meaning of each column and applicable values. Each value can be redefined here for the data set and saved for the work session (using the Save button). After you save the configuration, a banner similar to the one shown in the following image appears.

![\[Success message indicating configuration has been saved for the current dataset view session.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/ba-bac-dataset-configuration-saved-banner.png)


The banner states that the work session ends when you leave the current page.

There is an extra configurable property that is not documented in the configuration section: Record size. Use it to specify a record size, expressed in bytes, that filters the masks applicable to this data set: only masks whose total length matches the given record size are listed in the Data mask drop-down list.

Retrieving records from the data set is triggered by the Search button, which uses all the nearby options and filters.

First line of options:
+ Data mask: this drop-down list shows the applicable masks (those matching the record size). Note that matching the record size is not enough to make a mask effectively applicable: the mask definition must also be compatible with the records' contents. The mask selected here is used to present the retrieved records.
+ Max results: limits the number of records retrieved by the search. Set to 0 for unlimited (paginated) results from the data set.
+ Search button: launches the records retrieval, using the filters and options.
+ Clear mask button: clears the mask in use, if any, and switches the results page back to a raw key/data presentation.
+ Clear filter button: clears the filter(s) in use, if any, and updates the results page accordingly.
+ All fields toggle: when selected, mask items defined with `skip = true` are shown anyway; otherwise, they are hidden.

Next lines of filters: you can define a list of filters, based on filtering conditions applied to fields (columns) from a given mask, as shown in the following image.
+ Filter mask: The name of the mask to pick the filtering column from. When you choose the field, the list of applicable masks appears. You can choose the mask you want from that list.  
![\[Text input field labeled "Filter mask" with a dropdown arrow and placeholder text.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/ba-bac-mask-quick-select.png)
+ Filter column: The name of the field (column) from the mask, used to filter records. When you choose the field, the list of mask columns appears. To fill the **Filter column** field, choose the desired cell.  
![\[Dropdown menu showing filter column options for a data mask, including transaction and account IDs.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/ba-bac-dataset-filter-column.png)
+ Filter operator: An operator to apply to the selected column. The following operators are available.
  + equals to: the column value for the record must be equal to the filter value
  + starts with: the column value for the record must start with the filter value
  + ends with: the column value for the record must end with the filter value
  + contains: the column value for the record must contain the filter value
+ Filter options:
  + Inverse: apply the inverse condition for the filter operator; for instance, 'equals to' is replaced by 'not equals to';
  + Ignore case: ignore case on alphanumeric comparisons for the filter operator
+ Filter value: The value used for comparison by the filter operator with the filter column.
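The filter semantics described above can be sketched in a few lines of standalone Python (an illustration of the behavior, not BAC code): each condition combines an operator with the optional Inverse and Ignore case flags.

```python
def matches(value: str, operator: str, filter_value: str,
            inverse: bool = False, ignore_case: bool = False) -> bool:
    """Evaluate one filter condition against a record's column value."""
    if ignore_case:
        value, filter_value = value.lower(), filter_value.lower()
    ops = {
        "equals to":   value == filter_value,
        "starts with": value.startswith(filter_value),
        "ends with":   value.endswith(filter_value),
        "contains":    filter_value in value,
    }
    result = ops[operator]
    # Inverse turns, for instance, "equals to" into "not equals to".
    return not result if inverse else result

print(matches("ACCT-0042", "starts with", "acct", ignore_case=True))  # True
print(matches("ACCT-0042", "equals to", "ACCT-0042", inverse=True))   # False
```

Successive conditions are then chained with the **and** / **or** link operator before the combined result decides whether a record is kept.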

Once the minimal set of filter items is defined (at least the Filter mask, Filter column, Filter operator, and Filter value must be set), the Add Filter button is enabled. Choosing it creates a new filter condition on the retrieved records. Another empty filter condition row is added at the top, and the added filter condition has a Remove filter button that you can use to suppress it:

![\[Filter configuration interface with options for mask, column, operator, and value.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/ba-bac-dataset-added-filter.png)


When you launch the search, the filtered results appear in a paginated table.

**Note**
+ Successive filters are linked by an **and** or an **or**. Every new filter definition starts by setting the link operator, as shown in the following image.  
![\[Dropdown menu showing options for filter link operator: "and" or "or".\]](http://docs.aws.amazon.com/m2/latest/userguide/images/bac-bac-filter-link-operator.png)
+ There might not be any records that match the given filter conditions.

Otherwise, the results table looks like the one in the following image.

![\[Data table showing transaction records with account IDs, types, and numerical data.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/ba-bac-dataset-filtered-results.png)


A header indicates the total number of records that match the filter conditions. After the header, you see the following.
+ A reminder of the data mask in use (if any) and the filter conditions.
+ A refresh button that you can use to refresh the whole results table with the latest values from the Blusam storage (which might have been updated by another user, for instance).

For each retrieved record, the table has a row that shows the result of applying the data mask to the record's contents. Each column is the interpretation of the record sub-portion according to the column's type (using the selected encoding). To the left of each row, there are three buttons:
+ a magnifying glass button: leads to a dedicated page showing the detailed record's contents
+ a pen button: leads to a dedicated edit page for the record's contents
+ a trashcan button: used to delete the given record from the Blusam storage

Viewing the record's contents in detail:

![\[Data mask table showing fields for a transaction record with name, type, options, and value columns.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/bac-bac-record-ro-details.png)

+ Three toggle buttons for hiding or showing some columns: 
  + Hide/show the type
  + Hide/show the display flag
  + Hide/show the range
+ To leave this dedicated page and go back to the results table, choose **Close**.
+ Each row represents a column from the data mask, with the following columns: 
  + Name: the column's name
  + Type: the column's type
  + Display: the display indicator; a green check will be displayed if the matching mask item is defined with `skip = false`, otherwise a red cross will be displayed
  + From & To: the 0-based range for the record sub-portion
  + Value: the interpreted value of the record sub-portion, using type and encoding

Editing the record's contents:

![\[Data record editor showing fields for transaction account details and data.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/bac-bac-record-rw-details.png)


The editing page is similar to the view page described above, except that the mask item values are editable. Three buttons control the update process:
+ Reset: resets the editable values to the initial record values (prior to any edits).
+ Validate: validates the input with regard to the mask item type. For each mask item, the result of the validation is shown using visual labels (`OK` and a check mark if validation succeeded, `ERROR` and a red cross if validation failed, alongside an error message giving hints about the failure). If the validation succeeded, two new buttons appear:
  + Save: attempt to update the existing record into Blusam storage
  + Save a copy: attempt to create a new record into Blusam storage  
![\[Data record form with fields for transaction account details and validation status.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/bac-bac-record-rw-valid-details.png)
  + If saving the record to the storage is successful, a message is displayed and the page will switch to a read-only mode (mask items values cannot be edited anymore):   
![\[Data mask record details showing fields, types, options, and values in a table format.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/ba-bac-record-updated.png)
  + If for any reason persisting the record to the storage fails, an error message is displayed in red, providing a failure reason. The most common cause of failure is that storing the record would lead to key corruption (an invalid or duplicate key). For an illustration, see the following note. 
  + To exit, choose the **Close** button.
+ Cancel: Ends the editing session, closes the page, and takes you back to the records list page.

**Note:**
+ The validation mechanism only checks that the mask item value is formally compatible with the mask item type. For example, see this failed validation on a numeric mask item:  
![\[Data entry form with validation error on numeric field, showing incompatible value.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/bac-bac-record-rw-invalid-format.png)
+ The validation mechanism might try to auto-correct invalid input, displaying an informational message in blue to indicate that the value has been automatically corrected, according to its type. For example, inputting 7XX0 as the numeric value in the numeric `fd_trncat_cd` mask item:   
![\[Data mask interface showing auto-correction of numeric value 7XX0 in fd_trncat_cd field.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/bac-bac-record-rw-half-invalid-format.png)

  Calling validation leads to the following:  
![\[Data mask interface showing record fields, types, options, and values for a transaction category.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/bac-bac-record-rw-half-invalid-format-autofix.png)
+ The validation mechanism does not check whether the given value is valid in terms of key integrity (if any unique key is involved for the given data set). For instance, despite validation being successful, if provided values lead to an invalid or duplicate key situation, the persistence will fail and an error message will be displayed:  
![\[Data entry form with error message and fields for transaction details.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/bac-bac-record-rw-invalid-key.png)
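The formal validation described above can be sketched in standalone Python (a hypothetical illustration; the BAC's actual rules, including auto-correction, may differ): an unsigned zoned item accepts only digit characters within its declared size, and key integrity is deliberately not checked at this stage.

```python
def validate_zoned(value: str, integer_size: int) -> tuple:
    """Formal check for an unsigned zoned mask item: digits only, within size."""
    if not value.isdigit():
        return ("ERROR", "value contains non-numeric characters")
    if len(value) > integer_size:
        return ("ERROR", "value exceeds %d digits" % integer_size)
    return ("OK", "")

print(validate_zoned("0042", 4))  # ('OK', '')
print(validate_zoned("7XX0", 4))  # ('ERROR', 'value contains non-numeric characters')
```

As the note explains, a formally valid value can still fail at save time: duplicate or invalid keys are only detected when the record is persisted to the Blusam storage.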

Deleting a record:

To delete a record, choose the trashcan button:

![\[Confirmation dialog for deleting a record, with Cancel and Confirm options.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/ba-bac-record-deletion-confirmation-popup.png)


### Loading records into a data set
<a name="ba-shared-bac-load-dataset"></a>

To load records into a data set, choose **Actions**, then choose **Load**.

![\[Dropdown menu showing options: Read, Load, Export, Clear, and Delete.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/ba-bac-dataset-load-cmd.png)


A window with load options appears.

![\[Data set loading interface with reading parameters and file selection options.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/ba-bac-dataset-load-popup.png)


At first, both the **Load on server** and **Load on Blusam** buttons are disabled.

Reading parameters:
+ Record length kind:
  + Fixed or Variable record length: use the radio button to specify whether the legacy data set export uses fixed-length or variable-length records (variable-length records are expected to start with RDW bytes). If you choose Fixed, the record length must be specified (in bytes) as a positive integer value in the input field. The value should be pre-filled with information coming from the data set. If you choose Variable, the input field disappears.
  + File selection: 
    + Local: choose the data set file from your local computer, using the file selector below (Note: the file selector uses your browser's locale for its messages -- shown here in French, but it might look different on your side, which is expected). After you make the selection, the window is updated with the data file name and the **Load on server** button is enabled:   
![\[File selection interface with Local and Server options, Browse button, and Load on server button.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/ba-bac-dataset-load-selection.png)

      Choose **Load on server**. After the progress bar reaches its end, the **Load on Blusam** button gets enabled:  
![\[Progress bar fully loaded, with "Load on Blusam" button enabled.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/ba-bac-dataset-load-selection-uploaded.png)

      To complete the load process to the Blusam storage, choose **Load on Blusam**. Otherwise, choose **Cancel**. If you proceed with the load process, a notification appears in the lower right corner after the loading process completes:  
![\[Green success notification indicating file loading completed successfully.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/ba-bac-dataset-load-notification.png)
    + Server: choosing this option makes an input field appear while the **Load on server** button disappears. The input field is where you must specify the path to the data set file on the Blusam server (this assumes that you have transferred the given file to the Blusam server first). After you specify the path, **Load on Blusam** gets enabled:   
![\[File selection interface with server option and file path input field.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/ba-bac-dataset-load-from-server.png)

      To complete the loading process, choose **Load on Blusam**. Otherwise, choose **Cancel**. If you proceed with the loading, a notification appears after the loading process is complete. The notification differs from the load-from-browser one in that it displays the data file server path followed by the words **from server**:  
![\[Green success notification showing file loaded from server path.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/ba-bac-dataset-load-from-server-notification.png)

### Exporting records from a data set
<a name="ba-shared-bac-export-dataset"></a>

To export data set records, choose **Actions** in the current data set row, then choose **Export**:

![\[Dropdown menu showing options: Read, Load, Export, Clear, and Delete.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/ba-bac-dataset-export-cmd.png)


The following pop-up window appears.

![\[Data dump configuration window with options for local or server storage and zip dump.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/ba-bac-dataset-export-popup.png)


Options:

**To**: a radio button choice to pick the export destination, either as a download in the browser (**Local (on browser)**) or to a given folder on the **Server** hosting the BAC application. If you choose **Server**, a new input field is displayed: 

![\[Radio button for selecting Server as the export destination, with an input field for target folder.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/ba-bac-dataset-export-server-folder-location.png)


As the red asterisk on the right of the input field indicates, you must provide a valid folder location on the server (the **Dump** button remains inactive until you provide one).

If you plan to manipulate the exported data set file after the export, make sure that you have sufficient access rights to the server file system.

**Zip dump**: a checkbox. When selected, the export produces a zipped archive instead of a raw file.

**Options**: To include a Record Descriptor Word (RDW) at the beginning of each record in the exported data set, for data sets with variable-length records, choose **Include RDW fields**.

To launch the data set export process, choose **Dump**. If you export to the browser, check your browser's download folder for the exported data set file. The file has the same name as the data set:

![\[File name AWS.M2.CARDDEMO.CARDXREF.VSAM.KSDS with details on size and type.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/ba-bac-dataset-export-result-file.png)


**Note:**
+ For KSDS, the records are exported in primary key order.
+ For ESDS and RRDS, the records are exported in RBA (Relative Byte Address) order.
+ For all data set kinds, records are exported as raw binary arrays (no conversion of any kind), ensuring direct compatibility with legacy platforms.
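If you need to post-process an exported dump outside the BAC, a minimal Python sketch such as the following could split it back into records. This is not an official tool; the RDW layout assumed here (a big-endian 2-byte length that includes the 4-byte RDW itself, followed by two reserved zero bytes) is the conventional z/OS format, so verify it against an actual export before relying on it:

```python
import struct

def read_fixed(dump_path, record_size):
    """Split a fixed-length export dump into raw records."""
    records = []
    with open(dump_path, "rb") as f:
        while True:
            rec = f.read(record_size)
            if not rec:
                break
            records.append(rec)
    return records

def read_variable_rdw(dump_path):
    """Read records from a dump exported with 'Include RDW fields'.
    Assumes the conventional 4-byte RDW: a big-endian 2-byte length
    that includes the RDW itself, followed by 2 reserved zero bytes."""
    records = []
    with open(dump_path, "rb") as f:
        while True:
            rdw = f.read(4)
            if not rdw:
                break
            (length,) = struct.unpack(">H", rdw[:2])
            records.append(f.read(length - 4))  # payload excludes the RDW
    return records
```

Because records are exported as raw binary arrays, any character data in them is still in the legacy encoding (typically EBCDIC) and would need decoding separately.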

### Clearing records from a data set
<a name="ba-shared-bac-clear-dataset"></a>

To clear all records from a data set, choose **Actions**, then choose **Clear**:

![\[Dropdown menu showing options: Read, Load, Export, Clear, and Delete.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/ba-bac-dataset-clear-cmd.png)


After all records are removed from a data set, the following notification appears.

![\[Green success notification showing "Succeeded" with a checkmark and data set details.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/ba-bac-dataset-clear-notification.png)


### Deleting a data set
<a name="ba-shared-bac-delete-dataset"></a>

To delete a data set, choose **Actions**, then choose **Delete**:

![\[Dropdown menu showing options: Read, Load, Export, Clear, and Delete.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/ba-bac-dataset-delete-cmd.png)


After you delete a data set, the following notification appears:

![\[Green success notification with checkmark indicating data set deletion completed.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/ba-bac-dataset-delete-notification.png)


### Bulk operations
<a name="ba-shared-bac-bulk-usage-existing-datasets"></a>

Three bulk operations are available on data sets:
+ Export
+ Clear
+ Delete

Bulk operations apply only to a selection of data sets (at least one data set must be selected). To select data sets, select the checkboxes on the left of the data set rows in the data sets list table. Selecting at least one data set enables the **Bulk Actions** drop-down list:

![\[Dropdown menu showing Bulk Actions options: Export, Clear, and Delete.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/ba-bac-dataset-bulk-actions-dropdown.png)


These actions are similar to the single data set actions described previously, except that they apply to a selection of data sets; see the documentation of the individual actions for details. The pop-up window text is slightly different, to reflect the bulk nature. For example, when you delete several data sets, the pop-up window looks like the following:

![\[Confirmation dialog asking if user wants to delete all selected data sets.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/ba-bac-dataset-delete-bulk-popup.png)


## Creating operations
<a name="ba-shared-bac-usage-creating-datasets"></a>

### Create a single data set
<a name="ba-shared-bac-create-single-dataset"></a>

Choose **Actions**, then choose **Create single data set**:

![\[Dropdown menu showing "Bulk Actions" and "Create Actions" buttons with options.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/ba-bac-dataset-single-create.png)


The data set creation form will then be displayed as a pop-up window:

![\[Data set creation form with fields for name, record size, type, and other configuration options.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/ba-bac-dataset-creation-form-window.png)


You can specify the following attributes for the data set definition:
+ Enabling and disabling naming rules: Use the **Disable naming rules**/**Enable naming rules** toggle to disable or enable data set naming conventions. We recommend that you leave the toggle at its default value, with data set naming rules enabled (the toggle then displays **Disable naming rules**):  
![\[Toggle switch for disabling or enabling naming rules, currently set to "Disable naming rules".\]](http://docs.aws.amazon.com/m2/latest/userguide/images/ba-bac-disable-dataset-naming-rules.png)  
![\[Toggle switch for enabling naming rules, shown in the off position.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/ba-bac-enable-dataset-naming-rules.png)
+ Data Set name: The name for the data set. If you specify a name that is already in use, the following error message appears.  
![\[Error message indicating dataset name already exists, prompting user to choose another.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/bac-bac-dataset-name-already-used-err-msg.png)

  The name must also respect the naming convention if it is enabled:  
![\[Input field with naming convention rule for dataset names using alphabetic or national characters.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/ba-bac-dataset-name-segment-convention-err-msg.png)  
![\[Text field labeled "DataSet Name" with input validation instructions for allowed characters.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/ba-bac-dataset-name-segment-characters-err-msg.png)  
![\[Input field for dataset name with character limit instruction in red text.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/ba-bac-dataset-name-segment-length-err-msg.png)  
![\[Input field with error message indicating dataset name must not end with a period.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/ba-bac-dataset-name-ends-with-period-err-msg.png)
+ Record size max: A positive integer representing the record size for a data set with fixed-length records. You can leave it blank for data sets with variable-length records.
+ Fixed length record: A check box to specify whether the record length is fixed or variable. If selected, the data set will have fixed-length records, otherwise the record length will be variable.

  When you import legacy data to a variable length records data set, the provided legacy records must contain the Record Descriptor Word (RDW) that gives the length of each record.
+ Data set type: A drop-down list for specifying the data set type. The following types are supported.
  + ESDS
  + LargeESDS
  + KSDS

  For KSDS, you must specify the primary key:  
![\[Form fields for KSDS dataset configuration, including Primary Key, Offset, Length, and Unique option.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/ba-bac-dataset-creation-ksds.png)

  For the primary key, specify the following:
  + Name: This field is optional. The default is **PK**.
  + Offset: The 0-based offset of the primary key within the record. The offset must be a non-negative integer. This field is required.
  + Length: The length of the primary key. The length must be a positive integer. This field is required.

  For KSDS and ESDS, you can optionally define a collection of alternate keys, by choosing the Plus button in front of the Alternate Keys label. Each time you choose that button, a new alternate key definition section appears in the data set creation form:  
![\[Form fields for defining alternate keys with options for key name, offset, length, and uniqueness.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/ba-bac-dataset-altkey-definition.png)

  For each alternate key, you need to provide:
  + Name: This field is optional. The default value is **ALTK\$1\$1**, where \$1 represents an auto-incremented counter that starts at 0.
  + Offset: The 0-based offset of the alternate key within the record. The offset must be a non-negative integer. This field is required.
  + Length: The length of the alternate key. The length must be a positive integer. This field is required.
  + Unique: A checkbox to indicate whether the alternate key accepts duplicate entries. If selected, the alternate key is defined as unique (it does NOT accept duplicate key entries). This field is required.

  To remove an alternate key definition, use the trashcan button on the left.
+ Compression: A checkbox to specify whether compression will be used to store the data set.
+ Enable cache at startup: A checkbox to specify whether the data set should be loaded into cache at application startup.
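As noted above, legacy data imported into a data set with variable-length records must carry a Record Descriptor Word (RDW) before each record. The following sketch prepares such an import file; it assumes the conventional 4-byte RDW (a big-endian 2-byte length that includes the RDW itself, then two reserved zero bytes), which you should confirm matches what your Blusam load expects:

```python
import struct

def write_rdw_file(path, records):
    """Write raw records to a file, prefixing each one with a 4-byte RDW:
    a big-endian 2-byte total length (RDW included), then 2 zero bytes."""
    with open(path, "wb") as f:
        for rec in records:
            f.write(struct.pack(">H", len(rec) + 4))  # total length, RDW included
            f.write(b"\x00\x00")                      # reserved bytes
            f.write(rec)
```

For fixed-length records, no RDW is needed; the records are simply concatenated back to back.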

After you specify the attribute definitions, choose **Create** to proceed:

![\[Data set creation form with fields for name, size, type, keys, and other settings.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/ba-bac-dataset-form-complete-sample.png)


The creation window closes, and the home page showing the list of data sets is displayed. You can view the details of the newly created data set.

![\[Data set details showing primary and alternative keys with their properties.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/ba-bac-dataset-freshly-created.png)


### Create a single data set in Multi-schema mode
<a name="ba-shared-bac-create-single-dataset-Multi-schema"></a>

A data set can be created in Multi-schema mode by prefixing the data set name with the schema name followed by a pipe (`|`) symbol (for example, `schema1|AWS.M2.CARDDEMO.ACCTDATA.VSAM.KSDS`).

**Note**  
The schema used for creating the data set should be specified in the `application-main.yml` configuration. For more information, see [Multi-schema configuration properties](ba-shared-blusam.md#ba-shared-blusam-configuration-multi-schema).

![\[Data set creation form with fields for name, size, type, and other configuration options.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/ba-bac-create-single-dataset-Multi-schema.png)


If no schema prefix is provided, the data set is created in the default schema specified in the Blusam datasource URL in [Blusam Datasource configuration](ba-shared-blusam.md#ba-shared-blusam-configuration-multi-schema). If no schema is specified in the Blusam datasource URL, the `public` schema is used by default.
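The name-resolution rule described above can be summarized as a small sketch (the `default_schema` parameter is a stand-in for the schema taken from the Blusam datasource URL, when one is set):

```python
def resolve_dataset_name(name, default_schema=None):
    """Split a possibly schema-qualified data set name into (schema, name).
    An explicit 'schema|name' prefix wins; otherwise fall back to the
    datasource schema, and finally to 'public'."""
    if "|" in name:
        schema, _, bare_name = name.partition("|")
        return schema, bare_name
    return default_schema or "public", name
```

For example, `resolve_dataset_name("schema1|AWS.M2.CARDDEMO.ACCTDATA.VSAM.KSDS")` returns `("schema1", "AWS.M2.CARDDEMO.ACCTDATA.VSAM.KSDS")`.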

**Note**  
In Multi-schema mode, the BAC console displays the schema information of the data set in the first column.

![\[Blusam Administration Console showing configuration details and dataset information.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/ba-bac-create-display-datasets-Multi-schema.png)


### Create data sets from LISTCAT
<a name="ba-shared-bac-create-datasets-from-listcat"></a>

This feature lets you take advantage of the LISTCAT JSON files created during the AWS Transform for mainframe transformation process by the AWS Transform for mainframe refactor Transformation Center, as the result of parsing LISTCAT exports from the legacy platforms. The LISTCAT exports are parsed and transformed into JSON files that hold the data set definitions (names, data set type, key definitions, and whether the record length is fixed or variable).

Having the LISTCAT JSON files makes it possible to create data sets directly without having to manually enter all the information required for data sets. You can also create a collection of data sets directly instead of having to create them one by one.

If no LISTCAT JSON file is available for your project (for example, because no LISTCAT export file was available at transformation time), you can always create one manually, provided you adhere to the LISTCAT JSON format detailed in the appendix.

From the Create Actions drop-down list, choose **Create data sets from LISTCAT**.

The following dedicated page will be displayed:

![\[Interface for creating datasets from LISTCAT files, with options for file source and folder path.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/ba-bac-load-LISTCAT.png)


At this stage, the **Load** button is disabled, which is expected.

Use the radio buttons to specify how you want to provide the LISTCAT JSON files. There are two options:
+ You can use your browser to upload the JSON files.
+ You can select the JSON files from a folder location on the server. To choose this option, you must first copy the JSON files to the given folder path on the server with proper access rights.

**To use JSON files on the server**

1. Set the folder path on the server, pointing at the folder containing the LISTCAT JSON files:  
![\[Text input field for server folder path with a "Load" button below.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/ba-bac-dataset-creation-from-server-listcat-files.png)

1. Choose the **Load** button. All recognized data set definitions will be listed in a table:  
![\[List of AWS_M2_CARDDEMO data set definitions from LISTCAT, showing various VSAM_KSDS types.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/ba-bac-dataset-creation-from-server-listcat-files-list.png)

   Each row represents a data set definition. You can use the trashcan button to remove a data set definition from the list.
**Important**  
The removal from the list is immediate, with no warning message.

1. The name on the left is a link. You can choose it to show or hide the details of the data set definition. The definition is editable; you can freely modify it, starting from the parsed JSON file.  
![\[Data set configuration form with fields for name, record size, type, and key settings.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/ba-bac-dataset-creation-definition-edit-form.png)

1. To create all the data sets, choose **Create**. All data sets are created and displayed on the data sets results page. The newly created data sets all have 0 records.  
![\[Data sets results page showing newly created AWS M2 CARDDEMO data sets with 0 records.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/ba-bac-freshly-created-datasets-from-listcat.png)

**To upload files to the server**

1. This option is similar to using the files from the server folder path, but in this case you must first upload the files using the file selector. Select all files to upload from your local machine, then choose **Load on server**.  
![\[File upload interface with Browse, Load on server, and Remove all buttons, and a progress bar.\]](http://docs.aws.amazon.com/m2/latest/userguide/images/ba-bac-dataset-creation-from-uploaded-listcat-files.png)

1. When the progress bar reaches the end, all files have been successfully uploaded to the server and the **Load** button is enabled. Choose the **Load** button and use the discovered data set definitions as explained previously.

# LISTCAT JSON format
<a name="ba-shared-bac-listcat-json-format"></a>

The LISTCAT JSON format is defined by the following attributes:
+ "catalogId" (optional): the identifier of the legacy catalog, as a String, or "default" for the default catalog.
+ "identifier": the data set name, as a String.
+ "isIndexed": a boolean flag to indicate KSDS: true for KSDS, false otherwise.
+ "isLinear": a boolean flag to indicate ESDS: true for ESDS, false otherwise.
+ "isRelative": a boolean flag to indicate RRDS: true for RRDS, false otherwise.
+ **Note**: "isIndexed", "isLinear", and "isRelative" are mutually exclusive.
+ "isFixedLengthRecord": a boolean flag: true for a data set with fixed-length records, false otherwise.
+ "avgRecordSize": the average record size in bytes, expressed as a positive integer.
+ "maxRecordSize": the maximum record size in bytes, expressed as an integer. Should be equal to avgRecordSize for fixed-length records.
+ For KSDS only, a mandatory primary key definition (as a nested object):
  + labeled "primaryKey"
  + "offset": the 0-based byte offset of the primary key in the record.
  + "length": the length in bytes of the primary key.
  + "unique": must be set to true for the primary key.
+ For KSDS and ESDS, a collection of alternate keys (as a collection of nested objects):
  + labeled "alternateKeys"
  + For each alternate key:
    + "offset": the 0-based byte offset of the alternate key in the record.
    + "length": the length in bytes of the alternate key.
    + "unique": set to true if the alternate key does not accept duplicate entries, false otherwise.
+ If no alternate keys are present, provide an empty collection:

  ```
  "alternateKeys": []
  ```
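If you write a LISTCAT JSON file by hand, the constraints above (required attributes, mutually exclusive type flags, mandatory primary key for KSDS) can be sanity-checked before loading it. The following is an illustrative sketch, not an official validation tool:

```python
def validate_listcat_entry(entry):
    """Check one LISTCAT JSON data set definition against the format rules.
    Returns a list of problem descriptions (empty if the entry looks valid)."""
    problems = []
    # Required top-level attributes ("catalogId" is optional).
    for field in ("identifier", "isIndexed", "isLinear", "isRelative",
                  "isFixedLengthRecord", "avgRecordSize", "maxRecordSize"):
        if field not in entry:
            problems.append(f"missing attribute: {field}")
    # The three type flags are mutually exclusive.
    flags = (entry.get("isIndexed"), entry.get("isLinear"), entry.get("isRelative"))
    if sum(bool(f) for f in flags) > 1:
        problems.append("isIndexed, isLinear and isRelative are mutually exclusive")
    # Fixed-length records: avg and max sizes should match.
    if entry.get("isFixedLengthRecord") and \
            entry.get("avgRecordSize") != entry.get("maxRecordSize"):
        problems.append("avgRecordSize should equal maxRecordSize for fixed-length records")
    # KSDS requires a unique primary key.
    if entry.get("isIndexed"):
        pk = entry.get("primaryKey")
        if not pk:
            problems.append("KSDS requires a primaryKey object")
        elif not pk.get("unique", False):
            problems.append("primaryKey must be unique")
    # Alternate keys need sane offsets and lengths.
    for key in entry.get("alternateKeys", []):
        if key.get("offset", -1) < 0 or key.get("length", 0) <= 0:
            problems.append("alternate key needs a non-negative offset and a positive length")
    return problems
```

Running such a check against each entry before choosing **Create data sets from LISTCAT** can save a round trip to the BAC when a hand-written file has a structural mistake.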

The following is a sample KSDS LISTCAT JSON file.

```
{
  "catalogId": "default",
  "identifier": "AWS_M2_CARDDEMO_CARDXREF_VSAM_KSDS",
  "isIndexed": true,
  "isLinear": false,
  "isRelative": false,
  "isFixedLengthRecord": true,
  "avgRecordSize": 50,
  "maxRecordSize": 50,
  "primaryKey": {
    "offset": 0,
    "length": 16,
    "unique": true
  },
  "alternateKeys": [
    {
      "offset": 25,
      "length": 11,
      "unique": false
    }
  ]
}
```