

Amazon CodeCatalyst is no longer open to new customers. Existing customers can continue to use the service as normal. For more information, see [How to migrate from CodeCatalyst](migration.md).

# Testing with workflows
<a name="test-workflow-actions"></a>

In CodeCatalyst, you can run tests as part of different workflow actions, such as build and test. These workflow actions can all generate quality reports. A *test action* is a workflow action that produces test, code coverage, software composition analysis, and static analysis reports. These reports are displayed in the CodeCatalyst console.

**Topics**
+ [Quality report types](#test-reporting)
+ [Adding the test action](test-add-action.md)
+ [Viewing the results of a test action](test-view-results.md)
+ [Skipping failed tests in an action](test.error-handling.md)
+ [Integrating with universal-test-runner](test.universal-test-runner.md)
+ [Configuring quality reports in an action](test-config-action.md)
+ [Best practices for testing](test-best-practices.md)
+ [Supported SARIF properties](test.sarif.md)

## Quality report types
<a name="test-reporting"></a>

The Amazon CodeCatalyst test action supports the following types of quality reports. For an example on how to format these reports in your YAML, see [Quality reports YAML example](test-config-action.md#test.success-criteria-example).

**Topics**
+ [Test reports](#test-reports)
+ [Code coverage reports](#test-code-coverage-reports)
+ [Software composition analysis reports](#test-sca-reports)
+ [Static analysis reports](#test-static-analysis-reports)

### Test reports
<a name="test-reports"></a>

In CodeCatalyst, you can configure unit tests, integration tests, and system tests that run during builds. Then CodeCatalyst can create reports that contain the results of your tests.

You can use a test report to help troubleshoot problems with your tests. If you have many test reports from multiple builds, you can use your test reports to view failure rates to help you optimize your builds.

You can use the following test report file formats:
+ Cucumber JSON (.json)
+ JUnit XML (.xml)
+ NUnit XML (.xml)
+ NUnit3 XML (.xml)
+ TestNG XML (.xml)
+ Visual Studio TRX (.trx, .xml)
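
As an illustration, a minimal JUnit XML file that CodeCatalyst could discover as a test report might look like the following. The suite, class, and test names are placeholders.

```
<?xml version="1.0" encoding="UTF-8"?>
<testsuite name="example-suite" tests="2" failures="1" time="0.12">
  <testcase classname="example.MathTest" name="testAdd" time="0.05"/>
  <testcase classname="example.MathTest" name="testDivide" time="0.07">
    <failure message="expected 2 but was 3"/>
  </testcase>
</testsuite>
```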

### Code coverage reports
<a name="test-code-coverage-reports"></a>

In CodeCatalyst, you can generate code coverage reports for your tests. CodeCatalyst provides the following code coverage metrics:

Line coverage  
Measures how many statements your tests cover. A statement is a single instruction, not including comments.  
`line coverage = (total lines covered)/(total number of lines)`

Branch coverage  
Measures how many branches your tests cover out of every possible branch of a control structure such as an `if` or `case` statement.  
`branch coverage = (total branches covered)/(total number of branches)`
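
As an illustration, the two formulas can be computed directly from a report's raw counts. This is a minimal sketch for clarity only, not CodeCatalyst's implementation.

```
# Sketch of the two coverage formulas above, expressed as percentages.
def line_coverage(lines_covered: int, total_lines: int) -> float:
    # line coverage = (total lines covered) / (total number of lines)
    return 100.0 * lines_covered / total_lines

def branch_coverage(branches_covered: int, total_branches: int) -> float:
    # branch coverage = (total branches covered) / (total number of branches)
    return 100.0 * branches_covered / total_branches

# A file with 40 of 50 lines executed and 6 of 8 branches taken:
print(line_coverage(40, 50))    # 80.0
print(branch_coverage(6, 8))    # 75.0
```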

The following code coverage report file formats are supported:
+ JaCoCo XML (.xml)
+ SimpleCov JSON (generated by [simplecov](https://github.com/simplecov-ruby/simplecov), not [simplecov-json](https://github.com/vicentllongo/simplecov-json), .json)
+ Clover XML (version 3, .xml)
+ Cobertura XML (.xml)
+ LCOV (.info)
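
For example, a minimal LCOV `.info` file for one hypothetical source file, with one of two instrumented lines executed, looks like the following:

```
TN:
SF:src/example.js
DA:1,1
DA:2,0
LF:2
LH:1
end_of_record
```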

### Software composition analysis reports
<a name="test-sca-reports"></a>

In CodeCatalyst, you can use software composition analysis (SCA) tools to analyze components of your application and check for known security vulnerabilities. You can discover and parse SARIF reports that detail vulnerabilities with varying severities and ways to fix them. Valid severity values, from most to least severe, are: `CRITICAL`, `HIGH`, `MEDIUM`, `LOW`, `INFORMATIONAL`.

The following SCA report file formats are supported:
+ SARIF (.sarif, .json)

### Static analysis reports
<a name="test-static-analysis-reports"></a>

You can use static analysis (SA) reports to identify source-level code defects. In CodeCatalyst, you can generate SA reports to help resolve issues in your code before you deploy it. These issues include bugs, security vulnerabilities, quality issues, and other vulnerabilities. Valid severity values, from most to least severe, are: `CRITICAL`, `HIGH`, `MEDIUM`, `LOW`, and `INFORMATIONAL`.

CodeCatalyst provides the following SA metrics:

Bugs  
Identifies a number of possible bugs found in your source code. These bugs can include issues regarding memory safety. The following is an example of a bug.  

```
// The for loop will inadvertently index into array x out-of-bounds
int x[64];
for (int n = 0; n <= 64; n++) {
  x[n] = 0;
}
```

Security vulnerabilities  
Identifies a number of possible security vulnerabilities found in your source code. These security vulnerabilities can include issues such as storing your secret tokens in plaintext.

Quality issues  
Identifies a number of possible quality issues found in your source code. These quality issues can include issues regarding style conventions. The following is an example of a quality issue.  

```
// The function name doesn't adhere to the style convention of camelCase
int SUBTRACT(int x, int y) {
  return x - y;
}
```

Other vulnerabilities  
Identifies a number of possible other vulnerabilities found in your source code.

CodeCatalyst supports the following SA report file formats:
+ PyLint (.py)
+ ESLint (.js, .jsx, .ts, .tsx)
+ SARIF (.sarif, .json)

# Adding the test action
<a name="test-add-action"></a>

Use the following procedure to add a test action to your CodeCatalyst workflow. 

------
#### [ Visual ]

**To add a test action using the visual editor**

1. Open the CodeCatalyst console at [https://codecatalyst.aws/](https://codecatalyst.aws/).

1. In the navigation pane, choose **CI/CD**, and then choose **Workflows**.

1. Choose the name of your workflow. You can filter by the source repository or branch name where the workflow is defined, or filter by workflow name or status.

1. Choose **Edit**.

1. Choose **Visual**.

1. Choose **Actions**.

1. In **Actions**, choose **Test**. 

1. In the **Inputs** and **Configuration** tabs, complete the fields according to your needs. For a description of each field, see the [Build and test actions YAML](build-action-ref.md). This reference provides detailed information on each field (and corresponding YAML property value) as it appears in both the YAML and visual editors.

1. (Optional) Choose **Validate** to validate the workflow's YAML code before committing.

1. Choose **Commit**, enter a commit message, and choose **Commit** again.

------
#### [ YAML ]

**To add a test action using the YAML editor**

1. Open the CodeCatalyst console at [https://codecatalyst.aws/](https://codecatalyst.aws/).

1. In the navigation pane, choose **CI/CD**, and then choose **Workflows**.

1. Choose the name of your workflow. You can filter by the source repository or branch name where the workflow is defined, or filter by workflow name or status.

1. Choose **Edit**.

1. Choose **YAML**.

1. Choose **Actions**.

1. In **Actions**, choose **Test**.

1. Modify the properties in the YAML code according to your needs. An explanation of each available property is provided in the [Build and test actions YAML](build-action-ref.md).

1. (Optional) Choose **Validate** to validate the workflow's YAML code before committing.

1. Choose **Commit**, enter a commit message, and choose **Commit** again.

------

## Test action definition
<a name="test-add-action-definition"></a>

The test action is defined as a set of YAML properties inside your workflow definition file. For information about these properties, see [Build and test actions YAML](build-action-ref.md) in the [Workflow YAML definition](workflow-reference.md).

# Viewing the results of a test action
<a name="test-view-results"></a>

Use the following instructions to view the results of a test action, including the generated logs, reports, and variables.

**To view the results of a test action**

1. In the navigation pane, choose **CI/CD**, and then choose **Workflows**.

1. Choose the name of your workflow. You can filter by the source repository or branch name where the workflow is defined, or filter by workflow name or status.

1. In the workflow diagram, choose the name of your test action, for example, **Test**.

1. To view the logs generated by an action, choose **Logs**. The logs for the various action phases are displayed. You can expand or collapse the logs as needed.

1. To view the test reports produced by the test action, choose **Reports**, or in the navigation pane, choose **Reports**. For more information, see [Quality report types](test-workflow-actions.md#test-reporting).

1. To view the configuration used for the test action, choose **Configuration**. For more information, see [Adding the test action](test-add-action.md).

1. To view the variables used by the test action, choose **Variables**. For more information, see [Using variables in workflows](workflows-working-with-variables.md).

# Skipping failed tests in an action
<a name="test.error-handling"></a>

If your action has more than one test command, you might want to allow subsequent test commands in the action to run even if a previous command fails. For example, in the following commands, you may want `test2` to always run, even if `test1` fails.

```
Steps:
- Run: npm install
- Run: npm run test1
- Run: npm run test2
```

Normally, when a step returns an error, Amazon CodeCatalyst stops the workflow action and marks it as failed. You can allow the action steps to continue to run by redirecting the error output to `null`. You can do this by adding `2>/dev/null` to the command. With this modification, the preceding example would look like the following.

```
Steps:
- Run: npm install
- Run: npm run test1 2>/dev/null
- Run: npm run test2
```

In the second code snippet, the status of the `npm install` command will still be honored, but any error returned by the `npm run test1` command will be ignored. As a result, the `npm run test2` command runs, and you can view both test reports regardless of whether an error occurs.

# Integrating with universal-test-runner
<a name="test.universal-test-runner"></a>

Test actions integrate with the open-source command line tool `universal-test-runner`. `universal-test-runner` uses the [Test Execution Protocol](https://github.com/aws/universal-test-runner/blob/main/protocol/README.md) to run your tests for any language in a given framework. `universal-test-runner` supports the following frameworks:
+ [Gradle](https://gradle.org/)
+ [Jest](https://jestjs.io/)
+ [Maven](https://maven.apache.org/)
+ [pytest](https://pytest.org)
+ [.NET](https://learn.microsoft.com/en-us/dotnet/core/tools/)

`universal-test-runner` is installed only on the curated images for test actions. If you configure a test action to use a custom image from Docker Hub or Amazon ECR, you must install `universal-test-runner` manually to enable advanced testing features. To do so, install Node.js (14 or higher) on the image, and then install `universal-test-runner` through `npm` using the shell command `- Run: npm install -g @aws/universal-test-runner`. For more information about installing Node.js in your container through shell commands, see [Installing and Updating Node Version Manager](https://github.com/nvm-sh/nvm#install--update-script).
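
For example, on a custom image that already includes Node.js, the installation and a test run might be sketched in the action's steps as follows. The `pytest` framework here is only an example; substitute any supported framework.

```
Configuration:
  Steps:
    - Run: npm install -g @aws/universal-test-runner
    - Run: run-tests pytest
```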

For more information about `universal-test-runner`, see [What is universal-test-runner?](https://github.com/aws/universal-test-runner#-what-is-universal-test-runner)

------
#### [ Visual ]

**To use universal-test-runner in the visual editor**

1. Open the CodeCatalyst console at [https://codecatalyst.aws/](https://codecatalyst.aws/).

1. In the navigation pane, choose **CI/CD**, and then choose **Workflows**.

1. Choose the name of your workflow.

1. Choose **Edit**.

1. Choose **Visual**.

1. Choose **Actions**.

1. In **Actions**, choose **Test**. 

1. On the **Configuration** tab, complete the **Shell commands** field by updating the sample code with your choice of the supported frameworks. For example, to use a supported framework, you would use a `Run` command similar to the following.

   ```
   - Run: run-tests <framework>
   ```

   If the framework you want is not supported, consider contributing a custom adapter or runner. For a description of the **Shell commands** field, see [Steps](build-action-ref.md#build.configuration.steps).

1. (Optional) Choose **Validate** to validate the workflow's YAML code before committing.

1. Choose **Commit**, enter a commit message, and choose **Commit** again.

------
#### [ YAML ]

**To use universal-test-runner in the YAML editor**

1. Open the CodeCatalyst console at [https://codecatalyst.aws/](https://codecatalyst.aws/).

1. In the navigation pane, choose **CI/CD**, and then choose **Workflows**.

1. Choose the name of your workflow.

1. Choose **Edit**.

1. Choose **YAML**.

1. Choose **Actions**.

1. In **Actions**, choose **Test**.

1. Modify the YAML code according to your needs. For example, to use a supported framework, you would use a `Run` command similar to the following.

   ```
   Configuration:
     Steps:
       - Run: run-tests <framework>
   ```

   If the framework you want is not supported, consider contributing a custom adapter or runner. For a description of the **Steps** property, see [Steps](build-action-ref.md#build.configuration.steps).

1. (Optional) Choose **Validate** to validate the workflow's YAML code before committing.

1. Choose **Commit**, enter a commit message, and choose **Commit** again.

------

# Configuring quality reports in an action
<a name="test-config-action"></a>

This section describes how to configure a quality report in an action.

**Topics**
+ [Auto-discovery and manual reports](#test.auto-discovery)
+ [Configuring success criteria for reports](#test.success-criteria)
+ [Quality reports YAML example](#test.success-criteria-example)

## Auto-discovery and manual reports
<a name="test.auto-discovery"></a>

When auto-discovery is enabled, CodeCatalyst searches all inputs passed into the action, and all files generated by the action itself, looking for test, code coverage, software composition analysis (SCA), and static analysis (SA) reports. You can view and manipulate each of these reports in CodeCatalyst.

You can also manually configure which reports are generated. You can specify the type of report you'd like to generate as well as the file format. For more information, see [Quality report types](test-workflow-actions.md#test-reporting).

## Configuring success criteria for reports
<a name="test.success-criteria"></a>

You can set the values that determine the success criteria for a test, code coverage, software composition analysis (SCA), or static analysis (SA) report.

Success criteria are thresholds that determine whether a report passes or fails. CodeCatalyst first generates your report, which can be a test, code coverage, SCA, or SA report, and then applies the success criteria to the generated reports. It then shows whether the success criteria were met, and to what extent. If any report does not meet the specified success criteria, the CodeCatalyst action that specified the success criteria fails.

For example, when you set the success criteria for your SCA report, the valid vulnerability severities, from most to least severe, are: `CRITICAL`, `HIGH`, `MEDIUM`, `LOW`, and `INFORMATIONAL`. If you set the criteria to scan for one vulnerability at `HIGH` severity, the report fails if it contains at least one vulnerability at `HIGH` severity or above; for example, a single vulnerability at `CRITICAL` severity would also fail the report.

If you do not specify success criteria, then:
+ The CodeCatalyst report that is generated based on your raw reports will not display success criteria.
+ Success criteria will not be used to determine whether the associated workflow action passes or fails.
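
The threshold logic can be sketched as follows. This is an illustrative approximation only, assuming a criterion fails once the tally of findings at or above the configured minimum severity exceeds the configured maximum number; CodeCatalyst's actual evaluation is internal to the service.

```
# Illustrative sketch of success-criteria evaluation for findings.
# Assumption: a criterion fails once the tally exceeds the configured maximum.
SEVERITIES = ["INFORMATIONAL", "LOW", "MEDIUM", "HIGH", "CRITICAL"]

def criterion_passes(finding_severities, max_number, min_severity):
    # Tally findings at or above the configured minimum severity.
    floor = SEVERITIES.index(min_severity)
    tally = sum(1 for s in finding_severities if SEVERITIES.index(s) >= floor)
    return tally <= max_number

print(criterion_passes(["LOW", "HIGH"], 1, "HIGH"))        # True: one finding at HIGH
print(criterion_passes(["CRITICAL", "HIGH"], 1, "HIGH"))   # False: two at or above HIGH
```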

------
#### [ Visual ]

**To configure success criteria**

1. In the navigation pane, choose **CI/CD**, and then choose **Workflows**.

1. Choose a workflow containing an action that generates a report. This is the report for which you want to apply success criteria. You can filter by the source repository or branch name where the workflow is defined, or filter by workflow name or status.

1. Choose **Edit**.

1. Choose **Visual**.

1. In the workflow diagram, choose the action that you have configured to generate CodeCatalyst reports.

1. Choose the **Outputs** tab.

1. Under **Auto-discover reports** or under **Manually configure reports**, choose **Success criteria**.

   Success criteria appear. Depending on your previous selections, you may see any or all of these options:

   **Pass rate**

   Specify the percentage of tests in a test report that must pass for the associated CodeCatalyst report to be marked as passed. Valid values include decimal numbers. For example: `50`, `60.5`. The pass rate criteria are applied only to test reports. For more information about test reports, see [Test reports](test-workflow-actions.md#test-reports).

   **Line coverage**

   Specify the percentage of lines in a code coverage report that must be covered for the associated CodeCatalyst report to be marked as passed. Valid values include decimal numbers. For example: `50`, `60.5`. Line coverage criteria are applied only to code coverage reports. For more information about code coverage reports, see [Code coverage reports](test-workflow-actions.md#test-code-coverage-reports).

   **Branch coverage**

   Specify the percentage of branches in a code coverage report that must be covered for the associated CodeCatalyst report to be marked as passed. Valid values include decimal numbers. For example: `50`, `60.5`. Branch coverage criteria are applied only to code coverage reports. For more information about code coverage reports, see [Code coverage reports](test-workflow-actions.md#test-code-coverage-reports).

   **Vulnerabilities (SCA)**

   Specify the maximum number and severity of vulnerabilities permitted in the SCA report for the associated CodeCatalyst report to be marked as passed. To specify vulnerabilities, you must specify:
   + The minimum severity of the vulnerabilities you want to include in the count. Valid values, from most to least severe, are: `CRITICAL`, `HIGH`, `MEDIUM`, `LOW`, `INFORMATIONAL`.

     For example, if you choose `HIGH`, then `HIGH` and `CRITICAL` vulnerabilities will be tallied.
   + The maximum number of vulnerabilities of the specified severity that you want to permit. Exceeding this number causes the CodeCatalyst report to be marked as failed. Valid values are whole numbers.

   Vulnerabilities criteria are applied only to SCA reports. For more information about SCA reports, see [Software composition analysis reports](test-workflow-actions.md#test-sca-reports).

   **Bugs**

   Specify the maximum number and severity of bugs permitted in the SA report for the associated CodeCatalyst report to be marked as passed. To specify bugs, you must specify:
   + The minimum severity of the bugs you want to include in the count. Valid values, from most to least severe, are: `CRITICAL`, `HIGH`, `MEDIUM`, `LOW`, `INFORMATIONAL`.

     For example, if you choose `HIGH`, then `HIGH` and `CRITICAL` bugs will be tallied.
   + The maximum number of bugs of the specified severity that you want to permit. Exceeding this number causes the CodeCatalyst report to be marked as failed. Valid values are whole numbers.

   Bugs criteria are applied only to PyLint and ESLint SA reports. For more information about SA reports, see [Static analysis reports](test-workflow-actions.md#test-static-analysis-reports).

   **Security vulnerabilities**

   Specify the maximum number and severity of security vulnerabilities permitted in the SA report for the associated CodeCatalyst report to be marked as passed. To specify security vulnerabilities, you must specify:
   + The minimum severity of the security vulnerabilities you want to include in the count. Valid values, from most to least severe, are: `CRITICAL`, `HIGH`, `MEDIUM`, `LOW`, `INFORMATIONAL`.

     For example, if you choose `HIGH`, then `HIGH` and `CRITICAL` security vulnerabilities will be tallied.
   + The maximum number of security vulnerabilities of the specified severity that you want to permit. Exceeding this number causes the CodeCatalyst report to be marked as failed. Valid values are whole numbers.

   Security vulnerabilities criteria are applied only to PyLint and ESLint SA reports. For more information about SA reports, see [Static analysis reports](test-workflow-actions.md#test-static-analysis-reports).

   **Quality issues**

   Specify the maximum number and severity of quality issues permitted in the SA report for the associated CodeCatalyst report to be marked as passed. To specify quality issues, you must specify:
   + The minimum severity of the quality issues you want to include in the count. Valid values, from most to least severe, are: `CRITICAL`, `HIGH`, `MEDIUM`, `LOW`, `INFORMATIONAL`.

     For example, if you choose `HIGH`, then `HIGH` and `CRITICAL` quality issues will be tallied.
   + The maximum number of quality issues of the specified severity that you want to permit. Exceeding this number causes the CodeCatalyst report to be marked as failed. Valid values are whole numbers.

   Quality issues criteria are applied only to PyLint and ESLint SA reports. For more information about SA reports, see [Static analysis reports](test-workflow-actions.md#test-static-analysis-reports).

1. Choose **Commit**.

1. Run your workflow to have CodeCatalyst apply success criteria to your raw reports, and regenerate the associated CodeCatalyst reports with success criteria information included. For more information, see [Starting a workflow run manually](workflows-manually-start.md).

------
#### [ YAML ]

**To configure success criteria**

1. In the navigation pane, choose **CI/CD**, and then choose **Workflows**.

1. Choose a workflow containing an action that generates a report. This is the report for which you want to apply success criteria. You can filter by the source repository or branch name where the workflow is defined, or filter by workflow name or status.

1. Choose **Edit**.

1. Choose **YAML**.

1. In the workflow diagram, choose the action that you have configured to generate CodeCatalyst reports.

1. In the details pane, choose the **Outputs** tab.

1. In the action, in the `AutoDiscoverReports` section or in the `Reports` section, add a `SuccessCriteria` property, along with the `PassRate`, `LineCoverage`, `BranchCoverage`, `Vulnerabilities`, `StaticAnalysisBug`, `StaticAnalysisSecurity`, and `StaticAnalysisQuality` properties that you need.

   For an explanation of each of these properties, consult the [Build and test actions YAML](build-action-ref.md).

1. Choose **Commit**.

1. Run your workflow to have CodeCatalyst apply success criteria to your raw reports, and regenerate the associated CodeCatalyst reports with the success criteria information included. For more information on starting a workflow, see [Starting a workflow run manually](workflows-manually-start.md).

------

## Quality reports YAML example
<a name="test.success-criteria-example"></a>

 The following example shows how to manually configure four reports: a test report, a code coverage report, a software composition analysis report, and a static analysis report.

```
Reports:
  MyTestReport:
    Format: JUNITXML
    IncludePaths:
      - "*.xml"
    ExcludePaths:
      - report1.xml
    SuccessCriteria:
      PassRate: 90
  MyCoverageReport:
    Format: CLOVERXML
    IncludePaths:
      - output/coverage/jest/clover.xml
    SuccessCriteria:
      LineCoverage: 75
      BranchCoverage: 75
  MySCAReport:
    Format: SARIFSCA
    IncludePaths:
      - output/sca/reports.xml
    SuccessCriteria:
      Vulnerabilities:
        Number: 5
        Severity: HIGH
  MySAReport:
    Format: ESLINTJSON
    IncludePaths:
      - output/static/eslint.xml
    SuccessCriteria:
      StaticAnalysisBug:
        Number: 10
        Severity: MEDIUM
      StaticAnalysisSecurity:
        Number: 5
        Severity: CRITICAL
      StaticAnalysisQuality:
        Number: 0
        Severity: INFORMATIONAL
```

# Best practices for testing
<a name="test-best-practices"></a>

When using the testing features provided by CodeCatalyst, we recommend that you follow these best practices.

**Topics**
+ [Auto-discovery](#test.best-auto-discovery)
+ [Success criteria](#test.best-success-criteria)
+ [Include/exclude paths](#test.best-include-exclude)

## Auto-discovery
<a name="test.best-auto-discovery"></a>

When configuring actions in CodeCatalyst, auto-discovery lets you automatically discover outputs of various tools, such as JUnit test reports, and generate relevant CodeCatalyst reports from them. Auto-discovery helps ensure that reports continue to be generated even if names or paths to discovered outputs change. When new files are added, CodeCatalyst automatically discovers them and produces relevant reports. However, if you use auto-discovery, it is important to factor in some of the following aspects of this feature:
+ When you activate auto-discovery in your action, all automatically discovered reports of the same type will share the same success criteria. For example, a shared criteria such as minimum pass rate would apply to all auto-discovered test reports. If you need different criteria for reports of the same type, you must explicitly configure each of these reports.
+ Auto-discovery can also find reports that are produced by your dependencies and, if success criteria are configured, might fail the action on these reports. This issue can be addressed by updating the exclude path configuration.
+ Auto-discovery is not guaranteed to produce the same list of reports every time, because it scans the action at runtime. In the case where you want a particular report to always be produced, you should configure reports explicitly. For example, if tests were to stop running as part of your build, the test framework would not produce any outputs and, as a result, no test report would be produced and the action might succeed. If you want the success of your action to depend on that particular test, then you must explicitly configure that report.

**Tip**  
When getting started on a new or existing project, use auto-discovery for the entire project directory (include `**/*`). This invokes report generation across all files in your project, including those within subdirectories.

For more information, see [Configuring quality reports in an action](test-config-action.md).

## Success criteria
<a name="test.best-success-criteria"></a>

You can enforce quality thresholds on your reports by configuring success criteria. For example, if two code coverage reports were auto-discovered, one with a line coverage of 80% and the other with a line coverage of 60%, you have the following options:
+ Set the auto-discovery success criteria for line coverage at 80%. This would cause the first report to pass and the second report to fail, which would result in the overall action failing. To unblock the workflow, add new tests to your project until the line coverage for the second report exceeds 80%.
+ Set the auto-discovery success criteria for line coverage at 60%. This would cause both reports to pass, which would result in the action succeeding. You could then work on increasing the code coverage in the second report. However, with this approach, you cannot guarantee that the coverage in the first report is not dropping below 80%.
+ Explicitly configure one or both of the reports by using the visual editor or adding an explicit YAML section and path for each report. This would allow you to configure separate success criteria and custom names for each report. However, with this approach, the action could fail if the report paths change.
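
For example, the third option might be sketched as follows, with hypothetical report names and paths, so that each report gets its own threshold:

```
Reports:
  CoreCoverage:
    Format: CLOVERXML
    IncludePaths:
      - core/coverage/clover.xml
    SuccessCriteria:
      LineCoverage: 80
  ExperimentalCoverage:
    Format: CLOVERXML
    IncludePaths:
      - experimental/coverage/clover.xml
    SuccessCriteria:
      LineCoverage: 60
```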

For more information, see [Configuring success criteria for reports](test-config-action.md#test.success-criteria).

## Include/exclude paths
<a name="test.best-include-exclude"></a>

When reviewing action results, you can adjust the list of reports that are generated by CodeCatalyst by configuring `IncludePaths` and `ExcludePaths`.
+ Use `IncludePaths` to specify the files and file paths that you want CodeCatalyst to include when searching for reports. For example, if you specify `"/test/report/*"`, CodeCatalyst searches the entire build image used by the action for the `/test/report/` directory. When it finds that directory, CodeCatalyst looks for reports in that directory.
**Note**  
For manually configured reports, `IncludePaths` must be a glob pattern that matches a single file.
+ Use `ExcludePaths` to specify the files and file paths you want CodeCatalyst to exclude when searching for reports. For example, if you specify `"/test/reports/**/*"`, CodeCatalyst will not search for files in the `/test/reports/` directory. To ignore all files in a directory, use the `**/*` glob pattern.

The following are examples of possible glob patterns.


| Pattern | Description | 
| --- | --- | 
|  `*.*`  |  Matches all object names in the current directory that contain a dot  | 
|  `*.xml`  |  Matches all object names in the current directory ending with `.xml`  | 
|  `*.{xml,txt}`  |  Matches all object names in the current directory ending with `.xml` or `.txt`  | 
|  `**/*.xml`  |  Matches object names across all directories ending with `.xml`  | 
|  `testFolder`  |  Matches an object called `testFolder`, treating it as a file  | 
|  `testFolder/*`  |  Matches objects in one level of the subfolder from `testFolder`, such as `testFolder/file.xml`  | 
|  `testFolder/*/*`  |  Matches objects in two levels of the subfolder from `testFolder`, such as `testFolder/reportsFolder/file.xml`  | 
|  `testFolder/**`  |  Matches subfolder `testFolder` as well as files below `testFolder`, such as `testFolder/file.xml` and `testFolder/otherFolder/file.xml`  | 

CodeCatalyst interprets the glob patterns as follows:
+ The slash (`/`) character separates directories in file paths.
+ The asterisk (`*`) character matches zero or more characters of a name component without crossing folder boundaries.
+ A double asterisk (`**`) matches zero or more characters of a name component across all directories.
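
Python's `pathlib` follows similar glob rules and can be used to preview which files a pattern matches. This is an approximation for illustration only; CodeCatalyst's matcher may differ in edge cases.

```
import pathlib
import tempfile

# Build a small tree to exercise the patterns from the table above.
root = pathlib.Path(tempfile.mkdtemp())
(root / "testFolder" / "reportsFolder").mkdir(parents=True)
(root / "top.xml").write_text("<r/>")
(root / "testFolder" / "file.xml").write_text("<r/>")
(root / "testFolder" / "reportsFolder" / "file.xml").write_text("<r/>")

def matches(pattern):
    return sorted(str(p.relative_to(root)) for p in root.glob(pattern))

print(matches("*.xml"))          # current directory only: ['top.xml']
print(matches("**/*.xml"))       # crosses directory boundaries
print(matches("testFolder/*"))   # one level below testFolder (files and folders)
```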

**Note**  
`ExcludePaths` takes precedence over `IncludePaths`. If both `IncludePaths` and `ExcludePaths` include the same folder, that folder is not scanned for reports.

# Supported SARIF properties
<a name="test.sarif"></a>

Static Analysis Results Interchange Format (SARIF) is an output file format that is supported in software composition analysis (SCA) and static analysis reports in Amazon CodeCatalyst. The following example shows how to manually configure SARIF in a static analysis report:

```
Reports:
  MySAReport:
    Format: SARIFSA
    IncludePaths:
      - output/sa_report.json
    SuccessCriteria:
      StaticAnalysisFinding:
        Number: 25
        Severity: HIGH
```

CodeCatalyst supports the following SARIF properties, which you can use to optimize how the analysis results appear in your reports.

**Topics**
+ [`sarifLog` object](#test.sarif.sarifLog)
+ [`run` object](#test.sarif.run)
+ [`toolComponent` object](#test.sarif.toolComponent)
+ [`reportingDescriptor` object](#test.sarif.reportingDescriptor)
+ [`result` object](#test.sarif.result)
+ [`location` object](#test.sarif.location)
+ [`physicalLocation` object](#test.sarif.physicalLocation)
+ [`logicalLocation` object](#test.sarif.logicalLocation)
+ [`fix` object](#test.sarif.fix)

## `sarifLog` object
<a name="test.sarif.sarifLog"></a>


| Name | Required | Description | 
| --- | --- | --- | 
|  `$schema`  |  Yes  |  The URI of the SARIF JSON schema for version [2.1.0](https://json.schemastore.org/sarif-2.1.0.json).  | 
|  `version`  |  Yes  |  CodeCatalyst only supports SARIF version 2.1.0.  | 
|  `runs[]`  |  Yes  |  A SARIF file contains an array of one or more runs, each of which represents a single run of the analysis tool.  | 
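
A minimal SARIF 2.1.0 file exercising the required properties above, plus a single run, could look like the following. The tool name, rule, and finding are placeholders.

```
{
  "$schema": "https://json.schemastore.org/sarif-2.1.0.json",
  "version": "2.1.0",
  "runs": [
    {
      "tool": {
        "driver": {
          "name": "ExampleAnalyzer",
          "rules": [
            {
              "id": "EX0001",
              "shortDescription": { "text": "Example rule" }
            }
          ]
        }
      },
      "results": [
        {
          "ruleId": "EX0001",
          "level": "warning",
          "message": { "text": "Example finding" }
        }
      ]
    }
  ]
}
```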

## `run` object
<a name="test.sarif.run"></a>


| Name | Required | Description | 
| --- | --- | --- | 
|  `tool.driver`  |  Yes  |  A `toolComponent` object that describes the analysis tool.  | 
|  `tool.name`  |  No  |  A property that indicates the name of the tool used to perform analysis.  | 
|  `results[]`  |  Yes  |  The results of the analysis tool that are displayed on CodeCatalyst.  | 

## `toolComponent` object
<a name="test.sarif.toolComponent"></a>


| Name | Required | Description | 
| --- | --- | --- | 
|  `name`  |  Yes  |  The name of the analysis tool.  | 
|  `properties.artifactScanned`  |  No  |  The total number of artifacts analyzed by the tool.  | 
|  `rules[]`  |  Yes  |  An array of `reportingDescriptor` objects that represent rules. Based on these rules, the analysis tool finds problems in the code that is analyzed.  | 

## `reportingDescriptor` object
<a name="test.sarif.reportingDescriptor"></a>


| Name | Required | Description | 
| --- | --- | --- | 
|  `id`  |  Yes  |  The unique identifier for the rule that is used to reference a finding. Maximum length: 1,024 characters  | 
|  `name`  |  No  |  The display name of the rule. Maximum length: 1,024 characters  | 
|  `shortDescription.text`  |  No  |  A shortened description of the rule. Maximum length: 3,000 characters  | 
|  `fullDescription.text`  |  No  |  A complete description of the rule. Maximum length: 3,000 characters  | 
|  `helpUri`  |  No  |  A string that can be localized to contain the absolute URI of the primary documentation for the rule. Maximum length: 3,000 characters  | 
|  `properties.unscore`  |  No  |  A flag that indicates whether the scan finding has been scored.  | 
|  `properties.score.severity`  |  No  |  A fixed set of strings that specify the severity level of the finding. Maximum length: 1,024 characters  | 
|  `properties.cvssv3_baseSeverity`  |  No  |  A qualitative severity rating of [Common Vulnerability Scoring System v3.1](https://www.first.org/cvss/v3.1/specification-document).  | 
|  `properties.cvssv3_baseScore`  |  No  |  A CVSS v3 Base Score ranging from [0.0 - 10.0](https://nvd.nist.gov/vuln-metrics/cvss).  | 
|  `properties.cvssv2_severity`  |  No  |  If CVSS v3 values are not available, CodeCatalyst searches for CVSS v2 values.  | 
|  `properties.cvssv2_score`  |  No  |  A CVSS v2 Base Score ranging from [0.0 - 10.0](https://nvd.nist.gov/vuln-metrics/cvss).  | 
|  `properties.severity`  |  No  |  A fixed set of strings that specify the severity level of the finding. Maximum length: 1,024 characters  | 
|  `defaultConfiguration.level`  |  No  |  The default severity of a rule.  | 

## `result` object
<a name="test.sarif.result"></a>


| Name | Required | Description | 
| --- | --- | --- | 
|  `ruleId`  |  Yes  |  The unique identifier for the rule that is used to reference a finding. Maximum length: 1,024 characters  | 
|  `ruleIndex`  |  Yes  |  The index of the associated rule in the tool component `rules[]`.  | 
|  `message.text`  |  Yes  |  A message that describes the result and displays the message for each finding. Maximum length: 3,000 characters  | 
|  `rank`  |  No  |  A value between 0.0 and 100.0, inclusive, that represents the priority or importance of the result. On this scale, 0.0 is the lowest priority and 100.0 is the highest priority.  | 
|  `level`  |  No  |  The severity of the result. Maximum length: 1,024 characters  | 
|  `properties.unscore`  |  No  |  A flag that indicates whether the scan finding has been scored.  | 
|  `properties.score.severity`  |  No  |  A fixed set of strings that specify the severity level of the finding. Maximum length: 1,024 characters  | 
|  `properties.cvssv3_baseSeverity`  |  No  |  A qualitative severity rating of [Common Vulnerability Scoring System v3.1](https://www.first.org/cvss/v3.1/specification-document).  | 
|  `properties.cvssv3_baseScore`  |  No  |  A CVSS v3 Base Score ranging from [0.0 - 10.0](https://nvd.nist.gov/vuln-metrics/cvss).  | 
|  `properties.cvssv2_severity`  |  No  |  If CVSS v3 values are not available, CodeCatalyst searches for CVSS v2 values.  | 
|  `properties.cvssv2_score`  |  No  |  A CVSS v2 Base Score ranging from [0.0 - 10.0](https://nvd.nist.gov/vuln-metrics/cvss).  | 
|  `properties.severity`  |  No  |  A fixed set of strings that specify the severity level of the finding. Maximum length: 1,024 characters  | 
|  `locations[]`  |  Yes  |  The set of locations where the result was detected. Only one location should be included unless the problem can only be corrected by making a change at every specified location. CodeCatalyst uses the first value in the location array to annotate the result. Maximum number of `location` objects: 10  | 
|  `relatedLocations[]`  |  No  |  A list of additional locations referenced in the finding. Maximum number of `location` objects: 50  | 
|  `fixes[]`  |  No  |  An array of `fix` objects that represent the recommendation provided by the scanning tool. CodeCatalyst uses the first recommendation in the `fixes` array.  | 
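
For illustration only (the rule ID, message text, file URI, and region values are hypothetical), a `result` object that annotates a single location could be shaped as follows:

```
{
  "ruleId": "EXAMPLE-RULE-001",
  "ruleIndex": 0,
  "level": "error",
  "message": {
    "text": "Example finding message."
  },
  "locations": [
    {
      "physicalLocation": {
        "artifactLocation": {
          "uri": "src/example.py"
        },
        "region": {
          "startLine": 10,
          "startColumn": 1,
          "endLine": 10,
          "endColumn": 20
        }
      }
    }
  ]
}
```

Because CodeCatalyst annotates the result using the first entry in `locations[]`, put the most relevant location first.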

## `location` object
<a name="test.sarif.location"></a>


| Name | Required | Description | 
| --- | --- | --- | 
|  `physicalLocation`  |  Yes  |  Identifies the artifact and region.  | 
|  `logicalLocations[]`  |  No  |  The set of locations described by name without reference to the artifact.  | 

## `physicalLocation` object
<a name="test.sarif.physicalLocation"></a>


| Name | Required | Description | 
| --- | --- | --- | 
|  `artifactLocation.uri`  |  Yes  |  The URI indicating the location of an artifact, usually a file either in the repository or generated during a build.  | 
|  `fileLocation.uri`  |  No  |  The fallback URI indicating the location of the file. This is used if `artifactLocation.uri` is empty.  | 
|  `region.startLine`  |  Yes  |  The line number of the first character in the region.  | 
|  `region.startColumn`  |  Yes  |  The column number of the first character in the region.  | 
|  `region.endLine`  |  Yes  |  The line number of the last character in the region.  | 
|  `region.endColumn`  |  Yes  |  The column number of the last character in the region.  | 

## `logicalLocation` object
<a name="test.sarif.logicalLocation"></a>


| Name | Required | Description | 
| --- | --- | --- | 
|  `fullyQualifiedName`  |  No  |  Additional information that describes the location of the result. Maximum length: 1,024 characters  | 

## `fix` object
<a name="test.sarif.fix"></a>


| Name | Required | Description | 
| --- | --- | --- | 
|  `description.text`  |  No  |  A message that displays a recommendation for each finding. Maximum length: 3,000 characters  | 
|  `artifactChanges.[0].artifactLocation.uri`  |  No  |  The URI indicating the location of the artifact that needs to be updated.  | 
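
As a hypothetical sketch (the description text and file URI are placeholders), a `fixes[]` array inside a `result` object could look like this:

```
"fixes": [
  {
    "description": {
      "text": "Example recommendation for remediating the finding."
    },
    "artifactChanges": [
      {
        "artifactLocation": {
          "uri": "src/example.py"
        }
      }
    ]
  }
]
```

CodeCatalyst displays only the first recommendation in the array, so list the preferred fix first.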