

# Execute parallel tests in batch builds
<a name="parallel-test"></a>

You can use AWS CodeBuild to execute parallel tests in batch builds. Parallel test execution is a testing approach where multiple test cases run simultaneously across different environments, machines, or browsers, rather than executing sequentially. This approach can significantly reduce overall test execution time and improve testing efficiency. In CodeBuild, you can split your tests across multiple environments and run them concurrently.

The key advantages of parallel test execution include:

1. **Reduced execution time** - Tests that would take hours sequentially can complete in minutes.

1. **Better resource utilization** - Makes efficient use of available computing resources.

1. **Earlier feedback** - Faster test completion means quicker feedback to developers.

1. **Cost-effective** - Saves both time and computing costs in the long run.

When implementing parallel test execution, two main approaches are commonly considered: separate environments and multithreading. While both methods aim to achieve concurrent test execution, they differ significantly in their implementation and effectiveness. Separate environments create isolated instances where each test suite runs independently, while multithreading executes multiple tests simultaneously within the same process space using different threads.

The key advantages of separate environments over multithreading include:

1. **Isolation** - Each test runs in a completely isolated environment, preventing interference between tests.

1. **No resource conflicts** - No competition for shared resources, which is common in multithreading.

1. **Stability** - Less prone to race conditions and synchronization issues.

1. **Easier debugging** - When tests fail, it's simpler to identify the cause as each environment is independent.

1. **State management** - Avoids the shared-state issues that plague multithreaded tests.

1. **Better scalability** - Can easily add more environments without complexity.

**Topics**
+ [Support in AWS CodeBuild](#parallel-test-support)
+ [Enable parallel test execution in batch builds](parallel-test-enable.md)
+ [Use the `codebuild-tests-run` CLI command](parallel-test-tests-run.md)
+ [Use the `codebuild-glob-search` CLI command](parallel-test-glob-search.md)
+ [About test splitting](parallel-test-splitting.md)
+ [Automatically merge individual build reports](parallel-test-auto-merge.md)
+ [Parallel test execution for various test frameworks sample](sample-parallel-test.md)

## Support in AWS CodeBuild
<a name="parallel-test-support"></a>

AWS CodeBuild provides robust support for parallel test execution through its batch build feature, specifically designed to leverage separate environment execution. This implementation aligns perfectly with the benefits of isolated testing environments.

**Batch build with test distribution**  
CodeBuild's batch build functionality enables the creation of multiple build environments that run simultaneously. Each environment operates as a completely isolated unit, with its own compute resources, runtime environment, and dependencies. Through the batch build configuration, you can specify how many parallel environments you need and how tests should be distributed across them.

**Test sharding CLI**  
CodeBuild includes a built-in test distribution mechanism through its CLI tool, `codebuild-tests-run`, which automatically divides tests into different environments.

**Report aggregation**  
One of the key strengths of CodeBuild's implementation is its ability to handle test result aggregation seamlessly. While tests execute in separate environments, CodeBuild automatically collects and combines the test reports from each environment into a unified test report at the batch build level. This consolidation provides a comprehensive view of test results while maintaining the efficiency benefits of parallel execution.

The following diagram illustrates the complete concept of parallel test execution in AWS CodeBuild.

![\[Concept diagram of parallel test execution.\]](http://docs.aws.amazon.com/codebuild/latest/userguide/images/parallel-test.png)


# Enable parallel test execution in batch builds
<a name="parallel-test-enable"></a>

To run tests in parallel, update the batch build buildspec file to include the `build-fanout` field, and specify the number of parallel builds to split the test suite across in the `parallelism` field, as shown below. The `parallelism` field specifies how many independent executors are set up to execute the test suite.

To run the tests in multiple parallel execution environments, set the `parallelism` field to a value greater than zero. In the example below, `parallelism` is set to five, meaning CodeBuild starts five identical builds, each of which executes a portion of the test suite in parallel.

You can use the [codebuild-tests-run](parallel-test-tests-run.md) CLI command to split and run your tests. Your test files are split up, and a portion of your tests runs in each build. This reduces the overall time taken to run the full test suite. In the following example, the tests are split into five groups, with the split points calculated based on the names of the test files.

```
version: 0.2

batch:
  fast-fail: false 
  build-fanout:
    parallelism: 5
    ignore-failure: false
    
phases:
  install:
    commands:
      - npm install jest-junit --save-dev
  pre_build:
    commands:
      - echo 'prebuild'
  build:
    commands:
      - |
        codebuild-tests-run \
         --test-command 'npx jest --runInBand --coverage' \
         --files-search "codebuild-glob-search '**/__tests__/**/*.test.js'" \
         --sharding-strategy 'equal-distribution'

  post_build:
    commands:
      - codebuild-glob-search '**/*.xml'  
      - echo "Running post-build steps..."
      - echo "Build completed on `date`"

reports:
  test-reports:
    files:
      - '**/junit.xml'               
    base-directory: .
    discard-paths: yes           
    file-format: JUNITXML
```

If reports are configured for a build-fanout build, then the test reports are generated for each build separately, and can be viewed under the **Reports** tab of the corresponding builds in the AWS CodeBuild console.

For more information on how to execute parallel tests in batch, see [Parallel test execution for various test frameworks sample](sample-parallel-test.md).

# Use the `codebuild-tests-run` CLI command
<a name="parallel-test-tests-run"></a>

AWS CodeBuild provides a CLI that takes a test command and a test file search command as input. With these inputs, the CLI splits the tests into the number of shards specified in the `parallelism` field, based on test file names. The assignment of test files to shards is determined by the sharding strategy.

```
codebuild-tests-run \
    --files-search "codebuild-glob-search '**/__tests__/*.js'" \
    --test-command 'npx jest --runInBand --coverage' \
    --sharding-strategy 'equal-distribution'
```

The following table describes the fields for the `codebuild-tests-run` CLI command.


| Field name | Type | Required or optional | Definition | 
| --- | --- | --- | --- | 
|  `test-command`  |  String  |  Required  |  This command is used for running the tests.  | 
|  `files-search`  |  String  |  Required  |  This command gives a list of test files. You can use the AWS CodeBuild provided [codebuild-glob-search](parallel-test-glob-search.md) CLI command or any other file search tool of your choice.  Ensure that the `files-search` command outputs file names, each separated by a new line.   | 
|  `sharding-strategy`  |  Enum  |  Optional  |  Valid values: `equal-distribution` (default) and `stability`. For more information, see [About test splitting](parallel-test-splitting.md).  | 

The `codebuild-tests-run` CLI first identifies the list of test files using the command provided in the `files-search` parameter. It then determines the subset of test files designated for the current shard (environment) using the specified sharding strategy. Finally, this subset of test files is formatted into a space-separated list and appended to the end of the command provided in the `test-command` parameter before being executed.
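As a sketch of this behavior, suppose the current shard is assigned two hypothetical test files. The command that actually runs is the `test-command` value with those files appended (the file names below are illustrative, not part of the original example):

```shell
# Hypothetical shard subset and test command (illustration only)
shard_files='__tests__/a.test.js __tests__/b.test.js'
test_command='npx jest --runInBand --coverage'

# codebuild-tests-run effectively appends the shard's files to the command:
echo "$test_command $shard_files"
# npx jest --runInBand --coverage __tests__/a.test.js __tests__/b.test.js
```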

For test frameworks that don't accept space-separated lists, the `codebuild-tests-run` CLI provides a flexible alternative through the `CODEBUILD_CURRENT_SHARD_FILES` environment variable. This variable contains a newline-separated list of the test file paths designated for the current build shard. By using this environment variable, you can adapt to test frameworks that expect input formats other than space-separated lists, and you can format the test file names as required by your framework. The following is an example of the use of `CODEBUILD_CURRENT_SHARD_FILES` on Linux with the Django framework. Here, `CODEBUILD_CURRENT_SHARD_FILES` is used to get the *dot notation* file paths supported by Django:

```
codebuild-tests-run \
    --files-search "codebuild-glob-search '**/tests/test_*.py'" \
    --test-command 'python3 manage.py test $(echo "$CODEBUILD_CURRENT_SHARD_FILES" | sed -E "s/\//__/g; s/\.py$//; s/__/./g")' \
    --sharding-strategy 'equal-distribution'
```

**Note**  
The `CODEBUILD_CURRENT_SHARD_FILES` environment variable can be used only within the scope of the `codebuild-tests-run` CLI.  
If you use `CODEBUILD_CURRENT_SHARD_FILES` inside `test-command`, enclose it in double quotes, as shown in the above example.
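You can try the path-to-dot-notation conversion from the example above on its own. The following runs the same `sed` pipeline against a hypothetical file path (the path is illustrative):

```shell
# Hypothetical test file path, as it would appear in CODEBUILD_CURRENT_SHARD_FILES
path="app/tests/test_models.py"

# Same pipeline as in the example: replace / with __, strip .py, then replace __ with .
dotted=$(echo "$path" | sed -E "s/\//__/g; s/\.py$//; s/__/./g")
echo "$dotted"   # app.tests.test_models
```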

# Use the `codebuild-glob-search` CLI command
<a name="parallel-test-glob-search"></a>

AWS CodeBuild provides a built-in CLI tool called `codebuild-glob-search` that allows you to search for files in your working directory based on one or more glob patterns. This tool can be particularly useful when you want to run tests on specific files or directories within your project.

## Usage
<a name="parallel-test-glob-search.usage"></a>

The `codebuild-glob-search` CLI has the following usage syntax:

```
codebuild-glob-search <glob_pattern1> [<glob_pattern2> ...]
```
+ `<glob_pattern1>`, `<glob_pattern2>`, etc.: One or more glob patterns to match against the files in your working directory.
+ `*`: Matches any sequence of characters (excluding path separators).
+ `**`: Matches any sequence of characters (including path separators).

**Note**  
Ensure that the glob pattern is enclosed in quotes. To check the results of pattern matching, use the `echo` command.  

```
version: 0.2

phases:
  build:
    commands:
      - echo $(codebuild-glob-search '**/__tests__/*.js')
      - codebuild-glob-search '**/__tests__/*.js' | xargs -n 1 echo
```
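Outside of a CodeBuild build environment, where `codebuild-glob-search` is not available, you can approximate a pattern such as `'**/__tests__/*.js'` with `find`. This is only a rough sketch (`find -path` globs are not identical to CodeBuild's glob semantics), and the directory layout below is illustrative:

```shell
# Create a hypothetical project layout (illustration only)
mkdir -p demo/__tests__ demo/pkg/__tests__
touch demo/__tests__/a.test.js demo/pkg/__tests__/b.test.js

# Approximate '**/__tests__/*.js': matching files only, newline-separated
find demo -type f -path '*/__tests__/*.js' | sort
```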

## Output
<a name="parallel-test-glob-search.output"></a>

The CLI will output a newline-separated list of file paths that match the provided glob patterns. The file paths returned will be relative to the working directory.

If no files are found matching the provided patterns, the CLI will output a message indicating that no files were found.

Note that directories found due to any given pattern will be excluded from the search results.

## Example
<a name="parallel-test-glob-search.example"></a>

If you want to search only for files with a `.js` extension inside `__tests__` directories, you can use the following command with the `codebuild-glob-search` CLI:

```
codebuild-glob-search '**/__tests__/*.js'
```

This command searches for all files with a `.js` extension directly inside any `__tests__` directory, at any depth in the working directory, as denoted by the pattern.

# About test splitting
<a name="parallel-test-splitting"></a>

AWS CodeBuild's test splitting feature allows you to parallelize your test suite execution across multiple compute instances, reducing the overall test run time. This feature is enabled through the batch configuration in your CodeBuild project settings and the `codebuild-tests-run` utility in your buildspec file.

The tests are split based on the specified sharding strategy. CodeBuild provides two sharding strategies, described below:

Equal-distribution  
The `equal-distribution` sharding strategy divides the tests across parallel builds based on the alphabetical order of the test file names. This approach first sorts the test files and then employs a chunk-based method to distribute them, ensuring that similar files are grouped together for testing. It is recommended when dealing with a relatively small set of test files. While this method aims to allocate an approximately equal number of files to each shard, with a maximum difference of one, it does not guarantee stability. When test files are added or removed in subsequent builds, the distribution of existing files may change, potentially causing reassignment across shards.

Stability  
The `stability` sharding strategy employs a consistent hashing algorithm to split tests among shards, ensuring that file distribution remains stable. When new files are added or removed, this approach ensures that the existing file-to-shard assignments remain largely unchanged. For large test suites, it is recommended to use the stability option to evenly distribute the tests across shards. This mechanism aims to provide a near-equal distribution, ensuring that each shard receives a similar number of files, with only minimal variance. While the stability strategy does not guarantee an ideal equal distribution, it offers a near-equal distribution that maintains consistency in file assignments across builds, even as files are added or removed.

To enable test splitting, you need to configure the batch section in your CodeBuild project settings, specifying the desired `parallelism` level and other relevant parameters. Additionally, you'll need to include the `codebuild-tests-run` utility in your buildspec file, along with the appropriate test commands and splitting method.

# Automatically merge individual build reports
<a name="parallel-test-auto-merge"></a>

In fanout batch builds, AWS CodeBuild supports automatic merging of individual build reports into a consolidated batch-level report. This feature provides a comprehensive view of test results and code coverage across all builds within a batch.

## How it works
<a name="parallel-test-auto-merge.how"></a>

When executing `fanout` batch builds, each individual build generates [test reports](test-reporting.md). CodeBuild then automatically consolidates identical reports from different builds into a unified report, which is attached to the batch build. These consolidated reports are readily accessible through the [BatchGetBuildBatches](https://docs.aws.amazon.com/codebuild/latest/APIReference/API_BatchGetBuildBatches.html#CodeBuild-BatchGetBuildBatches-response-buildBatches) API's `reportArns` field, and can also be viewed in the **Reports** tab of the console. This merging capability extends to auto-discovered reports as well.

Consolidated reports are created under [report groups](test-report-group.md) that are either specified in the buildspec or auto-discovered by CodeBuild. You can analyze trends of the merged reports directly under these report groups, providing valuable insights into the overall build performance and quality metrics across historical builds of the same build-batch project.

For each individual build within the batch, CodeBuild automatically creates separate report groups. These follow a specific naming convention, combining the batch build report group name with a suffix of `BuildFanoutShard<shard_number>`, where `shard_number` represents the number of the shard in which the report group is created. This organization allows you to track and analyze trends at both the consolidated and individual build levels, providing flexibility in how you monitor and evaluate your build processes.

The batch-build report follows the same structure as [individual build reports](https://docs.aws.amazon.com/codebuild/latest/APIReference/API_Report.html). The following key fields in the **Report** tab are specific to batch-build reports:

**Batch build report status**  
The status of batch build reports follows specific rules depending on the report type:  
+ Test reports:
  + Succeeded: Status is set to succeeded when all individual build reports have succeeded.
  + Failed: Status is set to failed if any individual build report has failed.
  + Incomplete: Status is marked as incomplete if any individual build report is missing or has an incomplete status.
+ Code coverage reports:
  + Complete: Status is set to complete when all individual build reports are complete.
  + Failed: Status is set to failed if any individual build report has failed.
  + Incomplete: Status is marked as incomplete if any individual build report is missing or has an incomplete status.

**Test summary**  
The merged test report consolidates the following fields from all individual build reports:  
+ duration-in-nano-seconds: Maximum test duration time in nanoseconds among all individual build reports.
+ total: The combined count of all test cases, summing the total number of tests from each build.
+ status-counts: Provides a consolidated view of test statuses such as passed, failed, or skipped, calculated by aggregating the count of each status type across all individual builds.

**Code coverage summary**  
The merged code coverage report combines fields from all individual builds using the following calculations:  
+ branches-covered: Sum of all covered branches from individual reports.
+ branches-missed: Sum of all missed branches from individual reports.
+ branch-coverage-percentage: `(Total covered branches / Total branches) * 100`
+ lines-covered: Sum of all covered lines from individual reports.
+ lines-missed: Sum of all missed lines from individual reports.
+ lines-coverage-percentage: `(Total covered lines / Total lines) * 100`
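For example, with hypothetical per-shard counts, the merged line coverage percentage is computed from the summed counts rather than by averaging the per-shard percentages (the numbers below are illustrative):

```shell
# Hypothetical counts: shard 1 covers 45 lines and misses 5, shard 2 covers 30 and misses 20
lines_covered=$((45 + 30))
lines_missed=$((5 + 20))

# lines-coverage-percentage = (Total covered lines / Total lines) * 100
awk -v c="$lines_covered" -v m="$lines_missed" 'BEGIN { printf "%.1f\n", c / (c + m) * 100 }'
# 75.0
```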

**Execution ID**  
The batch build ARN.

**Test cases**  
The merged report contains a consolidated list of all test cases from individual builds, accessible through both the [DescribeTestCases](https://docs.aws.amazon.com/codebuild/latest/APIReference/API_DescribeTestCases.html) API and the batch build report in the console.

**Code coverages**  
The merged code coverage report provides consolidated line and branch coverage information for each file across all individual builds, accessible through both the [DescribeCodeCoverages](https://docs.aws.amazon.com/codebuild/latest/APIReference/API_DescribeCodeCoverages.html) API and the batch build report in the console. Note: For files covered by multiple test files distributed across different shards, the merged report uses the following selection criteria:  

1. Primary selection is based on the highest line coverage among shards.

1. If line coverage is equal across multiple shards, the shard with the highest branch coverage is selected.

# Parallel test execution for various test frameworks sample
<a name="sample-parallel-test"></a>

You can use the `codebuild-tests-run` CLI command to split and run your tests across parallel execution environments. The following section provides `buildspec.yml` samples for various frameworks, illustrating the usage of the `codebuild-tests-run` command.
+ Most of the examples below use a `parallelism` level of five, meaning that five identical execution environments are created to split your tests across. You can choose a `parallelism` level to suit your project by modifying the `parallelism` value in the `build-fanout` section.
+ Each example below shows configuring your tests to be split by test file name, which is the default. This distributes the tests evenly across the parallel execution environments.

Before you get started, see [Execute parallel tests in batch builds](parallel-test.md) for more information.

For a full list of options when using the `codebuild-tests-run` CLI command, see [Use the `codebuild-tests-run` CLI command](parallel-test-tests-run.md).

**Topics**
+ [Configure parallel tests with Django](sample-parallel-test-django.md)
+ [Configure parallel tests with Elixir](sample-parallel-test-elixir.md)
+ [Configure parallel tests with Go](sample-parallel-test-go.md)
+ [Configure parallel tests with Java (Maven)](sample-parallel-test-java-maven.md)
+ [Configure parallel tests with Javascript (Jest)](sample-parallel-test-javascript.md)
+ [Configure parallel tests with Kotlin](sample-parallel-test-kotlin.md)
+ [Configure parallel tests with PHPUnit](sample-parallel-test-phpunit.md)
+ [Configure parallel tests with Pytest](sample-parallel-test-python.md)
+ [Configure parallel tests with Ruby (Cucumber)](sample-parallel-test-ruby-cucumber.md)
+ [Configure parallel tests with Ruby (RSpec)](sample-parallel-test-ruby.md)

# Configure parallel tests with Django
<a name="sample-parallel-test-django"></a>

The following is a sample `buildspec.yml` that shows parallel test execution with Django on an Ubuntu platform:

```
version: 0.2

batch:
  fast-fail: false
  build-fanout:
    parallelism: 5

phases:
  install:
    commands:
      - echo 'Installing Python dependencies'
      - sudo yum install -y python3 python3-pip 
      - python3 -m ensurepip --upgrade 
      - python3 -m pip install django
  pre_build:
    commands:
      - echo 'Prebuild'
  build:
    commands:
      - echo 'Running Django Tests'
      - |
        codebuild-tests-run \
         --test-command 'python3 manage.py test $(echo "$CODEBUILD_CURRENT_SHARD_FILES" | sed -E "s/\//__/g; s/\.py$//; s/__/./g")' \
         --files-search "codebuild-glob-search '**/tests/*test_*.py'" \
         --sharding-strategy 'equal-distribution'
  post_build:
    commands:
      - echo 'Test execution completed'
```

The above example shows the usage of the environment variable `CODEBUILD_CURRENT_SHARD_FILES`. Here `CODEBUILD_CURRENT_SHARD_FILES` is used to fetch dot notation file paths supported by Django. Use `CODEBUILD_CURRENT_SHARD_FILES` inside double quotes as shown above.

# Configure parallel tests with Elixir
<a name="sample-parallel-test-elixir"></a>

The following is a sample `buildspec.yml` that shows parallel test execution with Elixir on an Ubuntu platform:

```
version: 0.2

batch:
  fast-fail: false
  build-fanout:
    parallelism: 5

phases:
  install:
    commands:
      - echo 'Installing Elixir dependencies'
      - sudo apt update
      - sudo DEBIAN_FRONTEND=noninteractive apt install -y elixir
      - elixir --version
      - mix --version
  pre_build:
    commands:
      - echo 'Prebuild'
  build:
    commands:
      - echo 'Running Elixir Tests'
      - |
        codebuild-tests-run \
         --test-command 'mix test' \
         --files-search "codebuild-glob-search '**/test/**/*_test.exs'" \
         --sharding-strategy 'equal-distribution'
  post_build:
    commands:
      - echo "Test execution completed"
```

# Configure parallel tests with Go
<a name="sample-parallel-test-go"></a>

The following is a sample `buildspec.yml` that shows parallel test execution with Go on a Linux platform:

```
version: 0.2

batch:
  fast-fail: false
  build-fanout:
    parallelism: 5
    ignore-failure: false

phases:
  install:
    commands:
      - echo 'Fetching Go version'
      - go version
  pre_build:
    commands:
      - echo 'prebuild'
  build:
    commands:
      - echo 'Running go Tests'
      - go mod init calculator
      - cd calc
      - |
        codebuild-tests-run \
         --test-command "go test -v calculator.go" \
         --files-search "codebuild-glob-search '**/*test.go'"
  post_build:
    commands:
      - echo "Test execution completed"
```

In the above example, the `calculator.go` file contains simple mathematical functions to test, and all test files, along with `calculator.go`, are inside the `calc` folder.

# Configure parallel tests with Java (Maven)
<a name="sample-parallel-test-java-maven"></a>

The following is a sample `buildspec.yml` that shows parallel test execution with Java on a Linux platform:

```
version: 0.2

batch:
  fast-fail: false 
  build-fanout:
    parallelism: 5
    ignore-failure: false
    
phases:
  pre_build:
    commands:
      - echo 'prebuild'
  build:
    commands:
      - echo "Running mvn test"
      - |
        codebuild-tests-run \
          --test-command 'mvn test -Dtest=$(echo "$CODEBUILD_CURRENT_SHARD_FILES" | sed "s|src/test/java/||g; s/\.java//g; s|/|.|g; s/ /,/g" | tr "\n" "," | sed "s/,$//")' \
          --files-search "codebuild-glob-search '**/test/**/*.java'"
         
  post_build:
    commands:
      - echo "Running post-build steps..."
      - echo "Test execution completed"
```

In the given example, the environment variable `CODEBUILD_CURRENT_SHARD_FILES` contains test files in the current shard, separated by newlines. These files are converted into a comma-separated list of class names in the format accepted by the `-Dtest` parameter for Maven.
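You can try the class-name conversion from the `test-command` above on its own. The following runs the same pipeline against two hypothetical test files (the file names are illustrative):

```shell
# Hypothetical newline-separated shard files, as CODEBUILD_CURRENT_SHARD_FILES would provide
files="src/test/java/com/example/FooTest.java
src/test/java/com/example/BarTest.java"

# Same pipeline as in the example: strip the source prefix and .java, turn / into ., join with commas
classes=$(echo "$files" \
  | sed "s|src/test/java/||g; s/\.java//g; s|/|.|g; s/ /,/g" \
  | tr "\n" "," | sed "s/,$//")
echo "$classes"   # com.example.FooTest,com.example.BarTest
```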

# Configure parallel tests with Javascript (Jest)
<a name="sample-parallel-test-javascript"></a>

The following is a sample `buildspec.yml` that shows parallel test execution with JavaScript on an Ubuntu platform:

```
version: 0.2

batch:
  fast-fail: true
  build-fanout:
    parallelism: 5
    ignore-failure: false

phases:
  install:
    commands:
      - echo 'Installing Node.js dependencies'
      - apt-get update
      - apt-get install -y nodejs
      - npm install
      - npm install --save-dev jest-junit
  pre_build:
    commands:
      - echo 'prebuild'
  build:
    commands:
      - echo 'Running JavaScript Tests'
      - |
         codebuild-tests-run \
          --test-command "npx jest" \
          --files-search "codebuild-glob-search '**/test/**/*.test.js'" \
          --sharding-strategy 'stability'
  post_build:
    commands:
      - echo 'Test execution completed'
```

# Configure parallel tests with Kotlin
<a name="sample-parallel-test-kotlin"></a>

The following is a sample `buildspec.yml` that shows parallel test execution with Kotlin on a Linux platform:

```
version: 0.2

batch:
  fast-fail: false
  build-fanout:
    parallelism: 2
    ignore-failure: false

phases:
  install:
    runtime-versions:
      java: corretto11 
    commands:
      - echo 'Installing dependencies'
      - KOTLIN_VERSION="1.8.20" # Replace with your desired version
      - curl -o kotlin-compiler.zip -L "https://github.com/JetBrains/kotlin/releases/download/v${KOTLIN_VERSION}/kotlin-compiler-${KOTLIN_VERSION}.zip"
      - unzip kotlin-compiler.zip -d /usr/local
      - export PATH=$PATH:/usr/local/kotlinc/bin
      - kotlin -version
      - curl -O https://repo1.maven.org/maven2/org/junit/platform/junit-platform-console-standalone/1.8.2/junit-platform-console-standalone-1.8.2.jar
  pre_build:
    commands:
      - echo 'prebuild'
  build:
    commands:
      - echo 'Running Kotlin Tests'
      - |
        codebuild-tests-run \
          --test-command 'kotlinc src/main/kotlin/*.kt $(echo "$CODEBUILD_CURRENT_SHARD_FILES" | tr "\n" " ") -d classes -cp junit-platform-console-standalone-1.8.2.jar' \
          --files-search "codebuild-glob-search 'src/test/kotlin/*.kt'"
      - |
        codebuild-tests-run \
          --test-command '
            java -jar junit-platform-console-standalone-1.8.2.jar --class-path classes \
              $(for file in $CODEBUILD_CURRENT_SHARD_FILES; do
                 class_name=$(basename "$file" .kt)
                 echo "--select-class $class_name"
               done)
          ' \
          --files-search "codebuild-glob-search 'src/test/kotlin/*.kt'"
  post_build:
    commands:
      - echo "Test execution completed"
```

In the above example, the `codebuild-tests-run` CLI is used twice. During the first run, `kotlinc` compiles the files. The `CODEBUILD_CURRENT_SHARD_FILES` variable retrieves the test files assigned to the current shard, which are then converted into a space-separated list. In the second run, JUnit executes the tests. Again, `CODEBUILD_CURRENT_SHARD_FILES` fetches the test files assigned to the current shard, but this time they are converted into class names.
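The class-selection loop from the second `test-command` can also be tried on its own. With two hypothetical shard files, it produces one `--select-class` option per test class:

```shell
# Hypothetical newline-separated shard files (illustration only)
files="src/test/kotlin/CalculatorTest.kt
src/test/kotlin/ParserTest.kt"

# Same loop as in the example: strip the directory and the .kt extension
for file in $files; do
  class_name=$(basename "$file" .kt)
  echo "--select-class $class_name"
done
```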

# Configure parallel tests with PHPUnit
<a name="sample-parallel-test-phpunit"></a>

The following is a sample `buildspec.yml` that shows parallel test execution with PHPUnit on a Linux platform:

```
version: 0.2
 
batch:
   fast-fail: false
   build-fanout:
     parallelism: 5
     ignore-failure: false
 
phases:
   install:
     commands:
       - echo 'Install dependencies'
       - composer require --dev phpunit/phpunit
   pre_build:
     commands:
       - echo 'prebuild'
   build:
     commands:
       - echo 'Running phpunit Tests'
       - composer dump-autoload
       - | 
         codebuild-tests-run \
          --test-command "./vendor/bin/phpunit --debug" \
          --files-search "codebuild-glob-search '**/tests/*Test.php'"
   post_build:
       commands:
         - echo 'Test execution completed'
```

# Configure parallel tests with Pytest
<a name="sample-parallel-test-python"></a>

The following is a sample `buildspec.yml` that shows parallel test execution with Pytest on an Ubuntu platform:

```
version: 0.2

batch:
  fast-fail: false
  build-fanout:
    parallelism: 5
    ignore-failure: false

phases:
  install:
    commands:
      - echo 'Installing Python dependencies'
      - apt-get update
      - apt-get install -y python3 python3-pip
      - pip3 install --upgrade pip
      - pip3 install pytest
  build:
    commands:
      - echo 'Running Python Tests'
      - |
         codebuild-tests-run \
          --test-command 'python -m pytest' \
          --files-search "codebuild-glob-search 'tests/test_*.py'" \
          --sharding-strategy 'equal-distribution'
  post_build:
    commands:
      - echo "Test execution completed"
```

The following is a sample `buildspec.yml` that shows parallel test execution with Pytest on a Windows platform:

```
version: 0.2

batch:
  fast-fail: false
  build-fanout:
    parallelism: 5
    ignore-failure: false

phases:
  install:
    commands:
      - echo 'Installing Python dependencies'
      - pip install pytest
  pre_build:
    commands:
      - echo 'prebuild'
  build:
    commands:
      - echo 'Running pytest'
      - |
        & codebuild-tests-run `
         --test-command 'pytest @("$env:CODEBUILD_CURRENT_SHARD_FILES" -split \"`r?`n\")'  `
         --files-search "codebuild-glob-search '**/test_*.py' '**/*_test.py'" `
         --sharding-strategy 'equal-distribution' 
  post_build:
    commands:
      - echo "Test execution completed"
```

In the above example, the `CODEBUILD_CURRENT_SHARD_FILES` environment variable is used to fetch the test files assigned to the current shard, which are passed as an array to the `pytest` command.

# Configure parallel tests with Ruby (Cucumber)
<a name="sample-parallel-test-ruby-cucumber"></a>

The following is a sample `buildspec.yml` that shows parallel test execution with Cucumber on a Linux platform:

```
version: 0.2

batch:
  fast-fail: false
  build-fanout:
    parallelism: 5
    ignore-failure: false

phases:
  install:
    commands:
      - echo 'Installing Ruby dependencies'
      - gem install bundler
      - bundle install
  pre_build:
    commands:
      - echo 'prebuild'
  build:
    commands:
      - echo 'Running Cucumber Tests'
      - cucumber --init
      - |
        codebuild-tests-run \
         --test-command "cucumber" \
         --files-search "codebuild-glob-search '**/*.feature'"
  post_build:
    commands:
      - echo "Test execution completed"
```

# Configure parallel tests with Ruby (RSpec)
<a name="sample-parallel-test-ruby"></a>

The following is a sample `buildspec.yml` that shows parallel test execution with RSpec on an Ubuntu platform:

```
version: 0.2

batch:
  fast-fail: false
  build-fanout:
    parallelism: 5
    ignore-failure: false

phases:
  install:
    commands:
      - echo 'Installing Ruby dependencies'
      - apt-get update
      - apt-get install -y ruby ruby-dev build-essential
      - gem install bundler
      - bundle install
  build:
    commands:
      - echo 'Running Ruby Tests'
      - |
         codebuild-tests-run \
          --test-command 'bundle exec rspec' \
          --files-search "codebuild-glob-search 'spec/**/*_spec.rb'" \
          --sharding-strategy 'equal-distribution'
  post_build:
    commands:
      - echo "Test execution completed"
```