

Amazon CodeCatalyst is no longer open to new customers. Existing customers can continue to use the service as normal. For more information, see [How to migrate from CodeCatalyst](migration.md).

# Configuring compute and runtime images
<a name="workflows-working-compute"></a>

In a CodeCatalyst workflow, you can specify the compute and runtime environment image that CodeCatalyst uses to run workflow actions.

*Compute* refers to the computing engine (the CPU, memory, and operating system) managed and maintained by CodeCatalyst to run workflow actions.

**Note**  
If compute is defined as a property of the workflow, then it can't be defined as a property of any action in that workflow. Similarly, if compute is defined as a property of any action, it can't be defined in the workflow.
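
For example, compute defined at the workflow level applies to all actions in that workflow, and might look like the following sketch (the workflow and fleet names are placeholders):

```
Name: MyWorkflow
SchemaVersion: "1.0"
Compute: # Applies to every action in this workflow.
  Type: EC2
  Fleet: Linux.x86-64.Large
Actions:
  ...
```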

A *runtime environment image* is a Docker container within which CodeCatalyst runs workflow actions. The Docker container runs on top of your chosen compute platform, and includes an operating system and extra tools that a workflow action might need, such as the AWS CLI, Node.js, and tar.

**Topics**
+ [Compute types](#compute.types)
+ [Compute fleets](#compute.fleets)
+ [On-demand fleet properties](#compute.on-demand)
+ [Provisioned fleet properties](#compute.provisioned-fleets)
+ [Creating a provisioned fleet](projects-create-compute-resource.md)
+ [Editing a provisioned fleet](edit-compute-resource.md)
+ [Deleting a provisioned fleet](delete-compute-resource.md)
+ [Assigning a fleet or compute to an action](workflows-assign-compute-resource.md)
+ [Sharing compute across actions](compute-sharing.md)
+ [Specifying runtime environment images](build-images.md)

## Compute types
<a name="compute.types"></a>

CodeCatalyst offers the following compute types:
+ Amazon EC2
+ AWS Lambda

Amazon EC2 offers optimized flexibility during action runs, while Lambda offers optimized action start-up speeds. Lambda supports faster workflow action runs because of its lower start-up latency. Lambda lets you run basic workflows that build, test, and deploy serverless applications with common runtimes, including Node.js, Python, Java, .NET, and Go. However, Lambda doesn't support some use cases; if any of the following limitations affect you, use the Amazon EC2 compute type instead:
+ Lambda doesn't support runtime environment images from a specified registry.
+ Lambda doesn't support tools that require root permissions. For tools such as `yum` or `rpm`, use the Amazon EC2 compute type or other tools that don't require root permissions.
+ Lambda doesn't support Docker builds or runs. The following actions that use Docker images are not supported: Deploy AWS CloudFormation stack, Deploy to Amazon ECS, Amazon S3 publish, AWS CDK bootstrap, AWS CDK deploy, AWS Lambda invoke, and GitHub Actions. Docker-based GitHub Actions running within the CodeCatalyst GitHub Actions action are also not supported with Lambda compute. You can use alternatives that don't require root permissions, such as Podman.
+ Lambda doesn't support writing to files outside `/tmp`. When configuring your workflow actions, reconfigure your tools to install or write to `/tmp`. For example, if a build action installs `npm`, configure it to install to `/tmp`.
+ Lambda doesn't support runtimes longer than 15 minutes.
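
If none of these limitations affect your use case, you can select Lambda compute in the workflow definition YAML. The following is a minimal sketch; the `Lambda` value for `Type` and the `npm` prefix workaround for the `/tmp` write restriction follow the limitations described above, and the build commands are placeholders:

```
Name: MyLambdaWorkflow
SchemaVersion: "1.0"
Compute:
  Type: Lambda # Lambda compute for faster action start-up.
Actions:
  Build:
    Identifier: aws/build@v1
    Inputs:
      Sources:
        - WorkflowSource
    Configuration:
      Steps:
        # Lambda compute can only write under /tmp, so point npm there.
        - Run: npm config set prefix /tmp/npm
        - Run: npm install
        - Run: npm test
```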

## Compute fleets
<a name="compute.fleets"></a>

CodeCatalyst offers the following compute fleets:
+ On-demand fleets
+ Provisioned fleets

With on-demand fleets, when a workflow action starts, the workflow provisions the resources it needs, and the machines are destroyed when the action finishes. You pay only for the minutes that your actions run. On-demand fleets are fully managed and include automatic scaling to handle spikes in demand.

CodeCatalyst also offers provisioned fleets, which contain machines powered by Amazon EC2 that are maintained by CodeCatalyst. With provisioned fleets, you configure a set of dedicated machines to run your workflow actions. These machines remain idle, ready to process actions immediately. With provisioned fleets, your machines are always running and incur costs as long as they're provisioned.

To create, update, or delete a fleet, you must have the **Space administrator** role or the **Project administrator** role.

## On-demand fleet properties
<a name="compute.on-demand"></a>

CodeCatalyst provides the following on-demand fleets:

[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/codecatalyst/latest/userguide/workflows-working-compute.html)

**Note**  
The specifications for on-demand fleets will vary depending on your billing tier. For more information, see [Pricing](https://codecatalyst.aws/explore/pricing).

If no fleet is selected, CodeCatalyst uses `Linux.x86-64.Large`.
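
To use a different on-demand fleet, set the `Fleet` property in the workflow definition YAML. A minimal sketch:

```
Compute:
  Type: EC2
  Fleet: Linux.x86-64.2XLarge # A larger on-demand fleet instead of the default Linux.x86-64.Large.
```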

## Provisioned fleet properties
<a name="compute.provisioned-fleets"></a>

A provisioned fleet contains the following properties: 

**Operating system**  
The operating system. The following operating systems are available:  
+ Amazon Linux 2
+ Windows Server 2022
**Note**  
Windows fleets are only supported in the build action. Other actions do not currently support Windows.

**Architecture**  
The processor architecture. The following architectures are available:  
+ x86\_64
+ Arm64

**Machine type**  
The machine type for each instance. The following machine types are available:      
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/codecatalyst/latest/userguide/workflows-working-compute.html)

**Capacity**  
The initial number of machines allocated to the fleet, which defines the number of actions that can run in parallel.

**Scaling mode**  
Defines the behavior when the number of actions exceeds the fleet capacity.    
**Provision additional capacity on demand**  
Additional machines are provisioned on demand, automatically scaling up in response to new action runs and scaling back down to the base capacity as actions finish. This can incur additional costs because you pay by the minute for each running machine.  
**Wait until additional fleet capacity is available**  
Action runs are placed in a queue until a machine is available. This limits additional costs because no additional machines are allocated.

# Creating a provisioned fleet
<a name="projects-create-compute-resource"></a>

Use the following instructions to create a provisioned fleet.

**Note**  
Provisioned fleets are deactivated after 2 weeks of inactivity. If a deactivated fleet is used again, it is reactivated automatically, but the reactivation may cause some latency.

**To create a provisioned fleet**

1. In the navigation pane, choose **CI/CD**, and then choose **Compute**.

1. Choose **Create provisioned fleet**.

1. In the **Provisioned fleet name** text field, enter a name for your fleet.

1. From the **Operating system** drop-down menu, choose the operating system.

1. From the **Machine type** drop-down menu, choose the machine type for your machine.

1. In the **Capacity** text field, enter the number of machines to allocate to the fleet.

1. From the **Scaling mode** drop-down menu, choose the desired overflow behavior. For more information about these fields, see [Provisioned fleet properties](workflows-working-compute.md#compute.provisioned-fleets).

1. Choose **Create**.

After creating the provisioned fleet, you are ready to assign it to an action. For more information, see [Assigning a fleet or compute to an action](workflows-assign-compute-resource.md).

# Editing a provisioned fleet
<a name="edit-compute-resource"></a>

Use the following instructions to edit a provisioned fleet.

**Note**  
Provisioned fleets are deactivated after 2 weeks of inactivity. If a deactivated fleet is used again, it is reactivated automatically, but the reactivation may cause some latency.

**To edit a provisioned fleet**

1. In the navigation pane, choose **CI/CD**, and then choose **Compute**.

1. In the **Provisioned fleet** list, choose the fleet you want to edit.

1. Choose **Edit**.

1. In the **Capacity** text field, enter the number of machines to allocate to the fleet.

1. From the **Scaling mode** drop-down menu, choose the desired overflow behavior. For more information about these fields, see [Provisioned fleet properties](workflows-working-compute.md#compute.provisioned-fleets).

1. Choose **Save**.

# Deleting a provisioned fleet
<a name="delete-compute-resource"></a>

Use the following instructions to delete a provisioned fleet.

**To delete a provisioned fleet**
**Warning**  
Before deleting a provisioned fleet, remove it from all actions by deleting the `Fleet` property from the action's YAML code. Any action that continues to reference a provisioned fleet after it is deleted will fail the next time the action runs.

1. In the navigation pane, choose **CI/CD**, and then choose **Compute**.

1. In the **Provisioned fleet** list, choose the fleet you want to delete.

1. Choose **Delete**.

1. Enter **delete** to confirm the deletion.

1. Choose **Delete**.

# Assigning a fleet or compute to an action
<a name="workflows-assign-compute-resource"></a>

By default, workflow actions use the `Linux.x86-64.Large` on-demand fleet with an Amazon EC2 compute type. To use a provisioned fleet instead, or to use a different on-demand fleet, such as `Linux.x86-64.2XLarge`, use the following instructions.

------
#### [ Visual ]

**Before you begin**
+ If you want to assign a provisioned fleet, you must first create the provisioned fleet. For more information, see [Creating a provisioned fleet](projects-create-compute-resource.md).

**To assign a provisioned fleet or different fleet type to an action**

1. Open the CodeCatalyst console at [https://codecatalyst.aws/](https://codecatalyst.aws/).

1. Choose your project.

1. In the navigation pane, choose **CI/CD**, and then choose **Workflows**.

1. Choose the name of your workflow. You can filter by the source repository or branch name where the workflow is defined, or filter by workflow name or status.

1. Choose **Edit**.

1. Choose **Visual**.

1. In the workflow diagram, choose the action that you want to assign your provisioned fleet or new fleet type to.

1. Choose the **Configuration** tab.

1. In **Compute fleet**, do the following:

   Specify the machine or fleet that will run your workflow or workflow actions. With on-demand fleets, when an action starts, the workflow provisions the resources it needs, and the machines are destroyed when the action finishes. Examples of on-demand fleets: `Linux.x86-64.Large`, `Linux.x86-64.XLarge`. For more information about on-demand fleets, see [On-demand fleet properties](workflows-working-compute.md#compute.on-demand).

   With provisioned fleets, you configure a set of dedicated machines to run your workflow actions. These machines remain idle, ready to process actions immediately. For more information about provisioned fleets, see [Provisioned fleet properties](workflows-working-compute.md#compute.provisioned-fleets).

   If `Fleet` is omitted, the default is `Linux.x86-64.Large`.

1. (Optional) Choose **Validate** to validate the workflow's YAML code before committing.

1. Choose **Commit**, enter a commit message, and choose **Commit** again.

------
#### [ YAML ]

**Before you begin**
+ If you want to assign a provisioned fleet, you must first create the provisioned fleet. For more information, see [Creating a provisioned fleet](projects-create-compute-resource.md).

**To assign a provisioned fleet or different fleet type to an action**

1. Open the CodeCatalyst console at [https://codecatalyst.aws/](https://codecatalyst.aws/).

1. Choose your project.

1. In the navigation pane, choose **CI/CD**, and then choose **Workflows**.

1. Choose the name of your workflow. You can filter by the source repository or branch name where the workflow is defined, or filter by workflow name or status.

1. Choose **Edit**.

1. Choose **YAML**.

1. Find the action that you want to assign your provisioned fleet or new fleet type to.

1. In the action, add a `Compute` property and set `Fleet` to the name of your fleet or on-demand fleet type. For more information, see the description of the `Fleet` property in the [Build and test actions YAML](build-action-ref.md) for your action.
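
   For example, the action's YAML might look like the following sketch, where `MyProvisionedFleet` is a placeholder for the name of your provisioned fleet:

   ```
   Actions:
     Build:
       Identifier: aws/build@v1
       Compute:
         Fleet: MyProvisionedFleet # Or an on-demand fleet type such as Linux.x86-64.2XLarge.
       Inputs:
         Sources:
           - WorkflowSource
   ```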

1. (Optional) Choose **Validate** to validate the workflow's YAML code before committing.

1. Choose **Commit**, enter a commit message, and choose **Commit** again.

------

# Sharing compute across actions
<a name="compute-sharing"></a>

By default, actions in a workflow run on separate instances in a [fleet](workflows-working-compute.md#compute.fleets). This behavior provides actions with isolation and predictability on the state of inputs. The default behavior requires explicit configuration to share context such as files and variables between actions.

Compute sharing is a capability that allows you to run all the actions in a workflow on the same instance. Using compute sharing can provide faster workflow runtimes because less time is spent provisioning instances. You can also share files (artifacts) between actions without additional workflow configuration.

When a workflow is run using compute sharing, an instance in the default or specified fleet is reserved for the duration of all actions in that workflow. When the workflow run completes, the instance reservation is released.

**Topics**
+ [Running multiple actions on shared compute](#how-to-compute-share)
+ [Considerations for compute sharing](#compare-compute-sharing)
+ [Turning on compute sharing](#compute-sharing-steps)
+ [Examples](#compute-sharing-examples)

## Running multiple actions on shared compute
<a name="how-to-compute-share"></a>

You can use the `Compute` attribute in the workflow definition YAML, at the workflow level, to specify both the fleet and the compute sharing properties of actions. You can also configure compute properties using the visual editor in CodeCatalyst. To use compute sharing with a particular fleet, specify the name of an existing fleet, set the compute type to **EC2**, and turn on compute sharing.

**Note**  
Compute sharing is only supported if the compute type is set to **EC2**, and it's not supported for the Windows Server 2022 operating system. For more information about compute fleets, compute types, and properties, see [Configuring compute and runtime images](workflows-working-compute.md).

**Note**  
If you're on the Free tier and you specify the `Linux.x86-64.XLarge` or `Linux.x86-64.2XLarge` fleet manually in the workflow definition YAML, the action will still run on the default fleet (`Linux.x86-64.Large`). For more information about compute availability and pricing, see the [table for the tiers options](https://codecatalyst.aws/explore/pricing). 

When compute sharing is turned on, the folder containing the workflow source is automatically copied across actions. You don't need to configure output artifacts and reference them as input artifacts throughout a workflow definition (YAML file). As a workflow author, you need to wire up environment variables using inputs and outputs, just as you would without using compute sharing. If you want to share folders between actions outside the workflow source, consider file caching. For more information, see [Sharing artifacts and files between actions](workflows-working-artifacts.md) and [Caching files between workflow runs](workflows-caching.md).

The source repository where your workflow definition file resides is identified by the label `WorkflowSource`. While using compute sharing, the workflow source is downloaded in the first action that references it and automatically made available for subsequent actions in the workflow run to use. Any changes made to the folder containing the workflow source by an action, such as adding, modifying, or removing files, are also visible in the subsequent actions in the workflow. You can reference files that reside in the workflow source folder in any of your workflow actions, just as you can without using compute sharing. For more information, see [Referencing source repository files](workflows-sources-reference-files.md).
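
For example, with compute sharing turned on, a file that one action writes into the workflow source folder is visible to the next action in the sequence. The following two-action sketch illustrates this; the action names and commands are placeholders:

```
Actions:
  GenerateVersion:
    Identifier: aws/build@v1
    Inputs:
      Sources:
        - WorkflowSource # Downloads the workflow source onto the shared instance.
    Configuration:
      Steps:
        - Run: echo "1.2.3" > version.txt # Written into the workflow source folder.
  UseVersion:
    Identifier: aws/build@v1
    DependsOn:
      - GenerateVersion
    Configuration:
      Steps:
        - Run: cat version.txt # Visible here because the workflow source folder is shared.
```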

**Note**  
Workflows that use compute sharing must specify a strict sequence of actions, so parallel actions can't be set. While output artifacts can be configured on any action in the sequence, input artifacts aren't supported.

## Considerations for compute sharing
<a name="compare-compute-sharing"></a>

You can run workflows with compute sharing in order to accelerate workflow runs and share context between actions in a workflow that use the same instance. Consider the following to determine whether using compute sharing is appropriate for your scenario:


|   | Compute sharing | Without compute sharing | 
| --- | --- | --- | 
|  Compute type  |  Amazon EC2  |  Amazon EC2, AWS Lambda  | 
|  Instance provisioning  |  Actions run on same instance  |  Actions run on separate instances  | 
|  Operating system  |  Amazon Linux 2  |  Amazon Linux 2, Windows Server 2022 (build action only)  | 
|  Referencing files  |  `$CATALYST_SOURCE_DIR_WorkflowSource`, `/sources/WorkflowSource/`  |  `$CATALYST_SOURCE_DIR_WorkflowSource`, `/sources/WorkflowSource/`  | 
|  Workflow structure  |  Actions can only run sequentially  |  Actions can run in parallel  | 
|  Accessing data across workflow actions  |  Access cached workflow source (`WorkflowSource`)  |  Access outputs of shared artifacts (requires additional configuration)  | 

## Turning on compute sharing
<a name="compute-sharing-steps"></a>

Use the following instructions to turn on compute sharing for a workflow.

------
#### [ Visual ]

**To turn on compute sharing using the visual editor**

1. Open the CodeCatalyst console at [https://codecatalyst.aws/](https://codecatalyst.aws/).

1. Choose your project.

1. In the navigation pane, choose **CI/CD**, and then choose **Workflows**.

1. Choose the name of your workflow.

1. Choose **Edit**.

1. Choose **Visual**.

1. Choose **Workflow properties**.

1. From the **Compute type** drop-down menu, choose **EC2**.

1. (Optional) From the **Compute fleet - optional** drop-down menu, choose a fleet you want to use to run workflow actions. You can choose an on-demand fleet, or create and choose a provisioned fleet. For more information, see [Creating a provisioned fleet](projects-create-compute-resource.md) and [Assigning a fleet or compute to an action](workflows-assign-compute-resource.md).

1. Switch the toggle on to turn on compute sharing and have actions in the workflow run on the same instance.

1. (Optional) Choose the run mode for the workflow. For more information, see [Configuring the queuing behavior of runs](workflows-configure-runs.md).

1. Choose **Commit**, enter a commit message, and choose **Commit** again.

------
#### [ YAML ]

**To turn on compute sharing using the YAML editor**

1. Open the CodeCatalyst console at [https://codecatalyst.aws/](https://codecatalyst.aws/).

1. Choose your project.

1. In the navigation pane, choose **CI/CD**, and then choose **Workflows**.

1. Choose the name of your workflow.

1. Choose **Edit**.

1. Choose **YAML**.

1. Turn on compute sharing by setting the `SharedInstance` field to `TRUE` and `Type` to `EC2`. Set `Fleet` to the compute fleet you want to use to run workflow actions. You can choose an on-demand fleet, or create and choose a provisioned fleet. For more information, see [Creating a provisioned fleet](projects-create-compute-resource.md) and [Assigning a fleet or compute to an action](workflows-assign-compute-resource.md).

   In a workflow YAML, add code similar to the following:

   ```
     Name: MyWorkflow
     SchemaVersion: "1.0"
     Compute: # Define compute configuration.
       Type: EC2
       Fleet: MyFleet # Optionally, choose an on-demand or provisioned fleet.
       SharedInstance: true # Turn on compute sharing. Default is False.
     Actions:
       BuildFirst:
         Identifier: aws/build@v1
         Inputs:
           Sources:
             - WorkflowSource
         Configuration:
           Steps:
             - Run: ...
             ...
   ```

1. (Optional) Choose **Validate** to validate the workflow's YAML code before committing.

1. Choose **Commit**, enter a commit message, and choose **Commit** again.

------

## Examples
<a name="compute-sharing-examples"></a>

**Topics**
+ [Example: Amazon S3 Publish](#compute-share-s3)

### Example: Amazon S3 Publish
<a name="compute-share-s3"></a>

The following workflow examples show how to perform the Amazon S3 Publish action in two ways: first using input artifacts, and then using compute sharing. With compute sharing, the input artifacts aren't needed because you can access the cached `WorkflowSource`. Additionally, the output artifact in the Build action is no longer needed. The S3 Publish action is configured to use the explicit `DependsOn` property to maintain sequential actions; the Build action must run successfully for the S3 Publish action to run.
+ Without compute sharing, you need to use input artifacts and share the outputs with subsequent actions:

  ```
  Name: S3PublishUsingInputArtifact
  SchemaVersion: "1.0"
  Actions:
    Build:
      Identifier: aws/build@v1
      Outputs:
        Artifacts:
          - Name: ArtifactToPublish
            Files: [output.zip]
      Inputs:
        Sources:
          - WorkflowSource
      Configuration:
        Steps:
          - Run: ./build.sh # Build script that generates output.zip
    PublishToS3:
      Identifier: aws/s3-publish@v1
      Inputs:
        Artifacts:
        - ArtifactToPublish
      Environment:
        Connections:
          - Role: codecatalyst-deployment-role
            Name: dev-deployment-role
        Name: dev-connection
      Configuration:
        SourcePath: output.zip
        DestinationBucketName: amzn-s3-demo-bucket
  ```
+ When using compute sharing by setting `SharedInstance` to `TRUE`, you can run multiple actions on the same instance and share artifacts by specifying a single workflow source. Input artifacts aren't required and can't be specified:

  ```
  Name: S3PublishUsingComputeSharing
  SchemaVersion: "1.0"
  Compute: 
    Type: EC2
    Fleet: dev-fleet
    SharedInstance: TRUE
  Actions:
    Build:
      Identifier: aws/build@v1
      Inputs:
        Sources:
          - WorkflowSource
      Configuration:
        Steps:
          - Run: ./build.sh # Build script that generates output.zip
    PublishToS3:
      Identifier: aws/s3-publish@v1
      DependsOn: 
        - Build
      Environment:
        Connections:
          - Role: codecatalyst-deployment-role
            Name: dev-deployment-role
        Name: dev-connection
      Configuration:
        SourcePath: output.zip
        DestinationBucketName: amzn-s3-demo-bucket
  ```

# Specifying runtime environment images
<a name="build-images"></a>

A *runtime environment image* is a Docker container within which CodeCatalyst runs workflow actions. The Docker container runs on top of your chosen compute platform, and includes an operating system and extra tools that a workflow action might need, such as the AWS CLI, Node.js, and tar.

By default, workflow actions will run on one of the [active images](#build-curated-images) that are supplied and maintained by CodeCatalyst. Only build and test actions support custom images. For more information, see [Assigning a custom runtime environment Docker image to an action](#build-images-specify).

**Topics**
+ [Active images](#build-curated-images)
+ [What if an active image doesn't include the tools I need?](#build-images-more-tools)
+ [Assigning a custom runtime environment Docker image to an action](#build-images-specify)
+ [Examples](#workflows-working-custom-image-ex)

## Active images
<a name="build-curated-images"></a>

*Active images* are runtime environment images that are fully supported by CodeCatalyst and include preinstalled tooling. There are currently two sets of active images: one released in March 2024, and another released in November 2022.

Whether an action uses a March 2024 or November 2022 image depends on the action:
+ Build and test actions that are added to a workflow on or after March 26, 2024 will include a `Container` section in their YAML definition that explicitly specifies a [March 2024 image](#build.default-image). You can optionally remove the `Container` section to revert to the [November 2022 image](#build.previous-image).
+ Build and test actions that were added to a workflow prior to March 26, 2024 will *not* include a `Container` section in their YAML definition, and consequently will use a [November 2022 image](#build.previous-image). You can keep the November 2022 image, or you can upgrade it. To upgrade the image, open the action in the visual editor, choose the **Configuration** tab, and then select the March 2024 image from the **Runtime environment docker image** drop-down list. This selection will add a `Container` section to the action's YAML definition that is populated with the appropriate March 2024 image.
+ All other actions will use either a [November 2022 image](#build.previous-image) or a [March 2024 image](#build.default-image). For more information, see the action's documentation. 
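
For build and test actions, the image appears in the `Container` section of the action's YAML, nested under `Configuration`. A sketch of that section specifying a March 2024 image:

```
Configuration:
  Container:
    Registry: CODECATALYST # Use an image from the CodeCatalyst registry.
    Image: CodeCatalystLinux_x86_64:2024_03 # A March 2024 active image.
```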

**Topics**
+ [March 2024 images](#build.default-image)
+ [November 2022 images](#build.previous-image)

### March 2024 images
<a name="build.default-image"></a>

The March 2024 images are the latest images provided by CodeCatalyst. There is one March 2024 image per compute type/fleet combination.

The following table shows the tools installed on each March 2024 image.


**March 2024 image tools**  

| Tool | CodeCatalyst Amazon EC2 for Linux x86\_64 - `CodeCatalystLinux_x86_64:2024_03` | CodeCatalyst Lambda for Linux x86\_64 - `CodeCatalystLinuxLambda_x86_64:2024_03` | CodeCatalyst Amazon EC2 for Linux Arm64 - `CodeCatalystLinux_Arm64:2024_03` | CodeCatalyst Lambda for Linux Arm64 - `CodeCatalystLinuxLambda_Arm64:2024_03` | 
| --- | --- | --- | --- | --- | 
| AWS CLI | 2.15.17 | 2.15.17 | 2.15.17 | 2.15.17 | 
| AWS Copilot CLI | 1.32.1 | 1.32.1 | 1.32.1 | 1.32.1 | 
| Docker | 24.0.9 | N/A | 24.0.9 | N/A | 
| Docker Compose | 2.23.3 | N/A | 2.23.3 | N/A | 
| Git | 2.43.0 | 2.43.0 | 2.43.0 | 2.43.0 | 
| Go | 1.21.5 | 1.21.5 | 1.21.5 | 1.21.5 | 
| Gradle | 8.5 | 8.5 | 8.5 | 8.5 | 
| Java | Corretto17 | Corretto17 | Corretto17 | Corretto17 | 
| Maven | 3.9.6 | 3.9.6 | 3.9.6 | 3.9.6 | 
| Node.js | 18.19.0 | 18.19.0 | 18.19.0 | 18.19.0 | 
| npm | 10.2.3 | 10.2.3 | 10.2.3 | 10.2.3 | 
| Python | 3.9.18 | 3.9.18 | 3.9.18 | 3.9.18 | 
| Python3 | 3.11.6 | 3.11.6 | 3.11.6 | 3.11.6 | 
| pip | 22.3.1 | 22.3.1 | 22.3.1 | 22.3.1 | 
| .NET | 8.0.100 | 8.0.100 | 8.0.100 | 8.0.100 | 

### November 2022 images
<a name="build.previous-image"></a>

There is one November 2022 image per compute type/fleet combination. There is also a November 2022 Windows image available with the build action if you've configured a [provisioned fleet](workflows-working-compute.md#compute.fleets).

The following table shows the tools installed on each November 2022 image.


**November 2022 image tools**  

| Tool | CodeCatalyst Amazon EC2 for Linux x86\_64 - `CodeCatalystLinux_x86_64:2022_11` | CodeCatalyst Lambda for Linux x86\_64 - `CodeCatalystLinuxLambda_x86_64:2022_11` | CodeCatalyst Amazon EC2 for Linux Arm64 - `CodeCatalystLinux_Arm64:2022_11` | CodeCatalyst Lambda for Linux Arm64 - `CodeCatalystLinuxLambda_Arm64:2022_11` | CodeCatalyst Amazon EC2 for Windows x86\_64 - `CodeCatalystWindows_x86_64:2022_11` | 
| --- | --- | --- | --- | --- | --- | 
| AWS CLI | 2.15.17 | 2.15.17 | 2.15.17 | 2.15.17 | 2.13.19 | 
| AWS Copilot CLI | 0.6.0 | 0.6.0 | N/A | N/A | 1.30.1 | 
| Docker | 23.0.1 | N/A | 23.0.1 | N/A | N/A | 
| Docker Compose | 2.16.0 | N/A | 2.16.0 | N/A | N/A | 
| Git | 2.40.0 | 2.40.0 | 2.39.2 | 2.39.2 | 2.42.0 | 
| Go | 1.20.2 | 1.20.2 | 1.20.1 | 1.20.1 | 1.19 | 
| Gradle | 8.0.2 | 8.0.2 | 8.0.1 | 8.0.1 | 8.3 | 
| Java | Corretto17 | Corretto17 | Corretto17 | Corretto17 | Corretto17 | 
| Maven | 3.9.4 | 3.9.4 | 3.9.0 | 3.9.0 | 3.9.4 | 
| Node.js | 16.20.2 | 16.20.2 | 16.19.1 | 16.14.2 | 16.20.0 | 
| npm | 8.19.4 | 8.19.4 | 8.19.3 | 8.5.0 | 8.19.4 | 
| Python | 3.9.15 | 2.7.18 | 3.11.2 | 2.7.18 | 3.9.13 | 
| Python3 | N/A | 3.9.15 | N/A | 3.11.2 | N/A | 
| pip | 22.2.2 | 22.2.2 | 23.0.1 | 23.0.1 | 22.0.4 | 
| .NET | 6.0.407 | 6.0.407 | 6.0.406 | 6.0.406 | 6.0.414 | 

## What if an active image doesn't include the tools I need?
<a name="build-images-more-tools"></a>

If none of the [active images](#build-curated-images) supplied by CodeCatalyst include the tools you need, you have a couple of options:
+ You can provide a custom runtime environment Docker image that includes the necessary tools. For more information, see [Assigning a custom runtime environment Docker image to an action](#build-images-specify).
**Note**  
 If you want to provide a custom runtime environment Docker image, make sure that your custom image has Git installed in it. 
+ You can have your workflow's build or test action install the tools you need.

  For example, you could include the following instructions in the `Steps` section of the build or test action's YAML code:

  ```
  Configuration:
    Steps:
      - Run: ./setup-script
  ```

  The *setup-script* instruction would then run the following script, which uses nvm to install Node.js (including the npm package manager):

  ```
  #!/usr/bin/env bash
  echo "Setting up environment"
  
  touch ~/.bashrc
  curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.38.0/install.sh | bash
  source ~/.bashrc 
  nvm install v16.1.0
  source ~/.bashrc
  ```

  For more information about the build action YAML, see [Build and test actions YAML](build-action-ref.md).

## Assigning a custom runtime environment Docker image to an action
<a name="build-images-specify"></a>

If you don't want to use an [active image](#build-curated-images) supplied by CodeCatalyst, you can provide a custom runtime environment Docker image. If you provide a custom image, make sure it has Git installed. The image can reside in Docker Hub, Amazon Elastic Container Registry, or any public repository.

To learn how to create a custom Docker image, see [Containerize an application](https://docs.docker.com/get-started/02_our_app/) in the Docker documentation.

Use the following instructions to assign your custom runtime environment Docker image to an action. After specifying an image, CodeCatalyst deploys it to your compute platform when the action starts.

**Note**  
The following actions do not support custom runtime environment Docker images: **Deploy CloudFormation stack**, **Deploy to ECS**, and **GitHub Actions**. The **Lambda** compute type also does not support custom runtime environment Docker images.

------
#### [ Visual ]

**To assign a custom runtime environment Docker image using the visual editor**

1. Open the CodeCatalyst console at [https://codecatalyst.aws/](https://codecatalyst.aws/).

1. In the navigation pane, choose **CI/CD**, and then choose **Workflows**.

1. Choose the name of your workflow. You can filter by the source repository or branch name where the workflow is defined, or filter by workflow name or status.

1. Choose **Edit**.

1. Choose **Visual**.

1. In the workflow diagram, choose the action that will use your custom runtime environment Docker image.

1. Choose the **Configuration** tab.

1. Near the bottom, fill out the following fields.

   **Runtime environment Docker image - optional**

   Specify the registry where your image is stored. Valid values include:
   + `CODECATALYST` (YAML editor)

     The image is stored in the CodeCatalyst registry.
   + **Docker Hub** (visual editor) or `DockerHub` (YAML editor)

     The image is stored in the Docker Hub image registry.
   + **Other registry** (visual editor) or `Other` (YAML editor)

     The image is stored in a custom image registry. Any publicly available registry can be used.
   + **Amazon Elastic Container Registry** (visual editor) or `ECR` (YAML editor)

     The image is stored in an Amazon Elastic Container Registry image repository. To use an image in an Amazon ECR repository, this action needs access to Amazon ECR. To enable this access, you must create an [IAM role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html) that includes the following permissions and custom trust policy. (You can modify an existing role to include the permissions and policy, if you want.)

     The IAM role must include the following permissions in its role policy:
     + `ecr:BatchCheckLayerAvailability`
     + `ecr:BatchGetImage`
     + `ecr:GetAuthorizationToken`
     + `ecr:GetDownloadUrlForLayer`
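
     Assembled into a role policy document, these permissions might look like the following sketch (the account ID, Region, and repository name are placeholders; scope the repository statement to your own repositories, while `ecr:GetAuthorizationToken` requires a `Resource` of `*`):

     ```
     {
         "Version": "2012-10-17",
         "Statement": [
             {
                 "Effect": "Allow",
                 "Action": [
                     "ecr:BatchCheckLayerAvailability",
                     "ecr:BatchGetImage",
                     "ecr:GetDownloadUrlForLayer"
                 ],
                 "Resource": "arn:aws:ecr:us-west-2:111122223333:repository/codecatalyst-ecs-image-repo"
             },
             {
                 "Effect": "Allow",
                 "Action": "ecr:GetAuthorizationToken",
                 "Resource": "*"
             }
         ]
     }
     ```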

     The IAM role must include the following custom trust policy:
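
     A custom trust policy for CodeCatalyst typically allows the CodeCatalyst service principals to assume the role, along the lines of the following sketch (verify the service principals against the current CodeCatalyst documentation):

     ```
     {
         "Version": "2012-10-17",
         "Statement": [
             {
                 "Effect": "Allow",
                 "Principal": {
                     "Service": [
                         "codecatalyst.amazonaws.com",
                         "codecatalyst-runner.amazonaws.com"
                     ]
                 },
                 "Action": "sts:AssumeRole"
             }
         ]
     }
     ```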

     For more information about creating IAM roles, see [Creating a role using custom trust policies (console)](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-custom.html) in the *IAM User Guide*.

     Once you have created the role, you must assign it to the action through an environment. For more information, see [Associating an environment with an action](deploy-environments-add-app-to-environment.md).

   **ECR image URL**, **Docker Hub image**, or **Image URL**

   Specify one of the following:
   + If you are using a `CODECATALYST` registry, set the image to one of the following [active images](#build-curated-images):
     + `CodeCatalystLinux_x86_64:2024_03`
     + `CodeCatalystLinux_x86_64:2022_11`
     + `CodeCatalystLinux_Arm64:2024_03`
     + `CodeCatalystLinux_Arm64:2022_11`
     + `CodeCatalystLinuxLambda_x86_64:2024_03`
     + `CodeCatalystLinuxLambda_x86_64:2022_11`
     + `CodeCatalystLinuxLambda_Arm64:2024_03`
     + `CodeCatalystLinuxLambda_Arm64:2022_11`
     + `CodeCatalystWindows_x86_64:2022_11`
   + If you are using a Docker Hub registry, set the image to the Docker Hub image name and optional tag.

     Example: `postgres:latest`
   + If you are using an Amazon ECR registry, set the image to the Amazon ECR registry URI.

     Example: `111122223333.dkr.ecr.us-west-2.amazonaws.com/codecatalyst-ecs-image-repo`
   + If you are using a custom registry, set the image to the value expected by the custom registry.

1. (Optional) Choose **Validate** to validate the workflow's YAML code before committing.

1. Choose **Commit**, enter a commit message, and choose **Commit** again.

------
#### [ YAML ]

**To assign a custom runtime environment Docker image using the YAML editor**

1. Open the CodeCatalyst console at [https://codecatalyst.aws/](https://codecatalyst.aws/).

1. In the navigation pane, choose **CI/CD**, and then choose **Workflows**.

1. Choose the name of your workflow. You can filter by the source repository or branch name where the workflow is defined, or filter by workflow name or status.

1. Choose **Edit**.

1. Choose **YAML**.

1. Find the action that you want to assign a runtime environment Docker image to.

1. In the action, add a `Container` section with underlying `Registry` and `Image` properties. For more information, see the description of the `Container`, `Registry`, and `Image` properties in the [Actions](workflow-reference.md#actions-reference) reference for your action.

1. (Optional) Choose **Validate** to validate the workflow's YAML code before committing.

1. Choose **Commit**, enter a commit message, and choose **Commit** again.

------

## Examples
<a name="workflows-working-custom-image-ex"></a>

The following examples show how to assign a custom runtime environment Docker image to an action in the workflow definition file.

**Topics**
+ [Example: Using a custom runtime environment Docker image to add support for Node.js 18 with Amazon ECR](#workflows-working-custom-image-ex-ecr-node18)
+ [Example: Using a custom runtime environment Docker image to add support for Node.js 18 with Docker Hub](#workflows-working-custom-image-ex-docker-node18)

### Example: Using a custom runtime environment Docker image to add support for Node.js 18 with Amazon ECR
<a name="workflows-working-custom-image-ex-ecr-node18"></a>

The following example shows how to use a custom runtime environment Docker image to add support for Node.js 18 with [Amazon ECR](https://gallery.ecr.aws/amazonlinux/amazonlinux).

```
Configuration:
  Container:
    Registry: ECR
    Image: public.ecr.aws/amazonlinux/amazonlinux:2023
```

### Example: Using a custom runtime environment Docker image to add support for Node.js 18 with Docker Hub
<a name="workflows-working-custom-image-ex-docker-node18"></a>

The following example shows how to use a custom runtime environment Docker image to add support for Node.js 18 with [Docker Hub](https://hub.docker.com/_/node).

```
Configuration:
  Container:
    Registry: DockerHub
    Image: node:18.18.2
```
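
The same `Container` shape applies when you pull one of the CodeCatalyst-supplied active images through the `CODECATALYST` registry. The following sketch pins an action to one of the active image names listed earlier in this topic:

```
Configuration:
  Container:
    Registry: CODECATALYST
    Image: CodeCatalystLinux_x86_64:2024_03
```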