

# Deploying with Docker containers to Elastic Beanstalk
<a name="create_deploy_docker"></a>

This chapter explains how you can use Elastic Beanstalk to deploy web applications from Docker containers. Docker containers are self-contained and include all the configuration information and software that your web application requires to run. With Docker containers you can define your own runtime environment. You can also choose your own programming language and application dependencies, such as package managers or tools, which typically aren't supported by other Elastic Beanstalk platforms. 

Follow the steps in [QuickStart for Docker](docker-quickstart.md) to create a Docker "Hello World" application and deploy it to an Elastic Beanstalk environment using the EB CLI.

**Topics**
+ [Elastic Beanstalk Docker platform branches](docker-platform.md)
+ [Using the Elastic Beanstalk Docker platform branch](docker.md)
+ [Using the ECS managed Docker platform branch in Elastic Beanstalk](create_deploy_docker_ecs.md)
+ [Authenticating with image repositories](docker-configuration.remote-repo.md)
+ [Configuring Elastic Beanstalk Docker environments](create_deploy_docker.container.console.md)
+ [Legacy platforms](create_deploy_dockerpreconfig-legacy.md)

# Elastic Beanstalk Docker platform branches
<a name="docker-platform"></a>

The Elastic Beanstalk Docker platform supports the following platform branches:

***Docker running Amazon Linux 2* and *Docker running AL2023***  
Elastic Beanstalk deploys Docker containers and source code to EC2 instances and manages them. These platform branches offer multi-container support. You can use the Docker Compose tool to simplify your application configuration, testing, and deployment. For more information about these platform branches, see [Using the Elastic Beanstalk Docker platform branch](docker.md).

***ECS running on Amazon Linux 2* and *ECS running on AL2023***  
We provide this branch for customers who need a migration path to AL2023/AL2 from the retired platform branch *Multi-container Docker (Amazon Linux AMI)*. The latest platform branches support all of the features from the retired platform branch. No changes to the source code are required. For more information, see [Migrating your Elastic Beanstalk application from ECS managed Multi-container Docker on AL1 to ECS on Amazon Linux 2023](migrate-to-ec2-AL2-platform.md). If you don't have an Elastic Beanstalk environment running on an ECS based platform branch, we recommend the *Docker running on 64bit AL2023* platform branch. It offers a simpler approach and requires fewer resources.

For a list of the software component versions associated with each of these platform branches, see [Docker](https://docs.aws.amazon.com/elasticbeanstalk/latest/platforms/platforms-supported.html#platforms-supported.docker) in the *AWS Elastic Beanstalk Platforms* document.

## Retired platform branches running on Amazon Linux AMI (AL1)
<a name="al1-platforms"></a>

 On [July 18, 2022](https://docs.aws.amazon.com/elasticbeanstalk/latest/relnotes/release-2022-07-18-linux-al1-retire.html), Elastic Beanstalk set the status of all platform branches based on Amazon Linux AMI (AL1) to **retired**. Expand each section that follows to read more about each retired platform branch and its migration path to the latest platform branch running on Amazon Linux 2 or Amazon Linux 2023 (recommended).

### Docker (Amazon Linux AMI)
<a name="docker-platform-single"></a>

This platform branch can deploy a Docker image, described in a Dockerfile or a `Dockerrun.aws.json` v1 definition. This platform branch runs *only one* container for each instance. Its succeeding platform branches, *Docker running on 64bit AL2023* and *Docker running on 64bit Amazon Linux 2*, support multiple Docker containers per instance.

We recommend that you create your environments with the newer and supported platform branch *Docker running on 64bit AL2023*. You can then migrate your application to the newly created environment. For more information about creating these environments, see [Using the Elastic Beanstalk Docker platform branch](docker.md). For more information about migration, see [Migrating your Elastic Beanstalk Linux application to Amazon Linux 2023 or Amazon Linux 2](using-features.migration-al.md).

### Multi-container Docker (Amazon Linux AMI)
<a name="docker-platform-multi"></a>

This platform branch uses Amazon ECS to coordinate a deployment of multiple Docker containers to an Amazon ECS cluster in an Elastic Beanstalk environment. If you're currently using this retired platform branch, we recommend that you migrate to the latest *ECS Running on Amazon Linux 2023* platform branch. The latest platform branch supports all of the features from this discontinued platform branch. No changes to the source code are required. For more information, see [Migrating your Elastic Beanstalk application from ECS managed Multi-container Docker on AL1 to ECS on Amazon Linux 2023](migrate-to-ec2-AL2-platform.md).

### Preconfigured Docker containers
<a name="docker-platform-preconfigured"></a>

In addition to the previously mentioned Docker platforms, there is also the *Preconfigured Docker GlassFish* platform branch that runs on the Amazon Linux AMI operating system (AL1).

This platform branch has been superseded by the platform branches *Docker running on 64bit AL2023* and *Docker running on 64bit Amazon Linux 2*. For more information, see [Deploying a GlassFish application to the Docker platform](create_deploy_dockerpreconfig.md#docker-glassfish-tutorial).

# Using the Elastic Beanstalk Docker platform branch
<a name="docker"></a>

This section describes how to prepare your Docker image for launch with either of the Elastic Beanstalk platform branches *Docker running on Amazon Linux 2* or *Docker running on AL2023*.

Follow the steps in [QuickStart for Docker](docker-quickstart.md) to create a Docker "Hello World" application and deploy it to an Elastic Beanstalk environment using the EB CLI.

**Topics**
+ [QuickStart: Deploy a Docker application to Elastic Beanstalk](docker-quickstart.md)
+ [QuickStart: Deploy a Docker Compose application to Elastic Beanstalk](docker-compose-quickstart.md)
+ [Preparing your Docker image for deployment to Elastic Beanstalk](single-container-docker-configuration.md)

# QuickStart: Deploy a Docker application to Elastic Beanstalk
<a name="docker-quickstart"></a>

This QuickStart tutorial walks you through the process of creating a Docker application and deploying it to an AWS Elastic Beanstalk environment.

**Not for production use**  
Examples are intended for demonstration only. Do not use example applications in production.

**Topics**
+ [Your AWS account](#docker-quickstart-aws-account)
+ [Prerequisites](#docker-quickstart-prereq)
+ [Step 1: Create a Docker application and container](#docker-quickstart-create-app)
+ [Step 2: Run your application locally](#docker-quickstart-run-local)
+ [Step 3: Deploy your Docker application with the EB CLI](#docker-quickstart-deploy)
+ [Step 4: Run your application on Elastic Beanstalk](#docker-quickstart-run-eb-ap)
+ [Step 5: Clean up](#go-tutorial-cleanup)
+ [AWS resources for your application](#docker-quickstart-eb-resources)
+ [Next steps](#docker-quickstart-next-steps)
+ [Deploy with the Elastic Beanstalk console](#docker-quickstart-console)

## Your AWS account
<a name="docker-quickstart-aws-account"></a>

If you're not already an AWS customer, you need to create an AWS account. Signing up enables you to access Elastic Beanstalk and other AWS services that you need.

If you already have an AWS account, you can move on to [Prerequisites](#docker-quickstart-prereq).

### Create an AWS account
<a name="docker-quickstart-aws-account-procedure"></a>

#### Sign up for an AWS account
<a name="sign-up-for-aws"></a>

If you do not have an AWS account, complete the following steps to create one.

**To sign up for an AWS account**

1. Open [https://portal.aws.amazon.com/billing/signup](https://portal.aws.amazon.com/billing/signup).

1. Follow the online instructions.

   Part of the sign-up procedure involves receiving a phone call or text message and entering a verification code on the phone keypad.

   When you sign up for an AWS account, an *AWS account root user* is created. The root user has access to all AWS services and resources in the account. As a security best practice, assign administrative access to a user, and use only the root user to perform [tasks that require root user access](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_root-user.html#root-user-tasks).

AWS sends you a confirmation email after the sign-up process is complete. At any time, you can view your current account activity and manage your account by going to [https://aws.amazon.com/](https://aws.amazon.com/) and choosing **My Account**.

#### Create a user with administrative access
<a name="create-an-admin"></a>

After you sign up for an AWS account, secure your AWS account root user, enable AWS IAM Identity Center, and create an administrative user so that you don't use the root user for everyday tasks.

**Secure your AWS account root user**

1.  Sign in to the [AWS Management Console](https://console.aws.amazon.com/) as the account owner by choosing **Root user** and entering your AWS account email address. On the next page, enter your password.

   For help signing in by using the root user, see [Signing in as the root user](https://docs.aws.amazon.com/signin/latest/userguide/console-sign-in-tutorials.html#introduction-to-root-user-sign-in-tutorial) in the *AWS Sign-In User Guide*.

1. Turn on multi-factor authentication (MFA) for your root user.

   For instructions, see [Enable a virtual MFA device for your AWS account root user (console)](https://docs.aws.amazon.com/IAM/latest/UserGuide/enable-virt-mfa-for-root.html) in the *IAM User Guide*.

**Create a user with administrative access**

1. Enable IAM Identity Center.

   For instructions, see [Enabling AWS IAM Identity Center](https://docs.aws.amazon.com//singlesignon/latest/userguide/get-set-up-for-idc.html) in the *AWS IAM Identity Center User Guide*.

1. In IAM Identity Center, grant administrative access to a user.

   For a tutorial about using the IAM Identity Center directory as your identity source, see [ Configure user access with the default IAM Identity Center directory](https://docs.aws.amazon.com//singlesignon/latest/userguide/quick-start-default-idc.html) in the *AWS IAM Identity Center User Guide*.

**Sign in as the user with administrative access**
+ To sign in with your IAM Identity Center user, use the sign-in URL that was sent to your email address when you created the IAM Identity Center user.

  For help signing in using an IAM Identity Center user, see [Signing in to the AWS access portal](https://docs.aws.amazon.com/signin/latest/userguide/iam-id-center-sign-in-tutorial.html) in the *AWS Sign-In User Guide*.

**Assign access to additional users**

1. In IAM Identity Center, create a permission set that follows the best practice of applying least-privilege permissions.

   For instructions, see [ Create a permission set](https://docs.aws.amazon.com//singlesignon/latest/userguide/get-started-create-a-permission-set.html) in the *AWS IAM Identity Center User Guide*.

1. Assign users to a group, and then assign single sign-on access to the group.

   For instructions, see [ Add groups](https://docs.aws.amazon.com//singlesignon/latest/userguide/addgroups.html) in the *AWS IAM Identity Center User Guide*.

## Prerequisites
<a name="docker-quickstart-prereq"></a>

To follow the procedures in this guide, you will need a command line terminal or shell to run commands. Commands are shown in listings preceded by a prompt symbol ($) and the name of the current directory, when appropriate.

```
~/eb-project$ this is a command
this is output
```

On Linux and macOS, you can use your preferred shell and package manager. On Windows you can [install the Windows Subsystem for Linux](https://docs.microsoft.com/en-us/windows/wsl/install-win10) to get a Windows-integrated version of Ubuntu and Bash.

### EB CLI
<a name="docker-quickstart-prereq.ebcli"></a>

This tutorial uses the Elastic Beanstalk Command Line Interface (EB CLI). For details on installing and configuring the EB CLI, see [Install EB CLI with setup script (recommended)](eb-cli3.md#eb-cli3-install) and [Configure the EB CLI](eb-cli3-configuration.md).

### Docker
<a name="docker-quickstart-prereq.runtime"></a>

To follow this tutorial, you'll need a working local installation of Docker. For more information, see [Get Docker](https://docs.docker.com/get-docker/) on the Docker documentation website.

Verify the Docker daemon is up and running by running the following command.

```
~$ docker info
```

## Step 1: Create a Docker application and container
<a name="docker-quickstart-create-app"></a>

For this example, we create a Docker image of the sample Flask application that's also referenced in [Deploying a Flask application to Elastic Beanstalk](create-deploy-python-flask.md).

The application consists of two files:
+ `app.py`— the Python file that contains the code that will execute in the container.
+ `Dockerfile`— the Dockerfile to build your image.

Place both files at the root of a directory.

```
~/eb-docker-flask/
|-- Dockerfile
|-- app.py
```

Add the following contents to your `Dockerfile`.

**Example `~/eb-docker-flask/Dockerfile`**  

```
FROM public.ecr.aws/docker/library/python:3.12
COPY . /app
WORKDIR /app
RUN pip install Flask==3.1.1
EXPOSE 5000
CMD [ "python3", "-m" , "flask", "run", "--host=0.0.0.0"]
```

Add the following contents to your `app.py` file.

**Example `~/eb-docker-flask/app.py`**  

```
from flask import Flask
app = Flask(__name__)
@app.route('/')
def hello_world():
    return 'Hello Elastic Beanstalk! This is a Docker application'
```

Use the [docker build](https://docs.docker.com/reference/cli/docker/image/build/) command to build your container image locally, tagging the image with `eb-docker-flask`. The period (`.`) at the end of the command specifies that the build context is the current directory.

```
~/eb-docker-flask$ docker build -t eb-docker-flask .
```

## Step 2: Run your application locally
<a name="docker-quickstart-run-local"></a>

Run your container with the [docker run](https://docs.docker.com/reference/cli/docker/container/run/) command. The command prints the ID of the running container. The **-d** option runs the container in detached (background) mode. The **-p** option maps port 5000 on your local machine to port 5000 in the container. Elastic Beanstalk serves traffic to port 5000 on the Docker platform by default.

```
~/eb-docker-flask$ docker run -dp 127.0.0.1:5000:5000 eb-docker-flask
```

Navigate to `http://127.0.0.1:5000/` in your browser. You should see the text "Hello Elastic Beanstalk! This is a Docker application".

Run the [docker kill](https://docs.docker.com/reference/cli/docker/container/kill/) command to terminate the container, replacing *container-id* with the ID that **docker run** printed.

```
~/eb-docker-flask$ docker kill container-id
```

## Step 3: Deploy your Docker application with the EB CLI
<a name="docker-quickstart-deploy"></a>

Run the following commands to create an Elastic Beanstalk environment for this application.

 

**To create an environment and deploy your Docker application**

1. Initialize your EB CLI repository with the **eb init** command.

   ```
   ~/eb-docker-flask$ eb init -p docker docker-tutorial --region us-east-2
   Application docker-tutorial has been created.
   ```

   This command creates an application named `docker-tutorial` and configures your local repository to create environments with the latest Docker platform version.

1. (Optional) Run **eb init** again to configure a default key pair so that you can use SSH to connect to the EC2 instance running your application.

   ```
   ~/eb-docker-flask$ eb init
   Do you want to set up SSH for your instances?
   (y/n): y
   Select a keypair.
   1) my-keypair
   2) [ Create new KeyPair ]
   ```

   Select a key pair if you have one already, or follow the prompts to create one. If you don't see the prompt or need to change your settings later, run **eb init -i**.

1. Create an environment and deploy your application to it with **eb create**. Elastic Beanstalk automatically builds a zip file for your application and starts it on port 5000.

   ```
   ~/eb-docker-flask$ eb create docker-tutorial
   ```

   It takes about five minutes for Elastic Beanstalk to create your environment.

## Step 4: Run your application on Elastic Beanstalk
<a name="docker-quickstart-run-eb-ap"></a>

When the process to create your environment completes, open your website with **eb open**.

```
~/eb-docker-flask$ eb open
```

This opens a browser window using the domain name created for your application. Congratulations! You've deployed a Docker application with Elastic Beanstalk!

## Step 5: Clean up
<a name="go-tutorial-cleanup"></a>

You can terminate your environment when you finish working with your application. Elastic Beanstalk terminates all AWS resources associated with your environment.

To terminate your Elastic Beanstalk environment with the EB CLI run the following command.

```
~/eb-docker-flask$ eb terminate
```

## AWS resources for your application
<a name="docker-quickstart-eb-resources"></a>

You just created a single-instance application. It serves as a straightforward sample application with a single EC2 instance, so it doesn't require load balancing or auto scaling. For single-instance applications, Elastic Beanstalk creates the following AWS resources:
+ **EC2 instance** – An Amazon EC2 virtual machine configured to run web apps on the platform you choose.

  Each platform runs a different set of software, configuration files, and scripts to support a specific language version, framework, web container, or combination thereof. Most platforms use either Apache or nginx as a reverse proxy that processes web traffic in front of your web app, forwards requests to it, serves static assets, and generates access and error logs.
+ **Instance security group** – An Amazon EC2 security group configured to allow inbound traffic on port 80. This resource lets HTTP traffic reach the EC2 instance running your web app. By default, traffic isn't allowed on other ports.
+ **Amazon S3 bucket** – A storage location for your source code, logs, and other artifacts that are created when you use Elastic Beanstalk.
+ **Amazon CloudWatch alarms** – Two CloudWatch alarms that monitor the load on the instances in your environment and are triggered if the load is too high or too low. When an alarm is triggered, your Auto Scaling group scales up or down in response.
+ **CloudFormation stack** – Elastic Beanstalk uses CloudFormation to launch the resources in your environment and propagate configuration changes. The resources are defined in a template that you can view in the [CloudFormation console](https://console.aws.amazon.com/cloudformation).
+ **Domain name** – A domain name that routes to your web app in the form *subdomain*.*region*.elasticbeanstalk.com.

Elastic Beanstalk manages all of these resources. When you terminate your environment, Elastic Beanstalk terminates all the resources that it contains.
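
To make the reverse-proxy role described above concrete, here is a toy sketch using only Python's standard library: a stand-in backend app and a minimal proxy that forwards requests to it, the same flow nginx or Apache performs in front of your web app. The port numbers and response text are made up for this sketch.

```python
import http.server
import threading
import urllib.request

# Hypothetical local ports for this sketch only.
BACKEND_PORT = 8081
PROXY_PORT = 8080

class Backend(http.server.BaseHTTPRequestHandler):
    """Stands in for the web app running behind the proxy."""
    def do_GET(self):
        body = b"hello from the app"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):
        pass  # keep the demo output quiet

class Proxy(http.server.BaseHTTPRequestHandler):
    """Forwards each request to the backend, like nginx's proxy_pass."""
    def do_GET(self):
        with urllib.request.urlopen(
                f"http://127.0.0.1:{BACKEND_PORT}{self.path}") as upstream:
            body = upstream.read()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):
        pass

backend = http.server.HTTPServer(("127.0.0.1", BACKEND_PORT), Backend)
proxy = http.server.HTTPServer(("127.0.0.1", PROXY_PORT), Proxy)
threading.Thread(target=backend.serve_forever, daemon=True).start()
threading.Thread(target=proxy.serve_forever, daemon=True).start()

# The client talks only to the proxy; the proxy relays to the backend.
with urllib.request.urlopen(f"http://127.0.0.1:{PROXY_PORT}/") as resp:
    result = resp.read().decode()

backend.shutdown()
proxy.shutdown()
print(result)
```

A real proxy like nginx does this far more efficiently and also serves static assets and writes access logs, but the flow is the same: clients talk only to the proxy, which relays requests to your application.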

## Next steps
<a name="docker-quickstart-next-steps"></a>

After you have an environment running an application, you can deploy a new version of the application or a different application at any time. Deploying a new application version is very quick because it doesn't require provisioning or restarting EC2 instances. You can also explore your new environment using the Elastic Beanstalk console. For detailed steps, see [Explore your environment](GettingStarted.md#GettingStarted.Explore) in the *Getting started* chapter of this guide.

After you deploy a sample application or two and are ready to start developing and running Docker applications locally, see [Preparing your Docker image for deployment to Elastic Beanstalk](single-container-docker-configuration.md). 

## Deploy with the Elastic Beanstalk console
<a name="docker-quickstart-console"></a>

You can also use the Elastic Beanstalk console to launch the sample application. For detailed steps, see [Create an example application](GettingStarted.md#GettingStarted.CreateApp) in the *Getting started* chapter of this guide.

# QuickStart: Deploy a Docker Compose application to Elastic Beanstalk
<a name="docker-compose-quickstart"></a>

This QuickStart tutorial walks you through the process of creating a multi-container Docker Compose application and deploying it to an AWS Elastic Beanstalk environment. You'll create a Flask web application with an nginx reverse proxy to demonstrate how Docker Compose simplifies orchestrating multiple containers.

**Not for production use**  
Examples are intended for demonstration only. Do not use example applications in production.

**Topics**
+ [Your AWS account](#docker-compose-quickstart-aws-account)
+ [Prerequisites](#docker-compose-quickstart-prereq)
+ [Step 1: Create a Docker Compose application](#docker-compose-quickstart-create-app)
+ [Step 2: Run your application locally](#docker-compose-quickstart-run-local)
+ [Step 3: Deploy your Docker Compose application with the EB CLI](#docker-compose-quickstart-deploy)
+ [Step 4: Test your application on Elastic Beanstalk](#docker-compose-quickstart-run-eb-ap)
+ [Step 5: Clean up](#docker-compose-quickstart-cleanup)
+ [AWS resources for your application](#docker-compose-quickstart-eb-resources)
+ [Next steps](#docker-compose-quickstart-next-steps)
+ [Deploy with the Elastic Beanstalk console](#docker-compose-quickstart-console)

## Your AWS account
<a name="docker-compose-quickstart-aws-account"></a>

If you're not already an AWS customer, you need to create an AWS account. Signing up enables you to access Elastic Beanstalk and other AWS services that you need.

If you already have an AWS account, you can move on to [Prerequisites](#docker-compose-quickstart-prereq).

### Create an AWS account
<a name="docker-compose-quickstart-aws-account-procedure"></a>

#### Sign up for an AWS account
<a name="sign-up-for-aws"></a>

If you do not have an AWS account, complete the following steps to create one.

**To sign up for an AWS account**

1. Open [https://portal.aws.amazon.com/billing/signup](https://portal.aws.amazon.com/billing/signup).

1. Follow the online instructions.

   Part of the sign-up procedure involves receiving a phone call or text message and entering a verification code on the phone keypad.

   When you sign up for an AWS account, an *AWS account root user* is created. The root user has access to all AWS services and resources in the account. As a security best practice, assign administrative access to a user, and use only the root user to perform [tasks that require root user access](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_root-user.html#root-user-tasks).

AWS sends you a confirmation email after the sign-up process is complete. At any time, you can view your current account activity and manage your account by going to [https://aws.amazon.com/](https://aws.amazon.com/) and choosing **My Account**.

#### Create a user with administrative access
<a name="create-an-admin"></a>

After you sign up for an AWS account, secure your AWS account root user, enable AWS IAM Identity Center, and create an administrative user so that you don't use the root user for everyday tasks.

**Secure your AWS account root user**

1.  Sign in to the [AWS Management Console](https://console.aws.amazon.com/) as the account owner by choosing **Root user** and entering your AWS account email address. On the next page, enter your password.

   For help signing in by using the root user, see [Signing in as the root user](https://docs.aws.amazon.com/signin/latest/userguide/console-sign-in-tutorials.html#introduction-to-root-user-sign-in-tutorial) in the *AWS Sign-In User Guide*.

1. Turn on multi-factor authentication (MFA) for your root user.

   For instructions, see [Enable a virtual MFA device for your AWS account root user (console)](https://docs.aws.amazon.com/IAM/latest/UserGuide/enable-virt-mfa-for-root.html) in the *IAM User Guide*.

**Create a user with administrative access**

1. Enable IAM Identity Center.

   For instructions, see [Enabling AWS IAM Identity Center](https://docs.aws.amazon.com//singlesignon/latest/userguide/get-set-up-for-idc.html) in the *AWS IAM Identity Center User Guide*.

1. In IAM Identity Center, grant administrative access to a user.

   For a tutorial about using the IAM Identity Center directory as your identity source, see [ Configure user access with the default IAM Identity Center directory](https://docs.aws.amazon.com//singlesignon/latest/userguide/quick-start-default-idc.html) in the *AWS IAM Identity Center User Guide*.

**Sign in as the user with administrative access**
+ To sign in with your IAM Identity Center user, use the sign-in URL that was sent to your email address when you created the IAM Identity Center user.

  For help signing in using an IAM Identity Center user, see [Signing in to the AWS access portal](https://docs.aws.amazon.com/signin/latest/userguide/iam-id-center-sign-in-tutorial.html) in the *AWS Sign-In User Guide*.

**Assign access to additional users**

1. In IAM Identity Center, create a permission set that follows the best practice of applying least-privilege permissions.

   For instructions, see [ Create a permission set](https://docs.aws.amazon.com//singlesignon/latest/userguide/get-started-create-a-permission-set.html) in the *AWS IAM Identity Center User Guide*.

1. Assign users to a group, and then assign single sign-on access to the group.

   For instructions, see [ Add groups](https://docs.aws.amazon.com//singlesignon/latest/userguide/addgroups.html) in the *AWS IAM Identity Center User Guide*.

## Prerequisites
<a name="docker-compose-quickstart-prereq"></a>

To follow the procedures in this guide, you will need a command line terminal or shell to run commands. Commands are shown in listings preceded by a prompt symbol ($) and the name of the current directory, when appropriate.

```
~/eb-project$ this is a command
this is output
```

On Linux and macOS, you can use your preferred shell and package manager. On Windows you can [install the Windows Subsystem for Linux](https://docs.microsoft.com/en-us/windows/wsl/install-win10) to get a Windows-integrated version of Ubuntu and Bash.

### EB CLI
<a name="docker-compose-quickstart-prereq.ebcli"></a>

This tutorial uses the Elastic Beanstalk Command Line Interface (EB CLI). For details on installing and configuring the EB CLI, see [Install EB CLI with setup script (recommended)](eb-cli3.md#eb-cli3-install) and [Configure the EB CLI](eb-cli3-configuration.md).

### Docker and Docker Compose
<a name="docker-compose-quickstart-prereq.runtime"></a>

To follow this tutorial, you'll need a working local installation of Docker and Docker Compose. For more information, see [Get Docker](https://docs.docker.com/get-docker/) and [Install Docker Compose](https://docs.docker.com/compose/install/) on the Docker documentation website.

Verify that Docker and Docker Compose are installed and running by running the following commands.

```
~$ docker info
~$ docker compose version
```

## Step 1: Create a Docker Compose application
<a name="docker-compose-quickstart-create-app"></a>

For this example, we create a multi-container application using Docker Compose that consists of a Flask web application and an nginx reverse proxy. This demonstrates how Docker Compose simplifies orchestrating multiple containers that work together.

The application includes health monitoring configuration that allows Elastic Beanstalk to collect detailed application metrics from your nginx proxy.

The application consists of the following structure:

```
~/eb-docker-compose-flask/
|-- docker-compose.yml
|-- web/
|   |-- Dockerfile
|   |-- app.py
|   `-- requirements.txt
|-- proxy/
|   |-- Dockerfile
|   `-- nginx.conf
`-- .platform/
    `-- hooks/
        `-- postdeploy/
            `-- 01_setup_healthd_permissions.sh
```

Create the directory structure and add the following files:
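
One way to create this skeleton from your shell (using the directory names shown in the listing above) is:

```shell
# Create the directory tree and empty files from the listing above
mkdir -p eb-docker-compose-flask/web \
         eb-docker-compose-flask/proxy \
         eb-docker-compose-flask/.platform/hooks/postdeploy
touch eb-docker-compose-flask/docker-compose.yml \
      eb-docker-compose-flask/web/Dockerfile \
      eb-docker-compose-flask/web/app.py \
      eb-docker-compose-flask/web/requirements.txt \
      eb-docker-compose-flask/proxy/Dockerfile \
      eb-docker-compose-flask/proxy/nginx.conf
```

You can then fill each file with the contents that follow.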

First, create the main `docker-compose.yml` file that defines the services and their relationships.

**Example `~/eb-docker-compose-flask/docker-compose.yml`**  

```
services:
  web:
    build: ./web
    expose:
      - "5000"

  nginx-proxy:
    build: ./proxy
    ports:
      - "80:80"
    volumes:
      - "/var/log/nginx:/var/log/nginx"
    depends_on:
      - web
```

Create the Flask web application in the `web` directory. Add the following contents to your `app.py` file.

**Example `~/eb-docker-compose-flask/web/app.py`**  

```
from flask import Flask
app = Flask(__name__)
@app.route('/')
def hello_world():
    return 'Hello Elastic Beanstalk! This is a Docker Compose application'
```

Add the following contents to your web service `Dockerfile`.

**Example `~/eb-docker-compose-flask/web/Dockerfile`**  

```
FROM public.ecr.aws/docker/library/python:3.12
COPY . /app
WORKDIR /app
RUN pip install Flask==3.1.1
EXPOSE 5000
CMD [ "python3", "-m" , "flask", "run", "--host=0.0.0.0"]
```

Create the nginx reverse proxy in the `proxy` directory. Add the following contents to your `nginx.conf` file.

This configuration includes health monitoring setup that allows Elastic Beanstalk to collect detailed application metrics. For more information about customizing health monitoring log formats, see [Enhanced health log format](health-enhanced-serverlogs.md).

**Example `~/eb-docker-compose-flask/proxy/nginx.conf`**  

```
events {
    worker_connections 1024;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    map $http_upgrade $connection_upgrade {
        default       "upgrade";
    }

    # Health monitoring log format for Elastic Beanstalk
    log_format healthd '$msec"$uri"$status"$request_time"$upstream_response_time"$http_x_forwarded_for';
    
    upstream flask_app {
        server web:5000;
    }

    server {
        listen 80 default_server;
        root /usr/share/nginx/html;

        # Standard access log
        access_log /var/log/nginx/access.log;
        
        # Health monitoring log for Elastic Beanstalk
        if ($time_iso8601 ~ "^(\d{4})-(\d{2})-(\d{2})T(\d{2})") {
            set $year $1;
            set $month $2;
            set $day $3;
            set $hour $4;
        }
        access_log /var/log/nginx/healthd/application.log.$year-$month-$day-$hour healthd;
        
        location / {
            proxy_pass http://flask_app;
            proxy_http_version    1.1;

            proxy_set_header    Connection             $connection_upgrade;
            proxy_set_header    Upgrade                $http_upgrade;
            proxy_set_header    Host                   $host;
            proxy_set_header    X-Real-IP              $remote_addr;
            proxy_set_header    X-Forwarded-For        $proxy_add_x_forwarded_for;
        }
    }
}
```
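
The `log_format healthd` directive above joins six values with a literal double quote as the field separator. A quick Python sketch (the sample values are made up for illustration) shows how one record splits into those fields:

```python
# One healthd-format record:
#   msec"uri"status"request_time"upstream_response_time"x_forwarded_for
# The values below are made up for illustration.
line = '1700000000.123"/"200"0.004"0.003"-'

# The double quote is a field separator here, not a quoting character.
fields = line.split('"')
msec, uri, status, request_time, upstream_time, x_forwarded_for = fields
print(uri, status)  # / 200
```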

Add the following contents to your proxy service `Dockerfile`.

**Example `~/eb-docker-compose-flask/proxy/Dockerfile`**  

```
FROM public.ecr.aws/nginx/nginx:alpine
COPY nginx.conf /etc/nginx/nginx.conf
EXPOSE 80
```

Finally, create a platform hook script to set up the necessary log directories and permissions for health monitoring. Platform hooks allow you to run custom scripts during the deployment process. For more information about platform hooks, see [Platform hooks](platforms-linux-extend.hooks.md).

**Example `~/eb-docker-compose-flask/.platform/hooks/postdeploy/01_setup_healthd_permissions.sh`**  

```
#!/bin/bash
set -ex

NGINX_CONTAINER=$(docker ps --filter "name=nginx-proxy" -q)

if [ -z "$NGINX_CONTAINER" ]; then
    echo "Error: No nginx-proxy container found running"
    exit 1
fi

NGINX_UID=$(docker exec ${NGINX_CONTAINER} id -u nginx)
NGINX_GID=$(docker exec ${NGINX_CONTAINER} id -g nginx)

mkdir -p /var/log/nginx/healthd
chown -R ${NGINX_UID}:${NGINX_GID} /var/log/nginx
```
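
Platform hook files must have execute permissions, or Elastic Beanstalk won't run them during deployment. One way to grant the permission, assuming a Unix shell at the project root:

```
# Create the hook path if it doesn't exist yet, then mark the
# script executable so Elastic Beanstalk runs it after deployment.
mkdir -p .platform/hooks/postdeploy
touch .platform/hooks/postdeploy/01_setup_healthd_permissions.sh
chmod +x .platform/hooks/postdeploy/01_setup_healthd_permissions.sh
```

If you author the script on Windows, also make sure it uses LF line endings so the `#!/bin/bash` interpreter line is read correctly.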

## Step 2: Run your application locally
<a name="docker-compose-quickstart-run-local"></a>

Use the [docker compose up](https://docs.docker.com/compose/reference/up/) command to build and run your multi-container application locally. Docker Compose will build both container images and start the services defined in your `docker-compose.yml` file.

```
~/eb-docker-compose-flask$ docker compose up --build
```

The **--build** option ensures that Docker Compose builds the container images before starting the services. You should see output showing both the web service and nginx-proxy service starting up.

Navigate to `http://localhost` in your browser. You should see the text "Hello Elastic Beanstalk! This is a Docker Compose application". The nginx proxy receives your request on port 80 and forwards it to the Flask application running on port 5000.

When you're finished testing, stop the application by pressing **Ctrl+C** in the terminal, or run the following command in a separate terminal:

```
~/eb-docker-compose-flask$ docker compose down
```

## Step 3: Deploy your Docker Compose application with the EB CLI
<a name="docker-compose-quickstart-deploy"></a>

Run the following commands to create an Elastic Beanstalk environment for this application.

 

**To create an environment and deploy your Docker Compose application**

1. Initialize your EB CLI repository with the **eb init** command.

   ```
   ~/eb-docker-compose-flask$ eb init -p docker docker-compose-tutorial --region us-east-2
   Application docker-compose-tutorial has been created.
   ```

   This command creates an application named `docker-compose-tutorial` and configures your local repository to create environments with the latest Docker platform version.

1. (Optional) Run **eb init** again to configure a default key pair so that you can use SSH to connect to the EC2 instance running your application.

   ```
   ~/eb-docker-compose-flask$ eb init
   Do you want to set up SSH for your instances?
   (y/n): y
   Select a keypair.
   1) my-keypair
   2) [ Create new KeyPair ]
   ```

   Select a key pair if you have one already, or follow the prompts to create one. If you don't see the prompt or need to change your settings later, run **eb init -i**.

1. Create an environment and deploy your application to it with **eb create**. Elastic Beanstalk automatically detects your `docker-compose.yml` file and deploys your multi-container application.

   ```
   ~/eb-docker-compose-flask$ eb create docker-compose-tutorial
   ```

   It takes about five minutes for Elastic Beanstalk to create your environment and deploy your multi-container application.

## Step 4: Test your application on Elastic Beanstalk
<a name="docker-compose-quickstart-run-eb-ap"></a>

When the process to create your environment completes, open your website with **eb open**.

```
~/eb-docker-compose-flask$ eb open
```

Great! You've deployed a multi-container Docker Compose application with Elastic Beanstalk! The **eb open** command opens a browser window using the domain name created for your application. You should see the message from your Flask application, served through the nginx reverse proxy.

## Step 5: Clean up
<a name="docker-compose-quickstart-cleanup"></a>

You can terminate your environment when you finish working with your application. Elastic Beanstalk terminates all AWS resources associated with your environment.

To terminate your Elastic Beanstalk environment with the EB CLI, run the following command.

```
~/eb-docker-compose-flask$ eb terminate
```

## AWS resources for your application
<a name="docker-compose-quickstart-eb-resources"></a>

You just created a single-instance application running multiple containers. As a straightforward sample application with a single EC2 instance, it doesn't require load balancing or auto scaling. For single-instance applications, Elastic Beanstalk creates the following AWS resources:
+ **EC2 instance** – An Amazon EC2 virtual machine configured to run web apps on the platform you choose.

  Each platform runs a different set of software, configuration files, and scripts to support a specific language version, framework, web container, or combination thereof. Most platforms use either Apache or nginx as a reverse proxy that processes web traffic in front of your web app, forwards requests to it, serves static assets, and generates access and error logs.
+ **Instance security group** – An Amazon EC2 security group configured to allow inbound traffic on port 80. This resource lets HTTP traffic reach the EC2 instance running your web app. By default, traffic isn't allowed on other ports.
+ **Amazon S3 bucket** – A storage location for your source code, logs, and other artifacts that are created when you use Elastic Beanstalk.
+ **Amazon CloudWatch alarms** – Two CloudWatch alarms that monitor the load on the instances in your environment and are triggered if the load is too high or too low. When an alarm is triggered, your Auto Scaling group scales up or down in response.
+ **CloudFormation stack** – Elastic Beanstalk uses CloudFormation to launch the resources in your environment and propagate configuration changes. The resources are defined in a template that you can view in the [CloudFormation console](https://console.aws.amazon.com/cloudformation).
+ **Domain name** – A domain name that routes to your web app in the form *subdomain.region.elasticbeanstalk.com*.

Elastic Beanstalk manages all of these resources. When you terminate your environment, Elastic Beanstalk terminates all the resources that it contains. Your Docker Compose application runs multiple containers on the single EC2 instance, with Elastic Beanstalk handling the orchestration automatically.

## Next steps
<a name="docker-compose-quickstart-next-steps"></a>

After you have an environment running an application, you can deploy a new version of the application or a different application at any time. Deploying a new application version is very quick because it doesn't require provisioning or restarting EC2 instances. You can also explore your new environment using the Elastic Beanstalk console. For detailed steps, see [Explore your environment](GettingStarted.md#GettingStarted.Explore) in the *Getting started* chapter of this guide.

After you deploy a sample application or two and are ready to start developing and running Docker Compose applications locally, see [Preparing your Docker image for deployment to Elastic Beanstalk](single-container-docker-configuration.md). 

## Deploy with the Elastic Beanstalk console
<a name="docker-compose-quickstart-console"></a>

You can also use the Elastic Beanstalk console to launch a Docker Compose application. Create a ZIP file containing your `docker-compose.yml` file and all associated directories and files, then upload it when creating a new application. For detailed steps, see [Create an example application](GettingStarted.md#GettingStarted.CreateApp) in the *Getting started* chapter of this guide.

# Preparing your Docker image for deployment to Elastic Beanstalk
<a name="single-container-docker-configuration"></a>

This section describes how to prepare your Docker image for deployment to Elastic Beanstalk with either of the *Docker running AL2 or AL2023* platform branches. The configuration files that you need depend on whether your images are local or remote, and on whether you're using Docker Compose.

**Note**  
For an example of a procedure that launches a Docker environment, see the [QuickStart for Docker](docker-quickstart.md) topic.

**Topics**
+ [Managing your images with Docker Compose in Elastic Beanstalk](#single-container-docker-configuration-dc)
+ [Managing images without Docker Compose in Elastic Beanstalk](#single-container-docker-configuration.no-compose)
+ [Building custom images with a Dockerfile](#single-container-docker-configuration.dockerfile)

## Managing your images with Docker Compose in Elastic Beanstalk
<a name="single-container-docker-configuration-dc"></a>

You may choose to use Docker Compose to manage various services in one YAML file. To learn more about Docker Compose, see [Why use Compose?](https://docs.docker.com/compose/intro/features-uses/) on the Docker website.
+ Create a `docker-compose.yml`. This file is required if you're using Docker Compose to manage your application with Elastic Beanstalk. If all your deployments are sourced from images in public repositories, then no other configuration files are required. If your deployment's source images are in a private repository, you'll need to do some additional configuration. For more information, see [Using images from a private repository](docker-configuration.remote-repo.md). For more information about the `docker-compose.yml` file, see [Compose file reference](https://docs.docker.com/compose/compose-file/) on the Docker website.
+  The `Dockerfile` is optional. Create one if you need Elastic Beanstalk to build and run a local custom image. For more information about the `Dockerfile` see [Dockerfile reference](https://docs.docker.com/engine/reference/builder/) on the Docker website.
+  You may need to create a `.zip` file. If you use only a `Dockerfile` to deploy your application, you don't need to create one. If you use additional configuration files, the `.zip` file must include the `Dockerfile`, the `docker-compose.yml` file, your application files, and any application dependencies. The `Dockerfile` and the `docker-compose.yml` must be at the root, or top level, of the `.zip` archive. If you use the EB CLI to deploy your application, it creates the `.zip` file for you.

To learn more about Docker Compose and how to install it, see the Docker sites [Overview of Docker Compose](https://docs.docker.com/compose/) and [Install Docker Compose](https://docs.docker.com/compose/install/).
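
As a reference point, a minimal `docker-compose.yml` that serves a single public image might look like the following sketch; the service name, image, and port mapping are examples, not requirements:

```
services:
  web:
    image: public.ecr.aws/docker/library/nginx:latest
    ports:
      - "80:80"
```

By default, Elastic Beanstalk routes inbound traffic to port 80 on the host, so the example maps the container's port 80 to host port 80.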

## Managing images without Docker Compose in Elastic Beanstalk
<a name="single-container-docker-configuration.no-compose"></a>

If you're not using Docker Compose to manage your Docker images, you'll need to configure a `Dockerfile`, a `Dockerrun.aws.json` file, or both.
+ Create a `Dockerfile` to have Elastic Beanstalk build and run a custom image locally.
+ Create a `Dockerrun.aws.json v1` file to deploy a Docker image from a hosted repository to Elastic Beanstalk.
+ You may need to create a `.zip` file. If you use *only one* of the two files, either the `Dockerfile` or the `Dockerrun.aws.json`, you don't need to create a `.zip` file. If you use both files, you do need one. The `.zip` file must include both the `Dockerfile` and the `Dockerrun.aws.json`, along with your application files and any application dependencies. If you use the EB CLI to deploy your application, it creates the `.zip` file for you.

### `Dockerrun.aws.json` v1 configuration file
<a name="single-container-docker-configuration.dockerrun"></a>

A `Dockerrun.aws.json` file describes how to deploy a remote Docker image as an Elastic Beanstalk application. This JSON file is specific to Elastic Beanstalk. If your application runs on an image that is available in a hosted repository, you can specify the image in a `Dockerrun.aws.json v1` file and omit the `Dockerfile`.

**`Dockerrun.aws.json` versions**  
 The `AWSEBDockerrunVersion` parameter indicates the version of the `Dockerrun.aws.json` file.  
The Docker AL2 and AL2023 platforms use the following versions of the file.  
`Dockerrun.aws.json v3` — environments that use Docker Compose.
`Dockerrun.aws.json v1` — environments that do not use Docker Compose.
*ECS running on Amazon Linux 2* and *ECS running on AL2023* use the `Dockerrun.aws.json` v2 file. The retired platform *Multi-container Docker running on Amazon Linux AMI (AL1)* also used this same version.



#### Dockerrun.aws.json v1
<a name="single-container-docker-configuration.dockerrun.awsjson"></a>

Valid keys and values for the `Dockerrun.aws.json v1` file include the following operations:

**AWSEBDockerrunVersion**  
(Required) Specify the version number `1` if you're not using Docker Compose to manage your image.

**Authentication**  
(Required only for private repositories) Specifies the Amazon S3 object storing the `.dockercfg` file.  
See [Using images from a private repository](docker-configuration.remote-repo.md#docker-configuration.remote-repo.dockerrun-aws) later in this chapter.

**Image**  
Specifies the Docker base image on an existing Docker repository from which you're building a Docker container. Specify the value of the **Name** key in the format *<organization>/<image name>* for images on Docker Hub, or *<site>/<organization name>/<image name>* for other sites.   
When you specify an image in the `Dockerrun.aws.json` file, each instance in your Elastic Beanstalk environment runs `docker pull` on that image and runs it. Optionally, include the **Update** key. The default value is `true` and instructs Elastic Beanstalk to check the repository, pull any updates to the image, and overwrite any cached images.  
When using a `Dockerfile`, do not specify the **Image** key in the `Dockerrun.aws.json` file. Elastic Beanstalk always builds and uses the image described in the `Dockerfile` when one is present.

**Ports**  
(Required when you specify the **Image** key) Lists the ports to expose on the Docker container. Elastic Beanstalk uses the **ContainerPort** value to connect the Docker container to the reverse proxy running on the host.  
You can specify multiple container ports, but Elastic Beanstalk uses only the first port. It uses this port to connect your container to the host's reverse proxy and route requests from the public internet. If you're using a `Dockerfile`, the first **ContainerPort** value should match the first entry in the `Dockerfile`'s **EXPOSE** list.   
Optionally, you can specify a list of ports in **HostPort**. **HostPort** entries specify the host ports that **ContainerPort** values are mapped to. If you don't specify a **HostPort** value, it defaults to the **ContainerPort** value.   

```
{
  "Image": {
    "Name": "image-name"
  },
  "Ports": [
    {
      "ContainerPort": 8080,
      "HostPort": 8000
    }
  ]
}
```

**Volumes**  
Map volumes from an EC2 instance to your Docker container. Specify one or more arrays of volumes to map.  

```
{
  "Volumes": [
    {
      "HostDirectory": "/path/inside/host",
      "ContainerDirectory": "/path/inside/container"
    }
  ]
}
```

**Logging**  
Specify the directory inside the container to which your application writes logs. Elastic Beanstalk uploads any logs in this directory to Amazon S3 when you request tail or bundle logs. If you rotate logs to a folder named `rotated` within this directory, you can also configure Elastic Beanstalk to upload rotated logs to Amazon S3 for permanent storage. For more information, see [Viewing logs from Amazon EC2 instances in your Elastic Beanstalk environment](using-features.logging.md).

**Command**  
Specify a command to run in the container. If you specify an **Entrypoint**, then **Command** is added as an argument to **Entrypoint**. For more information, see [CMD](https://docs.docker.com/engine/reference/run/#cmd-default-command-or-options) in the Docker documentation.

**Entrypoint**  
Specify a default command to run when the container starts. For more information, see [ENTRYPOINT](https://docs.docker.com/engine/reference/run/#cmd-default-command-or-options) in the Docker documentation.

The following snippet is an example that illustrates the syntax of the `Dockerrun.aws.json` file for a single container.

```
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "janedoe/image",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "1234"
    }
  ],
  "Volumes": [
    {
      "HostDirectory": "/var/app/mydb",
      "ContainerDirectory": "/etc/mysql"
    }
  ],
  "Logging": "/var/log/nginx",
  "Entrypoint": "/app/bin/myapp",
  "Command": "--argument"
}
```

You can provide Elastic Beanstalk with only the `Dockerrun.aws.json` file, or with a `.zip` archive containing both the `Dockerrun.aws.json` and `Dockerfile` files. When you provide both files, the `Dockerfile` describes the Docker image and the `Dockerrun.aws.json` file provides additional information for deployment, as described later in this section.

**Note**  
The two files must be at the root, or top level, of the `.zip` archive. Don't build the archive from the parent directory that contains your project files. Instead, navigate into the directory that contains the files and build the archive there so that the files sit at the archive root.  
When you provide both files, don't specify an image in the `Dockerrun.aws.json` file. Elastic Beanstalk builds and uses the image described in the `Dockerfile` and ignores the image specified in the `Dockerrun.aws.json` file.
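
The steps above can be sketched as follows; the file and directory names are examples only:

```
# Build the archive from inside the project directory so that both
# files end up at the root of the zip, not under a subdirectory.
mkdir -p my-docker-app
touch my-docker-app/Dockerfile my-docker-app/Dockerrun.aws.json
cd my-docker-app
zip -r ../my-docker-app.zip .
cd ..
```

Listing the archive with `unzip -l my-docker-app.zip` should show `Dockerfile` at the top level, with no directory prefix.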

## Building custom images with a Dockerfile
<a name="single-container-docker-configuration.dockerfile"></a>

You need to create a `Dockerfile` if you don't already have an existing image hosted in a repository.

The following snippet is an example of the `Dockerfile`. If you follow the instructions in [QuickStart for Docker](docker-quickstart.md), you can upload this `Dockerfile` as written. Elastic Beanstalk runs the game 2048 when you use this `Dockerfile`.

For more information about instructions you can include in the `Dockerfile`, see [Dockerfile reference](https://docs.docker.com/engine/reference/builder) on the Docker website.

```
FROM ubuntu:12.04

RUN apt-get update
RUN apt-get install -y nginx zip curl

RUN echo "daemon off;" >> /etc/nginx/nginx.conf
RUN curl -o /usr/share/nginx/www/master.zip -L https://codeload.github.com/gabrielecirulli/2048/zip/master
RUN cd /usr/share/nginx/www/ && unzip master.zip && mv 2048-master/* . && rm -rf 2048-master master.zip

EXPOSE 80

CMD ["/usr/sbin/nginx", "-c", "/etc/nginx/nginx.conf"]
```

**Note**  
You can run multi-stage builds from a single Dockerfile to produce smaller-sized images with a significant reduction in complexity. For more information, see [Use multi-stage builds](https://docs.docker.com/develop/develop-images/multistage-build/) on the Docker documentation website.

# Using the ECS managed Docker platform branch in Elastic Beanstalk
<a name="create_deploy_docker_ecs"></a>

This topic provides an overview of the Elastic Beanstalk ECS managed Docker platform branches for Amazon Linux 2 and Amazon Linux 2023. It also provides configuration information that's specific to the Docker ECS managed platform. 

**Migration from Multi-container Docker on AL1**  
On [July 18, 2022](https://docs.aws.amazon.com/elasticbeanstalk/latest/relnotes/release-2022-07-18-linux-al1-retire.html), Elastic Beanstalk set the status of all platform branches based on Amazon Linux AMI (AL1) to **retired**. Although this chapter provides configuration information for this retired platform, we strongly recommend that you migrate to the latest supported platform branch. If you're presently using the retired *Multi-container Docker running on AL1* platform branch, you can migrate to the latest *ECS Running on AL2023* platform branch. The latest platform branch supports all of the features from the discontinued platform branch. No changes to the source code are required. For more information, see [Migrating your Elastic Beanstalk application from ECS managed Multi-container Docker on AL1 to ECS on Amazon Linux 2023](migrate-to-ec2-AL2-platform.md).

## ECS managed Docker platform overview
<a name="create_deploy_docker_ecs_platform"></a>

Elastic Beanstalk uses Amazon Elastic Container Service (Amazon ECS) to coordinate container deployments to ECS managed Docker environments. Amazon ECS provides tools to manage a cluster of instances running Docker containers. Elastic Beanstalk handles Amazon ECS tasks such as cluster creation, task definition, and task execution. Each instance in the environment runs the same set of containers, which are defined in a `Dockerrun.aws.json` v2 file. To help you get the most out of Docker, Elastic Beanstalk lets you create an environment where your Amazon EC2 instances run multiple Docker containers side by side.

The following diagram shows an example Elastic Beanstalk environment configured with three Docker containers running on each Amazon EC2 instance in an Auto Scaling group:

![\[Elastic Beanstalk environment with load balancer, auto scaling group, and containerized tasks.\]](http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/images/aeb-multicontainer-docker-example.png)


## Amazon ECS resources created by Elastic Beanstalk
<a name="create_deploy_docker_ecs_resources"></a>

When you create an environment using the ECS managed Docker platform, Elastic Beanstalk automatically creates and configures several Amazon Elastic Container Service resources while building the environment. In doing so, it creates the necessary containers on each Amazon EC2 instance. 
+ **Amazon ECS Cluster** – Container instances in Amazon ECS are organized into clusters. When used with Elastic Beanstalk, one cluster is always created for each ECS managed Docker environment. An ECS cluster also contains Auto Scaling group capacity providers and other resources.
+ **Amazon ECS Task Definition** – Elastic Beanstalk uses the `Dockerrun.aws.json` v2 in your project to generate the Amazon ECS task definition that is used to configure container instances in the environment. 
+ **Amazon ECS Task** – Elastic Beanstalk communicates with Amazon ECS to run a task on every instance in the environment to coordinate container deployment. In a scalable environment, Elastic Beanstalk initiates a new task whenever an instance is added to the cluster. 
+ **Amazon ECS Container Agent** – The agent runs in a Docker container on the instances in your environment. The agent polls the Amazon ECS service and waits for a task to run. 
+ **Amazon ECS Data Volumes** – In addition to the volumes that you define in the `Dockerrun.aws.json` v2, Elastic Beanstalk inserts volume definitions into the task definition to facilitate log collection. 

  Elastic Beanstalk creates log volumes on the container instance, one for each container, at `/var/log/containers/containername`. These volumes are named `awseb-logs-containername` and are provided for containers to mount. See [Container definition format](create_deploy_docker_v2config.md#create_deploy_docker_v2config_dockerrun_format) for details on how to mount them.

For more information about Amazon ECS resources, see the [Amazon Elastic Container Service Developer Guide](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html).

## `Dockerrun.aws.json` v2 file
<a name="create_deploy_docker_ecs_dockerrun"></a>

The container instances require a configuration file named `Dockerrun.aws.json`. *Container instances* refers to Amazon EC2 instances running ECS managed Docker in an Elastic Beanstalk environment. This file is specific to Elastic Beanstalk and can be used alone or combined with source code and content in a [source bundle](applications-sourcebundle.md) to create an environment on a Docker platform. 

**Note**  
The Version 2 format of the `Dockerrun.aws.json` adds support for multiple containers per Amazon EC2 instance and can only be used with an ECS managed Docker platform. The format differs significantly from the other configuration file versions that support the Docker platform branches that aren't managed by ECS.

See [`Dockerrun.aws.json` v2](create_deploy_docker_v2config.md#create_deploy_docker_v2config_dockerrun) for details on the updated format and an example file.

## Docker images
<a name="create_deploy_docker_ecs_images"></a>

 The ECS managed Docker platform for Elastic Beanstalk requires images to be prebuilt and stored in a public or private online image repository before creating an Elastic Beanstalk environment.

**Note**  
Building custom images during deployment with a `Dockerfile` is not supported by the ECS managed Docker platform on Elastic Beanstalk. Build your images and deploy them to an online repository before creating an Elastic Beanstalk environment.

Specify images by name in `Dockerrun.aws.json` v2.

To configure Elastic Beanstalk to authenticate to a private repository, include the `authentication` parameter in your `Dockerrun.aws.json` v2 file. 
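
As a sketch, the `authentication` parameter points to an Amazon S3 object that contains your `.dockercfg` data; the bucket and key names below are placeholders:

```
"authentication": {
  "bucket": "my-dockercfg-bucket",
  "key": "mydockercfg"
}
```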

## Failed container deployments
<a name="create_deploy_docker_ecs_rollback"></a>

 If an Amazon ECS task fails, one or more containers in your Elastic Beanstalk environment will not start. Elastic Beanstalk does not roll back multi-container environments due to a failed Amazon ECS task. If a container fails to start in your environment, redeploy the current version or a previous working version from the Elastic Beanstalk console. 

**To deploy an existing version**

1. Open the Elastic Beanstalk console in your environment's region.

1. Click **Actions** to the right of your application name and then click **View application versions**.

1. Select a version of your application and click **Deploy**.

## Extending ECS based Docker platforms for Elastic Beanstalk
<a name="create_deploy_docker_ecs_extending_linux"></a>

Elastic Beanstalk offers extensibility features that enable you to apply your own commands, scripts, software, and configurations to your application deployments. The deployment workflow for the *ECS AL2 and AL2023* platform branches varies slightly from the other Linux based platforms. For more information, see [Instance deployment workflow for ECS on AL2 and later](platforms-linux-extend.workflow.ecs-al2.md).

# ECS managed Docker configuration for Elastic Beanstalk
<a name="create_deploy_docker_ecs_config"></a>

This chapter explains how to configure your ECS managed Docker environment. The following list summarizes the configuration items that this chapter explains.
+  `Dockerrun.aws.json` v2 – This configuration file specifies your image repository and the name of your Docker images, among other components. 
+ EC2 Instance profile role – If you have a custom instance profile, we explain how to configure it so that the permissions required for ECS to manage your containers stay current.
+ Elastic Load Balancing listeners – You'll need to configure multiple Elastic Load Balancing listeners if you need your environment to support inbound traffic for proxies or other services that don't run on the default HTTP port.

**Topics**
+ [Configuring the Dockerrun.aws.json v2 file](create_deploy_docker_v2config.md)
+ [Container managed policy and EC2 instance role](create_deploy_docker_ecs_role.md)
+ [Using multiple Elastic Load Balancing listeners](create_deploy_docker_ecs_listeners.md)

# Configuring the Dockerrun.aws.json v2 file
<a name="create_deploy_docker_v2config"></a>

`Dockerrun.aws.json v2` is an Elastic Beanstalk configuration file that describes how to deploy a set of Docker containers hosted in an ECS cluster in an Elastic Beanstalk environment. The Elastic Beanstalk platform creates an ECS *task definition*, which includes an ECS *container definition*. These definitions are described in the `Dockerrun.aws.json` configuration file.

The container definition in the `Dockerrun.aws.json` file describes the containers to deploy to each Amazon EC2 instance in the ECS cluster. In this case an Amazon EC2 instance is also referred to as a host *container instance*, because it hosts the Docker containers. The configuration file also describes the data volumes to create on the host container instance for the Docker containers to mount. For more information and a diagram of the components in an ECS managed Docker environment on Elastic Beanstalk, see the [ECS managed Docker platform overview](create_deploy_docker_ecs.md#create_deploy_docker_ecs_platform) earlier in this chapter.

 A `Dockerrun.aws.json` file can be used on its own or zipped up with additional source code in a single archive. Source code that is archived with a `Dockerrun.aws.json` is deployed to Amazon EC2 container instances and accessible in the `/var/app/current/` directory.

**Topics**
+ [`Dockerrun.aws.json` v2](#create_deploy_docker_v2config_dockerrun)
+ [Volume format](#create_deploy_docker_v2config_volume_format)
+ [Execution Role ARN format](#create_deploy_docker_v2config_executionRoleArn_format)
+ [Container definition format](#create_deploy_docker_v2config_dockerrun_format)
+ [Authentication format – using images from a private repository](#docker-multicontainer-dockerrun-privaterepo)
+ [Example Dockerrun.aws.json v2](#create_deploy_docker_v2config_example)

## `Dockerrun.aws.json` v2
<a name="create_deploy_docker_v2config_dockerrun"></a>

The `Dockerrun.aws.json` file includes the following sections:

**AWSEBDockerrunVersion**  
Specifies the version number as the value `2` for ECS managed Docker environments.

**executionRoleArn**  
Specifies the ARN of a task execution IAM role associated with your account. For your application to use Elastic Beanstalk [environment variables stored as secrets](AWSHowTo.secrets.env-vars.md), you'll need to specify the ARN of a task execution role that grants the required permissions. Other common use cases may also require this parameter. For more information, see [Execution Role ARN format](#create_deploy_docker_v2config_executionRoleArn_format).

**volumes**  
Creates volumes from folders in the Amazon EC2 container instance, or from your source bundle (deployed to `/var/app/current`). Mount these volumes to paths within your Docker containers using `mountPoints` in the `containerDefinitions` section.

**containerDefinitions**  
An array of container definitions.

**authentication (optional)**  
The location in Amazon S3 of a `.dockercfg` file that contains authentication data for a private repository.

The *containerDefinitions* and *volumes* sections of `Dockerrun.aws.json` use the same formatting as the corresponding sections of an Amazon ECS task definition file. For more information about the task definition format and a full list of task definition parameters, see [Amazon ECS task definitions](http://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definitions.html) in the *Amazon Elastic Container Service Developer Guide*.

## Volume format
<a name="create_deploy_docker_v2config_volume_format"></a>

The *volumes* parameter creates volumes from either folders in the Amazon EC2 container instance, or from your source bundle (deployed to `/var/app/current`).

 Volumes are specified in the following format: 

```
"volumes": [
    {
      "name": "volumename",
      "host": {
        "sourcePath": "/path/on/host/instance"
      }
    }
  ],
```

Mount these volumes to paths within your Docker containers using `mountPoints` in the container definition.

Elastic Beanstalk configures additional volumes for logs, one for each container. These should be mounted by your Docker containers in order to write logs to the host instance. 
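
For example, a container definition for a container named `nginx-proxy` (an illustrative name) could mount its generated log volume as follows:

```
"mountPoints": [
  {
    "sourceVolume": "awseb-logs-nginx-proxy",
    "containerPath": "/var/log/nginx"
  }
]
```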

For more details, see the `mountPoints` field in the *Container definition format* section that follows.

## Execution Role ARN format
<a name="create_deploy_docker_v2config_executionRoleArn_format"></a>

In order for your application to use Elastic Beanstalk [environment variables stored as secrets](AWSHowTo.secrets.env-vars.md), you'll need to specify a task execution IAM role. The role must grant the Amazon ECS container permission to make AWS API calls on your behalf using AWS Secrets Manager secrets or AWS Systems Manager Parameter Store parameters to reference sensitive data. For instructions to create a task execution IAM role with the required permissions for your account, see [Amazon ECS task execution IAM role ](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_execution_IAM_role.html) in the Amazon Elastic Container Service Developer Guide.

```
{
  "AWSEBDockerrunVersion": 2,
  "executionRoleArn": "arn:aws:iam::111122223333:role/ecsTaskExecutionRole",
```

### Additional permissions required for the Amazon ECS managed Docker platform
<a name="create_deploy_docker_v2config_executionRoleArn_format_passRole"></a>

**EC2 instance profile grants `iam:PassRole` to ECS**  
In order for your EC2 instance profile to be able to grant this role to the ECS container, you must include the `iam:PassRole` permission, as demonstrated in the following example. The `iam:PassRole` permission allows the EC2 instances *to pass* the task execution role to the ECS container.

In this example, we limit the EC2 instances to passing the role only to the ECS service. Although this condition isn't required, we add it to follow the best practice of reducing the scope of shared permissions. We accomplish this with the `Condition` element.

**Note**  
Any usage of the ECS IAM task execution role requires the `iam:PassRole` permission. There are other common use cases that require the ECS task execution managed service role. For more information, see [Amazon ECS task execution IAM role](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_execution_IAM_role.html) in the *Amazon Elastic Container Service Developer Guide*.



**Example policy with `iam:PassRole` permission**

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": [
                "arn:aws:iam::123456789012:role/ecs-task-execution-role"
            ],
            "Condition": {
                "StringLike": {
                    "iam:PassedToService": "ecs-tasks.amazonaws.com"
                }
            }
        }
    ]
}
```

**Granting secrets and parameters access to the Amazon ECS container agent**  
The Amazon ECS task execution IAM role also needs permissions to access the secrets and parameter stores. Similar to the requirements of the EC2 instance profile role, the ECS container agent requires permission to pull the necessary Secrets Manager or Systems Manager resources. For more information, see [Secrets Manager or Systems Manager permissions](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_execution_IAM_role.html#task-execution-secrets) in the *Amazon Elastic Container Service Developer Guide*.

**Granting secrets and parameters access to the Elastic Beanstalk EC2 instances**  
To support secrets configured as environment variables, you'll also need to add permissions to your EC2 instance profile. For more information, see [Fetching secrets and parameters to Elastic Beanstalk environment variables](AWSHowTo.secrets.env-vars.md) and [Required IAM permissions for Secrets Manager](AWSHowTo.secrets.IAM-permissions.md#AWSHowTo.secrets.IAM-permissions.secrets-manager).

The following examples combine the previous `iam:PassRole` example with the examples provided in the referenced [Required IAM permissions for Secrets Manager](AWSHowTo.secrets.IAM-permissions.md#AWSHowTo.secrets.IAM-permissions.secrets-manager). They add the permissions that the EC2 instances require to access the AWS Secrets Manager and AWS Systems Manager stores to retrieve the secrets and parameter data to initialize the Elastic Beanstalk environment variables that have been configured for secrets.

**Example Secrets Manager policy combined with `iam:PassRole` permission**

```
{
    "Version": "2012-10-17",
    "Statement": [
       {
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": [
                "arn:aws:iam::123456789012:role/ecs-task-execution-role"
            ],
            "Condition": {
                "StringLike": {
                    "iam:PassedToService": "ecs-tasks.amazonaws.com"
                }
            } 
        },
        {
            "Effect": "Allow",
            "Action": [
                "secretsmanager:GetSecretValue",
                "kms:Decrypt"          
            ],
            "Resource": [
                "arn:aws:secretsmanager:us-east-1:111122223333:secret:my-secret",
                "arn:aws:kms:us-east-1:111122223333:key/my-key"
            ]
        }
    ]
}
```

**Example Systems Manager policy combined with `iam:PassRole` permission**

```
{
    "Version": "2012-10-17",
    "Statement": [
       {
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": [
                "arn:aws:iam::123456789012:role/ecs-task-execution-role"
            ],
            "Condition": {
                "StringLike": {
                    "iam:PassedToService": "ecs-tasks.amazonaws.com"
                }
            } 
        },
        {
            "Effect": "Allow",
            "Action": [
                "ssm:GetParameter",
                "kms:Decrypt"
            ],
            "Resource": [
                "arn:aws:ssm:us-east-1:111122223333:parameter/my-parameter",
                "arn:aws:kms:us-east-1:111122223333:key/my-key"
            ]
        }
    ]
}
```

## Container definition format
<a name="create_deploy_docker_v2config_dockerrun_format"></a>

The following examples show a subset of parameters that are commonly used in the *containerDefinitions* section. More optional parameters are available.

The Elastic Beanstalk platform creates an ECS *task definition*, which includes an ECS *container definition*. Elastic Beanstalk supports a subset of the parameters for the ECS container definition. For more information, see [Container definitions](http://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html#container_definitions) in the *Amazon Elastic Container Service Developer Guide*.

A `Dockerrun.aws.json` file contains an array of one or more container definition objects with the following fields:

**name**  
The name of the container. See [Standard Container Definition Parameters](http://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html#standard_container_definition_params) for information about the maximum length and allowed characters.

**image**  
The name of a Docker image in an online Docker repository from which you're building a Docker container. Note these conventions:  
+ Images in official repositories on Docker Hub use a single name (for example, `ubuntu` or `mongo`).
+ Images in other repositories on Docker Hub are qualified with an organization name (for example, `amazon/amazon-ecs-agent`).
+ Images in other online repositories are qualified further by a domain name (for example, `quay.io/assemblyline/ubuntu`).

**environment**  
An array of environment variables to pass to the container.  
For example, the following entry defines an environment variable with the name **Container** and the value **PHP**:  

```
"environment": [
  {
    "name": "Container",
    "value": "PHP"
  }
],
```

**essential**  
True if the task should stop if the container fails. Nonessential containers can finish or crash without affecting the rest of the containers on the instance. 

**memory**  
Amount of memory on the container instance to reserve for the container. Specify a non-zero integer for one or both of the `memory` or `memoryReservation` parameters in container definitions.

**memoryReservation**  
The soft limit (in MiB) of memory to reserve for the container. Specify a non-zero integer for one or both of the `memory` or `memoryReservation` parameters in container definitions.
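For example, the following fragment (the values are illustrative) reserves a soft limit of 64 MiB for a container while capping it at a hard limit of 128 MiB:

```
"memoryReservation": 64,
"memory": 128,
```

With these settings, the container can use between 64 MiB and 128 MiB; exceeding the `memory` hard limit stops the container.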

**mountPoints**  
Volumes from the Amazon EC2 container instance to mount, and the location on the Docker container file system at which to mount them. When you mount volumes that contain application content, your container can read the data you upload in your source bundle. When you mount log volumes for writing log data, Elastic Beanstalk can gather log data from these volumes.   
 Elastic Beanstalk creates log volumes on the container instance, one for each Docker container, at `/var/log/containers/containername`. These volumes are named `awseb-logs-containername` and should be mounted to the location within the container file structure where logs are written.   
For example, the following mount point maps the nginx log location in the container to the Elastic Beanstalk–generated volume for the `nginx-proxy` container.   

```
{
  "sourceVolume": "awseb-logs-nginx-proxy",
  "containerPath": "/var/log/nginx"
}
```

**portMappings**  
Maps network ports on the container to ports on the host.
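For example, the following entry maps port 80 on the host instance to port 80 in the container:

```
"portMappings": [
  {
    "hostPort": 80,
    "containerPort": 80
  }
],
```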

**links**  
List of containers to link to. Linked containers can discover each other and communicate securely. 
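For example, the following entry links the container to another container named `php-app`:

```
"links": [
  "php-app"
],
```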

**volumesFrom**  
Mount all of the volumes from a different container. For example, to mount volumes from a container named `web`:  

```
"volumesFrom": [
  {
    "sourceContainer": "web"
  }
],
```

## Authentication format – using images from a private repository
<a name="docker-multicontainer-dockerrun-privaterepo"></a>

The `authentication` section contains authentication data for a private repository. This entry is optional.

Add the information about the Amazon S3 bucket that contains the authentication file in the `authentication` parameter of the `Dockerrun.aws.json` file. Make sure that the `authentication` parameter contains a valid Amazon S3 bucket and key. The Amazon S3 bucket must be hosted in the same region as the environment that is using it. Elastic Beanstalk will not download files from Amazon S3 buckets hosted in other regions.

The `authentication` entry uses the following format:

```
"authentication": {
    "bucket": "amzn-s3-demo-bucket",
    "key": "mydockercfg"
  },
```

For information about generating and uploading the authentication file, see [Authenticating with image repositories](docker-configuration.remote-repo.md).

## Example Dockerrun.aws.json v2
<a name="create_deploy_docker_v2config_example"></a>

The following snippet is an example that illustrates the syntax of the `Dockerrun.aws.json` file for an instance with two containers.

```
{
  "AWSEBDockerrunVersion": 2,
  "volumes": [
    {
      "name": "php-app",
      "host": {
        "sourcePath": "/var/app/current/php-app"
      }
    },
    {
      "name": "nginx-proxy-conf",
      "host": {
        "sourcePath": "/var/app/current/proxy/conf.d"
      }
    }
  ],
  "containerDefinitions": [
    {
      "name": "php-app",
      "image": "php:fpm",
      "environment": [
        {
          "name": "Container",
          "value": "PHP"
        }
      ],
      "essential": true,
      "memory": 128,
      "mountPoints": [
        {
          "sourceVolume": "php-app",
          "containerPath": "/var/www/html",
          "readOnly": true
        }
      ]
    },
    {
      "name": "nginx-proxy",
      "image": "nginx",
      "essential": true,
      "memory": 128,
      "portMappings": [
        {
          "hostPort": 80,
          "containerPort": 80
        }
      ],
      "links": [
        "php-app"
      ],
      "mountPoints": [
        {
          "sourceVolume": "php-app",
          "containerPath": "/var/www/html",
          "readOnly": true
        },
        {
          "sourceVolume": "nginx-proxy-conf",
          "containerPath": "/etc/nginx/conf.d",
          "readOnly": true
        },
        {
          "sourceVolume": "awseb-logs-nginx-proxy",
          "containerPath": "/var/log/nginx"
        }
      ]
    }
  ]
}
```

# Container managed policy and EC2 instance role
<a name="create_deploy_docker_ecs_role"></a>

When you create an environment in the Elastic Beanstalk console, it prompts you to create a default instance profile that includes the `AWSElasticBeanstalkMulticontainerDocker` managed policy. So initially, your default EC2 instance profile should include this managed policy. If your environment uses a custom EC2 instance profile role instead of the default, make sure that the `AWSElasticBeanstalkMulticontainerDocker` managed policy is attached so that the required permissions for container management stay up to date.

Elastic Beanstalk uses an Amazon ECS-optimized AMI with an Amazon ECS container agent that runs in a Docker container. The agent communicates with Amazon ECS to coordinate container deployments. In order to communicate with Amazon ECS, each Amazon EC2 instance must have the corresponding IAM permissions, which are specified in this managed policy. To view these permissions, see [AWSElasticBeanstalkMulticontainerDocker](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AWSElasticBeanstalkMulticontainerDocker.html) in the *AWS Managed Policy Reference Guide*.

If you use Elastic Beanstalk environment variables that are configured to access secrets or parameters that are stored in AWS Secrets Manager or AWS Systems Manager Parameter Store, you must customize your EC2 instance profile with additional permissions. For more information, see [Execution Role ARN format](create_deploy_docker_v2config.md#create_deploy_docker_v2config_executionRoleArn_format). 



# Using multiple Elastic Load Balancing listeners
<a name="create_deploy_docker_ecs_listeners"></a>

You can configure multiple Elastic Load Balancing listeners on an ECS managed Docker environment to support inbound traffic for proxies or other services that don't run on the default HTTP port.

Create a `.ebextensions` folder in your source bundle and add a file with a `.config` file extension. The following example shows a configuration file that creates an Elastic Load Balancing listener on port 8080.

**`.ebextensions/elb-listener.config`**

```
option_settings:
  aws:elb:listener:8080:
    ListenerProtocol: HTTP
    InstanceProtocol: HTTP
    InstancePort: 8080
```

If your environment is running in a custom [Amazon Virtual Private Cloud](https://docs.aws.amazon.com/vpc/latest/userguide/) (Amazon VPC) that you created, Elastic Beanstalk takes care of the rest. In a default VPC, you need to configure your instances' security group to allow ingress from the load balancer. Add a second configuration file that adds an ingress rule to the security group:

**`.ebextensions/elb-ingress.config`**

```
Resources:
  port8080SecurityGroupIngress:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: {"Fn::GetAtt" : ["AWSEBSecurityGroup", "GroupId"]}
      IpProtocol: tcp
      ToPort: 8080
      FromPort: 8080
      SourceSecurityGroupName: { "Fn::GetAtt": ["AWSEBLoadBalancer", "SourceSecurityGroup.GroupName"] }
```

For more information on the configuration file format, see [Adding and customizing Elastic Beanstalk environment resources](environment-resources.md) and [Option settings](ebextensions-optionsettings.md). 

 In addition to adding a listener to the Elastic Load Balancing configuration and opening a port in the security group, you need to map the port on the host instance to a port on the Docker container in the `containerDefinitions` section of the `Dockerrun.aws.json` v2 file. The following excerpt shows an example: 

```
"portMappings": [
  {
    "hostPort": 8080,
    "containerPort": 8080
  }
]
```

See [`Dockerrun.aws.json` v2](create_deploy_docker_v2config.md#create_deploy_docker_v2config_dockerrun) for details about the `Dockerrun.aws.json` v2 file format. 

# Creating an ECS managed Docker environment with the Elastic Beanstalk console
<a name="create_deploy_docker_ecstutorial"></a>

This tutorial details container configuration and source code preparation for an ECS managed Docker environment that uses two containers. 

The containers, a PHP application and an nginx proxy, run side by side on each of the Amazon Elastic Compute Cloud (Amazon EC2) instances in an Elastic Beanstalk environment. After creating the environment and verifying that the applications are running, you'll connect to a container instance to see how it all fits together.

**Topics**
+ [Define ECS managed Docker containers](#create_deploy_docker_ecstutorial_config)
+ [Add content](#create_deploy_docker_ecstutorial_code)
+ [Deploy to Elastic Beanstalk](#create_deploy_docker_ecstutorial_deploy)
+ [Connect to a container instance](#create_deploy_docker_ecstutorial_connect)
+ [Inspect the Amazon ECS container agent](#create_deploy_docker_ecstutorial_connect_inspect)

## Define ECS managed Docker containers
<a name="create_deploy_docker_ecstutorial_config"></a>

The first step in creating a new Docker environment is to create a directory for your application data. This folder can be located anywhere on your local machine and have any name you choose. In addition to a container configuration file, this folder will contain the content that you will upload to Elastic Beanstalk and deploy to your environment. 

**Note**  
All of the code for this tutorial is available in the awslabs repository on GitHub at [https://github.com/awslabs/eb-docker-nginx-proxy](https://github.com/awslabs/eb-docker-nginx-proxy).

The file that Elastic Beanstalk uses to configure the containers on an Amazon EC2 instance is a JSON-formatted text file named `Dockerrun.aws.json`. The ECS managed Docker platform versions use version 2 of this file format. This format can only be used with the ECS managed Docker platform, because it differs significantly from the configuration file versions that support the Docker platform branches that aren't managed by ECS.

Create a text file named `Dockerrun.aws.json` at the root of your application and add the following text: 

```
{
  "AWSEBDockerrunVersion": 2,
  "volumes": [
    {
      "name": "php-app",
      "host": {
        "sourcePath": "/var/app/current/php-app"
      }
    },
    {
      "name": "nginx-proxy-conf",
      "host": {
        "sourcePath": "/var/app/current/proxy/conf.d"
      }
    }  
  ],
  "containerDefinitions": [
    {
      "name": "php-app",
      "image": "php:fpm",
      "essential": true,
      "memory": 128,
      "mountPoints": [
        {
          "sourceVolume": "php-app",
          "containerPath": "/var/www/html",
          "readOnly": true
        }
      ]
    },
    {
      "name": "nginx-proxy",
      "image": "nginx",
      "essential": true,
      "memory": 128,
      "portMappings": [
        {
          "hostPort": 80,
          "containerPort": 80
        }
      ],
      "links": [
        "php-app"
      ],
      "mountPoints": [
        {
          "sourceVolume": "php-app",
          "containerPath": "/var/www/html",
          "readOnly": true
        },
        {
          "sourceVolume": "nginx-proxy-conf",
          "containerPath": "/etc/nginx/conf.d",
          "readOnly": true
        },
        {
          "sourceVolume": "awseb-logs-nginx-proxy",
          "containerPath": "/var/log/nginx"
        }
      ]
    }
  ]
}
```

This example configuration defines two containers: a PHP web application and an nginx proxy in front of it. These two containers will run side by side in Docker containers on each instance in your Elastic Beanstalk environment, accessing shared content (the content of the website) from volumes on the host instance, which are also defined in this file. The containers themselves are created from images hosted in official repositories on Docker Hub. The resulting environment looks like the following:

![\[Elastic Beanstalk environment with load balancer, auto scaling group, and two instances running Nginx and PHP-FPM.\]](http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/images/aeb-multicontainer-tutorial.png)


The volumes defined in the configuration correspond to the content that you will create next and upload as part of your application source bundle. The containers access content on the host by mounting volumes in the `mountPoints` section of the container definitions. 

For more information on the format of `Dockerrun.aws.json` v2 and its parameters, see [Container definition format](create_deploy_docker_v2config.md#create_deploy_docker_v2config_dockerrun_format). 

## Add content
<a name="create_deploy_docker_ecstutorial_code"></a>

Next you will add some content for your PHP site to display to visitors, and a configuration file for the nginx proxy. 

**php-app/index.php**

```
<h1>Hello World!!!</h1>
<h3>PHP Version <pre><?= phpversion()?></pre></h3>
```

**php-app/static.html**

```
<h1>Hello World!</h1>
<h3>This is a static HTML page.</h3>
```

**proxy/conf.d/default.conf**

```
server {
  listen 80;
  server_name localhost;
  root /var/www/html;
 
  index index.php;
 
  location ~ [^/]\.php(/|$) {
    fastcgi_split_path_info ^(.+?\.php)(/.*)$;
    if (!-f $document_root$fastcgi_script_name) {
      return 404;
    }

    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_param PATH_INFO $fastcgi_path_info;
    fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info;

    fastcgi_pass php-app:9000;
    fastcgi_index index.php;
  }
}
```

## Deploy to Elastic Beanstalk
<a name="create_deploy_docker_ecstutorial_deploy"></a>

Your application folder now contains the following files:

```
├── Dockerrun.aws.json
├── php-app
│   ├── index.php
│   └── static.html
└── proxy
    └── conf.d
        └── default.conf
```

This is all you need to create the Elastic Beanstalk environment. Create a `.zip` archive of the above files and folders (not including the top-level project folder). To create the archive in Windows Explorer, select the contents of the project folder, open the context (right-click) menu, choose **Send To**, and then choose **Compressed (zipped) Folder**.

**Note**  
For information on the required file structure and instructions for creating archives in other environments, see [Create an Elastic Beanstalk application source bundle](applications-sourcebundle.md).

Next, upload the source bundle to Elastic Beanstalk and create your environment. For **Platform**, select **Docker**. For **Platform branch**, select **ECS running on 64bit Amazon Linux 2023**.

**To launch an environment (console)**

1. Open the Elastic Beanstalk console with this preconfigured link: [console.aws.amazon.com/elasticbeanstalk/home#/newApplication?applicationName=tutorials&environmentType=LoadBalanced](https://console.aws.amazon.com/elasticbeanstalk/home#/newApplication?applicationName=tutorials&environmentType=LoadBalanced)

1. For **Platform**, select the platform and platform branch that match the language used by your application, or the Docker platform for container-based applications.

1. For **Application code**, choose **Upload your code**.

1. Choose **Local file**, choose **Choose file**, and then open the source bundle.

1. Choose **Review and launch**.

1. Review the available settings, and then choose **Create app**.

The Elastic Beanstalk console redirects you to the management dashboard for your new environment. This screen shows the health status of the environment and events output by the Elastic Beanstalk service. When the status is Green, click the URL next to the environment name to see your new website. 

## Connect to a container instance
<a name="create_deploy_docker_ecstutorial_connect"></a>

Next you will connect to an Amazon EC2 instance in your Elastic Beanstalk environment to see some of the moving parts in action. 

The easiest way to connect to an instance in your environment is by using the EB CLI. To use it, [install the EB CLI](eb-cli3.md#eb-cli3-install), if you haven't done so already. You'll also need to configure your environment with an Amazon EC2 SSH key pair. Use either the console's [security configuration page](using-features.managing.security.md) or the EB CLI [eb init](eb3-init.md) command to do that. To connect to an environment instance, use the EB CLI [eb ssh](eb3-ssh.md) command.

Now that you're connected to an Amazon EC2 instance hosting your Docker containers, you can see how things are set up. Run `ls` on `/var/app/current`: 

```
[ec2-user@ip-10-0-0-117 ~]$ ls /var/app/current
Dockerrun.aws.json  php-app  proxy
```

This directory contains the files from the source bundle that you uploaded to Elastic Beanstalk during environment creation. Next, run `ls` on `/var/log/containers`: 

```
[ec2-user@ip-10-0-0-117 ~]$ ls /var/log/containers
nginx-proxy    nginx-proxy-4ba868dbb7f3-stdouterr.log     
php-app        php-app-dcc3b3c8522c-stdouterr.log       rotated
```

This is where logs are created on the container instance and collected by Elastic Beanstalk. Elastic Beanstalk creates a volume in this directory for each container, which you mount to the container location where logs are written. 

You can also run `docker ps` to see the containers that are running. 

```
[ec2-user@ip-10-0-0-117 ~]$ sudo docker ps
CONTAINER ID   IMAGE                            COMMAND                  CREATED         STATUS                  PORTS                               NAMES                                                
4ba868dbb7f3   nginx                            "/docker-entrypoint.…"   4 minutes ago   Up 4 minutes            0.0.0.0:80->80/tcp, :::80->80/tcp   ecs-awseb-Tutorials-env-dc2aywfjwg-1-nginx-proxy-acca84ef87c4aca15400        
dcc3b3c8522c   php:fpm                          "docker-php-entrypoi…"   4 minutes ago   Up 4 minutes            9000/tcp                            ecs-awseb-Tutorials-env-dc2aywfjwg-1-php-app-b8d38ae288b7b09e8101                             
d9367c0baad6   amazon/amazon-ecs-agent:latest   "/agent"                 5 minutes ago   Up 5 minutes (healthy)                                      ecs-agent
```

This shows the two running containers that you deployed, as well as the Amazon ECS container agent that coordinated the deployment. 

## Inspect the Amazon ECS container agent
<a name="create_deploy_docker_ecstutorial_connect_inspect"></a>

Amazon EC2 instances in an ECS managed Docker environment on Elastic Beanstalk run an agent process in a Docker container. This agent connects to the Amazon ECS service in order to coordinate container deployments. These deployments run as tasks in Amazon ECS, which are configured in task definition files. Elastic Beanstalk creates these task definition files based on the `Dockerrun.aws.json` that you upload in a source bundle. 

Check the status of the container agent with an HTTP GET request to `http://localhost:51678/v1/metadata`: 

```
[ec2-user@ip-10-0-0-117 ~]$ curl http://localhost:51678/v1/metadata
{
  "Cluster":"awseb-Tutorials-env-dc2aywfjwg",
  "ContainerInstanceArn":"arn:aws:ecs:us-west-2:123456789012:container-instance/awseb-Tutorials-env-dc2aywfjwg/db7be5215cd74658aacfcb292a6b944f",
  "Version":"Amazon ECS Agent - v1.57.1 (089b7b64)"
}
```

This structure shows the name of the Amazon ECS cluster and the ARN ([Amazon Resource Name](https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html)) of the container instance (the Amazon EC2 instance that you're connected to). 

For more information, make an HTTP GET request to `http://localhost:51678/v1/tasks`:

```
[ec2-user@ip-10-0-0-117 ~]$ curl http://localhost:51678/v1/tasks
{
   "Tasks":[
      {
         "Arn":"arn:aws:ecs:us-west-2:123456789012:task/awseb-Tutorials-env-dc2aywfjwg/bbde7ebe1d4e4537ab1336340150a6d6",
         "DesiredStatus":"RUNNING",
         "KnownStatus":"RUNNING",
         "Family":"awseb-Tutorials-env-dc2aywfjwg",
         "Version":"1",
         "Containers":[
            {
               "DockerId":"dcc3b3c8522cb9510b7359689163814c0f1453b36b237204a3fd7a0b445d2ea6",
               "DockerName":"ecs-awseb-Tutorials-env-dc2aywfjwg-1-php-app-b8d38ae288b7b09e8101",
               "Name":"php-app",
               "Volumes":[
                  {
                     "Source":"/var/app/current/php-app",
                     "Destination":"/var/www/html"
                  }
               ]
            },
            {
               "DockerId":"4ba868dbb7f3fb3328b8afeb2cb6cf03e3cb1cdd5b109e470f767d50b2c3e303",
               "DockerName":"ecs-awseb-Tutorials-env-dc2aywfjwg-1-nginx-proxy-acca84ef87c4aca15400",
               "Name":"nginx-proxy",
               "Ports":[
                  {
                     "ContainerPort":80,
                     "Protocol":"tcp",
                     "HostPort":80
                  },
                  {
                     "ContainerPort":80,
                     "Protocol":"tcp",
                     "HostPort":80
                  }
               ],
               "Volumes":[
                  {
                     "Source":"/var/app/current/php-app",
                     "Destination":"/var/www/html"
                  },
                  {
                     "Source":"/var/log/containers/nginx-proxy",
                     "Destination":"/var/log/nginx"
                  },
                  {
                     "Source":"/var/app/current/proxy/conf.d",
                     "Destination":"/etc/nginx/conf.d"
                  }
               ]
            }
         ]
      }
   ]
}
```

This structure describes the task that is run to deploy the two Docker containers from this tutorial's example project. The following information is displayed: 
+ **KnownStatus** – The `RUNNING` status indicates that the containers are still active.
+ **Family** – The name of the task definition that Elastic Beanstalk created from `Dockerrun.aws.json`.
+ **Version** – The version of the task definition. This is incremented each time the task definition file is updated.
+ **Containers** – Information about the containers running on the instance.

Even more information is available from the Amazon ECS service itself, which you can call using the AWS Command Line Interface. For instructions on using the AWS CLI with Amazon ECS, and information about Amazon ECS in general, see the [Amazon ECS Developer Guide](http://docs.aws.amazon.com/AmazonECS/latest/developerguide/ECS_GetStarted.html). 

# Migrating your Elastic Beanstalk application from ECS managed Multi-container Docker on AL1 to ECS on Amazon Linux 2023
<a name="migrate-to-ec2-AL2-platform"></a>

**Note**  
On [July 18, 2022](https://docs.aws.amazon.com/elasticbeanstalk/latest/relnotes/release-2022-07-18-linux-al1-retire.html), Elastic Beanstalk set the status of all platform branches based on Amazon Linux AMI (AL1) to **retired**.

This topic guides you in the migration of your applications from the retired platform branch *Multi-container Docker running on 64bit Amazon Linux* to *ECS Running on 64bit AL2023*. This target platform branch is current and supported. Like the previous *Multi-container Docker AL1* branch, the newer *ECS AL2023* platform branch uses Amazon ECS to coordinate deployment of multiple Docker containers to an Amazon ECS cluster in an Elastic Beanstalk environment. The new *ECS AL2023* platform branch supports all of the features in the previous *Multi-container Docker AL1* platform branch. Also, the same `Dockerrun.aws.json` v2 file is supported.

**Topics**
+ [Migrate with the Elastic Beanstalk console](#migrate-to-ec2-AL2-platform-steps-console)
+ [Migrate with the AWS CLI](#migrate-to-ec2-AL2-platform-steps-cli)

## Migrate with the Elastic Beanstalk console
<a name="migrate-to-ec2-AL2-platform-steps-console"></a>

To migrate using the Elastic Beanstalk console, deploy the same source code to a new environment that's based on the *ECS Running on AL2023* platform branch. No changes to the source code are required. 

**To migrate to the *ECS Running on Amazon Linux 2023* platform branch**

1. Using the application source that's already deployed to the old environment, create an application source bundle. You can use the same application source bundle and the same `Dockerrun.aws.json` v2 file.

1. Create a new environment using the *ECS Running on Amazon Linux 2023* platform branch. Use the source bundle from the prior step for **Application code**. For more detailed steps, see [Deploy to Elastic Beanstalk](create_deploy_docker_ecstutorial.md#create_deploy_docker_ecstutorial_deploy) in the *ECS managed Docker tutorial* earlier in this chapter.

## Migrate with the AWS CLI
<a name="migrate-to-ec2-AL2-platform-steps-cli"></a>

You also have the option to use the AWS Command Line Interface (AWS CLI) to migrate your existing *Multi-container Docker* (Amazon Linux AMI) environment to the newer *ECS AL2023* platform branch. In this case you don't need to create a new environment or redeploy your source code. You only need to run the AWS CLI [update-environment](https://docs.aws.amazon.com/cli/latest/reference/elasticbeanstalk/update-environment.html) command, which performs a platform update to migrate your existing environment to the *ECS Amazon Linux 2023* platform branch.

Use the following syntax to migrate your environment to the new platform branch.

```
aws elasticbeanstalk update-environment \
--environment-name my-env \
--solution-stack-name "64bit Amazon Linux 2023 version running ECS" \
--region my-region
```

The following is an example of the command to migrate environment *beta-101* to *version 4.0.0* of the *ECS Amazon Linux 2023* platform branch in the *us-east-1* Region. 

```
aws elasticbeanstalk update-environment \
--environment-name beta-101 \
--solution-stack-name "64bit Amazon Linux 2023 v4.0.0 running ECS" \
--region us-east-1
```

The `solution-stack-name` parameter specifies the platform branch and its version. To use the most recent platform branch version, specify the corresponding solution stack name. The version of each platform branch is included in its solution stack name, as shown in the preceding example. For a list of the most current solution stacks for the Docker platform, see [Supported platforms](https://docs.aws.amazon.com/elasticbeanstalk/latest/platforms/platforms-supported.html#platforms-supported.docker) in the *AWS Elastic Beanstalk Platforms* guide.

**Note**  
 The [list-available-solution-stacks](https://docs.aws.amazon.com/cli/latest/reference/elasticbeanstalk/list-available-solution-stacks.html) command provides a list of the platform versions available for your account in an AWS Region.  

```
aws elasticbeanstalk list-available-solution-stacks --region us-east-1 --query SolutionStacks
```

To learn more about the AWS CLI, see the [AWS Command Line Interface User Guide](https://docs.aws.amazon.com//cli/latest/userguide/cli-chap-welcome.html). For more information about AWS CLI commands for Elastic Beanstalk, see the [AWS CLI Command Reference for Elastic Beanstalk](https://docs.aws.amazon.com//cli/latest/reference/elasticbeanstalk/index.html).
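If you script the migration, you might first filter the output of **list-available-solution-stacks** for ECS platform branches. The following is a minimal sketch of that filtering step, assuming you have already retrieved the list of stack names (the sample names below are illustrative, not a live API response):

```python
# Minimal sketch: select ECS solution stacks from a list of stack names,
# as returned by the list-available-solution-stacks command.
def ecs_stacks(solution_stacks):
    return [s for s in solution_stacks if "running ECS" in s]

stacks = [
    "64bit Amazon Linux 2023 v4.0.0 running ECS",
    "64bit Amazon Linux 2023 v4.0.0 running Docker",
]
print(ecs_stacks(stacks))  # only the ECS branch remains
```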

# Authenticating with image repositories
<a name="docker-configuration.remote-repo"></a>

This topic describes how to authenticate to online image repositories with Elastic Beanstalk. For private repositories, Elastic Beanstalk must authenticate before it can pull and deploy your images. For Amazon ECR Public, authentication is optional but provides higher rate limits and improved reliability.

## Using images from an Amazon ECR repository
<a name="docker-images-ecr"></a>

You can store your custom Docker images in AWS with [Amazon Elastic Container Registry](https://aws.amazon.com/ecr) (Amazon ECR). 

When you store your Docker images in Amazon ECR, Elastic Beanstalk automatically authenticates to the Amazon ECR registry with your environment's [instance profile](concepts-roles-instance.md). Therefore, you need to grant your instances permission to access the images in your Amazon ECR repository. To do so, add permissions to your environment's instance profile by attaching the [AmazonEC2ContainerRegistryReadOnly](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonEC2ContainerRegistryReadOnly.html) managed policy. This provides read-only access to all the Amazon ECR repositories in your account. You also have the option to restrict access to a single repository by using the following template to create a custom policy:

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Sid": "AllowEbAuth",
            "Effect": "Allow",
            "Action": [
                "ecr:GetAuthorizationToken"
            ],
            "Resource": [
                "*"
            ]
        },
        {
            "Sid": "AllowPull",
            "Effect": "Allow",
            "Resource": [
                "arn:aws:ecr:us-east-2:111122223333:repository/repository-name"
            ],
            "Action": [
                "ecr:GetAuthorizationToken",
                "ecr:BatchCheckLayerAvailability",
                "ecr:GetDownloadUrlForLayer",
                "ecr:GetRepositoryPolicy",
                "ecr:DescribeRepositories",
                "ecr:ListImages",
                "ecr:BatchGetImage"
            ]
        }
    ]
}
```

------

Replace the Amazon Resource Name (ARN) in the above policy with the ARN of your repository.

You'll need to specify the image information in your `Dockerrun.aws.json` file. The configuration will be different depending on which platform you use.

For the [ECS managed Docker platform](create_deploy_docker_v2config.md), use the `image` key in a container definition object:

```
"containerDefinitions": [
        {
        "name": "my-image",
        "image": "account-id.dkr.ecr.us-east-2.amazonaws.com/repository-name:latest",
```

For the [Docker platform](single-container-docker-configuration.md), refer to the image by URL. The URL goes in the `Image` definition of your `Dockerrun.aws.json` file:

```
  "Image": {
      "Name": "account-id.dkr.ecr.us-east-2.amazonaws.com/repository-name:latest",
      "Update": "true"
    },
```
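The registry portion of these image names follows a predictable pattern. As an illustration, the URI can be assembled from its parts (the helper below is hypothetical, not part of any AWS SDK):

```python
# Hypothetical helper: assemble an Amazon ECR image URI from its parts.
def ecr_image_uri(account_id, region, repository, tag="latest"):
    return f"{account_id}.dkr.ecr.{region}.amazonaws.com/{repository}:{tag}"

print(ecr_image_uri("111122223333", "us-east-2", "repository-name"))
# 111122223333.dkr.ecr.us-east-2.amazonaws.com/repository-name:latest
```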

## Using AWS Secrets Manager
<a name="docker-configuration.remote-repo.secrets"></a>

Configure Elastic Beanstalk to authenticate with your private repository before deployment to enable access to your container images.

This approach uses the *prebuild* phase of the Elastic Beanstalk deployment process with two components:
+ [ebextensions](ebextensions.md) to define environment variables that store repository credentials
+ [platform hook scripts](platforms-linux-extend.hooks.md) to execute **docker login** before pulling images

The hook scripts retrieve a username and password from environment variables that are populated from a single AWS Secrets Manager secret in JSON format. This feature requires Elastic Beanstalk Docker and ECS managed Docker platforms released on or after [January 13, 2026](https://docs.aws.amazon.com/elasticbeanstalk/latest/relnotes/release-2026-01-13-al2023.html). For more details, see [environment secrets](AWSHowTo.secrets.env-vars.md).

**To configure Elastic Beanstalk to authenticate to your private repository with AWS Secrets Manager**
**Note**  
Before proceeding, ensure you have set up your credentials in AWS Secrets Manager and configured the necessary IAM permissions. See [Prerequisites to configure secrets as environment variables](AWSHowTo.secrets.env-vars.md#AWSHowTo.secrets.configure-env-vars.prerequisites) for details. 

1. Create the following directory structure for your project:

   ```
   ├── .ebextensions
   │   └── env.config
   ├── .platform
   │   ├── confighooks
   │   │   └── prebuild
   │   │       └── 01login.sh
   │   └── hooks
   │       └── prebuild
   │           └── 01login.sh
   ├── Dockerfile
   ```

1. Use [AWS Secrets Manager](https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html) to save the credentials of your private repository as a JSON-formatted secret.

   ```
   aws secretsmanager create-secret --name repo-credentials \
       --secret-string '{"username":"myuser","password":"mypassword"}'
   ```

1. Create the following `env.config` file and place it in the `.ebextensions` directory as shown in the preceding directory structure. This configuration uses the [aws:elasticbeanstalk:application:environmentsecrets](command-options-general.md#command-options-general-elasticbeanstalk-application-environmentsecrets) namespace with [JSON key extraction](AWSHowTo.secrets.env-vars.md#AWSHowTo.secrets.json) to initialize the `USER` and `PASSWD` Elastic Beanstalk environment variables from individual fields in the secret.

   ```
   option_settings:
     aws:elasticbeanstalk:application:environmentsecrets:
       USER: arn:aws:secretsmanager:us-east-1:111122223333:secret:repo-credentials-AbCd12:username
       PASSWD: arn:aws:secretsmanager:us-east-1:111122223333:secret:repo-credentials-AbCd12:password
   ```

1. Create the following `01login.sh` script file and place it in the following locations (also shown in the preceding directory structure):
   + `.platform/confighooks/prebuild/01login.sh`
   + `.platform/hooks/prebuild/01login.sh`

   ```
   #!/bin/bash
   echo $PASSWD | docker login -u $USER --password-stdin
   ```

   The `01login.sh` script uses the environment variables configured in **Step 3** and passes the password to **docker login** via `stdin`. For more information about Docker authentication, see [docker login](https://docs.docker.com/engine/reference/commandline/login/) in the Docker documentation.
**Notes**  
The ECS managed Docker platform uses the native ECS syntax for referencing secrets. For more information, see [Pass Secrets Manager secrets through Amazon ECS environment variables](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/secrets-envvar-secrets-manager.html) in the *Amazon Elastic Container Service Developer Guide*.
For more information about platform hooks, see [Platform hooks](platforms-linux-extend.hooks.md) in *Extending Elastic Beanstalk Linux platforms*.

Once authentication is configured, Elastic Beanstalk can pull and deploy images from your private repository.
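To see how the JSON key extraction in step 3 resolves, the following sketch mirrors it locally. The secret string is the sample from step 2; in a real environment, Elastic Beanstalk performs this extraction for you, not your application code:

```python
import json

# The sample secret string saved in step 2
secret_string = '{"username":"myuser","password":"mypassword"}'

# JSON key extraction pulls individual fields from the secret, as the
# :username and :password ARN suffixes do for the USER and PASSWD variables
secret = json.loads(secret_string)
user, passwd = secret["username"], secret["password"]
print(user, passwd)
```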

## Using the `Dockerrun.aws.json` file
<a name="docker-configuration.remote-repo.dockerrun-aws"></a>

This section describes another approach to authenticate Elastic Beanstalk to a private repository. With this approach, you generate an authentication file with the Docker command, and then upload the authentication file to an Amazon S3 bucket. You must also include the bucket information in your `Dockerrun.aws.json` file.

**To generate and provide an authentication file to Elastic Beanstalk**

1. Generate an authentication file with the **docker login** command. For repositories on Docker Hub, run **docker login**:

   ```
   $ docker login
   ```

   For other registries, include the URL of the registry server:

   ```
   $ docker login registry-server-url
   ```
**Note**  
If your Elastic Beanstalk environment uses the Amazon Linux AMI Docker platform version (precedes Amazon Linux 2), read the relevant information in [Docker configuration on Amazon Linux AMI (preceding Amazon Linux 2)](create_deploy_docker.container.console.md#docker-alami).

   For more information about the authentication file, see [ Store images on Docker Hub ](https://docs.docker.com/docker-hub/repos/) and [ docker login ](https://docs.docker.com/engine/reference/commandline/login/) on the Docker website.

1. Upload a copy of the authentication file that is named `.dockercfg` to a secure Amazon S3 bucket.
   + The Amazon S3 bucket must be hosted in the same AWS Region as the environment that is using it. Elastic Beanstalk cannot download files from an Amazon S3 bucket hosted in other Regions.
   + Grant permissions for the `s3:GetObject` operation to the IAM role in the instance profile. For more information, see [Managing Elastic Beanstalk instance profiles](iam-instanceprofile.md).

1. Include the Amazon S3 bucket information in the `Authentication` parameter in your `Dockerrun.aws.json` file.

   The following example shows the use of an authentication file named `mydockercfg` in a bucket named `amzn-s3-demo-bucket` to use a private image in a third-party registry. For the correct version number for `AWSEBDockerrunVersion`, see the note that follows the example.

   ```
   {
     "AWSEBDockerrunVersion": "version-no",
     "Authentication": {
       "Bucket": "amzn-s3-demo-bucket",
       "Key": "mydockercfg"
     },
     "Image": {
       "Name": "quay.io/johndoe/private-image",
       "Update": "true"
     },
     "Ports": [
       {
         "ContainerPort": "1234"
       }
     ],
     "Volumes": [
       {
         "HostDirectory": "/var/app/mydb",
         "ContainerDirectory": "/etc/mysql"
       }
     ],
     "Logging": "/var/log/nginx"
   }
   ```
**`Dockerrun.aws.json` versions**  
 The `AWSEBDockerrunVersion` parameter indicates the version of the `Dockerrun.aws.json` file.  
The Docker AL2 and AL2023 platforms use the following versions of the file.  
`Dockerrun.aws.json v3` — environments that use Docker Compose.
`Dockerrun.aws.json v1` — environments that do not use Docker Compose.
The *ECS running on Amazon Linux 2* and *ECS running on AL2023* platform branches use the `Dockerrun.aws.json v2` file. The retired platform branch *Multi-container Docker running on Amazon Linux AMI (AL1)* also used this version.

After Elastic Beanstalk can authenticate with the online registry that hosts the private repository, your images can be deployed and pulled.
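The version mapping in the preceding note can be summarized in a small sketch (the helper is hypothetical, for illustration only):

```python
# Hypothetical helper summarizing the Dockerrun.aws.json version rules:
# ECS branches use v2; Docker AL2/AL2023 branches use v3 with Docker
# Compose and v1 without it.
def dockerrun_version(platform_branch, uses_compose=False):
    if platform_branch.startswith("ECS"):
        return "2"
    return "3" if uses_compose else "1"

print(dockerrun_version("ECS running on AL2023"))                      # 2
print(dockerrun_version("Docker running AL2023", uses_compose=True))   # 3
```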

## Using images from Amazon ECR Public
<a name="docker-images-ecr-public"></a>

Amazon ECR Public is a public container registry that hosts Docker images. While Amazon ECR Public repositories are publicly accessible, authenticating provides higher rate limits and better reliability for your deployments.

**Note**  
Amazon ECR Public authentication is not supported in China regions (`cn-*`) and AWS GovCloud regions (`us-gov-*`). In these regions, Elastic Beanstalk will use unauthenticated pulls.

To enable Amazon ECR Public authentication, add the following permissions to your environment's [instance profile](concepts-roles-instance.md). For more information about Amazon ECR Public authentication, see [Registry authentication in Amazon ECR public](https://docs.aws.amazon.com/AmazonECR/latest/public/public-registries.html) in the *Amazon Elastic Container Registry Public User Guide*:

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
       {
          "Effect": "Allow",
          "Action": [
             "ecr-public:GetAuthorizationToken",
             "sts:GetServiceBearerToken"
          ],
          "Resource": "*"
       }
    ]
}
```

------

Once these permissions are attached to your instance profile, Elastic Beanstalk will automatically authenticate with Amazon ECR Public registries. You can reference Amazon ECR Public images using the standard `public.ecr.aws/registry-alias/repository-name:tag` format in your `Dockerrun.aws.json` file or Dockerfile.

# Configuring Elastic Beanstalk Docker environments
<a name="create_deploy_docker.container.console"></a>

This chapter explains additional configuration information for all of the supported Docker platform branches, including the ECS managed Docker platform branch. Unless a section identifies a specific platform branch or platform branch component, the information applies to all environments that are running supported Docker and ECS managed Docker platforms. 

**Note**  
If your Elastic Beanstalk environment uses an Amazon Linux AMI Docker platform version (preceding Amazon Linux 2), be sure to read the additional information in [Docker configuration on Amazon Linux AMI (preceding Amazon Linux 2)](#docker-alami).

**Topics**
+ [Configuring software in Docker environments](#docker-software-config)
+ [Referencing environment variables in containers](#docker-env-cfg.env-variables)
+ [Using interpolate feature for environment variables with Docker Compose](#docker-env-cfg.env-variables-dc-interpolate)
+ [Generating logs for enhanced health reporting with Docker Compose](#docker-env-cfg.healthd-logging)
+ [Docker container customized logging with Docker Compose](#docker-env-cfg.dc-customized-logging)
+ [Docker images](#docker-images)
+ [Configuring managed updates for Docker environments](#docker-managed-updates)
+ [Docker configuration namespaces](#docker-namespaces)
+ [Docker configuration on Amazon Linux AMI (preceding Amazon Linux 2)](#docker-alami)

## Configuring software in Docker environments
<a name="docker-software-config"></a>

You can use the Elastic Beanstalk console to configure the software running on your environment's instances.

**To configure your Docker environment in the Elastic Beanstalk console**

1. Open the [Elastic Beanstalk console](https://console.aws.amazon.com/elasticbeanstalk), and in the **Regions** list, select your AWS Region.

1. In the navigation pane, choose **Environments**, and then choose the name of your environment from the list.

1. In the navigation pane, choose **Configuration**.

1. In the **Updates, monitoring, and logging** configuration category, choose **Edit**.

1. Make the necessary configuration changes.

1. To save the changes, choose **Apply** at the bottom of the page.

For information about configuring software settings in any environment, see [Environment variables and other software settings](environments-cfg-softwaresettings.md). The following sections cover Docker specific information.

### Container options
<a name="docker-software-config.container"></a>

The **Container options** section has platform-specific options. For Docker environments, it lets you choose whether or not your environment includes the NGINX proxy server.

**Environments with Docker Compose**  
If you manage your Docker environment with Docker Compose, Elastic Beanstalk assumes that you run a proxy server as a container. Therefore it defaults to **None** for the **Proxy server** setting, and Elastic Beanstalk does not provide an NGINX configuration.

**Note**  
Even if you select **NGINX** as a proxy server, this setting is ignored in an environment with Docker Compose. The **Proxy server** setting still defaults to **None**. 

Since the NGINX web server proxy is disabled for the Docker on Amazon Linux 2 platform with Docker Compose, you must follow the instructions for generating logs for enhanced health reporting. For more information, see [Generating logs for enhanced health reporting with Docker Compose](#docker-env-cfg.healthd-logging).

### Environment properties (environment variables)
<a name="docker-software-config.env"></a>

You can use environment properties (also known as environment variables) to pass values, such as endpoints, debug settings, and other information, to your application. The **Environment variables** section of the console lets you specify environment variables on the EC2 instances that are running your application. Environment variables are passed in as key-value pairs to the application.

Your application code running in a container can refer to an environment variable by name and read its value. The source code that reads these environment variables will vary by programming language. You can find instructions for reading environment variable values in the programming languages that Elastic Beanstalk managed platforms support in the respective platform topic. For a list of links to these topics, see [Environment variables and other software settings](environments-cfg-softwaresettings.md).

**Secrets and parameters in Elastic Beanstalk environment variables**  
Elastic Beanstalk offers the ability to reference AWS Secrets Manager and AWS Systems Manager Parameter Store data in environment variables. This is a secure option for your application to natively access secrets and parameters stored by these services without having to manage API calls to them. Your Elastic Beanstalk Docker and ECS managed Docker platforms must be a version released on or after [March 26, 2025](https://docs.aws.amazon.com/elasticbeanstalk/latest/relnotes/release-2025-03-26-windows.html) to support this feature. For more information about using environment variables to reference secrets, see [Fetching secrets and parameters to Elastic Beanstalk environment variables](AWSHowTo.secrets.env-vars.md).

**Environments with Docker Compose**  
If you manage your Docker environment with Docker Compose, you must do some additional configuration to retrieve the environment variables in the containers. In order for the executables running in your container to access these environment variables, you must reference them in the `docker-compose.yml` file. For more information, see [Referencing environment variables in containers](#docker-env-cfg.env-variables). 

## Referencing environment variables in containers
<a name="docker-env-cfg.env-variables"></a>

If you are using the Docker Compose tool on the Amazon Linux 2 Docker platform, Elastic Beanstalk generates a Docker Compose environment file called `.env` in the root directory of your application project. This file stores the environment variables you configured for Elastic Beanstalk.

**Note**  
 If you include a `.env` file in your application bundle, Elastic Beanstalk will not generate an `.env` file. 

In order for a container to reference the environment variables you define in Elastic Beanstalk, you must follow one or both of these configuration approaches.
+ Add the `.env` file generated by Elastic Beanstalk to the `env_file` configuration option in the `docker-compose.yml` file.
+ Directly define the environment variables in the `docker-compose.yml` file.

The following files provide an example. The sample `docker-compose.yml` file demonstrates both approaches. 
+ If you define environment properties `DEBUG_LEVEL=1` and `LOG_LEVEL=error`, Elastic Beanstalk generates the following `.env` file for you:

  ```
  DEBUG_LEVEL=1
  LOG_LEVEL=error
  ```
+ In this `docker-compose.yml` file, the `env_file` configuration option points to the `.env` file, and it also defines the environment variable `DEBUG=1` directly in the `docker-compose.yml` file.

  ```
  services:
    web:
      build: .
      environment:
        - DEBUG=1
      env_file:
        - .env
  ```

**Notes**  
If you set the same environment variable in both files, the variable defined in the `docker-compose.yml` file has higher precedence than the variable defined in the `.env` file.
Be careful not to leave spaces between the equal sign (=) and the value assigned to your variable, so that extra spaces aren't added to the string.

To learn more about environment variables in Docker Compose, see [Environment variables in Compose](https://docs.docker.com/compose/environment-variables/) on the Docker documentation website. 
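The precedence rule in the notes above can be sketched as a simple merge (illustrative only; Docker Compose performs this resolution itself):

```python
# Sketch of the precedence rule: variables defined in docker-compose.yml
# override variables loaded from the generated .env file.
def resolve_env(env_file_vars, compose_vars):
    resolved = dict(env_file_vars)
    resolved.update(compose_vars)  # compose-defined values win
    return resolved

env_file = {"DEBUG_LEVEL": "1", "LOG_LEVEL": "error", "DEBUG": "0"}
compose = {"DEBUG": "1"}
print(resolve_env(env_file, compose)["DEBUG"])  # 1
```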

## Using interpolate feature for environment variables with Docker Compose
<a name="docker-env-cfg.env-variables-dc-interpolate"></a>

Starting with the [July 28, 2023](https://docs.aws.amazon.com/elasticbeanstalk/latest/relnotes/release-2023-07-28-al2.html) platform release, the *Docker Amazon Linux 2* platform branch offers the Docker Compose *interpolation* feature. With this feature, values in a Compose file can be set by variables and interpolated at runtime. For more information about this feature, see [Interpolation](https://docs.docker.com/compose/compose-file/12-interpolation/) on the Docker documentation website.

**Important**  
If you'd like to use this feature with your applications, be aware that you'll need to implement an approach that uses platform hooks.  
This is necessary due to a mitigation that we implemented in the platform engine. This mitigation ensures backward compatibility for customers who aren't aware of the new interpolation feature and have existing applications that use environment variables with the `$` character. The updated platform engine escapes interpolation by default by replacing the `$` character with `$$`.

The following is an example of a platform hook script that you can set up to allow use of the interpolation feature.

```
#!/bin/bash

: '
example data format in .env file
key1=value1
key2=value2
'
envfile="/var/app/staging/.env"
tempfile=$(mktemp)

while IFS= read -r line; do
  # split each env var at the first '='; values may contain '=' or spaces
  key="${line%%=*}"
  value="${line#*=}"
  if [ "${key}" != "${line}" ]; then
    # replace '$$' with '$' in the value
    line="${key}=${value//\$\$/\$}"
  fi
  # append the updated env var to the tempfile
  echo "${line}" >> "${tempfile}"
done < "${envfile}"
# replace the original .env file with the tempfile
mv "${tempfile}" "${envfile}"
```

Place the platform hooks under both of these directories:
+ `.platform/confighooks/predeploy/`
+ `.platform/hooks/predeploy/`

For more information, see [Platform hooks](platforms-linux-extend.hooks.md) in the *Extending Linux platforms* topic of this guide.
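The escaping that the hook reverses can be sketched in a few lines (the values below are hypothetical; the real script operates on the `.env` file under `/var/app/staging`):

```python
# Sketch: the platform engine writes '$' as '$$' into the .env file;
# the predeploy hook restores '$' so Compose interpolation works.
escaped_line = "PASSWORD=pa$$word"   # hypothetical value, as escaped by the engine
key, _, value = escaped_line.partition("=")
restored_line = f"{key}={value.replace('$$', '$')}"
print(restored_line)  # PASSWORD=pa$word
```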

## Generating logs for enhanced health reporting with Docker Compose
<a name="docker-env-cfg.healthd-logging"></a>

 The [Elastic Beanstalk health agent](health-enhanced.md#health-enhanced-agent) provides operating system and application health metrics for Elastic Beanstalk environments. It relies on web server log formats that relay information in a specific format.

Elastic Beanstalk assumes that you run a web server proxy as a container. As a result the NGINX web server proxy is disabled for Docker environments running Docker Compose. You must configure your server to write logs in the location and format that the Elastic Beanstalk health agent uses. Doing so allows you to make full use of enhanced health reporting, even if the web server proxy is disabled.

For instructions on how to do this, see [Web server log configuration](health-enhanced-serverlogs.md#health-enhanced-serverlogs.configure).

## Docker container customized logging with Docker Compose
<a name="docker-env-cfg.dc-customized-logging"></a>

In order to efficiently troubleshoot issues and monitor your containerized services, you can [request instance logs](using-features.logging.md) from Elastic Beanstalk through the environment management console or the EB CLI. Instance logs consist of bundle logs and tail logs, combined and packaged so that you can view logs and recent events in an efficient and straightforward manner.

 Elastic Beanstalk creates log directories on the container instance, one for each service defined in the `docker-compose.yml` file, at `/var/log/eb-docker/containers/<service name>`. If you are using the Docker Compose feature on the Amazon Linux 2 Docker platform, you can mount these directories to the location within the container file structure where logs are written. When you mount log directories for writing log data, Elastic Beanstalk can gather log data from these directories.

**To configure your service's logs files to be retrievable tail files and bundle logs**

1. Edit the `docker-compose.yml` file.

1. Under the `volumes` key for your service, add a bind mount in the following format:

    `"${EB_LOG_BASE_DIR}/<service name>:<log directory inside container>"` 

   In the sample `docker-compose.yml` file below:
   +  `nginx-proxy` is *<service name>* 
   +  `/var/log/nginx` is *<log directory inside container>* 

   ```
   services:
     nginx-proxy:
       image: "nginx"
       volumes:
         - "${EB_LOG_BASE_DIR}/nginx-proxy:/var/log/nginx"
   ```


+  The `/var/log/nginx` directory contains the logs for the *nginx-proxy* service in the container. It's mapped to the `/var/log/eb-docker/containers/nginx-proxy` directory on the host. 
+  All of the logs in this directory are now retrievable as bundle and tail logs through Elastic Beanstalk's [request instance logs](using-features.logging.md) functionality. 



**Notes**  
`EB_LOG_BASE_DIR` is an environment variable set by Elastic Beanstalk with the value `/var/log/eb-docker/containers`.
Elastic Beanstalk automatically creates the `/var/log/eb-docker/containers/<service name>` directory for each service in the `docker-compose.yml` file.

## Docker images
<a name="docker-images"></a>

The Docker and ECS managed Docker platform branches for Elastic Beanstalk support the use of Docker images stored in a public or private online image repository.

Specify images by name in `Dockerrun.aws.json`. Note these conventions:
+ Images in official repositories on Docker Hub use a single name (for example, `ubuntu` or `mongo`).
+ Images in other repositories on Docker Hub are qualified with an organization name (for example, `amazon/amazon-ecs-agent`).
+ Images in other online repositories are qualified further by a domain name (for example, `quay.io/assemblyline/ubuntu` or `account-id.dkr.ecr.us-east-2.amazonaws.com/ubuntu:trusty`). 
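A hedged sketch of these naming conventions follows. The helper is hypothetical and simplified; real Docker image-reference parsing has additional rules:

```python
# Hypothetical, simplified classifier for the image-name conventions above.
def registry_for(image):
    if "/" not in image:
        return "docker.io"   # official repository, e.g. "ubuntu"
    first = image.split("/", 1)[0]
    if "." in first or ":" in first:
        return first         # explicit registry, e.g. "quay.io/..."
    return "docker.io"       # organization-qualified Docker Hub image

print(registry_for("ubuntu"))                       # docker.io
print(registry_for("quay.io/assemblyline/ubuntu"))  # quay.io
```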

For environments using the Docker platform only, you can also build your own image during environment creation with a Dockerfile. See [Building custom images with a Dockerfile](single-container-docker-configuration.md#single-container-docker-configuration.dockerfile) for details. The ECS managed Docker platform doesn't support this functionality.

## Configuring managed updates for Docker environments
<a name="docker-managed-updates"></a>

With [managed platform updates](environment-platform-update-managed.md), you can configure your environment to automatically update to the latest version of a platform on a schedule.

In the case of Docker environments, you might want to decide whether an automatic platform update should happen across Docker versions—when the new platform version includes a new Docker version. Elastic Beanstalk supports managed platform updates across Docker versions when updating from an environment running a Docker platform version newer than 2.9.0. When a new platform version includes a new version of Docker, Elastic Beanstalk increments the minor update version number. Therefore, to allow managed platform updates across Docker versions, enable managed platform updates for both minor and patch version updates. To prevent managed platform updates across Docker versions, enable managed platform updates to apply patch version updates only.

For example, the following [configuration file](ebextensions.md) enables managed platform updates at 9:00 AM UTC each Tuesday for both minor and patch version updates, thereby allowing for managed updates across Docker versions:

**Example .ebextensions/managed-platform-update.config**  

```
option_settings:
  aws:elasticbeanstalk:managedactions:
    ManagedActionsEnabled: true
    PreferredStartTime: "Tue:09:00"
  aws:elasticbeanstalk:managedactions:platformupdate:
    UpdateLevel: minor
```

For environments running Docker platform versions 2.9.0 or earlier, Elastic Beanstalk never performs managed platform updates if the new platform version includes a new Docker version.
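To make the minor-versus-patch distinction concrete, here is a small sketch that classifies a version bump (it assumes x.y.z version strings; it is not an Elastic Beanstalk API):

```python
# Sketch: classify a platform version bump. A minor bump may include a new
# Docker version; a patch-only bump does not (platforms newer than 2.9.0).
def bump_level(old, new):
    o, n = old.split("."), new.split(".")
    if o[0] != n[0]:
        return "major"
    if o[1] != n[1]:
        return "minor"
    return "patch"

print(bump_level("3.0.1", "3.1.0"))  # minor
print(bump_level("3.0.1", "3.0.2"))  # patch
```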

## Docker configuration namespaces
<a name="docker-namespaces"></a>

You can use a [configuration file](ebextensions.md) to set configuration options and perform other instance configuration tasks during deployments. Configuration options can be [platform specific](command-options-specific.md) or apply to [all platforms](command-options-general.md) in the Elastic Beanstalk service as a whole. Configuration options are organized into *namespaces*.

**Note**  
 This information applies only to Docker environments that are not running Docker Compose. The proxy option behaves differently in environments that run Docker Compose. For more information about proxy servers with Docker Compose, see [Container options](#docker-software-config.container). 

The Docker platform supports options in the following namespaces, in addition to the [options supported for all Elastic Beanstalk environments](command-options-general.md):
+ `aws:elasticbeanstalk:environment:proxy` – Choose the proxy server for your environment. The Docker platform supports either running NGINX or no proxy server.

The following example configuration file configures a Docker environment to run no proxy server.

**Example .ebextensions/docker-settings.config**  

```
option_settings:
  aws:elasticbeanstalk:environment:proxy:
    ProxyServer: none
```

## Docker configuration on Amazon Linux AMI (preceding Amazon Linux 2)
<a name="docker-alami"></a>

If your Elastic Beanstalk Docker environment uses an Amazon Linux AMI platform version (preceding Amazon Linux 2), read the additional information in this section.

### Using an authentication file for a private repository
<a name="docker-alami.images-private"></a>

This information is relevant to you if you [use images from a private repository](docker-configuration.remote-repo.md). Beginning with Docker version 1.7, the **docker login** command changed the name and format of the authentication file. Amazon Linux AMI Docker platform versions (preceding Amazon Linux 2) require the older `~/.dockercfg` format configuration file.

With Docker version 1.7 and later, the **docker login** command creates the authentication file in `~/.docker/config.json` in the following format.

```
{
    "auths":{
      "server":{
        "auth":"key"
      }
    }
  }
```

With Docker version 1.6.2 and earlier, the **docker login** command creates the authentication file in `~/.dockercfg` in the following format.

```
{
    "server" :
    {
      "auth" : "auth_token",
      "email" : "email"
    }
  }
```

To convert a `config.json` file, remove the outer `auths` key, add an `email` key, and flatten the JSON document to match the old format.
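
The conversion can be scripted. The following Python sketch, which is not part of the Elastic Beanstalk tooling, rewrites a `config.json` document into the older `~/.dockercfg` shape. The `email` value is a placeholder, because the newer format doesn't store an email address.

```python
import json

def convert_to_dockercfg(config_json):
    """Flatten a Docker >= 1.7 config.json document into the older
    ~/.dockercfg format: drop the outer "auths" key and add an "email" key."""
    dockercfg = {}
    for server, entry in config_json["auths"].items():
        dockercfg[server] = {
            "auth": entry["auth"],
            "email": "none",  # placeholder; the newer format has no email field
        }
    return dockercfg

new_style = {"auths": {"server": {"auth": "key"}}}
print(json.dumps(convert_to_dockercfg(new_style), indent=2))
```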

On Amazon Linux 2 Docker platform versions, Elastic Beanstalk uses the newer authentication file name and format. If you're using an Amazon Linux 2 Docker platform version, you can use the authentication file that the **docker login** command creates without any conversion.

### Configuring additional storage volumes
<a name="docker-alami.volumes"></a>

For improved performance on Amazon Linux AMI, Elastic Beanstalk configures two Amazon EBS storage volumes for your Docker environment's Amazon EC2 instances. In addition to the root volume provisioned for all Elastic Beanstalk environments, a second 12 GB volume named `xvdcz` is provisioned for image storage on Docker environments.

If you need more storage space or increased IOPS for Docker images, you can customize the image storage volume by using the `BlockDeviceMapping` configuration option in the [aws:autoscaling:launchconfiguration](command-options-general.md#command-options-general-autoscalinglaunchconfiguration) namespace.

For example, the following [configuration file](ebextensions.md) increases the storage volume's size to 100 GB with 500 provisioned IOPS:

**Example .ebextensions/blockdevice-xvdcz.config**  

```
option_settings:
  aws:autoscaling:launchconfiguration:
    BlockDeviceMappings: /dev/xvdcz=:100::io1:500
```

If you use the `BlockDeviceMappings` option to configure additional volumes for your application, you should include a mapping for `xvdcz` to ensure that it is created. The following example configures two volumes, the image storage volume `xvdcz` with default settings and an additional 24 GB application volume named `sdh`:

**Example .ebextensions/blockdevice-sdh.config**  

```
option_settings:
  aws:autoscaling:launchconfiguration:
    BlockDeviceMappings: /dev/xvdcz=:12:true:gp2,/dev/sdh=:24
```

**Note**  
When you change settings in this namespace, Elastic Beanstalk replaces all instances in your environment with instances running the new configuration. See [Configuration changes](environments-updating.md) for details.

# Legacy platforms
<a name="create_deploy_dockerpreconfig-legacy"></a>

This chapter lists content related to previous Docker platforms that are no longer supported by AWS Elastic Beanstalk. The topics listed here remain in this document as a reference for any customers that used these features or components prior to their retirement.

**Topics**
+ [Migrating to Elastic Beanstalk Docker running on Amazon Linux 2 from Multi-container Docker running on Amazon Linux](docker-multicontainer-migration.md)
+ [Preconfigured Docker GlassFish containers on Elastic Beanstalk](create_deploy_dockerpreconfig.md)

# Migrating to Elastic Beanstalk Docker running on Amazon Linux 2 from Multi-container Docker running on Amazon Linux
<a name="docker-multicontainer-migration"></a>

Prior to the release of the *ECS Running on 64bit Amazon Linux 2* platform branch, Elastic Beanstalk offered an alternate migration path to Amazon Linux 2 for customers with environments based on the *Multi-container Docker running on 64bit Amazon Linux* platform branch. This topic describes that path, and remains in this document as a reference for customers who completed it.

We now recommend that customers with environments based on the *Multi-container Docker running on 64bit Amazon Linux* platform branch migrate to the *ECS Running on 64bit Amazon Linux 2* platform branch. Unlike the alternate migration path, this approach continues to use Amazon ECS to coordinate container deployments to ECS managed Docker environments. This aspect allows a more straightforward approach. No changes to the source code are required, and the same `Dockerrun.aws.json` v2 is supported. For more information, see [Migrating your Elastic Beanstalk application from ECS managed Multi-container Docker on AL1 to ECS on Amazon Linux 2023](migrate-to-ec2-AL2-platform.md). 

## Legacy Migration from Multi-container Docker on Amazon Linux to the Docker Amazon Linux 2 platform branch
<a name="docker-multicontainer-migration-to-docker-al2"></a>

You can migrate your applications running on the [Multi-container Docker platform on Amazon Linux AMI](create_deploy_docker_ecs.md) to the Amazon Linux 2 Docker platform. The Multi-container Docker platform on Amazon Linux AMI requires that you specify prebuilt application images to run as containers. After migrating, you will no longer have this limitation, because the Amazon Linux 2 Docker platform also allows Elastic Beanstalk to build your container images during deployment. Your applications will continue to run in multi-container environments with the added benefits from the Docker Compose tool.

Docker Compose is a tool for defining and running multi-container Docker applications. To learn more about Docker Compose and how to install it, see [Overview of Docker Compose](https://docs.docker.com/compose/) and [Install Docker Compose](https://docs.docker.com/compose/install/) on the Docker documentation website.

### The `docker-compose.yml` file
<a name="docker-multicontainer-migration.files"></a>

The Docker Compose tool uses the `docker-compose.yml` file to configure your application's services. This file replaces the `Dockerrun.aws.json v2` file in your application project directory and application source bundle. You create the `docker-compose.yml` file manually; your existing `Dockerrun.aws.json v2` file is a helpful reference for most of the parameter values.

Below is an example of a `docker-compose.yml` file and the corresponding `Dockerrun.aws.json v2` file for the same application. For more information on the `docker-compose.yml` file, see [Compose file reference](https://docs.docker.com/compose/compose-file/). For more information on the `Dockerrun.aws.json v2` file, see [`Dockerrun.aws.json` v2](create_deploy_docker_v2config.md#create_deploy_docker_v2config_dockerrun). 


| **`docker-compose.yml`** | **`Dockerrun.aws.json v2`** | 
| --- | --- | 
|  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/docker-multicontainer-migration.html)  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/docker-multicontainer-migration.html)  | 
|  <pre>version: '2.4'<br />services:<br />  php-app:<br />    image: "php:fpm"<br />    volumes:<br />      - "./php-app:/var/www/html:ro"<br />      - "${EB_LOG_BASE_DIR}/php-app:/var/log/sample-app"<br />    mem_limit: 128m<br />    environment:<br />      Container: PHP<br />  nginx-proxy:<br />    image: "nginx"<br />    ports:<br />      - "80:80"<br />    volumes:<br />      - "./php-app:/var/www/html:ro"<br />      - "./proxy/conf.d:/etc/nginx/conf.d:ro"<br />      - "${EB_LOG_BASE_DIR}/nginx-proxy:/var/log/nginx"<br />    mem_limit: 128m<br />    links:<br />      - php-app</pre>  | 
|  <pre>{<br />  "AWSEBDockerrunVersion": 2,<br />  "volumes": [<br />    {<br />      "name": "php-app",<br />      "host": {<br />        "sourcePath": "/var/app/current/php-app"<br />      }<br />    },<br />    {<br />      "name": "nginx-proxy-conf",<br />      "host": {<br />        "sourcePath": "/var/app/current/proxy/conf.d"<br />      }<br />    }<br />  ],<br />  "containerDefinitions": [<br />    {<br />      "name": "php-app",<br />      "image": "php:fpm",<br />      "environment": [<br />        {<br />          "name": "Container",<br />          "value": "PHP"<br />        }<br />      ],<br />      "essential": true,<br />      "memory": 128,<br />      "mountPoints": [<br />        {<br />          "sourceVolume": "php-app",<br />          "containerPath": "/var/www/html",<br />          "readOnly": true<br />        }<br />      ]<br />    },<br />    {<br />      "name": "nginx-proxy",<br />      "image": "nginx",<br />      "essential": true,<br />      "memory": 128,<br />      "portMappings": [<br />        {<br />          "hostPort": 80,<br />          "containerPort": 80<br />        }<br />      ],<br />      "links": [<br />        "php-app"<br />      ],<br />      "mountPoints": [<br />        {<br />          "sourceVolume": "php-app",<br />          "containerPath": "/var/www/html",<br />          "readOnly": true<br />        },<br />        {<br />          "sourceVolume": "nginx-proxy-conf",<br />          "containerPath": "/etc/nginx/conf.d",<br />          "readOnly": true<br />        },<br />        {<br />          "sourceVolume": "awseb-logs-nginx-proxy",<br />          "containerPath": "/var/log/nginx"<br />        }<br />      ]<br />    }<br />  ]<br />}<br /> </pre>  | 

### Additional Migration Considerations
<a name="docker-multicontainer-migration.considerations"></a>

The Docker Amazon Linux 2 platform and the Multi-container Docker Amazon Linux AMI platform implement environment properties differently. The two platforms also use different log directories, which Elastic Beanstalk creates for each of their containers. After you migrate from the Amazon Linux AMI Multi-container Docker platform, be aware of how these implementations differ in your new Amazon Linux 2 Docker platform environment.


|  **Area**  |  **Docker platform on Amazon Linux 2 with Docker Compose**  |  **Multi-container Docker platform on Amazon Linux AMI**  | 
| --- | --- | --- | 
|  Environment properties  |  In order for your containers to access environment properties you must add a reference to the `.env` file in the `docker-compose.yml` file. Elastic Beanstalk generates the `.env` file, listing each of the properties as environment variables. For more information see [Referencing environment variables in containers](create_deploy_docker.container.console.md#docker-env-cfg.env-variables).   |  Elastic Beanstalk can directly pass environment properties to the container. Your code running in the container can access these properties as environment variables without any additional configuration.   | 
|  Log directories  |  For each container Elastic Beanstalk creates a log directory called `/var/log/eb-docker/containers/<service name>` (or `${EB_LOG_BASE_DIR}/<service name>`). For more information see [Docker container customized logging with Docker Compose](create_deploy_docker.container.console.md#docker-env-cfg.dc-customized-logging).   |  For each container, Elastic Beanstalk creates a log directory called `/var/log/containers/<containername>`. For more information see `mountPoints` field in [Container definition format](create_deploy_docker_v2config.md#create_deploy_docker_v2config_dockerrun_format).   | 
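
To illustrate the environment properties difference, on the Docker Amazon Linux 2 platform a Docker Compose service definition can reference the generated `.env` file as follows. This is a sketch; the service and image names are illustrative.

```
version: '2.4'
services:
  php-app:
    image: "php:fpm"
    # Elastic Beanstalk generates the .env file from your environment properties
    env_file: .env
```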

### Migration Steps
<a name="docker-multicontainer-migration.procedure"></a>

**To migrate to the Amazon Linux 2 Docker platform**

1. Create the `docker-compose.yml` file for your application, based on its existing `Dockerrun.aws.json v2` file. For more information, see [The `docker-compose.yml` file](#docker-multicontainer-migration.files) earlier in this topic.

1. In your application project folder's root directory, replace the `Dockerrun.aws.json v2` file with the `docker-compose.yml` you just created. 

   Your directory structure should be as follows.

   ```
   ~/myApplication
   |-- docker-compose.yml
   |-- .ebextensions
   |-- php-app
   |-- proxy
   ```

1. Use the **eb init** command to configure your local directory for deployment to Elastic Beanstalk.

   ```
   ~/myApplication$ eb init -p docker application-name
   ```

1. Use the **eb create** command to create an environment and deploy your Docker image.

   ```
   ~/myApplication$ eb create environment-name
   ```

1. If your app is a web application, after your environment launches, use the **eb open** command to view it in a web browser.

   ```
   ~/myApplication$ eb open environment-name
   ```

1. You can display the status of your newly created environment using the **eb status** command.

   ```
   ~/myApplication$ eb status environment-name
   ```

# Preconfigured Docker GlassFish containers on Elastic Beanstalk
<a name="create_deploy_dockerpreconfig"></a>

**Note**  
 On [July 18, 2022](https://docs.aws.amazon.com/elasticbeanstalk/latest/relnotes/release-2022-07-18-linux-al1-retire.html), Elastic Beanstalk set the status of all platform branches based on Amazon Linux AMI (AL1) to **retired**. For more information about migrating to a current and fully supported Amazon Linux 2023 platform branch, see [Migrating your Elastic Beanstalk Linux application to Amazon Linux 2023 or Amazon Linux 2](using-features.migration-al.md).

The Preconfigured Docker GlassFish platform branch that runs on the Amazon Linux AMI (AL1) is no longer supported. To migrate your GlassFish application to a supported Amazon Linux 2023 platform, deploy GlassFish and your application code to an Amazon Linux 2023 Docker image. For more information, see the following topic, [Deploying a GlassFish application to the Docker platform: a migration path to Amazon Linux 2023](#docker-glassfish-tutorial).

## Getting started with preconfigured Docker containers - on Amazon Linux AMI (preceding Amazon Linux 2)
<a name="create_deploy_dockerpreconfig.walkthrough"></a>

This section shows you how to develop an example application locally and then deploy your application to Elastic Beanstalk with a preconfigured Docker container.

### Set up your local development environment
<a name="create_deploy_dockerpreconfig.walkthrough.setup"></a>

For this walk-through we use a GlassFish example application.

**To set up your environment**

1. Create a new folder for the example application.

   ```
   ~$ mkdir eb-preconf-example
   ~$ cd eb-preconf-example
   ```

1. Download the example application code into the new folder.

   ```
   ~$ wget https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/samples/docker-glassfish-v1.zip
   ~$ unzip docker-glassfish-v1.zip
   ~$ rm docker-glassfish-v1.zip
   ```

### Develop and test locally
<a name="create_deploy_dockerpreconfig.walkthrough.dev"></a>

**To develop an example GlassFish application**

1. Add a `Dockerfile` to your application's root folder. In the file, specify the AWS Elastic Beanstalk Docker base image to be used to run your local preconfigured Docker container. You'll later deploy your application to an Elastic Beanstalk Preconfigured Docker GlassFish platform version. Choose the Docker base image that this platform version uses. To find the current Docker image of the platform version, see the [Preconfigured Docker](https://docs.aws.amazon.com/elasticbeanstalk/latest/platforms/platforms-supported.html#platforms-supported.dockerpreconfig) section of the *AWS Elastic Beanstalk Supported Platforms* page in the *AWS Elastic Beanstalk Platforms* guide.  
**Example `~/eb-preconf-example/Dockerfile`**  

   ```
   # For Glassfish 5.0 Java 8
   FROM amazon/aws-eb-glassfish:5.0-al-onbuild-2.11.1
   ```

   For more information about using a `Dockerfile`, see [Preparing your Docker image for deployment to Elastic Beanstalk](single-container-docker-configuration.md).

1. Build the Docker image.

   ```
   ~/eb-preconf-example$ docker build -t my-app-image .
   ```

1. Run the Docker container from the image.
**Note**  
You must include the `-p` flag to map port 8080 on the container to port 3000 on the localhost. Elastic Beanstalk Docker containers always expose the application on port 8080 of the container. The `-it` flags run the image as an interactive process. The `--rm` flag cleans up the container file system when the container exits. You can optionally include the `-d` flag to run the image as a daemon.

   ```
   $ docker run -it --rm -p 3000:8080 my-app-image
   ```

1. To view the example application, type the following URL into your web browser.

   ```
   http://localhost:3000
   ```  
![\[The GlassFish example application showing in a web browser\]](http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/images/dockerpreconfig-webpage.png)

### Deploy to Elastic Beanstalk
<a name="create_deploy_dockerpreconfig.walkthrough.deploy"></a>

After testing your application, you are ready to deploy it to Elastic Beanstalk.

**To deploy your application to Elastic Beanstalk**

1. In your application's root folder, rename the `Dockerfile` to `Dockerfile.local`. This step ensures that Elastic Beanstalk uses the `Dockerfile` that contains the correct instructions for Elastic Beanstalk to build a customized Docker image on each Amazon EC2 instance in your Elastic Beanstalk environment.
**Note**  
You don't need to perform this step if your `Dockerfile` includes instructions that modify the platform version's base Docker image. You don't need a `Dockerfile` at all if it includes only a `FROM` line that specifies the base image to build the container from. In that situation, the `Dockerfile` is redundant.

1. Create an application source bundle.

   ```
   ~/eb-preconf-example$ zip myapp.zip -r *
   ```

1. Open the Elastic Beanstalk console with this preconfigured link: [console.aws.amazon.com/elasticbeanstalk/home#/newApplication?applicationName=tutorials&environmentType=LoadBalanced](https://console.aws.amazon.com/elasticbeanstalk/home#/newApplication?applicationName=tutorials&environmentType=LoadBalanced)

1. For **Platform**, under **Preconfigured – Docker**, choose **Glassfish**.

1. For **Application code**, choose **Upload your code**, and then choose **Upload**.

1. Choose **Local file**, choose **Browse**, and then open the application source bundle you just created.

1. Choose **Upload**.

1. Choose **Review and launch**.

1. Review the available settings, and then choose **Create app**.

1. When the environment is created, you can view the deployed application. Choose the environment URL that is displayed at the top of the console dashboard.

## Deploying a GlassFish application to the Docker platform: a migration path to Amazon Linux 2023
<a name="docker-glassfish-tutorial"></a>

The goal of this tutorial is to provide customers using the Preconfigured Docker GlassFish platform (based on Amazon Linux AMI) with a migration path to Amazon Linux 2023. You can migrate your GlassFish application to Amazon Linux 2023 by deploying GlassFish and your application code to an Amazon Linux 2023 Docker image.

The tutorial walks you through using the AWS Elastic Beanstalk Docker platform to deploy an application based on the [Java EE GlassFish application server](https://www.oracle.com/middleware/technologies/glassfish-server.html) to an Elastic Beanstalk environment. 

We demonstrate two approaches to building a Docker image:
+ **Simple** – Provide your GlassFish application source code and let Elastic Beanstalk build and run a Docker image as part of provisioning your environment. This is easy to set up, at a cost of increased instance provisioning time.
+ **Advanced** – Build a custom Docker image containing your application code and dependencies, and provide it to Elastic Beanstalk to use in your environment. This approach is slightly more involved, and decreases the provisioning time of instances in your environment.

### Prerequisites
<a name="docker-glassfish-tutorial.prereqs"></a>

This tutorial assumes that you have some knowledge of basic Elastic Beanstalk operations, the Elastic Beanstalk command line interface (EB CLI), and Docker. If you haven't already, follow the instructions in [Learn how to get started with Elastic Beanstalk](GettingStarted.md) to launch your first Elastic Beanstalk environment. This tutorial uses the [EB CLI](eb-cli3.md), but you can also create environments and upload applications by using the Elastic Beanstalk console.

To follow this tutorial, you will also need the following Docker components:
+ A working local installation of Docker. For more information, see [Get Docker](https://docs.docker.com/install/) on the Docker documentation website.
+ Access to Docker Hub. You need a Docker ID to access Docker Hub. For more information, see [Share the application](https://docs.docker.com/get-started/04_sharing_app/) on the Docker documentation website.

To learn more about configuring Docker environments on Elastic Beanstalk platforms, see [Preparing your Docker image for deployment to Elastic Beanstalk](single-container-docker-configuration.md) in this same chapter.

### Simple example: provide your application code
<a name="docker-glassfish-tutorial.simple"></a>

This is an easy way to deploy your GlassFish application. You provide your application source code together with the `Dockerfile` included in this tutorial. Elastic Beanstalk builds a Docker image that includes your application and the GlassFish software stack. Then Elastic Beanstalk runs the image on your environment instances.

An issue with this approach is that Elastic Beanstalk builds the Docker image locally whenever it creates an instance for your environment. The image build increases instance provisioning time. This impact isn't limited to initial environment creation—it happens during scale-out actions too.

**To launch an environment with an example GlassFish application**

1. Download the example `docker-glassfish-al2-v1.zip`, and then expand the `.zip` file into a directory in your development environment.

   ```
   ~$ curl https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/samples/docker-glassfish-al2-v1.zip --output docker-glassfish-al2-v1.zip
   ~$ mkdir glassfish-example
   ~$ cd glassfish-example
   ~/glassfish-example$ unzip ../docker-glassfish-al2-v1.zip
   ```

   Your directory structure should be as follows.

   ```
   ~/glassfish-example
   |-- Dockerfile
   |-- Dockerrun.aws.json
   |-- glassfish-start.sh
   |-- index.jsp
   |-- META-INF
   |   |-- LICENSE.txt
   |   |-- MANIFEST.MF
   |   `-- NOTICE.txt
   |-- robots.txt
   `-- WEB-INF
       `-- web.xml
   ```

   The following files are key to building and running a Docker container in your environment:
   + `Dockerfile` – Provides instructions that Docker uses to build an image with your application and required dependencies.
   + `glassfish-start.sh` – A shell script that the Docker image runs to start your application.
   + `Dockerrun.aws.json` – Provides a logging key, which includes the GlassFish application server log in [log file requests](using-features.logging.md). If you aren't interested in GlassFish logs, you can omit this file.

1. Configure your local directory for deployment to Elastic Beanstalk.

   ```
   ~/glassfish-example$ eb init -p docker glassfish-example
   ```

1. (Optional) Use the **eb local run** command to build and run your container locally.

   ```
   ~/glassfish-example$ eb local run --port 8080
   ```
**Note**  
To learn more about the **eb local** command, see [**eb local**](eb3-local.md). The command isn't supported on Windows. Alternatively, you can build and run your container with the **docker build** and **docker run** commands. For more information, see the [Docker documentation](https://docs.docker.com/).

1. (Optional) While your container is running, use the **eb local open** command to view your application in a web browser. Alternatively, open [http://localhost:8080/](http://localhost:8080/) in a web browser.

   ```
   ~/glassfish-example$ eb local open
   ```

1. Use the **eb create** command to create an environment and deploy your application.

   ```
   ~/glassfish-example$ eb create glassfish-example-env
   ```

1. After your environment launches, use the **eb open** command to view it in a web browser.

   ```
   ~/glassfish-example$ eb open
   ```

When you're done working with the example, terminate the environment and delete related resources.

```
~/glassfish-example$ eb terminate --all
```

### Advanced example: provide a prebuilt Docker image
<a name="docker-glassfish-tutorial.advanced"></a>

This is a more advanced way to deploy your GlassFish application. Building on the first example, you create a Docker image containing your application code and the GlassFish software stack, and push it to Docker Hub. After you've done this one-time step, you can launch Elastic Beanstalk environments based on your custom image.

When you launch an environment and provide your Docker image, instances in your environment download and use this image directly and don't need to build a Docker image. Therefore, instance provisioning time is decreased.

**Notes**  
The following steps create a publicly available Docker image.
You will use Docker commands from your local Docker installation, along with your Docker Hub credentials. For more information, see the preceding *Prerequisites* section in this topic.

**To launch an environment with a prebuilt GlassFish application Docker image**

1. Download and expand the example `docker-glassfish-al2-v1.zip` as in the previous [simple example](#docker-glassfish-tutorial.simple). If you've completed that example, you can use the directory you already have.

1. Build a Docker image and push it to Docker Hub. Replace *docker-id* with your Docker ID to sign in to Docker Hub.

   ```
   ~/glassfish-example$ docker build -t docker-id/beanstalk-glassfish-example:latest .
   ~/glassfish-example$ docker push docker-id/beanstalk-glassfish-example:latest
   ```
**Note**  
Before pushing your image, you might need to run **docker login**. You will be prompted for your Docker Hub credentials if you run the command without parameters.

1. Create an additional directory.

   ```
   ~$ mkdir glassfish-prebuilt
   ~$ cd glassfish-prebuilt
   ```

1. Copy the following example into a file named `Dockerrun.aws.json`.  
**Example `~/glassfish-prebuilt/Dockerrun.aws.json`**  

   ```
   {
     "AWSEBDockerrunVersion": "1",
     "Image": {
       "Name": "docker-username/beanstalk-glassfish-example"
     },
     "Ports": [
       {
         "ContainerPort": 8080,
         "HostPort": 8080
       }
     ],
     "Logging": "/usr/local/glassfish5/glassfish/domains/domain1/logs"
   }
   ```

1. Configure your local directory for deployment to Elastic Beanstalk.

   ```
   ~/glassfish-prebuilt$ eb init -p docker glassfish-prebuilt
   ```

1. (Optional) Use the **eb local run** command to run your container locally.

   ```
   ~/glassfish-prebuilt$ eb local run --port 8080
   ```

1. (Optional) While your container is running, use the **eb local open** command to view your application in a web browser. Alternatively, open [http://localhost:8080/](http://localhost:8080/) in a web browser.

   ```
   ~/glassfish-prebuilt$ eb local open
   ```

1. Use the **eb create** command to create an environment and deploy your Docker image.

   ```
   ~/glassfish-prebuilt$ eb create glassfish-prebuilt-env
   ```

1. After your environment launches, use the **eb open** command to view it in a web browser.

   ```
   ~/glassfish-prebuilt$ eb open
   ```

When you're done working with the example, terminate the environment and delete related resources.

```
~/glassfish-prebuilt$ eb terminate --all
```