

# Amazon ECS task definitions
<a name="task_definitions"></a>

A *task definition* is a blueprint for your application. It is a text file in JSON format that describes the parameters and one or more containers that form your application. 

The following are some of the parameters that you can specify in a task definition:
+ The capacity to use, which determines the infrastructure that your tasks are hosted on
+ The Docker image to use with each container in your task
+ How much CPU and memory to use with each task or each container within a task
+ The operating system of the container that the task runs on
+ The Docker networking mode to use for the containers in your task
+ The logging configuration to use for your tasks
+ Whether the task continues to run if the container finishes or fails
+ The command that the container runs when it's started
+ Any data volumes that are used with the containers in the task
+ The IAM role that your tasks use

For a complete list of task definition parameters, see [Amazon ECS task definition parameters for Fargate](task_definition_parameters.md).

After you create a task definition, you can run the task definition as a task or a service.
+ A *task* is the instantiation of a task definition within a cluster. After you create a task definition for your application within Amazon ECS, you can specify the number of tasks to run on your cluster. 
+ An Amazon ECS *service* runs and maintains your desired number of tasks simultaneously in an Amazon ECS cluster. If any of your tasks fail or stop for any reason, the Amazon ECS service scheduler launches another instance of your task definition to replace it, thereby maintaining the desired number of tasks in the service.
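
To illustrate several of the parameters listed above, the following is a minimal sketch of a Fargate task definition; the family name, log group, Region, and image are placeholder values:

```json
{
  "family": "sample-webapp",
  "networkMode": "awsvpc",
  "requiresCompatibilities": ["FARGATE"],
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "public.ecr.aws/docker/library/httpd:latest",
      "essential": true,
      "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "/ecs/sample-webapp",
          "awslogs-region": "us-east-1",
          "awslogs-stream-prefix": "web"
        }
      }
    }
  ]
}
```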

**Topics**
+ [Amazon ECS task definition states](task-definition-state.md)
+ [Architect your application for Amazon ECS](application_architecture.md)
+ [Creating an Amazon ECS task definition using the console](create-task-definition.md)
+ [Using Amazon Q Developer to provide task definition recommendations in the Amazon ECS console](using-amazon-q.md)
+ [Updating an Amazon ECS task definition using the console](update-task-definition-console-v2.md)
+ [Deregistering an Amazon ECS task definition revision using the console](deregister-task-definition-v2.md)
+ [Deleting an Amazon ECS task definition revision using the console](delete-task-definition-v2.md)
+ [Amazon ECS task definition use cases](use-cases.md)
+ [Amazon ECS task definition parameters for Amazon ECS Managed Instances](task_definition_parameters-managed-instances.md)
+ [Amazon ECS task definition parameters for Fargate](task_definition_parameters.md)
+ [Amazon ECS task definition parameters for Amazon EC2](task_definition_parameters_ec2.md)
+ [Amazon ECS task definition template](task-definition-template.md)
+ [Example Amazon ECS task definitions](example_task_definitions.md)

# Amazon ECS task definition states
<a name="task-definition-state"></a>

A task definition changes states when you create, deregister, or delete it. You can view the task definition state in the console, or by using `DescribeTaskDefinition`. 

The following are the possible states for a task definition:

ACTIVE  
A task definition is `ACTIVE` after it is registered with Amazon ECS. You can use task definitions in the `ACTIVE` state to run tasks, or create services.

INACTIVE  
A task definition transitions from the `ACTIVE` state to the `INACTIVE` state when you deregister a task definition. You can retrieve an `INACTIVE` task definition by calling `DescribeTaskDefinition`. You cannot run new tasks or create new services with a task definition in the `INACTIVE` state. There is no impact on existing services or tasks.

DELETE_IN_PROGRESS  
A task definition transitions from the `INACTIVE` state to the `DELETE_IN_PROGRESS` state after you submit the task definition for deletion. While the task definition is in the `DELETE_IN_PROGRESS` state, Amazon ECS periodically verifies that the target task definition is not referenced by any active tasks or deployments, and then deletes the task definition permanently. You cannot run new tasks or create new services with a task definition in the `DELETE_IN_PROGRESS` state. A task definition can be submitted for deletion at any time without impacting existing tasks and services.  
Task definitions that are in the `DELETE_IN_PROGRESS` state can be viewed in the console and you can retrieve the task definition by calling `DescribeTaskDefinition`.  
When you delete all `INACTIVE` task definition revisions, the task definition name is not displayed in the console and not returned in the API. If a task definition revision is in the `DELETE_IN_PROGRESS` state, the task definition name is displayed in the console and returned in the API. The task definition name is retained by Amazon ECS and the revision is incremented the next time you create a task definition with that name.

If you use AWS Config to manage your task definitions, AWS Config charges you for all task definition registrations. You are only charged for deregistering the latest `ACTIVE` task definition. There is no charge for deleting a task definition. For more information about pricing, see [AWS Config Pricing](https://aws.amazon.com/config/pricing/).

## Amazon ECS resources that can block a deletion
<a name="resource-block-delete"></a>

A task definition deletion request will not complete when there are any Amazon ECS resources that depend on the task definition revision. The following resources might prevent a task definition from being deleted:
+ Amazon ECS standalone tasks - The task definition is required in order for the task to remain healthy.
+ Amazon ECS service tasks - The task definition is required in order for the task to remain healthy.
+ Amazon ECS service deployments and task sets - The task definition is required when a scaling event is initiated for an Amazon ECS deployment or task set.

If your task definition remains in the `DELETE_IN_PROGRESS` state, you can use the console or the AWS CLI to identify, and then stop, the resources that block the task definition deletion.

### Task definition deletion after the blocked resource is removed
<a name="resource-block-remove"></a>

The following rules apply after you remove the resources that block the task definition deletion:
+ Amazon ECS tasks - The task definition deletion can take up to 1 hour to complete after the task is stopped.
+ Amazon ECS service deployments and task sets - The task definition deletion can take up to 24 hours to complete after the deployment or task set is deleted.

# Architect your application for Amazon ECS
<a name="application_architecture"></a>

You architect your application by creating a task definition for your application. The task definition contains the parameters that define information about the application, including:
+ The capacity to use, which determines the infrastructure that your tasks are hosted on.

  When you use the EC2 capacity provider, you also choose the instance type. When you use the Amazon ECS Managed Instances capacity provider, you can provide instance requirements for Amazon ECS to manage compute capacity. For some instance types, such as GPU, you need to set specific parameters. For more information, see [Amazon ECS task definition use cases](use-cases.md).
+ The container image, which holds your application code and all the dependencies that your application code requires to run.
+ The networking mode to use for the containers in your task.

  The networking mode determines how your task communicates over the network.

  For tasks that run on EC2 instances and Amazon ECS Managed Instances, there are multiple options, but we recommend that you use the `awsvpc` network mode. The `awsvpc` network mode simplifies container networking by giving you more control over how your applications communicate with each other and other services within your VPCs. 

  For tasks that run on Fargate, you must use the `awsvpc` network mode.
+ The logging configuration to use for your tasks.
+ Any data volumes that are used with the containers in the task.
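
As one example of a capacity-specific parameter, a container that needs a GPU on EC2 capacity declares it with `resourceRequirements`. The following is a sketch; the container name and image are placeholder values:

```json
{
  "containerDefinitions": [
    {
      "name": "gpu-worker",
      "image": "nvidia/cuda:12.2.0-base-ubuntu22.04",
      "essential": true,
      "resourceRequirements": [
        {"type": "GPU", "value": "1"}
      ]
    }
  ]
}
```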

For a complete list of task definition parameters, see [Amazon ECS task definition parameters for Fargate](task_definition_parameters.md).

Use the following guidelines when you create your task definitions:
+ Use each task definition family for only one business purpose.

  If you group multiple types of application containers together in the same task definition, you can't independently scale those containers. For example, a website and an API typically require different scaling patterns. As traffic increases, you might need a different number of web containers than API containers. If these two containers are deployed in the same task definition, every task runs the same number of web containers and API containers.
+ Match each application version with a task definition revision within a task definition family.

  Within a task definition family, each task definition revision represents a point-in-time snapshot of the settings for a particular container image. This is similar to how the container is a snapshot of all the components needed to run a particular version of your application code.

  Create a one-to-one mapping between a version of application code, a container image tag, and a task definition revision. A typical release process involves a git commit that gets turned into a container image that's tagged with the git commit SHA. Then, that container image tag gets its own Amazon ECS task definition revision. Last, the Amazon ECS service is updated to deploy the new task definition revision.
+ Use different IAM roles for each task definition family.

  Define each task definition with its own IAM role. Implement this practice along with providing each business component its own task definition family. By implementing both of these best practices, you can limit how much access each service has to resources in your AWS account. For example, you can give your authentication service access to connect to your passwords database. At the same time, you can ensure that only your order service has access to the credit card payment information.
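
Following these guidelines, each task definition family references its own IAM role through the `taskRoleArn` parameter. The following fragment is a sketch; the account ID, role names, and image URI are placeholder values:

```json
{
  "family": "auth-service",
  "taskRoleArn": "arn:aws:iam::111122223333:role/auth-service-task-role",
  "executionRoleArn": "arn:aws:iam::111122223333:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "auth",
      "image": "111122223333.dkr.ecr.us-east-1.amazonaws.com/auth-service:latest",
      "essential": true
    }
  ]
}
```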

# Best practices for Amazon ECS task sizes
<a name="capacity-tasksize"></a>

Your container and task sizes are both essential for scaling and capacity planning. In Amazon ECS, CPU and memory are two resource metrics used for capacity. CPU is measured in units of 1/1024 of a full vCPU (where 1024 units is equal to 1 whole vCPU). Memory is measured in mebibytes. In your task definition, you can configure resource reservations and limits.

When you configure a reservation, you're setting the minimum amount of resources that a task requires. Your task receives at least the amount of resources requested. Your application might be able to use more CPU or memory than the reservation that you declare. However, this is subject to any limits that you also declared. Using more than the reservation amount is known as bursting. In Amazon ECS, reservations are guaranteed. For example, if you use Amazon EC2 instances to provide capacity, Amazon ECS doesn't place a task on an instance where the reservation can't be fulfilled.

A limit is the maximum amount of CPU units or memory that your container or task can use. Any attempt to use more CPU than this limit results in throttling. Any attempt to use more memory than this limit results in your container being stopped.
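
At the container level, the reservation and the limit correspond to the `memoryReservation` (soft limit) and `memory` (hard limit) parameters. The following fragment is a sketch with illustrative values; the container name and image are placeholders:

```json
{
  "containerDefinitions": [
    {
      "name": "app",
      "image": "my-app:latest",
      "cpu": 256,
      "memoryReservation": 512,
      "memory": 1024,
      "essential": true
    }
  ]
}
```

With these values, the container is guaranteed 512 MiB, can burst above that reservation, and is stopped if it exceeds 1024 MiB.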

Choosing these values can be challenging. This is because the values that are the most well suited for your application greatly depend on the resource requirements of your application. Load testing your application is the key to successful resource requirement planning and better understanding your application's requirements.

## Stateless applications
<a name="capacity-tasksize-stateless"></a>

For stateless applications that scale horizontally, such as an application behind a load balancer, we recommend that you first determine the amount of memory that your application consumes when it serves requests. To do this, you can use traditional tools such as `ps` or `top`, or monitoring solutions such as CloudWatch Container Insights.

When determining a CPU reservation, consider how you want to scale your application to meet your business requirements. You can use smaller CPU reservations, such as 256 CPU units (or 1/4 vCPU), to scale out in a fine-grained way that minimizes cost. But, they might not scale fast enough to meet significant spikes in demand. You can use larger CPU reservations to scale in and out more quickly and therefore match demand spikes more quickly. However, larger CPU reservations are more costly.

## Other applications
<a name="capacity-tasksize-other"></a>

For applications that don't scale horizontally, such as singleton workers or database servers, available capacity and cost represent your most important considerations. You should choose the amount of memory and CPU based on what load testing indicates you need to serve traffic to meet your service-level objective. Amazon ECS ensures that the application is placed on a host that has adequate capacity.

# Amazon ECS task networking for Amazon ECS Managed Instances
<a name="managed-instance-networking"></a>

The networking behavior of Amazon ECS tasks running on Amazon ECS Managed Instances is determined by the *network mode* specified in the task definition. You must specify a network mode; you can't run tasks on Amazon ECS Managed Instances using a task definition that doesn't specify one. Amazon ECS Managed Instances supports the following network modes, ensuring backward compatibility for migrating workloads from Fargate or Amazon ECS on Amazon EC2:


| Network mode | Description | 
| --- | --- | 
|  `awsvpc`  |  Each task receives its own elastic network interface (ENI) and private IPv4 address. This provides the same networking properties as Amazon EC2 instances and is compatible with traditional Fargate tasks. Uses ENI trunking for high task density.  | 
|  `host`  |  Tasks share the host's network namespace directly. Container networking is tied to the underlying host instance.  | 
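
Because a network mode is required, a task definition intended for Managed Instances states it explicitly. The following is a minimal sketch; the family name and image are placeholder values:

```json
{
  "family": "mi-webapp",
  "networkMode": "awsvpc",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "public.ecr.aws/nginx/nginx:latest",
      "essential": true,
      "portMappings": [{"containerPort": 80}]
    }
  ]
}
```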

## Using a VPC in IPv6-only mode
<a name="managed-instances-networking-ipv6-only"></a>

In an IPv6-only configuration, your Amazon ECS tasks communicate exclusively over IPv6. To set up VPCs and subnets for an IPv6-only configuration, you must add an IPv6 CIDR block to the VPC and create subnets that include only an IPv6 CIDR block. For more information, see [Add IPv6 support for your VPC](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-migrate-ipv6-add.html) and [Create a subnet](https://docs.aws.amazon.com/vpc/latest/userguide/create-subnets.html) in the *Amazon VPC User Guide*. You must also update route tables with IPv6 targets and configure security groups with IPv6 rules. For more information, see [Configure route tables](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Route_Tables.html) and [Configure security group rules](https://docs.aws.amazon.com/vpc/latest/userguide/working-with-security-group-rules.html) in the *Amazon VPC User Guide*.

The following considerations apply:
+ You can update an IPv4-only or dualstack Amazon ECS service to an IPv6-only configuration by either updating the service directly to use IPv6-only subnets or by creating a parallel IPv6-only service and using Amazon ECS blue-green deployments to shift traffic to the new service. For more information about Amazon ECS blue-green deployments, see [Amazon ECS blue/green deployments](deployment-type-blue-green.md).
+ An IPv6-only Amazon ECS service must use dualstack load balancers with IPv6 target groups. If you're migrating an existing Amazon ECS service that's behind an Application Load Balancer or a Network Load Balancer, you can create a new dualstack load balancer and shift traffic from the old load balancer, or update the IP address type of the existing load balancer.

   For more information about Network Load Balancers, see [Create a Network Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/create-network-load-balancer.html) and [Update the IP address types for your Network Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-ip-address-type.html) in the *User Guide for Network Load Balancers*. For more information about Application Load Balancers, see [Create an Application Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/create-application-load-balancer.html) and [Update the IP address types for your Application Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-ip-address-type.html) in the *User Guide for Application Load Balancers*.
+ For Amazon ECS tasks in an IPv6-only configuration to communicate with IPv4-only endpoints, you can set up DNS64 and NAT64 for network address translation from IPv6 to IPv4. For more information, see [DNS64 and NAT64](https://docs.aws.amazon.com/vpc/latest/userguide/nat-gateway-nat64-dns64.html) in the *Amazon VPC User Guide*.
+ Amazon ECS workloads in an IPv6-only configuration must use Amazon ECR dualstack image URI endpoints when pulling images from Amazon ECR. For more information, see [Getting started with making requests over IPv6](https://docs.aws.amazon.com/AmazonECR/latest/userguide/ecr-requests.html#ipv6-access-getting-started) in the *Amazon Elastic Container Registry User Guide*.
**Note**  
Amazon ECR doesn't support dualstack interface VPC endpoints that tasks in an IPv6-only configuration can use. For more information, see [Getting started with making requests over IPv6](https://docs.aws.amazon.com/AmazonECR/latest/userguide/ecr-requests.html#ipv6-access-getting-started) in the *Amazon Elastic Container Registry User Guide*.
+ Amazon ECS Exec isn't supported in an IPv6-only configuration.

# Allocate a network interface for tasks on Amazon ECS Managed Instances
<a name="managed-instances-awsvpc-mode"></a>

Using the `awsvpc` network mode in Amazon ECS Managed Instances simplifies container networking because you have more control over how your applications communicate with each other and other services within your VPCs. The `awsvpc` network mode also provides greater security for your containers by allowing you to use security groups and network monitoring tools at a more granular level within your tasks.

By default, every Amazon ECS Managed Instances instance has a trunk Elastic Network Interface (ENI) attached during launch as a primary ENI when the instance type supports trunking. For more information about instance types that support ENI trunking, see [Supported instances for increased Amazon ECS container network interfaces](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/eni-trunking-supported-instance-types.html).

**Note**  
When the chosen instance type doesn't support trunk ENIs, the instance will be launched with a regular ENI.

Each task that runs on the instance receives its own ENI attached to the trunk ENI, with a primary private IP address. If your VPC is configured for dual-stack mode and you use a subnet with an IPv6 CIDR block, the ENI also receives an IPv6 address. When using a public subnet, you can optionally assign a public IP address to the Amazon ECS Managed Instance primary ENI by enabling IPv4 public addressing for the subnet. For more information, see [Modify the IP addressing attributes of your subnet](https://docs.aws.amazon.com//vpc/latest/userguide/subnet-public-ip.html) in *Amazon VPC User Guide*. A task can only have one ENI that's associated with it at a time. 

Containers that belong to the same task can also communicate over the `localhost` interface. For more information about VPCs and subnets, see [How Amazon VPC works](https://docs.aws.amazon.com/vpc/latest/userguide/how-it-works.html) in the *Amazon VPC User Guide*.

The following operations use the primary ENI attached to the instance:
+ **Image downloads** - Container images are downloaded from Amazon ECR through the primary ENI.
+ **Secrets retrieval** - Secrets Manager secrets and other credentials are retrieved through the primary ENI.
+ **Log uploads** - Logs are uploaded to CloudWatch through the primary ENI.
+ **Environment file downloads** - Environment files are downloaded through the primary ENI.

Application traffic flows through the task ENI.

Because each task gets its own ENI, you can use networking features such as VPC Flow Logs, which you can use to monitor traffic to and from your tasks. For more information, see [VPC Flow Logs](https://docs.aws.amazon.com/vpc/latest/userguide/flow-logs.html) in the *Amazon VPC User Guide*.

You can also take advantage of AWS PrivateLink. You can configure a VPC interface endpoint so that you can access Amazon ECS APIs through private IP addresses. AWS PrivateLink restricts all network traffic between your VPC and Amazon ECS to the Amazon network. You don't need an internet gateway, a NAT device, or a virtual private gateway. For more information, see [Amazon ECS interface VPC endpoints (AWS PrivateLink)](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/vpc-endpoints.html).

The `awsvpc` network mode also allows you to leverage Amazon VPC Traffic Mirroring for security and monitoring of network traffic when using instance types that don't have trunk ENIs attached. For more information, see [What is Traffic Mirroring?](https://docs.aws.amazon.com/vpc/latest/mirroring/what-is-traffic-mirroring.html) in the *Amazon VPC Traffic Mirroring Guide*.

## Considerations for `awsvpc` mode
<a name="managed-instances-awsvpc-considerations"></a>
+ Tasks require the Amazon ECS service-linked role for ENI management. This role is created automatically when you create a cluster or service.
+ Task ENIs are managed by Amazon ECS and cannot be manually detached or modified.
+ Assigning a public IP address to the task ENI using `assignPublicIp` when running a standalone task (`RunTask`) or creating or updating a service (`CreateService`/`UpdateService`) is not supported.
+ When you configure `awsvpc` networking at the task level, you must use the same VPC that you specified as part of the Amazon ECS Managed Instances capacity provider's launch template. You can use different subnets and security groups from those specified in the launch template.
+ For `awsvpc` network mode tasks, use `ip` target type when configuring load balancer target groups. Amazon ECS automatically manages target group registration for supported networking modes.
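
For example, the task-level network configuration that you pass to `RunTask` or `CreateService` might look like the following sketch; the subnet and security group IDs are placeholder values, and note that `assignPublicIp` remains `DISABLED`:

```json
{
  "networkConfiguration": {
    "awsvpcConfiguration": {
      "subnets": ["subnet-0123456789abcdef0"],
      "securityGroups": ["sg-0123456789abcdef0"],
      "assignPublicIp": "DISABLED"
    }
  }
}
```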

## Using a VPC in dual-stack mode
<a name="managed-instance-networking-vpc-dual-stack"></a>

When using a VPC in dual-stack mode, your tasks can communicate over IPv4, IPv6, or both. IPv4 and IPv6 addresses are independent of each other. Therefore, you must configure routing and security in your VPC separately for IPv4 and IPv6. For more information about how to configure your VPC for dual-stack mode, see [Migrating to IPv6](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-migrate-ipv6.html) in the *Amazon VPC User Guide*.

If you configured your VPC with an internet gateway or an outbound-only internet gateway, you can use your VPC in dual-stack mode. By doing this, tasks that are assigned an IPv6 address can access the internet through an internet gateway or an egress-only internet gateway. NAT gateways are optional. For more information, see [Internet gateways](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Internet_Gateway.html) and [Egress-only internet gateways](https://docs.aws.amazon.com/vpc/latest/userguide/egress-only-internet-gateway.html) in the *Amazon VPC User Guide*.

Amazon ECS tasks are assigned an IPv6 address if the following conditions are met:
+ The Amazon ECS Managed Instances instance that hosts the task is using version `1.45.0` or later of the container agent. For information about how to check the agent version your instance is using, and updating it if needed, see [Updating the Amazon ECS container agent](ecs-agent-update.md).
+ The `dualStackIPv6` account setting is enabled. For more information, see [Access Amazon ECS features with account settings](ecs-account-settings.md).
+ Your task is using the `awsvpc` network mode.
+ Your VPC and subnet are configured for IPv6. The configuration includes the network interfaces that are created in the specified subnet. For more information about how to configure your VPC for dual-stack mode, see [Migrating to IPv6](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-migrate-ipv6.html) and [Modify the IPv6 addressing attribute for your subnet](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-ip-addressing.html#subnet-ipv6) in the *Amazon VPC User Guide*.

# Host network mode
<a name="managed-instances-host-modes"></a>

In `host` mode, tasks share the host's network namespace directly. The container's networking configuration is tied to the underlying Amazon ECS Managed Instances host instance that you specify using the `networkConfiguration` parameter when you create an Amazon ECS Managed Instances capacity provider.

There are significant drawbacks to using this network mode. You can’t run more than a single instantiation of a task on each host. This is because only the first task can bind to its required port on the Amazon EC2 instance. There's also no way to remap a container port when it's using `host` network mode. For example, if an application needs to listen on a particular port number, you can't remap the port number directly. Instead, you must manage any port conflicts through changing the application configuration.

There are also security implications when using the `host` network mode. This mode allows containers to impersonate the host, and it allows containers to connect to private loopback network services on the host.

Use host mode only when you need direct access to host networking or when migrating applications that require host-level network access.

# Amazon ECS task networking options for EC2
<a name="task-networking"></a>

The networking behavior of Amazon ECS tasks that are hosted on Amazon EC2 instances is dependent on the *network mode* that's defined in the task definition. We recommend that you use the `awsvpc` network mode unless you have a specific need to use a different network mode.

The following are the available network modes.


| Network mode | Linux containers on EC2 | Windows containers on EC2 | Description | 
| --- | --- | --- | --- | 
|  `awsvpc`  |  Yes  |  Yes  |  The task is allocated its own elastic network interface (ENI) and a primary private IPv4 or IPv6 address. This gives the task the same networking properties as Amazon EC2 instances.  | 
|  `bridge`  |  Yes  |  No  |  The task uses Docker's built-in virtual network on Linux, which runs inside each Amazon EC2 instance that hosts the task. The built-in virtual network on Linux uses the `bridge` Docker network driver. This is the default network mode on Linux if a network mode isn't specified in the task definition.  | 
|  `host`  |  Yes  |  No  |  The task uses the host's network which bypasses Docker's built-in virtual network by mapping container ports directly to the ENI of the Amazon EC2 instance that hosts the task. Dynamic port mappings can’t be used in this network mode. A container in a task definition that uses this mode must specify a specific `hostPort` number. A port number on a host can’t be used by multiple tasks. As a result, you can’t run multiple tasks of the same task definition on a single Amazon EC2 instance.  | 
|  `none`  |  Yes  |  No  |  The task has no external network connectivity.  | 
|  `default`  |  No  |  Yes  |  The task uses Docker's built-in virtual network on Windows, which runs inside each Amazon EC2 instance that hosts the task. The built-in virtual network on Windows uses the `nat` Docker network driver. This is the default network mode on Windows if a network mode isn't specified in the task definition.  | 

For more information about Docker networking on Linux, see [Networking overview](https://docs.docker.com/engine/network/) in the *Docker Documentation*.

For more information about Docker networking on Windows, see [Windows container networking](https://learn.microsoft.com/en-us/virtualization/windowscontainers/container-networking/architecture) in the Microsoft *Containers on Windows Documentation*.
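
The port-mapping behavior differs by mode. In `bridge` mode, you can omit `hostPort` or set it to `0` to receive a dynamically assigned host port, which is what allows multiple tasks from the same task definition on one instance; in `host` mode, the container port binds directly to that port on the host. The following `bridge` mode fragment is a sketch with placeholder names:

```json
{
  "networkMode": "bridge",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "my-web:latest",
      "essential": true,
      "portMappings": [
        {"containerPort": 80, "hostPort": 0, "protocol": "tcp"}
      ]
    }
  ]
}
```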

## Using a VPC in IPv6-only mode
<a name="networking-ipv6-only"></a>

In an IPv6-only configuration, your Amazon ECS tasks communicate exclusively over IPv6. To set up VPCs and subnets for an IPv6-only configuration, you must add an IPv6 CIDR block to the VPC and create new subnets that include only an IPv6 CIDR block. For more information, see [Add IPv6 support for your VPC](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-migrate-ipv6-add.html) and [Create a subnet](https://docs.aws.amazon.com/vpc/latest/userguide/create-subnets.html) in the *Amazon VPC User Guide*.

You must also update route tables with IPv6 targets and configure security groups with IPv6 rules. For more information, see [Configure route tables](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Route_Tables.html) and [Configure security group rules](https://docs.aws.amazon.com/vpc/latest/userguide/working-with-security-group-rules.html) in the *Amazon VPC User Guide*.

The following considerations apply:
+ You can update an IPv4-only or dualstack Amazon ECS service to an IPv6-only configuration by either updating the service directly to use IPv6-only subnets or by creating a parallel IPv6-only service and using Amazon ECS blue-green deployments to shift traffic to the new service. For more information about Amazon ECS blue-green deployments, see [Amazon ECS blue/green deployments](deployment-type-blue-green.md).
+ An IPv6-only Amazon ECS service must use dualstack load balancers with IPv6 target groups. If you're migrating an existing Amazon ECS service that's behind an Application Load Balancer or a Network Load Balancer, you can create a new dualstack load balancer and shift traffic from the old load balancer, or update the IP address type of the existing load balancer.

  For more information about Network Load Balancers, see [Create a Network Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/create-network-load-balancer.html) and [Update the IP address types for your Network Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-ip-address-type.html) in the *User Guide for Network Load Balancers*. For more information about Application Load Balancers, see [Create an Application Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/create-application-load-balancer.html) and [Update the IP address types for your Application Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-ip-address-type.html) in the *User Guide for Application Load Balancers*.
+ IPv6-only configuration isn't supported on Windows. You must use Amazon ECS-optimized Linux AMIs to run tasks in an IPv6-only configuration. For more information about Amazon ECS-optimized Linux AMIs, see [Amazon ECS-optimized Linux AMIs](ecs-optimized_AMI.md).
+ When you launch a container instance for running tasks in an IPv6-only configuration, you must set a primary IPv6 address for the instance by using the `--enable-primary-ipv6` EC2 parameter.
**Note**  
Without a primary IPv6 address, tasks running on the container instance in the `host` or `bridge` network modes fail to register with load balancers or with AWS Cloud Map.

  For more information about the `--enable-primary-ipv6` parameter for running Amazon EC2 instances, see [run-instances](https://docs.aws.amazon.com/cli/latest/reference/ec2/run-instances.html) in the *AWS CLI Command Reference*.

  For more information about launching container instances using the AWS Management Console, see [Launching an Amazon ECS Linux container instance](launch_container_instance.md).
+ By default, the Amazon ECS container agent tries to detect the container instance's compatibility for an IPv6-only configuration by looking at the instance's default IPv4 and IPv6 routes. To override this behavior, you can set the `ECS_INSTANCE_IP_COMPATIBILITY` parameter to `ipv4` or `ipv6` in the instance's `/etc/ecs/ecs.config` file.
+ Tasks must use version `1.99.1` or later of the container agent. For information about how to check the agent version your instance is using and updating it if needed, see [Updating the Amazon ECS container agent](ecs-agent-update.md).
+ For Amazon ECS tasks in an IPv6-only configuration to communicate with IPv4-only endpoints, you can set up DNS64 and NAT64 for network address translation from IPv6 to IPv4. For more information, see [DNS64 and NAT64](https://docs.aws.amazon.com/vpc/latest/userguide/nat-gateway-nat64-dns64.html) in the *Amazon VPC User Guide*.
+ Amazon ECS workloads in an IPv6-only configuration must use Amazon ECR dualstack image URI endpoints when pulling images from Amazon ECR. For more information, see [Getting started with making requests over IPv6](https://docs.aws.amazon.com/AmazonECR/latest/userguide/ecr-requests.html#ipv6-access-getting-started) in the *Amazon Elastic Container Registry User Guide*.
**Note**  
Amazon ECR doesn't support dualstack interface VPC endpoints that tasks in an IPv6-only configuration can use. For more information, see [Getting started with making requests over IPv6](https://docs.aws.amazon.com/AmazonECR/latest/userguide/ecr-requests.html#ipv6-access-getting-started) in the *Amazon Elastic Container Registry User Guide*.
+ Amazon ECS Exec isn't supported in an IPv6-only configuration.
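As an example of the `ECS_INSTANCE_IP_COMPATIBILITY` override described above, you could add the following line to the instance's `/etc/ecs/ecs.config` file before the agent starts. This sketch forces an IPv6-only configuration; use the `ipv4` value instead to force IPv4 behavior.

```
ECS_INSTANCE_IP_COMPATIBILITY=ipv6
```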

### AWS Regions that support IPv6-only mode for Amazon ECS
<a name="networking-ipv6-only-regions"></a>

You can run tasks in an IPv6-only configuration in the following AWS Regions where Amazon ECS is available:
+ US East (Ohio)
+ US East (N. Virginia)
+ US West (N. California)
+ US West (Oregon)
+ Africa (Cape Town)
+ Asia Pacific (Hong Kong)
+ Asia Pacific (Hyderabad)
+ Asia Pacific (Jakarta)
+ Asia Pacific (Melbourne)
+ Asia Pacific (Mumbai)
+ Asia Pacific (Osaka)
+ Asia Pacific (Seoul)
+ Asia Pacific (Singapore)
+ Asia Pacific (Sydney)
+ Asia Pacific (Tokyo)
+ Canada (Central)
+ Canada West (Calgary)
+ China (Beijing)
+ China (Ningxia)
+ Europe (Frankfurt)
+ Europe (London)
+ Europe (Milan)
+ Europe (Paris)
+ Europe (Spain)
+ Israel (Tel Aviv)
+ Middle East (Bahrain)
+ Middle East (UAE)
+ South America (São Paulo)
+ AWS GovCloud (US-East)
+ AWS GovCloud (US-West)

# Allocate a network interface for an Amazon ECS task
<a name="task-networking-awsvpc"></a>

The task networking features that are provided by the `awsvpc` network mode give Amazon ECS tasks the same networking properties as Amazon EC2 instances. Using the `awsvpc` network mode simplifies container networking, because you have more control over how your applications communicate with each other and other services within your VPCs. The `awsvpc` network mode also provides greater security for your containers by allowing you to use security groups and network monitoring tools at a more granular level within your tasks. You can also use other Amazon EC2 networking features such as VPC Flow Logs to monitor traffic to and from your tasks. Additionally, containers that belong to the same task can communicate over the `localhost` interface.

The task elastic network interface (ENI) is a fully managed feature of Amazon ECS. Amazon ECS creates the ENI and attaches it to the host Amazon EC2 instance with the specified security group. The task sends and receives network traffic over the ENI in the same way that Amazon EC2 instances do with their primary network interfaces. Each task ENI is assigned a private IPv4 address by default. If your VPC is enabled for dual-stack mode and you use a subnet with an IPv6 CIDR block, the task ENI will also receive an IPv6 address. Each task can only have one ENI. 

These ENIs are visible in the Amazon EC2 console for your account. Your account can't detach or modify the ENIs. This is to prevent accidental deletion of an ENI that is associated with a running task. You can view the ENI attachment information for tasks in the Amazon ECS console or with the [DescribeTasks](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_DescribeTasks.html) API operation. When the task stops or if the service is scaled down, the task ENI is detached and deleted.

When you need increased ENI density, use the `awsvpcTrunking` account setting. With this setting, Amazon ECS also creates and attaches a "trunk" network interface to your container instance. The trunk network interface is fully managed by Amazon ECS, and it is deleted when you either terminate or deregister your container instance from the Amazon ECS cluster. For more information about the `awsvpcTrunking` account setting, see [Prerequisites](container-instance-eni.md#eni-trunking-launching).

You specify `awsvpc` in the `networkMode` parameter of the task definition. For more information, see [Network mode](task_definition_parameters.md#network_mode). 

Then, when you run a task or create a service, use the `networkConfiguration` parameter that includes one or more subnets to place your tasks in and one or more security groups to attach to an ENI. For more information, see [Network configuration](service_definition_parameters.md#sd-networkconfiguration). The tasks are placed on compatible Amazon EC2 instances in the same Availability Zones as those subnets, and the specified security groups are associated with the ENI that's provisioned for the task.
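As a sketch, a `networkConfiguration` for the `run-task` or `create-service` AWS CLI commands might look like the following. The subnet and security group IDs are placeholders for your own resources.

```
{
  "awsvpcConfiguration": {
    "subnets": ["subnet-0123456789abcdef0"],
    "securityGroups": ["sg-0123456789abcdef0"],
    "assignPublicIp": "DISABLED"
  }
}
```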

## Linux considerations
<a name="linux"></a>

 Consider the following when using the Linux operating system.
+ If you use a p5.48xlarge instance in `awsvpc` mode, you can't run more than one task on the instance.
+ Tasks and services that use the `awsvpc` network mode require the Amazon ECS service-linked role to provide Amazon ECS with the permissions to make calls to other AWS services on your behalf. This role is created for you automatically when you create a cluster, or when you create or update a service, in the AWS Management Console. For more information, see [Using service-linked roles for Amazon ECS](using-service-linked-roles.md). You can also create the service-linked role with the following [create-service-linked-role](https://docs.aws.amazon.com/cli/latest/reference/iam/create-service-linked-role.html) AWS CLI command:

  ```
  aws iam create-service-linked-role --aws-service-name ecs.amazonaws.com
  ```
+ Your Amazon EC2 Linux instance requires version `1.15.0` or later of the container agent to run tasks that use the `awsvpc` network mode. If you're using an Amazon ECS-optimized AMI, your instance needs at least version `1.15.0-4` of the `ecs-init` package as well.
+ Amazon ECS populates the hostname of the task with an Amazon-provided (internal) DNS hostname when both the `enableDnsHostnames` and `enableDnsSupport` options are enabled on your VPC. If these options aren't enabled, the DNS hostname of the task is set to a random hostname. For more information about the DNS settings for a VPC, see [Using DNS with Your VPC](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-dns.html) in the *Amazon VPC User Guide*.
+ Each Amazon ECS task that uses the `awsvpc` network mode receives its own elastic network interface (ENI), which is attached to the Amazon EC2 instance that hosts it. There's a default quota for the number of network interfaces that can be attached to an Amazon EC2 Linux instance. The primary network interface counts as one toward that quota. For example, by default, a `c5.large` instance might have only up to three ENIs that can be attached to it. The primary network interface for the instance counts as one. You can attach an additional two ENIs to the instance. Because each task that uses the `awsvpc` network mode requires an ENI, you can typically only run two such tasks on this instance type. For more information about the default ENI limits for each instance type, see [IP addresses per network interface per instance type](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html#AvailableIpPerENI) in the *Amazon EC2 User Guide*.
+ Amazon ECS supports the launch of Amazon EC2 Linux instances that use supported instance types with increased ENI density. When you opt in to the `awsvpcTrunking` account setting and register Amazon EC2 Linux instances that use these instance types to your cluster, these instances have higher ENI quota. Using these instances with this higher quota means that you can place more tasks on each Amazon EC2 Linux instance. To use the increased ENI density with the trunking feature, your Amazon EC2 instance must use version `1.28.1` or later of the container agent. If you're using an Amazon ECS-optimized AMI, your instance also requires at least version `1.28.1-2` of the `ecs-init` package. For more information about opting in to the `awsvpcTrunking` account setting, see [Access Amazon ECS features with account settings](ecs-account-settings.md). For more information about ENI trunking, see [Increasing Amazon ECS Linux container instance network interfaces](container-instance-eni.md).
+ When hosting tasks that use the `awsvpc` network mode on Amazon EC2 Linux instances, your task ENIs aren't given public IP addresses. To access the internet, tasks must be launched in a private subnet that's configured to use a NAT gateway. For more information, see [NAT gateways](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html) in the *Amazon VPC User Guide*. Inbound network access must be from within a VPC that uses the private IP address or routed through a load balancer from within the VPC. Tasks that are launched within public subnets do not have access to the internet.
+ Amazon ECS recognizes only the ENIs that it attaches to your Amazon EC2 Linux instances. If you manually attached ENIs to your instances, Amazon ECS might attempt to add a task to an instance that doesn't have enough network adapters. This can result in the task timing out and moving to a deprovisioning status and then a stopped status. We recommend that you don't attach ENIs to your instances manually.
+ Amazon EC2 Linux instances must be registered with the `ecs.capability.task-eni` capability to be considered for placement of tasks with the `awsvpc` network mode. Instances running version `1.15.0-4` or later of `ecs-init` are registered with this attribute automatically.
+ The ENIs that are created and attached to your Amazon EC2 Linux instances cannot be detached manually or modified by your account. This is to prevent the accidental deletion of an ENI that is associated with a running task. To release the ENIs for a task, stop the task.
+ There is a limit of 16 subnets and 5 security groups that are able to be specified in the `awsVpcConfiguration` when running a task or creating a service that uses the `awsvpc` network mode. For more information, see [AwsVpcConfiguration](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_AwsVpcConfiguration.html) in the *Amazon Elastic Container Service API Reference*.
+ When a task is started with the `awsvpc` network mode, the Amazon ECS container agent creates an additional `pause` container for each task before starting the containers in the task definition. It then configures the network namespace of the `pause` container by running the [amazon-ecs-cni-plugins](https://github.com/aws/amazon-ecs-cni-plugins) CNI plugins. The agent then starts the rest of the containers in the task so that they share the network stack of the `pause` container. This means that all containers in a task are addressable by the IP addresses of the ENI, and they can communicate with each other over the `localhost` interface.
+ Services with tasks that use the `awsvpc` network mode only support Application Load Balancer and Network Load Balancer. When you create any target groups for these services, you must choose `ip` as the target type. Do not use `instance`. This is because tasks that use the `awsvpc` network mode are associated with an ENI, not with an Amazon EC2 Linux instance. For more information, see [Use load balancing to distribute Amazon ECS service traffic](service-load-balancing.md).
+ If your VPC is updated to change the DHCP options set it uses, you can't apply these changes to existing tasks. Start new tasks with these changes applied to them, verify that they are working correctly, and then stop the existing tasks in order to safely change these network configurations.

## Windows considerations
<a name="windows"></a>

 The following are considerations when you use the Windows operating system:
+ Container instances that use the Amazon ECS-optimized Windows Server 2016 AMI can't host tasks that use the `awsvpc` network mode. If a cluster contains both Amazon ECS-optimized Windows Server 2016 AMIs and Windows AMIs that support the `awsvpc` network mode, tasks that use the `awsvpc` network mode aren't launched on the Windows Server 2016 instances. Instead, they're launched on instances that support the `awsvpc` network mode.
+ Your Amazon EC2 Windows instance requires version `1.57.1` or later of the container agent to use CloudWatch metrics for Windows containers that use the `awsvpc` network mode.
+ Tasks and services that use the `awsvpc` network mode require the Amazon ECS service-linked role to provide Amazon ECS with the permissions to make calls to other AWS services on your behalf. This role is created for you automatically when you create a cluster, or when you create or update a service, in the AWS Management Console. For more information, see [Using service-linked roles for Amazon ECS](using-service-linked-roles.md). You can also create the service-linked role with the following [create-service-linked-role](https://docs.aws.amazon.com/cli/latest/reference/iam/create-service-linked-role.html) AWS CLI command:

  ```
  aws iam create-service-linked-role --aws-service-name ecs.amazonaws.com
  ```
+ Your Amazon EC2 Windows instance requires version `1.54.0` or later of the container agent to run tasks that use the `awsvpc` network mode. When you bootstrap the instance, you must configure the options that are required for `awsvpc` network mode. For more information, see [Bootstrapping Amazon ECS Windows container instances to pass data](bootstrap_windows_container_instance.md).
+ Amazon ECS populates the hostname of the task with an Amazon provided (internal) DNS hostname when both the `enableDnsHostnames` and `enableDnsSupport` options are enabled on your VPC. If these options aren't enabled, the DNS hostname of the task is a random hostname. For more information about the DNS settings for a VPC, see [Using DNS with Your VPC](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-dns.html) in the *Amazon VPC User Guide*.
+ Each Amazon ECS task that uses the `awsvpc` network mode receives its own elastic network interface (ENI), which is attached to the Amazon EC2 Windows instance that hosts it. There is a default quota for the number of network interfaces that can be attached to an Amazon EC2 Windows instance. The primary network interface counts as one toward this quota. For example, by default a `c5.large` instance might have only up to three ENIs attached to it. The primary network interface for the instance counts as one of those. You can attach an additional two ENIs to the instance. Because each task using the `awsvpc` network mode requires an ENI, you can typically only run two such tasks on this instance type. For more information about the default ENI limits for each instance type, see [IP addresses per network interface per instance type](https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/using-eni.html#AvailableIpPerENI) in the *Amazon EC2 User Guide*.
+ When hosting tasks that use the `awsvpc` network mode on Amazon EC2 Windows instances, your task ENIs aren't given public IP addresses. To access the internet, launch tasks in a private subnet that's configured to use a NAT gateway. For more information, see [NAT gateways](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html) in the *Amazon VPC User Guide*. Inbound network access must be from within the VPC that is using the private IP address or routed through a load balancer from within the VPC. Tasks that are launched within public subnets don't have access to the internet.
+ Amazon ECS recognizes only the ENIs that it has attached to your Amazon EC2 Windows instance. If you manually attached ENIs to your instances, Amazon ECS might attempt to add a task to an instance that doesn't have enough network adapters. This can result in the task timing out and moving to a deprovisioning status and then a stopped status. We recommend that you don't attach ENIs to your instances manually.
+ Amazon EC2 Windows instances must be registered with the `ecs.capability.task-eni` capability to be considered for placement of tasks with the `awsvpc` network mode. 
+  You can't manually modify or detach ENIs that are created and attached to your Amazon EC2 Windows instances. This is to prevent you from accidentally deleting an ENI that's associated with a running task. To release the ENIs for a task, stop the task.
+  You can only specify up to 16 subnets and 5 security groups in `awsVpcConfiguration` when you run a task or create a service that uses the `awsvpc` network mode. For more information, see [AwsVpcConfiguration](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_AwsVpcConfiguration.html) in the *Amazon Elastic Container Service API Reference*.
+ When a task is started with the `awsvpc` network mode, the Amazon ECS container agent creates an additional `pause` container for each task before starting the containers in the task definition. It then configures the network namespace of the `pause` container by running the [amazon-ecs-cni-plugins](https://github.com/aws/amazon-ecs-cni-plugins) CNI plugins. The agent then starts the rest of the containers in the task so that they share the network stack of the `pause` container. This means that all containers in a task are addressable by the IP addresses of the ENI, and they can communicate with each other over the `localhost` interface.
+ Services with tasks that use the `awsvpc` network mode only support Application Load Balancer and Network Load Balancer. When you create any target groups for these services, you must choose `ip` as the target type, not `instance`. This is because tasks that use the `awsvpc` network mode are associated with an ENI, not with an Amazon EC2 Windows instance. For more information, see [Use load balancing to distribute Amazon ECS service traffic](service-load-balancing.md).
+ If your VPC is updated to change the DHCP options set it uses, you can't apply these changes to existing tasks. Start new tasks with these changes applied to them, verify that they are working correctly, and then stop the existing tasks in order to safely change these network configurations.
+ The following are not supported when you use `awsvpc` network mode in an EC2 Windows configuration:
  + Dual-stack configuration
  + IPv6
  + ENI trunking

## Using a VPC in dual-stack mode
<a name="task-networking-vpc-dual-stack"></a>

When you use a VPC in dual-stack mode, your tasks can communicate over IPv4, IPv6, or both. IPv4 and IPv6 addresses are independent of each other, so you must configure routing and security in your VPC separately for each protocol. For more information about how to configure your VPC for dual-stack mode, see [Migrating to IPv6](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-migrate-ipv6.html) in the *Amazon VPC User Guide*.

If you configured your VPC with an internet gateway or an outbound-only internet gateway, you can use your VPC in dual-stack mode. By doing this, tasks that are assigned an IPv6 address can access the internet through an internet gateway or an egress-only internet gateway. NAT gateways are optional. For more information, see [Internet gateways](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Internet_Gateway.html) and [Egress-only internet gateways](https://docs.aws.amazon.com/vpc/latest/userguide/egress-only-internet-gateway.html) in the *Amazon VPC User Guide*.

Amazon ECS tasks are assigned an IPv6 address if the following conditions are met:
+ The Amazon EC2 Linux instance that hosts the task is using version `1.45.0` or later of the container agent. For information about how to check the agent version your instance is using, and updating it if needed, see [Updating the Amazon ECS container agent](ecs-agent-update.md).
+ The `dualStackIPv6` account setting is enabled. For more information, see [Access Amazon ECS features with account settings](ecs-account-settings.md).
+ Your task is using the `awsvpc` network mode.
+ Your VPC and subnet are configured for IPv6. The configuration includes the network interfaces that are created in the specified subnet. For more information about how to configure your VPC for dual-stack mode, see [Migrating to IPv6](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-migrate-ipv6.html) and [Modify the IPv6 addressing attribute for your subnet](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-ip-addressing.html#subnet-ipv6) in the *Amazon VPC User Guide*.
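For example, assuming you have permissions to change account settings, you can enable the `dualStackIPv6` account setting for the authenticated principal with the following AWS CLI command:

```
aws ecs put-account-setting --name dualStackIPv6 --value enabled
```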

# Map Amazon ECS container ports to the EC2 instance network interface
<a name="networking-networkmode-host"></a>

The `host` network mode is only supported for Amazon ECS tasks hosted on Amazon EC2 instances. It's not supported when using Amazon ECS on Fargate.

The `host` network mode is the most basic network mode that's supported in Amazon ECS. Using host mode, the networking of the container is tied directly to the underlying host that's running the container.

![\[Diagram showing architecture of a network with containers using the host network mode.\]](http://docs.aws.amazon.com/AmazonECS/latest/developerguide/images/networkmode-host.png)


Assume that you're running a Node.js container with an Express application that listens on port `3000`, similar to the one illustrated in the preceding diagram. When the `host` network mode is used, the container receives traffic on port `3000` using the IP address of the underlying host Amazon EC2 instance. We don't recommend using this mode.

There are significant drawbacks to using this network mode. You can't run more than a single instantiation of a task on each host, because only the first task can bind to its required port on the Amazon EC2 instance. There's also no way to remap a container port when you use the `host` network mode. For example, if an application needs to listen on a particular port number, you can't remap that port number directly. Instead, you must manage any port conflicts by changing the application configuration.

There are also security implications when using the `host` network mode. This mode allows containers to impersonate the host, and it allows containers to connect to private loopback network services on the host.
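To illustrate, a task definition fragment for the Node.js example might specify the `host` network mode as follows. The container name and image are placeholders, not real resources.

```
{
  "networkMode": "host",
  "containerDefinitions": [
    {
      "name": "express-app",
      "image": "my-express-app:latest",
      "portMappings": [
        {
          "containerPort": 3000,
          "protocol": "tcp"
        }
      ]
    }
  ]
}
```

In the `host` network mode, the container port is exposed directly on the instance, so the host port, if specified, must match the container port.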

# Use Docker's virtual network for Amazon ECS Linux tasks
<a name="networking-networkmode-bridge"></a>

The `bridge` network mode is only supported for Amazon ECS tasks hosted on Amazon EC2 instances.

With `bridge` mode, you're using a virtual network bridge to create a layer between the host and the networking of the container. This way, you can create port mappings that remap a host port to a container port. The mappings can be either static or dynamic.

![\[Diagram showing architecture of a network using bridge network mode with static port mapping.\]](http://docs.aws.amazon.com/AmazonECS/latest/developerguide/images/networkmode-bridge.png)


With a static port mapping, you explicitly define which host port to map to a container port. In the preceding example, port `80` on the host is mapped to port `3000` on the container. To communicate with the containerized application, you send traffic on port `80` to the Amazon EC2 instance's IP address. From the containerized application's perspective, the inbound traffic arrives on port `3000`.
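In a task definition that uses the `bridge` network mode, this static mapping could be expressed as the following `portMappings` fragment (other container settings omitted):

```
"portMappings": [
  {
    "containerPort": 3000,
    "hostPort": 80,
    "protocol": "tcp"
  }
]
```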

If you only want to change the port that receives traffic, a static port mapping is suitable. However, it still has the same disadvantage as the `host` network mode: you can't run more than a single instantiation of a task on each host, because a static port mapping allows only a single container to be mapped to port `80`.

To solve this problem, consider using the `bridge` network mode with a dynamic port mapping as shown in the following diagram.

![\[Diagram showing architecture of a network using bridge network mode with dynamic port mapping.\]](http://docs.aws.amazon.com/AmazonECS/latest/developerguide/images/networkmode-bridge-dynamic.png)


By not specifying a host port in the port mapping, you can have Docker choose a random, unused port from the ephemeral port range and assign it as the host port for the container. For example, the Node.js application that listens on port `3000` in the container might be assigned a random high-numbered port, such as `47760`, on the Amazon EC2 host. This means that you can run multiple copies of that container on the host, with each container assigned its own port on the host. Each copy of the container receives traffic on port `3000`, but clients that send traffic to these containers use the randomly assigned host ports.
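A dynamic mapping can be sketched by setting `hostPort` to `0`, or omitting it, in the port mapping, which leaves the host port choice to Docker:

```
"portMappings": [
  {
    "containerPort": 3000,
    "hostPort": 0,
    "protocol": "tcp"
  }
]
```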

Amazon ECS helps you keep track of the randomly assigned ports for each task by automatically updating load balancer target groups and AWS Cloud Map service discovery with the list of task IP addresses and ports. This makes it easier to use services that operate in `bridge` mode with dynamic ports.

However, one disadvantage of the `bridge` network mode is that it's difficult to lock down service-to-service communication. Because services might be assigned any random, unused port, you must open broad port ranges between hosts, and it isn't easy to create specific rules so that a particular service can communicate with only one other specific service. The services have no specific ports to use in security group networking rules.

## Configuring bridge networking mode for IPv6-only workloads
<a name="networking-networkmode-bridge-ipv6-only"></a>

To configure `bridge` mode for communication over IPv6, you must update Docker daemon settings. Update `/etc/docker/daemon.json` with the following:

```
{
  "ipv6": true,
  "fixed-cidr-v6": "2001:db8:1::/64",
  "ip6tables": true,
  "experimental": true
}
```

After you update the Docker daemon settings, restart the daemon.
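On instances where `systemd` manages Docker, such as the Amazon ECS-optimized Amazon Linux AMIs, you can typically restart the daemon with the following command:

```
sudo systemctl restart docker
```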

**Note**  
When you update and restart the daemon, Docker enables IPv6 forwarding on the instance, which can result in the loss of default routes on instances that use an Amazon Linux 2 AMI. To avoid this, use the following command to add a default route via the subnet's IPv6 gateway.  

```
ip route add default via FE80:EC2::1 dev eth0 metric 100
```

# Amazon ECS task networking options for Fargate
<a name="fargate-task-networking"></a>

By default, every Amazon ECS task on Fargate is provided an elastic network interface (ENI) with a primary private IP address. When using a public subnet, you can optionally assign a public IP address to the task's ENI. If your VPC is configured for dual-stack mode and you use a subnet with an IPv6 CIDR block, your task's ENI also receives an IPv6 address. A task can only have one ENI that's associated with it at a time. Containers that belong to the same task can also communicate over the `localhost` interface. For more information about VPCs and subnets, see [How Amazon VPC works](https://docs.aws.amazon.com/vpc/latest/userguide/how-it-works.html) in the *Amazon VPC User Guide*.

For a task on Fargate to pull a container image, the task must have a route to the internet. The following are ways to give your task the network access that it needs to pull images:
+ When using a public subnet, you can assign a public IP address to the task ENI.
+ When using a private subnet, the subnet can have a NAT gateway attached.
+ When using container images that are hosted in Amazon ECR, you can configure Amazon ECR to use an interface VPC endpoint and the image pull occurs over the task's private IPv4 address. For more information, see [Amazon ECR interface VPC endpoints (AWS PrivateLink)](https://docs.aws.amazon.com/AmazonECR/latest/userguide/vpc-endpoints.html) in the *Amazon Elastic Container Registry User Guide*.
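For example, when you run a Fargate task in a public subnet, you can request a public IP address through the `networkConfiguration`. The subnet and security group IDs shown here are placeholders for your own resources.

```
{
  "awsvpcConfiguration": {
    "subnets": ["subnet-0123456789abcdef0"],
    "securityGroups": ["sg-0123456789abcdef0"],
    "assignPublicIp": "ENABLED"
  }
}
```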

Because each task gets its own ENI, you can use networking features such as VPC Flow Logs, which you can use to monitor traffic to and from your tasks. For more information, see [VPC Flow Logs](https://docs.aws.amazon.com/vpc/latest/userguide/flow-logs.html) in the *Amazon VPC User Guide*.

You can also take advantage of AWS PrivateLink. You can configure a VPC interface endpoint so that you can access Amazon ECS APIs through private IP addresses. AWS PrivateLink restricts all network traffic between your VPC and Amazon ECS to the Amazon network. You don't need an internet gateway, a NAT device, or a virtual private gateway. For more information, see [Amazon ECS interface VPC endpoints (AWS PrivateLink)](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/vpc-endpoints.html).

For examples of how to use the `NetworkConfiguration` resource with CloudFormation, see [CloudFormation example templates for Amazon ECS](working-with-templates.md).

The ENIs that are created are fully managed by AWS Fargate. Moreover, there's an associated IAM policy that's used to grant permissions for Fargate. For tasks using Fargate platform version `1.4.0` or later, the task receives a single ENI (referred to as the task ENI) and all network traffic flows through that ENI within your VPC. This traffic is recorded in your VPC flow logs. For tasks that use Fargate platform version `1.3.0` and earlier, in addition to the task ENI, the task also receives a separate Fargate owned ENI, which is used for some network traffic that isn't visible in the VPC flow logs. The following table describes the network traffic behavior and the required IAM policy for each platform version.


|  Action  |  Traffic flow with Linux platform version `1.3.0` and earlier  |  Traffic flow with Linux platform version `1.4.0`  |  Traffic flow with Windows platform version `1.0.0`  |  IAM permission  | 
| --- | --- | --- | --- | --- | 
|  Retrieving Amazon ECR login credentials  |  Fargate owned ENI  |  Task ENI  |  Task ENI  |  Task execution IAM role  | 
|  Image pull  |  Task ENI  |  Task ENI  |  Task ENI  |  Task execution IAM role  | 
|  Sending logs through a log driver  |  Task ENI  |  Task ENI  |  Task ENI  |  Task execution IAM role  | 
|  Sending logs through FireLens for Amazon ECS  |  Task ENI  |  Task ENI  |  Task ENI  |  Task IAM role  | 
|  Retrieving secrets from Secrets Manager or Systems Manager  |  Fargate owned ENI  |  Task ENI  |  Task ENI  |  Task execution IAM role  | 
|  Amazon EFS file system traffic  |  Not available  |  Task ENI  |  Task ENI  |  Task IAM role  | 
|  Application traffic  |  Task ENI  |  Task ENI  |  Task ENI  |  Task IAM role  | 

## Considerations
<a name="fargate-task-networking-considerations"></a>

Consider the following when using task networking.
+ The Amazon ECS service-linked role is required to provide Amazon ECS with the permissions to make calls to other AWS services on your behalf. This role is created for you when you create a cluster or if you create or update a service in the AWS Management Console. For more information, see [Using service-linked roles for Amazon ECS](using-service-linked-roles.md). You can also create the service-linked role using the following AWS CLI command.

  ```
  aws iam [create-service-linked-role](https://docs.aws.amazon.com/cli/latest/reference/iam/create-service-linked-role.html) --aws-service-name ecs.amazonaws.com
  ```
+ Amazon ECS populates the hostname of the task with an Amazon provided DNS hostname when both the `enableDnsHostnames` and `enableDnsSupport` options are enabled on your VPC. If these options aren't enabled, the DNS hostname of the task is set to a random hostname. For more information about the DNS settings for a VPC, see [Using DNS with Your VPC](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-dns.html) in the *Amazon VPC User Guide*.
+ You can only specify up to 16 subnets and 5 security groups for `awsVpcConfiguration`. For more information, see [AwsVpcConfiguration](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_AwsVpcConfiguration.html) in the *Amazon Elastic Container Service API Reference*.
+ You can't manually detach or modify the ENIs that are created and attached by Fargate. This is to prevent the accidental deletion of an ENI that's associated with a running task. To release the ENIs for a task, stop the task.
+ If a VPC subnet is updated to change the DHCP options set it uses, you can't also apply these changes to existing tasks that use the VPC. Start new tasks, which will receive the new setting to smoothly migrate while testing the new change and then stop the old ones, if no rollback is required.
+ The following applies to tasks run on Fargate platform version `1.4.0` or later for Linux or `1.0.0` for Windows. Tasks launched in dual-stack subnets receive an IPv4 address and an IPv6 address. Tasks launched in IPv6-only subnets receive only an IPv6 address.
+ For tasks that use platform version `1.4.0` or later for Linux or `1.0.0` for Windows, the task ENIs support jumbo frames. Network interfaces are configured with a maximum transmission unit (MTU), which is the size of the largest payload that fits within a single frame. The larger the MTU, the more application payload can fit within a single frame, which reduces per-frame overhead and increases efficiency. Supporting jumbo frames reduces overhead when the network path between your task and the destination supports jumbo frames.
+ Services with tasks that use Fargate only support Application Load Balancer and Network Load Balancer. Classic Load Balancer isn't supported. When you create any target groups, you must choose `ip` as the target type, not `instance`. For more information, see [Use load balancing to distribute Amazon ECS service traffic](service-load-balancing.md).

## Using a VPC in dual-stack mode
<a name="fargate-task-networking-vpc-dual-stack"></a>

When using a VPC in dual-stack mode, your tasks can communicate over IPv4 or IPv6, or both. IPv4 and IPv6 addresses are independent of each other and you must configure routing and security in your VPC separately for IPv4 and IPv6. For more information about configuring your VPC for dual-stack mode, see [Migrating to IPv6](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-migrate-ipv6.html) in the *Amazon VPC User Guide*.

If the following conditions are met, Amazon ECS tasks on Fargate are assigned an IPv6 address:
+ Your Amazon ECS `dualStackIPv6` account setting is turned on (`enabled`) for the IAM principal launching your tasks in the Region you're launching your tasks in. This setting can only be modified using the API or AWS CLI. You have the option to turn this setting on for a specific IAM principal on your account or for your entire account by setting your account default setting. For more information, see [Access Amazon ECS features with account settings](ecs-account-settings.md).
+ Your VPC and subnet are enabled for IPv6. For more information about how to configure your VPC for dual-stack mode, see [Migrating to IPv6](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-migrate-ipv6.html) in the *Amazon VPC User Guide*.
+ Your subnet is enabled for auto-assigning IPv6 addresses. For more information about how to configure your subnet, see [Modify the IPv6 addressing attribute for your subnet](https://docs.aws.amazon.com/vpc/latest/userguide/modify-subnets.html) in the *Amazon VPC User Guide*.
+ The task or service uses Fargate platform version `1.4.0` or later for Linux.

For Amazon ECS tasks on Fargate running in a VPC in dual-stack mode, to communicate with dependency services used in task launch process such as ECR, SSM and SecretManager, the public subnet's route table needs IPv4 (0.0.0.0/0) route to an internet gateway and the private subnet's route table needs IPv4 (0.0.0.0/0) route to an NAT gateway. For more information, see [Internet gateways](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Internet_Gateway.html) and [NAT gateways](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html) in the *Amazon VPC User Guide*. 

For examples of how to configure a dual-stack VPC, see [ Example dual-stack VPC configuration](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-migrate-ipv6-example.html). 

## Using a VPC in IPv6-only mode
<a name="fargate-task-networking-vpc-ipv6-only"></a>

In an IPv6-only configuration, your Amazon ECS tasks communicate exclusively over IPv6. To set up VPCs and subnets for an IPv6-only configuration, you must add an IPv6 CIDR block to the VPC and create subnets that include only an IPv6 CIDR block. For more information see [Add IPv6 support for your VPC](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-migrate-ipv6-add.html) and [Create a subnet](https://docs.aws.amazon.com/vpc/latest/userguide/create-subnets.html) in the *Amazon VPC User Guide*. You must also update route tables with IPv6 targets and configure security groups with IPv6 rules. For more information, see [Configure route tables](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Route_Tables.html) and [Configure security group rules](https://docs.aws.amazon.com/vpc/latest/userguide/working-with-security-group-rules.html) in the *Amazon VPC User Guide*.

The following considerations apply:
+ You can update an IPv4-only or dualstack Amazon ECS service to an IPv6-only configuration by either updating the service directly to use IPv6-only subnets or by creating a parallel IPv6-only service and using Amazon ECS blue-green deployments to shift traffic to the new service. For more information about Amazon ECS blue-green deployments, see [Amazon ECS blue/green deployments](deployment-type-blue-green.md).
+ An IPv6-only Amazon ECS service must use dualstack load balancers with IPv6 target groups. If you're migrating an existing Amazon ECS service that's behind a Application Load Balancer or a Network Load Balancer, you can create a new dualstack load balancer and shift traffic from the old load balancer, or update the IP address type of the existing load balancer.

   For more information about Network Load Balancers, see [Create a Network Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/create-network-load-balancer.html) and [Update the IP address types for your Network Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-ip-address-type.html) in the *User Guide for Network Load Balancers*. For more information about Application Load Balancers, see [Create an Application Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/create-application-load-balancer.html) and [Update the IP address types for your Application Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-ip-address-type.html) in the *User Guide for Application Load Balancers*.
+ IPv6-only configuration isn't supported on Windows.
+ For Amazon ECS tasks in an IPv6-only configuration to communicate with IPv4-only endpoints, you can set up DNS64 and NAT64 for network address translation from IPv6 to IPv4. For more information, see [DNS64 and NAT64](https://docs.aws.amazon.com/vpc/latest/userguide/nat-gateway-nat64-dns64.html) in the *Amazon VPC User Guide*.
+ IPv6-only configuration is supported on Fargate platform version `1.4.0` or later.
+ Amazon ECS workloads in an IPv6-only configuration must use Amazon ECR dualstack image URI endpoints when pulling images from Amazon ECR. For more information, see [Getting started with making requests over IPv6](https://docs.aws.amazon.com/AmazonECR/latest/userguide/ecr-requests.html#ipv6-access-getting-started) in the *Amazon Elastic Container Registry User Guide*.
**Note**  
Amazon ECR doesn't support dualstack interface VPC endpoints that tasks in an IPv6-only configuration can use. For more information, see [Getting started with making requests over IPv6](https://docs.aws.amazon.com/AmazonECR/latest/userguide/ecr-requests.html#ipv6-access-getting-started) in the *Amazon Elastic Container Registry User Guide*.
+ Amazon ECS Exec isn't supported in an IPv6-only configuration.
+ Amazon CloudWatch doesn't support a dualstack FIPS endpoint that can be used to monitor Amazon ECS tasks in IPv6-only configuration that use FIPS-140 compliance. For more information about FIPS-140, see [AWS Fargate Federal Information Processing Standard (FIPS-140)](ecs-fips-compliance.md).

### AWS Regions that support IPv6-only mode for Amazon ECS
<a name="fargate-task-networking-ipv6-only-regions"></a>

You can run tasks in an IPv6-only configuration in the following AWS Regions that Amazon ECS is available in:
+ US East (Ohio)
+ US East (N. Virginia)
+ US West (N. California)
+ US West (Oregon)
+ Africa (Cape Town)
+ Asia Pacific (Hong Kong)
+ Asia Pacific (Hyderabad)
+ Asia Pacific (Jakarta)
+ Asia Pacific (Melbourne)
+ Asia Pacific (Mumbai)
+ Asia Pacific (Osaka)
+ Asia Pacific (Seoul)
+ Asia Pacific (Singapore)
+ Asia Pacific (Sydney)
+ Asia Pacific (Tokyo)
+ Canada (Central)
+ Canada West (Calgary)
+ China (Beijing)
+ China (Ningxia)
+ Europe (Frankfurt)
+ Europe (London)
+ Europe (Milan)
+ Europe (Paris)
+ Europe (Spain)
+ Israel (Tel Aviv)
+ Middle East (Bahrain)
+ Middle East (UAE)
+ South America (São Paulo)
+ AWS GovCloud (US-East)
+ AWS GovCloud (US-West)

# Storage options for Amazon ECS tasks
<a name="using_data_volumes"></a>

Amazon ECS provides you with flexible, cost effective, and easy-to-use data storage options depending on your needs. Amazon ECS supports the following data volume options for containers:


| Data volume | Supported capacity | Supported operating systems | Storage persistence | Use cases | 
| --- | --- | --- | --- | --- | 
| Amazon Elastic Block Store (Amazon EBS) | Fargate, Amazon EC2, Amazon ECS Managed Instances | Linux, Windows (on Amazon EC2 only) | Can be persisted when attached to a standalone task. Ephemeral when attached to a task maintained by a service. | Amazon EBS volumes provide cost-effective, durable, high-performance block storage for data-intensive containerized workloads. Common use cases include transactional workloads such as databases, virtual desktops and root volumes, and throughput intensive workloads such as log processing and ETL workloads. For more information, see [Use Amazon EBS volumes with Amazon ECS](ebs-volumes.md). | 
| Amazon Elastic File System (Amazon EFS) | Fargate, Amazon EC2, Amazon ECS Managed Instances | Linux | Persistent | Amazon EFS volumes provide simple, scalable, and persistent shared file storage for use with your Amazon ECS tasks that grows and shrinks automatically as you add and remove files. Amazon EFS volumes support concurrency and are useful for containerized applications that scale horizontally and need storage functionalities like low latency, high throughput, and read-after-write consistency. Common use cases include workloads such as data analytics, media processing, content management, and web serving. For more information, see [Use Amazon EFS volumes with Amazon ECS](efs-volumes.md). | 
| Amazon FSx for Windows File Server | Amazon EC2 | Windows | Persistent | FSx for Windows File Server volumes provide fully managed Windows file servers that you can use to provision your Windows tasks that need persistent, distributed, shared, and static file storage. Common use cases include .NET applications that might require local folders as persistent storage to save application outputs. Amazon FSx for Windows File Server offers a local folder in the container which allows for multiple containers to read-write on the same file system that's backed by a SMB Share. For more information, see [Use FSx for Windows File Server volumes with Amazon ECS](wfsx-volumes.md). | 
| Amazon FSx for NetApp ONTAP | Amazon EC2 | Linux | Persistent | Amazon FSx for NetApp ONTAP volumes provide fully managed NetApp ONTAP file systems that you can use to provision your Linux tasks that need persistent, high-performance, and feature-rich shared file storage. Amazon FSx for NetApp ONTAP supports NFS and SMB protocols and provides enterprise-grade features like snapshots, cloning, and data deduplication. Common use cases include high-performance computing workloads, content repositories, and applications requiring POSIX-compliant shared storage. For more information, see [Mounting Amazon FSx for NetApp ONTAP file systems from Amazon ECS containers](https://docs.aws.amazon.com/fsx/latest/ONTAPGuide/mount-ontap-ecs-containers.html). | 
| Amazon S3 Files | Fargate, Amazon ECS Managed Instances | Linux | Persistent | Amazon S3 Files is a high-performance file system that provides fast, cached access to Amazon S3 data through a mountable file system interface. S3 Files volumes give your containers direct file-system access to data stored in S3 buckets. Common use cases include data analytics, machine learning training, and applications that need high-throughput access to S3 data. For more information, see [Configuring S3 Files for Amazon ECS](s3files-volumes.md). | 
| Docker volumes | Amazon EC2 | Windows, Linux | Persistent | Docker volumes are a feature of the Docker container runtime that allow containers to persist data by mounting a directory from the file system of the host. Docker volume drivers (also referred to as plugins) are used to integrate container volumes with external storage systems. Docker volumes can be managed by third-party drivers or by the built in local driver. Common use cases for Docker volumes include providing persistent data volumes or sharing volumes at different locations on different containers on the same container instance. For more information, see [Use Docker volumes with Amazon ECS](docker-volumes.md). | 
| Bind mounts | Fargate, Amazon EC2, Amazon ECS Managed Instances | Windows, Linux | Ephemeral | Bind mounts consist of a file or directory on the host, such as an Amazon EC2 instance or AWS Fargate, that is mounted onto a container. Common use cases for bind mounts include sharing a volume from a source container with other containers in the same task, or mounting a host volume or an empty volume in one or more containers. For more information, see [Use bind mounts with Amazon ECS](bind-mounts.md). | 

# Use Amazon EBS volumes with Amazon ECS
<a name="ebs-volumes"></a>

Amazon Elastic Block Store (Amazon EBS) volumes provide highly available, cost-effective, durable, high-performance block storage for data-intensive workloads. Amazon EBS volumes can be used with Amazon ECS tasks for high throughput and transaction-intensive applications. For more information about Amazon EBS volumes, see [Amazon EBS volumes](https://docs.aws.amazon.com/ebs/latest/userguide/ebs-volumes.html) in the *Amazon EBS User Guide*.

Amazon EBS volumes that are attached to Amazon ECS tasks are managed by Amazon ECS on your behalf. During standalone task launch, you can provide the configuration that will be used to attach one EBS volume to the task. During service creation or update, you can provide the configuration that will be used to attach one EBS volume per task to each task managed by the Amazon ECS service. You can either configure new, empty volumes for attachment, or you can use snapshots to load data from existing volumes.

**Note**  
When you use snapshots to configure volumes, you can specify a `volumeInitializationRate`, in MiB/s, at which data is fetched from the snapshot to create volumes that are fully initialized in a predictable amount of time. For more information about volume initialization, see [Initialize Amazon EBS volumes](https://docs.aws.amazon.com/ebs/latest/userguide/initalize-volume.html) in the *Amazon EBS User Guide*. For more information about configuring Amazon EBS volumes, see [Defer volume configuration to launch time in an Amazon ECS task definition](specify-ebs-config.md) and [Specify Amazon EBS volume configuration at Amazon ECS deployment](configure-ebs-volume.md).

Volume configuration is deferred to launch time using the `configuredAtLaunch` parameter in the task definition. By providing volume configuration at launch time rather than in the task definition, you get to create task definitions that aren't constrained to a specific data volume type or specific EBS volume settings. You can then reuse your task definitions across different runtime environments. For example, you can provide more throughput during deployment for your production workloads than your pre-prod environments.

 Amazon EBS volumes attached to tasks can be encrypted with AWS Key Management Service (AWS KMS) keys to protect your data. For more information see, [Encrypt data stored in Amazon EBS volumes attached to Amazon ECS tasks](ebs-kms-encryption.md).

To monitor your volume's performance, you can also use Amazon CloudWatch metrics. For more information about Amazon ECS metrics for Amazon EBS volumes, see [Amazon ECS CloudWatch metrics](available-metrics.md) and [Amazon ECS Container Insights metrics](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Container-Insights-metrics-ECS.html).

Attaching an Amazon EBS volume to a task is supported in all commercial and China [AWS Regions](https://docs.aws.amazon.com/glossary/latest/reference/glos-chap.html?icmpid=docs_homepage_addtlrcs#region) that support Amazon ECS.

## Supported operating systems and capacity
<a name="ebs-volumes-configuration"></a>

The following table provides the supported operating system and capacity configurations.


| Capacity | Linux  | Windows | 
| --- | --- | --- | 
| Fargate |  Amazon EBS volumes are supported on platform version 1.4.0 or later (Linux). For more information, see [Fargate platform versions for Amazon ECS](platform-fargate.md). | Not supported | 
| EC2 | Amazon EBS volumes are supported for tasks hosted on Nitro-based instances with Amazon ECS-optimized Amazon Machine Images (AMIs). For more information about instance types, see [Instance types](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html) in the Amazon EC2 User Guide. Amazon EBS volumes are supported on ECS-optimized AMI `20231219` or later. For more information, see [Retrieving Amazon ECS-Optimized AMI metadata](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/retrieve-ecs-optimized_AMI.html). | Tasks hosted on Nitro-based instances with Amazon ECS-optimized Amazon Machine Images (AMIs). For more information about instance types, see [Instance types](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html) in the Amazon EC2 User Guide. Amazon EBS volumes are supported on ECS-optimized AMI `20241017` or later. For more information, see [Retrieving Amazon ECS-Optimized Windows AMI metadata](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/retrieve-ecs-optimized_windows_AMI.html). | 
| Amazon ECS Managed Instances | Amazon EBS volumes are supported for tasks hosted on Amazon ECS Managed Instances on Linux. | Not supported | 

## Considerations
<a name="ebs-volume-considerations"></a>

 Consider the following when using Amazon EBS volumes:
+ You can't configure Amazon EBS volumes for attachment to Fargate Amazon ECS tasks in the `use1-az3` Availability Zone.
+ The magnetic (`standard`) Amazon EBS volume type is not supported for tasks hosted on Fargate. For more information about Amazon EBS volume types, see [Amazon EBS volumes](https://docs.aws.amazon.com/ebs/latest/userguide/ebs-volume-types.html) in the *Amazon EC2 User Guide*.
+ An Amazon ECS infrastructure IAM role is required when creating a service or a standalone task that is configuring a volume at deployment. You can attach the AWS managed `AmazonECSInfrastructureRolePolicyForVolumes` IAM policy to the role, or you can use the managed policy as a guide to create and attach your own policy with permissions that meet your specific needs. For more information, see [Amazon ECS infrastructure IAM role](infrastructure_IAM_role.md).
+ You can attach at most one Amazon EBS volume to each Amazon ECS task, and it must be a new volume. You can't attach an existing Amazon EBS volume to a task. However, you can configure a new Amazon EBS volume at deployment using the snapshot of an existing volume.
+ To use Amazon EBS volumes with Amazon ECS services, the deployment controller must be `ECS`. Both rolling and blue/green deployment strategies are supported when using this deployment controller.
+ For a container in your task to write to the mounted Amazon EBS volume, the container must have appropriate file system permissions. When you specify a non-root user in your container definition, Amazon ECS automatically configures the volume with group-based permissions that allow the specified user to read and write to the volume. If no user is specified, the container runs as root and has full access to the volume.
+ Amazon ECS automatically adds the reserved tags `AmazonECSCreated` and `AmazonECSManaged` to the attached volume. If you remove these tags from the volume, Amazon ECS won't be able to manage the volume on your behalf. For more information about tagging Amazon EBS volumes, see [Tagging Amazon EBS volumes](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/specify-ebs-config.html#ebs-volume-tagging). For more information about tagging Amazon ECS resources, see [Tagging your Amazon ECS resources](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-using-tags.html).
+ Provisioning volumes from a snapshot of an Amazon EBS volume that contains partitions isn't supported.
+ Volumes that are attached to tasks that are managed by a service aren't preserved and are always deleted upon task termination.
+ You can't configure Amazon EBS volumes for attachment to Amazon ECS tasks that are running on AWS Outposts.

# Non-root user behavior
<a name="ebs-non-root-behavior"></a>

When you specify a non-root user in your container definition, Amazon ECS automatically configures the Amazon EBS volume with group-based permissions that allow the specified user to read and write to the volume. The volume is mounted with the following characteristics:
+ The volume is owned by the root user and root group.
+ Group permissions are set to allow read and write access.
+ The non-root user is added to the appropriate group to access the volume.

Follow these best practices when using Amazon EBS volumes with non-root containers:
+ Use consistent user IDs (UIDs) and group IDs (GIDs) across your container images to ensure consistent permissions.
+ Pre-create mount point directories in your container image and set appropriate ownership and permissions.
+ Test your containers with Amazon EBS volumes in a development environment to confirm that file system permissions work as expected.
+ If multiple containers in the same task share a volume, ensure they either use compatible UIDs/GIDs or mount the volume with consistent access expectations.

# Defer volume configuration to launch time in an Amazon ECS task definition
<a name="specify-ebs-config"></a>

To configure an Amazon EBS volume for attachment to your task, you must specify the mount point configuration in your task definition and name the volume. You must also set `configuredAtLaunch` to `true` because Amazon EBS volumes can't be configured for attachment in the task definition. Instead, Amazon EBS volumes are configured for attachment during deployment.

To register the task definition by using the AWS Command Line Interface (AWS CLI), save the template as a JSON file, and then pass the file as an input for the `[register-task-definition](https://docs.aws.amazon.com/cli/latest/reference/ecs/register-task-definition.html)` command. 

To create and register a task definition using the AWS Management Console, see [Creating an Amazon ECS task definition using the console](create-task-definition.md).

The following task definition shows the syntax for the `mountPoints` and `volumes` objects in the task definition. For more information about task definition parameters, see [Amazon ECS task definition parameters for Fargate](task_definition_parameters.md). To use this example, replace the `user input placeholders` with your own information.

## Linux
<a name="linux-example"></a>

```
{
    "family": "mytaskdef",
    "containerDefinitions": [
        {
            "name": "nginx",
            "image": "public.ecr.aws/nginx/nginx:latest",
            "networkMode": "awsvpc",
           "portMappings": [
                {
                    "name": "nginx-80-tcp",
                    "containerPort": 80,
                    "hostPort": 80,
                    "protocol": "tcp",
                    "appProtocol": "http"
                }
            ],
            "mountPoints": [
                {
                    "sourceVolume": "myEBSVolume",
                    "containerPath": "/mount/ebs",
                    "readOnly": true
                }
            ]
        }
    ],
    "volumes": [
        {
            "name": "myEBSVolume",
            "configuredAtLaunch": true
        }
    ],
    "requiresCompatibilities": [
        "FARGATE", "EC2"
    ],
    "cpu": "1024",
    "memory": "3072",
    "networkMode": "awsvpc"
}
```

## Windows
<a name="windows-example"></a>

```
{
    "family": "mytaskdef",
     "memory": "4096",
     "cpu": "2048",
    "family": "windows-simple-iis-2019-core",
    "executionRoleArn": "arn:aws:iam::012345678910:role/ecsTaskExecutionRole",
    "runtimePlatform": {"operatingSystemFamily": "WINDOWS_SERVER_2019_CORE"},
    "requiresCompatibilities": ["EC2"]
    "containerDefinitions": [
        {
             "command": ["New-Item -Path C:\\inetpub\\wwwroot\\index.html -Type file -Value '<html> <head> <title>Amazon ECS Sample App</title> <style>body {margin-top: 40px; background-color: #333;} </style> </head><body> <div style=color:white;text-align:center> <h1>Amazon ECS Sample App</h1> <h2>Congratulations!</h2> <p>Your application is now running on a container in Amazon ECS.</p>'; C:\\ServiceMonitor.exe w3svc"],
            "entryPoint": [
                "powershell",
                "-Command"
            ],
            "essential": true,
            "cpu": 2048,
            "memory": 4096,
            "image": "mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019",
            "name": "sample_windows_app",
            "portMappings": [
                {
                    "hostPort": 443,
                    "containerPort": 80,
                    "protocol": "tcp"
                }
            ],
            "mountPoints": [
                {
                    "sourceVolume": "myEBSVolume",
                    "containerPath": "drive:\ebs",
                    "readOnly": true
                }
            ]
        }
    ],
    "volumes": [
        {
            "name": "myEBSVolume",
            "configuredAtLaunch": true
        }
    ],
    "requiresCompatibilities": [
        "FARGATE", "EC2"
    ],
    "cpu": "1024",
    "memory": "3072",
    "networkMode": "awsvpc"
}
```

`mountPoints`  
Type: Object array  
Required: No  
The mount points for the data volumes in your container. This parameter maps to `Volumes` in the create-container Docker API and the `--volume` option to docker run.  
Windows containers can mount whole directories on the same drive as `$env:ProgramData`. Windows containers cannot mount directories on a different drive, and mount points cannot be used across drives. You must specify mount points to attach an Amazon EBS volume directly to an Amazon ECS task.    
`sourceVolume`  
Type: String  
Required: Yes, when `mountPoints` are used  
The name of the volume to mount.  
`containerPath`  
Type: String  
Required: Yes, when `mountPoints` are used  
The path in the container where the volume will be mounted.  
`readOnly`  
Type: Boolean  
Required: No  
If this value is `true`, the container has read-only access to the volume. If this value is `false`, then the container can write to the volume. The default value is `false`.  
For tasks that run on EC2 instances running the Windows operating system, leave the value as the default of `false`.

`name`  
Type: String  
Required: No  
The name of the volume. Up to 255 letters (uppercase and lowercase), numbers, hyphens (`-`), and underscores (`_`) are allowed. This name is referenced in the `sourceVolume` parameter of the container definition `mountPoints` object.

`configuredAtLaunch`  
Type: Boolean  
Required: Yes, when you want to use attach an EBS volume directly to a task.  
Specifies whether a volume is configurable at launch. When set to `true`, you can configure the volume when you run a standalone task, or when you create or update a service. When set to `false`, you won't be able to provide another volume configuration in the task definition. This parameter must be provided and set to `true` to configure an Amazon EBS volume for attachment to a task.

# Encrypt data stored in Amazon EBS volumes attached to Amazon ECS tasks
<a name="ebs-kms-encryption"></a>

You can use AWS Key Management Service (AWS KMS) to make and manage cryptographic keys that protect your data. Amazon EBS volumes are encrypted at rest by using AWS KMS keys. The following types of data are encrypted:
+ Data stored at rest on the volume
+ Disk I/O
+ Snapshots created from the volume
+ New volumes created from encrypted snapshots

Amazon EBS volumes that are attached to tasks can be encrypted by using either a default AWS managed key with the alias `alias/aws/ebs`, or a symmetric customer managed key specified in the volume configuration. Default AWS managed keys are unique to each AWS account per AWS Region and are created automatically. To create a symmetric customer managed key, follow the steps in [Creating symmetric encryption KMS keys](https://docs.aws.amazon.com/kms/latest/developerguide/create-keys.html#create-symmetric-cmk) in the *AWS KMS Developer Guide*.

You can configure Amazon EBS encryption by default so that all new volumes created and attached to a task in a specific AWS Region are encrypted by using the KMS key that you specify for your account. For more information about Amazon EBS encryption and encryption by default, see [Amazon EBS encryption](https://docs.aws.amazon.com/ebs/latest/userguide/ebs-encryption.html) in the *Amazon EBS User Guide*.

## Amazon ECS Managed Instances behavior
<a name="managed-instances"></a>

You encrypt Amazon EBS volumes by enabling encryption, either by using encryption by default or by enabling encryption when you create a volume that you want to encrypt. For information about how to enable encryption by default (at the account level), see [Encryption by default](https://docs.aws.amazon.com/ebs/latest/userguide/encryption-by-default.html) in the *Amazon EBS User Guide*.

You can configure any combination of these keys. The order of precedence of KMS keys is as follows:

1. The KMS key specified in the volume configuration. When you specify a KMS key in the volume configuration, it overrides the Amazon EBS default and any KMS key that is specified at the account level.

1. The KMS key specified at the account level. When you specify a KMS key for cluster-level encryption of Amazon ECS managed storage, it overrides Amazon EBS default encryption but does not override any KMS key that is specified in the volume configuration.

1. Amazon EBS default encryption. Default encryption applies when you don't specify either an account-level KMS key or a key in the volume configuration. If you enable Amazon EBS encryption by default, the default is the KMS key you specify for encryption by default. Otherwise, the default is the AWS managed key with the alias `alias/aws/ebs`.
**Note**  
If you set `encrypted` to `false` in your volume configuration, specify no account-level KMS key, and enable Amazon EBS encryption by default, the volume will still be encrypted with the key specified for Amazon EBS encryption by default.

## Non-Amazon ECS Managed Instances behavior
<a name="non-managed-instances"></a>

You can also set up Amazon ECS cluster-level encryption for Amazon ECS managed storage when you create or update a cluster. Cluster-level encryption takes effect at the task level and can be used to encrypt the Amazon EBS volumes attached to each task running in a specific cluster by using the specified KMS key. For more information about configuring encryption at the cluster level for each task, see [ManagedStorageConfiguration](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_ManagedStorageConfiguration.html) in the *Amazon ECS API reference*.

You can configure any combination of these keys. The order of precedence of KMS keys is as follows:

1. The KMS key specified in the volume configuration. When you specify a KMS key in the volume configuration, it overrides the Amazon EBS default and any KMS key that is specified at the cluster level.

1. The KMS key specified at the cluster level. When you specify a KMS key for cluster-level encryption of Amazon ECS managed storage, it overrides Amazon EBS default encryption but does not override any KMS key that is specified in the volume configuration.

1. Amazon EBS default encryption. Default encryption applies when you don't specify either a cluster-level KMS key or a key in the volume configuration. If you enable Amazon EBS encryption by default, the default is the KMS key you specify for encryption by default. Otherwise, the default is the AWS managed key with the alias `alias/aws/ebs`.
**Note**  
If you set `encrypted` to `false` in your volume configuration, specify no cluster-level KMS key, and enable Amazon EBS encryption by default, the volume will still be encrypted with the key specified for Amazon EBS encryption by default.

## Customer managed KMS key policy
<a name="ebs-kms-encryption-policy"></a>

To encrypt an EBS volume that's attached to your task by using a customer managed key, you must configure your KMS key policy to ensure that the IAM role that you use for volume configuration has the necessary permissions to use the key. The key policy must include the `kms:CreateGrant` and `kms:GenerateDataKey*` permissions. The `kms:ReEncryptTo` and `kms:ReEncryptFrom` permissions are necessary for encrypting volumes that are created using snapshots. If you want to configure and encrypt only new, empty volumes for attachment, you can exclude the `kms:ReEncryptTo` and `kms:ReEncryptFrom` permissions. 

The following JSON snippet shows key policy statements that you can attach to your KMS key policy. Using these statements will provide access for Amazon ECS to use the key for encrypting the EBS volume. To use the example policy statements, replace the `user input placeholders` with your own information. As always, only configure the permissions that you need.

```
{
  "Effect": "Allow",
  "Principal": { "AWS": "arn:aws:iam::111122223333:role/ecsInfrastructureRole" },
  "Action": "kms:DescribeKey",
  "Resource": "*"
},
{
  "Effect": "Allow",
  "Principal": { "AWS": "arn:aws:iam::111122223333:role/ecsInfrastructureRole" },
  "Action": [
    "kms:GenerateDataKey*",
    "kms:ReEncryptTo",
    "kms:ReEncryptFrom"
  ],
  "Resource": "*",
  "Condition": {
    "StringEquals": {
      "kms:CallerAccount": "aws_account_id",
      "kms:ViaService": "ec2.region.amazonaws.com"
    },
    "ForAnyValue:StringEquals": {
      "kms:EncryptionContextKeys": "aws:ebs:id"
    }
  }
},
{
  "Effect": "Allow",
  "Principal": { "AWS": "arn:aws:iam::111122223333:role/ecsInfrastructureRole" },
  "Action": "kms:CreateGrant",
  "Resource": "*",
  "Condition": {
    "StringEquals": {
      "kms:CallerAccount": "aws_account_id",
      "kms:ViaService": "ec2.region.amazonaws.com"
    },
    "ForAnyValue:StringEquals": {
      "kms:EncryptionContextKeys": "aws:ebs:id"
    },
    "Bool": {
      "kms:GrantIsForAWSResource": true
    }
  }
}
```

For more information about key policies and permissions, see [Key policies in AWS KMS](https://docs.aws.amazon.com/kms/latest/developerguide/key-policies.html) and [AWS KMS permissions](https://docs.aws.amazon.com/kms/latest/developerguide/kms-api-permissions-reference.html) in the *AWS KMS Developer Guide*. For troubleshooting EBS volume attachment issues related to key permissions, see [Troubleshooting Amazon EBS volume attachments to Amazon ECS tasks](troubleshoot-ebs-volumes.md).

# Specify Amazon EBS volume configuration at Amazon ECS deployment
<a name="configure-ebs-volume"></a>

After you register a task definition with the `configuredAtLaunch` parameter set to `true`, you can configure an Amazon EBS volume at deployment when you run a standalone task, or when you create or update a service. For more information about deferring volume configuration to launch time using the `configuredAtLaunch` parameter, see [Defer volume configuration to launch time in an Amazon ECS task definition](specify-ebs-config.md).

To configure a volume, you can use the Amazon ECS APIs, or you can pass a JSON file as input for the following AWS CLI commands:
+ `[run-task](https://docs.aws.amazon.com/cli/latest/reference/ecs/run-task.html)` to run a standalone ECS task.
+ `[start-task](https://docs.aws.amazon.com/cli/latest/reference/ecs/start-task.html)` to run a standalone ECS task on a specific container instance. This command is not applicable for Fargate tasks.
+ `[create-service](https://docs.aws.amazon.com/cli/latest/reference/ecs/create-service.html)` to create a new ECS service.
+ `[update-service](https://docs.aws.amazon.com/cli/latest/reference/ecs/update-service.html)` to update an existing service.

**Note**  
For a container in your task to write to the mounted Amazon EBS volume, the container must have appropriate file system permissions. When you specify a non-root user in your container definition, Amazon ECS automatically configures the volume with group-based permissions that allow the specified user to read and write to the volume. If no user is specified, the container runs as root and has full access to the volume.
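
For example, a container definition that runs as a non-root user and mounts the volume might look like the following sketch (the names `app`, `my-app-image`, and `datadir`, and the `1001:1001` IDs, are placeholders):

```
{
    "name": "app",
    "image": "my-app-image",
    "user": "1001:1001",
    "mountPoints": [
        {
            "sourceVolume": "datadir",
            "containerPath": "/data",
            "readOnly": false
        }
    ]
}
```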

 You can also configure an Amazon EBS volume by using the AWS Management Console. For more information, see [Running an application as an Amazon ECS task](standalone-task-create.md), [Creating an Amazon ECS rolling update deployment](create-service-console-v2.md), and [Updating an Amazon ECS service](update-service-console-v2.md).

The following JSON snippet shows all the parameters of an Amazon EBS volume that can be configured at deployment. To use these parameters for volume configuration, replace the `user input placeholders` with your own information. For more information about these parameters, see [Volume configurations](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service_definition_parameters.html#sd-volumeConfigurations).

```
"volumeConfigurations": [
        {
            "name": "ebs-volume", 
            "managedEBSVolume": {
                "encrypted": true, 
                "kmsKeyId": "arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab", 
                "volumeType": "gp3", 
                "sizeInGiB": 10, 
                "snapshotId": "snap-12345", 
                "volumeInitializationRate":100,
                "iops": 3000, 
                "throughput": 125, 
                "tagSpecifications": [
                    {
                        "resourceType": "volume", 
                        "tags": [
                            {
                                "key": "key1", 
                                "value": "value1"
                            }
                        ], 
                        "propagateTags": "NONE"
                    }
                ], 
                "roleArn": "arn:aws:iam::111122223333:role/ecsInfrastructureRole",
                "terminationPolicy": {
                    "deleteOnTermination": true
                },
                "filesystemType": "ext4"
            }
        }
    ]
```

**Important**  
Ensure that the volume `name` you specify in the configuration matches the volume name in your task definition. For tasks managed by a service, `deleteOnTermination` is always `true` and can't be configured.

For information about checking the status of volume attachment, see [Troubleshooting Amazon EBS volume attachments to Amazon ECS tasks](troubleshoot-ebs-volumes.md). For information about the Amazon ECS infrastructure AWS Identity and Access Management (IAM) role necessary for EBS volume attachment, see [Amazon ECS infrastructure IAM role](infrastructure_IAM_role.md).

The following are JSON snippet examples that show the configuration of Amazon EBS volumes. These examples can be used by saving the snippets in JSON files and passing the files as parameters (using the `--cli-input-json file://filename` parameter) for AWS CLI commands. Replace the `user input placeholders` with your own information.

## Configure a volume for a standalone task
<a name="ebs-run-task"></a>

The following JSON snippet shows the syntax for configuring Amazon EBS volumes for attachment to a standalone task, including the `volumeType`, `sizeInGiB`, `encrypted`, and `kmsKeyId` settings. The configuration specified in the JSON file is used to create and attach an EBS volume to the standalone task.

```
{
   "cluster": "mycluster",
   "taskDefinition": "mytaskdef",
   "volumeConfigurations": [
        {
            "name": "datadir",
            "managedEBSVolume": {
                "volumeType": "gp3",
                "sizeInGiB": 100,
                "roleArn": "arn:aws:iam::111122223333:role/ecsInfrastructureRole",
                "encrypted": true,
                "kmsKeyId": "arn:aws:kms:region:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"
            }
        }
   ]
}
```

## Configure a volume at service creation
<a name="ebs-create-service"></a>

The following JSON snippet shows the syntax for configuring Amazon EBS volumes for attachment to tasks managed by a service. The volumes are created from the snapshot specified by the `snapshotId` parameter and initialized at a rate of 200 MiB/s. The configuration specified in the JSON file is used to create and attach an EBS volume to each task managed by the service.

```
{
   "cluster": "mycluster",
   "taskDefinition": "mytaskdef",
   "serviceName": "mysvc",
   "desiredCount": 2,
   "volumeConfigurations": [
        {
            "name": "myEbsVolume",
            "managedEBSVolume": {
              "roleArn": "arn:aws:iam::111122223333:role/ecsInfrastructureRole",
              "snapshotId": "snap-12345",
              "volumeInitializationRate": 200
            }
        }
   ]
}
```

## Configure a volume at service update
<a name="ebs-update-service"></a>

The following JSON snippet shows the syntax for updating a service that previously did not have Amazon EBS volumes configured for attachment to tasks. You must provide the ARN of a task definition revision with `configuredAtLaunch` set to `true`. The snippet configures the `volumeType`, `sizeInGiB`, `iops`, `throughput`, and `filesystemType` settings. This configuration is used to create and attach an EBS volume to each task managed by the service.

```
{
   "cluster": "mycluster",
   "taskDefinition": "mytaskdef",
   "service": "mysvc",
   "desiredCount": 2,
   "volumeConfigurations": [
        {
            "name": "myEbsVolume",
            "managedEBSVolume": {
              "roleArn": "arn:aws:iam::111122223333:role/ecsInfrastructureRole",
              "volumeType": "gp3",
              "sizeInGiB": 100,
              "iops": 3000,
              "throughput": 125,
              "filesystemType": "ext4"
            }
        }
   ]
}
```

### Configure a service to no longer use Amazon EBS volumes
<a name="ebs-service-disable-ebs"></a>

The following JSON snippet shows the syntax for updating a service to no longer use Amazon EBS volumes. You must provide the ARN of a task definition with `configuredAtLaunch` set to `false`, or a task definition without the `configuredAtLaunch` parameter. You must also provide an empty `volumeConfigurations` object.

```
{
   "cluster": "mycluster",
   "taskDefinition": "mytaskdef",
   "service": "mysvc",
   "desiredCount": 2,
   "volumeConfigurations": []
}
```

## Termination policy for Amazon EBS volumes
<a name="ebs-volume-termination-policy"></a>

When an Amazon ECS task terminates, Amazon ECS uses the `deleteOnTermination` value to determine whether the Amazon EBS volume that's associated with the terminated task should be deleted. By default, EBS volumes that are attached to tasks are deleted when the task is terminated. For standalone tasks, you can change this setting to instead preserve the volume upon task termination.

**Note**  
Volumes that are attached to tasks that are managed by a service are not preserved and are always deleted upon task termination.

## Tag Amazon EBS volumes
<a name="ebs-volume-tagging"></a>

You can tag Amazon EBS volumes by using the `tagSpecifications` object. Using the object, you can provide your own tags and set propagation of tags from the task definition or the service, depending on whether the volume is attached to a standalone task or a task in a service. The maximum number of tags that can be attached to a volume is 50.

**Important**  
Amazon ECS automatically attaches the `AmazonECSCreated` and `AmazonECSManaged` reserved tags to an Amazon EBS volume. This means you can control the attachment of a maximum of 48 additional tags to a volume. These additional tags can be user-defined, ECS-managed, or propagated tags.

If you want to add Amazon ECS managed tags to your volume, you must set `enableECSManagedTags` to `true` in your `UpdateService`, `CreateService`, `RunTask`, or `StartTask` call. If you turn on Amazon ECS managed tags, Amazon ECS automatically tags the volume with cluster and service information (`aws:ecs:clusterName` and `aws:ecs:serviceName`). For more information about tagging Amazon ECS resources, see [Tagging your Amazon ECS resources](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-using-tags.html).

The following JSON snippet shows the syntax for tagging each Amazon EBS volume that is attached to each task in a service with a user-defined tag. To use this example for creating a service, replace the `user input placeholders` with your own information.

```
{
   "cluster": "mycluster",
   "taskDefinition": "mytaskdef",
   "serviceName": "mysvc",
   "desiredCount": 2,
   "enableECSManagedTags": true,
   "volumeConfigurations": [
        {
            "name": "datadir",
            "managedEBSVolume": {
                "volumeType": "gp3",
                "sizeInGiB": 100,
                 "tagSpecifications": [
                    {
                        "resourceType": "volume", 
                        "tags": [
                            {
                                "key": "key1", 
                                "value": "value1"
                            }
                        ], 
                        "propagateTags": "NONE"
                    }
                ],
                "roleArn": "arn:aws:iam::111122223333:role/ecsInfrastructureRole",
                "encrypted": true,
                "kmsKeyId": "arn:aws:kms:region:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"
            }
        }
   ]
}
```

**Important**  
You must specify a `volume` resource type to tag Amazon EBS volumes.

# Performance of Amazon EBS volumes for Fargate on-demand tasks
<a name="ebs-fargate-performance-limits"></a>

The baseline Amazon EBS volume IOPS and throughput available for a Fargate on-demand task depends on the total CPU units you request for the task. If you request 0.25, 0.5, or 1 virtual CPU unit (vCPU) for your Fargate task, we recommend that you configure a General Purpose SSD volume (`gp2` or `gp3`) or a Hard Disk Drive (HDD) volume (`st1` or `sc1`). If you request more than 1 vCPU for your Fargate task, the following baseline performance limits apply to an Amazon EBS volume attached to the task. You may temporarily get higher EBS performance than the following limits. However, we recommend that you plan your workload based on these limits.


| CPU units requested (in vCPUs) | Baseline Amazon EBS IOPS (16 KiB I/O) | Baseline Amazon EBS throughput (in MiBps, 128 KiB I/O) | Baseline bandwidth (in Mbps) | 
| --- | --- | --- | --- | 
| 2 | 3,000 | 75 | 360 | 
| 4 | 5,000 | 120 | 1,150 | 
| 8 | 10,000 | 250 | 2,300 | 
| 16 | 15,000 | 500 | 4,500 | 

**Note**  
When you configure an Amazon EBS volume for attachment to a Fargate task, the Amazon EBS performance limit for the Fargate task is shared between the task's ephemeral storage and the attached volume.

# Performance of Amazon EBS volumes for EC2 tasks
<a name="ebs-fargate-performance-limits-ec2"></a>

Amazon EBS provides volume types, which differ in performance characteristics and price, so that you can tailor your storage performance and cost to the needs of your applications. For information about performance, including IOPS per volume and throughput per volume, see [Amazon EBS volume types](https://docs.aws.amazon.com/ebs/latest/userguide/ebs-volume-types.html) in the *Amazon Elastic Block Store User Guide*.

# Performance of Amazon EBS volumes for Amazon ECS Managed Instances tasks
<a name="ebs-managed-instances-performance"></a>

Amazon EBS provides volume types, which differ in performance characteristics and price, so that you can tailor your storage performance and cost to the needs of your applications. For information about performance, including IOPS per volume and throughput per volume, see [Amazon EBS volume types](https://docs.aws.amazon.com/ebs/latest/userguide/ebs-volume-types.html) in the *Amazon Elastic Block Store User Guide*.

# Troubleshooting Amazon EBS volume attachments to Amazon ECS tasks
<a name="troubleshoot-ebs-volumes"></a>

You might need to troubleshoot or verify the attachment of Amazon EBS volumes to Amazon ECS tasks.

## Check volume attachment status
<a name="troubleshoot-ebs-volumes-location"></a>

You can use the AWS Management Console to view the status of an Amazon EBS volume's attachment to an Amazon ECS task. If the task starts and the attachment fails, you'll also see a status reason that you can use to troubleshoot. The created volume will be deleted and the task will be stopped. For more information about status reasons, see [Status reasons for Amazon EBS volume attachment to Amazon ECS tasks](troubleshoot-ebs-volumes-scenarios.md).

**To view a volume's attachment status and status reason using the console**

1. Open the console at [https://console.aws.amazon.com/ecs/v2](https://console.aws.amazon.com/ecs/v2).

1. On the **Clusters** page, choose the cluster that your task is running in. The cluster's details page appears.

1. On the cluster's details page, choose the **Tasks** tab.

1. Choose the task that you want to view the volume attachment status for. You might need to use **Filter desired status** and choose **Stopped** if the task you want to examine has stopped.

1. On the task's details page, choose the **Volumes** tab. You will be able to see the attachment status of the Amazon EBS volume under **Attachment status**. If the volume fails to attach to the task, you can choose the status under **Attachment status** to display the cause of the failure.

You can also view a task's volume attachment status and associated status reason by using the [DescribeTasks](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_DescribeTasks.html) API.

## Service and task failures
<a name="service-task-failures"></a>

You might encounter service or task failures that aren't specific to Amazon EBS volumes but that can still affect volume attachment. For more information, see:
+ [Service event messages](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-event-messages.html)
+ [Stopped task error codes](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/stopped-task-error-codes.html)
+ [API failure reasons](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/api_failures_messages.html)

# Container can't write to Amazon EBS volume
<a name="troubleshoot-non-root-container"></a>

Non-root user without proper permissions  
When you specify a non-root user in your container definition, Amazon ECS automatically configures the volume with group-based permissions to allow write access. However, if you're still experiencing permission issues:  
+ Verify that the `user` parameter is correctly specified in your container definition using the format `uid:gid` (for example, `1001:1001`).
+ Ensure your container image doesn't override the user permissions after the volume is mounted.
+ Check that your application is running with the expected user ID by examining the container logs or using Amazon ECS Exec to inspect the running container.

Root user with permission issues  
If no user is specified in your container definition, the container runs as root and should have full access to the volume. If you're experiencing issues:  
+ Verify that the volume is properly mounted by checking the mount points inside the container.
+ Ensure the volume isn't configured as read-only in your mount point configuration.

Multi-container tasks with different users  
In tasks with multiple containers running as different users, Amazon ECS automatically manages group permissions to allow all specified users to write to the volume. If containers can't write:  
+ Verify that all containers requiring write access have the `user` parameter properly configured.
+ Check that the volume is mounted in all containers that need access to it.

For more information about configuring users in container definitions, see [Amazon ECS task definition parameters for Fargate](https://docs.aws.amazon.com/./task_definition_parameters.html).

# Status reasons for Amazon EBS volume attachment to Amazon ECS tasks
<a name="troubleshoot-ebs-volumes-scenarios"></a>

Use the following reference to fix issues that you might encounter in the form of status reasons in the AWS Management Console when you configure Amazon EBS volumes for attachment to Amazon ECS tasks. For more information on locating these status reasons in the console, see [Check volume attachment status](troubleshoot-ebs-volumes.md#troubleshoot-ebs-volumes-location).

ECS was unable to assume the configured ECS Infrastructure Role 'arn:aws:iam::*111122223333*:role/*ecsInfrastructureRole*'. Please verify that the role being passed has the proper trust relationship with Amazon ECS  
This status reason appears in the following scenarios.  
+  You provide an IAM role without the necessary trust policy attached. Amazon ECS can't access the Amazon ECS infrastructure IAM role that you provide if the role doesn't have the necessary trust policy. The task can get stuck in the `DEPROVISIONING` state. For more information about the necessary trust policy, see [Amazon ECS infrastructure IAM role](infrastructure_IAM_role.md).
+ Your IAM user doesn't have permission to pass the Amazon ECS infrastructure role to Amazon ECS. The task can get stuck in the `DEPROVISIONING` state. To avoid this problem, you can attach the `PassRole` permission to your user. For more information, see [Amazon ECS infrastructure IAM role](infrastructure_IAM_role.md).
+ Your IAM role doesn't have the necessary permissions for Amazon EBS volume attachment. The task can get stuck in the `DEPROVISIONING` state. For more information about the specific permissions necessary for attaching Amazon EBS volumes to tasks, see [Amazon ECS infrastructure IAM role](infrastructure_IAM_role.md).
You may also see this error message due to a delay in role propagation. If retrying to use the role after waiting for a few minutes doesn't fix the issue, you might have misconfigured the trust policy for the role.

ECS failed to set up the EBS volume. Encountered "IdempotentParameterMismatch: The client token you have provided is associated with a resource that is already deleted. Please use a different client token."  
The following AWS KMS key scenarios can lead to an `IdempotentParameterMismatch` message appearing:  
+ You specify a KMS key ARN, ID, or alias that isn't valid. In this scenario, the task might appear to launch successfully, but the task eventually fails because AWS authenticates the KMS key asynchronously. For more information, see [Amazon EBS encryption](https://docs.aws.amazon.com/ebs/latest/userguide/ebs-encryption.html) in the *Amazon EC2 User Guide*.
+ You provide a customer managed key that lacks the permissions that allow the Amazon ECS infrastructure IAM role to use the key for encryption. To avoid key-policy permission issues, see the example AWS KMS key policy in [Data encryption for Amazon EBS volumes](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ebs-volumes.html#ebs-kms-encryption).
You can set up Amazon EventBridge to send Amazon EBS volume events and Amazon ECS task state change events to a target, such as Amazon CloudWatch log groups. You can then use these events to identify the specific issue with the customer managed key that affected volume attachment. For more information, see:  
+  [How can I create a CloudWatch log group to use as a target for an EventBridge rule?](https://repost.aws/knowledge-center/cloudwatch-log-group-eventbridge) on AWS re:Post.
+ [Task state change events](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs_cwe_events.html#ecs_task_events).
+ [Amazon EventBridge events for Amazon EBS](https://docs.aws.amazon.com/ebs/latest/userguide/ebs-cloud-watch-events.html) in the *Amazon EBS User Guide*.

ECS timed out while configuring the EBS volume attachment to your Task.  
The following file system format scenarios result in this message.  
+ The file system format that you specify during configuration isn't compatible with the [task's operating system](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_RuntimePlatform.html).
+ You configure an Amazon EBS volume to be created from a snapshot, and the snapshot's file system format isn't compatible with the task's operating system. For volumes created from a snapshot, you must specify the same filesystem type that the volume was using when the snapshot was created.
You can use the Amazon ECS container agent logs to troubleshoot this message for EC2 tasks. For more information, see [Amazon ECS log file locations](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/logs.html) and [Amazon ECS log collector](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-logs-collector.html).
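
For example, when restoring from a snapshot that was taken of an XFS-formatted volume, the volume configuration would explicitly match that filesystem, as in the following sketch (the snapshot ID and role ARN are placeholders):

```
"managedEBSVolume": {
    "roleArn": "arn:aws:iam::111122223333:role/ecsInfrastructureRole",
    "snapshotId": "snap-12345",
    "filesystemType": "xfs"
}
```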

# Use Amazon EFS volumes with Amazon ECS
<a name="efs-volumes"></a>

Amazon Elastic File System (Amazon EFS) provides simple, scalable file storage for use with your Amazon ECS tasks. With Amazon EFS, storage capacity is elastic. It grows and shrinks automatically as you add and remove files, so your applications have the storage they need, when they need it.

You can use Amazon EFS file systems with Amazon ECS to export file system data across your fleet of container instances. That way, your tasks have access to the same persistent storage, no matter the instance on which they land. Your task definitions must reference volume mounts on the container instance to use the file system.

For a tutorial, see [Configuring Amazon EFS file systems for Amazon ECS using the console](tutorial-efs-volumes.md).

## Considerations
<a name="efs-volume-considerations"></a>

 Consider the following when using Amazon EFS volumes:
+ For tasks that run on EC2, Amazon EFS file system support was added as a public preview with Amazon ECS-optimized AMI version `20191212` with container agent version 1.35.0. However, Amazon EFS file system support entered general availability with Amazon ECS-optimized AMI version `20200319` with container agent version 1.38.0, which contained the Amazon EFS access point and IAM authorization features. We recommend that you use Amazon ECS-optimized AMI version `20200319` or later to use these features. For more information, see [Amazon ECS-optimized Linux AMIs](ecs-optimized_AMI.md).
**Note**  
If you create your own AMI, you must use container agent 1.38.0 or later, `ecs-init` version 1.38.0-1 or later, and run the following commands on your Amazon EC2 instance to enable the Amazon ECS volume plugin. The commands are dependent on whether you're using Amazon Linux 2 or Amazon Linux as your base image.  
Amazon Linux 2  

  ```
  yum install amazon-efs-utils
  systemctl enable --now amazon-ecs-volume-plugin
  ```
Amazon Linux  

  ```
  yum install amazon-efs-utils
  sudo shutdown -r now
  ```
+ For tasks that are hosted on Fargate, Amazon EFS file systems are supported on platform version 1.4.0 or later (Linux). For more information, see [Fargate platform versions for Amazon ECS](platform-fargate.md).
+ When using Amazon EFS volumes for tasks that are hosted on Fargate, Fargate creates a supervisor container that's responsible for managing the Amazon EFS volume. The supervisor container uses a small amount of the task's memory and CPU. The supervisor container is visible when querying the task metadata version 4 endpoint. Additionally, it is visible in CloudWatch Container Insights as the container name `aws-fargate-supervisor`. For more information when using EC2, see [Amazon ECS task metadata endpoint version 4](task-metadata-endpoint-v4.md). For more information when using Fargate, see [Amazon ECS task metadata endpoint version 4 for tasks on Fargate](task-metadata-endpoint-v4-fargate.md).
+ Using Amazon EFS volumes or specifying an `EFSVolumeConfiguration` isn't supported on external instances.
+ Using Amazon EFS volumes is supported for tasks that run on Amazon ECS Managed Instances.
+ We recommend that you set the `ECS_ENGINE_TASK_CLEANUP_WAIT_DURATION` parameter in the agent configuration file to a value that is less than the default (about 1 hour). This change helps prevent EFS mount credential expiration and allows for cleanup of mounts that are not in use.  For more information, see [Amazon ECS container agent configuration](ecs-agent-config.md).
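The cleanup recommendation above can be applied by appending the parameter to the agent configuration file. The following sketch uses a stand-in path so it is self-contained; on a real container instance the file is `/etc/ecs/ecs.config`, and `10m` is an illustrative value, not a required one.

```shell
# Shorten the task cleanup wait so unused EFS mounts are cleaned up sooner
# (the default is about 1 hour). ECS_CONFIG is a stand-in path for this
# sketch; on a container instance, use /etc/ecs/ecs.config (requires root).
ECS_CONFIG=./ecs.config
echo 'ECS_ENGINE_TASK_CLEANUP_WAIT_DURATION=10m' >> "$ECS_CONFIG"
# Confirm the setting was written.
grep ECS_ENGINE_TASK_CLEANUP_WAIT_DURATION "$ECS_CONFIG"
```

After editing the file on a container instance, restart the container agent for the change to take effect.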

## Use Amazon EFS access points
<a name="efs-volume-accesspoints"></a>

Amazon EFS access points are application-specific entry points into an EFS file system for managing application access to shared datasets. For more information about Amazon EFS access points and how to control access to them, see [Working with Amazon EFS Access Points](https://docs.aws.amazon.com/efs/latest/ug/efs-access-points.html) in the *Amazon Elastic File System User Guide*.

Access points can enforce a user identity, including the user's POSIX groups, for all file system requests that are made through the access point. Access points can also enforce a different root directory for the file system. This is so that clients can only access data in the specified directory or its subdirectories.

**Note**  
When creating an EFS access point, specify a path on the file system to serve as the root directory. When referencing the EFS file system with an access point ID in your Amazon ECS task definition, the root directory must either be omitted or set to `/`, which enforces the path set on the EFS access point.

You can use an Amazon ECS task IAM role to enforce that specific applications use a specific access point. By combining IAM policies with access points, you can provide secure access to specific datasets for your applications. For more information about how to use task IAM roles, see [Amazon ECS task IAM role](task-iam-roles.md).

# Best practices for using Amazon EFS volumes with Amazon ECS
<a name="efs-best-practices"></a>

Make note of the following best practice recommendations when you use Amazon EFS with Amazon ECS.

## Security and access controls for Amazon EFS volumes
<a name="storage-efs-security"></a>

Amazon EFS offers access control features that you can use to ensure that the data stored in an Amazon EFS file system is secure and accessible only from applications that need it. You can secure data by enabling encryption at rest and in-transit. For more information, see [Data encryption in Amazon EFS](https://docs.aws.amazon.com/efs/latest/ug/encryption.html) in the *Amazon Elastic File System User Guide*.

In addition to data encryption, you can also use Amazon EFS to restrict access to a file system. There are three ways to implement access control in EFS.
+ **Security groups**—With Amazon EFS mount targets, you can configure a security group that's used to permit and deny network traffic. You can configure the security group attached to Amazon EFS to permit NFS traffic (port 2049) from the security group that's attached to your Amazon ECS instances or, when using the `awsvpc` network mode, the Amazon ECS task.
+ **IAM**—You can restrict access to an Amazon EFS file system using IAM. When configured, Amazon ECS tasks require an IAM role for file system access to mount an EFS file system. For more information, see [Using IAM to control file system data access](https://docs.aws.amazon.com/efs/latest/ug/iam-access-control-nfs-efs.html) in the *Amazon Elastic File System User Guide*.

  IAM policies can also enforce predefined conditions such as requiring a client to use TLS when connecting to an Amazon EFS file system. For more information, see [Amazon EFS condition keys for clients](https://docs.aws.amazon.com/efs/latest/ug/iam-access-control-nfs-efs.html#efs-condition-keys-for-nfs) in the *Amazon Elastic File System User Guide*.
+ **Amazon EFS access points**—Amazon EFS access points are application-specific entry points into an Amazon EFS file system. You can use access points to enforce a user identity, including the user's POSIX groups, for all file system requests that are made through the access point. Access points can also enforce a different root directory for the file system. This is so that clients can only access data in the specified directory or its sub-directories.

### IAM policies
<a name="storage-efs-security-iam"></a>

You can use IAM policies to control the access to the Amazon EFS file system.

You can specify the following actions for clients accessing a file system using a file system policy.


| Action | Description | 
| --- | --- | 
|  `elasticfilesystem:ClientMount`  |  Provides read-only access to a file system.  | 
|  `elasticfilesystem:ClientWrite`  |  Provides write permissions on a file system.  | 
|  `elasticfilesystem:ClientRootAccess`  |  Provides use of the root user when accessing a file system.  | 

You need to specify each action in a policy. The policies can be defined in the following ways:
+ Client-based - Attach the policy to the task role

  Set the **IAM authorization** option when you create the task definition. 
+ Resource-based - Attach the policy to the Amazon EFS file system

  If no resource-based policy exists, access is granted by default to all principals (`*`) at file system creation. 

When you set the **IAM authorization** option, we merge the policy associated with the task role and the Amazon EFS resource-based policy. The **IAM authorization** option passes the task identity (the task role) with the policy to Amazon EFS. This allows the Amazon EFS resource-based policy to have context for the IAM user or role specified in the policy. If you do not set the option, the Amazon EFS resource-level policy identifies the IAM user as "anonymous".
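For example, a resource-based file system policy that grants a specific task role the client actions from the table above might look like the following sketch. The account ID, Region, role name, and file system ID are placeholders.

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111122223333:role/my-task-role"
            },
            "Action": [
                "elasticfilesystem:ClientMount",
                "elasticfilesystem:ClientWrite"
            ],
            "Resource": "arn:aws:elasticfilesystem:us-east-1:111122223333:file-system/fs-1234"
        }
    ]
}
```

Because the policy names an explicit principal, anonymous access is no longer granted by default, and tasks must present the matching task role through the **IAM authorization** option.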

Consider implementing all three access controls on an Amazon EFS file system for maximum security. For example, you can configure the security group attached to an Amazon EFS mount point to only permit ingress NFS traffic from a security group that's associated with your container instance or Amazon ECS task. Additionally, you can configure Amazon EFS to require an IAM role to access the file system, even if the connection originates from a permitted security group. Last, you can use Amazon EFS access points to enforce POSIX user permissions and specify root directories for applications.

The following task definition snippet shows how to mount an Amazon EFS file system using an access point.

```
"volumes": [
    {
      "efsVolumeConfiguration": {
        "fileSystemId": "fs-1234",
        "authorizationConfig": {
          "accessPointId": "fsap-1234",
          "iam": "ENABLED"
        },
        "transitEncryption": "ENABLED",
        "rootDirectory": ""
      },
      "name": "my-filesystem"
    }
]
```

## Amazon EFS volume performance
<a name="storage-efs-performance"></a>

Amazon EFS offers two performance modes: General Purpose and Max I/O. General Purpose is suitable for latency-sensitive applications such as content management systems and CI/CD tools. In contrast, Max I/O file systems are suitable for workloads such as data analytics, media processing, and machine learning. These workloads need to perform parallel operations from hundreds or even thousands of containers and require the highest possible aggregate throughput and IOPS. For more information, see [Amazon EFS performance modes](https://docs.aws.amazon.com/efs/latest/ug/performance.html#performancemodes) in the *Amazon Elastic File System User Guide*.

Some latency sensitive workloads require both the higher I/O levels that are provided by Max I/O performance mode and the lower latency that are provided by General Purpose performance mode. For this type of workload, we recommend creating multiple General Purpose performance mode file systems. That way, you can spread your application workload across all these file systems, as long as the workload and applications can support it.

## Amazon EFS volume throughput
<a name="storage-efs-performance-throughput"></a>

All Amazon EFS file systems have an associated metered throughput that's determined by either the amount of provisioned throughput for file systems using *Provisioned Throughput* or the amount of data stored in the EFS Standard or One Zone storage class for file systems using *Bursting Throughput*. For more information, see [Understanding metered throughput](https://docs.aws.amazon.com/efs/latest/ug/performance.html#read-write-throughput) in the *Amazon Elastic File System User Guide*.

The default throughput mode for Amazon EFS file systems is bursting mode. With bursting mode, the throughput that's available to a file system scales as the file system grows. Because file-based workloads typically spike, requiring high levels of throughput for periods of time and lower levels of throughput the rest of the time, Amazon EFS is designed to burst to allow high throughput levels for periods of time. Additionally, because many workloads are read-heavy, read operations are metered at a 1:3 ratio to other NFS operations (like write). 

All Amazon EFS file systems deliver a consistent baseline performance of 50 MB/s for each TB of Amazon EFS Standard or Amazon EFS One Zone storage. All file systems (regardless of size) can burst to 100 MB/s. File systems with more than 1 TB of EFS Standard or EFS One Zone storage can burst to 100 MB/s for each TB. Because read operations are metered at a 1:3 ratio, you can drive up to 300 MiB/s for each TiB of read throughput. As you add data to your file system, the maximum throughput that's available to the file system scales linearly and automatically with your storage in the Amazon EFS Standard storage class. If you need more throughput than you can achieve with your amount of data stored, you can configure Provisioned Throughput to the specific amount your workload requires.

File system throughput is shared across all Amazon EC2 instances connected to a file system. For example, a 1 TB file system that can burst to 100 MB/s of throughput can drive 100 MB/s from a single Amazon EC2 instance, or 10 Amazon EC2 instances can each drive 10 MB/s. For more information, see [Amazon EFS performance](https://docs.aws.amazon.com/efs/latest/ug/performance.html) in the *Amazon Elastic File System User Guide*.
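The scaling rules above reduce to simple arithmetic. The following sketch computes the baseline and burst figures for an example storage size (2 TB is an illustrative value):

```shell
# Bursting-mode throughput: baseline is 50 MB/s per TB of EFS Standard
# storage; file systems over 1 TB can burst to 100 MB/s per TB.
storage_tb=2
baseline_mb_s=$((storage_tb * 50))   # 2 TB -> 100 MB/s baseline
burst_mb_s=$((storage_tb * 100))     # 2 TB -> 200 MB/s burst
echo "baseline: ${baseline_mb_s} MB/s, burst: ${burst_mb_s} MB/s"
```

Remember that this throughput is an aggregate shared by every client mounting the file system, so divide it by the number of concurrently active tasks or instances when estimating per-client throughput.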

## Optimizing cost for Amazon EFS volumes
<a name="storage-efs-costopt"></a>

Amazon EFS simplifies scaling storage for you. Amazon EFS file systems grow automatically as you add more data. Especially with Amazon EFS *Bursting Throughput* mode, throughput on Amazon EFS scales as the size of your file system in the standard storage class grows. To improve the throughput without paying an additional cost for provisioned throughput on an EFS file system, you can share an Amazon EFS file system with multiple applications. Using Amazon EFS access points, you can implement storage isolation in shared Amazon EFS file systems. By doing so, even though the applications still share the same file system, they can't access data unless you authorize it.

As your data grows, Amazon EFS helps you automatically move infrequently accessed files to a lower storage class. The Amazon EFS Standard-Infrequent Access (IA) storage class reduces storage costs for files that aren't accessed every day. It does this without sacrificing the high availability, high durability, elasticity, and the POSIX file system access that Amazon EFS provides. For more information, see [EFS storage classes](https://docs.aws.amazon.com/efs/latest/ug/features.html) in the *Amazon Elastic File System User Guide*.

Consider using Amazon EFS lifecycle policies to automatically save money by moving infrequently accessed files to Amazon EFS IA storage. For more information, see [Amazon EFS lifecycle management](https://docs.aws.amazon.com/efs/latest/ug/lifecycle-management-efs.html) in the *Amazon Elastic File System User Guide*.

When creating an Amazon EFS file system, you can choose if Amazon EFS replicates your data across multiple Availability Zones (Standard) or stores your data redundantly within a single Availability Zone. The Amazon EFS One Zone storage class can reduce storage costs by a significant margin compared to Amazon EFS Standard storage classes. Consider using Amazon EFS One Zone storage class for workloads that don't require multi-AZ resilience. You can further reduce the cost of Amazon EFS One Zone storage by moving infrequently accessed files to Amazon EFS One Zone-Infrequent Access. For more information, see [Amazon EFS Infrequent Access](https://aws.amazon.com/efs/features/infrequent-access/).

## Amazon EFS volume data protection
<a name="storage-efs-dataprotection"></a>

Amazon EFS stores your data redundantly across multiple Availability Zones for file systems using Standard storage classes. If you select Amazon EFS One Zone storage classes, your data is redundantly stored within a single Availability Zone. Additionally, Amazon EFS is designed to provide 99.999999999% (11 9’s) of durability over a given year.

As with any environment, it's a best practice to have a backup and to build safeguards against accidental deletion. For Amazon EFS data, that best practice includes a functioning, regularly tested backup using AWS Backup. File systems using Amazon EFS One Zone storage classes are configured to automatically back up files by default at file system creation unless you choose to disable this functionality. For more information, see [Backing up EFS file systems](https://docs.aws.amazon.com/efs/latest/ug/awsbackup.html) in the *Amazon Elastic File System User Guide*.

# Specify an Amazon EFS file system in an Amazon ECS task definition
<a name="specify-efs-config"></a>

To use Amazon EFS file system volumes for your containers, you must specify the volume and mount point configurations in your task definition. The following task definition JSON snippet shows the syntax for the `volumes` and `mountPoints` objects for a container.

```
{
    "containerDefinitions": [
        {
            "name": "container-using-efs",
            "image": "public.ecr.aws/amazonlinux/amazonlinux:latest",
            "entryPoint": [
                "sh",
                "-c"
            ],
            "command": [
                "ls -la /mount/efs"
            ],
            "mountPoints": [
                {
                    "sourceVolume": "myEfsVolume",
                    "containerPath": "/mount/efs",
                    "readOnly": true
                }
            ]
        }
    ],
    "volumes": [
        {
            "name": "myEfsVolume",
            "efsVolumeConfiguration": {
                "fileSystemId": "fs-1234",
                "rootDirectory": "/path/to/my/data",
                "transitEncryption": "ENABLED",
                "transitEncryptionPort": integer,
                "authorizationConfig": {
                    "accessPointId": "fsap-1234",
                    "iam": "ENABLED"
                }
            }
        }
    ]
}
```

`efsVolumeConfiguration`  
Type: Object  
Required: No  
This parameter is specified when using Amazon EFS volumes.    
`fileSystemId`  
Type: String  
Required: Yes  
The Amazon EFS file system ID to use.  
`rootDirectory`  
Type: String  
Required: No  
The directory within the Amazon EFS file system to mount as the root directory inside the host. If this parameter is omitted, the root of the Amazon EFS volume is used. Specifying `/` has the same effect as omitting this parameter.  
If an EFS access point is specified in the `authorizationConfig`, the root directory parameter must either be omitted or set to `/`, which enforces the path set on the EFS access point.  
`transitEncryption`  
Type: String  
Valid values: `ENABLED` | `DISABLED`  
Required: No  
Specifies whether to enable encryption for Amazon EFS data in transit between the Amazon ECS host and the Amazon EFS server. If Amazon EFS IAM authorization is used, transit encryption must be enabled. If this parameter is omitted, the default value of `DISABLED` is used. For more information, see [Encrypting Data in Transit](https://docs.aws.amazon.com/efs/latest/ug/encryption-in-transit.html) in the *Amazon Elastic File System User Guide*.  
`transitEncryptionPort`  
Type: Integer  
Required: No  
The port to use when sending encrypted data between the Amazon ECS host and the Amazon EFS server. If you don't specify a transit encryption port, it uses the port selection strategy that the Amazon EFS mount helper uses. For more information, see [EFS Mount Helper](https://docs.aws.amazon.com/efs/latest/ug/efs-mount-helper.html) in the *Amazon Elastic File System User Guide*.  
`authorizationConfig`  
Type: Object  
Required: No  
The authorization configuration details for the Amazon EFS file system.    
`accessPointId`  
Type: String  
Required: No  
The access point ID to use. If an access point is specified, the root directory value in the `efsVolumeConfiguration` must either be omitted or set to `/`, which enforces the path set on the EFS access point. If an access point is used, transit encryption must be enabled in the `EFSVolumeConfiguration`. For more information, see [Working with Amazon EFS Access Points](https://docs.aws.amazon.com/efs/latest/ug/efs-access-points.html) in the *Amazon Elastic File System User Guide*.  
`iam`  
Type: String  
Valid values: `ENABLED` | `DISABLED`  
Required: No  
 Specifies whether to use the Amazon ECS task IAM role defined in a task definition when mounting the Amazon EFS file system. If enabled, transit encryption must be enabled in the `EFSVolumeConfiguration`. If this parameter is omitted, the default value of `DISABLED` is used. For more information, see [IAM Roles for Tasks](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html).
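The parameter rules above interact: enabling `iam` or specifying an `accessPointId` both require `transitEncryption` to be `ENABLED`, and an access point means `rootDirectory` must be omitted or `/`. As a quick local check before registering a task definition, you can write the volume entry to a file and confirm it is well-formed JSON. The IDs below are the placeholder values used in the examples in this section.

```shell
# Write a volumes entry that follows the constraints above: IAM auth and an
# access point are set, so transitEncryption is ENABLED and rootDirectory
# is omitted. fs-1234 and fsap-1234 are placeholder IDs.
cat > efs-volume.json <<'EOF'
{
  "name": "my-filesystem",
  "efsVolumeConfiguration": {
    "fileSystemId": "fs-1234",
    "transitEncryption": "ENABLED",
    "authorizationConfig": {
      "accessPointId": "fsap-1234",
      "iam": "ENABLED"
    }
  }
}
EOF
# Verify the snippet parses before pasting it into a task definition.
python3 -m json.tool efs-volume.json > /dev/null && echo "volume config is valid JSON"
```

A malformed entry fails at task-definition registration time, so validating locally gives a faster feedback loop.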

# Configuring Amazon EFS file systems for Amazon ECS using the console
<a name="tutorial-efs-volumes"></a>

Learn how to use Amazon Elastic File System (Amazon EFS) file systems with Amazon ECS.

## Step 1: Create an Amazon ECS cluster
<a name="efs-create-cluster"></a>

Use the following steps to create an Amazon ECS cluster. 

**To create a new cluster (Amazon ECS console)**

Before you begin, assign the appropriate IAM permission. For more information, see [Amazon ECS cluster examples](security_iam_id-based-policy-examples.md#IAM_cluster_policies).

1. Open the console at [https://console.aws.amazon.com/ecs/v2](https://console.aws.amazon.com/ecs/v2).

1. From the navigation bar, select the Region to use.

1. In the navigation pane, choose **Clusters**.

1. On the **Clusters** page, choose **Create cluster**.

1. Under **Cluster configuration**, for **Cluster name**, enter `EFS-tutorial`.

1. (Optional) To change the VPC and subnets where your tasks and services launch, under **Networking**, perform any of the following operations:
   + To remove a subnet, under **Subnets**, choose **X** for each subnet that you want to remove.
   + To change to a VPC other than the **default** VPC, under **VPC**, choose an existing **VPC**, and then under **Subnets**, select each subnet.

1. To add Amazon EC2 instances to your cluster, expand **Infrastructure**, and then select **Amazon EC2 instances**. Next, configure the Auto Scaling group that acts as the capacity provider:

   1. To create an Auto Scaling group, from **Auto Scaling group (ASG)**, select **Create new group**, and then provide the following details about the group:
     + For **Operating system/Architecture**, choose **Amazon Linux 2**.
     + For **EC2 instance type**, choose `t2.micro`.
     + For **SSH key pair**, choose the pair that proves your identity when you connect to the instance.
     + For **Capacity**, enter `1`.

1. Choose **Create**.

## Step 2: Create a security group for Amazon EC2 instances and the Amazon EFS file system
<a name="efs-security-group"></a>

In this step, you create a security group for your Amazon EC2 instances that allows inbound network traffic on port 80 and your Amazon EFS file system that allows inbound access from your container instances. 

Create a security group for your Amazon EC2 instances with the following options:
+ **Security group name** - a unique name for your security group.
+ **VPC** - the VPC that you identified earlier for your cluster.
+ **Inbound rule**
  + **Type** - **HTTP**
  + **Source** - **0.0.0.0/0**.

Create a security group for your Amazon EFS file system with the following options:
+ **Security group name** - a unique name for your security group. For example, `EFS-access-for-sg-dc025fa2`.
+ **VPC** - the VPC that you identified earlier for your cluster.
+ **Inbound rule**
  + **Type** - **NFS**
  + **Source** - **Custom** with the ID of the security group you created for your instances.

For information about how to create a security group, see [Create a security group for your Amazon EC2 instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/creating-security-group.html) in the *Amazon EC2 User Guide*.

## Step 3: Create an Amazon EFS file system
<a name="efs-create-filesystem"></a>

In this step, you create an Amazon EFS file system.

**To create an Amazon EFS file system for Amazon ECS tasks**

1. Open the Amazon Elastic File System console at [https://console.aws.amazon.com/efs/](https://console.aws.amazon.com/efs/).

1. Choose **Create file system**.

1. Enter a name for your file system and then choose the VPC that your container instances are hosted in. By default, each subnet in the specified VPC receives a mount target that uses the default security group for that VPC. Then, choose **Customize**.
**Note**  
This tutorial assumes that your Amazon EFS file system, Amazon ECS cluster, container instances, and tasks are in the same VPC. For more information about mounting a file system from a different VPC, see [Walkthrough: Mount a file system from a different VPC](https://docs.aws.amazon.com/efs/latest/ug/efs-different-vpc.html) in the *Amazon EFS User Guide*.

1. On the **File system settings** page, configure optional settings and then under **Performance settings**, choose the **Bursting** throughput mode for your file system. After you have configured settings, select **Next**.

   1. (Optional) Add tags for your file system. For example, you could specify a unique name for the file system by entering that name in the **Value** column next to the **Name** key.

   1. (Optional) Enable lifecycle management to save money on infrequently accessed storage. For more information, see [EFS Lifecycle Management](https://docs.aws.amazon.com/efs/latest/ug/lifecycle-management-efs.html) in the *Amazon Elastic File System User Guide*.

   1. (Optional) Enable encryption. Select the check box to enable encryption of your Amazon EFS file system at rest.

1. On the **Network access** page, under **Mount targets**, replace the existing security group configuration for every availability zone with the security group you created for the file system in [Step 2: Create a security group for Amazon EC2 instances and the Amazon EFS file system](#efs-security-group) and then choose **Next**.

1.  You do not need to configure **File system policy** for this tutorial, so you can skip the section by choosing **Next**.

1. Review your file system options and choose **Create** to complete the process.

1. From the **File systems** screen, record the **File system ID**. In the next step, you will reference this value in your Amazon ECS task definition.

## Step 4: Add content to the Amazon EFS file system
<a name="efs-add-content"></a>

In this step, you mount the Amazon EFS file system to an Amazon EC2 instance and add content to it. This is for testing purposes in this tutorial, to illustrate the persistent nature of the data. When using this feature you would normally have your application or another method of writing data to your Amazon EFS file system.

**To create an Amazon EC2 instance and mount the Amazon EFS file system**

1. Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/).

1. Choose **Launch Instance**.

1. Under **Application and OS Images (Amazon Machine Image)**, select the **Amazon Linux 2 AMI (HVM)**.

1. Under **Instance type**, keep the default instance type, `t2.micro`.

1.  Under **Key pair (login)**, select a key pair for SSH access to the instance.

1. Under **Network settings**, select the VPC that you specified for your Amazon EFS file system and Amazon ECS cluster. Select a subnet, and then select the instance security group that you created in [Step 2: Create a security group for Amazon EC2 instances and the Amazon EFS file system](#efs-security-group). Ensure that **Auto-assign public IP** is enabled.

1. Under **Configure storage**, choose the **Edit** button for file systems and then choose **EFS**. Select the file system you created in [Step 3: Create an Amazon EFS file system](#efs-create-filesystem). You can optionally change the mount point or leave the default value.
**Important**  
You must select a subnet before you can add a file system to the instance.

1. Clear the **Automatically create and attach security groups** check box. Leave the other check box selected. Choose **Add shared file system**.

1. Under **Advanced Details**, ensure that the user data script is populated automatically with the Amazon EFS file system mounting steps.

1.  Under **Summary**, ensure the **Number of instances** is **1**. Choose **Launch instance**.

1. On the **Launch an instance** page, choose **View all instances** to see the status of your instances. Initially, the **Instance state** status is `PENDING`. After the state changes to `RUNNING` and the instance passes all status checks, the instance is ready for use.

Now, you connect to the Amazon EC2 instance and add content to the Amazon EFS file system.

**To connect to the Amazon EC2 instance and add content to the Amazon EFS file system**

1. SSH to the Amazon EC2 instance you created. For more information, see [Connect to your Linux instance using SSH](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/connect-to-linux-instance.html) in the *Amazon EC2 User Guide*.

1. From the terminal window, run the **df -T** command to verify that the Amazon EFS file system is mounted. In the following output, the Amazon EFS file system is the `nfs4` entry mounted at `/mnt/efs/fs1`.

   ```
   $ df -T
   Filesystem     Type            1K-blocks    Used        Available Use% Mounted on
   devtmpfs       devtmpfs           485468       0           485468   0% /dev
   tmpfs          tmpfs              503480       0           503480   0% /dev/shm
   tmpfs          tmpfs              503480     424           503056   1% /run
   tmpfs          tmpfs              503480       0           503480   0% /sys/fs/cgroup
   /dev/xvda1     xfs               8376300 1310952          7065348  16% /
   127.0.0.1:/    nfs4     9007199254739968       0 9007199254739968   0% /mnt/efs/fs1
   tmpfs          tmpfs              100700       0           100700   0% /run/user/1000
   ```

1. Navigate to the directory that the Amazon EFS file system is mounted at. In the example above, that is `/mnt/efs/fs1`.

1. Create a file named `index.html` with the following content:

   ```
   <html>
       <body>
           <h1>It Works!</h1>
           <p>You are using an Amazon EFS file system for persistent container storage.</p>
       </body>
   </html>
   ```

## Step 5: Create a task definition
<a name="efs-task-def"></a>

The following task definition creates a data volume named `efs-html`. The `nginx` container mounts the host data volume at the NGINX root, `/usr/share/nginx/html`.

**To create a new task definition using the Amazon ECS console**

1. Open the console at [https://console.aws.amazon.com/ecs/v2](https://console.aws.amazon.com/ecs/v2).

1. In the navigation pane, choose **Task definitions**.

1. Choose **Create new task definition**, **Create new task definition with JSON**.

1. In the JSON editor box, copy and paste the following JSON text, replacing the `fileSystemId` with the ID of your Amazon EFS file system.

   ```
   {
       "containerDefinitions": [
           {
               "memory": 128,
               "portMappings": [
                   {
                       "hostPort": 80,
                       "containerPort": 80,
                       "protocol": "tcp"
                   }
               ],
               "essential": true,
               "mountPoints": [
                   {
                       "containerPath": "/usr/share/nginx/html",
                       "sourceVolume": "efs-html"
                   }
               ],
               "name": "nginx",
               "image": "public.ecr.aws/docker/library/nginx:latest"
           }
       ],
       "volumes": [
           {
               "name": "efs-html",
               "efsVolumeConfiguration": {
                   "fileSystemId": "fs-1324abcd",
                   "transitEncryption": "ENABLED"
               }
           }
       ],
       "family": "efs-tutorial",
       "executionRoleArn":"arn:aws:iam::111122223333:role/ecsTaskExecutionRole"
   }
   ```
**Note**  
The Amazon ECS task execution IAM role does not require any specific Amazon EFS-related permissions to mount an Amazon EFS file system. By default, if no Amazon EFS resource-based policy exists, access is granted to all principals (`*`) at file system creation.  
The Amazon ECS task role is only required if "EFS IAM authorization" is enabled in the Amazon ECS task definition. When enabled, the task role identity must be allowed access to the Amazon EFS file system in the Amazon EFS resource-based policy, and anonymous access should be disabled.

1. Choose **Create**.

## Step 6: Run a task and view the results
<a name="efs-run-task"></a>

Now that your Amazon EFS file system is created and there is web content for the NGINX container to serve, you can run a task using the task definition that you created. The NGINX web server serves your simple HTML page. If you update the content in your Amazon EFS file system, those changes are propagated to any containers that have also mounted that file system.

The task runs in the subnet that you defined for the cluster.

**To run a task and view the results using the console**

1. Open the console at [https://console.aws.amazon.com/ecs/v2](https://console.aws.amazon.com/ecs/v2).

1. On the **Clusters** page, select the cluster to run the standalone task in.

   Determine the resource from which you launch the standalone task.    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonECS/latest/developerguide/tutorial-efs-volumes.html)

1. (Optional) Choose how your scheduled task is distributed across your cluster infrastructure. Expand **Compute configuration**, and then do the following:    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonECS/latest/developerguide/tutorial-efs-volumes.html)

1. For **Application type**, choose **Task**.

1. For **Task definition**, choose the `efs-tutorial` task definition that you created earlier.

1. For **Desired tasks**, enter `1`.

1. Choose **Create**.

1. On the **Cluster** page, choose **Infrastructure**.

1. Under **Container Instances**, choose the container instance to connect to.

1. On the **Container Instance** page, under **Networking**, record the **Public IP** for your instance.

1. Open a browser and enter the public IP address. You should see the following message:

   ```
   It works!
   You are using an Amazon EFS file system for persistent container storage.
   ```
**Note**  
If you do not see the message, make sure that the security group for your container instance allows inbound network traffic on port 80 and the security group for your file system allows inbound access from the container instance.
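If you prefer to script this verification, the following minimal Python sketch polls the page and checks for the tutorial's message. The helper names and the sample IP address are illustrative assumptions, not part of the tutorial or any AWS SDK.

```python
# Minimal verification sketch for Step 6. The helper names and the sample IP
# are illustrative assumptions, not part of the tutorial or any AWS SDK.
import urllib.request


def fetch_page(url: str, timeout: float = 5.0) -> str:
    """Return the response body served at url."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return resp.read().decode("utf-8")


def page_uses_efs(body: str) -> bool:
    """Check for the tutorial's EFS-backed message in the page body."""
    return "Amazon EFS file system" in body


# Example (replace 203.0.113.10 with your container instance's public IP):
# print(page_uses_efs(fetch_page("http://203.0.113.10/")))
```

If the check fails, revisit the security group rules described in the preceding note.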

# Use FSx for Windows File Server volumes with Amazon ECS
<a name="wfsx-volumes"></a>

FSx for Windows File Server provides fully managed Windows file servers that are backed by a Windows file system. When you use FSx for Windows File Server together with Amazon ECS, you can provision your Windows tasks with persistent, distributed, shared, static file storage. For more information, see [What Is FSx for Windows File Server?](https://docs.aws.amazon.com/fsx/latest/WindowsGuide/what-is.html).

**Note**  
EC2 instances that use the Amazon ECS-Optimized Windows Server 2016 Full AMI do not support FSx for Windows File Server ECS task volumes.  
You can't use FSx for Windows File Server volumes in a Windows containers on Fargate configuration. Instead, you can [modify containers to mount them on startup](https://aws.amazon.com/blogs/containers/use-smb-storage-with-windows-containers-on-aws-fargate/).

You can use FSx for Windows File Server to deploy Windows workloads that require access to shared external storage, highly-available Regional storage, or high-throughput storage. You can mount one or more FSx for Windows File Server file system volumes to an Amazon ECS container that runs on an Amazon ECS Windows instance. You can share FSx for Windows File Server file system volumes between multiple Amazon ECS containers within a single Amazon ECS task.

To enable the use of FSx for Windows File Server with Amazon ECS, include the FSx for Windows File Server file system ID and the related information in a task definition, as shown in the example task definition JSON snippet later in this topic. Before you create and run a task definition, you need the following:
+ An Amazon ECS Windows EC2 instance that's joined to a valid domain. The domain can be hosted by an [AWS Directory Service for Microsoft Active Directory](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/directory_microsoft_ad.html), an on-premises Active Directory, or a self-hosted Active Directory on Amazon EC2.
+ An AWS Secrets Manager secret or Systems Manager parameter that contains the credentials that are used to join the Active Directory domain and attach the FSx for Windows File Server file system. The credential values are the name and password credentials that you entered when creating the Active Directory.

For a related tutorial, see [Learn how to configure FSx for Windows File Server file systems for Amazon ECS](tutorial-wfsx-volumes.md).

## Considerations
<a name="wfsx-volume-considerations"></a>

Consider the following when using FSx for Windows File Server volumes:
+ FSx for Windows File Server volumes are natively supported with Amazon ECS on Windows Amazon EC2 instances — Amazon ECS automatically manages the mount through task definition configuration.

  On Linux Amazon EC2 instances, Amazon ECS can't automatically mount FSx for Windows File Server volumes through task definitions. However, you can manually mount an FSx for Windows File Server file share on a Linux EC2 instance at the host level and then bind-mount that path into your Amazon ECS containers. For more information, see [Mounting Amazon FSx file shares from Linux](https://docs.aws.amazon.com/fsx/latest/WindowsGuide/map-shares-linux.html).
**Important**  
This is a self-managed configuration. For guidance on mounting and maintaining FSx for Windows File Server file shares on Linux, refer to the [FSx for Windows File Server documentation](https://docs.aws.amazon.com/fsx/latest/WindowsGuide/).
**Important**  
When using a manually mounted FSx for Windows File Server share on Linux EC2 instances, Amazon ECS and FSx for Windows File Server operate independently — Amazon ECS does not monitor the Amazon FSx mount, and FSx for Windows File Server does not track Amazon ECS task placement or lifecycle events. You are responsible for ensuring network reachability between your Amazon ECS container instances and the Amazon FSx file system, implementing mount health checks, and handling reconnection logic to tolerate failover events.
+ FSx for Windows File Server with Amazon ECS doesn't support AWS Fargate.
+ FSx for Windows File Server with Amazon ECS isn't supported on Amazon ECS Managed Instances.
+ FSx for Windows File Server with Amazon ECS with `awsvpc` network mode requires version `1.54.0` or later of the container agent.
+ The maximum number of drive letters that can be used for an Amazon ECS task is 23. Each task with an FSx for Windows File Server volume gets a drive letter assigned to it.
+ By default, task resource cleanup time is three hours after the task ends. A file mapping that's created by a task persists for three hours, even if no tasks use it. The default cleanup time can be configured by using the Amazon ECS environment variable `ECS_ENGINE_TASK_CLEANUP_WAIT_DURATION`. For more information, see [Amazon ECS container agent configuration](ecs-agent-config.md).
+ Tasks typically run in the same VPC as the FSx for Windows File Server file system. However, cross-VPC support is possible if there's established network connectivity between the Amazon ECS cluster VPC and the FSx for Windows File Server file system through VPC peering.
+ You control access to an FSx for Windows File Server file system at the network level by configuring the VPC security groups. Only tasks that are hosted on EC2 instances joined to the Active Directory domain with correctly configured Active Directory security groups can access the FSx for Windows File Server file share. If the security groups are misconfigured, Amazon ECS fails to launch the task with the following error message: `unable to mount file system fs-id`.
+ FSx for Windows File Server is integrated with AWS Identity and Access Management (IAM) to control the actions that your IAM users and groups can take on specific FSx for Windows File Server resources. With client authorization, customers can define IAM roles that allow or deny access to specific FSx for Windows File Server file systems, optionally require read-only access, and optionally allow or disallow root access to the file system from the client. For more information, see [Security](https://docs.aws.amazon.com/fsx/latest/WindowsGuide/security.html) in the Amazon FSx Windows User Guide.

# Best practices for using FSx for Windows File Server with Amazon ECS
<a name="wfsx-best-practices"></a>

Make note of the following best practice recommendations when you use FSx for Windows File Server with Amazon ECS.

## Security and access controls for FSx for Windows File Server
<a name="wfsx-security-access-controls"></a>

FSx for Windows File Server offers the following access control features that you can use to ensure that the data stored in an FSx for Windows File Server file system is secure and accessible only from applications that need it.

### Data encryption for FSx for Windows File Server volumes
<a name="storage-fsx-security-encryption"></a>

FSx for Windows File Server supports two forms of encryption for file systems. They are encryption of data in transit and encryption at rest. Encryption of data in transit is supported on file shares that are mapped on a container instance that supports SMB protocol 3.0 or newer. Encryption of data at rest is automatically enabled when creating an Amazon FSx file system. Amazon FSx automatically encrypts data in transit using SMB encryption as you access your file system without the need for you to modify your applications. For more information, see [Data encryption in Amazon FSx](https://docs.aws.amazon.com/fsx/latest/WindowsGuide/encryption.html) in the *Amazon FSx for Windows File Server User Guide*.

### Use Windows ACLs for folder level access control
<a name="storage-fsx-security-access"></a>

The Windows Amazon EC2 instance accesses Amazon FSx file shares using Active Directory credentials, and it uses standard Windows access control lists (ACLs) for fine-grained file-level and folder-level access control. You can create multiple credentials, each for a specific folder within the share that maps to a specific task.

In the following example, the task has access to the folder `App01` using a credential saved in Secrets Manager. Its Amazon Resource Name (ARN) is `1234`.

```
"rootDirectory": "\\path\\to\\my\\data\App01",
"credentialsParameter": "arn-1234",
"domain": "corp.fullyqualified.com",
```

In another example, a task has access to the folder `App02` using a credential saved in the Secrets Manager. Its ARN is `6789`.

```
"rootDirectory": "\\path\\to\\my\\data\App02",
"credentialsParameter": "arn-6789",
"domain": "corp.fullyqualified.com",
```
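Putting the two fragments together, a single task definition can declare two volumes that mount different folders with different credentials. The following snippet is a sketch that follows the documented `fsxWindowsFileServerVolumeConfiguration` schema; the volume names, file system ID, and credential ARNs are placeholders.

```
"volumes": [
    {
        "name": "app01-dir",
        "fsxWindowsFileServerVolumeConfiguration": {
            "fileSystemId": "fs-0eeb5730b2EXAMPLE",
            "rootDirectory": "\\path\\to\\my\\data\\App01",
            "authorizationConfig": {
                "credentialsParameter": "arn-1234",
                "domain": "corp.fullyqualified.com"
            }
        }
    },
    {
        "name": "app02-dir",
        "fsxWindowsFileServerVolumeConfiguration": {
            "fileSystemId": "fs-0eeb5730b2EXAMPLE",
            "rootDirectory": "\\path\\to\\my\\data\\App02",
            "authorizationConfig": {
                "credentialsParameter": "arn-6789",
                "domain": "corp.fullyqualified.com"
            }
        }
    }
]
```

Each container then references the volume it needs through its `mountPoints` entry, so two tasks sharing the file system can be isolated to their own folders.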

# Specify an FSx for Windows File Server file system in an Amazon ECS task definition
<a name="specify-wfsx-config"></a>

To use FSx for Windows File Server file system volumes for your containers, specify the volume and mount point configurations in your task definition. The following task definition JSON snippet shows the syntax for the `volumes` and `mountPoints` objects for a container.

```
{
    "containerDefinitions": [
        {
            "entryPoint": [
                "powershell",
                "-Command"
            ],
            "portMappings": [],
            "command": ["New-Item -Path C:\\fsx-windows-dir\\index.html -ItemType file -Value '<html> <head> <title>Amazon ECS Sample App</title> <style>body {margin-top: 40px; background-color: #333;} </style> </head><body> <div style=color:white;text-align:center> <h1>Amazon ECS Sample App</h1> <h2>It Works!</h2> <p>You are using Amazon FSx for Windows File Server file system for persistent container storage.</p>' -Force"],
            "cpu": 512,
            "memory": 256,
            "image": "mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019",
            "essential": false,
            "name": "container1",
            "mountPoints": [
                {
                    "sourceVolume": "fsx-windows-dir",
                    "containerPath": "C:\\fsx-windows-dir",
                    "readOnly": false
                }
            ]
        },
        {
            "entryPoint": [
                "powershell",
                "-Command"
            ],
            "portMappings": [
                {
                    "hostPort": 443,
                    "protocol": "tcp",
                    "containerPort": 80
                }
            ],
            "command": ["Remove-Item -Recurse C:\\inetpub\\wwwroot\\* -Force; Start-Sleep -Seconds 120; Move-Item -Path C:\\fsx-windows-dir\\index.html -Destination C:\\inetpub\\wwwroot\\index.html -Force; C:\\ServiceMonitor.exe w3svc"],
            "mountPoints": [
                {
                    "sourceVolume": "fsx-windows-dir",
                    "containerPath": "C:\\fsx-windows-dir",
                    "readOnly": false
                }
            ],
            "cpu": 512,
            "memory": 256,
            "image": "mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019",
            "essential": true,
            "name": "container2"
        }
    ],
    "family": "fsx-windows",
    "executionRoleArn": "arn:aws:iam::111122223333:role/ecsTaskExecutionRole",
    "volumes": [
        {
            "name": "fsx-windows-dir",
            "fsxWindowsFileServerVolumeConfiguration": {
                "fileSystemId": "fs-0eeb5730b2EXAMPLE",
                "authorizationConfig": {
                    "domain": "example.com",
                    "credentialsParameter": "arn:arn-1234"
                },
                "rootDirectory": "share"
            }
        }
    ]
}
```

`FSxWindowsFileServerVolumeConfiguration`  
Type: Object  
Required: No  
This parameter is specified when you're using an [FSx for Windows File Server](https://docs.aws.amazon.com/fsx/latest/WindowsGuide/what-is.html) file system for task storage.    
`fileSystemId`  
Type: String  
Required: Yes  
The FSx for Windows File Server file system ID to use.  
`rootDirectory`  
Type: String  
Required: Yes  
The directory within the FSx for Windows File Server file system to mount as the root directory inside the host.  
`authorizationConfig`    
`credentialsParameter`  
Type: String  
Required: Yes  
The authorization credential options are one of the following:  
+ The Amazon Resource Name (ARN) of a [Secrets Manager](https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html) secret.
+ The Amazon Resource Name (ARN) of a [Systems Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/integration-ps-secretsmanager.html) parameter.  
`domain`  
Type: String  
Required: Yes  
A fully qualified domain name that's hosted by an [AWS Directory Service for Microsoft Active Directory](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/directory_microsoft_ad.html) (AWS Managed Microsoft AD) directory or a self-hosted EC2 Active Directory.

## Methods for storing FSx for Windows File Server volume credentials
<a name="creds"></a>

There are two different methods of storing credentials for use with the credentials parameter.
+ **AWS Secrets Manager secret**

  This credential can be created in the AWS Secrets Manager console by using the **Other type of secret** category. You add a row for each key/value pair: **username**/**admin** and **password**/*password*.
+ **Systems Manager parameter**

  This credential can be created in the Systems Manager Parameter Store console by entering text in the form shown in the following example code snippet.

  ```
  {
    "username": "admin",
    "password": "password"
  }
  ```

The `credentialsParameter` in the task definition `FSxWindowsFileServerVolumeConfiguration` parameter holds either the secret ARN or the Systems Manager parameter ARN. For more information, see [What is AWS Secrets Manager](https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html) in the *Secrets Manager User Guide* and [Systems Manager Parameter Store](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-parameter-store.html) in the *Systems Manager User Guide*.
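As a hedged example, you can compose and validate the credential JSON locally before storing it. The file and secret names below are placeholders, and the `aws` call is shown commented out so that you can review it before running it with valid AWS credentials.

```shell
# Compose the credential JSON that the secret or parameter will hold.
cat > fsx-credentials.json <<'EOF'
{
  "username": "admin",
  "password": "password"
}
EOF

# Validate the JSON before storing it; a malformed value surfaces later
# as a task launch failure, which is harder to diagnose.
python3 -m json.tool fsx-credentials.json

# Store it as a Secrets Manager secret (placeholder name; run with valid
# AWS credentials):
# aws secretsmanager create-secret --name fsx-windows-credentials \
#     --secret-string file://fsx-credentials.json
```

The resulting secret ARN is what you place in `credentialsParameter`.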

# Learn how to configure FSx for Windows File Server file systems for Amazon ECS
<a name="tutorial-wfsx-volumes"></a>

Learn how to launch an Amazon ECS-Optimized Windows instance that hosts an FSx for Windows File Server file system and containers that can access the file system. To do this, you first create an AWS Directory Service AWS Managed Microsoft AD directory. Then, you create an FSx for Windows File Server file system and a cluster with an Amazon EC2 instance and a task definition. You configure the task definition for your containers to use the FSx for Windows File Server file system. Finally, you test the file system.

It takes 20 to 45 minutes each time you launch or delete either the Active Directory or the FSx for Windows File Server file system. Be prepared to reserve at least 90 minutes to complete the tutorial, or complete it over a few sessions.

## Prerequisites for the tutorial
<a name="wfsx-prerequisites"></a>
+ An administrative user. See [Set up to use Amazon ECS](get-set-up-for-amazon-ecs.md).
+ (Optional) A `PEM` key pair for connecting to your EC2 Windows instance through RDP access. For information about how to create key pairs, see [Amazon EC2 key pairs and Amazon EC2 instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html) in the *Amazon EC2 User Guide*.
+ A VPC with at least one public and one private subnet, and one security group. You can use your default VPC. You don't need a NAT gateway or device. Directory Service doesn't support Network Address Translation (NAT) with Active Directory. For this to work, the Active Directory, FSx for Windows File Server file system, ECS Cluster, and EC2 instance must be located within your VPC. For more information regarding VPCs and Active Directories, see [Create a VPC](https://docs.aws.amazon.com/vpc/latest/userguide/create-vpc.html) and [Prerequisites for creating an AWS Managed Microsoft AD](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/ms_ad_getting_started.html#ms_ad_getting_started_prereqs).
+ The IAM ecsInstanceRole and ecsTaskExecutionRole roles are associated with your account. These roles allow services to make API calls and access containers, secrets, directories, and file servers on your behalf.

## Step 1: Create IAM access roles
<a name="iam-roles"></a>

**Create the IAM roles with the AWS Management Console.**

1. See [Amazon ECS container instance IAM role](instance_IAM_role.md) to check whether you have an ecsInstanceRole and to see how you can create one if you don't have one.

1. We recommend that role policies are customized for minimum permissions in an actual production environment. For the purpose of working through this tutorial, verify that the following AWS managed policies are attached to your ecsInstanceRole. Attach any policy that is not already attached.
   + AmazonEC2ContainerServiceforEC2Role
   + AmazonSSMManagedInstanceCore
   + AmazonSSMDirectoryServiceAccess

   To attach the AWS managed policies:

   1. Open the [IAM console](https://console.aws.amazon.com//iam/).

   1. In the navigation pane, choose **Roles.**

   1. Choose the role that you want to attach the policies to.

   1. Choose **Permissions, Attach policies**.

   1. To narrow the available policies to attach, use **Filter**.

   1. Select the appropriate policy and choose **Attach policy**.

1. See [Amazon ECS task execution IAM role](task_execution_IAM_role.md) to check whether you have an ecsTaskExecutionRole and to see how you can create one if you don't have one.

   We recommend that role policies are customized for minimum permissions in an actual production environment. For the purpose of working through this tutorial, verify that the following AWS managed policies are attached to your ecsTaskExecutionRole. Attach the policies if they are not already attached. Use the procedure given in the preceding section to attach the AWS managed policies.
   + SecretsManagerReadWrite
   + AmazonFSxReadOnlyAccess
   + AmazonSSMReadOnlyAccess
   + AmazonECSTaskExecutionRolePolicy
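The same policy attachments can be scripted. The following hedged sketch only prints the AWS CLI `attach-role-policy` calls so that you can review them; the role name assumes the tutorial default, and all four policies are AWS managed policies under the `arn:aws:iam::aws:policy/` path.

```shell
# Print (not run) the attach-role-policy calls for the tutorial's
# ecsTaskExecutionRole. Remove the echo to execute them with valid
# AWS credentials.
ROLE_NAME="ecsTaskExecutionRole"
for POLICY in SecretsManagerReadWrite AmazonFSxReadOnlyAccess \
              AmazonSSMReadOnlyAccess AmazonECSTaskExecutionRolePolicy; do
  echo aws iam attach-role-policy \
      --role-name "$ROLE_NAME" \
      --policy-arn "arn:aws:iam::aws:policy/$POLICY"
done
```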

## Step 2: Create Windows Active Directory (AD)
<a name="wfsx-create-ads"></a>

1. Follow the steps described in [Creating your AWS Managed Microsoft AD](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/ms_ad_getting_started.html#ms_ad_getting_started_create_directory) in the *AWS Directory Service Administration Guide*. Use the VPC that you have designated for this tutorial. In Step 3 of *Creating your AWS Managed Microsoft AD*, save the user name and admin password for use in a later step. Also, note the fully qualified directory DNS name for future steps. You can complete the following step while the Active Directory is being created.

1. Create an AWS Secrets Manager secret to use in the following steps. For more information, see [Get started with Secrets Manager](https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html#get-started) in the *AWS Secrets Manager User Guide*.

   1. Open the [Secrets Manager console](https://console.aws.amazon.com//secretsmanager/).

   1. Choose **Store a new secret**.

   1. Select **Other type of secrets**.

   1. For **Secret key/value**, in the first row, create a key **username** with value **admin**. Choose **Add row**.

   1. In the new row, create a key **password**. For the value, enter the password that you entered in Step 3 of *Creating your AWS Managed Microsoft AD*.

   1. Choose **Next**.

   1. Provide a secret name and description, and then choose **Next**.

   1. Choose **Next**, and then choose **Store**.

   1. On the **Secrets** page, choose the secret that you just created.

   1. Save the ARN of the new secret for use in the following steps.

   1. You can proceed to the next step while your Active Directory is being created.

## Step 3: Verify and update your security group
<a name="wfsx-sg"></a>

In this step, you verify and update the rules for the security group that you're using. For this, you can use the default security group that was created for your VPC.

**Verify and update security group.**

You need to create or edit your security group to allow traffic on the ports that are described in [Amazon VPC Security Groups](https://docs.aws.amazon.com/fsx/latest/WindowsGuide/limit-access-security-groups.html#fsx-vpc-security-groups) in the *FSx for Windows File Server User Guide*. You can do this by creating the security group inbound rule shown in the first row of the following table of inbound rules. This rule allows inbound traffic from network interfaces (and their associated instances) that are assigned to the security group. All of the cloud resources that you create are within the same VPC and attached to the same security group. Therefore, this rule allows traffic to be sent to and from the FSx for Windows File Server file system, Active Directory, and Amazon ECS instance as required. The other inbound rules allow traffic to serve the website and allow RDP access for connecting to your Amazon ECS instance.

The following table shows which security group inbound rules are required for this tutorial.


| Type | Protocol | Port range | Source | 
| --- | --- | --- | --- | 
|  All traffic  |  All  |  All  |  *sg-securitygroup*  | 
|  HTTPS  |  TCP  |  443  |  0.0.0.0/0  | 
|  RDP  |  TCP  |  3389  |  your laptop IP address  | 

The following table shows which security group outbound rules are required for this tutorial.


| Type | Protocol | Port range | Destination | 
| --- | --- | --- | --- | 
|  All traffic  |  All  |  All  |  0.0.0.0/0  | 

1. Open the [EC2 console](https://console.aws.amazon.com//ec2/) and select **Security Groups** from the left-hand menu.

1. From the list of security groups, select the check box to the left of the security group that you are using for this tutorial.

   Your security group details are displayed.

1. Edit the inbound and outbound rules by selecting the **Inbound rules** or **Outbound rules** tabs and choosing the **Edit inbound rules** or **Edit outbound rules** buttons. Edit the rules to match those displayed in the preceding tables. After you create your EC2 instance later on in this tutorial, edit the inbound rule RDP source with the public IP address of your EC2 instance as described in [Connect to your Windows instance using RDP](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/connecting_to_windows_instance.html) from the *Amazon EC2 User Guide*.
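The inbound rules from the preceding tables can also be expressed as AWS CLI calls. This hedged sketch only prints the commands for review; the security group ID and workstation CIDR are placeholders that you must replace with your own values.

```shell
# Print (not run) the authorize-security-group-ingress calls matching the
# inbound rules table. SG_ID and MY_IP are placeholder assumptions.
SG_ID="sg-0123456789abcdef0"   # the security group used for this tutorial
MY_IP="198.51.100.7/32"        # your workstation's IP for RDP access

echo aws ec2 authorize-security-group-ingress --group-id "$SG_ID" \
    --protocol -1 --source-group "$SG_ID"
echo aws ec2 authorize-security-group-ingress --group-id "$SG_ID" \
    --protocol tcp --port 443 --cidr 0.0.0.0/0
echo aws ec2 authorize-security-group-ingress --group-id "$SG_ID" \
    --protocol tcp --port 3389 --cidr "$MY_IP"
```

The self-referencing first rule (the group as its own source) is what lets the file system, Active Directory, and container instance communicate within the VPC.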

## Step 4: Create an FSx for Windows File Server file system
<a name="wfsx-create-fsx"></a>

After your security group is verified and updated and your Active Directory is created and is in the active status, create the FSx for Windows File Server file system in the same VPC as your Active Directory. Use the following steps to create an FSx for Windows File Server file system for your Windows tasks.

**Create your first file system.**

1. Open the [Amazon FSx console](https://console.aws.amazon.com//fsx/).

1. On the dashboard, choose **Create file system** to start the file system creation wizard.

1. On the **Select file system type** page, choose **FSx for Windows File Server**, and then choose **Next**. The **Create file system** page appears.

1. In the **File system details** section, provide a name for your file system. Naming your file systems makes it easier to find and manage them. You can use up to 256 Unicode characters. Allowed characters are letters, numbers, spaces, and the special characters plus sign (+), minus sign (-), equal sign (=), period (.), underscore (_), colon (:), and forward slash (/).

1. For **Deployment type**, choose **Single-AZ** to deploy a file system in a single Availability Zone. *Single-AZ 2* is the latest generation of single Availability Zone file systems, and it supports SSD and HDD storage.

1. For **Storage type**, choose **HDD**.

1. For **Storage capacity**, enter the minimum storage capacity. 

1. Keep **Throughput capacity** at its default setting.

1. In the **Network & security** section, choose the same Amazon VPC that you chose for your Directory Service directory.

1. For **VPC Security Groups**, choose the security group that you verified in *Step 3: Verify and update your security group*.

1. For **Windows authentication**, choose **AWS Managed Microsoft Active Directory**, and then choose your Directory Service directory from the list.

1. For **Encryption**, keep the default **Encryption key** setting of **aws/fsx (default)**.

1. Keep the default settings for **Maintenance preferences**.

1. Choose **Next**.

1. Review the file system configuration shown on the **Create file system** page. For your reference, note which file system settings you can modify after the file system is created. Choose **Create file system**. 

1. Note the file system ID. You will need to use it in a later step.

   You can go on to the next steps to create a cluster and EC2 instance while the FSx for Windows File Server file system is being created.

## Step 5: Create an Amazon ECS cluster
<a name="wfsx-create-cluster"></a>

**Create a cluster using the Amazon ECS console**

1. Open the console at [https://console.aws.amazon.com/ecs/v2](https://console.aws.amazon.com/ecs/v2).

1. From the navigation bar, select the Region to use.

1. In the navigation pane, choose **Clusters**.

1. On the **Clusters** page, choose **Create cluster**.

1. Under **Cluster configuration**, for **Cluster name**, enter **windows-fsx-cluster**.

1. Expand **Infrastructure**, clear AWS Fargate (serverless) and then select **Amazon EC2 instances**.

   1. To create an Auto Scaling group, from **Auto Scaling group (ASG)**, select **Create new group**, and then provide the following details about the group:
     + For **Operating system/Architecture**, choose **Windows Server 2019 Core**.
     + For **EC2 instance type**, choose t2.medium or t2.micro.

1. Choose **Create**.

## Step 6: Create an Amazon ECS optimized Amazon EC2 instance
<a name="wfsx-create-instance"></a>

Create an Amazon ECS Windows container instance.

**To create an Amazon ECS instance**

1. Use the `aws ssm get-parameters` command to retrieve the AMI name for the Region that hosts your VPC. For more information, see [Retrieving Amazon ECS-Optimized AMI metadata](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/retrieve-ecs-optimized_windows_AMI.html).

1. Use the Amazon EC2 console to launch the instance.

   1. Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/).

   1. From the navigation bar, select the Region to use.

   1. From the **EC2 Dashboard**, choose **Launch instance**.

   1. For **Name**, enter a unique name.

   1. For **Application and OS Images (Amazon Machine Image)**, in the **search** field, enter the AMI name that you retrieved.

   1. For **Instance type**, choose t2.medium or t2.micro.

   1. For **Key pair (login)**, choose a key pair. If you don't specify a key pair, you can't connect to the instance by using RDP.

   1. Under **Network settings**, for **VPC** and **Subnet**, choose your VPC and a public subnet.

   1. Under **Network settings**, for **Security group**, choose an existing security group, or create a new one. Ensure that the security group you choose has the inbound and outbound rules defined in [Prerequisites for the tutorial](#wfsx-prerequisites).

   1. Under **Network settings**, for **Auto-assign Public IP**, select **Enable**. 

   1. Expand **Advanced details**, and then for **Domain join directory**, select the ID of the Active Directory that you created. This option joins the instance to your Active Directory domain when the EC2 instance is launched.

   1. Under **Advanced details**, for **IAM instance profile** , choose **ecsInstanceRole**.

   1. Configure your Amazon ECS container instance with the following user data. Under **Advanced Details**, paste the following script into the **User data** field. If you named your cluster something other than **windows-fsx-cluster**, replace the cluster name in the script.

      ```
      <powershell>
      Initialize-ECSAgent -Cluster windows-fsx-cluster -EnableTaskIAMRole
      </powershell>
      ```

   1. When you are ready, select the acknowledgment field, and then choose **Launch Instances**. 

   1. A confirmation page lets you know that your instance is launching. Choose **View Instances** to close the confirmation page and return to the console.

1. Open the console at [https://console.aws.amazon.com/ecs/v2](https://console.aws.amazon.com/ecs/v2).

1. In the navigation pane, choose **Clusters**, and then choose **windows-fsx-cluster**.

1. Choose the **Infrastructure** tab and verify that your instance has been registered in the **windows-fsx-cluster** cluster.

## Step 7: Register a Windows task definition
<a name="register_windows_task_def"></a>

Before you can run Windows containers in your Amazon ECS cluster, you must register a task definition. The following task definition example displays a simple web page. The task launches two containers that have access to the FSx for Windows File Server file system. The first container writes an HTML file to the file system. The second container moves the HTML file from the file system into the web server's root directory and serves the webpage.

1. Open the console at [https://console.aws.amazon.com/ecs/v2](https://console.aws.amazon.com/ecs/v2).

1. In the navigation pane, choose **Task definitions**.

1. Choose **Create new task definition**, **Create new task definition with JSON**.

1. In the JSON editor box, replace the values for your task execution role and the details about your FSx file system and then choose **Save**.

   ```
   {
       "containerDefinitions": [
           {
               "entryPoint": [
                   "powershell",
                   "-Command"
               ],
               "portMappings": [],
               "command": ["New-Item -Path C:\\fsx-windows-dir\\index.html -ItemType file -Value '<html> <head> <title>Amazon ECS Sample App</title> <style>body {margin-top: 40px; background-color: #333;} </style> </head><body> <div style=color:white;text-align:center> <h1>Amazon ECS Sample App</h1> <h2>It Works!</h2> <p>You are using Amazon FSx for Windows File Server file system for persistent container storage.</p>' -Force"],
               "cpu": 512,
               "memory": 256,
               "image": "mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019",
               "essential": false,
               "name": "container1",
               "mountPoints": [
                   {
                       "sourceVolume": "fsx-windows-dir",
                       "containerPath": "C:\\fsx-windows-dir",
                       "readOnly": false
                   }
               ]
           },
           {
               "entryPoint": [
                   "powershell",
                   "-Command"
               ],
               "portMappings": [
                   {
                       "hostPort": 443,
                       "protocol": "tcp",
                       "containerPort": 80
                   }
               ],
               "command": ["Remove-Item -Recurse C:\\inetpub\\wwwroot\\* -Force; Start-Sleep -Seconds 120; Move-Item -Path C:\\fsx-windows-dir\\index.html -Destination C:\\inetpub\\wwwroot\\index.html -Force; C:\\ServiceMonitor.exe w3svc"],
               "mountPoints": [
                   {
                       "sourceVolume": "fsx-windows-dir",
                       "containerPath": "C:\\fsx-windows-dir",
                       "readOnly": false
                   }
               ],
               "cpu": 512,
               "memory": 256,
               "image": "mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019",
               "essential": true,
               "name": "container2"
           }
       ],
       "family": "fsx-windows",
       "executionRoleArn": "arn:aws:iam::111122223333:role/ecsTaskExecutionRole",
       "volumes": [
           {
               "name": "fsx-windows-dir",
               "fsxWindowsFileServerVolumeConfiguration": {
                   "fileSystemId": "fs-0eeb5730b2EXAMPLE",
                   "authorizationConfig": {
                       "domain": "example.com",
                       "credentialsParameter": "arn:arn-1234"
                   },
                   "rootDirectory": "share"
               }
           }
       ]
   }
   ```
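Alternatively, you can register the same task definition from the AWS CLI. The following is a minimal sketch that validates a local task definition file before registering it; the file name is an assumption, and the JSON shown is a placeholder for the full task definition above.

```shell
# Placeholder task definition written to a local file; in practice, save the
# full JSON from the step above as fsx-windows-task-def.json instead.
cat > fsx-windows-task-def.json <<'EOF'
{
    "family": "fsx-windows",
    "containerDefinitions": []
}
EOF

# Confirm that the file parses as JSON before registering it
python3 -m json.tool fsx-windows-task-def.json > /dev/null && echo "JSON OK"

# With AWS credentials configured, register the task definition:
# aws ecs register-task-definition --cli-input-json file://fsx-windows-task-def.json
```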

## Step 8: Run a task and view the results
<a name="wfsx-run-task"></a>

Before running the task, verify that the status of your FSx for Windows File Server file system is **Available**. After it is available, you can run a task using the task definition that you created. The first container writes an HTML file to the shared file system, the second container moves the file into the web server's document root, and then the web server serves the page.

**Note**  
You might not be able to connect to the website from within a VPN.

**Run a task and view the results with the Amazon ECS console.**

1. Open the console at [https://console.aws.amazon.com/ecs/v2](https://console.aws.amazon.com/ecs/v2).

1. In the navigation pane, choose **Clusters**, and then choose **windows-fsx-cluster**.

1. Choose the **Tasks** tab, and then choose **Run new task**.

1. For **Launch Type**, choose **EC2**.

1. Under **Deployment configuration**, for **Task Definition**, choose **fsx-windows**, and then choose **Create**.

1. When your task status is **RUNNING**, choose the task ID.

1. Under **Containers**, when the **container1** status is **STOPPED**, choose **container2** to view the container's details.

1. Under **Container details for container2**, choose **Network bindings**, and then choose the external IP address that is associated with the container. Your browser opens and displays the following message.

   ```
   Amazon ECS Sample App
   It Works! 
   You are using Amazon FSx for Windows File Server file system for persistent container storage.
   ```
**Note**  
It might take a few minutes for the message to be displayed. If you don't see this message after a few minutes, check that you aren't connected through a VPN, and make sure that the security group for your container instance allows inbound HTTP traffic on port 443.

## Step 9: Clean up
<a name="wfsx-cleanup"></a>

**Note**  
It takes 20 to 45 minutes to delete the FSx for Windows File Server file system or the AD. You must wait until the FSx for Windows File Server file system delete operations are complete before starting the AD delete operations.

**Delete FSx for Windows File Server file system.**

1. Open the [Amazon FSx console](https://console.aws.amazon.com//fsx/).

1. Choose the radio button to the left of the FSx for Windows File Server file system that you just created.

1. Choose **Actions**.

1. Select **Delete file system**.

**Delete AD.**

1. Open the [Directory Service console](https://console.aws.amazon.com//directoryservicev2/).

1. Choose the radio button to the left of the AD you just created.

1. Choose **Actions**.

1. Select **Delete directory**.

**Delete the cluster.**

1. Open the console at [https://console.aws.amazon.com/ecs/v2](https://console.aws.amazon.com/ecs/v2).

1. In the navigation pane, choose **Clusters**, and then choose **windows-fsx-cluster**.

1. Choose **Delete cluster**.

1. Enter the confirmation phrase, and then choose **Delete**.

**Terminate EC2 instance.**

1. Open the [Amazon EC2 console](https://console.aws.amazon.com//ec2/).

1. In the navigation pane, choose **Instances**.

1. Select the check box for the EC2 instance that you created.

1. Choose **Instance state**, **Terminate instance**.

**Delete secret.**

1. Open the [Secrets Manager console](https://console.aws.amazon.com//secretsmanager/).

1. Choose the secret that you created for this walkthrough.

1. Choose **Actions**.

1. Select **Delete secret**.

# Configuring S3 Files for Amazon ECS
<a name="s3files-volumes"></a>

S3 Files is a shared file system that connects any AWS compute resource directly with your data in Amazon S3. It provides fast, direct access to all of your S3 data as files with full file system semantics and low-latency performance, without your data ever leaving S3. You can read, write, and organize data using file and directory operations, while S3 Files keeps your file system and S3 bucket synchronized automatically. With Amazon ECS, you can define S3 file systems as volumes in your task definitions, giving your containers direct file system access to data stored in S3 buckets. To learn more about Amazon S3 Files and its capabilities, see the [Amazon S3 User Guide](https://docs.aws.amazon.com/AmazonS3/latest/userguide/).

## Availability
<a name="s3files-volume-availability"></a>

S3 Files support in Amazon ECS is available for the following launch types at General Availability:
+ **Fargate** — Fully supported.
+ **Amazon ECS Managed Instances** — Fully supported.

**Important**  
S3 Files are not supported on the Amazon EC2 launch type at this time. If you configure an S3 file system in a task definition and attempt to run it on the Amazon EC2 launch type, the task will fail at launch. Amazon EC2 launch type support is planned for a future release.

## Considerations
<a name="s3files-volume-considerations"></a>
+ S3 file systems use a dedicated `s3filesVolumeConfiguration` parameter in the task definition.
+ S3 file systems require a full Amazon Resource Name (ARN) to identify the file system. The ARN format is:

  ```
  arn:{partition}:s3files:{region}:{account-id}:file-system/fs-xxxxx
  ```
+ Transit encryption is mandatory for S3 file system volumes and is automatically enforced. There is no option to disable it.
+ Task IAM Role is mandatory for S3 file system volumes and is automatically enforced. There is no option to disable it.
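The ARN format above can be sanity-checked locally before you put it in a task definition. The following is a sketch in shell; the file system ID and account number are illustrative.

```shell
# Hypothetical ARN following the documented format:
# arn:{partition}:s3files:{region}:{account-id}:file-system/fs-xxxxx
arn="arn:aws:s3files:us-east-1:123456789012:file-system/fs-0123456789abcdef0"

# Check partition, region, 12-digit account ID, and file system ID segments
if echo "$arn" | grep -Eq '^arn:[a-z0-9-]+:s3files:[a-z0-9-]+:[0-9]{12}:file-system/fs-[0-9a-f]+$'; then
    echo "ARN format looks valid"
fi
```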

## Prerequisites
<a name="s3files-volume-prerequisites"></a>

Before configuring S3 file system volumes in your Amazon ECS task definitions, ensure the following prerequisites are met:
+ **An S3 file system and mount target** — You must have an S3 file system created and associated with an S3 bucket. For instructions on creating an S3 file system, see the [Amazon S3 Files User Guide](https://docs.aws.amazon.com/AmazonS3/latest/userguide/).
+ **A Task IAM Role** — Your task definition must include a Task IAM Role with the following permissions:
  + Permissions to connect to and interact with S3 file systems from your application code (running in the container).
  + Permissions to read S3 objects from your application code (running in the container).
+ **VPC and security group configuration** — Your S3 file system must be accessible from the VPC and subnets where your Amazon ECS tasks run.
+ **(Optional) S3 Files access points** — If you want to enforce application-specific access controls, create an S3 Files access point and provide the ARN in the task definition.

For more information, refer to [prerequisites for S3 Files](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-files-prereq-policies.html#s3-files-prereq-iam-compute-role).

# Specify an Amazon S3 Files volume in your Amazon ECS task definition
<a name="specify-s3files-config"></a>

You can configure S3 Files volumes in your Amazon ECS task definitions using the Amazon ECS console, the AWS CLI, or the AWS API.

## Using the Amazon ECS console
<a name="s3files-volume-console"></a>

1. Open the Amazon ECS console at [https://console.aws.amazon.com/ecs/](https://console.aws.amazon.com/ecs/).

1. In the navigation pane, choose **Task definitions**.

1. Choose **Create new task definition** or select an existing task definition and create a new revision.

1. In the **Infrastructure** section, ensure you have a Task IAM Role configured with the required permissions.

1. In the **Storage** section, choose **Add volume**.

1. For **Volume type**, select **S3 Files**.

1. For **File system ARN**, enter the full ARN of your S3 file system. The ARN format is:

   ```
   arn:{partition}:s3files:{region}:{account-id}:file-system/fs-xxxxx
   ```

1. (Optional) For **Root directory**, enter the path within the file system to mount as the root. If not specified, the root of the file system (`/`) is used.

1. (Optional) For **Transit encryption port**, enter the port number for sending encrypted data between the Amazon ECS host and the S3 file system. If you don't specify a transit encryption port, it uses the port selection strategy that the Amazon EFS mount helper uses.

1. (Optional) For **Access point ARN**, select the S3 Files access point to use from the dropdown list.

1. In the **Container mount points** section, select the container, and then specify the path in the container where the volume is mounted.

1. Choose **Create** to create the task definition.

## Using the AWS CLI
<a name="s3files-volume-cli"></a>

To specify an S3 Files volume in a task definition using the AWS CLI, use the `register-task-definition` command with the `s3filesVolumeConfiguration` parameter in the volume definition.

The following is an example task definition JSON snippet that defines an S3 Files volume and mounts it to a container:

```
{
  "family": "s3files-task-example",
  "taskRoleArn": "arn:aws:iam::123456789012:role/ecsTaskRole",
  "containerDefinitions": [
    {
      "name": "my-container",
      "image": "my-image:latest",
      "essential": true,
      "mountPoints": [
        {
          "containerPath": "/mnt/s3data",
          "sourceVolume": "my-s3files-volume"
        }
      ]
    }
  ],
  "volumes": [
    {
      "name": "my-s3files-volume",
      "s3filesVolumeConfiguration": {
        "fileSystemArn": "arn:aws:s3files:us-east-1:123456789012:file-system/fs-0123456789abcdef0",
        "rootDirectory": "/",
        "transitEncryptionPort": 2999
      }
    }
  ]
}
```

Register the task definition:

```
aws ecs register-task-definition --cli-input-json file://s3files-task-def.json
```

To use an access point, include the `accessPointArn` parameter:

```
{
  "name": "my-s3files-volume",
  "s3filesVolumeConfiguration": {
    "fileSystemArn": "arn:aws:s3files:us-east-1:123456789012:file-system/fs-0123456789abcdef0",
    "rootDirectory": "/",
    "transitEncryptionPort": 2999,
    "accessPointArn": "arn:aws:s3files:us-east-1:123456789012:file-system/fs-0123456789abcdef0/access-point/fsap-0123456789abcdef0"
  }
}
```

## S3 Files volume configuration parameters
<a name="s3files-volume-parameters"></a>

The following table describes the parameters available in the `s3filesVolumeConfiguration` object:

`fileSystemArn`  
Type: String  
Required: Yes  
The full ARN of the S3 file system to mount. Format: `arn:{partition}:s3files:{region}:{account-id}:file-system/fs-xxxxx`

`rootDirectory`  
Type: String  
Required: No  
The directory within the S3 file system to mount as the root of the volume. Defaults to `/` if not specified.

`transitEncryptionPort`  
Type: Integer  
Required: No  
The port to use for sending encrypted data between the Amazon ECS host and the S3 file system. Transit encryption itself is always enabled and cannot be disabled.

`accessPointArn`  
Type: String  
Required: No  
The full ARN of the S3 Files access point to use. Access points provide application-specific entry points into the file system with enforced user identity and root directory settings.

# Use Docker volumes with Amazon ECS
<a name="docker-volumes"></a>

When using Docker volumes, you can use the built-in `local` driver or a third-party volume driver. Docker volumes are managed by Docker, and a directory that contains the volume data is created in `/var/lib/docker/volumes` on the container instance.

To use Docker volumes, specify a `dockerVolumeConfiguration` in your task definition. For more information, see [Volumes](https://docs.docker.com/engine/storage/volumes/) in the Docker documentation.

Some common use cases for Docker volumes are the following:
+ To provide persistent data volumes for use with containers
+ To share a defined data volume at different locations on different containers on the same container instance
+ To define an empty, nonpersistent data volume and mount it on multiple containers within the same task
+ To provide a data volume to your task that's managed by a third-party driver

## Considerations for using Docker volumes
<a name="docker-volume-considerations"></a>

Consider the following when using Docker volumes:
+ Docker volumes are only supported when using the EC2 launch type or external instances.
+ Windows containers only support the use of the `local` driver.
+ If a third-party driver is used, make sure it's installed and active on the container instance before the container agent is started. If the third-party driver isn't active before the agent is started, you can restart the container agent using one of the following commands:
  + For the Amazon ECS-optimized Amazon Linux 2 AMI:

    ```
    sudo systemctl restart ecs
    ```
  + For the Amazon ECS-optimized Amazon Linux AMI:

    ```
    sudo stop ecs && sudo start ecs
    ```

For information about how to specify a Docker volume in a task definition, see [Specify a Docker volume in an Amazon ECS task definition](specify-volume-config.md).

# Specify a Docker volume in an Amazon ECS task definition
<a name="specify-volume-config"></a>

Before your containers can use data volumes, you must specify the volume and mount point configurations in your task definition. This section describes the volume configuration for a container. For tasks that use a Docker volume, specify a `dockerVolumeConfiguration`. For tasks that use a bind mount host volume, specify a `host` and optional `sourcePath`.

The following task definition JSON shows the syntax for the `volumes` and `mountPoints` objects for a container.

```
{
    "containerDefinitions": [
        {
            "mountPoints": [
                {
                    "sourceVolume": "string",
                    "containerPath": "/path/to/mount_volume",
                    "readOnly": boolean
                }
            ]
        }
    ],
    "volumes": [
        {
            "name": "string",
            "dockerVolumeConfiguration": {
                "scope": "string",
                "autoprovision": boolean,
                "driver": "string",
                "driverOpts": {
                    "key": "value"
                },
                "labels": {
                    "key": "value"
                }
            }
        }
    ]
}
```

`name`  
Type: String  
Required: No  
The name of the volume. Up to 255 letters (uppercase and lowercase), numbers, hyphens (`-`), and underscores (`_`) are allowed. This name is referenced in the `sourceVolume` parameter of the container definition `mountPoints` object.

`dockerVolumeConfiguration`  
Type: [DockerVolumeConfiguration](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_DockerVolumeConfiguration.html) Object  
Required: No  
This parameter is specified when using Docker volumes. Docker volumes are supported only when running tasks on EC2 instances. Windows containers support only the use of the `local` driver. To use bind mounts, specify a `host` instead.    
`scope`  
Type: String  
Valid Values: `task` | `shared`  
Required: No  
The scope for the Docker volume, which determines its lifecycle. Docker volumes that are scoped to a `task` are automatically provisioned when the task starts and destroyed when the task stops. Docker volumes that are scoped as `shared` persist after the task stops.  
`autoprovision`  
Type: Boolean  
Default value: `false`  
Required: No  
If this value is `true`, the Docker volume is created if it doesn't already exist. This field is used only if the `scope` is `shared`. If the `scope` is `task`, then this parameter must be omitted.  
`driver`  
Type: String  
Required: No  
The Docker volume driver to use. The driver value must match the driver name provided by Docker because this name is used for task placement. If the driver was installed by using the Docker plugin CLI, use `docker plugin ls` to retrieve the driver name from your container instance. If the driver was installed by using another method, use Docker plugin discovery to retrieve the driver name.  
`driverOpts`  
Type: String  
Required: No  
A map of Docker driver-specific options to pass through. This parameter maps to `DriverOpts` in the Create a volume section of Docker.  
`labels`  
Type: String  
Required: No  
Custom metadata to add to your Docker volume.

`mountPoints`  
Type: Object array  
Required: No  
The mount points for the data volumes in your container. This parameter maps to `Volumes` in the create-container Docker API and the `--volume` option to docker run.  
Windows containers can mount whole directories on the same drive as `$env:ProgramData`. Windows containers cannot mount directories on a different drive, and mount points cannot be used across drives. You must specify mount points to attach an Amazon EBS volume directly to an Amazon ECS task.    
`sourceVolume`  
Type: String  
Required: Yes, when `mountPoints` are used  
The name of the volume to mount.  
`containerPath`  
Type: String  
Required: Yes, when `mountPoints` are used  
The path in the container where the volume will be mounted.  
`readOnly`  
Type: Boolean  
Required: No  
If this value is `true`, the container has read-only access to the volume. If this value is `false`, then the container can write to the volume. The default value is `false`.  
For tasks that run on EC2 instances running the Windows operating system, leave the value as the default of `false`.

# Docker volume examples for Amazon ECS
<a name="docker-volume-examples"></a>

The following examples show how to provide ephemeral storage for a container, how to provide a shared volume for multiple containers, and how to provide NFS persistent storage for a container.

**To provide ephemeral storage for a container using a Docker volume**

In this example, a container uses an empty data volume that is disposed of after the task finishes. For example, you might have a container that needs to access a scratch file storage location during a task. You can achieve this using a Docker volume.

1. In the task definition `volumes` section, define a data volume with `name` and `DockerVolumeConfiguration` values. In this example, we specify the scope as `task` so the volume is deleted after the task stops and use the built-in `local` driver.

   ```
   "volumes": [
       {
           "name": "scratch",
           "dockerVolumeConfiguration" : {
               "scope": "task",
               "driver": "local",
               "labels": {
                   "scratch": "space"
               }
           }
       }
   ]
   ```

1. In the `containerDefinitions` section, define a container with a `mountPoints` value that references the name of the defined volume, and a `containerPath` value that specifies the path in the container where the volume is mounted.

   ```
   "containerDefinitions": [
       {
           "name": "container-1",
           "mountPoints": [
               {
                 "sourceVolume": "scratch",
                 "containerPath": "/var/scratch"
               }
           ]
       }
   ]
   ```

**To provide persistent storage for multiple containers using a Docker volume**

In this example, you want a shared volume that multiple containers can use, and you want it to persist after any single task that uses it has stopped. The built-in `local` driver is used, so the volume is still tied to the lifecycle of the container instance.

1. In the task definition `volumes` section, define a data volume with `name` and `DockerVolumeConfiguration` values. In this example, specify a `shared` scope so that the volume persists, set `autoprovision` to `true` so that the volume is created for use, and use the built-in `local` driver.

   ```
   "volumes": [
       {
           "name": "database",
           "dockerVolumeConfiguration" : {
               "scope": "shared",
               "autoprovision": true,
               "driver": "local",
               "labels": {
                   "database": "database_name"
               }
           }
       }
   ]
   ```

1. In the `containerDefinitions` section, define a container with a `mountPoints` value that references the name of the defined volume, and a `containerPath` value that specifies the path in the container where the volume is mounted.

   ```
   "containerDefinitions": [
       {
           "name": "container-1",
           "mountPoints": [
           {
             "sourceVolume": "database",
             "containerPath": "/var/database"
           }
         ]
       },
       {
         "name": "container-2",
         "mountPoints": [
           {
             "sourceVolume": "database",
             "containerPath": "/var/database"
           }
         ]
       }
     ]
   ```

**To provide NFS persistent storage for a container using a Docker volume**

In this example, a container uses an NFS data volume that is automatically mounted when the task starts and unmounted when the task stops. This uses the Docker built-in `local` driver. For example, you might have local NFS storage that you need to access from an ECS Anywhere task. You can achieve this using a Docker volume with NFS driver options.

1. In the task definition `volumes` section, define a data volume with `name` and `DockerVolumeConfiguration` values. In this example, specify a `task` scope so the volume is unmounted after the task stops. Use the `local` driver and configure the `driverOpts` with the `type`, `device`, and `o` options accordingly. Replace `NFS_SERVER` with the NFS server endpoint.

   ```
   "volumes": [
          {
              "name": "NFS",
              "dockerVolumeConfiguration" : {
                  "scope": "task",
                  "driver": "local",
                  "driverOpts": {
                      "type": "nfs",
                      "device": "$NFS_SERVER:/mnt/nfs",
                      "o": "addr=$NFS_SERVER"
                  }
              }
          }
      ]
   ```

1. In the `containerDefinitions` section, define a container with a `mountPoints` value that references the name of the defined volume, and a `containerPath` value that specifies the path in the container where the volume is mounted.

   ```
   "containerDefinitions": [
          {
              "name": "container-1",
              "mountPoints": [
                  {
                    "sourceVolume": "NFS",
                    "containerPath": "/var/nfsmount"
                  }
              ]
          }
      ]
   ```
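To make the NFS mount read-only for the container, you can either set `"readOnly": true` on the container's mount point, or pass the standard NFS `ro` mount option through `o`, as in the following sketch. The export path is illustrative, and the exact option string depends on your NFS server.

```
"driverOpts": {
    "type": "nfs",
    "device": "$NFS_SERVER:/mnt/nfs",
    "o": "addr=$NFS_SERVER,ro"
}
```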

# Use bind mounts with Amazon ECS
<a name="bind-mounts"></a>

With bind mounts, a file or directory on a host, such as an Amazon EC2 instance, is mounted into a container. Bind mounts are supported for tasks that are hosted on both Fargate and Amazon EC2 instances. Bind mounts are tied to the lifecycle of the container that uses them. After all of the containers that use a bind mount are stopped, such as when a task is stopped, the data is removed. For tasks that are hosted on Amazon EC2 instances, the data can be tied to the lifecycle of the host Amazon EC2 instance by specifying a `host` and optional `sourcePath` value in your task definition. For more information, see [Bind mounts](https://docs.docker.com/engine/storage/bind-mounts/) in the Docker documentation.

The following are common use cases for bind mounts.
+ To provide an empty data volume to mount in one or more containers.
+ To mount a host data volume in one or more containers.
+ To share a data volume from a source container with other containers in the same task.
+ To expose a path and its contents from a Dockerfile to one or more containers.

## Considerations when using bind mounts
<a name="bind-mount-considerations"></a>

When using bind mounts, consider the following.
+ By default, tasks that are hosted on AWS Fargate using platform version `1.4.0` or later (Linux) or `1.0.0` or later (Windows) receive a minimum of 20 GiB of ephemeral storage for bind mounts. You can increase the total amount of ephemeral storage up to a maximum of 200 GiB by specifying the `ephemeralStorage` parameter in your task definition.
+ To expose files from a Dockerfile to a data volume when a task is run, the Amazon ECS data plane looks for a `VOLUME` directive. If the absolute path that's specified in the `VOLUME` directive is the same as the `containerPath` that's specified in the task definition, the data in the `VOLUME` directive path is copied to the data volume. In the following Dockerfile example, a file that's named `examplefile` in the `/var/log/exported` directory is written to the host and then mounted inside the container.

  ```
  FROM public.ecr.aws/amazonlinux/amazonlinux:latest
  RUN mkdir -p /var/log/exported
  RUN touch /var/log/exported/examplefile
  VOLUME ["/var/log/exported"]
  ```

  By default, the volume permissions are set to `0755` and the owner as `root`. You can customize these permissions in the Dockerfile. The following example defines the owner of the directory as `node`.

  ```
  FROM public.ecr.aws/amazonlinux/amazonlinux:latest
  RUN yum install -y shadow-utils && yum clean all
  RUN useradd node
  RUN mkdir -p /var/log/exported && chown node:node /var/log/exported
  RUN touch /var/log/exported/examplefile
  USER node
  VOLUME ["/var/log/exported"]
  ```
+ For tasks that are hosted on Amazon EC2 instances, when a `host` and `sourcePath` value aren't specified, the Docker daemon manages the bind mount for you. When no containers reference this bind mount, the Amazon ECS container agent task cleanup service eventually deletes it. By default, this happens three hours after the container exits. However, you can configure this duration with the `ECS_ENGINE_TASK_CLEANUP_WAIT_DURATION` agent variable. For more information, see [Amazon ECS container agent configuration](ecs-agent-config.md). If you need this data to persist beyond the lifecycle of the container, specify a `sourcePath` value for the bind mount.
+ For tasks that are hosted on Amazon ECS Managed Instances, portions of the root filesystem are read-only. Read/write bind mounts must use writable directories such as `/var` for persistent data or `/tmp` for temporary data. Attempting to create read/write bind mounts to other directories results in the task failing to launch with an error similar to the following:

  ```
  error creating empty volume: error while creating volume path '/path': mkdir /path: read-only file system
  ```

  Read-only bind mounts (configured with `"readOnly": true` in the `mountPoints` parameter) can point to any accessible directory on the host.

  To view a full list of writable paths, you can run a task on an Amazon ECS Managed Instance and use it to inspect the instance's mount table. Create a task definition with the following settings to access the host filesystem:

  ```
  {
      "pidMode": "host",
      "containerDefinitions": [{
          "privileged": true,
          ...
      }]
  }
  ```

  Then run the following commands from within the container:

  ```
  # List writable mounts
  cat /proc/1/root/proc/1/mounts | awk '$4 ~ /^rw,/ || $4 == "rw" {print $2}' | sort
  
  # List read-only mounts
  cat /proc/1/root/proc/1/mounts | awk '$4 ~ /^ro,/ || $4 == "ro" {print $2}' | sort
  ```
**Important**  
The `privileged` setting grants the container extended capabilities on the host, equivalent to root access. In this example, it is used to inspect the host's mount table for diagnostic purposes. For more information, see [Avoid running containers as privileged (Amazon EC2)](security-tasks-containers.md#security-tasks-containers-recommendations-avoid-privileged-containers).

  For more information about running commands interactively in containers, see [Monitor Amazon ECS containers with ECS Exec](ecs-exec.md).
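The mount-table filters above can be tried offline against a sample excerpt. The entries below are illustrative, not taken from a real Managed Instance.

```shell
# /proc/mounts-style lines: device, mount point, fs type, options, dump, pass
sample='/dev/nvme0n1p1 /var ext4 rw,relatime 0 0
/dev/nvme0n1p1 /usr ext4 ro,relatime 0 0
tmpfs /tmp tmpfs rw 0 0'

# Same filter as above: keep only mount points whose options begin with rw
echo "$sample" | awk '$4 ~ /^rw,/ || $4 == "rw" {print $2}' | sort
# Prints /tmp and /var
```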

# Specify a bind mount in an Amazon ECS task definition
<a name="specify-bind-mount-config"></a>

For Amazon ECS tasks that are hosted on either Fargate or Amazon EC2 instances, the following task definition JSON snippet shows the syntax for the `volumes`, `mountPoints`, and `ephemeralStorage` objects for a task definition.

```
{
   "family": "",
   ...
   "containerDefinitions" : [
      {
         "mountPoints" : [
            {
               "containerPath" : "/path/to/mount_volume",
               "sourceVolume" : "string"
            }
          ],
          "name" : "string"
       }
    ],
    ...
    "volumes" : [
       {
          "name" : "string"
       }
    ],
    "ephemeralStorage": {
	   "sizeInGiB": integer
    }
}
```

For Amazon ECS tasks that are hosted on Amazon EC2 instances, you can use the optional `host` parameter and a `sourcePath` when specifying the task volume details. When it's specified, it ties the bind mount to the lifecycle of the task rather than the container.

```
"volumes" : [
    {
        "host" : {
            "sourcePath" : "string"
        },
        "name" : "string"
    }
]
```

The following describes each task definition parameter in more detail.

`name`  
Type: String  
Required: No  
The name of the volume. Up to 255 letters (uppercase and lowercase), numbers, hyphens (`-`), and underscores (`_`) are allowed. This name is referenced in the `sourceVolume` parameter of the container definition `mountPoints` object.

`host`  
Required: No  
The `host` parameter is used to tie the lifecycle of the bind mount to the host Amazon EC2 instance, rather than the task, and to specify where the data is stored. If the `host` parameter is empty, then the Docker daemon assigns a host path for your data volume, but the data is not guaranteed to persist after the containers that are associated with it stop running.  
Windows containers can mount whole directories on the same drive as `$env:ProgramData`.  
The `sourcePath` parameter is supported only when using tasks that are hosted on Amazon EC2 instances or Amazon ECS Managed Instances.  
`sourcePath`  
Type: String  
Required: No  
When the `host` parameter is used, specify a `sourcePath` to declare the path on the host Amazon EC2 instance that is presented to the container. If this parameter is empty, then the Docker daemon assigns a host path for you. If the `host` parameter contains a `sourcePath` file location, then the data volume persists at the specified location on the host Amazon EC2 instance until you delete it manually. If the `sourcePath` value does not exist on the host Amazon EC2 instance, the Docker daemon creates it. If the location does exist, the contents of the source path folder are exported.

`mountPoints`  
Type: Object array  
Required: No  
The mount points for the data volumes in your container. This parameter maps to `Volumes` in the create-container Docker API and the `--volume` option to `docker run`.  
Windows containers can mount whole directories on the same drive as `$env:ProgramData`. Windows containers cannot mount directories on a different drive, and mount points cannot be used across drives. You must specify mount points to attach an Amazon EBS volume directly to an Amazon ECS task.    
`sourceVolume`  
Type: String  
Required: Yes, when `mountPoints` are used  
The name of the volume to mount.  
`containerPath`  
Type: String  
Required: Yes, when `mountPoints` are used  
The path in the container where the volume will be mounted.  
`readOnly`  
Type: Boolean  
Required: No  
If this value is `true`, the container has read-only access to the volume. If this value is `false`, then the container can write to the volume. The default value is `false`.  
For tasks that run on EC2 instances running the Windows operating system, leave the value as the default of `false`.

`ephemeralStorage`  
Type: Object  
Required: No  
The amount of ephemeral storage to allocate for the task. This parameter is used to expand the total amount of ephemeral storage available, beyond the default amount, for tasks hosted on AWS Fargate using platform version `1.4.0` or later (Linux) or `1.0.0` or later (Windows).  
You can use the AWS Copilot CLI, AWS CloudFormation, the AWS SDKs, or the AWS CLI to specify ephemeral storage for a bind mount.

# Bind mount examples for Amazon ECS
<a name="bind-mount-examples"></a>

The following examples cover the common use cases for using a bind mount for your containers.

**To allocate an increased amount of ephemeral storage space for a Fargate task**

For Amazon ECS tasks that are hosted on Fargate using platform version `1.4.0` or later (Linux) or `1.0.0` or later (Windows), you can allocate more than the default amount of ephemeral storage for the containers in your task to use. This example can be incorporated into the other examples to allocate more ephemeral storage for your Fargate tasks.
+ In the task definition, define an `ephemeralStorage` object. The `sizeInGiB` must be an integer between the values of `21` and `200` and is expressed in GiB.

  ```
  "ephemeralStorage": {
      "sizeInGiB": integer
  }
  ```
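As an illustration only (this helper isn't part of any AWS SDK or tooling), the allowed range can be checked before you register the task definition. The following Python sketch enforces the `21`–`200` GiB bounds described above:

```python
def validate_ephemeral_storage(size_in_gib):
    """Check a Fargate ephemeralStorage sizeInGiB value against the allowed range."""
    if not isinstance(size_in_gib, int):
        raise TypeError("sizeInGiB must be an integer")
    if not 21 <= size_in_gib <= 200:
        raise ValueError(
            f"sizeInGiB must be between 21 and 200 GiB, got {size_in_gib}"
        )
    return size_in_gib

# 100 GiB is valid; 20 GiB (the default) would be rejected by this check
print(validate_ephemeral_storage(100))  # 100
```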

**To provide an empty data volume for one or more containers**

In some cases, you want to provide the containers in a task some scratch space. For example, you might have two database containers that need to access the same scratch file storage location during a task. This can be achieved using a bind mount.

1. In the task definition `volumes` section, define a bind mount with the name `database_scratch`.

   ```
     "volumes": [
       {
         "name": "database_scratch"
       }
     ]
   ```

1. In the `containerDefinitions` section, create the database container definitions so that they mount the volume.

   ```
   "containerDefinitions": [
       {
         "name": "database1",
         "image": "my-repo/database",
         "cpu": 100,
         "memory": 100,
         "essential": true,
         "mountPoints": [
           {
             "sourceVolume": "database_scratch",
             "containerPath": "/var/scratch"
           }
         ]
       },
       {
         "name": "database2",
         "image": "my-repo/database",
         "cpu": 100,
         "memory": 100,
         "essential": true,
         "mountPoints": [
           {
             "sourceVolume": "database_scratch",
             "containerPath": "/var/scratch"
           }
         ]
       }
     ]
   ```
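A mistyped `sourceVolume` name is a common reason a mount fails at task launch. As a local sanity check (a hypothetical helper, not part of the AWS SDK), you can confirm that every `sourceVolume` referenced in a container's `mountPoints` matches a volume declared in the task definition's `volumes` section:

```python
def undeclared_source_volumes(task_def):
    """Return sourceVolume names referenced in mountPoints but missing from volumes."""
    declared = {v["name"] for v in task_def.get("volumes", [])}
    referenced = {
        mp["sourceVolume"]
        for container in task_def.get("containerDefinitions", [])
        for mp in container.get("mountPoints", [])
    }
    return referenced - declared

# The database_scratch example above, reduced to the relevant fields
task_def = {
    "volumes": [{"name": "database_scratch"}],
    "containerDefinitions": [
        {"name": "database1",
         "mountPoints": [{"sourceVolume": "database_scratch",
                          "containerPath": "/var/scratch"}]},
        {"name": "database2",
         "mountPoints": [{"sourceVolume": "database_scratch",
                          "containerPath": "/var/scratch"}]},
    ],
}
print(undeclared_source_volumes(task_def))  # set() — all references resolve
```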

**To expose a path and its contents in a Dockerfile to a container**

In this example, you have a Dockerfile that writes data that you want to mount inside a container. This example works for tasks that are hosted on Fargate or Amazon EC2 instances.

1. Create a Dockerfile. The following example uses the public Amazon Linux 2 container image and creates a file that's named `examplefile` in the `/var/log/exported` directory that we want to mount inside the container. The `VOLUME` directive should specify an absolute path.

   ```
   FROM public.ecr.aws/amazonlinux/amazonlinux:latest
   RUN mkdir -p /var/log/exported
   RUN touch /var/log/exported/examplefile
   VOLUME ["/var/log/exported"]
   ```

   By default, the volume permissions are set to `0755` and the owner as `root`. These permissions can be changed in the Dockerfile. In the following example, the owner of the `/var/log/exported` directory is set to `node`.

   ```
   FROM public.ecr.aws/amazonlinux/amazonlinux:latest
   RUN yum install -y shadow-utils && yum clean all
   RUN useradd node
   RUN mkdir -p /var/log/exported && chown node:node /var/log/exported					    
   USER node
   RUN touch /var/log/exported/examplefile
   VOLUME ["/var/log/exported"]
   ```

1. In the task definition `volumes` section, define a volume with the name `application_logs`.

   ```
     "volumes": [
       {
         "name": "application_logs"
       }
     ]
   ```

1. In the `containerDefinitions` section, create the application container definitions so that they mount the volume. The `containerPath` value must match the absolute path that's specified in the `VOLUME` directive from the Dockerfile.

   ```
     "containerDefinitions": [
       {
         "name": "application1",
         "image": "my-repo/application",
         "cpu": 100,
         "memory": 100,
         "essential": true,
         "mountPoints": [
           {
             "sourceVolume": "application_logs",
             "containerPath": "/var/log/exported"
           }
         ]
       },
       {
         "name": "application2",
         "image": "my-repo/application",
         "cpu": 100,
         "memory": 100,
         "essential": true,
         "mountPoints": [
           {
             "sourceVolume": "application_logs",
             "containerPath": "/var/log/exported"
           }
         ]
       }
     ]
   ```

**To provide an empty data volume for a container that's tied to the lifecycle of the host Amazon EC2 instance**

For tasks that are hosted on Amazon EC2 instances, you can use bind mounts and have the data tied to the lifecycle of the host Amazon EC2 instance. You can do this by using the `host` parameter and specifying a `sourcePath` value. Any files that exist at the `sourcePath` are presented to the containers at the `containerPath` value. Any files that are written to the `containerPath` value are written to the `sourcePath` value on the host Amazon EC2 instance.
**Important**  
Amazon ECS doesn't sync your storage across Amazon EC2 instances. Tasks that use persistent storage can be placed on any Amazon EC2 instance in your cluster that has available capacity. If your tasks require persistent storage after stopping and restarting, always specify the same Amazon EC2 instance at task launch time with the AWS CLI [start-task](https://docs.aws.amazon.com/cli/latest/reference/ecs/start-task.html) command. You can also use Amazon EFS volumes for persistent storage. For more information, see [Use Amazon EFS volumes with Amazon ECS](efs-volumes.md).

1. In the task definition `volumes` section, define a bind mount with `name` and `sourcePath` values. In the following example, the host Amazon EC2 instance contains data at `/ecs/webdata` that you want to mount inside the container.

   ```
     "volumes": [
       {
         "name": "webdata",
         "host": {
           "sourcePath": "/ecs/webdata"
         }
       }
     ]
   ```

1. In the `containerDefinitions` section, define a container with a `mountPoints` value that references the name of the bind mount and the `containerPath` value to mount the bind mount at on the container.

   ```
     "containerDefinitions": [
       {
         "name": "web",
         "image": "public.ecr.aws/docker/library/nginx:latest",
         "cpu": 99,
         "memory": 100,
         "portMappings": [
           {
             "containerPort": 80,
             "hostPort": 80
           }
         ],
         "essential": true,
         "mountPoints": [
           {
             "sourceVolume": "webdata",
             "containerPath": "/usr/share/nginx/html"
           }
         ]
       }
     ]
   ```

**To mount a defined volume on multiple containers at different locations**

You can define a data volume in a task definition and mount that volume at different locations on different containers. For example, your host Amazon EC2 instance has a website data folder at `/data/webroot`. You might want to mount that data volume as read-only on two different web servers that have different document roots.

1. In the task definition `volumes` section, define a data volume with the name `webroot` and the source path `/data/webroot`.

   ```
     "volumes": [
       {
         "name": "webroot",
         "host": {
           "sourcePath": "/data/webroot"
         }
       }
     ]
   ```

1. In the `containerDefinitions` section, define a container for each web server with `mountPoints` values that associate the `webroot` volume with the `containerPath` value pointing to the document root for that container.

   ```
     "containerDefinitions": [
       {
         "name": "web-server-1",
         "image": "my-repo/ubuntu-apache",
         "cpu": 100,
         "memory": 100,
         "portMappings": [
           {
             "containerPort": 80,
             "hostPort": 80
           }
         ],
         "essential": true,
         "mountPoints": [
           {
             "sourceVolume": "webroot",
             "containerPath": "/var/www/html",
             "readOnly": true
           }
         ]
       },
       {
         "name": "web-server-2",
         "image": "my-repo/sles11-apache",
         "cpu": 100,
         "memory": 100,
         "portMappings": [
           {
             "containerPort": 8080,
             "hostPort": 8080
           }
         ],
         "essential": true,
         "mountPoints": [
           {
             "sourceVolume": "webroot",
             "containerPath": "/srv/www/htdocs",
             "readOnly": true
           }
         ]
       }
     ]
   ```

**To mount volumes from another container using `volumesFrom`**

For tasks hosted on Amazon EC2 instances, you can define one or more volumes on a container, and then use the `volumesFrom` parameter in a different container definition within the same task to mount all of the volumes from the `sourceContainer` at their originally defined mount points. The `volumesFrom` parameter applies to volumes defined in the task definition, and those that are built into the image with a Dockerfile.

1. (Optional) To share a volume that is built into an image, use the `VOLUME` instruction in the Dockerfile. The following example Dockerfile uses an `httpd` image, and then adds a volume mounted at `dockerfile_volume` in the Apache document root, which is the folder used by the `httpd` web server.

   ```
   FROM httpd
   VOLUME ["/usr/local/apache2/htdocs/dockerfile_volume"]
   ```

   You can build an image with this Dockerfile and push it to a repository, such as Docker Hub, and use it in your task definition. The example `my-repo/httpd_dockerfile_volume` image that's used in the following steps was built with the preceding Dockerfile.

1. Create a task definition that defines your other volumes and mount points for the containers. In this example `volumes` section, you create an empty volume called `empty`, which the Docker daemon manages. There's also a host volume defined that's called `host_etc`. It exports the `/etc` folder on the host container instance.

   ```
   {
     "family": "test-volumes-from",
     "volumes": [
       {
         "name": "empty",
         "host": {}
       },
       {
         "name": "host_etc",
         "host": {
           "sourcePath": "/etc"
         }
       }
     ],
   ```

   In the container definitions section, create a container that mounts the volumes defined earlier. In this example, the `web` container mounts the `empty` and `host_etc` volumes. This is the container that uses the image built with a volume in the Dockerfile.

   ```
   "containerDefinitions": [
       {
         "name": "web",
         "image": "my-repo/httpd_dockerfile_volume",
         "cpu": 100,
         "memory": 500,
         "portMappings": [
           {
             "containerPort": 80,
             "hostPort": 80
           }
         ],
         "mountPoints": [
           {
             "sourceVolume": "empty",
             "containerPath": "/usr/local/apache2/htdocs/empty_volume"
           },
           {
             "sourceVolume": "host_etc",
             "containerPath": "/usr/local/apache2/htdocs/host_etc"
           }
         ],
         "essential": true
       },
   ```

   Create another container that uses `volumesFrom` to mount all of the volumes that are associated with the `web` container. All of the volumes on the `web` container are likewise mounted on the `busybox` container. This includes the volume that's specified in the Dockerfile that was used to build the `my-repo/httpd_dockerfile_volume` image.

   ```
       {
         "name": "busybox",
         "image": "busybox",
         "volumesFrom": [
           {
             "sourceContainer": "web"
           }
         ],
         "cpu": 100,
         "memory": 500,
         "entryPoint": [
           "sh",
           "-c"
         ],
         "command": [
           "echo $(date) > /usr/local/apache2/htdocs/empty_volume/date && echo $(date) > /usr/local/apache2/htdocs/host_etc/date && echo $(date) > /usr/local/apache2/htdocs/dockerfile_volume/date"
         ],
         "essential": false
       }
     ]
   }
   ```

   When this task is run, the two containers mount the volumes, and the `command` in the `busybox` container writes the date and time to a file. This file is called `date` in each of the volume folders. The folders are then visible at the website displayed by the `web` container.
**Note**  
Because the `busybox` container runs a quick command and then exits, it must be set as `"essential": false` in the container definition. Otherwise, it stops the entire task when it exits.

# Managing container swap memory space on Amazon ECS
<a name="container-swap"></a>

With Amazon ECS, you can control the usage of swap memory space on your Linux-based Amazon EC2 instances at the container level. Using a per-container swap configuration, each container within a task definition can have swap enabled or disabled. For those that have it enabled, the maximum amount of swap space that's used can be limited. For example, latency-critical containers can have swap disabled. In contrast, containers with high transient memory demands can have swap turned on to reduce the chances of out-of-memory errors when the container is under load.

The swap configuration for a container is managed by the following container definition parameters.

`maxSwap`  
The total amount of swap memory (in MiB) a container can use. This parameter is translated to the `--memory-swap` option to docker run where the value is the sum of the container memory plus the `maxSwap` value.  
If a `maxSwap` value of `0` is specified, the container doesn't use swap. Accepted values are `0` or any positive integer. If the `maxSwap` parameter is omitted, the container uses the swap configuration for the container instance that it's running on. A `maxSwap` value must be set for the `swappiness` parameter to be used.

`swappiness`  
You can use this to tune a container's memory swappiness behavior. A `swappiness` value of `0` causes swapping to not occur unless required. A `swappiness` value of `100` causes pages to be swapped aggressively. Accepted values are whole numbers between `0` and `100`. If the `swappiness` parameter isn't specified, a default value of `60` is used. If a value isn't specified for `maxSwap`, this parameter is ignored. This parameter maps to the `--memory-swappiness` option to docker run.

The following snippet shows the JSON syntax:

```
"containerDefinitions": [{
        ...
        "linuxParameters": {
            "maxSwap": integer,
            "swappiness": integer
        },
        ...
}]
```
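To make the translation to Docker concrete: `--memory-swap` is the sum of the container memory and `maxSwap`, so a `maxSwap` of `0` leaves `--memory-swap` equal to the container memory, which disables swap for the container. The following Python sketch of that mapping is illustrative only (it is not ECS agent code):

```python
def docker_memory_swap(container_memory_mib, max_swap_mib=None):
    """Translate the ECS maxSwap setting to Docker's --memory-swap value (in MiB).

    Per the container definition parameters above, --memory-swap is the
    container memory plus maxSwap. maxSwap=0 means no swap for the container,
    and omitting maxSwap (None) inherits the instance's swap configuration.
    """
    if max_swap_mib is None:
        return None  # container uses the instance's swap configuration
    if max_swap_mib < 0:
        raise ValueError("maxSwap must be 0 or a positive integer")
    return container_memory_mib + max_swap_mib

print(docker_memory_swap(512, 1024))  # 1536
print(docker_memory_swap(512, 0))     # 512 -> equal to memory, swap disabled
```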

## Considerations
<a name="container-swap-considerations"></a>

Consider the following when you use a per-container swap configuration.
+ Swap space must be enabled and allocated on the Amazon EC2 instance hosting your tasks for the containers to use. By default, the Amazon ECS-optimized AMIs do not have swap enabled. You must enable swap on the instance to use this feature. For more information, see [Instance Store Swap Volumes](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-store-swap-volumes.html) in the *Amazon EC2 User Guide* or [How do I allocate memory to work as swap space in an Amazon EC2 instance?](https://repost.aws/knowledge-center/ec2-memory-swap-file).
+ The swap space container definition parameters are only supported for task definitions that use the EC2 launch type. They are not supported for task definitions intended only for use with Amazon ECS on Fargate.
+ This feature is only supported for Linux containers. Windows containers are not supported currently.
+ If the `maxSwap` and `swappiness` container definition parameters are omitted from a task definition, each container has a default `swappiness` value of `60`. Moreover, the total swap usage is limited to two times the memory of the container.
+ If you're using tasks on Amazon Linux 2023, the `swappiness` parameter isn't supported.

# Amazon ECS task definition differences for Amazon ECS Managed Instances
<a name="managed-instances-tasks-services"></a>

To use Amazon ECS Managed Instances, you must configure your task definition to use the Amazon ECS Managed Instances launch type. There are additional considerations when using Amazon ECS Managed Instances.

## Task definition parameters
<a name="managed-instances-task-parameters"></a>

Tasks that use Amazon ECS Managed Instances support most of the Amazon ECS task definition parameters that are available. However, some parameters have specific behaviors or limitations when used with Amazon ECS Managed Instances tasks.

The following task definition parameters are not valid in Amazon ECS Managed Instances tasks:
+ `disableNetworking`
+ `dnsSearchDomains`
+ `dnsServers`
+ `dockerLabels`
+ `dockerSecurityOptions`
+ `dockerVolumeConfiguration`
+ `ephemeralStorage`
+ `extraHosts`
+ `fsxWindowsFileServerVolumeConfiguration`
+ `hostname`
+ `inferenceAccelerator`
+ `ipcMode`
+ `links`
+ `maxSwap`
+ `proxyConfiguration`
+ `sharedMemorySize`
+ `sourcePath` volumes
+ `swappiness`
+ `tmpfs`

The following task definition parameters are valid in Amazon ECS Managed Instances tasks, but have limitations that should be noted:
+ `networkConfiguration` - Amazon ECS Managed Instances tasks use the `awsvpc` or `host` network mode.
+ `placementConstraints` - The following constraint attributes are supported.
  + `ecs.subnet-id`
  + `ecs.availability-zone`
  + `ecs.instance-type`
  + `ecs.cpu-architecture`
+ `requiresCompatibilities` - Must include `MANAGED_INSTANCES` to ensure the task definition is compatible with Amazon ECS Managed Instances.
+ `resourceRequirements` - `InferenceAccelerator` is not supported.
+ `operatingSystemFamily` - Amazon ECS Managed Instances use `LINUX`.
+ `volumes` - When using bind mounts with a `sourcePath`, the path must point to a writable directory on the host. Portions of the Amazon ECS Managed Instance filesystem are read-only. Writable directories include `/var` and `/tmp`. For more information, see [Use bind mounts with Amazon ECS](bind-mounts.md).

To ensure that your task definition validates for use with Amazon ECS Managed Instances, you can specify the following when you register the task definition: 
+ In the AWS Management Console, for the **Requires Compatibilities** field, specify `MANAGED_INSTANCES`.
+ In the AWS CLI, specify the `--requires-compatibilities` option.
+ In the Amazon ECS API, specify the `requiresCompatibilities` flag.

# Amazon ECS task definition differences for Fargate
<a name="fargate-tasks-services"></a>

To use Fargate, you must configure your task definition to use the Fargate launch type. There are additional considerations when using Fargate.

## Task definition parameters
<a name="fargate-task-parameters"></a>

Tasks that use Fargate don't support all of the Amazon ECS task definition parameters that are available. Some parameters aren't supported at all, and others behave differently for Fargate tasks.

The following task definition parameters are not valid in Fargate tasks:
+ `disableNetworking`
+ `dnsSearchDomains`
+ `dnsServers`
+ `dockerSecurityOptions`
+ `extraHosts`
+ `gpu`
+ `ipcMode`
+ `links`
+ `placementConstraints`
+ `privileged`
+ `maxSwap`
+ `swappiness`

The following task definition parameters are valid in Fargate tasks, but have limitations that should be noted:
+ `linuxParameters` – When specifying Linux-specific options that are applied to the container, for `capabilities` the only capability you can add is `CAP_SYS_PTRACE`. The `devices`, `sharedMemorySize`, and `tmpfs` parameters are not supported. For more information, see [Linux parameters](task_definition_parameters.md#container_definition_linuxparameters).
+ `volumes` – Fargate tasks only support bind mount host volumes, so the `dockerVolumeConfiguration` parameter is not supported. For more information, see [Volumes](task_definition_parameters.md#volumes).
+ `cpu` - For Windows containers on AWS Fargate, the value cannot be less than 1 vCPU.
+ `networkConfiguration` - Fargate tasks always use the `awsvpc` network mode.

To ensure that your task definition validates for use with Fargate, you can specify the following when you register the task definition: 
+ In the AWS Management Console, for the **Requires Compatibilities** field, specify `FARGATE`.
+ In the AWS CLI, specify the `--requires-compatibilities` option.
+ In the Amazon ECS API, specify the `requiresCompatibilities` flag.
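For example, when registering with the AWS SDK for Python (Boto3), the request body carries the same flag. The following is a minimal skeleton of such a request; the family name, image, and sizes are placeholders, and in practice you would pass the dict to `register_task_definition` on an ECS client:

```python
# Skeleton of a register-task-definition request targeting Fargate.
# In practice: boto3.client("ecs").register_task_definition(**request)
request = {
    "family": "sample-fargate",                 # placeholder family name
    "requiresCompatibilities": ["FARGATE"],     # validate for Fargate use
    "networkMode": "awsvpc",                    # Fargate tasks always use awsvpc
    "cpu": "256",                               # task-level CPU units
    "memory": "512",                            # task-level memory in MiB
    "containerDefinitions": [
        {
            "name": "app",
            "image": "public.ecr.aws/docker/library/nginx:latest",
            "essential": True,
        }
    ],
}

print(request["networkMode"])  # awsvpc
```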

## Operating Systems and architectures
<a name="fargate-task-os"></a>

When you configure a task and container definition for AWS Fargate, you must specify the operating system that the container runs on. The following operating systems are supported for AWS Fargate:
+ Amazon Linux 2
**Note**  
Linux containers use only the kernel and kernel configuration from the host operating system. For example, the kernel configuration includes the `sysctl` system controls. A Linux container image can be made from a base image that contains the files and programs from any Linux distribution. If the CPU architecture matches, you can run containers from any Linux container image on any operating system.
+ Windows Server 2019 Full
+ Windows Server 2019 Core
+ Windows Server 2022 Full
+ Windows Server 2022 Core

When you run Windows containers on AWS Fargate, you must use the X86\_64 CPU architecture.

When you run Linux containers on AWS Fargate, you can use the X86\_64 CPU architecture, or the ARM64 architecture for your ARM-based applications. For more information, see [Amazon ECS task definitions for 64-bit ARM workloads](ecs-arm64.md).

## Task CPU and memory
<a name="fargate-tasks-size"></a>

Amazon ECS task definitions for AWS Fargate require that you specify CPU and memory at the task level. Most use cases are satisfied by only specifying these resources at the task level. The table below shows the valid combinations of task-level CPU and memory. You can specify memory values in the task definition as a string in MiB or GB. For example, you can specify a memory value either as `3072` in MiB or `3 GB` in GB. You can specify CPU values in the JSON file as a string in CPU units or virtual CPUs (vCPUs). For example, you can specify a CPU value either as `1024` in CPU units or `1 vCPU` in vCPUs.


|  CPU value  |  Memory value  |  Operating systems supported for AWS Fargate  | 
| --- | --- | --- | 
|  256 (.25 vCPU)  |  512 MiB, 1 GB, 2 GB  |  Linux  | 
|  512 (.5 vCPU)  |  1 GB, 2 GB, 3 GB, 4 GB  |  Linux  | 
|  1024 (1 vCPU)  |  2 GB, 3 GB, 4 GB, 5 GB, 6 GB, 7 GB, 8 GB  |  Linux, Windows  | 
|  2048 (2 vCPU)  |  Between 4 GB and 16 GB in 1 GB increments  |  Linux, Windows  | 
|  4096 (4 vCPU)  |  Between 8 GB and 30 GB in 1 GB increments  |  Linux, Windows  | 
|  8192 (8 vCPU)  This option requires Linux platform `1.4.0` or later.   |  Between 16 GB and 60 GB in 4 GB increments  |  Linux  | 
|  16384 (16 vCPU)  This option requires Linux platform `1.4.0` or later.   |  Between 32 GB and 120 GB in 8 GB increments  |  Linux  | 

## Task networking
<a name="fargate-tasks-services-networking"></a>

Amazon ECS tasks for AWS Fargate require the `awsvpc` network mode, which provides each task with an elastic network interface. When you run a task or create a service with this network mode, you must specify one or more subnets to attach the network interface and one or more security groups to apply to the network interface. 

If you are using public subnets, decide whether to provide a public IP address for the network interface. For a Fargate task in a public subnet to pull container images, a public IP address needs to be assigned to the task's elastic network interface, with a route to the internet or a NAT gateway that can route requests to the internet. For a Fargate task in a private subnet to pull container images, you need a NAT gateway in the subnet to route requests to the internet. When you host your container images in Amazon ECR, you can configure Amazon ECR to use an interface VPC endpoint. In this case, the task's private IPv4 address is used for the image pull. For more information about Amazon ECR interface endpoints, see [Amazon ECR interface VPC endpoints (AWS PrivateLink)](https://docs.aws.amazon.com/AmazonECR/latest/userguide/vpc-endpoints.html) in the *Amazon Elastic Container Registry User Guide*.

The following is an example of the `networkConfiguration` section for a Fargate service:

```
"networkConfiguration": { 
   "awsvpcConfiguration": { 
      "assignPublicIp": "ENABLED",
      "securityGroups": [ "sg-12345678" ],
      "subnets": [ "subnet-12345678" ]
   }
}
```

## Task resource limits
<a name="fargate-resource-limits"></a>

Amazon ECS task definitions for Linux containers on AWS Fargate support the `ulimits` parameter to define the resource limits to set for a container.

Amazon ECS task definitions for Windows on AWS Fargate do not support the `ulimits` parameter to define the resource limits to set for a container.

Amazon ECS tasks hosted on Fargate use the default resource limit values set by the operating system, with the exception of the `nofile` resource limit parameter. The `nofile` resource limit sets a restriction on the number of open files that a container can use. On Fargate, the default `nofile` soft limit is `65535` and the default hard limit is `65535`. You can set the values of both limits up to `1048576`.

The following is an example task definition snippet that shows how to define custom `nofile` soft and hard limits:

```
"ulimits": [
    {
       "name": "nofile",
       "softLimit": 2048,
       "hardLimit": 8192
    }
]
```

For more information on the other resource limits that can be adjusted, see [Resource limits](task_definition_parameters.md#container_definition_limits).
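As an illustrative local check (a hypothetical helper, not part of any AWS tooling), a `nofile` ulimit entry can be validated against the Fargate bounds described above before it goes into a task definition:

```python
FARGATE_NOFILE_MAX = 1048576  # upper bound for both soft and hard nofile limits

def build_nofile_ulimit(soft, hard):
    """Build a nofile ulimit entry, enforcing the Fargate bounds described above."""
    if soft > hard:
        raise ValueError("softLimit cannot exceed hardLimit")
    if hard > FARGATE_NOFILE_MAX:
        raise ValueError(f"hardLimit cannot exceed {FARGATE_NOFILE_MAX}")
    return {"name": "nofile", "softLimit": soft, "hardLimit": hard}

print(build_nofile_ulimit(2048, 8192))
# {'name': 'nofile', 'softLimit': 2048, 'hardLimit': 8192}
```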

## Logging
<a name="fargate-tasks-logging"></a>

### Event logging
<a name="fargate-event-logging"></a>

Amazon ECS logs the actions that it takes to EventBridge. You can use Amazon ECS events for EventBridge to receive near real-time notifications regarding the current state of your Amazon ECS clusters, services, and tasks. Additionally, you can automate actions to respond to these events. For more information, see [Automate responses to Amazon ECS errors using EventBridge](cloudwatch_event_stream.md).

### Task lifecycle logging
<a name="fargate-task-status"></a>

Tasks that run on Fargate publish timestamps to track the task through the states of the task lifecycle. You can see the timestamps in the task details in the AWS Management Console and by describing the task in the AWS CLI and SDKs. For example, you can use the timestamps to evaluate how much time the task spent downloading the container images and decide if you should optimize the container image size, or use Seekable OCI indexes. For more information about container image practices, see [Best practices for Amazon ECS container images](container-considerations.md).

### Application logging
<a name="fargate-app-logging"></a>

Amazon ECS task definitions for AWS Fargate support the `awslogs`, `splunk`, and `awsfirelens` log drivers for the log configuration.

The `awslogs` log driver configures your Fargate tasks to send log information to Amazon CloudWatch Logs. The following shows a snippet of a task definition where the `awslogs` log driver is configured:

```
"logConfiguration": { 
   "logDriver": "awslogs",
   "options": { 
      "awslogs-group" : "/ecs/fargate-task-definition",
      "awslogs-region": "us-east-1",
      "awslogs-stream-prefix": "ecs"
   }
}
```

For more information about using the `awslogs` log driver in a task definition to send your container logs to CloudWatch Logs, see [Send Amazon ECS logs to CloudWatch](using_awslogs.md).

For more information about the `awsfirelens` log driver in a task definition, see [Send Amazon ECS logs to an AWS service or AWS Partner](using_firelens.md).

For more information about using the `splunk` log driver in a task definition, see [`splunk` log driver](example_task_definitions.md#example_task_definition-splunk).

## Task storage
<a name="fargate-tasks-storage"></a>

For Amazon ECS tasks hosted on Fargate, the following storage types are supported:
+ Amazon EBS volumes provide cost-effective, durable, high-performance block storage for data-intensive containerized workloads. For more information, see [Use Amazon EBS volumes with Amazon ECS](ebs-volumes.md).
+ Amazon EFS volumes for persistent storage. For more information, see [Use Amazon EFS volumes with Amazon ECS](efs-volumes.md).
+ Bind mounts for ephemeral storage. For more information, see [Use bind mounts with Amazon ECS](bind-mounts.md).

## Lazy loading container images using Seekable OCI (SOCI)
<a name="fargate-tasks-soci-images"></a>

Amazon ECS tasks on Fargate that use Linux platform version `1.4.0` can use Seekable OCI (SOCI) to help start tasks faster. With SOCI, containers only spend a few seconds on the image pull before they can start, providing time for environment setup and application instantiation while the image is downloaded in the background. This is called *lazy loading*. When Fargate starts an Amazon ECS task, Fargate automatically detects if a SOCI index exists for an image in the task and starts the container without waiting for the entire image to be downloaded.

For containers that run without SOCI indexes, container images are downloaded completely before the container is started. This behavior is the same on all other platform versions of Fargate and on the Amazon ECS-optimized AMI on Amazon EC2 instances.

Seekable OCI (SOCI) is an open source technology developed by AWS that can launch containers faster by lazily loading the container image. SOCI works by creating an index (SOCI index) of the files within an existing container image. This index helps to launch containers faster, providing the capability to extract an individual file from a container image before downloading the entire image. The SOCI index must be stored as an artifact in the same repository as the image within the container registry. Use only SOCI indexes from trusted sources, because the index is the authoritative source for the contents of the image. For more information, see [Introducing Seekable OCI for lazy loading container images](https://aws.amazon.com/about-aws/whats-new/2022/09/introducing-seekable-oci-lazy-loading-container-images/).

Customers adopting SOCI can only use SOCI Index Manifest v2. Existing customers that previously used SOCI on Fargate can continue to use SOCI Index Manifest v1; however, we strongly advise that those customers migrate to SOCI Index Manifest v2. SOCI Index Manifest v2 creates an explicit relationship between container images and their SOCI indexes to ensure consistent deployments.
<a name="fargate-soci-considerations"></a>
**Considerations**  
If you want Fargate to use a SOCI index to lazily load container images in a task, consider the following:
+ Only tasks that run on Linux platform version `1.4.0` can use SOCI indexes. Tasks that run Windows containers on Fargate aren't supported.
+ Tasks that run on the X86_64 or ARM64 CPU architecture are supported.
+ Container images in the task definition must be stored in a compatible image registry. The following lists the compatible registries:
  + Amazon ECR private registries.
+ Only container images that use gzip compression, or that aren't compressed, are supported. Container images that use zstd compression aren't supported.
+ For SOCI Index Manifest v2, generating a SOCI index manifest modifies the container image manifest, because an annotation for the SOCI index is added. This results in a new container image digest. The contents of the container image's filesystem layers don't change.
+ For SOCI Index Manifest v2, if the container image is already stored in the container image repository when the SOCI index is generated, you need to repush the container image. Re-pushing a container image doesn't increase storage costs by duplicating the filesystem layers; it only uploads a new manifest file.
+ We recommend that you try lazy loading with container images greater than 250 MiB compressed in size. You are less likely to see a reduction in the time to load smaller images.
+ Because lazy loading can change how long your tasks take to start, you might need to change various timeouts like the health check grace period for Elastic Load Balancing.
+ To prevent a container image from being lazy loaded, repush the container image without a SOCI index attached.
<a name="create-soci"></a>
**Creating a Seekable OCI index**  
For a container image to be lazy loaded, it needs a SOCI index (a metadata file) created and stored in the container image repository alongside the container image. To create and push a SOCI index, you can use the open source [soci-snapshotter CLI tool](https://github.com/awslabs/soci-snapshotter) on GitHub. Or, you can deploy the CloudFormation AWS SOCI Index Builder. This is a serverless solution that automatically creates and pushes a SOCI index when a container image is pushed to Amazon ECR. For more information about the solution and the installation steps, see [CloudFormation AWS SOCI Index Builder](https://awslabs.github.io/cfn-ecr-aws-soci-index-builder/) on GitHub. The CloudFormation AWS SOCI Index Builder is a way to automate getting started with SOCI, while the open source soci-snapshotter tool has more flexibility around index generation and the ability to integrate index generation into your continuous integration and continuous delivery (CI/CD) pipelines.

**Note**  
For the SOCI index to be created for an image, the image must exist in the containerd image store on the computer running `soci-snapshotter`. If the image is in the Docker image store, the image can't be found.
<a name="verify-soci"></a>
**Verifying that a task used lazy loading**  
To verify that a task was lazily loaded using SOCI, check the task metadata endpoint from inside the task. When you query version 4 of the task metadata endpoint, there is a `Snapshotter` field in the default path for the container that you are querying from. Additionally, there are `Snapshotter` fields for each container in the `/task` path. The default value for this field is `overlayfs`, and the field is set to `soci` if SOCI is used. To verify that a container image has a SOCI Index Manifest v2 attached, you can retrieve the image index from Amazon ECR by using the AWS CLI.

```
IMAGE_REPOSITORY=r
IMAGE_TAG=latest

aws ecr batch-get-image \
    --repository-name=$IMAGE_REPOSITORY \
    --image-ids imageTag=$IMAGE_TAG \
    --query 'images[0].imageManifest' --output text | jq -r '.manifests[] | select(.artifactType=="application/vnd.amazon.soci.index.v2+json")'
```
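
To check the `Snapshotter` field from inside a running task, query the task metadata endpoint, which is exposed through the `ECS_CONTAINER_METADATA_URI_V4` environment variable. The following sketch parses an abbreviated, hypothetical sample of the response with `jq`; a real response contains many more fields.

```shell
# Inside the task you would run:
#   curl "${ECS_CONTAINER_METADATA_URI_V4}" | jq -r '.Snapshotter'
# Here we parse an abbreviated sample response instead (values are hypothetical).
cat <<'EOF' | jq -r '.Snapshotter'
{
  "Name": "app",
  "Snapshotter": "soci"
}
EOF
```

A value of `soci` indicates that the container image was lazy loaded; `overlayfs` indicates a full image download.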

To verify that a container image has a SOCI Index Manifest v1 attached, you can use the OCI Referrers API.

```
ACCOUNT_ID=111222333444
AWS_REGION=us-east-1
IMAGE_REPOSITORY=nginx-demo
IMAGE_TAG=latest
IMAGE_DIGEST=$(aws ecr describe-images --repository-name $IMAGE_REPOSITORY --image-ids imageTag=$IMAGE_TAG --query 'imageDetails[0].imageDigest' --output text)
ECR_PASSWORD=$(aws ecr get-login-password)

curl \
    --silent \
    --user AWS:$ECR_PASSWORD \
    https://$ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/v2/$IMAGE_REPOSITORY/referrers/$IMAGE_DIGEST?artifactType=application%2Fvnd.amazon.soci.index.v1%2Bjson | jq -r '.'
```

# Amazon ECS task definition differences for EC2 instances running Windows
<a name="windows_task_definitions"></a>

Tasks that run on EC2 Windows instances don't support all of the Amazon ECS task definition parameters that are available. Some parameters aren't supported at all, and others behave differently.

The following task definition parameters aren't supported for Amazon EC2 Windows task definitions:
+ `containerDefinitions`
  + `disableNetworking`
  + `dnsServers`
  + `dnsSearchDomains`
  + `extraHosts`
  + `links`
  + `linuxParameters`
  + `privileged`
  + `readonlyRootFilesystem`
  + `user`
  + `ulimits`
+ `volumes`
  + `dockerVolumeConfiguration`
+ `cpu`

  We recommend specifying container-level CPU for Windows containers.
+ `memory`

  We recommend specifying container-level memory for Windows containers.
+ `proxyConfiguration`
+ `ipcMode`
+ `pidMode`
+ `taskRoleArn`

  The IAM roles for tasks on EC2 Windows instances feature requires additional configuration, but much of this configuration is similar to configuring IAM roles for tasks on Linux container instances. For more information, see [Amazon EC2 Windows instance additional configuration](task-iam-roles.md#windows_task_IAM_roles).

# Creating an Amazon ECS task definition using the console
<a name="create-task-definition"></a>

You create a task definition so that you can define the application that you run as a task or service.

When you create a task definition for the external launch type, you need to create the task definition using the JSON editor and set the `requiresCompatibilities` parameter to `EXTERNAL`.
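
For example, a minimal task definition for the external launch type might start like the following hedged sketch; the family, container name, and image are placeholders.

```
{
    "requiresCompatibilities": [
        "EXTERNAL"
    ],
    "family": "external-sample",
    "containerDefinitions": [
        {
            "name": "app",
            "image": "public.ecr.aws/nginx/nginx:latest",
            "memory": 256,
            "essential": true
        }
    ]
}
```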

You can create a task definition by using the console experience, or by specifying a JSON file. You can have Amazon Q provide recommendations when you use the JSON editor. For more information, see [Using Amazon Q Developer to provide task definition recommendations in the Amazon ECS console](using-amazon-q.md).

## JSON validation
<a name="json-validate-for-create"></a>

The Amazon ECS console JSON editor validates the following in the JSON file:
+ The file is a valid JSON file.
+ The file doesn't contain any extraneous keys.
+ The file contains the `familyName` parameter.
+ There is at least one entry under `containerDefinitions`.

## CloudFormation stacks
<a name="cloudformation-stack"></a>

The following behavior applies to task definitions that were created in the new Amazon ECS console before January 12, 2023.

When you create a task definition, the Amazon ECS console automatically creates a CloudFormation stack that has a name that begins with `ECS-Console-V2-TaskDefinition-`. If you used the AWS CLI or an AWS SDK to deregister the task definition, then you must manually delete the task definition stack. For more information, see [Deleting a stack](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-console-delete-stack.html) in the *CloudFormation User Guide*.

Task definitions created after January 12, 2023, do not have a CloudFormation stack automatically created for them.

## Procedure
<a name="create-task-procedure"></a>

------
#### [ Amazon ECS console ]

1. Open the console at [https://console.aws.amazon.com/ecs/v2](https://console.aws.amazon.com/ecs/v2).

1. In the navigation pane, choose **Task definitions**.

1. On the **Create new task definition** menu, choose **Create new task definition**.

1. For **Task definition family**, specify a unique name for the task definition.

1. For **Launch type**, choose the application environment. The console default is **AWS Fargate** (which is serverless). Amazon ECS uses this value to perform validation to ensure that the task definition parameters are valid for the infrastructure type.

1. For **Operating system/Architecture**, choose the operating system and CPU architecture for the task. 

   To run your task on a 64-bit ARM architecture, choose **Linux/ARM64**. For more information, see [Runtime platform](task_definition_parameters.md#runtime-platform).

   To run your **AWS Fargate** tasks on Windows containers, choose a supported Windows operating system. For more information, see [Operating Systems and architectures](fargate-tasks-services.md#fargate-task-os).

1. For **Task size**, choose the CPU and memory values to reserve for the task. The CPU value is specified in vCPUs and the memory value is specified in GB.

   For tasks hosted on Fargate, the following table shows the valid CPU and memory combinations.    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-task-definition.html)

   For tasks that use EC2 instances, or external instances, the supported task CPU values are between 128 CPU units (0.125 vCPUs) and 196608 CPU units (192 vCPUs).

   To specify the memory value in GB, enter **GB** after the value. For example, to set the **Memory value** to 3GB, enter **3GB**.
**Note**  
Task-level CPU and memory parameters are ignored for Windows containers.

1. For **Network mode**, choose the network mode to use. The default is **awsvpc** mode. For more information, see [Amazon ECS task networking](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-networking.html).

   If you choose **bridge**, under **Port mappings**, for **Host port**, enter the port number on the container instance to reserve for your container.

1. (Optional) Expand the **Task roles** section to configure the AWS Identity and Access Management (IAM) roles for the task:

   1. For **Task role**, choose the IAM role to assign to the task. A task IAM role provides permissions for the containers in a task to call AWS API operations.

   1. For **Task execution role**, choose the role.

      For information about when to use a task execution role, see [Amazon ECS task execution IAM role](task_execution_IAM_role.md). If you don't need the role, choose **None**.

1. (Optional) Expand the **Task placement** section to add placement constraints. Task placement constraints allow you to filter the container instances used for the placement of your tasks using built-in or custom attributes.

1. (Optional) Expand the **Fault injection** section to enable fault injection. Fault injection lets you test how your application responds to certain impairment scenarios.

1. For each container to define in your task definition, complete the following steps.

   1. For **Name**, enter a name for the container.

   1. For **Image URI**, enter the image to use to start a container. Images in the Amazon ECR Public Gallery registry can be specified by using the Amazon ECR Public registry name only. For example, if `public.ecr.aws/ecs/amazon-ecs-agent:latest` is specified, the Amazon Linux container hosted on the Amazon ECR Public Gallery is used. For all other repositories, specify the repository by using either the `repository-url/image:tag` or `repository-url/image@digest` formats.

   1. If your image is in a private registry outside of Amazon ECR, under **Private registry**, turn on **Private registry authentication**. Then, in **Secrets Manager ARN or name**, enter the Amazon Resource Name (ARN) of the secret.

   1. For **Essential container**, if your task definition has two or more containers defined, you can specify whether the container should be considered essential. When a container is marked as **Essential**, if that container stops, then the task is stopped. Each task definition must contain at least one essential container.

   1. A port mapping allows the container to access ports on the host to send or receive traffic. Under **Port mappings**, for **Container port** and **Protocol**, choose the port mapping to use for the container. This applies to both the **awsvpc** and **bridge** network modes.

      Choose **Add more port mappings** to specify additional container port mappings.

   1. To give the container read-only access to its root file system, for **Read only root file system**, select **Read only**.

   1. (Optional) To define the container-level CPU, GPU, and memory limits that are different from task-level values, under **Resource allocation limits**, do the following:
      + For **CPU**, enter the number of CPU units that the Amazon ECS container agent reserves for the container.
      + For **GPU**, enter the number of GPU units for the container instance. 

        An Amazon EC2 instance with GPU support has 1 GPU unit for every GPU. For more information, see [Amazon ECS task definitions for GPU workloads](ecs-gpu.md).
      + For **Memory hard limit**, enter the amount of memory, in GB, to present to the container. If the container attempts to exceed the hard limit, the container stops.
      + The Docker 20.10.0 or later daemon reserves a minimum of 6 mebibytes (MiB) of memory for a container, so don't specify fewer than 6 MiB of memory for your containers.

        The Docker 19.03.13-ce or earlier daemon reserves a minimum of 4 MiB of memory for a container, so don't specify fewer than 4 MiB of memory for your containers.
      + For **Memory soft limit**, enter the soft limit (in GB) of memory to reserve for the container. 

        When system memory is under contention, Docker attempts to keep the container memory to this soft limit. If you don't specify task-level memory, you must specify a non-zero integer for one or both of **Memory hard limit** and **Memory soft limit**. If you specify both, **Memory hard limit** must be greater than **Memory soft limit**. 

        This feature is not supported on Windows containers.

   1. (Optional) Expand the **Environment variables** section to specify environment variables to inject into the container. You can specify environment variables either individually by using key-value pairs or in bulk by specifying an environment variable file that's hosted in an Amazon S3 bucket. For information about how to format an environment variable file, see [Pass an individual environment variable to an Amazon ECS container](taskdef-envfiles.md).

      When you specify an environment variable for secret storage, for **Key**, enter the secret name. Then, for **ValueFrom**, enter the full ARN of the Systems Manager Parameter Store parameter or the Secrets Manager secret.

   1. (Optional) Select the **Use log collection** option to specify a log configuration. For each available log driver, there are log driver options to specify. The default option sends container logs to Amazon CloudWatch Logs. The other log driver options are configured by using AWS FireLens. For more information, see [Send Amazon ECS logs to an AWS service or AWS Partner](using_firelens.md).

      The following describes each container log destination in more detail.
      + **Amazon CloudWatch** – Configure the task to send container logs to CloudWatch Logs. The default log driver options are provided, which create a CloudWatch log group on your behalf. To specify a different log group name, change the driver option values.
      + **Export logs to Splunk** – Configure the task to send container logs to the Splunk driver that sends the logs to a remote service. You must enter the URL to your Splunk web service. The Splunk token is specified as a secret option because it can be treated as sensitive data.
      + **Export logs to Amazon Data Firehose** – Configure the task to send container logs to Firehose. The default log driver options are provided, which send logs to a Firehose delivery stream. To specify a different delivery stream name, change the driver option values.
      + **Export logs to Amazon Kinesis Data Streams** – Configure the task to send container logs to Kinesis Data Streams. The default log driver options are provided, which send logs to a Kinesis Data Streams stream. To specify a different stream name, change the driver option values.
      + **Export logs to Amazon OpenSearch Service** – Configure the task to send container logs to an OpenSearch Service domain. The log driver options must be provided.
      + **Export logs to Amazon S3** – Configure the task to send container logs to an Amazon S3 bucket. The default log driver options are provided, but you must specify a valid Amazon S3 bucket name.

   1. (Optional) Configure additional container parameters.    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-task-definition.html)

   1. (Optional) Choose **Add more containers** to add additional containers to the task definition. 

1. (Optional) The **Storage** section is used to expand the amount of ephemeral storage for tasks hosted on Fargate. You can also use this section to add a data volume configuration for the task.

   1. To expand the available ephemeral storage beyond the default value of 20 gibibytes (GiB) for your Fargate tasks, for **Amount**, enter a value up to 200 GiB.

1. (Optional) To add a data volume configuration for the task definition, choose **Add volume**, and then follow these steps.

   1. For **Volume name**, enter a name for the data volume. The data volume name is used when creating a container mount point.

   1. For **Volume configuration**, select whether you want to configure your volume when creating the task definition or during deployment.
**Note**  
Volumes that can be configured when creating a task definition include Bind mount, Docker, Amazon EFS, and Amazon FSx for Windows File Server. Volumes that can be configured at deployment when running a task, or when creating or updating a service include Amazon EBS.

   1. For **Volume type**, select a volume type compatible with the configuration type that you selected, and then configure the volume type.    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-task-definition.html)

1. To add a volume from another container, choose **Add volume from**, and then configure the following:
   + For **Container**, choose the container.
   + For **Source**, choose the container that has the volume you want to mount.
   + For **Read only**, select whether the container has read-only access to the volume.

1. (Optional) To configure your application trace and metric collection settings by using the AWS Distro for OpenTelemetry integration, expand **Monitoring**, and then select **Use metric collection** to collect and send metrics for your tasks to either Amazon CloudWatch or Amazon Managed Service for Prometheus. When this option is selected, Amazon ECS creates an AWS Distro for OpenTelemetry container sidecar that is preconfigured to send the application metrics. For more information, see [Correlate Amazon ECS application performance using application metrics](metrics-data.md).

   1. When **Amazon CloudWatch** is selected, your custom application metrics are routed to CloudWatch as custom metrics. For more information, see [Exporting application metrics to Amazon CloudWatch](application-metrics-cloudwatch.md).
**Important**  
When exporting application metrics to Amazon CloudWatch, your task definition requires a task IAM role with the required permissions. For more information, see [Required IAM permissions for AWS Distro for OpenTelemetry integration with Amazon CloudWatch](application-metrics-cloudwatch.md#application-metrics-cloudwatch-iam). 

   1. When you select **Amazon Managed Service for Prometheus (Prometheus libraries instrumentation)**, your task-level CPU, memory, network, and storage metrics and your custom application metrics are routed to Amazon Managed Service for Prometheus. For **Workspace remote write endpoint**, enter the remote write endpoint URL for your Prometheus workspace. For **Scraping target**, enter the host and port the AWS Distro for OpenTelemetry collector can use to scrape for metrics data. For more information, see [Exporting application metrics to Amazon Managed Service for Prometheus](application-metrics-prometheus.md).
**Important**  
When exporting application metrics to Amazon Managed Service for Prometheus, your task definition requires a task IAM role with the required permissions. For more information, see [Required IAM permissions for AWS Distro for OpenTelemetry integration with Amazon Managed Service for Prometheus](application-metrics-prometheus.md#application-metrics-prometheus-iam). 

   1. When you select **Amazon Managed Service for Prometheus (OpenTelemetry instrumentation)**, your task-level CPU, memory, network, and storage metrics and your custom application metrics are routed to Amazon Managed Service for Prometheus. For **Workspace remote write endpoint**, enter the remote write endpoint URL for your Prometheus workspace. For more information, see [Exporting application metrics to Amazon Managed Service for Prometheus](application-metrics-prometheus.md).
**Important**  
When exporting application metrics to Amazon Managed Service for Prometheus, your task definition requires a task IAM role with the required permissions. For more information, see [Required IAM permissions for AWS Distro for OpenTelemetry integration with Amazon Managed Service for Prometheus](application-metrics-prometheus.md#application-metrics-prometheus-iam). 

1. (Optional) Expand the **Tags** section to add tags, as key-value pairs, to the task definition.
   + [Add a tag] Choose **Add tag**, and then do the following:
     + For **Key**, enter the key name.
     + For **Value**, enter the key value.
   + [Remove a tag] Next to the tag, choose **Remove tag**.

1. Choose **Create** to register the task definition.

------
#### [ Amazon ECS console JSON editor ]

1. Open the console at [https://console.aws.amazon.com/ecs/v2](https://console.aws.amazon.com/ecs/v2).

1. In the navigation pane, choose **Task definitions**.

1. On the **Create new task definition** menu, choose **Create new task definition with JSON**.

1. In the JSON editor box, edit your JSON file.

   The JSON must pass the validation checks specified in [JSON validation](#json-validate-for-create).

1. Choose **Create**.

------

# Using Amazon Q Developer to provide task definition recommendations in the Amazon ECS console
<a name="using-amazon-q"></a>

When you use the JSON editor in the Amazon ECS console to create a task definition, you can use Amazon Q Developer to provide AI-generated code suggestions for your task definitions. 

You can use the inline chat capability to ask Amazon Q Developer to generate, explain, or refactor task definition JSON with a conversational interface. You can inject generated suggestions at any point in the task definition and accept or reject the changes proposed. Amazon ECS has also enhanced the existing inline suggestions feature to utilize Amazon Q Developer.

When you create a task definition using the JSON editor, you can have Amazon Q Developer provide recommendations to help you create a task definition more quickly. You can have property-based inline suggestions, or use the Amazon Q Developer suggestions to autocomplete whole blocks of sample code.

You can use this feature in Regions where Amazon Q Developer is supported. For more information, see [AWS Services by Regions](https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/).

## Prerequisites
<a name="amazon-q-prerequisites"></a>

The following are prerequisites:
+ In addition to the console permissions, the user that creates the task definition in the console must have the `codewhisperer:GenerateRecommendations` permission for the recommendations and `q:SendMessage` to use inline chat. For more information, see [Permissions required for using Amazon Q Developer to provide recommendations in the console](console-permissions.md#amazon-q-permission).

## Procedure
<a name="amazon-q-procedure"></a>

1. Open the console at [https://console.aws.amazon.com/ecs/v2](https://console.aws.amazon.com/ecs/v2).

1. In the navigation pane, choose **Task definitions**.

1. On the **Create new task definition** menu, choose **Create new task definition with JSON**.

   The **Create task definition** page opens.

   The console provides the following default template.

   ```
   {
       "requiresCompatibilities": [
           "FARGATE"
       ],
       "family": "",
       "containerDefinitions": [
           {
               "name": "",
               "image": "",
               "essential": true
           }
       ],
       "volumes": [],
       "networkMode": "awsvpc",
       "memory": "3 GB",
       "cpu": "1 vCPU",
       "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole"
   }
   ```

1. In the Amazon Q inline suggestions pop-up, choose **Allow**.

   If you dismiss the pop-up, you can enable Amazon Q under the gear icon.

1. In the JSON editor box, edit the JSON document.

   To have Amazon Q create and populate the parameters, enter a comment describing what you want to add. In the example below, the comment causes Amazon Q to generate the container definition that follows it.

   ```
   {
       "requiresCompatibilities": [
           "FARGATE"
       ],
       "family": "",
       "containerDefinitions": [
           {
               "name": "",
               "image": "",
               "essential": true
           },
           // add an nginx container using an image from Public ECR, with port 80 open, and send logs to CloudWatch log group "myproxy"
           {
               "name": "nginx",
               "image": "public.ecr.aws/nginx/nginx:latest",
               "essential": true,
               "portMappings": [
                   {
                       "containerPort": 80,
                       "hostPort": 80,
                       "protocol": "tcp"
                   }
               ],
               "logConfiguration": {
                   "logDriver": "awslogs",
                   "options": {
                       "awslogs-group": "myproxy",
                       "awslogs-region": "us-east-1",
                       "awslogs-stream-prefix": "nginx"
                   }
               }
           }
           
       ],
       "volumes": [],
       "networkMode": "awsvpc",
       "memory": "3 GB",
       "cpu": "1 vCPU",
       "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole"
   }
   ```

1. To use the inline chat feature, highlight the lines, and then choose the star icon.

   The Amazon Q Developer chat box displays.

   Enter your request.

   Amazon Q Developer generates a suggestion, and then updates the JSON.

   To accept the changes, choose **Accept All**.

1. Choose **Create**.

# Updating an Amazon ECS task definition using the console
<a name="update-task-definition-console-v2"></a>

A *task definition revision* is a copy of the current task definition with the new parameter values replacing the existing ones. All parameters that you don't modify remain unchanged in the new revision.

To update a task definition, create a task definition revision. If the task definition is used in a service, you must update that service to use the updated task definition.

When you create a revision, you can modify the following container properties and environment properties.
+ Container image URI
+ Port mappings
+ Environment variables
+ Infrastructure requirements
+ Task size
+ Container size
+ Task role
+ Task execution role
+ Volumes and container mount points
+ Private registry

You can have Amazon Q provide recommendations when you use the JSON editor. For more information, see [Using Amazon Q Developer to provide task definition recommendations in the Amazon ECS console](using-amazon-q.md).

## JSON validation
<a name="json-validate-for-update"></a>

The Amazon ECS console JSON editor validates the following in the JSON file:
+ The file is a valid JSON file
+ The file does not contain any extraneous keys
+ The file contains the `familyName` parameter
+ There is at least one entry under `containerDefinitions`

## Procedure
<a name="update-task-definition-console-v2-procedure"></a>

------
#### [ Amazon ECS console ]

1. Open the console at [https://console.aws.amazon.com/ecs/v2](https://console.aws.amazon.com/ecs/v2).

1. From the navigation bar, choose the Region that contains your task definition.

1. In the navigation pane, choose **Task definitions**.

1. Choose the task definition.

1. Select the task definition revision, and then choose **Create new revision**, **Create new revision**.

1. On the **Create new task definition revision** page, make changes. For example, to change the existing container definitions (such as the container image, memory limits, or port mappings), select the container, and then make the changes. You can update the task definition compatibility to one of **AWS Fargate**, **Managed Instances**, or **Amazon EC2 instances**.

1. Verify the information, and then choose **Update**.

1. If your task definition is used in a service, update your service with the updated task definition. For more information, see [Updating an Amazon ECS service](update-service-console-v2.md).

------
#### [ Amazon ECS console JSON editor ]

1. Open the console at [https://console.aws.amazon.com/ecs/v2](https://console.aws.amazon.com/ecs/v2).

1. In the navigation pane, choose **Task definitions**.

1. Choose **Create new revision**, **Create new revision with JSON**.

1. In the JSON editor box, edit your JSON file.

   The JSON must pass the validation checks specified in [JSON validation](#json-validate-for-update).

1. Choose **Create**.

------

# Deregistering an Amazon ECS task definition revision using the console
<a name="deregister-task-definition-v2"></a>

You can deregister a task definition revision so that it no longer displays in your `ListTaskDefinitions` API calls or in the console when you want to run a task or update a service.

When you deregister a task definition revision, it is immediately marked as `INACTIVE`. Existing tasks and services that reference an `INACTIVE` task definition revision continue to run without disruption. Existing services that reference an `INACTIVE` task definition revision can still scale up or down by modifying the service's desired count.

You can't use an `INACTIVE` task definition revision to run new tasks or create new services. You also can't update an existing service to reference an `INACTIVE` task definition revision (even though there may be up to a 10-minute window following deregistration where these restrictions have not yet taken effect).

**Note**  
When you deregister all revisions in a task family, the task definition family is moved to the `INACTIVE` list. Adding a new revision of an `INACTIVE` task definition moves the task definition family back to the `ACTIVE` list.  
At this time, `INACTIVE` task definition revisions remain discoverable in your account indefinitely. However, this behavior is subject to change in the future. Therefore, you should not rely on `INACTIVE` task definition revisions persisting beyond the lifecycle of any associated tasks and services.

## CloudFormation stacks
<a name="cloudformation-stack"></a>

The following behavior applies to task definitions that were created in the new Amazon ECS console before January 12, 2023.

When you create a task definition, the Amazon ECS console automatically creates a CloudFormation stack that has a name that begins with `ECS-Console-V2-TaskDefinition-`. If you used the AWS CLI or an AWS SDK to deregister the task definition, then you must manually delete the task definition stack. For more information, see [Deleting a stack](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-console-delete-stack.html) in the *CloudFormation User Guide*.

Task definitions created after January 12, 2023, do not have a CloudFormation stack automatically created for them.

## Procedure
<a name="deregister-task-definition-v2-procedure"></a>

**To deregister a new task definition (Amazon ECS console)**

1. Open the console at [https://console.aws.amazon.com/ecs/v2](https://console.aws.amazon.com/ecs/v2).

1. From the navigation bar, choose the Region that contains your task definition.

1. In the navigation pane, choose **Task definitions**.

1. On the **Task definitions** page, choose the task definition family that contains one or more revisions that you want to deregister.

1. On the **Task definition name** page, select the revisions to deregister, and then choose **Actions**, **Deregister**.

1. Verify the information in the **Deregister** window, and then choose **Deregister** to finish.

# Deleting an Amazon ECS task definition revision using the console
<a name="delete-task-definition-v2"></a>

When you no longer need a specific task definition revision in Amazon ECS, you can delete it.

When you delete a task definition revision, it immediately transitions from the `INACTIVE` state to the `DELETE_IN_PROGRESS` state. Existing tasks and services that reference a `DELETE_IN_PROGRESS` task definition revision continue to run without disruption. 

You can't use a `DELETE_IN_PROGRESS` task definition revision to run new tasks or create new services. You also can't update an existing service to reference a `DELETE_IN_PROGRESS` task definition revision.

When you delete all `INACTIVE` task definition revisions, the task definition name is not displayed in the console and not returned in the API. If a task definition revision is in the `DELETE_IN_PROGRESS` state, the task definition name is displayed in the console and returned in the API. The task definition name is retained by Amazon ECS and the revision is incremented the next time you create a task definition with that name.

## Amazon ECS resources that can block a deletion
<a name="resource-block-delete"></a>

A task definition deletion request will not complete when there are any Amazon ECS resources that depend on the task definition revision. The following resources might prevent a task definition from being deleted:
+ Amazon ECS standalone tasks - The task definition is required in order for the task to remain healthy.
+ Amazon ECS service tasks - The task definition is required in order for the task to remain healthy.
+ Amazon ECS service deployments and task sets - The task definition is required when a scaling event is initiated for an Amazon ECS deployment or task set.

If your task definition remains in the `DELETE_IN_PROGRESS` state, you can use the console or the AWS CLI to identify, and then stop, the resources that block the task definition deletion.

### Task definition deletion after the blocked resource is removed
<a name="resource-block-remove"></a>

The following rules apply after you remove the resources that block the task definition deletion:
+ Amazon ECS tasks - The task definition deletion can take up to 1 hour to complete after the task is stopped.
+ Amazon ECS service deployments and task sets - The task definition deletion can take up to 24 hours to complete after the deployment or task set is deleted.

## Procedure
<a name="delete-task-def-procedure"></a>

**To delete task definitions (Amazon ECS console)**

You must deregister a task definition revision before you delete it. For more information, see [Deregistering an Amazon ECS task definition revision using the console](deregister-task-definition-v2.md).

1. Open the console at [https://console.aws.amazon.com/ecs/v2](https://console.aws.amazon.com/ecs/v2).

1. From the navigation bar, choose the Region that contains your task definition.

1. In the navigation pane, choose **Task definitions**.

1. On the **Task definitions** page, choose the task definition family that contains one or more revisions that you want to delete.

1. On the **Task definition name** page, select the revisions to delete, and then choose **Actions**, **Delete**.

   If **Delete** is unavailable, you must deregister the task definition.

1. Verify the information in the **Delete** confirmation box, and then choose **Delete** to finish.

# Amazon ECS task definition use cases
<a name="use-cases"></a>

Learn more about how to write task definitions for various AWS services and features.

Depending on your workload, certain task definition parameters must be set. Also, for EC2, you must choose instance types that are engineered for the workload.

**Topics**
+ [

# Amazon ECS task definitions for GPU workloads
](ecs-gpu.md)
+ [

# Amazon ECS task definitions for video transcoding workloads
](ecs-vt1.md)
+ [

# Amazon ECS task definitions for AWS Neuron machine learning workloads
](ecs-inference.md)
+ [

# Amazon ECS task definitions for deep learning instances
](ecs-dl1.md)
+ [

# Amazon ECS task definitions for 64-bit ARM workloads
](ecs-arm64.md)
+ [

# Send Amazon ECS logs to CloudWatch
](using_awslogs.md)
+ [

# Send Amazon ECS logs to an AWS service or AWS Partner
](using_firelens.md)
+ [

# Using non-AWS container images in Amazon ECS
](private-auth.md)
+ [

# Restart individual containers in Amazon ECS tasks with container restart policies
](container-restart-policy.md)
+ [

# Pass sensitive data to an Amazon ECS container
](specifying-sensitive-data.md)

# Amazon ECS task definitions for GPU workloads
<a name="ecs-gpu"></a>

Amazon ECS supports workloads that use GPUs when you create clusters with container instances that support GPUs. Amazon EC2 GPU-based container instances that use the p2, p3, p4d, p5, g3, g4, g5, g6, and g6e instance types provide access to NVIDIA GPUs. For more information, see [Linux Accelerated Computing Instances](https://docs.aws.amazon.com/ec2/latest/instancetypes/ac.html) in the *Amazon EC2 Instance Types guide*.

Amazon ECS provides a GPU-optimized AMI that comes with pre-configured NVIDIA kernel drivers and a Docker GPU runtime. For more information, see [Amazon ECS-optimized Linux AMIs](ecs-optimized_AMI.md).

You can designate a number of GPUs in your task definition for task placement consideration at the container level. Amazon ECS schedules tasks to available GPU-enabled container instances and pins physical GPUs to the appropriate containers for optimal performance. 

The following Amazon EC2 GPU-based instance types are supported. For more information, see [Amazon EC2 P2 Instances](https://aws.amazon.com/ec2/instance-types/p2/), [Amazon EC2 P3 Instances](https://aws.amazon.com/ec2/instance-types/p3/), [Amazon EC2 P4d Instances](https://aws.amazon.com/ec2/instance-types/p4/), [Amazon EC2 P5 Instances](https://aws.amazon.com/ec2/instance-types/p5/), [Amazon EC2 G3 Instances](https://aws.amazon.com/ec2/instance-types/g3/), [Amazon EC2 G4 Instances](https://aws.amazon.com/ec2/instance-types/g4/), [Amazon EC2 G5 Instances](https://aws.amazon.com/ec2/instance-types/g5/), [Amazon EC2 G6 Instances](https://aws.amazon.com/ec2/instance-types/g6/), and [Amazon EC2 G6e Instances](https://aws.amazon.com/ec2/instance-types/g6e/).


|  Instance type  |  GPUs  |  GPU memory (GiB)  |  vCPUs  |  Memory (GiB)  | 
| --- | --- | --- | --- | --- | 
|  p3.2xlarge  |  1  |  16  |  8  |  61  | 
|  p3.8xlarge  |  4  |  64  |  32  |  244  | 
|  p3.16xlarge  |  8  |  128  |  64  |  488  | 
|  p3dn.24xlarge  |  8  |  256  |  96  |  768  | 
|  p4d.24xlarge  | 8 | 320 | 96 | 1152 | 
| p5.48xlarge | 8 | 640 | 192 | 2048 | 
|  g3s.xlarge  |  1  |  8  |  4  |  30.5  | 
|  g3.4xlarge  |  1  |  8  |  16  |  122  | 
|  g3.8xlarge  |  2  |  16  |  32  |  244  | 
|  g3.16xlarge  |  4  |  32  |  64  |  488  | 
|  g4dn.xlarge  |  1  |  16  |  4  |  16  | 
|  g4dn.2xlarge  |  1  |  16  |  8  |  32  | 
|  g4dn.4xlarge  |  1  |  16  |  16  |  64  | 
|  g4dn.8xlarge  |  1  |  16  |  32  |  128  | 
|  g4dn.12xlarge  |  4  |  64  |  48  |  192  | 
|  g4dn.16xlarge  |  1  |  16  |  64  |  256  | 
|  g5.xlarge  |  1  |  24  |  4  |  16  | 
|  g5.2xlarge  |  1  |  24  |  8  |  32  | 
|  g5.4xlarge  |  1  |  24  |  16  |  64  | 
|  g5.8xlarge  |  1  |  24  |  32  |  128  | 
|  g5.16xlarge  |  1  |  24  |  64  |  256  | 
|  g5.12xlarge  |  4  |  96  |  48  |  192  | 
|  g5.24xlarge  |  4  |  96  |  96  |  384  | 
|  g5.48xlarge  |  8  |  192  |  192  |  768  | 
| g6.xlarge | 1 | 24 | 4 | 16 | 
| g6.2xlarge | 1 | 24 | 8 | 32 | 
| g6.4xlarge | 1 | 24 | 16 | 64 | 
| g6.8xlarge | 1 | 24 | 32 | 128 | 
| g6.16xlarge | 1 | 24 | 64 | 256 | 
| g6.12xlarge | 4 | 96 | 48 | 192 | 
| g6.24xlarge | 4 | 96 | 96 | 384 | 
| g6.48xlarge | 8 | 192 | 192 | 768 | 
| g6.metal | 8 | 192 | 192 | 768 | 
| gr6.4xlarge | 1 | 24 | 16 | 128 | 
| g6e.xlarge | 1 | 48 | 4 | 32 | 
| g6e.2xlarge | 1 | 48 | 8 | 64 | 
| g6e.4xlarge | 1 | 48 | 16 | 128 | 
| g6e.8xlarge | 1 | 48 | 32 | 256 | 
| g6e.16xlarge | 1 | 48 | 64 | 512 | 
| g6e.12xlarge | 4 | 192 | 48 | 384 | 
| g6e.24xlarge | 4 | 192 | 96 | 768 | 
| g6e.48xlarge | 8 | 384 | 192 | 1536 | 
| gr6.8xlarge | 1 | 24 | 32 | 256 | 

You can retrieve the Amazon Machine Image (AMI) ID for Amazon ECS-optimized AMIs by querying the AWS Systems Manager Parameter Store API. Using this parameter, you don't need to manually look up Amazon ECS-optimized AMI IDs. For more information about the Systems Manager Parameter Store API, see [GetParameter](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_GetParameter.html). The IAM principal that you use must have the `ssm:GetParameter` permission to retrieve the Amazon ECS-optimized AMI metadata.

```
aws ssm get-parameters --names /aws/service/ecs/optimized-ami/amazon-linux-2/gpu/recommended --region us-east-1
```

# Use GPUs with Amazon ECS Managed Instances
<a name="managed-instances-gpu"></a>

Amazon ECS Managed Instances supports GPU-accelerated computing for workloads such as machine learning, high-performance computing, and video processing through the following Amazon EC2 instance types. For more information about instance types supported by Amazon ECS Managed Instances, see [Amazon ECS Managed Instances instance types](managed-instances-instance-types.md).

The following is a subset of GPU-based instance types supported on Amazon ECS Managed Instances:
+ `g4dn`: Powered by NVIDIA T4 GPUs, suitable for machine learning inference, computer vision, and graphics-intensive applications.
+ `g5`: Powered by NVIDIA A10G GPUs, offering higher performance for graphics-intensive applications and machine learning workloads.
+ `p3`: Powered by NVIDIA V100 GPUs, designed for high-performance computing and deep learning training.
+ `p4d`: Powered by NVIDIA A100 GPUs, offering the highest performance for machine learning training and high-performance computing.

When you use GPU-enabled instance types with Amazon ECS Managed Instances, the NVIDIA drivers and CUDA toolkit are pre-installed on the instance, making it easier to run GPU-accelerated workloads.

## GPU-enabled instance selection
<a name="managed-instances-gpu-instance-selection"></a>

To select GPU-enabled instance types for your Amazon ECS Managed Instances workloads, use the `instanceRequirements` object in the launch template of the capacity provider. The following snippet shows the attributes that can be used for selecting GPU-enabled instances.

```
{
  "instanceRequirements": {
    "acceleratorTypes": ["gpu"],
    "acceleratorCount": {
      "min": 1
    },
    "acceleratorManufacturers": ["nvidia"]
  }
}
```

The following snippet shows the attributes that can be used to specify GPU-enabled instance types in the launch template.

```
{
  "instanceRequirements": {
    "allowedInstanceTypes": ["g4dn.xlarge", "p4de.24xlarge"]
  }
}
```

## GPU-enabled container images
<a name="managed-instances-gpu-container-images"></a>

To use GPUs in your containers, you need to use container images that contain the necessary GPU libraries and tools. NVIDIA provides several pre-built container images that you can use as a base for your GPU workloads, including the following:
+ `nvidia/cuda`: Base images with the CUDA toolkit for GPU computing.
+ `tensorflow/tensorflow:latest-gpu`: TensorFlow with GPU support.
+ `pytorch/pytorch:latest-cuda`: PyTorch with GPU support.

For an example task definition for Amazon ECS on Amazon ECS Managed Instances that involves the use of GPUs, see [Specifying GPUs in an Amazon ECS task definition](ecs-gpu-specifying.md).

## Considerations
<a name="gpu-considerations"></a>

**Note**  
Support for the g2 instance family has been deprecated.  
The p2 instance family is only supported on versions earlier than `20230912` of the Amazon ECS GPU-optimized AMI. If you need to continue to use p2 instances, see [What to do if you need a P2 instance](#p2-instance).  
In-place updates of the NVIDIA/CUDA drivers on both of these instance families can cause GPU workload failures.

We recommend that you consider the following before you begin working with GPUs on Amazon ECS.
+ Your clusters can contain a mix of GPU and non-GPU container instances.
+ You can run GPU workloads on external instances. When registering an external instance with your cluster, make sure that the `--enable-gpu` flag is included in the installation script. For more information, see [Registering an external instance to an Amazon ECS cluster](ecs-anywhere-registration.md).
+ You must set `ECS_ENABLE_GPU_SUPPORT` to `true` in your agent configuration file. For more information, see [Amazon ECS container agent configuration](ecs-agent-config.md).
+ When running a task or creating a service, you can use instance type attributes when you configure task placement constraints to determine the container instances the task is to be launched on. By doing this, you can more effectively use your resources. For more information, see [How Amazon ECS places tasks on container instances](task-placement.md).

  The following example launches a task on a `g4dn.xlarge` container instance in your default cluster.

  ```
  aws ecs run-task --cluster default --task-definition ecs-gpu-task-def \
       --placement-constraints type=memberOf,expression="attribute:ecs.instance-type ==  g4dn.xlarge" --region us-east-2
  ```
+ For each container that has a GPU resource requirement that's specified in the container definition, Amazon ECS sets the container runtime to be the NVIDIA container runtime.
+ The NVIDIA container runtime requires some environment variables to be set in the container to function properly. For a list of these environment variables, see [Specialized Configurations with Docker](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/docker-specialized.html?highlight=environment%20variable). Amazon ECS sets the `NVIDIA_VISIBLE_DEVICES` environment variable value to be a list of the GPU device IDs that Amazon ECS assigns to the container. For the other required environment variables, Amazon ECS doesn't set them. So, make sure that your container image sets them or they're set in the container definition.
+ The p5 instance type family is supported on version `20230929` and later of the Amazon ECS GPU-optimized AMI. 
+ The g4 instance type family is supported on version `20230913` and later of the Amazon ECS GPU-optimized AMI. For more information, see [Amazon ECS-optimized Linux AMIs](ecs-optimized_AMI.md). It's not supported in the Create Cluster workflow in the Amazon ECS console. To use these instance types, you must use the Amazon EC2 console, AWS CLI, or API, and then manually register the instances to your cluster.
+ The p4d.24xlarge instance type only works with CUDA 11 or later.
+ The Amazon ECS GPU-optimized AMI has IPv6 enabled, which causes issues when using `yum`. This can be resolved by configuring `yum` to use IPv4 with the following command.

  ```
  echo "ip_resolve=4" >> /etc/yum.conf
  ```
+  When you build a container image that doesn't use the NVIDIA/CUDA base images, you must set the `NVIDIA_DRIVER_CAPABILITIES` container runtime variable to one of the following values:
  + `utility,compute`
  + `all`

  For information about how to set the variable, see [Controlling the NVIDIA Container Runtime](https://sarus.readthedocs.io/en/stable/user/custom-cuda-images.html#controlling-the-nvidia-container-runtime) on the NVIDIA website.
+ GPUs are not supported on Windows containers.
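As a sketch of the `NVIDIA_DRIVER_CAPABILITIES` requirement above, a container definition for an image that doesn't use the NVIDIA/CUDA base images might set the variable as follows. The container name and image are illustrative placeholders:

```
{
  "name": "gpu-app",
  "image": "my-custom-gpu-image:latest",
  "resourceRequirements": [
    {
      "type": "GPU",
      "value": "1"
    }
  ],
  "environment": [
    {
      "name": "NVIDIA_DRIVER_CAPABILITIES",
      "value": "utility,compute"
    }
  ]
}
```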

# Launch a GPU container instance for Amazon ECS
<a name="gpu-launch"></a>

To use a GPU instance with Amazon ECS on Amazon EC2, you create a launch template and a user data file, and then launch the instance.

You can then run a task that uses a task definition configured for GPU.

## Use a launch template
<a name="gpu-launch-template"></a>

You can create a launch template.
+ Create a launch template that uses the Amazon ECS-optimized GPU AMI ID for the AMI. For information about how to create a launch template, see [Create a new launch template using parameters you define](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/create-launch-template.html#create-launch-template-define-parameters) in the *Amazon EC2 User Guide*.

  Use the Amazon ECS-optimized GPU AMI ID for the **Amazon Machine image**. For information about how to specify the AMI ID with the Systems Manager parameter, see [Specify a Systems Manager parameter in a launch template](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/create-launch-template.html#use-an-ssm-parameter-instead-of-an-ami-id) in the *Amazon EC2 User Guide*.

  Add the following to the **User data** in the launch template. Replace *cluster-name* with the name of your cluster.

  ```
  #!/bin/bash
  echo ECS_CLUSTER=cluster-name >> /etc/ecs/ecs.config;
  echo ECS_ENABLE_GPU_SUPPORT=true >> /etc/ecs/ecs.config
  ```

## Use the AWS CLI
<a name="gpu-launch-cli"></a>

You can use the AWS CLI to launch the container instance.

1. Create a file that's called `userdata.toml`. This file is used for the instance user data. Replace *cluster-name* with the name of your cluster.

   ```
   #!/bin/bash
   echo ECS_CLUSTER=cluster-name >> /etc/ecs/ecs.config;
   echo ECS_ENABLE_GPU_SUPPORT=true >> /etc/ecs/ecs.config
   ```

1. Run the following command to get the GPU AMI ID. You use this in the following step.

   ```
   aws ssm get-parameters --names /aws/service/ecs/optimized-ami/amazon-linux-2/gpu/recommended --region us-east-1
   ```

1. Run the following command to launch the GPU instance. Remember to replace the following parameters:
   + Replace *subnet* with the ID of the private or public subnet that your instance will launch in.
   + Replace *gpu_ami* with the AMI ID from the previous step.
   + Replace *t3.large* with the instance type that you want to use.
   + Replace *region* with the Region code.

   ```
   aws ec2 run-instances --key-name ecs-gpu-example \
      --subnet-id subnet \
      --image-id gpu_ami \
      --instance-type t3.large \
      --region region \
      --tag-specifications 'ResourceType=instance,Tags=[{Key=GPU,Value=example}]' \
      --user-data file://userdata.toml \
      --iam-instance-profile Name=ecsInstanceRole
   ```

1. Run the following command to verify that the container instance is registered to the cluster. When you run this command, remember to replace the following parameters:
   + Replace *cluster-name* with your cluster name.
   + Replace *region* with your Region code.

   ```
   aws ecs list-container-instances --cluster cluster-name --region region
   ```

# Specifying GPUs in an Amazon ECS task definition
<a name="ecs-gpu-specifying"></a>

To use the GPUs on a container instance and the Docker GPU runtime, make sure that you designate the number of GPUs your container requires in the task definition. As containers that support GPUs are placed, the Amazon ECS container agent pins the desired number of physical GPUs to the appropriate container. The number of GPUs reserved for all containers in a task cannot exceed the number of available GPUs on the container instance the task is launched on. For more information, see [Creating an Amazon ECS task definition using the console](create-task-definition.md).

**Important**  
If your GPU requirements aren't specified in the task definition, the task uses the default Docker runtime.

The following shows the JSON format for the GPU requirements in a task definition:

```
{
  "containerDefinitions": [
     {
        ...
        "resourceRequirements" : [
            {
               "type" : "GPU",
               "value" : "2"
            }
        ]
     }
  ],
  ...
}
```

The following example demonstrates the syntax for a Docker container that specifies a GPU requirement. This container uses two GPUs, runs the `nvidia-smi` utility, and then exits.

```
{
  "containerDefinitions": [
    {
      "memory": 80,
      "essential": true,
      "name": "gpu",
      "image": "nvidia/cuda:11.0.3-base",
      "resourceRequirements": [
         {
           "type":"GPU",
           "value": "2"
         }
      ],
      "command": [
        "sh",
        "-c",
        "nvidia-smi"
      ],
      "cpu": 100
    }
  ],
  "family": "example-ecs-gpu"
}
```

The following example task definition shows a TensorFlow container that prints the number of available GPUs. The task runs on Amazon ECS Managed Instances, requires one GPU, and uses a `g4dn.xlarge` instance.

```
{
  "family": "tensorflow-gpu",
  "networkMode": "awsvpc",
  "executionRoleArn": "arn:aws:iam::account-id:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "tensorflow",
      "image": "tensorflow/tensorflow:latest-gpu",
      "essential": true,
      "command": [
        "python",
        "-c",
        "import tensorflow as tf; print('Num GPUs Available: ', len(tf.config.list_physical_devices('GPU')))"
      ],
      "resourceRequirements": [
        {
          "type": "GPU",
          "value": "1"
        }
      ],
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "/ecs/tensorflow-gpu",
          "awslogs-region": "region",
          "awslogs-stream-prefix": "ecs"
        }
      }
    }
  ],
  "requiresCompatibilities": [
    "MANAGED_INSTANCES"
  ],
  "cpu": "4096",
  "memory": "8192"
}
```

## Share GPUs
<a name="share-gpu"></a>

When you want to share GPUs, you need to configure the following.

1. Remove GPU resource requirements from your task definitions so that Amazon ECS does not reserve any GPUs that should be shared.

1. Add the following user data to your instances when you want to share GPUs. This makes `nvidia` the default Docker container runtime on the container instance so that all Amazon ECS containers can use the GPUs. For more information, see [Run commands when you launch an EC2 instance with user data input](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html) in the *Amazon EC2 User Guide*.

   ```
   #!/bin/bash
   sudo rm /etc/sysconfig/docker
   echo DAEMON_MAXFILES=1048576 | sudo tee -a /etc/sysconfig/docker
   echo OPTIONS="--default-ulimit nofile=32768:65536 --default-runtime nvidia" | sudo tee -a /etc/sysconfig/docker
   echo DAEMON_PIDFILE_TIMEOUT=10 | sudo tee -a /etc/sysconfig/docker
   sudo systemctl restart docker
   ```

1. Set the `NVIDIA_VISIBLE_DEVICES` environment variable on your container. You can do this by specifying the environment variable in your task definition. For information on the valid values, see [GPU Enumeration](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/docker-specialized.html#gpu-enumeration) on the NVIDIA documentation site.
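For example, the environment variable from the last step can be set in a container definition as follows. This sketch uses the value `all`, which exposes every GPU on the instance to the container:

```
"environment": [
  {
    "name": "NVIDIA_VISIBLE_DEVICES",
    "value": "all"
  }
]
```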

## What to do if you need a P2 instance
<a name="p2-instance"></a>

If you need to use P2 instances, you can use one of the following options to continue using them.

You must modify the instance user data for both options. For more information, see [Run commands when you launch an EC2 instance with user data input](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html) in the *Amazon EC2 User Guide*.

**Use the last supported GPU-optimized AMI**

You can use the `20230906` version of the GPU-optimized AMI, and add the following to the instance user data.

Replace *cluster-name* with the name of your cluster.

```
#!/bin/bash
echo "exclude=*nvidia* *cuda*" >> /etc/yum.conf
echo "ECS_CLUSTER=cluster-name" >> /etc/ecs/ecs.config
```

**Use the latest GPU-optimized AMI, and update the user data**

You can add the following to the instance user data. This uninstalls the NVIDIA 535/CUDA 12.2 drivers, and then installs the NVIDIA 470/CUDA 11.4 drivers and pins the version.

```
#!/bin/bash
yum remove -y cuda-toolkit* nvidia-driver-latest-dkms*
tmpfile=$(mktemp)
cat >$tmpfile <<EOF
[amzn2-nvidia]
name=Amazon Linux 2 Nvidia repository
mirrorlist=\$awsproto://\$amazonlinux.\$awsregion.\$awsdomain/\$releasever/amzn2-nvidia/latest/\$basearch/mirror.list
priority=20
gpgcheck=1
gpgkey=https://developer.download.nvidia.com/compute/cuda/repos/rhel7/x86_64/7fa2af80.pub
enabled=1
exclude=libglvnd-*
EOF

mv $tmpfile /etc/yum.repos.d/amzn2-nvidia-tmp.repo
yum install -y system-release-nvidia cuda-toolkit-11-4 nvidia-driver-latest-dkms-470.182.03
yum install -y libnvidia-container-1.4.0 libnvidia-container-tools-1.4.0 nvidia-container-runtime-hook-1.4.0 docker-runtime-nvidia-1

echo "exclude=*nvidia* *cuda*" >> /etc/yum.conf
nvidia-smi
```

**Create your own P2 compatible GPU-optimized AMI**

You can create your own custom Amazon ECS GPU-optimized AMI that is compatible with P2 instances, and then launch P2 instances using the AMI.

1. Run the following command to clone the `amazon-ecs-ami` repository.

   ```
   git clone https://github.com/aws/amazon-ecs-ami
   ```

1. Set the required Amazon ECS agent and source Amazon Linux AMI versions in `release.auto.pkrvars.hcl` or `overrides.auto.pkrvars.hcl`.

1. Run the following command to build a private P2 compatible EC2 AMI.

   Replace *region* with the Region of the instance.

   ```
   REGION=region make al2keplergpu
   ```

1. Use the AMI with the following instance user data to connect to the Amazon ECS cluster.

   Replace *cluster-name* with the name of your cluster.

   ```
   #!/bin/bash
   echo "ECS_CLUSTER=cluster-name" >> /etc/ecs/ecs.config
   ```

# Amazon ECS task definitions for video transcoding workloads
<a name="ecs-vt1"></a>

To run video transcoding workloads on Amazon ECS, register [Amazon EC2 VT1](https://aws.amazon.com/ec2/instance-types/vt1/) instances. After you register these instances, you can run live and pre-rendered video transcoding workloads as tasks on Amazon ECS. Amazon EC2 VT1 instances use Xilinx U30 media transcoding cards to accelerate live and pre-rendered video transcoding workloads.

**Note**  
For instructions on how to run video transcoding workloads in containers other than Amazon ECS, see the [Xilinx documentation](https://xilinx.github.io/video-sdk/v1.5/container_setup.html#working-with-docker-vt1).

## Considerations
<a name="ecs-vt1-considerations"></a>

Before you begin deploying VT1 on Amazon ECS, consider the following:
+ Your clusters can contain a mix of VT1 and non-VT1 instances.
+ You need a Linux application that uses Xilinx U30 media transcoding cards with accelerated AVC (H.264) and HEVC (H.265) codecs.
**Important**  
Applications that use other codecs might not have improved performance on VT1 instances.
+ Only one transcoding task can run on a U30 card. Each card has two devices associated with it. You can run as many transcoding tasks as there are cards on each of your VT1 instances.
+ When creating a service or running a standalone task, you can use instance type attributes when configuring task placement constraints. This ensures that the task is launched on the container instance that you specify. Doing so helps ensure that you use your resources effectively and that your tasks for video transcoding workloads are on your VT1 instances. For more information, see [How Amazon ECS places tasks on container instances](task-placement.md).

  In the following example, a task is run on a `vt1.3xlarge` instance on your `default` cluster.

  ```
  aws ecs run-task \
       --cluster default \
       --task-definition vt1-3xlarge-xffmpeg-processor \
       --placement-constraints type=memberOf,expression="attribute:ecs.instance-type == vt1.3xlarge"
  ```
+ You configure a container to use the specific U30 card available on the host container instance. You can do this by using the `linuxParameters` parameter and specifying the device details. For more information, see [Task definition requirements](#ecs-vt1-requirements).

## Using a VT1 AMI
<a name="ecs-vt1-ami"></a>

You have two options for running an AMI on Amazon EC2 for Amazon ECS container instances. The first option is to use the Xilinx official AMI on the AWS Marketplace. The second option is to build your own AMI from the sample repository.
+ [Xilinx offers AMIs on the AWS Marketplace](https://aws.amazon.com/marketplace/pp/prodview-phvk6d4mq3hh6).
+ Amazon ECS provides a sample repository that you can use to build an AMI for video transcoding workloads. This AMI comes with Xilinx U30 drivers. You can find the repository that contains Packer scripts on [GitHub](https://github.com/aws-samples/aws-vt-baseami-pipeline). For more information about Packer, see the [Packer documentation](https://developer.hashicorp.com/packer/docs).

## Task definition requirements
<a name="ecs-vt1-requirements"></a>

To run video transcoding containers on Amazon ECS, your task definition must contain a video transcoding application that uses the accelerated H.264/AVC and H.265/HEVC codecs. You can build a container image by following the steps on the [Xilinx GitHub](https://xilinx.github.io/video-sdk/v1.5/container_setup.html#creating-a-docker-image-for-vt1-usage).

The task definition must be specific to the instance type. The instance types are 3xlarge, 6xlarge, and 24xlarge. You must configure a container to use specific Xilinx U30 devices that are available on the host container instance. You can do so using the `linuxParameters` parameter. The following table details the cards and device SoCs that are specific to each instance type.


| Instance Type | vCPUs | RAM (GiB) | U30 accelerator cards | Addressable XCU30 SoC devices | Device Paths | 
| --- | --- | --- | --- | --- | --- | 
| vt1.3xlarge | 12 | 24 | 1 | 2 | /dev/dri/renderD128,/dev/dri/renderD129 | 
| vt1.6xlarge | 24 | 48 | 2 | 4 | /dev/dri/renderD128,/dev/dri/renderD129,/dev/dri/renderD130,/dev/dri/renderD131 | 
| vt1.24xlarge | 96 | 192 | 8 | 16 | /dev/dri/renderD128,/dev/dri/renderD129,/dev/dri/renderD130,/dev/dri/renderD131,/dev/dri/renderD132,/dev/dri/renderD133,/dev/dri/renderD134,/dev/dri/renderD135,/dev/dri/renderD136,/dev/dri/renderD137,/dev/dri/renderD138,/dev/dri/renderD139,/dev/dri/renderD140,/dev/dri/renderD141,/dev/dri/renderD142,/dev/dri/renderD143 | 

**Important**  
If the task definition lists devices that the EC2 instance doesn't have, the task fails to run. When the task fails, the following error message appears in the `stoppedReason`: `CannotStartContainerError: Error response from daemon: error gathering device information while adding custom device "/dev/dri/renderD130": no such file or directory`.
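When a task stops for this reason, you can retrieve the message with the AWS CLI by querying the stopped task. The following is a sketch; the cluster name and task ARN are placeholders that you replace with your own values.

```
aws ecs describe-tasks \
     --cluster default \
     --tasks arn:aws:ecs:us-west-2:123456789012:task/default/EXAMPLE \
     --query "tasks[0].stoppedReason" \
     --output text
```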

# Specifying video transcoding in an Amazon ECS task definition
<a name="task-def-video-transcode"></a>

The following example shows the syntax for a task definition of a Linux container on Amazon EC2. This task definition is for container images that are built following the procedure in the [Xilinx documentation](https://xilinx.github.io/video-sdk/v1.5/container_setup.html#creating-a-docker-image-for-vt1-usage). If you use this example, replace `image` with your own image, and copy your video files to the `/home/ec2-user` directory on the instance.

------
#### [ vt1.3xlarge ]

1. Create a text file that's named `vt1-3xlarge-xffmpeg-linux.json` with the following content.

   ```
   {
       "family": "vt1-3xlarge-xffmpeg-processor",
       "requiresCompatibilities": ["EC2"],
       "placementConstraints": [
           {
               "type": "memberOf",
               "expression": "attribute:ecs.os-type == linux"
           },
           {
               "type": "memberOf",
               "expression": "attribute:ecs.instance-type == vt1.3xlarge"
           }
       ],
       "containerDefinitions": [
           {
               "entryPoint": [
                   "/bin/bash",
                   "-c"
               ],
               "command": ["/video/ecs_ffmpeg_wrapper.sh"],
               "linuxParameters": {
                   "devices": [
                       {
                           "containerPath": "/dev/dri/renderD128",
                           "hostPath": "/dev/dri/renderD128",
                           "permissions": [
                               "read",
                               "write"
                           ]
                       },
                       {
                           "containerPath": "/dev/dri/renderD129",
                           "hostPath": "/dev/dri/renderD129",
                           "permissions": [
                               "read",
                               "write"
                           ]
                       }
                   ]
               },
               "mountPoints": [
                   {
                       "containerPath": "/video",
                       "sourceVolume": "video_file"
                   }
               ],
               "cpu": 0,
               "memory": 12000,
               "image": "123456789012.dkr.ecr.us-west-2.amazonaws.com/aws/xilinx-xffmpeg",
               "essential": true,
               "name": "xilinx-xffmpeg"
           }
       ],
       "volumes": [
           {
               "name": "video_file",
               "host": {"sourcePath": "/home/ec2-user"}
           }
       ]
   }
   ```

1. Register the task definition.

   ```
   aws ecs register-task-definition --family vt1-3xlarge-xffmpeg-processor --cli-input-json file://vt1-3xlarge-xffmpeg-linux.json --region us-east-1
   ```

------
#### [ vt1.6xlarge ]

1. Create a text file that's named `vt1-6xlarge-xffmpeg-linux.json` with the following content.

   ```
   {
       "family": "vt1-6xlarge-xffmpeg-processor",
       "requiresCompatibilities": ["EC2"],
       "placementConstraints": [
           {
               "type": "memberOf",
               "expression": "attribute:ecs.os-type == linux"
           },
           {
               "type": "memberOf",
               "expression": "attribute:ecs.instance-type == vt1.6xlarge"
           }
       ],
       "containerDefinitions": [
           {
               "entryPoint": [
                   "/bin/bash",
                   "-c"
               ],
               "command": ["/video/ecs_ffmpeg_wrapper.sh"],
               "linuxParameters": {
                   "devices": [
                       {
                           "containerPath": "/dev/dri/renderD128",
                           "hostPath": "/dev/dri/renderD128",
                           "permissions": [
                               "read",
                               "write"
                           ]
                       },
                       {
                           "containerPath": "/dev/dri/renderD129",
                           "hostPath": "/dev/dri/renderD129",
                           "permissions": [
                               "read",
                               "write"
                           ]
                       },
                       {
                           "containerPath": "/dev/dri/renderD130",
                           "hostPath": "/dev/dri/renderD130",
                           "permissions": [
                               "read",
                               "write"
                           ]
                       },
                       {
                           "containerPath": "/dev/dri/renderD131",
                           "hostPath": "/dev/dri/renderD131",
                           "permissions": [
                               "read",
                               "write"
                           ]
                       }
                   ]
               },
               "mountPoints": [
                   {
                       "containerPath": "/video",
                       "sourceVolume": "video_file"
                   }
               ],
               "cpu": 0,
               "memory": 12000,
               "image": "123456789012.dkr.ecr.us-west-2.amazonaws.com/aws/xilinx-xffmpeg",
               "essential": true,
               "name": "xilinx-xffmpeg"
           }
       ],
       "volumes": [
           {
               "name": "video_file",
               "host": {"sourcePath": "/home/ec2-user"}
           }
       ]
   }
   ```

1. Register the task definition.

   ```
   aws ecs register-task-definition --family vt1-6xlarge-xffmpeg-processor --cli-input-json file://vt1-6xlarge-xffmpeg-linux.json --region us-east-1
   ```

------
#### [ vt1.24xlarge ]

1. Create a text file that's named `vt1-24xlarge-xffmpeg-linux.json` with the following content.

   ```
   {
       "family": "vt1-24xlarge-xffmpeg-processor",
       "requiresCompatibilities": ["EC2"],
       "placementConstraints": [
           {
               "type": "memberOf",
               "expression": "attribute:ecs.os-type == linux"
           },
           {
               "type": "memberOf",
               "expression": "attribute:ecs.instance-type == vt1.24xlarge"
           }
       ],
       "containerDefinitions": [
           {
               "entryPoint": [
                   "/bin/bash",
                   "-c"
               ],
               "command": ["/video/ecs_ffmpeg_wrapper.sh"],
               "linuxParameters": {
                   "devices": [
                       {
                           "containerPath": "/dev/dri/renderD128",
                           "hostPath": "/dev/dri/renderD128",
                           "permissions": [
                               "read",
                               "write"
                           ]
                       },
                       {
                           "containerPath": "/dev/dri/renderD129",
                           "hostPath": "/dev/dri/renderD129",
                           "permissions": [
                               "read",
                               "write"
                           ]
                       },
                       {
                           "containerPath": "/dev/dri/renderD130",
                           "hostPath": "/dev/dri/renderD130",
                           "permissions": [
                               "read",
                               "write"
                           ]
                       },
                       {
                           "containerPath": "/dev/dri/renderD131",
                           "hostPath": "/dev/dri/renderD131",
                           "permissions": [
                               "read",
                               "write"
                           ]
                       },
                       {
                           "containerPath": "/dev/dri/renderD132",
                           "hostPath": "/dev/dri/renderD132",
                           "permissions": [
                               "read",
                               "write"
                           ]
                       },
                       {
                           "containerPath": "/dev/dri/renderD133",
                           "hostPath": "/dev/dri/renderD133",
                           "permissions": [
                               "read",
                               "write"
                           ]
                       },
                       {
                           "containerPath": "/dev/dri/renderD134",
                           "hostPath": "/dev/dri/renderD134",
                           "permissions": [
                               "read",
                               "write"
                           ]
                       },
                       {
                           "containerPath": "/dev/dri/renderD135",
                           "hostPath": "/dev/dri/renderD135",
                           "permissions": [
                               "read",
                               "write"
                           ]
                       },
                       {
                           "containerPath": "/dev/dri/renderD136",
                           "hostPath": "/dev/dri/renderD136",
                           "permissions": [
                               "read",
                               "write"
                           ]
                       },
                       {
                           "containerPath": "/dev/dri/renderD137",
                           "hostPath": "/dev/dri/renderD137",
                           "permissions": [
                               "read",
                               "write"
                           ]
                       },
                       {
                           "containerPath": "/dev/dri/renderD138",
                           "hostPath": "/dev/dri/renderD138",
                           "permissions": [
                               "read",
                               "write"
                           ]
                       },
                       {
                           "containerPath": "/dev/dri/renderD139",
                           "hostPath": "/dev/dri/renderD139",
                           "permissions": [
                               "read",
                               "write"
                           ]
                       },
                       {
                           "containerPath": "/dev/dri/renderD140",
                           "hostPath": "/dev/dri/renderD140",
                           "permissions": [
                               "read",
                               "write"
                           ]
                       },
                       {
                           "containerPath": "/dev/dri/renderD141",
                           "hostPath": "/dev/dri/renderD141",
                           "permissions": [
                               "read",
                               "write"
                           ]
                       },
                       {
                           "containerPath": "/dev/dri/renderD142",
                           "hostPath": "/dev/dri/renderD142",
                           "permissions": [
                               "read",
                               "write"
                           ]
                       },
                       {
                           "containerPath": "/dev/dri/renderD143",
                           "hostPath": "/dev/dri/renderD143",
                           "permissions": [
                               "read",
                               "write"
                           ]
                       }
                   ]
               },
               "mountPoints": [
                   {
                       "containerPath": "/video",
                       "sourceVolume": "video_file"
                   }
               ],
               "cpu": 0,
               "memory": 12000,
               "image": "123456789012.dkr.ecr.us-west-2.amazonaws.com/aws/xilinx-xffmpeg",
               "essential": true,
               "name": "xilinx-xffmpeg"
           }
       ],
       "volumes": [
           {
               "name": "video_file",
               "host": {"sourcePath": "/home/ec2-user"}
           }
       ]
   }
   ```

1. Register the task definition.

   ```
   aws ecs register-task-definition --family vt1-24xlarge-xffmpeg-processor --cli-input-json file://vt1-24xlarge-xffmpeg-linux.json --region us-east-1
   ```

------

# Amazon ECS task definitions for AWS Neuron machine learning workloads
<a name="ecs-inference"></a>

You can register [Amazon EC2 Trn1](https://aws.amazon.com/ec2/instance-types/trn1/), [Amazon EC2 Trn2](https://aws.amazon.com/ec2/instance-types/trn2/), [Amazon EC2 Inf1](https://aws.amazon.com/ec2/instance-types/inf1/), and [Amazon EC2 Inf2](https://aws.amazon.com/ec2/instance-types/inf2/) instances to your clusters for machine learning workloads.

Amazon EC2 Trn1 and Trn2 instances are powered by [AWS Trainium](https://aws.amazon.com/ai/machine-learning/trainium/) chips. These instances provide high-performance, low-cost training for machine learning in the cloud. You can train a machine learning model using a machine learning framework with AWS Neuron on a Trn1 or Trn2 instance. Then, you can run the model on an Inf1 or Inf2 instance to use the acceleration of the AWS Inferentia chips.

Amazon EC2 Inf1 and Inf2 instances are powered by [AWS Inferentia](https://aws.amazon.com/ai/machine-learning/inferentia/) chips. They provide high performance and the lowest cost inference in the cloud.

Machine learning models are deployed to containers using [AWS Neuron](https://aws.amazon.com/ai/machine-learning/neuron/), a specialized software development kit (SDK). The SDK consists of a compiler, runtime, and profiling tools that optimize the machine learning performance of AWS machine learning chips. AWS Neuron supports popular machine learning frameworks such as TensorFlow, PyTorch, and Apache MXNet.

## Considerations
<a name="ecs-inference-considerations"></a>

Before you begin deploying Neuron on Amazon ECS, consider the following:
+ Your clusters can contain a mix of Trn1, Trn2, Inf1, Inf2, and other instances.
+ You need a Linux application in a container that uses a machine learning framework that supports AWS Neuron.
**Important**  
Applications that use other frameworks might not have improved performance on Trn1, Trn2, Inf1, and Inf2 instances.
+ Only one inference or inference-training task can run on each [AWS Trainium](https://aws.amazon.com/ai/machine-learning/trainium/) or [AWS Inferentia](https://aws.amazon.com/ai/machine-learning/inferentia/) chip. For Inf1, each chip has 4 NeuronCores. For Trn1, Trn2, and Inf2, each chip has 2 NeuronCores. You can run as many tasks as there are chips on each of your Trn1, Trn2, Inf1, and Inf2 instances.
+ When creating a service or running a standalone task, you can use instance type attributes when you configure task placement constraints. This ensures that the task is launched on the container instance that you specify. Doing so can help you optimize overall resource utilization and ensure that tasks for inference workloads are on your Trn1, Trn2, Inf1, and Inf2 instances. For more information, see [How Amazon ECS places tasks on container instances](task-placement.md).

  In the following example, a task is run on an `inf1.xlarge` instance on your `default` cluster.

  ```
  aws ecs run-task \
       --cluster default \
       --task-definition ecs-inference-task-def \
       --placement-constraints type=memberOf,expression="attribute:ecs.instance-type == inf1.xlarge"
  ```
+ Neuron resource requirements can't be defined in a task definition. Instead, you configure a container to use specific AWS Trainium or AWS Inferentia chips available on the host container instance. Do this by using the `linuxParameters` parameter and specifying the device details. For more information, see [Task definition requirements](#ecs-inference-requirements).

## Use the Amazon ECS-optimized Amazon Linux 2023 (Neuron) AMI
<a name="ecs-inference-ami2023"></a>

Amazon ECS provides an Amazon ECS optimized AMI that's based on Amazon Linux 2023 for AWS Trainium and AWS Inferentia workloads. It comes with the AWS Neuron drivers and runtime for Docker. This AMI makes running machine learning inference workloads easier on Amazon ECS.

We recommend using the Amazon ECS-optimized Amazon Linux 2023 (Neuron) AMI when launching your Amazon EC2 Trn1, Trn2, Inf1, and Inf2 instances. 

You can retrieve the current Amazon ECS-optimized Amazon Linux 2023 (Neuron) AMI using the AWS CLI with the following command.

```
aws ssm get-parameters --names /aws/service/ecs/optimized-ami/amazon-linux-2023/neuron/recommended
```
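If you want only the AMI ID, for example to pass to an Amazon EC2 launch command, you can query the `/image_id` sub-parameter. This is a sketch that assumes the Neuron parameter follows the same `/image_id` sub-parameter pattern as the other Amazon ECS-optimized AMI parameters; the Region is a placeholder.

```
aws ssm get-parameters \
     --names /aws/service/ecs/optimized-ami/amazon-linux-2023/neuron/recommended/image_id \
     --region us-east-1 \
     --query "Parameters[0].Value" \
     --output text
```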

## Task definition requirements
<a name="ecs-inference-requirements"></a>

To deploy Neuron on Amazon ECS, your task definition must contain the container definition for a pre-built container that serves the inference model for TensorFlow and is provided by AWS Deep Learning Containers. The container contains the AWS Neuron runtime and the TensorFlow Serving application. At startup, the container fetches your model from Amazon S3, launches Neuron TensorFlow Serving with the saved model, and waits for prediction requests. In the following example, the container image has TensorFlow 1.15 and Ubuntu 18.04. A complete list of pre-built Deep Learning Containers that are optimized for Neuron is maintained on GitHub. For more information, see [Using AWS Neuron TensorFlow Serving](https://docs.aws.amazon.com/dlami/latest/devguide/tutorial-inferentia-tf-neuron-serving.html).

```
763104351884.dkr.ecr.us-east-1.amazonaws.com/tensorflow-inference-neuron:1.15.4-neuron-py37-ubuntu18.04
```

Alternatively, you can build your own Neuron sidecar container image. For more information, see [Tutorial: Neuron TensorFlow Serving](https://github.com/aws-neuron/aws-neuron-sdk/blob/master/frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-utilizing-neuron-capabilities.rst) in the *AWS Deep Learning AMIs Developer Guide*.

The task definition must be specific to a single instance type. You must configure a container to use specific AWS Trainium or AWS Inferentia devices that are available on the host container instance. You can do so using the `linuxParameters` parameter. For a sample task definition, see [Specifying AWS Neuron machine learning in an Amazon ECS task definition](ecs-inference-task-def.md). The following table details the chips that are specific to each instance type.


| Instance Type | vCPUs | RAM (GiB) | AWS ML accelerator chips | Device Paths | 
| --- | --- | --- | --- | --- | 
| trn1.2xlarge | 8 | 32 | 1 | /dev/neuron0 | 
| trn1.32xlarge | 128 | 512 | 16 |  /dev/neuron0, /dev/neuron1, /dev/neuron2, /dev/neuron3, /dev/neuron4, /dev/neuron5, /dev/neuron6, /dev/neuron7, /dev/neuron8, /dev/neuron9, /dev/neuron10, /dev/neuron11, /dev/neuron12, /dev/neuron13, /dev/neuron14, /dev/neuron15  | 
| trn2.48xlarge | 192 | 1536 | 16 |  /dev/neuron0, /dev/neuron1, /dev/neuron2, /dev/neuron3, /dev/neuron4, /dev/neuron5, /dev/neuron6, /dev/neuron7, /dev/neuron8, /dev/neuron9, /dev/neuron10, /dev/neuron11, /dev/neuron12, /dev/neuron13, /dev/neuron14, /dev/neuron15  | 
| inf1.xlarge | 4 | 8 | 1 | /dev/neuron0 | 
| inf1.2xlarge | 8 | 16 | 1 | /dev/neuron0 | 
| inf1.6xlarge | 24 | 48 | 4 | /dev/neuron0, /dev/neuron1, /dev/neuron2, /dev/neuron3 | 
| inf1.24xlarge | 96 | 192 | 16 |  /dev/neuron0, /dev/neuron1, /dev/neuron2, /dev/neuron3, /dev/neuron4, /dev/neuron5, /dev/neuron6, /dev/neuron7, /dev/neuron8, /dev/neuron9, /dev/neuron10, /dev/neuron11, /dev/neuron12, /dev/neuron13, /dev/neuron14, /dev/neuron15  | 
| inf2.xlarge | 8 | 16 | 1 | /dev/neuron0 | 
| inf2.8xlarge | 32 | 64 | 1 | /dev/neuron0 | 
| inf2.24xlarge | 96 | 384 | 6 | /dev/neuron0, /dev/neuron1, /dev/neuron2, /dev/neuron3, /dev/neuron4, /dev/neuron5 | 
| inf2.48xlarge | 192 | 768 | 12 | /dev/neuron0, /dev/neuron1, /dev/neuron2, /dev/neuron3, /dev/neuron4, /dev/neuron5, /dev/neuron6, /dev/neuron7, /dev/neuron8, /dev/neuron9, /dev/neuron10, /dev/neuron11 | 

# Specifying AWS Neuron machine learning in an Amazon ECS task definition
<a name="ecs-inference-task-def"></a>

The following is an example Linux task definition for `inf1.xlarge`, displaying the syntax to use.

```
{
    "family": "ecs-neuron",
    "requiresCompatibilities": ["EC2"],
    "placementConstraints": [
        {
            "type": "memberOf",
            "expression": "attribute:ecs.os-type == linux"
        },
        {
            "type": "memberOf",
            "expression": "attribute:ecs.instance-type == inf1.xlarge"
        }
    ],
    "executionRoleArn": "${YOUR_EXECUTION_ROLE}",
    "containerDefinitions": [
        {
            "entryPoint": [
                "/usr/local/bin/entrypoint.sh",
                "--port=8500",
                "--rest_api_port=9000",
                "--model_name=resnet50_neuron",
                "--model_base_path=s3://amzn-s3-demo-bucket/resnet50_neuron/"
            ],
            "portMappings": [
                {
                    "hostPort": 8500,
                    "protocol": "tcp",
                    "containerPort": 8500
                },
                {
                    "hostPort": 8501,
                    "protocol": "tcp",
                    "containerPort": 8501
                },
                {
                    "hostPort": 0,
                    "protocol": "tcp",
                    "containerPort": 80
                }
            ],
            "linuxParameters": {
                "devices": [
                    {
                        "containerPath": "/dev/neuron0",
                        "hostPath": "/dev/neuron0",
                        "permissions": [
                            "read",
                            "write"
                        ]
                    }
                ],
                "capabilities": {
                    "add": [
                        "IPC_LOCK"
                    ]
                }
            },
            "cpu": 0,
            "memoryReservation": 1000,
            "image": "763104351884.dkr.ecr.us-east-1.amazonaws.com/tensorflow-inference-neuron:1.15.4-neuron-py37-ubuntu18.04",
            "essential": true,
            "name": "resnet50"
        }
    ]
}
```
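You can register and run a task definition like the preceding one with the AWS CLI. The following is a sketch that assumes you saved the JSON as `ecs-neuron.json` and that your `default` cluster contains Inf1 container instances.

```
# Register the task definition saved as ecs-neuron.json.
aws ecs register-task-definition --cli-input-json file://ecs-neuron.json

# Run it on the cluster that contains your Inf1 container instances.
aws ecs run-task --cluster default --task-definition ecs-neuron
```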

# Amazon ECS task definitions for deep learning instances
<a name="ecs-dl1"></a>

To use deep learning workloads on Amazon ECS, register [Amazon EC2 DL1](https://aws.amazon.com/ec2/instance-types/dl1/) instances to your clusters. Amazon EC2 DL1 instances are powered by Gaudi accelerators from Habana Labs (an Intel company). Use the Habana SynapseAI SDK to connect to the Habana Gaudi accelerators. The SDK supports the popular machine learning frameworks TensorFlow and PyTorch.

## Considerations
<a name="ecs-dl1-considerations"></a>

Before you begin deploying DL1 on Amazon ECS, consider the following:
+ Your clusters can contain a mix of DL1 and non-DL1 instances.
+ When creating a service or running a standalone task, you can use instance type attributes when you configure task placement constraints. This ensures that your task is launched on the container instance that you specify. Doing so helps you use your resources effectively and keeps your deep learning tasks on your DL1 instances. For more information, see [How Amazon ECS places tasks on container instances](task-placement.md).

  The following example runs a task on a `dl1.24xlarge` instance on your `default` cluster.

  ```
  aws ecs run-task \
       --cluster default \
       --task-definition ecs-dl1-task-def \
       --placement-constraints type=memberOf,expression="attribute:ecs.instance-type == dl1.24xlarge"
  ```

## Using a DL1 AMI
<a name="ecs-dl1-ami"></a>

You have three options for running an AMI on Amazon EC2 DL1 instances for Amazon ECS:
+ AWS Marketplace AMIs that are provided by Habana [here](https://aws.amazon.com/marketplace/pp/prodview-h24gzbgqu75zq).
+ Habana Deep Learning AMIs that are provided by Amazon Web Services. Because the Amazon ECS container agent isn't included, you must install it separately.
+ A custom AMI that you build with Packer using the scripts in the [GitHub repo](https://github.com/aws-samples/aws-habana-baseami-pipeline). For more information, see [the Packer documentation](https://developer.hashicorp.com/packer/docs).

# Specifying deep learning in an Amazon ECS task definition
<a name="ecs-dl1-requirements"></a>

To run Habana Gaudi accelerated deep learning containers on Amazon ECS, your task definition must contain the container definition for a pre-built container that serves the deep learning model for TensorFlow or PyTorch using Habana SynapseAI. The container is provided by AWS Deep Learning Containers.

The following container image has TensorFlow 2.7.0 and Ubuntu 20.04. A complete list of pre-built Deep Learning Containers that's optimized for the Habana Gaudi accelerators is maintained on GitHub. For more information, see [Habana Training Containers](https://github.com/aws/deep-learning-containers/blob/master/available_images.md#habana-training-containers).

```
763104351884.dkr.ecr.us-east-1.amazonaws.com/tensorflow-training-habana:2.7.0-hpu-py38-synapseai1.2.0-ubuntu20.04
```

The following is an example task definition for Linux containers on Amazon EC2, displaying the syntax to use. This example uses an image that contains the Habana Labs System Management Interface Tool (HL-SMI), found here: `vault.habana.ai/gaudi-docker/1.1.0/ubuntu20.04/habanalabs/tensorflow-installer-tf-cpu-2.6.0:1.1.0-614`

```
{
    "family": "dl-test",
    "requiresCompatibilities": ["EC2"],
    "placementConstraints": [
        {
            "type": "memberOf",
            "expression": "attribute:ecs.os-type == linux"
        },
        {
            "type": "memberOf",
            "expression": "attribute:ecs.instance-type == dl1.24xlarge"
        }
    ],
    "networkMode": "host",
    "cpu": "10240",
    "memory": "1024",
    "containerDefinitions": [
        {
            "entryPoint": [
                "sh",
                "-c"
            ],
            "command": ["hl-smi"],
            "cpu": 8192,
            "environment": [
                {
                    "name": "HABANA_VISIBLE_DEVICES",
                    "value": "all"
                }
            ],
            "image": "vault.habana.ai/gaudi-docker/1.1.0/ubuntu20.04/habanalabs/tensorflow-installer-tf-cpu-2.6.0:1.1.0-614",
            "essential": true,
            "name": "tensorflow-installer-tf-hpu"
        }
    ]
}
```
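You can register and run this task definition with the AWS CLI. The following is a sketch that assumes you saved the JSON as `dl-test.json` and that your `default` cluster contains a DL1 container instance; because the task uses `host` network mode, no network configuration is required.

```
# Register the task definition saved as dl-test.json.
aws ecs register-task-definition --cli-input-json file://dl-test.json

# Run it on the cluster that contains your DL1 container instance.
aws ecs run-task --cluster default --task-definition dl-test
```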

# Amazon ECS task definitions for 64-bit ARM workloads
<a name="ecs-arm64"></a>

Amazon ECS supports 64-bit ARM applications. You can run your applications on the platform that's powered by [AWS Graviton Processors](https://aws.amazon.com/ec2/graviton/). The platform is suitable for a wide variety of workloads, including application servers, microservices, high-performance computing, CPU-based machine learning inference, video encoding, electronic design automation, gaming, open-source databases, and in-memory caches.

## Considerations
<a name="ecs-arm64-considerations"></a>

Before you begin deploying task definitions that use the 64-bit ARM architecture, consider the following:
+ The applications can use the Fargate or EC2 launch types.
+ The applications can only use the Linux operating system.
+ For the Fargate launch type, the applications must use Fargate platform version `1.4.0` or later.
+ The applications can use Fluent Bit or CloudWatch for monitoring.
+ For the Fargate launch type, the following AWS Regions do not support 64-bit ARM workloads:
  + US East (N. Virginia), the `use1-az3` Availability Zone
+ For the EC2 launch type, see the following to verify that the Region that you're in supports the instance type that you want to use:
  + [Amazon EC2 M6g Instances](https://aws.amazon.com/ec2/instance-types/m6)
  + [Amazon EC2 T4g Instances](https://aws.amazon.com/ec2/instance-types/t4/)
  + [Amazon EC2 C6g Instances](https://aws.amazon.com/ec2/instance-types/c6g/)
  + [Amazon EC2 R6gd Instances](https://aws.amazon.com/ec2/instance-types/r6/)
  + [Amazon EC2 X2gd Instances](https://aws.amazon.com/ec2/instance-types/x2/)

  You can also use the Amazon EC2 `describe-instance-type-offerings` command with a filter to view the instance type offerings for your Region.

  ```
  aws ec2 describe-instance-type-offerings --filters Name=instance-type,Values=instance-type --region region
  ```

  The following example checks for M6 instance type availability in the US East (N. Virginia) (`us-east-1`) Region.

  ```
  aws ec2 describe-instance-type-offerings --filters "Name=instance-type,Values=m6*" --region us-east-1
  ```

  For more information, see [describe-instance-type-offerings](https://docs.aws.amazon.com/cli/latest/reference/ec2/describe-instance-type-offerings.html) in the *AWS CLI Command Reference*.
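  You can also narrow the output with `--query`, or check per-zone availability with `--location-type`. The following sketch lists the Availability Zones in `us-east-1` that offer `m6g.large`; the instance type and Region are illustrative.

  ```
  aws ec2 describe-instance-type-offerings \
       --location-type availability-zone \
       --filters "Name=instance-type,Values=m6g.large" \
       --query "InstanceTypeOfferings[].Location" \
       --region us-east-1
  ```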

# Specifying the ARM architecture in an Amazon ECS task definition
<a name="ecs-arm-specifying"></a>

To use the ARM architecture, specify `ARM64` for the `cpuArchitecture` task definition parameter. 

In the following example, the ARM architecture is specified in a task definition in JSON format.

```
{
    "runtimePlatform": {
        "operatingSystemFamily": "LINUX",
        "cpuArchitecture": "ARM64"
    },
...
}
```

The following example shows a task definition for the ARM architecture that prints "hello world."

```
{
 "family": "arm64-testapp",
 "networkMode": "awsvpc",
 "containerDefinitions": [
    {
        "name": "arm-container",
        "image": "public.ecr.aws/docker/library/busybox:latest",
        "cpu": 100,
        "memory": 100,
        "essential": true,
        "command": [ "echo hello world" ],
        "entryPoint": [ "sh", "-c" ]
    }
 ],
 "requiresCompatibilities": [ "EC2" ],
 "cpu": "256",
 "memory": "512",
 "runtimePlatform": {
        "operatingSystemFamily": "LINUX",
        "cpuArchitecture": "ARM64"
  },
 "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole"
}
```

# Send Amazon ECS logs to CloudWatch
<a name="using_awslogs"></a>

You can configure the containers in your tasks to send log information to CloudWatch Logs. If you're using Fargate for your tasks, you can view the logs from your containers. If you're using EC2, you can view the logs from all of your containers in one convenient location, and this prevents your container logs from taking up disk space on your container instances. 

**Note**  
The type of information that is logged by the containers in your task depends mostly on their `ENTRYPOINT` command. By default, the logs that are captured show the command output that you typically might see in an interactive terminal if you ran the container locally, which are the `STDOUT` and `STDERR` I/O streams. The `awslogs` log driver simply passes these logs from Docker to CloudWatch Logs. For more information about how Docker logs are processed, including alternative ways to capture different file data or streams, see [View logs for a container or service](https://docs.docker.com/engine/logging/) in the Docker documentation.

To send system logs from your Amazon ECS container instances to CloudWatch Logs, see [Monitoring Log Files](https://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/WhatIsCloudWatchLogs.html) and [CloudWatch Logs quotas](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/cloudwatch_limits_cwl.html) in the *Amazon CloudWatch Logs User Guide*.

## Fargate
<a name="enable_awslogs"></a>

If you're using Fargate for your tasks, you need to add the required `logConfiguration` parameters to your task definition to turn on the `awslogs` log driver. For more information, see [Example Amazon ECS task definition: Route logs to CloudWatch](specify-log-config.md).
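For reference, a minimal sketch of a `logConfiguration` object in a container definition might look like the following; the log group name, Region, and stream prefix are placeholders:

```
"logConfiguration": {
    "logDriver": "awslogs",
    "options": {
        "awslogs-group": "my-log-group",
        "awslogs-region": "us-east-1",
        "awslogs-stream-prefix": "my-prefix"
    }
}
```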

For Windows containers on Fargate, use one of the following options when any of your task definition parameters contain special characters such as `& \ < > ^ |`:
+ Add escaped double quotes (`\"`) around the entire parameter string

  Example

  ```
  "awslogs-multiline-pattern": "\"^[|DEBUG|INFO|WARNING|ERROR\"",
  ```
+ Add an escape character (`^`) before each special character

  Example

  ```
  "awslogs-multiline-pattern": "^^[^|DEBUG^|INFO^|WARNING^|ERROR",
  ```

## EC2
<a name="ec2-considerations"></a>

If you're using EC2 for your tasks and want to turn on the `awslogs` log driver, your Amazon ECS container instances require at least version 1.9.0 of the container agent. For information about how to check your agent version and update to the latest version, see [Updating the Amazon ECS container agent](ecs-agent-update.md).

**Note**  
You must use either an Amazon ECS-optimized AMI or a custom AMI with at least version `1.9.0-1` of the `ecs-init` package. When using a custom AMI, you must specify that the `awslogs` logging driver is available on the Amazon EC2 instance when you start the agent by using the following environment variable in your **docker run** statement or environment variable file.  

```
ECS_AVAILABLE_LOGGING_DRIVERS=["json-file","awslogs"]
```

Your Amazon ECS container instances also require the `logs:CreateLogStream` and `logs:PutLogEvents` permissions on the IAM role that you launch your container instances with. If you created your Amazon ECS container instance role before `awslogs` log driver support was enabled in Amazon ECS, you might need to add these permissions. The `ecsTaskExecutionRole` is used when it's assigned to the task and likely contains the correct permissions. For information about the task execution role, see [Amazon ECS task execution IAM role](task_execution_IAM_role.md). If your container instances use the managed IAM policy for container instances, your container instances likely have the correct permissions. For information about the managed IAM policy for container instances, see [Amazon ECS container instance IAM role](instance_IAM_role.md).
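A minimal sketch of an IAM policy statement that grants these permissions follows; in production, scope `Resource` down to your specific log groups:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": "*"
        }
    ]
}
```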

# Example Amazon ECS task definition: Route logs to CloudWatch
<a name="specify-log-config"></a>

Before your containers can send logs to CloudWatch, you must specify the `awslogs` log driver for containers in your task definition. For more information about the log parameters, see [Storage and logging](task_definition_parameters.md#container_definition_storage).

The task definition JSON that follows has a `logConfiguration` object specified for each container. One is for the WordPress container that sends logs to a log group called `awslogs-wordpress`. The other is for a MySQL container that sends logs to a log group that's called `awslogs-mysql`. Both containers use the `awslogs-example` log stream prefix.

```
{
    "containerDefinitions": [
        {
            "name": "wordpress",
            "links": [
                "mysql"
            ],
            "image": "public.ecr.aws/docker/library/wordpress:latest",
            "essential": true,
            "portMappings": [
                {
                    "containerPort": 80,
                    "hostPort": 80
                }
            ],
            "logConfiguration": {
                "logDriver": "awslogs",
                "options": {
                    "awslogs-create-group": "true",
                    "awslogs-group": "awslogs-wordpress",
                    "awslogs-region": "us-west-2",
                    "awslogs-stream-prefix": "awslogs-example"
                }
            },
            "memory": 500,
            "cpu": 10
        },
        {
            "environment": [
                {
                    "name": "MYSQL_ROOT_PASSWORD",
                    "value": "password"
                }
            ],
            "name": "mysql",
            "image": "public.ecr.aws/docker/library/mysql:latest",
            "cpu": 10,
            "memory": 500,
            "essential": true,
            "logConfiguration": {
                "logDriver": "awslogs",
                "options": {
                    "awslogs-create-group": "true",
                    "awslogs-group": "awslogs-mysql",
                    "awslogs-region": "us-west-2",
                    "awslogs-stream-prefix": "awslogs-example",
                    "mode": "non-blocking", 
                    "max-buffer-size": "25m" 
                }
            }
        }
    ],
    "family": "awslogs-example"
}
```

## Next steps
<a name="specify-log-config-next-steps"></a>
+ You can optionally set a retention policy for the log group by using the CloudWatch AWS CLI or API. For more information, see [put-retention-policy](https://docs.aws.amazon.com/cli/latest/reference/logs/put-retention-policy.html) in the *AWS Command Line Interface Reference*.
+ After you have registered a task definition with the `awslogs` log driver in a container definition log configuration, you can run a task or create a service with that task definition to start sending logs to CloudWatch Logs. For more information, see [Running an application as an Amazon ECS task](standalone-task-create.md) and [Creating an Amazon ECS rolling update deployment](create-service-console-v2.md).

# Send Amazon ECS logs to an AWS service or AWS Partner
<a name="using_firelens"></a>

You can use FireLens for Amazon ECS to use task definition parameters to route logs to an AWS service or AWS Partner Network (APN) destination for log storage and analytics. The AWS Partner Network is a global community of partners that leverages programs, expertise, and resources to build, market, and sell customer offerings. For more information, see [AWS Partner](https://aws.amazon.com/partners/work-with-partners/). FireLens works with [Fluentd](https://www.fluentd.org/) and [Fluent Bit](https://fluentbit.io/). We provide the AWS for Fluent Bit image or you can use your own Fluentd or Fluent Bit image.

By default, Amazon ECS configures the container dependency so that the FireLens container starts before any container that uses it. The FireLens container also stops after all containers that use it stop.

To use this feature, you must create an IAM role for your tasks that provides the permissions necessary to use any AWS services that the tasks require. For example, if a container is routing logs to Firehose, the task requires permission to call the `firehose:PutRecordBatch` API. For more information, see [Adding and Removing IAM Identity Permissions](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-attach-detach.html) in the *IAM User Guide*.

Your task might also require the Amazon ECS task execution role under the following conditions. For more information, see [Amazon ECS task execution IAM role](task_execution_IAM_role.md).
+ If your task is hosted on Fargate and you are pulling container images from Amazon ECR or referencing sensitive data from AWS Secrets Manager in your log configuration, then you must include the task execution IAM role.
+ When you use a custom configuration file that's hosted in Amazon S3, your task execution IAM role must include the `s3:GetObject` permission.

Consider the following when using FireLens for Amazon ECS:
+ We recommend that you add `my_service_` to the log container name so that you can easily distinguish container names in the console.
+ Amazon ECS adds a start container order dependency between the application containers and the FireLens container by default. If you specify your own container order between the application containers and the FireLens container, the default start order is overridden.
+ FireLens for Amazon ECS is supported for tasks that are hosted on both AWS Fargate on Linux and Amazon EC2 on Linux. Windows containers don't support FireLens.

  For information about how to configure centralized logging for Windows containers, see [Centralized logging for Windows containers on Amazon ECS using Fluent Bit](https://aws.amazon.com/blogs/containers/centralized-logging-for-windows-containers-on-amazon-ecs-using-fluent-bit/).
+ You can use CloudFormation templates to configure FireLens for Amazon ECS. For more information, see [AWS::ECS::TaskDefinition FirelensConfiguration](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ecs-taskdefinition-firelensconfiguration.html) in the *AWS CloudFormation User Guide*.
+ FireLens listens on port `24224`, so to ensure that the FireLens log router isn't reachable outside of the task, you must not allow inbound traffic on port `24224` in the security group your task uses. For tasks that use the `awsvpc` network mode, this is the security group associated with the task. For tasks using the `host` network mode, this is the security group that's associated with the Amazon EC2 instance hosting the task. For tasks that use the `bridge` network mode, don't create any port mappings that use port `24224`.
+ For tasks that use the `bridge` network mode, the container with the FireLens configuration must start before any application containers that rely on it start. To control the start order of your containers, use dependency conditions in your task definition. For more information, see [Container dependency](task_definition_parameters.md#container_definition_dependson).
**Note**  
If you use dependency condition parameters in container definitions with a FireLens configuration, ensure that each container has a `START` or `HEALTHY` condition requirement.
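  For example, a sketch of a `dependsOn` entry on an application container that waits for a FireLens container (named `log_router` here for illustration) to start:

  ```
  "dependsOn": [
      {
          "containerName": "log_router",
          "condition": "START"
      }
  ]
  ```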
+ By default, FireLens adds the cluster and task definition name and the Amazon Resource Name (ARN) of the cluster as metadata keys to your stdout/stderr container logs. The following is an example of the metadata format.

  ```
  "ecs_cluster": "cluster-name",
  "ecs_task_arn": "arn:aws:ecs:region:111122223333:task/cluster-name/f2ad7dba413f45ddb4EXAMPLE",
  "ecs_task_definition": "task-def-name:revision",
  ```

  If you do not want the metadata in your logs, set `enable-ecs-log-metadata` to `false` in the `firelensConfiguration` section of the task definition.

  ```
  "firelensConfiguration": {
     "type": "fluentbit",
     "options": {
        "enable-ecs-log-metadata": "false",
        "config-file-type": "file",
        "config-file-value": "/extra.conf"
     }
  }
  ```

You can configure the FireLens container to run as a non-root user. Consider the following:
+  To configure the FireLens container to run as a non-root user, you must specify the user in one of the following formats:
  + `uid`
  + `uid:gid`
  + `uid:group`

  For more information about specifying a user in a container definition, see [ContainerDefinition](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_ContainerDefinition.html) in the *Amazon Elastic Container Service API Reference*.

  The FireLens container receives application logs over a UNIX socket. The Amazon ECS agent uses the `uid` to assign ownership of the socket directory to the FireLens container.
+ Configuring the FireLens container to run as a non-root user is supported on Amazon ECS Agent version `1.96.0` and later, and Amazon ECS-optimized AMI version `v20250716` and later.
+ When you specify a user for the FireLens container, the `uid` must be unique and not used for other processes belonging to other containers in the task or the container instance.
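For example, a sketch of a FireLens container definition that runs as a non-root user; the `uid` value `1000` is illustrative, and the image shown is the AWS for Fluent Bit image:

```
{
    "name": "log_router",
    "image": "public.ecr.aws/aws-observability/aws-for-fluent-bit:stable",
    "essential": true,
    "user": "1000",
    "firelensConfiguration": {
        "type": "fluentbit"
    }
}
```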

For information about how to use multiple configuration files with Amazon ECS, including files that you host or files in Amazon S3, see [Init process for Fluent Bit on ECS, multi-config support](https://github.com/aws/aws-for-fluent-bit/tree/mainline/use_cases/init-process-for-fluent-bit).

For information about example configurations, see [Example Amazon ECS task definition: Route logs to FireLens](firelens-taskdef.md).

For more information about configuring logs for high throughput, see [Configuring Amazon ECS logs for high throughput](firelens-docker-buffer-limit.md).

# Configuring Amazon ECS logs for high throughput
<a name="firelens-docker-buffer-limit"></a>

For high log throughput scenarios, we recommend using the `awsfirelens` log driver with FireLens and Fluent Bit. Fluent Bit is a lightweight log processor that's efficient with resources and can handle millions of log records. However, achieving optimal performance at scale requires tuning its configuration.

This section covers advanced Fluent Bit optimization techniques for handling high log throughput while maintaining system stability and ensuring no data loss.

For information about how to use custom configuration files with FireLens, see [Use a custom configuration file](firelens-taskdef.md#firelens-taskdef-customconfig). For additional examples, see [Amazon ECS FireLens examples](https://github.com/aws-samples/amazon-ecs-firelens-examples) on GitHub.

**Note**  
Some configuration options in this section, such as `workers` and `threaded`, require AWS for Fluent Bit version 3 or later. For information about available versions, see [AWS for Fluent Bit releases](https://github.com/aws/aws-for-fluent-bit/releases).

## Understanding chunks
<a name="firelens-understanding-chunks"></a>

Fluent Bit processes data in units called *chunks*. When an INPUT plugin receives data, the engine creates a chunk that gets stored in memory or on the filesystem before being sent to OUTPUT destinations.

Buffering behavior depends on the `storage.type` setting in your INPUT sections. By default, Fluent Bit uses memory buffering. For high-throughput or production scenarios, filesystem buffering provides better resilience.

For more information, see [Chunks](https://docs.fluentbit.io/manual/administration/buffering-and-storage#chunks) in the Fluent Bit documentation and [What is a Chunk?](https://github.com/aws-samples/amazon-ecs-firelens-examples/tree/mainline/examples/fluent-bit/oomkill-prevention#what-is-a-chunk) in the AWS for Fluent Bit examples repository.

## Memory buffering (default)
<a name="firelens-memory-buffering"></a>

By default, Fluent Bit uses memory buffering (`storage.type memory`). You can limit memory usage per INPUT plugin using the `Mem_Buf_Limit` parameter.

The following example shows a memory-buffered input configuration:

```
[INPUT]
    Name          tcp
    Tag           ApplicationLogs
    Port          5170
    storage.type  memory
    Mem_Buf_Limit 5MB
```

**Important**  
When `Mem_Buf_Limit` is exceeded for a plugin, Fluent Bit pauses the input and new records are lost. This can cause backpressure and slow down your application. The following warning appears in the Fluent Bit logs:  

```
[input] tcp.1 paused (mem buf overlimit)
```

Memory buffering is suitable for simple use cases with low to moderate log throughput. For high-throughput or production scenarios where data loss is a concern, use filesystem buffering instead.

For more information, see [Buffering and Memory](https://docs.fluentbit.io/manual/administration/buffering-and-storage#buffering-and-memory) in the Fluent Bit documentation and [Memory Buffering Only](https://github.com/aws-samples/amazon-ecs-firelens-examples/tree/mainline/examples/fluent-bit/oomkill-prevention#case-1-memory-buffering-only-default-or-storagetype-memory) in the AWS for Fluent Bit examples repository.

## Filesystem buffering
<a name="firelens-filesystem-buffering"></a>

For high-throughput scenarios, we recommend using filesystem buffering. For more information about how Fluent Bit manages buffering and storage, see [Buffering and Storage](https://docs.fluentbit.io/manual/administration/buffering-and-storage) in the Fluent Bit documentation.

Filesystem buffering provides the following advantages:
+ **Larger buffer capacity** – Disk space is typically more abundant than memory.
+ **Persistence** – Buffered data survives Fluent Bit restarts.
+ **Graceful degradation** – During output failures, data accumulates on disk rather than causing memory exhaustion.

To enable filesystem buffering, provide a custom Fluent Bit configuration file. The following example shows the recommended configuration:

```
[SERVICE]
    # Flush logs every 1 second
    Flush 1
    # Wait 120 seconds during shutdown to flush remaining logs
    Grace 120
    # Directory for filesystem buffering
    storage.path             /var/log/flb-storage/
    # Limit chunks stored 'up' in memory (reduce for memory-constrained environments)
    storage.max_chunks_up    32
    # Flush backlog chunks to destinations during shutdown (prevents log loss)
    storage.backlog.flush_on_shutdown On

[INPUT]
    Name forward
    unix_path /var/run/fluent.sock
    # Run input in separate thread to prevent blocking
    threaded true
    # Enable filesystem buffering for persistence
    storage.type filesystem

[OUTPUT]
    Name cloudwatch_logs
    Match *
    region us-west-2
    log_group_name /aws/ecs/my-app
    log_stream_name $(ecs_task_id)
    # Use multiple workers for parallel processing
    workers 2
    # Retry failed flushes up to 15 times
    retry_limit 15
    # Maximum disk space for buffered data for this output
    storage.total_limit_size 10G
```

Key configuration parameters:

`storage.path`  
The directory where Fluent Bit stores buffered chunks on disk.

`storage.backlog.flush_on_shutdown`  
When enabled, Fluent Bit attempts to flush all backlog filesystem chunks to their destinations during shutdown. This helps ensure data delivery before Fluent Bit stops, but may increase shutdown time.

`storage.max_chunks_up`  
The number of chunks that remain in memory. The default is 128 chunks, which can consume more than 500 MB of memory because each chunk can use up to 4–5 MB. In memory-constrained environments, lower this value. For example, if you have 50 MB available for buffering, set this to 8–10 chunks.

`storage.type filesystem`  
Enables filesystem storage for the input plugin. Despite the name, Fluent Bit uses `mmap` to map chunks to both memory and disk, providing persistence without sacrificing performance.

`storage.total_limit_size`  
The maximum disk space for buffered data for a specific OUTPUT plugin. When this limit is reached, the oldest records for that output are dropped. For more information about sizing, see [Understanding `storage.total_limit_size`](#firelens-storage-sizing).

`threaded true`  
Runs the input in its own thread, separate from Fluent Bit's main event loop. This prevents slow inputs from blocking the entire pipeline.

For more information, see [Filesystem Buffering](https://docs.fluentbit.io/manual/administration/buffering-and-storage#filesystem-buffering) in the Fluent Bit documentation and [Filesystem and Memory Buffering](https://github.com/aws-samples/amazon-ecs-firelens-examples/tree/mainline/examples/fluent-bit/oomkill-prevention#case-2-filesystem-and-memory-buffering-storagetype-filesystem) in the AWS for Fluent Bit examples repository.

## Understanding `storage.total_limit_size`
<a name="firelens-storage-sizing"></a>

The `storage.total_limit_size` parameter on each OUTPUT plugin controls the maximum disk space for buffered data for that output. When this limit is reached, the oldest records for that output are dropped to make room for new data. When disk space is completely exhausted, Fluent Bit fails to queue records and they are lost.

Use the following formula to calculate the appropriate `storage.total_limit_size` based on your log rate and desired recovery window:

```
If log rate is in KB/s, convert to MB/s first:
log_rate (MB/s) = log_rate (KB/s) / 1000

storage.total_limit_size (GB) = log_rate (MB/s) × duration (hours) × 3600 (seconds/hour) / 1000 (MB to GB)
```

The following table shows example calculations for common log rates and recovery windows:


| Log Rate | 1 hour | 6 hours | 12 hours | 24 hours | 
| --- | --- | --- | --- | --- | 
| 0.25 MB/s | 0.9 GB | 5.4 GB | 10.8 GB | 21.6 GB | 
| 0.5 MB/s | 1.8 GB | 10.8 GB | 21.6 GB | 43.2 GB | 
| 1 MB/s | 3.6 GB | 21.6 GB | 43.2 GB | 86.4 GB | 
| 5 MB/s | 18 GB | 108 GB | 216 GB | 432 GB | 
| 10 MB/s | 36 GB | 216 GB | 432 GB | 864 GB | 

To observe peak throughput and choose appropriate buffer sizes, use the [measure-throughput FireLens sample](https://github.com/aws-samples/amazon-ecs-firelens-examples/tree/mainline/examples/measure-throughput).

Use the formula, example calculations, and benchmarking to choose a suitable `storage.total_limit_size` that provides runway for best-effort recovery during an outage.
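The formula above can also be expressed as a few lines of Python for quick sizing checks. This is illustrative only; the function name is ours and is not part of any AWS tooling:

```python
def total_limit_size_gb(log_rate_mb_s: float, duration_hours: float) -> float:
    """Disk space in GB needed to buffer logs at the given rate for the given window."""
    return log_rate_mb_s * duration_hours * 3600 / 1000

# Reproduces rows of the table above:
print(total_limit_size_gb(1, 6))     # → 21.6
print(total_limit_size_gb(0.25, 1))  # → 0.9
print(total_limit_size_gb(5, 24))    # → 432.0
```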

## Amazon ECS task storage requirements
<a name="firelens-storage-task-requirements"></a>

Sum all `storage.total_limit_size` values across OUTPUT sections and add a buffer for overhead. This total determines the storage space needed in your Amazon ECS task definition. For example, 3 outputs × 10 GB each = 30 GB, plus a buffer of 5–10 GB, for a total of 35–40 GB required. If the total exceeds available storage, Fluent Bit may fail to queue records and they will be lost.

The following storage options are available:

Bind mounts (ephemeral storage)  
+ For AWS Fargate, the default is 20 GB of ephemeral storage (max 200 GB). Configure using `ephemeralStorage` in the task definition. For more information, see [EphemeralStorage](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ecs-taskdefinition-ephemeralstorage.html) in the *AWS CloudFormation User Guide*.
+ For EC2, the default is 30 GB when using the Amazon ECS-optimized AMI (shared between the OS and Docker). Increase by changing the root volume size.

Amazon EBS volumes  
+ Provides highly available, durable, high-performance block storage.
+ Requires volume configuration and `mountPoint` in the task definition pointing to `storage.path` (default: `/var/log/flb-storage/`).
+ For more information, see [Defer volume configuration to launch time in an Amazon ECS task definition](specify-ebs-config.md).

Amazon EFS volumes  
+ Provides simple, scalable file storage.
+ Requires volume configuration and `mountPoint` in the task definition pointing to `storage.path` (default: `/var/log/flb-storage/`).
+ For more information, see [Specify an Amazon EFS file system in an Amazon ECS task definition](specify-efs-config.md).

For more information about data volumes, see [Storage options for Amazon ECS tasks](using_data_volumes.md).

## Optimize output configuration
<a name="firelens-output-optimization"></a>

Network issues, service outages, and destination throttling can prevent logs from being delivered. Proper output configuration ensures resilience without data loss.

When an output flush fails, Fluent Bit can retry the operation. The following parameters control retry behavior:

`retry_limit`  
The maximum number of retries after the initial attempt before dropping records. The default is 1. For example, `retry_limit 3` means 4 total attempts (1 initial + 3 retries). For production environments, we recommend 15 or higher, which covers several minutes of outage with exponential backoff.  
Set to `no_limits` or `False` for infinite retries:  
+ With memory buffering, infinite retries cause the input plugin to pause when memory limits are reached.
+ With filesystem buffering, the oldest records are dropped when `storage.total_limit_size` is reached.
After exhausting all retry attempts (1 initial + `retry_limit` retries), records are dropped. AWS plugins with `auto_retry_requests true` (default) provide an additional retry layer before Fluent Bit's retry mechanism. For more information, see [Configure retries](https://docs.fluentbit.io/manual/administration/scheduling-and-retries#configure-retries) in the Fluent Bit documentation.  
For example, `retry_limit 3` with default settings (`scheduler.base 5`, `scheduler.cap 2000`, `net.connect_timeout 10s`) provides approximately 70 seconds of scheduler wait time (10s + 20s + 40s), 40 seconds of network connect timeouts (4 attempts × 10s), plus AWS plugin retries, totaling approximately 2–10 minutes depending on network conditions and OS TCP timeouts.

`scheduler.base`  
The base seconds between retries (default: 5). We recommend 10 seconds.

`scheduler.cap`  
The maximum seconds between retries (default: 2000). We recommend 60 seconds.

Wait time between retries uses exponential backoff with jitter:

```
wait_time = random(base, min(base × 2^retry_number, cap))
```

For example, with `scheduler.base 10` and `scheduler.cap 60`:
+ First retry: random wait between 10–20 seconds
+ Second retry: random wait between 10–40 seconds
+ Third retry and later: random wait between 10–60 seconds (capped)
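The backoff behavior can be sketched in Python; this is illustrative only, and Fluent Bit's internal implementation may differ in details:

```python
import random

def retry_wait(base: float, cap: float, retry_number: int) -> float:
    """Wait time before a retry: a random value in [base, min(base * 2^retry_number, cap)]."""
    return random.uniform(base, min(base * 2 ** retry_number, cap))

# With scheduler.base 10 and scheduler.cap 60:
print(retry_wait(10, 60, 1))  # first retry: between 10 and 20 seconds
print(retry_wait(10, 60, 2))  # second retry: between 10 and 40 seconds
print(retry_wait(10, 60, 3))  # third retry: between 10 and 60 seconds (capped)
```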

For more information, see [Configure wait time for retry](https://docs.fluentbit.io/manual/administration/scheduling-and-retries#configure-wait-time-for-retry) and [Networking](https://docs.fluentbit.io/manual/administration/networking) in the Fluent Bit documentation.

`workers`  
The number of threads for parallel output processing. Multiple workers allow concurrent flushes, improving throughput when processing many chunks.

`auto_retry_requests`  
An AWS plugin-specific setting that provides an additional retry layer before Fluent Bit's built-in retry mechanism. The default is `true`. When enabled, the AWS output plugin retries failed requests internally before the request is considered a failed flush and subject to the `retry_limit` configuration.

The `Grace` parameter in the `[SERVICE]` section sets the time Fluent Bit waits during shutdown to flush buffered data. The `Grace` period must be coordinated with the container's `stopTimeout`. Ensure that `stopTimeout` exceeds the `Grace` period to allow Fluent Bit to complete flushing before receiving `SIGKILL`. For example, if `Grace` is 120 seconds, set `stopTimeout` to 150 seconds.
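For example, a sketch of a FireLens container definition entry that pairs a 150-second `stopTimeout` with the 120-second `Grace` period above; the container name is illustrative, and some platforms cap the maximum `stopTimeout` value:

```
{
    "name": "log_router",
    "essential": true,
    "stopTimeout": 150,
    "firelensConfiguration": {
        "type": "fluentbit"
    }
}
```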

The following example shows a complete Fluent Bit configuration with all recommended settings for high-throughput scenarios:

```
[SERVICE]
    # Flush logs every 1 second
    Flush 1
    # Wait 120 seconds during shutdown to flush remaining logs
    Grace 120
    # Directory for filesystem buffering
    storage.path             /var/log/flb-storage/
    # Limit chunks stored 'up' in memory (reduce for memory-constrained environments)
    storage.max_chunks_up    32
    # Flush backlog chunks to destinations during shutdown (prevents log loss)
    storage.backlog.flush_on_shutdown On
    # Minimum seconds between retries
    scheduler.base           10
    # Maximum seconds between retries (exponential backoff cap)
    scheduler.cap            60

[INPUT]
    Name forward
    unix_path /var/run/fluent.sock
    # Run input in separate thread to prevent blocking
    threaded true
    # Enable filesystem buffering for persistence
    storage.type filesystem

[OUTPUT]
    Name cloudwatch_logs
    Match *
    region us-west-2
    log_group_name /aws/ecs/my-app
    log_stream_name $(ecs_task_id)
    # Use multiple workers for parallel processing
    workers 2
    # Retry failed flushes up to 15 times
    retry_limit 15
    # Maximum disk space for buffered data for this output
    storage.total_limit_size 10G
```

## Understanding data loss scenarios
<a name="firelens-record-loss-scenarios"></a>

Records can be lost during extended outages or issues with output destinations. The configuration recommendations in this guide are best-effort approaches to minimize data loss, but cannot guarantee zero loss during prolonged failures. Understanding these scenarios helps you configure Fluent Bit to maximize resilience.

Records can be lost in two ways: oldest records are dropped when storage fills up, or newest records are rejected when the system cannot accept more data.

### Oldest records dropped
<a name="firelens-record-loss-oldest-dropped"></a>

The oldest buffered records are dropped when retry attempts are exhausted or when `storage.total_limit_size` fills up and needs to make room for new data.

Retry limit exceeded  
Occurs after AWS plugin retries (if `auto_retry_requests true`) plus 1 initial Fluent Bit attempt plus `retry_limit` retries. To mitigate, set `retry_limit no_limits` per OUTPUT plugin for infinite retries:  

```
[OUTPUT]
    Name                        cloudwatch_logs
    Match                       ApplicationLogs
    retry_limit                 no_limits
    auto_retry_requests         true
```
Infinite retries prevent dropping records due to retry exhaustion, but may cause `storage.total_limit_size` to fill up.

Storage limit reached (filesystem buffering)  
Occurs when the output destination is unavailable longer than your configured `storage.total_limit_size` can buffer. For example, a 10 GB buffer at 1 MB/s log rate provides approximately 2.7 hours of buffering. To mitigate, increase `storage.total_limit_size` per OUTPUT plugin and provision adequate Amazon ECS task storage:  

```
[OUTPUT]
    Name                        cloudwatch_logs
    Match                       ApplicationLogs
    storage.total_limit_size    10G
```

### Newest records rejected
<a name="firelens-record-loss-newest-rejected"></a>

The newest records are dropped when disk space is exhausted or when input is paused due to `Mem_Buf_Limit`.

Disk space exhausted (filesystem buffering)  
Occurs when disk space is completely exhausted. Fluent Bit fails to queue new records and they are lost. To mitigate, sum all `storage.total_limit_size` values and provision adequate Amazon ECS task storage. For more information, see [Amazon ECS task storage requirements](#firelens-storage-task-requirements).

Memory limit reached (memory buffering)  
Occurs when the output destination is unavailable and the memory buffer fills. Paused input plugins stop accepting new records. To mitigate, use `storage.type filesystem` for better resilience, or increase `Mem_Buf_Limit`.
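As a sketch of the first mitigation, filesystem buffering is enabled by setting a `storage.path` in the `SERVICE` section and `storage.type filesystem` on the input. The path shown is an assumption; choose a location on a volume with adequate space:

```
[SERVICE]
    # Directory for on-disk chunk storage (example path)
    storage.path    /var/log/flb-storage/

[INPUT]
    Name            forward
    # Buffer chunks on disk so the input isn't paused when memory fills
    storage.type    filesystem
```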

### Best practices to minimize data loss
<a name="firelens-record-loss-best-practices"></a>

Consider the following best practices to minimize data loss:
+ **Use filesystem buffering** – Set `storage.type filesystem` for better resilience during outages.
+ **Size storage appropriately** – Calculate `storage.total_limit_size` based on log rate and desired recovery window.
+ **Provision adequate disk** – Ensure the Amazon ECS task has sufficient ephemeral storage, Amazon EBS, or Amazon EFS.
+ **Configure retry behavior** – Balance between `retry_limit` (drops records after exhausting retries) and `no_limits` (retries indefinitely but may fill storage).

## Use multi-destination logging for reliability
<a name="firelens-multi-destination"></a>

Sending logs to multiple destinations eliminates single points of failure. For example, if CloudWatch Logs experiences an outage, logs still reach Amazon S3.

Multi-destination logging can provide the following benefits:
+ **Redundancy** – If one destination fails, logs still reach the other.
+ **Recovery** – Reconstruct gaps in one system from the other.
+ **Durability** – Archive logs in Amazon S3 for long-term retention.
+ **Cost optimization** – Keep recent logs in a fast query service like CloudWatch Logs with shorter retention, while archiving all logs to lower-cost Amazon S3 storage for long-term retention.

The Amazon S3 output plugin also supports compression options such as gzip and Parquet format, which can reduce storage costs. For more information, see [S3 compression](https://docs.fluentbit.io/manual/pipeline/outputs/s3#compression) in the Fluent Bit documentation.

The following Fluent Bit configuration sends logs to both CloudWatch Logs and Amazon S3:

```
[OUTPUT]
    Name cloudwatch_logs
    Match *
    region us-west-2
    log_group_name /aws/ecs/my-app
    log_stream_name $(ecs_task_id)
    workers 2
    retry_limit 15

[OUTPUT]
    Name s3
    Match *
    bucket my-logs-bucket
    region us-west-2
    total_file_size 100M
    s3_key_format /fluent-bit-logs/$(ecs_task_id)/%Y%m%d/%H/%M/$UUID
    upload_timeout 10m
    # Maximum disk space for buffered data for this output
    storage.total_limit_size 5G
```

Both outputs use the same `Match *` pattern, so all records are sent to both destinations independently. During an outage of one destination, logs continue flowing to the other while failed flushes accumulate in the filesystem buffer for later retry.

## Use file-based logging with the tail input plugin
<a name="firelens-tail-input"></a>

For high-throughput scenarios where log loss is a critical concern, you can use an alternative approach: have your application write logs to files on disk, and configure Fluent Bit to read them using the `tail` input plugin. This approach bypasses the Docker logging driver layer entirely.

File-based logging with the tail plugin provides the following benefits:
+ **Offset tracking** – The tail plugin can store file offsets in a database file (using the `DB` option), providing durability across Fluent Bit restarts. This helps prevent log loss during container restarts.
+ **Input-level buffering** – You can configure memory buffer limits directly on the input plugin using `Mem_Buf_Limit`, providing more granular control over memory usage.
+ **Avoids Docker overhead** – Logs go directly from file to Fluent Bit without passing through Docker's log buffers.

To use this approach, your application must write logs to files instead of `stdout`. Both the application container and the Fluent Bit container mount a shared volume where the log files are stored.
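The shared-volume wiring can be sketched in the task definition as follows (the container and volume names are hypothetical, and both container definitions are abbreviated):

```
{
  "volumes": [
    {
      "name": "app-logs"
    }
  ],
  "containerDefinitions": [
    {
      "name": "app",
      "mountPoints": [
        {
          "sourceVolume": "app-logs",
          "containerPath": "/var/log"
        }
      ]
    },
    {
      "name": "log_router",
      "mountPoints": [
        {
          "sourceVolume": "app-logs",
          "containerPath": "/var/log",
          "readOnly": true
        }
      ]
    }
  ]
}
```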

The following example shows a tail input configuration with best practices:

```
[INPUT]
    Name tail
    # File path or glob pattern to tail
    Path /var/log/app.log
    # Database file for storing file offsets (enables resuming after restart)
    DB /var/log/flb_tail.db
    # When true, the database file is accessed only by Fluent Bit (improves performance)
    DB.locking true
    # Skip long lines instead of skipping the entire file
    Skip_Long_Lines On
    # How often (in seconds) to check for new files matching the glob pattern
    Refresh_Interval 10
    # Extra seconds to monitor a file after rotation to account for pending flush
    Rotate_Wait 30
    # Maximum size of the buffer for a single line
    Buffer_Max_Size 10MB
    # Initial allocation size for reading file data
    Buffer_Chunk_Size 1MB
    # Maximum memory buffer size (tail pauses when full)
    Mem_Buf_Limit 75MB
```

When using the tail input plugin, consider the following:
+ Implement log rotation for your application logs to prevent disk exhaustion. Monitor the underlying volume metrics to gauge performance.
+ Consider settings like `Ignore_Older`, `Read_from_Head`, and multiline parsers based on your log format.

For more information, see [Tail](https://docs.fluentbit.io/manual/pipeline/inputs/tail) in the Fluent Bit documentation. For best practices, see [Tail config with best practices](https://github.com/aws/aws-for-fluent-bit/blob/mainline/troubleshooting/debugging.md#tail-config-with-best-practices) in the AWS for Fluent Bit troubleshooting guide.

## Log directly to FireLens
<a name="firelens-environment-variables"></a>

When the `awsfirelens` log driver is specified in a task definition, the Amazon ECS container agent injects the following environment variables into the container:

`FLUENT_HOST`  
The IP address that's assigned to the FireLens container.  
If you're using EC2 with the `bridge` network mode, the `FLUENT_HOST` environment variable in your application container can become inaccurate after a restart of the FireLens log router container (the container with the `firelensConfiguration` object in its container definition). This is because `FLUENT_HOST` is a dynamic IP address and can change after a restart. Logging directly from the application container to the `FLUENT_HOST` IP address can start failing after the address changes. For more information about restarting individual containers, see [Restart individual containers in Amazon ECS tasks with container restart policies](container-restart-policy.md).

`FLUENT_PORT`  
The port that the Fluent Forward protocol is listening on.

You can use these environment variables to log directly to the Fluent Bit log router from your application code using the Fluent Forward protocol, instead of writing to `stdout`. This approach bypasses the Docker logging driver layer, which provides the following benefits:
+ **Lower latency** – Logs go directly to Fluent Bit without passing through Docker's logging infrastructure.
+ **Structured logging** – Send structured log data natively without JSON encoding overhead.
+ **Better control** – Your application can implement its own buffering and error handling logic.

The following Fluent logger libraries support the Fluent Forward protocol and can be used to send logs directly to Fluent Bit:
+ **Go** – [fluent-logger-golang](https://github.com/fluent/fluent-logger-golang)
+ **Python** – [fluent-logger-python](https://github.com/fluent/fluent-logger-python)
+ **Java** – [fluent-logger-java](https://github.com/fluent/fluent-logger-java)
+ **Node.js** – [fluent-logger-node](https://github.com/fluent/fluent-logger-node)
+ **Ruby** – [fluent-logger-ruby](https://github.com/fluent/fluent-logger-ruby)
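The libraries above handle the wire format for you. As an illustration only, the following standard-library sketch shows what a direct connection involves: a Fluent Forward event is a MessagePack-encoded `[tag, timestamp, record]` array sent over TCP to `FLUENT_HOST`:`FLUENT_PORT`. The encoder covers only the types this event needs; in practice, use one of the Fluent logger libraries or a full MessagePack implementation:

```python
import os
import socket
import struct
import time

def msgpack_encode(obj):
    """Minimal MessagePack encoder for the types a Fluent Forward event
    needs (bool, non-negative int, str, dict, list). Not a full codec."""
    if isinstance(obj, bool):
        return b"\xc3" if obj else b"\xc2"
    if isinstance(obj, int):
        if 0 <= obj <= 0x7F:
            return bytes([obj])                      # positive fixint
        return b"\xce" + struct.pack(">I", obj)      # uint32
    if isinstance(obj, str):
        data = obj.encode("utf-8")
        if len(data) <= 31:
            return bytes([0xA0 | len(data)]) + data  # fixstr
        return b"\xd9" + bytes([len(data)]) + data   # str8 (up to 255 bytes)
    if isinstance(obj, dict):
        out = bytes([0x80 | len(obj)])               # fixmap (up to 15 entries)
        for key, value in obj.items():
            out += msgpack_encode(key) + msgpack_encode(value)
        return out
    if isinstance(obj, (list, tuple)):
        out = bytes([0x90 | len(obj)])               # fixarray (up to 15 items)
        for item in obj:
            out += msgpack_encode(item)
        return out
    raise TypeError(f"unsupported type: {type(obj)!r}")

def send_event(tag, record):
    """Send one [tag, timestamp, record] event to the FireLens log router."""
    host = os.environ.get("FLUENT_HOST", "127.0.0.1")
    port = int(os.environ.get("FLUENT_PORT", "24224"))
    payload = msgpack_encode([tag, int(time.time()), record])
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(payload)
```

A real client should also add its own buffering and reconnection logic, which is one of the benefits the Fluent logger libraries provide.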

## Configure the Docker buffer limit
<a name="firelens-buffer-limit"></a>

When you create a task definition, you can specify the number of log lines that are buffered in memory by specifying the value in `log-driver-buffer-limit`. This controls the buffer between Docker and Fluent Bit. For more information, see [Fluentd logging driver](https://docs.docker.com/engine/logging/drivers/fluentd/) in the Docker documentation.

Use this option when there's high throughput, because Docker might run out of buffer memory and discard buffered messages to make room for new ones.

Consider the following when using this option:
+ This option is supported for tasks on EC2, and for tasks on Fargate that use platform version `1.4.0` or later.
+ The option is only valid when `logDriver` is set to `awsfirelens`.
+ The default buffer limit is `1048576` log lines.
+ The buffer limit must be greater than or equal to `0` and less than `536870912` log lines.
+ The maximum amount of memory used for this buffer is the product of the average size of each log line and the buffer limit. For example, if the application's log lines are on average `2` KiB, a buffer limit of `4096` lines would use at most `8` MiB. Make sure that the total memory allocated at the task level is greater than the memory allocated to all the containers plus the log driver memory buffer.
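The memory calculation in the last consideration can be sketched as:

```python
def buffer_memory_mib(avg_line_kib: float, buffer_limit_lines: int) -> float:
    """Worst-case Docker log buffer memory: average line size times line count."""
    return avg_line_kib * buffer_limit_lines / 1024

# 2 KiB average log lines with a 4096-line buffer limit
print(buffer_memory_mib(2, 4096))  # prints 8.0
```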

The following task definition shows how to configure `log-driver-buffer-limit`:

```
{
    "containerDefinitions": [
        {
            "name": "my_service_log_router",
            "image": "public.ecr.aws/aws-observability/aws-for-fluent-bit:3",
            "cpu": 0,
            "memoryReservation": 51,
            "essential": true,
            "firelensConfiguration": {
                "type": "fluentbit"
            }
        },
        {
            "essential": true,
            "image": "public.ecr.aws/docker/library/httpd:latest",
            "name": "app",
            "logConfiguration": {
                "logDriver": "awsfirelens",
                "options": {
                    "Name": "firehose",
                    "region": "us-west-2",
                    "delivery_stream": "my-stream",
                    "log-driver-buffer-limit": "52428800"
                }
            },
            "dependsOn": [
                {
                    "containerName": "my_service_log_router",
                    "condition": "START"
                }
            ],
            "memoryReservation": 100
        }
    ]
}
```

# AWS for Fluent Bit image repositories for Amazon ECS
<a name="firelens-using-fluentbit"></a>

AWS provides a Fluent Bit image with plugins for both CloudWatch Logs and Firehose. We recommend using Fluent Bit as your log router because it has a lower resource utilization rate than Fluentd. For more information, see [CloudWatch Logs for Fluent Bit](https://github.com/aws/amazon-cloudwatch-logs-for-fluent-bit) and [Amazon Kinesis Firehose for Fluent Bit](https://github.com/aws/amazon-kinesis-firehose-for-fluent-bit).

The **AWS for Fluent Bit** image is available both on the Amazon ECR Public Gallery and in an Amazon ECR repository for high availability.

## Amazon ECR Public Gallery
<a name="firelens-image-ecrpublic"></a>

The AWS for Fluent Bit image is available on the Amazon ECR Public Gallery. This is the recommended location to download the AWS for Fluent Bit image because it's a public repository and available to be used from all AWS Regions. For more information, see [aws-for-fluent-bit](https://gallery.ecr.aws/aws-observability/aws-for-fluent-bit) on the Amazon ECR Public Gallery.

### Linux
<a name="firelens-image-ecrpublic-linux"></a>

The AWS for Fluent Bit image in the Amazon ECR Public Gallery supports the Amazon Linux operating system with the `ARM64` or `x86-64` architecture.

You can pull the AWS for Fluent Bit image from the Amazon ECR Public Gallery by specifying the repository URL with the desired image tag. The available image tags can be found on the **Image tags** tab on the Amazon ECR Public Gallery.

The following shows the syntax to use for the Docker CLI.

```
docker pull public.ecr.aws/aws-observability/aws-for-fluent-bit:tag
```

For example, you can pull the latest image in the "3.x" family of AWS for Fluent Bit releases using this Docker CLI command.

```
docker pull public.ecr.aws/aws-observability/aws-for-fluent-bit:3
```

**Note**  
Unauthenticated pulls are allowed, but have a lower rate limit than authenticated pulls. To authenticate using your AWS account before pulling, use the following command.  

```
aws ecr-public get-login-password --region us-east-1 | docker login --username AWS --password-stdin public.ecr.aws
```

#### AWS for Fluent Bit 3.0.0
<a name="firelens-image-ecrpublic-linux-3.0.0"></a>

In addition to the existing AWS for Fluent Bit `2.x` versions, AWS for Fluent Bit supports a new major version `3.x`. The new major version upgrades the base image from Amazon Linux 2 to Amazon Linux 2023, and Fluent Bit from version `1.9.10` to `4.1.1`. For more information, see the [AWS for Fluent Bit repository](https://github.com/aws/aws-for-fluent-bit/blob/mainline/VERSIONS.md) on GitHub.

You can use multi-architecture tags for AWS for Fluent Bit `3.x` images. For example, the following command pulls the latest `3.x` image:

```
docker pull public.ecr.aws/aws-observability/aws-for-fluent-bit:3
```

### Windows
<a name="firelens-image-ecrpublic-windows"></a>

The AWS for Fluent Bit image in the Amazon ECR Public Gallery supports the `AMD64` architecture with the following operating systems:
+ Windows Server 2022 Full
+ Windows Server 2022 Core
+ Windows Server 2019 Full
+ Windows Server 2019 Core

Windows containers that are on AWS Fargate don't support FireLens.

You can pull the AWS for Fluent Bit image from the Amazon ECR Public Gallery by specifying the repository URL with the desired image tag. The available image tags can be found on the **Image tags** tab on the Amazon ECR Public Gallery.

The following shows the syntax to use for the Docker CLI.

```
docker pull public.ecr.aws/aws-observability/aws-for-fluent-bit:tag
```

For example, you can pull the newest stable AWS for Fluent Bit image using this Docker CLI command.

```
docker pull public.ecr.aws/aws-observability/aws-for-fluent-bit:windowsservercore-stable
```

**Note**  
Unauthenticated pulls are allowed, but have a lower rate limit than authenticated pulls. To authenticate using your AWS account before pulling, use the following command.  

```
aws ecr-public get-login-password --region us-east-1 | docker login --username AWS --password-stdin public.ecr.aws
```

## Amazon ECR
<a name="firelens-image-ecr"></a>

The AWS for Fluent Bit image is available on Amazon ECR for high availability. The following commands can be used to retrieve image URIs and verify image availability in a given AWS Region.

### Linux
<a name="firelens-image-ecr-linux"></a>

The latest stable AWS for Fluent Bit image URI can be retrieved using the following command.

```
aws ssm get-parameters \
      --names /aws/service/aws-for-fluent-bit/stable \
      --region us-east-1
```
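If you only want the image URI, for example in a script, you can add a JMESPath query to the same command. This is a sketch that assumes the standard `get-parameters` output shape:

```
aws ssm get-parameters \
      --names /aws/service/aws-for-fluent-bit/stable \
      --region us-east-1 \
      --query 'Parameters[0].Value' \
      --output text
```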

All versions of the AWS for Fluent Bit image can be listed using the following command to query the Systems Manager Parameter Store parameter.

```
aws ssm get-parameters-by-path \
      --path /aws/service/aws-for-fluent-bit \
      --region us-east-1
```

The latest stable AWS for Fluent Bit image can be referenced in a CloudFormation template by referencing the Systems Manager Parameter Store parameter name. The following is an example:

```
Parameters:
  FireLensImage:
    Description: Fluent Bit image for the FireLens Container
    Type: AWS::SSM::Parameter::Value<String>
    Default: /aws/service/aws-for-fluent-bit/stable
```

**Note**  
If the command fails or there is no output, the image isn't available in the AWS Region in which the command is called.

### Windows
<a name="firelens-image-ecr-windows"></a>

The latest stable AWS for Fluent Bit image URI can be retrieved using the following command.

```
aws ssm get-parameters \
      --names /aws/service/aws-for-fluent-bit/windowsservercore-stable \
      --region us-east-1
```

All versions of the AWS for Fluent Bit image can be listed using the following command to query the Systems Manager Parameter Store parameter.

```
aws ssm get-parameters-by-path \
      --path /aws/service/aws-for-fluent-bit/windowsservercore \
      --region us-east-1
```

The latest stable AWS for Fluent Bit image can be referenced in a CloudFormation template by referencing the Systems Manager Parameter Store parameter name. The following is an example:

```
Parameters:
  FireLensImage:
    Description: Fluent Bit image for the FireLens Container
    Type: AWS::SSM::Parameter::Value<String>
    Default: /aws/service/aws-for-fluent-bit/windowsservercore-stable
```

# Example Amazon ECS task definition: Route logs to FireLens
<a name="firelens-taskdef"></a>

To use custom log routing with FireLens, you must specify the following in your task definition:
+ A log router container that contains a FireLens configuration. We recommend that the container be marked as `essential`.
+ One or more application containers that contain a log configuration specifying the `awsfirelens` log driver.
+ A task IAM role Amazon Resource Name (ARN) that contains the permissions needed for the task to route the logs.

When creating a new task definition using the AWS Management Console, there is a FireLens integration section that makes it easy to add a log router container. For more information, see [Creating an Amazon ECS task definition using the console](create-task-definition.md).

Amazon ECS converts the log configuration and generates the Fluentd or Fluent Bit output configuration. The output configuration is mounted in the log routing container at `/fluent-bit/etc/fluent-bit.conf` for Fluent Bit and `/fluentd/etc/fluent.conf` for Fluentd.

**Important**  
FireLens listens on port `24224`. Therefore, to ensure that the FireLens log router isn't reachable outside of the task, you must not allow ingress traffic on port `24224` in the security group your task uses. For tasks that use the `awsvpc` network mode, this is the security group that's associated with the task. For tasks that use the `host` network mode, this is the security group that's associated with the Amazon EC2 instance hosting the task. For tasks that use the `bridge` network mode, don't create any port mappings that use port `24224`.

By default, Amazon ECS adds additional fields in your log entries that help identify the source of the logs. 
+ `ecs_cluster` – The name of the cluster that the task is part of.
+ `ecs_task_arn` – The full Amazon Resource Name (ARN) of the task that the container is part of.
+ `ecs_task_definition` – The task definition name and revision that the task is using.
+ `ec2_instance_id` – The Amazon EC2 instance ID that the container is hosted on. This field is only valid for tasks using the EC2 launch type.

You can set the `enable-ecs-log-metadata` option to `false` if you don't want the metadata.
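For example, the metadata fields can be turned off with the following snippet in the log router's container definition:

```
"firelensConfiguration": {
    "type": "fluentbit",
    "options": {
        "enable-ecs-log-metadata": "false"
    }
}
```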

The following task definition example defines a log router container that uses Fluent Bit to route its logs to CloudWatch Logs. It also defines an application container that uses a log configuration to route logs to Amazon Data Firehose and sets the memory that's used to buffer events to 2 MiB.

**Note**  
For more example task definitions, see [Amazon ECS FireLens examples](https://github.com/aws-samples/amazon-ecs-firelens-examples) on GitHub.

```
{
  "family": "firelens-example-firehose",
  "taskRoleArn": "arn:aws:iam::123456789012:role/ecs_task_iam_role",
  "containerDefinitions": [
    {
            "name": "log_router",
            "image": "public.ecr.aws/aws-observability/aws-for-fluent-bit:3",
            "cpu": 0,
            "memoryReservation": 51,
            "portMappings": [],
            "essential": true,
            "environment": [],
            "mountPoints": [],
            "volumesFrom": [],
            "logConfiguration": {
                "logDriver": "awslogs",
                "options": {
                    "awslogs-group": "/ecs/ecs-aws-firelens-sidecar-container",
                    "mode": "non-blocking",
                    "awslogs-create-group": "true",
                    "max-buffer-size": "25m",
                    "awslogs-region": "us-east-1",
                    "awslogs-stream-prefix": "firelens"
                },
                "secretOptions": []
            },
            "systemControls": [],
            "firelensConfiguration": {
                "type": "fluentbit"
            }
        },
    {
      "essential": true,
      "image": "public.ecr.aws/docker/library/httpd:latest",
      "name": "app",
      "logConfiguration": {
        "logDriver": "awsfirelens",
        "options": {
          "Name": "firehose",
          "region": "us-west-2",
          "delivery_stream": "my-stream",
          "log-driver-buffer-limit": "1048576"
        }
      },
      "memoryReservation": 100
    }
  ]
}
```

The key-value pairs specified as options in the `logConfiguration` object are used to generate the Fluentd or Fluent Bit output configuration. The following is a code example from a Fluent Bit output definition.

```
[OUTPUT]
    Name   firehose
    Match  app-firelens*
    region us-west-2
    delivery_stream my-stream
```

**Note**  
FireLens manages the `match` configuration. You do not specify the `match` configuration in your task definition. 

## Use a custom configuration file
<a name="firelens-taskdef-customconfig"></a>

You can specify a custom configuration file. The configuration file format is the native format for the log router that you're using. For more information, see [Fluentd Config File Syntax](https://docs.fluentd.org/configuration/config-file) and [YAML Configuration](https://docs.fluentbit.io/manual/administration/configuring-fluent-bit/yaml).

In your custom configuration file, for tasks using the `bridge` or `awsvpc` network mode, don't set a Fluentd or Fluent Bit forward input over TCP because FireLens adds it to the input configuration.

Your FireLens configuration must contain the following options to specify a custom configuration file:

`config-file-type`  
The source location of the custom configuration file. The available options are `s3` or `file`.  
Tasks that are hosted on AWS Fargate only support the `file` configuration file type. However, you can use configuration files hosted in Amazon S3 on AWS Fargate by using the AWS for Fluent Bit init container. For more information, see [Init process for Fluent Bit on ECS, multi-config support](https://github.com/aws/aws-for-fluent-bit/blob/mainline/use_cases/init-process-for-fluent-bit/README.md) on GitHub.

`config-file-value`  
The source for the custom configuration file. If the `s3` config file type is used, the config file value is the full ARN of the Amazon S3 bucket and file. If the `file` config file type is used, the config file value is the full path of the configuration file that exists either in the container image or on a volume that's mounted in the container.  
When using a custom configuration file, you must specify a different path than the one FireLens uses. Amazon ECS reserves the `/fluent-bit/etc/fluent-bit.conf` filepath for Fluent Bit and `/fluentd/etc/fluent.conf` for Fluentd.

The following example shows the syntax required when specifying a custom configuration.

**Important**  
To specify a custom configuration file that's hosted in Amazon S3, ensure you have created a task execution IAM role with the proper permissions. 

```
{
  "containerDefinitions": [
    {
      "essential": true,
      "image": "906394416424.dkr.ecr.us-west-2.amazonaws.com/aws-for-fluent-bit:3",
      "name": "log_router",
      "firelensConfiguration": {
        "type": "fluentbit",
        "options": {
          "config-file-type": "s3 | file",
          "config-file-value": "arn:aws:s3:::amzn-s3-demo-bucket/fluent.conf | filepath"
        }
      }
    }
  ]
}
```

**Note**  
Tasks hosted on AWS Fargate only support the `file` configuration file type. However, you can use configuration files hosted in Amazon S3 on AWS Fargate by using the AWS for Fluent Bit init container. For more information, see [Init process for Fluent Bit on ECS, multi-config support](https://github.com/aws/aws-for-fluent-bit/blob/mainline/use_cases/init-process-for-fluent-bit/README.md) on GitHub.

# Using non-AWS container images in Amazon ECS
<a name="private-auth"></a>

Use private registry authentication to store your credentials in AWS Secrets Manager, and then reference them in your task definition. This provides a way to reference container images that exist in private registries outside of AWS (such as Docker Hub, Quay.io, or your own registry) that require authentication. This feature is supported by tasks hosted on Fargate, Amazon EC2 instances, and external instances using Amazon ECS Anywhere.

**Important**  
If your task definition references an image that's stored in Amazon ECR, this topic doesn't apply. For more information, see [Using Amazon ECR Images with Amazon ECS](https://docs.aws.amazon.com/AmazonECR/latest/userguide/ECR_on_ECS.html) in the *Amazon Elastic Container Registry User Guide*.

For tasks hosted on Amazon EC2 instances, this feature requires version `1.19.0` or later of the container agent. However, we recommend using the latest container agent version. For information about how to check your agent version and update to the latest version, see [Updating the Amazon ECS container agent](ecs-agent-update.md).

For tasks hosted on Fargate, this feature requires platform version `1.2.0` or later. For information, see [Fargate platform versions for Amazon ECS](platform-fargate.md).

Within your container definition, specify the `repositoryCredentials` object with the details of the secret that you created. The referenced secret can be from a different AWS Region or a different account than the task using it.

**Note**  
When using the Amazon ECS API, AWS CLI, or AWS SDKs, you can specify either the full ARN or the name of the secret if it exists in the same AWS Region as the task that you're launching. If the secret exists in a different account, you must specify the full ARN of the secret. When using the AWS Management Console, you must always specify the full ARN of the secret.

The following is a snippet of a task definition that shows the required parameters:

Substitute the following parameters:
+ *private-repo* with the private repository host name 
+ *private-image* with the image name
+ *arn:aws:secretsmanager:region:aws\_account\_id:secret:secret\_name* with the secret Amazon Resource Name (ARN)

```
"containerDefinitions": [
    {
        "image": "private-repo/private-image",
        "repositoryCredentials": {
            "credentialsParameter": "arn:aws:secretsmanager:region:aws_account_id:secret:secret_name"
        }
    }
]
```

**Note**  
Another method of enabling private registry authentication uses Amazon ECS container agent environment variables to authenticate to private registries. This method is only supported for tasks hosted on Amazon EC2 instances. For more information, see [Configuring Amazon ECS container instances for private Docker images](private-auth-container-instances.md).

**To use private registry authentication**

1. The task definition must have a task execution role. This allows the container agent to pull the container image. For more information, see [Amazon ECS task execution IAM role](task_execution_IAM_role.md).

   To provide access to the secrets that contain your private registry credentials, add the following permissions as an inline policy to the task execution role. For more information, see [Adding and Removing IAM Policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-attach-detach.html).
   + `secretsmanager:GetSecretValue`—Required to retrieve the private registry credentials from Secrets Manager.
   + `kms:Decrypt`—Required only if your secret uses a custom KMS key and not the default key. The Amazon Resource Name (ARN) for your custom key must be added as a resource.

   The following is an example inline policy that adds the permissions.

------
#### [ JSON ]

****  

   ```
   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Effect": "Allow",
               "Action": [
                   "kms:Decrypt",
                   "secretsmanager:GetSecretValue"
               ],
               "Resource": [
                   "arn:aws:secretsmanager:us-east-1:111122223333:secret:secret_name",
                   "arn:aws:kms:us-east-1:111122223333:key/key_id"
               ]
           }
       ]
   }
   ```

------

1. Use AWS Secrets Manager to create a secret for your private registry credentials. For information about how to create a secret, see [Create an AWS Secrets Manager secret](https://docs.aws.amazon.com/secretsmanager/latest/userguide/create_secret.html) in the *AWS Secrets Manager User Guide*.

   Enter your private registry credentials using the following format:

   ```
   {
     "username" : "privateRegistryUsername",
     "password" : "privateRegistryPassword"
   }
   ```

1. Register a task definition. For more information, see [Creating an Amazon ECS task definition using the console](create-task-definition.md).
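For step 2, the secret can also be created from the AWS CLI; the secret name in this sketch is an example:

```
aws secretsmanager create-secret \
      --name private-registry-credentials \
      --secret-string '{"username":"privateRegistryUsername","password":"privateRegistryPassword"}'
```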

# Restart individual containers in Amazon ECS tasks with container restart policies
<a name="container-restart-policy"></a>

You can enable a restart policy for each essential and non-essential container defined in your task definition, to overcome transient failures faster and maintain task availability. When you enable a restart policy for a container, Amazon ECS can restart the container if it exits, without needing to replace the task.

Restart policies aren't enabled for containers by default. When you enable a restart policy for a container, you can specify exit codes that the container won't be restarted on. These can be exit codes that indicate success, like exit code `0`, that don't require a restart. You can also specify how long a container must run successfully before a restart can be attempted. For more information about these parameters, see [Restart policy](task_definition_parameters.md#container_definition_restart_policy). For an example task definition that specifies these values, see [Specifying a container restart policy in an Amazon ECS task definition](container-restart-policy-example.md).

You can use the Amazon ECS task metadata endpoint or CloudWatch Container Insights to monitor the number of times a container has restarted. For more information about the task metadata endpoint, see [Amazon ECS task metadata endpoint version 4](task-metadata-endpoint-v4.md) and [Amazon ECS task metadata endpoint version 4 for tasks on Fargate](task-metadata-endpoint-v4-fargate.md). For more information about Container Insights metrics for Amazon ECS, see [Amazon ECS Container Insights metrics](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Container-Insights-metrics-ECS.html) in the *Amazon CloudWatch User Guide*.

Container restart policies are supported by tasks hosted on Fargate, Amazon EC2 instances, and external instances using Amazon ECS Anywhere.

## Considerations
<a name="container-restart-policy-considerations"></a>

Consider the following before enabling a restart policy for your container:
+ Restart policies aren't supported for Windows containers on Fargate.
+ For tasks hosted on Amazon EC2 instances, this feature requires version `1.86.0` or later of the container agent. However, we recommend using the latest container agent version. For information about how to check your agent version and update to the latest version, see [Updating the Amazon ECS container agent](ecs-agent-update.md).
+ If you're using EC2 with the `bridge` network mode, the `FLUENT_HOST` environment variable in your application container can become inaccurate after a restart of the FireLens log router container (the container with the `firelensConfiguration` object in its container definition). This is because `FLUENT_HOST` is a dynamic IP address and can change after a restart. Logging directly from the application container to the `FLUENT_HOST` IP address can start failing after the address changes. For more information about `FLUENT_HOST`, see [Configuring Amazon ECS logs for high throughput](firelens-docker-buffer-limit.md).
+ The Amazon ECS agent handles the container restart policies. If for some unexpected reason the Amazon ECS agent fails or is no longer running, the container won't be restarted.
+ The restart attempt period defined in your policy determines how long (in seconds) the container must run before Amazon ECS can restart it.
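The parameters described above combine into a simple decision: a restart is attempted only when the policy is enabled, the exit code isn't in the ignored list, and the container ran long enough. The following Python sketch models that logic; it's illustrative, not the Amazon ECS agent's implementation, and it assumes the API default of 300 seconds for `restartAttemptPeriod`.

```python
def should_restart(exit_code, uptime_seconds, policy):
    """Decide whether a container restart is attempted, mirroring the
    restartPolicy semantics described above. `policy` is a dict shaped
    like the task definition's restartPolicy object. Illustrative
    sketch only, not the ECS agent's implementation."""
    if not policy.get("enabled", False):
        return False
    # Exit codes listed in ignoredExitCodes never trigger a restart.
    if exit_code in policy.get("ignoredExitCodes", []):
        return False
    # The container must have run for at least restartAttemptPeriod
    # seconds (300 by default, assumed here) before a restart occurs.
    return uptime_seconds >= policy.get("restartAttemptPeriod", 300)
```

For example, with the policy used in the sample task definition below (`ignoredExitCodes` of `[0]`, `restartAttemptPeriod` of 180), a container that exits with code `1` after 200 seconds is restarted, but one that exits with code `0` is not.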

# Specifying a container restart policy in an Amazon ECS task definition
<a name="container-restart-policy-example"></a>

To specify a restart policy for a container in a task definition, within the container definition, specify the `restartPolicy` object. For more information about the `restartPolicy` object, see [Restart policy](task_definition_parameters.md#container_definition_restart_policy).

The following is a task definition for Linux containers on Fargate that sets up a web server. The container definition includes the `restartPolicy` object, with `enabled` set to `true` to enable a restart policy for the container. The container must run for 180 seconds before it can be restarted, and it will not be restarted if it exits with exit code `0`, which indicates success.

```
{
  "containerDefinitions": [
    {
      "command": [
        "/bin/sh -c \"echo '<html> <head> <title>Amazon ECS Sample App</title> <style>body {margin-top: 40px; background-color: #333;} </style> </head><body> <div style=color:white;text-align:center> <h1>Amazon ECS Sample App</h1> <h2>Congratulations!</h2> <p>Your application is now running on a container in Amazon ECS.</p> </div></body></html>' >  /usr/local/apache2/htdocs/index.html && httpd-foreground\""
      ],
      "entryPoint": ["sh", "-c"],
      "essential": true,
      "image": "public.ecr.aws/docker/library/httpd:2.4",
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "/ecs/fargate-task-definition",
          "awslogs-region": "us-east-1",
          "awslogs-stream-prefix": "ecs"
        }
      },
      "name": "sample-fargate-app",
      "portMappings": [
        {
          "containerPort": 80,
          "hostPort": 80,
          "protocol": "tcp"
        }
      ],
      "restartPolicy": {
        "enabled": true,
        "ignoredExitCodes": [0],
        "restartAttemptPeriod": 180
      }
    }
  ],
  "cpu": "256",
  "executionRoleArn": "arn:aws:iam::012345678910:role/ecsTaskExecutionRole",
  "family": "fargate-task-definition",
  "memory": "512",
  "networkMode": "awsvpc",
  "runtimePlatform": {
    "operatingSystemFamily": "LINUX"
  },
  "requiresCompatibilities": ["FARGATE"]
}
```

After you have registered a task definition with the `restartPolicy` object in a container definition, you can run a task or create a service with that task definition. For more information, see [Running an application as an Amazon ECS task](standalone-task-create.md) and [Creating an Amazon ECS rolling update deployment](create-service-console-v2.md).

# Pass sensitive data to an Amazon ECS container
<a name="specifying-sensitive-data"></a>

You can safely pass sensitive data, such as credentials to a database, into your container. 

Secrets, such as API keys and database credentials, are frequently used by applications to gain access to other systems. They often consist of a username and password, a certificate, or an API key. Access to these secrets should be restricted to specific IAM principals using IAM, and the secrets should be injected into containers at runtime.

Secrets can be seamlessly injected into containers from AWS Secrets Manager and AWS Systems Manager Parameter Store. These secrets can be referenced in your task in any of the following ways.

1. They're referenced as environment variables that use the `secrets` container definition parameter.

1. They're referenced as `secretOptions` if your logging platform requires authentication. For more information, see [logging configuration options](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_LogConfiguration.html#API_LogConfiguration_Contents).

1. They're referenced as secrets pulled by images that use the `repositoryCredentials` container definition parameter if the registry where the container image is pulled from requires authentication. Use this method when pulling images from private registries. For more information, see [Private registry authentication for tasks](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/private-auth.html).

We recommend that you do the following when setting up secrets management.

## Use AWS Secrets Manager or AWS Systems Manager Parameter Store for storing secret materials
<a name="security-secrets-management-recommendations-storing-secret-materials"></a>

You should securely store API keys, database credentials, and other secret materials in Secrets Manager or as an encrypted parameter in Systems Manager Parameter Store. These services are similar because they're both managed key-value stores that use AWS KMS to encrypt sensitive data. Secrets Manager, however, also includes the ability to automatically rotate secrets, generate random secrets, and share secrets across accounts. To utilize these features, use Secrets Manager. Otherwise, use encrypted parameters in Systems Manager Parameter Store.

**Important**  
If your secret changes, you must force a new deployment or launch a new task to retrieve the latest secret value. For more information, see the following topics:  
Tasks - Stop the task, and then start it. For more information, see [Stopping an Amazon ECS task](standalone-task-stop.md) and [Running an application as an Amazon ECS task](standalone-task-create.md).
Service - Update the service and use the force new deployment option. For more information, see [Updating an Amazon ECS service](update-service-console-v2.md).

## Retrieve data from an encrypted Amazon S3 bucket
<a name="security-secrets-management-recommendations-encrypted-s3-buckets"></a>

You should store secrets in an encrypted Amazon S3 bucket and use task roles to restrict access to those secrets. This prevents the values of environment variables from inadvertently leaking in logs and from being revealed by `docker inspect`. When you do this, your application must be written to read the secret from the Amazon S3 bucket. For instructions, see [Setting default server-side encryption behavior for Amazon S3 buckets](https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucket-encryption.html).

## Mount the secret to a volume using a sidecar container
<a name="security-secrets-management-recommendations-mount-secret-volumes"></a>

Because there's an elevated risk of data leakage with environment variables, you should run a sidecar container that reads your secrets from AWS Secrets Manager and writes them to a shared volume. This container can run and exit before the application container by using [Amazon ECS container ordering](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_ContainerDependency.html). When you do this, the application container subsequently mounts the volume where the secret was written. Like the Amazon S3 bucket method, your application must be written to read the secret from the shared volume. Because the volume is scoped to the task, the volume is automatically deleted after the task stops. For an example, see the [task-def.json](https://github.com/aws-samples/aws-secret-sidecar-injector/blob/master/ecs-task-def/task-def.json) project.
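A minimal sidecar along these lines fetches the secret and writes it to the shared volume with restrictive file permissions. The following Python sketch is illustrative and is not taken from the aws-samples project; the `write_secret_to_volume` helper and its injectable `fetch` callable are assumptions added so the example can run without AWS credentials.

```python
import os

def write_secret_to_volume(secret_id, path, fetch=None):
    """Sidecar sketch: fetch a secret and write it to a shared volume
    so the application container can read it at startup. `fetch`
    defaults to a Secrets Manager call; the function name and layout
    are illustrative assumptions."""
    if fetch is None:
        import boto3  # requires IAM permission secretsmanager:GetSecretValue
        client = boto3.client("secretsmanager")
        fetch = lambda sid: client.get_secret_value(SecretId=sid)["SecretString"]
    secret = fetch(secret_id)
    # Create the file with 0600 permissions so the secret is never
    # world-readable on the shared volume.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    with os.fdopen(fd, "w") as f:
        f.write(secret)
```

The sidecar would call this once and exit; container ordering then starts the application container after the secret is in place.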

On Amazon EC2, the volume that the secret is written to can be encrypted with an AWS KMS customer managed key. On AWS Fargate, volume storage is automatically encrypted using a service managed key. 

# Pass an individual environment variable to an Amazon ECS container
<a name="taskdef-envfiles"></a>

**Important**  
We recommend storing your sensitive data in either AWS Secrets Manager secrets or AWS Systems Manager Parameter Store parameters. For more information, see [Pass sensitive data to an Amazon ECS container](specifying-sensitive-data.md).  
Environment variables specified in the task definition are readable by all users and roles that are allowed the `DescribeTaskDefinition` action for the task definition.

You can pass environment variables to your containers in the following ways:
+ Individually, using the `environment` container definition parameter. This maps to the `--env` option of [docker run](https://docs.docker.com/reference/cli/docker/container/run/).
+ In bulk, using the `environmentFiles` container definition parameter to list one or more files that contain the environment variables. The file must be hosted in Amazon S3. This maps to the `--env-file` option of [docker run](https://docs.docker.com/reference/cli/docker/container/run/).

The following is a snippet of a task definition showing how to specify individual environment variables.

```
{
    "family": "",
    "containerDefinitions": [
        {
            "name": "",
            "image": "",
            ...
            "environment": [
                {
                    "name": "variable",
                    "value": "value"
                }
            ],
            ...
        }
    ],
    ...
}
```

# Pass environment variables to an Amazon ECS container
<a name="use-environment-file"></a>

**Important**  
We recommend storing your sensitive data in either AWS Secrets Manager secrets or AWS Systems Manager Parameter Store parameters. For more information, see [Pass sensitive data to an Amazon ECS container](specifying-sensitive-data.md).  
Environment variable files are objects in Amazon S3 and all Amazon S3 security considerations apply.   
You can't use the `environmentFiles` parameter on Windows containers and Windows containers on Fargate.

You can create an environment variable file and store it in Amazon S3 to pass environment variables to your container.

By specifying environment variables in a file, you can bulk inject environment variables. Within your container definition, specify the `environmentFiles` object with a list of the Amazon S3 objects that contain your environment variable files.

Amazon ECS doesn't enforce a size limit on the environment variables, but a large environment variable file might fill up the disk space. Each task that uses an environment variable file causes a copy of the file to be downloaded to disk. Amazon ECS removes the file as part of the task cleanup.

For information about the supported environment variables, see [Advanced container definition parameters- Environment](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html#container_definition_environment).

Consider the following when specifying an environment variable file in a container definition.
+ For Amazon ECS tasks on Amazon EC2, your container instances require that the container agent is version `1.39.0` or later to use this feature. For information about how to check your agent version and update to the latest version, see [Updating the Amazon ECS container agent](ecs-agent-update.md).
+ For Amazon ECS tasks on AWS Fargate, your tasks must use platform version `1.4.0` or later (Linux) to use this feature. For more information, see [Fargate platform versions for Amazon ECS](platform-fargate.md).

  Verify that the variable is supported for the operating system platform. For more information, see [Container definitions](task_definition_parameters.md#container_definitions) and [Other task definition parameters](task_definition_parameters.md#other_task_definition_params).
+ The file must use the `.env` file extension and UTF-8 encoding.
+ The task execution role is required to use this feature with the additional permissions for Amazon S3. This allows the container agent to pull the environment variable file from Amazon S3. For more information, see [Amazon ECS task execution IAM role](task_execution_IAM_role.md).
+ There is a limit of 10 files per task definition.
+ Each line in an environment file must contain an environment variable in `VARIABLE=VALUE` format. Spaces or quotation marks **are** included as part of the values for Amazon ECS files. Lines beginning with `#` are treated as comments and are ignored. For more information about the environment variable file syntax, see [Set environment variables (-e, --env, --env-file)](https://docs.docker.com/reference/cli/docker/container/run/#env) in the Docker documentation.

  The following is the appropriate syntax.

  ```
  #This is a comment and will be ignored
  VARIABLE=VALUE
  ENVIRONMENT=PRODUCTION
  ```
+ If there are environment variables specified using the `environment` parameter in a container definition, they take precedence over the variables contained within an environment file.
+ If multiple environment files are specified and they contain the same variable, they're processed in order of entry. This means that the first value of the variable is used and subsequent values of duplicate variables are ignored. We recommend that you use unique variable names.
+ If an environment file is specified as a container override, it's used, and any other environment files specified in the container definition are ignored.
+ The following rules apply to Fargate:
  + The file is handled similarly to a native Docker env-file.
  + Blank environment variables stored in Amazon S3 don't appear in the container.
  + There is no support for shell escape handling.
  + The container entry point interprets the `VARIABLE` values.
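The file syntax and precedence rules above can be modeled in a few lines. The following Python sketch illustrates the documented behavior, not the container agent's actual parser; the helper names are hypothetical.

```python
def parse_env_file(text):
    """Parse environment-file lines using the rules above: each line is
    VARIABLE=VALUE, spaces and quotation marks are kept as part of the
    value, and lines starting with # are comments."""
    variables = {}
    for line in text.splitlines():
        if not line or line.startswith("#") or "=" not in line:
            continue
        name, _, value = line.partition("=")
        variables[name] = value
    return variables

def merge_env_files(files, environment=None):
    """Apply the documented precedence: when multiple files define the
    same variable, the first occurrence wins, and values from the
    `environment` container definition parameter override file values."""
    merged = {}
    for text in files:
        for name, value in parse_env_file(text).items():
            merged.setdefault(name, value)  # first file's value wins
    merged.update(environment or {})        # `environment` wins overall
    return merged
```

For example, merging two files that both define `A` keeps the value from the first file, unless `A` is also set through the `environment` parameter.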

## Example
<a name="environment-file-example"></a>

The following is a snippet of a task definition showing how to specify an environment variable file.

```
{
    "family": "",
    "containerDefinitions": [
        {
            "name": "",
            "image": "",
            ...
            "environmentFiles": [
                {
                    "value": "arn:aws:s3:::amzn-s3-demo-bucket/envfile_object_name.env",
                    "type": "s3"
                }
            ],
            ...
        }
    ],
    ...
}
```

# Pass Secrets Manager secrets programmatically in Amazon ECS
<a name="secrets-app-secrets-manager"></a>

Instead of hardcoding sensitive information in plain text in your application, you can use Secrets Manager to store the sensitive data.

We recommend this method of retrieving sensitive data because if the Secrets Manager secret is subsequently updated, the application automatically retrieves the latest version of the secret.

Create a secret in Secrets Manager. After you create a Secrets Manager secret, update your application code to retrieve the secret.

Review the following considerations before securing sensitive data in Secrets Manager.
+ Only secrets that store text data, which are secrets created with the `SecretString` parameter of the [CreateSecret](https://docs.aws.amazon.com/secretsmanager/latest/apireference/API_CreateSecret.html) API, are supported. Secrets that store binary data, which are secrets created with the `SecretBinary` parameter of the [CreateSecret](https://docs.aws.amazon.com/secretsmanager/latest/apireference/API_CreateSecret.html) API, are not supported.
+ Use interface VPC endpoints to enhance security controls. You must create the interface VPC endpoints for Secrets Manager. For information about the VPC endpoint, see [Create VPC endpoints](https://docs.aws.amazon.com/secretsmanager/latest/userguide/setup-create-vpc.html) in the *AWS Secrets Manager User Guide*.
+ The VPC your task uses must use DNS resolution.
+ Your task definition must use a task role with the additional permissions for Secrets Manager. For more information, see [Amazon ECS task IAM role](task-iam-roles.md).

## Create the Secrets Manager secret
<a name="secrets-app-secrets-manager-create-secret"></a>

You can use the Secrets Manager console to create a secret for your sensitive data. For information about how to create secrets, see [Create an AWS Secrets Manager secret](https://docs.aws.amazon.com/secretsmanager/latest/userguide/create_secret.html) in the *AWS Secrets Manager User Guide*.

## Update your application to programmatically retrieve Secrets Manager secrets
<a name="secrets-app-secrets-manager-update-app"></a>

You can retrieve secrets with a call to the Secrets Manager APIs directly from your application. For information, see [Retrieve secrets from AWS Secrets Manager](https://docs.aws.amazon.com/secretsmanager/latest/userguide/retrieving-secrets.html) in the *AWS Secrets Manager User Guide*.

To retrieve the sensitive data stored in the AWS Secrets Manager, see [Code examples for AWS Secrets Manager using AWS SDKs](https://docs.aws.amazon.com/code-library/latest/ug/secrets-manager_code_examples.html) in the *AWS SDK Code Examples Code Library*.
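As a concrete sketch of the direct-API approach, the following Python function retrieves a `SecretString` secret with the AWS SDK (boto3) and parses it as JSON. The injectable `client` parameter is an assumption added here so the function can be exercised without AWS credentials.

```python
import json

def get_secret_json(secret_id, client=None):
    """Retrieve a SecretString secret and parse it as JSON. `client`
    defaults to a boto3 Secrets Manager client; passing one in makes
    the sketch testable without AWS access."""
    if client is None:
        import boto3  # requires IAM permission secretsmanager:GetSecretValue
        client = boto3.client("secretsmanager")
    response = client.get_secret_value(SecretId=secret_id)
    # Binary secrets (SecretBinary) are not handled by this sketch.
    return json.loads(response["SecretString"])
```

Because the application calls Secrets Manager on demand, it picks up the latest secret version without a new deployment.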

# Pass Systems Manager Parameter Store secrets programmatically in Amazon ECS
<a name="secrets-app-ssm-paramstore"></a>

Systems Manager Parameter Store provides secure storage and management of secrets. You can store data such as passwords, database strings, EC2 instance IDs and AMI IDs, and license codes as parameter values, instead of hardcoding this information in your application. You can store values as plain text or encrypted data.

We recommend this method of retrieving sensitive data because if the Systems Manager Parameter Store parameter is subsequently updated, the application automatically retrieves the latest version.

Review the following considerations before securing sensitive data in Systems Manager Parameter Store.
+ Only secrets that store text data are supported. Secrets that store binary data are not supported.
+ Use interface VPC endpoints to enhance security controls.
+ The VPC your task uses must use DNS resolution.
+ For tasks that use EC2, you must set the Amazon ECS agent configuration variable `ECS_ENABLE_AWSLOGS_EXECUTIONROLE_OVERRIDE=true` to use this feature. You can add it to the `/etc/ecs/ecs.config` file during container instance creation, or you can add it to an existing instance and then restart the ECS agent. For more information, see [Amazon ECS container agent configuration](ecs-agent-config.md).
+ Your task definition must use a task role with the additional permissions for Systems Manager Parameter Store. For more information, see [Amazon ECS task IAM role](task-iam-roles.md).

## Create the parameter
<a name="secrets-app-ssm-paramstore-create-secret"></a>

You can use the Systems Manager console to create a Systems Manager Parameter Store parameter for your sensitive data. For more information, see [Create a Systems Manager parameter (console)](https://docs.aws.amazon.com/systems-manager/latest/userguide/parameter-create-console.html) or [Create a Systems Manager parameter (AWS CLI)](https://docs.aws.amazon.com/systems-manager/latest/userguide/param-create-cli.html) in the *AWS Systems Manager User Guide*.

## Update your application to programmatically retrieve Systems Manager Parameter Store secrets
<a name="secrets-app-ssm-paramstore-update-app"></a>

To retrieve the sensitive data stored in the Systems Manager Parameter Store parameter, see [Code examples for Systems Manager using AWS SDKs](https://docs.aws.amazon.com/code-library/latest/ug/ssm_code_examples.html) in the *AWS SDK Code Examples Code Library*.
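The following Python sketch shows the equivalent retrieval for a Parameter Store parameter, using boto3's `get_parameter` call with `WithDecryption` so that `SecureString` values are decrypted. The injectable `client` parameter is an assumption added for testability.

```python
def get_parameter_value(name, client=None):
    """Retrieve a Systems Manager Parameter Store value. SecureString
    parameters are decrypted server-side via WithDecryption. `client`
    defaults to a boto3 SSM client; passing one in lets the sketch
    run without AWS credentials."""
    if client is None:
        import boto3  # requires IAM permission ssm:GetParameter
        client = boto3.client("ssm")
    response = client.get_parameter(Name=name, WithDecryption=True)
    return response["Parameter"]["Value"]
```

As with the Secrets Manager approach, calling the API at runtime means the application always reads the current parameter value.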

# Pass Secrets Manager secrets through Amazon ECS environment variables
<a name="secrets-envvar-secrets-manager"></a>

When you inject a secret as an environment variable, you can specify the full contents of a secret, a specific JSON key within a secret, or a specific version of a secret. This helps you control the sensitive data exposed to your container. For more information about secret versioning, see [What's in a Secrets Manager secret?](https://docs.aws.amazon.com/secretsmanager/latest/userguide/whats-in-a-secret.html#term_version) in the *AWS Secrets Manager User Guide*.

The following should be considered when using an environment variable to inject a Secrets Manager secret into a container.
+ Sensitive data is injected into your container when the container is initially started. If the secret is subsequently updated or rotated, the container doesn't receive the updated value automatically. You must either launch a new task, or, if your task is part of a service, update the service and use the **Force new deployment** option to force the service to launch fresh tasks.
+ Applications that run in the container, as well as container logs and debugging tools, have access to the environment variables.
+ For Amazon ECS tasks on AWS Fargate, consider the following:
  + To inject the full content of a secret as an environment variable or in a log configuration, you must use platform version `1.3.0` or later. For information, see [Fargate platform versions for Amazon ECS](platform-fargate.md).
  + To inject a specific JSON key or version of a secret as an environment variable or in a log configuration, you must use platform version `1.4.0` or later (Linux) or `1.0.0` (Windows). For information, see [Fargate platform versions for Amazon ECS](platform-fargate.md).
+ For Amazon ECS tasks on EC2, consider the following:
  + To inject a secret using a specific JSON key or version of a secret, your container instance must have version `1.37.0` or later of the container agent. However, we recommend using the latest container agent version. For information about checking your agent version and updating to the latest version, see [Updating the Amazon ECS container agent](ecs-agent-update.md).

    To inject the full contents of a secret as an environment variable or to inject a secret in a log configuration, your container instance must have version `1.22.0` or later of the container agent.
+ Use interface VPC endpoints to enhance security controls and connect to Secrets Manager through a private subnet. You must create the interface VPC endpoints for Secrets Manager. For information about the VPC endpoint, see [Create VPC endpoints](https://docs.aws.amazon.com/secretsmanager/latest/userguide/setup-create-vpc.html) in the *AWS Secrets Manager User Guide*. For more information about using Secrets Manager and Amazon VPC, see [How to connect to Secrets Manager service within an Amazon VPC](https://aws.amazon.com/blogs//security/how-to-connect-to-aws-secrets-manager-service-within-a-virtual-private-cloud/).
+ For Windows tasks that are configured to use the `awslogs` logging driver, you must also set the `ECS_ENABLE_AWSLOGS_EXECUTIONROLE_OVERRIDE` environment variable on your container instance. Use the following syntax:

  ```
  <powershell>
  [Environment]::SetEnvironmentVariable("ECS_ENABLE_AWSLOGS_EXECUTIONROLE_OVERRIDE", $TRUE, "Machine")
  Initialize-ECSAgent -Cluster <cluster name> -EnableTaskIAMRole -LoggingDrivers '["json-file","awslogs"]'
  </powershell>
  ```
+ Your task definition must use a task execution role with the additional permissions for Secrets Manager. For more information, see [Amazon ECS task execution IAM role](task_execution_IAM_role.md).

## Create the AWS Secrets Manager secret
<a name="secrets-envvar-secrets-manager-create-secret"></a>

You can use the Secrets Manager console to create a secret for your sensitive data. For more information, see [Create an AWS Secrets Manager secret](https://docs.aws.amazon.com/secretsmanager/latest/userguide/create_secret.html) in the *AWS Secrets Manager User Guide*.

## Add the environment variable to the container definition
<a name="secrets-envvar-secrets-manager-update-container-definition"></a>

Within your container definition, you can specify the following:
+ The `secrets` object containing the name of the environment variable to set in the container
+ The Amazon Resource Name (ARN) of the Secrets Manager secret
+ Additional parameters that contain the sensitive data to present to the container

The following example shows the full syntax that must be specified for the Secrets Manager secret.

```
arn:aws:secretsmanager:region:aws_account_id:secret:secret-name:json-key:version-stage:version-id
```

The following section describes the additional parameters. These parameters are optional, but if you do not use them, you must include the colons `:` to use the default values. Examples are provided below for more context.

`json-key`  
Specifies the name of the key in a key-value pair with the value that you want to set as the environment variable value. Only values in JSON format are supported. If you do not specify a JSON key, then the full contents of the secret are used.

`version-stage`  
Specifies the staging label of the version of a secret that you want to use. If a version staging label is specified, you cannot specify a version ID. If no version stage is specified, the default behavior is to retrieve the secret with the `AWSCURRENT` staging label.  
Staging labels are used to keep track of different versions of a secret when they are either updated or rotated. Each version of a secret has one or more staging labels and an ID.

`version-id`  
Specifies the unique identifier of the version of a secret that you want to use. If a version ID is specified, you cannot specify a version staging label. If no version ID is specified, the default behavior is to retrieve the secret with the `AWSCURRENT` staging label.  
Version IDs are used to keep track of different versions of a secret when they are either updated or rotated. Each version of a secret has an ID. For more information, see [Key Terms and Concepts for AWS Secrets Manager](https://docs.aws.amazon.com/secretsmanager/latest/userguide/terms-concepts.html#term_secret) in the *AWS Secrets Manager User Guide*.
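The syntax above can be assembled with a small helper. The following Python function is hypothetical and only illustrates the colon placement rules: empty fields keep their colons so the defaults apply, and a version stage and a version ID can't be combined.

```python
def secret_value_from(secret_arn, json_key="", version_stage="", version_id=""):
    """Build the valueFrom string for a container definition secret,
    following the syntax described above. Empty optional fields keep
    their colons so the default values are used. Hypothetical helper
    for illustration only."""
    if version_stage and version_id:
        raise ValueError("specify a version stage or a version ID, not both")
    return f"{secret_arn}:{json_key}:{version_stage}:{version_id}"
```

For example, passing `json_key="username1"` produces the `...:username1::` form shown in the examples that follow.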

### Example container definitions
<a name="secrets-examples"></a>

The following examples show ways in which you can reference Secrets Manager secrets in your container definitions.

**Example referencing a full secret**  
The following is a snippet of a task definition showing the format when referencing the full text of a Secrets Manager secret.  

```
{
  "containerDefinitions": [{
    "secrets": [{
      "name": "environment_variable_name",
      "valueFrom": "arn:aws:secretsmanager:region:aws_account_id:secret:secret_name-AbCdEf"
    }]
  }]
}
```
To access the value of this secret from within the container, reference `$environment_variable_name`.

**Example referencing full secrets**  
The following is a snippet of a task definition showing the format when referencing the full text of multiple Secrets Manager secrets.  

```
{
  "containerDefinitions": [{
     "secrets": [
      {
        "name": "environment_variable_name1",
         "valueFrom": "arn:aws:secretsmanager:region:aws_account_id:secret:secret_name-AbCdEf"
      },
      {
        "name": "environment_variable_name2",
         "valueFrom": "arn:aws:secretsmanager:region:aws_account_id:secret:secret_name-abcdef"
      },
      {
        "name": "environment_variable_name3",
        "valueFrom": "arn:aws:secretsmanager:region:aws_account_id:secret:secret_name-ABCDEF"
      }
    ]
  }]
}
```
To access the values of these secrets from within the container, reference `$environment_variable_name1`, `$environment_variable_name2`, and `$environment_variable_name3`.

**Example referencing a specific key within a secret**  
The following shows an example output from a [get-secret-value](https://docs.aws.amazon.com/cli/latest/reference/secretsmanager/get-secret-value.html) command that displays the contents of a secret along with the version staging label and version ID associated with it.  

```
{
    "ARN": "arn:aws:secretsmanager:region:aws_account_id:secret:appauthexample-AbCdEf",
    "Name": "appauthexample",
    "VersionId": "871d9eca-18aa-46a9-8785-981ddEXAMPLE",
    "SecretString": "{\"username1\":\"password1\",\"username2\":\"password2\",\"username3\":\"password3\"}",
    "VersionStages": [
        "AWSCURRENT"
    ],
    "CreatedDate": 1581968848.921
}
```
Reference a specific key from the previous output in a container definition by specifying the key name at the end of the ARN.  

```
{
  "containerDefinitions": [{
    "secrets": [{
      "name": "environment_variable_name",
      "valueFrom": "arn:aws:secretsmanager:region:aws_account_id:secret:appauthexample-AbCdEf:username1::"
    }]
  }]
}
```

**Example referencing a specific secret version**  
The following shows an example output from a [describe-secret](https://docs.aws.amazon.com/cli/latest/reference/secretsmanager/describe-secret.html) command that displays the unencrypted contents of a secret along with the metadata for all versions of the secret.  

```
{
    "ARN": "arn:aws:secretsmanager:region:aws_account_id:secret:appauthexample-AbCdEf",
    "Name": "appauthexample",
    "Description": "Example of a secret containing application authorization data.",
    "RotationEnabled": false,
    "LastChangedDate": 1581968848.926,
    "LastAccessedDate": 1581897600.0,
    "Tags": [],
    "VersionIdsToStages": {
        "871d9eca-18aa-46a9-8785-981ddEXAMPLE": [
            "AWSCURRENT"
        ],
        "9d4cb84b-ad69-40c0-a0ab-cead3EXAMPLE": [
            "AWSPREVIOUS"
        ]
    }
}
```
Reference a specific version staging label from the previous output in a container definition by specifying the version staging label at the end of the ARN.  

```
{
  "containerDefinitions": [{
    "secrets": [{
      "name": "environment_variable_name",
      "valueFrom": "arn:aws:secretsmanager:region:aws_account_id:secret:appauthexample-AbCdEf::AWSPREVIOUS:"
    }]
  }]
}
```
Reference a specific version ID from the previous output in a container definition by specifying the version ID at the end of the ARN.  

```
{
  "containerDefinitions": [{
    "secrets": [{
      "name": "environment_variable_name",
      "valueFrom": "arn:aws:secretsmanager:region:aws_account_id:secret:appauthexample-AbCdEf:::9d4cb84b-ad69-40c0-a0ab-cead3EXAMPLE"
    }]
  }]
}
```

**Example referencing a specific key and version staging label of a secret**  
The following shows how to reference both a specific key within a secret and a specific version staging label.  

```
{
  "containerDefinitions": [{
    "secrets": [{
      "name": "environment_variable_name",
      "valueFrom": "arn:aws:secretsmanager:region:aws_account_id:secret:appauthexample-AbCdEf:username1:AWSPREVIOUS:"
    }]
  }]
}
```
To specify a specific key and version ID, use the following syntax.  

```
{
  "containerDefinitions": [{
    "secrets": [{
      "name": "environment_variable_name",
      "valueFrom": "arn:aws:secretsmanager:region:aws_account_id:secret:appauthexample-AbCdEf:username1::9d4cb84b-ad69-40c0-a0ab-cead3EXAMPLE"
    }]
  }]
}
```
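
The `valueFrom` patterns above follow one scheme: up to three optional fields (JSON key, version staging label, version ID) are appended to the secret ARN, with unused fields left as empty segments so the remaining fields keep their positions. A minimal Python sketch of that assembly (the helper name is hypothetical, not an AWS SDK function):

```python
def secrets_manager_value_from(secret_arn, json_key="", version_stage="", version_id=""):
    """Assemble a `valueFrom` string for a container definition secret.

    Up to three optional fields follow the secret ARN: a JSON key, a
    version staging label, and a version ID. Unused fields stay empty
    so that the remaining fields keep their positions.
    Hypothetical helper for illustration; not part of any AWS SDK.
    """
    if not (json_key or version_stage or version_id):
        # A bare ARN retrieves the full contents of the AWSCURRENT version.
        return secret_arn
    return ":".join([secret_arn, json_key, version_stage, version_id])
```

For example, passing only `version_stage="AWSPREVIOUS"` produces the `appauthexample-AbCdEf::AWSPREVIOUS:` form shown above.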

For information about how to create a task definition with the secret specified in an environment variable, see [Creating an Amazon ECS task definition using the console](create-task-definition.md). 

# Pass Systems Manager parameters through Amazon ECS environment variables
<a name="secrets-envvar-ssm-paramstore"></a>

Amazon ECS allows you to inject sensitive data into your containers by storing your sensitive data in AWS Systems Manager Parameter Store parameters and then referencing them in your container definition.

Consider the following when using an environment variable to inject a Systems Manager secret into a container.
+ Sensitive data is injected into your container when the container is initially started. If the secret is subsequently updated or rotated, the container doesn't receive the updated value automatically. You must either launch a new task or, if your task is part of a service, update the service and use the **Force new deployment** option to force the service to launch a fresh task.
+ For Amazon ECS tasks on AWS Fargate, the following should be considered:
  + To inject the full content of a secret as an environment variable or in a log configuration, you must use platform version `1.3.0` or later. For information, see [Fargate platform versions for Amazon ECS](platform-fargate.md).
  + To inject a specific JSON key or version of a secret as an environment variable or in a log configuration, you must use platform version `1.4.0` or later (Linux) or `1.0.0` or later (Windows). For information, see [Fargate platform versions for Amazon ECS](platform-fargate.md).
+ For Amazon ECS tasks on EC2, the following should be considered:
  + To inject a secret using a specific JSON key or version of a secret, your container instance must have version `1.37.0` or later of the container agent. However, we recommend using the latest container agent version. For information about checking your agent version and updating to the latest version, see [Updating the Amazon ECS container agent](ecs-agent-update.md).

    To inject the full contents of a secret as an environment variable or to inject a secret in a log configuration, your container instance must have version `1.22.0` or later of the container agent.
+ Use interface VPC endpoints to enhance security controls. You must create the interface VPC endpoints for Systems Manager. For information about the VPC endpoint, see [Improve the security of EC2 instances by using VPC endpoints for Systems Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/setup-create-vpc.html) in the *AWS Systems Manager User Guide*.
+ Your task definition must use a task execution role with the additional permissions for Systems Manager Parameter Store. For more information, see [Amazon ECS task execution IAM role](task_execution_IAM_role.md).
+ For Windows tasks that are configured to use the `awslogs` logging driver, you must also set the `ECS_ENABLE_AWSLOGS_EXECUTIONROLE_OVERRIDE` environment variable on your container instance. Use the following syntax:

  ```
  <powershell>
  [Environment]::SetEnvironmentVariable("ECS_ENABLE_AWSLOGS_EXECUTIONROLE_OVERRIDE", $TRUE, "Machine")
  Initialize-ECSAgent -Cluster <cluster name> -EnableTaskIAMRole -LoggingDrivers '["json-file","awslogs"]'
  </powershell>
  ```
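
The minimum agent versions above (`1.37.0` for JSON-key or version references, `1.22.0` for full-content references) call for a numeric comparison of dotted version strings, sketched here for illustration:

```python
def agent_meets_minimum(agent_version: str, minimum: str) -> bool:
    """Compare dotted version strings numerically, e.g. "1.37.0" >= "1.22.0".

    Tuple comparison handles multi-digit components correctly, which a
    plain string comparison would not (e.g. "1.9.0" vs. "1.22.0").
    Illustrative sketch; check your actual agent version in the console
    or through the container agent introspection endpoint.
    """
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return as_tuple(agent_version) >= as_tuple(minimum)
```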

## Create the Systems Manager parameter
<a name="secrets-envvar-ssm-paramstore-create-parameter"></a>

You can use the Systems Manager console to create a Systems Manager Parameter Store parameter for your sensitive data. For more information, see [Create a Systems Manager parameter (console)](https://docs.aws.amazon.com/systems-manager/latest/userguide/parameter-create-console.html) or [Create a Systems Manager parameter (AWS CLI)](https://docs.aws.amazon.com/systems-manager/latest/userguide/param-create-cli.html) in the *AWS Systems Manager User Guide*.

## Add the environment variable to the container definition
<a name="secrets-ssm-paramstore-update-container-definition"></a>

Within your container definition in the task definition, specify `secrets` with the name of the environment variable to set in the container and the full ARN of the Systems Manager Parameter Store parameter containing the sensitive data to present to the container. For more information, see [secrets](task_definition_parameters.md#ContainerDefinition-secrets).

The following is a snippet of a task definition showing the format when referencing a Systems Manager Parameter Store parameter. If the Systems Manager Parameter Store parameter exists in the same Region as the task you are launching, then you can use either the full ARN or name of the parameter. If the parameter exists in a different Region, then specify the full ARN.

```
{
  "containerDefinitions": [{
    "secrets": [{
      "name": "environment_variable_name",
      "valueFrom": "arn:aws:ssm:region:aws_account_id:parameter/parameter_name"
    }]
  }]
}
```
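
The same-Region shortcut above can be expressed as a small helper that returns the bare parameter name when the parameter and task share a Region, and the full ARN otherwise (hypothetical function, shown only to illustrate the rule):

```python
def ssm_parameter_reference(parameter_arn: str, task_region: str) -> str:
    """Return the shortest valid `valueFrom` for a Parameter Store parameter.

    Same Region as the task: the bare parameter name is sufficient.
    Different Region: the full ARN is required.
    Hypothetical helper for illustration; not part of any AWS SDK.
    """
    # ARN format: arn:aws:ssm:region:aws_account_id:parameter/parameter_name
    parts = parameter_arn.split(":", 5)
    parameter_region, resource = parts[3], parts[5]
    if parameter_region == task_region:
        return resource[len("parameter/"):]
    return parameter_arn
```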

For information about how to create a task definition with the secret specified in an environment variable, see [Creating an Amazon ECS task definition using the console](create-task-definition.md).

## Update your application to programmatically retrieve Systems Manager Parameter Store secrets
<a name="secrets-ssm-paramstore-update-app"></a>

To retrieve the sensitive data stored in the Systems Manager Parameter Store parameter, see [Code examples for Systems Manager using AWS SDKs](https://docs.aws.amazon.com/code-library/latest/ug/ssm_code_examples.html) in the *AWS SDK Code Examples Code Library*.

# Pass secrets for Amazon ECS logging configuration
<a name="secrets-logconfig"></a>

You can use the `secretOptions` parameter in `logConfiguration` to pass sensitive data used for logging.

You can store the secret in Secrets Manager or Systems Manager.

## Use Secrets Manager
<a name="secrets-logconfig-secrets-manager"></a>

Within your container definition, when specifying a `logConfiguration`, you can specify `secretOptions` with the name of the log driver option to set in the container and the full ARN of the Secrets Manager secret containing the sensitive data to present to the container. For more information about creating secrets, see [Create an AWS Secrets Manager secret](https://docs.aws.amazon.com/secretsmanager/latest/userguide/create_secret.html).

The following is a snippet of a task definition showing the format when referencing a Secrets Manager secret.

```
{
  "containerDefinitions": [{
    "logConfiguration": {
      "logDriver": "splunk",
      "options": {
        "splunk-url": "https://your_splunk_instance:8088"
      },
      "secretOptions": [{
        "name": "splunk-token",
        "valueFrom": "arn:aws:secretsmanager:region:aws_account_id:secret:secret_name-AbCdEf"
      }]
    }
  }]
}
```

## Use Systems Manager
<a name="secrets-logconfig-ssm-paramstore"></a>

You can inject sensitive data in a log configuration. Within your container definition, when specifying a `logConfiguration`, you can specify `secretOptions` with the name of the log driver option to set in the container and the full ARN of the Systems Manager Parameter Store parameter containing the sensitive data to present to the container.

**Important**  
If the Systems Manager Parameter Store parameter exists in the same Region as the task you are launching, then you can use either the full ARN or name of the parameter. If the parameter exists in a different Region, then specify the full ARN.

The following is a snippet of a task definition showing the format when referencing a Systems Manager Parameter Store parameter.

```
{
  "containerDefinitions": [{
    "logConfiguration": {
      "logDriver": "fluentd",
      "options": {
        "tag": "fluentd demo"
      },
      "secretOptions": [{
        "name": "fluentd-address",
        "valueFrom": "arn:aws:ssm:region:aws_account_id:parameter/parameter_name"
      }]
    }
  }]
}
```

# Specifying sensitive data using Secrets Manager secrets in Amazon ECS
<a name="specifying-sensitive-data-tutorial"></a>

Amazon ECS allows you to inject sensitive data into your containers by storing your sensitive data in AWS Secrets Manager secrets and then referencing them in your container definition. For more information, see [Pass sensitive data to an Amazon ECS container](specifying-sensitive-data.md).

Learn how to create a Secrets Manager secret, reference the secret in an Amazon ECS task definition, and then verify that it worked by querying the environment variable inside a container to show the contents of the secret.

## Prerequisites
<a name="specifying-sensitive-data-tutorial-prereqs"></a>

This tutorial assumes that the following prerequisites have been completed:
+ The steps in [Set up to use Amazon ECS](get-set-up-for-amazon-ecs.md) have been completed.
+ Your user has the required IAM permissions to create the Secrets Manager and Amazon ECS resources.

## Step 1: Create a Secrets Manager secret
<a name="specifying-sensitive-data-tutorial-create-secret"></a>

You can use the Secrets Manager console to create a secret for your sensitive data. In this tutorial, you create a basic secret that stores a username and password to reference later in a container. For more information, see [Create an AWS Secrets Manager secret](https://docs.aws.amazon.com/secretsmanager/latest/userguide/create_secret.html) in the *AWS Secrets Manager User Guide*.

The **key/value pairs to be stored in this secret** become the environment variable value in your container at the end of the tutorial.

Save the **Secret ARN** to reference in your task execution IAM policy and task definition in later steps.

## Step 2: Add the secrets permissions to the task execution role
<a name="specifying-sensitive-data-tutorial-update-iam"></a>

For Amazon ECS to retrieve the sensitive data from your Secrets Manager secret, the task execution role must have the secrets permissions. For more information, see [Secrets Manager or Systems Manager permissions](task_execution_IAM_role.md#task-execution-secrets).

## Step 3: Create a task definition
<a name="specifying-sensitive-data-tutorial-create-taskdef"></a>

You can use the Amazon ECS console to create a task definition that references a Secrets Manager secret.

**To create a task definition that specifies a secret**

1. Open the console at [https://console.aws.amazon.com/ecs/v2](https://console.aws.amazon.com/ecs/v2).

1. In the navigation pane, choose **Task definitions**.

1. Choose **Create new task definition**, **Create new task definition with JSON**.

1. In the JSON editor box, enter the following task definition JSON text, ensuring that you specify the full ARN of the Secrets Manager secret you created in step 1 and the task execution role you updated in step 2.

   ```
   {
       "executionRoleArn": "arn:aws:iam::aws_account_id:role/ecsTaskExecutionRole",
       "containerDefinitions": [
           {
               "entryPoint": [
                   "sh",
                   "-c"
               ],
               "portMappings": [
                   {
                       "hostPort": 80,
                       "protocol": "tcp",
                       "containerPort": 80
                   }
               ],
               "command": [
                   "/bin/sh -c \"echo '<html> <head> <title>Amazon ECS Sample App</title> <style>body {margin-top: 40px; background-color: #333;} </style> </head><body> <div style=color:white;text-align:center> <h1>Amazon ECS Sample App</h1> <h2>Congratulations!</h2> <p>Your application is now running on a container in Amazon ECS.</p> </div></body></html>' >  /usr/local/apache2/htdocs/index.html && httpd-foreground\""
               ],
               "cpu": 10,
               "secrets": [
                   {
                       "valueFrom": "arn:aws:secretsmanager:region:aws_account_id:secret:username_value",
                       "name": "username_value"
                   }
               ],
               "memory": 300,
               "image": "public.ecr.aws/docker/library/httpd:2.4",
               "essential": true,
               "name": "ecs-secrets-container"
           }
       ],
       "family": "ecs-secrets-tutorial"
   }
   ```

1. Choose **Create**.

## Step 4: Create a cluster
<a name="specifying-sensitive-data-tutorial-create-cluster"></a>

You can use the Amazon ECS console to create a cluster containing a container instance to run the task on. If you have an existing cluster with at least one container instance registered to it, with enough available resources to run one instance of this tutorial's task definition, you can skip to the next step.

For this tutorial, you create a cluster with one `t2.micro` container instance that uses the Amazon ECS-optimized Amazon Linux 2 AMI.

For information about how to create a cluster for EC2, see [Creating an Amazon ECS cluster for Amazon EC2 workloads](create-ec2-cluster-console-v2.md).

## Step 5: Run a task
<a name="specifying-sensitive-data-tutorial-run-task"></a>

You can use the Amazon ECS console to run a task using the task definition you created. For this tutorial, you run a task on EC2 using the cluster that you created in the previous step.

For information about how to run a task, see [Running an application as an Amazon ECS task](standalone-task-create.md).

## Step 6: Verify
<a name="specifying-sensitive-data-tutorial-verify"></a>

You can verify that all of the steps were completed successfully and that the environment variable was created properly in your container by using the following steps.

**To verify that the environment variable was created**

1. Find the public IP or DNS address for your container instance.

   1. Open the console at [https://console.aws.amazon.com/ecs/v2](https://console.aws.amazon.com/ecs/v2).

   1. In the navigation pane, choose **Clusters**, and then choose the cluster you created.

   1. Choose **Infrastructure**, and then choose the container instance.

   1. Record the **Public IP** or **Public DNS** for your instance.

1. If you are using a macOS or Linux computer, connect to your instance with the following command, substituting the path to your private key and the public address for your instance:

   ```
   $ ssh -i /path/to/my-key-pair.pem ec2-user@ec2-198-51-100-1.compute-1.amazonaws.com
   ```

   For more information about using a Windows computer, see [Connect to your Linux instance using PuTTY](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/connect-linux-inst-from-windows.html) in the *Amazon EC2 User Guide*.
**Important**  
If you encounter issues while connecting to your instance, see [Troubleshooting Connecting to Your Instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/TroubleshootingInstancesConnecting.html) in the *Amazon EC2 User Guide*.

1. List the containers running on the instance. Note the container ID for the `ecs-secrets-tutorial` container.

   ```
   docker ps
   ```

1. Connect to the `ecs-secrets-tutorial` container using the container ID from the output of the previous step.

   ```
   docker exec -it container_ID /bin/bash
   ```

1. Use the `echo` command to print the value of the environment variable.

   ```
   echo $username_value
   ```

   If the tutorial was successful, you should see the following output:

   ```
   password_value
   ```
**Note**  
Alternatively, you can list all environment variables in your container using the `env` (or `printenv`) command.

## Step 7: Clean up
<a name="specifying-sensitive-data-tutorial-cleanup"></a>

When you are finished with this tutorial, you should clean up the associated resources to avoid incurring charges for unused resources.

**To clean up the resources**

1. Open the console at [https://console.aws.amazon.com/ecs/v2](https://console.aws.amazon.com/ecs/v2).

1. In the navigation pane, choose **Clusters**.

1. On the **Clusters** page, choose the cluster.

1. Choose **Delete Cluster**. 

1. In the confirmation box, enter **delete *cluster name***, and then choose **Delete**.

1. Open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

1. In the navigation pane, choose **Roles**. 

1. Search the list of roles for `ecsTaskExecutionRole` and select it.

1. Choose **Permissions**, then choose the **X** next to **ECSSecretsTutorial**. Choose **Remove**.

1. Open the Secrets Manager console at [https://console.aws.amazon.com/secretsmanager/](https://console.aws.amazon.com/secretsmanager/).

1. Select the **username_value** secret you created and choose **Actions**, **Delete secret**.

# Amazon ECS task definition parameters for Amazon ECS Managed Instances
<a name="task_definition_parameters-managed-instances"></a>

Task definitions are split into separate parts: the task family, the AWS Identity and Access Management (IAM) task role, the network mode, container definitions, volumes, and capacity. The family and container definitions are required in a task definition. In contrast, task role, network mode, volumes, and capacity are optional.

You can use these parameters in a JSON file to configure your task definition.

The following are more detailed descriptions for each task definition parameter for Amazon ECS Managed Instances.

## Family
<a name="family-managed-instances"></a>

`family`  
Type: String  
Required: Yes  
When you register a task definition, you give it a family, which is similar to a name for multiple versions of the task definition, specified with a revision number. The first task definition that's registered into a particular family is given a revision of 1, and any task definitions registered after that are given a sequential revision number.

## Capacity
<a name="requires_compatibilities-managed-instances"></a>

When you register a task definition, you can specify the capacity that Amazon ECS should validate the task definition against. If the task definition doesn't validate against the compatibilities specified, a client exception is returned. For more information, see [Amazon ECS launch types](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/launch_types.html).

The following parameter is allowed in a task definition.

`requiresCompatibilities`  
Type: String array  
Required: No  
Valid Values: `MANAGED_INSTANCES`  
The capacity to validate the task definition against. This initiates a check to ensure that all of the parameters that are used in the task definition meet the requirements for Amazon ECS Managed Instances.

## Task role
<a name="task_role_arn-managed-instances"></a>

`taskRoleArn`  
Type: String  
Required: No  
When you register a task definition, you can provide a task role for an IAM role that allows the containers in the task permission to call the AWS APIs that are specified in its associated policies on your behalf. For more information, see [Amazon ECS task IAM role](task-iam-roles.md).

## Task execution role
<a name="execution_role_arn-managed-instances"></a>

`executionRoleArn`  
Type: String  
Required: Conditional  
The Amazon Resource Name (ARN) of the task execution role that grants the Amazon ECS container agent permission to make AWS API calls on your behalf. For more information, see [Amazon ECS task execution IAM role](task_execution_IAM_role.md).  
The task execution IAM role is required depending on the requirements of your task. The role is required for private ECR image pulls and using the `awslogs` log driver.

## Network mode
<a name="network_mode-managed-instances"></a>

`networkMode`  
Type: String  
Required: No  
Default: `awsvpc`  
The networking mode to use for the containers in the task. For Amazon ECS tasks that are hosted on Amazon ECS Managed Instances, the valid values are `awsvpc` and `host`. If no network mode is specified, the default network mode is `awsvpc`.  
If the network mode is `host`, the task bypasses network isolation and containers use the host's network stack directly.  
When running tasks that use the `host` network mode, don't run containers using the root user (UID 0). As a security best practice, always use a non-root user.  
If the network mode is `awsvpc`, the task is allocated an elastic network interface, and you must specify a `NetworkConfiguration` when you create a service or run a task with the task definition. For more information, see [Amazon ECS task networking for Amazon ECS Managed Instances](managed-instance-networking.md).  
The `host` and `awsvpc` network modes offer the highest networking performance for containers because they use the Amazon EC2 network stack. With the `host` and `awsvpc` network modes, exposed container ports are mapped directly to the corresponding host port (for the `host` network mode) or the attached elastic network interface port (for the `awsvpc` network mode). Because of this, you can't use dynamic host port mappings.

## Runtime platform
<a name="runtime-platform-managed-instances"></a>

`operatingSystemFamily`  
Type: String  
Required: No  
Default: LINUX  
When you register a task definition, you specify the operating system family.   
The valid value for this field is `LINUX`.  
All task definitions that are used in a service must have the same value for this parameter.  
When a task definition is part of a service, this value must match the service `platformFamily` value.

`cpuArchitecture`  
Type: String  
Required: Conditional  
When you register a task definition, you specify the CPU architecture. The valid values are `X86_64` and `ARM64`.  
If you don't specify a value, Amazon ECS attempts to place tasks on the available CPU architecture based on the capacity provider configuration. To ensure that tasks are placed on a specific CPU architecture, specify a value for `cpuArchitecture` in the task definition.  
All task definitions that are used in a service must have the same value for this parameter.  
For more information about `ARM64`, see [Amazon ECS task definitions for 64-bit ARM workloads](ecs-arm64.md).

## Task size
<a name="task_size-managed-instances"></a>

When you register a task definition, you can specify the total CPU and memory used for the task. This is separate from the `cpu` and `memory` values at the container definition level. For tasks that are hosted on Amazon EC2 instances, these fields are optional.

**Note**  
Task-level CPU and memory parameters are ignored for Windows containers. We recommend specifying container-level resources for Windows containers.

`cpu`  
Type: String  
Required: Conditional  
The hard limit of CPU units to present for the task. You can specify CPU values in the JSON file as a string in CPU units or virtual CPUs (vCPUs). For example, you can specify a CPU value either as `1024` in CPU units or `1 vCPU` in vCPUs. When the task definition is registered, a vCPU value is converted to an integer indicating the CPU units.  
This field is optional. If your cluster doesn't have any registered container instances with the requested CPU units available, the task fails. Supported values are between `0.125` vCPUs and `10` vCPUs.
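
As described above, a vCPU value registers as an integer number of CPU units, at 1024 CPU units per vCPU. A sketch of that conversion (hypothetical helper, for illustration only):

```python
def cpu_units(value: str) -> int:
    """Convert a task-level CPU string to integer CPU units.

    Accepts either plain CPU units ("1024") or a vCPU form ("1 vCPU"),
    using the documented conversion of 1 vCPU = 1024 CPU units.
    Hypothetical helper for illustration only.
    """
    v = value.strip().lower()
    if v.endswith("vcpu"):
        return int(float(v[: -len("vcpu")].strip()) * 1024)
    return int(v)
```

For example, the supported range of `0.125` to `10` vCPUs corresponds to `128` through `10240` CPU units.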

`memory`  
Type: String  
Required: Conditional  
The hard limit of memory to present to the task. You can specify memory values in the task definition as a string in mebibytes (MiB) or gigabytes (GB). For example, you can specify a memory value either as `3072` in MiB or `3 GB` in GB. When the task definition is registered, a GB value is converted to an integer indicating the MiB.  
This field is optional and any value can be used. If a task-level memory value is specified, then the container-level memory value is optional. If your cluster doesn't have any registered container instances with the requested memory available, the task fails. You can maximize your resource utilization by providing your tasks as much memory as possible for a particular instance type. For more information, see [Reserving Amazon ECS Linux container instance memory](memory-management.md).
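
Following the example above (`3 GB` registers as `3072` MiB), the GB-to-MiB conversion uses a factor of 1024. A sketch of that conversion (hypothetical helper, for illustration only):

```python
def memory_mib(value: str) -> int:
    """Convert a task-level memory string to integer MiB.

    Accepts either plain MiB ("3072") or a GB form ("3 GB"), using the
    conversion shown in the documentation: 1 GB registers as 1024 MiB.
    Hypothetical helper for illustration only.
    """
    v = value.strip().lower()
    if v.endswith("gb"):
        return int(float(v[: -len("gb")].strip()) * 1024)
    return int(v)
```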

## Other task definition parameters
<a name="other_task_definition_params-managed-instances"></a>

The following task definition parameters can be used when registering task definitions in the Amazon ECS console by using the **Configure via JSON** option. For more information, see [Creating an Amazon ECS task definition using the console](create-task-definition.md).

**Topics**
+ [

### Ephemeral storage
](#task_definition_ephemeralStorage-managed-instances)
+ [

### IPC mode
](#task_definition_ipcmode-managed-instances)
+ [

### PID mode
](#task_definition_pidmode-managed-instances)
+ [

### Proxy configuration
](#proxyConfiguration-managed-instances)
+ [

### Tags
](#tags-managed-instances)
+ [

### Elastic Inference accelerator (deprecated)
](#elastic-Inference-accelerator-managed-instances)
+ [

### Placement constraints
](#constraints-managed-instances)
+ [

### Volumes
](#volumes-managed-instances)

### Ephemeral storage
<a name="task_definition_ephemeralStorage-managed-instances"></a>

`ephemeralStorage`  
This parameter isn't supported for tasks running on Amazon ECS Managed Instances.  
Type: [EphemeralStorage](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_EphemeralStorage.html) object  
Required: No  
The amount of ephemeral storage (in GB) to allocate for the task. This parameter is used to expand the total amount of ephemeral storage available, beyond the default amount, for tasks that are hosted on AWS Fargate. For more information, see [Use bind mounts with Amazon ECS](bind-mounts.md).

### IPC mode
<a name="task_definition_ipcmode-managed-instances"></a>

`ipcMode`  
This parameter isn't supported for tasks running on Amazon ECS Managed Instances.  
Type: String  
Required: No  
The IPC resource namespace to use for the containers in the task. The valid values are `host`, `task`, or `none`. If `host` is specified, then all the containers that are within the tasks that specified the `host` IPC mode on the same container instance share the same IPC resources with the host Amazon EC2 instance. If `task` is specified, all the containers that are within the specified task share the same IPC resources. If `none` is specified, then IPC resources within the containers of a task are private and not shared with other containers in a task or on the container instance. If no value is specified, then the IPC resource namespace sharing depends on the container runtime configuration.

### PID mode
<a name="task_definition_pidmode-managed-instances"></a>

`pidMode`  
Type: String  
Required: No  
The process namespace to use for the containers in the task. The valid values are `host` or `task`. If `host` is specified, then all the containers that are within the tasks that specified the `host` PID mode on the same container instance share the same process namespace with the host Amazon EC2 instance. If `task` is specified, all the containers that are within the specified task share the same process namespace. If no value is specified, the default is a private namespace.  
If the `host` PID mode is used, there's a heightened risk of undesired process namespace exposure.

### Proxy configuration
<a name="proxyConfiguration-managed-instances"></a>

`proxyConfiguration`  
This parameter isn't supported for tasks running on Amazon ECS Managed Instances.  
Type: [ProxyConfiguration](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_ProxyConfiguration.html) object  
Required: No  
The configuration details for the App Mesh proxy.

### Tags
<a name="tags-managed-instances"></a>

The metadata that you apply to a task definition to help you categorize and organize them. Each tag consists of a key and an optional value. You define both of them.

The following basic restrictions apply to tags:
+ Maximum number of tags per resource - 50
+ For each resource, each tag key must be unique, and each tag key can have only one value.
+ Maximum key length - 128 Unicode characters in UTF-8
+ Maximum value length - 256 Unicode characters in UTF-8
+ If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
+ Tag keys and values are case-sensitive.
+ Don't use `aws:`, `AWS:`, or any upper or lowercase combination of these as a prefix for either keys or values, as it is reserved for AWS use. You can't edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
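
The length and prefix restrictions above lend themselves to a simple pre-flight check before registering a task definition. A sketch (hypothetical helper, not an AWS API; it covers only the restrictions listed here):

```python
def tag_problems(key: str, value: str = "") -> list:
    """Return a list of restriction violations for a tag key/value pair.

    Covers only the basic restrictions listed above: key and value
    length, and the reserved aws: prefix. An empty list means the pair
    passes these checks. Hypothetical helper, not an AWS API.
    """
    problems = []
    if not 1 <= len(key) <= 128:
        problems.append("key must be 1-128 characters")
    if len(value) > 256:
        problems.append("value must be at most 256 characters")
    if key.lower().startswith("aws:") or value.lower().startswith("aws:"):
        problems.append("the aws: prefix is reserved for AWS use")
    return problems
```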

`key`  
Type: String  
Required: No  
One part of a key-value pair that makes up a tag. A key is a general label that acts like a category for more specific tag values.

`value`  
Type: String  
Required: No  
The optional part of a key-value pair that makes up a tag. A value acts as a descriptor within a tag category (key).

### Elastic Inference accelerator (deprecated)
<a name="elastic-Inference-accelerator-managed-instances"></a>

**Note**  
Amazon Elastic Inference (EI) is no longer available to customers.

`inferenceAccelerator`  
This parameter isn't supported for tasks running on Amazon ECS Managed Instances.  
Type: [InferenceAccelerator](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_InferenceAccelerator.html) object  
Required: No  
The Elastic Inference accelerators to use for the containers in the task.

### Placement constraints
<a name="constraints-managed-instances"></a>

`placementConstraints`  
Type: Array of [TaskDefinitionPlacementConstraint](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_TaskDefinitionPlacementConstraint.html) objects  
Required: No  
An array of placement constraint objects to use for the task. You can specify a maximum of 10 constraints per task (this limit includes constraints in the task definition and those specified at runtime).  
Amazon ECS supports the `distinctInstance` and `memberOf` placement constraints for tasks running on Amazon ECS Managed Instances. The following attributes are supported for tasks that use the `memberOf` placement constraint:  
+ `ecs.subnet-id`
+ `ecs.availability-zone`
+ `ecs.cpu-architecture`
+ `ecs.instance-type`
For more information about placement constraints, see [Define which container instances Amazon ECS uses for tasks](task-placement-constraints.md).
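
For example, the following sketch constrains task placement to a specific instance type by using the `memberOf` constraint with the `ecs.instance-type` attribute. The instance type shown is a hypothetical example.

```
"placementConstraints": [
    {
        "type": "memberOf",
        "expression": "attribute:ecs.instance-type == c5.large"
    }
]
```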

### Volumes
<a name="volumes-managed-instances"></a>

When you register a task definition, you can optionally specify a list of volumes for your tasks. This allows you to use data volumes in your tasks.

For more information about volume types and other parameters, see [Storage options for Amazon ECS tasks](using_data_volumes.md).

`name`  
Type: String  
Required: Yes  
The name of the volume. Up to 255 letters (uppercase and lowercase), numbers, and hyphens are allowed. This name is referenced in the `sourceVolume` parameter of container definition `mountPoints`.

`host`  
Type: [HostVolumeProperties](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_HostVolumeProperties.html) object  
Required: No  
This parameter is specified when you're using bind mount host volumes. The contents of the `host` parameter determine whether your bind mount host volume persists on the host container instance and where it's stored. If the `host` parameter is empty, then the system assigns a host path for your data volume. However, the data isn't guaranteed to persist after the containers that are associated with it stop running.    
`sourcePath`  
Type: String  
Required: No  
When the `host` parameter is used, specify a `sourcePath` to declare the path on the host instance that is presented to the container. If this parameter is empty, then the system assigns a host path for you. If the `host` parameter contains a `sourcePath` file location, then the data volume persists at the specified location on the host instance until you delete it manually. If the `sourcePath` value does not exist on the host instance, the system creates it. If the location does exist, the contents of the source path folder are exported.  
On Amazon ECS Managed Instances, portions of the host filesystem are read-only. The `sourcePath` must point to a writable directory such as `/var` or `/tmp`. For more information, see [Use bind mounts with Amazon ECS](bind-mounts.md).
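
The following sketch shows a bind mount host volume with a writable `sourcePath`; the volume and path names are hypothetical examples. A container definition attaches the volume with a `mountPoints` entry whose `sourceVolume` matches the volume `name`.

```
"volumes": [
    {
        "name": "app-data",
        "host": {
            "sourcePath": "/var/app-data"
        }
    }
]
```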

`dockerVolumeConfiguration`  
This parameter isn't supported for tasks running on Amazon ECS Managed Instances.
Type: [DockerVolumeConfiguration](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_DockerVolumeConfiguration.html) object  
Required: No  
This parameter is specified when you're using Docker volumes.

`efsVolumeConfiguration`  
Type: [EFSVolumeConfiguration](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_EFSVolumeConfiguration.html) object  
Required: No  
This parameter is specified when you're using an Amazon EFS file system for task storage.

`fsxWindowsFileServerVolumeConfiguration`  
This parameter isn't supported for tasks running on Amazon ECS Managed Instances.
Type: [FSxWindowsFileServerVolumeConfiguration](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_FSxWindowsFileServerVolumeConfiguration.html) object  
Required: No  
This parameter is specified when you're using Amazon FSx for Windows File Server file system for task storage.

`configuredAtLaunch`  
Type: Boolean  
Required: No  
Indicates whether the volume should be configured at launch time. This is used to create Amazon EBS volumes for standalone tasks or tasks created as part of a service. Each task definition revision may only have one volume configured at launch in the volume configuration.
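
For example, a volume that's configured at launch is declared by name only in the task definition; the volume name shown is a hypothetical example. The volume's configuration is then supplied when you run the task or create the service.

```
"volumes": [
    {
        "name": "ebs-volume",
        "configuredAtLaunch": true
    }
]
```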

## Container definitions
<a name="container_definitions-managed-instances"></a>

When you register a task definition, you must specify a list of container definitions that are passed to the Docker daemon on a container instance. The following parameters are allowed in a container definition.

**Topics**
+ [

### Name
](#container_definition_name-managed-instances)
+ [

### Image
](#container_definition_image-managed-instances)
+ [

### Memory
](#container_definition_memory-managed-instances)
+ [

### CPU
](#container_definition_cpu-managed-instances)
+ [

### Port mappings
](#container_definition_portmappings-managed-instances)
+ [

### Private Repository Credentials
](#container_definition_repositoryCredentials-managed-instances)
+ [

### Essential
](#container_definition_essential-managed-instances)
+ [

### Entry point
](#container_definition_entrypoint-managed-instances)
+ [

### Command
](#container_definition_command-managed-instances)
+ [

### Working directory
](#container_definition_workingdirectory-managed-instances)
+ [

### Advanced container definition parameters
](#advanced_container_definition_params-managed-instances)
+ [

### Linux parameters
](#container_definition_linuxparameters-managed-instances)

### Name
<a name="container_definition_name-managed-instances"></a>

`name`  
Type: String  
Required: Yes  
The name of a container. Up to 255 letters (uppercase and lowercase), numbers, hyphens, and underscores are allowed. If you're linking multiple containers in a task definition, you can enter the `name` of one container in the `links` of another container to connect the containers.

### Image
<a name="container_definition_image-managed-instances"></a>

`image`  
Type: String  
Required: Yes  
The image used to start a container. This string is passed directly to the Docker daemon. By default, images in the Docker Hub registry are available. You can also specify other repositories with either `repository-url/image:tag` or `repository-url/image@digest`. Up to 255 letters (uppercase and lowercase), numbers, hyphens, underscores, colons, periods, forward slashes, and number signs are allowed. This parameter maps to `Image` in the docker create-container command and the `IMAGE` parameter of the docker run command.  
+ When a new task starts, the Amazon ECS container agent pulls the latest version of the specified image and tag for the container to use. However, subsequent updates to a repository image aren't propagated to already running tasks.
+ When you don't specify a tag or digest in the image path in the task definition, the Amazon ECS container agent uses the `latest` tag to pull the specified image. 
+ Images in private registries are supported. For more information, see [Using non-AWS container images in Amazon ECS](private-auth.md).
+ Images in Amazon ECR repositories can be specified by using either the full `registry/repository:tag` or `registry/repository@digest` naming convention (for example, `aws_account_id.dkr.ecr.region.amazonaws.com``/my-web-app:latest` or `aws_account_id.dkr.ecr.region.amazonaws.com``/my-web-app@sha256:94afd1f2e64d908bc90dbca0035a5b567EXAMPLE`).
+ Images in official repositories on Docker Hub use a single name (for example, `ubuntu` or `mongo`).
+ Images in other repositories on Docker Hub are qualified with an organization name (for example, `amazon/amazon-ecs-agent`).
+ Images in other online repositories are qualified further by a domain name (for example, `quay.io/assemblyline/ubuntu`).

`versionConsistency`  
Type: String  
Valid values: `enabled` | `disabled`  
Required: No  
Specifies whether Amazon ECS will resolve the container image tag provided in the container definition to an image digest. By default, this behavior is `enabled`. If you set the value for a container as `disabled`, Amazon ECS will not resolve the container image tag to a digest and will use the original image URI specified in the container definition for deployment. For more information about container image resolution, see [Container image resolution](deployment-type-ecs.md#deployment-container-image-stability).
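
For example, the following sketch turns off image tag resolution for a single container; the container name and image URI are hypothetical examples.

```
"containerDefinitions": [
    {
        "name": "web",
        "image": "my-repo/my-web-app:latest",
        "versionConsistency": "disabled"
    }
]
```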

### Memory
<a name="container_definition_memory-managed-instances"></a>

`memory`  
Type: Integer  
Required: No  
The amount (in MiB) of memory to present to the container. If your container attempts to exceed the memory specified here, the container is killed. The total amount of memory reserved for all containers within a task must be lower than the task `memory` value, if one is specified. This parameter maps to `Memory` in the docker create-container command and the `--memory` option to docker run.  
The Docker 20.10.0 or later daemon reserves a minimum of 6 MiB of memory for a container. So, don't specify less than 6 MiB of memory for your containers.  
The Docker 19.03.13-ce or earlier daemon reserves a minimum of 4 MiB of memory for a container. So, don't specify less than 4 MiB of memory for your containers.  
If you're trying to maximize your resource utilization by providing your tasks as much memory as possible for a particular instance type, see [Reserving Amazon ECS Linux container instance memory](memory-management.md).

`memoryReservation`  
Type: Integer  
Required: No  
The soft limit (in MiB) of memory to reserve for the container. When system memory is under contention, Docker attempts to keep the container memory to this soft limit. However, your container can use more memory when needed. The container can use up to the hard limit that's specified with the `memory` parameter (if applicable) or all of the available memory on the container instance, whichever comes first. This parameter maps to `MemoryReservation` in the docker create-container command and the `--memory-reservation` option to docker run.  
If a task-level memory value isn't specified, you must specify a non-zero integer for one or both of `memory` or `memoryReservation` in a container definition. If you specify both, `memory` must be greater than `memoryReservation`. If you specify `memoryReservation`, then that value is subtracted from the available memory resources for the container instance that the container is placed on. Otherwise, the value of `memory` is used.  
For example, suppose that your container normally uses 128 MiB of memory, but occasionally bursts to 256 MiB of memory for short periods of time. You can set a `memoryReservation` of 128 MiB, and a `memory` hard limit of 300 MiB. This configuration allows the container to only reserve 128 MiB of memory from the remaining resources on the container instance. At the same time, this configuration also allows the container to use more memory resources when needed.  
The Docker 20.10.0 or later daemon reserves a minimum of 6 MiB of memory for a container. So, don't specify less than 6 MiB of memory for your containers.  
The Docker 19.03.13-ce or earlier daemon reserves a minimum of 4 MiB of memory for a container. So, don't specify less than 4 MiB of memory for your containers.  
If you're trying to maximize your resource utilization by providing your tasks as much memory as possible for a particular instance type, see [Reserving Amazon ECS Linux container instance memory](memory-management.md).
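
The burst scenario described above can be expressed in a container definition as follows; the container name is a hypothetical example.

```
"containerDefinitions": [
    {
        "name": "web",
        "memoryReservation": 128,
        "memory": 300
    }
]
```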

### CPU
<a name="container_definition_cpu-managed-instances"></a>

`cpu`  
Type: Integer  
Required: No  
The number of `cpu` units reserved for the container. This parameter maps to `CpuShares` in the docker create-container command and the `--cpu-shares` option to docker run.  
This field is optional for tasks using EC2 capacity providers, and the only requirement is that the total amount of CPU reserved for all containers within a task be lower than the task-level `cpu` value.  
You can determine the number of CPU units that are available per EC2 instance type by multiplying the vCPUs listed for that instance type on the [Amazon EC2 Instances](http://aws.amazon.com/ec2/instance-types/) detail page by 1,024.
Linux containers share unallocated CPU units with other containers on the container instance with the same ratio as their allocated amount. For example, if you run a single-container task on a single-core instance type with 512 CPU units specified for that container, and that's the only task running on the container instance, that container could use the full 1,024 CPU units at any given time. However, if you launched another copy of the same task on that container instance, each task is guaranteed a minimum of 512 CPU units when needed. Moreover, each container could float to higher CPU usage if the other container was not using it. If both tasks were 100% active all of the time, they would be limited to 512 CPU units.  
On Linux container instances, the Docker daemon on the container instance uses the CPU value to calculate the relative CPU share ratios for running containers. For more information, see [CPU share constraint](https://docs.docker.com/engine/reference/run/#cpu-share-constraint) in the Docker documentation. The minimum valid CPU share value that the Linux kernel allows is 2. However, the CPU parameter isn't required, and you can use CPU values below 2 in your container definitions. For CPU values below 2 (including null), the behavior varies based on your Amazon ECS container agent version:  
+ **Agent versions less than or equal to 1.1.0:** Null and zero CPU values are passed to Docker as 0, which Docker then converts to 1,024 CPU shares. CPU values of 1 are passed to Docker as 1, which the Linux kernel converts to two CPU shares.
+ **Agent versions greater than or equal to 1.2.0:** Null, zero, and CPU values of 1 are passed to Docker as 2.
On Windows container instances, the CPU limit is enforced as an absolute limit, or a quota. Windows containers only have access to the specified amount of CPU that's described in the task definition. A null or zero CPU value is passed to Docker as `0`, which Windows interprets as 1% of one CPU.

### Port mappings
<a name="container_definition_portmappings-managed-instances"></a>

`portMappings`  
Type: Object array  
Required: No  
Port mappings expose your container's network ports to the outside world, which allows clients to access your application. They're also used for inter-container communication within the same task.  
For task definitions that use the `awsvpc` network mode, only specify the `containerPort`. The `hostPort` can be left blank or it must be the same value as the `containerPort`.  
Most fields of this parameter (including `containerPort`, `hostPort`, `protocol`) map to `PortBindings` in the docker create-container command and the `--publish` option to docker run. If the network mode of a task definition is set to `host`, host ports must either be undefined or match the container port in the port mapping.  
After a task reaches the `RUNNING` status, manual and automatic host and container port assignments are visible in the following locations:  
+ Console: The **Network Bindings** section of a container description for a selected task.
+ AWS CLI: The `networkBindings` section of the **describe-tasks** command output.
+ API: The `DescribeTasks` response.
+ Metadata: The task metadata endpoint.  
`appProtocol`  
Type: String  
Required: No  
The application protocol that's used for the port mapping. This parameter only applies to Service Connect. We recommend that you set this parameter to be consistent with the protocol that your application uses. If you set this parameter, Amazon ECS adds protocol-specific connection handling to the Service Connect proxy, and protocol-specific telemetry in the Amazon ECS console and CloudWatch.  
If you don't set a value for this parameter, then TCP is used. However, Amazon ECS doesn't add protocol-specific telemetry for TCP.  
For more information, see [Use Service Connect to connect Amazon ECS services with short names](service-connect.md).  
Valid protocol values: `"HTTP" | "HTTP2" | "GRPC" `  
`containerPort`  
Type: Integer  
Required: Yes, when `portMappings` are used  
The port number on the container that's bound to the user-specified or automatically assigned host port.  
For tasks that use the `awsvpc` network mode, you use `containerPort` to specify the exposed ports.  
`containerPortRange`  
Type: String  
Required: No  
The port number range on the container that's bound to the dynamically mapped host port range.   
You can only set this parameter by using the `register-task-definition` API. The option is available in the `portMappings` parameter. For more information, see [register-task-definition](https://docs.aws.amazon.com/cli/latest/reference/ecs/register-task-definition.html) in the *AWS Command Line Interface Reference*.  
The following rules apply when you specify a `containerPortRange`:  
+ You must use the `awsvpc` network mode.
+ The container instance must have at least version 1.67.0 of the container agent and at least version 1.67.0-1 of the `ecs-init` package.
+ You can specify a maximum of 100 port ranges for each container.
+ You don't specify a `hostPortRange`. The value of the `hostPortRange` is set as follows:
  + For containers in a task with the `awsvpc` network mode, the `hostPort` is set to the same value as the `containerPort`. This is a static mapping strategy.
+ The `containerPortRange` valid values are between 1 and 65535.
+ A port can only be included in one port mapping for each container.
+ You can't specify overlapping port ranges.
+ The first port in the range must be less than the last port in the range.
+ Docker recommends that you turn off the docker-proxy in the Docker daemon config file when you have a large number of ports.

  For more information, see [Issue #11185](https://github.com/moby/moby/issues/11185) on GitHub.

  For information about how to turn off the docker-proxy in the Docker daemon config file, see [Docker daemon](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/bootstrap_container_instance.html#bootstrap_docker_daemon) in the *Amazon ECS Developer Guide*.
You can call [DescribeTasks](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_DescribeTasks.html) to view the `hostPortRange`, which are the host ports that are bound to the container ports.  
The port ranges aren't included in the Amazon ECS task events, which are sent to EventBridge. For more information, see [Automate responses to Amazon ECS errors using EventBridge](cloudwatch_event_stream.md).  
`hostPortRange`  
Type: String  
Required: No  
The port number range on the host that's used with the network binding. This is assigned by Docker and delivered by the Amazon ECS agent.  
`hostPort`  
Type: Integer  
Required: No  
The port number on the container instance to reserve for your container.  
The `hostPort` can either be kept blank or be the same value as `containerPort`.  
The default ephemeral port range for Docker version 1.6.0 and later is listed on the instance under `/proc/sys/net/ipv4/ip_local_port_range`. If this kernel parameter is unavailable, the default ephemeral port range of `49153–65535` is used. Don't attempt to specify a host port in the ephemeral port range, because these ports are reserved for automatic assignment. In general, ports under `32768` are outside of the ephemeral port range.   
The default reserved ports are `22` for SSH, the Docker ports `2375` and `2376`, and the Amazon ECS container agent ports `51678-51680`. Any host port that was previously user-specified for a running task is also reserved while the task is running. After a task stops, the host port is released. The current reserved ports are displayed in the `remainingResources` of **describe-container-instances** output. A container instance might have up to 100 reserved ports at a time, including the default reserved ports. Automatically assigned ports don't count toward the 100 reserved ports quota.  
`name`  
Type: String  
Required: No (required when Service Connect or VPC Lattice is configured in a service)  
The name that's used for the port mapping. This parameter only applies to Service Connect and VPC Lattice. This parameter is the name that you use in the Service Connect and VPC Lattice configuration of a service.  
For more information, see [Use Service Connect to connect Amazon ECS services with short names](service-connect.md).  
In the following example, both of the required fields for Service Connect and VPC Lattice are used.  

```
"portMappings": [
    {
        "name": string,
        "containerPort": integer
    }
]
```  
`protocol`  
Type: String  
Required: No  
The protocol that's used for the port mapping. Valid values are `tcp` and `udp`. The default is `tcp`.  
Only `tcp` is supported for Service Connect. Remember that `tcp` is implied if this field isn't set. 
If you're specifying a host port, use the following syntax.  

```
"portMappings": [
    {
        "containerPort": integer,
        "hostPort": integer
    }
    ...
]
```
If you want an automatically assigned host port, use the following syntax.  

```
"portMappings": [
    {
        "containerPort": integer
    }
    ...
]
```
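
When you use a port range instead of a single port, the mapping takes the following form; the range shown is a hypothetical example.

```
"portMappings": [
    {
        "containerPortRange": "4000-4005",
        "protocol": "tcp"
    }
]
```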

### Private Repository Credentials
<a name="container_definition_repositoryCredentials-managed-instances"></a>

`repositoryCredentials`  
Type: [RepositoryCredentials](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_RepositoryCredentials.html) object  
Required: No  
The repository credentials for private registry authentication.  
For more information, see [Using non-AWS container images in Amazon ECS](private-auth.md).    
 `credentialsParameter`  
Type: String  
Required: Yes, when `repositoryCredentials` are used  
The Amazon Resource Name (ARN) of the secret containing the private repository credentials.  
For more information, see [Using non-AWS container images in Amazon ECS](private-auth.md).  
When you use the Amazon ECS API, AWS CLI, or AWS SDKs, if the secret exists in the same Region as the task that you're launching, then you can use either the full ARN or the name of the secret. When you use the AWS Management Console, you must specify the full ARN of the secret.
The following is a snippet of a task definition that shows the required parameters:  

```
"containerDefinitions": [
    {
        "image": "private-repo/private-image",
        "repositoryCredentials": {
            "credentialsParameter": "arn:aws:secretsmanager:region:aws_account_id:secret:secret_name"
        }
    }
]
```

### Essential
<a name="container_definition_essential-managed-instances"></a>

`essential`  
Type: Boolean  
Required: No  
If the `essential` parameter of a container is marked as `true`, and that container fails or stops for any reason, all other containers that are part of the task are stopped. If the `essential` parameter of a container is marked as `false`, its failure doesn't affect the rest of the containers in a task. If this parameter is omitted, a container is assumed to be essential.  
All tasks must have at least one essential container. If you have an application that's composed of multiple containers, group containers that are used for a common purpose into components, and separate the different components into multiple task definitions. For more information, see [Architect your application for Amazon ECS](application_architecture.md).

### Entry point
<a name="container_definition_entrypoint-managed-instances"></a>

`entryPoint`  
Type: String array  
Required: No  
The entry point that's passed to the container. This parameter maps to `Entrypoint` in the docker create-container command and the `--entrypoint` option to docker run.  

```
"entryPoint": ["string", ...]
```

### Command
<a name="container_definition_command-managed-instances"></a>

`command`  
Type: String array  
Required: No  
The command that's passed to the container. This parameter maps to `Cmd` in the docker create-container command and the `COMMAND` parameter to docker run. If there are multiple arguments, each argument is a separate string in the array.  

```
"command": ["string", ...]
```

### Working directory
<a name="container_definition_workingdirectory-managed-instances"></a>

`workingDirectory`  
Type: String  
Required: No  
The working directory in which to run commands inside the container. This parameter maps to `WorkingDir` in the docker create-container command and the `--workdir` option to docker run.

### Advanced container definition parameters
<a name="advanced_container_definition_params-managed-instances"></a>

The following advanced container definition parameters provide extended capabilities to the docker run command that's used to launch containers on your Amazon ECS container instances.

**Topics**
+ [

#### Restart policy
](#container_definition_restart_policy-managed-instances)
+ [

#### Health check
](#container_definition_healthcheck-managed-instances)
+ [

#### Environment
](#container_definition_environment-managed-instances)
+ [

#### Security
](#container_definition_security-managed-instances)
+ [

#### Network settings
](#container_definition_network-managed-instances)
+ [

#### Storage and logging
](#container_definition_storage-managed-instances)
+ [

#### Resource requirements
](#container_definition_resourcerequirements-managed-instances)
+ [

#### Container timeouts
](#container_definition_timeout-managed-instances)
+ [

#### Container dependency
](#container_definition_dependency-managed-instances)
+ [

#### System controls
](#container_definition_systemcontrols-managed-instances)
+ [

#### Interactive
](#container_definition_interactive-managed-instances)
+ [

#### Pseudo terminal
](#container_definition_pseudoterminal-managed-instances)

#### Restart policy
<a name="container_definition_restart_policy-managed-instances"></a>

`restartPolicy`  
The container restart policy and associated configuration parameters. When you set up a restart policy for a container, Amazon ECS can restart the container without needing to replace the task. For more information, see [Restart individual containers in Amazon ECS tasks with container restart policies](container-restart-policy.md).    
`enabled`  
Type: Boolean  
Required: Yes  
Specifies whether a restart policy is enabled for the container.  
`ignoredExitCodes`  
Type: Integer array  
Required: No  
A list of exit codes that Amazon ECS will ignore and not attempt a restart on. You can specify a maximum of 50 container exit codes. By default, Amazon ECS does not ignore any exit codes.  
`restartAttemptPeriod`  
Type: Integer  
Required: No  
A period of time (in seconds) that the container must run for before a restart can be attempted. A container can be restarted only once every `restartAttemptPeriod` seconds. If a container isn't able to run for this time period and exits early, it will not be restarted. You can set a minimum `restartAttemptPeriod` of 60 seconds and a maximum `restartAttemptPeriod` of 1800 seconds. By default, a container must run for 300 seconds before it can be restarted.
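
Combining the parameters above, a restart policy might look like the following sketch; the exit codes and period shown are hypothetical examples.

```
"restartPolicy": {
    "enabled": true,
    "ignoredExitCodes": [ 0 ],
    "restartAttemptPeriod": 300
}
```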

#### Health check
<a name="container_definition_healthcheck-managed-instances"></a>

`healthCheck`  
The container health check command and the associated configuration parameters for the container. For more information, see [Determine Amazon ECS task health using container health checks](healthcheck.md).    
`command`  
A string array that represents the command that the container runs to determine if it's healthy. The string array can start with `CMD` to run the command arguments directly, or `CMD-SHELL` to run the command with the container's default shell. If neither is specified, `CMD` is used.  
When registering a task definition in the AWS Management Console, use a comma separated list of commands. These commands are converted to a string after the task definition is created. An example input for a health check is the following.  

```
CMD-SHELL, curl -f http://localhost/ || exit 1
```
When registering a task definition using the AWS Management Console JSON panel, the AWS CLI, or the APIs, enclose the list of commands in brackets. An example input for a health check is the following.  

```
[ "CMD-SHELL", "curl -f http://localhost/ || exit 1" ]
```
An exit code of 0, with no `stderr` output, indicates success, and a non-zero exit code indicates failure.   
`interval`  
The period of time (in seconds) between each health check. You can specify between 5 and 300 seconds. The default value is 30 seconds.  
`timeout`  
The period of time (in seconds) to wait for a health check to succeed before it's considered a failure. You can specify between 2 and 60 seconds. The default value is 5 seconds.  
`retries`  
The number of times to retry a failed health check before the container is considered unhealthy. You can specify between 1 and 10 retries. The default value is three retries.  
`startPeriod`  
The optional grace period to provide containers time to bootstrap in before failed health checks count towards the maximum number of retries. You can specify a value between 0 and 300 seconds. By default, `startPeriod` is disabled.  
If a health check succeeds within the `startPeriod`, then the container is considered healthy and any subsequent failures count toward the maximum number of retries.
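
A complete `healthCheck` object combining the parameters above might look like the following sketch; the command and timing values are illustrative.

```
"healthCheck": {
    "command": [ "CMD-SHELL", "curl -f http://localhost/ || exit 1" ],
    "interval": 30,
    "timeout": 5,
    "retries": 3,
    "startPeriod": 10
}
```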

#### Environment
<a name="container_definition_environment-managed-instances"></a>

`cpu`  
Type: Integer  
Required: No  
The number of `cpu` units the Amazon ECS container agent reserves for the container. On Linux, this parameter maps to `CpuShares` in the [Create a container](https://docs.docker.com/reference/api/engine/version/v1.38/#operation/ContainerCreate) section.  
This field is optional for tasks that run on Amazon ECS Managed Instances. The total amount of CPU reserved for all the containers that are within a task must be lower than the task-level `cpu` value.  
Linux containers share unallocated CPU units with other containers on the container instance with the same ratio as their allocated amount. For example, assume that you run a single-container task on a single-core instance type with 512 CPU units specified for that container. Moreover, that task is the only task running on the container instance. In this example, the container can use the full 1,024 CPU unit share at any given time. However, assume then that you launched another copy of the same task on that container instance. Each task is guaranteed a minimum of 512 CPU units when needed. Similarly, if the other container isn't using the remaining CPU, each container can float to higher CPU usage. However, if both tasks were 100% active all of the time, they are limited to 512 CPU units.  
On Linux container instances, the Docker daemon on the container instance uses the CPU value to calculate the relative CPU share ratios for running containers. The minimum valid CPU share value that the Linux kernel allows is 2, and the maximum valid CPU share value that the Linux kernel allows is 262144. However, the CPU parameter isn't required, and you can use CPU values below 2 and above 262144 in your container definitions. For CPU values below 2 (including null) and above 262144, the behavior varies based on your Amazon ECS container agent version.  
For more examples, see [How Amazon ECS manages CPU and memory resources](https://aws.amazon.com/blogs/containers/how-amazon-ecs-manages-cpu-and-memory-resources/).

`gpu`  
Type: [ResourceRequirement](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_ResourceRequirement.html) object  
Required: No  
The number of physical `GPUs` that the Amazon ECS container agent reserves for the container. The number of GPUs reserved for all containers in a task must not exceed the number of available GPUs on the container instance the task is launched on. For more information, see [Amazon ECS task definitions for GPU workloads](ecs-gpu.md).
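
For example, reserving GPUs for a container uses a `resourceRequirements` entry in the container definition, as in the following sketch; the count shown is a hypothetical example.

```
"resourceRequirements": [
    {
        "type": "GPU",
        "value": "2"
    }
]
```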

`Elastic Inference accelerator`  
This parameter isn't supported for containers that are hosted on Amazon ECS Managed Instances.
Type: [ResourceRequirement](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_ResourceRequirement.html) object  
Required: No  
For the `InferenceAccelerator` type, the `value` matches the `deviceName` for an `InferenceAccelerator` specified in a task definition. For more information, see [Elastic Inference accelerator name (deprecated)](task_definition_parameters.md#elastic-Inference-accelerator).

`essential`  
Type: Boolean  
Required: No  
Suppose that the `essential` parameter of a container is marked as `true`, and that container fails or stops for any reason. Then, all other containers that are part of the task are stopped. If the `essential` parameter of a container is marked as `false`, then its failure doesn't affect the rest of the containers in a task. If this parameter is omitted, a container is assumed to be essential.  
All tasks must have at least one essential container. Suppose that you have an application that's composed of multiple containers. Then, group containers that are used for a common purpose into components, and separate the different components into multiple task definitions. For more information, see [Architect your application for Amazon ECS](application_architecture.md).  

```
"essential": true|false
```

`entryPoint`  
Early versions of the Amazon ECS container agent don't properly handle `entryPoint` parameters. If you have problems using `entryPoint`, update your container agent or enter your commands and arguments as `command` array items instead.
Type: String array  
Required: No  
The entry point that's passed to the container.   

```
"entryPoint": ["string", ...]
```

`command`  
Type: String array  
Required: No  
The command that's passed to the container. This parameter maps to `Cmd` in the docker create-container command and the `COMMAND` parameter to docker run. If there are multiple arguments, make sure that each argument is a separate string in the array.  

```
"command": ["string", ...]
```

`workingDirectory`  
Type: String  
Required: No  
The working directory in which to run commands inside the container. This parameter maps to `WorkingDir` in the [Create a container](https://docs.docker.com/reference/api/engine/version/v1.38/#operation/ContainerCreate) section of the [Docker Remote API](https://docs.docker.com/reference/api/engine/version/v1.38/) and the `--workdir` option to [docker run](https://docs.docker.com/reference/cli/docker/container/run/).  

```
"workingDirectory": "string"
```

`environmentFiles`  
Type: Object array  
Required: No  
A list of files containing the environment variables to pass to a container. This parameter maps to the `--env-file` option to the docker run command.  
You can specify up to 10 environment files. The file must have a `.env` file extension. Each line in an environment file contains an environment variable in `VARIABLE=VALUE` format. Lines that start with `#` are treated as comments and are ignored.   
If there are individual environment variables specified in the container definition, they take precedence over the variables contained within an environment file. If multiple environment files are specified that contain the same variable, they're processed from the top down. We recommend that you use unique variable names. For more information, see [Pass an individual environment variable to an Amazon ECS container](taskdef-envfiles.md).    
`value`  
Type: String  
Required: Yes  
The Amazon Resource Name (ARN) of the Amazon S3 object containing the environment variable file.  
`type`  
Type: String  
Required: Yes  
The file type to use. The only supported value is `s3`.
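
For example, a container definition might reference an environment file stored in Amazon S3 like the following (the bucket and object names are placeholders):

```
"environmentFiles": [
    {
        "value": "arn:aws:s3:::amzn-s3-demo-bucket/app.env",
        "type": "s3"
    }
]
```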

`environment`  
Type: Object array  
Required: No  
The environment variables to pass to a container. This parameter maps to `Env` in the docker create-container command and the `--env` option to the docker run command.  
We do not recommend using plaintext environment variables for sensitive information, such as credential data.  
`name`  
Type: String  
Required: Yes, when `environment` is used  
The name of the environment variable.  
`value`  
Type: String  
Required: Yes, when `environment` is used  
The value of the environment variable.

```
"environment" : [
    { "name" : "string", "value" : "string" },
    { "name" : "string", "value" : "string" }
]
```

`secrets`  
Type: Object array  
Required: No  
An object that represents the secret to expose to your container. For more information, see [Pass sensitive data to an Amazon ECS container](specifying-sensitive-data.md).    
`name`  
Type: String  
Required: Yes  
The value to set as the environment variable on the container.  
`valueFrom`  
Type: String  
Required: Yes  
The secret to expose to the container. The supported values are either the full Amazon Resource Name (ARN) of the AWS Secrets Manager secret or the full ARN of the parameter in the AWS Systems Manager Parameter Store.  
If the Systems Manager Parameter Store parameter or Secrets Manager parameter exists in the same AWS Region as the task that you're launching, you can use either the full ARN or name of the secret. If the parameter exists in a different Region, then the full ARN must be specified.

```
"secrets": [
    {
        "name": "environment_variable_name",
        "valueFrom": "arn:aws:ssm:region:aws_account_id:parameter/parameter_name"
    }
]
```

#### Security
<a name="container_definition_security-managed-instances"></a>

`privileged`  
Type: Boolean  
Required: No  
When this parameter is `true`, the container is given elevated privileges on the host container instance (similar to the `root` user). This parameter maps to `Privileged` in the docker create-container command and the `--privileged` option to docker run.

`user`  
Type: String  
Required: No  
The user to use inside the container. This parameter maps to `User` in the docker create-container command and the `--user` option to docker run.  
When running tasks using the `host` network mode, don't run containers using the root user (UID 0). We recommend using a non-root user for better security.
You can specify the `user` using the following formats. If specifying a UID or GID, you must specify it as a positive integer.  
+ `user`
+ `user:group`
+ `uid`
+ `uid:gid`
+ `user:gid`
+ `uid:group`
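
For example, to run the container as a hypothetical non-root user with UID and GID `1000`:

```
"user": "1000:1000"
```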

`readonlyRootFilesystem`  
Type: Boolean  
Required: No  
When this parameter is `true`, the container is given a read-only root filesystem. This parameter maps to `ReadonlyRootfs` in the docker create-container command and the `--read-only` option to docker run.

`dockerSecurityOptions`  
This parameter isn't supported for tasks running on Amazon ECS Managed Instances.
Type: String array  
Required: No  
A list of strings to provide custom labels for SELinux and AppArmor multi-level security systems. This field isn't valid for containers in tasks using Fargate.
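
As an illustration, on a container instance configured with a custom SELinux policy, you might pass a label such as the following (the label value is a placeholder):

```
"dockerSecurityOptions": ["label:type:my_container_t"]
```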

`ulimits`  
Type: Array of [Ulimit](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_Ulimit.html) objects  
Required: No  
A list of `ulimits` to set in the container. If a ulimit value is specified in a task definition, it overrides the default values set by Docker. This parameter maps to `Ulimits` in the docker create-container command and the `--ulimit` option to docker run. Valid naming values are displayed in the [Ulimit](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_Ulimit.html) data type.  
Amazon ECS tasks that are hosted on Fargate use the default resource limit values set by the operating system with the exception of the `nofile` resource limit parameter which Fargate overrides. The `nofile` resource limit sets a restriction on the number of open files that a container can use. The default `nofile` soft limit is `1024` and the default hard limit is `65535`.  
This parameter requires version 1.18 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: `sudo docker version --format '{{.Server.APIVersion}}'`
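
Based on the `nofile` defaults described above, a container that needs more open file descriptors might raise the soft limit to match the hard limit:

```
"ulimits": [
    {
        "name": "nofile",
        "softLimit": 65535,
        "hardLimit": 65535
    }
]
```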

`dockerLabels`  
Type: String to string map  
Required: No  
A key/value map of labels to add to the container. This parameter maps to `Labels` in the docker create-container command and the `--label` option to docker run.   
This parameter requires version 1.18 of the Docker Remote API or greater on your container instance.  

```
"dockerLabels": {"string": "string",
      ...}
```

#### Network settings
<a name="container_definition_network-managed-instances"></a>

`disableNetworking`  
This parameter isn't supported for tasks running on Amazon ECS Managed Instances.
Type: Boolean  
Required: No  
When this parameter is true, networking is off within the container.  
The default is `false`.  

```
"disableNetworking": true|false
```

`links`  
This parameter isn't supported for tasks running on Amazon ECS Managed Instances.
Type: String array  
Required: No  
The `links` parameter allows containers to communicate with each other without the need for port mappings. This parameter is only supported if the network mode of a task definition is set to `bridge`. The `name:internalName` construct is analogous to `name:alias` in Docker links. Up to 255 letters (uppercase and lowercase), numbers, hyphens, and underscores are allowed.  
Containers that are collocated on the same container instance might communicate with each other without requiring links or host port mappings. The network isolation on a container instance is controlled by security groups and VPC settings.

```
"links": ["name:internalName", ...]
```

`hostname`  
This parameter isn't supported for tasks running on Amazon ECS Managed Instances.
Type: String  
Required: No  
The hostname to use for your container. This parameter maps to `Hostname` in the docker create-container command and the `--hostname` option to docker run.  

```
"hostname": "string"
```

`dnsServers`  
This parameter isn't supported for tasks running on Amazon ECS Managed Instances.
Type: String array  
Required: No  
A list of DNS servers that are presented to the container.  

```
"dnsServers": ["string", ...]
```

`extraHosts`  
This parameter isn't supported for tasks that use the `awsvpc` network mode.
Type: Object array  
Required: No  
A list of hostnames and IP address mappings to append to the `/etc/hosts` file on the container.   
This parameter maps to `ExtraHosts` in the docker create-container command and the `--add-host` option to docker run.  

```
"extraHosts": [
      {
        "hostname": "string",
        "ipAddress": "string"
      }
      ...
    ]
```  
`hostname`  
Type: String  
Required: Yes, when `extraHosts` are used  
The hostname to use in the `/etc/hosts` entry.  
`ipAddress`  
Type: String  
Required: Yes, when `extraHosts` are used  
The IP address to use in the `/etc/hosts` entry.

#### Storage and logging
<a name="container_definition_storage-managed-instances"></a>

`readonlyRootFilesystem`  
Type: Boolean  
Required: No  
When this parameter is true, the container is given read-only access to its root file system. This parameter maps to `ReadonlyRootfs` in the docker create-container command and the `--read-only` option to docker run.  
The default is `false`.  

```
"readonlyRootFilesystem": true|false
```

`mountPoints`  
Type: Object array  
Required: No  
The mount points for the data volumes in your container. This parameter maps to `Volumes` in the create-container Docker API and the `--volume` option to docker run.  
Windows containers can mount whole directories on the same drive as `$env:ProgramData`. Windows containers cannot mount directories on a different drive, and mount points cannot be used across drives. You must specify mount points to attach an Amazon EBS volume directly to an Amazon ECS task.    
`sourceVolume`  
Type: String  
Required: Yes, when `mountPoints` are used  
The name of the volume to mount.  
`containerPath`  
Type: String  
Required: Yes, when `mountPoints` are used  
The path in the container where the volume will be mounted.  
`readOnly`  
Type: Boolean  
Required: No  
If this value is `true`, the container has read-only access to the volume. If this value is `false`, then the container can write to the volume. The default value is `false`.  
For tasks that run on EC2 instances running the Windows operating system, leave the value as the default of `false`.
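
Following the sub-parameters described above, a mount point entry takes this shape (the volume name and path are placeholders):

```
"mountPoints": [
    {
        "sourceVolume": "my-volume",
        "containerPath": "/path/in/container",
        "readOnly": true|false
    }
]
```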

`volumesFrom`  
Type: Object array  
Required: No  
Data volumes to mount from another container. This parameter maps to `VolumesFrom` in the docker create-container command and the `--volumes-from` option to docker run.    
`sourceContainer`  
Type: String  
Required: Yes, when `volumesFrom` is used  
The name of the container to mount volumes from.  
`readOnly`  
Type: Boolean  
Required: No  
If this value is `true`, the container has read-only access to the volume. If this value is `false`, then the container can write to the volume. The default value is `false`.

```
"volumesFrom": [
                {
                  "sourceContainer": "string",
                  "readOnly": true|false
                }
              ]
```

`logConfiguration`  
Type: [LogConfiguration](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_LogConfiguration.html) Object  
Required: No  
The log configuration specification for the container.  
For example task definitions that use a log configuration, see [Example Amazon ECS task definitions](example_task_definitions.md).  
This parameter maps to `LogConfig` in the docker create-container command and the `--log-driver` option to docker run. By default, containers use the same logging driver that the Docker daemon uses. However, the container might use a different logging driver than the Docker daemon by specifying a log driver with this parameter in the container definition. To use a different logging driver for a container, the log system must be configured properly on the container instance (or on a different log server for remote logging options).   
Consider the following when specifying a log configuration for your containers:  
+ Amazon ECS supports a subset of the logging drivers that are available to the Docker daemon.
+ This parameter requires version 1.18 or later of the Docker Remote API on your container instance.

```
"logConfiguration": {
      "logDriver": "awslogs"|"splunk"|"awsfirelens",
      "options": {"string": "string"
        ...},
	"secretOptions": [{
		"name": "string",
		"valueFrom": "string"
	}]
}
```  
`logDriver`  
Type: String  
Valid values: `"awslogs","splunk","awsfirelens"`  
Required: Yes, when `logConfiguration` is used  
The log driver to use for the container. By default, the valid values that are listed earlier are log drivers that the Amazon ECS container agent can communicate with.  
The supported log drivers are `awslogs`, `splunk`, and `awsfirelens`.  
For more information about how to use the `awslogs` log driver in task definitions to send your container logs to CloudWatch Logs, see [Send Amazon ECS logs to CloudWatch](using_awslogs.md).  
For more information about using the `awsfirelens` log driver, see [Send Amazon ECS logs to an AWS service or AWS Partner](using_firelens.md).  
If you have a custom driver that isn't listed, you can fork the Amazon ECS container agent project that's [available on GitHub](https://github.com/aws/amazon-ecs-agent) and customize it to work with that driver. We encourage you to submit pull requests for changes that you want to have included. However, we don't currently support running modified copies of this software.
This parameter requires version 1.18 of the Docker Remote API or greater on your container instance.  
`options`  
Type: String to string map  
Required: No  
The key/value map of configuration options to send to the log driver.  
The options you can specify depend on the log driver. Some of the options you can specify when you use the `awslogs` log driver to route logs to Amazon CloudWatch include the following:    
`awslogs-create-group`  
Required: No  
Specify whether you want the log group to be created automatically. If this option isn't specified, it defaults to `false`.  
Your IAM policy must include the `logs:CreateLogGroup` permission before you attempt to use `awslogs-create-group`.  
`awslogs-region`  
Required: Yes  
Specify the AWS Region that the `awslogs` log driver is to send your Docker logs to. You can choose to send all of your logs from clusters in different Regions to a single Region in CloudWatch Logs so that they're all visible in one location. Otherwise, you can separate them by Region for more granularity. Make sure that the specified log group exists in the Region that you specify with this option.  
`awslogs-group`  
Required: Yes  
Make sure to specify a log group that the `awslogs` log driver sends its log streams to.  
`awslogs-stream-prefix`  
Required: Yes  
Use the `awslogs-stream-prefix` option to associate a log stream with the specified prefix, the container name, and the ID of the Amazon ECS task that the container belongs to. If you specify a prefix with this option, then the log stream takes the following format.  

```
prefix-name/container-name/ecs-task-id
```
If you don't specify a prefix with this option, then the log stream is named after the container ID that's assigned by the Docker daemon on the container instance. Because it's difficult to trace logs back to the container that sent them with just the Docker container ID (which is only available on the container instance), we recommend that you specify a prefix with this option.  
For Amazon ECS services, you can use the service name as the prefix. Doing so, you can trace log streams to the service that the container belongs to, the name of the container that sent them, and the ID of the task that the container belongs to.  
You must specify a stream-prefix for your logs to have your logs appear in the Log pane when using the Amazon ECS console.  
`awslogs-datetime-format`  
Required: No  
This option defines a multiline start pattern in Python `strftime` format. A log message consists of a line that matches the pattern and any following lines that don’t match the pattern. The matched line is the delimiter between log messages.  
One example of a use case for using this format is for parsing output such as a stack dump, which might otherwise be logged in multiple entries. The correct pattern allows it to be captured in a single entry.  
For more information, see [awslogs-datetime-format](https://docs.docker.com/engine/logging/drivers/awslogs/#awslogs-datetime-format).  
You cannot configure both the `awslogs-datetime-format` and `awslogs-multiline-pattern` options.  
Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance.  
`awslogs-multiline-pattern`  
Required: No  
This option defines a multiline start pattern that uses a regular expression. A log message consists of a line that matches the pattern and any following lines that don’t match the pattern. The matched line is the delimiter between log messages.  
For more information, see [awslogs-multiline-pattern](https://docs.docker.com/engine/logging/drivers/awslogs/#awslogs-multiline-pattern).  
This option is ignored if `awslogs-datetime-format` is also configured.  
You cannot configure both the `awslogs-datetime-format` and `awslogs-multiline-pattern` options.  
Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance.  
`mode`  
Required: No  
Valid values: `non-blocking` | `blocking`  
This option defines the delivery mode of log messages from the container to the `awslogs` log driver. The delivery mode you choose affects application availability when the flow of logs from the container is interrupted.  
If you use the `blocking` mode and the flow of logs to CloudWatch is interrupted, calls from container code to write to the `stdout` and `stderr` streams will block. The logging thread of the application will block as a result. This may cause the application to become unresponsive and lead to container healthcheck failure.   
If you use the `non-blocking` mode, the container's logs are instead stored in an in-memory intermediate buffer configured with the `max-buffer-size` option. This prevents the application from becoming unresponsive when logs cannot be sent to CloudWatch. We recommend using this mode if you want to ensure service availability and are okay with some log loss. For more information, see [Preventing log loss with non-blocking mode in the `awslogs` container log driver](https://aws.amazon.com/blogs/containers/preventing-log-loss-with-non-blocking-mode-in-the-awslogs-container-log-driver/).  
`max-buffer-size`  
Required: No  
Default value: `1m`  
When `non-blocking` mode is used, the `max-buffer-size` log option controls the size of the buffer that's used for intermediate message storage. Make sure to specify an adequate buffer size based on your application. When the buffer fills up, further logs cannot be stored. Logs that cannot be stored are lost. 
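
Putting the `awslogs` options above together, a configuration might look like the following sketch (the log group name, Region, prefix, and buffer size are placeholders):

```
"logConfiguration": {
    "logDriver": "awslogs",
    "options": {
        "awslogs-create-group": "true",
        "awslogs-group": "/ecs/my-app",
        "awslogs-region": "us-east-1",
        "awslogs-stream-prefix": "my-service",
        "mode": "non-blocking",
        "max-buffer-size": "4m"
    }
}
```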
To route logs using the `splunk` log router, you need to specify a `splunk-token` and a `splunk-url`.  
When you use the `awsfirelens` log router to route logs to an AWS service or AWS Partner Network destination for log storage and analytics, you can set the `log-driver-buffer-limit` option to limit the number of events that are buffered in memory, before being sent to the log router container. It can help to resolve potential log loss issue because high throughput might result in memory running out for the buffer inside of Docker. For more information, see [Configuring Amazon ECS logs for high throughput](firelens-docker-buffer-limit.md).  
Other options you can specify when using `awsfirelens` to route logs depend on the destination. When you export logs to Amazon Data Firehose, you can specify the AWS Region with `region` and a name for the log stream with `delivery_stream`.  
When you export logs to Amazon Kinesis Data Streams, you can specify an AWS Region with `region` and a data stream name with `stream`.  
 When you export logs to Amazon OpenSearch Service, you can specify options like `Name`, `Host` (OpenSearch Service endpoint without protocol), `Port`, `Index`, `Type`, `Aws_auth`, `Aws_region`, `Suppress_Type_Name`, and `tls`.  
When you export logs to Amazon S3, you can specify the bucket using the `bucket` option. You can also specify `region`, `total_file_size`, `upload_timeout`, and `use_put_object` as options.  
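
As an illustration, routing logs to a hypothetical Firehose delivery stream with `awsfirelens` could look like the following (the `Name` value assumes the Fluent Bit Firehose output plugin; the Region and stream name are placeholders):

```
"logConfiguration": {
    "logDriver": "awsfirelens",
    "options": {
        "Name": "firehose",
        "region": "us-west-2",
        "delivery_stream": "my-stream"
    }
}
```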
This parameter requires version 1.19 of the Docker Remote API or greater on your container instance.  
`secretOptions`  
Type: Object array  
Required: No  
An object that represents the secret to pass to the log configuration. Secrets that are used in log configuration can include an authentication token, certificate, or encryption key. For more information, see [Pass sensitive data to an Amazon ECS container](specifying-sensitive-data.md).    
`name`  
Type: String  
Required: Yes  
The value to set as the environment variable on the container.  
`valueFrom`  
Type: String  
Required: Yes  
The secret to expose to the log configuration of the container.

```
"logConfiguration": {
	"logDriver": "splunk",
	"options": {
		"splunk-url": "https://cloud.splunk.com:8080",
		"splunk-token": "...",
		"tag": "...",
		...
	},
	"secretOptions": [{
		"name": "splunk-token",
		"valueFrom": "/ecs/logconfig/splunkcred"
	}]
}
```

`firelensConfiguration`  
Type: [FirelensConfiguration](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_FirelensConfiguration.html) Object  
Required: No  
The FireLens configuration for the container. This is used to specify and configure a log router for container logs. For more information, see [Send Amazon ECS logs to an AWS service or AWS Partner](using_firelens.md).  

```
{
    "firelensConfiguration": {
        "type": "fluentd",
        "options": {
            "KeyName": ""
        }
    }
}
```  
`options`  
Type: String to string map  
Required: No  
The key/value map of options to use when configuring the log router. This field is optional and can be used to specify a custom configuration file or to add additional metadata, such as the task, task definition, cluster, and container instance details to the log event. If specified, the syntax to use is `"options":{"enable-ecs-log-metadata":"true|false","config-file-type":"s3|file","config-file-value":"arn:aws:s3:::amzn-s3-demo-bucket/fluent.conf|filepath"}`. For more information, see [Example Amazon ECS task definition: Route logs to FireLens](firelens-taskdef.md).  
`type`  
Type: String  
Required: Yes  
The log router to use. The valid values are `fluentd` or `fluentbit`.
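
Formatted as JSON, the options syntax described above looks like this (the bucket and file names are placeholders):

```
"firelensConfiguration": {
    "type": "fluentbit",
    "options": {
        "enable-ecs-log-metadata": "true",
        "config-file-type": "s3",
        "config-file-value": "arn:aws:s3:::amzn-s3-demo-bucket/fluent.conf"
    }
}
```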

#### Resource requirements
<a name="container_definition_resourcerequirements-managed-instances"></a>

`resourceRequirements`  
Type: Array of [ResourceRequirement](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_ResourceRequirement.html) objects  
Required: No  
The type and amount of a resource to assign to a container. The only supported resource is a GPU.    
`type`  
Type: String  
Required: Yes  
The type of resource to assign to a container. The supported value is `GPU`.  
`value`  
Type: String  
Required: Yes  
The value for the specified resource type.  
If the `GPU` type is used, the value is the number of physical `GPUs` the Amazon ECS container agent reserves for the container. The number of GPUs that's reserved for all containers in a task can't exceed the number of available GPUs on the container instance the task is launched on.  
GPUs aren't available for tasks that are running on Fargate.
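
For example, to reserve two physical GPUs for a container:

```
"resourceRequirements": [
    {
        "type": "GPU",
        "value": "2"
    }
]
```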

#### Container timeouts
<a name="container_definition_timeout-managed-instances"></a>

`startTimeout`  
Type: Integer  
Required: No  
Example values: `120`  
Time duration (in seconds) to wait before giving up on resolving dependencies for a container.  
For example, you specify two containers in a task definition with `containerA` having a dependency on `containerB` reaching a `COMPLETE`, `SUCCESS`, or `HEALTHY` status. If a `startTimeout` value is specified for `containerB` and it doesn't reach the desired status within that time, then `containerA` doesn't start.  
If a container doesn't meet a dependency constraint or times out before meeting the constraint, Amazon ECS doesn't progress dependent containers to their next state.
The maximum value is 600 seconds (10 minutes).

`stopTimeout`  
Type: Integer  
Required: No  
Example values: `120`  
Time duration (in seconds) to wait before the container is forcefully killed if it doesn't exit normally on its own.  
If the parameter isn't specified, then the default value of 30 seconds is used. The maximum value is 86400 seconds (24 hours).

#### Container dependency
<a name="container_definition_dependency-managed-instances"></a>

`dependsOn`  
Type: Array of [ContainerDependency](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_ContainerDependency.html) objects  
Required: No  
The dependencies defined for container startup and shutdown. A container can have multiple dependencies. When a dependency is defined for container startup, it's reversed for container shutdown. For an example, see [Container dependency](example_task_definitions.md#example_task_definition-containerdependency).  
If a container doesn't meet a dependency constraint or times out before meeting the constraint, Amazon ECS doesn't progress dependent containers to their next state.
This parameter requires that the task or service uses platform version `1.3.0` or later (Linux) or `1.0.0` (Windows).  

```
"dependsOn": [
    {
        "containerName": "string",
        "condition": "string"
    }
]
```  
`containerName`  
Type: String  
Required: Yes  
The container name that must meet the specified condition.  
`condition`  
Type: String  
Required: Yes  
The dependency condition of the container. The following are the available conditions and their behavior:  
+ `START` – This condition emulates the behavior of links and volumes today. The condition validates that a dependent container is started before permitting other containers to start.
+ `COMPLETE` – This condition validates that a dependent container runs to completion (exits) before permitting other containers to start. This can be useful for non-essential containers that run a script and then exit. This condition can't be set on an essential container.
+ `SUCCESS` – This condition is the same as `COMPLETE`, but it also requires that the container exits with a `zero` status. This condition can't be set on an essential container.
+ `HEALTHY` – This condition validates that the dependent container passes its container health check before permitting other containers to start. This requires that the dependent container has health checks configured in the task definition. This condition is confirmed only at task startup.

#### System controls
<a name="container_definition_systemcontrols-managed-instances"></a>

`systemControls`  
Type: [SystemControl](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_SystemControl.html) object  
Required: No  
A list of namespaced kernel parameters to set in the container. This parameter maps to `Sysctls` in the docker create-container command and the `--sysctl` option to docker run. For example, you can configure the `net.ipv4.tcp_keepalive_time` setting to maintain longer-lived connections.  
We don't recommend that you specify network-related `systemControls` parameters for multiple containers in a single task that also uses either the `awsvpc` or `host` network mode. Doing this has the following disadvantages:  
+ If you set `systemControls` for any container, it applies to all containers in the task. If you set different `systemControls` for multiple containers in a single task, the container that's started last determines which `systemControls` take effect.
If you're setting an IPC resource namespace to use for the containers in the task, the following conditions apply to your system controls. For more information, see [IPC mode](task_definition_parameters.md#task_definition_ipcmode).  
+ For tasks that use the `host` IPC mode, IPC namespace `systemControls` aren't supported.
+ For tasks that use the `task` IPC mode, IPC namespace `systemControls` values apply to all containers within a task.

```
"systemControls": [
    {
         "namespace":"string",
         "value":"string"
    }
]
```  
`namespace`  
Type: String  
Required: No  
The namespace kernel parameter to set a `value` for.  
Valid IPC namespace values: `"kernel.msgmax" | "kernel.msgmnb" | "kernel.msgmni" | "kernel.sem" | "kernel.shmall" | "kernel.shmmax" | "kernel.shmmni" | "kernel.shm_rmid_forced"`, and `Sysctls` that start with `"fs.mqueue.*"`  
Valid network namespace values: `Sysctls` that start with `"net.*"`. On Fargate, only namespaced `Sysctls` that exist within the container are accepted.  
All of these values are supported by Fargate.  
`value`  
Type: String  
Required: No  
The value for the namespace kernel parameter that's specified in `namespace`.

#### Interactive
<a name="container_definition_interactive-managed-instances"></a>

`interactive`  
Type: Boolean  
Required: No  
When this parameter is `true`, you can deploy containerized applications that require `stdin` or a `tty` to be allocated. This parameter maps to `OpenStdin` in the docker create-container command and the `--interactive` option to docker run.  
The default is `false`.

#### Pseudo terminal
<a name="container_definition_pseudoterminal-managed-instances"></a>

`pseudoTerminal`  
Type: Boolean  
Required: No  
When this parameter is `true`, a TTY is allocated. This parameter maps to `Tty` in the docker create-container command and the `--tty` option to docker run.  
The default is `false`.

### Linux parameters
<a name="container_definition_linuxparameters-managed-instances"></a>

`linuxParameters`  
Type: [LinuxParameters](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_LinuxParameters.html) object  
Required: No  
Linux-specific modifications that are applied to the container, such as Linux kernel capabilities.    
`capabilities`  
Type: [KernelCapabilities](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_KernelCapabilities.html) object  
Required: No  
The Linux capabilities for the container that are added to or dropped from the default configuration provided by Docker.  
`devices`  
Type: Array of [Device](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_Device.html) objects  
Required: No  
Any host devices to expose to the container. This parameter maps to `Devices` in the docker create-container command and the `--device` option to docker run.  
`initProcessEnabled`  
Type: Boolean  
Required: No  
Run an `init` process inside the container that forwards signals and reaps processes. This parameter maps to the `--init` option to docker run.  
`maxSwap`  
This parameter isn't supported for tasks running on Amazon ECS Managed Instances.
Type: Integer  
Required: No  
The total amount of swap memory (in MiB) a container can use. This parameter is translated to the `--memory-swap` option to docker run where the value is the sum of the container memory plus the `maxSwap` value.  
`swappiness`  
This parameter isn't supported for tasks running on Amazon ECS Managed Instances.
Type: Integer  
Required: No  
This allows you to tune a container's memory swappiness behavior. A `swappiness` value of `0` causes swapping not to happen unless absolutely necessary. A `swappiness` value of `100` causes pages to be swapped very aggressively. Valid values are whole numbers between `0` and `100`. If the `swappiness` parameter isn't specified, a default value of `60` is used. If a value isn't specified for `maxSwap`, then this parameter is ignored. This parameter maps to the `--memory-swappiness` option to docker run.  
`sharedMemorySize`  
This parameter isn't supported for tasks running on Amazon ECS Managed Instances.
Type: Integer  
Required: No  
The size (in MiB) of the `/dev/shm` volume. This parameter maps to the `--shm-size` option to docker run.  
`tmpfs`  
Type: Array of [Tmpfs](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_Tmpfs.html) objects  
Required: No  
The container path, mount options, and size (in MiB) of the tmpfs mount. This parameter maps to the `--tmpfs` option to docker run.
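Putting several of these fields together, a `linuxParameters` object that enables an `init` process and mounts a small tmpfs volume could look like the following sketch. The path, size, and mount options are example values.

```
"linuxParameters": {
    "initProcessEnabled": true,
    "tmpfs": [
        {
            "containerPath": "/run/scratch",
            "size": 128,
            "mountOptions": [ "rw", "noexec" ]
        }
    ]
}
```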

# Amazon ECS task definition parameters for Fargate
<a name="task_definition_parameters"></a>

Task definitions are split into separate parts: the task family, the AWS Identity and Access Management (IAM) task role, the network mode, container definitions, volumes, and launch types. The family and container definitions are required in a task definition. In contrast, task role, network mode, volumes, and launch type are optional.

You can use these parameters in a JSON file to configure your task definition.

The following are more detailed descriptions for each task definition parameter for Fargate.

## Family
<a name="family"></a>

`family`  
Type: String  
Required: Yes  
When you register a task definition, you give it a family, which is similar to a name for multiple versions of the task definition, specified with a revision number. The first task definition that's registered into a particular family is given a revision of 1, and any task definitions registered after that are given a sequential revision number.

## Capacity
<a name="requires_compatibilities"></a>

When you register a task definition, you can specify the capacity that Amazon ECS should validate the task definition against. If the task definition doesn't validate against the compatibilities specified, a client exception is returned. 

The following parameter is allowed in a task definition.

`requiresCompatibilities`  
Type: String array  
Required: No  
Valid Values: `FARGATE`  
The capacity to validate the task definition against. This initiates a check to ensure that all of the parameters that are used in the task definition meet the requirements of Fargate.
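For example, a task definition intended for Fargate declares:

```
"requiresCompatibilities": [ "FARGATE" ]
```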

## Task role
<a name="task_role_arn"></a>

`taskRoleArn`  
Type: String  
Required: No  
When you register a task definition, you can provide a task role for an IAM role that allows the containers in the task permission to call the AWS APIs that are specified in its associated policies on your behalf. For more information, see [Amazon ECS task IAM role](task-iam-roles.md).

## Task execution role
<a name="execution_role_arn"></a>

`executionRoleArn`  
Type: String  
Required: Conditional  
The Amazon Resource Name (ARN) of the task execution role that grants the Amazon ECS container agent permission to make AWS API calls on your behalf.   
The task execution IAM role is required depending on the requirements of your task. For more information, see [Amazon ECS task execution IAM role](task_execution_IAM_role.md).
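Both role parameters take an IAM role ARN. In the following sketch, the account ID and role names are placeholders.

```
"taskRoleArn": "arn:aws:iam::111122223333:role/my-task-role",
"executionRoleArn": "arn:aws:iam::111122223333:role/ecsTaskExecutionRole"
```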

## Network mode
<a name="network_mode"></a>

`networkMode`  
Type: String  
Required: Yes  
The Docker networking mode to use for the containers in the task. For Amazon ECS tasks hosted on Fargate, the `awsvpc` network mode is required.

## Runtime platform
<a name="runtime-platform"></a>

`operatingSystemFamily`  
Type: String  
Required: Conditional  
Default: LINUX  
This parameter is required for Amazon ECS tasks that are hosted on Fargate.  
When you register a task definition, you specify the operating system family.   
The valid values are `LINUX`, `WINDOWS_SERVER_2025_FULL`, `WINDOWS_SERVER_2025_CORE`, `WINDOWS_SERVER_2022_FULL`, `WINDOWS_SERVER_2022_CORE`, `WINDOWS_SERVER_2019_FULL`, and `WINDOWS_SERVER_2019_CORE`.  
All task definitions that are used in a service must have the same value for this parameter.  
When a task definition is part of a service, this value must match the service `platformFamily` value.

`cpuArchitecture`  
Type: String  
Required: Conditional  
Default: X86_64  
If the parameter is left as `null`, the default value is automatically assigned upon the initiation of a task hosted on Fargate.  
When you register a task definition, you specify the CPU architecture. The valid values are `X86_64` and `ARM64`.  
All task definitions that are used in a service must have the same value for this parameter.  
When you have Linux tasks, you can set the value to `ARM64`. For more information, see [Amazon ECS task definitions for 64-bit ARM workloads](ecs-arm64.md).
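In a task definition JSON file, these two parameters are set inside the `runtimePlatform` object. For example, a Linux task that targets ARM64:

```
"runtimePlatform": {
    "operatingSystemFamily": "LINUX",
    "cpuArchitecture": "ARM64"
}
```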

## Task size
<a name="task_size"></a>

When you register a task definition, you can specify the total CPU and memory used for the task. This is separate from the `cpu` and `memory` values at the container definition level. For tasks that are hosted on Fargate (both Linux and Windows), these fields are required and there are specific values for both `cpu` and `memory` that are supported.

The following parameter is allowed in a task definition:

`cpu`  
Type: String  
Required: Yes  
Task-level CPU and memory parameters are required and used to determine the instance type and size that tasks run on. For Windows tasks, these values aren’t enforced at runtime, because Windows doesn't have a native mechanism that can easily enforce collective resource limits on a group of containers. If you want to enforce resource limits, we recommend using the container-level resources for Windows containers.
The hard limit of CPU units to present for the task. You can specify CPU values in the JSON file as a string in CPU units or virtual CPUs (vCPUs). For example, you can specify a CPU value either as `1024` in CPU units or `1 vCPU` in vCPUs. When the task definition is registered, a vCPU value is converted to an integer indicating the CPU units.  
This field is required and you must use one of the following values, which determines your range of supported values for the `memory` parameter. The following table shows the valid combinations of task-level CPU and memory.  

| CPU value | Memory values |
| --- | --- |
| 256 (.25 vCPU) | 512 MiB, 1 GB, 2 GB |
| 512 (.5 vCPU) | 1 GB, 2 GB, 3 GB, 4 GB |
| 1024 (1 vCPU) | 2 GB, 3 GB, 4 GB, 5 GB, 6 GB, 7 GB, 8 GB |
| 2048 (2 vCPU) | Between 4 GB and 16 GB in 1 GB increments |
| 4096 (4 vCPU) | Between 8 GB and 30 GB in 1 GB increments |
| 8192 (8 vCPU) | Between 16 GB and 60 GB in 4 GB increments |
| 16384 (16 vCPU) | Between 32 GB and 120 GB in 8 GB increments |

`memory`  
Type: String  
Required: Yes  
Task-level CPU and memory parameters are required and used to determine the instance type and size that tasks run on. For Windows tasks, these values aren’t enforced at runtime, because Windows doesn't have a native mechanism that can easily enforce collective resource limits on a group of containers. If you want to enforce resource limits, we recommend using the container-level resources for Windows containers.
The hard limit of memory to present to the task. You can specify memory values in the task definition as a string in mebibytes (MiB) or gigabytes (GB). For example, you can specify a memory value either as `3072` in MiB or `3 GB` in GB. When the task definition is registered, a GB value is converted to an integer indicating the MiB.  
This field is required and you must use one of the following values, which determines your range of supported values for the `cpu` parameter:      
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html)
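For example, the following snippet requests 1 vCPU and 3 GB for the task, expressed in CPU units and MiB.

```
"cpu": "1024",
"memory": "3072"
```

The equivalent `"cpu": "1 vCPU"` and `"memory": "3 GB"` forms are converted to these integer values when the task definition is registered.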

## Container definitions
<a name="container_definitions"></a>

When you register a task definition, you must specify a list of container definitions that are passed to the Docker daemon on a container instance. The following parameters are allowed in a container definition.

**Topics**
+ [Standard container definition parameters](#standard_container_definition_params)
+ [Advanced container definition parameters](#advanced_container_definition_params)
+ [Other container definition parameters](#other_container_definition_params)

### Standard container definition parameters
<a name="standard_container_definition_params"></a>

The following task definition parameters are either required or used in most container definitions.

**Topics**
+ [Name](#container_definition_name)
+ [Image](#container_definition_image)
+ [Memory](#container_definition_memory)
+ [Port mappings](#container_definition_portmappings)
+ [Private Repository Credentials](#container_definition_repositoryCredentials)

#### Name
<a name="container_definition_name"></a>

`name`  
Type: String  
Required: Yes  
The name of a container. Up to 255 letters (uppercase and lowercase), numbers, hyphens, and underscores are allowed. If you're linking multiple containers in a task definition, the `name` of one container can be entered in the `links` of another container to connect the containers.

#### Image
<a name="container_definition_image"></a>

`image`  
Type: String  
Required: Yes  
The image used to start a container. This string is passed directly to the Docker daemon. By default, images in the Docker Hub registry are available. You can also specify other repositories with either `repository-url/image:tag` or `repository-url/image@digest`. Up to 255 letters (uppercase and lowercase), numbers, hyphens, underscores, colons, periods, forward slashes, and number signs are allowed. This parameter maps to `Image` in the docker create-container command and the `IMAGE` parameter of the docker run command.  
+ When a new task starts, the Amazon ECS container agent pulls the latest version of the specified image and tag for the container to use. However, subsequent updates to a repository image aren't propagated to already running tasks.
+ When you don't specify a tag or digest in the image path in the task definition, the Amazon ECS container agent uses the `latest` tag to pull the specified image. 
+ Images in private registries are supported. For more information, see [Using non-AWS container images in Amazon ECS](private-auth.md).
+ Images in Amazon ECR repositories can be specified by using either the full `registry/repository:tag` or `registry/repository@digest` naming convention (for example, `aws_account_id.dkr.ecr.region.amazonaws.com/my-web-app:latest` or `aws_account_id.dkr.ecr.region.amazonaws.com/my-web-app@sha256:94afd1f2e64d908bc90dbca0035a5b567EXAMPLE`).
+ Images in official repositories on Docker Hub use a single name (for example, `ubuntu` or `mongo`).
+ Images in other repositories on Docker Hub are qualified with an organization name (for example, `amazon/amazon-ecs-agent`).
+ Images in other online repositories are qualified further by a domain name (for example, `quay.io/assemblyline/ubuntu`).

`versionConsistency`  
Type: String  
Valid values: `enabled` | `disabled`  
Required: No  
Specifies whether Amazon ECS will resolve the container image tag provided in the container definition to an image digest. By default, this behavior is `enabled`. If you set the value for a container as `disabled`, Amazon ECS will not resolve the container image tag to a digest and will use the original image URI specified in the container definition for deployment. For more information about container image resolution, see [Container image resolution](deployment-type-ecs.md#deployment-container-image-stability).

#### Memory
<a name="container_definition_memory"></a>

`memory`  
Type: Integer  
Required: No  
The amount (in MiB) of memory to present to the container. If your container attempts to exceed the memory specified here, the container is killed. The total amount of memory reserved for all containers within a task must be lower than the task `memory` value, if one is specified. This parameter maps to `Memory` in the docker create-container command and the `--memory` option to docker run.  
The Docker 20.10.0 or later daemon reserves a minimum of 6 MiB of memory for a container. So, don't specify less than 6 MiB of memory for your containers.  
The Docker 19.03.13-ce or earlier daemon reserves a minimum of 4 MiB of memory for a container. So, don't specify less than 4 MiB of memory for your containers.  
If you're trying to maximize your resource utilization by providing your tasks as much memory as possible for a particular instance type, see [Reserving Amazon ECS Linux container instance memory](memory-management.md).

`memoryReservation`  
Type: Integer  
Required: No  
The soft limit (in MiB) of memory to reserve for the container. When system memory is under contention, Docker attempts to keep the container memory to this soft limit. However, your container can use more memory when needed. The container can use up to the hard limit that's specified with the `memory` parameter (if applicable) or all of the available memory on the container instance, whichever comes first. This parameter maps to `MemoryReservation` in the docker create-container command and the `--memory-reservation` option to docker run.  
If a task-level memory value isn't specified, you must specify a non-zero integer for one or both of `memory` or `memoryReservation` in a container definition. If you specify both, `memory` must be greater than `memoryReservation`. If you specify `memoryReservation`, then that value is subtracted from the available memory resources for the container instance that the container is placed on. Otherwise, the value of `memory` is used.  
For example, suppose that your container normally uses 128 MiB of memory, but occasionally bursts to 256 MiB of memory for short periods of time. You can set a `memoryReservation` of 128 MiB, and a `memory` hard limit of 300 MiB. This configuration allows the container to only reserve 128 MiB of memory from the remaining resources on the container instance. At the same time, this configuration also allows the container to use more memory resources when needed.  
This parameter isn't supported for Windows containers.
The Docker 20.10.0 or later daemon reserves a minimum of 6 MiB of memory for a container. So, don't specify less than 6 MiB of memory for your containers.  
The Docker 19.03.13-ce or earlier daemon reserves a minimum of 4 MiB of memory for a container. So, don't specify less than 4 MiB of memory for your containers.  
If you're trying to maximize your resource utilization by providing your tasks as much memory as possible for a particular instance type, see [Reserving Amazon ECS Linux container instance memory](memory-management.md).
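The 128 MiB/300 MiB scenario described above looks like the following in a container definition (the container name is a placeholder).

```
"containerDefinitions": [
    {
        "name": "my-app",
        "memoryReservation": 128,
        "memory": 300
    }
]
```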

#### Port mappings
<a name="container_definition_portmappings"></a>

`portMappings`  
Type: Object array  
Required: No  
Port mappings expose your container's network ports to the outside world, which allows clients to access your application. Port mappings are also used for inter-container communication within the same task.  
For task definitions that use the `awsvpc` network mode, only specify the `containerPort`. The `hostPort` can either be left blank or it must be the same value as the `containerPort`.  
Port mappings on Windows use the `NetNAT` gateway address rather than `localhost`. There's no loopback for port mappings on Windows, so you can't access a container's mapped port from the host itself.   
Most fields of this parameter (including `containerPort`, `hostPort`, `protocol`) map to `PortBindings` in the docker create-container command and the `--publish` option to docker run. If the network mode of a task definition is set to `host`, host ports must either be undefined or match the container port in the port mapping.  
After a task reaches the `RUNNING` status, manual and automatic host and container port assignments are visible in the following locations:  
+ Console: The **Network Bindings** section of a container description for a selected task.
+ AWS CLI: The `networkBindings` section of the **describe-tasks** command output.
+ API: The `DescribeTasks` response.
+ Metadata: The task metadata endpoint.  
`appProtocol`  
Type: String  
Required: No  
The application protocol that's used for the port mapping. This parameter only applies to Service Connect. We recommend that you set this parameter to be consistent with the protocol that your application uses. If you set this parameter, Amazon ECS adds protocol-specific connection handling to the Service Connect proxy and protocol-specific telemetry in the Amazon ECS console and CloudWatch.  
If you don't set a value for this parameter, then TCP is used. However, Amazon ECS doesn't add protocol-specific telemetry for TCP.  
For more information, see [Use Service Connect to connect Amazon ECS services with short names](service-connect.md).  
Valid protocol values: `"http" | "http2" | "grpc" `  
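For example (the port mapping name and port number are placeholders), an HTTP port mapping for Service Connect could be declared as follows.

```
"portMappings": [
    {
        "name": "api",
        "containerPort": 8080,
        "appProtocol": "http"
    }
]
```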
`containerPort`  
Type: Integer  
Required: Yes, when `portMappings` are used  
The port number on the container that's bound to the user-specified or automatically assigned host port.  
For tasks that use the `awsvpc` network mode, you use `containerPort` to specify the exposed ports.  
For Windows containers on Fargate, you can't use port 3150 for the `containerPort`. This is because it's reserved.  
`containerPortRange`  
Type: String  
Required: No  
The port number range on the container that's bound to the dynamically mapped host port range.   
You can only set this parameter by using the `register-task-definition` API. The option is available in the `portMappings` parameter. For more information, see [register-task-definition](https://docs.aws.amazon.com/cli/latest/reference/ecs/register-task-definition.html) in the *AWS Command Line Interface Reference*.  
The following rules apply when you specify a `containerPortRange`:  
+ You must use the `awsvpc` network mode.
+ This parameter is available for both the Linux and Windows operating systems.
+ The container instance must have at least version 1.67.0 of the container agent and at least version 1.67.0-1 of the `ecs-init` package.
+ You can specify a maximum of 100 port ranges for each container.
+ You don't specify a `hostPortRange`. The value of the `hostPortRange` is set as follows:
  + For containers in a task with the `awsvpc` network mode, the `hostPort` is set to the same value as the `containerPort`. This is a static mapping strategy.
+ The `containerPortRange` valid values are between 1 and 65535.
+ A port can only be included in one port mapping for each container.
+ You can't specify overlapping port ranges.
+ The first port in the range must be less than the last port in the range.
+ Docker recommends that you turn off the docker-proxy in the Docker daemon config file when you have a large number of ports.

  For more information, see [Issue #11185](https://github.com/moby/moby/issues/11185) on GitHub.

  For information about how to turn off the docker-proxy in the Docker daemon config file, see [Docker daemon](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/bootstrap_container_instance.html#bootstrap_docker_daemon) in the *Amazon ECS Developer Guide*.
You can call [DescribeTasks](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_DescribeTasks.html) to view the `hostPortRange`, which lists the host ports that are bound to the container ports.  
The port ranges aren't included in the Amazon ECS task events, which are sent to EventBridge. For more information, see [Automate responses to Amazon ECS errors using EventBridge](cloudwatch_event_stream.md).  
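For example (the range shown is illustrative), a container port range is specified inside `portMappings` like the following.

```
"portMappings": [
    {
        "containerPortRange": "4000-4005",
        "protocol": "tcp"
    }
]
```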
`hostPortRange`  
Type: String  
Required: No  
The port number range on the host that's used with the network binding. This is assigned by Docker and delivered by the Amazon ECS agent.  
`hostPort`  
Type: Integer  
Required: No  
The port number on the container instance to reserve for your container.  
The `hostPort` can either be kept blank or be the same value as `containerPort`.  
The default ephemeral port range for Docker version 1.6.0 and later is listed on the instance under `/proc/sys/net/ipv4/ip_local_port_range`. If this kernel parameter is unavailable, the default ephemeral port range of `49153–65535` is used. Don't attempt to specify a host port in the ephemeral port range, because these ports are reserved for automatic assignment. In general, ports below `32768` are outside of the ephemeral port range.  
The default reserved ports are `22` for SSH, the Docker ports `2375` and `2376`, and the Amazon ECS container agent ports `51678-51680`. Any host port that was previously user-specified for a running task is also reserved while the task is running. After a task stops, the host port is released. The current reserved ports are displayed in the `remainingResources` of **describe-container-instances** output. A container instance might have up to 100 reserved ports at a time, including the default reserved ports. Automatically assigned ports don't count toward the 100 reserved ports quota.  
`name`  
Type: String  
Required: No, required for Service Connect and VPC Lattice to be configured in a service  
The name that's used for the port mapping. This parameter only applies to Service Connect and VPC Lattice. This parameter is the name that you use in the Service Connect and VPC Lattice configuration of a service.  
For more information, see [Use Service Connect to connect Amazon ECS services with short names](service-connect.md).  
In the following example, both of the required fields for Service Connect and VPC Lattice are used.  

```
"portMappings": [
    {
        "name": string,
        "containerPort": integer
    }
]
```  
`protocol`  
Type: String  
Required: No  
The protocol that's used for the port mapping. Valid values are `tcp` and `udp`. The default is `tcp`.  
Only `tcp` is supported for Service Connect. Remember that `tcp` is implied if this field isn't set. 
If you're specifying a host port, use the following syntax.  

```
"portMappings": [
    {
        "containerPort": integer,
        "hostPort": integer
    }
    ...
]
```
If you want an automatically assigned host port, use the following syntax.  

```
"portMappings": [
    {
        "containerPort": integer
    }
    ...
]
```

#### Private Repository Credentials
<a name="container_definition_repositoryCredentials"></a>

`repositoryCredentials`  
Type: [RepositoryCredentials](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_RepositoryCredentials.html) object  
Required: No  
The repository credentials for private registry authentication.  
For more information, see [Using non-AWS container images in Amazon ECS](private-auth.md).    
 `credentialsParameter`  
Type: String  
Required: Yes, when `repositoryCredentials` are used  
The Amazon Resource Name (ARN) of the secret containing the private repository credentials.  
For more information, see [Using non-AWS container images in Amazon ECS](private-auth.md).  
When you use the Amazon ECS API, AWS CLI, or AWS SDKs, if the secret exists in the same Region as the task that you're launching, then you can use either the full ARN or the name of the secret. When you use the AWS Management Console, you must specify the full ARN of the secret.
The following is a snippet of a task definition that shows the required parameters:  

```
"containerDefinitions": [
    {
        "image": "private-repo/private-image",
        "repositoryCredentials": {
            "credentialsParameter": "arn:aws:secretsmanager:region:aws_account_id:secret:secret_name"
        }
    }
]
```

### Advanced container definition parameters
<a name="advanced_container_definition_params"></a>

The following advanced container definition parameters provide extended capabilities to the docker run command that's used to launch containers on your Amazon ECS container instances.

**Topics**
+ [Restart policy](#container_definition_restart_policy)
+ [Health check](#container_definition_healthcheck)
+ [Environment](#container_definition_environment)
+ [Network settings](#container_definition_network)
+ [Storage and logging](#container_definition_storage)
+ [Security](#container_definition_security)
+ [Resource limits](#container_definition_limits)
+ [Docker labels](#container_definition_labels)

#### Restart policy
<a name="container_definition_restart_policy"></a>

`restartPolicy`  
The container restart policy and associated configuration parameters. When you set up a restart policy for a container, Amazon ECS can restart the container without needing to replace the task. For more information, see [Restart individual containers in Amazon ECS tasks with container restart policies](container-restart-policy.md).    
`enabled`  
Type: Boolean  
Required: Yes  
Specifies whether a restart policy is enabled for the container.  
`ignoredExitCodes`  
Type: Integer array  
Required: No  
A list of exit codes that Amazon ECS will ignore and not attempt a restart on. You can specify a maximum of 50 container exit codes. By default, Amazon ECS does not ignore any exit codes.  
`restartAttemptPeriod`  
Type: Integer  
Required: No  
A period of time (in seconds) that the container must run for before a restart can be attempted. A container can be restarted only once every `restartAttemptPeriod` seconds. If a container isn't able to run for this time period and exits early, it will not be restarted. You can set a minimum `restartAttemptPeriod` of 60 seconds and a maximum `restartAttemptPeriod` of 1800 seconds. By default, a container must run for 300 seconds before it can be restarted.
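Putting these parameters together (the ignored exit code and period shown are example values), a container restart policy might look like the following.

```
"restartPolicy": {
    "enabled": true,
    "ignoredExitCodes": [ 0 ],
    "restartAttemptPeriod": 300
}
```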

#### Health check
<a name="container_definition_healthcheck"></a>

`healthCheck`  
The container health check command and the associated configuration parameters for the container. For more information, see [Determine Amazon ECS task health using container health checks](healthcheck.md).    
`command`  
A string array that represents the command that the container runs to determine if it's healthy. The string array can start with `CMD` to run the command arguments directly, or `CMD-SHELL` to run the command with the container's default shell. If neither is specified, `CMD` is used.  
When registering a task definition in the AWS Management Console, use a comma separated list of commands. These commands are converted to a string after the task definition is created. An example input for a health check is the following.  

```
CMD-SHELL, curl -f http://localhost/ || exit 1
```
When registering a task definition using the AWS Management Console JSON panel, the AWS CLI, or the APIs, enclose the list of commands in brackets. An example input for a health check is the following.  

```
[ "CMD-SHELL", "curl -f http://localhost/ || exit 1" ]
```
An exit code of 0, with no `stderr` output, indicates success, and a non-zero exit code indicates failure.   
`interval`  
The period of time (in seconds) between each health check. You can specify between 5 and 300 seconds. The default value is 30 seconds.  
`timeout`  
The period of time (in seconds) to wait for a health check to succeed before it's considered a failure. You can specify between 2 and 60 seconds. The default value is 5 seconds.  
`retries`  
The number of times to retry a failed health check before the container is considered unhealthy. You can specify between 1 and 10 retries. The default value is three retries.  
`startPeriod`  
The optional grace period to provide containers time to bootstrap in before failed health checks count towards the maximum number of retries. You can specify a value between 0 and 300 seconds. By default, `startPeriod` is disabled.  
If a health check succeeds within the `startPeriod`, then the container is considered healthy and any subsequent failures count toward the maximum number of retries.
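Combining these parameters (the command and timings shown are example values), a complete `healthCheck` object might look like the following.

```
"healthCheck": {
    "command": [ "CMD-SHELL", "curl -f http://localhost/ || exit 1" ],
    "interval": 30,
    "timeout": 5,
    "retries": 3,
    "startPeriod": 10
}
```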

#### Environment
<a name="container_definition_environment"></a>

`cpu`  
Type: Integer  
Required: No  
The number of `cpu` units the Amazon ECS container agent reserves for the container. On Linux, this parameter maps to `CpuShares` in the [Create a container](https://docs.docker.com/reference/api/engine/version/v1.38/#operation/ContainerCreate) section.  
This field is optional for tasks that use Fargate. The total amount of CPU reserved for all the containers that are within a task must be lower than the task-level `cpu` value.  
Linux containers share unallocated CPU units with other containers on the container instance with the same ratio as their allocated amount. For example, assume that you run a single-container task on a single-core instance type with 512 CPU units specified for that container. Moreover, that task is the only task running on the container instance. In this example, the container can use the full 1,024 CPU unit share at any given time. However, assume then that you launched another copy of the same task on that container instance. Each task is guaranteed a minimum of 512 CPU units when needed. Similarly, if the other container isn't using the remaining CPU, each container can float to higher CPU usage. However, if both tasks were 100% active all of the time, they are limited to 512 CPU units.  
On Linux container instances, the Docker daemon on the container instance uses the CPU value to calculate the relative CPU share ratios for running containers. The minimum valid CPU share value that the Linux kernel allows is 2, and the maximum valid CPU share value that the Linux kernel allows is 262144. However, the CPU parameter isn't required, and you can use CPU values below 2 and above 262144 in your container definitions. For CPU values below 2 (including null) and above 262144, the behavior varies based on your Amazon ECS container agent version.  
On Windows container instances, the CPU quota is enforced as an absolute quota. Windows containers only have access to the specified amount of CPU that's defined in the task definition. A null or zero CPU value is passed to Docker as `0`. Windows then interprets this value as 1% of one CPU.  
For more examples, see [How Amazon ECS manages CPU and memory resources](https://aws.amazon.com/blogs/containers/how-amazon-ecs-manages-cpu-and-memory-resources/).
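As a sketch of the CPU-sharing scenario described above (the container names and values are illustrative, not recommendations), a task definition might reserve CPU units per container like this:

```
{
    "containerDefinitions": [
        {
            "name": "app",
            "cpu": 512
        },
        {
            "name": "sidecar",
            "cpu": 256
        }
    ]
}
```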

`gpu`  
This parameter isn't supported for containers that are hosted on Fargate.  
Type: [ResourceRequirement](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_ResourceRequirement.html) object  
Required: No  
The number of physical `GPUs` that the Amazon ECS container agent reserves for the container. The number of GPUs reserved for all containers in a task must not exceed the number of available GPUs on the container instance the task is launched on. For more information, see [Amazon ECS task definitions for GPU workloads](ecs-gpu.md).
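In a container definition, the GPU reservation is expressed through the `resourceRequirements` member. For example, the following fragment reserves one GPU (the value shown is illustrative):

```
"resourceRequirements": [
    {
        "type": "GPU",
        "value": "1"
    }
]
```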

`Elastic Inference accelerator`  
This parameter isn't supported for containers that are hosted on Fargate.  
Type: [ResourceRequirement](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_ResourceRequirement.html) object  
Required: No  
For the `InferenceAccelerator` type, the `value` matches the `deviceName` for an `InferenceAccelerator` specified in a task definition. For more information, see [Elastic Inference accelerator name (deprecated)](#elastic-Inference-accelerator).

`essential`  
Type: Boolean  
Required: No  
If the `essential` parameter of a container is marked as `true` and that container fails or stops for any reason, all other containers that are part of the task are stopped. If the `essential` parameter of a container is marked as `false`, its failure doesn't affect the rest of the containers in a task. If this parameter is omitted, a container is assumed to be essential.  
All tasks must have at least one essential container. If you have an application that's composed of multiple containers, group containers that are used for a common purpose into components, and separate the different components into multiple task definitions. For more information, see [Architect your application for Amazon ECS](application_architecture.md).  

```
"essential": true|false
```

`entryPoint`  
Early versions of the Amazon ECS container agent don't properly handle `entryPoint` parameters. If you have problems using `entryPoint`, update your container agent or enter your commands and arguments as `command` array items instead.
Type: String array  
Required: No  
The entry point that's passed to the container.   

```
"entryPoint": ["string", ...]
```

`command`  
Type: String array  
Required: No  
The command that's passed to the container. This parameter maps to `Cmd` in the create-container command and the `COMMAND` parameter to docker run. If there are multiple arguments, make sure that each argument is a separate string in the array.  

```
"command": ["string", ...]
```

`workingDirectory`  
Type: String  
Required: No  
The working directory in which to run commands inside the container. This parameter maps to `WorkingDir` in the [Create a container](https://docs.docker.com/reference/api/engine/version/v1.38/#operation/ContainerCreate) section of the [Docker Remote API](https://docs.docker.com/reference/api/engine/version/v1.38/) and the `--workdir` option to [docker run](https://docs.docker.com/reference/cli/docker/container/run/).  

```
"workingDirectory": "string"
```

`environmentFiles`  
This isn't available for Windows containers on Fargate.  
Type: Object array  
Required: No  
A list of files containing the environment variables to pass to a container. This parameter maps to the `--env-file` option to the docker run command.  
You can specify up to 10 environment files. The file must have a `.env` file extension. Each line in an environment file contains an environment variable in `VARIABLE=VALUE` format. Lines that start with `#` are treated as comments and are ignored.   
If there are individual environment variables specified in the container definition, they take precedence over the variables contained within an environment file. If multiple environment files are specified that contain the same variable, they're processed from the top down. We recommend that you use unique variable names. For more information, see [Pass an individual environment variable to an Amazon ECS container](taskdef-envfiles.md).    
`value`  
Type: String  
Required: Yes  
The Amazon Resource Name (ARN) of the Amazon S3 object containing the environment variable file.  
`type`  
Type: String  
Required: Yes  
The file type to use. The only supported value is `s3`.
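For example, a container definition might reference an environment file stored in Amazon S3 as follows (the bucket and object names are placeholders):

```
"environmentFiles": [
    {
        "value": "arn:aws:s3:::amzn-s3-demo-bucket/envfile.env",
        "type": "s3"
    }
]
```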

`environment`  
Type: Object array  
Required: No  
The environment variables to pass to a container. This parameter maps to `Env` in the docker create-container command and the `--env` option to the docker run command.  
We do not recommend using plaintext environment variables for sensitive information, such as credential data.  
`name`  
Type: String  
Required: Yes, when `environment` is used  
The name of the environment variable.  
`value`  
Type: String  
Required: Yes, when `environment` is used  
The value of the environment variable.

```
"environment" : [
    { "name" : "string", "value" : "string" },
    { "name" : "string", "value" : "string" }
]
```

`secrets`  
Type: Object array  
Required: No  
An object that represents the secret to expose to your container. For more information, see [Pass sensitive data to an Amazon ECS container](specifying-sensitive-data.md).    
`name`  
Type: String  
Required: Yes  
The value to set as the environment variable on the container.  
`valueFrom`  
Type: String  
Required: Yes  
The secret to expose to the container. The supported values are either the full Amazon Resource Name (ARN) of the AWS Secrets Manager secret or the full ARN of the parameter in the AWS Systems Manager Parameter Store.  
If the Systems Manager Parameter Store parameter or Secrets Manager parameter exists in the same AWS Region as the task that you're launching, you can use either the full ARN or name of the secret. If the parameter exists in a different Region, then the full ARN must be specified.

```
"secrets": [
    {
        "name": "environment_variable_name",
        "valueFrom": "arn:aws:ssm:region:aws_account_id:parameter/parameter_name"
    }
]
```

#### Network settings
<a name="container_definition_network"></a>

`disableNetworking`  
This parameter is not supported for tasks running on Fargate.  
Type: Boolean  
Required: No  
When this parameter is true, networking is off within the container.  
The default is `false`.  

```
"disableNetworking": true|false
```

`links`  
This parameter isn't supported for tasks using the `awsvpc` network mode.  
Type: String array  
Required: No  
The `link` parameter allows containers to communicate with each other without the need for port mappings. This parameter is only supported if the network mode of a task definition is set to `bridge`. The `name:internalName` construct is analogous to `name:alias` in Docker links. Up to 255 letters (uppercase and lowercase), numbers, hyphens, and underscores are allowed.  
Containers that are collocated on the same container instance might communicate with each other without requiring links or host port mappings. The network isolation on a container instance is controlled by security groups and VPC settings.

```
"links": ["name:internalName", ...]
```

`hostname`  
Type: String  
Required: No  
The hostname to use for your container. This parameter maps to `Hostname` in the docker create-container command and the `--hostname` option to docker run.  
If you're using the `awsvpc` network mode, the `hostname` parameter isn't supported.

```
"hostname": "string"
```

`dnsServers`  
This is not supported for tasks running on Fargate.  
Type: String array  
Required: No  
A list of DNS servers that are presented to the container.  

```
"dnsServers": ["string", ...]
```

`extraHosts`  
This parameter isn't supported for tasks that use the `awsvpc` network mode.  
Type: Object array  
Required: No  
A list of hostnames and IP address mappings to append to the `/etc/hosts` file on the container.   
This parameter maps to `ExtraHosts` in the docker create-container command and the `--add-host` option to docker run.  

```
"extraHosts": [
      {
        "hostname": "string",
        "ipAddress": "string"
      }
      ...
    ]
```  
`hostname`  
Type: String  
Required: Yes, when `extraHosts` are used  
The hostname to use in the `/etc/hosts` entry.  
`ipAddress`  
Type: String  
Required: Yes, when `extraHosts` are used  
The IP address to use in the `/etc/hosts` entry.

#### Storage and logging
<a name="container_definition_storage"></a>

`readonlyRootFilesystem`  
Type: Boolean  
Required: No  
When this parameter is true, the container is given read-only access to its root file system. This parameter maps to `ReadonlyRootfs` in the docker create-container command and the `--read-only` option to docker run.  
This parameter is not supported for Windows containers.
The default is `false`.  

```
"readonlyRootFilesystem": true|false
```

`mountPoints`  
Type: Object array  
Required: No  
The mount points for the data volumes in your container. This parameter maps to `Volumes` in the docker create-container command and the `--volume` option to docker run.  
Windows containers can mount whole directories on the same drive as `$env:ProgramData`. Windows containers cannot mount directories on a different drive, and mount points cannot be used across drives. You must specify mount points to attach an Amazon EBS volume directly to an Amazon ECS task.    
`sourceVolume`  
Type: String  
Required: Yes, when `mountPoints` are used  
The name of the volume to mount.  
`containerPath`  
Type: String  
Required: Yes, when `mountPoints` are used  
The path in the container where the volume will be mounted.  
`readOnly`  
Type: Boolean  
Required: No  
If this value is `true`, the container has read-only access to the volume. If this value is `false`, then the container can write to the volume. The default value is `false`.  
For tasks that run on EC2 instances running the Windows operating system, leave the value as the default of `false`.
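The preceding submembers take the following form in a container definition:

```
"mountPoints": [
    {
        "sourceVolume": "string",
        "containerPath": "string",
        "readOnly": true|false
    }
]
```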

`volumesFrom`  
Type: Object array  
Required: No  
Data volumes to mount from another container. This parameter maps to `VolumesFrom` in the docker create-container command and the `--volumes-from` option to docker run.    
`sourceContainer`  
Type: String  
Required: Yes, when `volumesFrom` is used  
The name of the container to mount volumes from.  
`readOnly`  
Type: Boolean  
Required: No  
If this value is `true`, the container has read-only access to the volume. If this value is `false`, then the container can write to the volume. The default value is `false`.

```
"volumesFrom": [
                {
                  "sourceContainer": "string",
                  "readOnly": true|false
                }
              ]
```

`logConfiguration`  
Type: [LogConfiguration](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_LogConfiguration.html) Object  
Required: No  
The log configuration specification for the container.  
For example task definitions that use a log configuration, see [Example Amazon ECS task definitions](example_task_definitions.md).  
This parameter maps to `LogConfig` in the docker create-container command and the `--log-driver` option to docker run. By default, containers use the same logging driver that the Docker daemon uses. However, the container might use a different logging driver than the Docker daemon by specifying a log driver with this parameter in the container definition. To use a different logging driver for a container, the log system must be configured properly on the container instance (or on a different log server for remote logging options).   
Consider the following when specifying a log configuration for your containers:  
+ Amazon ECS supports a subset of the logging drivers that are available to the Docker daemon.
+ This parameter requires version 1.18 or later of the Docker Remote API on your container instance.
+ You must install any additional required software outside of the task, such as Fluentd output aggregators or a remote host running Logstash to send Gelf logs to.

```
"logConfiguration": {
      "logDriver": "awslogs","fluentd","gelf","json-file","journald","splunk","syslog","awsfirelens",
      "options": {"string": "string"
        ...},
	"secretOptions": [{
		"name": "string",
		"valueFrom": "string"
	}]
}
```  
`logDriver`  
Type: String  
Valid values: `"awslogs","fluentd","gelf","json-file","journald","splunk","syslog","awsfirelens"`  
Required: Yes, when `logConfiguration` is used  
The log driver to use for the container. By default, the valid values that are listed earlier are log drivers that the Amazon ECS container agent can communicate with.  
For tasks hosted on AWS Fargate, the supported log drivers are `awslogs`, `splunk`, and `awsfirelens`.  
For more information about how to use the `awslogs` log driver in task definitions to send your container logs to CloudWatch Logs, see [Send Amazon ECS logs to CloudWatch](using_awslogs.md).  
For more information about using the `awsfirelens` log driver, see [Custom Log Routing](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_firelens.html).  
If you have a custom driver that isn't listed, you can fork the Amazon ECS container agent project that's [available on GitHub](https://github.com/aws/amazon-ecs-agent) and customize it to work with that driver. We encourage you to submit pull requests for changes that you want to have included. However, we don't currently support running modified copies of this software.
This parameter requires version 1.18 of the Docker Remote API or greater on your container instance.  
`options`  
Type: String to string map  
Required: No  
The key/value map of configuration options to send to the log driver.  
The options you can specify depend on the log driver. Some of the options you can specify when you use the `awslogs` router to route logs to Amazon CloudWatch include the following:    
`awslogs-create-group`  
Required: No  
Specify whether you want the log group to be created automatically. If this option isn't specified, it defaults to `false`.  
Your IAM policy must include the `logs:CreateLogGroup` permission before you attempt to use `awslogs-create-group`.  
`awslogs-region`  
Required: Yes  
Specify the AWS Region that the `awslogs` log driver sends your Docker logs to. You can send all of your logs from clusters in different Regions to a single Region in CloudWatch Logs so that they're all visible in one location, or you can separate them by Region for more granularity. Make sure that the specified log group exists in the Region that you specify with this option.  
`awslogs-group`  
Required: Yes  
Make sure to specify a log group that the `awslogs` log driver sends its log streams to.  
`awslogs-stream-prefix`  
Required: Yes  
Use the `awslogs-stream-prefix` option to associate a log stream with the specified prefix, the container name, and the ID of the Amazon ECS task that the container belongs to. If you specify a prefix with this option, then the log stream takes the following format.  

```
prefix-name/container-name/ecs-task-id
```
If you don't specify a prefix with this option, then the log stream is named after the container ID that's assigned by the Docker daemon on the container instance. Because it's difficult to trace logs back to the container that sent them with just the Docker container ID (which is only available on the container instance), we recommend that you specify a prefix with this option.  
For Amazon ECS services, you can use the service name as the prefix. Doing so, you can trace log streams to the service that the container belongs to, the name of the container that sent them, and the ID of the task that the container belongs to.  
You must specify a stream prefix in order for your logs to appear in the Log pane of the Amazon ECS console.  
`awslogs-datetime-format`  
Required: No  
This option defines a multiline start pattern in Python `strftime` format. A log message consists of a line that matches the pattern and any following lines that don’t match the pattern. The matched line is the delimiter between log messages.  
One example of a use case for using this format is for parsing output such as a stack dump, which might otherwise be logged in multiple entries. The correct pattern allows it to be captured in a single entry.  
For more information, see [awslogs-datetime-format](https://docs.docker.com/engine/logging/drivers/awslogs/#awslogs-datetime-format).  
You cannot configure both the `awslogs-datetime-format` and `awslogs-multiline-pattern` options.  
Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance.  
`awslogs-multiline-pattern`  
Required: No  
This option defines a multiline start pattern that uses a regular expression. A log message consists of a line that matches the pattern and any following lines that don’t match the pattern. The matched line is the delimiter between log messages.  
For more information, see [awslogs-multiline-pattern](https://docs.docker.com/engine/logging/drivers/awslogs/#awslogs-multiline-pattern).  
This option is ignored if `awslogs-datetime-format` is also configured.  
You cannot configure both the `awslogs-datetime-format` and `awslogs-multiline-pattern` options.  
Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance.
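The following example `logConfiguration` shows these `awslogs` options together (the log group name, Region, and prefix are placeholder values):

```
"logConfiguration": {
    "logDriver": "awslogs",
    "options": {
        "awslogs-create-group": "true",
        "awslogs-group": "awslogs-example",
        "awslogs-region": "us-west-2",
        "awslogs-stream-prefix": "awslogs-example"
    }
}
```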
The following options apply to all supported log drivers.    
`mode`  
Required: No  
Valid values: `non-blocking` | `blocking`  
This option defines the delivery mode of log messages from the container to the log driver specified using `logDriver`. The delivery mode you choose affects application availability when the flow of logs from the container is interrupted.  
If you use the `blocking` mode and the flow of logs is interrupted, calls from container code to write to the `stdout` and `stderr` streams will block. The logging thread of the application will block as a result. This may cause the application to become unresponsive and lead to container healthcheck failure.   
If you use the `non-blocking` mode, the container's logs are instead stored in an in-memory intermediate buffer configured with the `max-buffer-size` option. This prevents the application from becoming unresponsive when logs cannot be sent. We recommend using this mode if you want to ensure service availability and are okay with some log loss. For more information, see [Preventing log loss with non-blocking mode in the `awslogs` container log driver](https://aws.amazon.com/blogs/containers/preventing-log-loss-with-non-blocking-mode-in-the-awslogs-container-log-driver/).  
You can set a default `mode` for all containers in a specific AWS Region by using the `defaultLogDriverMode` account setting. If you don't specify the `mode` option in the `logConfiguration` or configure the account setting, Amazon ECS will default to `non-blocking` mode. For more information about the account setting, see [Default log driver mode](ecs-account-settings.md#default-log-driver-mode).  
When `non-blocking` mode is used, the `max-buffer-size` log option controls the size of the buffer that's used for intermediate message storage. Make sure to specify an adequate buffer size based on your application. The total amount of memory allocated at the task level should be greater than the amount of memory that's allocated for all the containers in addition to the log driver memory buffer.  
On June 25, 2025, Amazon ECS changed the default log driver mode from `blocking` to `non-blocking` to prioritize task availability over logging. To continue using the `blocking` mode after this change, do one of the following:  
+ Set the `mode` option in your container definition's `logConfiguration` as `blocking`.
+ Set the `defaultLogDriverMode` account setting to `blocking`.  
`max-buffer-size`  
Required: No  
Default value: `10m`  
When `non-blocking` mode is used, the `max-buffer-size` log option controls the size of the buffer that's used for intermediate message storage. Make sure to specify an adequate buffer size based on your application. When the buffer fills up, further logs cannot be stored. Logs that cannot be stored are lost. 
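For example, the following `awslogs` configuration uses `non-blocking` mode with an assumed buffer size of `25m` (the log group name, Region, and prefix are placeholder values):

```
"logConfiguration": {
    "logDriver": "awslogs",
    "options": {
        "awslogs-group": "awslogs-example",
        "awslogs-region": "us-west-2",
        "awslogs-stream-prefix": "awslogs-example",
        "mode": "non-blocking",
        "max-buffer-size": "25m"
    }
}
```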
To route logs using the `splunk` log router, you need to specify a `splunk-token` and a `splunk-url`.  
When you use the `awsfirelens` log router to route logs to an AWS service or AWS Partner Network destination for log storage and analytics, you can set the `log-driver-buffer-limit` option to limit the number of log lines that are buffered in memory before being sent to the log router container. This can help resolve potential log loss, because high throughput might cause the buffer inside Docker to run out of memory. For more information, see [Configuring Amazon ECS logs for high throughput](firelens-docker-buffer-limit.md).  
Other options you can specify when using `awsfirelens` to route logs depend on the destination. When you export logs to Amazon Data Firehose, you can specify the AWS Region with `region` and a name for the log stream with `delivery_stream`.  
When you export logs to Amazon Kinesis Data Streams, you can specify an AWS Region with `region` and a data stream name with `stream`.  
When you export logs to Amazon OpenSearch Service, you can specify options like `Name`, `Host` (OpenSearch Service endpoint without protocol), `Port`, `Index`, `Type`, `Aws_auth`, `Aws_region`, `Suppress_Type_Name`, and `tls`.  
When you export logs to Amazon S3, you can specify the bucket using the `bucket` option. You can also specify `region`, `total_file_size`, `upload_timeout`, and `use_put_object` as options.  
This parameter requires version 1.19 of the Docker Remote API or greater on your container instance.  
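As a sketch, an `awsfirelens` configuration that routes logs to an Amazon Data Firehose delivery stream might look like the following (the delivery stream name is a placeholder, and `Name` selects the output destination):

```
"logConfiguration": {
    "logDriver": "awsfirelens",
    "options": {
        "Name": "firehose",
        "region": "us-west-2",
        "delivery_stream": "my-stream"
    }
}
```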
`secretOptions`  
Type: Object array  
Required: No  
An object that represents the secret to pass to the log configuration. Secrets that are used in log configuration can include an authentication token, certificate, or encryption key. For more information, see [Pass sensitive data to an Amazon ECS container](specifying-sensitive-data.md).    
`name`  
Type: String  
Required: Yes  
The value to set as the environment variable on the container.  
`valueFrom`  
Type: String  
Required: Yes  
The secret to expose to the log configuration of the container.

```
"logConfiguration": {
	"logDriver": "splunk",
	"options": {
		"splunk-url": "https://cloud.splunk.com:8080",
		"splunk-token": "...",
		"tag": "...",
		...
	},
	"secretOptions": [{
		"name": "splunk-token",
		"valueFrom": "/ecs/logconfig/splunkcred"
	}]
}
```

`firelensConfiguration`  
Type: [FirelensConfiguration](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_FirelensConfiguration.html) Object  
Required: No  
The FireLens configuration for the container. This is used to specify and configure a log router for container logs. For more information, see [Send Amazon ECS logs to an AWS service or AWS Partner](using_firelens.md).  

```
{
    "firelensConfiguration": {
        "type": "fluentd",
        "options": {
            "KeyName": ""
        }
    }
}
```  
`options`  
Type: String to string map  
Required: No  
The key/value map of options to use when configuring the log router. This field is optional and can be used to specify a custom configuration file or to add additional metadata, such as the task, task definition, cluster, and container instance details to the log event. If specified, the syntax to use is `"options":{"enable-ecs-log-metadata":"true|false","config-file-type":"s3|file","config-file-value":"arn:aws:s3:::amzn-s3-demo-bucket/fluent.conf|filepath"}`. For more information, see [Example Amazon ECS task definition: Route logs to FireLens](firelens-taskdef.md).  
`type`  
Type: String  
Required: Yes  
The log router to use. The valid values are `fluentd` or `fluentbit`.

#### Security
<a name="container_definition_security"></a>

For more information about container security, see [Amazon ECS task and container security best practices](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/security-tasks-containers.html).

`credentialSpecs`  
Type: String array  
Required: No  
A list of ARNs in SSM or Amazon S3 to a credential spec (`CredSpec`) file that configures the container for Active Directory authentication. We recommend that you use this parameter instead of `dockerSecurityOptions`. The maximum number of ARNs is 1.  
There are two formats for each ARN.    
credentialspecdomainless:MyARN  
You use `credentialspecdomainless:MyARN` to provide a `CredSpec` with an additional section for a secret in Secrets Manager. You provide the login credentials to the domain in the secret.  
Each task that runs on any container instance can join different domains.  
You can use this format without joining the container instance to a domain.  
credentialspec:MyARN  
You use `credentialspec:MyARN` to provide a `CredSpec` for a single domain.  
You must join the container instance to the domain before you start any tasks that use this task definition.
In both formats, replace `MyARN` with the ARN in SSM or Amazon S3.  
The `credspec` must provide an ARN in Secrets Manager for a secret containing the username, password, and the domain to connect to. For better security, the instance isn't joined to the domain for domainless authentication. Other applications on the instance can't use the domainless credentials. You can use this parameter to run tasks on the same instance, even if the tasks need to join different domains. For more information, see [Using gMSAs for Windows Containers](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/windows-gmsa.html) and [Using gMSAs for Linux Containers](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/linux-gmsa.html).
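For example, a domainless `CredSpec` reference uses the following form, where the SSM parameter ARN is a placeholder:

```
"credentialSpecs": [
    "credentialspecdomainless:arn:aws:ssm:region:aws_account_id:parameter/parameter_name"
]
```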

`user`  
Type: String  
Required: No  
The user to use inside the container. This parameter maps to `User` in the docker create-container command and the `--user` option to docker run.  
When running tasks that use the `host` network mode, don't run containers using the root user (UID 0). As a security best practice, always use a non-root user.
You can specify the `user` using the following formats. If specifying a UID or GID, you must specify it as a positive integer.  
+ `user`
+ `user:group`
+ `uid`
+ `uid:gid`
+ `user:gid`
+ `uid:group`
This parameter is not supported for Windows containers.

```
"user": "string"
```

#### Resource limits
<a name="container_definition_limits"></a>

`ulimits`  
Type: Object array  
Required: No  
A list of `ulimit` values to define for a container. This value overwrites the default resource quota setting for the operating system. This parameter maps to `Ulimits` in the docker create-container command and the `--ulimit` option to docker run.  
Amazon ECS tasks hosted on Fargate use the default resource limit values set by the operating system with the exception of the `nofile` resource limit parameter. The `nofile` resource limit sets a restriction on the number of open files that a container can use. On Fargate, the default `nofile` soft limit is `65535` and hard limit is `65535`. You can set the values of both limits up to `1048576`. For more information, see [Task resource limits](fargate-tasks-services.md#fargate-resource-limits).  
This parameter requires version 1.18 of the Docker Remote API or greater on your container instance.  
This parameter is not supported for Windows containers.

```
"ulimits": [
      {
        "name": "core"|"cpu"|"data"|"fsize"|"locks"|"memlock"|"msgqueue"|"nice"|"nofile"|"nproc"|"rss"|"rtprio"|"rttime"|"sigpending"|"stack",
        "softLimit": integer,
        "hardLimit": integer
      }
      ...
    ]
```  
`name`  
Type: String  
Valid values: `"core" | "cpu" | "data" | "fsize" | "locks" | "memlock" | "msgqueue" | "nice" | "nofile" | "nproc" | "rss" | "rtprio" | "rttime" | "sigpending" | "stack"`  
Required: Yes, when `ulimits` are used  
The `type` of the `ulimit`.  
`hardLimit`  
Type: Integer  
Required: Yes, when `ulimits` are used  
The hard limit for the `ulimit` type. The value can be specified in bytes, seconds, or as a count, depending on the `type` of the `ulimit`.  
`softLimit`  
Type: Integer  
Required: Yes, when `ulimits` are used  
The soft limit for the `ulimit` type. The value can be specified in bytes, seconds, or as a count, depending on the `type` of the `ulimit`.
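For example, the following fragment raises the `nofile` limits for a container (the values shown are illustrative and must stay within the ranges described above):

```
"ulimits": [
    {
        "name": "nofile",
        "softLimit": 65535,
        "hardLimit": 1048576
    }
]
```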

#### Docker labels
<a name="container_definition_labels"></a>

`dockerLabels`  
Type: String to string map  
Required: No  
A key/value map of labels to add to the container. This parameter maps to `Labels` in the docker create-container command and the `--label` option to docker run.   
This parameter requires version 1.18 of the Docker Remote API or greater on your container instance.  

```
"dockerLabels": {"string": "string"
      ...}
```

### Other container definition parameters
<a name="other_container_definition_params"></a>

The following container definition parameters can be used when registering task definitions in the Amazon ECS console by using the **Configure via JSON** option. For more information, see [Creating an Amazon ECS task definition using the console](create-task-definition.md).

**Topics**
+ [

#### Linux parameters
](#container_definition_linuxparameters)
+ [

#### Container dependency
](#container_definition_dependson)
+ [

#### Container timeouts
](#container_definition_timeout)
+ [

#### System controls
](#container_definition_systemcontrols)
+ [

#### Interactive
](#container_definition_interactive)
+ [

#### Pseudo terminal
](#container_definition_pseudoterminal)

#### Linux parameters
<a name="container_definition_linuxparameters"></a>

`linuxParameters`  
Type: [LinuxParameters](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_LinuxParameters.html) object  
Required: No  
Linux-specific options that are applied to the container, such as [KernelCapabilities](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_KernelCapabilities.html).  
This parameter isn't supported for Windows containers.

```
"linuxParameters": {
      "capabilities": {
        "add": ["string", ...],
        "drop": ["string", ...]
        }
      }
```  
`capabilities`  
Type: [KernelCapabilities](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_KernelCapabilities.html) object  
Required: No  
The Linux capabilities for the container that are added to or dropped from the default configuration provided by Docker. For more information about these Linux capabilities, see the [capabilities(7)](http://man7.org/linux/man-pages/man7/capabilities.7.html) Linux manual page.    
`add`  
Type: String array  
Valid values: `"SYS_PTRACE"`  
Required: No  
The Linux capabilities for the container to add to the default configuration that's provided by Docker. This parameter maps to `CapAdd` in the docker create-container command and the `--cap-add` option to docker run.  
`drop`  
Type: String array  
Valid values: `"ALL" | "AUDIT_CONTROL" | "AUDIT_WRITE" | "BLOCK_SUSPEND" | "CHOWN" | "DAC_OVERRIDE" | "DAC_READ_SEARCH" | "FOWNER" | "FSETID" | "IPC_LOCK" | "IPC_OWNER" | "KILL" | "LEASE" | "LINUX_IMMUTABLE" | "MAC_ADMIN" | "MAC_OVERRIDE" | "MKNOD" | "NET_ADMIN" | "NET_BIND_SERVICE" | "NET_BROADCAST" | "NET_RAW" | "SETFCAP" | "SETGID" | "SETPCAP" | "SETUID" | "SYS_ADMIN" | "SYS_BOOT" | "SYS_CHROOT" | "SYS_MODULE" | "SYS_NICE" | "SYS_PACCT" | "SYS_PTRACE" | "SYS_RAWIO" | "SYS_RESOURCE" | "SYS_TIME" | "SYS_TTY_CONFIG" | "SYSLOG" | "WAKE_ALARM"`  
Required: No  
The Linux capabilities for the container to remove from the default configuration that's provided by Docker. This parameter maps to `CapDrop` in the docker create-container command and the `--cap-drop` option to docker run.  
`devices`  
Any host devices to expose to the container. This parameter maps to `Devices` in the docker create-container command and the `--device` option to docker run.  
The `devices` parameter isn't supported when you use the Fargate launch type.
Type: Array of [Device](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_Device.html) objects  
Required: No    
`hostPath`  
The path for the device on the host container instance.  
Type: String  
Required: Yes  
`containerPath`  
The path inside the container to expose the host device at.  
Type: String  
Required: No  
`permissions`  
The explicit permissions to provide to the container for the device. By default, the container has permissions for `read`, `write`, and `mknod` on the device.  
Type: Array of strings  
Valid Values: `read` | `write` | `mknod`  
`initProcessEnabled`  
Run an `init` process inside the container that forwards signals and reaps processes. This parameter maps to the `--init` option to docker run.  
This parameter requires version 1.25 of the Docker Remote API or greater on your container instance.  
`maxSwap`  
This is not supported for tasks running on Fargate.  
The total amount of swap memory (in MiB) a container can use. This parameter is translated to the `--memory-swap` option to docker run where the value is the sum of the container memory plus the `maxSwap` value.  
If a `maxSwap` value of `0` is specified, the container doesn't use swap. Accepted values are `0` or any positive integer. If the `maxSwap` parameter is omitted, the container uses the swap configuration for the container instance that it's running on. A `maxSwap` value must be set for the `swappiness` parameter to be used.  
`sharedMemorySize`  
The value for the size (in MiB) of the `/dev/shm` volume. This parameter maps to the `--shm-size` option to docker run.  
If you're using tasks that use Fargate, the `sharedMemorySize` parameter isn't supported.
Type: Integer  
`tmpfs`  
The container path, mount options, and maximum size (in MiB) of the tmpfs mount. This parameter maps to the `--tmpfs` option to docker run.  
Type: Array of [Tmpfs](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_Tmpfs.html) objects  
Required: No    
`containerPath`  
The absolute file path where the tmpfs volume is to be mounted.  
Type: String  
Required: Yes  
`mountOptions`  
The list of tmpfs volume mount options.  
Type: Array of strings  
Required: No  
Valid Values: `"defaults" | "ro" | "rw" | "suid" | "nosuid" | "dev" | "nodev" | "exec" | "noexec" | "sync" | "async" | "dirsync" | "remount" | "mand" | "nomand" | "atime" | "noatime" | "diratime" | "nodiratime" | "bind" | "rbind" | "unbindable" | "runbindable" | "private" | "rprivate" | "shared" | "rshared" | "slave" | "rslave" | "relatime" | "norelatime" | "strictatime" | "nostrictatime" | "mode" | "uid" | "gid" | "nr_inodes" | "nr_blocks" | "mpol"`  
`size`  
The maximum size (in MiB) of the tmpfs volume.  
Type: Integer  
Required: Yes
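
The following sketch shows how several of these fields fit together in a container definition. The paths and sizes are illustrative only, and as noted above, several of these options (for example, `sharedMemorySize`) aren't supported for tasks on Fargate:

```
"linuxParameters": {
    "initProcessEnabled": true,
    "sharedMemorySize": 64,
    "tmpfs": [
        {
            "containerPath": "/run/scratch",
            "mountOptions": ["rw", "noexec"],
            "size": 128
        }
    ]
}
```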

#### Container dependency
<a name="container_definition_dependson"></a>

`dependsOn`  
Type: Array of [ContainerDependency](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_ContainerDependency.html) objects  
Required: No  
The dependencies defined for container startup and shutdown. A container can contain multiple dependencies. When a dependency is defined for container startup, for container shutdown it is reversed. For an example, see [Container dependency](example_task_definitions.md#example_task_definition-containerdependency).  
If a container doesn't meet a dependency constraint or times out before meeting the constraint, Amazon ECS doesn't progress dependent containers to their next state.
This parameter requires that the task or service uses platform version `1.3.0` or later (Linux) or `1.0.0` or later (Windows).  

```
"dependsOn": [
    {
        "containerName": "string",
        "condition": "string"
    }
]
```  
`containerName`  
Type: String  
Required: Yes  
The container name that must meet the specified condition.  
`condition`  
Type: String  
Required: Yes  
The dependency condition of the container. The following are the available conditions and their behavior:  
+ `START` – This condition emulates the behavior of links and volumes today. The condition validates that a dependent container is started before permitting other containers to start.
+ `COMPLETE` – This condition validates that a dependent container runs to completion (exits) before permitting other containers to start. This can be useful for non-essential containers that run a script and then exit. This condition can't be set on an essential container.
+ `SUCCESS` – This condition is the same as `COMPLETE`, but it also requires that the container exits with a `zero` status. This condition can't be set on an essential container.
+ `HEALTHY` – This condition validates that the dependent container passes its container health check before permitting other containers to start. This requires that the dependent container has health checks configured in the task definition. This condition is confirmed only at task startup.
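
As an illustration (both container names are placeholders), a hypothetical `app` container can be held back until a `db` sidecar passes its container health check:

```
"dependsOn": [
    {
        "containerName": "db",
        "condition": "HEALTHY"
    }
]
```

For the `HEALTHY` condition to be met, the `db` container definition must have a health check configured.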

#### Container timeouts
<a name="container_definition_timeout"></a>

`startTimeout`  
Type: Integer  
Required: No  
Example values: `120`  
Time duration (in seconds) to wait before giving up on resolving dependencies for a container.  
For example, you specify two containers in a task definition with `containerA` having a dependency on `containerB` reaching a `COMPLETE`, `SUCCESS`, or `HEALTHY` status. If a `startTimeout` value is specified for `containerB` and it doesn't reach the desired status within that time, then `containerA` doesn't start.  
If a container doesn't meet a dependency constraint or times out before meeting the constraint, Amazon ECS doesn't progress dependent containers to their next state.
This parameter requires that the task or service uses platform version `1.3.0` or later (Linux). The maximum value is 120 seconds.

`stopTimeout`  
Type: Integer  
Required: No  
Example values: `120`  
Time duration (in seconds) to wait before the container is forcefully killed if it doesn't exit normally on its own.  
This parameter requires that the task or service uses platform version `1.3.0` or later (Linux). If the parameter isn't specified, then the default value of 30 seconds is used. The maximum value is 120 seconds.
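
For example, a container definition might allow 90 seconds for its dependencies to resolve and 60 seconds for a graceful shutdown. Both values are illustrative and must not exceed the 120-second maximum:

```
"startTimeout": 90,
"stopTimeout": 60
```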

#### System controls
<a name="container_definition_systemcontrols"></a>

`systemControls`  
Type: [SystemControl](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_SystemControl.html) object  
Required: No  
A list of namespaced kernel parameters to set in the container. This parameter maps to `Sysctls` in the docker create-container command and the `--sysctl` option to docker run. For example, you can configure the `net.ipv4.tcp_keepalive_time` setting to maintain longer-lived connections.  
We don't recommend that you specify network-related `systemControls` parameters for multiple containers in a single task that also uses either the `awsvpc` or `host` network mode. Doing so has the following disadvantage:  
+ If you set `systemControls` for any container, it applies to all containers in the task. If you set different `systemControls` for multiple containers in a single task, the container that's started last determines which `systemControls` take effect.
If you're setting an IPC resource namespace to use for the containers in the task, the following conditions apply to your system controls. For more information, see [IPC mode](#task_definition_ipcmode).  
+ For tasks that use the `host` IPC mode, IPC namespace `systemControls` aren't supported.
+ For tasks that use the `task` IPC mode, IPC namespace `systemControls` values apply to all containers within a task.
This parameter is not supported for Windows containers.
This parameter is only supported for tasks that are hosted on AWS Fargate if the tasks are using platform version `1.4.0` or later (Linux). This isn't supported for Windows containers on Fargate.

```
"systemControls": [
    {
         "namespace":"string",
         "value":"string"
    }
]
```  
`namespace`  
Type: String  
Required: No  
The namespace kernel parameter to set a `value` for.  
Valid IPC namespace values: `"kernel.msgmax" | "kernel.msgmnb" | "kernel.msgmni" | "kernel.sem" | "kernel.shmall" | "kernel.shmmax" | "kernel.shmmni" | "kernel.shm_rmid_forced"`, and `Sysctls` that start with `"fs.mqueue.*"`  
Valid network namespace values: `Sysctls` that start with `"net.*"`. On Fargate, only namespaced `Sysctls` that exist within the container are accepted.  
All of these values are supported by Fargate.  
`value`  
Type: String  
Required: No  
The value for the namespace kernel parameter that's specified in `namespace`.
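
For example, to set the `net.ipv4.tcp_keepalive_time` value mentioned above (the value shown is illustrative):

```
"systemControls": [
    {
        "namespace": "net.ipv4.tcp_keepalive_time",
        "value": "500"
    }
]
```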

#### Interactive
<a name="container_definition_interactive"></a>

`interactive`  
Type: Boolean  
Required: No  
When this parameter is `true`, you can deploy containerized applications that require `stdin` or a `tty` to be allocated. This parameter maps to `OpenStdin` in the docker create-container command and the `--interactive` option to docker run.  
The default is `false`.

#### Pseudo terminal
<a name="container_definition_pseudoterminal"></a>

`pseudoTerminal`  
Type: Boolean  
Required: No  
When this parameter is `true`, a TTY is allocated. This parameter maps to `Tty` in the docker create-container command and the `--tty` option to docker run.  
The default is `false`.

## Elastic Inference accelerator name (deprecated)
<a name="elastic-Inference-accelerator"></a>

The Elastic Inference accelerator resource requirement for your task definition. 

**Note**  
Amazon Elastic Inference (EI) is no longer available to customers.

The following parameters are allowed in a task definition:

`deviceName`  
Type: String  
Required: Yes  
The Elastic Inference accelerator device name. The `deviceName` must also be referenced in a container definition. For more information, see [Elastic Inference accelerator](#ContainerDefinition-elastic-inference).

`deviceType`  
Type: String  
Required: Yes  
The Elastic Inference accelerator to use.

## Proxy configuration
<a name="proxyConfiguration"></a>

`proxyConfiguration`  
Type: [ProxyConfiguration](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_ProxyConfiguration.html) object  
Required: No  
The configuration details for the App Mesh proxy.  
This parameter is not supported for Windows containers.

```
"proxyConfiguration": {
    "type": "APPMESH",
    "containerName": "string",
    "properties": [
        {
           "name": "string",
           "value": "string"
        }
    ]
}
```  
`type`  
Type: String  
Valid values: `APPMESH`  
Required: No  
The proxy type. The only supported value is `APPMESH`.  
`containerName`  
Type: String  
Required: Yes  
The name of the container that serves as the App Mesh proxy.  
`properties`  
Type: Array of [KeyValuePair](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_KeyValuePair.html) objects  
Required: No  
The set of network configuration parameters to provide the Container Network Interface (CNI) plugin, specified as key-value pairs.  
+ `IgnoredUID` – (Required) The user ID (UID) of the proxy container as defined by the `user` parameter in a container definition. This is used to ensure the proxy ignores its own traffic. If `IgnoredGID` is specified, this field can be empty.
+ `IgnoredGID` – (Required) The group ID (GID) of the proxy container as defined by the `user` parameter in a container definition. This is used to ensure the proxy ignores its own traffic. If `IgnoredUID` is specified, this field can be empty.
+ `AppPorts` – (Required) The list of ports that the application uses. Network traffic to these ports is forwarded to the `ProxyIngressPort` and `ProxyEgressPort`.
+ `ProxyIngressPort` – (Required) Specifies the port that incoming traffic to the `AppPorts` is directed to.
+ `ProxyEgressPort` – (Required) Specifies the port that outgoing traffic from the `AppPorts` is directed to.
+ `EgressIgnoredPorts` – (Required) The outbound traffic going to these specified ports is ignored and not redirected to the `ProxyEgressPort`. It can be an empty list.
+ `EgressIgnoredIPs` – (Required) The outbound traffic going to these specified IP addresses is ignored and not redirected to the `ProxyEgressPort`. It can be an empty list.  
`name`  
Type: String  
Required: No  
The name of the key-value pair.  
`value`  
Type: String  
Required: No  
The value of the key-value pair.
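
The following sketch puts these properties together for a hypothetical Envoy proxy container. The container name, UID, and port values are placeholders, not prescribed values:

```
"proxyConfiguration": {
    "type": "APPMESH",
    "containerName": "envoy",
    "properties": [
        { "name": "IgnoredUID", "value": "1337" },
        { "name": "AppPorts", "value": "9080" },
        { "name": "ProxyIngressPort", "value": "15000" },
        { "name": "ProxyEgressPort", "value": "15001" },
        { "name": "EgressIgnoredPorts", "value": "" },
        { "name": "EgressIgnoredIPs", "value": "169.254.170.2,169.254.169.254" }
    ]
}
```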

## Volumes
<a name="volumes"></a>

When you register a task definition, you can optionally specify a list of volumes to be passed to the Docker daemon on a container instance, which then becomes available for access by other containers on the same container instance.

The following are the types of data volumes that can be used:
+ Amazon EBS volumes — Provides cost-effective, durable, high-performance block storage for data-intensive containerized workloads. You can attach one Amazon EBS volume per Amazon ECS task when running a standalone task, or when creating or updating a service. Amazon EBS volumes are supported for Linux tasks hosted on Fargate. For more information, see [Use Amazon EBS volumes with Amazon ECS](ebs-volumes.md).
+ Amazon EFS volumes — Provides simple, scalable, and persistent file storage for use with your Amazon ECS tasks. With Amazon EFS, storage capacity is elastic. It grows and shrinks automatically as you add and remove files. Your applications can have the storage that they need and when they need it. Amazon EFS volumes are supported for tasks that are hosted on Fargate. For more information, see [Use Amazon EFS volumes with Amazon ECS](efs-volumes.md).
+ FSx for Windows File Server volumes — Provides fully managed Microsoft Windows file servers. These file servers are backed by a Windows file system. When using FSx for Windows File Server together with Amazon ECS, you can provision your Windows tasks with persistent, distributed, shared, and static file storage. For more information, see [Use FSx for Windows File Server volumes with Amazon ECS](wfsx-volumes.md).

  Windows containers on Fargate do not support this option.
+ Bind mounts – A file or directory on the host machine that is mounted into a container. Bind mount host volumes are supported when running tasks. To use bind mount host volumes, specify a `host` and optional `sourcePath` value in your task definition.

For more information, see [Storage options for Amazon ECS tasks](using_data_volumes.md).

The following parameters are allowed in a container definition.

`name`  
Type: String  
Required: No  
The name of the volume. Up to 255 letters (uppercase and lowercase), numbers, hyphens (`-`), and underscores (`_`) are allowed. This name is referenced in the `sourceVolume` parameter of the container definition `mountPoints` object.

`host`  
Required: No  
The `host` parameter ties the lifecycle of the bind mount to the host Amazon EC2 instance, rather than to the task, and determines where it is stored. If the `host` parameter is empty, then the Docker daemon assigns a host path for your data volume, but the data isn't guaranteed to persist after the containers that are associated with it stop running.  
Windows containers can mount whole directories on the same drive as `$env:ProgramData`.  
The `sourcePath` parameter is supported only when using tasks that are hosted on Amazon EC2 instances or Amazon ECS Managed Instances.  
`sourcePath`  
Type: String  
Required: No  
When the `host` parameter is used, specify a `sourcePath` to declare the path on the host Amazon EC2 instance that is presented to the container. If this parameter is empty, then the Docker daemon assigns a host path for you. If the `host` parameter contains a `sourcePath` file location, then the data volume persists at the specified location on the host Amazon EC2 instance until you delete it manually. If the `sourcePath` value does not exist on the host Amazon EC2 instance, the Docker daemon creates it. If the location does exist, the contents of the source path folder are exported.
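
For example, a bind mount volume that persists data at a fixed host path might look like the following (the volume name and path are placeholders):

```
"volumes": [
    {
        "name": "webdata",
        "host": {
            "sourcePath": "/ecs/webdata"
        }
    }
]
```

A container definition then references the volume by name in its `mountPoints` object, using `"sourceVolume": "webdata"`.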

`configuredAtLaunch`  
Type: Boolean  
Required: No  
Specifies whether a volume is configurable at launch. When set to `true`, you can configure the volume when running a standalone task, or when creating or updating a service. When set to `true`, you won't be able to provide another volume configuration in the task definition. This parameter must be set to `true` to configure an Amazon EBS volume for attachment to a task. Setting `configuredAtLaunch` to `true` and deferring volume configuration to the launch phase allows you to create task definitions that aren't constrained to a volume type or to specific volume settings. Doing this makes your task definition reusable across different execution environments. For more information, see [Amazon EBS volumes](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ebs-volumes.html).

`dockerVolumeConfiguration`  
Type: [DockerVolumeConfiguration](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_DockerVolumeConfiguration.html) Object  
Required: No  
This parameter is specified when using Docker volumes. Docker volumes are supported only when running tasks on EC2 instances. Windows containers support only the use of the `local` driver. To use bind mounts, specify a `host` instead.    
`scope`  
Type: String  
Valid Values: `task` | `shared`  
Required: No  
The scope for the Docker volume, which determines its lifecycle. Docker volumes that are scoped to a `task` are automatically provisioned when the task starts and destroyed when the task stops. Docker volumes that are scoped as `shared` persist after the task stops.  
`autoprovision`  
Type: Boolean  
Default value: `false`  
Required: No  
If this value is `true`, the Docker volume is created if it doesn't already exist. This field is used only if the `scope` is `shared`. If the `scope` is `task`, then this parameter must be omitted.  
`driver`  
Type: String  
Required: No  
The Docker volume driver to use. The driver value must match the driver name provided by Docker because this name is used for task placement. If the driver was installed by using the Docker plugin CLI, use `docker plugin ls` to retrieve the driver name from your container instance. If the driver was installed by using another method, use Docker plugin discovery to retrieve the driver name.  
`driverOpts`  
Type: String  
Required: No  
A map of Docker driver-specific options to pass through. This parameter maps to `DriverOpts` in the Create a volume section of Docker.  
`labels`  
Type: String  
Required: No  
Custom metadata to add to your Docker volume.
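
As an illustrative sketch, a `shared`, auto-provisioned Docker volume that uses the `local` driver could be declared as follows (the volume name is a placeholder):

```
"volumes": [
    {
        "name": "shared-cache",
        "dockerVolumeConfiguration": {
            "scope": "shared",
            "autoprovision": true,
            "driver": "local"
        }
    }
]
```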

`efsVolumeConfiguration`  
Type: [EFSVolumeConfiguration](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_EFSVolumeConfiguration.html) Object  
Required: No  
This parameter is specified when using Amazon EFS volumes.    
`fileSystemId`  
Type: String  
Required: Yes  
The Amazon EFS file system ID to use.  
`rootDirectory`  
Type: String  
Required: No  
The directory within the Amazon EFS file system to mount as the root directory inside the host. If this parameter is omitted, the root of the Amazon EFS volume will be used. Specifying `/` has the same effect as omitting this parameter.  
If an EFS access point is specified in the `authorizationConfig`, the root directory parameter must either be omitted or set to `/`, which will enforce the path set on the EFS access point.  
`transitEncryption`  
Type: String  
Valid values: `ENABLED` | `DISABLED`  
Required: No  
Specifies whether to enable encryption for Amazon EFS data in transit between the Amazon ECS host and the Amazon EFS server. If Amazon EFS IAM authorization is used, transit encryption must be enabled. If this parameter is omitted, the default value of `DISABLED` is used. For more information, see [Encrypting Data in Transit](https://docs.aws.amazon.com/efs/latest/ug/encryption-in-transit.html) in the *Amazon Elastic File System User Guide*.  
`transitEncryptionPort`  
Type: Integer  
Required: No  
The port to use when sending encrypted data between the Amazon ECS host and the Amazon EFS server. If you don't specify a transit encryption port, the task will use the port selection strategy that the Amazon EFS mount helper uses. For more information, see [EFS Mount Helper](https://docs.aws.amazon.com/efs/latest/ug/efs-mount-helper.html) in the *Amazon Elastic File System User Guide*.  
`authorizationConfig`  
Type: [EFSAuthorizationConfig](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_EFSAuthorizationConfig.html) Object  
Required: No  
The authorization configuration details for the Amazon EFS file system.    
`accessPointId`  
Type: String  
Required: No  
The access point ID to use. If an access point is specified, the root directory value in the `efsVolumeConfiguration` must either be omitted or set to `/`, which will enforce the path set on the EFS access point. If an access point is used, transit encryption must be enabled in the `EFSVolumeConfiguration`. For more information, see [Working with Amazon EFS Access Points](https://docs.aws.amazon.com/efs/latest/ug/efs-access-points.html) in the *Amazon Elastic File System User Guide*.  
`iam`  
Type: String  
Valid values: `ENABLED` | `DISABLED`  
Required: No  
Specifies whether to use the Amazon ECS task IAM role that's defined in a task definition when mounting the Amazon EFS file system. If enabled, transit encryption must be enabled in the `EFSVolumeConfiguration`. If this parameter is omitted, the default value of `DISABLED` is used. For more information, see [IAM Roles for Tasks](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html).
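
Putting these fields together, an example Amazon EFS volume that uses an access point with IAM authorization might look like the following (the file system and access point IDs are placeholders). Consistent with the rules above, `rootDirectory` is set to `/` because an access point is specified, and transit encryption is enabled because IAM authorization is used:

```
"volumes": [
    {
        "name": "efs-storage",
        "efsVolumeConfiguration": {
            "fileSystemId": "fs-1234567890abcdef0",
            "rootDirectory": "/",
            "transitEncryption": "ENABLED",
            "authorizationConfig": {
                "accessPointId": "fsap-1234567890abcdef0",
                "iam": "ENABLED"
            }
        }
    }
]
```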

`FSxWindowsFileServerVolumeConfiguration`  
Type: [FSxWindowsFileServerVolumeConfiguration](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_FSxWindowsFileServerVolumeConfiguration.html) Object  
Required: Yes  
This parameter is specified when you're using an [Amazon FSx for Windows File Server](https://docs.aws.amazon.com/fsx/latest/WindowsGuide/what-is.html) file system for task storage.    
`fileSystemId`  
Type: String  
Required: Yes  
The FSx for Windows File Server file system ID to use.  
`rootDirectory`  
Type: String  
Required: Yes  
The directory within the FSx for Windows File Server file system to mount as the root directory inside the host.  
`authorizationConfig`    
`credentialsParameter`  
Type: String  
Required: Yes  
The authorization credential options.  

**options:**
+ Amazon Resource Name (ARN) of an [AWS Secrets Manager](https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html) secret.
+ ARN of an [AWS Systems Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/integration-ps-secretsmanager.html) parameter.  
`domain`  
Type: String  
Required: Yes  
A fully qualified domain name hosted by an [AWS Directory Service for Microsoft Active Directory](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/directory_microsoft_ad.html) (AWS Managed Microsoft AD) directory or a self-hosted EC2 Active Directory.
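
An illustrative FSx for Windows File Server volume configuration might look like the following (the file system ID, secret ARN, and domain are placeholders):

```
"volumes": [
    {
        "name": "fsx-storage",
        "fsxWindowsFileServerVolumeConfiguration": {
            "fileSystemId": "fs-1234567890abcdef0",
            "rootDirectory": "share",
            "authorizationConfig": {
                "credentialsParameter": "arn:aws:secretsmanager:us-east-1:111122223333:secret:fsx-credentials",
                "domain": "corp.example.com"
            }
        }
    }
]
```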

## Tags
<a name="tags"></a>

When you register a task definition, you can optionally specify metadata tags that are applied to the task definition. Tags help you categorize and organize your task definition. Each tag consists of a key and an optional value. You define both of them. For more information, see [Tagging Amazon ECS resources](ecs-using-tags.md).

**Important**  
Don't add personally identifiable information or other confidential or sensitive information in tags. Tags are accessible to many AWS services, including billing. Tags aren't intended to be used for private or sensitive data.

The following parameters are allowed in a tag object.

`key`  
Type: String  
Required: No  
One part of a key-value pair that makes up a tag. A key is a general label that acts like a category for more specific tag values.

`value`  
Type: String  
Required: No  
The optional part of a key-value pair that makes up a tag. A value acts as a descriptor within a tag category (key).
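
For example, a task definition might be tagged with an environment label (both the key and value are arbitrary strings that you define):

```
"tags": [
    {
        "key": "environment",
        "value": "production"
    }
]
```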

## Other task definition parameters
<a name="other_task_definition_params"></a>

The following task definition parameters can be used when registering task definitions in the Amazon ECS console by using the **Configure via JSON** option. For more information, see [Creating an Amazon ECS task definition using the console](create-task-definition.md).

**Topics**
+ [

### Ephemeral storage
](#task_definition_ephemeralStorage)
+ [

### IPC mode
](#task_definition_ipcmode)
+ [

### PID mode
](#task_definition_pidmode)
+ [

### Fault injection
](#task_definition_faultInjection)

### Ephemeral storage
<a name="task_definition_ephemeralStorage"></a>

`ephemeralStorage`  
Type: [EphemeralStorage](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_EphemeralStorage.html) object  
Required: No  
The amount of ephemeral storage (in GB) to allocate for the task. This parameter is used to expand the total amount of ephemeral storage available, beyond the default amount, for tasks that are hosted on AWS Fargate. For more information, see [Use bind mounts with Amazon ECS](bind-mounts.md).  
This parameter is only supported on platform version `1.4.0` or later (Linux) or `1.0.0` or later (Windows).
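
For example, to expand a Fargate task's ephemeral storage to 100 GiB (the size shown is illustrative):

```
"ephemeralStorage": {
    "sizeInGiB": 100
}
```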

### IPC mode
<a name="task_definition_ipcmode"></a>

`ipcMode`  
This is not supported for tasks running on Fargate.  
Type: String  
Required: No  
The IPC resource namespace to use for the containers in the task. The valid values are `host`, `task`, or `none`. If `host` is specified, then all the containers that are within the tasks that specified the `host` IPC mode on the same container instance share the same IPC resources with the host Amazon EC2 instance. If `task` is specified, all the containers that are within the specified task share the same IPC resources. If `none` is specified, then IPC resources within the containers of a task are private and not shared with other containers in a task or on the container instance. If no value is specified, then the IPC resource namespace sharing depends on the Docker daemon setting on the container instance.  
If the `host` IPC mode is used, there's a heightened risk of undesired IPC namespace exposure.  
If you're setting namespaced kernel parameters that use `systemControls` for the containers in the task, the following applies to your IPC resource namespace.   
+ For tasks that use the `host` IPC mode, IPC namespace-related `systemControls` aren't supported.
+ For tasks that use the `task` IPC mode, `systemControls` that relate to the IPC namespace apply to all containers within a task.

**Note**  
This parameter is not supported for Windows containers or tasks using the Fargate launch type.

### PID mode
<a name="task_definition_pidmode"></a>

`pidMode`  
Type: String  
Valid Values: `host` | `task`  
Required: No  
The process namespace to use for the containers in the task. The valid values are `host` or `task`. On Fargate for Linux containers, the only valid value is `task`. For example, monitoring sidecars might need `pidMode` to access information about other containers running in the same task.  
If `task` is specified, all containers within the specified task share the same process namespace.  
If no value is specified, the default is a private namespace for each container. 

**Note**  
This parameter is only supported for tasks that are hosted on AWS Fargate if the tasks are using platform version `1.4.0` or later (Linux). This isn't supported for Windows containers on Fargate.

### Fault injection
<a name="task_definition_faultInjection"></a>

`enableFaultInjection`  
Type: Boolean  
Valid Values: `true` | `false`  
Required: No  
If this parameter is set to `true` in a task's payload, Amazon ECS and Fargate accept fault injection requests from the task's containers. By default, this parameter is set to `false`.

# Amazon ECS task definition parameters for Amazon EC2
<a name="task_definition_parameters_ec2"></a>

Task definitions are split into separate parts: the task family, the AWS Identity and Access Management (IAM) task role, the network mode, container definitions, volumes, task placement constraints, and capacity. The family and container definitions are required in a task definition. In contrast, the task role, network mode, volumes, task placement constraints, and capacity are optional.

You can use these parameters in a JSON file to configure your task definition.

The following are more detailed descriptions for each task definition parameter for Amazon EC2.

## Family
<a name="family_ec2"></a>

`family`  
Type: String  
Required: Yes  
When you register a task definition, you give it a family, which is similar to a name for multiple versions of the task definition, specified with a revision number. The first task definition that's registered into a particular family is given a revision of 1, and any task definitions registered after that are given a sequential revision number.

## Capacity
<a name="requires_compatibilities_ec2"></a>

When you register a task definition, you can specify the capacity that Amazon ECS should validate the task definition against. If the task definition doesn't validate against the compatibilities specified, a client exception is returned.

The following parameter is allowed in a task definition.

`requiresCompatibilities`  
Type: String array  
Required: No  
Valid Values: `EC2`   
The capacity to validate the task definition against. This initiates a check to ensure that all of the parameters that are used in the task definition meet the requirements of Amazon EC2.
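
For example, to validate a task definition against Amazon EC2:

```
"requiresCompatibilities": [ "EC2" ]
```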

## Task role
<a name="task_role_arn_ec2"></a>

`taskRoleArn`  
Type: String  
Required: No  
When you register a task definition, you can provide a task role, which is an IAM role that grants the containers in the task permission to call the AWS APIs that are specified in its associated policies on your behalf. For more information, see [Amazon ECS task IAM role](task-iam-roles.md).  
When you launch the Amazon ECS-optimized Windows Server AMI, IAM roles for tasks on Windows require that the `-EnableTaskIAMRole` option is set. Your containers must also run some configuration code to use the feature. For more information, see [Amazon EC2 Windows instance additional configuration](task-iam-roles.md#windows_task_IAM_roles).

## Task execution role
<a name="execution_role_arn_ec2"></a>

`executionRoleArn`  
Type: String  
Required: Conditional  
The Amazon Resource Name (ARN) of the task execution role that grants the Amazon ECS container agent permission to make AWS API calls on your behalf.   
The task execution IAM role is required depending on the requirements of your task. For more information, see [Amazon ECS task execution IAM role](task_execution_IAM_role.md).

## Network mode
<a name="network_mode_ec2"></a>

`networkMode`  
Type: String  
Required: No  
The Docker networking mode to use for the containers in the task. For Amazon ECS tasks that are hosted on Amazon EC2 Linux instances, the valid values are `none`, `bridge`, `awsvpc`, and `host`. If no network mode is specified, the default network mode is `bridge`. For Amazon ECS tasks hosted on Amazon EC2 Windows instances, the valid values are `default` and `awsvpc`. If no network mode is specified, the `default` network mode is used.  
If the network mode is set to `none`, the task's containers don't have external connectivity and port mappings can't be specified in the container definition.  
If the network mode is `bridge`, the task uses Docker's built-in virtual network on Linux, which runs inside each Amazon EC2 instance that hosts the task. The built-in virtual network on Linux uses the `bridge` Docker network driver.  
If the network mode is `host`, the task uses the host's network which bypasses Docker's built-in virtual network by mapping container ports directly to the ENI of the Amazon EC2 instance that hosts the task. Dynamic port mappings can’t be used in this network mode. A container in a task definition that uses this mode must specify a specific `hostPort` number. A port number on a host can’t be used by multiple tasks. As a result, you can’t run multiple tasks of the same task definition on a single Amazon EC2 instance.  
When you run tasks that use the `host` network mode, don't run containers using the root user (UID 0). As a security best practice, always use a non-root user.  
If the network mode is `awsvpc`, the task is allocated an elastic network interface, and you must specify a `NetworkConfiguration` when you create a service or run a task with the task definition. For more information, see [Amazon ECS task networking options for EC2](task-networking.md).  
If the network mode is `default`, the task uses Docker's built-in virtual network on Windows, which runs inside each Amazon EC2 instance that hosts the task. The built-in virtual network on Windows uses the `nat` Docker network driver.   
The `host` and `awsvpc` network modes offer the highest networking performance for containers because they use the Amazon EC2 network stack. With the `host` and `awsvpc` network modes, exposed container ports are mapped directly to the corresponding host port (for the `host` network mode) or the attached elastic network interface port (for the `awsvpc` network mode). Because of this, you can't use dynamic host port mappings.  
The allowable network mode depends on the underlying EC2 instance's operating system. On Linux, any network mode can be used. On Windows, only the `default` and `awsvpc` modes can be used. 
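For example, a task definition that allocates an elastic network interface to each task sets the network mode as follows.

```
"networkMode": "awsvpc"
```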

## Runtime platform
<a name="runtime-platform_ec2"></a>

`operatingSystemFamily`  
Type: String  
Required: Conditional  
Default: LINUX  
When you register a task definition, you specify the operating system family.   
The valid values are `LINUX`, `WINDOWS_SERVER_2025_FULL`, `WINDOWS_SERVER_2025_CORE`, `WINDOWS_SERVER_2022_FULL`, `WINDOWS_SERVER_2022_CORE`, `WINDOWS_SERVER_2019_FULL`, `WINDOWS_SERVER_2019_CORE`, `WINDOWS_SERVER_2016_FULL`, `WINDOWS_SERVER_2004_CORE`, and `WINDOWS_SERVER_20H2_CORE`.  
All task definitions that are used in a service must have the same value for this parameter.  
When a task definition is part of a service, this value must match the service `platformFamily` value.

`cpuArchitecture`  
Type: String  
Required: Conditional  
When you register a task definition, you can specify the CPU architecture. The valid values are `X86_64` and `ARM64`. If you don't specify a value, Amazon ECS attempts to place tasks on the available CPU architecture based on the capacity provider configuration. To ensure that tasks are placed on a specific CPU architecture, specify a value for `cpuArchitecture` in the task definition.  
All task definitions that are used in a service must have the same value for this parameter.  
When you have Linux tasks, you can set the value to `ARM64`. For more information, see [Amazon ECS task definitions for 64-bit ARM workloads](ecs-arm64.md).
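In the task definition JSON, both of these parameters are nested under the `runtimePlatform` object. For example, a Linux task that targets 64-bit ARM instances:

```
"runtimePlatform": {
    "operatingSystemFamily": "LINUX",
    "cpuArchitecture": "ARM64"
}
```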

## Task size
<a name="task_size_ec2"></a>

When you register a task definition, you can specify the total CPU and memory used for the task. This is separate from the `cpu` and `memory` values at the container definition level. For tasks that are hosted on Amazon EC2 instances, these fields are optional.

**Note**  
Task-level CPU and memory parameters are ignored for Windows containers. We recommend specifying container-level resources for Windows containers.

`cpu`  
Type: String  
Required: Conditional  
This parameter is not supported for Windows containers.
The hard limit of CPU units to present for the task. You can specify CPU values in the JSON file as a string in CPU units or virtual CPUs (vCPUs). For example, you can specify a CPU value either as `1024` in CPU units or `1 vCPU` in vCPUs. When the task definition is registered, a vCPU value is converted to an integer indicating the CPU units.  
This field is optional. If your cluster doesn't have any registered container instances with the requested CPU units available, the task fails. Supported values are between `0.125` vCPUs (`128` CPU units) and `192` vCPUs (`196608` CPU units).

`memory`  
Type: String  
Required: Conditional  
This parameter is not supported for Windows containers.
The hard limit of memory to present to the task. You can specify memory values in the task definition as a string in mebibytes (MiB) or gigabytes (GB). For example, you can specify a memory value either as `3072` in MiB or `3 GB` in GB. When the task definition is registered, a GB value is converted to an integer indicating the MiB.  
This field is optional and any value can be used. If a task-level memory value is specified, then the container-level memory value is optional. If your cluster doesn't have any registered container instances with the requested memory available, the task fails. You can maximize your resource utilization by providing your tasks as much memory as possible for a particular instance type. For more information, see [Reserving Amazon ECS Linux container instance memory](memory-management.md).
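For example, the following task-level values use the vCPU and GB shorthand. At registration, they're converted to `1024` CPU units and `3072` MiB.

```
"cpu": "1 vCPU",
"memory": "3 GB"
```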

## Container definitions
<a name="container_definitions_ec2"></a>

When you register a task definition, you must specify a list of container definitions that are passed to the Docker daemon on a container instance. The following parameters are allowed in a container definition.

**Topics**
+ [

### Standard container definition parameters
](#standard_container_definition_params_ec2)
+ [

### Advanced container definition parameters
](#advanced_container_definition_params_ec2)
+ [

### Other container definition parameters
](#other_container_definition_params_ec2)

### Standard container definition parameters
<a name="standard_container_definition_params_ec2"></a>

The following task definition parameters are either required or used in most container definitions.

**Topics**
+ [

#### Name
](#container_definition_name_ec2)
+ [

#### Image
](#container_definition_image_ec2)
+ [

#### Memory
](#container_definition_memory_ec2)
+ [

#### Port mappings
](#container_definition_portmappings_ec2)
+ [

#### Private Repository Credentials
](#container_definition_repositoryCredentials_ec2)

#### Name
<a name="container_definition_name_ec2"></a>

`name`  
Type: String  
Required: Yes  
The name of a container. Up to 255 letters (uppercase and lowercase), numbers, hyphens, and underscores are allowed. If you're linking multiple containers in a task definition, the `name` of one container can be entered in the `links` of another container to connect the containers.

#### Image
<a name="container_definition_image_ec2"></a>

`image`  
Type: String  
Required: Yes  
The image used to start a container. This string is passed directly to the Docker daemon. By default, images in the Docker Hub registry are available. You can also specify other repositories with either `repository-url/image:tag` or `repository-url/image@digest`. Up to 255 letters (uppercase and lowercase), numbers, hyphens, underscores, colons, periods, forward slashes, and number signs are allowed. This parameter maps to `Image` in the docker create-container command and the `IMAGE` parameter of the docker run command.  
+ When a new task starts, the Amazon ECS container agent pulls the latest version of the specified image and tag for the container to use. However, subsequent updates to a repository image aren't propagated to already running tasks.
+ When you don't specify a tag or digest in the image path in the task definition, the Amazon ECS container agent uses the `latest` tag to pull the specified image. 
+ Images in private registries are supported. For more information, see [Using non-AWS container images in Amazon ECS](private-auth.md).
+ Images in Amazon ECR repositories can be specified by using either the full `registry/repository:tag` or `registry/repository@digest` naming convention (for example, `aws_account_id.dkr.ecr.region.amazonaws.com/my-web-app:latest` or `aws_account_id.dkr.ecr.region.amazonaws.com/my-web-app@sha256:94afd1f2e64d908bc90dbca0035a5b567EXAMPLE`).
+ Images in official repositories on Docker Hub use a single name (for example, `ubuntu` or `mongo`).
+ Images in other repositories on Docker Hub are qualified with an organization name (for example, `amazon/amazon-ecs-agent`).
+ Images in other online repositories are qualified further by a domain name (for example, `quay.io/assemblyline/ubuntu`).

`versionConsistency`  
Type: String  
Valid values: `enabled` | `disabled`  
Required: No  
Specifies whether Amazon ECS will resolve the container image tag provided in the container definition to an image digest. By default, this behavior is `enabled`. If you set the value for a container as `disabled`, Amazon ECS will not resolve the container image tag to a digest and will use the original image URI specified in the container definition for deployment. For more information about container image resolution, see [Container image resolution](deployment-type-ecs.md#deployment-container-image-stability).
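For example, the following fragment turns off digest resolution for one container. The container name and image URI are placeholders.

```
"containerDefinitions": [
    {
        "name": "web",
        "image": "private-repo/private-image:latest",
        "versionConsistency": "disabled"
    }
]
```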

#### Memory
<a name="container_definition_memory_ec2"></a>

`memory`  
Type: Integer  
Required: No  
The amount (in MiB) of memory to present to the container. If your container attempts to exceed the memory specified here, the container is killed. The total amount of memory reserved for all containers within a task must be lower than the task `memory` value, if one is specified. This parameter maps to `Memory` in the docker create-container command and the `--memory` option to docker run.  
You must specify either a task-level memory value or a container-level memory value. If you specify both a container-level `memory` and `memoryReservation` value, the `memory` value must be greater than the `memoryReservation` value. If you specify `memoryReservation`, then that value is subtracted from the available memory resources for the container instance that the container is placed on. Otherwise, the value of `memory` is used.  
The Docker 20.10.0 or later daemon reserves a minimum of 6 MiB of memory for a container. So, don't specify less than 6 MiB of memory for your containers.  
The Docker 19.03.13-ce or earlier daemon reserves a minimum of 4 MiB of memory for a container. So, don't specify less than 4 MiB of memory for your containers.  
If you're trying to maximize your resource utilization by providing your tasks as much memory as possible for a particular instance type, see [Reserving Amazon ECS Linux container instance memory](memory-management.md).

`memoryReservation`  
Type: Integer  
Required: No  
The soft limit (in MiB) of memory to reserve for the container. When system memory is under contention, Docker attempts to keep the container memory to this soft limit. However, your container can use more memory when needed. The container can use up to the hard limit that's specified with the `memory` parameter (if applicable) or all of the available memory on the container instance, whichever comes first. This parameter maps to `MemoryReservation` in the docker create-container command and the `--memory-reservation` option to docker run.  
If a task-level memory value isn't specified, you must specify a non-zero integer for one or both of `memory` or `memoryReservation` in a container definition. If you specify both, `memory` must be greater than `memoryReservation`. If you specify `memoryReservation`, then that value is subtracted from the available memory resources for the container instance that the container is placed on. Otherwise, the value of `memory` is used.  
For example, suppose that your container normally uses 128 MiB of memory, but occasionally bursts to 256 MiB of memory for short periods of time. You can set a `memoryReservation` of 128 MiB, and a `memory` hard limit of 300 MiB. This configuration allows the container to only reserve 128 MiB of memory from the remaining resources on the container instance. At the same time, this configuration also allows the container to use more memory resources when needed.  
This parameter isn't supported for Windows containers.
The Docker 20.10.0 or later daemon reserves a minimum of 6 MiB of memory for a container. So, don't specify less than 6 MiB of memory for your containers.  
The Docker 19.03.13-ce or earlier daemon reserves a minimum of 4 MiB of memory for a container. So, don't specify less than 4 MiB of memory for your containers.  
If you're trying to maximize your resource utilization by providing your tasks as much memory as possible for a particular instance type, see [Reserving Amazon ECS Linux container instance memory](memory-management.md).
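The burst scenario described above can be written as the following container definition fragment: the container reserves 128 MiB but can burst up to the 300 MiB hard limit.

```
"containerDefinitions": [
    {
        "memoryReservation": 128,
        "memory": 300
    }
]
```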

#### Port mappings
<a name="container_definition_portmappings_ec2"></a>

`portMappings`  
Type: Object array  
Required: No  
Port mappings expose your container's network ports to the outside world so that clients can access your application. They're also used for inter-container communication within the same task.  
For task definitions that use the `awsvpc` network mode, specify only the `containerPort`. The `hostPort` can be left blank or set to the same value as the `containerPort`; the container port is exposed on the task's elastic network interface.  
Port mappings on Windows use the `NetNAT` gateway address rather than `localhost`. There's no loopback for port mappings on Windows, so you can't access a container's mapped port from the host itself.   
Most fields of this parameter (including `containerPort`, `hostPort`, and `protocol`) map to `PortBindings` in the docker create-container command and the `--publish` option to docker run. If the network mode of a task definition is set to `host`, host ports must either be undefined or match the container port in the port mapping.  
After a task reaches the `RUNNING` status, manual and automatic host and container port assignments are visible in the following locations:  
+ Console: The **Network Bindings** section of a container description for a selected task.
+ AWS CLI: The `networkBindings` section of the **describe-tasks** command output.
+ API: The `DescribeTasks` response.
+ Metadata: The task metadata endpoint.  
`appProtocol`  
Type: String  
Required: No  
The application protocol that's used for the port mapping. This parameter only applies to Service Connect. We recommend that you set this parameter to be consistent with the protocol that your application uses. If you set this parameter, Amazon ECS adds protocol-specific connection handling to the Service Connect proxy and protocol-specific telemetry in the Amazon ECS console and CloudWatch.  
If you don't set a value for this parameter, then TCP is used. However, Amazon ECS doesn't add protocol-specific telemetry for TCP.  
For more information, see [Use Service Connect to connect Amazon ECS services with short names](service-connect.md).  
Valid protocol values: `"http" | "http2" | "grpc" `  
`containerPort`  
Type: Integer  
Required: Yes, when `portMappings` are used  
The port number on the container that's bound to the user-specified or automatically assigned host port.  
For tasks that use the `awsvpc` network mode, you use `containerPort` to specify the exposed ports.  
Suppose that you're using containers in a task with an EC2 capacity provider and you specify a container port and not a host port. Then, your container automatically receives a host port in the ephemeral port range. For more information, see `hostPort`. Port mappings that are automatically assigned in this way don't count toward the 100 reserved ports quota of a container instance.  
`containerPortRange`  
Type: String  
Required: No  
The port number range on the container that's bound to the dynamically mapped host port range.   
You can only set this parameter by using the `register-task-definition` API. The option is available in the `portMappings` parameter. For more information, see [register-task-definition](https://docs.aws.amazon.com/cli/latest/reference/ecs/register-task-definition.html) in the *AWS Command Line Interface Reference*.  
The following rules apply when you specify a `containerPortRange`:  
+ You must use either the `bridge` network mode or the `awsvpc` network mode.
+ This parameter is available for both the Linux and Windows operating systems.
+ The container instance must have at least version 1.67.0 of the container agent and at least version 1.67.0-1 of the `ecs-init` package.
+ You can specify a maximum of 100 port ranges for each container.
+ You don't specify a `hostPortRange`. The value of the `hostPortRange` is set as follows:
  + For containers in a task with the `awsvpc` network mode, the `hostPort` is set to the same value as the `containerPort`. This is a static mapping strategy.
  + For containers in a task with the `bridge` network mode, the Amazon ECS agent finds open host ports from the default ephemeral range and passes it to docker to bind them to the container ports.
+ The `containerPortRange` valid values are between 1 and 65535.
+ A port can only be included in one port mapping for each container.
+ You can't specify overlapping port ranges.
+ The first port in the range must be less than the last port in the range.
+ Docker recommends that you turn off the docker-proxy in the Docker daemon config file when you have a large number of ports.

  For more information, see [Issue #11185](https://github.com/moby/moby/issues/11185) on GitHub.

  For information about how to turn off the docker-proxy in the Docker daemon config file, see [Docker daemon](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/bootstrap_container_instance.html#bootstrap_docker_daemon) in the *Amazon ECS Developer Guide*.
You can call [DescribeTasks](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_DescribeTasks.html) to view the `hostPortRange`, which are the host ports that are bound to the container ports.  
The port ranges aren't included in the Amazon ECS task events, which are sent to EventBridge. For more information, see [Automate responses to Amazon ECS errors using EventBridge](cloudwatch_event_stream.md).  
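For example, the following mapping binds a container port range to a dynamically assigned host port range. The range `8000-8010` is illustrative.

```
"portMappings": [
    {
        "containerPortRange": "8000-8010",
        "protocol": "tcp"
    }
]
```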
`hostPortRange`  
Type: String  
Required: No  
The port number range on the host that's used with the network binding. This is assigned by Docker and delivered by the Amazon ECS agent.  
`hostPort`  
Type: Integer  
Required: No  
The port number on the container instance to reserve for your container.  
You can specify a non-reserved host port for your container port mapping. This is referred to as *static* host port mapping. Or, you can omit the `hostPort` (or set it to `0`) while specifying a `containerPort`. Your container automatically receives a port in the ephemeral port range for your container instance operating system and Docker version. This is referred to as *dynamic* host port mapping.  
The default ephemeral port range for Docker version 1.6.0 and later is listed on the instance under `/proc/sys/net/ipv4/ip_local_port_range`. If this kernel parameter is unavailable, the default ephemeral port range of `49153–65535` is used. Don't attempt to specify a host port in the ephemeral port range, because these ports are reserved for automatic assignment. In general, ports under `32768` are outside of the ephemeral port range. You can use the `ECS_DYNAMIC_HOST_PORT_RANGE` setting in the ECS container agent configuration to specify a custom range for dynamically assigned host ports. This can be helpful if your tasks fail to start because of port conflicts with other processes on the container instance, such as outbound connections that also use ports from the ephemeral port range. For more information, see [Amazon ECS container agent configuration](ecs-agent-config.md).  
The default reserved ports are `22` for SSH, the Docker ports `2375` and `2376`, and the Amazon ECS container agent ports `51678-51680`. Any host port that was previously user-specified for a running task is also reserved while the task is running. After a task stops, the host port is released. The current reserved ports are displayed in the `remainingResources` of the **describe-container-instances** output. A container instance might have up to 100 reserved ports at a time, including the default reserved ports. Automatically assigned ports don't count toward the 100 reserved ports quota.  
`name`  
Type: String  
Required: No, required for Service Connect and VPC Lattice to be configured in a service  
The name that's used for the port mapping. This parameter only applies to Service Connect and VPC Lattice. This parameter is the name that you use in the Service Connect and VPC Lattice configuration of a service.  
For more information, see [Use Service Connect to connect Amazon ECS services with short names](service-connect.md).  
In the following example, both of the required fields for Service Connect and VPC Lattice are used.  

```
"portMappings": [
    {
        "name": string,
        "containerPort": integer
    }
]
```  
`protocol`  
Type: String  
Required: No  
The protocol that's used for the port mapping. Valid values are `tcp` and `udp`. The default is `tcp`.  
Only `tcp` is supported for Service Connect. Remember that `tcp` is implied if this field isn't set. 
UDP support is only available on container instances that were launched with version 1.2.0 of the Amazon ECS container agent (such as the `amzn-ami-2015.03.c-amazon-ecs-optimized` AMI) or later, or with container agents that have been updated to version 1.3.0 or later. To update your container agent to the latest version, see [Updating the Amazon ECS container agent](ecs-agent-update.md).
If you're specifying a host port, use the following syntax.  

```
"portMappings": [
    {
        "containerPort": integer,
        "hostPort": integer
    }
    ...
]
```
If you want an automatically assigned host port, use the following syntax.  

```
"portMappings": [
    {
        "containerPort": integer
    }
    ...
]
```

#### Private Repository Credentials
<a name="container_definition_repositoryCredentials_ec2"></a>

`repositoryCredentials`  
Type: [RepositoryCredentials](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_RepositoryCredentials.html) object  
Required: No  
The repository credentials for private registry authentication.  
For more information, see [Using non-AWS container images in Amazon ECS](private-auth.md).    
 `credentialsParameter`  
Type: String  
Required: Yes, when `repositoryCredentials` are used  
The Amazon Resource Name (ARN) of the secret containing the private repository credentials.  
For more information, see [Using non-AWS container images in Amazon ECS](private-auth.md).  
When you use the Amazon ECS API, AWS CLI, or AWS SDKs, if the secret exists in the same Region as the task that you're launching, you can use either the full ARN or the name of the secret. When you use the AWS Management Console, you must specify the full ARN of the secret.  
The following is a snippet of a task definition that shows the required parameters:  

```
"containerDefinitions": [
    {
        "image": "private-repo/private-image",
        "repositoryCredentials": {
            "credentialsParameter": "arn:aws:secretsmanager:region:aws_account_id:secret:secret_name"
        }
    }
]
```

### Advanced container definition parameters
<a name="advanced_container_definition_params_ec2"></a>

The following advanced container definition parameters provide extended capabilities to the docker run command that's used to launch containers on your Amazon ECS container instances.

**Topics**
+ [

#### Restart policy
](#container_definition_restart_policy_ec2)
+ [

#### Health check
](#container_definition_healthcheck_ec2)
+ [

#### Environment
](#container_definition_environment_ec2)
+ [

#### Network settings
](#container_definition_network_ec2)
+ [

#### Storage and logging
](#container_definition_storage_ec2)
+ [

#### Security
](#container_definition_security_ec2)
+ [

#### Resource limits
](#container_definition_limits_ec2)
+ [

#### Docker labels
](#container_definition_labels_ec2)

#### Restart policy
<a name="container_definition_restart_policy_ec2"></a>

`restartPolicy`  
The container restart policy and associated configuration parameters. When you set up a restart policy for a container, Amazon ECS can restart the container without needing to replace the task. For more information, see [Restart individual containers in Amazon ECS tasks with container restart policies](container-restart-policy.md).    
`enabled`  
Type: Boolean  
Required: Yes  
Specifies whether a restart policy is enabled for the container.  
`ignoredExitCodes`  
Type: Integer array  
Required: No  
A list of exit codes that Amazon ECS will ignore and not attempt a restart on. You can specify a maximum of 50 container exit codes. By default, Amazon ECS does not ignore any exit codes.  
`restartAttemptPeriod`  
Type: Integer  
Required: No  
A period of time (in seconds) that the container must run for before a restart can be attempted. A container can be restarted only once every `restartAttemptPeriod` seconds. If a container isn't able to run for this time period and exits early, it will not be restarted. You can set a minimum `restartAttemptPeriod` of 60 seconds and a maximum `restartAttemptPeriod` of 1800 seconds. By default, a container must run for 300 seconds before it can be restarted.
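Putting these parameters together, a restart policy that ignores a clean exit (code 0) and allows a restart after 60 seconds looks like the following. The values are illustrative.

```
"restartPolicy": {
    "enabled": true,
    "ignoredExitCodes": [ 0 ],
    "restartAttemptPeriod": 60
}
```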

#### Health check
<a name="container_definition_healthcheck_ec2"></a>

`healthCheck`  
The container health check command and the associated configuration parameters for the container. For more information, see [Determine Amazon ECS task health using container health checks](healthcheck.md).    
`command`  
A string array that represents the command that the container runs to determine if it's healthy. The string array can start with `CMD` to run the command arguments directly, or `CMD-SHELL` to run the command with the container's default shell. If neither is specified, `CMD` is used.  
When registering a task definition in the AWS Management Console, use a comma separated list of commands. These commands are converted to a string after the task definition is created. An example input for a health check is the following.  

```
CMD-SHELL, curl -f http://localhost/ || exit 1
```
When registering a task definition using the AWS Management Console JSON panel, the AWS CLI, or the APIs, enclose the list of commands in brackets. An example input for a health check is the following.  

```
[ "CMD-SHELL", "curl -f http://localhost/ || exit 1" ]
```
An exit code of 0, with no `stderr` output, indicates success, and a non-zero exit code indicates failure.   
`interval`  
The period of time (in seconds) between each health check. You can specify between 5 and 300 seconds. The default value is 30 seconds.  
`timeout`  
The period of time (in seconds) to wait for a health check to succeed before it's considered a failure. You can specify between 2 and 60 seconds. The default value is 5 seconds.  
`retries`  
The number of times to retry a failed health check before the container is considered unhealthy. You can specify between 1 and 10 retries. The default value is three retries.  
`startPeriod`  
The optional grace period to provide containers time to bootstrap in before failed health checks count towards the maximum number of retries. You can specify between 0 and 300 seconds. By default, `startPeriod` is disabled.  
If a health check succeeds within the `startPeriod`, then the container is considered healthy and any subsequent failures count toward the maximum number of retries.
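Combined, a health check that curls the local web server every 30 seconds, with a 10-second bootstrap grace period, looks like the following. The timing values are illustrative.

```
"healthCheck": {
    "command": [ "CMD-SHELL", "curl -f http://localhost/ || exit 1" ],
    "interval": 30,
    "timeout": 5,
    "retries": 3,
    "startPeriod": 10
}
```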

#### Environment
<a name="container_definition_environment_ec2"></a>

`cpu`  
Type: Integer  
Required: No  
The number of `cpu` units the Amazon ECS container agent reserves for the container. On Linux, this parameter maps to `CpuShares` in the [Create a container](https://docs.docker.com/reference/api/engine/version/v1.38/#operation/ContainerCreate) section.  
You can determine the number of CPU units that are available to each Amazon EC2 instance type. To do this, multiply the number of vCPUs listed for that instance type on the [Amazon EC2 Instances](http://aws.amazon.com/ec2/instance-types/) detail page by 1,024.
Linux containers share unallocated CPU units with other containers on the container instance with the same ratio as their allocated amount. For example, assume that you run a single-container task on a single-core instance type with 512 CPU units specified for that container. Moreover, that task is the only task running on the container instance. In this example, the container can use the full 1,024 CPU unit share at any given time. However, assume then that you launched another copy of the same task on that container instance. Each task is guaranteed a minimum of 512 CPU units when needed. Similarly, if the other container isn't using the remaining CPU, each container can float to higher CPU usage. However, if both tasks were 100% active all of the time, they are limited to 512 CPU units.  
On Linux container instances, the Docker daemon on the container instance uses the CPU value to calculate the relative CPU share ratios for running containers. The minimum valid CPU share value that the Linux kernel allows is 2, and the maximum valid CPU share value that the Linux kernel allows is 262144. However, the CPU parameter isn't required, and you can use CPU values below two and above 262144 in your container definitions. For CPU values below two (including null) and above 262144, the behavior varies based on your Amazon ECS container agent version:  
+ **Agent versions <= 1.1.0:** Null and zero CPU values are passed to Docker as 0. Docker then converts this value to 1,024 CPU shares. CPU values of one are passed to Docker as one, which the Linux kernel converts to two CPU shares.
+ **Agent versions >= 1.2.0:** Null, zero, and CPU values of one are passed to Docker as two CPU shares.
+ **Agent versions >= 1.84.0:** CPU values greater than 256 vCPU are passed to Docker as 256, which is equivalent to 262144 CPU shares.
On Windows container instances, the CPU quota is enforced as an absolute quota. Windows containers only have access to the specified amount of CPU that's defined in the task definition. A null or zero CPU value is passed to Docker as `0`. Windows then interprets this value as 1% of one CPU.  
For more examples, see [How Amazon ECS manages CPU and memory resources](https://aws.amazon.com/blogs/containers/how-amazon-ecs-manages-cpu-and-memory-resources/).

`gpu`  
Type: [ResourceRequirement](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_ResourceRequirement.html) object  
Required: No  
The number of physical `GPUs` that the Amazon ECS container agent reserves for the container. The number of GPUs reserved for all containers in a task must not exceed the number of available GPUs on the container instance the task is launched on. For more information, see [Amazon ECS task definitions for GPU workloads](ecs-gpu.md).  
This parameter isn't supported for Windows containers.

`Elastic Inference accelerator`  
Type: [ResourceRequirement](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_ResourceRequirement.html) object  
Required: No  
For the `InferenceAccelerator` type, the `value` matches the `deviceName` for an `InferenceAccelerator` specified in a task definition. For more information, see [Elastic Inference accelerator name (deprecated)](task_definition_parameters.md#elastic-Inference-accelerator).  
This parameter isn't supported for Windows containers.

`essential`  
Type: Boolean  
Required: No  
Suppose that the `essential` parameter of a container is marked as `true`, and that container fails or stops for any reason. Then, all other containers that are part of the task are stopped. If the `essential` parameter of a container is marked as `false`, then its failure doesn't affect the rest of the containers in a task. If this parameter is omitted, a container is assumed to be essential.  
All tasks must have at least one essential container. Suppose that you have an application that's composed of multiple containers. Then, group containers that are used for a common purpose into components, and separate the different components into multiple task definitions. For more information, see [Architect your application for Amazon ECS](application_architecture.md).  

```
"essential": true|false
```

`entryPoint`  
Early versions of the Amazon ECS container agent don't properly handle `entryPoint` parameters. If you have problems using `entryPoint`, update your container agent or enter your commands and arguments as `command` array items instead.
Type: String array  
Required: No  
The entry point that's passed to the container.   

```
"entryPoint": ["string", ...]
```

`command`  
Type: String array  
Required: No  
The command that's passed to the container. This parameter maps to `Cmd` in the docker create-container command and the `COMMAND` parameter to docker run. If there are multiple arguments, make sure that each argument is a separate string in the array.  

```
"command": ["string", ...]
```

`workingDirectory`  
Type: String  
Required: No  
The working directory in which to run commands inside the container. This parameter maps to `WorkingDir` in the [Create a container](https://docs.docker.com/reference/api/engine/version/v1.38/#operation/ContainerCreate) section of the [Docker Remote API](https://docs.docker.com/reference/api/engine/version/v1.38/) and the `--workdir` option to [docker run](https://docs.docker.com/reference/cli/docker/container/run/).  

```
"workingDirectory": "string"
```

`environmentFiles`  
Type: Object array  
Required: No  
A list of files containing the environment variables to pass to a container. This parameter maps to the `--env-file` option to the docker run command.  
When FIPS is enabled, bucket names that have periods (.) (for example, amzn-s3-demo-bucket1.name.example) aren't supported. Having periods (.) in the bucket name prevents the task from starting because the agent can't pull the environment variable file from Amazon S3.  
This isn't available for Windows containers.  
You can specify up to 10 environment files. The file must have a `.env` file extension. Each line in an environment file contains an environment variable in `VARIABLE=VALUE` format. Lines that start with `#` are treated as comments and are ignored.   
If there are individual environment variables specified in the container definition, they take precedence over the variables contained within an environment file. If multiple environment files are specified that contain the same variable, they're processed from the top down. We recommend that you use unique variable names. For more information, see [Pass an individual environment variable to an Amazon ECS container](taskdef-envfiles.md).    
`value`  
Type: String  
Required: Yes  
The Amazon Resource Name (ARN) of the Amazon S3 object containing the environment variable file.  
`type`  
Type: String  
Required: Yes  
The file type to use. The only supported value is `s3`.
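As a sketch, an environment file stored in Amazon S3 might be referenced as follows (the bucket and object names are placeholders):

```
"environmentFiles": [
    {
        "value": "arn:aws:s3:::amzn-s3-demo-bucket/app.env",
        "type": "s3"
    }
]
```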

`environment`  
Type: Object array  
Required: No  
The environment variables to pass to a container. This parameter maps to `Env` in the docker create-container command and the `--env` option to the docker run command.  
We do not recommend using plaintext environment variables for sensitive information, such as credential data.  
`name`  
Type: String  
Required: Yes, when `environment` is used  
The name of the environment variable.  
`value`  
Type: String  
Required: Yes, when `environment` is used  
The value of the environment variable.

```
"environment" : [
    { "name" : "string", "value" : "string" },
    { "name" : "string", "value" : "string" }
]
```

`secrets`  
Type: Object array  
Required: No  
An object that represents the secret to expose to your container. For more information, see [Pass sensitive data to an Amazon ECS container](specifying-sensitive-data.md).    
`name`  
Type: String  
Required: Yes  
The value to set as the environment variable on the container.  
`valueFrom`  
Type: String  
Required: Yes  
The secret to expose to the container. The supported values are either the full Amazon Resource Name (ARN) of the AWS Secrets Manager secret or the full ARN of the parameter in the AWS Systems Manager Parameter Store.  
If the Systems Manager Parameter Store parameter or Secrets Manager secret exists in the same AWS Region as the task that you're launching, you can use either the full ARN or the name of the secret. If it exists in a different Region, you must specify the full ARN.

```
"secrets": [
    {
        "name": "environment_variable_name",
        "valueFrom": "arn:aws:ssm:region:aws_account_id:parameter/parameter_name"
    }
]
```

#### Network settings
<a name="container_definition_network_ec2"></a>

`disableNetworking`  
Type: Boolean  
Required: No  
When this parameter is true, networking is off within the container.  
This parameter isn't supported for Windows containers or tasks using the `awsvpc` network mode.
The default is `false`.  

```
"disableNetworking": true|false
```

`links`  
Type: String array  
Required: No  
The `links` parameter allows containers to communicate with each other without the need for port mappings. This parameter is only supported if the network mode of a task definition is set to `bridge`. The `name:internalName` construct is analogous to `name:alias` in Docker links. Up to 255 letters (uppercase and lowercase), numbers, hyphens, and underscores are allowed.  
This parameter isn't supported for Windows containers or tasks using the `awsvpc` network mode.
Containers that are collocated on the same container instance might communicate with each other without requiring links or host port mappings. The network isolation on a container instance is controlled by security groups and VPC settings.

```
"links": ["name:internalName", ...]
```

`hostname`  
Type: String  
Required: No  
The hostname to use for your container. This parameter maps to `Hostname` in the docker create-container command and the `--hostname` option to docker run.  
If you're using the `awsvpc` network mode, the `hostname` parameter isn't supported.

```
"hostname": "string"
```

`dnsServers`  
Type: String array  
Required: No  
A list of DNS servers that are presented to the container.  
This parameter isn't supported for Windows containers or tasks using the `awsvpc` network mode.

```
"dnsServers": ["string", ...]
```

`dnsSearchDomains`  
Type: String array  
Required: No  
Pattern: `^[a-zA-Z0-9-.]{0,253}[a-zA-Z0-9]$`  
A list of DNS search domains that are presented to the container. This parameter maps to `DnsSearch` in the docker create-container command and the `--dns-search` option to docker run.  
This parameter isn't supported for Windows containers or tasks that use the `awsvpc` network mode.

```
"dnsSearchDomains": ["string", ...]
```

`extraHosts`  
Type: Object array  
Required: No  
A list of hostnames and IP address mappings to append to the `/etc/hosts` file on the container.   
This parameter maps to `ExtraHosts` in the docker create-container command and the `--add-host` option to docker run.  
This parameter isn't supported for Windows containers or tasks that use the `awsvpc` network mode.

```
"extraHosts": [
      {
        "hostname": "string",
        "ipAddress": "string"
      }
      ...
    ]
```  
`hostname`  
Type: String  
Required: Yes, when `extraHosts` are used  
The hostname to use in the `/etc/hosts` entry.  
`ipAddress`  
Type: String  
Required: Yes, when `extraHosts` are used  
The IP address to use in the `/etc/hosts` entry.

#### Storage and logging
<a name="container_definition_storage_ec2"></a>

`readonlyRootFilesystem`  
Type: Boolean  
Required: No  
When this parameter is true, the container is given read-only access to its root file system. This parameter maps to `ReadonlyRootfs` in the docker create-container command and the `--read-only` option to docker run.  
This parameter is not supported for Windows containers.
The default is `false`.  

```
"readonlyRootFilesystem": true|false
```

`mountPoints`  
Type: Object array  
Required: No  
The mount points for the data volumes in your container. This parameter maps to `Volumes` in the docker create-container command and the `--volume` option to docker run.  
Windows containers can mount whole directories on the same drive as `$env:ProgramData`. Windows containers cannot mount directories on a different drive, and mount points cannot be used across drives. You must specify mount points to attach an Amazon EBS volume directly to an Amazon ECS task.    
`sourceVolume`  
Type: String  
Required: Yes, when `mountPoints` are used  
The name of the volume to mount.  
`containerPath`  
Type: String  
Required: Yes, when `mountPoints` are used  
The path in the container where the volume will be mounted.  
`readOnly`  
Type: Boolean  
Required: No  
If this value is `true`, the container has read-only access to the volume. If this value is `false`, then the container can write to the volume. The default value is `false`.  
For tasks that run the Windows operating system, leave the value as the default of `false`.
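The `mountPoints` entries take the following form in a container definition:

```
"mountPoints": [
    {
        "sourceVolume": "string",
        "containerPath": "string",
        "readOnly": true|false
    }
]
```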

`volumesFrom`  
Type: Object array  
Required: No  
Data volumes to mount from another container. This parameter maps to `VolumesFrom` in the docker create-container command and the `--volumes-from` option to docker run.    
`sourceContainer`  
Type: String  
Required: Yes, when `volumesFrom` is used  
The name of the container to mount volumes from.  
`readOnly`  
Type: Boolean  
Required: No  
If this value is `true`, the container has read-only access to the volume. If this value is `false`, then the container can write to the volume. The default value is `false`.

```
"volumesFrom": [
                {
                  "sourceContainer": "string",
                  "readOnly": true|false
                }
              ]
```

`logConfiguration`  
Type: [LogConfiguration](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_LogConfiguration.html) Object  
Required: No  
The log configuration specification for the container.  
For example task definitions that use a log configuration, see [Example Amazon ECS task definitions](example_task_definitions.md).  
This parameter maps to `LogConfig` in the docker create-container command and the `--log-driver` option to docker run. By default, containers use the same logging driver that the Docker daemon uses. However, the container might use a different logging driver than the Docker daemon by specifying a log driver with this parameter in the container definition. To use a different logging driver for a container, the log system must be configured properly on the container instance (or on a different log server for remote logging options).   
Consider the following when specifying a log configuration for your containers:  
+ Amazon ECS supports a subset of the logging drivers that are available to the Docker daemon.
+ This parameter requires version 1.18 or later of the Docker Remote API on your container instance.
+ The Amazon ECS container agent that runs on a container instance must register the logging drivers that are available on that instance with the `ECS_AVAILABLE_LOGGING_DRIVERS` environment variable before containers that are placed on that instance can use these log configuration options. For more information, see [Amazon ECS container agent configuration](ecs-agent-config.md).

```
"logConfiguration": {
      "logDriver": "awslogs","fluentd","gelf","json-file","journald","splunk","syslog","awsfirelens",
      "options": {"string": "string"
        ...},
	"secretOptions": [{
		"name": "string",
		"valueFrom": "string"
	}]
}
```  
`logDriver`  
Type: String  
Valid values: `"awslogs","fluentd","gelf","json-file","journald","splunk","syslog","awsfirelens"`  
Required: Yes, when `logConfiguration` is used  
The log driver to use for the container. By default, the valid values that are listed earlier are log drivers that the Amazon ECS container agent can communicate with.  
The supported log drivers are `awslogs`, `fluentd`, `gelf`, `json-file`, `journald`, `syslog`, `splunk`, and `awsfirelens`.  
For more information about how to use the `awslogs` log driver in task definitions to send your container logs to CloudWatch Logs, see [Send Amazon ECS logs to CloudWatch](using_awslogs.md).  
For more information about using the `awsfirelens` log driver, see [Custom Log Routing](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_firelens.html).  
If you have a custom driver that isn't listed, you can fork the Amazon ECS container agent project that's [available on GitHub](https://github.com/aws/amazon-ecs-agent) and customize it to work with that driver. We encourage you to submit pull requests for changes that you want to have included. However, we don't currently support running modified copies of this software.
This parameter requires version 1.18 of the Docker Remote API or greater on your container instance.  
`options`  
Type: String to string map  
Required: No  
The key/value map of configuration options to send to the log driver.  
The options that you can specify depend on the log driver. Some of the options that you can specify when you use the `awslogs` log driver to route logs to Amazon CloudWatch Logs include the following:    
`awslogs-create-group`  
Required: No  
Specify whether you want the log group to be created automatically. If this option isn't specified, it defaults to `false`.  
Your IAM policy must include the `logs:CreateLogGroup` permission before you attempt to use `awslogs-create-group`.  
`awslogs-region`  
Required: Yes  
Specify the AWS Region that the `awslogs` log driver sends your Docker logs to. You can choose to send all of your logs from clusters in different Regions to a single Region in CloudWatch Logs so that they're all visible in one location, or you can separate them by Region for more granularity. Make sure that the specified log group exists in the Region that you specify with this option.  
`awslogs-group`  
Required: Yes  
Make sure to specify a log group that the `awslogs` log driver sends its log streams to.  
`awslogs-stream-prefix`  
Required: No  
Use the `awslogs-stream-prefix` option to associate a log stream with the specified prefix, the container name, and the ID of the Amazon ECS task that the container belongs to. If you specify a prefix with this option, then the log stream takes the following format.  

```
prefix-name/container-name/ecs-task-id
```
If you don't specify a prefix with this option, then the log stream is named after the container ID that's assigned by the Docker daemon on the container instance. Because it's difficult to trace logs back to the container that sent them with just the Docker container ID (which is only available on the container instance), we recommend that you specify a prefix with this option.  
For Amazon ECS services, you can use the service name as the prefix. Doing so, you can trace log streams to the service that the container belongs to, the name of the container that sent them, and the ID of the task that the container belongs to.  
You must specify a stream prefix for your logs to appear in the Logs pane when using the Amazon ECS console.  
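Putting these options together, a minimal `awslogs` configuration might look like the following (the log group name, Region, and prefix are placeholders):

```
"logConfiguration": {
    "logDriver": "awslogs",
    "options": {
        "awslogs-group": "/ecs/my-app",
        "awslogs-region": "us-east-1",
        "awslogs-stream-prefix": "web",
        "awslogs-create-group": "true"
    }
}
```
With this configuration, log streams take the form `web/container-name/ecs-task-id`.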
`awslogs-datetime-format`  
Required: No  
This option defines a multiline start pattern in Python `strftime` format. A log message consists of a line that matches the pattern and any following lines that don’t match the pattern. The matched line is the delimiter between log messages.  
One example of a use case for using this format is for parsing output such as a stack dump, which might otherwise be logged in multiple entries. The correct pattern allows it to be captured in a single entry.  
For more information, see [awslogs-datetime-format](https://docs.docker.com/engine/logging/drivers/awslogs/#awslogs-datetime-format).  
You cannot configure both the `awslogs-datetime-format` and `awslogs-multiline-pattern` options.  
Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance.  
`awslogs-multiline-pattern`  
Required: No  
This option defines a multiline start pattern that uses a regular expression. A log message consists of a line that matches the pattern and any following lines that don’t match the pattern. The matched line is the delimiter between log messages.  
For more information, see [awslogs-multiline-pattern](https://docs.docker.com/engine/logging/drivers/awslogs/#awslogs-multiline-pattern).  
This option is ignored if `awslogs-datetime-format` is also configured.  
You cannot configure both the `awslogs-datetime-format` and `awslogs-multiline-pattern` options.  
Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance.
The following options apply to all supported log drivers.    
`mode`  
Required: No  
Valid values: `non-blocking` | `blocking`  
This option defines the delivery mode of log messages from the container to the log driver specified using `logDriver`. The delivery mode you choose affects application availability when the flow of logs from the container is interrupted.  
If you use the `blocking` mode and the flow of logs is interrupted, calls from container code to write to the `stdout` and `stderr` streams block, which in turn blocks the application's logging thread. This can cause the application to become unresponsive and lead to container health check failures.   
If you use the `non-blocking` mode, the container's logs are instead stored in an in-memory intermediate buffer configured with the `max-buffer-size` option. This prevents the application from becoming unresponsive when logs cannot be sent. We recommend using this mode if you want to ensure service availability and are okay with some log loss. For more information, see [Preventing log loss with non-blocking mode in the `awslogs` container log driver](https://aws.amazon.com/blogs/containers/preventing-log-loss-with-non-blocking-mode-in-the-awslogs-container-log-driver/).  
You can set a default `mode` for all containers in a specific AWS Region by using the `defaultLogDriverMode` account setting. If you don't specify the `mode` option in the `logConfiguration` or configure the account setting, Amazon ECS will default to `non-blocking` mode. For more information about the account setting, see [Default log driver mode](ecs-account-settings.md#default-log-driver-mode).  
When `non-blocking` mode is used, the `max-buffer-size` log option controls the size of the buffer that's used for intermediate message storage. Make sure to specify an adequate buffer size based on your application. The total amount of memory allocated at the task level should be greater than the amount of memory that's allocated for all the containers in addition to the log driver memory buffer.  
On June 25, 2025, Amazon ECS changed the default log driver mode from `blocking` to `non-blocking` to prioritize task availability over logging. To continue using the `blocking` mode after this change, do one of the following:  
+ Set the `mode` option in your container definition's `logConfiguration` as `blocking`.
+ Set the `defaultLogDriverMode` account setting to `blocking`.  
`max-buffer-size`  
Required: No  
Default value: `10 m`  
When `non-blocking` mode is used, the `max-buffer-size` log option controls the size of the buffer that's used for intermediate message storage. Make sure to specify an adequate buffer size based on your application. When the buffer fills up, further logs cannot be stored. Logs that cannot be stored are lost. 
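For example, to opt in to `non-blocking` delivery with a larger buffer, you might set both options in the log configuration (the log group, Region, and buffer size shown are illustrative):

```
"logConfiguration": {
    "logDriver": "awslogs",
    "options": {
        "awslogs-group": "/ecs/my-app",
        "awslogs-region": "us-east-1",
        "mode": "non-blocking",
        "max-buffer-size": "25m"
    }
}
```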
To route logs using the `splunk` log driver, you must specify a `splunk-token` and a `splunk-url`.  
When you use the `awsfirelens` log router to route logs to an AWS service or AWS Partner Network destination for log storage and analytics, you can set the `log-driver-buffer-limit` option to limit the number of log lines that are buffered in memory before being sent to the log router container. This can help to resolve potential log loss issues, because high throughput might otherwise exhaust the memory available for the buffer inside of Docker. For more information, see [Configuring Amazon ECS logs for high throughput](firelens-docker-buffer-limit.md).  
Other options you can specify when using `awsfirelens` to route logs depend on the destination. When you export logs to Amazon Data Firehose, you can specify the AWS Region with `region` and a name for the log stream with `delivery_stream`.  
When you export logs to Amazon Kinesis Data Streams, you can specify an AWS Region with `region` and a data stream name with `stream`.  
 When you export logs to Amazon OpenSearch Service, you can specify options like `Name`, `Host` (OpenSearch Service endpoint without protocol), `Port`, `Index`, `Type`, `Aws_auth`, `Aws_region`, `Suppress_Type_Name`, and `tls`.  
When you export logs to Amazon S3, you can specify the bucket using the `bucket` option. You can also specify `region`, `total_file_size`, `upload_timeout`, and `use_put_object` as options.  
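For example, when routing to Amazon Data Firehose through the Fluent Bit `firehose` output plugin, the options might look like the following sketch (the Region and delivery stream name are placeholders):

```
"logConfiguration": {
    "logDriver": "awsfirelens",
    "options": {
        "Name": "firehose",
        "region": "us-west-2",
        "delivery_stream": "my-delivery-stream"
    }
}
```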
This parameter requires version 1.19 of the Docker Remote API or greater on your container instance.  
`secretOptions`  
Type: Object array  
Required: No  
An object that represents the secret to pass to the log configuration. Secrets that are used in log configuration can include an authentication token, certificate, or encryption key. For more information, see [Pass sensitive data to an Amazon ECS container](specifying-sensitive-data.md).    
`name`  
Type: String  
Required: Yes  
The value to set as the environment variable on the container.  
`valueFrom`  
Type: String  
Required: Yes  
The secret to expose to the log configuration of the container.

```
"logConfiguration": {
	"logDriver": "splunk",
	"options": {
		"splunk-url": "https://cloud.splunk.com:8080",
		"splunk-token": "...",
		"tag": "...",
		...
	},
	"secretOptions": [{
		"name": "splunk-token",
		"valueFrom": "/ecs/logconfig/splunkcred"
	}]
}
```

`firelensConfiguration`  
Type: [FirelensConfiguration](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_FirelensConfiguration.html) Object  
Required: No  
The FireLens configuration for the container. This is used to specify and configure a log router for container logs. For more information, see [Send Amazon ECS logs to an AWS service or AWS Partner](using_firelens.md).  

```
{
    "firelensConfiguration": {
        "type": "fluentd",
        "options": {
            "KeyName": ""
        }
    }
}
```  
`options`  
Type: String to string map  
Required: No  
The key/value map of options to use when configuring the log router. This field is optional and can be used to specify a custom configuration file or to add additional metadata, such as the task, task definition, cluster, and container instance details, to the log event. If specified, the syntax to use is `"options":{"enable-ecs-log-metadata":"true|false","config-file-type":"s3|file","config-file-value":"arn:aws:s3:::amzn-s3-demo-bucket/fluent.conf|filepath"}`. For more information, see [Example Amazon ECS task definition: Route logs to FireLens](firelens-taskdef.md).  
`type`  
Type: String  
Required: Yes  
The log router to use. The valid values are `fluentd` or `fluentbit`.

#### Security
<a name="container_definition_security_ec2"></a>

For more information about container security, see [Amazon ECS task and container security best practices](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/security-tasks-containers.html).

`credentialSpecs`  
Type: String array  
Required: No  
A list of ARNs in SSM or Amazon S3 to a credential spec (`CredSpec`) file that configures the container for Active Directory authentication. We recommend that you use this parameter instead of `dockerSecurityOptions`. The maximum number of ARNs is 1.  
There are two formats for each ARN.    
credentialspecdomainless:MyARN  
You use `credentialspecdomainless:MyARN` to provide a `CredSpec` with an additional section for a secret in Secrets Manager. You provide the login credentials to the domain in the secret.  
Each task that runs on any container instance can join different domains.  
You can use this format without joining the container instance to a domain.  
credentialspec:MyARN  
You use `credentialspec:MyARN` to provide a `CredSpec` for a single domain.  
You must join the container instance to the domain before you start any tasks that use this task definition.
In both formats, replace `MyARN` with the ARN in SSM or Amazon S3.  
The `credspec` must provide an ARN in Secrets Manager for a secret containing the username, password, and the domain to connect to. For better security, the instance isn't joined to the domain for domainless authentication. Other applications on the instance can't use the domainless credentials. You can use this parameter to run tasks on the same instance, even if the tasks need to join different domains. For more information, see [Using gMSAs for Windows Containers](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/windows-gmsa.html) and [Using gMSAs for Linux Containers](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/linux-gmsa.html).
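For example, a domainless credential spec file stored in Amazon S3 might be referenced as follows (the bucket and file names are placeholders):

```
"credentialSpecs": [
    "credentialspecdomainless:arn:aws:s3:::amzn-s3-demo-bucket/CredSpec.json"
]
```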

`privileged`  
Type: Boolean  
Required: No  
When this parameter is true, the container is given elevated privileges on the host container instance (similar to the `root` user). We recommend against running containers with `privileged`. In most cases, you can specify the exact privileges that you need by using the specific parameters instead of using `privileged`.  
This parameter maps to `Privileged` in the docker create-container command and the `--privileged` option to docker run.  
This parameter is not supported for Windows containers or tasks using the Fargate launch type.
The default is `false`.  

```
"privileged": true|false
```

`user`  
Type: String  
Required: No  
The user to use inside the container. This parameter maps to `User` in the docker create-container command and the `--user` option to docker run.  
When running tasks that use the `host` network mode, don't run containers using the root user (UID 0). As a security best practice, always use a non-root user.
You can specify the `user` using the following formats. If specifying a UID or GID, you must specify it as a positive integer.  
+ `user`
+ `user:group`
+ `uid`
+ `uid:gid`
+ `user:gid`
+ `uid:group`
This parameter is not supported for Windows containers.

```
"user": "string"
```

`dockerSecurityOptions`  
Type: String array  
Valid values: "no-new-privileges" | "apparmor:PROFILE" | "label:*value*" | "credentialspec:*CredentialSpecFilePath*"  
Required: No  
A list of strings to provide custom configuration for multiple security systems.  
For Linux tasks, this parameter can be used to reference custom labels for SELinux and AppArmor multi-level security systems.  
This parameter can be used to reference a credential spec file that configures a container for Active Directory authentication. For more information, see [Learn how to use gMSAs for EC2 Windows containers for Amazon ECS](windows-gmsa.md) and [Using gMSA for EC2 Linux containers on Amazon ECS](linux-gmsa.md).  
This parameter maps to `SecurityOpt` in the docker create-container command and the `--security-opt` option to docker run.  

```
"dockerSecurityOptions": ["string", ...]
```
The Amazon ECS container agent that runs on a container instance must register with the `ECS_SELINUX_CAPABLE=true` or `ECS_APPARMOR_CAPABLE=true` environment variables before containers that are placed on that instance can use these security options. For more information, see [Amazon ECS container agent configuration](ecs-agent-config.md).

#### Resource limits
<a name="container_definition_limits_ec2"></a>

`ulimits`  
Type: Object array  
Required: No  
A list of `ulimit` values to define for a container. This value overwrites the default resource quota setting for the operating system. This parameter maps to `Ulimits` in the docker create-container command and the `--ulimit` option to docker run.  
This parameter requires version 1.18 of the Docker Remote API or greater on your container instance.  
This parameter is not supported for Windows containers.

```
"ulimits": [
      {
        "name": "core"|"cpu"|"data"|"fsize"|"locks"|"memlock"|"msgqueue"|"nice"|"nofile"|"nproc"|"rss"|"rtprio"|"rttime"|"sigpending"|"stack",
        "softLimit": integer,
        "hardLimit": integer
      }
      ...
    ]
```  
`name`  
Type: String  
Valid values: `"core" | "cpu" | "data" | "fsize" | "locks" | "memlock" | "msgqueue" | "nice" | "nofile" | "nproc" | "rss" | "rtprio" | "rttime" | "sigpending" | "stack"`  
Required: Yes, when `ulimits` are used  
The `type` of the `ulimit`.  
`hardLimit`  
Type: Integer  
Required: Yes, when `ulimits` are used  
The hard limit for the `ulimit` type. The value can be specified in bytes, seconds, or as a count, depending on the `type` of the `ulimit`.  
`softLimit`  
Type: Integer  
Required: Yes, when `ulimits` are used  
The soft limit for the `ulimit` type. The value can be specified in bytes, seconds, or as a count, depending on the `type` of the `ulimit`.

#### Docker labels
<a name="container_definition_labels_ec2"></a>

`dockerLabels`  
Type: String to string map  
Required: No  
A key/value map of labels to add to the container. This parameter maps to `Labels` in the docker create-container command and the `--label` option to docker run.   
This parameter requires version 1.18 of the Docker Remote API or greater on your container instance.  

```
"dockerLabels": {"string": "string"
      ...}
```

### Other container definition parameters
<a name="other_container_definition_params_ec2"></a>

The following container definition parameters can be used when registering task definitions in the Amazon ECS console by using the **Configure via JSON** option. For more information, see [Creating an Amazon ECS task definition using the console](create-task-definition.md).

**Topics**
+ [

#### Linux parameters
](#container_definition_linuxparameters_ec2)
+ [

#### Container dependency
](#container_definition_dependson_ec2)
+ [

#### Container timeouts
](#container_definition_timeout_ec2)
+ [

#### System controls
](#container_definition_systemcontrols_ec2)
+ [

#### Interactive
](#container_definition_interactive_ec2)
+ [

#### Pseudo terminal
](#container_definition_pseudoterminal_ec2)

#### Linux parameters
<a name="container_definition_linuxparameters_ec2"></a>

`linuxParameters`  
Type: [LinuxParameters](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_LinuxParameters.html) object  
Required: No  
Linux-specific options that are applied to the container, such as [KernelCapabilities](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_KernelCapabilities.html).  
This parameter isn't supported for Windows containers.

```
"linuxParameters": {
      "capabilities": {
        "add": ["string", ...],
        "drop": ["string", ...]
        }
      }
```  
`capabilities`  
Type: [KernelCapabilities](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_KernelCapabilities.html) object  
Required: No  
The Linux capabilities for the container that are added to or dropped from the default configuration provided by Docker. For more information about these Linux capabilities, see the [capabilities(7)](http://man7.org/linux/man-pages/man7/capabilities.7.html) Linux manual page.    
`add`  
Type: String array  
Valid values: `"ALL" | "AUDIT_CONTROL" | "AUDIT_READ" | "AUDIT_WRITE" | "BLOCK_SUSPEND" | "CHOWN" | "DAC_OVERRIDE" | "DAC_READ_SEARCH" | "FOWNER" | "FSETID" | "IPC_LOCK" | "IPC_OWNER" | "KILL" | "LEASE" | "LINUX_IMMUTABLE" | "MAC_ADMIN" | "MAC_OVERRIDE" | "MKNOD" | "NET_ADMIN" | "NET_BIND_SERVICE" | "NET_BROADCAST" | "NET_RAW" | "SETFCAP" | "SETGID" | "SETPCAP" | "SETUID" | "SYS_ADMIN" | "SYS_BOOT" | "SYS_CHROOT" | "SYS_MODULE" | "SYS_NICE" | "SYS_PACCT" | "SYS_PTRACE" | "SYS_RAWIO" | "SYS_RESOURCE" | "SYS_TIME" | "SYS_TTY_CONFIG" | "SYSLOG" | "WAKE_ALARM"`  
Required: No  
The Linux capabilities for the container to add to the default configuration provided by Docker. This parameter maps to `CapAdd` in the docker create-container command and the `--cap-add` option to docker run.  
`drop`  
Type: String array  
Valid values: `"ALL" | "AUDIT_CONTROL" | "AUDIT_WRITE" | "BLOCK_SUSPEND" | "CHOWN" | "DAC_OVERRIDE" | "DAC_READ_SEARCH" | "FOWNER" | "FSETID" | "IPC_LOCK" | "IPC_OWNER" | "KILL" | "LEASE" | "LINUX_IMMUTABLE" | "MAC_ADMIN" | "MAC_OVERRIDE" | "MKNOD" | "NET_ADMIN" | "NET_BIND_SERVICE" | "NET_BROADCAST" | "NET_RAW" | "SETFCAP" | "SETGID" | "SETPCAP" | "SETUID" | "SYS_ADMIN" | "SYS_BOOT" | "SYS_CHROOT" | "SYS_MODULE" | "SYS_NICE" | "SYS_PACCT" | "SYS_PTRACE" | "SYS_RAWIO" | "SYS_RESOURCE" | "SYS_TIME" | "SYS_TTY_CONFIG" | "SYSLOG" | "WAKE_ALARM"`  
Required: No  
The Linux capabilities for the container to remove from the default configuration that's provided by Docker. This parameter maps to `CapDrop` in the docker create-container command and the `--cap-drop` option to docker run.  
`devices`  
Any host devices to expose to the container. This parameter maps to `Devices` in the docker create-container command and the `--device` option to docker run.  
Type: Array of [Device](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_Device.html) objects  
Required: No    
`hostPath`  
The path for the device on the host container instance.  
Type: String  
Required: Yes  
`containerPath`  
The path inside the container to expose the host device at.  
Type: String  
Required: No  
`permissions`  
The explicit permissions to provide to the container for the device. By default, the container has permissions for `read`, `write`, and `mknod` on the device.  
Type: Array of strings  
Valid Values: `read` | `write` | `mknod`  
`initProcessEnabled`  
Type: Boolean  
Required: No  
Run an `init` process inside the container that forwards signals and reaps processes. This parameter maps to the `--init` option to docker run.  
This parameter requires version 1.25 of the Docker Remote API or greater on your container instance.  
`maxSwap`  
Type: Integer  
Required: No  
The total amount of swap memory (in MiB) a container can use. This parameter is translated to the `--memory-swap` option to docker run where the value is the sum of the container memory plus the `maxSwap` value.  
If a `maxSwap` value of `0` is specified, the container doesn't use swap. Accepted values are `0` or any positive integer. If the `maxSwap` parameter is omitted, the container uses the swap configuration for the container instance that it's running on. A `maxSwap` value must be set for the `swappiness` parameter to be used.  
`sharedMemorySize`  
The value for the size (in MiB) of the `/dev/shm` volume. This parameter maps to the `--shm-size` option to docker run.  
Type: Integer  
`swappiness`  
Type: Integer  
Required: No  
You can use this parameter to tune a container's memory swappiness behavior. A `swappiness` value of `0` prevents swapping from happening unless required. A `swappiness` value of `100` causes pages to be swapped frequently. Accepted values are whole numbers between `0` and `100`. If you don't specify a value, the default value of `60` is used. Moreover, if you don't specify a value for `maxSwap`, then this parameter is ignored. This parameter maps to the `--memory-swappiness` option to docker run.  
If you're using tasks on Amazon Linux 2023, the `swappiness` parameter isn't supported.  
`tmpfs`  
The container path, mount options, and maximum size (in MiB) of the tmpfs mount. This parameter maps to the `--tmpfs` option to docker run.  
Type: Array of [Tmpfs](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_Tmpfs.html) objects  
Required: No    
`containerPath`  
The absolute file path where the tmpfs volume is to be mounted.  
Type: String  
Required: Yes  
`mountOptions`  
The list of tmpfs volume mount options.  
Type: Array of strings  
Required: No  
Valid Values: `"defaults" | "ro" | "rw" | "suid" | "nosuid" | "dev" | "nodev" | "exec" | "noexec" | "sync" | "async" | "dirsync" | "remount" | "mand" | "nomand" | "atime" | "noatime" | "diratime" | "nodiratime" | "bind" | "rbind" | "unbindable" | "runbindable" | "private" | "rprivate" | "shared" | "rshared" | "slave" | "rslave" | "relatime" | "norelatime" | "strictatime" | "nostrictatime" | "mode" | "uid" | "gid" | "nr_inodes" | "nr_blocks" | "mpol"`  
`size`  
The maximum size (in MiB) of the tmpfs volume.  
Type: Integer  
Required: Yes
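
The Linux parameters above can be combined in a single container definition. In the following sketch, the device path, mount options, and sizes are illustrative only:

```json
"linuxParameters": {
    "initProcessEnabled": true,
    "sharedMemorySize": 64,
    "maxSwap": 512,
    "swappiness": 30,
    "devices": [
        {
            "hostPath": "/dev/fuse",
            "containerPath": "/dev/fuse",
            "permissions": ["read", "write"]
        }
    ],
    "tmpfs": [
        {
            "containerPath": "/tmp/scratch",
            "mountOptions": ["rw", "noexec"],
            "size": 128
        }
    ]
}
```

Because `maxSwap` is set, the `swappiness` value of `30` takes effect; if `maxSwap` were omitted, `swappiness` would be ignored.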

#### Container dependency
<a name="container_definition_dependson_ec2"></a>

`dependsOn`  
Type: Array of [ContainerDependency](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_ContainerDependency.html) objects  
Required: No  
The dependencies defined for container startup and shutdown. A container can contain multiple dependencies. When a dependency is defined for container startup, it is reversed for container shutdown. For an example, see [Container dependency](example_task_definitions.md#example_task_definition-containerdependency).  
If a container doesn't meet a dependency constraint or times out before meeting the constraint, Amazon ECS doesn't progress dependent containers to their next state.
The instances require at least version `1.26.0` of the container agent to enable container dependencies. However, we recommend using the latest container agent version. For information about checking your agent version and updating to the latest version, see [Updating the Amazon ECS container agent](ecs-agent-update.md). If you're using an Amazon ECS-optimized Amazon Linux AMI, your instance needs at least version `1.26.0-1` of the `ecs-init` package. If your container instances are launched from version `20190301` or later, they contain the required versions of the container agent and `ecs-init`. For more information, see [Amazon ECS-optimized Linux AMIs](ecs-optimized_AMI.md).  

```
"dependsOn": [
    {
        "containerName": "string",
        "condition": "string"
    }
]
```  
`containerName`  
Type: String  
Required: Yes  
The container name that must meet the specified condition.  
`condition`  
Type: String  
Required: Yes  
The dependency condition of the container. The following are the available conditions and their behavior:  
+ `START` – This condition emulates the behavior of links and volumes today. The condition validates that a dependent container is started before permitting other containers to start.
+ `COMPLETE` – This condition validates that a dependent container runs to completion (exits) before permitting other containers to start. This can be useful for non-essential containers that run a script and then exit. This condition can't be set on an essential container.
+ `SUCCESS` – This condition is the same as `COMPLETE`, but it also requires that the container exits with a `zero` status. This condition can't be set on an essential container.
+ `HEALTHY` – This condition validates that the dependent container passes its container health check before permitting other containers to start. This requires that the dependent container has health checks configured in the task definition. This condition is confirmed only at task startup.
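
For example, the following snippet starts a container only after a hypothetical `database` container passes its health check and a hypothetical `init-config` container exits with a zero status:

```json
"dependsOn": [
    {
        "containerName": "database",
        "condition": "HEALTHY"
    },
    {
        "containerName": "init-config",
        "condition": "SUCCESS"
    }
]
```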

#### Container timeouts
<a name="container_definition_timeout_ec2"></a>

`startTimeout`  
Type: Integer  
Required: No  
Example values: `120`  
Time duration (in seconds) to wait before giving up on resolving dependencies for a container.  
For example, you specify two containers in a task definition with `containerA` having a dependency on `containerB` reaching a `COMPLETE`, `SUCCESS`, or `HEALTHY` status. If a `startTimeout` value is specified for `containerB` and it doesn't reach the desired status within that time, then `containerA` doesn't start.  
If a container doesn't meet a dependency constraint or times out before meeting the constraint, Amazon ECS doesn't progress dependent containers to their next state.
The maximum value is 120 seconds.

`stopTimeout`  
Type: Integer  
Required: No  
Example values: `120`  
Time duration (in seconds) to wait before the container is forcefully killed if it doesn't exit normally on its own.  
If the `stopTimeout` parameter isn't specified, the value set for the Amazon ECS container agent configuration variable `ECS_CONTAINER_STOP_TIMEOUT` is used. If neither the `stopTimeout` parameter nor the `ECS_CONTAINER_STOP_TIMEOUT` agent configuration variable is set, the default value of 30 seconds is used for both Linux and Windows containers. Container instances require at least version 1.26.0 of the container agent to enable a container stop timeout value. However, we recommend using the latest container agent version. For information about how to check your agent version and update to the latest version, see [Updating the Amazon ECS container agent](ecs-agent-update.md). If you're using an Amazon ECS-optimized Amazon Linux AMI, your instance needs at least version 1.26.0-1 of the `ecs-init` package. If your container instances are launched from version `20190301` or later, they contain the required versions of the container agent and `ecs-init`. For more information, see [Amazon ECS-optimized Linux AMIs](ecs-optimized_AMI.md).
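
The `startTimeout` scenario described above can be sketched as follows. The container names match the example in the text, and other required container parameters are omitted for brevity:

```json
"containerDefinitions": [
    {
        "name": "containerA",
        "dependsOn": [
            {
                "containerName": "containerB",
                "condition": "HEALTHY"
            }
        ]
    },
    {
        "name": "containerB",
        "startTimeout": 120,
        "stopTimeout": 60
    }
]
```

If `containerB` doesn't reach a `HEALTHY` status within 120 seconds, `containerA` doesn't start. When the task stops, `containerB` is given 60 seconds to exit before it's forcefully killed.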

#### System controls
<a name="container_definition_systemcontrols_ec2"></a>

`systemControls`  
Type: [SystemControl](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_SystemControl.html) object  
Required: No  
A list of namespaced kernel parameters to set in the container. This parameter maps to `Sysctls` in the docker create-container command and the `--sysctl` option to docker run. For example, you can configure the `net.ipv4.tcp_keepalive_time` setting to maintain longer-lived connections.  
We don't recommend that you specify network-related `systemControls` parameters for multiple containers in a single task that also uses either the `awsvpc` or `host` network mode. Doing this has the following disadvantages:  
+ For tasks that use the `awsvpc` network mode, if you set `systemControls` for any container, it applies to all containers in the task. If you set different `systemControls` for multiple containers in a single task, the container that's started last determines which `systemControls` take effect.
+ For tasks that use the `host` network mode, the network namespace `systemControls` aren't supported.
If you're setting an IPC resource namespace to use for the containers in the task, the following conditions apply to your system controls. For more information, see [IPC mode](task_definition_parameters.md#task_definition_ipcmode).  
+ For tasks that use the `host` IPC mode, IPC namespace `systemControls` aren't supported.
+ For tasks that use the `task` IPC mode, IPC namespace `systemControls` values apply to all containers within a task.
This parameter is not supported for Windows containers.

```
"systemControls": [
    {
         "namespace":"string",
         "value":"string"
    }
]
```  
`namespace`  
Type: String  
Required: No  
The namespace kernel parameter to set a `value` for.  
Valid IPC namespace values: `"kernel.msgmax" | "kernel.msgmnb" | "kernel.msgmni" | "kernel.sem" | "kernel.shmall" | "kernel.shmmax" | "kernel.shmmni" | "kernel.shm_rmid_forced"`, and `Sysctls` that start with `"fs.mqueue.*"`  
Valid network namespace values: `Sysctls` that start with `"net.*"`  
`value`  
Type: String  
Required: No  
The value for the namespace kernel parameter that's specified in `namespace`.
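
For example, to set the `net.ipv4.tcp_keepalive_time` value mentioned above (the value of `500` seconds is illustrative):

```json
"systemControls": [
    {
        "namespace": "net.ipv4.tcp_keepalive_time",
        "value": "500"
    }
]
```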

#### Interactive
<a name="container_definition_interactive_ec2"></a>

`interactive`  
Type: Boolean  
Required: No  
When this parameter is `true`, you can deploy containerized applications that require `stdin` or a `tty` to be allocated. This parameter maps to `OpenStdin` in the docker create-container command and the `--interactive` option to docker run.  
The default is `false`.

#### Pseudo terminal
<a name="container_definition_pseudoterminal_ec2"></a>

`pseudoTerminal`  
Type: Boolean  
Required: No  
When this parameter is `true`, a TTY is allocated. This parameter maps to `Tty` in the docker create-container command and the `--tty` option to docker run.  
The default is `false`.
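
Setting both `interactive` and `pseudoTerminal` to `true` is the task definition equivalent of the `-it` (`--interactive --tty`) options to docker run:

```json
"interactive": true,
"pseudoTerminal": true
```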

## Elastic Inference accelerator name (deprecated)
<a name="elastic-Inference-accelerator_ec2"></a>

The Elastic Inference accelerator resource requirement for your task definition. 

**Note**  
Amazon Elastic Inference (EI) is no longer available to customers.

The following parameters are allowed in a task definition:

`deviceName`  
Type: String  
Required: Yes  
The Elastic Inference accelerator device name. The `deviceName` must also be referenced in a container definition. For more information, see [Elastic Inference accelerator](task_definition_parameters.md#ContainerDefinition-elastic-inference).

`deviceType`  
Type: String  
Required: Yes  
The Elastic Inference accelerator to use.

## Task placement constraints
<a name="constraints_ec2"></a>

When you register a task definition, you can provide task placement constraints that customize how Amazon ECS places tasks.

You can use constraints to place tasks based on Availability Zone, instance type, or custom attributes. For more information, see [Define which container instances Amazon ECS uses for tasks](task-placement-constraints.md).

The following parameters are allowed in a container definition:

`expression`  
Type: String  
Required: No  
A cluster query language expression to apply to the constraint. For more information, see [Create expressions to define container instances for Amazon ECS tasks](cluster-query-language.md).

`type`  
Type: String  
Required: Yes  
The type of constraint. Use `memberOf` to restrict the selection to a group of valid candidates.
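
For example, the following constraint restricts task placement to T2 instance types. The expression uses the cluster query language syntax:

```json
"placementConstraints": [
    {
        "type": "memberOf",
        "expression": "attribute:ecs.instance-type =~ t2.*"
    }
]
```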

## Proxy configuration
<a name="proxyConfiguration_ec2"></a>

`proxyConfiguration`  
Type: [ProxyConfiguration](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_ProxyConfiguration.html) object  
Required: No  
The configuration details for the App Mesh proxy.  
For tasks that use EC2, the container instances require at least version 1.26.0 of the container agent and at least version 1.26.0-1 of the `ecs-init` package to enable a proxy configuration. If your container instances are launched from the Amazon ECS-optimized AMI version `20190301` or later, then they contain the required versions of the container agent and `ecs-init`. For more information, see [Amazon ECS-optimized Linux AMIs](ecs-optimized_AMI.md).  
This parameter is not supported for Windows containers.

```
"proxyConfiguration": {
    "type": "APPMESH",
    "containerName": "string",
    "properties": [
        {
           "name": "string",
           "value": "string"
        }
    ]
}
```  
`type`  
Type: String  
Valid values: `APPMESH`  
Required: No  
The proxy type. The only supported value is `APPMESH`.  
`containerName`  
Type: String  
Required: Yes  
The name of the container that serves as the App Mesh proxy.  
`properties`  
Type: Array of [KeyValuePair](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_KeyValuePair.html) objects  
Required: No  
The set of network configuration parameters to provide the Container Network Interface (CNI) plugin, specified as key-value pairs.  
+ `IgnoredUID` – (Required) The user ID (UID) of the proxy container as defined by the `user` parameter in a container definition. This is used to ensure the proxy ignores its own traffic. If `IgnoredGID` is specified, this field can be empty.
+ `IgnoredGID` – (Required) The group ID (GID) of the proxy container as defined by the `user` parameter in a container definition. This is used to ensure the proxy ignores its own traffic. If `IgnoredUID` is specified, this field can be empty.
+ `AppPorts` – (Required) The list of ports that the application uses. Network traffic to these ports is forwarded to the `ProxyIngressPort` and `ProxyEgressPort`.
+ `ProxyIngressPort` – (Required) Specifies the port that incoming traffic to the `AppPorts` is directed to.
+ `ProxyEgressPort` – (Required) Specifies the port that outgoing traffic from the `AppPorts` is directed to.
+ `EgressIgnoredPorts` – (Required) The outbound traffic going to these specified ports is ignored and not redirected to the `ProxyEgressPort`. It can be an empty list.
+ `EgressIgnoredIPs` – (Required) The outbound traffic going to these specified IP addresses is ignored and not redirected to the `ProxyEgressPort`. It can be an empty list.  
`name`  
Type: String  
Required: No  
The name of the key-value pair.  
`value`  
Type: String  
Required: No  
The value of the key-value pair.
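
The following shows what a populated App Mesh proxy configuration might look like. The container name and values are illustrative; App Mesh Envoy setups commonly use UID `1337` and ports `15000`/`15001`, but confirm the values for your own mesh:

```json
"proxyConfiguration": {
    "type": "APPMESH",
    "containerName": "envoy",
    "properties": [
        { "name": "IgnoredUID", "value": "1337" },
        { "name": "AppPorts", "value": "8080" },
        { "name": "ProxyIngressPort", "value": "15000" },
        { "name": "ProxyEgressPort", "value": "15001" },
        { "name": "EgressIgnoredPorts", "value": "" },
        { "name": "EgressIgnoredIPs", "value": "169.254.170.2,169.254.169.254" }
    ]
}
```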

## Volumes
<a name="volumes_ec2"></a>

When you register a task definition, you can optionally specify a list of volumes to be passed to the Docker daemon on a container instance, which then becomes available for access by other containers on the same container instance.

The following are the types of data volumes that can be used:
+ Amazon EBS volumes — Provides cost-effective, durable, high-performance block storage for data-intensive containerized workloads. You can attach one Amazon EBS volume per Amazon ECS task when running a standalone task, or when creating or updating a service. Amazon EBS volumes are supported for Linux tasks. For more information, see [Use Amazon EBS volumes with Amazon ECS](ebs-volumes.md).
+ Amazon EFS volumes — Provides simple, scalable, and persistent file storage for use with your Amazon ECS tasks. With Amazon EFS, storage capacity is elastic. It grows and shrinks automatically as you add and remove files. Your applications can have the storage that they need, when they need it. Amazon EFS volumes are supported. For more information, see [Use Amazon EFS volumes with Amazon ECS](efs-volumes.md).
+ FSx for Windows File Server volumes — Provides fully managed Microsoft Windows file servers. These file servers are backed by a Windows file system. When using FSx for Windows File Server together with Amazon ECS, you can provision your Windows tasks with persistent, distributed, shared, and static file storage. For more information, see [Use FSx for Windows File Server volumes with Amazon ECS](wfsx-volumes.md).

  Windows containers on Fargate do not support this option.
+ Docker volumes – A Docker-managed volume that is created under `/var/lib/docker/volumes` on the host Amazon EC2 instance. Docker volume drivers (also referred to as plugins) are used to integrate the volumes with external storage systems, such as Amazon EBS. The built-in `local` volume driver or a third-party volume driver can be used. Docker volumes are supported only when running tasks on Amazon EC2 instances. Windows containers support only the use of the `local` driver. To use Docker volumes, specify a `dockerVolumeConfiguration` in your task definition.
+ Bind mounts – A file or directory on the host machine that is mounted into a container. Bind mount host volumes are supported. To use bind mount host volumes, specify a `host` and optional `sourcePath` value in your task definition.

For more information, see [Storage options for Amazon ECS tasks](using_data_volumes.md).

The following parameters are allowed in a container definition.

`name`  
Type: String  
Required: No  
The name of the volume. Up to 255 letters (uppercase and lowercase), numbers, hyphens (`-`), and underscores (`_`) are allowed. This name is referenced in the `sourceVolume` parameter of the container definition `mountPoints` object.

`host`  
Required: No  
The `host` parameter ties the lifecycle of the bind mount to the host Amazon EC2 instance, rather than the task, and determines where it is stored. If the `host` parameter is empty, then the Docker daemon assigns a host path for your data volume, but the data isn't guaranteed to persist after the containers that are associated with it stop running.  
Windows containers can mount whole directories on the same drive as `$env:ProgramData`.  
The `sourcePath` parameter is supported only when using tasks that are hosted on Amazon EC2 instances or Amazon ECS Managed Instances.  
`sourcePath`  
Type: String  
Required: No  
When the `host` parameter is used, specify a `sourcePath` to declare the path on the host Amazon EC2 instance that is presented to the container. If this parameter is empty, then the Docker daemon assigns a host path for you. If the `host` parameter contains a `sourcePath` file location, then the data volume persists at the specified location on the host Amazon EC2 instance until you delete it manually. If the `sourcePath` value does not exist on the host Amazon EC2 instance, the Docker daemon creates it. If the location does exist, the contents of the source path folder are exported.

`configuredAtLaunch`  
Type: Boolean  
Required: No  
Specifies whether a volume is configurable at launch. When set to `true`, you can configure the volume when running a standalone task, or when creating or updating a service. When set to `true`, you won't be able to provide another volume configuration in the task definition. This parameter must be set to `true` to configure an Amazon EBS volume for attachment to a task. Setting `configuredAtLaunch` to `true` and deferring volume configuration to the launch phase allows you to create task definitions that aren't constrained to a volume type or to specific volume settings. Doing this makes your task definition reusable across different execution environments. For more information, see [Amazon EBS volumes](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ebs-volumes.html).
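
For example, a volume that defers its Amazon EBS configuration to launch time needs only a name and the `configuredAtLaunch` flag in the task definition (the volume name is illustrative):

```json
"volumes": [
    {
        "name": "ebs-volume",
        "configuredAtLaunch": true
    }
]
```

The actual volume settings, such as size and volume type, are then supplied when you run the task or create the service.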

`dockerVolumeConfiguration`  
Type: [DockerVolumeConfiguration](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_DockerVolumeConfiguration.html) Object  
Required: No  
This parameter is specified when using Docker volumes. Docker volumes are supported only when running tasks on EC2 instances. Windows containers support only the use of the `local` driver. To use bind mounts, specify a `host` instead.    
`scope`  
Type: String  
Valid Values: `task` | `shared`  
Required: No  
The scope for the Docker volume, which determines its lifecycle. Docker volumes that are scoped to a `task` are automatically provisioned when the task starts and destroyed when the task stops. Docker volumes that are scoped as `shared` persist after the task stops.  
`autoprovision`  
Type: Boolean  
Default value: `false`  
Required: No  
If this value is `true`, the Docker volume is created if it doesn't already exist. This field is used only if the `scope` is `shared`. If the `scope` is `task`, then this parameter must be omitted.  
`driver`  
Type: String  
Required: No  
The Docker volume driver to use. The driver value must match the driver name provided by Docker because this name is used for task placement. If the driver was installed by using the Docker plugin CLI, use `docker plugin ls` to retrieve the driver name from your container instance. If the driver was installed by using another method, use Docker plugin discovery to retrieve the driver name.  
`driverOpts`  
Type: String  
Required: No  
A map of Docker driver-specific options to pass through. This parameter maps to `DriverOpts` in the Create a volume section of Docker.  
`labels`  
Type: String  
Required: No  
Custom metadata to add to your Docker volume.
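
For example, a shared Docker volume that uses the built-in `local` driver might be defined as follows (the volume name is illustrative):

```json
"volumes": [
    {
        "name": "shared-data",
        "dockerVolumeConfiguration": {
            "scope": "shared",
            "autoprovision": true,
            "driver": "local"
        }
    }
]
```

Because the `scope` is `shared`, the volume persists after the task stops, and `autoprovision` creates it if it doesn't already exist.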

`efsVolumeConfiguration`  
Type: [EFSVolumeConfiguration](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_EFSVolumeConfiguration.html) Object  
Required: No  
This parameter is specified when using Amazon EFS volumes.    
`fileSystemId`  
Type: String  
Required: Yes  
The Amazon EFS file system ID to use.  
`rootDirectory`  
Type: String  
Required: No  
The directory within the Amazon EFS file system to mount as the root directory inside the host. If this parameter is omitted, the root of the Amazon EFS volume will be used. Specifying `/` has the same effect as omitting this parameter.  
If an EFS access point is specified in the `authorizationConfig`, the root directory parameter must either be omitted or set to `/`, which will enforce the path set on the EFS access point.  
`transitEncryption`  
Type: String  
Valid values: `ENABLED` | `DISABLED`  
Required: No  
Specifies whether to enable encryption for Amazon EFS data in transit between the Amazon ECS host and the Amazon EFS server. If Amazon EFS IAM authorization is used, transit encryption must be enabled. If this parameter is omitted, the default value of `DISABLED` is used. For more information, see [Encrypting Data in Transit](https://docs.aws.amazon.com/efs/latest/ug/encryption-in-transit.html) in the *Amazon Elastic File System User Guide*.  
`transitEncryptionPort`  
Type: Integer  
Required: No  
The port to use when sending encrypted data between the Amazon ECS host and the Amazon EFS server. If you don't specify a transit encryption port, the task will use the port selection strategy that the Amazon EFS mount helper uses. For more information, see [EFS Mount Helper](https://docs.aws.amazon.com/efs/latest/ug/efs-mount-helper.html) in the *Amazon Elastic File System User Guide*.  
`authorizationConfig`  
Type: [EFSAuthorizationConfig](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_EFSAuthorizationConfig.html) Object  
Required: No  
The authorization configuration details for the Amazon EFS file system.    
`accessPointId`  
Type: String  
Required: No  
The access point ID to use. If an access point is specified, the root directory value in the `efsVolumeConfiguration` must either be omitted or set to `/`, which will enforce the path set on the EFS access point. If an access point is used, transit encryption must be enabled in the `EFSVolumeConfiguration`. For more information, see [Working with Amazon EFS Access Points](https://docs.aws.amazon.com/efs/latest/ug/efs-access-points.html) in the *Amazon Elastic File System User Guide*.  
`iam`  
Type: String  
Valid values: `ENABLED` | `DISABLED`  
Required: No  
Specifies whether to use the Amazon ECS task IAM role that's defined in a task definition when mounting the Amazon EFS file system. If enabled, transit encryption must be enabled in the `EFSVolumeConfiguration`. If this parameter is omitted, the default value of `DISABLED` is used. For more information, see [IAM Roles for Tasks](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html).
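
For example, an Amazon EFS volume that uses an access point with IAM authorization might be defined as follows. The file system and access point IDs are placeholders:

```json
"volumes": [
    {
        "name": "efs-storage",
        "efsVolumeConfiguration": {
            "fileSystemId": "fs-1234567890abcdef0",
            "transitEncryption": "ENABLED",
            "authorizationConfig": {
                "accessPointId": "fsap-1234567890abcdef0",
                "iam": "ENABLED"
            }
        }
    }
]
```

Note that `transitEncryption` is `ENABLED`, which is required when an access point or IAM authorization is used, and that `rootDirectory` is omitted because the access point enforces the path.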

`FSxWindowsFileServerVolumeConfiguration`  
Type: [FSxWindowsFileServerVolumeConfiguration](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_FSxWindowsFileServerVolumeConfiguration.html) Object  
Required: Yes  
This parameter is specified when you're using an [Amazon FSx for Windows File Server](https://docs.aws.amazon.com/fsx/latest/WindowsGuide/what-is.html) file system for task storage.    
`fileSystemId`  
Type: String  
Required: Yes  
The FSx for Windows File Server file system ID to use.  
`rootDirectory`  
Type: String  
Required: Yes  
The directory within the FSx for Windows File Server file system to mount as the root directory inside the host.  
`authorizationConfig`  
The authorization configuration details for the FSx for Windows File Server file system.    
`credentialsParameter`  
Type: String  
Required: Yes  
The authorization credential options.  

The following options are supported:
+ Amazon Resource Name (ARN) of an [AWS Secrets Manager](https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html) secret.
+ ARN of an [AWS Systems Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/integration-ps-secretsmanager.html) parameter.  
`domain`  
Type: String  
Required: Yes  
A fully qualified domain name hosted by an [AWS Directory Service for Microsoft Active Directory](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/directory_microsoft_ad.html) (AWS Managed Microsoft AD) directory or a self-hosted EC2 Active Directory.
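
For example, an FSx for Windows File Server volume might be defined as follows. The file system ID, secret ARN, and domain name are placeholders:

```json
"volumes": [
    {
        "name": "fsx-storage",
        "fsxWindowsFileServerVolumeConfiguration": {
            "fileSystemId": "fs-1234567890abcdef0",
            "rootDirectory": "share",
            "authorizationConfig": {
                "credentialsParameter": "arn:aws:secretsmanager:us-east-1:111122223333:secret:fsx-credentials",
                "domain": "corp.example.com"
            }
        }
    }
]
```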

## Tags
<a name="tags_ec2"></a>

When you register a task definition, you can optionally specify metadata tags that are applied to the task definition. Tags help you categorize and organize your task definition. Each tag consists of a key and an optional value. You define both of them. For more information, see [Tagging Amazon ECS resources](ecs-using-tags.md).

**Important**  
Don't add personally identifiable information or other confidential or sensitive information in tags. Tags are accessible to many AWS services, including billing. Tags aren't intended to be used for private or sensitive data.

The following parameters are allowed in a tag object.

`key`  
Type: String  
Required: No  
One part of a key-value pair that makes up a tag. A key is a general label that acts like a category for more specific tag values.

`value`  
Type: String  
Required: No  
The optional part of a key-value pair that makes up a tag. A value acts as a descriptor within a tag category (key).
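
For example, the following tags might categorize a task definition by environment and team. The keys and values are your own:

```json
"tags": [
    {
        "key": "environment",
        "value": "production"
    },
    {
        "key": "team",
        "value": "web"
    }
]
```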

## Other task definition parameters
<a name="other_task_definition_params_ec2"></a>

The following task definition parameters can be used when registering task definitions in the Amazon ECS console by using the **Configure via JSON** option. For more information, see [Creating an Amazon ECS task definition using the console](create-task-definition.md).

**Topics**
+ [IPC mode](#task_definition_ipcmode_ec2)
+ [PID mode](#task_definition_pidmode_ec2)
+ [Fault injection](#task_definition_faultInjection_ec2)

### IPC mode
<a name="task_definition_ipcmode_ec2"></a>

`ipcMode`  
Type: String  
Required: No  
The IPC resource namespace to use for the containers in the task. The valid values are `host`, `task`, or `none`. If `host` is specified, then all the containers that are within the tasks that specified the `host` IPC mode on the same container instance share the same IPC resources with the host Amazon EC2 instance. If `task` is specified, all the containers that are within the specified task share the same IPC resources. If `none` is specified, then IPC resources within the containers of a task are private and not shared with other containers in a task or on the container instance. If no value is specified, then the IPC resource namespace sharing depends on the Docker daemon setting on the container instance.  
If the `host` IPC mode is used, there's a heightened risk of undesired IPC namespace exposure.  
If you're setting namespaced kernel parameters by using `systemControls` for the containers in the task, the following applies to your IPC resource namespace.   
+ For tasks that use the `host` IPC mode, `systemControls` that are related to the IPC namespace aren't supported.
+ For tasks that use the `task` IPC mode, `systemControls` that are related to the IPC namespace apply to all containers within a task.
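To illustrate where `ipcMode` sits in a task definition, the following is a minimal sketch in Python. The family and image names are hypothetical; only the `ipcMode` field and its `task` value come from this section.

```python
import json

# Hypothetical sketch: two containers that share the task-level IPC
# namespace ("ipcMode": "task"). Family and image names are made up.
task_definition = {
    "family": "shared-ipc-demo",
    "ipcMode": "task",                   # containers in this task share IPC resources
    "requiresCompatibilities": ["EC2"],  # ipcMode applies to tasks on EC2 container instances
    "containerDefinitions": [
        {"name": "producer", "image": "my-producer:latest", "essential": True},
        {"name": "consumer", "image": "my-consumer:latest", "essential": False},
    ],
}

# Serialize for the console JSON editor or the --cli-input-json option.
print(json.dumps(task_definition, indent=2))
```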

### PID mode
<a name="task_definition_pidmode_ec2"></a>

`pidMode`  
Type: String  
Valid Values: `host` \| `task`  
Required: No  
The process namespace to use for the containers in the task. The valid values are `host` or `task`. For example, monitoring sidecars might need `pidMode` to access information about other containers running in the same task.  
If `host` is specified, all containers within the tasks that specified the `host` PID mode on the same container instance share the same process namespace with the host Amazon EC2 instance.  
If `task` is specified, all containers within the specified task share the same process namespace.  
If no value is specified, the default is a private namespace for each container.   
If the `host` PID mode is used, there's a heightened risk of undesired process namespace exposure.

**Note**  
This parameter is not supported for Windows containers.
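The monitoring-sidecar use case mentioned above can be sketched as follows. All names here are hypothetical; only the `pidMode` field and its `task` value come from this section.

```python
import json

# Hypothetical sketch: an application container plus a monitoring sidecar
# that shares the task-level process namespace ("pidMode": "task") so the
# sidecar can observe the app's processes. All names are made up.
task_definition = {
    "family": "shared-pid-demo",
    "pidMode": "task",  # all containers in the task share one process namespace
    "containerDefinitions": [
        {"name": "app", "image": "my-app:latest", "essential": True},
        {"name": "monitor", "image": "my-monitor:latest", "essential": False},
    ],
}

print(json.dumps(task_definition, indent=2))
```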

### Fault injection
<a name="task_definition_faultInjection_ec2"></a>

`enableFaultInjection`  
Type: Boolean  
Valid Values: `true` \| `false`  
Required: No  
If this parameter is set to `true`, Amazon ECS accepts fault injection requests from the task's containers. By default, this parameter is set to `false`.
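A minimal sketch of a registration payload that opts in to fault injection follows. Only the `enableFaultInjection` field comes from this section; the family and image names are hypothetical.

```python
import json

# Hypothetical sketch: a registration payload that opts in to fault
# injection. The family and image names are made up.
task_definition = {
    "family": "fault-injection-demo",
    "enableFaultInjection": True,  # accept fault injection requests from the task's containers
    "containerDefinitions": [
        {"name": "app", "image": "my-app:latest", "essential": True},
    ],
}

print(json.dumps(task_definition, indent=2))
```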

# Amazon ECS task definition template
<a name="task-definition-template"></a>

An empty task definition template is shown as follows. You can use this template to create your task definition, which can then be pasted into the console JSON input area or saved to a file and used with the AWS CLI `--cli-input-json` option. For more information, see [Amazon ECS task definition parameters for Fargate](task_definition_parameters.md).

**EC2 template**

```
{
  "family": "",
  "taskRoleArn": "",
  "executionRoleArn": "",
  "networkMode": "none",
  "containerDefinitions": [
    {
      "name": "",
      "image": "",
      "repositoryCredentials": {
        "credentialsParameter": ""
      },
      "cpu": 0,
      "memory": 0,
      "memoryReservation": 0,
      "links": [""],
      "portMappings": [
        {
          "containerPort": 0,
          "hostPort": 0,
          "protocol": "tcp"
        }
      ],
      "restartPolicy": {
        "enabled": true,
        "ignoredExitCodes": [0],
        "restartAttemptPeriod": 180
      },
      "essential": true,
      "entryPoint": [""],
      "command": [""],
      "environment": [
        {
          "name": "",
          "value": ""
        }
      ],
      "environmentFiles": [
        {
          "value": "",
          "type": "s3"
        }
      ],
      "mountPoints": [
        {
          "sourceVolume": "",
          "containerPath": "",
          "readOnly": true
        }
      ],
      "volumesFrom": [
        {
          "sourceContainer": "",
          "readOnly": true
        }
      ],
      "linuxParameters": {
        "capabilities": {
          "add": [""],
          "drop": [""]
        },
        "devices": [
          {
            "hostPath": "",
            "containerPath": "",
            "permissions": ["read"]
          }
        ],
        "initProcessEnabled": true,
        "sharedMemorySize": 0,
        "tmpfs": [
          {
            "containerPath": "",
            "size": 0,
            "mountOptions": [""]
          }
        ],
        "maxSwap": 0,
        "swappiness": 0
      },
      "secrets": [
        {
          "name": "",
          "valueFrom": ""
        }
      ],
      "dependsOn": [
        {
          "containerName": "",
          "condition": "COMPLETE"
        }
      ],
      "startTimeout": 0,
      "stopTimeout": 0,
      "hostname": "",
      "user": "",
      "workingDirectory": "",
      "disableNetworking": true,
      "privileged": true,
      "readonlyRootFilesystem": true,
      "dnsServers": [""],
      "dnsSearchDomains": [""],
      "extraHosts": [
        {
          "hostname": "",
          "ipAddress": ""
        }
      ],
      "dockerSecurityOptions": [""],
      "interactive": true,
      "pseudoTerminal": true,
      "dockerLabels": {
        "KeyName": ""
      },
      "ulimits": [
        {
          "name": "nofile",
          "softLimit": 0,
          "hardLimit": 0
        }
      ],
      "logConfiguration": {
        "logDriver": "splunk",
        "options": {
          "KeyName": ""
        },
        "secretOptions": [
          {
            "name": "",
            "valueFrom": ""
          }
        ]
      },
      "healthCheck": {
        "command": [""],
        "interval": 0,
        "timeout": 0,
        "retries": 0,
        "startPeriod": 0
      },
      "systemControls": [
        {
          "namespace": "",
          "value": ""
        }
      ],
      "resourceRequirements": [
        {
          "value": "",
          "type": "InferenceAccelerator"
        }
      ],
      "firelensConfiguration": {
        "type": "fluentbit",
        "options": {
          "KeyName": ""
        }
      }
    }
  ],
  "volumes": [
    {
      "name": "",
      "host": {
        "sourcePath": ""
      },
      "configuredAtLaunch": true,
      "dockerVolumeConfiguration": {
        "scope": "shared",
        "autoprovision": true,
        "driver": "",
        "driverOpts": {
          "KeyName": ""
        },
        "labels": {
          "KeyName": ""
        }
      },
      "efsVolumeConfiguration": {
        "fileSystemId": "",
        "rootDirectory": "",
        "transitEncryption": "DISABLED",
        "transitEncryptionPort": 0,
        "authorizationConfig": {
          "accessPointId": "",
          "iam": "ENABLED"
        }
      },
      "fsxWindowsFileServerVolumeConfiguration": {
        "fileSystemId": "",
        "rootDirectory": "",
        "authorizationConfig": {
          "credentialsParameter": "",
          "domain": ""
        }
      }
    }
  ],
  "placementConstraints": [
    {
      "type": "memberOf",
      "expression": ""
    }
  ],
  "requiresCompatibilities": ["EC2"],
  "cpu": "",
  "memory": "",
  "tags": [
    {
      "key": "",
      "value": ""
    }
  ],
  "pidMode": "task",
  "ipcMode": "task",
  "proxyConfiguration": {
    "type": "APPMESH",
    "containerName": "",
    "properties": [
      {
        "name": "",
        "value": ""
      }
    ]
  },
  "inferenceAccelerators": [
    {
      "deviceName": "",
      "deviceType": ""
    }
  ],
  "ephemeralStorage": {
    "sizeInGiB": 0
  },
  "runtimePlatform": {
    "cpuArchitecture": "X86_64",
    "operatingSystemFamily": "WINDOWS_SERVER_20H2_CORE"
  }
}
```

**Fargate template**

**Important**  
 For Fargate, you must include the `operatingSystemFamily` parameter with one of the following values:  
`LINUX`
`WINDOWS_SERVER_2019_FULL`
`WINDOWS_SERVER_2019_CORE`
`WINDOWS_SERVER_2022_FULL`
`WINDOWS_SERVER_2022_CORE`

```
{
    "family": "",
    "runtimePlatform": {"operatingSystemFamily": ""},
    "taskRoleArn": "",
    "executionRoleArn": "",
    "networkMode": "awsvpc",
    "platformFamily": "",
    "containerDefinitions": [
        {
            "name": "",
            "image": "",
            "repositoryCredentials": {"credentialsParameter": ""},
            "cpu": 0,
            "memory": 0,
            "memoryReservation": 0,
            "links": [""],
            "portMappings": [
                {
                    "containerPort": 0,
                    "hostPort": 0,
                    "protocol": "tcp"
                }
            ],
            "essential": true,
            "entryPoint": [""],
            "command": [""],
            "environment": [
                {
                    "name": "",
                    "value": ""
                }
            ],
            "environmentFiles": [
                {
                    "value": "",
                    "type": "s3"
                }
            ],
            "mountPoints": [
                {
                    "sourceVolume": "",
                    "containerPath": "",
                    "readOnly": true
                }
            ],
            "volumesFrom": [
                {
                    "sourceContainer": "",
                    "readOnly": true
                }
            ],
            "linuxParameters": {
                "capabilities": {
                    "add": [""],
                    "drop": [""]
                },
                "devices": [
                    {
                        "hostPath": "",
                        "containerPath": "",
                        "permissions": ["read"]
                    }
                ],
                "initProcessEnabled": true,
                "sharedMemorySize": 0,
                "tmpfs": [
                    {
                        "containerPath": "",
                        "size": 0,
                        "mountOptions": [""]
                    }
                ],
                "maxSwap": 0,
                "swappiness": 0
            },
            "secrets": [
                {
                    "name": "",
                    "valueFrom": ""
                }
            ],
            "dependsOn": [
                {
                    "containerName": "",
                    "condition": "HEALTHY"
                }
            ],
            "startTimeout": 0,
            "stopTimeout": 0,
            "hostname": "",
            "user": "",
            "workingDirectory": "",
            "disableNetworking": true,
            "privileged": true,
            "readonlyRootFilesystem": true,
            "dnsServers": [""],
            "dnsSearchDomains": [""],
            "extraHosts": [
                {
                    "hostname": "",
                    "ipAddress": ""
                }
            ],
            "dockerSecurityOptions": [""],
            "interactive": true,
            "pseudoTerminal": true,
            "dockerLabels": {"KeyName": ""},
            "ulimits": [
                {
                    "name": "msgqueue",
                    "softLimit": 0,
                    "hardLimit": 0
                }
            ],
            "logConfiguration": {
                "logDriver": "awslogs",
                "options": {"KeyName": ""},
                "secretOptions": [
                    {
                        "name": "",
                        "valueFrom": ""
                    }
                ]
            },
            "healthCheck": {
                "command": [""],
                "interval": 0,
                "timeout": 0,
                "retries": 0,
                "startPeriod": 0
            },
            "systemControls": [
                {
                    "namespace": "",
                    "value": ""
                }
            ],
            "resourceRequirements": [
                {
                    "value": "",
                    "type": "GPU"
                }
            ],
            "firelensConfiguration": {
                "type": "fluentd",
                "options": {"KeyName": ""}
            }
        }
    ],
    "volumes": [
        {
            "name": "",
            "host": {"sourcePath": ""},
            "configuredAtLaunch": true,
            "dockerVolumeConfiguration": {
                "scope": "task",
                "autoprovision": true,
                "driver": "",
                "driverOpts": {"KeyName": ""},
                "labels": {"KeyName": ""}
            },
            "efsVolumeConfiguration": {
                "fileSystemId": "",
                "rootDirectory": "",
                "transitEncryption": "ENABLED",
                "transitEncryptionPort": 0,
                "authorizationConfig": {
                    "accessPointId": "",
                    "iam": "ENABLED"
                }
            }
        }
    ],
    "requiresCompatibilities": ["FARGATE"],
    "cpu": "",
    "memory": "",
    "tags": [
        {
            "key": "",
            "value": ""
        }
    ],
    "ephemeralStorage": {"sizeInGiB": 0},
    "pidMode": "task",
    "ipcMode": "none",
    "proxyConfiguration": {
        "type": "APPMESH",
        "containerName": "",
        "properties": [
            {
                "name": "",
                "value": ""
            }
        ]
    },
    "inferenceAccelerators": [
        {
            "deviceName": "",
            "deviceType": ""
        }
    ]
}
```

You can generate this task definition template using the following AWS CLI command.

```
aws ecs register-task-definition --generate-cli-skeleton
```
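Before passing a filled-in template to `--cli-input-json`, you can sanity-check it locally. The following sketch embeds a small template inline (all values hypothetical); in practice you would read the JSON from the file you plan to register. `family` and `containerDefinitions` are the required parameters for `RegisterTaskDefinition`.

```python
import json

# Minimal filled-in template (hypothetical values). In practice, read the
# JSON you plan to pass to --cli-input-json from a file.
template = """
{
  "family": "sample-app",
  "containerDefinitions": [
    {"name": "web", "image": "public.ecr.aws/docker/library/httpd:2.4", "essential": true}
  ]
}
"""

task_def = json.loads(template)  # raises ValueError on invalid JSON

# RegisterTaskDefinition requires family and containerDefinitions.
for required in ("family", "containerDefinitions"):
    assert required in task_def, f"missing required parameter: {required}"

print("template OK:", task_def["family"])
```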

# Example Amazon ECS task definitions
<a name="example_task_definitions"></a>

You can copy the examples and snippets to start creating your own task definitions. 

Copy the examples, and then paste them when you use the **Configure via JSON** option in the console. Make sure to customize the examples, such as by using your account ID. You can also include the snippets in your task definition JSON. For more information, see [Creating an Amazon ECS task definition using the console](create-task-definition.md) and [Amazon ECS task definition parameters for Fargate](task_definition_parameters.md).

For more task definition examples, see [AWS Sample Task Definitions](https://github.com/aws-samples/aws-containers-task-definitions) on GitHub.

**Topics**
+ [Webserver](#example_task_definition-webserver)
+ [`splunk` log driver](#example_task_definition-splunk)
+ [`fluentd` log driver](#example_task_definition-fluentd)
+ [`gelf` log driver](#example_task_definition-gelf)
+ [Workloads on external instances](#ecs-anywhere-runtask)
+ [Amazon ECR image and task definition IAM role](#example_task_definition-iam)
+ [Entrypoint with command](#example_task_definition-ping)
+ [Container dependency](#example_task_definition-containerdependency)
+ [Volumes in task definitions](#volume_sample_task_defs)
+ [Windows sample task definitions](#windows_sample_task_defs)

## Webserver
<a name="example_task_definition-webserver"></a>

The following is an example task definition using Linux containers on Fargate that sets up a web server:

```
{
   "containerDefinitions": [ 
      { 
         "command": [
            "/bin/sh -c \"echo '<html> <head> <title>Amazon ECS Sample App</title> <style>body {margin-top: 40px; background-color: #333;} </style> </head><body> <div style=color:white;text-align:center> <h1>Amazon ECS Sample App</h1> <h2>Congratulations!</h2> <p>Your application is now running on a container in Amazon ECS.</p> </div></body></html>' >  /usr/local/apache2/htdocs/index.html && httpd-foreground\""
         ],
         "entryPoint": [
            "sh",
            "-c"
         ],
         "essential": true,
         "image": "public.ecr.aws/docker/library/httpd:2.4",
         "logConfiguration": { 
            "logDriver": "awslogs",
            "options": { 
               "awslogs-group" : "/ecs/fargate-task-definition",
               "awslogs-region": "us-east-1",
               "awslogs-stream-prefix": "ecs"
            }
         },
         "name": "sample-fargate-app",
         "portMappings": [ 
            { 
               "containerPort": 80,
               "hostPort": 80,
               "protocol": "tcp"
            }
         ]
      }
   ],
   "cpu": "256",
   "executionRoleArn": "arn:aws:iam::012345678910:role/ecsTaskExecutionRole",
   "family": "fargate-task-definition",
   "memory": "512",
   "networkMode": "awsvpc",
   "runtimePlatform": {
        "operatingSystemFamily": "LINUX"
    },
   "requiresCompatibilities": [ 
       "FARGATE" 
    ]
}
```

The following is an example task definition using Windows containers on Fargate that sets up a web server:

```
{
    "containerDefinitions": [
        {
            "command": ["New-Item -Path C:\\inetpub\\wwwroot\\index.html -Type file -Value '<html> <head> <title>Amazon ECS Sample App</title> <style>body {margin-top: 40px; background-color: #333;} </style> </head><body> <div style=color:white;text-align:center> <h1>Amazon ECS Sample App</h1> <h2>Congratulations!</h2> <p>Your application is now running on a container in Amazon ECS.</p>'; C:\\ServiceMonitor.exe w3svc"],
            "entryPoint": [
                "powershell",
                "-Command"
            ],
            "essential": true,
            "cpu": 2048,
            "memory": 4096,
            "image": "mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019",
            "name": "sample_windows_app",
            "portMappings": [
                {
                    "hostPort": 80,
                    "containerPort": 80,
                    "protocol": "tcp"
                }
            ]
        }
    ],
    "memory": "4096",
    "cpu": "2048",
    "networkMode": "awsvpc",
    "family": "windows-simple-iis-2019-core",
    "executionRoleArn": "arn:aws:iam::012345678910:role/ecsTaskExecutionRole",
    "runtimePlatform": {"operatingSystemFamily": "WINDOWS_SERVER_2019_CORE"},
    "requiresCompatibilities": ["FARGATE"]
}
```

## `splunk` log driver
<a name="example_task_definition-splunk"></a>

The following snippet demonstrates how to use the `splunk` log driver in a task definition that sends the logs to a remote service. The Splunk token parameter is specified as a secret option because it can be treated as sensitive data. For more information, see [Pass sensitive data to an Amazon ECS container](specifying-sensitive-data.md).

```
"containerDefinitions": [{
	"logConfiguration": {
		"logDriver": "splunk",
		"options": {
			"splunk-url": "https://cloud.splunk.com:8080",
			"tag": "tag_name"
		},
		"secretOptions": [{
			"name": "splunk-token",
			"valueFrom": "arn:aws:secretsmanager:region:aws_account_id:secret:splunk-token-KnrBkD"
		}]
	}
}],
```

## `fluentd` log driver
<a name="example_task_definition-fluentd"></a>

The following snippet demonstrates how to use the `fluentd` log driver in a task definition that sends the logs to a remote service. The `fluentd-address` value is specified as a secret option as it may be treated as sensitive data. For more information, see [Pass sensitive data to an Amazon ECS container](specifying-sensitive-data.md).

```
"containerDefinitions": [{
	"logConfiguration": {
		"logDriver": "fluentd",
		"options": {
			"tag": "fluentd demo"
		},
		"secretOptions": [{
			"name": "fluentd-address",
			"valueFrom": "arn:aws:secretsmanager:region:aws_account_id:secret:fluentd-address-KnrBkD"
		}]
	},
	"entryPoint": [],
	"portMappings": [{
             "hostPort": 80,
             "protocol": "tcp",
             "containerPort": 80
             },
             {
		"hostPort": 24224,
		"protocol": "tcp",
		"containerPort": 24224
	}]
}],
```

## `gelf` log driver
<a name="example_task_definition-gelf"></a>

The following snippet demonstrates how to use the `gelf` log driver in a task definition that sends the logs to a remote host running Logstash that accepts GELF logs as input. For more information, see [logConfiguration](task_definition_parameters.md#ContainerDefinition-logConfiguration).

```
"containerDefinitions": [{
	"logConfiguration": {
		"logDriver": "gelf",
		"options": {
			"gelf-address": "udp://logstash-service-address:5000",
			"tag": "gelf task demo"
		}
	},
	"entryPoint": [],
	"portMappings": [{
			"hostPort": 5000,
			"protocol": "udp",
			"containerPort": 5000
		},
		{
			"hostPort": 5000,
			"protocol": "tcp",
			"containerPort": 5000
		}
	]
}],
```

## Workloads on external instances
<a name="ecs-anywhere-runtask"></a>

When registering an Amazon ECS task definition, use the `requiresCompatibilities` parameter and specify `EXTERNAL`. This validates that the task definition is compatible for running Amazon ECS workloads on your external instances. If you use the console to register a task definition, you must use the JSON editor. For more information, see [Creating an Amazon ECS task definition using the console](create-task-definition.md).

**Important**  
If your tasks require a task execution IAM role, make sure that it's specified in the task definition. 

When you deploy your workload, use the `EXTERNAL` launch type when creating your service or running your standalone task.
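As one hedged sketch of that deployment step, the following builds the parameters for running a standalone task with the `EXTERNAL` launch type. The cluster and family names are hypothetical.

```python
# Hedged sketch: parameters for running a standalone task with the
# EXTERNAL launch type. Cluster and family names are hypothetical.
params = {
    "cluster": "my-external-cluster",
    "taskDefinition": "nginx",   # family, or family:revision
    "launchType": "EXTERNAL",
    "count": 1,
}

# With AWS credentials configured, these could be passed to boto3:
#   import boto3
#   boto3.client("ecs").run_task(**params)
print(params["launchType"])
```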

The following is an example task definition.

------
#### [ Linux ]

```
{
	"requiresCompatibilities": [
		"EXTERNAL"
	],
	"containerDefinitions": [{
		"name": "nginx",
		"image": "public.ecr.aws/nginx/nginx:latest",
		"memory": 256,
		"cpu": 256,
		"essential": true,
		"portMappings": [{
			"containerPort": 80,
			"hostPort": 8080,
			"protocol": "tcp"
		}]
	}],
	"networkMode": "bridge",
	"family": "nginx"
}
```

------
#### [ Windows ]

```
{
	"requiresCompatibilities": [
		"EXTERNAL"
	],
	"containerDefinitions": [{
		"name": "windows-container",
		"image": "mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019",
		"memory": 256,
		"cpu": 512,
		"essential": true,
		"portMappings": [{
			"containerPort": 80,
			"hostPort": 8080,
			"protocol": "tcp"
		}]
	}],
	"networkMode": "bridge",
	"family": "windows-container"
}
```

------

## Amazon ECR image and task definition IAM role
<a name="example_task_definition-iam"></a>

The following snippet uses an Amazon ECR image called `aws-nodejs-sample` with the `v1` tag from the `123456789012.dkr.ecr.us-west-2.amazonaws.com` registry. The container in this task inherits IAM permissions from the `arn:aws:iam::123456789012:role/AmazonECSTaskS3BucketRole` role. For more information, see [Amazon ECS task IAM role](task-iam-roles.md).

```
{
    "containerDefinitions": [
        {
            "name": "sample-app",
            "image": "123456789012.dkr.ecr.us-west-2.amazonaws.com/aws-nodejs-sample:v1",
            "memory": 200,
            "cpu": 10,
            "essential": true
        }
    ],
    "family": "example_task_3",
    "taskRoleArn": "arn:aws:iam::123456789012:role/AmazonECSTaskS3BucketRole"
}
```

## Entrypoint with command
<a name="example_task_definition-ping"></a>

The following snippet demonstrates the syntax for a Docker container that uses an entry point and a command argument. This container pings `example.com` four times and then exits.

```
{
    "containerDefinitions": [
        {
            "memory": 32,
            "essential": true,
            "entryPoint": ["ping"],
            "name": "alpine_ping",
            "readonlyRootFilesystem": true,
            "image": "alpine:3.4",
            "command": [
                "-c",
                "4",
                "example.com"
            ],
            "cpu": 16
        }
    ],
    "family": "example_task_2"
}
```

## Container dependency
<a name="example_task_definition-containerdependency"></a>

This snippet demonstrates the syntax for a task definition with multiple containers where container dependency is specified. In the following task definition, the `envoy` container must reach a healthy status, determined by the required container health check parameters, before the `app` container will start. For more information, see [Container dependency](task_definition_parameters.md#container_definition_dependson).

```
{
  "family": "appmesh-gateway",
  "runtimePlatform": {
        "operatingSystemFamily": "LINUX"
  },
  "proxyConfiguration":{
      "type": "APPMESH",
      "containerName": "envoy",
      "properties": [
          {
              "name": "IgnoredUID",
              "value": "1337"
          },
          {
              "name": "ProxyIngressPort",
              "value": "15000"
          },
          {
              "name": "ProxyEgressPort",
              "value": "15001"
          },
          {
              "name": "AppPorts",
              "value": "9080"
          },
          {
              "name": "EgressIgnoredIPs",
              "value": "169.254.170.2,169.254.169.254"
          }
      ]
  },
  "containerDefinitions": [
    {
      "name": "app",
      "image": "application_image",
      "portMappings": [
        {
          "containerPort": 9080,
          "hostPort": 9080,
          "protocol": "tcp"
        }
      ],
      "essential": true,
      "dependsOn": [
        {
          "containerName": "envoy",
          "condition": "HEALTHY"
        }
      ]
    },
    {
      "name": "envoy",
      "image": "840364872350.dkr.ecr.region-code.amazonaws.com/aws-appmesh-envoy:v1.15.1.0-prod",
      "essential": true,
      "environment": [
        {
          "name": "APPMESH_VIRTUAL_NODE_NAME",
          "value": "mesh/meshName/virtualNode/virtualNodeName"
        },
        {
          "name": "ENVOY_LOG_LEVEL",
          "value": "info"
        }
      ],
      "healthCheck": {
        "command": [
          "CMD-SHELL",
          "echo hello"
        ],
        "interval": 5,
        "timeout": 2,
        "retries": 3
      }    
    }
  ],
  "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
  "networkMode": "awsvpc"
}
```

## Volumes in task definitions
<a name="volume_sample_task_defs"></a>

Use the following to understand how to specify volumes in tasks.
+ For information about how to configure an Amazon EBS volume, see [Specify Amazon EBS volume configuration at Amazon ECS deployment](configure-ebs-volume.md).
+ For information about how to configure an Amazon EFS volume, see [Configuring Amazon EFS file systems for Amazon ECS using the console](tutorial-efs-volumes.md).
+ For information about how to configure an FSx for Windows File Server volume, see [Learn how to configure FSx for Windows File Server file systems for Amazon ECS](tutorial-wfsx-volumes.md).
+ For information about how to configure a Docker volume, see [Docker volume examples for Amazon ECS](docker-volume-examples.md).
+ For information about how to configure a bind mount, see [Bind mount examples for Amazon ECS](bind-mount-examples.md).
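As a quick orientation before those topics, the following is a minimal sketch of the simplest case, a bind mount. All names and paths are hypothetical; the point is that `sourceVolume` in `mountPoints` must match the `name` of an entry in the top-level `volumes` list.

```python
import json

# Hypothetical sketch: a bind mount that exposes a host directory to a
# container read-only. The "sourceVolume" in mountPoints must match the
# "name" of an entry in the top-level volumes list.
task_definition = {
    "family": "bind-mount-demo",
    "containerDefinitions": [
        {
            "name": "app",
            "image": "my-app:latest",
            "essential": True,
            "mountPoints": [
                {
                    "sourceVolume": "shared-data",
                    "containerPath": "/srv/data",
                    "readOnly": True,
                }
            ],
        }
    ],
    "volumes": [
        {"name": "shared-data", "host": {"sourcePath": "/data/shared"}},
    ],
}

print(json.dumps(task_definition, indent=2))
```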

## Windows sample task definitions
<a name="windows_sample_task_defs"></a>

The following is a sample task definition to help you get started with Windows containers on Amazon ECS.

**Example Amazon ECS Console Sample Application for Windows**  
The following task definition is the Amazon ECS console sample application that is produced in the first-run wizard for Amazon ECS; it has been ported to use the `mcr.microsoft.com/windows/servercore/iis` Windows container image.  

```
{
  "family": "windows-simple-iis",
  "containerDefinitions": [
    {
      "name": "windows_sample_app",
      "image": "mcr.microsoft.com/windows/servercore/iis",
      "cpu": 1024,
      "entryPoint":["powershell", "-Command"],
      "command":["New-Item -Path C:\\inetpub\\wwwroot\\index.html -Type file -Value '<html> <head> <title>Amazon ECS Sample App</title> <style>body {margin-top: 40px; background-color: #333;} </style> </head><body> <div style=color:white;text-align:center> <h1>Amazon ECS Sample App</h1> <h2>Congratulations!</h2> <p>Your application is now running on a container in Amazon ECS.</p>'; C:\\ServiceMonitor.exe w3svc"],
      "portMappings": [
        {
          "protocol": "tcp",
          "containerPort": 80
        }
      ],
      "memory": 1024,
      "essential": true
    }
  ],
  "networkMode": "awsvpc",
  "memory": "1024",
  "cpu": "1024"
}
```