

# Amazon ECS service deployment controllers and strategies
<a name="ecs_service-options"></a>

Before you deploy your service, review the available deployment options and the features that the service uses.

## Scheduling strategy
<a name="service-strategy"></a>

There are two service scheduler strategies available:
+ `REPLICA`—The replica scheduling strategy places and maintains the desired number of tasks across your cluster. By default, the service scheduler spreads tasks across Availability Zones. You can use task placement strategies and constraints to customize task placement decisions. For more information, see [Replica scheduling strategy](#service_scheduler_replica).
+ `DAEMON`—The daemon scheduling strategy deploys exactly one task on each active container instance that meets all of the task placement constraints that you specify in your cluster. When using this strategy, there is no need to specify a desired number of tasks, a task placement strategy, or use Service Auto Scaling policies. For more information, see [Daemon scheduling strategy](#service_scheduler_daemon).
**Note**  
Fargate tasks do not support the `DAEMON` scheduling strategy.

### Replica scheduling strategy
<a name="service_scheduler_replica"></a>

The *replica* scheduling strategy places and maintains the desired number of tasks in your cluster.

For a service that runs tasks on Fargate, when the service scheduler launches new tasks or stops running tasks, the service scheduler uses a best attempt to maintain a balance across Availability Zones. You don't need to specify task placement strategies or constraints.

When you create a service that runs tasks on EC2 instances, you can optionally specify task placement strategies and constraints to customize task placement decisions. If no task placement strategies or constraints are specified, then by default the service scheduler spreads the tasks across Availability Zones. The service scheduler uses the following logic:
+ Determines which of the container instances in your cluster can support your service's task definition (for example, required CPU, memory, ports, and container instance attributes).
+ Determines which container instances satisfy any placement constraints that are defined for the service.
+ When you have a replica service that depends on a daemon service (for example, a daemon log router task that needs to be running before tasks can use logging), create a task placement constraint that ensures that the daemon service tasks get placed on the EC2 instance prior to the replica service tasks. For more information, see [Example Amazon ECS task placement constraints](constraint-examples.md).
+ When there's a defined placement strategy, use that strategy to select an instance from the remaining candidates.
+ When there's no defined placement strategy, use the following logic to balance tasks across the Availability Zones in your cluster:
  + Sorts the valid container instances. Gives priority to instances that have the fewest number of running tasks for this service in their respective Availability Zone. For example, if zone A has one running service task and zones B and C each have zero, valid container instances in either zone B or C are considered optimal for placement.
  + Places the new service task on a valid container instance in an optimal Availability Zone based on the previous steps. Favors container instances with the fewest number of running tasks for this service.
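The zone-balancing logic above can be sketched as follows. This is a simplified approximation, not the actual ECS scheduler implementation, and the instance IDs, zones, and task counts are hypothetical:

```python
# Sketch of the default Availability Zone balancing placement logic.
# Candidates have already passed resource and constraint checks.
def pick_instance(candidates, running_tasks_per_az):
    """candidates: list of (instance_id, az) tuples.
    running_tasks_per_az: running task count for this service, per zone.
    Prefers instances whose zone has the fewest running tasks."""
    return min(candidates, key=lambda c: running_tasks_per_az.get(c[1], 0))

candidates = [("i-a1", "us-east-1a"), ("i-b1", "us-east-1b"), ("i-c1", "us-east-1c")]
counts = {"us-east-1a": 1, "us-east-1b": 0, "us-east-1c": 0}
# Zones B and C are tied with zero tasks, so an instance there is optimal.
print(pick_instance(candidates, counts))  # ('i-b1', 'us-east-1b')
```

The real scheduler also favors the individual instance with the fewest running tasks for the service inside the chosen zone; that tie-break is omitted here for brevity.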

We recommend that you use the service rebalancing feature when you use the `REPLICA` strategy because it helps ensure high availability for your service.

### Daemon scheduling strategy
<a name="service_scheduler_daemon"></a>

The *daemon* scheduling strategy deploys exactly one task on each active container instance that meets all of the task placement constraints specified in your cluster. The service scheduler evaluates the task placement constraints for running tasks, and stops tasks that don't meet the placement constraints. When you use this strategy, you don't need to specify a desired number of tasks, a task placement strategy, or use Service Auto Scaling policies.

Amazon ECS reserves container instance compute resources including CPU, memory, and network interfaces for the daemon tasks. When you launch a daemon service on a cluster with other replica services, Amazon ECS prioritizes the daemon task. This means that the daemon task is the first task to launch on the instances and the last task to stop after all replica tasks are stopped. This strategy ensures that resources aren't used by pending replica tasks and are available for the daemon tasks.

The daemon service scheduler doesn't place any tasks on instances that have a `DRAINING` status. If a container instance transitions to a `DRAINING` status, the daemon tasks on it are stopped. The service scheduler also monitors when new container instances are added to your cluster and adds the daemon tasks to them.

When you specify a deployment configuration, the value for the `maximumPercent` parameter must be `100` (specified as a percentage), which is the default value used if not set. The default value for the `minimumHealthyPercent` parameter is `0` (specified as a percentage).

You must restart the service when you change the placement constraints for the daemon service. Amazon ECS dynamically updates the resources that are reserved on qualifying instances for the daemon task. For existing instances, the scheduler tries to place the task on the instance. 

A new deployment starts when there is a change to the task size or container resource reservation in the task definition. A new deployment also starts when updating a service or setting a different revision of the task definition. Amazon ECS picks up the updated CPU and memory reservations for the daemon, and then blocks that capacity for the daemon task.

If there are insufficient resources for either of the above cases, the following happens:
+ The task placement fails.
+ A CloudWatch event is generated.
+ Amazon ECS continues to try to schedule the task on the instance, waiting for resources to become available.
+ Amazon ECS frees up any reserved instances that no longer meet the placement constraint criteria and stops the corresponding daemon tasks.

The daemon scheduling strategy can be used in the following cases:
+ Running application containers
+ Running support containers for logging, monitoring, and tracing tasks

Fargate tasks, and services that use the `CODE_DEPLOY` or `EXTERNAL` deployment controller types, don't support the daemon scheduling strategy.

When the service scheduler stops running tasks, it attempts to maintain balance across the Availability Zones in your cluster. The scheduler uses the following logic: 
+ If a placement strategy is defined, use that strategy to select which tasks to terminate. For example, if a service has an Availability Zone spread strategy defined, a task is selected that leaves the remaining tasks with the best spread.
+ If no placement strategy is defined, use the following logic to maintain balance across the Availability Zones in your cluster:
  + Sort the valid container instances. Give priority to instances that have the largest number of running tasks for this service in their respective Availability Zone. For example, if zone A has one running service task and zones B and C each have two running service tasks, container instances in either zone B or C are considered optimal for termination.
  + Stop the task on a container instance in an optimal Availability Zone based on the previous steps. Favor container instances with the largest number of running tasks for this service.

## Deployment controllers
<a name="service_deployment-controllers"></a>

The deployment controller is the mechanism that determines how tasks are deployed for your service. The valid options are:
+ ECS

  When you create a service which uses the `ECS` deployment controller, you can choose between the following deployment strategies:
  + `ROLLING`: When you create a service which uses the *rolling update* (`ROLLING`) deployment strategy, the Amazon ECS service scheduler replaces the currently running tasks with new tasks. The number of tasks that Amazon ECS adds or removes from the service during a rolling update is controlled by the service deployment configuration. 

    Rolling update deployments are best suited for the following scenarios:
    + Gradual service updates: You need to update your service incrementally without taking the entire service offline at once.
    + Limited resource requirements: You want to avoid the additional resource costs of running two complete environments simultaneously (as required by blue/green deployments).
    + Acceptable deployment time: Your application can tolerate a longer deployment process, as rolling updates replace tasks one by one.
    + No need for instant roll back: Your service can tolerate a rollback process that takes minutes rather than seconds.
    + Simple deployment process: You prefer a straightforward deployment approach without the complexity of managing multiple environments, target groups, and listeners.
    + No load balancer requirement: Your service doesn't use or require an Application Load Balancer, Network Load Balancer, or Service Connect (which blue/green deployments require).
    + Stateful applications: Your application maintains state that makes it difficult to run two parallel environments.
    + Cost sensitivity: You want to minimize deployment costs by not running duplicate environments during deployment.

    Rolling updates are the default deployment strategy for services and provide a balance between deployment safety and resource efficiency for many common application scenarios.
  + `BLUE_GREEN`: A *blue/green* deployment strategy (`BLUE_GREEN`) is a release methodology that reduces downtime and risk by running two identical production environments called blue and green. With Amazon ECS blue/green deployments, you can validate new service revisions before directing production traffic to them. This approach provides a safer way to deploy changes with the ability to quickly roll back if needed.

    Amazon ECS blue/green deployments are best suited for the following scenarios:
    + Service validation: When you need to validate new service revisions before directing production traffic to them
    + Zero downtime: When your service requires zero-downtime deployments
    + Instant roll back: When you need the ability to quickly roll back if issues are detected
    + Load balancer requirement: When your service uses Application Load Balancer, Network Load Balancer, or Service Connect
  + `LINEAR`: A *linear* deployment strategy (`LINEAR`) gradually shifts traffic from the current production environment to a new environment in equal percentage increments over a specified time period. With Amazon ECS linear deployments, you can control the pace of traffic shifting and validate new service revisions with increasing amounts of production traffic.

    Amazon ECS linear deployments are best suited for the following scenarios:
    + Gradual validation: When you want to gradually validate your new service version with increasing traffic
    + Performance monitoring: When you need time to monitor metrics and performance during the deployment
    + Risk minimization: When you want to minimize risk by exposing the new version to production traffic incrementally
    + Load balancer requirement: When your service uses Application Load Balancer, Network Load Balancer, or Service Connect
  + `CANARY`: A *canary* deployment strategy (`CANARY`) shifts a small percentage of traffic to the new service revision first, then shifts the remaining traffic all at once after a specified time period. This allows you to test the new version with a subset of users before full deployment.

    Amazon ECS canary deployments are best suited for the following scenarios:
    + Feature testing: When you want to test new features with a small subset of users before full rollout
    + Production validation: When you need to validate performance and functionality with real production traffic
    + Blast radius control: When you want to minimize blast radius if issues are discovered in the new version
    + Load balancer requirement: When your service uses Application Load Balancer, Network Load Balancer, or Service Connect
+ External

  Use a third-party deployment controller.
+ Blue/green deployment (powered by AWS CodeDeploy)

  CodeDeploy installs an updated version of the application as a new replacement task set and reroutes production traffic from the original application task set to the replacement task set. The original task set is terminated after a successful deployment. Use this deployment controller to verify a new deployment of a service before sending production traffic to it.
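As a rough illustration of how the `LINEAR` and `CANARY` strategies above differ, the following sketch computes the traffic weight on the new service revision over time. The step count and canary percentage are hypothetical placeholders, not ECS defaults:

```python
def linear_weights(steps):
    """LINEAR: shift traffic in equal increments until 100%."""
    return [round(100 * i / steps) for i in range(1, steps + 1)]

def canary_weights(canary_percent):
    """CANARY: shift a small share first, then the rest all at once."""
    return [canary_percent, 100]

print(linear_weights(5))   # [20, 40, 60, 80, 100]
print(canary_weights(10))  # [10, 100]
```

In both strategies, ECS waits a specified time period between shifts so you can monitor the new revision before more traffic reaches it.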

# Amazon ECS deployment failure detection
<a name="deployment-failure-detection"></a>

Amazon ECS provides two methods for detecting deployment failures:
+ Deployment Circuit Breaker
+ CloudWatch Alarms

Both methods can be configured to automatically roll back failed deployments to the last known good state.

Consider the following:
+ Both methods only support rolling update deployment and blue/green deployment types.
+ When both methods are used, either can trigger deployment failure.
+ The rollback method requires a previous deployment in COMPLETED state.
+ EventBridge events are generated for deployment state changes.

# How the Amazon ECS deployment circuit breaker detects failures
<a name="deployment-circuit-breaker"></a>

The deployment circuit breaker is the rolling update mechanism that determines if the tasks reach a steady state. The deployment circuit breaker has an option that will automatically roll back a failed deployment to the deployment that is in the `COMPLETED` state.

When a service deployment changes state, Amazon ECS sends a service deployment state change event to EventBridge. This provides a programmatic way to monitor the status of your service deployments. For more information, see [Amazon ECS service deployment state change events](ecs_service_deployment_events.md). We recommend that you create and monitor an EventBridge rule with an `eventName` of `SERVICE_DEPLOYMENT_FAILED` so that you can take manual action to start your deployment. For more information, see [Getting started with EventBridge](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-get-started.html) in the *Amazon EventBridge User Guide*.

When the deployment circuit breaker determines that a deployment failed, it looks for the most recent deployment that is in a `COMPLETED` state. This is the deployment that it uses as the roll-back deployment. When the rollback starts, the deployment changes from a `COMPLETED` to `IN_PROGRESS`. This means that the deployment is not eligible for another rollback until it reaches a `COMPLETED` state. When the deployment circuit breaker does not find a deployment that is in a `COMPLETED` state, the circuit breaker does not launch new tasks and the deployment is stalled. 

When you create a service, the scheduler keeps track of the tasks that failed to launch in two stages.
+ Stage 1 - The scheduler monitors the tasks to see if they transition into the RUNNING state.
  + Success - The deployment has a chance of transitioning to the COMPLETED state because at least one task transitioned to the RUNNING state. The failure criteria are skipped and the circuit breaker moves to stage 2.
  + Failure - There are consecutive tasks that did not transition to the RUNNING state and the deployment might transition to the FAILED state. 
+ Stage 2 - The deployment enters this stage when there is at least one task in the RUNNING state. The circuit breaker checks the health checks for the tasks in the current deployment being evaluated. The validated health checks are Elastic Load Balancing, AWS Cloud Map service health checks, and container health checks. 
  + Success - There is at least one task in the running state with health checks that have passed.
  + Failure - The tasks that are replaced because of health check failures have reached the failure threshold.
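The two stages above can be sketched as a simple decision function. This is an illustrative simplification of the circuit breaker's bookkeeping, not the actual scheduler code, and the task dictionary shape is hypothetical:

```python
def circuit_breaker_stage(tasks):
    """tasks: list of dicts like {"lastStatus": "RUNNING", "healthy": True}.
    Stage 1 watches for any task to reach RUNNING; once one does,
    stage 2 evaluates health checks instead."""
    if not any(t["lastStatus"] == "RUNNING" for t in tasks):
        return 1  # still waiting for a task to reach RUNNING
    return 2      # at least one RUNNING task: health checks are evaluated

print(circuit_breaker_stage([{"lastStatus": "PENDING", "healthy": False}]))  # 1
print(circuit_breaker_stage([{"lastStatus": "RUNNING", "healthy": True}]))   # 2
```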

Consider the following when you use the deployment circuit breaker method on a service.
+ The `DescribeServices` response provides insight into the state of a deployment, the `rolloutState` and `rolloutStateReason`. When a new deployment is started, the rollout state begins in an `IN_PROGRESS` state. When the service reaches a steady state, the rollout state transitions to `COMPLETED`. If the service fails to reach a steady state and circuit breaker is turned on, the deployment will transition to a `FAILED` state. A deployment in a `FAILED` state doesn't launch any new tasks.
+ In addition to the service deployment state change events Amazon ECS sends for deployments that have started and have completed, Amazon ECS also sends an event when a deployment with circuit breaker turned on fails. These events provide details about why a deployment failed or if a deployment was started because of a rollback. For more information, see [Amazon ECS service deployment state change events](ecs_service_deployment_events.md).
+ If a new deployment is started because a previous deployment failed and a rollback occurred, the `reason` field of the service deployment state change event indicates the deployment was started because of a rollback.
+ The deployment circuit breaker is only supported for Amazon ECS services that use the rolling update (`ECS`) deployment controller.
+ You must use the Amazon ECS console or the AWS CLI when you use the deployment circuit breaker with the CloudWatch option. For more information, see [Create a service using defined parameters](create-service-console-v2.md#create-custom-service) and [create-service](https://docs.aws.amazon.com/cli/latest/reference/ecs/create-service.html) in the *AWS Command Line Interface Reference*.
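To check `rolloutState` and `rolloutStateReason` programmatically, you can inspect a `DescribeServices` response. The sketch below operates on an already-fetched response dictionary shaped like the API reference describes; the sample deployment values are illustrative only:

```python
def primary_rollout_state(describe_services_response):
    """Return (rolloutState, rolloutStateReason) for the PRIMARY deployment."""
    service = describe_services_response["services"][0]
    for deployment in service["deployments"]:
        if deployment.get("status") == "PRIMARY":
            return deployment.get("rolloutState"), deployment.get("rolloutStateReason")
    return None, None

# Trimmed example response (in practice this comes from the ECS API).
response = {
    "services": [{"deployments": [{
        "status": "PRIMARY",
        "rolloutState": "IN_PROGRESS",
        "rolloutStateReason": "ECS deployment ecs-svc/123 in progress.",
    }]}]
}
print(primary_rollout_state(response)[0])  # IN_PROGRESS
```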

The following `create-service` AWS CLI example shows how to create a Linux service when the deployment circuit breaker is used with the rollback option.

```
aws ecs create-service \
     --service-name MyService \
     --deployment-controller type=ECS \
     --desired-count 3 \
     --deployment-configuration "deploymentCircuitBreaker={enable=true,rollback=true}" \
     --task-definition sample-fargate:1 \
     --launch-type FARGATE \
     --platform-family LINUX \
     --platform-version 1.4.0 \
     --network-configuration "awsvpcConfiguration={subnets=[subnet-12344321],securityGroups=[sg-12344321],assignPublicIp=ENABLED}"
```

Example:

Deployment 1 is in a `COMPLETED` state.

Deployment 2 cannot start, so the circuit breaker rolls back to Deployment 1. Deployment 1 transitions to the `IN_PROGRESS` state.

Deployment 3 starts and there is no deployment in the `COMPLETED` state, so Deployment 3 cannot roll back or launch tasks.

## Failure threshold
<a name="failure-threshold"></a>

The deployment circuit breaker calculates the threshold value, and then uses the value to determine when to move the deployment to a `FAILED` state.

The deployment circuit breaker has a minimum threshold of 3 and a maximum threshold of 200, and uses these values in the following formula to determine the failure threshold for a deployment.

```
minimum threshold (3) <= 0.5 * desired task count <= maximum threshold (200)
```

When the result of the calculation is greater than the minimum of 3, but smaller than the maximum of 200, the failure threshold is set to the calculated threshold (rounded up).
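The clamped calculation can be expressed as the following sketch of the documented formula (this is an illustration, not ECS source code):

```python
import math

def failure_threshold(desired_count, minimum=3, maximum=200):
    """Circuit breaker failure threshold: half the desired task count,
    rounded up, clamped to the fixed [3, 200] range."""
    return min(maximum, max(minimum, math.ceil(0.5 * desired_count)))

for desired in (1, 25, 400, 800):
    print(desired, failure_threshold(desired))
# 1 -> 3, 25 -> 13, 400 -> 200, 800 -> 200
```

These values match the examples in the table below.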

**Note**  
You cannot change either of the threshold values.

There are two stages for the deployment status check.

1. The deployment circuit breaker monitors tasks that are part of the deployment and checks for tasks that are in the `RUNNING` state. The scheduler ignores the failure criteria when a task in the current deployment is in the `RUNNING` state and proceeds to the next stage. When tasks fail to reach the `RUNNING` state, the deployment circuit breaker increases the failure count by one. When the failure count equals the threshold, the deployment is marked as `FAILED`.

1. This stage is entered when there are one or more tasks in the `RUNNING` state. The deployment circuit breaker performs health checks on the following resources for the tasks in the current deployment:
   + Elastic Load Balancing load balancers
   + AWS Cloud Map service
   + Amazon ECS container health checks

   When a health check fails for the task, the deployment circuit breaker increases the failure count by one. When the failure count equals the threshold, the deployment is marked as `FAILED`.

The following table provides some examples.


| Desired task count | Calculation | Threshold | 
| --- | --- | --- | 
|  1  |  <pre>3 <= 0.5 * 1 <= 200</pre>  | 3 (the calculated value is less than the minimum) | 
|  25  |  <pre>3 <= 0.5 * 25 <= 200</pre>  | 13 (the value is rounded up) | 
|  400  |  <pre>3 <= 0.5 * 400 <= 200</pre>  | 200 | 
|  800  |  <pre>3 <= 0.5 * 800 <= 200</pre>  | 200 (the calculated value is greater than the maximum) | 

For example, when the threshold is 3, the circuit breaker starts with the failure count set at 0. When a task fails to reach the `RUNNING` state, the deployment circuit breaker increases the failure count by one. When the failure count equals 3, the deployment is marked as `FAILED`.

For additional examples about how to use the rollback option, see [Announcing Amazon ECS deployment circuit breaker](https://aws.amazon.com/blogs/containers/announcing-amazon-ecs-deployment-circuit-breaker/).

# How CloudWatch alarms detect Amazon ECS deployment failures
<a name="deployment-alarm-failure"></a>

You can configure Amazon ECS to set the deployment to failed when it detects that a specified CloudWatch alarm has gone into the `ALARM` state.

You can optionally set the configuration to roll back a failed deployment to the last completed deployment.

The following `create-service` AWS CLI example shows how to create a Linux service when the deployment alarms are used with the rollback option.

```
aws ecs create-service \
     --service-name MyService \
     --deployment-controller type=ECS \
     --desired-count 3 \
     --deployment-configuration "alarms={alarmNames=[alarm1Name,alarm2Name],enable=true,rollback=true}" \
     --task-definition sample-fargate:1 \
     --launch-type FARGATE \
     --platform-family LINUX \
     --platform-version 1.4.0 \
     --network-configuration "awsvpcConfiguration={subnets=[subnet-12344321],securityGroups=[sg-12344321],assignPublicIp=ENABLED}"
```

Consider the following when you use the Amazon CloudWatch alarms method on a service.
+ There is a period during which both the blue and green service revisions run simultaneously after production traffic has shifted. Amazon ECS computes this time period based on the alarm configuration associated with the deployment. You can't set this value.
+ The `deploymentConfiguration` request parameter now contains the `alarms` data type. You can specify the alarm names, whether to use the method, and whether to initiate a rollback when the alarms indicate a deployment failure. For more information, see [CreateService](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_CreateService.html) in the *Amazon Elastic Container Service API Reference*.
+ The `DescribeServices` response provides insight into the state of a deployment, the `rolloutState` and `rolloutStateReason`. When a new deployment starts, the rollout state begins in an `IN_PROGRESS` state. When the service reaches a steady state and the bake time is complete, the rollout state transitions to `COMPLETED`. If the service fails to reach a steady state and the alarm has gone into the `ALARM` state, the deployment will transition to a `FAILED` state. A deployment in a `FAILED` state won't launch any new tasks.
+ In addition to the service deployment state change events Amazon ECS sends for deployments that have started and have completed, Amazon ECS also sends an event when a deployment that uses alarms fails. These events provide details about why a deployment failed or if a deployment was started because of a rollback. For more information, see [Amazon ECS service deployment state change events](ecs_service_deployment_events.md).
+ If a new deployment is started because a previous deployment failed and rollback was turned on, the `reason` field of the service deployment state change event will indicate the deployment was started because of a rollback.
+ If you use the deployment circuit breaker and the Amazon CloudWatch alarms to detect failures, either one can initiate a deployment failure as soon as the criteria for either method is met. A rollback occurs when you use the rollback option for the method that initiated the deployment failure.
+ Amazon CloudWatch alarms are only supported for Amazon ECS services that use the rolling update (`ECS`) deployment controller.
+ You can configure this option by using the Amazon ECS console, or the AWS CLI. For more information, see [Create a service using defined parameters](create-service-console-v2.md#create-custom-service) and [create-service](https://docs.aws.amazon.com/cli/latest/reference/ecs/create-service.html) in the *AWS Command Line Interface Reference*.
+ You might notice that the deployment status remains `IN_PROGRESS` for a prolonged amount of time. The reason for this is that Amazon ECS does not change the status until it has deleted the active deployment, and this does not happen until after the bake time. Depending on your alarm configuration, the deployment might appear to take several minutes longer than it does when you don't use alarms (even though the new primary task set is scaled up and the old deployment is scaled down). If you use CloudFormation timeouts, consider increasing the timeouts. For more information, see [Creating wait conditions in a template](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-waitcondition.html) in the *AWS CloudFormation User Guide*.
+ Amazon ECS calls `DescribeAlarms` to poll the alarms. The calls to `DescribeAlarms` count toward the CloudWatch service quotas associated with your account. If you have other AWS services that call `DescribeAlarms`, there might be an impact on Amazon ECS to poll the alarms. For example, if another service makes enough `DescribeAlarms` calls to reach the quota, that service is throttled and Amazon ECS is also throttled and unable to poll alarms. If an alarm is generated during the throttling period, Amazon ECS might miss the alarm and the rollback might not occur. There is no other impact on the deployment. For more information on CloudWatch service quotas, see [CloudWatch service quotas](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch_limits.html) in the *CloudWatch User Guide*.
+ If an alarm is in the `ALARM` state at the beginning of a deployment, Amazon ECS will not monitor alarms for the duration of that deployment (Amazon ECS ignores the alarm configuration). This behavior addresses the case where you want to start a new deployment to fix an initial deployment failure.

## Recommended alarms
<a name="ecs-deployment-alarms"></a>

We recommend that you use the following alarm metrics:
+ If you use an Application Load Balancer, use the `HTTPCode_ELB_5XX_Count` and `HTTPCode_ELB_4XX_Count` Application Load Balancer metrics. These metrics check for HTTP spikes. For more information about the Application Load Balancer metrics, see [CloudWatch metrics for your Application Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-cloudwatch-metrics.html) in the *User Guide for Application Load Balancers*.
+ If you have an existing application, use the `CPUUtilization` and `MemoryUtilization` metrics. These metrics check for the percentage of CPU and memory that the cluster or service uses. For more information, see [Considerations](cloudwatch-metrics.md#enable_cloudwatch).
+ If you use Amazon Simple Queue Service queues in your tasks, use the `ApproximateNumberOfMessagesNotVisible` Amazon SQS metric. This metric checks the number of messages in the queue that are delayed and not available for reading immediately. For more information about Amazon SQS metrics, see [Available CloudWatch metrics for Amazon SQS](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-available-cloudwatch-metrics.html) in the *Amazon Simple Queue Service Developer Guide*.

## Bake time
<a name="deployment-bake-time"></a>

When you use the rollback option for your service deployments, Amazon ECS waits an additional amount of time after the target service revision has been deployed, during which it continues to monitor the specified CloudWatch alarms. This is referred to as the bake time. This time starts after:
+ All tasks for a target service revision are running and in a healthy state
+ Source service revisions are scaled down to 0%

The default bake time is less than 5 minutes. The service deployment is marked as complete after the bake time expires.

You can configure the bake time for a rolling deployment. When you use CloudWatch alarms to detect failure, if you change the bake time, and then decide you want the Amazon ECS default, you must manually set the bake time.

# Lifecycle hooks for Amazon ECS service deployments
<a name="deployment-lifecycle-hooks"></a>

When a deployment starts, it goes through lifecycle stages. These stages can be in states such as `IN_PROGRESS` or successful. You can use lifecycle hooks, which are Lambda functions that Amazon ECS runs on your behalf at specified lifecycle stages. The functions can be either of the following:
+ An asynchronous API that validates the health check within 15 minutes.
+ A poll API that initiates another asynchronous process to evaluate the lifecycle hook completion.

After the function has finished running, it must return a `hookStatus` for the deployment to continue. If a `hookStatus` is not returned, or if the function fails, the deployment rolls back. The following are the `hookStatus` values:
+ `SUCCEEDED` – The deployment continues to the next lifecycle stage.
+ `FAILED` – The deployment rolls back to the last successful deployment.
+ `IN_PROGRESS` – Amazon ECS runs the function again after a short period of time. By default, this is a 30-second interval; however, you can customize this value by returning a `callBackDelay` alongside the `hookStatus`.

The following example shows how to return a `hookStatus` with a custom callback delay. In this example, Amazon ECS would retry this hook in 60 seconds instead of the default 30 seconds:

```
{
    "hookStatus": "IN_PROGRESS",
    "callBackDelay": 60
}
```
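A lifecycle hook Lambda function might be structured like the following sketch. The handler shape matches the `hookStatus` contract described above, but the validation helper and its logic are hypothetical placeholders:

```python
def check_validation(event):
    """Placeholder for your own validation logic (e.g. probing the new
    revision's test endpoint). Returns (done, healthy)."""
    return True, True

def lambda_handler(event, context):
    """Hypothetical lifecycle hook. `event` carries the lifecycle payload
    (serviceArn, traffic weights, and so on)."""
    done, healthy = check_validation(event)
    if not done:
        # Ask ECS to re-invoke the hook in 60 seconds instead of the default 30.
        return {"hookStatus": "IN_PROGRESS", "callBackDelay": 60}
    return {"hookStatus": "SUCCEEDED" if healthy else "FAILED"}

print(lambda_handler({}, None))  # {'hookStatus': 'SUCCEEDED'}
```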

When a rollback happens, Amazon ECS runs the lifecycle hooks for the following lifecycle stages:
+ PRODUCTION_TRAFFIC_SHIFT
+ TEST_TRAFFIC_SHIFT

## Lifecycle payloads
<a name="service-deployment-lifecycle-payloads"></a>

 When you configure lifecycle hooks for your ECS service deployments, Amazon ECS invokes these hooks at specific stages of the deployment process. Each lifecycle stage provides a JSON payload with information about the current state of the deployment. This document describes the payload structure for each lifecycle stage. 

### Common payload structure
<a name="common-payload-structure"></a>

 All lifecycle stage payloads include the following common fields: 
+  `serviceArn` - The Amazon Resource Name (ARN) of the service. 
+  `targetServiceRevisionArn` - The ARN of the target service revision being deployed. 
+  `testTrafficWeights` - A map of service revision ARNs to their corresponding test traffic weight percentages. 
+  `productionTrafficWeights` - A map of service revision ARNs to their corresponding production traffic weight percentages. 
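
As a sketch of how a hook might read these common fields, the following hypothetical helper extracts the target revision's traffic weights. The weight maps can be empty at stages that occur before any traffic shift, so the helper defaults to 0.

```python
def summarize_payload(payload):
    """Pull the common fields out of a lifecycle-stage payload (sketch)."""
    target = payload["targetServiceRevisionArn"]
    # The weight maps are empty at stages that occur before any traffic shift.
    return {
        "service": payload["serviceArn"],
        "targetTestWeight": payload.get("testTrafficWeights", {}).get(target, 0),
        "targetProductionWeight": payload.get("productionTrafficWeights", {}).get(target, 0),
    }
```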

### Lifecycle stage payloads
<a name="lifecycle-stage-payloads"></a>

#### RECONCILE\_SERVICE
<a name="reconcile-service"></a>

 This stage occurs at the beginning of the deployment process when the service is being reconciled. The following shows an example payload for this lifecycle stage.

```
{
  "serviceArn": "arn:aws:ecs:us-west-2:1234567890:service/myCluster/myService",
  "targetServiceRevisionArn": "arn:aws:ecs:us-west-2:1234567890:service-revision/myCluster/myService/01275892",
  "testTrafficWeights": {
    "arn:aws:ecs:us-west-2:1234567890:service-revision/myCluster/myService/01275892": 100,
    "arn:aws:ecs:us-west-2:1234567890:service-revision/myCluster/myService/78652123": 0
  },
  "productionTrafficWeights": {
    "arn:aws:ecs:us-west-2:1234567890:service-revision/myCluster/myService/01275892": 100,
    "arn:aws:ecs:us-west-2:1234567890:service-revision/myCluster/myService/78652123": 0
  }
}
```

 **Expectations at this stage:** 
+ Primary task set is at 0% scale

#### PRE\_SCALE\_UP
<a name="pre-scale-up"></a>

 This stage occurs before the new tasks are scaled up. The following shows an example payload for this lifecycle stage.

```
{
  "serviceArn": "arn:aws:ecs:us-west-2:1234567890:service/myCluster/myService",
  "targetServiceRevisionArn": "arn:aws:ecs:us-west-2:1234567890:service-revision/myCluster/myService/01275892",
  "testTrafficWeights": {},
  "productionTrafficWeights": {}
}
```

 **Expectations at this stage:** 
+ The green service revision tasks are at 0% scale

#### POST\_SCALE\_UP
<a name="post-scale-up"></a>

 This stage occurs after the new tasks have been scaled up and are healthy. The following shows an example payload for this lifecycle stage.

```
{
  "serviceArn": "arn:aws:ecs:us-west-2:1234567890:service/myCluster/myService",
  "targetServiceRevisionArn": "arn:aws:ecs:us-west-2:1234567890:service-revision/myCluster/myService/01275892",
  "testTrafficWeights": {},
  "productionTrafficWeights": {}
}
```

 **Expectations at this stage:** 
+ The green service revision tasks are at 100% scale
+ Tasks in the green service revision are healthy

#### TEST\_TRAFFIC\_SHIFT
<a name="test-traffic-shift"></a>

 This stage occurs when test traffic is being shifted to the green service revision tasks. 

The following shows an example payload for this lifecycle stage.

```
{
  "serviceArn": "arn:aws:ecs:us-west-2:1234567890:service/myCluster/myService",
  "targetServiceRevisionArn": "arn:aws:ecs:us-west-2:1234567890:service-revision/myCluster/myService/01275892",
  "testTrafficWeights": {
    "arn:aws:ecs:us-west-2:1234567890:service-revision/myCluster/myService/01275892": 100,
    "arn:aws:ecs:us-west-2:1234567890:service-revision/myCluster/myService/78652123": 0
  },
  "productionTrafficWeights": {}
}
```

 **Expectations at this stage:** 
+ Test traffic is in the process of moving towards the green service revision tasks. 

#### POST\_TEST\_TRAFFIC\_SHIFT
<a name="post-test-traffic-shift"></a>

 This stage occurs after test traffic has been fully shifted to the new tasks. 

The following shows an example payload for this lifecycle stage.

```
{
  "serviceArn": "arn:aws:ecs:us-west-2:1234567890:service/myCluster/myService",
  "targetServiceRevisionArn": "arn:aws:ecs:us-west-2:1234567890:service-revision/myCluster/myService/01275892",
  "testTrafficWeights": {},
  "productionTrafficWeights": {}
}
```

 **Expectations at this stage:** 
+ 100% of test traffic has shifted to the green service revision tasks. 

#### PRODUCTION\_TRAFFIC\_SHIFT
<a name="production-traffic-shift"></a>

 This stage occurs when production traffic is being shifted to the green service revision tasks. 

```
{
  "serviceArn": "arn:aws:ecs:us-west-2:1234567890:service/myCluster/myService",
  "targetServiceRevisionArn": "arn:aws:ecs:us-west-2:1234567890:service-revision/myCluster/myService/01275892",
  "testTrafficWeights": {},
  "productionTrafficWeights": {
    "arn:aws:ecs:us-west-2:1234567890:service-revision/myCluster/myService/01275892": 100,
    "arn:aws:ecs:us-west-2:1234567890:service-revision/myCluster/myService/78652123": 0
  }
}
```

 **Expectations at this stage:** 
+ Production traffic is in the process of moving to the green service revision. 

#### POST\_PRODUCTION\_TRAFFIC\_SHIFT
<a name="post-production-traffic-shift"></a>

 This stage occurs after production traffic has been fully shifted to the green service revision tasks. 

```
{
  "serviceArn": "arn:aws:ecs:us-west-2:1234567890:service/myCluster/myService",
  "targetServiceRevisionArn": "arn:aws:ecs:us-west-2:1234567890:service-revision/myCluster/myService/01275892",
  "testTrafficWeights": {},
  "productionTrafficWeights": {}
}
```

 **Expectations at this stage:** 
+ 100% of production traffic has shifted to the green service revision tasks. 

### Lifecycle stage categories
<a name="lifecycle-stage-categories"></a>

 Lifecycle stages fall into two categories: 

1.  **Single invocation stages** - These stages are invoked only once during a service deployment: 
   + PRE\_SCALE\_UP
   + POST\_SCALE\_UP
   + POST\_TEST\_TRAFFIC\_SHIFT
   + POST\_PRODUCTION\_TRAFFIC\_SHIFT

1.  **Recurring invocation stages** - These stages might be invoked multiple times during a service deployment, for example when a rollback operation happens: 
   + TEST\_TRAFFIC\_SHIFT
   + PRODUCTION\_TRAFFIC\_SHIFT

### Deployment status during lifecycle hooks
<a name="deployment-status-during-lifecycle-hooks"></a>

 While lifecycle hooks are running, the deployment status will be `IN_PROGRESS` for all lifecycle stages. 


| Lifecycle Stage | Deployment Status | 
| --- | --- | 
| RECONCILE\_SERVICE | IN\_PROGRESS | 
| PRE\_SCALE\_UP | IN\_PROGRESS | 
| POST\_SCALE\_UP | IN\_PROGRESS | 
| TEST\_TRAFFIC\_SHIFT | IN\_PROGRESS | 
| POST\_TEST\_TRAFFIC\_SHIFT | IN\_PROGRESS | 
| PRODUCTION\_TRAFFIC\_SHIFT | IN\_PROGRESS | 
| POST\_PRODUCTION\_TRAFFIC\_SHIFT | IN\_PROGRESS | 

# Stopping Amazon ECS service deployments
<a name="stop-service-deployment"></a>

You can manually stop a deployment when a failing deployment was not detected by the circuit breaker or CloudWatch alarms. The following stop types are available:
+ Rollback - This option rolls back the service deployment to the previous service revision. 

  You can use this option even if you didn't configure the service deployment for the rollback option. 

You can stop a deployment that is in any of the following states. For more information about service deployment states, see [View service history using Amazon ECS service deployments](service-deployment.md).
+ PENDING - The service deployment moves to the ROLLBACK\_REQUESTED state, and then the rollback operation starts.
+ IN\_PROGRESS - The service deployment moves to the ROLLBACK\_REQUESTED state, and then the rollback operation starts.
+ STOP\_REQUESTED - The service deployment continues to stop.
+ ROLLBACK\_REQUESTED - The service deployment continues the rollback operation.
+ ROLLBACK\_IN\_PROGRESS - The service deployment continues the rollback operation.

## Procedure
<a name="stop-service-deployment-procedure"></a>

Before you begin, configure the required permissions for viewing service deployments. For more information, see [Permissions required for viewing Amazon ECS service deployments](service-deployment-permissions.md).

------
#### [ Amazon ECS Console ]

1. Open the console at [https://console.aws.amazon.com/ecs/v2](https://console.aws.amazon.com/ecs/v2).

1. On the **Clusters** page, choose the cluster.

1. On the cluster details page, in the **Services** section, choose the service.

   The service details page displays.

1. On the service details page, choose **Deployments**.

   The deployments page displays.

1. Under **Ongoing deployment**, choose **Roll back**. Then, in the confirmation window, choose **Roll back**.

------
#### [ AWS CLI ]

1. Run `list-service-deployments` to retrieve the service deployment ARN. 

   Replace the *user-input* with your values.

   ```
   aws ecs list-service-deployments --cluster cluster-name --service service-name
   ```

   Note the `serviceDeploymentArn` for the deployment you want to stop.

   ```
   {
       "serviceDeployments": [
           {
               "serviceDeploymentArn": "arn:aws:ecs:us-west-2:123456789012:service-deployment/cluster-name/service-name/NCWGC2ZR-taawPAYrIaU5",
               "serviceArn": "arn:aws:ecs:us-west-2:123456789012:service/cluster-name/service-name",
               "clusterArn": "arn:aws:ecs:us-west-2:123456789012:cluster/cluster-name",
               "targetServiceRevisionArn": "arn:aws:ecs:us-west-2:123456789012:service-revision/cluster-name/service-name/4980306466373577095",
               "status": "SUCCESSFUL"
           }
       ]
   }
   ```

1. Run `stop-service-deployment`. Use the `serviceDeploymentArn` that was returned from `list-service-deployments`.

   Replace the *user-input* with your values.

   ```
   aws ecs stop-service-deployment --service-deployment-arn arn:aws:ecs:region:123456789012:service-deployment/cluster-name/service-name/NCWGC2ZR-taawPAYrIaU5 --stop-type ROLLBACK
   ```

------

## Next steps
<a name="stop-service-deployment-next-step"></a>

Decide what changes need to be made to the service, and then update the service. For more information, see [Updating an Amazon ECS service](update-service-console-v2.md).

# Deploy Amazon ECS services by replacing tasks
<a name="deployment-type-ecs"></a>

When you create a service which uses the *rolling update* (`ECS`) deployment type, the Amazon ECS service scheduler replaces the currently running tasks with new tasks. The number of tasks that Amazon ECS adds or removes from the service during a rolling update is controlled by the service deployment configuration. 

Amazon ECS uses the following parameters to determine the number of tasks:
+ The `minimumHealthyPercent` represents the lower limit on the number of tasks that should be running and healthy for a service during a rolling deployment or when a container instance is draining, as a percent of the desired number of tasks for the service. This value is rounded up. For example, if the minimum healthy percent is `50` and the desired task count is four, then the scheduler can stop two existing tasks before starting two new tasks. Likewise, if the minimum healthy percent is `75` and the desired task count is two, then the scheduler can't stop any tasks, because the rounded-up value is also two.
+ The `maximumPercent` represents the upper limit on the number of tasks that should be running for a service during a rolling deployment or when a container instance is draining, as a percent of the desired number of tasks for a service. This value is rounded down. For example, if the maximum percent is `200` and the desired task count is four, then the scheduler can start four new tasks before stopping four existing tasks. Likewise, if the maximum percent is `125` and the desired task count is three, the scheduler can't start any tasks, because the rounded-down value is also three.
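
The rounding rules above can be sketched as the following arithmetic. The helper is hypothetical (not part of any AWS SDK) and simply reproduces the examples from the text.

```python
import math

def deployment_bounds(desired_count, minimum_healthy_percent, maximum_percent):
    """Return (min_running, max_running) task counts for a rolling deployment."""
    min_running = math.ceil(desired_count * minimum_healthy_percent / 100)  # rounded up
    max_running = math.floor(desired_count * maximum_percent / 100)         # rounded down
    return min_running, max_running

# The examples above:
print(deployment_bounds(4, 50, 200))   # (2, 8): two tasks can stop, four can start
print(deployment_bounds(2, 75, 200))   # (2, 4): no existing task can stop first
print(deployment_bounds(3, 100, 125))  # (3, 3): no new task can start first
```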

During a rolling deployment, when tasks become unhealthy, Amazon ECS replaces them to maintain your service's `minimumHealthyPercent` and protect availability. Unhealthy tasks are replaced using the same service revision they belong to. This ensures that unhealthy task replacement in the source revision is independent from task failures in the target revision. When the `maximumPercent` setting allows, the scheduler launches replacement tasks before stopping unhealthy ones. If the `maximumPercent` parameter limits the scheduler from starting a replacement task first, the scheduler stops one unhealthy task at a time to free capacity before launching a replacement task.

**Important**  
When setting a minimum healthy percent or a maximum percent, you should ensure that the scheduler can stop or start at least one task when a deployment is initiated. If your service has a deployment that is stuck due to an invalid deployment configuration, a service event message will be sent. For more information, see [service (*service-name*) was unable to stop or start tasks during a deployment because of the service deployment configuration. Update the minimumHealthyPercent or maximumPercent value and try again.](service-event-messages-list.md#service-event-messages-7).

Rolling deployments provide two methods to quickly identify when a service deployment has failed:
+ [How the Amazon ECS deployment circuit breaker detects failures](deployment-circuit-breaker.md)
+ [How CloudWatch alarms detect Amazon ECS deployment failures](deployment-alarm-failure.md)

The methods can be used separately or together. When both methods are used, the deployment is set to failed as soon as the failure criteria for either method are met.

Use the following guidelines to help determine which method to use:
+ Circuit breaker - Use this method when you want to stop a deployment when the tasks can't start.
+ CloudWatch alarms - Use this method when you want to stop a deployment based on application metrics.

Both methods support rolling back to the previous service revision.
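
As an illustration, a `deploymentConfiguration` that enables both methods with automatic rollback might look like the following request fragment for `create-service` or `update-service`. The alarm name is a placeholder; see the linked topics for the full set of fields.

```json
{
  "deploymentConfiguration": {
    "deploymentCircuitBreaker": {
      "enable": true,
      "rollback": true
    },
    "alarms": {
      "alarmNames": ["my-deployment-alarm"],
      "enable": true,
      "rollback": true
    }
  }
}
```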

## Container image resolution
<a name="deployment-container-image-stability"></a>

By default, Amazon ECS resolves container image tags specified in the task definition to container image digests. If you create a service that runs and maintains a single task, that task is used to establish image digests for the containers in the task. If you create a service that runs and maintains multiple tasks, the first task started by the service scheduler during deployment is used to establish the image digests for the containers in the tasks.

If three or more attempts at establishing the container image digests fail, the deployment continues without image digest resolution. If the deployment circuit breaker is enabled, the deployment also fails and is rolled back.

After the container image digests have been established, Amazon ECS uses the digests to start any other desired tasks, and for any future service updates. This leads to all tasks in a service always running identical container images, resulting in version consistency for your software.

You can configure this behavior for each container in your task by using the `versionConsistency` parameter in the container definition. For more information, see [versionConsistency](task_definition_parameters.md#ContainerDefinition-versionconsistency).

**Note**  
Amazon ECS Agent versions lower than `1.31.0` don't support image digest resolution. Agent versions `1.31.0` to `1.69.0` support image digest resolution only for images pushed to Amazon ECR repositories. Agent versions `1.70.0` or higher support image digest resolution for all images. 
The minimum Fargate Linux platform version for image digest resolution is `1.3.0`. The minimum Fargate Windows platform version for image digest resolution is `1.0.0`.
Amazon ECS doesn't capture digests of sidecar containers managed by Amazon ECS, such as the Amazon GuardDuty security agent or Service Connect proxy.
To reduce potential latency associated with container image resolution in services with multiple tasks, run Amazon ECS agent version `1.83.0` or higher on EC2 container instances. To avoid potential latency, specify container image digests in your task definition.
If you create a service with a desired task count of zero, Amazon ECS can't establish container digests until you trigger another deployment of the service with a desired task count greater than zero.
To establish updated image digests, you can force a new deployment. The updated digests will be used to start new tasks and will not affect already running tasks. For more information about forcing new deployments, see [forceNewDeployment](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_UpdateService.html#ECS-UpdateService-request-forceNewDeployment) in the *Amazon ECS API reference*.
When using EC2 capacity providers, if there is insufficient capacity to start a task during the initial deployment, software version consistency may fail. To ensure version consistency is maintained even when capacity is limited, explicitly set `versionConsistency: "enabled"` in your task definition container configuration rather than relying on the default behavior. This causes Amazon ECS to wait until capacity becomes available before proceeding with the deployment.
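
The `versionConsistency` setting lives on each container definition. The following hypothetical task definition fragment shows it enabled; the container name and image are placeholders.

```json
{
  "containerDefinitions": [
    {
      "name": "app",
      "image": "public.ecr.aws/example/my-app:latest",
      "versionConsistency": "enabled"
    }
  ]
}
```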

# Best practices for Amazon ECS service parameters
<a name="service-options"></a>

To ensure that there's no application downtime, the deployment process is as follows:

1. Start the new application containers while keeping the existing containers running.

1. Check that the new containers are healthy.

1. Stop the old containers.

Depending on your deployment configuration and the amount of free, unreserved space in your cluster, it may take multiple rounds of this process to completely replace all old tasks with new tasks.

There are two service configuration options that you can use to modify the number of tasks that are stopped and started during a deployment:
+ `minimumHealthyPercent`: 100% (default)

  The lower limit on the number of tasks for your service that must remain in the `RUNNING` state during a deployment. This is a percentage of the `desiredCount` rounded up to the nearest integer. This parameter allows you to deploy without using additional cluster capacity.
+ `maximumPercent`: 200% (default)

   The upper limit on the number of tasks for your service that are allowed in the `RUNNING` or `PENDING` state during a deployment. This is a percentage of the `desiredCount` rounded down to the nearest integer.

**Example: Default configuration options**

Consider the following service that has six tasks, deployed in a cluster that has room for eight tasks total. The default service configuration options don't allow the deployment to go below 100% of the six desired tasks.

The deployment process is as follows:

1. The goal is to replace the six tasks.

1. The scheduler starts two new tasks because the default settings require six running tasks, and the cluster only has room for two additional tasks.

   There are now six existing tasks and two new tasks.

1. The scheduler stops two of the existing tasks.

   There are now four existing tasks and two new ones.

1. The scheduler starts two additional new tasks.

   There are now four existing tasks and four new tasks.

1. The scheduler shuts down two of the existing tasks.

   There are now two existing tasks and four new ones.

1. The scheduler starts two additional new tasks.

   There are now two existing tasks and six new tasks.

1. The scheduler shuts down the last two existing tasks.

   There are now six new tasks.

In the above example, if you use the default values for the options, there is a 2.5 minute wait for each new task that starts. Additionally, the load balancer might have to wait 5 minutes for the old task to stop. 

**Example: Modify `minimumHealthyPercent`**

You can speed up the deployment by setting the `minimumHealthyPercent` value to 50%.

Consider the following service that has six tasks, deployed in a cluster that has room for eight tasks total. The deployment process is as follows:

1. The goal is to replace six tasks.

1. The scheduler stops three of the existing tasks. 

   There are still three existing tasks running which meets the `minimumHealthyPercent` value.

1. The scheduler starts five new tasks.

   There are three existing tasks and five new tasks.

1. The scheduler stops the remaining three existing tasks.

   There are five new tasks.

1. The scheduler starts the final new task.

   There are six new tasks.

**Example: Modify cluster free space**

You can also add free capacity to your cluster so that you can run additional tasks during a deployment.

Consider the following service that has six tasks, deployed in a cluster that has room for ten tasks total. The deployment process is as follows:

1. The goal is to replace the existing tasks.

1. The scheduler stops three of the existing tasks.

   There are three existing tasks.

1. The scheduler starts six new tasks.

   There are three existing tasks and six new tasks.

1. The scheduler stops the three existing tasks.

   There are six new tasks.

**Recommendations**

Use the following values for the service configuration options when your tasks are idle for some time and don't have a high utilization rate.
+ `minimumHealthyPercent`: 50%
+ `maximumPercent`: 200% 

# Creating an Amazon ECS rolling update deployment
<a name="create-service-console-v2"></a>

Create a service to run and maintain a specified number of instances of a task definition simultaneously in a cluster. If one of your tasks fails or stops, the Amazon ECS service scheduler launches another instance of your task definition to replace it. This helps maintain your desired number of tasks in the service.

Decide on the following configuration parameters before you create a service:
+ There are two compute options that distribute your tasks.
  + A **capacity provider strategy** causes Amazon ECS to distribute your tasks across one or more capacity providers. 

    If you want to run your workloads on Amazon ECS Managed Instances, you must use the Capacity provider strategy option.
  + A **launch type** causes Amazon ECS to launch your tasks directly on either Fargate or on the EC2 instances registered to your clusters.

    If you want to run your workloads on Amazon ECS Managed Instances, you must use the Capacity provider strategy option.
+ Task definitions that use the `awsvpc` network mode or services configured to use a load balancer must have a networking configuration. By default, the console selects the default Amazon VPC along with all subnets and the default security group within the default Amazon VPC. 
+ The placement strategy. The default task placement strategy distributes tasks evenly across Availability Zones. 

  We recommend that you use Availability Zone rebalancing to help ensure high availability for your service. For more information, see [Balancing an Amazon ECS service across Availability Zones](service-rebalancing.md).
+ When you use the **Launch Type** for your service deployment, by default the service starts in the subnets in your cluster VPC.
+ For the **capacity provider strategy**, the console selects a compute option by default. The following describes the order that the console uses to select a default:
  + If your cluster has a default capacity provider strategy defined, it is selected.
  + If your cluster doesn't have a default capacity provider strategy defined but you have the Fargate capacity providers added to the cluster, a custom capacity provider strategy that uses the `FARGATE` capacity provider is selected.
  + If your cluster doesn't have a default capacity provider strategy defined but you have one or more Auto Scaling group capacity providers added to the cluster, the **Use custom (Advanced)** option is selected and you need to manually define the strategy.
  + If your cluster doesn't have a default capacity provider strategy defined and no capacity providers added to the cluster, the Fargate launch type is selected.
+ The default deployment failure detection options are to use the **Amazon ECS deployment circuit breaker** option with the **Rollback on failures** option.

  For more information, see [How the Amazon ECS deployment circuit breaker detects failures](deployment-circuit-breaker.md).
+ Decide if you want Amazon ECS to increase or decrease the desired number of tasks in your service automatically. For more information, see [Automatically scale your Amazon ECS service](service-auto-scaling.md).
+ If you need an application to connect to other applications that run in Amazon ECS, determine the option that fits your architecture. For more information, see [Interconnect Amazon ECS services](interconnecting-services.md). 
+ When you create a service that uses Amazon ECS circuit breaker, Amazon ECS creates a service deployment and a service revision. These resources allow you to view detailed information about the service history. For more information, see [View service history using Amazon ECS service deployments](service-deployment.md).

  For information about how to create a service using the AWS CLI, see [https://docs.aws.amazon.com/cli/latest/reference/ecs/create-service.html](https://docs.aws.amazon.com/cli/latest/reference/ecs/create-service.html) in the *AWS Command Line Interface Reference*.

  For information about how to create a service using AWS CloudFormation, see [https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ecs-service.html](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ecs-service.html) in the *AWS CloudFormation User Guide*.

## Create a service with the default options
<a name="create-default-service"></a>

You can use the console to quickly create and deploy a service. The service has the following configuration:
+ Deploys in the VPC and subnets associated with your cluster
+ Deploys one task
+ Uses the rolling deployment
+ Uses the capacity provider strategy with your default capacity provider
+ Uses the deployment circuit breaker to detect failures and sets the option to automatically roll back the deployment on failure

To deploy a service using the default parameters, follow these steps.

**To create a service (Amazon ECS console)**

1. Open the console at [https://console.aws.amazon.com/ecs/v2](https://console.aws.amazon.com/ecs/v2).

1. In the navigation page, choose **Clusters**.

1. On the **Clusters** page, choose the cluster to create the service in.

1. From the **Services** tab, choose **Create**.

   The **Create service** page appears.

1. Under **Service details**, do the following:

   1. For **Task definition**, enter the task definition family and revision to use.

   1. For **Service name**, enter a name for your service.

1. To use ECS Exec to debug the service, under **Troubleshooting configuration**, select **Turn on ECS Exec**.

1. Under **Deployment configuration**, do the following:

   1. For **Desired tasks**, enter the number of tasks to launch and maintain in the service.

1. (Optional) To help identify your service and tasks, expand the **Tags** section, and then configure your tags.

   To have Amazon ECS automatically tag all newly launched tasks with the cluster name and the task definition tags, select **Turn on Amazon ECS managed tags**, and then select **Task definitions**.

   To have Amazon ECS automatically tag all newly launched tasks with the cluster name and the service tags, select **Turn on Amazon ECS managed tags**, and then select **Service**.

   Add or remove a tag.
   + [Add a tag] Choose **Add tag**, and then do the following:
     + For **Key**, enter the key name.
     + For **Value**, enter the key value.
   + [Remove a tag] Next to the tag, choose **Remove tag**.

## Create a service using defined parameters
<a name="create-custom-service"></a>

To create a service by using defined parameters, follow these steps.

**To create a service (Amazon ECS console)**

1. Open the console at [https://console.aws.amazon.com/ecs/v2](https://console.aws.amazon.com/ecs/v2).

1. Determine the resource from where you launch the service.    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-service-console-v2.html)

   The **Create service** page appears.

1. Under **Service details**, do the following:

   1. For **Task definition**, enter the task definition to use. Then, for **Revision**, choose the revision to use.

   1. For **Service name**, enter a name for your service.

1. For **Existing cluster**, choose the cluster.

   Choose **Create cluster** to run the task on a new cluster.

1. Choose how your tasks are distributed across your cluster infrastructure. Under **Compute configuration**, choose your option.    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-service-console-v2.html)

1. To use ECS Exec to debug the service, under **Troubleshooting configuration**, select **Turn on ECS Exec**.

1. Under **Deployment configuration**, do the following:

   1. For **Service type**, choose the service scheduling strategy.
      + To have the scheduler deploy exactly one task on each active container instance that meets all of the task placement constraints, choose **Daemon**.
      + To have the scheduler place and maintain the desired number of tasks in your cluster, choose **Replica**.

   1. If you chose **Replica**, for **Desired tasks**, enter the number of tasks to launch and maintain in the service.

   1. If you chose **Replica**, to have Amazon ECS monitor the distribution of tasks across Availability Zones, and redistribute them when there is an imbalance, under **Availability Zone service rebalancing**, select **Availability Zone service rebalancing**.

   1. For **Health check grace period**, enter the amount of time (in seconds) that the service scheduler ignores unhealthy Elastic Load Balancing, VPC Lattice, and container health checks after a task has first started. If you do not specify a health check grace period value, the default value of 0 is used.

   1. Determine the deployment type for your service. Expand **Deployment options**, and then specify the following parameters.    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-service-console-v2.html)

   1. To configure how Amazon ECS detects and handles deployment failures, expand **Deployment failure detection**, and then choose your options. 

      1. To stop a deployment when the tasks cannot start, select **Use the Amazon ECS deployment circuit breaker**.

         To have the software automatically roll back the deployment to the last completed deployment state when the deployment circuit breaker sets the deployment to a failed state, select **Rollback on failures**.

      1. To stop a deployment based on application metrics, select **Use CloudWatch alarm(s)**. Then, from **CloudWatch alarm name**, choose the alarms. To create a new alarm, go to the CloudWatch console.

         To have the software automatically roll back the deployment to the last completed deployment state when a CloudWatch alarm sets the deployment to a failed state, select **Rollback on failures**.

1. If your task definition uses the `awsvpc` network mode, you can specify a custom network configuration. Expand **Networking**, and then do the following:

   1. For **VPC**, select the VPC to use.

   1. For **Subnets**, select one or more subnets in the VPC that the task scheduler considers when placing your tasks.

   1. For **Security group**, you can either select an existing security group or create a new one. To use an existing security group, select the security group and move to the next step. To create a new security group, choose **Create a new security group**. You must specify a security group name, description, and then add one or more inbound rules for the security group.

   1. For **Public IP**, choose whether to auto-assign a public IP address to the elastic network interface (ENI) of the task.

      AWS Fargate tasks can be assigned a public IP address when run in a public subnet so they have a route to the internet. EC2 tasks can't be assigned a public IP using this field. For more information, see [Amazon ECS task networking options for Fargate](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/fargate-task-networking.html) and [Allocate a network interface for an Amazon ECS task](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-networking-awsvpc.html).

1. (Optional) To interconnect your service using Service Connect, expand **Service Connect**, and then specify the following:

   1.  Select **Turn on Service Connect**.

   1. Under **Service Connect configuration**, specify the client mode.
      + If your service runs a network client application that only needs to connect to other services in the namespace, choose **Client side only**.
      + If your service runs a network or web service application and needs to provide endpoints for this service, and connects to other services in the namespace, choose **Client and server**.

   1. To use a namespace that is not the default cluster namespace, for **Namespace**, choose the service namespace. This can be a namespace created separately in the same AWS Region in your AWS account or a namespace in the same Region that is shared with your account using AWS Resource Access Manager (AWS RAM). For more information about shared AWS Cloud Map namespaces, see [Cross-account AWS Cloud Map namespace sharing](https://docs.aws.amazon.com/cloud-map/latest/dg/sharing-namespaces.html) in the *AWS Cloud Map Developer Guide*.

   1. (Optional) Specify a log configuration. Select **Use log collection**. The default option sends container logs to CloudWatch Logs. The other log driver options are configured using AWS FireLens. For more information, see [Send Amazon ECS logs to an AWS service or AWS Partner](using_firelens.md).

      The following describes each container log destination in more detail.
      + **Amazon CloudWatch** – Configure the task to send container logs to CloudWatch Logs. The default log driver options are provided, which create a CloudWatch log group on your behalf. To specify a different log group name, change the driver option values.
      + **Amazon Data Firehose** – Configure the task to send container logs to Firehose. The default log driver options are provided, which send logs to a Firehose delivery stream. To specify a different delivery stream name, change the driver option values.
      + **Amazon Kinesis Data Streams** – Configure the task to send container logs to Kinesis Data Streams. The default log driver options are provided, which send logs to a Kinesis data stream. To specify a different stream name, change the driver option values.
      + **Amazon OpenSearch Service** – Configure the task to send container logs to an OpenSearch Service domain. The log driver options must be provided. 
      + **Amazon S3** – Configure the task to send container logs to an Amazon S3 bucket. The default log driver options are provided, but you must specify a valid Amazon S3 bucket name.

   1. (Optional) To enable access logs, follow these steps:

      1. Expand **Access log configuration**. For **Format**, choose either **JSON** or **TEXT**.

      1. To include query parameters in access logs, select **Include query parameters**.
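   When you create the service with the AWS CLI instead of the console, the Service Connect settings above correspond to the `serviceConnectConfiguration` parameter. The following is a minimal sketch for the **Client and server** mode; the namespace, port name, DNS name, and log group are placeholders:

   ```
   "serviceConnectConfiguration": {
       "enabled": true,
       "namespace": "my-namespace",
       "services": [
           {
               "portName": "web",
               "clientAliases": [
                   { "port": 80, "dnsName": "web" }
               ]
           }
       ],
       "logConfiguration": {
           "logDriver": "awslogs",
           "options": {
               "awslogs-group": "/ecs/service-connect",
               "awslogs-region": "us-east-1",
               "awslogs-stream-prefix": "sc"
           }
       }
   }
   ```

   For the **Client side only** mode, omit the `services` array and specify only `enabled` and `namespace`.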

1. (Optional) To interconnect your service using Service Discovery, expand **Service discovery**, and then do the following.

   1. Select **Use service discovery**.

   1. To use a new namespace, choose **Create a new namespace** under **Configure namespace**, and then provide a namespace name and description. To use an existing namespace, choose **Select an existing namespace** and then choose the namespace that you want to use.

   1. Provide Service Discovery service information such as the service's name and description.

   1. To have Amazon ECS perform periodic container-level health checks, select **Enable Amazon ECS task health propagation**.

   1. For **DNS record type**, select the DNS record type to create for your service. Amazon ECS service discovery only supports **A** and **SRV** records, depending on the network mode that your task definition specifies. For more information about these record types, see [Supported DNS Record Types](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/ResourceRecordTypes.html) in the *Amazon Route 53 Developer Guide*.
      + If the task definition that your service task specifies uses the `bridge` or `host` network mode, only type **SRV** records are supported. Choose a container name and port combination to associate with the record.
      + If the task definition that your service task specifies uses the `awsvpc` network mode, select either the **A** or **SRV** record type. If you choose **A**, skip to the next step. If you choose **SRV**, specify either the port that the service can be found on or a container name and port combination to associate with the record.

      For **TTL**, enter the time, in seconds, that a record set is cached by DNS resolvers and by web browsers.
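   In the AWS CLI, service discovery is configured with the `serviceRegistries` parameter, which references an AWS Cloud Map service. The following is a minimal sketch for an **SRV** record with the `bridge` or `host` network mode; the registry ARN, container name, and port are placeholders:

   ```
   "serviceRegistries": [
       {
           "registryArn": "arn:aws:servicediscovery:region:123456789012:service/srv-abcdef123456",
           "containerName": "web",
           "containerPort": 80
       }
   ]
   ```

   For an **A** record with the `awsvpc` network mode, specify only `registryArn`.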

1. (Optional) To interconnect your service using VPC Lattice, expand **VPC Lattice**, and then do the following:

   1. Select **Turn on VPC Lattice**.

   1. For **Infrastructure role**, choose the infrastructure role.

      If you haven't created a role, choose **Create infrastructure role**.

   1. Under **Target Groups**, choose the target group or groups. You need to choose at least one target group and can have a maximum of five. Choose **Add target group** to add additional target groups. Choose the **Port name**, **Protocol**, and **Port** for each target group that you chose.

      To delete a target group, choose **Remove**.
**Note**  
If you want to add existing target groups, you need to use the AWS CLI. For instructions on how to add target groups using the AWS CLI, see [register-targets](https://docs.aws.amazon.com/cli/latest/reference/vpc-lattice/register-targets.html) in the *AWS Command Line Interface Reference*.
While a VPC Lattice service can have multiple target groups, each target group can only be added to one service.

   1. To complete the VPC Lattice configuration, include your new target groups in the listener default action or in the rules of an existing VPC Lattice service in the VPC Lattice console. For more information, see [Listener rules for your VPC Lattice service](https://docs.aws.amazon.com/vpc-lattice/latest/ug/listener-rules.html).
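   In the AWS CLI, the VPC Lattice settings above correspond to the `vpcLatticeConfigurations` parameter of the service definition. The following is a minimal sketch; the role ARN, target group ARN, and port name are placeholders:

   ```
   "vpcLatticeConfigurations": [
       {
           "roleArn": "arn:aws:iam::123456789012:role/ecsInfrastructureRole",
           "targetGroupArn": "arn:aws:vpc-lattice:region:123456789012:targetgroup/tg-0123456789abcdef0",
           "portName": "web"
       }
   ]
   ```

   The `portName` value must match a port name defined in your task definition.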

1. (Optional) To configure a load balancer for your service, expand **Load balancing**.

   Choose the load balancer.    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-service-console-v2.html)

1. (Optional) To configure service Auto Scaling, expand **Service auto scaling**, and then specify the following parameters. To use predictive auto scaling, which looks at past load data from traffic flows, configure it after you create the service. For more information, see [Use historical patterns to scale Amazon ECS services with predictive scaling](predictive-auto-scaling.md).

   1. To use service auto scaling, select **Service auto scaling**.

   1. For **Minimum number of tasks**, enter the lower limit of the number of tasks for service auto scaling to use. The desired count will not go below this count.

   1. For **Maximum number of tasks**, enter the upper limit of the number of tasks for service auto scaling to use. The desired count will not go above this count.

   1. Under **Scaling policy type**, choose one of the following options.    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-service-console-v2.html)
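   Service auto scaling created in the console is backed by Application Auto Scaling. If you configure it with the AWS CLI instead, a target tracking policy takes a configuration like the following minimal sketch; the target value and cooldowns are example values:

   ```
   {
       "TargetValue": 75.0,
       "PredefinedMetricSpecification": {
           "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
       },
       "ScaleOutCooldown": 300,
       "ScaleInCooldown": 300
   }
   ```

   You pass this configuration to `aws application-autoscaling put-scaling-policy` after registering the service's `DesiredCount` as a scalable target with the minimum and maximum task counts.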

1. (Optional) To use a task placement strategy other than the default, expand **Task Placement**, and then choose from the following options.

    For more information, see [How Amazon ECS places tasks on container instances](task-placement.md).
   + **AZ Balanced Spread** – Distribute tasks across Availability Zones and across container instances in the Availability Zone.
   + **AZ Balanced BinPack** – Distribute tasks across Availability Zones and across container instances with the least available memory.
   + **BinPack** – Distribute tasks based on the least available amount of CPU or memory.
   + **One Task Per Host** – Place, at most, one task from the service on each container instance.
   + **Custom** – Define your own task placement strategy. 

   If you chose **Custom**, define the algorithm for placing tasks and the rules that are considered during task placement.
   + Under **Strategy**, for **Type** and **Field**, choose the algorithm and the entity to use for the algorithm.

     You can enter a maximum of 5 strategies.
   + Under **Constraint**, for **Type** and **Expression**, choose the rule and attribute for the constraint.

     For example, to set the constraint to place tasks on T2 instances, for the **Expression**, enter **attribute:ecs.instance-type =~ t2.\***.

     You can enter a maximum of 10 constraints.
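   In a service definition, the **AZ Balanced Spread** option and the T2 constraint above correspond to the `placementStrategy` and `placementConstraints` parameters. The following is a minimal sketch:

   ```
   "placementStrategy": [
       { "type": "spread", "field": "attribute:ecs.availability-zone" },
       { "type": "spread", "field": "instanceId" }
   ],
   "placementConstraints": [
       { "type": "memberOf", "expression": "attribute:ecs.instance-type =~ t2.*" }
   ]
   ```

   Strategies are evaluated in order, so this sketch spreads tasks across Availability Zones first, then across container instances within each zone.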

1. If your task uses a data volume that's compatible with configuration at deployment, you can configure the volume by expanding **Volume**.

   The volume name and volume type are configured when you create a task definition revision and can't be changed when creating a service. To update the volume name and type, you must create a new task definition revision and create a service by using the new revision.    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-service-console-v2.html)

1. To use ECS Exec to debug the service, under **Troubleshooting configuration**, select **Turn on ECS Exec**.

1. (Optional) To help identify your service and tasks, expand the **Tags** section, and then configure your tags.

   To have Amazon ECS automatically tag all newly launched tasks with the cluster name and the task definition tags, select **Turn on Amazon ECS managed tags**, and then for **Propagate tags from**, choose **Task definitions**.

   To have Amazon ECS automatically tag all newly launched tasks with the cluster name and the service tags, select **Turn on Amazon ECS managed tags**, and then for **Propagate tags from**, choose **Service**.

   Add or remove a tag.
   + [Add a tag] Choose **Add tag**, and then do the following:
     + For **Key**, enter the key name.
     + For **Value**, enter the key value.
   + [Remove a tag] Next to the tag, choose **Remove tag**.
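   In the AWS CLI, the tagging options above correspond to the `enableECSManagedTags`, `propagateTags`, and `tags` parameters of the service definition. The following is a minimal sketch; the tag key and value are placeholders:

   ```
   "enableECSManagedTags": true,
   "propagateTags": "SERVICE",
   "tags": [
       { "key": "environment", "value": "production" }
   ]
   ```

   Set `propagateTags` to `TASK_DEFINITION` to propagate the task definition tags instead of the service tags.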

1. Choose **Create**.

## Next steps
<a name="create-service-next-steps"></a>

The following are additional actions you can take after you create a service.
+ Configure predictive auto scaling, which looks at past load data from traffic flows. For more information, see [Use historical patterns to scale Amazon ECS services with predictive scaling](predictive-auto-scaling.md).
+ Track your deployment and view your service history for services that use the Amazon ECS deployment circuit breaker. For more information, see [View service history using Amazon ECS service deployments](service-deployment.md).

# Amazon ECS blue/green deployments
<a name="deployment-type-blue-green"></a>

A blue/green deployment is a release methodology that reduces downtime and risk by running two identical production environments called blue and green. With Amazon ECS blue/green deployments, you can validate new service revisions before directing production traffic to them. This approach provides a safer way to deploy changes with the ability to quickly roll back if needed.

## Benefits
<a name="blue-green-deployment-benefits"></a>

The following are benefits of using blue/green deployments:
+ Reduces risk through testing with production traffic before switching production. You can validate the new deployment with test traffic before directing production traffic to it.
+ Zero downtime deployments. The production environment remains available throughout the deployment process, ensuring continuous service availability.
+ Easy rollback if issues are detected. If problems arise with the green deployment, you can quickly revert to the blue deployment without extended service disruption.
+ Controlled testing environment. The green environment provides an isolated space to test new features with real traffic patterns before full deployment.
+ Predictable deployment process. The structured approach with defined lifecycle stages makes deployments more consistent and reliable.
+ Automated validation through lifecycle hooks. You can implement automated tests at various stages of the deployment to verify functionality.

## Terminology
<a name="blue-green-deployment-terms"></a>

The following are Amazon ECS blue/green deployment terms:
+ Bake time - The duration when both blue and green service revisions are running simultaneously after the production traffic has shifted.
+ Blue deployment - The current production service revision that you want to replace.
+ Green deployment - The new service revision that you want to deploy.
+ Lifecycle stages - A series of events in the deployment operation, such as "after production traffic shift".
+ Lifecycle hook - A Lambda function that verifies the deployment at a specific lifecycle stage.
+ Listener - An Elastic Load Balancing resource that checks for connection requests using the protocol and port that you configure. The rules that you define for a listener determine how Amazon ECS routes requests to its registered targets.
+ Rule - An Elastic Load Balancing resource associated with a listener. A rule defines how requests are routed and consists of an action, condition, and priority.
+ Target group - An Elastic Load Balancing resource used to route requests to one or more registered targets (for example, EC2 instances). When you create a listener, you specify a target group for its default action. Traffic is forwarded to the target group specified in the listener rule.
+ Traffic shift - The process Amazon ECS uses to shift traffic from the blue deployment to the green deployment. For Amazon ECS blue/green deployments, all traffic is shifted from the blue service to the green service at once.

## Considerations
<a name="blue-green-deployment-considerations"></a>

Consider the following when choosing a deployment type:
+ Resource usage: Blue/green deployments temporarily run both the blue and green service revisions simultaneously, which may double your resource usage during deployments.
+ Deployment monitoring: Blue/green deployments provide more detailed deployment status information, allowing you to monitor each stage of the deployment process.
+ Rollback: Blue/green deployments make it easier to roll back to the previous version if issues are detected, as the blue revision is kept running until the bake time expires.
+ Network Load Balancer lifecycle hooks: If you use a Network Load Balancer for blue/green deployments, there is an additional 10 minutes for the TEST_TRAFFIC_SHIFT and PRODUCTION_TRAFFIC_SHIFT lifecycle stages. This is because Amazon ECS makes sure that it is safe to shift traffic.

# Amazon ECS blue/green service deployments workflow
<a name="blue-green-deployment-how-it-works"></a>

The Amazon ECS blue/green deployment process follows a structured approach with six distinct phases that ensure safe and reliable application updates. Each phase serves a specific purpose in validating and transitioning your application from the current version (blue) to the new version (green).

1. **Preparation Phase**: Create the green environment alongside the existing blue environment. This includes provisioning new service revisions, and preparing target groups.

1. **Deployment Phase**: Deploy the new service revision to the green environment. Amazon ECS launches new tasks using the updated service revision while the blue environment continues serving production traffic.

1. **Testing Phase**: Validate the green environment using test traffic routing. The Application Load Balancer directs test requests to the green environment while production traffic remains on blue.

1. **Traffic Shifting Phase**: Shift production traffic from blue to green based on your configured deployment strategy. This phase includes monitoring and validation checkpoints.

1. **Monitoring Phase**: Monitor application health, performance metrics, and alarm states during the bake time period. A rollback operation is initiated when issues are detected.

1. **Completion Phase**: Finalize the deployment by terminating the blue environment or maintaining it for potential rollback scenarios, depending on your configuration.

## Workflow
<a name="blue-green-deployment-workflow"></a>

The following diagram illustrates the comprehensive blue/green deployment workflow, showing the interaction between Amazon ECS, and the Application Load Balancer:

![\[Comprehensive diagram showing the blue/green deployment process in Amazon ECS with detailed component interactions, traffic shifting phases, and monitoring checkpoints\]](http://docs.aws.amazon.com/AmazonECS/latest/developerguide/images/blue-green.png)


The enhanced deployment workflow includes the following detailed steps:

1. **Initial State**: The blue service (current production) handles 100% of production traffic. The Application Load Balancer has a single listener with rules that route all requests to the blue target group containing healthy blue tasks.

1. **Green Environment Provisioning**: Amazon ECS creates new tasks using the updated task definition. These tasks are registered with a new green target group but receive no traffic initially.

1. **Health Check Validation**: The Application Load Balancer performs health checks on green tasks. Only when green tasks pass health checks does the deployment proceed to the next phase.

1. **Test Traffic Routing**: If configured, the Application Load Balancer's listener rules route specific traffic patterns (such as requests with test headers) to the green environment for validation while production traffic remains on blue. This is controlled by the same listener that handles production traffic, using different rules based on request attributes.

1. **Production Traffic Shift**: Based on the deployment configuration, traffic shifts from blue to green. In ECS blue/green deployments, this is an immediate (all-at-once) shift where 100% of the traffic is moved from the blue to the green environment. The Application Load Balancer uses a single listener with listener rules that control traffic distribution between the blue and green target groups based on weights.

1. **Monitoring and Validation**: Throughout the traffic shift, Amazon ECS monitors CloudWatch metrics, alarm states, and deployment health. Automatic rollback triggers activate if issues are detected.

1. **Bake Time Period**: The duration when both blue and green service revisions are running simultaneously after the production traffic has shifted.

1. **Blue Environment Termination**: After successful traffic shift and validation, the blue environment is terminated to free up cluster resources, or maintained for rapid rollback capability.

1. **Final State**: The green environment becomes the new production environment, handling 100% of traffic. The deployment is marked as successful.

## Deployment lifecycle stages
<a name="blue-green-deployment-stages"></a>

The blue/green deployment process progresses through distinct lifecycle stages (a series of events in the deployment operation, such as "after production traffic shift"), each with specific responsibilities and validation checkpoints. Understanding these stages helps you monitor deployment progress and troubleshoot issues effectively.

 Each lifecycle stage can last up to 24 hours. We recommend that the value remains below the 24-hour mark. This is because asynchronous processes need time to trigger the hooks. The system times out, fails the deployment, and then initiates a rollback after a stage reaches 24 hours. CloudFormation deployments have additional timeout restrictions. While the 24-hour stage limit remains in effect, CloudFormation enforces a 36-hour limit on the entire deployment. CloudFormation fails the deployment, and then initiates a rollback if the process doesn't complete within 36 hours.


| Lifecycle stages | Description | Use this stage for lifecycle hook? | 
| --- | --- | --- | 
| RECONCILE_SERVICE | This stage only happens when you start a new service deployment with more than 1 service revision in an ACTIVE state. | Yes | 
| PRE_SCALE_UP | The green service revision has not started. The blue service revision is handling 100% of the production traffic. There is no test traffic. | Yes | 
| SCALE_UP | The time when the green service revision scales up to 100% and launches new tasks. The green service revision is not serving any traffic at this point. | No | 
| POST_SCALE_UP | The green service revision has started. The blue service revision is handling 100% of the production traffic. There is no test traffic. | Yes | 
| TEST_TRAFFIC_SHIFT | The blue and green service revisions are running. The blue service revision handles 100% of the production traffic. The green service revision is migrating from 0% to 100% of test traffic. | Yes | 
| POST_TEST_TRAFFIC_SHIFT | The test traffic shift is complete. The green service revision handles 100% of the test traffic. | Yes | 
| PRODUCTION_TRAFFIC_SHIFT | Production traffic is shifting to the green service revision. The green service revision is migrating from 0% to 100% of production traffic. | Yes | 
| POST_PRODUCTION_TRAFFIC_SHIFT | The production traffic shift is complete. | Yes | 
| BAKE_TIME | The duration when both blue and green service revisions are running simultaneously. | No | 
| CLEAN_UP | The blue service revision has completely scaled down to 0 running tasks. The green service revision is now the production service revision after this stage. | No | 

Each lifecycle stage includes built-in validation checkpoints that must pass before proceeding to the next stage. If any validation fails, the deployment can be automatically rolled back to maintain service availability and reliability.

When you use a Lambda function, the function must complete the work, or return IN_PROGRESS, within 15 minutes. You can use the `callBackDelaySeconds` parameter to delay the call to Lambda. For more information, see [app.py function](https://github.com/aws-samples/sample-amazon-ecs-blue-green-deployment-patterns/blob/main/ecs-bluegreen-lifecycle-hooks/src/approvalFunction/app.py#L20-L25) in the sample-amazon-ecs-blue-green-deployment-patterns repository on GitHub.

# Required resources for Amazon ECS blue/green deployments
<a name="blue-green-deployment-implementation"></a>

To use a blue/green deployment with managed traffic shifting, your service must use one of the following features:
+ Elastic Load Balancing
+ Service Connect

Services that don't use Service Discovery, Service Connect, VPC Lattice, or Elastic Load Balancing can also use blue/green deployments, but they don't get any of the managed traffic shifting benefits.

The following list provides a high-level overview of what you need to configure for Amazon ECS blue/green deployments:
+ Your service uses an Application Load Balancer, Network Load Balancer, or Service Connect. Configure the appropriate resources.
  + Application Load Balancer - For more information, see [Application Load Balancer resources for blue/green, linear, and canary deployments](alb-resources-for-blue-green.md).
  + Network Load Balancer - For more information, see [Network Load Balancer resources for Amazon ECS blue/green, linear and canary deployments](nlb-resources-for-blue-green.md).
  + Service Connect - For more information, see [Service Connect resources for Amazon ECS blue/green, linear, and canary deployments](service-connect-blue-green.md).
+ Set the service deployment controller to `ECS`.
+ Configure the deployment strategy as `blue/green` in your service definition.
+ Optionally, configure additional parameters such as:
  + Bake time for the new deployment
  + CloudWatch alarms for automatic rollback
  + Deployment lifecycle hooks for testing (these are Lambda functions that run at specified deployment stages)
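In a service definition, these optional parameters are part of `deploymentConfiguration`. The following is a minimal sketch that combines the bake time with a lifecycle hook; the Lambda function ARN, role ARN, and chosen lifecycle stage are placeholders:

```
"deploymentConfiguration": {
    "strategy": "BLUE_GREEN",
    "bakeTimeInMinutes": 5,
    "lifecycleHooks": [
        {
            "hookTargetArn": "arn:aws:lambda:region:123456789012:function:deployment-validation",
            "roleArn": "arn:aws:iam::123456789012:role/ecs-hook-role",
            "lifecycleStages": ["POST_TEST_TRAFFIC_SHIFT"]
        }
    ]
}
```

Each hook runs the specified Lambda function when the deployment reaches one of the listed lifecycle stages.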

## Best practices
<a name="blue-green-deployment-best-practices"></a>

Follow these best practices for successful Amazon ECS blue/green deployments:
+ Configure appropriate health checks that accurately reflect your application's health.
+ Set a bake time that allows sufficient testing of the green deployment.
+ Implement CloudWatch alarms to automatically detect issues and trigger rollbacks.
+ Use lifecycle hooks to perform automated testing at each deployment stage.
+ Ensure your application can handle both blue and green service revisions running simultaneously.
+ Plan for sufficient cluster capacity to handle both service revisions during deployment.
+ Test your rollback procedures before implementing them in production.

# Application Load Balancer resources for blue/green, linear, and canary deployments
<a name="alb-resources-for-blue-green"></a>

To use Application Load Balancers with Amazon ECS blue/green deployments, you need to configure specific resources that allow traffic routing between the blue and green service revisions. 

## Target groups
<a name="alb-target-groups"></a>

For blue/green deployments with Elastic Load Balancing, you need to create two target groups:
+ A primary target group for the blue service revision (current production traffic)
+ An alternate target group for the green service revision (new version)

Both target groups should be configured with the following settings:
+ Target type: `IP` (for Fargate or EC2 with `awsvpc` network mode)
+ Protocol: `HTTP` (or the protocol your application uses)
+ Port: The port your application listens on (typically `80` for HTTP)
+ VPC: The same VPC as your Amazon ECS tasks
+ Health check settings: Configured to properly check your application's health

During a blue/green deployment, Amazon ECS automatically registers tasks with the appropriate target group based on the deployment stage.

**Example Creating target groups for an Application Load Balancer**  
The following CLI commands create two target groups for use with an Application Load Balancer in a blue/green deployment:  

```
aws elbv2 create-target-group \
    --name blue-target-group \
    --protocol HTTP \
    --port 80 \
    --vpc-id vpc-abcd1234 \
    --target-type ip \
    --health-check-path / \
    --health-check-protocol HTTP \
    --health-check-interval-seconds 30 \
    --health-check-timeout-seconds 5 \
    --healthy-threshold-count 2 \
    --unhealthy-threshold-count 2

aws elbv2 create-target-group \
    --name green-target-group \
    --protocol HTTP \
    --port 80 \
    --vpc-id vpc-abcd1234 \
    --target-type ip \
    --health-check-path / \
    --health-check-protocol HTTP \
    --health-check-interval-seconds 30 \
    --health-check-timeout-seconds 5 \
    --healthy-threshold-count 2 \
    --unhealthy-threshold-count 2
```

## Application Load Balancer
<a name="alb-load-balancer"></a>

You need to create an Application Load Balancer with the following configuration:
+ Scheme: Internet-facing or internal, depending on your requirements
+ IP address type: IPv4
+ VPC: The same VPC as your Amazon ECS tasks
+ Subnets: At least two subnets in different Availability Zones
+ Security groups: A security group that allows traffic on the listener ports

The security group attached to the Application Load Balancer must have an outbound rule that allows traffic to the security group attached to your Amazon ECS tasks.

**Example Creating an Application Load Balancer**  
The following CLI command creates an Application Load Balancer for use in a blue/green deployment:  

```
aws elbv2 create-load-balancer \
    --name my-application-load-balancer \
    --type application \
    --security-groups sg-abcd1234 \
    --subnets subnet-12345678 subnet-87654321
```

## Listeners and rules
<a name="alb-listeners"></a>

For blue/green deployments, you need to configure listeners on your Application Load Balancer:
+ Production listener: Handles production traffic (typically on port 80 or 443)
  + Initially forwards traffic to the primary target group (blue service revision)
  + After deployment, forwards traffic to the alternate target group (green service revision)
+ Test listener (optional): Handles test traffic to validate the green service revision before shifting production traffic
  + Can be configured on a different port (for example, 8080 or 8443)
  + Forwards traffic to the alternate target group (green service revision) during testing

During a blue/green deployment, Amazon ECS automatically updates the listener rules to route traffic to the appropriate target group based on the deployment stage.

**Example Creating a production listener**  
The following CLI command creates a production listener on port 80 that forwards traffic to the primary (blue) target group:  

```
aws elbv2 create-listener \
    --load-balancer-arn arn:aws:elasticloadbalancing:region:123456789012:loadbalancer/app/my-application-load-balancer/abcdef123456 \
    --protocol HTTP \
    --port 80 \
    --default-actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:region:123456789012:targetgroup/blue-target-group/abcdef123456
```

**Example Creating a test listener**  
The following CLI command creates a test listener on port 8080 that forwards traffic to the alternate (green) target group:  

```
aws elbv2 create-listener \
    --load-balancer-arn arn:aws:elasticloadbalancing:region:123456789012:loadbalancer/app/my-application-load-balancer/abcdef123456 \
    --protocol HTTP \
    --port 8080 \
    --default-actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:region:123456789012:targetgroup/green-target-group/ghijkl789012
```

**Example Creating a listener rule for path-based routing**  
The following CLI command creates a rule that forwards traffic for a specific path to the green target group for testing:  

```
aws elbv2 create-rule \
    --listener-arn arn:aws:elasticloadbalancing:region:123456789012:listener/app/my-application-load-balancer/abcdef123456/ghijkl789012 \
    --priority 10 \
    --conditions Field=path-pattern,Values='/test/*' \
    --actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:region:123456789012:targetgroup/green-target-group/ghijkl789012
```

**Example Creating a listener rule for header-based routing**  
The following CLI command creates a rule that forwards traffic with a specific header to the green target group for testing:  

```
aws elbv2 create-rule \
    --listener-arn arn:aws:elasticloadbalancing:region:123456789012:listener/app/my-application-load-balancer/abcdef123456/ghijkl789012 \
    --priority 20 \
    --conditions Field=http-header,HttpHeaderConfig='{Name=X-Environment,Values=[test]}' \
    --actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:region:123456789012:targetgroup/green-target-group/ghijkl789012
```

## Service configuration
<a name="alb-service-configuration"></a>

You must have permissions to allow Amazon ECS to manage load balancer resources in your clusters on your behalf. For more information, see [Amazon ECS infrastructure IAM role for load balancers](AmazonECSInfrastructureRolePolicyForLoadBalancers.md). 

When creating or updating an Amazon ECS service for blue/green deployments with Elastic Load Balancing, you need to specify the following configuration.

Replace the *user-input* with your values.

The key components in this configuration are:
+ `targetGroupArn`: The ARN of the primary target group (blue service revision).
+ `alternateTargetGroupArn`: The ARN of the alternate target group (green service revision).
+ `productionListenerRule`: The ARN of the listener rule for production traffic.
+ `roleArn`: The ARN of the role that allows Amazon ECS to manage Elastic Load Balancing resources.
+ `strategy`: Set to `BLUE_GREEN` to enable blue/green deployments.
+ `bakeTimeInMinutes`: The duration when both blue and green service revisions are running simultaneously after the production traffic has shifted.
+ `testListenerRule`: The ARN of the listener rule for test traffic. This is an optional parameter.

```
{
    "loadBalancers": [
        {
            "targetGroupArn": "arn:aws:elasticloadbalancing:region:123456789012:targetgroup/primary-target-group/abcdef123456",
            "containerName": "container-name",
            "containerPort": 80,
            "advancedConfiguration": {
                "alternateTargetGroupArn": "arn:aws:elasticloadbalancing:region:account-id:targetgroup/alternate-target-group/ghijkl789012",
                "productionListenerRule": "arn:aws:elasticloadbalancing:region:account-id:listener-rule/app/load-balancer-name/abcdef123456/listener/ghijkl789012/rule/mnopqr345678",
                "roleArn": "arn:aws:iam::123456789012:role/ecs-elb-role"
            }
        }
    ],
    "deploymentConfiguration": {
        "strategy": "BLUE_GREEN",
        "maximumPercent": 200,
        "minimumHealthyPercent": 100,
        "bakeTimeInMinutes": 5
    }
}
```
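One way to apply a configuration like this is to merge it into a full `create-service` request body (adding the other required fields, such as `serviceName` and `taskDefinition`) and pass the file to the AWS CLI. The file name below is an example:

```
aws ecs create-service --cli-input-json file://service-definition.json
```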

## Traffic flow during deployment
<a name="alb-traffic-flow"></a>

During a blue/green deployment with Elastic Load Balancing, traffic flows through the system as follows:

1. *Initial state*: All production traffic is routed to the primary target group (blue service revision).

1. *Green service revision deployment*: Amazon ECS deploys the new tasks and registers them with the alternate target group.

1. *Test traffic*: If a test listener is configured, test traffic is routed to the alternate target group to validate the green service revision.

1. *Production traffic shift*: Amazon ECS updates the production listener rule to route traffic to the alternate target group (green service revision).

1. *Bake time*: The duration when both blue and green service revisions are running simultaneously after the production traffic has shifted.

1. *Completion*: After a successful deployment, the blue service revision is terminated.

If issues are detected during the deployment, Amazon ECS can automatically roll back by routing traffic back to the primary target group (blue service revision).
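To watch the deployment progress and confirm whether a rollback occurred, you can poll the service with the AWS CLI; the cluster and service names below are placeholders:

```
aws ecs describe-services \
    --cluster my-cluster \
    --services my-service \
    --query 'services[0].deployments'
```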

# Network Load Balancer resources for Amazon ECS blue/green, linear, and canary deployments
<a name="nlb-resources-for-blue-green"></a>

To use a Network Load Balancer with Amazon ECS blue/green deployments, you need to configure specific resources that enable traffic routing between the blue and green service revisions. This section explains the required components and their configuration.

When your configuration includes a Network Load Balancer, Amazon ECS adds a 10-minute delay to the following lifecycle stages:
+ TEST\_TRAFFIC\_SHIFT
+ PRODUCTION\_TRAFFIC\_SHIFT

This delay accounts for Network Load Balancer timing issues that can cause a mismatch between the configured traffic weights and the actual traffic routing in the data plane. 

## Target groups
<a name="nlb-target-groups"></a>

For blue/green deployments with a Network Load Balancer, you need to create two target groups:
+ A primary target group for the blue service revision (current production traffic)
+ An alternate target group for the green service revision (new service revision)

Both target groups should be configured with the following settings:
+ Target type: `ip` (for Fargate or EC2 with `awsvpc` network mode)
+ Protocol: `TCP` (or the protocol your application uses)
+ Port: The port your application listens on (typically `80` for HTTP)
+ VPC: The same VPC as your Amazon ECS tasks
+ Health check settings: Configured to properly check your application's health

  For TCP health checks, the Network Load Balancer establishes a TCP connection with the target. If the connection is successful, the target is considered healthy.

  For HTTP/HTTPS health checks, the Network Load Balancer sends an HTTP/HTTPS request to the target and verifies the response.

During a blue/green deployment, Amazon ECS automatically registers tasks with the appropriate target group based on the deployment stage.

**Example Creating target groups for a Network Load Balancer**  
The following AWS CLI commands create two target groups for use with a Network Load Balancer in a blue/green deployment:  

```
aws elbv2 create-target-group \
    --name blue-target-group \
    --protocol TCP \
    --port 80 \
    --vpc-id vpc-abcd1234 \
    --target-type ip \
    --health-check-protocol TCP

aws elbv2 create-target-group \
    --name green-target-group \
    --protocol TCP \
    --port 80 \
    --vpc-id vpc-abcd1234 \
    --target-type ip \
    --health-check-protocol TCP
```

## Network Load Balancer
<a name="nlb-load-balancer"></a>

You need to create a Network Load Balancer with the following configuration:
+ Scheme: Internet-facing or internal, depending on your requirements
+ IP address type: IPv4
+ VPC: The same VPC as your Amazon ECS tasks
+ Subnets: At least two subnets in different Availability Zones

Unlike Application Load Balancers, Network Load Balancers operate at the transport layer (Layer 4) and do not use security groups. Instead, you need to ensure that the security groups associated with your Amazon ECS tasks allow traffic from the Network Load Balancer on the listener ports.

**Example Creating a Network Load Balancer**  
The following AWS CLI command creates a Network Load Balancer for use in a blue/green deployment:  

```
aws elbv2 create-load-balancer \
    --name my-network-load-balancer \
    --type network \
    --subnets subnet-12345678 subnet-87654321
```

## Considerations for using NLB with blue/green deployments
<a name="nlb-considerations"></a>

When using a Network Load Balancer for blue/green deployments, consider the following:
+ **Layer 4 operation**: Network Load Balancers operate at the transport layer (Layer 4) and do not inspect application layer (Layer 7) content. This means you cannot use HTTP headers or paths for routing decisions.
+ **Health checks**: Network Load Balancer health checks are limited to TCP, HTTP, or HTTPS protocols. For TCP health checks, the Network Load Balancer only verifies that the connection can be established.
+ **Connection preservation**: Network Load Balancers preserve the source IP address of the client, which can be useful for security and logging purposes.
+ **Static IP addresses**: Network Load Balancers provide static IP addresses for each subnet, which can be useful for whitelisting or when clients need to connect to a fixed IP address.
+ **Test traffic**: Since Network Load Balancers do not support content-based routing, test traffic must be sent to a different port than production traffic.
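For example, with a test listener on port 8080, test clients would target that port directly; the DNS name below is a placeholder for your Network Load Balancer's DNS name:

```
curl http://my-network-load-balancer-1234567890.elb.region.amazonaws.com:8080/
```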

## Listeners and rules
<a name="nlb-listeners"></a>

For blue/green deployments with a Network Load Balancer, you need to configure listeners:
+ Production listener: Handles production traffic (typically on port 80 or 443)
  + Initially forwards traffic to the primary target group (blue service revision)
  + After deployment, forwards traffic to the alternate target group (green service revision)
+ Test listener (optional): Handles test traffic to validate the green service revision before shifting production traffic
  + Can be configured on a different port (e.g., 8080 or 8443)
  + Forwards traffic to the alternate target group (green service revision) during testing

Unlike Application Load Balancers, Network Load Balancers do not support content-based routing rules. Instead, traffic is routed based on the listener port and protocol.

The following AWS CLI commands create production and test listeners for a Network Load Balancer:

Replace the *user-input* with your values.

```
aws elbv2 create-listener \
    --load-balancer-arn arn:aws:elasticloadbalancing:region:123456789012:loadbalancer/net/my-network-lb/1234567890123456 \
    --protocol TCP \
    --port 80 \
    --default-actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:region:123456789012:targetgroup/blue-target-group/1234567890123456

aws elbv2 create-listener \
    --load-balancer-arn arn:aws:elasticloadbalancing:region:123456789012:loadbalancer/net/my-network-lb/1234567890123456 \
    --protocol TCP \
    --port 8080 \
    --default-actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:region:123456789012:targetgroup/green-target-group/1234567890123456
```

## Service configuration
<a name="nlb-service-configuration"></a>

You must have permissions to allow Amazon ECS to manage load balancer resources in your clusters on your behalf. For more information, see [Amazon ECS infrastructure IAM role for load balancers](AmazonECSInfrastructureRolePolicyForLoadBalancers.md). 

When creating or updating an Amazon ECS service for blue/green deployments with a Network Load Balancer, you need to specify the following configuration.

Replace the *user-input* with your values.

The key components in this configuration are:
+ `targetGroupArn`: The ARN of the primary target group (blue service revision)
+ `alternateTargetGroupArn`: The ARN of the alternate target group (green service revision)
+ `productionListenerRule`: The ARN of the listener for production traffic
+ `testListenerRule`: (Optional) The ARN of the listener for test traffic
+ `roleArn`: The ARN of the role that allows Amazon ECS to manage Network Load Balancer resources
+ `strategy`: Set to `BLUE_GREEN` to enable blue/green deployments
+ `bakeTimeInMinutes`: The duration when both blue and green service revisions are running simultaneously after the production traffic has shifted

```
{
    "loadBalancers": [
        {
            "targetGroupArn": "arn:aws:elasticloadbalancing:region:123456789012:targetgroup/blue-target-group/1234567890123456",
            "containerName": "container-name",
            "containerPort": 80,
            "advancedConfiguration": {
                "alternateTargetGroupArn": "arn:aws:elasticloadbalancing:region:123456789012:targetgroup/green-target-group/1234567890123456",
                "productionListenerRule": "arn:aws:elasticloadbalancing:region:123456789012:listener/net/my-network-lb/1234567890123456/1234567890123456",
                "testListenerRule": "arn:aws:elasticloadbalancing:region:123456789012:listener/net/my-network-lb/1234567890123456/2345678901234567",
                "roleArn": "arn:aws:iam::123456789012:role/ecs-nlb-role"
            }
        }
    ],
    "deploymentConfiguration": {
        "strategy": "BLUE_GREEN",
        "maximumPercent": 200,
        "minimumHealthyPercent": 100,
        "bakeTimeInMinutes": 5
    }
}
```
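As with the Application Load Balancer configuration, you might merge this JSON into a complete service definition file and apply it with the AWS CLI; the file name is an example, and the file must also include the service identifier fields:

```
aws ecs update-service --cli-input-json file://service-definition.json
```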

## Traffic flow during deployment
<a name="nlb-traffic-flow"></a>

During a blue/green deployment with a Network Load Balancer, traffic flows through the system as follows:

1. *Initial state*: All production traffic is routed to the primary target group (blue service revision).

1. *Green service revision deployment*: Amazon ECS deploys the new tasks and registers them with the alternate target group.

1. *Test traffic*: If a test listener is configured, test traffic is routed to the alternate target group to validate the green service revision.

1. *Production traffic shift*: Amazon ECS updates the production listener to route traffic to the alternate target group (green service revision).

1. *Bake time*: The duration when both blue and green service revisions are running simultaneously after the production traffic has shifted.

1. *Completion*: After a successful deployment, the blue service revision is terminated.

If issues are detected during the deployment, Amazon ECS can automatically roll back by routing traffic back to the primary target group (blue service revision).

# Service Connect resources for Amazon ECS blue/green, linear, and canary deployments
<a name="service-connect-blue-green"></a>

When using Service Connect with blue/green deployments, you need to configure specific components to enable proper traffic routing between the blue and green service revisions. This section explains the required components and their configuration.

## Architecture overview
<a name="service-connect-blue-green-architecture"></a>

Service Connect provides both service discovery and service mesh capabilities through a managed sidecar proxy that's automatically injected into your Amazon ECS tasks. These proxies handle routing decisions, retries, and metrics collection, while AWS Cloud Map provides the service registry backend. When you deploy a service with Service Connect enabled, the service registers itself in AWS Cloud Map, and client services discover it through the namespace.

In a standard Service Connect implementation, client services connect to logical service names, and the sidecar proxy handles routing to the actual service instances. With blue/green deployments, this model is extended to include test traffic routing through the `testTrafficRules` configuration.

During a blue/green deployment, the following key components work together:
+ **Service Connect Proxy**: All traffic between services passes through the Service Connect proxy, which makes routing decisions based on the configuration.
+ **AWS Cloud Map Registration**: Both blue and green deployments register with AWS Cloud Map, but the green deployment initially registers as a "test" endpoint.
+ **Test Traffic Routing**: The `testTrafficRules` in the Service Connect configuration determine how to identify and route test traffic to the green deployment. This is accomplished through **header-based routing**, where specific HTTP headers in the requests direct traffic to the test revision. By default, Service Connect recognizes the `x-amzn-ecs-blue-green-test` header for HTTP-based protocols when no custom rules are specified.
+ **Client Configuration**: All clients in the namespace automatically receive both production and test routes, but only requests matching test rules will go to the green deployment.

What makes this approach powerful is that it handles the complexity of service discovery during transitions. As traffic shifts from the blue to green deployment, all connectivity and discovery mechanisms update automatically. There's no need to update DNS records, reconfigure load balancers, or deploy service discovery changes separately since the service mesh handles it all.

## Traffic routing and testing
<a name="service-connect-blue-green-traffic-routing"></a>

Service Connect provides advanced traffic routing capabilities for blue/green deployments, including header-based routing and client alias configuration for testing scenarios.

### Test traffic header rules
<a name="service-connect-test-traffic-header-rules"></a>

During blue/green deployments, you can configure test traffic header rules to route specific requests to the green (new) service revision for testing purposes. This allows you to validate the new version with controlled traffic before completing the deployment.

Service Connect uses **header-based routing** to identify test traffic. By default, Service Connect recognizes the `x-amzn-ecs-blue-green-test` header for HTTP-based protocols when no custom rules are specified. When this header is present in a request, the Service Connect proxy automatically routes the request to the green deployment for testing.

Test traffic header rules enable you to:
+ Route requests with specific headers to the green service revision
+ Test new functionality with a subset of traffic
+ Validate service behavior before full traffic cutover
+ Implement canary testing strategies
+ Perform integration testing in a production-like environment

The header-based routing mechanism works seamlessly with your existing application architecture. Client services don't need to be aware of the blue/green deployment process; they simply include the appropriate headers when sending test requests, and the Service Connect proxy handles the routing logic automatically.

For more information about configuring test traffic header rules, see [ServiceConnectTestTrafficHeaderRules](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_ServiceConnectTestTrafficHeaderRules.html) in the *Amazon Elastic Container Service API Reference*.
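As an illustration, a client task could exercise the default rule by adding the header to an ordinary request. The alias name `myservice` and the header value are hypothetical:

```
curl -H "x-amzn-ecs-blue-green-test: true" http://myservice/
```

Identical requests without the header continue to route to the blue service revision.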

### Header matching rules
<a name="service-connect-header-match-rules"></a>

Header matching rules define the criteria for routing test traffic during blue/green deployments. You can configure multiple matching conditions to precisely control which requests are routed to the green service revision.

Header matching supports:
+ Exact header value matching
+ Header presence checking
+ Pattern-based matching
+ Multiple header combinations

Example use cases include routing requests with specific user agent strings, API versions, or feature flags to the green service for testing.

For more information about header matching configuration, see [ServiceConnectTestTrafficHeaderMatchRules](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_ServiceConnectTestTrafficHeaderMatchRules.html) in the *Amazon Elastic Container Service API Reference*.
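As a sketch of how such a rule might appear in a Service Connect client alias, the following fragment routes requests whose `x-amzn-ecs-blue-green-test` header exactly matches `beta` to the green revision. The field names follow the API types linked above; the alias name and header value are examples:

```
"clientAliases": [
    {
        "port": 80,
        "dnsName": "myservice",
        "testTrafficRules": {
            "header": {
                "name": "x-amzn-ecs-blue-green-test",
                "value": {
                    "exact": "beta"
                }
            }
        }
    }
]
```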

### Client aliases for blue/green deployments
<a name="service-connect-client-alias-blue-green"></a>

Client aliases provide stable DNS endpoints for services during blue/green deployments. They enable seamless traffic routing between blue and green service revisions without requiring client applications to change their connection endpoints.

During a blue/green deployment, client aliases:
+ Maintain consistent DNS names for client connections
+ Enable automatic traffic switching between service revisions
+ Support gradual traffic migration strategies
+ Provide rollback capabilities by redirecting traffic to the blue revision

You can configure multiple client aliases for different ports or protocols, allowing complex service architectures to maintain connectivity during deployments.

For more information about client alias configuration, see [ServiceConnectClientAlias](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_ServiceConnectClientAlias.html) in the *Amazon Elastic Container Service API Reference*.

### Best practices for traffic routing
<a name="service-connect-blue-green-best-practices"></a>

When implementing traffic routing for blue/green deployments with Service Connect, consider the following best practices:
+ **Start with header-based testing**: Use test traffic header rules to validate the green service with controlled traffic before switching all traffic.
+ **Configure health checks**: Ensure both blue and green services have appropriate health checks configured to prevent routing traffic to unhealthy instances.
+ **Monitor service metrics**: Track key performance indicators for both service revisions during the deployment to identify issues early.
+ **Plan rollback strategy**: Configure client aliases and routing rules to enable quick rollback to the blue service if issues are detected.
+ **Test header matching logic**: Validate your header matching rules in a non-production environment before applying them to production deployments.

## Service Connect blue/green deployment workflow
<a name="service-connect-blue-green-workflow"></a>

Understanding how Service Connect manages the blue/green deployment process helps you implement and troubleshoot your deployments effectively. The following workflow shows how the different components interact during each phase of the deployment.

### Deployment phases
<a name="service-connect-deployment-phases"></a>

A Service Connect blue/green deployment progresses through several distinct phases:

1. **Initial State**: The blue service handles 100% of production traffic. All client services in the namespace connect to the blue service through the logical service name configured in Service Connect.

1. **Green Service Registration**: When the green deployment starts, it registers with AWS Cloud Map as a "test" endpoint. The Service Connect proxy in client services automatically receives both production and test route configurations.

1. **Test Traffic Routing**: Requests containing the test traffic headers (such as `x-amzn-ecs-blue-green-test`) are automatically routed to the green service by the Service Connect proxy. Production traffic continues to flow to the blue service.

1. **Traffic Shift Preparation**: After successful testing, the deployment process prepares for production traffic shift. Both blue and green services remain registered and healthy.

1. **Production Traffic Shift**: The Service Connect configuration updates to route production traffic to the green service. This happens automatically without requiring client service updates or DNS changes.

1. **Bake Time Period**: The duration when both blue and green service revisions are running simultaneously after the production traffic has shifted.

1. **Blue Service Deregistration**: After successful traffic shift and validation, the blue service is deregistered from AWS Cloud Map and terminated, completing the deployment.
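You can observe the registration and deregistration phases from the AWS Cloud Map side; the namespace and service names below are placeholders:

```
aws servicediscovery discover-instances \
    --namespace-name my-namespace \
    --service-name myservice
```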

### Service Connect proxy behavior
<a name="service-connect-proxy-behavior"></a>

The Service Connect proxy plays a crucial role in managing traffic during blue/green deployments. Understanding its behavior helps you design effective testing and deployment strategies.

Key proxy behaviors during blue/green deployments:
+ **Automatic Route Discovery**: The proxy automatically discovers both production and test routes from AWS Cloud Map without requiring application restarts or configuration changes.
+ **Header-Based Routing**: The proxy examines incoming request headers and routes traffic to the appropriate service revision based on the configured test traffic rules.
+ **Health Check Integration**: The proxy only routes traffic to healthy service instances, automatically excluding unhealthy tasks from the routing pool.
+ **Retry and Circuit Breaking**: The proxy provides built-in retry logic and circuit breaking capabilities, improving resilience during deployments.
+ **Metrics Collection**: The proxy collects detailed metrics for both blue and green services, enabling comprehensive monitoring during deployments.

### Service discovery updates
<a name="service-connect-service-discovery-updates"></a>

One of the key advantages of using Service Connect for blue/green deployments is the automatic handling of service discovery updates. Traditional blue/green deployments often require complex DNS updates or load balancer reconfiguration, but Service Connect manages these changes transparently.

During a deployment, Service Connect handles:
+ **Namespace Updates**: The Service Connect namespace automatically includes both blue and green service endpoints, with appropriate routing rules.
+ **Client Configuration**: All client services in the namespace automatically receive updated routing information without requiring restarts or redeployment.
+ **Gradual Transition**: Service discovery updates happen gradually and safely, ensuring no disruption to ongoing requests.
+ **Rollback Support**: If a rollback is needed, Service Connect can quickly revert service discovery configurations to route traffic back to the blue service.
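If you need to trigger a rollback manually rather than wait for automatic detection, recent AWS CLI versions include an `ecs stop-service-deployment` command; its availability and the deployment ARN format below are assumptions to verify against your CLI version:

```
aws ecs stop-service-deployment \
    --service-deployment-arn arn:aws:ecs:region:123456789012:service-deployment/my-cluster/my-service/deployment-id \
    --stop-type ROLLBACK
```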

# Creating an Amazon ECS blue/green deployment
<a name="deploy-blue-green-service"></a>

 By using Amazon ECS blue/green deployments, you can make and test service changes before implementing them in a production environment. 

## Prerequisites
<a name="deploy-blue-green-service-prerequisites"></a>

Perform the following operations before you start a blue/green deployment. 

1. Configure the appropriate permissions.
   + For information about Elastic Load Balancing permissions, see [Amazon ECS infrastructure IAM role for load balancers](AmazonECSInfrastructureRolePolicyForLoadBalancers.md).
   + For information about Lambda permissions, see [Permissions required for Lambda functions in Amazon ECS blue/green deployments](blue-green-permissions.md).

1. Configure the appropriate resources. Amazon ECS blue/green deployments require that your service use one of the following features:
   + Application Load Balancer - For more information, see [Application Load Balancer resources for blue/green, linear, and canary deployments](alb-resources-for-blue-green.md).
   + Network Load Balancer - For more information, see [Network Load Balancer resources for Amazon ECS blue/green, linear, and canary deployments](nlb-resources-for-blue-green.md).
   + Service Connect - For more information, see [Service Connect resources for Amazon ECS blue/green, linear, and canary deployments](service-connect-blue-green.md).

1. Decide if you want to run Lambda functions for the lifecycle stages.
   + PRE\_SCALE\_UP
   + POST\_SCALE\_UP
   + TEST\_TRAFFIC\_SHIFT
   + POST\_TEST\_TRAFFIC\_SHIFT
   + PRODUCTION\_TRAFFIC\_SHIFT
   + POST\_PRODUCTION\_TRAFFIC\_SHIFT

   For more information, see [Create a Lambda function with the console](https://docs.aws.amazon.com/lambda/latest/dg/getting-started.html#getting-started-create-function) in the *AWS Lambda Developer Guide*.

## Procedure
<a name="deploy-blue-green-service-procedure"></a>

You can use the console or the AWS CLI to create an Amazon ECS blue/green service.

------
#### [ Console ]

1. Open the console at [https://console.aws.amazon.com/ecs/v2](https://console.aws.amazon.com/ecs/v2).

1. Determine the resource from which you launch the service.    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonECS/latest/developerguide/deploy-blue-green-service.html)

   The **Create service** page displays.

1. Under **Service details**, do the following:

   1. For **Task definition family**, choose the task definition to use. Then, for **Task definition revision**, enter the revision to use.

   1. For **Service name**, enter a name for your service.

1. To run the service in an existing cluster, for **Existing cluster**, choose the cluster. To run the service in a new cluster, choose **Create cluster**.

1. Choose how your tasks are distributed across your cluster infrastructure. Under **Compute configuration**, choose your option.    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonECS/latest/developerguide/deploy-blue-green-service.html)

1. Under **Deployment configuration**, do the following:

   1. For **Service type**, choose **Replica**.

   1. For **Desired tasks**, enter the number of tasks to launch and maintain in the service.

   1. To have Amazon ECS monitor the distribution of tasks across Availability Zones, and redistribute them when there is an imbalance, under **Availability Zone service rebalancing**, select **Availability Zone service rebalancing**.

   1. For **Health check grace period**, enter the amount of time (in seconds) that the service scheduler ignores unhealthy Elastic Load Balancing, VPC Lattice, and container health checks after a task has first started. If you do not specify a health check grace period value, the default value of 0 is used.

1. Under **Deployment options**, do the following:

   1. For **Bake time**, enter the number of minutes that both the blue and green service revisions will run simultaneously before the blue revision is terminated. This allows time for verification and testing.

   1. (Optional) Configure Lambda functions to run at specific stages of the deployment. Under **Deployment lifecycle hooks**, select the stages to run the lifecycle hooks.

      To add a lifecycle hook:

      1. Choose **Add**.

      1. For **Lambda function**, enter the function name or ARN.

      1. For **Role**, select the IAM role that has permission to invoke the Lambda function.

      1. For **Lifecycle stages**, select the stages when the Lambda function should run.

1. To configure how Amazon ECS detects and handles deployment failures, expand **Deployment failure detection**, and then choose your options. 

   1. To stop a deployment when the tasks cannot start, select **Use the Amazon ECS deployment circuit breaker**.

      To have the software automatically roll back the deployment to the last completed deployment state when the deployment circuit breaker sets the deployment to a failed state, select **Rollback on failures**.

   1. To stop a deployment based on application metrics, select **Use CloudWatch alarm(s)**. Then, from **CloudWatch alarm name**, choose the alarms. To create a new alarm, go to the CloudWatch console.

      To have the software automatically roll back the deployment to the last completed deployment state when a CloudWatch alarm sets the deployment to a failed state, select **Rollback on failures**.

1. (Optional) To interconnect your service using Service Connect, expand **Service Connect**, and then specify the following:

   1.  Select **Turn on Service Connect**.

   1. Under **Service Connect configuration**, specify the client mode.
      + If your service runs a network client application that only needs to connect to other services in the namespace, choose **Client side only**.
      + If your service runs a network or web service application and needs to provide endpoints for this service, and connects to other services in the namespace, choose **Client and server**.

   1. To use a namespace that is not the default cluster namespace, for **Namespace**, choose the service namespace. This can be a namespace created separately in the same AWS Region in your AWS account or a namespace in the same Region that is shared with your account using AWS Resource Access Manager (AWS RAM). For more information about shared AWS Cloud Map namespaces, see [Cross-account AWS Cloud Map namespace sharing](https://docs.aws.amazon.com/cloud-map/latest/dg/sharing-namespaces.html) in the *AWS Cloud Map Developer Guide*.

   1. (Optional) Configure test traffic header rules for blue/green deployments. Under **Test traffic routing**, specify the following:

      1. Select **Enable test traffic header rules** to route specific requests to the green service revision during testing.

      1. For **Header matching rules**, configure the criteria for routing test traffic:
         + **Header name**: Enter the name of the HTTP header to match (for example, `X-Test-Version` or `User-Agent`).
         + **Match type**: Choose the matching criteria:
           + **Exact match**: Route requests where the header value exactly matches the specified value
           + **Header present**: Route requests that contain the specified header, regardless of value
           + **Pattern match**: Route requests where the header value matches a specified pattern
         + **Header value** (if using exact match or pattern match): Enter the value or pattern to match against.

         You can add multiple header matching rules to create complex routing logic. Requests matching any of the configured rules will be routed to the green service revision for testing.

      1. Choose **Add header rule** to configure additional header matching conditions.
**Note**  
Test traffic header rules enable you to validate new functionality with controlled traffic before completing the full deployment. This allows you to test the green service revision with specific requests (such as those from internal testing tools or beta users) while maintaining normal traffic flow to the blue service revision.

   1. (Optional) Specify a log configuration. Select **Use log collection**. The default option sends container logs to CloudWatch Logs. The other log driver options are configured using AWS FireLens. For more information, see [Send Amazon ECS logs to an AWS service or AWS Partner](using_firelens.md).

      The following describes each container log destination in more detail.
      + **Amazon CloudWatch** – Configure the task to send container logs to CloudWatch Logs. The default log driver options are provided, which create a CloudWatch log group on your behalf. To specify a different log group name, change the driver option values.
      + **Amazon Data Firehose** – Configure the task to send container logs to Firehose. The default log driver options are provided, which send logs to a Firehose delivery stream. To specify a different delivery stream name, change the driver option values.
      + **Amazon Kinesis Data Streams** – Configure the task to send container logs to Kinesis Data Streams. The default log driver options are provided, which send logs to a Kinesis Data Streams stream. To specify a different stream name, change the driver option values.
      + **Amazon OpenSearch Service** – Configure the task to send container logs to an OpenSearch Service domain. The log driver options must be provided. 
      + **Amazon S3** – Configure the task to send container logs to an Amazon S3 bucket. The default log driver options are provided, but you must specify a valid Amazon S3 bucket name.

1. (Optional) Configure **Load balancing** for blue/green deployment.    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonECS/latest/developerguide/deploy-blue-green-service.html)

1. (Optional) To help identify your service and tasks, expand the **Tags** section, and then configure your tags.

   To have Amazon ECS automatically tag all newly launched tasks with the cluster name and the task definition tags, select **Turn on Amazon ECS managed tags**, and then for **Propagate tags from**, choose **Task definitions**.

   To have Amazon ECS automatically tag all newly launched tasks with the cluster name and the service tags, select **Turn on Amazon ECS managed tags**, and then for **Propagate tags from**, choose **Service**.

   Add or remove a tag.
   + [Add a tag] Choose **Add tag**, and then do the following:
     + For **Key**, enter the key name.
     + For **Value**, enter the key value.
   + [Remove a tag] Next to the tag, choose **Remove tag**.

1. Choose **Create**.

------
#### [ AWS CLI ]

1. Create a file named `service-definition.json` with the following content.

   Replace the *user-input* with your values.

   ```
   {
     "serviceName": "myBlueGreenService",
     "cluster": "arn:aws:ecs:us-west-2:123456789012:cluster/sample-fargate-cluster",
     "taskDefinition": "sample-fargate:1",
     "desiredCount": 5,
     "launchType": "FARGATE",
     "networkConfiguration": {
       "awsvpcConfiguration": {
         "subnets": [
           "subnet-09ce6e74c116a2299",
           "subnet-00bb3bd7a73526788",
           "subnet-0048a611aaec65477"
         ],
         "securityGroups": [
           "sg-09d45005497daa123"
         ],
         "assignPublicIp": "ENABLED"
       }
     },
     "deploymentController": {
       "type": "ECS"
     },
     "deploymentConfiguration": {
       "strategy": "BLUE_GREEN",
       "maximumPercent": 200,
       "minimumHealthyPercent": 100,
       "bakeTimeInMinutes": 2,
       "alarms": {
         "alarmNames": [
           "myAlarm"
         ],
         "rollback": true,
         "enable": true
       },
       "lifecycleHooks": [
         {
           "hookTargetArn": "arn:aws:lambda:us-west-2:7123456789012:function:checkExample",
           "roleArn": "arn:aws:iam::123456789012:role/ECSLifecycleHookInvoke",
           "lifecycleStages": [
             "PRE_SCALE_UP"
           ]
         }
       ]
     },
     "loadBalancers": [
       {
         "targetGroupArn": "arn:aws:elasticloadbalancing:us-west-2:123456789012:targetgroup/blue-target-group/54402ff563af1197",
         "containerName": "fargate-app",
         "containerPort": 80,
         "advancedConfiguration": {
           "alternateTargetGroupArn": "arn:aws:elasticloadbalancing:us-west-2:123456789012:targetgroup/green-target-group/cad10a56f5843199",
           "productionListenerRule": "arn:aws:elasticloadbalancing:us-west-2:123456789012:listener-rule/app/my-blue-green-demo/32e0e4f946c3c05b/9cfa8c482e204f7d/831dbaf72edb911",
           "roleArn": "arn:aws:iam::123456789012:role/LoadBalancerManagementforECS"
         }
       }
     ]
   }
   ```

1. Run `create-service`.

   Replace the *user-input* with your values.

   ```
   aws ecs create-service --cli-input-json file://service-definition.json
   ```

   Alternatively, you can use the following example, which creates a blue/green deployment service with a load balancer configuration:

   ```
   aws ecs create-service \
      --cluster "arn:aws:ecs:us-west-2:123456789012:cluster/MyCluster" \
      --service-name "blue-green-example-service" \
      --task-definition "nginxServer:1" \
      --launch-type "FARGATE" \
      --network-configuration "awsvpcConfiguration={subnets=[subnet-12345,subnet-67890,subnet-abcdef,subnet-fedcba],securityGroups=[sg-12345],assignPublicIp=ENABLED}" \
      --desired-count 3 \
      --deployment-controller "type=ECS" \
      --deployment-configuration "strategy=BLUE_GREEN,maximumPercent=200,minimumHealthyPercent=100,bakeTimeInMinutes=0" \
      --load-balancers "targetGroupArn=arn:aws:elasticloadbalancing:us-west-2:123456789012:targetgroup/MyBGtg1/abcdef1234567890,containerName=nginx,containerPort=80,advancedConfiguration={alternateTargetGroupArn=arn:aws:elasticloadbalancing:us-west-2:123456789012:targetgroup/MyBGtg2/0987654321fedcba,productionListenerRule=arn:aws:elasticloadbalancing:us-west-2:123456789012:listener-rule/app/MyLB/1234567890abcdef/1234567890abcdef,roleArn=arn:aws:iam::123456789012:role/ELBManagementRole}"
   ```

------

## Next steps
<a name="deploy-blue-green-service-next-steps"></a>
+ Update the service to start the deployment. For more information, see [Updating an Amazon ECS service](update-service-console-v2.md).
+ Monitor the deployment process to ensure it follows the blue/green pattern:
  + The green service revision is created and scaled up
  + Test traffic is routed to the green revision (if configured)
  + Production traffic is shifted to the green revision
  + After the bake time, the blue revision is terminated
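
The monitoring step can be partly automated by reading the deployment status from `DescribeServiceDeployments`. The helper below is an illustrative sketch: it assumes only the `serviceDeployments`, `status`, and `statusReason` fields mentioned in this guide, and the sample response dict is hypothetical.

```python
def summarize_deployment(response):
    """Summarize the first entry of a DescribeServiceDeployments-style
    response. Only the "serviceDeployments", "status", and "statusReason"
    fields mentioned in this guide are assumed; treat the shape as
    illustrative, not a full API reference."""
    deployments = response.get("serviceDeployments", [])
    if not deployments:
        return "no deployments found"
    deployment = deployments[0]
    summary = deployment.get("status", "UNKNOWN")
    if deployment.get("statusReason"):
        summary += f": {deployment['statusReason']}"
    return summary

# Hypothetical response snippet for illustration only.
sample = {
    "serviceDeployments": [
        {"status": "ROLLBACK_IN_PROGRESS",
         "statusReason": "Lifecycle hook failed"}
    ]
}
print(summarize_deployment(sample))  # ROLLBACK_IN_PROGRESS: Lifecycle hook failed
```

Polling a summary like this during the traffic shift makes it easier to catch a rollback and its `statusReason` early.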

# Troubleshooting Amazon ECS blue/green deployments
<a name="troubleshooting-blue-green"></a>

The following provides solutions for common issues that you might encounter when using blue/green deployments with Amazon ECS. Blue/green deployment errors can occur during the following phases:
+ *Synchronous path*: Errors that appear immediately in response to `CreateService` or `UpdateService` API calls.
+ *Asynchronous path*: Errors that appear in the `statusReason` field of `DescribeServiceDeployments` and cause a deployment rollback.

**Tip**  
You can use the [Amazon ECS MCP server](ecs-mcp-introduction.md) with AI assistants to monitor deployments and troubleshoot deployment issues using natural language.

## Load balancer configuration issues
<a name="troubleshooting-blue-green-load-balancer"></a>

Load balancer configuration is a critical component of blue/green deployments in Amazon ECS. Proper configuration of listener rules, target groups, and load balancer types is essential for successful deployments. This section covers common load balancer configuration issues that can cause blue/green deployments to fail.

When troubleshooting load balancer issues, it's important to understand the relationship between listener rules and target groups. In a blue/green deployment:
+ The production listener rule directs traffic to the currently active (blue) service revision
+ The test listener rule can be used to validate the new (green) service revision before shifting production traffic
+ Target groups are used to register the container instances from each service revision
+ During deployment, traffic is gradually shifted from the blue service revision to the green service revision by adjusting the weights of the target groups in the listener rules

### Listener rule configuration errors
<a name="troubleshooting-blue-green-listener-rules"></a>

The following issues relate to incorrect listener rule configuration for blue/green deployments.

Using an Application Load Balancer listener ARN instead of a listener rule ARN  
*Error message*: `productionListenerRule has an invalid ARN format. Must be RuleArn for ALB or ListenerArn for NLB. Got: arn:aws:elasticloadbalancing:us-west-2:123456789012:listener/app/my-alb/abc123/def456`  
*Solution*: When using an Application Load Balancer, you must specify a listener rule ARN for `productionListenerRule` and `testListenerRule`, not a listener ARN. For Network Load Balancers, you must use the listener ARN.  
 For information about how to find the listener ARN, see [Listeners for your Application Load Balancers](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/create-https-listener.html) in the *Application Load Balancer User Guide*. The ARN for a rule has the format `arn:aws:elasticloadbalancing:region:account-id:listener-rule/app/...`.
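
As a quick sanity check before you create the service, you can classify the ARN you have on hand. This illustrative helper keys off the `listener-rule/` versus `listener/` resource prefixes visible in the ARNs above; it is a sketch, not part of any AWS SDK.

```python
def elbv2_arn_kind(arn):
    """Classify an Elastic Load Balancing v2 ARN as a listener ARN or a
    listener-rule ARN, based on the resource prefix that follows the
    account ID. Returns "listener-rule", "listener", or "unknown"."""
    resource = arn.split(":", 5)[-1]  # the part after the account ID
    if resource.startswith("listener-rule/"):
        return "listener-rule"
    if resource.startswith("listener/"):
        return "listener"
    return "unknown"

rule_arn = ("arn:aws:elasticloadbalancing:us-west-2:123456789012:"
            "listener-rule/app/my-alb/abc123/def456/ghi789")
listener_arn = ("arn:aws:elasticloadbalancing:us-west-2:123456789012:"
                "listener/app/my-alb/abc123/def456")
print(elbv2_arn_kind(rule_arn))      # listener-rule
print(elbv2_arn_kind(listener_arn))  # listener
```

For an Application Load Balancer, `productionListenerRule` should classify as `listener-rule`; for a Network Load Balancer, as `listener`.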

Using the same rule for both production and test listeners  
*Error message*: `The following rules cannot be used as both production and test listener rules: arn:aws:elasticloadbalancing:us-west-2:123456789012:listener-rule/app/my-alb/abc123/def456/ghi789`  
*Solution*: You must use different listener rules for production and test traffic. Create a separate listener rule for test traffic that routes to your test target group.

Target group not associated with listener rules  
*Error message*: `Service deployment rolled back because of invalid networking configuration: Target group arn:aws:elasticloadbalancing:us-west-2:123456789012:targetgroup/myAlternateTG/abc123 is not associated with either productionListenerRule or testListenerRule.`  
*Solution*: Both the primary target group and alternate target group must be associated with either the production listener rule or the test listener rule. Update your load balancer configuration to ensure both target groups are properly associated with your listener rules.

Missing test listener rule with an Application Load Balancer  
*Error message*: `For Application LoadBalancer, testListenerRule is required when productionListenerRule is not associated with both targetGroup and alternateTargetGroup`  
*Solution*: When you use an Application Load Balancer, if both target groups are not associated with the production listener rule, you must specify a test listener rule. Add a `testListenerRule` to your configuration and ensure both target groups are associated with either the production or test listener rule. For more information, see [Listeners for your Application Load Balancers](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/create-https-listener.html) in the *Application Load Balancer User Guide*.

### Target group configuration errors
<a name="troubleshooting-blue-green-target-groups"></a>

The following issues relate to incorrect target group configuration for blue/green deployments.

Multiple target groups with traffic in listener rule  
*Error message*: `Service deployment rolled back because of invalid networking configuration. productionListenerRule arn:aws:elasticloadbalancing:us-west-2:123456789012:listener-rule/app/my-alb/abc123/def456/ghi789 should have exactly one target group serving traffic but found 2 target groups which are serving traffic`  
*Solution*: Before starting a blue/green deployment, ensure that only one target group is receiving traffic (has a non-zero weight) in your listener rule. Update your listener rule configuration to set the weight to zero for any target group that should not be receiving traffic.
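
One way to pre-check this condition is to inspect the weights in the rule's forward action. The dictionary shape below follows the `ForwardConfig.TargetGroups` structure returned by the Elastic Load Balancing `DescribeRules` API; treat the surrounding wiring as an illustrative sketch.

```python
def target_groups_serving_traffic(forward_config):
    """Return the target group ARNs with a non-zero weight in a forward
    action (ForwardConfig.TargetGroups[].Weight, as returned by the
    elbv2 DescribeRules API)."""
    return [tg["TargetGroupArn"]
            for tg in forward_config.get("TargetGroups", [])
            if tg.get("Weight", 0) > 0]

# Illustrative forward action: only the blue target group carries traffic.
forward_config = {
    "TargetGroups": [
        {"TargetGroupArn": "arn:aws:elasticloadbalancing:us-west-2:"
                           "123456789012:targetgroup/blue-tg/abc123",
         "Weight": 100},
        {"TargetGroupArn": "arn:aws:elasticloadbalancing:us-west-2:"
                           "123456789012:targetgroup/green-tg/def456",
         "Weight": 0},
    ]
}
serving = target_groups_serving_traffic(forward_config)
assert len(serving) == 1  # exactly one target group should serve traffic
```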

Duplicate target groups across load balancer entries  
*Error message*: `Duplicate targetGroupArn found: arn:aws:elasticloadbalancing:us-west-2:123456789012:targetgroup/myecs-targetgroup/abc123`  
*Solution*: Each target group ARN must be unique across all load balancer entries in your service definition. Review your configuration and ensure you're using different target groups for each load balancer entry.

Unexpected target group in production listener rule  
*Error message*: `Service deployment rolled back because of invalid networking configuration. Production listener rule is forwarding traffic to unexpected target group arn:aws:elasticloadbalancing:us-west-2:123456789012:targetgroup/random-nlb-tg/abc123. Expected traffic to be forwarded to either targetGroupArn: arn:aws:elasticloadbalancing:us-west-2:123456789012:targetgroup/nlb-targetgroup/def456 or alternateTargetGroupArn: arn:aws:elasticloadbalancing:us-west-2:123456789012:targetgroup/nlb-tg-alternate/ghi789`  
*Solution*: The production listener rule is forwarding traffic to a target group that is not specified in your service definition. Ensure that the listener rule is configured to forward traffic only to the target groups specified in your service definition.   
For more information, see [forward actions](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-listeners.html#forward-actions) in the *Application Load Balancer User Guide*.

### Load balancer type configuration errors
<a name="troubleshooting-blue-green-load-balancer-types"></a>

The following issues relate to incorrect load balancer type configuration for blue/green deployments.

Mixing Classic Load Balancer and Application Load Balancer or Network Load Balancer configurations  
*Error message*: `All loadBalancers must be strictly either ELBv1 (defining loadBalancerName) or ELBv2 (defining targetGroupArn)`  
*Solution*: Use either all Classic Load Balancers or all Application Load Balancers and Network Load Balancers. For Classic Load Balancers, specify only the `loadBalancerName` field. For Application Load Balancers and Network Load Balancers, specify only the `targetGroupArn` field.  
Classic Load Balancers are the previous generation of load balancers from Elastic Load Balancing. We recommend that you migrate to a current generation load balancer. For more information, see [Migrate your Classic Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/userguide/migrate-classic-load-balancer.html).

Using advanced configuration with a Classic Load Balancer  
*Error message*: `advancedConfiguration field is not allowed with ELBv1 loadBalancers`  
*Solution*: Advanced configuration for blue/green deployments is only supported with Application Load Balancers and Network Load Balancers. If you use a Classic Load Balancer (specified with `loadBalancerName`), you cannot use the `advancedConfiguration` field. Either switch to an Application Load Balancer, or remove the `advancedConfiguration` field.

Inconsistent advanced configuration across load balancers  
*Error message*: `Either all or none of the provided loadBalancers must have advancedConfiguration defined`  
*Solution*: If you're using multiple load balancers, you must either define `advancedConfiguration` for all of them or for none of them. Update your configuration to ensure consistency across all load balancer entries.

Missing advanced configuration with blue/green deployment  
*Error message*: `advancedConfiguration field is required for all loadBalancers when using a non-ROLLING deployment strategy`  
*Solution*: When using a blue/green deployment strategy with Application Load Balancers, you must specify the `advancedConfiguration` field for all load balancer entries. Add the required `advancedConfiguration` to your load balancer configuration.

## Permission issues
<a name="troubleshooting-blue-green-permissions"></a>

The following issues relate to insufficient permissions for blue/green deployments.

Missing trust policy on infrastructure role  
*Error message*: `Service deployment rolled back because of invalid networking configuration. ECS was unable to manage the ELB resources due to missing permissions on ECS Infrastructure Role 'arn:aws:iam::123456789012:role/Admin'.`  
*Solution*: The IAM role specified for managing load balancer resources does not have the correct trust policy. Update the role's trust policy to allow the service to assume the role. The trust policy must include:  

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ecs.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

Missing read permissions on load balancer role  
*Error message*: `service myService failed to describe target health on target-group myTargetGroup with (error User: arn:aws:sts::123456789012:assumed-role/myELBRole/ecs-service-scheduler is not authorized to perform: elasticloadbalancing:DescribeTargetHealth because no identity-based policy allows the elasticloadbalancing:DescribeTargetHealth action)`  
*Solution*: The IAM role used for managing load balancer resources does not have permission to read target health information. Add the `elasticloadbalancing:DescribeTargetHealth` permission to the role's policy. For information about Elastic Load Balancing permissions, see [Amazon ECS infrastructure IAM role for load balancers](AmazonECSInfrastructureRolePolicyForLoadBalancers.md).

Missing write permissions on load balancer role  
*Error message*: `service myService failed to register targets in target-group myTargetGroup with (error User: arn:aws:sts::123456789012:assumed-role/myELBRole/ecs-service-scheduler is not authorized to perform: elasticloadbalancing:RegisterTargets on resource: arn:aws:elasticloadbalancing:us-west-2:123456789012:targetgroup/myTargetGroup/abc123 because no identity-based policy allows the elasticloadbalancing:RegisterTargets action)`  
*Solution*: The IAM role used for managing load balancer resources does not have permission to register targets. Add the `elasticloadbalancing:RegisterTargets` permission to the role's policy. For information about Elastic Load Balancing permissions, see [Amazon ECS infrastructure IAM role for load balancers](AmazonECSInfrastructureRolePolicyForLoadBalancers.md).

Missing permission to modify listener rules  
*Error message*: `Service deployment rolled back because TEST_TRAFFIC_SHIFT lifecycle hook(s) failed. User: arn:aws:sts::123456789012:assumed-role/myELBRole/ECSNetworkingWithELB is not authorized to perform: elasticloadbalancing:ModifyListener on resource: arn:aws:elasticloadbalancing:us-west-2:123456789012:listener/app/my-alb/abc123/def456 because no identity-based policy allows the elasticloadbalancing:ModifyListener action`  
*Solution*: The IAM role used for managing load balancer resources does not have permission to modify listeners. Add the `elasticloadbalancing:ModifyListener` permission to the role's policy. For information about Elastic Load Balancing permissions, see [Amazon ECS infrastructure IAM role for load balancers](AmazonECSInfrastructureRolePolicyForLoadBalancers.md).

For blue/green deployments, we recommend attaching the `AmazonECS-ServiceLinkedRolePolicy` managed policy to your infrastructure role, which includes all the necessary permissions for managing load balancer resources.

## Lifecycle hook issues
<a name="troubleshooting-blue-green-lifecycle-hooks"></a>

The following issues relate to lifecycle hooks in blue/green deployments.

Incorrect trust policy on Lambda hook role  
*Error message*: `Service deployment rolled back because TEST_TRAFFIC_SHIFT lifecycle hook(s) failed. ECS was unable to assume role arn:aws:iam::123456789012:role/Admin`  
*Solution*: The IAM role specified for the Lambda lifecycle hook does not have the correct trust policy. Update the role's trust policy to allow the service to assume the role. The trust policy must include:  

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ecs.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

Lambda hook returns FAILED status  
*Error message*: `Service deployment rolled back because TEST_TRAFFIC_SHIFT lifecycle hook(s) failed. Lifecycle hook target arn:aws:lambda:us-west-2:123456789012:function:myHook returned FAILED status.`  
*Solution*: The Lambda function specified as a lifecycle hook returned a FAILED status. Check the Lambda function logs in Amazon CloudWatch logs to determine the failure reason, and update the function to handle the deployment event correctly.

Missing permission to invoke Lambda function  
*Error message*: `Service deployment rolled back because TEST_TRAFFIC_SHIFT lifecycle hook(s) failed. ECS was unable to invoke hook target arn:aws:lambda:us-west-2:123456789012:function:myHook due to User: arn:aws:sts::123456789012:assumed-role/myLambdaRole/ECS-Lambda-Execution is not authorized to perform: lambda:InvokeFunction on resource: arn:aws:lambda:us-west-2:123456789012:function:myHook because no identity-based policy allows the lambda:InvokeFunction action`  
*Solution*: The IAM role used for the Lambda lifecycle hook does not have permission to invoke the Lambda function. Add the `lambda:InvokeFunction` permission to the role's policy for the specific Lambda function ARN. For information about Lambda permissions, see [Permissions required for Lambda functions in Amazon ECS blue/green deployments](blue-green-permissions.md).

Lambda function timeout or invalid response  
*Error message*: `Service deployment rolled back because TEST_TRAFFIC_SHIFT lifecycle hook(s) failed. ECS was unable to parse the response from arn:aws:lambda:us-west-2:123456789012:function:myHook due to HookStatus must not be null`  
*Solution*: The Lambda function either timed out or returned an invalid response. Ensure that your Lambda function returns a valid response with a `hookStatus` field set to either `SUCCEEDED` or `FAILED`. Also, check that the Lambda function timeout is set appropriately for your validation logic. For more information, see [Lifecycle hooks for Amazon ECS service deployments](deployment-lifecycle-hooks.md).  
Example of a valid Lambda response:  

```
{
  "hookStatus": "SUCCEEDED",
  "reason": "Validation passed"
}
```
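
A handler that avoids both of the failure modes above always returns a `hookStatus`, even when its validation logic raises. The following is a minimal Python sketch; `run_validation` is a hypothetical placeholder for your own checks, and only the `hookStatus`/`reason` response fields from this guide are assumed.

```python
def lambda_handler(event, context):
    """Minimal lifecycle hook sketch. The only hard requirement from this
    guide is that the response carries a hookStatus of SUCCEEDED or FAILED;
    everything else here is illustrative."""
    try:
        ok = run_validation(event)  # replace with your own checks
    except Exception as exc:
        # Returning FAILED (rather than raising) gives ECS a parseable
        # response and a clear rollback reason.
        return {"hookStatus": "FAILED", "reason": str(exc)}
    if ok:
        return {"hookStatus": "SUCCEEDED", "reason": "Validation passed"}
    return {"hookStatus": "FAILED", "reason": "Validation failed"}

def run_validation(event):
    # Hypothetical placeholder: query your metrics or run smoke tests here.
    return True
```

Keep the function's configured timeout comfortably above the runtime of your validation logic so the hook fails for a real reason, not a timeout.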

# Amazon ECS linear deployments
<a name="deployment-type-linear"></a>

Linear deployments gradually shift traffic from the old service revision to the new one in equal increments over time, allowing you to monitor each step before proceeding to the next. With Amazon ECS linear deployments, you can control the pace of traffic shifting and validate new service revisions with increasing amounts of production traffic. This approach provides a controlled way to deploy changes while monitoring performance at each increment.

## Resources involved in a linear deployment
<a name="linear-deployment-resources"></a>

The following are resources involved in Amazon ECS linear deployments:
+ Traffic shift - The process Amazon ECS uses to shift production traffic. For Amazon ECS linear deployments, traffic is shifted in equal percentage increments with configurable wait times between each increment.
+ Step percent - The percentage of traffic to shift in each increment during a linear deployment. This value is a double; valid values are 3.0 to 100.0.
+ Step bake time - The duration, in minutes, to wait between each traffic shift increment during a linear deployment. Valid values are 0 to 1440.
+ Deployment bake time - The time, in minutes, Amazon ECS waits after shifting all production traffic to the new service revision, before it terminates the old service revision. This is the duration when both blue and green service revisions are running simultaneously after the production traffic has shifted.
+ Lifecycle stages - A series of events in the deployment operation, such as "after production traffic shift".
+ Lifecycle hook - A Lambda function that runs at a specific lifecycle stage. You can create a function that verifies the deployment. Lifecycle hooks configured for the PRODUCTION\_TRAFFIC\_SHIFT stage are invoked at every production traffic shift step.
+ Target group - An Elastic Load Balancing resource used to route requests to one or more registered targets (for example, EC2 instances). When you create a listener, you specify a target group for its default action. Traffic is forwarded to the target group specified in the listener rule.
+ Listener - An Elastic Load Balancing resource that checks for connection requests using the protocol and port that you configure. The rules that you define for a listener determine how Amazon ECS routes requests to its registered targets.
+ Rule - An Elastic Load Balancing resource associated with a listener. A rule defines how requests are routed and consists of an action, condition, and priority.

## Considerations
<a name="linear-deployment-considerations"></a>

Consider the following when choosing a deployment type:
+ Resource usage: Linear deployments temporarily run both the blue and green service revisions simultaneously, which may double your resource usage during deployments.
+ Deployment monitoring: Linear deployments provide detailed deployment status information, allowing you to monitor each stage of the deployment process and each traffic shift increment.
+ Rollback: Linear deployments make it easier to roll back to the previous version if issues are detected, as the blue revision is kept running until the bake time expires.
+ Gradual validation: Linear deployments allow you to validate the new revision with increasing amounts of production traffic, providing more confidence in the deployment.
+ Deployment duration: Linear deployments take longer to complete than all-at-once deployments due to the incremental traffic shifting and wait times between steps.

## How linear deployments work
<a name="linear-deployment-how-works"></a>

The Amazon ECS linear deployment process follows a structured approach with six distinct phases that help ensure safe and reliable application updates. Each phase serves a specific purpose in validating and transitioning your application from the current version (blue) to the new version (green).

1. Preparation Phase: Create the green environment alongside the existing blue environment.

1. Deployment Phase: Deploy the new service revision to the green environment. Amazon ECS launches new tasks using the updated service revision while the blue environment continues serving production traffic.

1. Testing Phase: Validate the green environment using test traffic routing. The Application Load Balancer directs test requests to the green environment while production traffic remains on blue.

1. Linear Traffic Shifting Phase: Gradually shift production traffic from blue to green in equal percentage increments based on your configured deployment strategy.

1. Monitoring Phase: Monitor application health, performance metrics, and alarm states during the bake time period. A rollback operation is initiated when issues are detected.

1. Completion Phase: Finalize the deployment by terminating the blue environment.

The linear traffic shift phase follows these steps:
+ Initial - The deployment begins with 100% of traffic routed to the blue (current) service revision. The green (new) service revision receives test traffic but no production traffic initially.
+ Incremental traffic shifting - Traffic is gradually shifted from blue to green in equal percentage increments. For example, with a 10.0% step configuration, traffic shifts occur as follows:
  + Step 1: 10.0% to green, 90.0% to blue
  + Step 2: 20.0% to green, 80.0% to blue
  + Step 3: 30.0% to green, 70.0% to blue
  + And so on until 100% reaches green
+ Step bake time - Between each traffic shift increment, the deployment waits for a configurable duration (the step bake time) to allow monitoring and validation of the new revision's performance with the increased traffic load. Note that the last step bake time is skipped once 100.0% of traffic has shifted.
+ Lifecycle hooks - Optional Lambda functions can run at various lifecycle stages during the deployment to perform automated validation, monitoring, or custom logic. Lifecycle hooks configured for the PRODUCTION\_TRAFFIC\_SHIFT stage are invoked at every production traffic shift step.
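
The incremental shifting above is simple arithmetic, and can be sketched as a schedule computation. This is an illustration of the step math (including the 3.0-100.0 valid range for step percent from this guide), not an API.

```python
def linear_shift_schedule(step_percent):
    """Compute the cumulative production-traffic percentages routed to the
    green revision during a linear deployment, shifting in equal increments
    until 100% is reached. step_percent must be 3.0-100.0 (the valid range
    documented for step percent)."""
    if not 3.0 <= step_percent <= 100.0:
        raise ValueError("step percent must be between 3.0 and 100.0")
    schedule, shifted = [], 0.0
    while shifted < 100.0:
        # The final step is capped so the total never exceeds 100%.
        shifted = min(shifted + step_percent, 100.0)
        schedule.append(round(shifted, 1))
    return schedule

print(linear_shift_schedule(30.0))  # [30.0, 60.0, 90.0, 100.0]
```

Note how a step percent that does not divide 100 evenly (such as 30.0) ends with a smaller final increment.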

## Deployment lifecycle stages
<a name="linear-deployment-lifecycle-stages"></a>

The Linear deployment process progresses through distinct lifecycle stages, each with specific responsibilities and validation checkpoints. Understanding these stages helps you monitor deployment progress and troubleshoot issues effectively.

Each lifecycle stage can last up to 24 hours, and each traffic shift step in the PRODUCTION\_TRAFFIC\_SHIFT stage can also last up to 24 hours. We recommend that you keep each stage well below the 24-hour mark, because asynchronous processes need time to trigger the hooks. After a stage reaches 24 hours, the system times out, fails the deployment, and then initiates a rollback.

CloudFormation deployments have additional timeout restrictions. While the 24-hour stage limit remains in effect, CloudFormation enforces a 36-hour limit on the entire deployment. CloudFormation fails the deployment, and then initiates a rollback if the process doesn't complete within 36 hours.


| Lifecycle stages | Description | Lifecycle hook support | 
| --- | --- | --- | 
| RECONCILE\_SERVICE | This stage only happens when you start a new service deployment with more than one service revision in an ACTIVE state. | Yes | 
| PRE\_SCALE\_UP | The green service revision has not started. The blue service revision is handling 100% of the production traffic. There is no test traffic. | Yes | 
| SCALE\_UP | The time when the green service revision scales up to 100% and launches new tasks. The green service revision is not serving any traffic at this point. | No | 
| POST\_SCALE\_UP | The green service revision has started. The blue service revision is handling 100% of the production traffic. There is no test traffic. | Yes | 
| TEST\_TRAFFIC\_SHIFT | The blue and green service revisions are running. The blue service revision handles 100% of the production traffic. The green service revision is migrating from 0% to 100% of test traffic. | Yes | 
| POST\_TEST\_TRAFFIC\_SHIFT | The test traffic shift is complete. The green service revision handles 100% of the test traffic. | Yes | 
| PRODUCTION\_TRAFFIC\_SHIFT | Traffic is gradually shifted from blue to green in equal percentage increments until green receives 100% of the traffic. Each traffic shift step invokes the lifecycle hook with a 24-hour timeout. | Yes | 
| POST\_PRODUCTION\_TRAFFIC\_SHIFT | The production traffic shift is complete. | Yes | 
| BAKE\_TIME | The duration when both blue and green service revisions are running simultaneously. | No | 
| CLEAN\_UP | The blue service revision has completely scaled down to 0 running tasks. The green service revision is now the production service revision after this stage. | No | 

# Required resources for Amazon ECS linear deployments
<a name="linear-deployment-implementation"></a>

To use a linear deployment with managed traffic shifting, your service must use one of the following features:
+ Elastic Load Balancing
+ Service Connect

The following list provides a high-level overview of what you need to configure for Amazon ECS linear deployments:
+ Your service uses an Application Load Balancer, Network Load Balancer, or Service Connect. Configure the appropriate resources.
  + Application Load Balancer - For more information, see [Application Load Balancer resources for blue/green, linear, and canary deployments](alb-resources-for-blue-green.md).
  + Network Load Balancer - For more information, see [Network Load Balancer resources for Amazon ECS blue/green, linear and canary deployments](nlb-resources-for-blue-green.md).
  + Service Connect - For more information, see [Service Connect resources for Amazon ECS blue/green, linear, and canary deployments](service-connect-blue-green.md).
+ Set the service deployment controller to `ECS`.
+ Configure the deployment strategy as `LINEAR` in your service definition.
+ Optionally, configure additional parameters such as:
  + Bake time for the new deployment
  + The percentage of traffic to shift in each increment
  + The duration in minutes to wait between each traffic shift increment
  + CloudWatch alarms for automatic rollback
  + Deployment lifecycle hooks (these are Lambda functions that run at specified deployment stages such as PRE\_SCALE\_UP, PRODUCTION\_TRAFFIC\_SHIFT, or POST\_PRODUCTION\_TRAFFIC\_SHIFT)

## Best practices
<a name="linear-deployment-best-practices"></a>

Follow these best practices for successful Amazon ECS linear deployments:
+ **Ensure your application can handle both service revisions running simultaneously.**
+ **Plan for sufficient cluster capacity to handle both service revisions during deployment.**
+ **Test your rollback procedures before implementing them in production.**
+ Configure appropriate health checks that accurately reflect your application's health.
+ Set a bake time that allows sufficient testing of the new service revision.
+ Choose step percentages and bake times that balance deployment speed with validation needs.
+ Use smaller step percentages (5-10%) for critical applications to minimize risk exposure.
+ Set longer step bake times for applications that need time to warm up or stabilize.
+ Implement CloudWatch alarms to automatically detect issues and trigger rollbacks at any traffic increment.
+ Monitor application metrics closely during each traffic shift to detect performance degradation early.
+ Test your rollback procedures at different traffic percentages before implementing them in production.

# Creating an Amazon ECS linear deployment
<a name="deploy-linear-service"></a>

By using Amazon ECS linear deployments, you can gradually shift traffic in equal increments over specified time intervals. This provides controlled validation at each step of the deployment process.

## Prerequisites
<a name="deploy-linear-service-prerequisites"></a>

Perform the following operations before you start a linear deployment.

1. Configure the appropriate permissions.
   + For information about Elastic Load Balancing permissions, see [Amazon ECS infrastructure IAM role for load balancers](AmazonECSInfrastructureRolePolicyForLoadBalancers.md).
   + For information about Lambda permissions, see [Permissions required for Lambda functions in Amazon ECS blue/green deployments](blue-green-permissions.md).

1. Amazon ECS linear deployments require that your service use one of the following features. Configure the appropriate resources.
   + Application Load Balancer - For more information, see [Application Load Balancer resources for blue/green, linear, and canary deployments](alb-resources-for-blue-green.md).
   + Network Load Balancer - For more information, see [Network Load Balancer resources for Amazon ECS blue/green, linear and canary deployments](nlb-resources-for-blue-green.md).
   + Service Connect - For more information, see [Service Connect resources for Amazon ECS blue/green, linear, and canary deployments](service-connect-blue-green.md).

## Procedure
<a name="deploy-linear-service-procedure"></a>

You can use the console or the AWS CLI to create an Amazon ECS linear deployment service.

------
#### [ Console ]

1. Open the console at [https://console.aws.amazon.com/ecs/v2](https://console.aws.amazon.com/ecs/v2).

1. Determine the resource from where you launch the service.    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonECS/latest/developerguide/deploy-linear-service.html)

   The **Create service** page displays.

1. Under **Service details**, do the following:

   1. For **Task definition family**, choose the task definition to use. Then, for **Task definition revision**, enter the revision to use.

   1. For **Service name**, enter a name for your service.

1. To run the service in an existing cluster, for **Existing cluster**, choose the cluster. To run the service in a new cluster, choose **Create cluster**.

1. Choose how your tasks are distributed across your cluster infrastructure. Under **Compute configuration**, choose your option.    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonECS/latest/developerguide/deploy-linear-service.html)

1. Under **Deployment configuration**, do the following:

   1. For **Service type**, choose **Replica**.

   1. For **Desired tasks**, enter the number of tasks to launch and maintain in the service.

   1. To have Amazon ECS monitor the distribution of tasks across Availability Zones, and redistribute them when there is an imbalance, under **Availability Zone service rebalancing**, select **Availability Zone service rebalancing**.

   1. For **Health check grace period**, enter the amount of time (in seconds) that the service scheduler ignores unhealthy Elastic Load Balancing, VPC Lattice, and container health checks after a task has first started. If you do not specify a health check grace period value, the default value of 0 is used.

1. Under **Deployment configuration**, configure the linear deployment settings:

   1. For **Deployment strategy**, choose **Linear**.

   1. For **Traffic increment percentage**, enter the percentage of traffic to shift in each increment (for example, 10% to shift traffic in 10% increments).

   1. For **Interval between increments**, enter the time in minutes to wait between each traffic shift increment.

   1. For **Bake time**, enter the number of minutes that both the blue and green service revisions will run simultaneously after the final traffic shift before the blue revision is terminated.

   1. (Optional) Run Lambda functions at specific stages of the deployment. Under **Deployment lifecycle hooks**, select the stages to run the lifecycle hooks.

      To add a lifecycle hook:

      1. Choose **Add**.

      1. For **Lambda function**, enter the function name or ARN.

      1. For **Role**, select the IAM role that has permission to invoke the Lambda function.

      1. For **Lifecycle stages**, select the stages when the Lambda function should run.

1. To configure how Amazon ECS detects and handles deployment failures, expand **Deployment failure detection**, and then choose your options. 

   1. To stop a deployment when the tasks cannot start, select **Use the Amazon ECS deployment circuit breaker**.

      To have the software automatically roll back the deployment to the last completed deployment state when the deployment circuit breaker sets the deployment to a failed state, select **Rollback on failures**.

   1. To stop a deployment based on application metrics, select **Use CloudWatch alarm(s)**. Then, from **CloudWatch alarm name**, choose the alarms. To create a new alarm, go to the CloudWatch console.

      To have the software automatically roll back the deployment to the last completed deployment state when a CloudWatch alarm sets the deployment to a failed state, select **Rollback on failures**.

1. (Optional) To interconnect your service using Service Connect, expand **Service Connect**, and then specify the following:

   1.  Select **Turn on Service Connect**.

   1. Under **Service Connect configuration**, specify the client mode.
      + If your service runs a network client application that only needs to connect to other services in the namespace, choose **Client side only**.
      + If your service runs a network or web service application and needs to provide endpoints for this service, and connects to other services in the namespace, choose **Client and server**.

   1. To use a namespace that is not the default cluster namespace, for **Namespace**, choose the service namespace. This can be a namespace created separately in the same AWS Region in your AWS account or a namespace in the same Region that is shared with your account using AWS Resource Access Manager (AWS RAM). For more information about shared AWS Cloud Map namespaces, see [Cross-account AWS Cloud Map namespace sharing](https://docs.aws.amazon.com/cloud-map/latest/dg/sharing-namespaces.html) in the *AWS Cloud Map Developer Guide*.

   1. (Optional) Configure test traffic header rules for linear deployments. Under **Test traffic routing**, specify the following:

      1. Select **Enable test traffic header rules** to route specific requests to the green service revision during testing.

      1. For **Header matching rules**, configure the criteria for routing test traffic:
         + **Header name**: Enter the name of the HTTP header to match (for example, `X-Test-Version` or `User-Agent`).
         + **Match type**: Choose the matching criteria:
           + **Exact match**: Route requests where the header value exactly matches the specified value
           + **Header present**: Route requests that contain the specified header, regardless of value
           + **Pattern match**: Route requests where the header value matches a specified pattern
         + **Header value** (if using exact match or pattern match): Enter the value or pattern to match against.

         You can add multiple header matching rules to create complex routing logic. Requests matching any of the configured rules will be routed to the green service revision for testing.

      1. Choose **Add header rule** to configure additional header matching conditions.
**Note**  
Test traffic header rules enable you to validate new functionality with controlled traffic before completing the full deployment. This allows you to test the green service revision with specific requests (such as those from internal testing tools or beta users) while maintaining normal traffic flow to the blue service revision.

   1. (Optional) Specify a log configuration. Select **Use log collection**. The default option sends container logs to CloudWatch Logs. The other log driver options are configured using AWS FireLens. For more information, see [Send Amazon ECS logs to an AWS service or AWS Partner](using_firelens.md).

      The following describes each container log destination in more detail.
      + **Amazon CloudWatch** – Configure the task to send container logs to CloudWatch Logs. The default log driver options are provided, which create a CloudWatch log group on your behalf. To specify a different log group name, change the driver option values.
      + **Amazon Data Firehose** – Configure the task to send container logs to Firehose. The default log driver options are provided, which send logs to a Firehose delivery stream. To specify a different delivery stream name, change the driver option values.
      + **Amazon Kinesis Data Streams** – Configure the task to send container logs to Kinesis Data Streams. The default log driver options are provided, which send logs to a Kinesis Data Streams stream. To specify a different stream name, change the driver option values.
      + **Amazon OpenSearch Service** – Configure the task to send container logs to an OpenSearch Service domain. The log driver options must be provided. 
      + **Amazon S3** – Configure the task to send container logs to an Amazon S3 bucket. The default log driver options are provided, but you must specify a valid Amazon S3 bucket name.

1. (Optional) Configure **Load balancing** for linear deployment.    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonECS/latest/developerguide/deploy-linear-service.html)

1. (Optional) To help identify your service and tasks, expand the **Tags** section, and then configure your tags.

   To have Amazon ECS automatically tag all newly launched tasks with the cluster name and the task definition tags, select **Turn on Amazon ECS managed tags**, and then for **Propagate tags from**, choose **Task definitions**.

   To have Amazon ECS automatically tag all newly launched tasks with the cluster name and the service tags, select **Turn on Amazon ECS managed tags**, and then for **Propagate tags from**, choose **Service**.

   Add or remove a tag.
   + [Add a tag] Choose **Add tag**, and then do the following:
     + For **Key**, enter the key name.
     + For **Value**, enter the key value.
   + [Remove a tag] Next to the tag, choose **Remove tag**.

1. Choose **Create**.

------
#### [ AWS CLI ]

1. Create a file named `linear-service-definition.json` with the following content.

   Replace the *user-input* with your values.

   ```
   {
     "serviceName": "myLinearService",
     "cluster": "arn:aws:ecs:us-west-2:123456789012:cluster/sample-fargate-cluster",
     "taskDefinition": "sample-fargate:1",
     "desiredCount": 5,
     "launchType": "FARGATE",
     "networkConfiguration": {
       "awsvpcConfiguration": {
         "subnets": [
           "subnet-09ce6e74c116a2299",
           "subnet-00bb3bd7a73526788",
           "subnet-0048a611aaec65477"
         ],
         "securityGroups": [
           "sg-09d45005497daa123"
         ],
         "assignPublicIp": "ENABLED"
       }
     },
     "deploymentController": {
       "type": "ECS"
     },
     "deploymentConfiguration": {
       "strategy": "LINEAR",
       "maximumPercent": 200,
       "minimumHealthyPercent": 100,
       "linearConfiguration": {
         "stepPercentage": 10.0,
         "stepBakeTimeInMinutes": 6
       },
       "bakeTimeInMinutes": 10,
       "alarms": {
         "alarmNames": [
           "myAlarm"
         ],
         "rollback": true,
         "enable": true
       }
     },
     "loadBalancers": [
       {
         "targetGroupArn": "arn:aws:elasticloadbalancing:us-west-2:123456789012:targetgroup/blue-target-group/54402ff563af1197",
         "containerName": "fargate-app",
         "containerPort": 80,
         "advancedConfiguration": {
           "alternateTargetGroupArn": "arn:aws:elasticloadbalancing:us-west-2:123456789012:targetgroup/green-target-group/cad10a56f5843199",
           "productionListenerRule": "arn:aws:elasticloadbalancing:us-west-2:123456789012:listener-rule/app/my-linear-demo/32e0e4f946c3c05b/9cfa8c482e204f7d/831dbaf72edb911",
           "roleArn": "arn:aws:iam::123456789012:role/LoadBalancerManagementforECS"
         }
       }
     ]
   }
   ```

1. Run `create-service`.

   ```
   aws ecs create-service --cli-input-json file://linear-service-definition.json
   ```

------

## Next steps
<a name="deploy-linear-service-next-steps"></a>
+ Update the service to start the deployment. For more information, see [Updating an Amazon ECS service](update-service-console-v2.md).
+ Monitor the deployment process to ensure it follows the linear pattern:
  + The green service revision is created and scaled up
  + Traffic is shifted incrementally at the specified intervals
  + Each increment waits for the specified interval before the next shift
  + After all traffic is shifted, the bake time begins
  + After the bake time, the blue revision is terminated
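One way to observe the deployment from the command line is to describe the service and inspect its deployments. The cluster and service names below match the earlier example; the exact output shape may vary by AWS CLI version.

```
aws ecs describe-services \
  --cluster sample-fargate-cluster \
  --services myLinearService \
  --query 'services[0].deployments'
```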

# Amazon ECS canary deployments
<a name="canary-deployment"></a>

Canary deployments first route a small percentage of traffic to the new revision for initial testing, then shift all remaining traffic at once after the canary phase completes successfully. With Amazon ECS canary deployments, you can validate new service revisions with real user traffic while minimizing risk exposure. This approach provides a controlled way to deploy changes with the ability to monitor performance and roll back quickly if issues are detected.

## Resources involved in a canary deployment
<a name="canary-deployment-resources"></a>

The following are resources involved in Amazon ECS canary deployments:
+ Traffic shift - The process Amazon ECS uses to shift production traffic. For Amazon ECS canary deployments, traffic is shifted in two phases: first to the canary percentage, then the remaining traffic to complete the deployment.
+ Canary percentage - The percentage of traffic routed to the new version during the evaluation period.
+ Canary bake time - The duration to monitor the canary version before proceeding with full deployment.
+ Deployment bake time - The time, in minutes, Amazon ECS waits after shifting all production traffic to the new service revision, before it terminates the old service revision. This is the duration when both blue and green service revisions are running simultaneously after the production traffic has shifted.
+ Lifecycle stages - A series of events in the deployment operation, such as "after production traffic shift".
+ Lifecycle hook - A Lambda function that runs at a specific lifecycle stage. You can create a function that verifies the deployment.
+ Target group - An Elastic Load Balancing resource used to route requests to one or more registered targets (for example, EC2 instances). When you create a listener, you specify a target group for its default action. Traffic is forwarded to the target group specified in the listener rule.
+ Listener - An Elastic Load Balancing resource that checks for connection requests using the protocol and port that you configure. The rules that you define for a listener determine how Amazon ECS routes requests to its registered targets.
+ Rule - An Elastic Load Balancing resource associated with a listener. A rule defines how requests are routed and consists of an action, condition, and priority.

## Considerations
<a name="canary-deployment-considerations"></a>

Consider the following when choosing a deployment type:
+ Resource usage: Canary deployments run both original and canary task sets simultaneously during the evaluation period, increasing resource usage.
+ Traffic volume: Ensure the canary percentage generates sufficient traffic for meaningful validation of the new version.
+ Monitoring complexity: Canary deployments require monitoring and comparing metrics between two different versions simultaneously.
+ Rollback speed: Canary deployments enable quick rollback by shifting traffic back to the original task set.
+ Risk mitigation: Canary deployments provide excellent risk mitigation by limiting exposure to a small percentage of users.
+ Deployment duration: Canary deployments include evaluation periods that extend the overall deployment time but provide validation opportunities.

## How canary deployments work
<a name="canary-how-it-works"></a>

The Amazon ECS canary deployment process follows a structured approach with six distinct phases that ensure safe and reliable application updates. Each phase serves a specific purpose in validating and transitioning your application from the current version (blue) to the new version (green).

1. Preparation Phase: Create the green environment alongside the existing blue environment.

1. Deployment Phase: Deploy the new service revision to the green environment. Amazon ECS launches new tasks using the updated service revision while the blue environment continues serving production traffic.

1. Testing Phase: Validate the green environment using test traffic routing. The Application Load Balancer directs test requests to the green environment while production traffic remains on blue.

1. Canary Traffic Shifting Phase: Shift the configured percentage of traffic to the green service revision during the canary phase, and then shift 100% of traffic to the green service revision.

1. Monitoring Phase: Monitor application health, performance metrics, and alarm states during the bake time period. A rollback operation is initiated when issues are detected.

1. Completion Phase: Finalize the deployment by terminating the blue environment.

The canary traffic shift phase follows these steps:
+ Initial - The deployment begins with 100% of traffic routed to the blue (current) service revision. The green (new) service revision receives test traffic but no production traffic initially.
+ Canary traffic shifting - This is a two-step traffic shift strategy. For example, with a 10% canary percentage:
  + Step 1: 10.0% to green, 90.0% to blue
  + Step 2: 100.0% to green, 0.0% to blue
+ Canary bake time - After the canary traffic shift, the deployment waits for a configurable duration (the canary bake time) to allow monitoring and validation of the new revision's performance under the increased traffic load.
+ Lifecycle hooks - Optional Lambda functions can run at various lifecycle stages during the deployment to perform automated validation, monitoring, or custom logic. Lambda functions or lifecycle hooks configured for PRODUCTION\_TRAFFIC\_SHIFT are invoked at every production traffic shift step.
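The two-step shift above can be sketched as a schedule. This is a hypothetical illustration of the timing, not an ECS API.

```python
def canary_shift_schedule(canary_percentage, canary_bake_minutes):
    """Two-step canary shift: hold at the canary percentage for the canary
    bake time, then shift all remaining traffic to green at once."""
    return [
        (0, canary_percentage),        # step 1: canary share goes to green
        (canary_bake_minutes, 100.0),  # step 2: everything goes to green
    ]

print(canary_shift_schedule(10.0, 15))
# [(0, 10.0), (15, 100.0)]
```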

### Deployment lifecycle stages
<a name="canary-deployment-lifecycle-stages"></a>

The canary deployment process progresses through distinct lifecycle stages, each with specific responsibilities and validation checkpoints. Understanding these stages helps you monitor deployment progress and troubleshoot issues effectively.

Each lifecycle stage can last up to 24 hours. In addition, each traffic shift step in PRODUCTION\_TRAFFIC\_SHIFT can last up to 24 hours. We recommend that you keep the value below the 24-hour mark, because asynchronous processes need time to trigger the hooks. After a stage reaches 24 hours, the system times out, fails the deployment, and then initiates a rollback.

CloudFormation deployments have additional timeout restrictions. While the 24-hour stage limit remains in effect, CloudFormation enforces a 36-hour limit on the entire deployment. CloudFormation fails the deployment, and then initiates a rollback if the process doesn't complete within 36 hours.


**Lifecycle stages**  

| Lifecycle stages | Description | Lifecycle hook support | 
| --- | --- | --- | 
| RECONCILE\_SERVICE | This stage only happens when you start a new service deployment with more than 1 service revision in an ACTIVE state. | Yes | 
| PRE\_SCALE\_UP | The green service revision has not started. The blue service revision is handling 100% of the production traffic. There is no test traffic. | Yes | 
| SCALE\_UP | The time when the green service revision scales up to 100% and launches new tasks. The green service revision is not serving any traffic at this point. | No | 
| POST\_SCALE\_UP | The green service revision has started. The blue service revision is handling 100% of the production traffic. There is no test traffic. | Yes | 
| TEST\_TRAFFIC\_SHIFT | The blue and green service revisions are running. The blue service revision handles 100% of the production traffic. The green service revision is migrating from 0% to 100% of test traffic. | Yes | 
| POST\_TEST\_TRAFFIC\_SHIFT | The test traffic shift is complete. The green service revision handles 100% of the test traffic. | Yes | 
| PRODUCTION\_TRAFFIC\_SHIFT | The first step routes the canary percentage of production traffic to the green service revision and invokes the lifecycle hook with a 24-hour timeout. The second step shifts the remaining production traffic to the green service revision. | Yes | 
| POST\_PRODUCTION\_TRAFFIC\_SHIFT | The production traffic shift is complete. | Yes | 
| BAKE\_TIME | The duration when both blue and green service revisions are running simultaneously. | No | 
| CLEAN\_UP | The blue service revision has completely scaled down to 0 running tasks. The green service revision is now the production service revision after this stage. | No | 

### Configuration parameters
<a name="canary-configuration-parameters"></a>

Canary deployments require the following configuration parameters:
+ Canary percentage - The percentage of traffic to route to the new service revision during the canary phase. This allows testing with a controlled subset of production traffic.
+ Canary bake time - The duration to wait during the canary phase before shifting the remaining traffic to the new service revision. This provides time to monitor and validate the new version.
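Based on the linear example earlier in this guide, these parameters might be expressed in a `deploymentConfiguration` similar to the following sketch. The `canaryConfiguration` field names here are assumptions modeled on `linearConfiguration` and should be verified against the current API reference.

```
"deploymentConfiguration": {
  "strategy": "CANARY",
  "canaryConfiguration": {
    "canaryPercentage": 10.0,
    "canaryBakeTimeInMinutes": 15
  },
  "bakeTimeInMinutes": 10
}
```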

### Traffic management
<a name="canary-traffic-management"></a>

Canary deployments use load balancer target groups to manage traffic distribution:
+ Original target group - Contains tasks from the current stable version and receives the majority of traffic.
+ Canary target group - Contains tasks from the new version and receives a small percentage of traffic for testing.
+ Weighted routing - The load balancer uses weighted routing rules to distribute traffic between the target groups based on the configured canary percentage.
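As an illustration of how weighted routing splits requests between the two target groups, the following simulation (hypothetical, not an ECS or Elastic Load Balancing API) distributes requests according to a canary percentage:

```python
import random

def route_requests(n, canary_percentage, seed=0):
    """Simulate weighted routing between an original and a canary target group.

    Each request is independently routed to "canary" with probability
    canary_percentage/100, mirroring weighted listener rules.
    """
    rng = random.Random(seed)
    counts = {"original": 0, "canary": 0}
    for _ in range(n):
        target = rng.choices(
            ["original", "canary"],
            weights=[100 - canary_percentage, canary_percentage],
        )[0]
        counts[target] += 1
    return counts

# Roughly 10% of 10,000 requests land on the canary target group.
print(route_requests(10000, 10))
```

Note that routing is probabilistic per request, which is why a low canary percentage needs sufficient traffic volume to produce meaningful validation data.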

### Monitoring and validation
<a name="canary-monitoring-validation"></a>

Effective canary deployments rely on comprehensive monitoring:
+ Health checks - Both task sets must pass health checks before receiving traffic.
+ Metrics comparison - Compare key performance indicators between the original and canary versions, such as response time, error rate, and throughput.
+ Automated rollback - Configure CloudWatch alarms to automatically trigger rollback if the canary version shows degraded performance.
+ Manual validation - Use the evaluation period to manually review logs, metrics, and user feedback before proceeding.

### Best practices for canary deployments
<a name="canary-best-practices"></a>

Follow these best practices to ensure successful canary deployments with services.

#### Choose appropriate traffic percentages
<a name="canary-traffic-percentage"></a>

Consider these factors when selecting canary traffic percentages:
+ Start small - Begin with 5-10% of traffic to minimize impact if issues occur.
+ Consider application criticality - Use smaller percentages for mission-critical applications and larger percentages for less critical services.
+ Account for traffic volume - Ensure the canary percentage generates sufficient traffic for meaningful validation.

#### Set appropriate evaluation periods
<a name="canary-evaluation-time"></a>

Configure evaluation periods based on these considerations:
+ Allow sufficient time - Set evaluation periods long enough to capture meaningful performance data, typically 10-30 minutes.
+ Consider traffic patterns - Account for your application's traffic patterns and peak usage times.
+ Balance speed and safety - Longer evaluation periods provide more data but slow deployment velocity.

#### Implement comprehensive monitoring
<a name="canary-monitoring-setup"></a>

Set up monitoring to track canary deployment performance:
+ Key metrics - Monitor response time, error rate, throughput, and resource utilization for both task sets.
+ Alarm-based rollback - Configure CloudWatch alarms to automatically trigger rollback when metrics exceed thresholds.
+ Comparative analysis - Set up dashboards to compare metrics between original and canary versions side-by-side.
+ Business metrics - Include business-specific metrics like conversion rates or user engagement alongside technical metrics.
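As one example of an alarm-based rollback trigger, a CloudWatch alarm on target 5XX errors can be created with the AWS CLI and then referenced in the service's `alarms` configuration. The alarm name, target group dimension value, and thresholds below are placeholder assumptions; adjust them for your environment.

```
aws cloudwatch put-metric-alarm \
  --alarm-name my-canary-5xx-alarm \
  --namespace AWS/ApplicationELB \
  --metric-name HTTPCode_Target_5XX_Count \
  --dimensions Name=TargetGroup,Value=targetgroup/green-target-group/cad10a56f5843199 \
  --statistic Sum \
  --period 60 \
  --evaluation-periods 2 \
  --threshold 10 \
  --comparison-operator GreaterThanThreshold
```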

#### Plan rollback strategies
<a name="canary-rollback-strategy"></a>

Prepare for potential rollback scenarios with these strategies:
+ Automated rollback - Configure automatic rollback triggers based on health checks and performance metrics.
+ Manual rollback procedures - Document clear procedures for manual rollback when automated triggers don't capture all issues.
+ Rollback testing - Regularly test rollback procedures to ensure they work correctly when needed.

#### Validate thoroughly before deployment
<a name="canary-testing-validation"></a>

Ensure thorough validation before proceeding with canary deployments:
+ Pre-deployment testing - Thoroughly test changes in staging environments before canary deployment.
+ Health check configuration - Ensure health checks accurately reflect application readiness and functionality.
+ Dependency validation - Verify that new versions are compatible with downstream and upstream services.
+ Data consistency - Ensure database schema changes and data migrations are backward compatible.

#### Coordinate team involvement
<a name="canary-team-coordination"></a>

Ensure effective team coordination during canary deployments:
+ Deployment windows - Schedule canary deployments during business hours when teams are available to monitor and respond.
+ Communication channels - Establish clear communication channels for deployment status and issue escalation.
+ Role assignments - Define roles and responsibilities for monitoring, decision-making, and rollback execution.

# Required resources for Amazon ECS canary deployments
<a name="canary-deployment-implementation"></a>

To use a canary deployment with managed traffic shifting, your service must use one of the following features:
+ Elastic Load Balancing
+ Service Connect

The following list provides a high-level overview of what you need to configure for Amazon ECS canary deployments:
+ Your service uses an Application Load Balancer, Network Load Balancer, or Service Connect. Configure the appropriate resources.
  + Application Load Balancer - For more information, see [Application Load Balancer resources for blue/green, linear, and canary deployments](alb-resources-for-blue-green.md).
  + Network Load Balancer - For more information, see [Network Load Balancer resources for Amazon ECS blue/green, linear and canary deployments](nlb-resources-for-blue-green.md).
  + Service Connect - For more information, see [Service Connect resources for Amazon ECS blue/green, linear, and canary deployments](service-connect-blue-green.md).
+ Set the service deployment controller to `ECS`.
+ Configure the deployment strategy as `canary` in your service definition.
+ Optionally, configure additional parameters such as:
  + Bake time for the new deployment
  + The percentage of traffic to route to the new service revision during the canary phase. 
  + The duration to wait during the canary phase before shifting the remaining traffic to the new service revision. 
  + CloudWatch alarms for automatic rollback
  + Deployment lifecycle hooks (these are Lambda functions that run at specified deployment stages)

## Best practices
<a name="canary-deployment-best-practices"></a>

Follow these best practices for successful Amazon ECS canary deployments:
+ **Ensure your application can handle both service revisions running simultaneously.**
+ **Plan for sufficient cluster capacity to handle both service revisions during deployment.**
+ **Test your rollback procedures before implementing them in production.**
+ Configure appropriate health checks that accurately reflect your application's health.
+ Set a bake time that allows sufficient testing of the green deployment.
+ Implement CloudWatch alarms to automatically detect issues and trigger rollbacks.
+ Use lifecycle hooks to perform automated testing at each deployment stage.
+ Start with small canary percentages (5-10%) to minimize impact if issues occur.
+ Set appropriate evaluation periods that allow sufficient time for meaningful performance data collection.
+ Monitor both technical metrics (response time, error rate) and business metrics during evaluation.
+ Ensure your application can handle traffic splitting without session or state issues.
+ Plan rollback procedures and test them regularly to ensure they work when needed.
+ Schedule canary deployments during business hours when teams can monitor and respond.
+ Validate changes thoroughly in staging environments before canary deployment.
+ Document clear procedures for manual intervention and rollback decisions.

# Creating an Amazon ECS canary deployment
<a name="deploy-canary-service"></a>

By using Amazon ECS canary deployments, you can shift a small percentage of traffic to your new service revision (the "canary"), validate the deployment, and then shift the remaining traffic all at once after a specified interval. This approach allows you to test new functionality with minimal risk before full deployment.

## Prerequisites
<a name="deploy-canary-service-prerequisites"></a>

Perform the following operations before you start a canary deployment.

1. Configure the appropriate permissions.
   + For information about Elastic Load Balancing permissions, see [Amazon ECS infrastructure IAM role for load balancers](AmazonECSInfrastructureRolePolicyForLoadBalancers.md).
   + For information about Lambda permissions, see [Permissions required for Lambda functions in Amazon ECS blue/green deployments](blue-green-permissions.md).

1. Amazon ECS canary deployments require that your service use one of the following features. Configure the appropriate resources.
   + Application Load Balancer - For more information, see [Application Load Balancer resources for blue/green, linear, and canary deployments](alb-resources-for-blue-green.md).
   + Network Load Balancer - For more information, see [Network Load Balancer resources for Amazon ECS blue/green, linear and canary deployments](nlb-resources-for-blue-green.md).
   + Service Connect - For more information, see [Service Connect resources for Amazon ECS blue/green, linear, and canary deployments](service-connect-blue-green.md).

## Procedure
<a name="deploy-canary-service-procedure"></a>

You can use the console or the AWS CLI to create an Amazon ECS canary deployment service.

------
#### [ Console ]

1. Open the console at [https://console.aws.amazon.com/ecs/v2](https://console.aws.amazon.com/ecs/v2).

1. Determine the resource from which you launch the service.    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonECS/latest/developerguide/deploy-canary-service.html)

   The **Create service** page displays.

1. Under **Service details**, do the following:

   1. For **Task definition family**, choose the task definition to use. Then, for **Task definition revision**, enter the revision to use.

   1. For **Service name**, enter a name for your service.

1. To run the service in an existing cluster, for **Existing cluster**, choose the cluster. To run the service in a new cluster, choose **Create cluster**.

1. Choose how your tasks are distributed across your cluster infrastructure. Under **Compute configuration**, choose your option.    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonECS/latest/developerguide/deploy-canary-service.html)

1. Under **Deployment configuration**, do the following:

   1. For **Service type**, choose **Replica**.

   1. For **Desired tasks**, enter the number of tasks to launch and maintain in the service.

   1. To have Amazon ECS monitor the distribution of tasks across Availability Zones, and redistribute them when there is an imbalance, under **Availability Zone service rebalancing**, select **Availability Zone service rebalancing**.

   1. For **Health check grace period**, enter the amount of time (in seconds) that the service scheduler ignores unhealthy Elastic Load Balancing, VPC Lattice, and container health checks after a task has first started. If you do not specify a health check grace period value, the default value of 0 is used.

1. Under **Deployment configuration**, configure the canary deployment settings:

   1. For **Deployment strategy**, choose **Canary**.

   1. For **Canary percentage**, enter the percentage of traffic to shift to the green service revision in the first stage (for example, 10% for the initial canary traffic).

   1. For **Canary bake time**, enter the time in minutes to wait before shifting the remaining traffic to the green service revision.

   1. For **Bake time**, enter the number of minutes that both the blue and green service revisions will run simultaneously after the final traffic shift before the blue revision is terminated.
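      These console settings correspond to fields in the service's `deploymentConfiguration`, as shown in the AWS CLI example later on this page. A minimal sketch (the values are illustrative):

      ```
      "deploymentConfiguration": {
        "strategy": "CANARY",
        "canaryConfiguration": {
          "canaryPercent": 10.0,
          "canaryBakeTime": 15
        },
        "bakeTimeInMinutes": 10
      }
      ```

      Here `canaryPercent` is the first-stage traffic percentage, `canaryBakeTime` is the wait in minutes before the remaining traffic shifts, and `bakeTimeInMinutes` is how long both revisions run after the final shift.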

   1. (Optional) Configure Lambda functions to run at specific stages of the deployment. Under **Deployment lifecycle hooks**, select the stages at which to run the lifecycle hooks.

      To add a lifecycle hook:

      1. Choose **Add**.

      1. For **Lambda function**, enter the function name or ARN.

      1. For **Role**, select the IAM role that has permission to invoke the Lambda function.

      1. For **Lifecycle stages**, select the stages when the Lambda function should run.
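      Expressed in the service definition, a lifecycle hook is an entry in the `lifecycleHooks` array of `deploymentConfiguration`. The following is a hedged sketch; the function and role ARNs are placeholders, and the stage name shown is one example of a lifecycle stage:

      ```
      "deploymentConfiguration": {
        "lifecycleHooks": [
          {
            "hookTargetArn": "arn:aws:lambda:us-west-2:123456789012:function:myDeploymentValidator",
            "roleArn": "arn:aws:iam::123456789012:role/myHookInvocationRole",
            "lifecycleStages": ["POST_TEST_TRAFFIC_SHIFT"]
          }
        ]
      }
      ```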

1. To configure how Amazon ECS detects and handles deployment failures, expand **Deployment failure detection**, and then choose your options. 

   1. To stop a deployment when the tasks cannot start, select **Use the Amazon ECS deployment circuit breaker**.

      To have Amazon ECS automatically roll back the deployment to the last completed deployment state when the deployment circuit breaker sets the deployment to a failed state, select **Rollback on failures**.

   1. To stop a deployment based on application metrics, select **Use CloudWatch alarm(s)**. Then, from **CloudWatch alarm name**, choose the alarms. To create a new alarm, go to the CloudWatch console.

      To have Amazon ECS automatically roll back the deployment to the last completed deployment state when a CloudWatch alarm sets the deployment to a failed state, select **Rollback on failures**.
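   In the service definition, these failure detection options map to the `deploymentCircuitBreaker` and `alarms` fields of `deploymentConfiguration`. A minimal sketch with both rollback options turned on (the alarm name is a placeholder):

   ```
   "deploymentConfiguration": {
     "deploymentCircuitBreaker": {
       "enable": true,
       "rollback": true
     },
     "alarms": {
       "alarmNames": ["myAlarm"],
       "enable": true,
       "rollback": true
     }
   }
   ```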

1. (Optional) To interconnect your service using Service Connect, expand **Service Connect**, and then specify the following:

   1.  Select **Turn on Service Connect**.

   1. Under **Service Connect configuration**, specify the client mode.
      + If your service runs a network client application that only needs to connect to other services in the namespace, choose **Client side only**.
      + If your service runs a network or web service application and needs to provide endpoints for this service, and connects to other services in the namespace, choose **Client and server**.

   1. To use a namespace that is not the default cluster namespace, for **Namespace**, choose the service namespace. This can be a namespace created separately in the same AWS Region in your AWS account or a namespace in the same Region that is shared with your account using AWS Resource Access Manager (AWS RAM). For more information about shared AWS Cloud Map namespaces, see [Cross-account AWS Cloud Map namespace sharing](https://docs.aws.amazon.com/cloud-map/latest/dg/sharing-namespaces.html) in the *AWS Cloud Map Developer Guide*.

   1. (Optional) Configure test traffic header rules for canary deployments. Under **Test traffic routing**, specify the following:

      1. Select **Enable test traffic header rules** to route specific requests to the green service revision during testing.

      1. For **Header matching rules**, configure the criteria for routing test traffic:
         + **Header name**: Enter the name of the HTTP header to match (for example, `X-Test-Version` or `User-Agent`).
         + **Match type**: Choose the matching criteria:
           + **Exact match**: Route requests where the header value exactly matches the specified value
           + **Header present**: Route requests that contain the specified header, regardless of value
           + **Pattern match**: Route requests where the header value matches a specified pattern
         + **Header value** (if using exact match or pattern match): Enter the value or pattern to match against.

         You can add multiple header matching rules to create complex routing logic. Requests matching any of the configured rules will be routed to the green service revision for testing.

      1. Choose **Add header rule** to configure additional header matching conditions.
**Note**  
Test traffic header rules enable you to validate new functionality with controlled traffic before completing the full deployment. This allows you to test the green service revision with specific requests (such as those from internal testing tools or beta users) while maintaining normal traffic flow to the blue service revision.

   1. (Optional) Specify a log configuration. Select **Use log collection**. The default option sends container logs to CloudWatch Logs. The other log driver options are configured using AWS FireLens. For more information, see [Send Amazon ECS logs to an AWS service or AWS Partner](using_firelens.md).

      The following describes each container log destination in more detail.
      + **Amazon CloudWatch** – Configure the task to send container logs to CloudWatch Logs. The default log driver options are provided, which create a CloudWatch log group on your behalf. To specify a different log group name, change the driver option values.
      + **Amazon Data Firehose** – Configure the task to send container logs to Firehose. The default log driver options are provided, which send logs to a Firehose delivery stream. To specify a different delivery stream name, change the driver option values.
      + **Amazon Kinesis Data Streams** – Configure the task to send container logs to Kinesis Data Streams. The default log driver options are provided, which send logs to a Kinesis Data Streams stream. To specify a different stream name, change the driver option values.
      + **Amazon OpenSearch Service** – Configure the task to send container logs to an OpenSearch Service domain. The log driver options must be provided. 
      + **Amazon S3** – Configure the task to send container logs to an Amazon S3 bucket. The default log driver options are provided, but you must specify a valid Amazon S3 bucket name.

1. (Optional) Configure **Load balancing** for canary deployment.    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonECS/latest/developerguide/deploy-canary-service.html)

1. (Optional) To help identify your service and tasks, expand the **Tags** section, and then configure your tags.

   To have Amazon ECS automatically tag all newly launched tasks with the cluster name and the task definition tags, select **Turn on Amazon ECS managed tags**, and then for **Propagate tags from**, choose **Task definitions**.

   To have Amazon ECS automatically tag all newly launched tasks with the cluster name and the service tags, select **Turn on Amazon ECS managed tags**, and then for **Propagate tags from**, choose **Service**.

   Add or remove a tag.
   + [Add a tag] Choose **Add tag**, and then do the following:
     + For **Key**, enter the key name.
     + For **Value**, enter the key value.
   + [Remove a tag] Next to the tag, choose **Remove tag**.

1. Choose **Create**.

------
#### [ AWS CLI ]

1. Create a file named `canary-service-definition.json` with the following content.

   Replace the *user-input* with your values.

   ```
   {
     "serviceName": "myCanaryService",
     "cluster": "arn:aws:ecs:us-west-2:123456789012:cluster/sample-fargate-cluster",
     "taskDefinition": "sample-fargate:1",
     "desiredCount": 5,
     "launchType": "FARGATE",
     "networkConfiguration": {
       "awsvpcConfiguration": {
         "subnets": [
           "subnet-09ce6e74c116a2299",
           "subnet-00bb3bd7a73526788",
           "subnet-0048a611aaec65477"
         ],
         "securityGroups": [
           "sg-09d45005497daa123"
         ],
         "assignPublicIp": "ENABLED"
       }
     },
     "deploymentController": {
       "type": "ECS"
     },
     "deploymentConfiguration": {
       "strategy": "CANARY",
       "maximumPercent": 200,
       "minimumHealthyPercent": 100,
       "canaryConfiguration" : {
           "canaryPercent" : 5.0,
           "canaryBakeTime" : 10
       },
       "bakeTimeInMinutes": 10,
       "alarms": {
         "alarmNames": [
           "myAlarm"
         ],
         "rollback": true,
         "enable": true
       }
     },
     "loadBalancers": [
       {
         "targetGroupArn": "arn:aws:elasticloadbalancing:us-west-2:123456789012:targetgroup/blue-target-group/54402ff563af1197",
         "containerName": "fargate-app",
         "containerPort": 80,
         "advancedConfiguration": {
           "alternateTargetGroupArn": "arn:aws:elasticloadbalancing:us-west-2:123456789012:targetgroup/green-target-group/cad10a56f5843199",
           "productionListenerRule": "arn:aws:elasticloadbalancing:us-west-2:123456789012:listener-rule/app/my-canary-demo/32e0e4f946c3c05b/9cfa8c482e204f7d/831dbaf72edb911",
           "roleArn": "arn:aws:iam::123456789012:role/LoadBalancerManagementforECS"
         }
       }
     ]
   }
   ```

1. Run `create-service`.

   ```
   aws ecs create-service --cli-input-json file://canary-service-definition.json
   ```

------

## Next steps
<a name="deploy-canary-service-next-steps"></a>

After configuring your canary deployment, complete these steps:
+ Update the service to start the deployment. For more information, see [Updating an Amazon ECS service](update-service-console-v2.md).
+ Monitor the deployment process to ensure it follows the canary pattern:
  + The green service revision is created and scaled up
  + A small percentage of traffic (canary) is shifted to the green revision
  + The system waits for the specified canary interval
  + The remaining traffic is shifted all at once to the green revision
  + After the bake time, the blue revision is terminated

## Deployment terminology
<a name="deployment-terminology"></a>

The following terms are used throughout the Amazon ECS deployment documentation:

Blue-green deployment  
A deployment strategy that creates a new environment (green) alongside the existing environment (blue), then switches traffic from blue to green after validation.

Canary deployment  
A deployment strategy that routes a small percentage of traffic to a new version while maintaining the majority on the stable version for validation.

Linear deployment  
A deployment strategy that gradually shifts traffic from the old version to the new version in equal increments over time.

Rolling deployment  
A deployment strategy that replaces instances of the old version with instances of the new version one at a time.

Task set  
A collection of tasks that run the same task definition within a service during a deployment.

Target group  
A logical grouping of targets that receive traffic from a load balancer during deployments.

Deployment controller  
The method used to deploy new versions of your service, such as Amazon ECS, CodeDeploy, or external controllers.

Rollback  
The process of reverting to a previous version of your application when issues are detected during deployment.

# Deploy Amazon ECS services using a third-party controller
<a name="deployment-type-external"></a>

The *external* deployment type allows you to use any third-party deployment controller for full control over the deployment process for an Amazon ECS service. The details for your service are managed by either the service management API actions (`CreateService`, `UpdateService`, and `DeleteService`) or the task set management API actions (`CreateTaskSet`, `UpdateTaskSet`, `UpdateServicePrimaryTaskSet`, and `DeleteTaskSet`). Each API action manages a subset of the service definition parameters.

The `UpdateService` API action updates the desired count and health check grace period parameters for a service. If the compute option, platform version, load balancer details, network configuration, or task definition need to be updated, you must create a new task set.

The `UpdateTaskSet` API action updates only the scale parameter for a task set.

The `UpdateServicePrimaryTaskSet` API action modifies which task set in a service is the primary task set. When you call the `DescribeServices` API action, it returns all fields specified for the primary task set. When you designate a new primary task set, any task set parameter values on the new primary task set that differ from those on the old primary task set are reflected in the service. If no primary task set is defined for a service, the task set fields are null when you describe the service.

## External deployment considerations
<a name="deployment-type-external-considerations"></a>

Consider the following when using the external deployment type:
+ The supported load balancer types are Application Load Balancer and Network Load Balancer.
+ Neither Fargate tasks nor services that use the `EXTERNAL` deployment controller type support the `DAEMON` scheduling strategy.
+ Application Auto Scaling integration with Amazon ECS supports only an Amazon ECS service as a scalable target. The supported scalable dimension for Amazon ECS is `ecs:service:DesiredCount`, the task count of an Amazon ECS service. There is no direct integration between Application Auto Scaling and Amazon ECS task sets. Amazon ECS task sets calculate the `ComputedDesiredCount` based on the Amazon ECS service `DesiredCount`.

## External deployment workflow
<a name="deployment-type-external-workflow"></a>

The following is the basic workflow for managing an external deployment on Amazon ECS.

**To manage an Amazon ECS service using an external deployment controller**

1. Create an Amazon ECS service. The only required parameter is the service name. You can specify the following parameters when creating a service using an external deployment controller. All other service parameters are specified when creating a task set within the service.  
`serviceName`  
Type: String  
Required: Yes  
The name of your service. Up to 255 letters (uppercase and lowercase), numbers, hyphens, and underscores are allowed. Service names must be unique within a cluster, but you can have similarly named services in multiple clusters within a Region or across multiple Regions.  
`desiredCount`  
The number of instantiations of the specified task set task definition to place and keep running within the service.  
`deploymentConfiguration`  
Optional deployment parameters that control how many tasks run during a deployment and the ordering of stopping and starting tasks.   
`tags`  
Type: Array of objects  
Required: No  
The metadata that you apply to the service to help you categorize and organize it. Each tag consists of a key and an optional value, both of which you define. When a service is deleted, the tags are deleted as well. A maximum of 50 tags can be applied to the service. For more information, see [Tagging Amazon ECS resources](ecs-using-tags.md).  
When you update a service, this parameter doesn't trigger a new service deployment.    
`key`  
Type: String  
Length Constraints: Minimum length of 1. Maximum length of 128.  
Required: No  
One part of a key-value pair that makes up a tag. A key is a general label that acts like a category for more specific tag values.  
`value`  
Type: String  
Length Constraints: Minimum length of 0. Maximum length of 256.  
Required: No  
The optional part of a key-value pair that makes up a tag. A value acts as a descriptor within a tag category (key).  
`enableECSManagedTags`  
Specifies whether to use Amazon ECS managed tags for the tasks within the service. For more information, see [Use tags for billing](ecs-using-tags.md#tag-resources-for-billing).  
`propagateTags`  
Type: String  
Valid values: `TASK_DEFINITION` | `SERVICE`  
Required: No  
Specifies whether to copy the tags from the task definition or the service to the tasks in the service. If no value is specified, the tags are not copied. Tags can only be copied to the tasks within the service during service creation. To add tags to a task after service creation or task creation, use the `TagResource` API action.  
When you update a service, this parameter doesn't trigger a new service deployment.  
`schedulingStrategy`  
The scheduling strategy to use. Services using an external deployment controller support only the `REPLICA` scheduling strategy.  
`placementConstraints`  
An array of placement constraint objects to use for tasks in your service. You can specify a maximum of 10 constraints per task (this limit includes constraints in the task definition and those specified at run time). If you are using Fargate, task placement constraints aren't supported.  
`placementStrategy`  
The placement strategy objects to use for tasks in your service. You can specify a maximum of four strategy rules per service.

   The following is an example service definition for creating a service using an external deployment controller.

   ```
   {
       "cluster": "",
       "serviceName": "",
       "desiredCount": 0,
       "role": "",
       "deploymentConfiguration": {
           "maximumPercent": 0,
           "minimumHealthyPercent": 0
       },
       "placementConstraints": [
           {
               "type": "distinctInstance",
               "expression": ""
           }
       ],
       "placementStrategy": [
           {
               "type": "binpack",
               "field": ""
           }
       ],
       "schedulingStrategy": "REPLICA",
       "deploymentController": {
           "type": "EXTERNAL"
       },
       "tags": [
           {
               "key": "",
               "value": ""
           }
       ],
       "enableECSManagedTags": true,
       "propagateTags": "TASK_DEFINITION"
   }
   ```

1. Create an initial task set. The task set contains the following details about your service:  
`taskDefinition`  
The task definition for the tasks in the task set to use.  
`launchType`  
Type: String  
Valid values: `EC2` | `FARGATE` | `EXTERNAL`  
Required: No  
The launch type on which to run your service. If a launch type is not specified, the default `capacityProviderStrategy` is used.  
When you update a service, this parameter triggers a new service deployment.  
If a `launchType` is specified, the `capacityProviderStrategy` parameter must be omitted.  
`platformVersion`  
Type: String  
Required: No  
The platform version on which your tasks in the service are running. A platform version is only specified for tasks using the Fargate launch type. If one is not specified, the latest version (`LATEST`) is used by default.  
When you update a service, this parameter triggers a new service deployment.  
AWS Fargate platform versions are used to refer to a specific runtime environment for the Fargate task infrastructure. When specifying the `LATEST` platform version when running a task or creating a service, you get the most current platform version available for your tasks. When you scale up your service, those tasks receive the platform version that was specified on the service's current deployment. For more information, see [Fargate platform versions for Amazon ECS](platform-fargate.md).  
Platform versions are not specified for tasks using the EC2 launch type.  
`loadBalancers`  
A load balancer object representing the load balancer to use with your service. When using an external deployment controller, only Application Load Balancers and Network Load Balancers are supported. If you're using an Application Load Balancer, only one Application Load Balancer target group is allowed per task set.  
The following snippet shows an example `loadBalancer` object to use.  

   ```
   "loadBalancers": [
           {
               "targetGroupArn": "",
               "containerName": "",
               "containerPort": 0
           }
   ]
   ```
When specifying a `loadBalancer` object, you must specify a `targetGroupArn` and omit the `loadBalancerName` parameter.  
`networkConfiguration`  
The network configuration for the service. This parameter is required for task definitions that use the `awsvpc` network mode to receive their own elastic network interface, and it's not supported for other network modes. For more information about task networking for Fargate, see [Amazon ECS task networking options for Fargate](fargate-task-networking.md).  
`serviceRegistries`  
The details of the service discovery registries to assign to this service. For more information, see [Use service discovery to connect Amazon ECS services with DNS names](service-discovery.md).  
`scale`  
A floating-point percentage of the desired number of tasks to place and keep running in the task set. The value is specified as a percent total of a service's `desiredCount`. Accepted values are numbers between 0 and 100.

   The following is a JSON example for creating a task set for an external deployment controller.

   ```
   {
       "service": "",
       "cluster": "",
       "externalId": "",
       "taskDefinition": "",
       "networkConfiguration": {
           "awsvpcConfiguration": {
               "subnets": [
                   ""
               ],
               "securityGroups": [
                   ""
               ],
               "assignPublicIp": "DISABLED"
           }
       },
       "loadBalancers": [
           {
               "targetGroupArn": "",
               "containerName": "",
               "containerPort": 0
           }
       ],
       "serviceRegistries": [
           {
               "registryArn": "",
               "port": 0,
               "containerName": "",
               "containerPort": 0
           }
       ],
       "launchType": "EC2",
       "capacityProviderStrategy": [
           {
               "capacityProvider": "",
               "weight": 0,
               "base": 0
           }
       ],
       "platformVersion": "",
       "scale": {
           "value": null,
           "unit": "PERCENT"
       },
       "clientToken": ""
   }
   ```

1. When service changes are needed, use the `UpdateService`, `UpdateTaskSet`, or `CreateTaskSet` API action depending on which parameters you're updating. If you created a task set, use the `scale` parameter for each task set in a service to determine how many tasks to keep running in the service. For example, if you have a service that contains `tasksetA` and you create a `tasksetB`, you might want to test the validity of `tasksetB` before transitioning production traffic to it. You could set the `scale` for both task sets to `100`, and when you were ready to transition all production traffic to `tasksetB`, you could update the `scale` for `tasksetA` to `0` to scale it down.
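   For example, the `tasksetA` scale-down described above can be sketched as input for `aws ecs update-task-set --cli-input-json` (the names and ARN are placeholders):

   ```
   {
       "cluster": "myCluster",
       "service": "myService",
       "taskSet": "arn:aws:ecs:us-west-2:123456789012:task-set/myCluster/myService/ecs-svc/1234567890123456789",
       "scale": {
           "value": 0,
           "unit": "PERCENT"
       }
   }
   ```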

# CodeDeploy blue/green deployments for Amazon ECS
<a name="deployment-type-bluegreen"></a>

We recommend that you use the Amazon ECS blue/green deployment. For more information, see [Creating an Amazon ECS blue/green deployment](deploy-blue-green-service.md).

The *blue/green* deployment type uses the blue/green deployment model controlled by CodeDeploy. Use this deployment type to verify a new deployment of a service before sending production traffic to it. For more information, see [What Is CodeDeploy](https://docs.aws.amazon.com/codedeploy/latest/userguide/welcome.html) in the *AWS CodeDeploy User Guide*.

There are three ways traffic can shift during a blue/green deployment:
+ **Canary** — Traffic is shifted in two increments. You can choose from predefined canary options that specify the percentage of traffic shifted to your updated task set in the first increment and the interval, in minutes, before the remaining traffic is shifted in the second increment.
+ **Linear** — Traffic is shifted in equal increments with an equal number of minutes between each increment. You can choose from predefined linear options that specify the percentage of traffic shifted in each increment and the number of minutes between each increment.
+ **All-at-once** — All traffic is shifted from the original task set to the updated task set all at once.

The following are components of CodeDeploy that Amazon ECS uses when a service uses the blue/green deployment type:

**CodeDeploy application**  
A collection of CodeDeploy resources. This consists of one or more deployment groups.

**CodeDeploy deployment group**  
The deployment settings. This consists of the following:  
+ Amazon ECS cluster and service
+ Load balancer target group and listener information
+ Deployment rollback strategy
+ Traffic rerouting settings
+ Original revision termination settings
+ Deployment configuration
+ CloudWatch alarms configuration that can be set up to stop deployments
+ SNS or CloudWatch Events settings for notifications
For more information, see [Working with Deployment Groups](https://docs.aws.amazon.com/codedeploy/latest/userguide/deployment-groups.html) in the *AWS CodeDeploy User Guide*.

**CodeDeploy deployment configuration**  
Specifies how CodeDeploy routes production traffic to your replacement task set during a deployment. The following pre-defined linear and canary deployment configurations are available. You can also create custom linear and canary deployment configurations. For more information, see [Working with Deployment Configurations](https://docs.aws.amazon.com/codedeploy/latest/userguide/deployment-configurations.html) in the *AWS CodeDeploy User Guide*.  
+ **CodeDeployDefault.ECSAllAtOnce**: Shifts all traffic to the updated Amazon ECS container at once.
+ **CodeDeployDefault.ECSLinear10PercentEvery1Minutes**: Shifts 10 percent of traffic every minute until all traffic is shifted.
+ **CodeDeployDefault.ECSLinear10PercentEvery3Minutes**: Shifts 10 percent of traffic every 3 minutes until all traffic is shifted.
+ **CodeDeployDefault.ECSCanary10Percent5Minutes**: Shifts 10 percent of traffic in the first increment. The remaining 90 percent is deployed five minutes later.
+ **CodeDeployDefault.ECSCanary10Percent15Minutes**: Shifts 10 percent of traffic in the first increment. The remaining 90 percent is deployed 15 minutes later.

**Revision**  
A revision is the CodeDeploy application specification file (AppSpec file). In the AppSpec file, you specify the full ARN of the task definition and the container and port of your replacement task set where traffic is to be routed when a new deployment is created. The container name must be one of the container names referenced in your task definition. If the network configuration or platform version has been updated in the service definition, you must also specify those details in the AppSpec file. You can also specify the Lambda functions to run during the deployment lifecycle events. The Lambda functions allow you to run tests and return metrics during the deployment. For more information, see [AppSpec File Reference](https://docs.aws.amazon.com/codedeploy/latest/userguide/reference-appspec-file.html) in the *AWS CodeDeploy User Guide*.
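The following is a minimal AppSpec file sketch in JSON format; the task definition ARN, container name, and port are placeholders and must match your replacement task set:

```
{
    "version": 0.0,
    "Resources": [
        {
            "TargetService": {
                "Type": "AWS::ECS::Service",
                "Properties": {
                    "TaskDefinition": "arn:aws:ecs:us-west-2:123456789012:task-definition/sample-fargate:2",
                    "LoadBalancerInfo": {
                        "ContainerName": "fargate-app",
                        "ContainerPort": 80
                    }
                }
            }
        }
    ]
}
```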

## Considerations
<a name="deployment-type-bluegreen-considerations"></a>

Consider the following when using the blue/green deployment type:
+ When an Amazon ECS service using the blue/green deployment type is initially created, an Amazon ECS task set is created.
+ You must configure the service to use either an Application Load Balancer or Network Load Balancer. The following are the load balancer requirements:
  + You must add a production listener to the load balancer, which is used to route production traffic.
  + An optional test listener can be added to the load balancer, which is used to route test traffic. If you specify a test listener, CodeDeploy routes your test traffic to the replacement task set during a deployment.
  + Both the production and test listeners must belong to the same load balancer.
  + You must define a target group for the load balancer. The target group routes traffic to the original task set in a service through the production listener.
  + When a Network Load Balancer is used, only the `CodeDeployDefault.ECSAllAtOnce` deployment configuration is supported.
+ For services configured to use service auto scaling and the blue/green deployment type, auto scaling is not blocked during a deployment, but the deployment may fail under some circumstances. The following describes this behavior in more detail.
  + If a service is scaling and a deployment starts, the green task set is created and CodeDeploy will wait up to an hour for the green task set to reach steady state and won't shift any traffic until it does.
  + If a service is in the process of a blue/green deployment and a scaling event occurs, traffic will continue to shift for 5 minutes. If the service doesn't reach steady state within 5 minutes, CodeDeploy will stop the deployment and mark it as failed.
+ Neither Fargate tasks nor services that use the `CODE_DEPLOY` deployment controller type support the `DAEMON` scheduling strategy.
+ When you initially create a CodeDeploy application and deployment group, you must specify the following:
  + You must define two target groups for the load balancer. One target group should be the initial target group defined for the load balancer when the Amazon ECS service was created. The second target group's only requirement is that it can't be associated with a different load balancer than the one the service uses.
+ When you create a CodeDeploy deployment for an Amazon ECS service, CodeDeploy creates a *replacement task set* (or *green task set*) in the deployment. If you added a test listener to the load balancer, CodeDeploy routes your test traffic to the replacement task set. This is when you can run any validation tests. Then CodeDeploy reroutes the production traffic from the original task set to the replacement task set according to the traffic rerouting settings for the deployment group.
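
As a sketch of these requirements, a CodeDeploy deployment group for an Amazon ECS service can be created with input like the following. All names and ARNs are placeholders; verify the exact field shapes against the CodeDeploy `CreateDeploymentGroup` API reference:

```
{
  "applicationName": "my-ecs-app",
  "deploymentGroupName": "my-ecs-deployment-group",
  "serviceRoleArn": "arn:aws:iam::123456789012:role/ecsCodeDeployRole",
  "deploymentConfigName": "CodeDeployDefault.ECSAllAtOnce",
  "deploymentStyle": {
    "deploymentType": "BLUE_GREEN",
    "deploymentOption": "WITH_TRAFFIC_CONTROL"
  },
  "ecsServices": [
    {
      "clusterName": "my-cluster",
      "serviceName": "my-service"
    }
  ],
  "loadBalancerInfo": {
    "targetGroupPairInfoList": [
      {
        "targetGroups": [
          {"name": "blue-target-group"},
          {"name": "green-target-group"}
        ],
        "prodTrafficRoute": {
          "listenerArns": ["arn:aws:elasticloadbalancing:region:123456789012:listener/app/my-alb/abc/prod"]
        },
        "testTrafficRoute": {
          "listenerArns": ["arn:aws:elasticloadbalancing:region:123456789012:listener/app/my-alb/abc/test"]
        }
      }
    ]
  }
}
```

Note that both listener ARNs reference listeners on the same load balancer, and the two target groups match the pair described above.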

## Required IAM permissions
<a name="deployment-type-bluegreen-IAM"></a>

Blue/green deployments are made possible by a combination of the Amazon ECS and CodeDeploy APIs. Users must have the appropriate permissions for these services before they can use Amazon ECS blue/green deployments in the AWS Management Console or with the AWS CLI or SDKs. 

In addition to the standard IAM permissions for creating and updating services, Amazon ECS requires the following permissions. These permissions have been added to the `AmazonECS_FullAccess` IAM policy. For more information, see [AmazonECS\_FullAccess](security-iam-awsmanpol.md#security-iam-awsmanpol-AmazonECS_FullAccess).
+ `codedeploy:CreateApplication`
+ `codedeploy:CreateDeployment`
+ `codedeploy:CreateDeploymentGroup`
+ `codedeploy:GetApplication`
+ `codedeploy:GetDeployment`
+ `codedeploy:GetDeploymentGroup`
+ `codedeploy:ListApplications`
+ `codedeploy:ListDeploymentGroups`
+ `codedeploy:ListDeployments`
+ `codedeploy:StopDeployment`
+ `codedeploy:GetDeploymentTarget`
+ `codedeploy:ListDeploymentTargets`
+ `codedeploy:GetDeploymentConfig`
+ `codedeploy:GetApplicationRevision`
+ `codedeploy:RegisterApplicationRevision`
+ `codedeploy:BatchGetApplicationRevisions`
+ `codedeploy:BatchGetDeploymentGroups`
+ `codedeploy:BatchGetDeployments`
+ `codedeploy:BatchGetApplications`
+ `codedeploy:ListApplicationRevisions`
+ `codedeploy:ListDeploymentConfigs`
+ `codedeploy:ContinueDeployment`
+ `sns:ListTopics`
+ `cloudwatch:DescribeAlarms`
+ `lambda:ListFunctions`

**Note**  
In addition to the standard Amazon ECS permissions required to run tasks and services, users also require `iam:PassRole` permissions to use IAM roles for tasks.

CodeDeploy needs permissions to call Amazon ECS APIs, modify your Elastic Load Balancing resources, invoke Lambda functions, and describe CloudWatch alarms, as well as permissions to modify your service's desired count on your behalf. Before creating an Amazon ECS service that uses the blue/green deployment type, you must create an IAM role (`ecsCodeDeployRole`). For more information, see [Amazon ECS CodeDeploy IAM Role](codedeploy_IAM_role.md).
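
A minimal trust policy for the `ecsCodeDeployRole` allows the CodeDeploy service to assume the role; the permissions themselves are attached separately, as described in the linked topic:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "codedeploy.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```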

# Migrate CodeDeploy blue/green deployments to Amazon ECS blue/green deployments
<a name="migrate-codedeploy-to-ecs-bluegreen"></a>

CodeDeploy blue/green and Amazon ECS blue/green deployments provide similar functionality, but they differ in how you configure and manage them.

## CodeDeploy blue/green deployment overview
<a name="codedeploy-bluegreen-overview"></a>

When creating an Amazon ECS service using CodeDeploy, you:

1. Create a load balancer with a production listener and (optionally) a test listener. Each listener is configured with a single (default) rule that routes all traffic to a single target group (the primary target group).

1. Create an Amazon ECS service, configured to use the listener and target group, with `deploymentController` type set to `CODE_DEPLOY`. Service creation results in the creation of a (blue) task set registered with the specified target group.

1. Create a CodeDeploy deployment group (as part of a CodeDeploy application), and configure it with details of the Amazon ECS cluster, service name, load balancer listeners, two target groups (the primary target group used in the production listener rule, and a secondary target group to be used for replacement tasks), a service role (to grant CodeDeploy permissions to manipulate Amazon ECS and Elastic Load Balancing resources) and various parameters that control the deployment behavior.

With CodeDeploy, new versions of a service are deployed using `CreateDeployment()`, specifying the CodeDeploy application name, deployment group name, and an AppSpec file which provides details of the new revision and optional lifecycle hooks. The CodeDeploy deployment creates a replacement (green) task set and registers its tasks with the secondary target group. When this becomes healthy, it is available for testing (optional) and for production. In both cases, re-routing is achieved by changing the respective listener rule to point at the secondary target group associated with the green task set. Rollback is achieved by changing the production listener rule back to the primary target group.
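
For reference, an AppSpec file for an Amazon ECS deployment has roughly the following shape. The task definition ARN and container details are placeholders:

```
{
  "version": 0.0,
  "Resources": [
    {
      "TargetService": {
        "Type": "AWS::ECS::Service",
        "Properties": {
          "TaskDefinition": "arn:aws:ecs:us-east-1:123456789012:task-definition/my-task:2",
          "LoadBalancerInfo": {
            "ContainerName": "my-container",
            "ContainerPort": 80
          }
        }
      }
    }
  ]
}
```

Optional `Hooks` entries in the AppSpec name the Lambda functions to invoke at each lifecycle event.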

## Amazon ECS blue/green deployment overview
<a name="ecs-bluegreen-overview"></a>

With Amazon ECS blue/green deployments, the deployment configuration is part of the Amazon ECS service itself:

1. You must pre-configure the load balancer production listener with a rule that includes two target groups with weights of 1 and 0.

1. You specify the following when you create or update the service: 
   + The ARN of this listener rule 
   + The two target groups
   + An IAM role to grant Amazon ECS permission to call the Elastic Load Balancing APIs
   + An optional IAM role to run Lambda functions
   + Set `deploymentController` type to `ECS` and `deploymentConfiguration.strategy` to `BLUE_GREEN`. This results in the creation of a (blue) service deployment whose tasks are registered with the primary target group.

With Amazon ECS blue/green, a new service revision is created by calling Amazon ECS `UpdateService()`, passing details of the new revision. The service deployment creates new (green) service revision tasks and registers them with the secondary target group. Amazon ECS handles re-routing and rollback operations by switching the weights on the listener rule.
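
As a sketch, the relevant parameters of such an `UpdateService` call (for example, passed to the AWS CLI with `--cli-input-json`) look like the following. The ARNs are placeholders, and the field names should be verified against the current Amazon ECS API reference:

```
{
  "cluster": "my-cluster",
  "service": "my-service",
  "deploymentController": {"type": "ECS"},
  "deploymentConfiguration": {
    "strategy": "BLUE_GREEN",
    "bakeTimeInMinutes": 5
  },
  "loadBalancers": [
    {
      "containerName": "my-container",
      "containerPort": 80,
      "targetGroupArn": "arn:aws:elasticloadbalancing:region:123456789012:targetgroup/blue/abc",
      "advancedConfiguration": {
        "alternateTargetGroupArn": "arn:aws:elasticloadbalancing:region:123456789012:targetgroup/green/def",
        "productionListenerRule": "arn:aws:elasticloadbalancing:region:123456789012:listener-rule/app/my-alb/abc/prod/1",
        "roleArn": "arn:aws:iam::123456789012:role/ecsInfrastructureRoleForLoadBalancers"
      }
    }
  ]
}
```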

## Key implementation differences
<a name="implementation-differences"></a>

While both approaches result in the creation of an initial set of tasks, the underlying implementation differs:
+ CodeDeploy uses a task set, whereas Amazon ECS uses a service revision. Amazon ECS task sets are an older construct that has been superseded by Amazon ECS service revisions and deployments. The latter offer greater visibility into the deployment process, as well as the service deployment and service revision history.
+ With CodeDeploy, lifecycle hooks are specified as part of the AppSpec file that is supplied to `CreateDeployment()`. This means that the hooks can be changed from one deployment to the next. With Amazon ECS blue/green, the hooks are specified as part of the service configuration, and any updates would require an `UpdateService()` call.
+ Both CodeDeploy and Amazon ECS blue/green use Lambda for hook implementation, but the expected inputs and outputs differ.

  With CodeDeploy, the function must call `PutLifecycleEventHookExecutionStatus()` to return the hook status, which can either be `SUCCEEDED` or `FAILED`. With Amazon ECS, the Lambda response is used to indicate the hook status.
+ CodeDeploy invokes each hook as a one-off call, and expects a final execution status to be returned within one hour. Amazon ECS hooks are more flexible in that they can return an `IN_PROGRESS` indicator, which signals that the hook must be re-invoked repeatedly until it results in `SUCCEEDED` or `FAILED`. For more information, see [Lifecycle hooks for Amazon ECS service deployments](deployment-lifecycle-hooks.md).
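
For example, an Amazon ECS lifecycle hook Lambda function reports its status through its response payload. A hook that needs to be re-invoked returns:

```
{
  "hookStatus": "IN_PROGRESS"
}
```

When the hook completes, the function returns `"SUCCEEDED"` or `"FAILED"` instead, and Amazon ECS continues or rolls back the deployment accordingly.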

## Migration approaches
<a name="migration-paths"></a>

There are three main approaches to migrating from CodeDeploy blue/green to Amazon ECS blue/green deployments. Each approach has different characteristics in terms of complexity, risk, rollback capability, and potential downtime.

### Reuse the same Elastic Load Balancing resources used for CodeDeploy
<a name="inplace-update"></a>

You update the existing Amazon ECS service to use the Amazon ECS deployment controller with the blue/green deployment strategy instead of the CodeDeploy deployment controller. Consider the following when using this approach:
+ The migration procedure is simpler because you are updating the existing Amazon ECS service deployment controller and deployment strategy.
+ There is no downtime when correctly configured and migrated.
+ A rollback requires that you revert the service revision.
+ The risk is high because there is no parallel blue/green configuration.

You use the same load balancer listener and target groups that are used for CodeDeploy. If you are using CloudFormation, see [Migrating a CloudFormation CodeDeploy blue/green deployment template to an Amazon ECS blue/green CloudFormation template](migrate-codedeploy-to-ecs-bluegreen-cloudformation-template.md).

1. Modify the default rule of the production/test listeners to include the alternate target group and set the weight of the primary target group to 1 and the alternate target group to 0.

   For CodeDeploy, the listeners of the load balancer attached to the service are configured with a single (default) rule that routes all traffic to a single target group. For Amazon ECS blue/green, the load balancer listeners must be pre-configured with a rule that includes the two target groups with weights. The primary target group must be weighted to 1 and the alternate target group must be weighted to 0.

1. Update the existing Amazon ECS service by calling the `UpdateService` API and setting the `deploymentController` parameter to `ECS` and the `deploymentStrategy` parameter to `BLUE_GREEN`. Specify the ARNs of the target group, the alternate target group, the production listener, and an optional test listener.

1. Verify that the service works as expected.

1. Delete the CodeDeploy setup for this Amazon ECS service as you are now using Amazon ECS blue/green.
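
The first step above corresponds to an Elastic Load Balancing `ModifyListener` (or `ModifyRule`) call whose forward action weights the two target groups. The ARNs here are placeholders:

```
{
  "DefaultActions": [
    {
      "Type": "forward",
      "ForwardConfig": {
        "TargetGroups": [
          {
            "TargetGroupArn": "arn:aws:elasticloadbalancing:region:123456789012:targetgroup/primary/abc",
            "Weight": 1
          },
          {
            "TargetGroupArn": "arn:aws:elasticloadbalancing:region:123456789012:targetgroup/alternate/def",
            "Weight": 0
          }
        ]
      }
    }
  ]
}
```

With weights of 1 and 0, all traffic continues to flow to the primary target group until Amazon ECS shifts it during a deployment.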

### New service with existing load balancer
<a name="new-service-existing-lb"></a>

This approach uses the blue/green strategy for the migration. 

Consider the following when using this approach:
+ There is minimal disruption. It occurs only during the Elastic Load Balancing port swap.
+ A rollback requires that you revert the Elastic Load Balancing port swap.
+ The risk is low because there are parallel configurations. Therefore you can test before the traffic shift.

1. Leave the listeners, the target groups, and the Amazon ECS service for the CodeDeploy setup intact so you can easily roll back to this setup if needed.

1. Create new target groups and new listeners (with different ports from the original listeners) under the existing load balancer. Then, create a new Amazon ECS service that matches the existing Amazon ECS service except that you use `ECS` as deployment controller, `BLUE_GREEN` as a deployment strategy, and pass the ARNs for the new target groups and the new listeners rules.

1. Verify the new setup by manually sending HTTP traffic to the service. If everything goes well, swap the ports of the original listeners and the new listeners to route the traffic to the new setup.

1. Verify the new setup, and if everything continues to work as expected, delete the CodeDeploy setup.

### New service with a new load balancer
<a name="new-service-new-lb"></a>

Like the previous approach, this approach uses the blue/green strategy for the migration. The key difference is that the switch from the CodeDeploy setup to the Amazon ECS blue/green setup happens at a reverse proxy layer above the load balancer, such as Route 53 or CloudFront.

This approach is suitable for customers who already have this reverse proxy layer, and if all the communication with the service is happening through it (for example, no direct communication at the load balancer level).

Consider the following when using this approach:
+ This requires a reverse proxy layer.
+ The migration procedure is more complex because you create a new load balancer and a new service in addition to updating the reverse proxy configuration.
+ There is minimal disruption. It occurs only during the reverse proxy switch.
+ A rollback requires that you reverse the proxy configuration changes.
+ The risk is low because there are parallel configurations. Therefore you can test before the traffic shift.

1. Leave the existing CodeDeploy setup intact (load balancer, listeners, target groups, Amazon ECS service, and CodeDeploy deployment group).

1. Create a new load balancer, target groups, and listeners configured for Amazon ECS blue/green deployments.

   Configure the appropriate resources.
   + Application Load Balancer - For more information, see [Application Load Balancer resources for blue/green, linear, and canary deployments](alb-resources-for-blue-green.md).
   + Network Load Balancer - For more information, see [Network Load Balancer resources for Amazon ECS blue/green, linear and canary deployments](nlb-resources-for-blue-green.md).

1. Create a new service with `ECS` as deployment controller and `BLUE_GREEN` as deployment strategy, pointing to the new load balancer resources.

1. Verify the new setup by testing it through the new load balancer.

1. Update the reverse proxy configuration to route traffic to the new load balancer.

1. Observe the new service revision, and if everything continues to work as expected, delete the CodeDeploy setup.

## Next steps
<a name="post-migration-considerations"></a>

After migrating to Amazon ECS blue/green deployments:
+ Update your deployment scripts and CI/CD pipelines to use the Amazon ECS `UpdateService` API instead of the CodeDeploy `CreateDeployment` API.
+ Update your monitoring and alerting to track Amazon ECS service deployments instead of CodeDeploy deployments.
+ Consider implementing automated testing of your new deployment process to ensure it works as expected.

# Migrating from a CodeDeploy blue/green to an Amazon ECS blue/green service deployment
<a name="migrate-code-deploy-to-ecs-blue-green"></a>

By using Amazon ECS blue/green deployments, you can make and test service changes before implementing them in a production environment.

You must create new lifecycle hooks for your Amazon ECS blue/green deployment.

## Prerequisites
<a name="migrate-code-deploy-to-ecs-blue-green-prerequisites"></a>

Perform the following operations before you start a blue/green deployment.

1. Replace the Amazon ECS CodeDeploy IAM role with the following permissions.
   + For information about Elastic Load Balancing permissions, see [Amazon ECS infrastructure IAM role for load balancers](AmazonECSInfrastructureRolePolicyForLoadBalancers.md).
   + For information about Lambda permissions, see [Permissions required for Lambda functions in Amazon ECS blue/green deployments](blue-green-permissions.md).

1. Turn off CodeDeploy automation. For more information, see [Working with deployment groups in CodeDeploy](https://docs.aws.amazon.com/codedeploy/latest/userguide/deployment-groups.html) in the *CodeDeploy User Guide*.

1. Make sure that you have the following information from your CodeDeploy blue/green deployment. You can reuse this information for the Amazon ECS blue/green deployment:
   + The production target group
   + The production listener
   + The production rule
   + The test target group

     This is the target group for the green service revision.

1. Ensure that your Application Load Balancer target groups are properly associated with listener rules:
   + If you are not using test listeners, both target groups (production and test) must be associated with production listener rules.
   + If you are using test listeners, one target group must be linked to production listener rules and the other target group must be linked to test listener rules.

   If this requirement is not met, the service deployment will fail with the following error: `Service deployment rolled back because of invalid networking configuration. Both targetGroup and alternateTargetGroup must be associated with the productionListenerRule or testListenerRule.`

1. Verify that there are no ongoing service deployments for the service. For more information, see [View service history using Amazon ECS service deployments](service-deployment.md).

1. Amazon ECS blue/green deployments require your service to use one of the following features. Configure the appropriate resources:
   + Application Load Balancer - For more information, see [Application Load Balancer resources for blue/green, linear, and canary deployments](alb-resources-for-blue-green.md).
   + Network Load Balancer - For more information, see [Network Load Balancer resources for Amazon ECS blue/green, linear and canary deployments](nlb-resources-for-blue-green.md).
   + Service Connect - For more information, see [Service Connect resources for Amazon ECS blue/green, linear, and canary deployments](service-connect-blue-green.md).

1. Decide if you want to run Lambda functions for the lifecycle stages for the stages in the Amazon ECS blue/green deployment.
   + Pre scale up
   + After scale up
   + Test traffic shift
   + After test traffic shift
   + Production traffic shift
   + After production traffic shift

   Create Lambda functions for each lifecycle stage. For more information, see [Create a Lambda function with the console](https://docs.aws.amazon.com/lambda/latest/dg/getting-started.html#getting-started-create-function) in the *AWS Lambda Developer Guide*.
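
If you script the migration instead of using the console, the hooks correspond to a `lifecycleHooks` list in the service's deployment configuration. The stage names and field shapes below are illustrative; confirm them against the Amazon ECS API reference:

```
{
  "deploymentConfiguration": {
    "lifecycleHooks": [
      {
        "hookTargetArn": "arn:aws:lambda:us-east-1:123456789012:function:pre-scale-up-check",
        "roleArn": "arn:aws:iam::123456789012:role/ecsDeploymentHookRole",
        "lifecycleStages": ["PRE_SCALE_UP", "POST_SCALE_UP"]
      }
    ]
  }
}
```

Each entry names one Lambda function, the role Amazon ECS uses to invoke it, and the stages at which it runs.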

For more information about updating a service's deployment controller, see [Update Amazon ECS service parameters](update-service-parameters.md).

## Procedure
<a name="migrate-code-deploy-to-ecs-procedure"></a>

1. Open the console at [https://console.aws.amazon.com/ecs/v2](https://console.aws.amazon.com/ecs/v2).

1. On the **Clusters** page, choose the cluster.

   The cluster details page displays.

1. From the **Services** tab, choose the service.

   The service details page displays.

1. In the banner, choose **Update deployment controller type**.

   The **Migrate deployment controller type** page displays.

1. Expand **New**, and then specify the following parameters.

   1. For **Deployment controller type**, choose **ECS**.

   1. For **Deployment strategy**, choose **Blue/green**.

   1. For **Bake time**, enter the time that both the blue and green service revisions run.

   1. To run Lambda functions for a lifecycle stage, under **Deployment lifecycle hooks**, do the following for each unique Lambda function:

      1. Choose **Add**.

      1. For **Lambda function**, enter the function name.

      1. For **Role**, choose the role that you created in the prerequisites with the blue/green permissions.

         For more information, see [Permissions required for Lambda functions in Amazon ECS blue/green deployments](blue-green-permissions.md).

      1. For **Lifecycle stages**, select the stages the Lambda function runs.

      1.  (Optional) For **Hook details**, enter a key-value pair that provides information about the hook.

1. Expand **Load Balancing**, and then configure the following:

   1. For **Role**, choose the role that you created in the prerequisites with the blue/green permissions.

      For more information, see [Permissions required for Lambda functions in Amazon ECS blue/green deployments](blue-green-permissions.md).

   1. For **Listener**, choose the production listener from your CodeDeploy blue/green deployment.

   1. For **Production rule**, choose the production rule from your CodeDeploy blue/green deployment.

   1. For **Test rule**, choose the test rule from your CodeDeploy blue/green deployment.

   1. For **Target group**, choose the production target group from your CodeDeploy blue/green deployment.

   1. For **Alternate target group**, choose the test target group from your CodeDeploy blue/green deployment.

1. Choose **Update**.

## Next steps
<a name="migrate-code-deploy-to-ecs-blue-green-next-steps"></a>
+ Update the service to start the deployment. For more information, see [Updating an Amazon ECS service](update-service-console-v2.md).
+ Monitor the deployment process to ensure it follows the blue/green pattern:
  + The green service revision is created and scaled up
  + Test traffic is routed to the green revision (if configured)
  + Production traffic is shifted to the green revision
  + After the bake time, the blue revision is terminated

# Migrating a CloudFormation CodeDeploy blue/green deployment template to an Amazon ECS blue/green CloudFormation template
<a name="migrate-codedeploy-to-ecs-bluegreen-cloudformation-template"></a>

Migrate a CloudFormation template that uses CodeDeploy blue/green deployments for Amazon ECS services to one that uses the native Amazon ECS blue/green deployment strategy. The migration follows the "Reuse the same Elastic Load Balancing resources used for CodeDeploy" approach. For more information, see [Migrate CodeDeploy blue/green deployments to Amazon ECS blue/green deployments](migrate-codedeploy-to-ecs-bluegreen.md).

## Source template
<a name="source-template"></a>

This template uses the `AWS::CodeDeployBlueGreen` transform and `AWS::CodeDeploy::BlueGreen` hook to implement blue/green deployments for an Amazon ECS service.

### Source
<a name="code-deploy-source"></a>

This is the complete CloudFormation template using CodeDeploy blue/green deployment. For more information, see [Blue/green deployment template example](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/blue-green-template-example.html#blue-green-template-example.json) in the *AWS CloudFormation User Guide*:

```
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Parameters": {
    "Vpc": {
      "Type": "AWS::EC2::VPC::Id"
    },
    "Subnet1": {
      "Type": "AWS::EC2::Subnet::Id"
    },
    "Subnet2": {
      "Type": "AWS::EC2::Subnet::Id"
    }
  },
  "Transform": [
    "AWS::CodeDeployBlueGreen"
  ],
  "Hooks": {
    "CodeDeployBlueGreenHook": {
      "Type": "AWS::CodeDeploy::BlueGreen",
      "Properties": {
        "TrafficRoutingConfig": {
          "Type": "TimeBasedCanary",
          "TimeBasedCanary": {
            "StepPercentage": 15,
            "BakeTimeMins": 5
          }
        },
        "Applications": [
          {
            "Target": {
              "Type": "AWS::ECS::Service",
              "LogicalID": "ECSDemoService"
            },
            "ECSAttributes": {
              "TaskDefinitions": [
                "BlueTaskDefinition",
                "GreenTaskDefinition"
              ],
              "TaskSets": [
                "BlueTaskSet",
                "GreenTaskSet"
              ],
              "TrafficRouting": {
                "ProdTrafficRoute": {
                  "Type": "AWS::ElasticLoadBalancingV2::Listener",
                  "LogicalID": "ALBListenerProdTraffic"
                },
                "TargetGroups": [
                  "ALBTargetGroupBlue",
                  "ALBTargetGroupGreen"
                ]
              }
            }
          }
        ]
      }
    }
  },
  "Resources": {
    "ExampleSecurityGroup": {
      "Type": "AWS::EC2::SecurityGroup",
      "Properties": {
        "GroupDescription": "Security group for ec2 access",
        "VpcId": {"Ref": "Vpc"},
        "SecurityGroupIngress": [
          {
            "IpProtocol": "tcp",
            "FromPort": 80,
            "ToPort": 80,
            "CidrIp": "0.0.0.0/0"
          },
          {
            "IpProtocol": "tcp",
            "FromPort": 8080,
            "ToPort": 8080,
            "CidrIp": "0.0.0.0/0"
          },
          {
            "IpProtocol": "tcp",
            "FromPort": 22,
            "ToPort": 22,
            "CidrIp": "0.0.0.0/0"
          }
        ]
      }
    },
    "ALBTargetGroupBlue": {
      "Type": "AWS::ElasticLoadBalancingV2::TargetGroup",
      "Properties": {
        "HealthCheckIntervalSeconds": 5,
        "HealthCheckPath": "/",
        "HealthCheckPort": "80",
        "HealthCheckProtocol": "HTTP",
        "HealthCheckTimeoutSeconds": 2,
        "HealthyThresholdCount": 2,
        "Matcher": {
          "HttpCode": "200"
        },
        "Port": 80,
        "Protocol": "HTTP",
        "Tags": [
          {
            "Key": "Group",
            "Value": "Example"
          }
        ],
        "TargetType": "ip",
        "UnhealthyThresholdCount": 4,
        "VpcId": {"Ref": "Vpc"}
      }
    },
    "ALBTargetGroupGreen": {
      "Type": "AWS::ElasticLoadBalancingV2::TargetGroup",
      "Properties": {
        "HealthCheckIntervalSeconds": 5,
        "HealthCheckPath": "/",
        "HealthCheckPort": "80",
        "HealthCheckProtocol": "HTTP",
        "HealthCheckTimeoutSeconds": 2,
        "HealthyThresholdCount": 2,
        "Matcher": {
          "HttpCode": "200"
        },
        "Port": 80,
        "Protocol": "HTTP",
        "Tags": [
          {
            "Key": "Group",
            "Value": "Example"
          }
        ],
        "TargetType": "ip",
        "UnhealthyThresholdCount": 4,
        "VpcId": {"Ref": "Vpc"}
      }
    },
    "ExampleALB": {
      "Type": "AWS::ElasticLoadBalancingV2::LoadBalancer",
      "Properties": {
        "Scheme": "internet-facing",
        "SecurityGroups": [
          {"Ref": "ExampleSecurityGroup"}
        ],
        "Subnets": [
          {"Ref": "Subnet1"},
          {"Ref": "Subnet2"}
        ],
        "Tags": [
          {
            "Key": "Group",
            "Value": "Example"
          }
        ],
        "Type": "application",
        "IpAddressType": "ipv4"
      }
    },
    "ALBListenerProdTraffic": {
      "Type": "AWS::ElasticLoadBalancingV2::Listener",
      "Properties": {
        "DefaultActions": [
          {
            "Type": "forward",
            "ForwardConfig": {
              "TargetGroups": [
                {
                  "TargetGroupArn": {"Ref": "ALBTargetGroupBlue"},
                  "Weight": 1
                }
              ]
            }
          }
        ],
        "LoadBalancerArn": {"Ref": "ExampleALB"},
        "Port": 80,
        "Protocol": "HTTP"
      }
    },
    "ALBListenerProdRule": {
      "Type": "AWS::ElasticLoadBalancingV2::ListenerRule",
      "Properties": {
        "Actions": [
          {
            "Type": "forward",
            "ForwardConfig": {
              "TargetGroups": [
                {
                  "TargetGroupArn": {"Ref": "ALBTargetGroupBlue"},
                  "Weight": 1
                }
              ]
            }
          }
        ],
        "Conditions": [
          {
            "Field": "http-header",
            "HttpHeaderConfig": {
              "HttpHeaderName": "User-Agent",
              "Values": [
                "Mozilla"
              ]
            }
          }
        ],
        "ListenerArn": {"Ref": "ALBListenerProdTraffic"},
        "Priority": 1
      }
    },
    "ECSTaskExecutionRole": {
      "Type": "AWS::IAM::Role",
      "Properties": {
        "AssumeRolePolicyDocument": {
          "Version": "2012-10-17",		 	 	 
          "Statement": [
            {
              "Sid": "",
              "Effect": "Allow",
              "Principal": {
                "Service": "ecs-tasks.amazonaws.com"
              },
              "Action": "sts:AssumeRole"
            }
          ]
        },
        "ManagedPolicyArns": [
          "arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy"
        ]
      }
    },
    "BlueTaskDefinition": {
      "Type": "AWS::ECS::TaskDefinition",
      "Properties": {
        "ExecutionRoleArn": {"Fn::GetAtt": ["ECSTaskExecutionRole", "Arn"]},
        "ContainerDefinitions": [
          {
            "Name": "DemoApp",
            "Image": "nginxdemos/hello:latest",
            "Essential": true,
            "PortMappings": [
              {
                "HostPort": 80,
                "Protocol": "tcp",
                "ContainerPort": 80
              }
            ]
          }
        ],
        "RequiresCompatibilities": [
          "FARGATE"
        ],
        "NetworkMode": "awsvpc",
        "Cpu": "256",
        "Memory": "512",
        "Family": "ecs-demo"
      }
    },
    "ECSDemoCluster": {
      "Type": "AWS::ECS::Cluster",
      "Properties": {}
    },
    "ECSDemoService": {
      "Type": "AWS::ECS::Service",
      "Properties": {
        "Cluster": {"Ref": "ECSDemoCluster"},
        "DesiredCount": 1,
        "DeploymentController": {
          "Type": "EXTERNAL"
        }
      }
    },
    "BlueTaskSet": {
      "Type": "AWS::ECS::TaskSet",
      "Properties": {
        "Cluster": {"Ref": "ECSDemoCluster"},
        "LaunchType": "FARGATE",
        "NetworkConfiguration": {
          "AwsVpcConfiguration": {
            "AssignPublicIp": "ENABLED",
            "SecurityGroups": [
              {"Ref": "ExampleSecurityGroup"}
            ],
            "Subnets": [
              {"Ref": "Subnet1"},
              {"Ref": "Subnet2"}
            ]
          }
        },
        "PlatformVersion": "1.4.0",
        "Scale": {
          "Unit": "PERCENT",
          "Value": 100
        },
        "Service": {"Ref": "ECSDemoService"},
        "TaskDefinition": {"Ref": "BlueTaskDefinition"},
        "LoadBalancers": [
          {
            "ContainerName": "DemoApp",
            "ContainerPort": 80,
            "TargetGroupArn": {"Ref": "ALBTargetGroupBlue"}
          }
        ]
      }
    },
    "PrimaryTaskSet": {
      "Type": "AWS::ECS::PrimaryTaskSet",
      "Properties": {
        "Cluster": {"Ref": "ECSDemoCluster"},
        "Service": {"Ref": "ECSDemoService"},
        "TaskSetId": {"Fn::GetAtt": ["BlueTaskSet", "Id"]}
      }
    }
  }
}
```

## Migration steps
<a name="migration-steps"></a>

### Remove CodeDeploy-specific resources
<a name="remove-codedeploy-resources"></a>

You no longer need the following properties:
+ The `AWS::CodeDeployBlueGreen` transform
+ The `CodeDeployBlueGreenHook` hook
+ The `GreenTaskDefinition` and `GreenTaskSet` resources (these will be managed by Amazon ECS)
+ The `PrimaryTaskSet` resource (Amazon ECS will manage task sets internally)

### Reconfigure the load balancer listener
<a name="reconfigure-load-balancer"></a>

Modify the `ALBListenerProdTraffic` resource to use a forward action with two target groups:

```
{
  "DefaultActions": [
    {
      "Type": "forward",
      "ForwardConfig": {
        "TargetGroups": [
          {
            "TargetGroupArn": {"Ref": "ALBTargetGroupBlue"},
            "Weight": 1
          },
          {
            "TargetGroupArn": {"Ref": "ALBTargetGroupGreen"},
            "Weight": 0
          }
        ]
      }
    }
  ]
}
```

### Update the deployment properties
<a name="update-ecs-service"></a>

Update and add the following:
+ Change the `DeploymentController` property from `EXTERNAL` to `ECS`.
+ Add the `Strategy` property and set it to `BLUE_GREEN`.
+ Add the `BakeTimeInMinutes` property.

  ```
  {
    "DeploymentConfiguration": {
      "MaximumPercent": 200,
      "MinimumHealthyPercent": 100,
      "DeploymentCircuitBreaker": {
        "Enable": true,
        "Rollback": true
      },
      "BakeTimeInMinutes": 5,
      "Strategy": "BLUE_GREEN"
    }
  }
  ```
+ Add the load balancer configuration to the service:

  ```
  {
    "LoadBalancers": [
      {
        "ContainerName": "DemoApp",
        "ContainerPort": 80,
        "TargetGroupArn": {"Ref": "ALBTargetGroupBlue"},
        "AdvancedConfiguration": {
          "AlternateTargetGroupArn": {"Ref": "ALBTargetGroupGreen"},
          "ProductionListenerRule": {"Ref": "ALBListenerProdRule"},
          "RoleArn": {"Fn::GetAtt": ["ECSInfrastructureRoleForLoadBalancers", "Arn"]}
        }
      }
    ]
  }
  ```
+ Add the task definition reference to the service:

  ```
  {
    "TaskDefinition": {"Ref": "BlueTaskDefinition"}
  }
  ```

### Create the AmazonECSInfrastructureRolePolicyForLoadBalancers role
<a name="create-ecs-service-role"></a>

Add a new IAM role that allows Amazon ECS to manage load balancer resources. For more information, see [Amazon ECS infrastructure IAM role for load balancers](AmazonECSInfrastructureRolePolicyForLoadBalancers.md).

## Testing recommendations
<a name="testing-recommendations"></a>

1. Deploy the migrated template to a non-production environment.

1. Verify that the service deploys correctly with the initial configuration.

1. Test a deployment by updating the task definition and observing the blue/green deployment process.

1. Verify that traffic shifts correctly between the blue and green deployments.

1. Test rollback functionality by forcing a deployment failure.

## Template after migration
<a name="migrated-template"></a>

### Final template
<a name="ecs-bluegreen-template"></a>

This is the complete CloudFormation template using an Amazon ECS blue/green deployment:

```
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Parameters": {
    "Vpc": {
      "Type": "AWS::EC2::VPC::Id"
    },
    "Subnet1": {
      "Type": "AWS::EC2::Subnet::Id"
    },
    "Subnet2": {
      "Type": "AWS::EC2::Subnet::Id"
    }
  },
  "Resources": {
    "ExampleSecurityGroup": {
      "Type": "AWS::EC2::SecurityGroup",
      "Properties": {
        "GroupDescription": "Security group for ec2 access",
        "VpcId": {"Ref": "Vpc"},
        "SecurityGroupIngress": [
          {
            "IpProtocol": "tcp",
            "FromPort": 80,
            "ToPort": 80,
            "CidrIp": "0.0.0.0/0"
          },
          {
            "IpProtocol": "tcp",
            "FromPort": 8080,
            "ToPort": 8080,
            "CidrIp": "0.0.0.0/0"
          },
          {
            "IpProtocol": "tcp",
            "FromPort": 22,
            "ToPort": 22,
            "CidrIp": "0.0.0.0/0"
          }
        ]
      }
    },
    "ALBTargetGroupBlue": {
      "Type": "AWS::ElasticLoadBalancingV2::TargetGroup",
      "Properties": {
        "HealthCheckIntervalSeconds": 5,
        "HealthCheckPath": "/",
        "HealthCheckPort": "80",
        "HealthCheckProtocol": "HTTP",
        "HealthCheckTimeoutSeconds": 2,
        "HealthyThresholdCount": 2,
        "Matcher": {
          "HttpCode": "200"
        },
        "Port": 80,
        "Protocol": "HTTP",
        "Tags": [
          {
            "Key": "Group",
            "Value": "Example"
          }
        ],
        "TargetType": "ip",
        "UnhealthyThresholdCount": 4,
        "VpcId": {"Ref": "Vpc"}
      }
    },
    "ALBTargetGroupGreen": {
      "Type": "AWS::ElasticLoadBalancingV2::TargetGroup",
      "Properties": {
        "HealthCheckIntervalSeconds": 5,
        "HealthCheckPath": "/",
        "HealthCheckPort": "80",
        "HealthCheckProtocol": "HTTP",
        "HealthCheckTimeoutSeconds": 2,
        "HealthyThresholdCount": 2,
        "Matcher": {
          "HttpCode": "200"
        },
        "Port": 80,
        "Protocol": "HTTP",
        "Tags": [
          {
            "Key": "Group",
            "Value": "Example"
          }
        ],
        "TargetType": "ip",
        "UnhealthyThresholdCount": 4,
        "VpcId": {"Ref": "Vpc"}
      }
    },
    "ExampleALB": {
      "Type": "AWS::ElasticLoadBalancingV2::LoadBalancer",
      "Properties": {
        "Scheme": "internet-facing",
        "SecurityGroups": [
          {"Ref": "ExampleSecurityGroup"}
        ],
        "Subnets": [
          {"Ref": "Subnet1"},
          {"Ref": "Subnet2"}
        ],
        "Tags": [
          {
            "Key": "Group",
            "Value": "Example"
          }
        ],
        "Type": "application",
        "IpAddressType": "ipv4"
      }
    },
    "ALBListenerProdTraffic": {
      "Type": "AWS::ElasticLoadBalancingV2::Listener",
      "Properties": {
        "DefaultActions": [
          {
            "Type": "forward",
            "ForwardConfig": {
              "TargetGroups": [
                {
                  "TargetGroupArn": {"Ref": "ALBTargetGroupBlue"},
                  "Weight": 1
                },
                {
                  "TargetGroupArn": {"Ref": "ALBTargetGroupGreen"},
                  "Weight": 0
                }
              ]
            }
          }
        ],
        "LoadBalancerArn": {"Ref": "ExampleALB"},
        "Port": 80,
        "Protocol": "HTTP"
      }
    },
    "ALBListenerProdRule": {
      "Type": "AWS::ElasticLoadBalancingV2::ListenerRule",
      "Properties": {
        "Actions": [
          {
            "Type": "forward",
            "ForwardConfig": {
              "TargetGroups": [
                {
                  "TargetGroupArn": {"Ref": "ALBTargetGroupBlue"},
                  "Weight": 1
                },
                {
                  "TargetGroupArn": {"Ref": "ALBTargetGroupGreen"},
                  "Weight": 0
                }
              ]
            }
          }
        ],
        "Conditions": [
          {
            "Field": "http-header",
            "HttpHeaderConfig": {
              "HttpHeaderName": "User-Agent",
              "Values": [
                "Mozilla"
              ]
            }
          }
        ],
        "ListenerArn": {"Ref": "ALBListenerProdTraffic"},
        "Priority": 1
      }
    },
    "ECSTaskExecutionRole": {
      "Type": "AWS::IAM::Role",
      "Properties": {
        "AssumeRolePolicyDocument": {
          "Version": "2012-10-17",
          "Statement": [
            {
              "Sid": "",
              "Effect": "Allow",
              "Principal": {
                "Service": "ecs-tasks.amazonaws.com"
              },
              "Action": "sts:AssumeRole"
            }
          ]
        },
        "ManagedPolicyArns": [
          "arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy"
        ]
      }
    },
    "ECSInfrastructureRoleForLoadBalancers": {
      "Type": "AWS::IAM::Role",
      "Properties": {
        "AssumeRolePolicyDocument": {
          "Version": "2012-10-17",
          "Statement": [
            {
              "Sid": "AllowAccessToECSForInfrastructureManagement",
              "Effect": "Allow",
              "Principal": {
                "Service": "ecs.amazonaws.com"
              },
              "Action": "sts:AssumeRole"
            }
          ]
        },
        "ManagedPolicyArns": [
          "arn:aws:iam::aws:policy/AmazonECSInfrastructureRolePolicyForLoadBalancers"
        ]
      }
    },
    "BlueTaskDefinition": {
      "Type": "AWS::ECS::TaskDefinition",
      "Properties": {
        "ExecutionRoleArn": {"Fn::GetAtt": ["ECSTaskExecutionRole", "Arn"]},
        "ContainerDefinitions": [
          {
            "Name": "DemoApp",
            "Image": "nginxdemos/hello:latest",
            "Essential": true,
            "PortMappings": [
              {
                "HostPort": 80,
                "Protocol": "tcp",
                "ContainerPort": 80
              }
            ]
          }
        ],
        "RequiresCompatibilities": [
          "FARGATE"
        ],
        "NetworkMode": "awsvpc",
        "Cpu": "256",
        "Memory": "512",
        "Family": "ecs-demo"
      }
    },
    "ECSDemoCluster": {
      "Type": "AWS::ECS::Cluster",
      "Properties": {}
    },
    "ECSDemoService": {
      "Type": "AWS::ECS::Service",
      "Properties": {
        "Cluster": {"Ref": "ECSDemoCluster"},
        "DesiredCount": 1,
        "DeploymentController": {
          "Type": "ECS"
        },
        "DeploymentConfiguration": {
          "MaximumPercent": 200,
          "MinimumHealthyPercent": 100,
          "DeploymentCircuitBreaker": {
            "Enable": true,
            "Rollback": true
          },
          "BakeTimeInMinutes": 5,
          "Strategy": "BLUE_GREEN"
        },
        "NetworkConfiguration": {
          "AwsvpcConfiguration": {
            "AssignPublicIp": "ENABLED",
            "SecurityGroups": [
              {"Ref": "ExampleSecurityGroup"}
            ],
            "Subnets": [
              {"Ref": "Subnet1"},
              {"Ref": "Subnet2"}
            ]
          }
        },
        "LaunchType": "FARGATE",
        "PlatformVersion": "1.4.0",
        "TaskDefinition": {"Ref": "BlueTaskDefinition"},
        "LoadBalancers": [
          {
            "ContainerName": "DemoApp",
            "ContainerPort": 80,
            "TargetGroupArn": {"Ref": "ALBTargetGroupBlue"},
            "AdvancedConfiguration": {
              "AlternateTargetGroupArn": {"Ref": "ALBTargetGroupGreen"},
              "ProductionListenerRule": {"Ref": "ALBListenerProdRule"},
              "RoleArn": {"Fn::GetAtt": ["ECSInfrastructureRoleForLoadBalancers", "Arn"]}
            }
          }
        ]
      }
    }
  }
}
```

# Migrating from a CodeDeploy blue/green to an Amazon ECS rolling update service deployment
<a name="migrate-code-deploy-to-ecs-rolling"></a>

You can migrate your service deployments from a CodeDeploy blue/green deployment to an Amazon ECS rolling update deployment. This removes the dependency on CodeDeploy and uses the deployment capabilities that are integrated into Amazon ECS.

The Amazon ECS service scheduler replaces the currently running tasks with new tasks. The number of tasks that Amazon ECS adds or removes from the service during a rolling update is controlled by the service deployment configuration.
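The deployment configuration referenced above is expressed with the same properties used in the templates elsewhere in this topic. For example, a configuration like the following keeps at least the desired number of tasks running while allowing the scheduler to start up to twice the desired count during an update:

```
{
  "DeploymentConfiguration": {
    "MinimumHealthyPercent": 100,
    "MaximumPercent": 200
  }
}
```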

## Prerequisites
<a name="migrate-code-deploy-to-ecs-rolling-prerequisites"></a>

Perform the following operations before you start the migration.

1. Remove the Amazon ECS CodeDeploy IAM role. You no longer need it after the migration.

1. Turn off CodeDeploy automation. For more information, see [Working with deployment groups in CodeDeploy](https://docs.aws.amazon.com/codedeploy/latest/userguide/deployment-groups.html) in the *CodeDeploy User Guide*.

1. Verify that there are no ongoing service deployments for the service. For more information, see [View service history using Amazon ECS service deployments](service-deployment.md).

For more information about updating a service's deployment controller, see [Update Amazon ECS service parameters](update-service-parameters.md).

## Procedure
<a name="migrate-code-deploy-to-ecs-rolling-procedure"></a>

1. Open the Amazon ECS console at [https://console.aws.amazon.com/ecs/v2](https://console.aws.amazon.com/ecs/v2).

1. On the **Clusters** page, choose the cluster.

   The cluster details page displays.

1. From the **Services** tab, choose the service.

   The service details page displays.

1. In the banner, choose **Migrate**.

   The **Update deployment configuration** page displays.

1. Expand **Deployment options**, and then specify the following parameters.

   1. For **Deployment controller type**, choose **ECS**.

   1. For **Deployment strategy**, choose **Rolling update**.

   1. For **Min running tasks**, enter the lower limit on the number of tasks in the service that must remain in the `RUNNING` state during a deployment, as a percentage of the desired number of tasks (rounded up to the nearest integer). For more information, see [Deployment configuration](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service_definition_parameters.html#sd-deploymentconfiguration).

   1. For **Max running tasks**, enter the upper limit on the number of tasks in the service that are allowed in the `RUNNING` or `PENDING` state during a deployment, as a percentage of the desired number of tasks (rounded down to the nearest integer).

1. Expand **Load Balancing**, and then configure the following:

   1. For **Role**, choose the role that you created in the prerequisites with the blue/green permissions.

      For more information, see [Permissions required for Lambda functions in Amazon ECS blue/green deployments](blue-green-permissions.md).

   1. For **Listener**, choose the production listener from your CodeDeploy blue/green deployment.

   1. For **Target group**, choose the production target group from your CodeDeploy blue/green deployment.

1. Choose **Update**.
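The minimum and maximum running task percentages in the procedure above translate to concrete task counts with the rounding rules described there (minimum rounds up, maximum rounds down). The following is a minimal Python sketch of that arithmetic; the function name is illustrative, not part of any AWS SDK:

```python
import math

def deployment_task_bounds(desired_count, min_healthy_percent, max_percent):
    """Compute the task-count limits enforced during a rolling deployment:
    the minimum healthy count rounds up to the nearest integer, and the
    maximum allowed count rounds down."""
    min_running = math.ceil(desired_count * min_healthy_percent / 100)
    max_running = math.floor(desired_count * max_percent / 100)
    return min_running, max_running

# With 3 desired tasks, 100% minimum and 150% maximum, at least 3 tasks
# must stay RUNNING and at most 4 may be RUNNING or PENDING at once.
print(deployment_task_bounds(3, 100, 150))  # → (3, 4)
```

For example, with a desired count of 4, a 50% minimum and 200% maximum allow the scheduler to stop down to 2 tasks or start up to 8 during the update.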

## Next steps
<a name="migrate-code-deploy-to-ecs-rolling-next-steps"></a>

You must update the service for the change to take effect. For more information, see [Updating an Amazon ECS service](update-service-console-v2.md).

# Update the Amazon ECS deployment strategy
<a name="migrate-deployment-strategies"></a>

Amazon ECS supports multiple deployment strategies for updating your services. You can migrate between these strategies based on your application requirements. This topic explains how to migrate between rolling deployments and blue/green deployments.

## Understanding Amazon ECS deployment strategies
<a name="deployment-strategies-overview"></a>

Before migrating between deployment strategies, it's important to understand how each strategy works and their key differences:

**Rolling deployments**  
In a rolling deployment, Amazon ECS replaces the current running version of your application with a new version. The service scheduler uses the minimum and maximum healthy percentage parameters to determine the deployment strategy.  
Rolling deployments are simpler to set up but provide less control over the deployment process and traffic routing.

**Blue/green deployments**  
In a blue/green deployment, Amazon ECS creates a new version of your service (green) alongside the existing version (blue). This allows you to verify the new version before routing production traffic to it.  
Blue/green deployments provide more control over the deployment process, including traffic shifting, testing, and rollback capabilities.

## Best practices
<a name="migration-best-practices"></a>

Follow these best practices when migrating between deployment strategies:
+ **Test in a non-production environment**: Always test the update in a non-production environment before applying changes to production services.
+ **Plan for rollback**: Have a rollback plan in case the update doesn't work as expected.
+ **Monitor during transition**: Closely monitor your service during and after the migration to ensure it continues to operate correctly.
+ **Update documentation**: Update your deployment documentation to reflect the new deployment strategy.
+ **Consider traffic impact**: Understand how the update might impact traffic to your service and plan accordingly.

# Updating the deployment strategy from rolling update to Amazon ECS blue/green
<a name="update-rolling-to-bluegreen"></a>

You can migrate from a rolling update deployment to an Amazon ECS blue/green deployment when you want to make and test service changes before implementing them in a production environment. 

## Prerequisites
<a name="update-rolling-to-bluegreen-prerequisites"></a>

Before migrating your service from rolling to blue/green deployments, do the following:
+ Wait for any in-progress deployments to complete.
+ Verify that the service currently uses the rolling deployment strategy.
+ If you have multiple service revisions serving traffic, Amazon ECS attempts to consolidate traffic to a single revision during migration. If this fails, you might need to manually update your service to use a single revision before migrating.
+ Configure the appropriate permissions.
  + For information about Elastic Load Balancing permissions, see [Amazon ECS infrastructure IAM role for load balancers](AmazonECSInfrastructureRolePolicyForLoadBalancers.md).
  + For information about Lambda permissions, see [Permissions required for Lambda functions in Amazon ECS blue/green deployments](blue-green-permissions.md).
+ Depending on configuration, you need to perform one of the following:
  + If your service uses Elastic Load Balancing, update your service with the new `advancedConfiguration` and start a rolling deployment. 
  + If your service uses Service Connect, update your service and start a rolling deployment. 
  + If your service uses both Elastic Load Balancing and Service Connect, perform both steps above (you can use a single UpdateService request). 
  + If your service uses none of the above, then no additional operation is needed.
+ Amazon ECS blue/green deployments require that your service uses one of the following features. Configure the appropriate resources.
  + Application Load Balancer - For more information, see [Application Load Balancer resources for blue/green, linear, and canary deployments](alb-resources-for-blue-green.md).
  + Network Load Balancer - For more information, see [Network Load Balancer resources for Amazon ECS blue/green, linear and canary deployments](nlb-resources-for-blue-green.md).
  + Service Connect - For more information, see [Service Connect resources for Amazon ECS blue/green, linear, and canary deployments](service-connect-blue-green.md).
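For the Elastic Load Balancing case above, the load balancer entry in the `UpdateService` request gains an `advancedConfiguration` block. The following sketch uses placeholder ARNs, and the field names are assumed to mirror the CloudFormation `AdvancedConfiguration` shape shown elsewhere in this guide, in the camelCase that the API uses:

```
{
  "loadBalancers": [
    {
      "containerName": "DemoApp",
      "containerPort": 80,
      "targetGroupArn": "arn:aws:elasticloadbalancing:region:123456789012:targetgroup/blue/example",
      "advancedConfiguration": {
        "alternateTargetGroupArn": "arn:aws:elasticloadbalancing:region:123456789012:targetgroup/green/example",
        "productionListenerRule": "arn:aws:elasticloadbalancing:region:123456789012:listener-rule/example",
        "roleArn": "arn:aws:iam::123456789012:role/ECSInfrastructureRoleForLoadBalancers"
      }
    }
  ]
}
```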

## Procedure
<a name="update-rolling-to-bluegreen-procedure"></a>

1. Open the Amazon ECS console at [https://console.aws.amazon.com/ecs/v2](https://console.aws.amazon.com/ecs/v2).

1. In the navigation pane, choose **Clusters**.

1. On the **Clusters** page, choose the cluster that contains the service you want to migrate.

   The Cluster details page is displayed.

1. On the **Cluster details** page, choose the **Services** tab.

1. Choose the service, and then choose **Update**.

   The Update service page is displayed.

1. Expand **Deployment options**, and then for **Deployment strategy**, choose **Blue/green**.

1. Configure the blue/green deployment settings:

   1. For **Bake time**, enter the number of minutes that both the blue and green service revisions will run simultaneously before the blue revision is terminated. 

      This allows time for verification and testing.

   1. (Optional) Configure Lambda functions to run at specific stages of the deployment. Under **Deployment lifecycle hooks**, configure Lambda functions for the following stages:
      + **Pre scale up**: Runs before scaling up the green service revision
      + **Post scale up**: Runs after scaling up the green service revision
      + **Test traffic shift**: Runs during test traffic routing to the green service revision
      + **Post test traffic shift**: Runs after test traffic is routed to the green service revision
      + **Production traffic shift**: Runs during production traffic routing to the green service revision
      + **Post production traffic shift**: Runs after production traffic is routed to the green service revision

      To add a lifecycle hook:

      1. Choose **Add**.

      1. For **Lambda function**, enter the function name or ARN.

      1. For **Role**, choose the IAM role that has permission to invoke the Lambda function.

      1. For **Lifecycle stages**, select the stages when the Lambda function should run.

      1. Optional: For **Hook details**, enter key-value pairs to provide additional information to the hook.

1. Configure the load balancer settings:

   1. Under **Load balancing**, verify that your service is configured to use a load balancer.

   1. For **Target group**, choose the primary target group for your production (blue) environment.

   1. For **Alternate target group**, choose the target group for your test (green) environment.

   1. For **Production listener rule**, choose the listener rule for routing production traffic.

   1. Optional: For **Test listener rule**, choose a listener rule for routing test traffic to your green environment.

   1. For **Role**, choose the IAM role that allows Amazon ECS to manage your load balancer.

1. Review your configuration changes, and then choose **Update**.

## Next steps
<a name="update-rolling-to-bluegreen-next-steps"></a>
+ Update the service to start the deployment. For more information, see [Updating an Amazon ECS service](update-service-console-v2.md).
+ Monitor the deployment process to ensure it follows the blue/green pattern:
  + The green service revision is created and scaled up
  + Test traffic is routed to the green revision (if configured)
  + Production traffic is shifted to the green revision
  + After the bake time, the blue revision is terminated
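One way to monitor those stages is to poll `describe-services` and inspect each deployment's `status` and `taskDefinition`. The following Python sketch works on JSON of the shape the CLI returns; the sample response inlined here is illustrative, and the `rolloutState` field may not be present for every deployment type:

```python
import json

# Illustrative describe-services response fragment. Real output comes from
# `aws ecs describe-services --cluster your-cluster --services your-service`.
sample = json.loads("""
{
  "services": [{
    "deployments": [
      {"status": "PRIMARY", "taskDefinition": "ecs-demo:2",
       "rolloutState": "IN_PROGRESS", "runningCount": 1, "desiredCount": 1},
      {"status": "ACTIVE", "taskDefinition": "ecs-demo:1",
       "rolloutState": "COMPLETED", "runningCount": 1, "desiredCount": 1}
    ]
  }]
}
""")

def summarize_deployments(response):
    """Return (status, task definition, rollout state) for each deployment,
    so the new (PRIMARY) and old (ACTIVE) revisions can be compared."""
    return [(d["status"], d["taskDefinition"], d.get("rolloutState"))
            for d in response["services"][0]["deployments"]]

for entry in summarize_deployments(sample):
    print(entry)
```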

# Updating the deployment strategy from Amazon ECS blue/green to rolling update
<a name="update-bluegreen-to-rolling"></a>

You can migrate a blue/green deployment to a rolling update deployment.

Keep the following considerations in mind when migrating to rolling deployments:
+ **Traffic handling**: With rolling deployments, new tasks start receiving traffic as soon as they pass health checks. There is no separate testing phase as with blue/green deployments.
+ **Resource efficiency**: Rolling deployments typically use fewer resources than blue/green deployments because they replace tasks incrementally rather than creating a complete duplicate environment.
+ **Rollback complexity**: Rolling deployments make rollbacks more complex compared to blue/green deployments. If you need to roll back, you must initiate a new deployment with the previous task definition.
+ **Deployment speed**: Rolling deployments may take longer to complete than blue/green deployments, especially for services with many tasks.
+ **Load balancer configuration**: Your existing load balancer configuration will continue to work with rolling deployments, but the traffic shifting behavior will be different.

## Prerequisites
<a name="update-bluegreen-to-rolling-prerequisites"></a>

Before migrating your service from blue/green to rolling deployments, ensure you have the following:
+ An existing Amazon ECS service using the blue/green deployment strategy
+ No ongoing deployments for the service (wait for any current deployments to complete)
+ A clear understanding of how your service will behave with rolling deployments

**Note**  
You cannot migrate a service to rolling deployment if it has an ongoing deployment. Wait for any current deployments to complete before proceeding.

## Migration procedure
<a name="update-bluegreen-to-rolling-procedure"></a>

Follow these steps to migrate your Amazon ECS service from blue/green to rolling deployments:

1. Open the Amazon ECS console at [https://console.aws.amazon.com/ecs/v2](https://console.aws.amazon.com/ecs/v2).

1. In the navigation pane, choose **Clusters**.

1. On the **Clusters** page, choose the cluster that contains the service you want to migrate.

1. On the **Cluster details** page, choose the **Services** tab.

1. Select the service you want to migrate, and then choose **Update**.

1. On the **Update service** page, navigate to the **Deployment options** section and expand it if necessary.

1. For **Deployment strategy**, choose **Rolling update**.

1. Configure the rolling deployment settings:

   1. For **Minimum healthy percent**, enter the minimum percentage of tasks that must remain in the `RUNNING` state during a deployment. This value is specified as a percentage of the desired number of tasks for the service.

   1. For **Maximum percent**, enter the maximum percentage of tasks that are allowed in the `RUNNING` or `PENDING` state during a deployment. This value is specified as a percentage of the desired number of tasks for the service.

1. Optional: Under **Deployment failure detection**, configure how Amazon ECS detects and handles deployment failures:

   1. To enable the deployment circuit breaker, choose **Use the deployment circuit breaker**.

   1. To automatically roll back failed deployments, choose **Rollback on failure**.

1. Review your configuration changes, and then choose **Update** to save your changes and migrate the service to rolling deployment.

Amazon ECS will update your service configuration to use the rolling deployment strategy. The next time you update your service, it will use the rolling deployment process.
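If you manage the service outside the console, the equivalent change is made in the service's deployment configuration. The following sketch uses the CloudFormation form shown earlier in this guide and assumes `ROLLING` as the strategy value (the counterpart of the `BLUE_GREEN` value used in the blue/green examples):

```
{
  "DeploymentConfiguration": {
    "Strategy": "ROLLING",
    "MinimumHealthyPercent": 100,
    "MaximumPercent": 200,
    "DeploymentCircuitBreaker": {
      "Enable": true,
      "Rollback": true
    }
  }
}
```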

**Note**  
When you migrate from blue/green to rolling deployment, Amazon ECS handles the transition by:  
+ Identifying the current active service revision that is serving traffic.
+ Maintaining the existing load balancer configuration, but changing how new deployments are handled.
+ Preparing the service for future rolling deployments.

## Next steps
<a name="update-bluegreen-to-rolling-next-steps"></a>
+ Update the service to start the deployment. For more information, see [Updating an Amazon ECS service](update-service-console-v2.md).

# Troubleshooting Amazon ECS deployment strategy updates
<a name="troubleshooting-deployment-controller-migration"></a>

This section provides solutions for common issues you might encounter when migrating deployment strategies.

## Multiple service revisions or task sets
<a name="troubleshooting-multiple-task-sets"></a>

The following issues relate to having multiple service revisions for a deployment.

Multiple task sets when updating to the `ECS` deployment controller  
*Error message*: `Updating the deployment controller is not supported when there are multiple tasksets in the service. Please ensure your service has only one taskset and try again.`  
*Solution*: This error occurs when attempting to change the deployment controller type of a service with multiple active task sets. To resolve this issue for the `CODE_DEPLOY` or `EXTERNAL` deployment controller:  

1. Check current task sets:

   ```
   aws ecs describe-services --cluster your-cluster-name --services your-service-name --query "services[0].taskSets"
   ```

1. Wait for any in-progress deployments to complete.

1. Force a new deployment to clean up task sets:

   ```
   aws ecs update-service --cluster your-cluster-name --service your-service-name --force-new-deployment
   ```

1. If necessary, delete extra task sets manually:

   ```
   aws ecs delete-task-set --cluster your-cluster-name --service your-service-name --task-set task-set-id
   ```

1. After only one task set remains, retry updating the deployment controller.
For more information, see [Amazon ECS service deployment controllers and strategies](ecs_service-options.md).

Missing primary task set when updating `ECS` deployment controller  
*Error message*: `Updating the deployment controller requires a primary taskset in the service. Please ensure your service has a primary taskset and try again.`  
*Solution*: This error occurs when attempting to change the deployment controller type of a service that doesn't have a primary task set. To resolve this issue:  

1. Verify the service status and task sets. If a task set exists in the service, it should be marked as `ACTIVE`.

   ```
   aws ecs describe-services --cluster your-cluster-name --services your-service-name --query "services[0].taskSets[*].[status,id]"
   ```

   If there are no task sets in the `ACTIVE` state, migrate the deployment. For more information, see [Migration approaches](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/migrate-codedeploy-to-ecs-bluegreen.html#migration-paths).

1. Set an `ACTIVE` task set as the service's primary task set:

   ```
   aws ecs update-service-primary-task-set --cluster your-cluster-name --service your-service-name --primary-task-set your-taskset-id
   ```

   This marks the previously `ACTIVE` task set in the service as the `PRIMARY` task set.

1. Wait for the task to reach a stable running state. You can check the status with:

   ```
   aws ecs describe-services --cluster your-cluster-name --services your-service-name --query "services[0].deployments"
   ```

1. After the service has a primary task set with running tasks, retry updating the deployment controller.
For more information, see [Amazon ECS service deployment controllers and strategies](ecs_service-options.md).

## Mismatch between the deployment failure detection type and deployment controller
<a name="troubleshooting-failure-detection"></a>

The following issues relate to a mismatch between the deployment failure detection type and deployment controller.

Deployment circuit breaker with non-ECS controller  
*Error message*: `Deployment circuit breaker feature is only supported with ECS deployment controller. Update to ECS deployment controller and try again.`  
*Solution*: This error occurs when attempting to enable the deployment circuit breaker feature on a service that is not using the `ECS` deployment controller. The deployment circuit breaker is only compatible with the `ECS` deployment controller.  

1. Check your service's current deployment controller:

   ```
   aws ecs describe-services --cluster your-cluster-name --services your-service-name --query "services[0].deploymentController"
   ```

1. Update your service to use the `ECS` deployment controller:

   ```
   aws ecs update-service --cluster your-cluster-name --service your-service-name --deployment-controller type=ECS
   ```

1. After the service is using the `ECS` deployment controller, enable the deployment circuit breaker:

   ```
   aws ecs update-service --cluster your-cluster-name --service your-service-name --deployment-configuration "deploymentCircuitBreaker={enable=true,rollback=true}"
   ```
For more information, see [How the Amazon ECS deployment circuit breaker detects failures](deployment-circuit-breaker.md).

Alarm-based rollback with non-ECS controller  
*Error message*: `Alarm based rollback feature is only supported with ECS deployment controller. Update to ECS deployment controller and try again.`  
*Solution*: This error occurs when attempting to configure alarm-based rollback on a service that is not using the `ECS` deployment controller. The alarm-based rollback feature is only compatible with the `ECS` deployment controller.  

1. Check your service's current deployment controller:

   ```
   aws ecs describe-services --cluster your-cluster-name --services your-service-name --query "services[0].deploymentController"
   ```

1. Update your service to use the `ECS` deployment controller:

   ```
   aws ecs update-service --cluster your-cluster-name --service your-service-name --deployment-controller type=ECS
   ```

1. After the service is using the `ECS` deployment controller, configure alarm-based rollback:

   ```
   aws ecs update-service --cluster your-cluster-name --service your-service-name --deployment-configuration "alarms={alarmNames=[your-alarm-name],enable=true,rollback=true}"
   ```
For more information, see [How CloudWatch alarms detect Amazon ECS deployment failures](deployment-alarm-failure.md).

## Mismatch between Service Connect and the deployment controller
<a name="troubleshooting-service-connect-mismatch"></a>

The following issues relate to a mismatch between Service Connect and the deployment controller.

`EXTERNAL` controller with Service Connect  
*Error message*: `The EXTERNAL deployment controller type is not supported for services using Service Connect.`  
*Solution*: This error occurs when attempting to use the `EXTERNAL` deployment controller with a service that has Service Connect enabled. The `EXTERNAL` controller is not compatible with Service Connect.  

1. Check if your service has Service Connect enabled:

   ```
   aws ecs describe-services --cluster your-cluster-name --services your-service-name --query "services[0].serviceConnectConfiguration"
   ```

1. If you need to use the `EXTERNAL` deployment controller, disable Service Connect by updating your service:

   ```
   aws ecs update-service --cluster your-cluster-name --service your-service-name --service-connect-configuration "enabled=false"
   ```

1. Alternatively, if you must use Service Connect, use the `ECS` deployment controller instead:

   ```
   aws ecs update-service --cluster your-cluster-name --service your-service-name --deployment-controller type=ECS
   ```
For more information, see [Amazon ECS service deployment controllers and strategies](ecs_service-options.md).

Service Connect with non-ECS controller  
*Error message*: `Service Connect feature is only supported with ECS (rolling update) deployment controller. Update to ECS deployment controller and try again.`  
*Solution*: This error occurs when attempting to enable Service Connect on a service that is not using the `ECS` deployment controller. The Service Connect feature is only compatible with the `ECS` deployment controller.  

1. Check your service's current deployment controller:

   ```
   aws ecs describe-services --cluster your-cluster-name --services your-service-name --query "services[0].deploymentController"
   ```

1. Update your service to use the `ECS` deployment controller:

   ```
   aws ecs update-service --cluster your-cluster-name --service your-service-name --deployment-controller type=ECS
   ```

1. After the service is using the `ECS` deployment controller, enable Service Connect:

   ```
   aws ecs update-service --cluster your-cluster-name --service your-service-name --service-connect-configuration "enabled=true,namespace=your-namespace"
   ```
For more information, see [Amazon ECS service deployment controllers and strategies](ecs_service-options.md).
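The rule behind both errors in this section can be summarized as a small compatibility check. This is an illustrative Python sketch, not part of the AWS CLI:

```python
def service_connect_allowed(controller_type: str, service_connect_enabled: bool) -> bool:
    """Service Connect works only with the ECS (rolling update) deployment
    controller; any controller is fine when Service Connect is disabled."""
    if not service_connect_enabled:
        return True
    return controller_type == "ECS"

# The EXTERNAL controller with Service Connect enabled triggers the errors above.
print(service_connect_allowed("EXTERNAL", True))
print(service_connect_allowed("ECS", True))
```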

## Mismatch between controller type and scheduling strategy
<a name="troubleshooting-controller-type-scheduling"></a>

The following issues relate to a mismatch between controller type and scheduling strategy.

`CODE_DEPLOY` controller with `DAEMON` scheduling strategy  
*Error message*: `The CODE_DEPLOY deployment controller type is not supported for services using the DAEMON scheduling strategy.`  
*Solution*: This error occurs when attempting to use the `CODE_DEPLOY` deployment controller with a service that uses the `DAEMON` scheduling strategy. The `CODE_DEPLOY` controller is only compatible with the `REPLICA` scheduling strategy.  

1. Check your service's current scheduling strategy:

   ```
   aws ecs describe-services --cluster your-cluster-name --services your-service-name --query "services[0].schedulingStrategy"
   ```

1. If you need blue/green deployments, change your service to use the `REPLICA` scheduling strategy:

   ```
   aws ecs update-service --cluster your-cluster-name --service your-service-name --scheduling-strategy REPLICA
   ```

1. Alternatively, if you must use the `DAEMON` scheduling strategy, use the `ECS` deployment controller instead:

   ```
   aws ecs update-service --cluster your-cluster-name --service your-service-name --deployment-controller type=ECS
   ```
For more information, see [Amazon ECS service deployment controllers and strategies](ecs_service-options.md).

`EXTERNAL` controller with `DAEMON` scheduling strategy  
*Error message*: `The EXTERNAL deployment controller type is not supported for services using the DAEMON scheduling strategy.`  
*Solution*: This error occurs when attempting to use the `EXTERNAL` deployment controller with an ECS service that uses the `DAEMON` scheduling strategy. The `EXTERNAL` controller is only compatible with the `REPLICA` scheduling strategy.  

1. Check your service's current scheduling strategy:

   ```
   aws ecs describe-services --cluster your-cluster-name --services your-service-name --query "services[0].schedulingStrategy"
   ```

1. If you need to use the `EXTERNAL` deployment controller, change your service to use the `REPLICA` scheduling strategy:

   ```
   aws ecs update-service --cluster your-cluster-name --service your-service-name --scheduling-strategy REPLICA
   ```

1. Alternatively, if you must use the `DAEMON` scheduling strategy, use the `ECS` deployment controller instead:

   ```
   aws ecs update-service --cluster your-cluster-name --service your-service-name --deployment-controller type=ECS
   ```
For more information, see [Amazon ECS service deployment controllers and strategies](ecs_service-options.md).
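The compatibility matrix behind the two scheduling-strategy errors above can be sketched as a single check (illustrative Python, not part of the AWS CLI):

```python
def controller_supports_strategy(controller_type: str, scheduling_strategy: str) -> bool:
    # CODE_DEPLOY and EXTERNAL require the REPLICA scheduling strategy;
    # the ECS (rolling update) controller supports both REPLICA and DAEMON.
    if controller_type in ("CODE_DEPLOY", "EXTERNAL"):
        return scheduling_strategy == "REPLICA"
    return scheduling_strategy in ("REPLICA", "DAEMON")

# Both error cases in this section come from pairing a non-ECS controller
# with the DAEMON strategy.
print(controller_supports_strategy("CODE_DEPLOY", "DAEMON"))
print(controller_supports_strategy("ECS", "DAEMON"))
```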

Service registries with external launch type  
*Error message*: `Service registries are not supported for external launch type.`  
*Solution*: This error occurs when attempting to configure service discovery (service registries) for a service that uses the `EXTERNAL` launch type. Service discovery is not compatible with the `EXTERNAL` launch type.  

1. Check your service's current launch type:

   ```
   aws ecs describe-services --cluster your-cluster-name --services your-service-name --query "services[0].launchType"
   ```

1. If you need service discovery, recreate the service with the `EC2` or `FARGATE` launch type. The launch type of an existing service can't be changed with `update-service`, so delete the service and create a new one:

   ```
   aws ecs delete-service --cluster your-cluster-name --service your-service-name --force
   aws ecs create-service --cluster your-cluster-name --service-name your-service-name --task-definition your-task-definition --desired-count 1 --launch-type FARGATE --network-configuration "awsvpcConfiguration={subnets=[your-subnet-id],securityGroups=[your-security-group-id]}"
   ```

1. Alternatively, if you must use the `EXTERNAL` launch type, remove the service registry configuration:

   ```
   aws ecs update-service --cluster your-cluster-name --service your-service-name --service-registries "[]"
   ```
For more information, see [Amazon ECS service deployment controllers and strategies](ecs_service-options.md).
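As with the other mismatches in this section, the service-registry restriction reduces to a one-line check (illustrative Python, not part of the AWS CLI):

```python
def service_registries_allowed(launch_type: str) -> bool:
    # Service discovery (service registries) is supported for the EC2 and
    # FARGATE launch types, but not for EXTERNAL.
    return launch_type in ("EC2", "FARGATE")

print(service_registries_allowed("EXTERNAL"))
print(service_registries_allowed("FARGATE"))
```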

## Revert a deployment controller update
<a name="troubleshooting-revert"></a>

If you decide that you want to return to the previous deployment controller, you can do one of the following:
+ If you used CloudFormation, you can use the previous template to create a new stack. For more information, see [Creating a stack](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-console-create-stack.html) in the *CloudFormation User Guide*.
+ If you used the Amazon ECS console, or the AWS CLI, you can update the service. For more information, see [Updating an Amazon ECS service](update-service-console-v2.md).

  If you use the `update-service` command, set the `--deployment-controller` option to the previous deployment controller type.