

# Integration with other AWS services
<a name="integrations-aws"></a>

CodeDeploy is integrated with the following AWS services:


| Service | Description | 
| --- |--- |
| Amazon CloudWatch |  [Amazon CloudWatch](https://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/) is a monitoring service for AWS cloud resources and the applications you run on AWS. You can use Amazon CloudWatch to collect and track metrics, collect and monitor log files, and set alarms. CodeDeploy supports the following CloudWatch tools:  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/codedeploy/latest/userguide/integrations-aws.html)  | 
| Amazon EC2 Auto Scaling |  CodeDeploy supports [Amazon EC2 Auto Scaling](https://aws.amazon.com/autoscaling). This AWS service can automatically launch Amazon EC2 instances based on criteria you specify, for example:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/codedeploy/latest/userguide/integrations-aws.html) You can scale out a group of Amazon EC2 instances whenever you need them and then use CodeDeploy to deploy application revisions to them automatically. Amazon EC2 Auto Scaling terminates those Amazon EC2 instances when they are no longer needed. Learn more: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/codedeploy/latest/userguide/integrations-aws.html)  | 
| Amazon Elastic Container Service |   You can use CodeDeploy to deploy an Amazon ECS containerized application as a task set. CodeDeploy performs a blue/green deployment by installing an updated version of the application as a new replacement task set. CodeDeploy reroutes production traffic from the original application task set to the replacement task set. The original task set is terminated after a successful deployment. For more information about Amazon ECS, see [Amazon Elastic Container Service](https://aws.amazon.com/ecs/).  You can manage the way in which traffic is shifted to the updated task set during a deployment by choosing a canary, linear, or all-at-once configuration. For more information about Amazon ECS deployments, see [Deployments on an Amazon ECS compute platform](https://docs.aws.amazon.com/en_us/codedeploy/latest/userguide/deployment-steps-ecs.html).   | 
| AWS CloudTrail |  CodeDeploy is integrated with [AWS CloudTrail](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/). This service captures API calls made by or on behalf of CodeDeploy in your AWS account and delivers the log files to an Amazon S3 bucket you specify. CloudTrail captures API calls from the CodeDeploy console, from CodeDeploy commands through the AWS CLI, or from the CodeDeploy APIs directly. Using the information collected by CloudTrail, you can determine: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/codedeploy/latest/userguide/integrations-aws.html) Learn more: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/codedeploy/latest/userguide/integrations-aws.html)  | 
| AWS Cloud9 |  [AWS Cloud9](https://docs.aws.amazon.com/cloud9/latest/user-guide/) is an online, cloud-based integrated development environment (IDE) you can use to write, run, debug, and deploy code using just a browser from an internet-connected machine. AWS Cloud9 includes a code editor, debugger, terminal, and essential tools, such as the AWS CLI and Git. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/codedeploy/latest/userguide/integrations-aws.html) For more information about AWS Cloud9, see [What is AWS Cloud9?](https://docs.aws.amazon.com/cloud9/latest/user-guide/welcome.html) and [Getting started with AWS Cloud9](https://docs.aws.amazon.com/cloud9/latest/user-guide/get-started.html).  | 
| AWS CodePipeline |  [AWS CodePipeline](https://docs.aws.amazon.com/codepipeline/latest/userguide/) is a continuous delivery service you can use to model, visualize, and automate the steps required to release your software in a continuous delivery process. You can use AWS CodePipeline to define your own release process so that the service builds, tests, and deploys your code every time there is a code change. For example, you might have three deployment groups for an application: Beta, Gamma, and Prod. You can set up a pipeline so that each time there is a change in your source code, the updates are deployed to each deployment group, one by one. You can configure AWS CodePipeline to use CodeDeploy to deploy: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/codedeploy/latest/userguide/integrations-aws.html)  You can create the CodeDeploy application, deployment, and deployment group to use in a deploy action in a stage either before you create the pipeline or in the **Create Pipeline** wizard. Learn more: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/codedeploy/latest/userguide/integrations-aws.html)  | 
| AWS Serverless Application Model |  AWS Serverless Application Model (AWS SAM) is a framework for defining serverless applications. It extends AWS CloudFormation to provide a simplified way of defining the AWS Lambda functions, Amazon API Gateway APIs, and Amazon DynamoDB tables required by a serverless application. If you already use AWS SAM, you can add deployment preferences to start using CodeDeploy to manage the way in which traffic is shifted during an AWS Lambda application deployment. For more information, see the [AWS Serverless Application Model](https://github.com/awslabs/serverless-application-model).  | 
| Elastic Load Balancing |  CodeDeploy supports [Elastic Load Balancing](https://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/elastic-load-balancing.html), a service that distributes incoming application traffic across multiple Amazon EC2 instances.  For CodeDeploy deployments, load balancers also prevent traffic from being routed to instances when they are not ready, are currently being deployed to, or are no longer needed as part of an environment. Learn more: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/codedeploy/latest/userguide/integrations-aws.html)  | 

**Topics**
+ [Amazon EC2 Auto Scaling](integrations-aws-auto-scaling.md)
+ [Integrating CodeDeploy with Elastic Load Balancing](integrations-aws-elastic-load-balancing.md)

# Integrating CodeDeploy with Amazon EC2 Auto Scaling
<a name="integrations-aws-auto-scaling"></a>

CodeDeploy supports Amazon EC2 Auto Scaling, an AWS service that launches Amazon EC2 instances automatically according to conditions you define. These conditions can include limits exceeded in a specified time interval for CPU utilization, disk reads or writes, or inbound or outbound network traffic. Amazon EC2 Auto Scaling terminates the instances when they are no longer needed. For more information, see [What is Amazon EC2 Auto Scaling?](https://docs.aws.amazon.com/autoscaling/latest/userguide/WhatIsAutoScaling.html) in the *Amazon EC2 Auto Scaling User Guide*.

When new Amazon EC2 instances are launched as part of an Amazon EC2 Auto Scaling group, CodeDeploy can deploy your revisions to the new instances automatically. You can also coordinate deployments in CodeDeploy with Amazon EC2 Auto Scaling instances registered with Elastic Load Balancing load balancers. For more information, see [Integrating CodeDeploy with Elastic Load Balancing](integrations-aws-elastic-load-balancing.md) and [Set up a load balancer in Elastic Load Balancing for CodeDeploy Amazon EC2 deployments](deployment-groups-create-load-balancer.md).

**Note**  
You might encounter issues if you associate multiple deployment groups with a single Amazon EC2 Auto Scaling group. If one deployment fails, for example, the instance will begin to shut down, but the other deployments that were running can take an hour to time out. For more information, see [Avoid associating multiple deployment groups with a single Amazon EC2 Auto Scaling group](troubleshooting-auto-scaling.md#troubleshooting-multiple-depgroups) and [Under the hood: CodeDeploy and Amazon EC2 Auto Scaling integration](https://aws.amazon.com/blogs/devops/under-the-hood-aws-codedeploy-and-auto-scaling-integration/).

**Topics**
+ [Deploying CodeDeploy applications to Amazon EC2 Auto Scaling groups](#integrations-aws-auto-scaling-deploy)
+ [Enabling termination deployments during Auto Scaling scale-in events](#integrations-aws-auto-scaling-behaviors-hook-enable)
+ [How Amazon EC2 Auto Scaling works with CodeDeploy](#integrations-aws-auto-scaling-behaviors)
+ [Using a custom AMI with CodeDeploy and Amazon EC2 Auto Scaling](#integrations-aws-auto-scaling-custom-ami)

## Deploying CodeDeploy applications to Amazon EC2 Auto Scaling groups
<a name="integrations-aws-auto-scaling-deploy"></a>

To deploy a CodeDeploy application revision to an Amazon EC2 Auto Scaling group:

1. Create or locate an IAM instance profile that allows the Amazon EC2 Auto Scaling group to work with Amazon S3. For more information, see [Step 4: Create an IAM instance profile for your Amazon EC2 instances](getting-started-create-iam-instance-profile.md).
**Note**  
You can also use CodeDeploy to deploy revisions from GitHub repositories to Amazon EC2 Auto Scaling groups. Although Amazon EC2 instances still require an IAM instance profile, the profile doesn't need any additional permissions to deploy from a GitHub repository. 

1. Create or use an Amazon EC2 Auto Scaling group, specifying the IAM instance profile in your launch configuration or launch template. For more information, see [IAM role for applications that run on Amazon EC2 instances](https://docs.aws.amazon.com/autoscaling/ec2/userguide/us-iam-role.html).

1. Create or locate a service role that allows CodeDeploy to create a deployment group that contains the Amazon EC2 Auto Scaling group.

1. Create a deployment group with CodeDeploy, specifying the Amazon EC2 Auto Scaling group name, the service role, and a few other options. For more information, see [Create a deployment group for an in-place deployment (console)](deployment-groups-create-in-place.md) or [Create a deployment group for an EC2/On-Premises blue/green deployment (console)](deployment-groups-create-blue-green.md).

1. Use CodeDeploy to deploy your revision to the deployment group that contains the Amazon EC2 Auto Scaling group.

For more information, see [Tutorial: Use CodeDeploy to deploy an application to an Auto Scaling group](tutorials-auto-scaling-group.md).
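The steps above can be sketched with the AWS CLI. The application, deployment group, Auto Scaling group, role, and bucket names below are placeholders, and this sketch assumes the application, Auto Scaling group, and service role already exist:

```
# Create a deployment group that targets an Auto Scaling group.
aws deploy create-deployment-group \
  --application-name MyApp \
  --deployment-group-name MyApp-asg-group \
  --auto-scaling-groups my-asg \
  --deployment-config-name CodeDeployDefault.OneAtATime \
  --service-role-arn arn:aws:iam::123456789012:role/CodeDeployServiceRole

# Deploy a revision stored in Amazon S3 to that deployment group.
aws deploy create-deployment \
  --application-name MyApp \
  --deployment-group-name MyApp-asg-group \
  --s3-location bucket=my-bucket,key=myapp.zip,bundleType=zip
```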

## Enabling termination deployments during Auto Scaling scale-in events
<a name="integrations-aws-auto-scaling-behaviors-hook-enable"></a>

A *termination deployment* is a type of CodeDeploy deployment that is activated automatically when an Auto Scaling [scale-in event](https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-lifecycle.html#as-lifecycle-scale-in) occurs. CodeDeploy performs the termination deployment right before the Auto Scaling service terminates the instance. During a termination deployment, CodeDeploy doesn't deploy anything. Instead, it generates lifecycle events, which you can hook up to your own scripts to enable custom shutdown functionality. For example, you could hook up the `ApplicationStop` lifecycle event to a script that shuts down your application gracefully before the instance is terminated. 
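As a minimal sketch, an `appspec.yml` fragment that wires the `ApplicationStop` lifecycle event to a shutdown script might look like the following. The script path and name are hypothetical; the script must be included in your revision bundle:

```yaml
version: 0.0
os: linux
hooks:
  ApplicationStop:
    # Hypothetical script that stops your application gracefully
    # before the instance is terminated.
    - location: scripts/stop_application.sh
      timeout: 300
      runas: root
```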

For a list of lifecycle events that CodeDeploy generates during a termination deployment, see [Lifecycle event hook availability](reference-appspec-file-structure-hooks.md#reference-appspec-file-structure-hooks-availability). 

If the termination deployment fails for any reason, CodeDeploy allows the instance termination to proceed. This means the instance shuts down even though CodeDeploy did not run all (or any) of the lifecycle events to completion.

If you don't enable termination deployments, the Auto Scaling service will still terminate Amazon EC2 instances when a scale-in event occurs, but CodeDeploy will not generate lifecycle events.

**Note**  
Whether or not you enable termination deployments, if the Auto Scaling service terminates an Amazon EC2 instance while a CodeDeploy deployment is underway, a race condition can occur between the lifecycle events generated by the Auto Scaling and CodeDeploy services. For example, the `Terminating` lifecycle event (generated by Auto Scaling) might override the `ApplicationStart` event (generated by CodeDeploy). In this scenario, either the Amazon EC2 instance termination or the CodeDeploy deployment might fail.

**To enable CodeDeploy to perform termination deployments**
+ Select the **Add a termination hook to Auto Scaling groups** check box when creating or updating your deployment group. For instructions, see [Create a deployment group for an in-place deployment (console)](deployment-groups-create-in-place.md), or [Create a deployment group for an EC2/On-Premises blue/green deployment (console)](deployment-groups-create-blue-green.md).

  Enabling this check box causes CodeDeploy to install an [Auto Scaling lifecycle hook](https://docs.aws.amazon.com/autoscaling/ec2/userguide/lifecycle-hooks.html) into the Auto Scaling groups that you specify when you create or update your CodeDeploy deployment group. This hook is called the *termination hook* and enables termination deployments.

**After the termination hook is installed, a scale-in (termination) event unfolds as follows:**

1. The Auto Scaling service (or simply, Auto Scaling) determines that a scale-in event needs to occur, and contacts the EC2 service to terminate an EC2 instance.

1. The EC2 service starts terminating the EC2 instance. The instance moves into the `Terminating` state, and then into the `Terminating:Wait` state. 

1. During `Terminating:Wait`, Auto Scaling runs all the lifecycle hooks attached to the Auto Scaling group, including the termination hook installed by CodeDeploy.

1. The termination hook sends a notification to the [Amazon SQS queue](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/welcome.html) that is polled by CodeDeploy.

1. Upon receiving the notification, CodeDeploy parses the message, validates it, and performs a [termination deployment](#integrations-aws-auto-scaling-behaviors-hook-enable).

1. While the termination deployment is running, CodeDeploy sends heartbeats every five minutes to Auto Scaling to let it know that the instance is still being worked on.

1. Throughout this process, the EC2 instance remains in the `Terminating:Wait` state (or the `Warmed:Pending:Wait` state, if you've enabled [Auto Scaling group warm pools](https://docs.aws.amazon.com/autoscaling/ec2/userguide/warm-pool-instance-lifecycle.html)).

1. When the deployment completes, CodeDeploy signals Auto Scaling to `CONTINUE` the EC2 termination process, regardless of whether the termination deployment succeeded or failed.

## How Amazon EC2 Auto Scaling works with CodeDeploy
<a name="integrations-aws-auto-scaling-behaviors"></a>

When you create or update a CodeDeploy deployment group to include an Auto Scaling group, CodeDeploy accesses the Auto Scaling group using the CodeDeploy service role, and then installs [Auto Scaling lifecycle hooks](https://docs.aws.amazon.com/autoscaling/ec2/userguide/lifecycle-hooks.html) into your Auto Scaling groups.

**Note**  
*Auto Scaling lifecycle hooks* are different from the *lifecycle events* (also called *lifecycle event hooks*) generated by CodeDeploy and described in the [AppSpec 'hooks' section](reference-appspec-file-structure-hooks.md) of this guide.

The Auto Scaling lifecycle hooks that CodeDeploy installs are:
+ **A launch hook** — This hook notifies CodeDeploy that an Auto Scaling [scale-out event](https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-lifecycle.html#as-lifecycle-scale-out) is in progress, and that CodeDeploy needs to start a launch deployment.

  During a *launch deployment*, CodeDeploy:
  + Deploys a revision of your application to the scaled-out instance.
  + Generates lifecycle events to indicate the progress of the deployment. You can hook up these lifecycle events to your own scripts to enable custom startup functionality. For more information, see the table in [Lifecycle event hook availability](reference-appspec-file-structure-hooks.md#reference-appspec-file-structure-hooks-availability).

  The launch hook and associated launch deployment are always enabled and cannot be turned off.
+ **A termination hook** — This optional hook notifies CodeDeploy that an Auto Scaling [scale-in event](https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-lifecycle.html#as-lifecycle-scale-in) is in progress, and that CodeDeploy needs to start a termination deployment.

  During a *termination deployment*, CodeDeploy generates lifecycle events to indicate the progress of the instance shutdown. For more information, see [Enabling termination deployments during Auto Scaling scale-in events](#integrations-aws-auto-scaling-behaviors-hook-enable).

**Topics**
+ [After CodeDeploy installs the lifecycle hooks, how are they used?](#integrations-aws-auto-scaling-behaviors-hook-usage)
+ [How CodeDeploy names Amazon EC2 Auto Scaling groups](#integrations-aws-auto-scaling-behaviors-naming)
+ [Execution order of custom lifecycle hook events](#integrations-aws-auto-scaling-behaviors-hook-order)
+ [Scale-out events during a deployment](#integrations-aws-auto-scaling-behaviors-mixed-environment)
+ [Scale-in events during a deployment](#integrations-aws-auto-scaling-behaviors-scale-in)
+ [Order of events in AWS CloudFormation cfn-init scripts](#integrations-aws-auto-scaling-behaviors-event-order)

### After CodeDeploy installs the lifecycle hooks, how are they used?
<a name="integrations-aws-auto-scaling-behaviors-hook-usage"></a>

After the launch and termination lifecycle hooks are installed, they are used by CodeDeploy during Auto Scaling group scale-out and scale-in events, respectively.

**A scale-out (launch) event unfolds as follows:**

1. The Auto Scaling service (or simply, Auto Scaling) determines that a scale-out event needs to occur, and contacts the EC2 service to launch a new EC2 instance.

1. The EC2 service launches a new EC2 instance. The instance moves into the `Pending` state, and then into the `Pending:Wait` state. 

1. During `Pending:Wait`, Auto Scaling runs all the lifecycle hooks attached to the Auto Scaling group, including the launch hook installed by CodeDeploy.

1. The launch hook sends a notification to the [Amazon SQS queue](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/welcome.html) that is polled by CodeDeploy.

1. Upon receiving the notification, CodeDeploy parses the message, validates it, and starts a [launch deployment](#integrations-aws-auto-scaling-behaviors).

1. While the launch deployment is running, CodeDeploy sends heartbeats every five minutes to Auto Scaling to let it know that the instance is still being worked on.

1. Throughout this process, the EC2 instance remains in the `Pending:Wait` state.

1. When the deployment completes, CodeDeploy signals Auto Scaling to either `CONTINUE` or `ABANDON` the EC2 launch process, depending on whether the deployment succeeded or failed.
   + If CodeDeploy indicates `CONTINUE`, Auto Scaling continues the launch process, either waiting for other hooks to complete, or putting the instance into the `Pending:Proceed` and then the `InService` state.
   + If CodeDeploy indicates `ABANDON`, Auto Scaling terminates the EC2 instance, and restarts the launch procedure if needed to meet the desired number of instances, as defined in the Auto Scaling **Desired Capacity** setting.
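The `CONTINUE`/`ABANDON` outcomes described above can be summarized as a small decision function. This is an illustrative sketch, not the CodeDeploy service's actual code:

```python
def lifecycle_action_result(hook: str, deployment_succeeded: bool) -> str:
    """Sketch of the documented outcomes.

    Launch hook: CONTINUE on success; ABANDON on failure (the instance
    is terminated and relaunched to meet desired capacity).
    Termination hook: always CONTINUE, even if the termination
    deployment failed.
    """
    if hook == "launch":
        return "CONTINUE" if deployment_succeeded else "ABANDON"
    if hook == "termination":
        return "CONTINUE"
    raise ValueError(f"unknown hook: {hook}")


print(lifecycle_action_result("launch", False))       # ABANDON
print(lifecycle_action_result("termination", False))  # CONTINUE
```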

**A scale-in (termination) event unfolds as follows:**

See [Enabling termination deployments during Auto Scaling scale-in events](#integrations-aws-auto-scaling-behaviors-hook-enable).

### How CodeDeploy names Amazon EC2 Auto Scaling groups
<a name="integrations-aws-auto-scaling-behaviors-naming"></a>

During blue/green deployments on the EC2/On-Premises compute platform, you have two options for adding instances to your replacement (green) environment:
+ Use instances that already exist or that you create manually.
+ Use settings from an Amazon EC2 Auto Scaling group that you specify to define and create instances in a new Amazon EC2 Auto Scaling group.

If you choose the second option, CodeDeploy provisions a new Amazon EC2 Auto Scaling group for you. It uses the following convention to name the group:

```
CodeDeploy_deployment_group_name_deployment_id
```

For example, if a deployment with ID `10` deploys a deployment group named `alpha-deployments`, the provisioned Amazon EC2 Auto Scaling group is named `CodeDeploy_alpha-deployments_10`. For more information, see [Create a deployment group for an EC2/On-Premises blue/green deployment (console)](deployment-groups-create-blue-green.md) and [GreenFleetProvisioningOption](https://docs.aws.amazon.com/codedeploy/latest/APIReference/API_GreenFleetProvisioningOption.html).
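The naming convention can be expressed as a one-line helper (an illustrative sketch, not a CodeDeploy API):

```python
def provisioned_asg_name(deployment_group_name: str, deployment_id: str) -> str:
    """Build the name CodeDeploy gives a provisioned Auto Scaling group:
    CodeDeploy_<deployment group name>_<deployment ID>."""
    return f"CodeDeploy_{deployment_group_name}_{deployment_id}"


print(provisioned_asg_name("alpha-deployments", "10"))  # CodeDeploy_alpha-deployments_10
```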

### Execution order of custom lifecycle hook events
<a name="integrations-aws-auto-scaling-behaviors-hook-order"></a>

You can add your own lifecycle hooks to Amazon EC2 Auto Scaling groups to which CodeDeploy deploys. However, the order in which those custom lifecycle hook events are executed cannot be predetermined in relation to CodeDeploy default deployment lifecycle events. For example, if you add a custom lifecycle hook named `ReadyForSoftwareInstall` to an Amazon EC2 Auto Scaling group, you cannot know beforehand whether it will be executed before the first, or after the last, CodeDeploy default deployment lifecycle event.

To learn how to add custom lifecycle hooks to an Amazon EC2 Auto Scaling group, see [Adding lifecycle hooks](https://docs.aws.amazon.com/autoscaling/latest/userguide/lifecycle-hooks.html#adding-lifecycle-hooks) in the *Amazon EC2 Auto Scaling User Guide*.

### Scale-out events during a deployment
<a name="integrations-aws-auto-scaling-behaviors-mixed-environment"></a>

If an Auto Scaling scale-out event occurs while a deployment is underway, the new instances will be updated with the application revision that was previously deployed, not the newest application revision. If the deployment succeeds, the old instances and the newly scaled-out instances will be hosting different application revisions. To bring the instances with the older revision up to date, CodeDeploy automatically starts a follow-on deployment (immediately after the first) to update any outdated instances. If you'd like to change this default behavior so that outdated EC2 instances are left at the older revision, see [Automatic updates to outdated instances](deployment-groups-configure-advanced-options.md#auto-updates-outdated-instances).

If you want to suspend Amazon EC2 Auto Scaling scale-out processes while deployments are taking place, you can do so through a setting in the `common_functions.sh` script that is used for load balancing with CodeDeploy. If `HANDLE_PROCS=true`, the following Auto Scaling processes are suspended automatically during the deployment process: 
+ AZRebalance
+ AlarmNotification
+ ScheduledActions
+ ReplaceUnhealthy

**Important**  
Only the CodeDeployDefault.OneAtATime deployment configuration supports this functionality.

For more information about using `HANDLE_PROCS=true` to avoid deployment problems when using Amazon EC2 Auto Scaling, see [Important notice about handling AutoScaling processes](https://github.com/awslabs/aws-codedeploy-samples/tree/master/load-balancing/elb#important-notice-about-handling-autoscaling-processes) in [aws-codedeploy-samples](https://github.com/awslabs/aws-codedeploy-samples) on GitHub.

### Scale-in events during a deployment
<a name="integrations-aws-auto-scaling-behaviors-scale-in"></a>

If an Auto Scaling group starts scaling in while a CodeDeploy deployment is underway on that Auto Scaling group, a race condition could occur between the termination process (including the CodeDeploy termination deployment lifecycle events) and other CodeDeploy lifecycle events on the terminating instance. The deployment on that specific instance may fail if the instance is terminated before all CodeDeploy lifecycle events complete. Also, the overall CodeDeploy deployment may or may not fail, depending on how you've set your **Minimum healthy hosts** setting in your deployment configuration.

### Order of events in AWS CloudFormation cfn-init scripts
<a name="integrations-aws-auto-scaling-behaviors-event-order"></a>

If you use `cfn-init` (or `cloud-init`) to run scripts on newly provisioned Linux-based instances, your deployments might fail unless you strictly control the order of events that occur after the instance starts.

That order must be:

1. The newly provisioned instance starts.

1. All `cfn-init` bootstrapping scripts run to completion.

1. The CodeDeploy agent starts.

1. The latest application revision is deployed to the instance.

If the order of events is not carefully controlled, the CodeDeploy agent might start a deployment before all the scripts have finished running. 

To control the order of events, use one of these best practices: 
+ Install the CodeDeploy agent through a `cfn-init` script, placing it after all other scripts.
+ Include the CodeDeploy agent in a custom AMI and use a `cfn-init` script to start it, placing it after all other scripts.

For information about using `cfn-init`, see [cfn-init](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-init.html) in the *AWS CloudFormation User Guide*.
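The first best practice can be sketched as an AWS CloudFormation template fragment that orders `cfn-init` config sets so the CodeDeploy agent is installed and started last. The resource and script names are hypothetical, `us-east-1` is a placeholder Region, and the `LaunchTemplateData`/`UserData` that invokes `cfn-init --configsets default` is omitted for brevity:

```yaml
Resources:
  LaunchTemplate:
    Type: AWS::EC2::LaunchTemplate
    Metadata:
      AWS::CloudFormation::Init:
        configSets:
          default:
            - bootstrap          # your own setup scripts run first
            - codedeploy_agent   # agent install/start runs last
        bootstrap:
          commands:
            01_configure_app:
              command: /tmp/configure-app.sh   # hypothetical bootstrap script
        codedeploy_agent:
          commands:
            01_install_agent:
              # The agent installer bucket is per-Region (aws-codedeploy-<region>).
              command: |
                cd /tmp
                aws s3 cp s3://aws-codedeploy-us-east-1/latest/install . --region us-east-1
                chmod +x ./install
                ./install auto
```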

## Using a custom AMI with CodeDeploy and Amazon EC2 Auto Scaling
<a name="integrations-aws-auto-scaling-custom-ami"></a>

You have two options for specifying the base AMI to use when new Amazon EC2 instances are launched in an Amazon EC2 Auto Scaling group:
+ You can specify a base custom AMI that already has the CodeDeploy agent installed. Because the agent is already installed, this option launches new Amazon EC2 instances more quickly than the other option. However, this option provides a greater likelihood that initial deployments of Amazon EC2 instances will fail, especially if the CodeDeploy agent is out of date. If you choose this option, we recommend you regularly update the CodeDeploy agent in your base custom AMI.
+ You can specify a base AMI that doesn't have the CodeDeploy agent installed and have the agent installed as each new instance is launched in an Amazon EC2 Auto Scaling group. Although this option launches new Amazon EC2 instances more slowly than the other option, it provides a greater likelihood that initial deployments of instances will succeed. This option uses the most recent version of the CodeDeploy agent.

# Integrating CodeDeploy with Elastic Load Balancing
<a name="integrations-aws-elastic-load-balancing"></a>

During CodeDeploy deployments, a load balancer prevents internet traffic from being routed to instances when they are not ready, are currently being deployed to, or are no longer needed as part of an environment. The exact role the load balancer plays, however, depends on whether it is used in a blue/green deployment or an in-place deployment.

**Note**  
The use of Elastic Load Balancing load balancers is mandatory in blue/green deployments and optional in in-place deployments.

## Elastic Load Balancing types
<a name="integrations-aws-elastic-load-balancing-types"></a>

Elastic Load Balancing provides three types of load balancers that can be used in CodeDeploy deployments: Classic Load Balancers, Application Load Balancers, and Network Load Balancers.

Classic Load Balancer  
Routes and load balances either at the transport layer (TCP/SSL) or the application layer (HTTP/HTTPS). It supports a VPC.  
Classic Load Balancers are not supported with Amazon ECS deployments.

Application Load Balancer  
Routes and load balances at the application layer (HTTP/HTTPS) and supports path-based routing. It can route requests to ports on each EC2 instance or container instance in your virtual private cloud (VPC).  
 The Application Load Balancer target groups must have a target type of `instance` for deployments on EC2 instances, and `IP` for Fargate deployments. For more information, see [Target type](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-target-groups.html#target-type). 
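For example, a target group suitable for deployments on EC2 instances might be created as follows. The target group name and VPC ID are placeholders; use `--target-type ip` for Fargate deployments instead:

```
aws elbv2 create-target-group \
  --name codedeploy-ec2-targets \
  --protocol HTTP \
  --port 80 \
  --vpc-id vpc-0123456789abcdef0 \
  --target-type instance
```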

Network Load Balancer  
Routes and load balances at the transport layer (TCP/UDP Layer-4) based on address information extracted from the TCP packet header, not from packet content. Network Load Balancers can handle traffic bursts, retain the source IP of the client, and use a fixed IP for the life of the load balancer. 

To learn more about Elastic Load Balancing load balancers, see the following topics:
+ [What is Elastic Load Balancing?](https://docs.aws.amazon.com/elasticloadbalancing/latest/userguide/what-is-load-balancing.html)
+ [What is a Classic Load Balancer?](https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/introduction.html)
+ [What is an Application Load Balancer?](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html)
+ [What is a Network Load Balancer?](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/introduction.html)

## Blue/Green deployments
<a name="integrations-aws-elastic-load-balancing-blue-green"></a>

Rerouting instance traffic behind an Elastic Load Balancing load balancer is fundamental to CodeDeploy blue/green deployments. 

During a blue/green deployment, the load balancer routes traffic to the new instances in a deployment group where the latest application revision has been deployed (the replacement environment), according to the rules you specify, and then blocks traffic from the old instances where the previous application revision was running (the original environment).

After instances in a replacement environment are registered with one or more load balancers, instances from the original environment are deregistered and, if you choose, terminated.

For a blue/green deployment, you can specify one or more Classic Load Balancers, Application Load Balancer target groups, or Network Load Balancer target groups in your deployment group. You use the CodeDeploy console or AWS CLI to add the load balancers to a deployment group.

For more information about load balancers in blue/green deployments, see the following topics:
+ [Set up a load balancer in Elastic Load Balancing for CodeDeploy Amazon EC2 deployments](deployment-groups-create-load-balancer.md)
+ [Create an application for a blue/green deployment (console)](applications-create-blue-green.md)
+ [Create a deployment group for an EC2/On-Premises blue/green deployment (console)](deployment-groups-create-blue-green.md)

## In-place deployments
<a name="integrations-aws-elastic-load-balancing-in-place"></a>

During an in-place deployment, a load balancer prevents internet traffic from being routed to an instance while it is being deployed to, and then makes the instance available for traffic again after the deployment to that instance is complete.

If a load balancer isn't used during an in-place deployment, internet traffic may still be directed to an instance during the deployment process. As a result, your customers might encounter broken, incomplete, or outdated web applications. When you use an Elastic Load Balancing load balancer with an in-place deployment, instances in a deployment group are deregistered from the load balancer, updated with the latest application revision, and then reregistered with the load balancer as part of the same deployment group after the deployment is successful. CodeDeploy waits up to one hour for the instance to become healthy behind the load balancer. If the load balancer does not mark the instance healthy during the waiting period, CodeDeploy either moves on to the next instance or fails the deployment, depending on the deployment configuration.

For an in-place deployment, you can specify one or more Classic Load Balancers, Application Load Balancer target groups, or Network Load Balancer target groups. You can specify the load balancers as part of the deployment group's configuration, or you can use a script provided by CodeDeploy to implement the load balancers.

### Specify in-place deployment load balancer using a deployment group
<a name="integrations-aws-elastic-load-balancing-in-place-deployment-group"></a>

To add load balancers to a deployment group, you use the CodeDeploy console or AWS CLI. For information about specifying a load balancer in a deployment group for in-place deployments, see the following topics:
+ [Create an application for an in-place deployment (console)](applications-create-in-place.md)
+ [Create a deployment group for an in-place deployment (console)](deployment-groups-create-in-place.md)
+ [Set up a load balancer in Elastic Load Balancing for CodeDeploy Amazon EC2 deployments](deployment-groups-create-load-balancer.md)

### Specify in-place deployment load balancer using a script
<a name="integrations-aws-elastic-load-balancing-in-place-script"></a>

Follow the steps in this procedure to set up load balancing for in-place deployments with deployment lifecycle scripts.
**Note**  
You should use the CodeDeployDefault.OneAtATime deployment configuration only when you are using a script to set up a load balancer for an in-place deployment. Concurrent runs are not supported, and the CodeDeployDefault.OneAtATime setting ensures that the scripts run serially. For more information about deployment configurations, see [Working with deployment configurations in CodeDeploy](deployment-configurations.md).

In the CodeDeploy Samples repository on GitHub, we provide instructions and samples you can adapt to use Elastic Load Balancing load balancers with CodeDeploy. The repository includes three sample scripts (`register_with_elb.sh`, `deregister_from_elb.sh`, and `common_functions.sh`) that provide all of the code you need. Edit the placeholders in these three scripts, and then reference the scripts from your `appspec.yml` file.
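Conceptually, the deregistration step for a Classic Load Balancer resembles the following sketch. This is not the actual sample code: the load balancer name is a placeholder, and the real scripts handle waiting, retries, and Auto Scaling standby mode more carefully.

```shell
# Hypothetical sketch of the deregistration the samples perform.
# Look up this instance's ID from instance metadata, then remove the
# instance from the Classic Load Balancer before files are updated.
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
aws elb deregister-instances-from-load-balancer \
  --load-balancer-name my-classic-lb \
  --instances "$INSTANCE_ID"
# The sample scripts then poll the load balancer until the instance
# state is OutOfService before allowing the deployment to continue.
```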

To set up in-place deployments in CodeDeploy with Amazon EC2 instances that are registered with Elastic Load Balancing load balancers, do the following:

1. Download the samples for the type of load balancer you want to use for an in-place deployment:
   + [Classic Load Balancer](https://github.com/awslabs/aws-codedeploy-samples/tree/master/load-balancing/elb)
   + [Application Load Balancer or Network Load Balancer](https://github.com/awslabs/aws-codedeploy-samples/tree/master/load-balancing/elb-v2) (the same script can be used for either type)

1. Make sure each of your target Amazon EC2 instances has the AWS CLI installed. 

1. Make sure each of your target Amazon EC2 instances has an IAM instance profile attached with, at minimum, the `elasticloadbalancing:*` and `autoscaling:*` permissions.

1. Include in your application's source code directory the deployment lifecycle event scripts (`register_with_elb.sh`, `deregister_from_elb.sh`, and `common_functions.sh`).

1. In the `appspec.yml` for the application revision, provide instructions for CodeDeploy to run the `register_with_elb.sh` script during the **ApplicationStart** event and the `deregister_from_elb.sh` script during the **ApplicationStop** event.

1. If the instance is part of an Amazon EC2 Auto Scaling group, you can skip this step.

   In the `common_functions.sh` script:
   + If you are using the [Classic Load Balancer](https://github.com/awslabs/aws-codedeploy-samples/tree/master/load-balancing/elb), specify the names of the Elastic Load Balancing load balancers in `ELB_LIST=""`, and make any changes you need to the other deployment settings in the file.
   + If you are using the [Application Load Balancer or Network Load Balancer](https://github.com/awslabs/aws-codedeploy-samples/tree/master/load-balancing/elb-v2) samples, specify the names of the Elastic Load Balancing target groups in `TARGET_GROUP_LIST=""`, and make any changes you need to the other deployment settings in the file.

1. Bundle your application's source code, the `appspec.yml`, and the deployment lifecycle event scripts into an application revision, and then upload the revision. Deploy the revision to the Amazon EC2 instances. During the deployment, the lifecycle event scripts deregister the Amazon EC2 instance from the load balancer, wait for connections to drain, and then reregister the instance with the load balancer after the deployment is complete.
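Putting the script and `appspec.yml` steps above together, the resulting AppSpec file might look like the following sketch. The `files` destination, timeouts, and `runas` values are illustrative; adjust them to your revision's layout and requirements:

```yaml
version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/my-app   # placeholder destination
hooks:
  ApplicationStop:
    - location: deregister_from_elb.sh   # removes the instance from the load balancer
      timeout: 400
      runas: root
  ApplicationStart:
    - location: register_with_elb.sh     # returns the instance to service
      timeout: 120
      runas: root
```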