

# Product and service integrations with CodeDeploy
<a name="integrations"></a>

By default, CodeDeploy integrates with a number of AWS services and partner products and services. The following information can help you configure CodeDeploy to integrate with the products and services you use. 
+ [Integration with other AWS services](integrations-aws.md)
+ [Integration with partner products and services](integrations-partners.md)
+ [Integration examples from the community](integrations-community.md)

# Integration with other AWS services
<a name="integrations-aws"></a>

CodeDeploy is integrated with the following AWS services:


| Service | Description | 
| --- | --- |
| Amazon CloudWatch |  [Amazon CloudWatch](https://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/) is a monitoring service for AWS Cloud resources and the applications you run on AWS. You can use Amazon CloudWatch to collect and track metrics, collect and monitor log files, and set alarms. CodeDeploy supports CloudWatch tools including CloudWatch alarms (which can stop or roll back deployments) and Amazon CloudWatch Events for monitoring deployments, and you can view deployment logs in the CloudWatch Logs console.  | 
| Amazon EC2 Auto Scaling |  CodeDeploy supports [Amazon EC2 Auto Scaling](https://aws.amazon.com/autoscaling), an AWS service that can automatically launch Amazon EC2 instances based on criteria you specify, such as limits exceeded in a specified time interval for CPU utilization, disk reads or writes, or inbound or outbound network traffic. You can scale out a group of Amazon EC2 instances whenever you need them and then use CodeDeploy to deploy application revisions to them automatically. Amazon EC2 Auto Scaling terminates those Amazon EC2 instances when they are no longer needed. For more information, see [Integrating CodeDeploy with Amazon EC2 Auto Scaling](integrations-aws-auto-scaling.md).  | 
| Amazon Elastic Container Service |  You can use CodeDeploy to deploy an Amazon ECS containerized application as a task set. CodeDeploy performs a blue/green deployment by installing an updated version of the application as a new replacement task set. CodeDeploy reroutes production traffic from the original application task set to the replacement task set. The original task set is terminated after a successful deployment. For more information about Amazon ECS, see [Amazon Elastic Container Service](https://aws.amazon.com/ecs/). You can manage the way in which traffic is shifted to the updated task set during a deployment by choosing a canary, linear, or all-at-once configuration. For more information about Amazon ECS deployments, see [Deployments on an Amazon ECS compute platform](https://docs.aws.amazon.com/codedeploy/latest/userguide/deployment-steps-ecs.html).  | 
| AWS CloudTrail |  CodeDeploy is integrated with [AWS CloudTrail](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/), a service that captures API calls made by or on behalf of CodeDeploy in your AWS account and delivers the log files to an Amazon S3 bucket you specify. CloudTrail captures API calls from the CodeDeploy console, from CodeDeploy commands through the AWS CLI, and from the CodeDeploy APIs directly. Using the information collected by CloudTrail, you can determine which requests were made to CodeDeploy, the source IP address from which they were made, who made them, and when.  | 
| AWS Cloud9 |  [AWS Cloud9](https://docs.aws.amazon.com/cloud9/latest/user-guide/) is an online, cloud-based integrated development environment (IDE) you can use to write, run, debug, and deploy code using just a browser on an internet-connected machine. AWS Cloud9 includes a code editor, debugger, terminal, and essential tools, such as the AWS CLI and Git. For more information about AWS Cloud9, see [What is AWS Cloud9?](https://docs.aws.amazon.com/cloud9/latest/user-guide/welcome.html) and [Getting started with AWS Cloud9](https://docs.aws.amazon.com/cloud9/latest/user-guide/get-started.html).  | 
| AWS CodePipeline |  [AWS CodePipeline](https://docs.aws.amazon.com/codepipeline/latest/userguide/) is a continuous delivery service you can use to model, visualize, and automate the steps required to release your software in a continuous delivery process. You can use AWS CodePipeline to define your own release process so that the service builds, tests, and deploys your code every time there is a code change. For example, you might have three deployment groups for an application: Beta, Gamma, and Prod. You can set up a pipeline so that each time there is a change in your source code, the updates are deployed to each deployment group, one by one. You can configure a deploy action in a pipeline stage to use CodeDeploy to deploy your application revisions, and you can create the CodeDeploy application, deployment, and deployment group for that action either before you create the pipeline or in the **Create Pipeline** wizard.  | 
| AWS Serverless Application Model |  AWS Serverless Application Model (AWS SAM) is a model for defining serverless applications. It extends AWS CloudFormation to provide a simplified way of defining the AWS Lambda functions, Amazon API Gateway APIs, and Amazon DynamoDB tables required by a serverless application. If you already use AWS SAM, you can add deployment preferences to start using CodeDeploy to manage the way in which traffic is shifted during an AWS Lambda application deployment. For more information, see the [AWS Serverless Application Model](https://github.com/awslabs/serverless-application-model) repository on GitHub.  | 
| Elastic Load Balancing |  CodeDeploy supports [Elastic Load Balancing](https://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/elastic-load-balancing.html), a service that distributes incoming application traffic across multiple Amazon EC2 instances. For CodeDeploy deployments, load balancers also prevent traffic from being routed to instances when they are not ready, are currently being deployed to, or are no longer needed as part of an environment. For more information, see [Integrating CodeDeploy with Elastic Load Balancing](integrations-aws-elastic-load-balancing.md).  | 

**Topics**
+ [Amazon EC2 Auto Scaling](integrations-aws-auto-scaling.md)
+ [Integrating CodeDeploy with Elastic Load Balancing](integrations-aws-elastic-load-balancing.md)

# Integrating CodeDeploy with Amazon EC2 Auto Scaling
<a name="integrations-aws-auto-scaling"></a>

CodeDeploy supports Amazon EC2 Auto Scaling, an AWS service that launches Amazon EC2 instances automatically according to conditions you define. These conditions can include limits exceeded in a specified time interval for CPU utilization, disk reads or writes, or inbound or outbound network traffic. Amazon EC2 Auto Scaling terminates the instances when they are no longer needed. For more information, see [What is Amazon EC2 Auto Scaling?](https://docs.aws.amazon.com/autoscaling/latest/userguide/WhatIsAutoScaling.html) in the *Amazon EC2 Auto Scaling User Guide*.

When new Amazon EC2 instances are launched as part of an Amazon EC2 Auto Scaling group, CodeDeploy can deploy your revisions to the new instances automatically. You can also coordinate deployments in CodeDeploy with Amazon EC2 Auto Scaling instances registered with Elastic Load Balancing load balancers. For more information, see [Integrating CodeDeploy with Elastic Load Balancing](integrations-aws-elastic-load-balancing.md) and [Set up a load balancer in Elastic Load Balancing for CodeDeploy Amazon EC2 deployments](deployment-groups-create-load-balancer.md).

**Note**  
You might encounter issues if you associate multiple deployment groups with a single Amazon EC2 Auto Scaling group. For example, if one deployment fails, the instance begins to shut down, but the other deployments that were running can take an hour to time out. For more information, see [Avoid associating multiple deployment groups with a single Amazon EC2 Auto Scaling group](troubleshooting-auto-scaling.md#troubleshooting-multiple-depgroups) and [Under the hood: CodeDeploy and Amazon EC2 Auto Scaling integration](https://aws.amazon.com/blogs/devops/under-the-hood-aws-codedeploy-and-auto-scaling-integration/).

**Topics**
+ [Deploying CodeDeploy applications to Amazon EC2 Auto Scaling groups](#integrations-aws-auto-scaling-deploy)
+ [Enabling termination deployments during Auto Scaling scale-in events](#integrations-aws-auto-scaling-behaviors-hook-enable)
+ [How Amazon EC2 Auto Scaling works with CodeDeploy](#integrations-aws-auto-scaling-behaviors)
+ [Using a custom AMI with CodeDeploy and Amazon EC2 Auto Scaling](#integrations-aws-auto-scaling-custom-ami)

## Deploying CodeDeploy applications to Amazon EC2 Auto Scaling groups
<a name="integrations-aws-auto-scaling-deploy"></a>

To deploy a CodeDeploy application revision to an Amazon EC2 Auto Scaling group:

1. Create or locate an IAM instance profile that allows the Amazon EC2 Auto Scaling group to work with Amazon S3. For more information, see [Step 4: Create an IAM instance profile for your Amazon EC2 instances](getting-started-create-iam-instance-profile.md).
**Note**  
You can also use CodeDeploy to deploy revisions from GitHub repositories to Amazon EC2 Auto Scaling groups. Although Amazon EC2 instances still require an IAM instance profile, the profile doesn't need any additional permissions to deploy from a GitHub repository. 

1. Create or use an Amazon EC2 Auto Scaling group, specifying the IAM instance profile in your launch configuration or template. For more information, see [IAM role for applications that run on Amazon EC2 instances](https://docs.aws.amazon.com/autoscaling/ec2/userguide/us-iam-role.html).

1. Create or locate a service role that allows CodeDeploy to create a deployment group that contains the Amazon EC2 Auto Scaling group.

1. Create a deployment group with CodeDeploy, specifying the Amazon EC2 Auto Scaling group name, the service role, and a few other options. For more information, see [Create a deployment group for an in-place deployment (console)](deployment-groups-create-in-place.md) or [Create a deployment group for an EC2/On-Premises blue/green deployment (console)](deployment-groups-create-blue-green.md).

1. Use CodeDeploy to deploy your revision to the deployment group that contains the Amazon EC2 Auto Scaling group.

For more information, see [Tutorial: Use CodeDeploy to deploy an application to an Auto Scaling group](tutorials-auto-scaling-group.md).

## Enabling termination deployments during Auto Scaling scale-in events
<a name="integrations-aws-auto-scaling-behaviors-hook-enable"></a>

A *termination deployment* is a type of CodeDeploy deployment that is activated automatically when an Auto Scaling [scale-in event](https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-lifecycle.html#as-lifecycle-scale-in) occurs. CodeDeploy performs the termination deployment right before the Auto Scaling service terminates the instance. During a termination deployment, CodeDeploy doesn't deploy anything. Instead, it generates lifecycle events, which you can hook up to your own scripts to enable custom shutdown functionality. For example, you could hook up the `ApplicationStop` lifecycle event to a script that shuts down your application gracefully before the instance is terminated.
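As a concrete illustration, a graceful-shutdown script attached to the `ApplicationStop` event might look like the following sketch. The PID-file path, timeout, and function name are illustrative assumptions, not part of CodeDeploy; hook scripts can be written in any language the instance can execute.

```python
#!/usr/bin/env python3
"""Illustrative ApplicationStop hook: stop an app process gracefully.

The PID-file location and timeout are hypothetical; adapt them to
however your application records its process ID.
"""
import os
import signal
import time


def graceful_stop(pid_file, timeout=30.0, poll=0.1):
    """Send SIGTERM to the process named in pid_file and wait for it to exit.

    Returns True if the process exited (or was not running), False if it
    was still alive when the timeout expired.
    """
    try:
        with open(pid_file) as f:
            pid = int(f.read().strip())
    except FileNotFoundError:
        return True  # nothing to stop

    try:
        os.kill(pid, signal.SIGTERM)  # ask the app to shut down cleanly
    except ProcessLookupError:
        return True  # already gone

    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            os.kill(pid, 0)  # signal 0 probes: raises if the process exited
        except ProcessLookupError:
            return True
        time.sleep(poll)
    return False


if __name__ == "__main__":
    # CodeDeploy treats a nonzero exit code as a failed lifecycle event.
    raise SystemExit(0 if graceful_stop("/var/run/myapp.pid") else 1)
```

Because a failed termination deployment does not stop the instance from terminating, a script like this is best-effort: it improves shutdown behavior but cannot be relied on to always complete.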

For a list of lifecycle events that CodeDeploy generates during a termination deployment, see [Lifecycle event hook availability](reference-appspec-file-structure-hooks.md#reference-appspec-file-structure-hooks-availability). 

If the termination deployment fails for any reason, CodeDeploy allows the instance termination to proceed. This means the instance is shut down even though CodeDeploy did not run all (or any) of the lifecycle events to completion.

If you don't enable termination deployments, the Auto Scaling service will still terminate Amazon EC2 instances when a scale-in event occurs, but CodeDeploy will not generate lifecycle events.

**Note**  
Regardless of whether you enable termination deployments or not, if the Auto Scaling service terminates an Amazon EC2 instance while a CodeDeploy deployment is underway, then a race condition may occur between the lifecycle events generated by the Auto Scaling and CodeDeploy services. For example, the `Terminating` lifecycle event (generated by the Auto Scaling service) might override the `ApplicationStart` event (generated by the CodeDeploy deployment). In this scenario, you may experience a failure with either the Amazon EC2 instance termination or the CodeDeploy deployment.

**To enable CodeDeploy to perform termination deployments**
+ Select the **Add a termination hook to Auto Scaling groups** check box when creating or updating your deployment group. For instructions, see [Create a deployment group for an in-place deployment (console)](deployment-groups-create-in-place.md), or [Create a deployment group for an EC2/On-Premises blue/green deployment (console)](deployment-groups-create-blue-green.md).

  Enabling this check box causes CodeDeploy to install an [Auto Scaling lifecycle hook](https://docs.aws.amazon.com/autoscaling/ec2/userguide/lifecycle-hooks.html) into the Auto Scaling groups that you specify when you create or update your CodeDeploy deployment group. This hook is called the *termination hook* and enables termination deployments.

**After the termination hook is installed, a scale-in (termination) event unfolds as follows:**

1. The Auto Scaling service (or simply, Auto Scaling) determines that a scale-in event needs to occur, and contacts the EC2 service to terminate an EC2 instance.

1. The EC2 service starts terminating the EC2 instance. The instance moves into the `Terminating` state, and then into the `Terminating:Wait` state. 

1. During `Terminating:Wait`, Auto Scaling runs all the lifecycle hooks attached to the Auto Scaling group, including the termination hook installed by CodeDeploy.

1. The termination hook sends a notification to the [Amazon SQS queue](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/welcome.html) that is polled by CodeDeploy.

1. Upon receiving the notification, CodeDeploy parses the message, performs some validation, and performs a [termination deployment](#integrations-aws-auto-scaling-behaviors-hook-enable).

1. While the termination deployment is running, CodeDeploy sends heartbeats every five minutes to Auto Scaling to let it know that the instance is still being worked on.

1. Throughout this process, the EC2 instance remains in the `Terminating:Wait` state (or possibly the `Warmed:Pending:Wait` state, if you've enabled [Auto Scaling group warm pools](https://docs.aws.amazon.com/autoscaling/ec2/userguide/warm-pool-instance-lifecycle.html)).

1. When the deployment completes, CodeDeploy indicates to Auto Scaling to `CONTINUE` the EC2 termination process, regardless of whether the termination deployment succeeded or failed.
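The heartbeat cadence and the completion action from steps 6 and 8 can be summarized in a short sketch. This models the behavior described above, not CodeDeploy's actual implementation, and the function names are illustrative:

```python
# Model of steps 6 and 8 of a scale-in (termination) event.
HEARTBEAT_INTERVAL_S = 300  # five minutes, per step 6


def heartbeat_schedule(deployment_duration_s, interval_s=HEARTBEAT_INTERVAL_S):
    """Seconds after deployment start at which a heartbeat would be
    recorded to keep the instance in the Terminating:Wait state."""
    return list(range(interval_s, deployment_duration_s, interval_s))


def termination_completion_action(deployment_succeeded):
    """Per step 8, the EC2 termination always proceeds, so the action
    reported to Auto Scaling is CONTINUE regardless of the outcome."""
    return "CONTINUE"
```

Note the contrast with launch deployments, where a failed deployment causes the instance launch to be abandoned rather than continued.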

## How Amazon EC2 Auto Scaling works with CodeDeploy
<a name="integrations-aws-auto-scaling-behaviors"></a>

When you create or update a CodeDeploy deployment group to include an Auto Scaling group, CodeDeploy accesses the Auto Scaling group using the CodeDeploy service role, and then installs [Auto Scaling lifecycle hooks](https://docs.aws.amazon.com/autoscaling/ec2/userguide/lifecycle-hooks.html) into your Auto Scaling groups.

**Note**  
*Auto Scaling lifecycle hooks* are different from the *lifecycle events* (also called *lifecycle event hooks*) generated by CodeDeploy and described in the [AppSpec 'hooks' section](reference-appspec-file-structure-hooks.md) of this guide.

The Auto Scaling lifecycle hooks that CodeDeploy installs are:
+ **A launch hook** — This hook notifies CodeDeploy that an Auto Scaling [scale-out event](https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-lifecycle.html#as-lifecycle-scale-out) is in progress, and that CodeDeploy needs to start a launch deployment.

  During a *launch deployment*, CodeDeploy:
  + Deploys a revision of your application to the scaled-out instance.
  + Generates lifecycle events to indicate the progress of the deployment. You can hook up these lifecycle events to your own scripts to enable custom startup functionality. For more information, see the table in [Lifecycle event hook availability](reference-appspec-file-structure-hooks.md#reference-appspec-file-structure-hooks-availability).

  The launch hook and associated launch deployment are always enabled and cannot be turned off.
+ **A termination hook** — This optional hook notifies CodeDeploy that an Auto Scaling [scale-in event](https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-lifecycle.html#as-lifecycle-scale-in) is in progress, and that CodeDeploy needs to start a termination deployment.

  During a *termination deployment*, CodeDeploy generates lifecycle events to indicate the progress of the instance shutdown. For more information, see [Enabling termination deployments during Auto Scaling scale-in events](#integrations-aws-auto-scaling-behaviors-hook-enable).

**Topics**
+ [After CodeDeploy installs the lifecycle hooks, how are they used?](#integrations-aws-auto-scaling-behaviors-hook-usage)
+ [How CodeDeploy names Amazon EC2 Auto Scaling groups](#integrations-aws-auto-scaling-behaviors-naming)
+ [Execution order of custom lifecycle hook events](#integrations-aws-auto-scaling-behaviors-hook-order)
+ [Scale-out events during a deployment](#integrations-aws-auto-scaling-behaviors-mixed-environment)
+ [Scale-in events during a deployment](#integrations-aws-auto-scaling-behaviors-scale-in)
+ [Order of events in AWS CloudFormation cfn-init scripts](#integrations-aws-auto-scaling-behaviors-event-order)

### After CodeDeploy installs the lifecycle hooks, how are they used?
<a name="integrations-aws-auto-scaling-behaviors-hook-usage"></a>

After the launch and termination lifecycle hooks are installed, they are used by CodeDeploy during Auto Scaling group scale-out and scale-in events, respectively.

**A scale-out (launch) event unfolds as follows:**

1. The Auto Scaling service (or simply, Auto Scaling) determines that a scale-out event needs to occur, and contacts the EC2 service to launch a new EC2 instance.

1. The EC2 service launches a new EC2 instance. The instance moves into the `Pending` state, and then into the `Pending:Wait` state. 

1. During `Pending:Wait`, Auto Scaling runs all the lifecycle hooks attached to the Auto Scaling group, including the launch hook installed by CodeDeploy.

1. The launch hook sends a notification to the [Amazon SQS queue](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/welcome.html) that is polled by CodeDeploy.

1. Upon receiving the notification, CodeDeploy parses the message, performs some validation, and starts a launch deployment (described in [How Amazon EC2 Auto Scaling works with CodeDeploy](#integrations-aws-auto-scaling-behaviors)).

1. While the launch deployment is running, CodeDeploy sends heartbeats every five minutes to Auto Scaling to let it know that the instance is still being worked on.

1. Throughout this process, the EC2 instance remains in the `Pending:Wait` state.

1. When the deployment completes, CodeDeploy indicates to Auto Scaling to either `CONTINUE` or `ABANDON` the EC2 launch process, depending on whether the deployment succeeded or failed.
   + If CodeDeploy indicates `CONTINUE`, Auto Scaling continues the launch process, either waiting for other hooks to complete, or putting the instance into the `Pending:Proceed` and then the `InService` state.
   + If CodeDeploy indicates `ABANDON`, Auto Scaling terminates the EC2 instance, and restarts the launch procedure if needed to meet the desired number of instances, as defined in the Auto Scaling **Desired Capacity** setting.
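The decision in the final step can be expressed as a small sketch. The state names come from the sequence above; this is a model of the described behavior under the assumption that no other lifecycle hooks are still pending, not the service's actual code:

```python
def launch_completion_action(deployment_succeeded):
    """Completion action CodeDeploy reports to Auto Scaling after a
    launch deployment: continue on success, abandon on failure."""
    return "CONTINUE" if deployment_succeeded else "ABANDON"


def auto_scaling_reaction(action):
    """How Auto Scaling reacts to each completion action, per the two
    outcomes described above."""
    if action == "CONTINUE":
        return ["Pending:Proceed", "InService"]
    # ABANDON: the instance is terminated and, if needed, a replacement
    # is launched to meet the group's desired capacity.
    return ["Terminated"]
```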

**A scale-in (termination) event unfolds as follows:**

See [Enabling termination deployments during Auto Scaling scale-in events](#integrations-aws-auto-scaling-behaviors-hook-enable).

### How CodeDeploy names Amazon EC2 Auto Scaling groups
<a name="integrations-aws-auto-scaling-behaviors-naming"></a>

 

During blue/green deployments on an EC2/On-Premises compute platform, you have two options for adding instances to your replacement (green) environment:
+ Use instances that already exist or that you create manually.
+ Use settings from an Amazon EC2 Auto Scaling group that you specify to define and create instances in a new Amazon EC2 Auto Scaling group.

If you choose the second option, CodeDeploy provisions a new Amazon EC2 Auto Scaling group for you. It uses the following convention to name the group:

```
CodeDeploy_deployment_group_name_deployment_id
```

For example, if a deployment with ID `10` deploys to a deployment group named `alpha-deployments`, the provisioned Amazon EC2 Auto Scaling group is named `CodeDeploy_alpha-deployments_10`. For more information, see [Create a deployment group for an EC2/On-Premises blue/green deployment (console)](deployment-groups-create-blue-green.md) and [GreenFleetProvisioningOption](https://docs.aws.amazon.com/codedeploy/latest/APIReference/API_GreenFleetProvisioningOption.html).
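The convention can be expressed as a one-line helper. This is purely illustrative: CodeDeploy generates this name for you, and the function name is an assumption:

```python
def provisioned_asg_name(deployment_group_name, deployment_id):
    """Name of the Auto Scaling group CodeDeploy provisions for the
    replacement (green) environment, following the convention above:
    CodeDeploy_<deployment group name>_<deployment ID>."""
    return f"CodeDeploy_{deployment_group_name}_{deployment_id}"
```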

### Execution order of custom lifecycle hook events
<a name="integrations-aws-auto-scaling-behaviors-hook-order"></a>

You can add your own lifecycle hooks to Amazon EC2 Auto Scaling groups to which CodeDeploy deploys. However, the order in which those custom lifecycle hook events are executed cannot be predetermined in relation to CodeDeploy default deployment lifecycle events. For example, if you add a custom lifecycle hook named `ReadyForSoftwareInstall` to an Amazon EC2 Auto Scaling group, you cannot know beforehand whether it will be executed before the first, or after the last, CodeDeploy default deployment lifecycle event.

To learn how to add custom lifecycle hooks to an Amazon EC2 Auto Scaling group, see [Adding lifecycle hooks](https://docs.aws.amazon.com/autoscaling/latest/userguide/lifecycle-hooks.html#adding-lifecycle-hooks) in the *Amazon EC2 Auto Scaling User Guide*.

### Scale-out events during a deployment
<a name="integrations-aws-auto-scaling-behaviors-mixed-environment"></a>

If an Auto Scaling scale-out event occurs while a deployment is underway, the new instances will be updated with the application revision that was previously deployed, not the newest application revision. If the deployment succeeds, the old instances and the newly scaled-out instances will be hosting different application revisions. To bring the instances with the older revision up to date, CodeDeploy automatically starts a follow-on deployment (immediately after the first) to update any outdated instances. If you'd like to change this default behavior so that outdated EC2 instances are left at the older revision, see [Automatic updates to outdated instances](deployment-groups-configure-advanced-options.md#auto-updates-outdated-instances).

If you want to suspend Amazon EC2 Auto Scaling scale-out processes while deployments are taking place, you can do so through a setting in the `common_functions.sh` script that is used for load balancing with CodeDeploy. If `HANDLE_PROCS=true`, the following Auto Scaling processes are suspended automatically during the deployment process:
+ AZRebalance
+ AlarmNotification
+ ScheduledActions
+ ReplaceUnhealthy

**Important**  
Only the CodeDeployDefault.OneAtATime deployment configuration supports this functionality.

For more information about using `HANDLE_PROCS=true` to avoid deployment problems when using Amazon EC2 Auto Scaling, see [Important notice about handling AutoScaling processes](https://github.com/awslabs/aws-codedeploy-samples/tree/master/load-balancing/elb#important-notice-about-handling-autoscaling-processes) in [aws-codedeploy-samples](https://github.com/awslabs/aws-codedeploy-samples) on GitHub.
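For reference, the suspension performed by the sample script corresponds to a standard `aws autoscaling suspend-processes` call. The following sketch builds that command for the four processes listed above; the helper function is hypothetical, and with `HANDLE_PROCS=true` the sample script handles this for you:

```python
# Scaling processes suspended during a deployment when HANDLE_PROCS=true.
SUSPENDED_PROCESSES = [
    "AZRebalance",
    "AlarmNotification",
    "ScheduledActions",
    "ReplaceUnhealthy",
]


def suspend_processes_command(asg_name):
    """Build the equivalent AWS CLI command for a given Auto Scaling
    group name (illustrative helper; do not run it mid-deployment by
    hand unless you also resume the processes afterward)."""
    return (
        "aws autoscaling suspend-processes "
        f"--auto-scaling-group-name {asg_name} "
        "--scaling-processes " + " ".join(SUSPENDED_PROCESSES)
    )
```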

### Scale-in events during a deployment
<a name="integrations-aws-auto-scaling-behaviors-scale-in"></a>

If an Auto Scaling group starts scaling in while a CodeDeploy deployment is underway on that Auto Scaling group, a race condition could occur between the termination process (including the CodeDeploy termination deployment lifecycle events) and other CodeDeploy lifecycle events on the terminating instance. The deployment on that specific instance may fail if the instance is terminated before all CodeDeploy lifecycle events complete. Also, the overall CodeDeploy deployment may or may not fail, depending on how you've set your **Minimum healthy hosts** setting in your deployment configuration.

### Order of events in AWS CloudFormation cfn-init scripts
<a name="integrations-aws-auto-scaling-behaviors-event-order"></a>

If you use `cfn-init` (or `cloud-init`) to run scripts on newly provisioned Linux-based instances, your deployments might fail unless you strictly control the order of events that occur after the instance starts.

That order must be:

1. The newly provisioned instance starts.

1. All `cfn-init` bootstrapping scripts run to completion.

1. The CodeDeploy agent starts.

1. The latest application revision is deployed to the instance.

If the order of events is not carefully controlled, the CodeDeploy agent might start a deployment before all the scripts have finished running. 

To control the order of events, use one of these best practices: 
+ Install the CodeDeploy agent through a `cfn-init` script, placing it after all other scripts.
+ Include the CodeDeploy agent in a custom AMI and use a `cfn-init` script to start it, placing it after all other scripts.

For information about using `cfn-init`, see [cfn-init](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-init.html) in the *AWS CloudFormation User Guide*.
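One way to express the first best practice in an AWS CloudFormation template is sketched below. The resource layout, script paths, and the `us-east-1` installer bucket are illustrative assumptions for an Amazon Linux instance; adapt the install commands to your OS and Region:

```yaml
# Illustrative AWS::CloudFormation::Init metadata. The CodeDeploy agent
# is installed by the LAST config in the configSet, so it cannot start
# a deployment while bootstrapping is still running.
Metadata:
  AWS::CloudFormation::Init:
    configSets:
      default:
        - bootstrap                   # all of your own scripts first
        - install_codedeploy_agent    # agent install last
    bootstrap:
      commands:
        01_app_prereqs:
          command: /opt/bootstrap/install-prereqs.sh   # hypothetical script
    install_codedeploy_agent:
      commands:
        01_install:
          command: |
            yum install -y ruby wget
            cd /tmp
            wget https://aws-codedeploy-us-east-1.s3.us-east-1.amazonaws.com/latest/install
            chmod +x ./install
            ./install auto
```

Because `cfn-init` runs the configs in `configSets` order, the agent only starts after every bootstrapping command has completed.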

## Using a custom AMI with CodeDeploy and Amazon EC2 Auto Scaling
<a name="integrations-aws-auto-scaling-custom-ami"></a>

You have two options for specifying the base AMI to use when new Amazon EC2 instances are launched in an Amazon EC2 Auto Scaling group:
+ You can specify a base custom AMI that already has the CodeDeploy agent installed. Because the agent is already installed, this option launches new Amazon EC2 instances more quickly than the other option. However, it increases the likelihood that initial deployments to the instances will fail, especially if the CodeDeploy agent in the AMI is out of date. If you choose this option, we recommend that you regularly update the CodeDeploy agent in your base custom AMI.
+ You can specify a base AMI that doesn't have the CodeDeploy agent installed and have the agent installed as each new instance is launched in the Amazon EC2 Auto Scaling group. Although this option launches new Amazon EC2 instances more slowly than the other option, it increases the likelihood that initial deployments will succeed, because the instances always use the most recent version of the CodeDeploy agent.

# Integrating CodeDeploy with Elastic Load Balancing
<a name="integrations-aws-elastic-load-balancing"></a>

During CodeDeploy deployments, a load balancer prevents internet traffic from being routed to instances when they are not ready, are currently being deployed to, or are no longer needed as part of an environment. The exact role the load balancer plays, however, depends on whether it is used in a blue/green deployment or an in-place deployment.

**Note**  
The use of Elastic Load Balancing load balancers is mandatory in blue/green deployments and optional in in-place deployments.

## Elastic Load Balancing types
<a name="integrations-aws-elastic-load-balancing-types"></a>

Elastic Load Balancing provides three types of load balancers that can be used in CodeDeploy deployments: Classic Load Balancers, Application Load Balancers, and Network Load Balancers.

Classic Load Balancer  
Routes and load balances either at the transport layer (TCP/SSL) or the application layer (HTTP/HTTPS). It supports a VPC.  
Classic Load Balancers are not supported with Amazon ECS deployments.

Application Load Balancer  
Routes and load balances at the application layer (HTTP/HTTPS) and supports path-based routing. It can route requests to ports on each EC2 instance or container instance in your virtual private cloud (VPC).  
 The Application Load Balancer target groups must have a target type of `instance` for deployments on EC2 instances, and `ip` for AWS Fargate deployments. For more information, see [Target type](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-target-groups.html#target-type). 

Network Load Balancer  
Routes and load balances at the transport layer (TCP/UDP Layer-4) based on address information extracted from the TCP packet header, not from packet content. Network Load Balancers can handle traffic bursts, retain the source IP of the client, and use a fixed IP for the life of the load balancer. 

To learn more about Elastic Load Balancing load balancers, see the following topics:
+ [What is Elastic Load Balancing?](https://docs.aws.amazon.com/elasticloadbalancing/latest/userguide/what-is-load-balancing.html)
+ [What is a Classic Load Balancer?](https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/introduction.html)
+ [What is an Application Load Balancer?](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html)
+ [What is a Network Load Balancer?](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/introduction.html)

## Blue/Green deployments
<a name="integrations-aws-elastic-load-balancing-blue-green"></a>

Rerouting instance traffic behind an Elastic Load Balancing load balancer is fundamental to CodeDeploy blue/green deployments. 

During a blue/green deployment, the load balancer allows traffic to be routed to the new instances in a deployment group that the latest application revision has been deployed to (the replacement environment), according to the rules you specify, and then blocks traffic from the old instances where the previous application revision was running (the original environment).

After instances in a replacement environment are registered with one or more load balancers, instances from the original environment are deregistered and, if you choose, terminated.

For a blue/green deployment, you can specify one or more Classic Load Balancers, Application Load Balancer target groups, or Network Load Balancer target groups in your deployment group. You use the CodeDeploy console or AWS CLI to add the load balancers to a deployment group.

For more information about load balancers in blue/green deployments, see the following topics:
+ [Set up a load balancer in Elastic Load Balancing for CodeDeploy Amazon EC2 deployments](deployment-groups-create-load-balancer.md)
+ [Create an application for a blue/green deployment (console)](applications-create-blue-green.md)
+ [Create a deployment group for an EC2/On-Premises blue/green deployment (console)](deployment-groups-create-blue-green.md)

## In-place deployments
<a name="integrations-aws-elastic-load-balancing-in-place"></a>

During an in-place deployment, a load balancer prevents internet traffic from being routed to an instance while it is being deployed to, and then makes the instance available for traffic again after the deployment to that instance is complete.

If a load balancer isn't used during an in-place deployment, internet traffic might still be directed to an instance during the deployment process. As a result, your customers might encounter broken, incomplete, or outdated web applications. When you use an Elastic Load Balancing load balancer with an in-place deployment, instances in a deployment group are deregistered from the load balancer, updated with the latest application revision, and then reregistered with the load balancer as part of the same deployment group after the deployment is successful. CodeDeploy waits for up to one hour for the instance to become healthy behind the load balancer. If the load balancer does not mark the instance as healthy during that waiting period, CodeDeploy either moves on to the next instance or fails the deployment, depending on the deployment configuration.

For an in-place deployment, you can specify one or more Classic Load Balancers, Application Load Balancer target groups, or Network Load Balancer target groups. You can specify the load balancers as part of the deployment group's configuration, or you can use a script provided by CodeDeploy to implement the load balancers.

### Specify in-place deployment load balancer using a deployment group
<a name="integrations-aws-elastic-load-balancing-in-place-deployment-group"></a>

To add load balancers to a deployment group, you use the CodeDeploy console or AWS CLI. For information about specifying a load balancer in a deployment group for in-place deployments, see the following topics:
+ [Create an application for an in-place deployment (console)](applications-create-in-place.md)
+ [Create a deployment group for an in-place deployment (console)](deployment-groups-create-in-place.md)
+ [Set up a load balancer in Elastic Load Balancing for CodeDeploy Amazon EC2 deployments](deployment-groups-create-load-balancer.md)

### Specify in-place deployment load balancer using a script
<a name="integrations-aws-elastic-load-balancing-in-place-script"></a>

Follow the steps in this procedure to use deployment lifecycle event scripts to set up load balancing for in-place deployments.
**Note**  
You should use the CodeDeployDefault.OneAtATime deployment configuration only when you are using a script to set up a load balancer for an in-place deployment. Concurrent runs are not supported, and the CodeDeployDefault.OneAtATime setting ensures a serial execution of the scripts. For more information about deployment configurations, see [Working with deployment configurations in CodeDeploy](deployment-configurations.md).

In the CodeDeploy Samples repository on GitHub, we provide instructions and samples you can adapt to use Elastic Load Balancing load balancers with CodeDeploy. The samples include three scripts (`register_with_elb.sh`, `deregister_from_elb.sh`, and `common_functions.sh`) that provide all of the code you need to get started. Edit the placeholders in these three scripts, and then reference the scripts from your `appspec.yml` file.
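For example, an `appspec.yml` that wires the sample scripts into the deployment lifecycle might look like the following sketch (the `files` section and `scripts` folder path are illustrative; adjust them to match your revision's layout):

```yaml
version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/myapp
hooks:
  # Take this instance out of the load balancer before the old app stops.
  ApplicationStop:
    - location: scripts/deregister_from_elb.sh
      timeout: 400
  # Put the instance back behind the load balancer once the new app is running.
  ApplicationStart:
    - location: scripts/register_with_elb.sh
      timeout: 120
```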

To set up in-place deployments in CodeDeploy with Amazon EC2 instances that are registered with Elastic Load Balancing load balancers, do the following:

1. Download the samples for the type of load balancer you want to use for an in-place deployment:
   + [Classic Load Balancer](https://github.com/awslabs/aws-codedeploy-samples/tree/master/load-balancing/elb)
   + [Application Load Balancer or Network Load Balancer](https://github.com/awslabs/aws-codedeploy-samples/tree/master/load-balancing/elb-v2) (the same script can be used for either type)

1. Make sure each of your target Amazon EC2 instances has the AWS CLI installed. 

1. Make sure each of your target Amazon EC2 instances has an IAM instance profile attached with, at minimum, the `elasticloadbalancing:*` and `autoscaling:*` permissions.

1. Include in your application's source code directory the deployment lifecycle event scripts (`register_with_elb.sh`, `deregister_from_elb.sh`, and `common_functions.sh`).

1. In the `appspec.yml` for the application revision, provide instructions for CodeDeploy to run the `register_with_elb.sh` script during the **ApplicationStart** event and the `deregister_from_elb.sh` script during the **ApplicationStop** event.

1. If the instance is part of an Amazon EC2 Auto Scaling group, you can skip this step.

   In the `common_functions.sh` script:
   + If you are using a [Classic Load Balancer](https://github.com/awslabs/aws-codedeploy-samples/tree/master/load-balancing/elb), specify the names of the Elastic Load Balancing load balancers in `ELB_LIST=""`, and make any changes you need to the other deployment settings in the file.
   + If you are using an [Application Load Balancer or Network Load Balancer](https://github.com/awslabs/aws-codedeploy-samples/tree/master/load-balancing/elb-v2), specify the names of the Elastic Load Balancing target groups in `TARGET_GROUP_LIST=""`, and make any changes you need to the other deployment settings in the file.

1. Bundle your application's source code, the `appspec.yml`, and the deployment lifecycle event scripts into an application revision, and then upload the revision. Deploy the revision to the Amazon EC2 instances. During the deployment, the deployment lifecycle event scripts deregister each Amazon EC2 instance from the load balancer, wait for the connection to drain, and then re-register the instance with the load balancer after the deployment is complete.
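The instance profile permissions mentioned in step 3 could be granted with a policy similar to the following sketch (broad wildcards are used here for illustration only; scope the actions and resources down for production use):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "elasticloadbalancing:*",
        "autoscaling:*"
      ],
      "Resource": "*"
    }
  ]
}
```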

# Integration with partner products and services
<a name="integrations-partners"></a>

CodeDeploy has built-in integration for the following partner products and services:


|  |  | 
| --- |--- |
| Ansible |  If you already have a set of [Ansible](http://www.ansible.com) playbooks, but just need somewhere to run them, the template for Ansible and CodeDeploy demonstrates how a couple of simple deployment hooks can ensure Ansible is available on the local deployment instance and runs the playbooks. If you already have a process for building and maintaining your inventory, there's also an Ansible module you can use to install and run the CodeDeploy agent. Learn more: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/codedeploy/latest/userguide/integrations-partners.html)  | 
| Atlassian – Bamboo and Bitbucket |  The CodeDeploy task for [Bamboo](https://www.atlassian.com/software/bamboo/) compresses the directory that contains an AppSpec file into a .zip file, uploads the file to Amazon S3, and then starts the deployment according to the configuration provided in the CodeDeploy application.  Atlassian Bitbucket support for CodeDeploy enables you to push code to Amazon EC2 instances directly from the Bitbucket UI, on demand, to any of your deployment groups. This means that after you update code in your Bitbucket repository, you do not have to sign in to your continuous integration (CI) platform or Amazon EC2 instances to run a manual deployment process.  Learn more: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/codedeploy/latest/userguide/integrations-partners.html)  | 
| Chef |  AWS provides two template samples for integrating [Chef](https://www.chef.io/) and CodeDeploy. The first is a Chef cookbook that installs and starts the CodeDeploy agent. This allows you to continue managing your host infrastructure with Chef while using CodeDeploy. The second sample template demonstrates how to use CodeDeploy to orchestrate the running of cookbooks and recipes with chef-solo on each node. Learn more: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/codedeploy/latest/userguide/integrations-partners.html)  | 
| CircleCI |  [CircleCI](https://circleci.com/) provides an automated testing and continuous integration and deployment toolset. After you create an IAM role in AWS to use with CircleCI and configure your deployment parameters in your circle.yml file, you can use CircleCI with CodeDeploy to create application revisions, upload them to an Amazon S3 bucket, and then initiate and monitor your deployments. Learn more: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/codedeploy/latest/userguide/integrations-partners.html)  | 
| CloudBees |  You can use the CodeDeploy Jenkins plugin, available on [CloudBees](https://www.cloudbees.com/) DEV@cloud, as a post-build action. For example, at the end of a continuous delivery pipeline, you can use it to deploy an application revision to your fleet of servers. Learn more: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/codedeploy/latest/userguide/integrations-partners.html)  | 
| Codeship |  You can use [Codeship](https://codeship.com/) to deploy application revisions through CodeDeploy. You can use the Codeship UI to add CodeDeploy to a deployment pipeline for a branch. Learn more:  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/codedeploy/latest/userguide/integrations-partners.html)  | 
| GitHub |  You can use CodeDeploy to deploy application revisions from [GitHub](http://www.github.com) repositories. You can also trigger a deployment from a GitHub repository whenever the source code in that repository is changed. Learn more: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/codedeploy/latest/userguide/integrations-partners.html)  | 
|  **HashiCorp Consul**  |  You can use the open-source HashiCorp Consul tool to help ensure the health and stability of your application environment when you deploy applications in CodeDeploy. You can use Consul to register applications to be discovered during deployment, put applications and nodes in maintenance mode to omit them from deployments, and stop deployments if target instances become unhealthy. Learn more: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/codedeploy/latest/userguide/integrations-partners.html)  | 
| Jenkins |  The CodeDeploy [Jenkins](http://jenkins-ci.org/) plugin provides a post-build step for your Jenkins project. Upon a successful build, it zips the workspace, uploads to Amazon S3, and starts a new deployment. Learn more:  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/codedeploy/latest/userguide/integrations-partners.html)  | 
| Puppet Labs |  AWS provides sample templates for [Puppet](https://puppetlabs.com/) and CodeDeploy. The first is a Puppet module that installs and starts the CodeDeploy agent. This allows you to continue managing your host infrastructure with Puppet while using CodeDeploy. The second sample template demonstrates how to use CodeDeploy to orchestrate the running of modules and manifests with a masterless puppet on each node. Learn more:  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/codedeploy/latest/userguide/integrations-partners.html)  | 
| SaltStack |  You can integrate [SaltStack](https://saltproject.io/index.html) infrastructure with CodeDeploy. You can use the CodeDeploy module to install and run the CodeDeploy agent on your minions or, with a couple of simple deployment hooks, you can use CodeDeploy to orchestrate the running of your Salt States. Learn more:  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/codedeploy/latest/userguide/integrations-partners.html)  | 
|  **TeamCity**  |  You can use the CodeDeploy Runner plugin to deploy applications directly from TeamCity. The plugin adds a TeamCity build step that prepares and uploads an application revision to an Amazon S3 bucket, registers the revision in a CodeDeploy application, creates a CodeDeploy deployment and, if you choose, waits for the deployment to be completed. Learn more: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/codedeploy/latest/userguide/integrations-partners.html)  | 
| Travis CI |  You can configure [Travis CI](https://travis-ci.com/) to trigger a deployment in CodeDeploy after a successful build. Learn more:  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/codedeploy/latest/userguide/integrations-partners.html)  | 

**Topics**
+ [GitHub](integrations-partners-github.md)

# Integrating CodeDeploy with GitHub
<a name="integrations-partners-github"></a>

CodeDeploy supports [GitHub](https://github.com/about), a web-based code hosting and sharing service. CodeDeploy can deploy application revisions stored in GitHub repositories or Amazon S3 buckets to instances. CodeDeploy supports GitHub for EC2/On-Premises deployments only.

**Topics**
+ [Deploying CodeDeploy revisions from GitHub](#github-deployment-steps)
+ [GitHub behaviors with CodeDeploy](#github-behaviors)

## Deploying CodeDeploy revisions from GitHub
<a name="github-deployment-steps"></a>

To deploy an application revision from a GitHub repository to instances:

1. Create a revision that's compatible with CodeDeploy and the Amazon EC2 instance type to which you will deploy.

   To create a compatible revision, follow the instructions in [Plan a revision for CodeDeploy](application-revisions-plan.md) and [Add an application specification file to a revision for CodeDeploy](application-revisions-appspec-file.md). 

1. Use a GitHub account to add your revision to a GitHub repository.

   To create a GitHub account, see [Join GitHub](https://github.com/join). To create a GitHub repository, see [Create a repo](https://help.github.com/articles/create-a-repo/).

1. Use the **Create deployment** page in the CodeDeploy console or the AWS CLI **create-deployment** command to deploy your revision from your GitHub repository to target instances configured for use in CodeDeploy deployments.

   If you want to call the **create-deployment** command, you must first use the **Create deployment** page of the console to give CodeDeploy permission to interact with GitHub on behalf of your preferred GitHub account for the specified application. You only need to do this once per application.

   To learn how to use the **Create deployment** page to deploy from a GitHub repository, see [Create a deployment with CodeDeploy](deployments-create.md).

   To learn how to call the **create-deployment** command to deploy from a GitHub repository, see [Create an EC2/On-Premises Compute Platform deployment (CLI)](deployments-create-cli.md).

   To learn how to prepare instances for use in CodeDeploy deployments, see [Working with instances for CodeDeploy](instances.md).
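As a sketch, a **create-deployment** call against a GitHub revision might look like the following (the application name, deployment group name, repository, and commit ID are placeholders):

```shell
# Hypothetical names; the commit ID must be a real commit in your repository.
aws deploy create-deployment \
  --application-name MyApplication \
  --deployment-group-name MyDeploymentGroup \
  --github-location repository=my-github-account/my-repository,commitId=0123abcd0123abcd0123abcd0123abcd0123abcd
```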

For more information, see [Tutorial: Use CodeDeploy to deploy an application from GitHub](tutorials-github.md).

## GitHub behaviors with CodeDeploy
<a name="github-behaviors"></a>

**Topics**
+ [GitHub authentication with applications in CodeDeploy](#behaviors-authentication)
+ [CodeDeploy interaction with private and public GitHub repositories](#behaviors-interactions-private-and-public)
+ [CodeDeploy interaction with organization-managed GitHub repositories](#behaviors-interactions-organization-managed)
+ [Automatically deploy from CodePipeline with CodeDeploy](#behaviors-deploy-automatically)

### GitHub authentication with applications in CodeDeploy
<a name="behaviors-authentication"></a>

After you give CodeDeploy permission to interact with GitHub, the association between that GitHub account and application is stored in CodeDeploy. You can link the application to a different GitHub account. You can also revoke permission for CodeDeploy to interact with GitHub.

**To link a GitHub account to an application in CodeDeploy**

1. Sign in to the AWS Management Console and open the CodeDeploy console at [https://console.aws.amazon.com/codedeploy](https://console.aws.amazon.com/codedeploy).
**Note**  
Sign in with the same user that you set up in [Getting started with CodeDeploy](getting-started-codedeploy.md).

1. In the navigation pane, expand **Deploy**, then choose **Applications**.

1. Choose the application you want to link to a different GitHub account.

1. If your application does not have a deployment group, choose **Create deployment group** to create one. For more information, see [Create a deployment group with CodeDeploy](deployment-groups-create.md). A deployment group is required to choose **Create deployment** in the next step.

1.  From **Deployments**, choose **Create deployment**. 
**Note**  
You don't have to create a new deployment. This is currently the only way to link a different GitHub account to an application.

1.  In **Deployment settings**, for **Revision type**, choose **My application is stored in GitHub**. 

1. Do one of the following:
   + To create a connection for AWS CodeDeploy applications to a GitHub account, sign out of GitHub in a separate web browser tab. In **GitHub token name**, type a name to identify this connection, and then choose **Connect to GitHub**. The web page prompts you to authorize CodeDeploy to interact with GitHub for your application. Continue to step 8.
   + To use a connection you have already created, in **GitHub token name**, select its name, and then choose **Connect to GitHub**. Continue to step 10.
   + To create a connection to a different GitHub account, sign out of GitHub in a separate web browser tab. In **GitHub token name**, type a name to identify the connection, and then choose **Connect to GitHub**. The web page prompts you to authorize CodeDeploy to interact with GitHub for your application. Continue to step 8.

1. If you are not already signed in to GitHub, follow the instructions on the **Sign in** page to sign in with the GitHub account to which you want to link the application.

1. Choose **Authorize application**. GitHub gives CodeDeploy permission to interact with GitHub on behalf of the signed-in GitHub account for the selected application. 

1. If you do not want to create a deployment, choose **Cancel**.

**To revoke permission for CodeDeploy to interact with GitHub**

1. Sign in to [GitHub](https://github.com/dashboard) using credentials for the GitHub account in which you want to revoke AWS CodeDeploy permission.

1. Open the GitHub [Applications](https://github.com/settings/applications) page, locate **CodeDeploy** in the list of authorized applications, and then follow the GitHub procedure for revoking authorization for an application.

### CodeDeploy interaction with private and public GitHub repositories
<a name="behaviors-interactions-private-and-public"></a>

CodeDeploy supports the deployment of applications from private and public GitHub repositories. When you give CodeDeploy permission to access GitHub on your behalf, CodeDeploy will have read-write access to all of the private GitHub repositories to which your GitHub account has access. However, CodeDeploy only reads from GitHub repositories. It will not write to any of your private GitHub repositories.

### CodeDeploy interaction with organization-managed GitHub repositories
<a name="behaviors-interactions-organization-managed"></a>

By default, GitHub repositories that are managed by an organization (as opposed to your account's own private or public repositories) do not grant access to third-party applications, including CodeDeploy. Your deployment will fail if an organization's third-party application restrictions are enabled in GitHub and you attempt to deploy code from its GitHub repository. There are two ways to resolve this issue. 
+ As an organization member, you can ask the organization owner to approve access to CodeDeploy. The steps for requesting this access depend on whether you have already authorized CodeDeploy for your individual account:
  + If you have authorized access to CodeDeploy in your account, see [Requesting organization approval for your authorized applications](https://help.github.com/articles/requesting-organization-approval-for-your-authorized-applications/).
  + If you have not yet authorized access to CodeDeploy in your account, see [Requesting organization approval for third-party applications](https://help.github.com/articles/requesting-organization-approval-for-third-party-applications/).
+ The organization owner can disable all third-party application restrictions for the organization. For information, see [Disabling third-party application restrictions for your organization](https://help.github.com/articles/disabling-third-party-application-restrictions-for-your-organization/).

For more information, see [About third-party application restrictions](https://help.github.com/articles/about-third-party-application-restrictions/).

### Automatically deploy from CodePipeline with CodeDeploy
<a name="behaviors-deploy-automatically"></a>

You can use CodePipeline to trigger a deployment whenever the source code changes. For more information, see [CodePipeline](https://aws.amazon.com/codepipeline/).

# Integration examples from the community
<a name="integrations-community"></a>

The following sections provide links to blog posts, articles, and community-provided examples.

**Note**  
These links are provided for informational purposes only, and should not be considered either a comprehensive list or an endorsement of the content of the examples. AWS is not responsible for the content or accuracy of external content. 

## Blog posts
<a name="integrations-community-blogposts"></a>
+ [Automating CodeDeploy provisioning in CloudFormation](http://www.stelligent.com/cloud/automating-aws-codedeploy-provisioning-in-cloudformation/)

  Learn how to provision the deployment of an application in CodeDeploy by using CloudFormation.

  *Published January 2016*
+ [AWS Toolkit for Eclipse Integration with CodeDeploy (Part 1)](https://aws.amazon.com/blogs/developer/aws-toolkit-for-eclipse-integration-with-aws-codedeploy-part-1/)

  [AWS Toolkit for Eclipse Integration with CodeDeploy (Part 2)](https://aws.amazon.com/blogs/developer/aws-toolkit-for-eclipse-integration-with-aws-codedeploy-part-2/)

  [AWS Toolkit for Eclipse Integration with CodeDeploy (Part 3)](https://aws.amazon.com/blogs/developer/aws-toolkit-for-eclipse-integration-with-aws-codedeploy-part-3/)

  Learn how Java developers can use the CodeDeploy plugin for Eclipse to deploy web applications to AWS directly from Eclipse development environments.

  *Published February 2015*
+ [Automatically deploy from GitHub using CodeDeploy](https://aws.amazon.com/blogs/devops/automatically-deploy-from-github-using-aws-codedeploy/)

  Learn how automatic deployments from GitHub to CodeDeploy can be used to create an end-to-end pipeline — from source control to your testing or production environments. 

  *Published December 2014*