

# Data protection in AWS CodePipeline
<a name="data-protection"></a>

The AWS [shared responsibility model](https://aws.amazon.com/compliance/shared-responsibility-model/) applies to data protection in AWS CodePipeline. As described in this model, AWS is responsible for protecting the global infrastructure that runs all of the AWS Cloud. You are responsible for maintaining control over your content that is hosted on this infrastructure. You are also responsible for the security configuration and management tasks for the AWS services that you use. For more information about data privacy, see the [Data Privacy FAQ](https://aws.amazon.com/compliance/data-privacy-faq/). For information about data protection in Europe, see the [AWS Shared Responsibility Model and GDPR](https://aws.amazon.com/blogs/security/the-aws-shared-responsibility-model-and-gdpr/) blog post on the *AWS Security Blog*.

For data protection purposes, we recommend that you protect AWS account credentials and set up individual users with AWS IAM Identity Center or AWS Identity and Access Management (IAM). That way, each user is given only the permissions necessary to fulfill their job duties. We also recommend that you secure your data in the following ways:
+ Use multi-factor authentication (MFA) with each account.
+ Use SSL/TLS to communicate with AWS resources. We require TLS 1.2 and recommend TLS 1.3.
+ Set up API and user activity logging with AWS CloudTrail. For information about using CloudTrail trails to capture AWS activities, see [Working with CloudTrail trails](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-trails.html) in the *AWS CloudTrail User Guide*.
+ Use AWS encryption solutions, along with all default security controls within AWS services.
+ Use advanced managed security services such as Amazon Macie, which assists in discovering and securing sensitive data that is stored in Amazon S3.
+ If you require FIPS 140-3 validated cryptographic modules when accessing AWS through a command line interface or an API, use a FIPS endpoint. For more information about the available FIPS endpoints, see [Federal Information Processing Standard (FIPS) 140-3](https://aws.amazon.com/compliance/fips/).

We strongly recommend that you never put confidential or sensitive information, such as your customers' email addresses, into tags or free-form text fields such as a **Name** field. This includes when you work with CodePipeline or other AWS services using the console, API, AWS CLI, or AWS SDKs. Any data that you enter into tags or free-form text fields used for names may be used for billing or diagnostic logs. If you provide a URL to an external server, we strongly recommend that you do not include credentials information in the URL to validate your request to that server.

The following security best practices also address data protection in CodePipeline:
+ [Configure server-side encryption for artifacts stored in Amazon S3 for CodePipeline](S3-artifact-encryption.md)
+ [Use AWS Secrets Manager to track database passwords or third-party API keys](parameter-store-encryption.md)

## Internetwork traffic privacy
<a name="inter-network-traffic-privacy"></a>

Amazon VPC is an AWS service that you can use to launch AWS resources in a virtual network (*virtual private cloud*) that you define. CodePipeline supports Amazon VPC endpoints powered by AWS PrivateLink, an AWS technology that facilitates private communication between AWS services using an elastic network interface with private IP addresses. This means you can connect directly to CodePipeline through a private endpoint in your VPC, keeping all traffic inside your VPC and the AWS network. Previously, applications running inside a VPC required internet access to connect to CodePipeline. With a VPC, you have control over your network settings, such as:
+ IP address range
+ Subnets
+ Route tables
+ Network gateways

To connect your VPC to CodePipeline, you define an interface VPC endpoint for CodePipeline. This type of endpoint makes it possible for you to connect your VPC to AWS services. The endpoint provides reliable, scalable connectivity to CodePipeline without requiring an internet gateway, network address translation (NAT) instance, or VPN connection. For information about setting up a VPC, see the [VPC User Guide](https://docs.aws.amazon.com/vpc/latest/userguide/).
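As a sketch, an interface VPC endpoint for CodePipeline can be declared in a CloudFormation template as follows. The VPC, subnet, and security group IDs are placeholders for resources in your account, and the service name must match the Region where your pipeline runs (here, `us-west-2` is assumed).

```json
{
    "CodePipelineEndpoint": {
        "Type": "AWS::EC2::VPCEndpoint",
        "Properties": {
            "VpcEndpointType": "Interface",
            "ServiceName": "com.amazonaws.us-west-2.codepipeline",
            "VpcId": "vpc-EXAMPLE11111",
            "SubnetIds": ["subnet-EXAMPLE22222"],
            "SecurityGroupIds": ["sg-EXAMPLE33333"],
            "PrivateDnsEnabled": true
        }
    }
}
```

With `PrivateDnsEnabled` set to `true`, calls to the default CodePipeline endpoint from inside the VPC resolve to the private endpoint, so existing tooling does not need to be reconfigured.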

## Encryption at rest
<a name="encryption-at-rest"></a>

Data in CodePipeline is encrypted at rest using AWS KMS keys. Code artifacts are stored in a customer-owned S3 bucket and encrypted with either the AWS managed key or a customer managed key. For more information, see [Configure server-side encryption for artifacts stored in Amazon S3 for CodePipeline](S3-artifact-encryption.md).

## Encryption in transit
<a name="encryption-in-transit"></a>

All service-to-service communication is encrypted in transit using SSL/TLS. 

## Encryption key management
<a name="key-management"></a>

If you choose the default option for encrypting code artifacts, CodePipeline uses the AWS managed key. You cannot change or delete this AWS managed key. If you use a customer managed key in AWS KMS to encrypt or decrypt artifacts in the S3 bucket, you can change or rotate this customer managed key as necessary.

**Important**  
CodePipeline only supports symmetric KMS keys. Do not use an asymmetric KMS key to encrypt the data in your S3 bucket.

**Topics**
+ [Configure server-side encryption for artifacts stored in Amazon S3 for CodePipeline](S3-artifact-encryption.md)
+ [Use AWS Secrets Manager to track database passwords or third-party API keys](parameter-store-encryption.md)

# Configure server-side encryption for artifacts stored in Amazon S3 for CodePipeline
<a name="S3-artifact-encryption"></a>

There are two ways to configure server-side encryption for Amazon S3 artifacts:
+ CodePipeline creates an S3 artifact bucket and default AWS managed key when you create a pipeline using the Create Pipeline wizard. The AWS managed key encrypts the object data, and AWS manages the key.
+ You can create and manage your own customer managed key.

**Important**  
CodePipeline only supports symmetric KMS keys. Do not use an asymmetric KMS key to encrypt the data in your S3 bucket.

If you are using the default S3 key, you cannot change or delete this AWS managed key. If you are using a customer managed key in AWS KMS to encrypt or decrypt artifacts in the S3 bucket, you can change or rotate this customer managed key as necessary.

Amazon S3 supports bucket policies that you can use if you require server-side encryption for all objects that are stored in your bucket. For example, the following bucket policy denies upload object (`s3:PutObject`) permission to everyone if the request does not include the `x-amz-server-side-encryption` header requesting server-side encryption with SSE-KMS.


```json
{
    "Version":"2012-10-17",		 	 	 
    "Id": "SSEAndSSLPolicy",
    "Statement": [
        {
            "Sid": "DenyUnEncryptedObjectUploads",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::codepipeline-us-west-2-89050EXAMPLE/*",
            "Condition": {
                "StringNotEquals": {
                    "s3:x-amz-server-side-encryption": "aws:kms"
                }
            }
        },
        {
            "Sid": "DenyInsecureConnections",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::codepipeline-us-west-2-89050EXAMPLE/*",
            "Condition": {
                "Bool": {
                    "aws:SecureTransport": "false"
                }
            }
        }
    ]
}
```


For more information about server-side encryption and AWS KMS, see [Protecting Data Using Server-Side Encryption](https://docs.aws.amazon.com/AmazonS3/latest/userguide/serv-side-encryption.html) and [Protecting data using server-side encryption with KMS keys stored in AWS Key Management Service (SSE-KMS)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingKMSEncryption.html).

For more information about AWS KMS, see the [AWS Key Management Service Developer Guide](https://docs.aws.amazon.com/kms/latest/developerguide/).

**Topics**
+ [View your AWS managed key](#S3-view-default-keys)
+ [Configure server-side encryption for S3 buckets using CloudFormation or the AWS CLI](#S3-rotate-customer-key)

## View your AWS managed key
<a name="S3-view-default-keys"></a>

When you use the **Create Pipeline** wizard to create your first pipeline, an S3 bucket is created for you in the same Region you created the pipeline. The bucket is used to store pipeline artifacts. When a pipeline runs, artifacts are put into and retrieved from the S3 bucket. By default, CodePipeline uses server-side encryption with AWS KMS using the AWS managed key for Amazon S3 (the `aws/s3` key). This AWS managed key is created and stored in your AWS account. When artifacts are retrieved from the S3 bucket, CodePipeline uses the same SSE-KMS process to decrypt the artifact.
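As a sketch, the same default behavior can be expressed when you define an artifact bucket yourself in CloudFormation. The bucket name below is a placeholder; because no `KMSMasterKeyID` is specified, Amazon S3 falls back to the AWS managed `aws/s3` key.

```json
{
    "ArtifactBucket": {
        "Type": "AWS::S3::Bucket",
        "Properties": {
            "BucketName": "codepipeline-us-east-2-EXAMPLE",
            "BucketEncryption": {
                "ServerSideEncryptionConfiguration": [
                    {
                        "ServerSideEncryptionByDefault": {
                            "SSEAlgorithm": "aws:kms"
                        }
                    }
                ]
            }
        }
    }
}
```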

**To view information about your AWS managed key**

1. Sign in to the AWS Management Console and open the AWS KMS console.

1. If a welcome page appears, choose **Get started now**.

1. In the service navigation pane, choose **AWS managed keys**. 

1. Choose the Region for your pipeline. For example, if the pipeline was created in `us-east-2`, make sure that the filter is set to US East (Ohio).

   For more information about the Regions and endpoints available for CodePipeline, see [AWS CodePipeline endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/codepipeline.html).

1. In the list, choose the key with the alias used for your pipeline (by default, **aws/s3**). Basic information about the key is displayed.



## Configure server-side encryption for S3 buckets using CloudFormation or the AWS CLI
<a name="S3-rotate-customer-key"></a>

When you use CloudFormation or the AWS CLI to create a pipeline, you must configure server-side encryption manually. Use the sample bucket policy shown earlier, and then create your own customer managed key. You can use a customer managed key instead of the AWS managed key. Some reasons to choose your own key include:
+ You want to rotate the key on a schedule to meet business or security requirements for your organization.
+ You want to create a pipeline that uses resources associated with another AWS account. This requires the use of a customer managed key. For more information, see [Create a pipeline in CodePipeline that uses resources from another AWS account](pipelines-create-cross-account.md). 
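A hedged sketch of pointing a pipeline at a customer managed key follows, using the `ArtifactStore` property of an `AWS::CodePipeline::Pipeline` resource. The bucket name and key ARN are placeholders for resources in your account.

```json
{
    "ArtifactStore": {
        "Type": "S3",
        "Location": "codepipeline-us-west-2-EXAMPLE",
        "EncryptionKey": {
            "Id": "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab",
            "Type": "KMS"
        }
    }
}
```

When the `EncryptionKey` element is omitted, CodePipeline uses the AWS managed key instead.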

Cryptographic best practices discourage extensive reuse of encryption keys. As a best practice, rotate your key on a regular basis. To create new cryptographic material for your AWS KMS keys, you can create a customer managed key, and then change your applications or aliases to use the new customer managed key. Or, you can enable automatic key rotation for an existing customer managed key. 

To rotate your customer managed key, see [Rotating keys](https://docs.aws.amazon.com/kms/latest/developerguide/rotate-keys.html). 
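For example, automatic rotation can be turned on when you define a customer managed key in CloudFormation by setting `EnableKeyRotation`. This is a minimal sketch: the key policy below simply grants the account root full access, and the account ID is a placeholder.

```json
{
    "ArtifactEncryptionKey": {
        "Type": "AWS::KMS::Key",
        "Properties": {
            "Description": "Customer managed key for CodePipeline artifacts",
            "EnableKeyRotation": true,
            "KeyPolicy": {
                "Version": "2012-10-17",
                "Statement": [
                    {
                        "Sid": "AllowRootAccountAdmin",
                        "Effect": "Allow",
                        "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
                        "Action": "kms:*",
                        "Resource": "*"
                    }
                ]
            }
        }
    }
}
```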

**Important**  
CodePipeline only supports symmetric KMS keys. Do not use an asymmetric KMS key to encrypt the data in your S3 bucket.

# Use AWS Secrets Manager to track database passwords or third-party API keys
<a name="parameter-store-encryption"></a>

We recommend that you use AWS Secrets Manager to rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle. Secrets Manager enables you to replace hardcoded credentials in your code (including passwords) with an API call to Secrets Manager to retrieve the secret programmatically. For more information, see [What Is AWS Secrets Manager?](https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html) in the *AWS Secrets Manager User Guide*.

For pipelines where you pass parameters that are secrets (such as OAuth credentials) in a CloudFormation template, you should include dynamic references in your template that access the secrets you have stored in Secrets Manager. For the reference ID pattern and examples, see [Secrets Manager Secrets](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/dynamic-references.html#dynamic-references-secretsmanager) in the *AWS CloudFormation User Guide*. For an example that uses dynamic references in a template snippet for a GitHub webhook in a pipeline, see [Webhook Resource Configuration](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-codepipeline-webhook.html#aws-resource-codepipeline-webhook--examples).
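A hedged sketch of such a dynamic reference in a webhook resource follows. The secret name (`MyGitHubSecret`), its JSON key (`token`), and the pipeline and action names are hypothetical and must match resources in your account.

```json
{
    "PipelineWebhook": {
        "Type": "AWS::CodePipeline::Webhook",
        "Properties": {
            "Authentication": "GITHUB_HMAC",
            "AuthenticationConfiguration": {
                "SecretToken": "{{resolve:secretsmanager:MyGitHubSecret:SecretString:token}}"
            },
            "Filters": [
                {
                    "JsonPath": "$.ref",
                    "MatchEquals": "refs/heads/{Branch}"
                }
            ],
            "TargetPipeline": "MyPipeline",
            "TargetAction": "Source",
            "TargetPipelineVersion": 1
        }
    }
}
```

Because the secret is resolved by CloudFormation at deployment time, the plaintext value never appears in the template itself.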



## See also
<a name="related-resources-managing-secrets"></a>

The following related resources can help you as you work with managing secrets.
+ Secrets Manager can rotate database credentials automatically, such as for rotation of Amazon RDS secrets. For more information, see [Rotating Your AWS Secrets Manager Secrets](https://docs.aws.amazon.com/secretsmanager/latest/userguide/rotating-secrets.html) in the *AWS Secrets Manager User Guide*.
+ To view instructions for adding Secrets Manager dynamic references to your CloudFormation templates, see [https://aws.amazon.com/blogs/security/how-to-create-and-retrieve-secrets-managed-in-aws-secrets-manager-using-aws-cloudformation-template/](https://aws.amazon.com/blogs/security/how-to-create-and-retrieve-secrets-managed-in-aws-secrets-manager-using-aws-cloudformation-template/). 