

Amazon Redshift will no longer support the creation of new Python UDFs starting with Patch 198. Existing Python UDFs will continue to function until June 30, 2026. For more information, see the [blog post](https://aws.amazon.com/blogs/big-data/amazon-redshift-python-user-defined-functions-will-reach-end-of-support-after-june-30-2026/).

# Loading data from Amazon S3
<a name="t_Loading-data-from-S3"></a>

The COPY command uses the Amazon Redshift massively parallel processing (MPP) architecture to read and load data in parallel from one or more files in an Amazon S3 bucket. When the files are compressed, you can take maximum advantage of parallel processing by splitting your data into multiple files. (There are exceptions to this rule. These are detailed in [Loading data files](https://docs.aws.amazon.com/redshift/latest/dg/c_best-practices-use-multiple-files.html).) You can also take maximum advantage of parallel processing by setting distribution keys on your tables. For more information about distribution keys, see [Data distribution for query optimization](t_Distributing_data.md).

Data is loaded into the target table, one line per row. The fields in the data file are matched to table columns in order, left to right. Fields in the data files can be fixed-width or character delimited; the default delimiter is a pipe ( | ). By default, all the table columns are loaded, but you can optionally define a comma-separated list of columns. If a table column is not included in the column list specified in the COPY command, it is loaded with a default value. For more information, see [Loading default column values](c_loading_default_values.md).

**Topics**
+ [Loading data from compressed and uncompressed files](t_splitting-data-files.md)
+ [Uploading files to Amazon S3 to use with COPY](t_uploading-data-to-S3.md)
+ [Using the COPY command to load from Amazon S3](t_loading-tables-from-s3.md)

# Loading data from compressed and uncompressed files
<a name="t_splitting-data-files"></a>

When you load compressed data, we recommend that you split the data for each table into multiple files. When you load uncompressed, delimited data, the COPY command uses massively parallel processing (MPP) and scan ranges to load data from large files in an Amazon S3 bucket.

## Loading data from multiple compressed files
<a name="t_splitting-data-files-compressed"></a>

In cases where you have compressed data, we recommend that you split the data for each table into multiple files. The COPY command can load data from multiple files in parallel. You can load multiple files by specifying a common prefix, or *prefix key*, for the set, or by explicitly listing the files in a manifest file.

Split your data into files so that the number of files is a multiple of the number of slices in your cluster. That way, Amazon Redshift can divide the data evenly among the slices. The number of slices per node depends on the node size of the cluster. For example, each dc2.large compute node has two slices, and each dc2.8xlarge compute node has 16 slices. For more information about the number of slices that each node size has, see [About clusters and nodes](https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-clusters.html#rs-about-clusters-and-nodes) in the *Amazon Redshift Management Guide*. 

The nodes all participate in running parallel queries, working on data that is distributed as evenly as possible across the slices. If you have a cluster with two dc2.large nodes, you might split your data into four files or some multiple of four. Amazon Redshift doesn't take file size into account when dividing the workload. Thus, you need to ensure that the files are roughly the same size, from 1 MB to 1 GB after compression. 
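The sizing guidance above can be sketched in a few lines of Python. This is an illustrative sketch only: the 256 MB per-file target and the round-robin line distribution are assumptions chosen for the example, not Redshift requirements, and the helper names are hypothetical.

```python
def recommended_file_count(num_nodes: int, slices_per_node: int, total_bytes: int,
                           target_bytes: int = 256 * 1024 * 1024) -> int:
    """Pick a file count that is a multiple of the cluster's slice count,
    aiming for roughly target_bytes per file (within the 1 MB-1 GB guidance)."""
    slices = num_nodes * slices_per_node
    multiple = max(1, round(total_bytes / target_bytes / slices))
    return slices * multiple

def split_lines(lines, num_files):
    """Distribute lines round-robin into num_files chunks of roughly equal size."""
    chunks = [[] for _ in range(num_files)]
    for i, line in enumerate(lines):
        chunks[i % num_files].append(line)
    return chunks
```

For a two-node dc2.large cluster (four slices) holding about 1 GB of load data, `recommended_file_count(2, 2, 4 * 256 * 1024 * 1024)` suggests four files of roughly 256 MB each.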

To use object prefixes to identify the load files, name each file with a common prefix. For example, the `venue.txt` file might be split into four files, as follows.

```
venue.txt.1
venue.txt.2
venue.txt.3
venue.txt.4
```

If you put multiple files in a folder in your bucket and specify the folder name as the prefix, COPY loads all of the files in the folder. If you explicitly list the files to be loaded by using a manifest file, the files can reside in different buckets or folders.

For more information about manifest files, see [Using a manifest to specify data files](r_COPY_command_examples.md#copy-command-examples-manifest).

## Loading data from uncompressed, delimited files
<a name="t_splitting-data-files-uncompressed"></a>

When you load uncompressed, delimited data, the COPY command uses the massively parallel processing (MPP) architecture in Amazon Redshift. Amazon Redshift automatically uses slices working in parallel to load ranges of data from a large file in an Amazon S3 bucket. The file must be delimited (for example, pipe-delimited) for parallel loading to occur. Automatic, parallel data loading with the COPY command is also available for CSV files. You can also take advantage of parallel processing by setting distribution keys on your tables. For more information about distribution keys, see [Data distribution for query optimization](t_Distributing_data.md).

Automatic, parallel data loading isn't supported when the COPY query includes any of the following keywords: ESCAPE, REMOVEQUOTES, and FIXEDWIDTH.

Data from the file or files is loaded into the target table, one line per row. The fields in the data file are matched to table columns in order, left to right. Fields in the data files can be fixed-width or character delimited; the default delimiter is a pipe ( | ). By default, all the table columns are loaded, but you can optionally define a comma-separated list of columns. If a table column isn't included in the column list specified in the COPY command, it's loaded with a default value. For more information, see [Loading default column values](c_loading_default_values.md).

Follow this general process to load data from Amazon S3, when your data is uncompressed and delimited:

1. Upload your files to Amazon S3.

1. Run a COPY command to load the table. 

1. Verify that the data was loaded correctly.

For examples of COPY commands, see [COPY examples](r_COPY_command_examples.md). For information about data loaded into Amazon Redshift, check the [STL_LOAD_COMMITS](r_STL_LOAD_COMMITS.md) and [STL_LOAD_ERRORS](r_STL_LOAD_ERRORS.md) system tables.

For more information about nodes and the slices contained in each, see [About clusters and nodes](https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-clusters.html#rs-about-clusters-and-nodes) in the *Amazon Redshift Management Guide*.

# Uploading files to Amazon S3 to use with COPY
<a name="t_uploading-data-to-S3"></a>

There are a couple of approaches to take when uploading text files to Amazon S3:
+ If you have compressed files, we recommend that you split large files to take advantage of parallel processing in Amazon Redshift.
+ If you have uncompressed, text-delimited files, COPY automatically splits large file data to facilitate parallelism and to distribute the data from large files effectively.

Create an Amazon S3 bucket to hold your data files, and then upload the data files to the bucket. For information about creating buckets and uploading files, see [Working with Amazon S3 Buckets](https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingBucket.html) in the *Amazon Simple Storage Service User Guide.* 

**Important**  
The Amazon S3 bucket that holds the data files must be created in the same AWS Region as your cluster unless you use the [REGION](copy-parameters-data-source-s3.md#copy-region) option to specify the Region in which the Amazon S3 bucket is located.

Ensure that the S3 IP ranges are added to your allowlist. To learn more about the required S3 IP ranges, see [ Network isolation](https://docs.aws.amazon.com//redshift/latest/mgmt/security-network-isolation.html#network-isolation).

You can create an Amazon S3 bucket in a specific Region either by selecting the Region when you create the bucket by using the Amazon S3 console, or by specifying an endpoint when you create the bucket using the Amazon S3 API or CLI.

Following the data load, verify that the correct files are present on Amazon S3.

**Topics**
+ [Managing data consistency](managing-data-consistency.md)
+ [Uploading encrypted data to Amazon S3](t_uploading-encrypted-data.md)
+ [Verifying that the correct files are present in your bucket](verifying-that-correct-files-are-present.md)

# Managing data consistency
<a name="managing-data-consistency"></a>

Amazon S3 provides strong read-after-write consistency for COPY, UNLOAD, INSERT (external table), CREATE EXTERNAL TABLE AS, and Amazon Redshift Spectrum operations on Amazon S3 buckets in all AWS Regions. In addition, read operations on Amazon S3 Select, Amazon S3 Access Control Lists, Amazon S3 Object Tags, and object metadata (for example, HEAD object) are strongly consistent. For more information about data consistency, see [Amazon S3 Data Consistency Model](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Introduction.html#ConsistencyModel) in the *Amazon Simple Storage Service User Guide*.

# Uploading encrypted data to Amazon S3
<a name="t_uploading-encrypted-data"></a>

Amazon S3 supports both server-side encryption and client-side encryption. This topic discusses the differences between the server-side and client-side encryption and describes the steps to use client-side encryption with Amazon Redshift. Server-side encryption is transparent to Amazon Redshift. 

## Server-side encryption
<a name="server-side-encryption"></a>

Server-side encryption is data encryption at rest—that is, Amazon S3 encrypts your data as it uploads it and decrypts it for you when you access it. When you load tables using a COPY command, there is no difference in the way you load from server-side encrypted or unencrypted objects on Amazon S3. For more information about server-side encryption, see [Using Server-Side Encryption](https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingServerSideEncryption.html) in the *Amazon Simple Storage Service User Guide*.

## Client-side encryption
<a name="client-side-encryption"></a>

In client-side encryption, your client application manages encryption of your data, the encryption keys, and related tools. You can upload data to an Amazon S3 bucket using client-side encryption, and then load the data using the COPY command with the ENCRYPTED option and a private encryption key to provide greater security.

You encrypt your data using envelope encryption. With *envelope encryption,* your application handles all encryption exclusively. Your private encryption keys and your unencrypted data are never sent to AWS, so it's very important that you safely manage your encryption keys. If you lose your encryption keys, you won't be able to decrypt your data, and you can't recover your encryption keys from AWS. Envelope encryption combines the performance of fast symmetric encryption with the greater security that key management with asymmetric keys provides. A one-time-use symmetric key (the envelope symmetric key) is generated by your Amazon S3 encryption client to encrypt your data. That key is then encrypted by your root key and stored alongside your data in Amazon S3. When Amazon Redshift accesses your data during a load, the encrypted symmetric key is retrieved and decrypted with your root key, and then the data is decrypted.
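The envelope scheme described above can be illustrated with a short Python sketch. This is conceptual only: XOR stands in for a real cipher such as AES, none of the helper names come from an AWS SDK, and this code must never be used to protect real data.

```python
import os

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """Toy XOR 'cipher' standing in for real AES. Illustration only."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def envelope_encrypt(plaintext: bytes, root_key: bytes):
    envelope_key = os.urandom(32)                    # one-time-use symmetric key
    ciphertext = xor_bytes(plaintext, envelope_key)  # encrypt the data with it
    wrapped_key = xor_bytes(envelope_key, root_key)  # encrypt the key with the root key
    return ciphertext, wrapped_key                   # both are stored in Amazon S3

def envelope_decrypt(ciphertext: bytes, wrapped_key: bytes, root_key: bytes) -> bytes:
    envelope_key = xor_bytes(wrapped_key, root_key)  # unwrap the envelope key
    return xor_bytes(ciphertext, envelope_key)       # then decrypt the data
```

The key point the sketch captures is that only the *wrapped* envelope key travels with the data; the root key stays with you.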

To work with Amazon S3 client-side encrypted data in Amazon Redshift, follow the steps outlined in [Protecting Data Using Client-Side Encryption](https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingClientSideEncryption.html) in the *Amazon Simple Storage Service User Guide*, with the additional requirements that you use:
+ **Symmetric encryption –** The AWS SDK for Java `AmazonS3EncryptionClient` class uses envelope encryption, described preceding, which is based on symmetric key encryption. Use this class to create an Amazon S3 client to upload client-side encrypted data.
+ **A 256-bit AES root symmetric key –** A root key encrypts the envelope key. You pass the root key to your instance of the `AmazonS3EncryptionClient` class. Save this key, because you will need it to copy data into Amazon Redshift.
+ **Object metadata to store encrypted envelope key –** By default, Amazon S3 stores the envelope key as object metadata for the `AmazonS3EncryptionClient` class. The encrypted envelope key that is stored as object metadata is used during the decryption process. 

**Note**  
If you get a cipher encryption error message when you use the encryption API for the first time, your version of the JDK may have a Java Cryptography Extension (JCE) jurisdiction policy file that limits the maximum key length for encryption and decryption transformations to 128 bits. For information about addressing this issue, go to [Specifying Client-Side Encryption Using the AWS SDK for Java](https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingClientSideEncryptionUpload.html) in the *Amazon Simple Storage Service User Guide*. 

For information about loading client-side encrypted files into your Amazon Redshift tables using the COPY command, see [Loading encrypted data files from Amazon S3](c_loading-encrypted-files.md).

## Example: Uploading client-side encrypted data
<a name="client-side-encryption-example"></a>

For an example of how to use the AWS SDK for Java to upload client-side encrypted data, go to [Protecting data using client-side encryption](https://docs.aws.amazon.com/AmazonS3/latest/userguide/encrypt-client-side-symmetric-master-key.html) in the *Amazon Simple Storage Service User Guide*. 

The example shows the choices you must make during client-side encryption so that the data can be loaded into Amazon Redshift. Specifically, it shows using object metadata to store the encrypted envelope key and the use of a 256-bit AES root symmetric key.

The example provides code that uses the AWS SDK for Java to create a 256-bit AES symmetric root key and save it to a file. It then uploads an object to Amazon S3 using an S3 encryption client that first encrypts the sample data on the client side. The example also downloads the object and verifies that the data is the same.

# Verifying that the correct files are present in your bucket
<a name="verifying-that-correct-files-are-present"></a>

After you upload your files to your Amazon S3 bucket, we recommend listing the contents of the bucket to verify that all of the correct files are present and that no unwanted files are present. For example, if the bucket `amzn-s3-demo-bucket` holds a file named `venue.txt.back`, that file will be loaded, perhaps unintentionally, by the following command:

```
COPY venue FROM 's3://amzn-s3-demo-bucket/venue' … ;
```

If you want to control specifically which files are loaded, you can use a manifest file to explicitly list the data files. For more information about using a manifest file, see the [copy_from_s3_manifest_file](copy-parameters-data-source-s3.md#copy-manifest-file) option for the COPY command and [Using a manifest to specify data files](r_COPY_command_examples.md#copy-command-examples-manifest) in the COPY examples. 

For more information about listing the contents of the bucket, see [Listing Object Keys](https://docs.aws.amazon.com/AmazonS3/latest/userguide/ListingKeysUsingAPIs.html) in the *Amazon S3 Developer Guide*.
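A listing obtained this way (for example, from `aws s3 ls` or an SDK call) can also be screened programmatically. The following Python helper is hypothetical, not part of any AWS tooling; it flags keys that a prefix-based COPY would pick up but that don't match the expected file-name pattern:

```python
def unexpected_keys(listed_keys, prefix, allowed_suffixes=(".1", ".2", ".3", ".4")):
    """Return keys that match the COPY prefix but not the expected
    file-name pattern (e.g. stray backup files like venue.txt.back)."""
    matched = [k for k in listed_keys if k.startswith(prefix)]
    return [k for k in matched if not k.endswith(allowed_suffixes)]
```

For the example above, a listing containing `venue.txt.back` would be flagged before the COPY runs, instead of being loaded unintentionally.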

# Using the COPY command to load from Amazon S3
<a name="t_loading-tables-from-s3"></a>

Use the [COPY](r_COPY.md) command to load a table in parallel from data files on Amazon S3. You can specify the files to be loaded by using an Amazon S3 object prefix or by using a manifest file.

The syntax to specify the files to be loaded by using a prefix is as follows:

```
COPY <table_name> FROM 's3://<bucket_name>/<object_prefix>'
authorization;
```

The manifest file is a JSON-formatted file that lists the data files to be loaded. The syntax to specify the files to be loaded by using a manifest file is as follows:

```
COPY <table_name> FROM 's3://<bucket_name>/<manifest_file>'
authorization
MANIFEST;
```

The table to be loaded must already exist in the database. For information about creating a table, see [CREATE TABLE](r_CREATE_TABLE_NEW.md) in the SQL Reference. 

The values for *authorization* provide the AWS authorization Amazon Redshift needs to access the Amazon S3 objects. For information about required permissions, see [IAM permissions for COPY, UNLOAD, and CREATE LIBRARY](copy-usage_notes-access-permissions.md#copy-usage_notes-iam-permissions). The preferred method for authentication is to specify the IAM_ROLE parameter and provide the Amazon Resource Name (ARN) for an IAM role with the necessary permissions. For more information, see [Role-based access control](copy-usage_notes-access-permissions.md#copy-usage_notes-access-role-based).

To authenticate using the IAM_ROLE parameter, replace *<aws-account-id>* and *<role-name>* as shown in the following syntax.

```
IAM_ROLE 'arn:aws:iam::<aws-account-id>:role/<role-name>'
```

The following example shows authentication using an IAM role.

```
COPY customer 
FROM 's3://amzn-s3-demo-bucket/mydata' 
IAM_ROLE 'arn:aws:iam::0123456789012:role/MyRedshiftRole';
```

For more information about other authorization options, see [Authorization parameters](copy-parameters-authorization.md).

If you want to validate your data without actually loading the table, use the NOLOAD option with the [COPY](r_COPY.md) command.

The following example shows the first few rows of pipe-delimited data in a file named `venue.txt`.

```
1|Toyota Park|Bridgeview|IL|0
2|Columbus Crew Stadium|Columbus|OH|0
3|RFK Stadium|Washington|DC|0
```

Before uploading the file to Amazon S3, split the file into multiple files so that the COPY command can load it using parallel processing. The number of files should be a multiple of the number of slices in your cluster. Split your load data files so that the files are about equal size, between 1 MB and 1 GB after compression. For more information, see [Loading data from compressed and uncompressed files](t_splitting-data-files.md).

For example, the `venue.txt` file might be split into four files, as follows:

```
venue.txt.1
venue.txt.2
venue.txt.3
venue.txt.4
```

The following COPY command loads the VENUE table using the pipe-delimited data in the data files with the prefix 'venue' in the Amazon S3 bucket `amzn-s3-demo-bucket`. 

**Note**  
The Amazon S3 bucket `amzn-s3-demo-bucket` in the following examples does not exist. For sample COPY commands that use real data in an existing Amazon S3 bucket, see [Load sample data](https://docs.aws.amazon.com/redshift/latest/gsg/cm-dev-t-load-sample-data.html).

```
COPY venue FROM 's3://amzn-s3-demo-bucket/venue'
IAM_ROLE 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
DELIMITER '|';
```

If no Amazon S3 objects with the key prefix 'venue' exist, the load fails.

**Topics**
+ [Using a manifest to specify data files](loading-data-files-using-manifest.md)
+ [Loading compressed data files from Amazon S3](t_loading-gzip-compressed-data-files-from-S3.md)
+ [Loading fixed-width data from Amazon S3](t_loading_fixed_width_data.md)
+ [Loading multibyte data from Amazon S3](t_loading_unicode_data.md)
+ [Loading encrypted data files from Amazon S3](c_loading-encrypted-files.md)

# Using a manifest to specify data files
<a name="loading-data-files-using-manifest"></a>

You can use a manifest to make sure that the COPY command loads all of the required files, and only the required files, for a data load. You can use a manifest to load files from different buckets or files that do not share the same prefix. Instead of supplying an object path for the COPY command, you supply the name of a JSON-formatted text file that explicitly lists the files to be loaded. The URL in the manifest must specify the bucket name and full object path for the file, not just a prefix.

For more information about manifest files, see the COPY example [Using a manifest to specify data files](r_COPY_command_examples.md#copy-command-examples-manifest).

The following example shows the JSON to load files from different buckets and with file names that begin with date stamps.

```
{
  "entries": [
    {"url":"s3://amzn-s3-demo-bucket1/2013-10-04-custdata", "mandatory":true},
    {"url":"s3://amzn-s3-demo-bucket1/2013-10-05-custdata", "mandatory":true},
    {"url":"s3://amzn-s3-demo-bucket2/2013-10-04-custdata", "mandatory":true},
    {"url":"s3://amzn-s3-demo-bucket2/2013-10-05-custdata", "mandatory":true}
  ]
}
```

The optional `mandatory` flag specifies whether COPY should return an error if the file is not found. The default of `mandatory` is `false`. Regardless of any mandatory settings, COPY will terminate if no files are found. 

The following example runs the COPY command with the manifest in the previous example, which is named `cust.manifest`. 

```
COPY customer
FROM 's3://amzn-s3-demo-bucket/cust.manifest' 
IAM_ROLE 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
MANIFEST;
```
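A manifest like the one shown earlier can also be generated programmatically. The following Python sketch (the `build_manifest` helper name is hypothetical) emits the same JSON structure:

```python
import json

def build_manifest(urls, mandatory=True):
    """Build a COPY manifest. With mandatory=True, COPY returns an
    error if any listed file is not found."""
    return json.dumps(
        {"entries": [{"url": u, "mandatory": mandatory} for u in urls]},
        indent=2)
```

Each URL must be a full bucket name and object path, not just a prefix; the resulting string can be uploaded to Amazon S3 as, for example, `cust.manifest`.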

## Using a manifest created by UNLOAD
<a name="loading-data-files-using-unload-manifest"></a>

A manifest created by an [UNLOAD](r_UNLOAD.md) operation using the MANIFEST parameter might have keys that are not required for the COPY operation. For example, the following `UNLOAD` manifest includes a `meta` key that is required for an Amazon Redshift Spectrum external table and for loading data files in an `ORC` or `Parquet` file format. The `meta` key contains a `content_length` key with a value that is the actual size of the file in bytes. The COPY operation requires only the `url` key and an optional `mandatory` key.

```
{
  "entries": [
    {"url":"s3://amzn-s3-demo-bucket/unload/manifest_0000_part_00", "meta": { "content_length": 5956875 }},
    {"url":"s3://amzn-s3-demo-bucket/unload/manifest_0001_part_00", "meta": { "content_length": 5997091 }}
  ]
}
```
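Because COPY needs only the `url` key (and, optionally, `mandatory`), an UNLOAD manifest can be reduced to a plain COPY manifest. A Python sketch, with a hypothetical helper name:

```python
import json

def unload_to_copy_manifest(unload_manifest: str) -> str:
    """Keep only the url key from each UNLOAD manifest entry, adding
    mandatory=True, since COPY ignores the meta entries."""
    entries = json.loads(unload_manifest)["entries"]
    copy_entries = [{"url": e["url"], "mandatory": True} for e in entries]
    return json.dumps({"entries": copy_entries}, indent=2)
```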

For more information about manifest files, see [Using a manifest to specify data files](r_COPY_command_examples.md#copy-command-examples-manifest).

# Loading compressed data files from Amazon S3
<a name="t_loading-gzip-compressed-data-files-from-S3"></a>

To load data files that are compressed using gzip, lzop, or bzip2, include the corresponding option: GZIP, LZOP, or BZIP2.

For example, the following command loads from files that were compressed using lzop.

```
COPY customer FROM 's3://amzn-s3-demo-bucket/customer.lzo' 
IAM_ROLE 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
DELIMITER '|' LZOP;
```

**Note**  
COPY doesn't support data files that were compressed using the lzop *--filter* option.

# Loading fixed-width data from Amazon S3
<a name="t_loading_fixed_width_data"></a>

Fixed-width data files have uniform lengths for each column of data. Each field in a fixed-width data file has exactly the same length and position. For character data (CHAR and VARCHAR) in a fixed-width data file, you must include leading or trailing spaces as placeholders in order to keep the width uniform. For integers, you must use leading zeros as placeholders. A fixed-width data file has no delimiter to separate columns.

To load a fixed-width data file into an existing table, use the FIXEDWIDTH parameter in the COPY command. Your table specifications must match the value of *fixedwidth_spec* in order for the data to load correctly.

To load fixed-width data from a file to a table, issue the following command:

```
COPY table_name FROM 's3://amzn-s3-demo-bucket/prefix' 
IAM_ROLE 'arn:aws:iam::0123456789012:role/MyRedshiftRole' 
FIXEDWIDTH 'fixedwidth_spec';
```

The *fixedwidth_spec* parameter is a string that contains an identifier for each column and the width of each column, separated by a colon. The **column:width** pairs are delimited by commas. The identifier can be anything that you choose: numbers, letters, or a combination of the two. The identifier has no relation to the table itself, but the specification must list the columns in the same order as the table.

The following two examples show the same specification, with the first using numeric identifiers and the second using string identifiers:

```
'0:3,1:25,2:12,3:2,4:6'
```

```
'venueid:3,venuename:25,venuecity:12,venuestate:2,venueseats:6'
```

The following example shows fixed-width sample data that could be loaded into the VENUE table using the preceding specifications:

```
1  Toyota Park               Bridgeview  IL0
2  Columbus Crew Stadium     Columbus    OH0
3  RFK Stadium               Washington  DC0
4  CommunityAmerica Ballpark Kansas City KS0
5  Gillette Stadium          Foxborough  MA68756
```

The following COPY command loads this data set into the VENUE table:

```
COPY venue
FROM 's3://amzn-s3-demo-bucket/data/venue_fw.txt' 
IAM_ROLE 'arn:aws:iam::0123456789012:role/MyRedshiftRole' 
FIXEDWIDTH 'venueid:3,venuename:25,venuecity:12,venuestate:2,venueseats:6';
```
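The *fixedwidth_spec* format lends itself to a small parser. The following Python helpers are hypothetical illustrations of how the **column:width** pairs map onto a data row, not anything Redshift provides:

```python
def parse_fixedwidth_spec(spec: str):
    """Parse 'label:width,...' into a list of (label, width) pairs."""
    return [(label, int(width))
            for label, width in (field.split(":") for field in spec.split(","))]

def slice_row(row: str, spec: str):
    """Cut one fixed-width row into named fields according to the spec."""
    fields, pos = {}, 0
    for label, width in parse_fixedwidth_spec(spec):
        fields[label] = row[pos:pos + width].strip()
        pos += width
    return fields
```

Note that `slice_row` strips the padding spaces; leading zeros on integer fields (as in the VENUESEATS column) survive until the value is converted to a number.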

# Loading multibyte data from Amazon S3
<a name="t_loading_unicode_data"></a>

If your data includes non-ASCII multibyte characters (such as Chinese or Cyrillic characters), you must load the data to VARCHAR columns. The VARCHAR data type supports four-byte UTF-8 characters, but the CHAR data type only accepts single-byte ASCII characters. You cannot load five-byte or longer characters into Amazon Redshift tables. For more information about CHAR and VARCHAR, see [Data types](c_Supported_data_types.md).
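Because VARCHAR limits are measured in bytes rather than characters, a quick client-side check can catch multibyte values that would overflow a column that looks wide enough. A minimal Python sketch (the helper name is an assumption for the example):

```python
def fits_varchar(value: str, declared_bytes: int) -> bool:
    """Check whether a string fits a VARCHAR(declared_bytes) column.
    VARCHAR limits count UTF-8 bytes, so multibyte characters can
    overflow a column even when the character count looks fine."""
    return len(value.encode("utf-8")) <= declared_bytes
```

For example, a three-character Chinese or Cyrillic string can occupy six to nine bytes and fail to fit a VARCHAR(3) column.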

To check which encoding an input file uses, use the Linux `file` command:

```
$ file ordersdata.txt
ordersdata.txt: ASCII English text
$ file uni_ordersdata.dat
uni_ordersdata.dat: UTF-8 Unicode text
```

# Loading encrypted data files from Amazon S3
<a name="c_loading-encrypted-files"></a>

You can use the COPY command to load data files that were uploaded to Amazon S3 using server-side encryption, client-side encryption, or both. 

The COPY command supports the following types of Amazon S3 encryption:
+ Server-side encryption with Amazon S3-managed keys (SSE-S3)
+ Server-side encryption with AWS KMS keys (SSE-KMS)
+ Client-side encryption using a client-side symmetric root key

The COPY command doesn't support the following types of Amazon S3 encryption:
+ Server-side encryption with customer-provided keys (SSE-C)
+ Client-side encryption using an AWS KMS key
+ Client-side encryption using a customer-provided asymmetric root key

For more information about Amazon S3 encryption, see [ Protecting Data Using Server-Side Encryption](https://docs.aws.amazon.com/AmazonS3/latest/userguide/serv-side-encryption.html) and [Protecting Data Using Client-Side Encryption](https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingClientSideEncryption.html) in the Amazon Simple Storage Service User Guide.

The [UNLOAD](r_UNLOAD.md) command automatically encrypts files using SSE-S3. You can also unload using SSE-KMS or client-side encryption with a customer managed symmetric key. For more information, see [Unloading encrypted data files](t_unloading_encrypted_files.md).

The COPY command automatically recognizes and loads files encrypted using SSE-S3 and SSE-KMS. You can load files encrypted using a client-side symmetric root key by specifying the ENCRYPTED option and providing the key value. For more information, see [Uploading encrypted data to Amazon S3](t_uploading-encrypted-data.md).

To load client-side encrypted data files, provide the root key value using the MASTER_SYMMETRIC_KEY parameter and include the ENCRYPTED option.

```
COPY customer FROM 's3://amzn-s3-demo-bucket/encrypted/customer' 
IAM_ROLE 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
MASTER_SYMMETRIC_KEY '<root_key>' 
ENCRYPTED
DELIMITER '|';
```

To load encrypted data files that are gzip, lzop, or bzip2 compressed, include the GZIP, LZOP, or BZIP2 option along with the root key value and the ENCRYPTED option.

```
COPY customer FROM 's3://amzn-s3-demo-bucket/encrypted/customer' 
IAM_ROLE 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
MASTER_SYMMETRIC_KEY '<root_key>'
ENCRYPTED 
DELIMITER '|' 
GZIP;
```