


# Data sources
<a name="copy-parameters-data-source"></a>

You can load data from text files in an Amazon S3 bucket, in an Amazon EMR cluster, or on a remote host that your cluster can access using an SSH connection. You can also load data directly from a DynamoDB table. 

The maximum size of a single input row from any source is 4 MB. 

To export data from a table to a set of files in Amazon S3, use the [UNLOAD](r_UNLOAD.md) command. 

**Topics**
+ [COPY from Amazon S3](copy-parameters-data-source-s3.md)
+ [COPY from Amazon EMR](copy-parameters-data-source-emr.md)
+ [COPY from remote host (SSH)](copy-parameters-data-source-ssh.md)
+ [COPY from Amazon DynamoDB](copy-parameters-data-source-dynamodb.md)

# COPY from Amazon S3
<a name="copy-parameters-data-source-s3"></a>

To load data from files located in one or more S3 buckets, use the FROM clause to indicate how COPY locates the files in Amazon S3. You can provide the object path to the data files as part of the FROM clause, or you can provide the location of a manifest file that contains a list of Amazon S3 object paths. COPY from Amazon S3 uses an HTTPS connection. Ensure that the S3 IP ranges are added to your allow list. To learn more about the required S3 IP ranges, see [Network isolation](https://docs.aws.amazon.com/redshift/latest/mgmt/security-network-isolation.html#network-isolation).

**Important**  
If the Amazon S3 buckets that hold the data files don't reside in the same AWS Region as your cluster, you must use the [REGION](#copy-region) parameter to specify the Region in which the data is located. 

**Topics**
+ [Syntax](#copy-parameters-data-source-s3-syntax)
+ [Examples](#copy-parameters-data-source-s3-examples)
+ [Optional parameters](#copy-parameters-data-source-s3-optional-parms)
+ [Unsupported parameters](#copy-parameters-data-source-s3-unsupported-parms)

## Syntax
<a name="copy-parameters-data-source-s3-syntax"></a>

```
FROM { 's3://objectpath' | 's3://manifest_file' }
authorization
| MANIFEST
| ENCRYPTED
| REGION [AS] 'aws-region'
| optional-parameters
```

## Examples
<a name="copy-parameters-data-source-s3-examples"></a>

The following example uses an object path to load data from Amazon S3. 

```
copy customer
from 's3://amzn-s3-demo-bucket/customer' 
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole';
```

The following example uses a manifest file to load data from Amazon S3. 

```
copy customer
from 's3://amzn-s3-demo-bucket/cust.manifest' 
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
manifest;
```

### Parameters
<a name="copy-parameters-data-source-s3-parameters"></a>

FROM  <a name="copy-parameters-from"></a>
The source of the data to be loaded. For more information about the encoding of the Amazon S3 file, see [Data conversion parameters](copy-parameters-data-conversion.md).

's3://*copy_from_s3_objectpath*'   <a name="copy-s3-objectpath"></a>
Specifies the path to the Amazon S3 objects that contain the data—for example, `'s3://amzn-s3-demo-bucket/custdata.txt'`. The *s3://copy_from_s3_objectpath* parameter can reference a single file or a set of objects or folders that have the same key prefix. For example, the name `custdata.txt` is a key prefix that refers to a number of physical files: `custdata.txt`, `custdata.txt.1`, `custdata.txt.2`, `custdata.txt.bak`, and so on. The key prefix can also reference a number of folders. For example, `'s3://amzn-s3-demo-bucket/custfolder'` refers to the folders `custfolder`, `custfolder_1`, `custfolder_2`, and so on. If a key prefix references multiple folders, all of the files in the folders are loaded. If a key prefix matches a file as well as a folder, such as `custfolder.log`, COPY attempts to load the file also. If a key prefix might result in COPY attempting to load unwanted files, use a manifest file. For more information, see [copy_from_s3_manifest_file](#copy-manifest-file), following.   
If the S3 bucket that holds the data files doesn't reside in the same AWS Region as your cluster, you must use the [REGION](#copy-region) parameter to specify the Region in which the data is located.
For more information, see [Loading data from Amazon S3](t_Loading-data-from-S3.md).
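To see why a key prefix can pick up more files than intended, note that prefix matching is plain string-prefix matching on object keys. The following sketch (hypothetical key names, not a real S3 listing) approximates the selection behavior described above:

```python
# Sketch of COPY's key-prefix matching: an object path selects every key
# that begins with the given prefix (hypothetical key names for illustration).
def keys_matched_by_prefix(keys, prefix):
    return [k for k in keys if k.startswith(prefix)]

bucket_keys = [
    "custdata.txt",
    "custdata.txt.1",
    "custdata.txt.bak",    # also matched -- one reason a manifest can be safer
    "custfolder/part-00",  # not matched by the 'custdata.txt' prefix
]
print(keys_matched_by_prefix(bucket_keys, "custdata.txt"))
# -> ['custdata.txt', 'custdata.txt.1', 'custdata.txt.bak']
```

If a stray file like `custdata.txt.bak` would corrupt the load, list the exact objects in a manifest file instead.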

's3://*copy_from_s3_manifest_file*'   <a name="copy-manifest-file"></a>
Specifies the Amazon S3 object key for a manifest file that lists the data files to be loaded. The *'s3://copy_from_s3_manifest_file'* argument must explicitly reference a single file—for example, `'s3://amzn-s3-demo-bucket/manifest.txt'`. It can't reference a key prefix.  
The manifest is a text file in JSON format that lists the URL of each file that is to be loaded from Amazon S3. The URL includes the bucket name and full object path for the file. The files that are specified in the manifest can be in different buckets, but all the buckets must be in the same AWS Region as the Amazon Redshift cluster. If a file is listed twice, the file is loaded twice. The following example shows the JSON for a manifest that loads three files.   

```
{
  "entries": [
    {"url":"s3://amzn-s3-demo-bucket1/custdata.1","mandatory":true},
    {"url":"s3://amzn-s3-demo-bucket1/custdata.2","mandatory":true},
    {"url":"s3://amzn-s3-demo-bucket2/custdata.1","mandatory":false}
  ]
}
```
The double quotation mark characters are required, and must be simple quotation marks (0x22), not slanted or "smart" quotation marks. Each entry in the manifest can optionally include a `mandatory` flag. If `mandatory` is set to `true`, COPY terminates if it doesn't find the file for that entry; otherwise, COPY will continue. The default value for `mandatory` is `false`.   
When loading from data files in ORC or Parquet format, a `meta` field is required, as shown in the following example.  

```
{  
   "entries":[  
      {  
         "url":"s3://amzn-s3-demo-bucket1/orc/2013-10-04-custdata",
         "mandatory":true,
         "meta":{  
            "content_length":99
         }
      },
      {  
         "url":"s3://amzn-s3-demo-bucket2/orc/2013-10-05-custdata",
         "mandatory":true,
         "meta":{  
            "content_length":99
         }
      }
   ]
}
```
The manifest file must not be encrypted or compressed, even if the ENCRYPTED, GZIP, LZOP, BZIP2, or ZSTD options are specified. COPY returns an error if the specified manifest file isn't found or the manifest file isn't properly formed.   
If a manifest file is used, the MANIFEST parameter must be specified with the COPY command. If the MANIFEST parameter isn't specified, COPY assumes that the file specified with FROM is a data file.   
For more information, see [Loading data from Amazon S3](t_Loading-data-from-S3.md).
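Because the manifest is plain JSON, it is easy to generate programmatically rather than by hand. A minimal sketch (hypothetical bucket and key names): `json.dumps` emits plain ASCII double quotation marks (0x22), which satisfies the quoting requirement described above.

```python
import json

# Sketch: build a COPY manifest programmatically. The bucket and key
# names here are hypothetical placeholders.
entries = [
    {"url": "s3://amzn-s3-demo-bucket1/custdata.1", "mandatory": True},
    {"url": "s3://amzn-s3-demo-bucket1/custdata.2", "mandatory": True},
    {"url": "s3://amzn-s3-demo-bucket2/custdata.1", "mandatory": False},
]
manifest_json = json.dumps({"entries": entries}, indent=2)
print(manifest_json)
```

Write the resulting string to an object such as `s3://amzn-s3-demo-bucket/manifest.txt`, then reference that object in the FROM clause with the MANIFEST parameter.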

*authorization*  
The COPY command needs authorization to access data in another AWS resource, including in Amazon S3, Amazon EMR, Amazon DynamoDB, and Amazon EC2. You can provide that authorization by referencing an AWS Identity and Access Management (IAM) role that is attached to your cluster (role-based access control) or by providing the access credentials for a user (key-based access control). For increased security and flexibility, we recommend using IAM role-based access control. For more information, see [Authorization parameters](copy-parameters-authorization.md).

MANIFEST  <a name="copy-manifest"></a>
Specifies that a manifest is used to identify the data files to be loaded from Amazon S3. If the MANIFEST parameter is used, COPY loads data from the files listed in the manifest referenced by *'s3://copy_from_s3_manifest_file'*. If the manifest file isn't found, or isn't properly formed, COPY fails. For more information, see [Using a manifest to specify data files](loading-data-files-using-manifest.md).

ENCRYPTED  <a name="copy-encrypted"></a>
A clause that specifies that the input files on Amazon S3 are encrypted using client-side encryption with customer managed keys. For more information, see [Loading encrypted data files from Amazon S3](c_loading-encrypted-files.md). Don't specify ENCRYPTED if the input files are encrypted using Amazon S3 server-side encryption (SSE-KMS or SSE-S3). COPY reads server-side encrypted files automatically.  
If you specify the ENCRYPTED parameter, you must also specify the [MASTER_SYMMETRIC_KEY](#copy-master-symmetric-key) parameter or include the **master_symmetric_key** value in the [Using the CREDENTIALS parameter](copy-parameters-authorization.md#copy-credentials) string.  
If the encrypted files are in compressed format, add the GZIP, LZOP, BZIP2, or ZSTD parameter.  
Manifest files and JSONPaths files must not be encrypted, even if the ENCRYPTED option is specified.

MASTER_SYMMETRIC_KEY '*root_key*'  <a name="copy-master-symmetric-key"></a>
The root symmetric key that was used to encrypt data files on Amazon S3. If MASTER_SYMMETRIC_KEY is specified, the [ENCRYPTED](#copy-encrypted) parameter must also be specified. MASTER_SYMMETRIC_KEY can't be used with the CREDENTIALS parameter. For more information, see [Loading encrypted data files from Amazon S3](c_loading-encrypted-files.md).  
If the encrypted files are in compressed format, add the GZIP, LZOP, BZIP2, or ZSTD parameter.

REGION [AS] '*aws-region*'  <a name="copy-region"></a>
Specifies the AWS Region where the source data is located. REGION is required for COPY from an Amazon S3 bucket or a DynamoDB table when the AWS resource that contains the data isn't in the same Region as the Amazon Redshift cluster.   
The value for *aws_region* must match a Region listed in the [Amazon Redshift regions and endpoints](https://docs.aws.amazon.com/general/latest/gr/rande.html#redshift_region) table.  
If the REGION parameter is specified, all resources, including a manifest file or multiple Amazon S3 buckets, must be located in the specified Region.   
Transferring data across Regions incurs additional charges against the Amazon S3 bucket or the DynamoDB table that contains the data. For more information about pricing, see **Data Transfer OUT From Amazon S3 To Another AWS Region** on the [Amazon S3 Pricing](https://aws.amazon.com/s3/pricing/) page and **Data Transfer OUT** on the [Amazon DynamoDB Pricing](https://aws.amazon.com/dynamodb/pricing/) page. 
By default, COPY assumes that the data is located in the same Region as the Amazon Redshift cluster. 

## Optional parameters
<a name="copy-parameters-data-source-s3-optional-parms"></a>

You can optionally specify the following parameters with COPY from Amazon S3: 
+ [Column mapping options](copy-parameters-column-mapping.md)
+ [Data format parameters](copy-parameters-data-format.md#copy-data-format-parameters)
+ [Data conversion parameters](copy-parameters-data-conversion.md)
+ [Data load operations](copy-parameters-data-load.md)

## Unsupported parameters
<a name="copy-parameters-data-source-s3-unsupported-parms"></a>

You can't use the following parameters with COPY from Amazon S3: 
+ SSH
+ READRATIO

# COPY from Amazon EMR
<a name="copy-parameters-data-source-emr"></a>

You can use the COPY command to load data in parallel from an Amazon EMR cluster configured to write text files to the cluster's Hadoop Distributed File System (HDFS) in the form of fixed-width files, character-delimited files, CSV files, JSON-formatted files, or Avro files.

**Topics**
+ [Syntax](#copy-parameters-data-source-emr-syntax)
+ [Example](#copy-parameters-data-source-emr-example)
+ [Parameters](#copy-parameters-data-source-emr-parameters)
+ [Supported parameters](#copy-parameters-data-source-emr-optional-parms)
+ [Unsupported parameters](#copy-parameters-data-source-emr-unsupported-parms)

## Syntax
<a name="copy-parameters-data-source-emr-syntax"></a>

```
FROM 'emr://emr_cluster_id/hdfs_filepath'  
authorization
[ optional_parameters ]
```

## Example
<a name="copy-parameters-data-source-emr-example"></a>

The following example loads data from an Amazon EMR cluster. 

```
copy sales
from 'emr://j-SAMPLE2B500FC/myoutput/part-*' 
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole';
```

## Parameters
<a name="copy-parameters-data-source-emr-parameters"></a>

FROM  
The source of the data to be loaded. 

 'emr://*emr_cluster_id*/*hdfs_file_path*'  <a name="copy-emr"></a>
The unique identifier for the Amazon EMR cluster and the HDFS file path that references the data files for the COPY command. The HDFS data file names must not contain the wildcard characters asterisk (\*) and question mark (?).   
The Amazon EMR cluster must continue running until the COPY operation completes. If any of the HDFS data files are changed or deleted before the COPY operation completes, you might have unexpected results, or the COPY operation might fail. 
You can use the wildcard characters asterisk (\*) and question mark (?) as part of the *hdfs_file_path* argument to specify multiple files to be loaded. For example, `'emr://j-SAMPLE2B500FC/myoutput/part*'` identifies the files `part-0000`, `part-0001`, and so on. If the file path doesn't contain wildcard characters, it is treated as a string literal. If you specify only a folder name, COPY attempts to load all files in the folder.   
If you use wildcard characters or use only the folder name, verify that no unwanted files will be loaded. For example, some processes might write a log file to the output folder.
For more information, see [Loading data from Amazon EMR](loading-data-from-emr.md).
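The HDFS path wildcards behave like ordinary shell globs: `*` matches any run of characters and `?` matches exactly one. As a rough illustration, Python's `fnmatch` approximates this matching against a hypothetical HDFS listing:

```python
from fnmatch import fnmatch

# Sketch: shell-style glob matching, approximating how a wildcard
# hdfs_file_path selects files (hypothetical HDFS listing).
hdfs_files = [
    "myoutput/part-0000",
    "myoutput/part-0001",
    "myoutput/_logs/job.log",  # a stray log file a process might write
]

matched = [f for f in hdfs_files if fnmatch(f, "myoutput/part*")]
print(matched)  # the log file is not selected by the 'part*' pattern
```

This is why checking the output folder for stray files before running COPY matters: a bare folder name, unlike `part*`, would have picked up the log file too.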

*authorization*  
The COPY command needs authorization to access data in another AWS resource, including in Amazon S3, Amazon EMR, Amazon DynamoDB, and Amazon EC2. You can provide that authorization by referencing an AWS Identity and Access Management (IAM) role that is attached to your cluster (role-based access control) or by providing the access credentials for a user (key-based access control). For increased security and flexibility, we recommend using IAM role-based access control. For more information, see [Authorization parameters](copy-parameters-authorization.md).

## Supported parameters
<a name="copy-parameters-data-source-emr-optional-parms"></a>

You can optionally specify the following parameters with COPY from Amazon EMR: 
+ [Column mapping options](copy-parameters-column-mapping.md)
+ [Data format parameters](copy-parameters-data-format.md#copy-data-format-parameters)
+ [Data conversion parameters](copy-parameters-data-conversion.md)
+ [Data load operations](copy-parameters-data-load.md)

## Unsupported parameters
<a name="copy-parameters-data-source-emr-unsupported-parms"></a>

You can't use the following parameters with COPY from Amazon EMR: 
+ ENCRYPTED
+ MANIFEST
+ REGION
+ READRATIO
+ SSH

# COPY from remote host (SSH)
<a name="copy-parameters-data-source-ssh"></a>

You can use the COPY command to load data in parallel from one or more remote hosts, such as Amazon Elastic Compute Cloud (Amazon EC2) instances or other computers. COPY connects to the remote hosts using Secure Shell (SSH) and runs commands on the remote hosts to generate text output. The remote host can be an EC2 Linux instance or another Unix or Linux computer configured to accept SSH connections. Amazon Redshift can connect to multiple hosts, and can open multiple SSH connections to each host. Amazon Redshift sends a unique command through each connection to generate text output to the host's standard output, which Amazon Redshift then reads as it does a text file.

Use the FROM clause to specify the Amazon S3 object key for the manifest file that provides the information COPY uses to open SSH connections and run the remote commands. 

**Topics**
+ [Syntax](#copy-parameters-data-source-ssh-syntax)
+ [Examples](#copy-parameters-data-source-ssh-examples)
+ [Parameters](#copy-parameters-data-source-ssh-parameters)
+ [Optional parameters](#copy-parameters-data-source-ssh-optional-parms)
+ [Unsupported parameters](#copy-parameters-data-source-ssh-unsupported-parms)

**Important**  
 If the S3 bucket that holds the manifest file doesn't reside in the same AWS Region as your cluster, you must use the REGION parameter to specify the Region in which the bucket is located. 

## Syntax
<a name="copy-parameters-data-source-ssh-syntax"></a>

```
FROM 's3://ssh_manifest_file'
authorization
SSH
| optional-parameters
```

## Examples
<a name="copy-parameters-data-source-ssh-examples"></a>

The following example uses a manifest file to load data from a remote host using SSH. 

```
copy sales
from 's3://amzn-s3-demo-bucket/ssh_manifest' 
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
ssh;
```

## Parameters
<a name="copy-parameters-data-source-ssh-parameters"></a>

FROM  
The source of the data to be loaded. 

's3://*copy_from_ssh_manifest_file*'  <a name="copy-ssh-manifest"></a>
The COPY command can connect to multiple hosts using SSH, and can create multiple SSH connections to each host. COPY runs a command through each host connection, and then loads the output from the commands in parallel into the table. The *s3://copy_from_ssh_manifest_file* argument specifies the Amazon S3 object key for the manifest file that provides the information COPY uses to open SSH connections and run the remote commands.  
The *s3://copy_from_ssh_manifest_file* argument must explicitly reference a single file; it can't be a key prefix. The following shows an example:  

```
's3://amzn-s3-demo-bucket/ssh_manifest.txt'
```
The manifest file is a text file in JSON format that Amazon Redshift uses to connect to the host. The manifest file specifies the SSH host endpoints and the commands that will be run on the hosts to return data to Amazon Redshift. Optionally, you can include the host public key, the login user name, and a mandatory flag for each entry. The following example shows a manifest file that creates two SSH connections:   

```
{
  "entries": [
    {"endpoint": "<ssh_endpoint_or_IP>",
     "command": "<remote_command>",
     "mandatory": true,
     "publickey": "<public_key>",
     "username": "<host_user_name>"},
    {"endpoint": "<ssh_endpoint_or_IP>",
     "command": "<remote_command>",
     "mandatory": true,
     "publickey": "<public_key>",
     "username": "<host_user_name>"}
  ]
}
```
The manifest file contains one `"entries"` construct for each SSH connection. You can have multiple connections to a single host or multiple connections to multiple hosts. The double quotation mark characters are required as shown, both for the field names and the values. The quotation mark characters must be simple quotation marks (0x22), not slanted or "smart" quotation marks. The only value that doesn't need double quotation mark characters is the Boolean value `true` or `false` for the `"mandatory"` field.   
The following list describes the fields in the manifest file.     
endpoint  <a name="copy-ssh-manifest-endpoint"></a>
The URL address or IP address of the host—for example, `"ec2-111-222-333.compute-1.amazonaws.com"`, or `"198.51.100.0"`.   
command  <a name="copy-ssh-manifest-command"></a>
The command to be run by the host to generate text output or binary output in gzip, lzop, bzip2, or zstd format. The command can be any command that the user *host_user_name* has permission to run. The command can be as simple as printing a file, or it can query a database or launch a script. The output (text file, gzip binary file, lzop binary file, or bzip2 binary file) must be in a form that the Amazon Redshift COPY command can ingest. For more information, see [Preparing your input data](t_preparing-input-data.md).  
publickey  <a name="copy-ssh-manifest-publickey"></a>
(Optional) The public key of the host. If provided, Amazon Redshift will use the public key to identify the host. If the public key isn't provided, Amazon Redshift will not attempt host identification. For example, if the remote host's public key is `ssh-rsa AbcCbaxxx…Example root@amazon.com`, type the following text in the public key field: `"AbcCbaxxx…Example"`  
mandatory  <a name="copy-ssh-manifest-mandatory"></a>
(Optional) A clause that indicates whether the COPY command should fail if the connection attempt fails. The default is `false`. If Amazon Redshift doesn't successfully make at least one connection, the COPY command fails.  
username  <a name="copy-ssh-manifest-username"></a>
(Optional) The user name that will be used to log on to the host system and run the remote command. The user login name must be the same as the login that was used to add the Amazon Redshift cluster's public key to the host's authorized keys file. The default username is `redshift`.
For more information about creating a manifest file, see [Loading data process](loading-data-from-remote-hosts.md#load-from-host-process).  
To COPY from a remote host, the SSH parameter must be specified with the COPY command. If the SSH parameter isn't specified, COPY assumes that the file specified with FROM is a data file and will fail.   
If you use automatic compression, the COPY command performs two data read operations, which means it will run the remote command twice. The first read operation is to provide a data sample for compression analysis, then the second read operation actually loads the data. If executing the remote command twice might cause a problem, you should disable automatic compression. To disable automatic compression, run the COPY command with the COMPUPDATE parameter set to OFF. For more information, see [Loading tables with automatic compression](c_Loading_tables_auto_compress.md).  
For detailed procedures for using COPY from SSH, see [Loading data from remote hosts](loading-data-from-remote-hosts.md).
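Because every entry carries the same handful of fields, an SSH manifest is straightforward to generate programmatically. A minimal sketch, with a hypothetical endpoint and commands, that builds two connections to the same host:

```python
import json

# Sketch: build an SSH manifest with two connections to one (hypothetical)
# host. Only "endpoint" and "command" are required per entry; "publickey",
# "username", and "mandatory" are optional.
base_entry = {
    "endpoint": "ec2-111-222-333.compute-1.amazonaws.com",
    "command": "cat /data/sales_part_1.txt",
    "mandatory": True,
    "username": "redshift",
}
second_entry = dict(base_entry, command="cat /data/sales_part_2.txt")

ssh_manifest = {"entries": [base_entry, second_entry]}
print(json.dumps(ssh_manifest, indent=2))
```

Upload the resulting JSON to an S3 object, then reference that object in the FROM clause together with the SSH parameter.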

*authorization*  
The COPY command needs authorization to access data in another AWS resource, including in Amazon S3, Amazon EMR, Amazon DynamoDB, and Amazon EC2. You can provide that authorization by referencing an AWS Identity and Access Management (IAM) role that is attached to your cluster (role-based access control) or by providing the access credentials for a user (key-based access control). For increased security and flexibility, we recommend using IAM role-based access control. For more information, see [Authorization parameters](copy-parameters-authorization.md).

SSH  <a name="copy-ssh"></a>
A clause that specifies that data is to be loaded from a remote host using the SSH protocol. If you specify SSH, you must also provide a manifest file using the [s3://copy_from_ssh_manifest_file](#copy-ssh-manifest) argument.   
If you are using SSH to copy from a host using a private IP address in a remote VPC, the VPC must have enhanced VPC routing enabled. For more information about enhanced VPC routing, see [Amazon Redshift Enhanced VPC Routing](https://docs.aws.amazon.com/redshift/latest/mgmt/enhanced-vpc-routing.html).

## Optional parameters
<a name="copy-parameters-data-source-ssh-optional-parms"></a>

You can optionally specify the following parameters with COPY from SSH: 
+ [Column mapping options](copy-parameters-column-mapping.md)
+ [Data format parameters](copy-parameters-data-format.md#copy-data-format-parameters)
+ [Data conversion parameters](copy-parameters-data-conversion.md)
+ [Data load operations](copy-parameters-data-load.md)

## Unsupported parameters
<a name="copy-parameters-data-source-ssh-unsupported-parms"></a>

You can't use the following parameters with COPY from SSH: 
+ ENCRYPTED
+ MANIFEST
+ READRATIO

# COPY from Amazon DynamoDB
<a name="copy-parameters-data-source-dynamodb"></a>

To load data from an existing DynamoDB table, use the FROM clause to specify the DynamoDB table name.

**Topics**
+ [Syntax](#copy-parameters-data-source-dynamodb-syntax)
+ [Examples](#copy-parameters-data-source-dynamodb-examples)
+ [Optional parameters](#copy-parameters-data-source-dynamodb-optional-parms)
+ [Unsupported parameters](#copy-parameters-data-source-dynamodb-unsupported-parms)

**Important**  
If the DynamoDB table doesn't reside in the same AWS Region as your Amazon Redshift cluster, you must use the REGION parameter to specify the Region in which the data is located. 

## Syntax
<a name="copy-parameters-data-source-dynamodb-syntax"></a>

```
FROM 'dynamodb://table-name' 
authorization
READRATIO ratio
| REGION [AS] 'aws_region'  
| optional-parameters
```

## Examples
<a name="copy-parameters-data-source-dynamodb-examples"></a>

The following example loads data from a DynamoDB table. 

```
copy favoritemovies from 'dynamodb://ProductCatalog'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
readratio 50;
```

### Parameters
<a name="copy-parameters-data-source-dynamodb-parameters"></a>

FROM  
The source of the data to be loaded. 

'dynamodb://*table-name*'  <a name="copy-dynamodb"></a>
The name of the DynamoDB table that contains the data, for example `'dynamodb://ProductCatalog'`. For details about how DynamoDB attributes are mapped to Amazon Redshift columns, see [Loading data from an Amazon DynamoDB table](t_Loading-data-from-dynamodb.md).  
A DynamoDB table name is unique to an AWS account, which is identified by the AWS access credentials.

*authorization*  
The COPY command needs authorization to access data in another AWS resource, including in Amazon S3, Amazon EMR, DynamoDB, and Amazon EC2. You can provide that authorization by referencing an AWS Identity and Access Management (IAM) role that is attached to your cluster (role-based access control) or by providing the access credentials for a user (key-based access control). For increased security and flexibility, we recommend using IAM role-based access control. For more information, see [Authorization parameters](copy-parameters-authorization.md).

READRATIO [AS] *ratio*  <a name="copy-readratio"></a>
The percentage of the DynamoDB table's provisioned throughput to use for the data load. READRATIO is required for COPY from DynamoDB. It can't be used with COPY from Amazon S3. We highly recommend setting the ratio to a value less than the average unused provisioned throughput. Valid values are integers 1–200.  
Setting READRATIO to 100 or higher enables Amazon Redshift to consume the entirety of the DynamoDB table's provisioned throughput, which seriously degrades the performance of concurrent read operations against the same table during the COPY session. Write traffic is unaffected. Values higher than 100 are allowed to troubleshoot rare scenarios when Amazon Redshift fails to fulfill the provisioned throughput of the table. If you load data from DynamoDB to Amazon Redshift on an ongoing basis, consider organizing your DynamoDB tables as a time series to separate live traffic from the COPY operation.
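The arithmetic behind READRATIO is simple: COPY consumes roughly READRATIO percent of the table's provisioned read throughput. A quick sketch with hypothetical numbers:

```python
# Sketch of how READRATIO translates into consumed read capacity.
# The provisioned-throughput figures here are hypothetical.
def copy_read_capacity(provisioned_rcu, readratio):
    # Valid READRATIO values are integers 1-200.
    assert 1 <= readratio <= 200, "READRATIO must be between 1 and 200"
    return provisioned_rcu * readratio / 100

# A table provisioned at 1000 read capacity units, loaded with
# READRATIO 50, leaves about half the capacity for concurrent readers.
print(copy_read_capacity(1000, 50))  # -> 500.0
```

Setting the ratio below the table's average unused read throughput, as recommended above, keeps the load from starving concurrent readers.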

## Optional parameters
<a name="copy-parameters-data-source-dynamodb-optional-parms"></a>

You can optionally specify the following parameters with COPY from Amazon DynamoDB: 
+ [Column mapping options](copy-parameters-column-mapping.md)
+ The following data conversion parameters are supported:
  + [ACCEPTANYDATE](copy-parameters-data-conversion.md#copy-acceptanydate) 
  + [BLANKSASNULL](copy-parameters-data-conversion.md#copy-blanksasnull) 
  + [DATEFORMAT](copy-parameters-data-conversion.md#copy-dateformat) 
  + [EMPTYASNULL](copy-parameters-data-conversion.md#copy-emptyasnull) 
  + [ROUNDEC](copy-parameters-data-conversion.md#copy-roundec) 
  + [TIMEFORMAT](copy-parameters-data-conversion.md#copy-timeformat) 
  + [TRIMBLANKS](copy-parameters-data-conversion.md#copy-trimblanks) 
  + [TRUNCATECOLUMNS](copy-parameters-data-conversion.md#copy-truncatecolumns) 
+ [Data load operations](copy-parameters-data-load.md)

## Unsupported parameters
<a name="copy-parameters-data-source-dynamodb-unsupported-parms"></a>

You can't use the following parameters with COPY from DynamoDB: 
+ All data format parameters
+ ESCAPE
+ FILLRECORD
+ IGNOREBLANKLINES
+ IGNOREHEADER
+ NULL
+ REMOVEQUOTES
+ ACCEPTINVCHARS
+ MANIFEST
+ ENCRYPTED