

 Amazon Redshift will no longer support creating new Python UDFs starting with Patch 198. Existing Python UDFs will continue to function until June 30, 2026. For more information, see the [blog post](https://aws.amazon.com/blogs/big-data/amazon-redshift-python-user-defined-functions-will-reach-end-of-support-after-june-30-2026/).

# Zero-ETL integrations
<a name="zero-etl-using"></a>

Zero-ETL integration is a fully managed solution that makes transactional and operational data available in Amazon Redshift from multiple operational and transactional sources. With this solution, you can configure an integration from your source to an Amazon Redshift data warehouse. You don't need to maintain an extract, transform, and load (ETL) pipeline. We take care of the ETL for you by automating the creation and management of data replication from the data source to the Amazon Redshift cluster or Redshift Serverless namespace. You can continue to update and query your source data while simultaneously using Amazon Redshift for analytic workloads, such as reporting and dashboards.

With zero-ETL integration you have fresher data for analytics, AI/ML, and reporting. You get more accurate and timely insights for use cases like business dashboards, optimized gaming experience, data quality monitoring, and customer behavior analysis. You can make data-driven predictions with more confidence, improve customer experiences, and promote data-driven insights across the business.

The following sources are currently supported for zero-ETL integrations:
+ Amazon Aurora MySQL
+ Amazon Aurora PostgreSQL
+ Amazon DynamoDB
+ Amazon RDS for MySQL
+ Amazon RDS for Oracle
+ Amazon RDS for PostgreSQL
+ Oracle Database@AWS
+ Applications including Salesforce, Salesforce Marketing Cloud Account Engagement, SAP, ServiceNow, Instagram ads, Meta ads, and Zendesk
+ Self-managed MySQL, PostgreSQL, SQL Server, and Oracle

To create a zero-ETL integration, you specify an integration source and an Amazon Redshift data warehouse as the target. After an initial data load, the integration replicates data from the source to the target data warehouse. The data becomes available in Amazon Redshift. You control the encryption of your data when you create the integration source, when you create the zero-ETL integration, and when you create the Amazon Redshift data warehouse. The integration monitors the health of the data pipeline and recovers from issues when possible. You can create integrations from sources of the same type into a single Amazon Redshift data warehouse to derive holistic insights across multiple applications.
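
The creation step can be sketched with the AWS CLI. This is a minimal sketch, not a full walkthrough: the integration name and ARNs are placeholders, and the parameters you need vary by source type.

```
aws redshift create-integration \
    --integration-name my-zero-etl-integration \
    --source-arn arn:aws:rds:us-east-1:123456789012:cluster:my-aurora-cluster \
    --target-arn arn:aws:redshift-serverless:us-east-1:123456789012:namespace/<namespace-id>
```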

With the data in Amazon Redshift, you can use the analytics capabilities that Amazon Redshift provides, such as built-in machine learning (ML), materialized views, data sharing, and direct access to multiple data stores and data lakes. For data engineers, zero-ETL integration provides access to time-sensitive data that can otherwise be delayed by intermittent errors in complex data pipelines. You can run analytical queries and ML models on transactional data to derive timely insights for time-sensitive events and business decisions.

You can create an Amazon Redshift event notification subscription so you can be notified when an event occurs for a given zero-ETL integration. To view the list of integration-related event notifications, see [Zero-ETL integration event notifications with Amazon EventBridge](integration-event-notifications.md). The simplest way to create a subscription is with the Amazon SNS console. For information on creating an Amazon SNS topic and subscribing to it, see [Getting started with Amazon SNS](https://docs.aws.amazon.com/sns/latest/dg/GettingStarted.html) in the *Amazon Simple Notification Service Developer Guide*.
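
For example, you can create a topic and subscription using the AWS CLI. In this sketch, the topic name and email address are placeholders:

```
# Create an SNS topic to receive integration events
aws sns create-topic --name zero-etl-integration-events

# Subscribe an email endpoint to the topic ARN returned by the previous command
aws sns subscribe \
    --topic-arn arn:aws:sns:us-east-1:123456789012:zero-etl-integration-events \
    --protocol email \
    --notification-endpoint admin@example.com
```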

As you get started with zero-ETL integrations, consider the following concepts:
+ A source database is the database from which data is replicated into Amazon Redshift.
+ A target data warehouse is the Amazon Redshift provisioned cluster or Redshift Serverless workgroup to which data is replicated.
+ A destination database is the database that you create from a zero-ETL integration in the target data warehouse.
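
These concepts come together when you create the destination database from the integration. A minimal sketch in SQL, assuming an example database name and an integration ID that you retrieve from the console or the `SVV_INTEGRATION` system view:

```
CREATE DATABASE zetl_destination_db
FROM INTEGRATION '<integration-id>';
```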

For information about system tables and views you can use to monitor your zero-ETL integrations, see [Monitoring zero-ETL integrations with Amazon Redshift system views](zero-etl-monitoring.md#zero-etl-monitoring-sysviews). 

For a list of AWS Regions that each source for zero-ETL integrations supports, see [Supported Regions for zero-ETL integrations](zero-etl-using.regions.md).

For pricing information for zero-ETL integrations, see the appropriate pricing page:
+ [Amazon Redshift pricing](https://aws.amazon.com/redshift/pricing/)
+ [Amazon Aurora pricing](https://aws.amazon.com/rds/aurora/pricing/)
+ [Amazon RDS pricing](https://aws.amazon.com/rds/pricing/)
+ [Amazon DynamoDB pricing](https://aws.amazon.com/dynamodb/pricing/)
+ [AWS Glue pricing](https://aws.amazon.com/glue/pricing/)

For more information about zero-ETL integration sources, see the following topics:
+ For Aurora zero-ETL integrations, see [Benefits](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/zero-etl.html#zero-etl.benefits), [Key concepts](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/zero-etl.html#zero-etl.concepts), [Limitations](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/zero-etl.html#zero-etl.reqs-lims), [Quotas](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/zero-etl.html#zero-etl.quotas), and [Supported Regions](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/zero-etl.html#zero-etl.regions) of zero-ETL integrations in the *Amazon Aurora User Guide*. 
+ For RDS zero-ETL integrations, see [Benefits](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/zero-etl.html#zero-etl.benefits), [Key concepts](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/zero-etl.html#zero-etl.concepts), [Limitations](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/zero-etl.html#zero-etl.reqs-lims), [Quotas](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/zero-etl.html#zero-etl.quotas), and [Supported Regions](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/zero-etl.html#zero-etl.regions) of zero-ETL integrations in the *Amazon RDS User Guide*. 
+ For DynamoDB zero-ETL integrations, see [DynamoDB zero-ETL integration with Amazon Redshift](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/RedshiftforDynamoDB-zero-etl.html) in the *Amazon DynamoDB Developer Guide*. 
+ For zero-ETL integrations with applications, see [Zero-ETL integrations](https://docs.aws.amazon.com/glue/latest/dg/zero-etl-using.html) in the *AWS Glue Developer Guide*. 

**Topics**
+ [Considerations when using zero-ETL integrations with Amazon Redshift](zero-etl.reqs-lims.md)
+ [Getting started with zero-ETL integrations](zero-etl-using.setting-up.md)
+ [Creating destination databases in Amazon Redshift](zero-etl-using.creating-db.md)
+ [Querying replicated data in Amazon Redshift](zero-etl-using.querying-and-creating-materialized-views.md)
+ [Viewing zero-ETL integrations](zero-etl-using.describing.md)
+ [History mode](zero-etl-history-mode.md)
+ [Sharing your data in Amazon Redshift](zero-etl-using.share-data-redshift.md)
+ [Monitoring zero-ETL integrations](zero-etl-monitoring.md)
+ [Metrics for zero-ETL integrations](zero-etl-using.metrics.md)
+ [Modify a zero-ETL integration for DynamoDB](zero-etl-managing.modify-integration-ddb.md)
+ [Delete a zero-ETL integration for DynamoDB](zero-etl-managing.delete-integration-ddb.md)
+ [Supported Regions for zero-ETL integrations](zero-etl-using.regions.md)
+ [Troubleshooting zero-ETL integrations](zero-etl-using.troubleshooting.md)

# Considerations when using zero-ETL integrations with Amazon Redshift
<a name="zero-etl.reqs-lims"></a>

The following considerations apply to zero-ETL integrations with Amazon Redshift. 
+ Your target Amazon Redshift data warehouse must meet the following prerequisites:
  + Running Amazon Redshift Serverless or an RA3 node type.
  + Encrypted (if using a provisioned cluster).
  + Has case sensitivity enabled.
+ If you delete a source that is an authorized integration source for an Amazon Redshift data warehouse, all associated integrations will go into the `FAILED` state. Any previously replicated data remains in your Amazon Redshift database and can be queried.
+ The destination database is read-only. You can't create tables, views, or materialized views in the destination database. However, you can use materialized views on other tables in the target data warehouse.
+ Materialized views are supported when used in cross-database queries. For information about creating materialized views with data replicated through zero-ETL integrations, see [Querying replicated data with materialized views](zero-etl-using.querying-and-creating-materialized-views.md#zero-etl-using.transforming).
+ By default, you can query tables only in the target data warehouse that are in the `Synced` state. To query tables in another state, set the database parameter `QUERY_ALL_STATES` to `TRUE`. For information about setting `QUERY_ALL_STATES`, see [CREATE DATABASE](https://docs.aws.amazon.com/redshift/latest/dg/r_CREATE_DATABASE.html) and [ALTER DATABASE](https://docs.aws.amazon.com/redshift/latest/dg/r_ALTER_DATABASE.html) in the *Amazon Redshift Database Developer Guide*. For more information about the state of your tables, see [SVV\_INTEGRATION\_TABLE\_STATE](https://docs.aws.amazon.com/redshift/latest/dg/r_SVV_INTEGRATION_TABLE_STATE.html) in the *Amazon Redshift Database Developer Guide*.
+ Amazon Redshift accepts only UTF-8 characters, so it might not honor the collation defined in your source. The sorting and comparison rules might be different, which can ultimately change the query results.
+ Zero-ETL integrations are limited to 50 per Amazon Redshift data warehouse target.
+ Tables in the integration source must have a primary key. Otherwise, your tables can't be replicated to the target data warehouse in Amazon Redshift.

  For information about how to add a primary key to Amazon Aurora PostgreSQL, see [Handle tables without primary keys while creating Amazon Aurora PostgreSQL zero-ETL integrations with Amazon Redshift](https://aws.amazon.com/blogs/database/handle-tables-without-primary-keys-while-creating-amazon-aurora-postgresql-zero-etl-integrations-with-amazon-redshift/) in the *AWS Database Blog*. For information about how to add a primary key to Amazon Aurora MySQL or RDS for MySQL, see [Handle tables without primary keys while creating Amazon Aurora MySQL or Amazon RDS for MySQL zero-ETL integrations with Amazon Redshift](https://aws.amazon.com/blogs/database/handle-tables-without-primary-keys-while-creating-amazon-aurora-mysql-or-amazon-rds-for-mysql-zero-etl-integrations-with-amazon-redshift/) in the *AWS Database Blog*. 
+ You can use data filtering for Aurora zero-ETL integrations to define the scope of replication from the source Aurora DB cluster to the target Amazon Redshift data warehouse. Rather than replicating all data to the target, you can define one or more filters that selectively include or exclude certain tables from being replicated. For more information, see [Data filtering for Aurora zero-ETL integrations with Amazon Redshift](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/zero-etl.filtering.html) in the *Amazon Aurora User Guide*.
+ For Aurora PostgreSQL zero-ETL integrations with Amazon Redshift, Amazon Redshift supports a maximum of 100 databases from Aurora PostgreSQL. Each database replicates from source to target independently.
+ Zero-ETL integration does not support transformations while replicating data from transactional data stores to Amazon Redshift. Data is replicated as-is from the source database. However, you can apply transformations on the replicated data in Amazon Redshift.
+ Zero-ETL integration runs in Amazon Redshift using parallel connections. It runs using the credentials of the user who created the database from the integration. Concurrency scaling doesn't apply to these connections during the sync (writes). Concurrency scaling for reads (from Amazon Redshift clients) works for synced objects.
+ You can set the `REFRESH_INTERVAL` for a zero-ETL integration to control the frequency of data replication into Amazon Redshift. For more information, see [CREATE DATABASE](https://docs.aws.amazon.com/redshift/latest/dg/r_CREATE_DATABASE.html) and [ALTER DATABASE](https://docs.aws.amazon.com/redshift/latest/dg/r_ALTER_DATABASE.html) in the *Amazon Redshift Database Developer Guide*.
+ After you create an Amazon Redshift database from a zero-ETL integration with Amazon DynamoDB, the database state should change from **Creating** to **Active**. This starts the replication of data in the source DynamoDB tables to the target Redshift tables, which are created under the public schema of the destination database (for example, `ddb_rs_customerprofiles_zetl_db`).
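
Several of the preceding settings, such as `QUERY_ALL_STATES` and `REFRESH_INTERVAL`, are database parameters on the destination database. The following is a sketch assuming an example database name; see [ALTER DATABASE](https://docs.aws.amazon.com/redshift/latest/dg/r_ALTER_DATABASE.html) in the *Amazon Redshift Database Developer Guide* for the authoritative syntax and accepted values.

```
-- Allow querying tables in states other than Synced
ALTER DATABASE zetl_destination_db INTEGRATION SET QUERY_ALL_STATES = TRUE;

-- Replicate data on an example 900-second interval
ALTER DATABASE zetl_destination_db INTEGRATION SET REFRESH_INTERVAL 900;
```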

## Considerations when using history mode on the target
<a name="zero-etl-considerations-history-mode"></a>

The following considerations apply when using history mode on the target database. For more information, see [History mode](zero-etl-history-mode.md).
+ When you drop a table on a source, the table on the target is not dropped, but is changed to `DroppedSource` state. You can drop or rename the table from the Amazon Redshift database.
+ When you truncate a table on a source, deletes are run on the target table. For example, if all records are truncated on the source, the corresponding records on the target have the `_record_is_active` column set to `false`.
+ When you run a TRUNCATE TABLE SQL statement on the target table, active history rows are marked inactive with a corresponding timestamp.
+ When a row in a table is set to inactive, it can be deleted after a short (about 10 minute) delay. To delete inactive rows, connect to your zero-ETL database with query editor v2 or another SQL client.
+ You can only delete inactive rows from a table with history mode on. For example, a SQL command similar to the following only deletes inactive rows.

  ```
  delete from schema.user_table where _record_delete_time <= '2024-09-10 12:34:56'
  ```

  This is equivalent to a SQL command like the following.

  ```
  delete from schema.user_table where _record_delete_time <= '2024-09-10 12:34:56' and _record_is_active = False
  ```
+ When you turn history mode off for a table, all historical data is saved to a table named `<schema>.<table-name>_historical_<timestamp>` while the original table named `<schema>.<table-name>` is refreshed.
+ When a table with history mode on is excluded from replication using a table filter, all rows are set as inactive and the table is changed to the `DroppedSource` state. For more information about table filters, see [Data filtering for Aurora zero-ETL integrations with Amazon Redshift](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/zero-etl.filtering.html) in the *Amazon Aurora User Guide*.
+ History mode can only be switched to `true` or `false` for tables in `Synced` state.
+ Materialized views for tables with history mode on are created as full recompute.
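
The history mode columns described above let you separate current from historical rows in a query. The following is a sketch assuming an example table that uses the `_record_is_active` and `_record_delete_time` columns mentioned earlier.

```
-- Only rows that are currently active on the source
SELECT * FROM schema.user_table WHERE _record_is_active = TRUE;

-- Rows deleted on the source at or before a given time
SELECT * FROM schema.user_table
WHERE _record_is_active = FALSE
  AND _record_delete_time <= '2024-09-10 12:34:56';
```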

## Considerations when the zero-ETL integration source is Aurora or Amazon RDS
<a name="zero-etl-considerations-aurora-rds"></a>

The following considerations apply to Aurora and Amazon RDS zero-ETL integrations with Amazon Redshift.
+ You can use data filtering for Aurora and RDS for MySQL zero-ETL integrations to define the scope of replication from the source DB cluster to the target Amazon Redshift data warehouse. Rather than replicating all data to the target, you can define one or more filters that selectively include or exclude certain tables from being replicated. For more information, see [Data filtering for Aurora zero-ETL integrations with Amazon Redshift](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/zero-etl.filtering.html) in the *Amazon Aurora User Guide*.
+ Tables in the integration source must have a primary key. Otherwise, your tables can't be replicated to the target data warehouse in Amazon Redshift.

  For information about how to add a primary key to Amazon Aurora PostgreSQL, see [Handle tables without primary keys while creating Amazon Aurora PostgreSQL zero-ETL integrations with Amazon Redshift](https://aws.amazon.com/blogs/database/handle-tables-without-primary-keys-while-creating-amazon-aurora-postgresql-zero-etl-integrations-with-amazon-redshift/) in the *AWS Database Blog*. For information about how to add a primary key to Amazon Aurora MySQL or RDS for MySQL, see [Handle tables without primary keys while creating Amazon Aurora MySQL or Amazon RDS for MySQL zero-ETL integrations with Amazon Redshift](https://aws.amazon.com/blogs/database/handle-tables-without-primary-keys-while-creating-amazon-aurora-mysql-or-amazon-rds-for-mysql-zero-etl-integrations-with-amazon-redshift/) in the *AWS Database Blog*. 
+ The maximum length of an Amazon Redshift VARCHAR data type is 65,535 bytes. When the content from the source does not fit into this limit, replication does not proceed and the table is put into a failed state. You can set the database parameter `TRUNCATECOLUMNS` to `TRUE` to truncate content to fit in the column. For information about setting `TRUNCATECOLUMNS`, see [CREATE DATABASE](https://docs.aws.amazon.com/redshift/latest/dg/r_CREATE_DATABASE.html) and [ALTER DATABASE](https://docs.aws.amazon.com/redshift/latest/dg/r_ALTER_DATABASE.html) in the *Amazon Redshift Database Developer Guide*.

  For more information about data type differences between zero-ETL integration sources and Amazon Redshift databases, see [Data type differences between Aurora and Amazon Redshift](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/zero-etl.querying.html#zero-etl.data-type-mapping) in the *Amazon Aurora User Guide*.

For Aurora sources, also see [Limitations](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/zero-etl.html#zero-etl.reqs-lims) in the *Amazon Aurora User Guide*.

For Amazon RDS sources, also see [Limitations](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/zero-etl.html#zero-etl.reqs-lims) in the *Amazon RDS User Guide*.
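
The `TRUNCATECOLUMNS` parameter described above is set on the destination database. The following is a sketch assuming an example database name; see [ALTER DATABASE](https://docs.aws.amazon.com/redshift/latest/dg/r_ALTER_DATABASE.html) for the authoritative syntax.

```
ALTER DATABASE zetl_destination_db INTEGRATION SET TRUNCATECOLUMNS = TRUE;
```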

## Considerations when the zero-ETL integration source is DynamoDB
<a name="zero-etl-considerations-ddb"></a>

The following considerations apply to DynamoDB zero-ETL integrations with Amazon Redshift.
+ Table names from DynamoDB longer than 127 characters are not supported.
+ The data from a DynamoDB zero-ETL integration maps to a SUPER data type column in Amazon Redshift.
+ Partition key or sort key column names longer than 127 characters are not supported.
+ A zero-ETL integration from DynamoDB can map to only one Amazon Redshift database. 
+ For partition and sort keys, the maximum precision and scale is (38,18). Numeric data types on DynamoDB support a maximum precision of 38. Amazon Redshift also supports a maximum precision of 38, but the default decimal precision/scale on Amazon Redshift is (38,10). That means scale values can be truncated. 
+ For a successful zero-ETL integration, an individual attribute (consisting of a name-value pair) in a DynamoDB item must not be larger than 64 KB.
+ On activation, the zero-ETL integration exports the full DynamoDB table to populate the Amazon Redshift database. The time it takes for this initial process to complete depends on the DynamoDB table size. The zero-ETL integration then incrementally replicates updates from DynamoDB to Amazon Redshift using DynamoDB incremental exports. This means the replicated DynamoDB data in Amazon Redshift is kept up-to-date automatically.

  Currently, the minimum latency for DynamoDB zero-ETL integration is 15 minutes. You can increase it further by setting a non-zero `REFRESH_INTERVAL` for a zero-ETL integration. For more information, see [CREATE DATABASE](https://docs.aws.amazon.com/redshift/latest/dg/r_CREATE_DATABASE.html) and [ALTER DATABASE](https://docs.aws.amazon.com/redshift/latest/dg/r_ALTER_DATABASE.html) in the *Amazon Redshift Database Developer Guide*.

For Amazon DynamoDB sources, also see [Prerequisites and limitations](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/RedshiftforDynamoDB-zero-etl.html#RedshiftforDynamoDB-zero-etl-prereqs) in the *Amazon DynamoDB Developer Guide*.

## Considerations when the zero-ETL integration source is an application, such as Salesforce, SAP, ServiceNow, or Zendesk
<a name="zero-etl-considerations-glue"></a>

The following considerations apply to zero-ETL integrations with Amazon Redshift when the source is an application, such as Salesforce, SAP, ServiceNow, or Zendesk.
+ Table names and column names from application sources longer than 127 characters are not supported.
+ The maximum length of an Amazon Redshift VARCHAR data type is 65,535 bytes. When the content from the source does not fit into this limit, replication does not proceed and the table is put into a failed state. You can set the database parameter `TRUNCATECOLUMNS` to `TRUE` to truncate content to fit in the column. For information about setting `TRUNCATECOLUMNS`, see [CREATE DATABASE](https://docs.aws.amazon.com/redshift/latest/dg/r_CREATE_DATABASE.html) and [ALTER DATABASE](https://docs.aws.amazon.com/redshift/latest/dg/r_ALTER_DATABASE.html) in the *Amazon Redshift Database Developer Guide*.

  For more information about data type differences between zero-ETL integration application sources and Amazon Redshift databases, see [Zero-ETL integrations](https://docs.aws.amazon.com/glue/latest/dg/zero-etl-using.html) in the *AWS Glue Developer Guide*.
+ The minimum latency for a zero-ETL integration with applications is 1 hour. You can increase it further by setting a non-zero `REFRESH_INTERVAL` for a zero-ETL integration. For more information, see [CREATE DATABASE](https://docs.aws.amazon.com/redshift/latest/dg/r_CREATE_DATABASE.html) and [ALTER DATABASE](https://docs.aws.amazon.com/redshift/latest/dg/r_ALTER_DATABASE.html) in the *Amazon Redshift Database Developer Guide*.

For sources of zero-ETL integrations with applications, also see [Zero-ETL integrations](https://docs.aws.amazon.com/glue/latest/dg/zero-etl-using.html) in the *AWS Glue Developer Guide*. 

# Getting started with zero-ETL integrations
<a name="zero-etl-using.setting-up"></a>

This set of tasks walks you through setting up your first zero-ETL integration. First, you configure your integration source and set it up with the required parameters and permissions. Then, you continue to the rest of the initial setup from the Amazon Redshift console or AWS CLI. The console provides a **Fix it for me** option to correct some configuration issues.

**Topics**
+ [Create and configure a target Amazon Redshift data warehouse](zero-etl-setting-up.rs-data-warehouse.md)
+ [Turn on case sensitivity for your data warehouse](zero-etl-setting-up.case-sensitivity.md)
+ [Configure authorization for your Amazon Redshift data warehouse](zero-etl-using.redshift-iam.md)
+ [Create a zero-ETL integration](zero-etl-setting-up.create-integration.md)

# Create and configure a target Amazon Redshift data warehouse
<a name="zero-etl-setting-up.rs-data-warehouse"></a>

In this step, you create and configure a target Amazon Redshift data warehouse, such as a Redshift Serverless workgroup or a provisioned cluster. If you already have an Amazon Redshift data warehouse configured for use with zero-ETL integrations, you can skip this step.

Your target data warehouse must have the following characteristics:
+ Running Amazon Redshift Serverless or a provisioned cluster of an RA3 node type. 
+ Has case sensitivity (`enable_case_sensitive_identifier`) turned on. For more information, see [Turn on case sensitivity for your data warehouse](zero-etl-setting-up.case-sensitivity.md).
+ Encrypted, if your target data warehouse is an Amazon Redshift provisioned cluster. For more information, see [Amazon Redshift database encryption](working-with-db-encryption.md).
+ Created in the same AWS Region as the integration source.

To create your target data warehouse for your zero-ETL integrations, see one of the following topics depending on your deployment type:
+ To create an Amazon Redshift provisioned cluster, see [Creating a cluster](create-cluster.md).
+ To create an Amazon Redshift Serverless workgroup with a namespace, see [Creating a workgroup with a namespace](serverless-console-workgroups-create-workgroup-wizard.md).

When you create a provisioned cluster, Amazon Redshift also creates a default parameter group. You can't edit the default parameter group. However, you can create a custom parameter group before creating a new cluster and then associate it with the cluster. Or, you can edit the parameter group that will be associated with the created cluster. You must also turn on case sensitivity for the parameter group either when creating the custom parameter group or when editing a current one to use zero-ETL integrations.

To create a custom parameter group using the Amazon Redshift console or the AWS CLI, see [Creating a parameter group](https://docs.aws.amazon.com/redshift/latest/mgmt/parameter-group-create.html).

# Turn on case sensitivity for your data warehouse
<a name="zero-etl-setting-up.case-sensitivity"></a>

You can attach a parameter group and enable case sensitivity for a provisioned cluster during creation. However, you can update a serverless workgroup through the AWS Command Line Interface (AWS CLI) only after it's been created. This is required to support the case sensitivity of source tables and columns. The `enable_case_sensitive_identifier` is a configuration value that determines whether name identifiers of databases, tables, and columns are case sensitive. This parameter must be turned on to create zero-ETL integrations in the data warehouse. For more information, see [enable\_case\_sensitive\_identifier](https://docs.aws.amazon.com/redshift/latest/dg/r_enable_case_sensitive_identifier.html).

For Amazon Redshift Serverless – [Turn on case sensitivity for Amazon Redshift Serverless using the AWS CLI](#case-sensitivity-serverless-cli). Note that you can turn on case sensitivity for Amazon Redshift Serverless only from the AWS CLI.

For Amazon Redshift provisioned clusters, enable case sensitivity for your target cluster using one of the following topics: 
+ [Turn on case sensitivity for Amazon Redshift provisioned clusters using the Amazon Redshift console](#case-sensitivity-cluster-console)
+ [Turn on case sensitivity for Amazon Redshift provisioned clusters using the AWS CLI](#case-sensitivity-cluster-cli)

## Turn on case sensitivity for Amazon Redshift Serverless using the AWS CLI
<a name="case-sensitivity-serverless-cli"></a>

Run the following AWS CLI command to turn on case sensitivity for your workgroup. 

```
aws redshift-serverless update-workgroup \
        --workgroup-name target-workgroup \
        --config-parameters parameterKey=enable_case_sensitive_identifier,parameterValue=true
```

Wait for the workgroup status to be `Active` before proceeding to the next step.
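
You can check the status with the AWS CLI, using the same workgroup name as in the command above:

```
aws redshift-serverless get-workgroup \
    --workgroup-name target-workgroup \
    --query 'workgroup.status'
```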

## Turn on case sensitivity for Amazon Redshift provisioned clusters using the Amazon Redshift console
<a name="case-sensitivity-cluster-console"></a>

1. Sign in to the AWS Management Console and open the Amazon Redshift console at [https://console.aws.amazon.com/redshiftv2/](https://console.aws.amazon.com/redshiftv2/).

1. In the left navigation pane, choose **Provisioned clusters dashboard**.

1. Choose the provisioned cluster that you want to replicate data into.

1. In the left navigation pane, choose **Configurations** > **Workload management**.

1. On the workload management page, choose the parameter group.

1. Choose the **Parameters** tab.

1. Choose **Edit parameters**, then change **enable_case_sensitive_identifier** to **true**.

1. Then, choose **Save**.

## Turn on case sensitivity for Amazon Redshift provisioned clusters using the AWS CLI
<a name="case-sensitivity-cluster-cli"></a>

1. Because you can't edit the default parameter group, from your terminal program, run the following AWS CLI command to create a custom parameter group. Later, you will associate it with the provisioned cluster.

   ```
   aws redshift create-cluster-parameter-group \
       --parameter-group-name zero-etl-params \
       --parameter-group-family redshift-2.0 \
       --description "Param group for zero-ETL integrations"
   ```

1. Run the following AWS CLI command to turn on case sensitivity for the parameter group.

   ```
   aws redshift modify-cluster-parameter-group \
       --parameter-group-name zero-etl-params \
       --parameters ParameterName=enable_case_sensitive_identifier,ParameterValue=true
   ```

1. Run the following command to associate the parameter group with the cluster.

   ```
   aws redshift modify-cluster \
       --cluster-identifier target-cluster \
       --cluster-parameter-group-name zero-etl-params
   ```

1. Wait for the provisioned cluster to be available. You can check the status of the cluster by using the `describe-cluster` command. Then, run the following command to reboot the cluster.

   ```
   aws redshift reboot-cluster \
       --cluster-identifier target-cluster
   ```
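
To check the cluster status mentioned in the previous step, you can run a command similar to the following:

```
aws redshift describe-clusters \
    --cluster-identifier target-cluster \
    --query 'Clusters[0].ClusterStatus'
```

The cluster is ready when the status is `available`.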

# Configure authorization for your Amazon Redshift data warehouse
<a name="zero-etl-using.redshift-iam"></a>

To replicate data from your integration source into your Amazon Redshift data warehouse, you must initially add the following two entities:
+ *Authorized principal* – identifies the user or role that can create zero-ETL integrations into the data warehouse.
+ *Authorized integration source* – identifies the source database that can update the data warehouse.

You can configure authorized principals and authorized integration sources from the **Resource Policy** tab on the Amazon Redshift console or using the Amazon Redshift `PutResourcePolicy` API operation.
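
From the AWS CLI, the `PutResourcePolicy` call takes the resource ARN of the namespace or cluster and a policy document. The following sketch shows only the call shape; the ARN is a placeholder, and the statements inside `zero-etl-policy.json` must follow the policy format described for the `PutResourcePolicy` API.

```
aws redshift put-resource-policy \
    --resource-arn arn:aws:redshift:us-east-1:123456789012:namespace:<namespace-id> \
    --policy file://zero-etl-policy.json
```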

## Add authorized principals
<a name="zero-etl-using.redshift-iam-ap"></a>

To create a zero-ETL integration into your Redshift Serverless workgroup or provisioned cluster, authorize access to the associated namespace or provisioned cluster. 

You can skip this step if both of the following conditions are true:
+ The AWS account that owns the Redshift Serverless workgroup or provisioned cluster also owns the source database.
+ That principal is associated with an identity-based IAM policy with permissions to create zero-ETL integrations into this Redshift Serverless namespace or provisioned cluster.

### Add authorized principals to an Amazon Redshift Serverless namespace
<a name="iam-ap-serverless"></a>

1. In the Amazon Redshift console, in the left navigation pane, choose **Redshift Serverless**.

1. Choose **Namespace configuration**, then choose your namespace, and go to the **Resource Policy** tab.

1. Choose **Add authorized principals**.

1. For each authorized principal that you want to add, enter either the ARN of the AWS user or role, or the ID of the AWS account that you want to grant access to create zero-ETL integrations into the namespace. An account ID is stored as an ARN.

1. Choose **Save changes**.

### Add authorized principals to an Amazon Redshift provisioned cluster
<a name="iam-ap-cluster"></a>

1. In the Amazon Redshift console, in the left navigation pane, choose **Provisioned clusters dashboard**.

1. Choose **Clusters**, then choose the cluster, and go to the **Resource Policy** tab.

1. Choose **Add authorized principals**.

1. For each authorized principal that you want to add, enter either the ARN of the AWS user or role, or the ID of the AWS account that you want to grant access to create zero-ETL integrations into the cluster. An account ID is stored as an ARN.

1. Choose **Save changes**.

## Add authorized integration sources
<a name="zero-etl-using.redshift-iam-air"></a>

To allow your source to update your Amazon Redshift data warehouse, you must add it as an authorized integration source to the namespace.

### Add an authorized integration source to an Amazon Redshift Serverless namespace
<a name="iam-air-serverless"></a>

1. In the Amazon Redshift console, go to **Serverless dashboard**. 

1. Choose the name of the namespace.

1. Go to the **Resource Policy** tab.

1. Choose **Add authorized integration source**.

1. Specify the ARN of the source for the zero-ETL integration.

**Note**  
Removing an authorized integration source stops data from replicating into the namespace. This action deactivates all zero-ETL integrations from that source into this namespace.

### Add an authorized integration source to an Amazon Redshift provisioned cluster
<a name="iam-air-cluster"></a>

1. In the Amazon Redshift console, go to **Provisioned clusters dashboard**. 

1. Choose the name of the provisioned cluster.

1. Go to the **Resource Policy** tab.

1. Choose **Add authorized integration source**.

1. Specify the ARN of the source that's the data source for the zero-ETL integration.

**Note**  
Removing an authorized integration source stops data from replicating into the provisioned cluster. This action deactivates all zero-ETL integrations from that source into this Amazon Redshift provisioned cluster.

## Configure authorization using the Amazon Redshift API
<a name="zero-etl-using.resource-policies"></a>

You can use the Amazon Redshift API operations to configure resource policies that work with zero-ETL integrations.

To control the source that can create an inbound integration into the namespace, create a resource policy and attach it to the namespace. With the resource policy, you can specify the source that has access to the integration. The resource policy is attached to the namespace of your target data warehouse to allow the source to create an inbound integration to replicate live data from the source into Amazon Redshift.

The following is a sample resource policy.

------
#### [ JSON ]

****  

```
{
  "Version":"2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "redshift.amazonaws.com"
      },
      "Action": "redshift:AuthorizeInboundIntegration",
      "Resource": "arn:aws:redshift:*:*:integration:*",
      "Condition": {
        "StringEquals": {
          "aws:SourceArn": "arn:aws:rds:*:111122223333:cluster:*"
        }
      }
    },
    {
      "Effect": "Allow",
      "Principal": {
       "AWS": "arn:aws:iam::111122223333:root"
      },
      "Action": "redshift:CreateInboundIntegration",
      "Resource": "arn:aws:redshift:*:*:integration:*"
    }
  ]
}
```

------

The following summarizes the Amazon Redshift API operations applicable to configuring resource policies for integrations:
+ Use the [PutResourcePolicy](https://docs.aws.amazon.com/redshift/latest/APIReference/API_PutResourcePolicy.html) API operation to persist the resource policy. When you provide another resource policy, the previous resource policy on the resource is replaced. Use the previous example resource policy, which grants permissions for the following actions:
  + `CreateInboundIntegration` – Allows the source principal to create an inbound integration for data to be replicated from the source into the target data warehouse.
  + `AuthorizeInboundIntegration` – Allows Amazon Redshift to continuously validate that the target data warehouse can receive data replicated from the source ARN.
+ Use the [GetResourcePolicy](https://docs.aws.amazon.com/redshift/latest/APIReference/API_GetResourcePolicy.html) API operation to view existing resource policies.
+ Use the [DeleteResourcePolicy](https://docs.aws.amazon.com/redshift/latest/APIReference/API_DeleteResourcePolicy.html) API operation to remove a resource policy from the resource.
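The same configuration can be scripted. The following Python sketch builds a resource policy equivalent to the earlier sample and serializes it for the `PutResourcePolicy` operation. The account ID, namespace ARN, and source ARN are placeholders; the boto3 call is shown commented out so the snippet runs without AWS credentials.

```python
import json

# Placeholder identifiers -- substitute your own values.
ACCOUNT_ID = "111122223333"
NAMESPACE_ARN = ("arn:aws:redshift-serverless:us-east-1:"
                 f"{ACCOUNT_ID}:namespace/EXAMPLE-NAMESPACE-ID")
SOURCE_ARN = f"arn:aws:dynamodb:us-east-1:{ACCOUNT_ID}:table/my-ddb-table"

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # Lets Amazon Redshift validate that the target can receive
            # data replicated from the source ARN.
            "Effect": "Allow",
            "Principal": {"Service": "redshift.amazonaws.com"},
            "Action": "redshift:AuthorizeInboundIntegration",
            "Resource": NAMESPACE_ARN,
            "Condition": {"StringEquals": {"aws:SourceArn": SOURCE_ARN}},
        },
        {
            # Lets the account principal create the inbound integration.
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{ACCOUNT_ID}:root"},
            "Action": "redshift:CreateInboundIntegration",
            "Resource": NAMESPACE_ARN,
        },
    ],
}
policy_json = json.dumps(policy)

# With AWS credentials configured, attach the policy (uncomment to run):
# import boto3
# boto3.client("redshift").put_resource_policy(
#     ResourceArn=NAMESPACE_ARN, Policy=policy_json)
print(policy_json[:40])
```

Because `PutResourcePolicy` replaces any existing policy on the resource, retrieve the current policy first with `GetResourcePolicy` if you need to merge statements.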

To update a resource policy, you can also use the [put-resource-policy](https://docs.aws.amazon.com/cli/latest/reference/redshift/put-resource-policy.html) AWS CLI command. For example, to put a resource policy on your Amazon Redshift namespace ARN for a DynamoDB source, run an AWS CLI command similar to the following.

```
aws redshift put-resource-policy \
--policy file://rs-rp.json \
--resource-arn "arn:aws:redshift-serverless:us-east-1:123456789012:namespace/cc4ffe56-ad2c-4fd1-a5a2-f29124a56433"
```

Where `rs-rp.json` contains:

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "redshift.amazonaws.com"
            },
            "Action": "redshift:AuthorizeInboundIntegration",
            "Resource": "arn:aws:redshift-serverless:us-east-1:123456789012:namespace/cc4ffe56-ad2c-4fd1-a5a2-f29124a56433",
            "Condition": {
                "StringEquals": {
                    "aws:SourceArn": "arn:aws:dynamodb:us-east-1:123456789012:table/test_ddb"
                }
            }
        },
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::123456789012:root"
            },
            "Action": "redshift:CreateInboundIntegration",
            "Resource": "arn:aws:redshift-serverless:us-east-1:123456789012:namespace/cc4ffe56-ad2c-4fd1-a5a2-f29124a56433"
        }
    ]
}
```

------

# Create a zero-ETL integration
<a name="zero-etl-setting-up.create-integration"></a>

First, you create a zero-ETL integration to replicate your source data to Amazon Redshift.

The source of your data determines which type of zero-ETL integration to create.

**Topics**
+ [Create a zero-ETL integration for Aurora](zero-etl-setting-up.create-integration-aurora.md)
+ [Create a zero-ETL integration for Amazon RDS](zero-etl-setting-up.create-integration-rds.md)
+ [Create a zero-ETL integration for DynamoDB](zero-etl-setting-up.create-integration-ddb.md)
+ [Create a zero-ETL integration with applications](zero-etl-setting-up.create-integration-glue.md)

# Create a zero-ETL integration for Aurora
<a name="zero-etl-setting-up.create-integration-aurora"></a>

In this step, you create an Aurora zero-ETL integration with Amazon Redshift.

**To create an Aurora zero-ETL integration with Amazon Redshift**

1. From the Amazon RDS console, [create a custom DB cluster parameter group](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/zero-etl.setting-up.html#zero-etl.parameters) as described in the *Amazon Aurora User Guide*.

1. From the Amazon RDS console, [create a source Amazon Aurora DB cluster](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/zero-etl.setting-up.html#zero-etl.create-cluster) as described in the *Amazon Aurora User Guide*.

1. From the Amazon Redshift console: [Create and configure a target Amazon Redshift data warehouse](zero-etl-setting-up.rs-data-warehouse.md). 
   + From the AWS CLI or Amazon Redshift console: [Turn on case sensitivity for your data warehouse](zero-etl-setting-up.case-sensitivity.md).
   + From the Amazon Redshift console: [Configure authorization for your Amazon Redshift data warehouse](zero-etl-using.redshift-iam.md).

1. From the Amazon RDS console, [create a zero-ETL integration](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/zero-etl.creating.html#zero-etl.create) as described in the *Amazon Aurora User Guide*.

1. From the Amazon Redshift console or the query editor v2, [create an Amazon Redshift database from your integration](https://docs.aws.amazon.com/redshift/latest/mgmt/zero-etl-using.creating-db.html).

   Then, [query and create materialized views with replicated data](https://docs.aws.amazon.com/redshift/latest/mgmt/zero-etl-using.querying-and-creating-materialized-views.html).

For detailed information to create Aurora zero-ETL integrations, see [Creating Amazon Aurora zero-ETL integrations with Amazon Redshift](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/zero-etl.creating.html) in the *Amazon Aurora User Guide*.

# Create a zero-ETL integration for Amazon RDS
<a name="zero-etl-setting-up.create-integration-rds"></a>

In this step, you create an Amazon RDS zero-ETL integration with Amazon Redshift. Amazon Redshift supports integrations with RDS for MySQL, RDS for PostgreSQL, and RDS for Oracle.

**To create an Amazon RDS zero-ETL integration with Amazon Redshift**

1. From the Amazon RDS console, [create a custom DB parameter group](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/zero-etl.setting-up.html#zero-etl.parameters) as described in the *Amazon RDS User Guide*.

1. From the Amazon RDS console, [create a source Amazon RDS instance](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/zero-etl.setting-up.html#zero-etl.create-cluster) as described in the *Amazon RDS User Guide*.

1. From the Amazon Redshift console: [Create and configure a target Amazon Redshift data warehouse](zero-etl-setting-up.rs-data-warehouse.md). 
   + From the AWS CLI or Amazon Redshift console: [Turn on case sensitivity for your data warehouse](zero-etl-setting-up.case-sensitivity.md).
   + From the Amazon Redshift console: [Configure authorization for your Amazon Redshift data warehouse](zero-etl-using.redshift-iam.md).

1. From the Amazon RDS console, [create a zero-ETL integration](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/zero-etl.creating.html#zero-etl.create) as described in the *Amazon RDS User Guide*.

1. From the Amazon Redshift console or the query editor v2, [create an Amazon Redshift database from your integration](https://docs.aws.amazon.com/redshift/latest/mgmt/zero-etl-using.creating-db.html).

   Then, [query and create materialized views with replicated data](https://docs.aws.amazon.com/redshift/latest/mgmt/zero-etl-using.querying-and-creating-materialized-views.html).

The Amazon RDS console offers a step-by-step integration creation flow, in which you specify the source database and the target Amazon Redshift data warehouse. If issues occur, then you can choose to have Amazon RDS fix the issues for you instead of manually fixing them on either the Amazon RDS or Amazon Redshift console. 

For detailed instructions to create RDS zero-ETL integrations, see [Creating Amazon RDS zero-ETL integrations with Amazon Redshift](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/zero-etl.creating.html) in the *Amazon RDS User Guide*. 

For detailed instructions to specifically create an Amazon RDS for Oracle zero-ETL integration, see [Setting up a zero-ETL integration](https://docs.aws.amazon.com/odb/latest/UserGuide/setting-up-zero-etl.html) in the *Oracle Database@AWS User Guide*.

# Create a zero-ETL integration for DynamoDB
<a name="zero-etl-setting-up.create-integration-ddb"></a>

Before creating a zero-ETL integration, review the considerations and requirements outlined in [Considerations when using zero-ETL integrations with Amazon Redshift](zero-etl.reqs-lims.md). Follow this general flow to create a zero-ETL integration from DynamoDB to Amazon Redshift.

**To replicate DynamoDB data to Amazon Redshift with zero-ETL integration**

1. Confirm that your sign-in credentials have permissions to work with zero-ETL integrations with Amazon Redshift and DynamoDB. See [IAM policy to work with DynamoDB zero-ETL integrations](#zero-etl-signin-iam-policy) for an example IAM policy.

1. From the DynamoDB console, [configure your DynamoDB table](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/RedshiftforDynamoDB-zero-etl.html#RedshiftforDynamoDB-zero-etl-prereqs) to have point-in-time recovery (PITR), resource policies, identity-based policies, and encryption key permissions as described in the *Amazon DynamoDB Developer Guide*.

1. From the Amazon Redshift console: [Create and configure a target Amazon Redshift data warehouse](zero-etl-setting-up.rs-data-warehouse.md). 
   + From the AWS CLI or Amazon Redshift console: [Turn on case sensitivity for your data warehouse](zero-etl-setting-up.case-sensitivity.md).
   + From the Amazon Redshift console: [Configure authorization for your Amazon Redshift data warehouse](zero-etl-using.redshift-iam.md).

1. From the Amazon Redshift console, create the zero-ETL integration as described later in this topic.

1. From the Amazon Redshift console, create the destination database in your Amazon Redshift data warehouse. For more information, see [Creating destination databases in Amazon Redshift](zero-etl-using.creating-db.md).

1. From the Amazon Redshift console, query your replicated data in the Amazon Redshift data warehouse. For more information, see [Querying replicated data in Amazon Redshift](zero-etl-using.querying-and-creating-materialized-views.md).
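Several of the DynamoDB prerequisites above can be scripted. The following Python sketch composes the request that turns on point-in-time recovery (PITR) for a source table through DynamoDB's `UpdateContinuousBackups` operation; the table name is a placeholder, and the boto3 call is commented out so the snippet runs without AWS credentials.

```python
# Request parameters for DynamoDB's UpdateContinuousBackups operation, which
# turns on point-in-time recovery (PITR) for the source table. The table
# name is a placeholder -- substitute your own.
pitr_request = {
    "TableName": "my-ddb-table",
    "PointInTimeRecoverySpecification": {"PointInTimeRecoveryEnabled": True},
}

# With AWS credentials configured (uncomment to run):
# import boto3
# boto3.client("dynamodb").update_continuous_backups(**pitr_request)
print(pitr_request["TableName"])
```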

In this step, you create an Amazon DynamoDB zero-ETL integration with Amazon Redshift.

------
#### [ Amazon Redshift console ]

**To create an Amazon DynamoDB zero-ETL integration with Amazon Redshift using the Amazon Redshift console**

1. From the Amazon Redshift console, choose **Zero-ETL integrations**. On the pane with the list of zero-ETL integrations, choose **Create zero-ETL integration**, **Create DynamoDB integration**.

1. On the pages to create an integration, enter information about the integration as follows:
   + Enter an **Integration name** – A unique name that you can use to reference your integration.
   + Enter a **Description** – A description of the data that is replicated from source to target.
   + Choose the DynamoDB **Source table** – You can choose one DynamoDB table. Point-in-time recovery (PITR) must be enabled on the table. Only tables up to 100 tebibytes (TiB) in size are shown. The source DynamoDB table must be encrypted. The source must also have a resource policy with authorized principals and integration sources. If the policy is not correct, you are presented with the option **Fix it for me**. 
   + Choose the target **Amazon Redshift data warehouse** – The data warehouse can be an Amazon Redshift provisioned cluster or Redshift Serverless workgroup. If your target Amazon Redshift data warehouse is in the same account, you can select the target. If the target is in a different account, specify the **Redshift data warehouse ARN**. The target must have a resource policy with authorized principals and an authorized integration source, and the `enable_case_sensitive_identifier` parameter set to `true`. If the target is in the same account and doesn't have the correct resource policies, you can select the **Fix it for me** option to automatically apply them during the create integration process. If the target is in a different AWS account, you must apply the resource policy on the Amazon Redshift data warehouse manually. Similarly, if the target doesn't have the parameter group option `enable_case_sensitive_identifier` set to `true`, you can select **Fix it for me** to automatically update the parameter group and reboot the warehouse during the create integration process.
   + Enter up to 50 tag **Keys**, each with an optional **Value** – To provide additional metadata about the integration. For more information, see [Tag resources in Amazon Redshift](amazon-redshift-tagging.md).
   + Choose **Encryption** options – To encrypt the integration. For more information, see [Encrypting DynamoDB integrations with a customer managed key](#zero-etl.create-encrypt).

     When you encrypt the integration, you can also add **Additional encryption contexts**. For more information, see [Encryption context](#zero-etl.add-encryption-context).

1. A review page is shown where you can choose **Create DynamoDB integration**.

1. A progress page is shown where you can view the progress of the various tasks to create the zero-ETL integration.

1. After the integration is created and active, on the details page of the integration, choose **Connect to database**. When your Amazon Redshift data warehouse was first created, a database was also created. You need to connect to any database in your target data warehouse to create another database for the integration. In the **Connect to database** page, determine whether you can use a recent connection and choose an **Authentication** method. Depending on your authentication method, enter information to connect to an existing database in your target. This authentication information can include the existing **Database name** (typically, `dev`) and the **Database user** specified when the database was created with the Amazon Redshift data warehouse.

1. After you are connected to a database, choose **Create database from integration** to create the database that receives the data from the source. When you create the database you provide the **Integration ID**, **Data warehouse name**, and **Database name**.

1. After the integration status and the destination database are `Active`, data begins to replicate from your DynamoDB table to the target table. As you add data to the source, it replicates automatically to the target Amazon Redshift data warehouse.

------
#### [ AWS CLI ]

To create an Amazon DynamoDB zero-ETL integration with Amazon Redshift using the AWS CLI, use the `create-integration` command with the following options:
+ `integration-name` – Specify a name for the integration.
+ `source-arn` – Specify the ARN of the DynamoDB source.
+ `target-arn` – Specify the namespace ARN of the Amazon Redshift provisioned cluster or Redshift Serverless workgroup target.

The following example creates an integration by providing the integration name, source ARN, and target ARN. The integration is not encrypted.

```
aws redshift create-integration \
--integration-name ddb-integration \
--source-arn arn:aws:dynamodb:us-east-1:123456789012:table/books \
--target-arn arn:aws:redshift:us-east-1:123456789012:namespace:a1b2c3d4-5678-90ab-cdef-EXAMPLE22222
          
{
    "Status": "creating",
    "IntegrationArn": "arn:aws:redshift:us-east-1:123456789012:integration:a1b2c3d4-5678-90ab-cdef-EXAMPLE11111",
    "Errors": [],
    "ResponseMetadata": {
        "RetryAttempts": 0,
        "HTTPStatusCode": 200,
        "RequestId": "132cbe27-fd10-4f0a-aacb-b68f10bb2bfb",
        "HTTPHeaders": {
            "x-amzn-requestid": "132cbe27-fd10-4f0a-aacb-b68f10bb2bfb",
            "date": "Sat, 24 Aug 2024 05:44:08 GMT",
            "content-length": "934",
            "content-type": "text/xml"
        }
    },
    "Tags": [],
    "CreateTime": "2024-08-24T05:44:08.573Z",
    "KMSKeyId": "arn:aws:kms:us-east-1:123456789012:key/a1b2c3d4-5678-90ab-cdef-EXAMPLE33333",
    "AdditionalEncryptionContext": {},
    "TargetArn": "arn:aws:redshift:us-east-1:123456789012:namespace:a1b2c3d4-5678-90ab-cdef-EXAMPLE22222",
    "IntegrationName": "ddb-integration",
    "SourceArn": "arn:aws:dynamodb:us-east-1:123456789012:table/books"
}
```

The following example creates an integration using a customer managed key for encryption. Before creating the integration:
+ Create a customer managed key (called "CMCMK" in the example) in the same account (called "AccountA" in the example) as the source DynamoDB table.
+ Ensure that the user or role (called "RoleA" in the example) that is used to create the integration has `kms:CreateGrant` and `kms:DescribeKey` permissions on this KMS key. 
+ Add the following to the key policy.

```
{
    "Sid": "Enable RoleA to create grants with key",
    "Effect": "Allow",
    "Principal": {
        "AWS": "RoleA-ARN"
    },
    "Action": "kms:CreateGrant",
    "Resource": "*",
    "Condition": {
        // Add "StringEquals" condition if you plan to provide additional encryption context
        // for the zero-ETL integration. Ensure that the key-value pairs added here match
        // the key-value pair you plan to use while creating the integration.
        // Remove this if you don't plan to use additional encryption context
        "StringEquals": {
            "kms:EncryptionContext:context-key1": "context-value1"
        },
        "ForAllValues:StringEquals": {
            "kms:GrantOperations": [
                "Decrypt",
                "GenerateDataKey",
                "CreateGrant"
            ]
        }
    }
},
{
    "Sid": "Enable RoleA to describe key",
    "Effect": "Allow",
    "Principal": {
        "AWS": "RoleA-ARN"
    },
    "Action": "kms:DescribeKey",
    "Resource": "*"
},
{
    "Sid": "Allow use by RS SP",
    "Effect": "Allow",
    "Principal": {
        "Service": "redshift.amazonaws.com"
    },
    "Action": "kms:CreateGrant",
    "Resource": "*"
}
```

```
aws redshift create-integration \
--integration-name ddb-integration \
--source-arn arn:aws:dynamodb:us-east-1:123456789012:table/books \
--target-arn arn:aws:redshift:us-east-1:123456789012:namespace:a1b2c3d4-5678-90ab-cdef-EXAMPLE22222 \
--kms-key-id arn:aws:kms:us-east-1:123456789012:key/a1b2c3d4-5678-90ab-cdef-EXAMPLE33333 \
--additional-encryption-context key33=value33  # This matches the condition in the key policy.

{
    "IntegrationArn": "arn:aws:redshift:us-east-1:123456789012:integration:a1b2c3d4-5678-90ab-cdef-EXAMPLE11111",
    "IntegrationName": "ddb-integration",
    "SourceArn": "arn:aws:dynamodb:us-east-1:123456789012:table/books",
    "SourceType": "dynamodb",
    "TargetArn": "arn:aws:redshift:us-east-1:123456789012:namespace:a1b2c3d4-5678-90ab-cdef-EXAMPLE22222",
    "Status": "creating",
    "Errors": [],
    "CreateTime": "2024-10-02T18:29:26.710Z",
    "KMSKeyId": "arn:aws:kms:us-east-1:123456789012:key/a1b2c3d4-5678-90ab-cdef-EXAMPLE33333",
    "AdditionalEncryptionContext": {
        "key33": "value33"
    },
    "Tags": []
}
```

------
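The `IntegrationArn` in the responses above identifies the new integration. ARNs follow the `arn:partition:service:region:account-id:resource` format, so you can pull out the pieces with a small helper like the following (illustrative only; the sample ARN mirrors the CLI output above).

```python
def parse_arn(arn: str) -> dict:
    """Split an ARN into its colon-delimited components.

    The resource part may itself contain colons (for example,
    "integration:<uuid>"), so split at most 5 times.
    """
    partition, service, region, account, resource = arn.split(":", 5)[1:]
    return {
        "partition": partition,
        "service": service,
        "region": region,
        "account": account,
        "resource": resource,
    }

integration_arn = ("arn:aws:redshift:us-east-1:123456789012:"
                   "integration:a1b2c3d4-5678-90ab-cdef-EXAMPLE11111")
parts = parse_arn(integration_arn)
print(parts["service"], parts["region"])  # redshift us-east-1
```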

## IAM policy to work with DynamoDB zero-ETL integrations
<a name="zero-etl-signin-iam-policy"></a>

When creating zero-ETL integrations, your sign-in credentials must have permissions for both DynamoDB and Amazon Redshift actions, and for the resources involved as sources and targets of the integration. Following is an example that demonstrates the minimum permissions required.

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "dynamodb:ListTables"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "dynamodb:GetResourcePolicy",
                "dynamodb:PutResourcePolicy",
                "dynamodb:UpdateContinuousBackups"
            ],
            "Resource": [
                "arn:aws:dynamodb:us-east-1:111122223333:table/my-ddb-table"
            ]
        },
        {
            "Sid": "AllowRedshiftDescribeIntegration",
            "Effect": "Allow",
            "Action": [
                "redshift:DescribeIntegrations"
            ],
            "Resource": "*"
        },
        {
            "Sid": "AllowRedshiftCreateIntegration",
            "Effect": "Allow",
            "Action": "redshift:CreateIntegration",
            "Resource": "arn:aws:redshift:us-east-1:111122223333:integration:*"
        },
        {
            "Sid": "AllowRedshiftModifyDeleteIntegration",
            "Effect": "Allow",
            "Action": [
                "redshift:ModifyIntegration",
                "redshift:DeleteIntegration"
            ],
            "Resource": "arn:aws:redshift:us-east-1:111122223333:integration:<uuid>"
        },
        {
            "Sid": "AllowRedshiftCreateInboundIntegration",
            "Effect": "Allow",
            "Action": "redshift:CreateInboundIntegration",
            "Resource": "arn:aws:redshift:us-east-1:111122223333:namespace:<uuid>"
        }
    ]
}
```

------
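As a sanity check, you can verify programmatically that a policy document like the one above grants the actions the integration workflow needs. The helper and the abbreviated policy below are illustrative, not an AWS API.

```python
import json

def allowed_actions(policy: dict) -> set:
    """Collect every action granted by Allow statements in a policy document."""
    actions = set()
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        acts = stmt.get("Action", [])
        if isinstance(acts, str):
            acts = [acts]  # "Action" may be a string or a list
        actions.update(acts)
    return actions

# Abbreviated version of the sample policy above.
policy = json.loads("""{
  "Version": "2012-10-17",
  "Statement": [
    {"Effect": "Allow", "Action": ["dynamodb:ListTables"], "Resource": "*"},
    {"Effect": "Allow", "Action": "redshift:CreateIntegration",
     "Resource": "arn:aws:redshift:us-east-1:111122223333:integration:*"}
  ]
}""")

required = {"dynamodb:ListTables", "redshift:CreateIntegration"}
missing = required - allowed_actions(policy)
print(sorted(missing))  # [] when every required action is granted
```

Note that this check ignores `Resource` and `Condition` scoping; it only confirms that the action names appear in an Allow statement.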

## Encrypting DynamoDB integrations with a customer managed key
<a name="zero-etl.create-encrypt"></a>

If you specify a customer managed KMS key rather than an AWS owned key when you create a DynamoDB zero-ETL integration, the key policy must provide the Amazon Redshift service principal access to the `CreateGrant` action. In addition, it must allow the requester account or role permission to run the `DescribeKey` and `CreateGrant` actions.

The following sample key policy statements demonstrate the permissions required in your policy. Some examples include context keys to further reduce the scope of permissions.

### Sample key policy statements
<a name="zero-etl.kms-sample-policy"></a>

The following policy statement allows the requester account or role to retrieve information about a KMS key.

```
{
   "Effect":"Allow",
   "Principal":{
      "AWS":"arn:aws:iam::{account-ID}:role/{role-name}"
   },
   "Action":"kms:DescribeKey",
   "Resource":"*"
}
```

The following policy statement allows the requester account or role to add a grant to a KMS key. The [kms:ViaService](https://docs.aws.amazon.com/kms/latest/developerguide/conditions-kms.html#conditions-kms-via-service) condition key limits use of the KMS key to requests from Amazon Redshift.

```
{
   "Effect":"Allow",
   "Principal":{
      "AWS":"arn:aws:iam::{account-ID}:role/{role-name}"
   },
   "Action":"kms:CreateGrant",
   "Resource":"*",
   "Condition":{
      "StringEquals":{
         "kms:EncryptionContext:{context-key}":"{context-value}",
         "kms:ViaService":"redshift.{region}.amazonaws.com"
      },
      "ForAllValues:StringEquals":{
         "kms:GrantOperations":[
            "Decrypt",
            "GenerateDataKey",
            "CreateGrant"
         ]
      }
   }
}
```

The following policy statement allows the Amazon Redshift service principal to add a grant to a KMS key.

```
{
   "Effect":"Allow",
   "Principal":{
      "Service":"redshift.amazonaws.com"
   },
   "Action":"kms:CreateGrant",
   "Resource":"*",
   "Condition":{
      "StringEquals":{
         "kms:EncryptionContext:{context-key}":"{context-value}",
         "aws:SourceAccount":"{account-ID}"
      },
      "ForAllValues:StringEquals":{
         "kms:GrantOperations":[
            "Decrypt",
            "GenerateDataKey",
            "CreateGrant"
         ]
      },
      "ArnLike":{
         "aws:SourceArn":"arn:aws:*:{region}:{account-ID}:integration:*"
      }
   }
}
```

For more information, see [Creating a key policy](https://docs.aws.amazon.com/kms/latest/developerguide/key-policy-overview.html) in the *AWS Key Management Service Developer Guide*.

## Encryption context
<a name="zero-etl.add-encryption-context"></a>

When you encrypt a zero-ETL integration, you can add key-value pairs as an **Additional encryption context**. You might want to add these key-value pairs to provide additional contextual information about the data being replicated. For more information, see [Encryption context](https://docs.aws.amazon.com/kms/latest/developerguide/encrypt_context.html) in the *AWS Key Management Service Developer Guide*.

Amazon Redshift adds the following encryption context pairs in addition to any that you add:
+ `aws:redshift:integration:arn` - `IntegrationArn`
+ `aws:servicename:id` - `Redshift`

This reduces the overall number of pairs that you can add from 8 to 6, and contributes to the overall character limit of the grant constraint. For more information, see [Using grant constraints](https://docs.aws.amazon.com/kms/latest/developerguide/create-grant-overview.html#grant-constraints) in the *AWS Key Management Service Developer Guide*.
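The arithmetic on your remaining encryption-context budget is simple, because Amazon Redshift contributes two pairs of its own. A minimal sketch (the user-supplied key names are placeholders):

```python
# KMS allows up to 8 encryption-context pairs; Amazon Redshift adds two of
# its own, leaving up to 6 for you.
MAX_PAIRS = 8

# Pairs that Amazon Redshift adds automatically.
redshift_added = {
    "aws:redshift:integration:arn": "IntegrationArn",
    "aws:servicename:id": "Redshift",
}

# Placeholder user-supplied pairs.
user_supplied = {"team": "analytics", "env": "prod"}

combined = {**redshift_added, **user_supplied}
remaining = MAX_PAIRS - len(combined)
print(len(combined), remaining)  # 4 4
```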

# Create a zero-ETL integration with applications
<a name="zero-etl-setting-up.create-integration-glue"></a>

In this step, you create a zero-ETL integration from a supported application into Amazon Redshift.

**To create a zero-ETL integration with applications**

1. From the Amazon Redshift console: [Create and configure a target Amazon Redshift data warehouse](zero-etl-setting-up.rs-data-warehouse.md). 
   + From the AWS CLI or Amazon Redshift console: [Turn on case sensitivity for your data warehouse](zero-etl-setting-up.case-sensitivity.md).
   + From the Amazon Redshift console: [Configure authorization for your Amazon Redshift data warehouse](zero-etl-using.redshift-iam.md).

1. From the AWS Glue console: [Creating an integration](https://docs.aws.amazon.com/glue/latest/dg/zero-etl-common-integration-tasks.html#zero-etl-creating) as described in the *AWS Glue Developer Guide*.

1. After the destination database has been created and data starts replicating, you can query and create materialized views for your replicated data. For more information, see [Querying replicated data in Amazon Redshift](zero-etl-using.querying-and-creating-materialized-views.md).

For detailed information to create zero-ETL integrations with applications, see [Zero-ETL integrations](https://docs.aws.amazon.com/glue/latest/dg/zero-etl-using.html) in the *AWS Glue Developer Guide*.

# Creating destination databases in Amazon Redshift
<a name="zero-etl-using.creating-db"></a>

To replicate data from your source into Amazon Redshift, you must create a database from your integration in Amazon Redshift.

Connect to your target Redshift Serverless workgroup or provisioned cluster and create a database with a reference to your integration identifier. This identifier is the value returned for `integration_id` when you query the [SVV\_INTEGRATION](https://docs.aws.amazon.com/redshift/latest/dg/r_SVV_INTEGRATION.html) view.

**Important**  
Before creating a database from your integration, your zero-ETL integration must be created and in the `Active` state on the Amazon Redshift console.

Before you can start replicating data from your source into Amazon Redshift, create a database from the integration in Amazon Redshift. You can create the database using either the Amazon Redshift console or the query editor v2. 

------
#### [ Amazon Redshift console ]

1. In the left navigation pane, choose **Zero-ETL integrations**.

1. From the integration list, choose an integration.

1. If you're using a provisioned cluster, you must first connect to the database. Choose **Connect to database**. You can connect using a recent connection, or by creating a new connection.

1. To create a database from the integration, choose **Create database from integration**. 

1. Enter a **Destination database name**. The **Integration ID** and **Data warehouse name** are pre-populated. 

   For Aurora PostgreSQL, RDS for PostgreSQL, or RDS for Oracle sources, enter the **Source named database** that you specified when creating your zero-ETL integration. In these cases, you can map a maximum of 100 source databases to Amazon Redshift databases.

1. Choose **Create database**.

------
#### [ Amazon Redshift query editor v2 ]

1. Navigate to the Amazon Redshift console and choose **Query editor v2**.

1. In the left panel, choose your Amazon Redshift Serverless workgroup or Amazon Redshift provisioned cluster, and then connect to it.

1. To get the integration ID, navigate to the integration list on the Amazon Redshift console.

   Alternatively, run the following command to get the `integration_id` value:

   ```
   SELECT integration_id FROM SVV_INTEGRATION;
   ```

1. Then, run the following command to create the database. By specifying the integration ID, you create a connection between the database and your source.

   Substitute `integration_id` with the value returned by the previous command.

   ```
   CREATE DATABASE destination_db_name FROM INTEGRATION 'integration_id';
   ```

   For Aurora PostgreSQL sources, you must also include a reference to the named database within the cluster that you specified when you created the integration. For example:

   ```
   CREATE DATABASE "destination_db_name" FROM INTEGRATION 'integration_id' DATABASE "named_db";
   ```

------

For more information about creating a database for a zero-ETL integration target, see [CREATE DATABASE](https://docs.aws.amazon.com/redshift/latest/dg/r_CREATE_DATABASE.html) in the *Amazon Redshift Database Developer Guide*. You can use ALTER DATABASE to change database parameters such as REFRESH INTERVAL. For more information about altering a database for a zero-ETL integration target, see [ALTER DATABASE](https://docs.aws.amazon.com/redshift/latest/dg/r_ALTER_DATABASE.html) in the *Amazon Redshift Database Developer Guide*.

**Note**  
Only your integration source can update data in the database that you create from your integration. You can run DDL and DML commands against tables in the source, but on the destination database you can run only DDL commands and read-only queries. To change the schema of a table, run the DDL against the table in the source.

For information about viewing the status of a destination database, see [Viewing zero-ETL integrations](zero-etl-using.describing.md).

After creating a destination database, you can add data to your source. To add data to your source, see one of the following topics:
+ For Aurora sources, see [Add data to the source DB cluster](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/zero-etl.querying.html#zero-etl.add-data-rds) in the *Amazon Aurora User Guide*.
+ For Amazon RDS sources, see [Add data to the source DB instance](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/zero-etl.querying.html#zero-etl.add-data-rds) in the *Amazon RDS User Guide*.
+ For DynamoDB sources, see [Getting started with DynamoDB](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GettingStartedDynamoDB.html) in the *Amazon DynamoDB Developer Guide*.
+ For zero-ETL integrations with applications sources, see [Zero-ETL integrations](https://docs.aws.amazon.com/glue/latest/dg/zero-etl-using.html) in the *AWS Glue Developer Guide*.

# Querying replicated data in Amazon Redshift
<a name="zero-etl-using.querying-and-creating-materialized-views"></a>

After you add data to your source, it's replicated in near real time to the Amazon Redshift data warehouse, and it's ready for querying. For information about integration metrics and table statistics, see [Metrics for zero-ETL integrations](zero-etl-using.metrics.md).

**Note**  
In MySQL, a database is equivalent to a schema, so a MySQL database maps to an Amazon Redshift schema. Note this mapping difference when you query data replicated from Aurora MySQL or RDS for MySQL.

**To query replicated data**

1. Navigate to the Amazon Redshift console and choose **Query editor v2**.

1. Connect to your Amazon Redshift Serverless workgroup or Amazon Redshift provisioned cluster and choose your database from the dropdown list.

1. Use a SELECT statement to select the replicated data from the schema and table that you created in the source. To preserve case sensitivity, surround schema, table, and column names with double quotes (" "). For example:

   ```
   SELECT * FROM "schema_name"."table_name";
   ```

   You can also query the data using the Amazon Redshift Data API.
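Because replicated schema, table, and column names can be case sensitive, you might build quoted references programmatically before sending SQL through the Data API or query editor v2. The following minimal Python sketch (the `quote_ident` and `qualified_name` helpers are illustrative, not part of any AWS SDK) applies the standard SQL rule of doubling embedded double quotes:

```python
def quote_ident(name: str) -> str:
    """Wrap an identifier in double quotes, doubling any embedded
    quotes, so Amazon Redshift preserves its exact case."""
    return '"' + name.replace('"', '""') + '"'

def qualified_name(schema: str, table: str) -> str:
    """Build a case-preserving schema.table reference."""
    return f"{quote_ident(schema)}.{quote_ident(table)}"

print(qualified_name("MySchema", "Orders"))  # "MySchema"."Orders"
```

You could then interpolate the result into a SELECT statement like the one above instead of hand-quoting each name.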

## Querying replicated data with materialized views
<a name="zero-etl-using.transforming"></a>

You can create materialized views in your local Amazon Redshift database to transform data replicated through zero-ETL integrations. Connect to your local database and use cross-database queries to access the destination databases. You can use either fully qualified object names with the three-part notation (destination-database-name.schema-name.table-name) or create an external schema referencing the destination database-schema pair and use the two-part notation (external-schema-name.table-name). For more information on cross-database queries, see [Querying data across databases](https://docs.aws.amazon.com/redshift/latest/dg/cross-database-overview.html).

Use the following example to create the *sales\_zetl* and *event\_zetl* tables in the source *tickit\_zetl* and insert sample data. The tables are replicated into the Amazon Redshift database *zetl\_int\_db*.

```
CREATE TABLE sales_zetl (
        salesid integer NOT NULL primary key,
        eventid integer NOT NULL,
        pricepaid decimal(8, 2)
);
 
CREATE TABLE event_zetl (
        eventid integer NOT NULL PRIMARY KEY,
        eventname varchar(200)
);
       
INSERT INTO sales_zetl VALUES(1, 1, 3.33);
INSERT INTO sales_zetl VALUES(2, 2, 4.44);
INSERT INTO sales_zetl VALUES(3, 2, 5.55);
 
INSERT INTO event_zetl VALUES(1, 'Event 1');
INSERT INTO event_zetl VALUES(2, 'Event 2');
```

You can create a materialized view to get total sales per event using the three-part notation:

```
--three part notation zetl-database-name.schema-name.table-name 
CREATE MATERIALIZED VIEW mv_transformed_sales_per_event_3p 
AUTO REFRESH YES
AS
(SELECT eventname, sum(pricepaid) as total_price
FROM  zetl_int_db.tickit_zetl.sales_zetl S, zetl_int_db.tickit_zetl.event_zetl E
WHERE S.eventid = E.eventid
GROUP BY 1);
```

You can create a materialized view to get total sales per event using the two-part notation:

```
--two part notation external-schema-name.table-name notation
CREATE EXTERNAL schema ext_tickit_zetl
FROM REDSHIFT
DATABASE zetl_int_db
SCHEMA tickit_zetl;
 
CREATE MATERIALIZED VIEW mv_transformed_sales_per_event_2p
AUTO REFRESH YES
AS
(
    SELECT eventname, sum(pricepaid) as total_price
    FROM  ext_tickit_zetl.sales_zetl S, ext_tickit_zetl.event_zetl E
    WHERE S.eventid = E.eventid
    GROUP BY 1  
);
```

To view the materialized views you created, use the following example.

```
SELECT * FROM mv_transformed_sales_per_event_3p;
 
+-----------+-------------+
| eventname | total_price |
+-----------+-------------+
| Event 1   | 3.33        |
| Event 2   | 9.99        |
+-----------+-------------+
 
SELECT * FROM mv_transformed_sales_per_event_2p;
 
+-----------+-------------+
| eventname | total_price |
+-----------+-------------+
| Event 1   | 3.33        |
| Event 2   | 9.99        |
+-----------+-------------+
```

## Querying replicated data from DynamoDB
<a name="zero-etl-using.querying-ddb"></a>

When you replicate data from Amazon DynamoDB to an Amazon Redshift database, it is stored in a materialized view in a column of the SUPER data type.

For this example, the following data is stored in DynamoDB.

```
{
    "key1": {
        "S": "key_1"
    },
    "key2": {
        "N": 0
    },
    "payload": {
        "L": [
            {
                "S": "sale1"
            },
            {
                "S": "sale2"
            },
        ]
    },
}
```

The Amazon Redshift materialized view is defined as follows.

```
CREATE MATERIALIZED VIEW mv_sales
                    BACKUP NO
                    AUTO REFRESH YES
                    AS
                    SELECT "value"."payload"."L"[0]."S"::VARCHAR AS first_payload
                    FROM public.sales;
```

To view the data in the materialized view, run the following SQL command. 

```
SELECT first_payload FROM mv_sales;
```
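To see how the typed DynamoDB JSON shown earlier maps to the SUPER path navigation in the view (`."payload"."L"[0]."S"`), the following Python sketch unwraps DynamoDB type tags into plain values. The `unwrap` helper is illustrative only, not an AWS SDK function:

```python
def unwrap(av):
    """Convert a DynamoDB typed attribute value (e.g. {"S": "sale1"})
    into a plain Python value. DynamoDB serializes numbers (N) as
    strings, so they are converted back here."""
    tag, val = next(iter(av.items()))
    if tag == "S":
        return val
    if tag == "N":
        return float(val) if "." in val else int(val)
    if tag == "L":
        return [unwrap(v) for v in val]
    if tag == "M":
        return {k: unwrap(v) for k, v in val.items()}
    raise ValueError(f"unhandled type tag: {tag}")

# The sample item from the section above
item = {
    "key1": {"S": "key_1"},
    "key2": {"N": "0"},
    "payload": {"L": [{"S": "sale1"}, {"S": "sale2"}]},
}
flat = {k: unwrap(v) for k, v in item.items()}
print(flat["payload"][0])  # sale1
```

The SUPER expression in the materialized view performs the equivalent navigation on the replicated column, extracting `sale1` as `first_payload`.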

# Viewing zero-ETL integrations
<a name="zero-etl-using.describing"></a>

You can view your zero-ETL integrations from the Amazon Redshift console. There you can view an integration's configuration information and current status, and open screens to query and share data.

------
#### [ Amazon Redshift console ]

**To view the details of a zero-ETL integration**

1. Sign in to the AWS Management Console and open the Amazon Redshift console at [https://console.aws.amazon.com/redshiftv2/](https://console.aws.amazon.com/redshiftv2/).

1. From the left navigation pane, choose either the **Serverless** or **Provisioned clusters** dashboard. Then, choose **Zero-ETL integrations**.

1. Select the zero-ETL integration that you want to view. For each integration, the following information is provided:
   + **Integration ID** is the identifier returned when the integration is created.
   + **Status** can be one of the following:
     + `Active` – The zero-ETL integration is sending transactional data to the target Amazon Redshift data warehouse.
     + `Syncing` – The zero-ETL integration has encountered a recoverable error and is reseeding data. Affected tables aren’t available for querying in Amazon Redshift until they finish resyncing.
     + `Failed` – The zero-ETL integration encountered an unrecoverable event or error that can't be fixed. You must delete and recreate the zero-ETL integration.
     + `Creating` – The zero-ETL integration is being created.
     + `Deleting` – The zero-ETL integration is being deleted.
     + `Needs attention` – The zero-ETL integration encountered an event or error that requires manual intervention to resolve it. To fix the issue, follow the steps in the error message.
   + **Source type** is the type of source that replicates data to the target, such as Aurora MySQL-Compatible Edition, Amazon Aurora PostgreSQL, RDS for MySQL, or applications (`GlueSAAS`). 
   + **Source ARN** is the ARN of the source data. For most sources this is the ARN of the source database or table. For zero-ETL integration with applications sources, this is the ARN of the AWS Glue connection object.
   + **Target** is the namespace of the Amazon Redshift data warehouse receiving source data.
   + **Database** can be one of the following:
     + `No database` – There is no destination database for the integration.
     + `Creating` – Amazon Redshift is creating the destination database for the integration.
     + `Active` – Data is being replicated from the integration source to Amazon Redshift.
     + `Error` – There is an error with the integration.
     + `Recovering` – The integration is recovering after the data warehouse restarted.
     + `Resyncing` – Amazon Redshift is resynchronizing the tables in the integration.
   + **Target type** is the type of Amazon Redshift data warehouse.
   + **Creation date** is the date and time (UTC) when the integration was created.

**Note**  
To view integration details for a data warehouse, choose the details page for your provisioned cluster or serverless namespace and then choose the **Zero-ETL integrations** tab.

From the **Zero-ETL integrations** list, you can choose **Query data** to jump to Amazon Redshift query editor v2. The Amazon Redshift target database has the [enable\_case\_sensitive\_identifier](https://docs.aws.amazon.com/redshift/latest/dg/r_enable_case_sensitive_identifier.html) parameter enabled. When you write SQL, you might need to surround schema, table, and column names with double quotes ("<name>"). For more information about querying data in your Amazon Redshift data warehouse, see [Querying a database using the Amazon Redshift query editor v2](query-editor-v2.md).

From the **Zero-ETL integrations** list, you can choose **Share data** to create a datashare. To create a datashare for the Amazon Redshift database, follow the instructions on the **Create datashare** page. Before you can share data in your Amazon Redshift database, you must first create a destination database. For more information about data sharing, see [Data sharing concepts for Amazon Redshift](https://docs.aws.amazon.com/redshift/latest/dg/concepts.html).

To refresh your integration, you can use the [ALTER DATABASE](https://docs.aws.amazon.com/redshift/latest/dg/r_ALTER_DATABASE.html) command. Doing so replicates all of the data from your integration source into your destination database. The following example refreshes all synced and failed tables within your zero-ETL integration.

```
ALTER DATABASE sample_integration_db INTEGRATION REFRESH ALL TABLES;
```

------
#### [ AWS CLI ]

To describe an Amazon DynamoDB zero-ETL integration with Amazon Redshift using the AWS CLI, use the `describe-integrations` command with the following options:
+ `integration-arn` – Specify the ARN of the DynamoDB integration to describe.
+ `filters` – Optionally specify a filter that limits the results to one or more resources.

The following example describes an integration by providing the integration ARN.

```
aws redshift describe-integrations \
--integration-arn arn:aws:redshift:us-east-1:123456789012:integration:a1b2c3d4-5678-90ab-cdef-EXAMPLE11111

{
    "Integrations": [
        {
            "Status": "failed", 
            "IntegrationArn": "arn:aws:redshift:us-east-1:123456789012:integration:a1b2c3d4-5678-90ab-cdef-EXAMPLE11111", 
            "Errors": [
                {
                    "ErrorCode": "INVALID_TABLE_PERMISSIONS", 
                    "ErrorMessage": "Redshift does not have sufficient access on the table key. Refer to the Amazon DynamoDB Developer Guide."
                }
            ], 
            "Tags": [], 
            "CreateTime": "2023-11-09T00:32:46.444Z", 
            "KMSKeyId": "arn:aws:kms:us-east-1:123456789012:key/a1b2c3d4-5678-90ab-cdef-EXAMPLE33333", 
            "TargetArn": "arn:aws:redshift:us-east-1:123456789012:namespace:a1b2c3d4-5678-90ab-cdef-EXAMPLE22222", 
            "IntegrationName": "ddb-to-provisioned-02", 
            "SourceArn": "arn:aws:dynamodb:us-east-1:123456789012:table/mytable"
        }
    ]
}
```

You can also filter the results of `describe-integrations` by the `integration-arn`, `source-arn`, `source-types`, or `status`. For more information, see [describe-integrations](https://docs.aws.amazon.com/cli/latest/reference/redshift/describe-integrations.html) in the *Amazon Redshift CLI Guide*.
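In an automation script, you might scan the JSON returned by `describe-integrations` and surface error codes for any integration that isn't active. The following Python sketch (a hypothetical local helper, not part of the AWS CLI or SDK) operates on the response shape shown in the example above:

```python
def failing_integrations(response):
    """Given a parsed describe-integrations response, return
    (IntegrationName, ErrorCode) pairs for every integration that is
    not in the active state, so failures can be flagged for follow-up."""
    results = []
    for integ in response.get("Integrations", []):
        if integ.get("Status") != "active":
            for err in integ.get("Errors", []):
                results.append((integ["IntegrationName"], err["ErrorCode"]))
    return results

# Abbreviated version of the example response above
response = {
    "Integrations": [
        {
            "Status": "failed",
            "IntegrationName": "ddb-to-provisioned-02",
            "Errors": [{"ErrorCode": "INVALID_TABLE_PERMISSIONS",
                        "ErrorMessage": "Redshift does not have sufficient access on the table key."}],
        }
    ]
}
print(failing_integrations(response))  # [('ddb-to-provisioned-02', 'INVALID_TABLE_PERMISSIONS')]
```

The same scan works on the output of the equivalent SDK call, since it returns the same `Integrations` list.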

------

# History mode
<a name="zero-etl-history-mode"></a>

With history mode, you can configure your zero-ETL integrations to track every version (including updates and deletes) of your records in source tables, directly in Amazon Redshift. You can run advanced analytics on all your data, such as historical analysis, look-back reports, trend analysis, and incremental updates to downstream applications built on top of Amazon Redshift. History mode is supported with multiple Amazon Redshift zero-ETL integrations, including Amazon Aurora MySQL, Amazon Aurora PostgreSQL, Amazon RDS for MySQL, and Amazon DynamoDB. History mode is also supported by several applications, such as Salesforce, SAP, ServiceNow, and Zendesk.

You can turn history mode on and off for your zero-ETL integrations from the Amazon Redshift console ([https://console.aws.amazon.com/redshiftv2/](https://console.aws.amazon.com/redshiftv2/)). Use history mode to keep track of records that have been deleted or modified in the integration source. Tracking happens in the target Amazon Redshift data warehouse. Turning on history mode doesn't impact the performance of regular analytic queries on these tables.

After you turn on history mode, tables that you drop in the source won't be dropped in Amazon Redshift. Instead, these tables appear in a `DroppedSource` state, and you can still query them. You can also still use the DROP and RENAME commands with regular SQL.

If you want to reuse the same table name on the source, you must DROP or RENAME the corresponding `DroppedSource` table before the new table can be replicated to Amazon Redshift. Make sure to do so before you create the table on the source.

For information about what to consider when using history mode, see [Considerations when using history mode on the target](zero-etl.reqs-lims.md#zero-etl-considerations-history-mode).

**To manage history mode for a zero-ETL integration**

1. Sign in to the AWS Management Console and open the Amazon Redshift console at [https://console.aws.amazon.com/redshiftv2/](https://console.aws.amazon.com/redshiftv2/).

1. From the left navigation pane, choose either the **Serverless** or **Provisioned clusters** dashboard. Then, choose **Zero-ETL integrations**.

1. Select the zero-ETL integration that you want to manage, and then choose **Manage history mode**. The **Manage history mode** window is displayed.

1. You can **Turn off** or **Turn on** history mode for a target table that is replicated from a source type that has a single source table, such as Amazon DynamoDB. When the zero-ETL integration can have multiple target tables, you can **Turn off for all existing and future tables**, **Turn on for all existing and future tables**, or **Manage history mode for individual tables**. By default, history mode is `off` when the zero-ETL integration is created.

   When history mode is turned `on`, the following columns are added to your target table to track changes in the source. History mode `on` increases monthly usage and cost because Amazon Redshift doesn't delete any records in the target tables. Each source record that is deleted or changed creates a new record version in the target, so the target holds more total rows. You can manage target tables by deleting inactive records.    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/mgmt/zero-etl-history-mode.html)

   You can delete inactive records from a history mode table by filtering on records where the `_record_is_active` column is false. The following SQL DELETE command deletes inactive records from a table where the `id` column is less than or equal to 100. After you delete records, automatic vacuum delete reclaims the storage for the deleted records.

   ```
   DELETE FROM myschema.mytable where not _record_is_active AND id <= 100;
   ```

   When history mode is turned `off`, Amazon Redshift makes a copy of your table in the target database with active records and without the added history columns. Amazon Redshift renames your table to `table-name_historical_timestamp` for your use. You can drop this copy of your table if you no longer need it. You can rename these tables using the ALTER TABLE command. For example:

   ```
   ALTER TABLE [schema-name.]table-name_historical_timestamp RENAME TO new_table_name;
   ```

   For more information, see [ALTER TABLE](https://docs.aws.amazon.com/redshift/latest/dg/r_ALTER_TABLE.html) in the *Amazon Redshift Database Developer Guide*.

You can also manage history mode using the SQL commands CREATE DATABASE and ALTER DATABASE. For more information about how to set HISTORY\_MODE, see [CREATE DATABASE](https://docs.aws.amazon.com/redshift/latest/dg/r_CREATE_DATABASE.html) and [ALTER DATABASE](https://docs.aws.amazon.com/redshift/latest/dg/r_ALTER_DATABASE.html) in the *Amazon Redshift Database Developer Guide*. 
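Conceptually, history mode keeps every record version and marks only the current one as active. The following Python sketch (with illustrative sample rows, not an actual replicated table) shows how filtering on the `_record_is_active` column that history mode adds recovers the current records:

```python
def active_records(rows):
    """Return only the currently active versions from a history mode
    table, using the _record_is_active column that history mode adds."""
    return [r for r in rows if r["_record_is_active"]]

# Hypothetical versions of two source records: id 1 was updated once,
# so its old version remains in the target but is marked inactive.
rows = [
    {"id": 1, "price": "3.33", "_record_is_active": False},  # superseded version
    {"id": 1, "price": "4.00", "_record_is_active": True},   # current version
    {"id": 2, "price": "5.55", "_record_is_active": True},
]
print(active_records(rows))
```

This is the same filter expressed by `WHERE _record_is_active` in SQL; the inactive rows are what the DELETE example above removes.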

# Sharing your data in Amazon Redshift
<a name="zero-etl-using.share-data-redshift"></a>

After you add data to the source, it's immediately replicated into Amazon Redshift and ready to be shared by creating datashares. 

To share data, you must create a destination database first.

**To share data in Amazon Redshift Serverless using the Amazon Redshift console**

1. In the Amazon Redshift console, in the left navigation pane, choose **Amazon Redshift Serverless > Serverless dashboard**.

1. From the left navigation pane, choose **Zero-ETL integrations**.

1. Choose **Share data**.

1. On the create datashare page, follow the steps in [Creating datashares](https://docs.aws.amazon.com/redshift/latest/dg/datashare-creation.html).

**To share data in Amazon Redshift provisioned clusters using the Amazon Redshift console**

1. In the Amazon Redshift console, in the left navigation pane, choose **Provisioned clusters dashboard**.

1. From the left navigation pane, choose **Zero-ETL integrations**.

1. From the integration list, choose an integration.

1. On the integration details page, choose **Connect to database**.

1. On the **Connection to database** page, you can either create a new connection or use a recent connection. Make sure that the connection is made to the destination database.

1. If you create a new connection, enter a **Database name** for the database, and then choose **Connect**.

1. On the integration details page, choose **Share data**.

1. On the create datashare page, follow the steps in [Creating datashares](https://docs.aws.amazon.com/redshift/latest/dg/datashare-creation.html).

# Monitoring zero-ETL integrations
<a name="zero-etl-monitoring"></a>

You can monitor your zero-ETL integrations by querying the system views or with Amazon EventBridge.

## Monitoring zero-ETL integrations with Amazon Redshift system views
<a name="zero-etl-monitoring-sysviews"></a>

You can monitor your zero-ETL integrations by querying the following system views in Amazon Redshift.
+ [SVV\_INTEGRATION](https://docs.aws.amazon.com/redshift/latest/dg/r_SVV_INTEGRATION.html) provides information about configuration details of zero-ETL integrations.
+ [SYS\_INTEGRATION\_ACTIVITY](https://docs.aws.amazon.com/redshift/latest/dg/r_SYS_INTEGRATION_ACTIVITY.html) provides information about completed zero-ETL integrations.
+ [SVV\_INTEGRATION\_TABLE\_MAPPING](https://docs.aws.amazon.com/redshift/latest/dg/r_SVV_INTEGRATION_TABLE_MAPPING.html) provides information about mapping metadata values from source to target.
+ [SVV\_INTEGRATION\_TABLE\_STATE](https://docs.aws.amazon.com/redshift/latest/dg/r_SVV_INTEGRATION_TABLE_STATE.html) provides information about integration state.
+ [SYS\_INTEGRATION\_TABLE\_ACTIVITY](https://docs.aws.amazon.com/redshift/latest/dg/r_SYS_INTEGRATION_TABLE_ACTIVITY.html) provides information about insert, delete, and update activity of integrations.
+ [SYS\_INTEGRATION\_TABLE\_STATE\_CHANGE](https://docs.aws.amazon.com/redshift/latest/dg/r_SYS_INTEGRATION_TABLE_STATE_CHANGE.html) provides the table state change log for integrations.

## Monitoring zero-ETL integrations with Amazon EventBridge
<a name="zero-etl-monitoring-events"></a>

Amazon Redshift sends integration-related events to Amazon EventBridge. For a list of events and their corresponding event IDs, see [Zero-ETL integration event notifications with Amazon EventBridge](integration-event-notifications.md).

# Metrics for zero-ETL integrations
<a name="zero-etl-using.metrics"></a>

You can use the metrics in the Amazon Redshift console and Amazon CloudWatch to learn about the health and performance of your zero-ETL integrations. You can adjust the metrics to display data for a shorter or longer duration, or choose to view the metrics in CloudWatch. To view the metrics for your integration on the Amazon Redshift console, choose **Zero-ETL integrations** in the left navigation pane, and then choose your integration ID.

Depending on the source data of zero-ETL integrations, Amazon Redshift provides metrics on the integration details page for an integration. Possible metrics include the following types:
+ From the **Integration metrics** tab, graphs of the following are available:    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/mgmt/zero-etl-using.metrics.html)
+ From the **Table statistics** tab, you can view the list of tables that are currently active or have errors. The statistics on this tab are as follows (depending on source type):
  + **Schema name** – The name of the schema that the table is in.
  + **Table name** – The name of the table in the source database.
  + **Status** – The status of the table. Possible values include `Synced`, `Failed`, `Deleted`, `Resync Required`, and `Resync Initiated`.
  + **Database** – The Amazon Redshift database the table is in.
  + **Last updated** – The date and time (UTC) when the last update was made to the table.
  + **Table row count** – The number of rows in the table.
  + **Table size** – The size of the table.

You can also view a graph of the number of **Rows** inserted, deleted, and updated for the selected timeframe.

# Modify a zero-ETL integration for DynamoDB
<a name="zero-etl-managing.modify-integration-ddb"></a>

In this step, you modify a DynamoDB zero-ETL integration with Amazon Redshift.

------
#### [ Amazon Redshift console ]

**To modify an Amazon DynamoDB zero-ETL integration with Amazon Redshift using the Amazon Redshift console**

1. From the Amazon Redshift console, choose **Zero-ETL integrations**. Then, from the list of zero-ETL integrations, choose the DynamoDB integration that you want to modify.

1. Choose **Edit** and modify the **Integration name** or **Description**.

1. Choose **Save changes**.

------
#### [ AWS CLI ]

To modify an Amazon DynamoDB zero-ETL integration with Amazon Redshift using the AWS CLI, use the `modify-integration` command with the following options:
+ `integration-arn` – Specify the ARN of the DynamoDB integration to modify.
+ `integration-name` – Specify a new name for the integration.
+ `description` – Specify a new description for the integration.

The following example modifies an integration by providing the integration ARN, a new description, and a new name.

```
aws redshift modify-integration \
--integration-arn arn:aws:redshift:us-east-1:123456789012:integration:a1b2c3d4-5678-90ab-cdef-EXAMPLE11111 \
--description "Test modify description and name together." \
--integration-name "updated-integration-name-2"
      
{
    "IntegrationArn": "arn:aws:redshift:us-east-1:123456789012:integration:a1b2c3d4-5678-90ab-cdef-EXAMPLE11111",
    "IntegrationName": "updated-integration-name-2",
    "SourceArn": "arn:aws:dynamodb:us-east-1:123456789012:table/ddb-temp-test-table-table",
    "SourceType": "dynamodb",
    "TargetArn": "arn:aws:redshift:us-east-1:123456789012:namespace:a1b2c3d4-5678-90ab-cdef-EXAMPLE22222",
    "Status": "active",
    "Errors": [],
    "CreateTime": "2024-09-19T18:06:33.555Z",
    "Description": "Test modify description and name together.",
    "KMSKeyId": "arn:aws:kms:us-east-1:123456789012:key/a1b2c3d4-5678-90ab-cdef-EXAMPLE33333",
    "AdditionalEncryptionContext": {},
    "Tags": []
}
```

------

# Delete a zero-ETL integration for DynamoDB
<a name="zero-etl-managing.delete-integration-ddb"></a>

When you delete an integration, the target data warehouse retains any previously replicated data. You can continue to share and query this data. However, new data in the source will not replicate to the target.

In this step, you delete a DynamoDB zero-ETL integration with Amazon Redshift.

------
#### [ Amazon Redshift console ]

**To delete an Amazon DynamoDB zero-ETL integration with Amazon Redshift using the Amazon Redshift console**

1. From the Amazon Redshift console, choose **Zero-ETL integrations**. Then, from the list of zero-ETL integrations, choose the DynamoDB integration that you want to delete.

1. Choose **Delete** and provide the requested information.

1. Choose **Delete** to delete the zero-ETL integration.

------
#### [ AWS CLI ]

To delete an Amazon DynamoDB zero-ETL integration with Amazon Redshift using the AWS CLI, use the `delete-integration` command with the following options:
+ `integration-arn` – Specify the ARN of the DynamoDB integration to delete.

The following example deletes an integration by providing the integration ARN.

```
aws redshift delete-integration \
--integration-arn arn:aws:redshift:us-east-1:123456789012:integration:a1b2c3d4-5678-90ab-cdef-EXAMPLE11111
    
{
    "IntegrationArn": "arn:aws:redshift:us-east-1:123456789012:integration:a1b2c3d4-5678-90ab-cdef-EXAMPLE11111",
    "IntegrationName": "updated-integration-name-2",
    "SourceArn": "arn:aws:dynamodb:us-east-1:123456789012:table/tidal-ddb-ddb-temp-test-table-table",
    "SourceType": "dynamodb",
    "TargetArn": "arn:aws:redshift:us-east-1:123456789012:namespace:a1b2c3d4-5678-90ab-cdef-EXAMPLE22222",
    "Status": "deleting",
    "Errors": [],
    "CreateTime": "2024-09-19T18:06:33.555Z",
    "Description": "Test modify description and name together.",
    "KMSKeyId": "arn:aws:kms:us-east-1:a1b2c3d4-5678-90ab-cdef-EXAMPLE11111:key/a1b2c3d4-5678-90ab-cdef-EXAMPLE33333",
    "AdditionalEncryptionContext": {},
    "Tags": []
}
```

------

# Supported Regions for zero-ETL integrations
<a name="zero-etl-using.regions"></a>

Zero-ETL integration is a fully managed solution that makes transactional and operational data available in Amazon Redshift from multiple operational and transactional sources, as well as enterprise applications. This page lists the available Regions for each supported source.

## Aurora MySQL
<a name="Zero-ETL-MySQL"></a>

The following Regions and engine versions are available for Aurora MySQL zero-ETL integrations with Amazon Redshift.


| Region | Aurora MySQL | 
| --- | --- | 
| Africa (Cape Town) | Available | 
| Asia Pacific (Hong Kong) | Available | 
| Asia Pacific (Tokyo) | Available | 
| Asia Pacific (Seoul) | Available | 
| Asia Pacific (Osaka) | Available | 
| Asia Pacific (Mumbai) | Available | 
| Asia Pacific (Hyderabad) | Available | 
| Asia Pacific (Singapore) | Available | 
| Asia Pacific (Sydney) | Available | 
| Asia Pacific (Jakarta) | Available | 
| Asia Pacific (Melbourne) | Available | 
| Asia Pacific (Malaysia) | Not available | 
| Canada (Central) | Available | 
| Canada West (Calgary) | Available | 
| China (Beijing) | Available | 
| China (Ningxia) | Available | 
| Europe (Frankfurt) | Available | 
| Europe (Zurich) | Available | 
| Europe (Stockholm) | Available | 
| Europe (Milan) | Available | 
| Europe (Spain) | Available | 
| Europe (Ireland) | Available | 
| Europe (London) | Available | 
| Europe (Paris) | Available | 
| Israel (Tel Aviv) | Available | 
| Middle East (UAE) | Available | 
| Middle East (Bahrain) | Available | 
| South America (São Paulo) | Available | 
| US East (N. Virginia) | Available | 
| US East (Ohio) | Available | 
| US West (N. California) | Available | 
| US West (Oregon) | Available | 
| AWS GovCloud (US-East) | Not available | 
| AWS GovCloud (US-West) | Not available | 

## Aurora PostgreSQL
<a name="Aurora-Zero-ETL-Postgres"></a>

The following Regions are available for Aurora PostgreSQL zero-ETL integrations with Amazon Redshift.


| Region | Aurora PostgreSQL | 
| --- | --- | 
| Africa (Cape Town) | Available | 
| Asia Pacific (Hong Kong) | Available | 
| Asia Pacific (Tokyo) | Available | 
| Asia Pacific (Seoul) | Available | 
| Asia Pacific (Osaka) | Available | 
| Asia Pacific (Mumbai) | Available | 
| Asia Pacific (Hyderabad) | Available | 
| Asia Pacific (Singapore) | Available | 
| Asia Pacific (Sydney) | Available | 
| Asia Pacific (Jakarta) | Available | 
| Asia Pacific (Melbourne) | Available | 
| Asia Pacific (Malaysia) | Available | 
| Canada (Central) | Available | 
| Canada West (Calgary) | Available | 
| China (Beijing) | Available | 
| China (Ningxia) | Available | 
| Europe (Frankfurt) | Available | 
| Europe (Zurich) | Available | 
| Europe (Stockholm) | Available | 
| Europe (Milan) | Available | 
| Europe (Spain) | Available | 
| Europe (Ireland) | Available | 
| Europe (London) | Available | 
| Europe (Paris) | Available | 
| Israel (Tel Aviv) | Available | 
| Middle East (UAE) | Available | 
| Middle East (Bahrain) | Available | 
| South America (São Paulo) | Available | 
| US East (N. Virginia) | Available | 
| US East (Ohio) | Available | 
| US West (N. California) | Available | 
| US West (Oregon) | Available | 
| AWS GovCloud (US-East) | Not available | 
| AWS GovCloud (US-West) | Not available | 

## Amazon DynamoDB
<a name="Aurora-Zero-ETL-DynamoDB"></a>

The following Regions are available for DynamoDB zero-ETL integrations with Amazon Redshift.


| Region | DynamoDB | 
| --- | --- | 
| Africa (Cape Town) | Available | 
| Asia Pacific (Hong Kong) | Available | 
| Asia Pacific (Taipei) | Available | 
| Asia Pacific (Tokyo) | Available | 
| Asia Pacific (Seoul) | Available | 
| Asia Pacific (Osaka) | Available | 
| Asia Pacific (Mumbai) | Available | 
| Asia Pacific (Hyderabad) | Available | 
| Asia Pacific (Singapore) | Available | 
| Asia Pacific (Sydney) | Available | 
| Asia Pacific (Jakarta) | Available | 
| Asia Pacific (Melbourne) | Available | 
| Asia Pacific (Malaysia) | Available | 
| Asia Pacific (Thailand) | Available | 
| Canada (Central) | Available | 
| Canada West (Calgary) | Available | 
| China (Beijing) | Available | 
| China (Ningxia) | Available | 
| Europe (Frankfurt) | Available | 
| Europe (Zurich) | Available | 
| Europe (Stockholm) | Available | 
| Europe (Milan) | Available | 
| Europe (Spain) | Available | 
| Europe (Ireland) | Available | 
| Europe (London) | Available | 
| Europe (Paris) | Available | 
| Israel (Tel Aviv) | Available | 
| Middle East (UAE) | Available | 
| Middle East (Bahrain) | Available | 
| Mexico (Central) | Available | 
| South America (São Paulo) | Available | 
| US East (N. Virginia) | Available | 
| US East (Ohio) | Available | 
| US West (N. California) | Available | 
| US West (Oregon) | Available | 
| AWS GovCloud (US-East) | Available | 
| AWS GovCloud (US-West) | Available | 

## Amazon RDS for MySQL
<a name="Zero-ETL-RDS-MySQL"></a>

The following Regions are available for Amazon RDS for MySQL zero-ETL integrations with Amazon Redshift.


| Region | RDS for MySQL | 
| --- | --- | 
| Africa (Cape Town) | Available | 
| Asia Pacific (Hong Kong) | Available | 
| Asia Pacific (Tokyo) | Available | 
| Asia Pacific (Seoul) | Available | 
| Asia Pacific (Osaka) | Available | 
| Asia Pacific (Mumbai) | Available | 
| Asia Pacific (Hyderabad) | Not available | 
| Asia Pacific (Singapore) | Available | 
| Asia Pacific (Sydney) | Available | 
| Asia Pacific (Jakarta) | Not available | 
| Asia Pacific (Melbourne) | Not available | 
| Asia Pacific (Malaysia) | Not available | 
| Canada (Central) | Available | 
| Canada West (Calgary) | Not available | 
| China (Beijing) | Not available | 
| China (Ningxia) | Not available | 
| Europe (Frankfurt) | Available | 
| Europe (Zurich) | Not available | 
| Europe (Stockholm) | Available | 
| Europe (Milan) | Available | 
| Europe (Spain) | Not available | 
| Europe (Ireland) | Available | 
| Europe (London) | Available | 
| Europe (Paris) | Available | 
| Israel (Tel Aviv) | Not available | 
| Middle East (UAE) | Not available | 
| Middle East (Bahrain) | Available | 
| South America (São Paulo) | Available | 
| US East (N. Virginia) | Available | 
| US East (Ohio) | Available | 
| US West (N. California) | Available | 
| US West (Oregon) | Available | 
| AWS GovCloud (US-East) | Not available | 
| AWS GovCloud (US-West) | Not available | 

## Enterprise applications
<a name="Zero-ETL-enterprise"></a>

The following Regions are available for enterprise application zero-ETL integrations with Amazon Redshift.


| Region | Enterprise applications | 
| --- | --- | 
| Africa (Cape Town) | Not available | 
| Asia Pacific (Hong Kong) | Available | 
| Asia Pacific (Tokyo) | Available | 
| Asia Pacific (Seoul) | Available | 
| Asia Pacific (Osaka) | Not available | 
| Asia Pacific (Mumbai) | Not available | 
| Asia Pacific (Hyderabad) | Not available | 
| Asia Pacific (Singapore) | Available | 
| Asia Pacific (Sydney) | Available | 
| Asia Pacific (Jakarta) | Not available | 
| Asia Pacific (Melbourne) | Not available | 
| Asia Pacific (Malaysia) | Not available | 
| Canada (Central) | Available | 
| Canada West (Calgary) | Not available | 
| China (Beijing) | Not available | 
| China (Ningxia) | Not available | 
| Europe (Frankfurt) | Available | 
| Europe (Zurich) | Not available | 
| Europe (Stockholm) | Available | 
| Europe (Milan) | Not available | 
| Europe (Spain) | Not available | 
| Europe (Ireland) | Available | 
| Europe (London) | Available | 
| Europe (Paris) | Not available | 
| Israel (Tel Aviv) | Not available | 
| Middle East (UAE) | Not available | 
| Middle East (Bahrain) | Not available | 
| South America (São Paulo) | Available | 
| US East (N. Virginia) | Available | 
| US East (Ohio) | Available | 
| US West (N. California) | Not available | 
| US West (Oregon) | Available | 
| AWS GovCloud (US-East) | Not available | 
| AWS GovCloud (US-West) | Not available | 

# Troubleshooting zero-ETL integrations
<a name="zero-etl-using.troubleshooting"></a>

Use the following sections to help troubleshoot problems that you have with zero-ETL integrations.

## Troubleshooting zero-ETL integrations with Aurora MySQL
<a name="zero-etl-using.troubleshooting.ams"></a>

Use the following information to troubleshoot common issues with zero-ETL integrations with Aurora MySQL.

**Topics**
+ [Creation of the integration failed](#zero-etl-using.troubleshooting.creation)
+ [Tables don't have primary keys](#zero-etl-using.troubleshooting.primary-key)
+ [Aurora MySQL tables aren't replicating to Amazon Redshift](#zero-etl-using.troubleshooting.not-replicating)
+ [Unsupported data types in tables](#zero-etl-using.troubleshooting.unsupported-data)
+ [Data manipulation language commands failed](#zero-etl-using.troubleshooting.failed-dml)
+ [Tracked changes between data sources don't match](#zero-etl-using.troubleshooting.tracked-changes-failure)
+ [Authorization failed](#zero-etl-using.troubleshooting.authorization)
+ [Number of tables is more than 100K or the number of schemas is more than 4950](#zero-etl-using.troubleshooting.table-limits)
+ [Amazon Redshift can't load data](#zero-etl-using.troubleshooting.data-load)
+ [Workgroup parameter settings are incorrect](#zero-etl-using.troubleshooting.case-sensitive)
+ [Database isn't created to activate a zero-ETL integration](#zero-etl-using.troubleshooting.db-creation)
+ [Table is in the **Resync Required** or **Resync Initiated** state](#zero-etl-using.troubleshooting.resync)
+ [Integration lag growing](#zero-etl-using.troubleshooting.integration-lag)

### Creation of the integration failed
<a name="zero-etl-using.troubleshooting.creation"></a>

If the creation of the zero-ETL integration failed, the status of the integration is `Inactive`. Make sure that the following are correct for your source Aurora DB cluster:
+ You created your cluster in the Amazon RDS console.
+ Your source Aurora DB cluster is running a supported version. For a list of supported versions, see [Supported Regions and Aurora DB engines for zero-ETL integrations with Amazon Redshift](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Concepts.Aurora_Fea_Regions_DB-eng.Feature.Zero-ETL.html). To validate this, go to the **Configuration** tab for the cluster and check the **Engine version**.
+  You correctly configured binlog parameter settings for your cluster. If your Aurora MySQL binlog parameters are set incorrectly or not associated with the source Aurora DB cluster, creation fails. See [Configure DB cluster parameters](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/zero-etl.setting-up.html#zero-etl.parameters).

In addition, make sure the following are correct for your Amazon Redshift data warehouse:
+ Case sensitivity is turned on. See [Turn on case sensitivity for your data warehouse](zero-etl-setting-up.case-sensitivity.md).
+ You added the correct authorized principal and integration source for your namespace. See [Configure authorization for your Amazon Redshift data warehouse](zero-etl-using.redshift-iam.md).
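
As an alternative to checking the **Configuration** tab, you can confirm the engine version from inside the source database. This sketch assumes an Aurora MySQL source; `AURORA_VERSION()` is an Aurora-specific function:

```
-- Run on the source Aurora MySQL cluster.
-- AURORA_VERSION() returns the Aurora engine version;
-- VERSION() returns the underlying MySQL version.
SELECT AURORA_VERSION(), VERSION();
```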

### Tables don't have primary keys
<a name="zero-etl-using.troubleshooting.primary-key"></a>

In the destination database, one or more of the tables don't have a primary key and can't be synchronized.

To resolve this issue, go to the **Table statistics** tab on the integration details page or use SVV\_INTEGRATION\_TABLE\_STATE to view the failed tables. You can add primary keys to the tables, and Amazon Redshift will resynchronize them. Alternatively, although not recommended, you can drop these tables on Aurora and recreate them with a primary key. For more information, see [Amazon Redshift best practices for designing tables](https://docs.aws.amazon.com/redshift/latest/dg/c_designing-tables-best-practices.html).
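
If a table already has a unique, non-null column (or combination of columns), you can promote it to a primary key on the source. The table and column names in this sketch are hypothetical:

```
-- Run on the source Aurora MySQL database.
-- Promote an existing unique, non-null column to the primary key
-- so that the table can be synchronized.
ALTER TABLE orders
  MODIFY order_id BIGINT NOT NULL,
  ADD PRIMARY KEY (order_id);
```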

### Aurora MySQL tables aren't replicating to Amazon Redshift
<a name="zero-etl-using.troubleshooting.not-replicating"></a>

If you don't see one or more tables reflected in Amazon Redshift, you can run the following command to resynchronize them. Replace *dbname* with the name of your Amazon Redshift database. And, replace *table1* and *table2* with the names of the tables to be synchronized.

```
ALTER DATABASE dbname INTEGRATION REFRESH TABLES table1, table2;
```

For more information, see [ALTER DATABASE](https://docs.aws.amazon.com/redshift/latest/dg/r_ALTER_DATABASE.html) in the *Amazon Redshift Database Developer Guide*.

Your data might not be replicating because one or more of your source tables doesn't have a primary key. The monitoring dashboard in Amazon Redshift displays the status of these tables as `Failed`, and the status of the overall zero-ETL integration changes to `Needs attention`. To resolve this issue, you can identify an existing key in your table that can become a primary key, or you can add a synthetic primary key. For detailed solutions, see [Handle tables without primary keys while creating Amazon Aurora MySQL or RDS for MySQL zero-ETL integrations with Amazon Redshift.](https://aws.amazon.com/blogs/database/handle-tables-without-primary-keys-while-creating-amazon-aurora-mysql-or-amazon-rds-for-mysql-zero-etl-integrations-with-amazon-redshift/) in the *AWS Database Blog*.
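
If no existing column qualifies as a key, one option is to add a synthetic primary key on the source. This is a sketch with hypothetical names using MySQL's `AUTO_INCREMENT`; test the change on a non-production copy first, because it rewrites the table:

```
-- Run on the source Aurora MySQL database.
-- Add a synthetic auto-increment primary key to a table
-- that has no natural key.
ALTER TABLE events
  ADD COLUMN event_pk BIGINT NOT NULL AUTO_INCREMENT PRIMARY KEY;
```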

Also confirm that, if your target is an Amazon Redshift cluster, the cluster isn't paused.

### Unsupported data types in tables
<a name="zero-etl-using.troubleshooting.unsupported-data"></a>

In the database that you created from the integration in Amazon Redshift and in which data is replicated from the Aurora DB cluster, one or more of the tables have unsupported data types and can't be synchronized.

To resolve this issue, go to the **Table statistics** tab on the integration details page or use SVV\_INTEGRATION\_TABLE\_STATE to view the failed tables. Then, remove these tables and recreate them on Amazon RDS. For more information on unsupported data types, see [Data type differences between Aurora and Amazon Redshift databases](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/zero-etl.querying.html#zero-etl.data-type-mapping) in the *Amazon Aurora User Guide*.

### Data manipulation language commands failed
<a name="zero-etl-using.troubleshooting.failed-dml"></a>

Amazon Redshift could not run DML commands on the Redshift tables. To resolve this issue, use SVV\_INTEGRATION\_TABLE\_STATE to view the failed tables. Amazon Redshift automatically resynchronizes the tables to resolve this error.
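
A query along the following lines lists the affected tables. The column names and state values here are assumptions based on the system-view documentation, so verify them for your Amazon Redshift version:

```
-- Run on the Amazon Redshift data warehouse.
-- List integration tables that are not fully synchronized.
SELECT integration_id, schema_name, table_name, table_state
FROM svv_integration_table_state
WHERE table_state <> 'Synced';
```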

### Tracked changes between data sources don't match
<a name="zero-etl-using.troubleshooting.tracked-changes-failure"></a>

This error occurs when changes between Amazon Aurora and Amazon Redshift don't match, leading to the integration entering a `Failed` state.

To resolve this, delete the zero-ETL integration and create it again in Amazon RDS. For more information, see [Creating zero-ETL integrations](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/zero-etl.creating.html) and [Deleting zero-ETL integrations](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/zero-etl.deleting.html).

### Authorization failed
<a name="zero-etl-using.troubleshooting.authorization"></a>

Authorization failed because the source Aurora DB cluster was removed as an authorized integration source for the Amazon Redshift data warehouse.

To resolve this issue, delete the zero-ETL integration and create it again on Amazon RDS. For more information, see [Creating zero-ETL integrations](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/zero-etl.creating.html) and [Deleting zero-ETL integrations](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/zero-etl.deleting.html).

### Number of tables is more than 100K or the number of schemas is more than 4950
<a name="zero-etl-using.troubleshooting.table-limits"></a>

The destination data warehouse has more than 100K tables or more than 4950 schemas, which exceeds the limit, so Amazon Aurora can't send data to Amazon Redshift. To resolve this issue, remove any unnecessary schemas or tables from the source database.
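
To see how close a MySQL-compatible source is to these limits, you can count user tables and schemas from `information_schema`, excluding the built-in system schemas:

```
-- Run on the source database (MySQL-compatible sources).
SELECT COUNT(*) AS user_tables,
       COUNT(DISTINCT table_schema) AS user_schemas
FROM information_schema.tables
WHERE table_schema NOT IN
      ('mysql', 'sys', 'information_schema', 'performance_schema');
```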

### Amazon Redshift can't load data
<a name="zero-etl-using.troubleshooting.data-load"></a>

Amazon Redshift can't load data to the zero-ETL integration.

To resolve this issue, delete the zero-ETL integration on Amazon RDS and create it again. For more information, see [Creating zero-ETL integrations](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/zero-etl.creating.html) and [Deleting zero-ETL integrations](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/zero-etl.deleting.html).

### Workgroup parameter settings are incorrect
<a name="zero-etl-using.troubleshooting.case-sensitive"></a>

Your workgroup doesn't have case sensitivity turned on.

To resolve this issue, go to the **Properties** tab on the integration details page, choose the parameter group, and turn on the case-sensitive identifier from the **Properties** tab. If you don't have an existing parameter group, create one with the case-sensitive identifier turned on. Then, create a new zero-ETL integration on Amazon RDS. For more information, see [Creating zero-ETL integrations](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/zero-etl.creating.html).

### Database isn't created to activate a zero-ETL integration
<a name="zero-etl-using.troubleshooting.db-creation"></a>

There isn't a database created for the zero-ETL integration to activate it.

To resolve this issue, create a database for the integration. For more information, see [Creating destination databases in Amazon Redshift](zero-etl-using.creating-db.md).

### Table is in the **Resync Required** or **Resync Initiated** state
<a name="zero-etl-using.troubleshooting.resync"></a>

Your table is in the **Resync Required** or **Resync Initiated** state.

To gather more detailed error information about why your table is in that state, use the [SYS\_LOAD\_ERROR\_DETAIL](https://docs.aws.amazon.com/redshift/latest/dg/SYS_LOAD_ERROR_DETAIL.html) system view.
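
For example, a query like the following surfaces the most recent load errors. The column names are taken from the system-view documentation, so verify them for your Amazon Redshift version:

```
-- Run on the Amazon Redshift data warehouse.
-- Show the ten most recent load errors with their messages.
SELECT table_id, start_time, error_code, error_message
FROM sys_load_error_detail
ORDER BY start_time DESC
LIMIT 10;
```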

### Integration lag growing
<a name="zero-etl-using.troubleshooting.integration-lag"></a>

The integration lag of your zero-ETL integrations can grow if there is a heavy use of SAVEPOINT in your source database.

## Troubleshooting zero-ETL integrations with Aurora PostgreSQL
<a name="zero-etl-using.troubleshooting.apg"></a>

Use the following information to troubleshoot common issues with zero-ETL integrations with Aurora PostgreSQL.

**Topics**
+ [Creation of the integration failed](#zero-etl-using.troubleshooting.creation)
+ [Tables don't have primary keys](#zero-etl-using.troubleshooting.primary-key)
+ [Aurora PostgreSQL tables aren't replicating to Amazon Redshift](#zero-etl-using.troubleshooting.not-replicating)
+ [Unsupported data types in tables](#zero-etl-using.troubleshooting.unsupported-data)
+ [Data manipulation language commands failed](#zero-etl-using.troubleshooting.failed-dml)
+ [Tracked changes between data sources don't match](#zero-etl-using.troubleshooting.tracked-changes-failure)
+ [Authorization failed](#zero-etl-using.troubleshooting.authorization)
+ [Number of tables is more than 100K or the number of schemas is more than 4950](#zero-etl-using.troubleshooting.table-limits)
+ [Amazon Redshift can't load data](#zero-etl-using.troubleshooting.data-load)
+ [Workgroup parameter settings are incorrect](#zero-etl-using.troubleshooting.case-sensitive)
+ [Database isn't created to activate a zero-ETL integration](#zero-etl-using.troubleshooting.db-creation)
+ [Table is in the **Resync Required** or **Resync Initiated** state](#zero-etl-using.troubleshooting.resync)

### Creation of the integration failed
<a name="zero-etl-using.troubleshooting.creation"></a>

If the creation of the zero-ETL integration failed, the status of the integration is `Inactive`. Make sure that the following are correct for your source Aurora DB cluster:
+ You created your cluster in the Amazon RDS console.
+ Your source Aurora DB cluster is running a supported version. For a list of supported versions, see [Supported Regions and Aurora DB engines for zero-ETL integrations with Amazon Redshift](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Concepts.Aurora_Fea_Regions_DB-eng.Feature.Zero-ETL.html#Concepts.Aurora_Fea_Regions_DB-eng.Feature.Zero-ETL-Postgres). To validate this, go to the **Configuration** tab for the cluster and check the **Engine version**.
+ You correctly configured the DB cluster parameter settings for your cluster. If your Aurora PostgreSQL parameters are set incorrectly or the parameter group isn't associated with the source Aurora DB cluster, creation fails. See [Configure DB cluster parameters](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/zero-etl.setting-up.html#zero-etl.parameters).

In addition, make sure the following are correct for your Amazon Redshift data warehouse:
+ Case sensitivity is turned on. See [Turn on case sensitivity for your data warehouse](zero-etl-setting-up.case-sensitivity.md).
+ You added the correct authorized principal and integration source for your namespace. See [Configure authorization for your Amazon Redshift data warehouse](zero-etl-using.redshift-iam.md).

### Tables don't have primary keys
<a name="zero-etl-using.troubleshooting.primary-key"></a>

In the destination database, one or more of the tables don't have a primary key and can't be synchronized.

To resolve this issue, go to the **Table statistics** tab on the integration details page or use SVV\_INTEGRATION\_TABLE\_STATE to view the failed tables. You can add primary keys to the tables, and Amazon Redshift will resynchronize them. Alternatively, although not recommended, you can drop these tables on Aurora and recreate them with a primary key. For more information, see [Amazon Redshift best practices for designing tables](https://docs.aws.amazon.com/redshift/latest/dg/c_designing-tables-best-practices.html).

### Aurora PostgreSQL tables aren't replicating to Amazon Redshift
<a name="zero-etl-using.troubleshooting.not-replicating"></a>

If you don't see one or more tables reflected in Amazon Redshift, you can run the following command to resynchronize them. Replace *dbname* with the name of your Amazon Redshift database. And, replace *table1* and *table2* with the names of the tables to be synchronized.

```
ALTER DATABASE dbname INTEGRATION REFRESH TABLES table1, table2;
```

For more information, see [ALTER DATABASE](https://docs.aws.amazon.com/redshift/latest/dg/r_ALTER_DATABASE.html) in the *Amazon Redshift Database Developer Guide*.

Your data might not be replicating because one or more of your source tables doesn't have a primary key. The monitoring dashboard in Amazon Redshift displays the status of these tables as `Failed`, and the status of the overall zero-ETL integration changes to `Needs attention`. To resolve this issue, you can identify an existing key in your table that can become a primary key, or you can add a synthetic primary key. For detailed solutions, see [Handle tables without primary keys while creating Amazon Aurora PostgreSQL zero-ETL integrations with Amazon Redshift.](https://aws.amazon.com/blogs/database/handle-tables-without-primary-keys-while-creating-amazon-aurora-postgresql-zero-etl-integrations-with-amazon-redshift/) in the *AWS Database Blog*.
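
On an Aurora PostgreSQL source, the synthetic-key approach uses an identity column instead of MySQL's `AUTO_INCREMENT`. The names in this sketch are hypothetical:

```
-- Run on the source Aurora PostgreSQL database.
-- Add a synthetic identity primary key to a table
-- that has no natural key.
ALTER TABLE events
  ADD COLUMN event_pk BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY;
```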

Also confirm that, if your target is an Amazon Redshift cluster, the cluster isn't paused.

### Unsupported data types in tables
<a name="zero-etl-using.troubleshooting.unsupported-data"></a>

In the database that you created from the integration in Amazon Redshift and in which data is replicated from the Aurora DB cluster, one or more of the tables have unsupported data types and can't be synchronized.

To resolve this issue, go to the **Table statistics** tab on the integration details page or use SVV\_INTEGRATION\_TABLE\_STATE to view the failed tables. Then, remove these tables and recreate them on Amazon RDS. For more information on unsupported data types, see [Data type differences between Aurora and Amazon Redshift databases](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/zero-etl.querying.html#zero-etl.data-type-mapping) in the *Amazon Aurora User Guide*.

### Data manipulation language commands failed
<a name="zero-etl-using.troubleshooting.failed-dml"></a>

Amazon Redshift could not run DML commands on the Redshift tables. To resolve this issue, use SVV\_INTEGRATION\_TABLE\_STATE to view the failed tables. Amazon Redshift automatically resynchronizes the tables to resolve this error.

### Tracked changes between data sources don't match
<a name="zero-etl-using.troubleshooting.tracked-changes-failure"></a>

This error occurs when changes between Amazon Aurora and Amazon Redshift don't match, leading to the integration entering a `Failed` state.

To resolve this, delete the zero-ETL integration and create it again in Amazon RDS. For more information, see [Creating zero-ETL integrations](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/zero-etl.creating.html) and [Deleting zero-ETL integrations](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/zero-etl.deleting.html).

### Authorization failed
<a name="zero-etl-using.troubleshooting.authorization"></a>

Authorization failed because the source Aurora DB cluster was removed as an authorized integration source for the Amazon Redshift data warehouse.

To resolve this issue, delete the zero-ETL integration and create it again on Amazon RDS. For more information, see [Creating zero-ETL integrations](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/zero-etl.creating.html) and [Deleting zero-ETL integrations](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/zero-etl.deleting.html).

### Number of tables is more than 100K or the number of schemas is more than 4950
<a name="zero-etl-using.troubleshooting.table-limits"></a>

The destination data warehouse has more than 100K tables or more than 4950 schemas, which exceeds the limit, so Amazon Aurora can't send data to Amazon Redshift. To resolve this issue, remove any unnecessary schemas or tables from the source database.

### Amazon Redshift can't load data
<a name="zero-etl-using.troubleshooting.data-load"></a>

Amazon Redshift can't load data to the zero-ETL integration.

To resolve this issue, delete the zero-ETL integration on Amazon RDS and create it again. For more information, see [Creating zero-ETL integrations](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/zero-etl.creating.html) and [Deleting zero-ETL integrations](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/zero-etl.deleting.html).

### Workgroup parameter settings are incorrect
<a name="zero-etl-using.troubleshooting.case-sensitive"></a>

Your workgroup doesn't have case sensitivity turned on.

To resolve this issue, go to the **Properties** tab on the integration details page, choose the parameter group, and turn on the case-sensitive identifier from the **Properties** tab. If you don't have an existing parameter group, create one with the case-sensitive identifier turned on. Then, create a new zero-ETL integration on Amazon RDS. For more information, see [Creating zero-ETL integrations](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/zero-etl.creating.html).

### Database isn't created to activate a zero-ETL integration
<a name="zero-etl-using.troubleshooting.db-creation"></a>

There isn't a database created for the zero-ETL integration to activate it.

To resolve this issue, create a database for the integration. For more information, see [Creating destination databases in Amazon Redshift](zero-etl-using.creating-db.md).

### Table is in the **Resync Required** or **Resync Initiated** state
<a name="zero-etl-using.troubleshooting.resync"></a>

Your table is in the **Resync Required** or **Resync Initiated** state.

To gather more detailed error information about why your table is in that state, use the [SYS\_LOAD\_ERROR\_DETAIL](https://docs.aws.amazon.com/redshift/latest/dg/SYS_LOAD_ERROR_DETAIL.html) system view.

## Troubleshooting zero-ETL integrations with RDS for MySQL
<a name="zero-etl-using.troubleshooting.rms"></a>

Use the following information to troubleshoot common issues with zero-ETL integrations with RDS for MySQL.

**Topics**
+ [Creation of the integration failed](#zero-etl-using.troubleshooting.creation)
+ [Tables don't have primary keys](#zero-etl-using.troubleshooting.primary-key)
+ [RDS for MySQL tables aren't replicating to Amazon Redshift](#zero-etl-using.troubleshooting.not-replicating)
+ [Unsupported data types in tables](#zero-etl-using.troubleshooting.unsupported-data)
+ [Data manipulation language commands failed](#zero-etl-using.troubleshooting.failed-dml)
+ [Tracked changes between data sources don't match](#zero-etl-using.troubleshooting.tracked-changes-failure)
+ [Authorization failed](#zero-etl-using.troubleshooting.authorization)
+ [Number of tables is more than 100K or the number of schemas is more than 4950](#zero-etl-using.troubleshooting.table-limits)
+ [Amazon Redshift can't load data](#zero-etl-using.troubleshooting.data-load)
+ [Workgroup parameter settings are incorrect](#zero-etl-using.troubleshooting.case-sensitive)
+ [Database isn't created to activate a zero-ETL integration](#zero-etl-using.troubleshooting.db-creation)
+ [Table is in the **Resync Required** or **Resync Initiated** state](#zero-etl-using.troubleshooting.resync)

### Creation of the integration failed
<a name="zero-etl-using.troubleshooting.creation"></a>

If the creation of the zero-ETL integration failed, the status of the integration is `Inactive`. Make sure that the following are correct for your source RDS DB instance:
+ You created your instance in the Amazon RDS console.
+ Your source RDS DB instance is running a supported version of RDS for MySQL. For a list of supported versions, see [Supported Regions and DB engines for Amazon RDS zero-ETL integrations with Amazon Redshift](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.RDS_Fea_Regions_DB-eng.Feature.ZeroETL.html). To validate this, go to the **Configuration** tab for the instance and check the **Engine version**.
+  You correctly configured binlog parameter settings for your instance. If your RDS for MySQL binlog parameters are set incorrectly or not associated with the source RDS DB instance, creation fails. See [Configure DB instance parameters](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/zero-etl.setting-up.html#zero-etl.parameters).

In addition, make sure the following are correct for your Amazon Redshift data warehouse:
+ Case sensitivity is turned on. See [Turn on case sensitivity for your data warehouse](zero-etl-setting-up.case-sensitivity.md).
+ You added the correct authorized principal and integration source for your namespace. See [Configure authorization for your Amazon Redshift data warehouse](zero-etl-using.redshift-iam.md).

### Tables don't have primary keys
<a name="zero-etl-using.troubleshooting.primary-key"></a>

In the destination database, one or more of the tables don't have a primary key and can't be synchronized.

To resolve this issue, go to the **Table statistics** tab on the integration details page or use SVV\_INTEGRATION\_TABLE\_STATE to view the failed tables. You can add primary keys to the tables, and Amazon Redshift will resynchronize them. Alternatively, although not recommended, you can drop these tables on RDS and recreate them with a primary key. For more information, see [Amazon Redshift best practices for designing tables](https://docs.aws.amazon.com/redshift/latest/dg/c_designing-tables-best-practices.html).

### RDS for MySQL tables aren't replicating to Amazon Redshift
<a name="zero-etl-using.troubleshooting.not-replicating"></a>

If you don't see one or more tables reflected in Amazon Redshift, you can run the following command to resynchronize them. Replace *dbname* with the name of your Amazon Redshift database. And, replace *table1* and *table2* with the names of the tables to be synchronized.

```
ALTER DATABASE dbname INTEGRATION REFRESH TABLES table1, table2;
```

For more information, see [ALTER DATABASE](https://docs.aws.amazon.com/redshift/latest/dg/r_ALTER_DATABASE.html) in the *Amazon Redshift Database Developer Guide*.

Your data might not be replicating because one or more of your source tables doesn't have a primary key. The monitoring dashboard in Amazon Redshift displays the status of these tables as `Failed`, and the status of the overall zero-ETL integration changes to `Needs attention`. To resolve this issue, you can identify an existing key in your table that can become a primary key, or you can add a synthetic primary key. For detailed solutions, see [Handle tables without primary keys while creating Aurora MySQL-Compatible Edition or RDS for MySQL zero-ETL integrations with Amazon Redshift.](https://aws.amazon.com/blogs/database/handle-tables-without-primary-keys-while-creating-amazon-aurora-mysql-or-amazon-rds-for-mysql-zero-etl-integrations-with-amazon-redshift/) in the *AWS Database Blog*.

Also, if your target is an Amazon Redshift cluster, confirm that the cluster isn't paused.

### Unsupported data types in tables
<a name="zero-etl-using.troubleshooting.unsupported-data"></a>

In the database that you created from the integration in Amazon Redshift and in which data is replicated from the RDS DB instance, one or more of the tables have unsupported data types and can't be synchronized.

To resolve this issue, go to the **Table statistics** tab on the integration details page or use SVV\_INTEGRATION\_TABLE\_STATE to view the failed tables. Then, remove these tables and re-create them on Amazon RDS. For more information about unsupported data types, see [Data type differences between RDS and Amazon Redshift databases](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/zero-etl.querying.html#zero-etl.data-type-mapping) in the *Amazon RDS User Guide*.

### Data manipulation language commands failed
<a name="zero-etl-using.troubleshooting.failed-dml"></a>

Amazon Redshift could not run DML commands on the Redshift tables. To resolve this issue, use SVV\_INTEGRATION\_TABLE\_STATE to view the failed tables. Amazon Redshift automatically resynchronizes the tables to resolve this error.

### Tracked changes between data sources don't match
<a name="zero-etl-using.troubleshooting.tracked-changes-failure"></a>

This error occurs when changes between Amazon Aurora and Amazon Redshift don't match, leading to the integration entering a `Failed` state.

To resolve this, delete the zero-ETL integration and create it again in Amazon RDS. For more information, see [Creating zero-ETL integrations](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/zero-etl.creating.html) and [Deleting zero-ETL integrations](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/zero-etl.deleting.html).

### Authorization failed
<a name="zero-etl-using.troubleshooting.authorization"></a>

Authorization failed because the source RDS DB instance was removed as an authorized integration source for the Amazon Redshift data warehouse.

To resolve this issue, delete the zero-ETL integration and create it again on Amazon RDS. For more information, see [Creating zero-ETL integrations](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/zero-etl.creating.html) and [Deleting zero-ETL integrations](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/zero-etl.deleting.html).

### Number of tables is more than 100K or the number of schemas is more than 4950
<a name="zero-etl-using.troubleshooting.table-limits"></a>

The destination data warehouse has more than 100K tables or more than 4,950 schemas, which exceeds the set limit, so the source database can't send data to Amazon Redshift. To resolve this issue, remove any unnecessary schemas or tables from the source database.

### Amazon Redshift can't load data
<a name="zero-etl-using.troubleshooting.data-load"></a>

Amazon Redshift can't load data to the zero-ETL integration.

To resolve this issue, delete the zero-ETL integration on Amazon RDS and create it again. For more information, see [Creating zero-ETL integrations](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/zero-etl.creating.html) and [Deleting zero-ETL integrations](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/zero-etl.deleting.html).

### Workgroup parameter settings are incorrect
<a name="zero-etl-using.troubleshooting.case-sensitive"></a>

Your workgroup doesn't have case sensitivity turned on.

To resolve this issue, go to the **Properties** tab on the integration details page, choose the parameter group, and turn on the case-sensitive identifier. If you don't have an existing parameter group, create one with the case-sensitive identifier turned on. Then, create a new zero-ETL integration on Amazon RDS. For more information, see [Creating zero-ETL integrations](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/zero-etl.creating.html).
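
Before re-creating the integration, you can verify the setting from a SQL session on the warehouse. `SHOW` returns the current value of the configuration parameter; a value of `on` means case sensitivity is enabled.

```
SHOW enable_case_sensitive_identifier;
```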

### Database isn't created to activate a zero-ETL integration
<a name="zero-etl-using.troubleshooting.db-creation"></a>

A destination database hasn't been created for the zero-ETL integration, so the integration can't be activated.

To resolve this issue, create a database for the integration. For more information, see [Creating destination databases in Amazon Redshift](zero-etl-using.creating-db.md).

### Table is in the **Resync Required** or **Resync Initiated** state
<a name="zero-etl-using.troubleshooting.resync"></a>

Your table is in the **Resync Required** or **Resync Initiated** state.

To gather more detailed error information about why your table is in that state, use the [SYS\_LOAD\_ERROR\_DETAIL](https://docs.aws.amazon.com/redshift/latest/dg/SYS_LOAD_ERROR_DETAIL.html) system view.
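
For example, the following query gives a quick look at recent load errors. It sticks to `SELECT *` because the view's exact column set can vary by release.

```
SELECT *
FROM sys_load_error_detail
LIMIT 10;
```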

## Troubleshooting zero-ETL integrations with DynamoDB
<a name="zero-etl-dynamodb-integrations-troubleshooting"></a>

Use the following information to troubleshoot common issues with zero-ETL integrations with Amazon DynamoDB.

**Topics**
+ [Creation of the integration failed](#zero-etl-dynamodb-integrations-troubleshooting-creation)
+ [Unsupported data types in tables](#zero-etl-dynamodb-integrations-troubleshooting-unsupported-data-types)
+ [Unsupported table and attribute names](#zero-etl-dynamodb-integrations-troubleshooting-unsupported-table-names)
+ [Authorization failed](#zero-etl-dynamodb-integrations-troubleshooting-authorization)
+ [Amazon Redshift can't load data](#zero-etl-dynamodb-integrations-troubleshooting-data-load)
+ [Workgroup or cluster parameter settings are incorrect](#zero-etl-dynamodb-integrations-troubleshooting-case-sensitive)
+ [Database isn't created to activate a zero-ETL integration](#zero-etl-dynamodb-integrations-troubleshooting-db-creation)
+ [Point-in-time recovery (PITR) is not enabled on source DynamoDB table](#zero-etl-dynamodb-integrations-troubleshooting-pitr-recovery)
+ [KMS key access denied](#zero-etl-dynamodb-integrations-troubleshooting-kms-key)
+ [Amazon Redshift does not have access to DynamoDB table key](#zero-etl-dynamodb-integrations-troubleshooting-ddb-table-key)

### Creation of the integration failed
<a name="zero-etl-dynamodb-integrations-troubleshooting-creation"></a>

If the creation of the zero-ETL integration failed, the status of the integration is `Inactive`. Make sure that the following are correct for your Amazon Redshift data warehouse and source DynamoDB table:
+ Case sensitivity is turned on for your data warehouse. See [Turn on case sensitivity](https://docs.aws.amazon.com/redshift/latest/mgmt/zero-etl-using.setting-up.html#zero-etl-setting-up.case-sensitivity) in the *Amazon Redshift Management Guide*.
+ You added the correct authorized principal and integration source for your namespace in Amazon Redshift. See [Configure authorization for your Amazon Redshift data warehouse](https://docs.aws.amazon.com/redshift/latest/mgmt/zero-etl-using.setting-up.html#zero-etl-using.redshift-iam) in the *Amazon Redshift Management Guide*.
+ You added the correct resource-based policy to the source DynamoDB table. See [Policies and permissions in IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html) in the *IAM User Guide*.

### Unsupported data types in tables
<a name="zero-etl-dynamodb-integrations-troubleshooting-unsupported-data-types"></a>

DynamoDB numbers are translated to DECIMAL(38,10) in Amazon Redshift, and numbers that exceed this precision are automatically transformed to fit DECIMAL(38,10). To resolve this issue, delete the integration, unify the number precision in the source data, and then re-create the integration.
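
To see how a replicated column was actually typed on the Amazon Redshift side, you can inspect the catalog. The database and table names here (*sample_integration_db* and *orders*) are hypothetical placeholders.

```
SELECT column_name, data_type
FROM svv_all_columns
WHERE database_name = 'sample_integration_db'
  AND table_name = 'orders';
```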

### Unsupported table and attribute names
<a name="zero-etl-dynamodb-integrations-troubleshooting-unsupported-table-names"></a>

Amazon Redshift supports table and attribute names of up to 127 characters. If a longer name, such as the DynamoDB table name or a partition key or sort key attribute name, causes your integration to fail, shorten the name and re-create the integration.

### Authorization failed
<a name="zero-etl-dynamodb-integrations-troubleshooting-authorization"></a>

Authorization can fail when the source DynamoDB table is removed as an authorized integration source for the Amazon Redshift data warehouse.

To resolve this issue, delete the zero-ETL integration, and re-create it using Amazon DynamoDB.

### Amazon Redshift can't load data
<a name="zero-etl-dynamodb-integrations-troubleshooting-data-load"></a>

Amazon Redshift can't load data from a zero-ETL integration.

To resolve this issue, refresh the integration with ALTER DATABASE.

```
ALTER DATABASE sample_integration_db INTEGRATION REFRESH ALL TABLES;
```

### Workgroup or cluster parameter settings are incorrect
<a name="zero-etl-dynamodb-integrations-troubleshooting-case-sensitive"></a>

Your workgroup or cluster doesn't have case sensitivity turned on.

To resolve this issue, go to the **Properties** tab on the integration details page, choose the parameter group, and turn on the case-sensitive identifier. If you don't have an existing parameter group, create one with the case-sensitive identifier turned on. Then, create a new zero-ETL integration on DynamoDB. See [Turn on case sensitivity](https://docs.aws.amazon.com/redshift/latest/mgmt/zero-etl-using.setting-up.html#zero-etl-setting-up.case-sensitivity) in the *Amazon Redshift Management Guide*.

### Database isn't created to activate a zero-ETL integration
<a name="zero-etl-dynamodb-integrations-troubleshooting-db-creation"></a>

A destination database hasn't been created for the zero-ETL integration, so the integration can't be activated.

To resolve this issue, create a database for the integration. See [Creating destination databases in Amazon Redshift](https://docs.aws.amazon.com/redshift/latest/mgmt/zero-etl-using.creating-db.html) in the *Amazon Redshift Management Guide*.

### Point-in-time recovery (PITR) is not enabled on source DynamoDB table
<a name="zero-etl-dynamodb-integrations-troubleshooting-pitr-recovery"></a>

DynamoDB requires point-in-time recovery (PITR) to be enabled in order to export data, so make sure that PITR stays enabled. If you turn off PITR while the integration is active, follow the instructions in the error message and refresh the integration by using ALTER DATABASE.

```
ALTER DATABASE sample_integration_db INTEGRATION REFRESH ALL TABLES;
```

### KMS key access denied
<a name="zero-etl-dynamodb-integrations-troubleshooting-kms-key"></a>

The KMS key used for the source table or integration must be configured with sufficient permissions. For information about table encryption and decryption, see [DynamoDB encryption at rest](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/EncryptionAtRest.html) in the *Amazon DynamoDB Developer Guide*.

### Amazon Redshift does not have access to DynamoDB table key
<a name="zero-etl-dynamodb-integrations-troubleshooting-ddb-table-key"></a>

If the source table encryption is an AWS managed key, then switch to an AWS owned key or customer managed key. If the table is already encrypted with a customer managed key, ensure that the policy doesn't have any condition keys.

## Troubleshooting zero-ETL integrations with applications
<a name="zero-etl-using.troubleshooting.glue"></a>

Use the following information to troubleshoot common issues with zero-ETL integrations with applications, such as Salesforce, SAP, ServiceNow, and Zendesk.

**Topics**
+ [Creation of the integration failed](#zero-etl-using.troubleshooting.creation)
+ [Tables aren't replicating to Amazon Redshift](#zero-etl-using.troubleshooting.primary-key)
+ [Unsupported data types in tables](#zero-etl-using.troubleshooting.unsupported-data)
+ [Workgroup parameter settings are incorrect](#zero-etl-using.troubleshooting.case-sensitive)
+ [Database isn't created to activate a zero-ETL integration](#zero-etl-using.troubleshooting.db-creation)
+ [Table is in the **Resync Required** or **Resync Initiated** state](#zero-etl-using.troubleshooting.resync)

### Creation of the integration failed
<a name="zero-etl-using.troubleshooting.creation"></a>

If the creation of the zero-ETL integration failed, the status of the integration is `Inactive`. Make sure that the following are correct for your Amazon Redshift data warehouse:
+ Case sensitivity is turned on. See [Turn on case sensitivity for your data warehouse](zero-etl-setting-up.case-sensitivity.md).
+ You added the correct authorized principal and integration source for your namespace. See [Configure authorization for your Amazon Redshift data warehouse](zero-etl-using.redshift-iam.md).

### Tables aren't replicating to Amazon Redshift
<a name="zero-etl-using.troubleshooting.primary-key"></a>

In the destination database, one or more of the tables don't have a primary key and can't be synchronized.

To resolve this issue, go to the **Table statistics** tab on the integration details page or use SVV\_INTEGRATION\_TABLE\_STATE to view the failed tables. You can add primary keys to the tables and Amazon Redshift will resynchronize the tables. You can also run the following command to resynchronize them. Replace *dbname* with the name of your Amazon Redshift database, and replace *table1* and *table2* with the names of the tables to synchronize.

```
ALTER DATABASE dbname INTEGRATION REFRESH TABLES table1, table2;
```

For more information, see [ALTER DATABASE](https://docs.aws.amazon.com/redshift/latest/dg/r_ALTER_DATABASE.html) in the *Amazon Redshift Database Developer Guide*.

### Unsupported data types in tables
<a name="zero-etl-using.troubleshooting.unsupported-data"></a>

In the database that you created from the integration in Amazon Redshift and in which data is replicated from zero-ETL integrations with applications, one or more of the tables have unsupported data types and can't be synchronized.

To resolve this issue, go to the **Table statistics** tab on the integration details page or use SVV\_INTEGRATION\_TABLE\_STATE to view the failed tables. Then, remove these tables and re-create them at the source. For more information, see [Zero-ETL integrations](https://docs.aws.amazon.com/glue/latest/dg/zero-etl-using.html) in the *AWS Glue Developer Guide*.

### Workgroup parameter settings are incorrect
<a name="zero-etl-using.troubleshooting.case-sensitive"></a>

Your workgroup doesn't have case sensitivity turned on.

To resolve this issue, go to the **Properties** tab on the integration details page, choose the parameter group, and turn on the case-sensitive identifier. If you don't have an existing parameter group, create one with the case-sensitive identifier turned on. Then, create a new zero-ETL integration. For more information, see [Zero-ETL integrations](https://docs.aws.amazon.com/glue/latest/dg/zero-etl-using.html) in the *AWS Glue Developer Guide*.

### Database isn't created to activate a zero-ETL integration
<a name="zero-etl-using.troubleshooting.db-creation"></a>

A destination database hasn't been created for the zero-ETL integration, so the integration can't be activated.

To resolve this issue, create a database for the integration. For more information, see [Creating destination databases in Amazon Redshift](zero-etl-using.creating-db.md).

### Table is in the **Resync Required** or **Resync Initiated** state
<a name="zero-etl-using.troubleshooting.resync"></a>

Your table is in the **Resync Required** or **Resync Initiated** state.

To gather more detailed error information about why your table is in that state, use the [SYS\_LOAD\_ERROR\_DETAIL](https://docs.aws.amazon.com/redshift/latest/dg/SYS_LOAD_ERROR_DETAIL.html) system view.