
Considerations for data sharing reads and writes in Amazon Redshift

Note

Amazon Redshift multi-warehouse writes using data sharing are supported only on Amazon Redshift patch 186: provisioned clusters on current track version 1.0.78881 or later, and Amazon Redshift Serverless workgroups on version 1.0.78890 or later.

The following are considerations when working with datashare reads and writes in Amazon Redshift:

  • You can only share SQL UDFs through datashares. Python and Lambda UDFs aren't supported.

  • If the producer database uses a specific collation, use the same collation settings for the consumer database.

  • Amazon Redshift doesn't support nested SQL user-defined functions on producer clusters.

  • Amazon Redshift doesn't support sharing tables with interleaved sort keys, or views that reference tables with interleaved sort keys.

  • Amazon Redshift doesn't support accessing a datashare object if a concurrent DDL operation on that object occurred between the prepare and execute phases of the access.

  • Amazon Redshift doesn't support sharing stored procedures through datashares.

  • Amazon Redshift doesn't support sharing metadata system views and system tables.

  • Compute type – You must use Serverless workgroups, ra3.large clusters, ra3.xlplus clusters, ra3.4xl clusters, or ra3.16xl clusters to use this feature.

  • Isolation level – Your database’s isolation level must be snapshot isolation in order to allow other Serverless workgroups and provisioned clusters to write to it.

  • Multi-statement queries and transactions – Multi-statement queries outside of a transaction block aren't currently supported. As a result, if you are using a query editor like DBeaver and you have multiple write queries, you need to wrap your queries in an explicit BEGIN...END transaction block.

    When multi-command statements are used outside of transactions, if the first command is a write to a producer database, subsequent write commands in the statement are allowed only to that same producer database. If the first command is a read, subsequent write commands are allowed only to the used database, if one is set; otherwise, they are allowed only to the local database. Note that writes in a transaction are supported to only a single database.

  • Consumer sizing – Consumer clusters must have at least 64 slices to perform writes using data sharing.

  • Views and materialized views – You can't create, update, or alter views or materialized views on a datashare database.

  • Security – You can't attach column-level security (CLS), row-level security (RLS), or dynamic data masking (DDM) policies to datashare objects, or remove such policies from them.

  • Manageability – Consumer warehouses can't add datashare objects, or views referencing datashare objects, to another datashare. Consumers also can't modify or drop an existing datashare.

  • Truncate operations – Datashare writes support transactional truncates for remote tables. This differs from truncates that you run locally on a cluster, which auto-commit. For more information about the SQL command, see TRUNCATE.
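The snapshot-isolation requirement above can be satisfied with ALTER DATABASE or at creation time. A minimal sketch, assuming a hypothetical database named sales_db:

```sql
-- Switch an existing database to snapshot isolation so that other
-- Serverless workgroups and provisioned clusters can write to it.
ALTER DATABASE sales_db ISOLATION LEVEL SNAPSHOT;

-- Or set the isolation level when the database is created.
CREATE DATABASE sales_db ISOLATION LEVEL SNAPSHOT;
```

Run these as the database owner or a superuser; the isolation level applies to the whole database, not to individual sessions.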
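The multi-statement restriction above means that multiple write queries issued together from a query editor should be wrapped in an explicit transaction. A minimal sketch, assuming a hypothetical shared producer database producer_db with an orders table:

```sql
BEGIN;

-- Both writes target the same producer database, inside one transaction.
INSERT INTO producer_db.public.orders (order_id, status)
VALUES (1001, 'pending');

UPDATE producer_db.public.orders
SET status = 'shipped'
WHERE order_id = 1001;

END;  -- equivalent to COMMIT
```

Because writes in a transaction are supported to only a single database, keep all write statements in the block pointed at the same producer database.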
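Because remote truncates are transactional rather than auto-commit, they can be rolled back, unlike local truncates. A minimal sketch, assuming a hypothetical shared table producer_db.public.staging_orders:

```sql
BEGIN;

-- Truncate a remote table through the datashare. Unlike a local
-- TRUNCATE, this does not auto-commit.
TRUNCATE producer_db.public.staging_orders;

-- The truncate can still be undone within the transaction.
ROLLBACK;
```

Replacing ROLLBACK with COMMIT (or END) makes the truncate permanent.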
