

**Note**  
Amazon Redshift will no longer support the creation of new Python UDFs starting with Patch 198. Existing Python UDFs will continue to function until June 30, 2026. For more information, see the [blog post](https://aws.amazon.com/blogs/big-data/amazon-redshift-python-user-defined-functions-will-reach-end-of-support-after-june-30-2026/).

# Considerations for data sharing in Amazon Redshift
<a name="datashare-considerations"></a>

With Amazon Redshift *data sharing*, you can securely share access to live data across Amazon Redshift clusters, workgroups, AWS accounts, and AWS Regions without manually moving or copying the data. Previously, objects in datashares were read-only in all circumstances. Writing to an object in a datashare is a newer capability: objects in datashares are write-enabled only when a producer specifically grants write privileges, such as INSERT or CREATE, on those objects to the datashare. Additionally, for cross-account sharing, the producer has to authorize the datashare for writes and the consumer has to associate specific clusters and workgroups for writes.

This section covers considerations when working with Amazon Redshift data sharing.

**Topics**
+ [General considerations for data sharing in Amazon Redshift](considerations-datashare-general.md)
+ [Considerations for data sharing reads and writes in Amazon Redshift](considerations-datashare-reads-writes.md)
+ [Considerations for data sharing in Amazon Redshift Serverless restore](considerations-datashare-serverless-restore.md)
+ [Considerations for data sharing with data lake tables in Amazon Redshift](considerations-datashare-datalake.md)
+ [Considerations for data sharing with AWS Lake Formation in Amazon Redshift](lake-formation-considerations.md)
+ [Considerations for data sharing with AWS Data Exchange in Amazon Redshift](adx-considerations.md)
+ [Permissions you can grant to datashares](permissions-datashares.md)
+ [Supported SQL statements for data sharing writes on consumers](multi-warehouse-writes-sql-statements.md)
+ [Unsupported SQL statements for data sharing writes on consumers](multi-warehouse-writes-sql-statements-unsupported.md)

# General considerations for data sharing in Amazon Redshift
<a name="considerations-datashare-general"></a>

The following are general considerations when working with datashares in Amazon Redshift:
+ *Default database* – When you read data from a datashare, you remain connected to your local cluster database. For more information about setting up and reading from a database created from a datashare, see [Querying datashare objects](https://docs.aws.amazon.com/redshift/latest/mgmt/query-editor-v2-datashare-using.html#query-editor-v2-datashare-consumer) and [Materialized views on external data lake tables in Amazon Redshift Spectrum](materialized-view-external-table.md).
+ *Connections* – To write to datashares, you must either be connected directly to the datashare database or run the USE command. You can also use three-part notation. The USE command is not supported on external tables.
+ *Performance* – The performance of the queries on shared data depends on the compute capacity of the consumer clusters.
+ *Data transfer charges* – Cross-Region data sharing includes additional cross-Region data-transfer charges. 
  + These data-transfer charges don't apply within the same Region, only across Regions. For more information, see [Managing cost control for cross-Region data sharing](cross-region-billing.md). 
  + The consumer is charged for all compute and cross-Region data transfer fees required to query the producer's data. The producer is charged for the underlying storage of data in their provisioned cluster or serverless namespace.
+ *Data sharing within and between clusters* – You only need datashares when you are sharing data between different Amazon Redshift provisioned clusters or serverless workgroups. Within the same cluster, you can query another database using simple three-part notation `database.schema.table` as long as you have the required permissions on the objects in the other database.
+ *Metadata Discovery* – When you're a consumer connected directly to a datashare database through the Redshift JDBC, ODBC, or Python drivers, you can view catalog data in the following ways: 
  + SQL [SHOW](https://docs.aws.amazon.com/redshift/latest/dg/r_SHOW.html) commands.
  + Querying `information_schema` tables and views.
  + Querying [SVV metadata views](https://docs.aws.amazon.com/redshift/latest/dg/svv_views.html).
+ *Permissions visibility* – Consumers can see the permissions granted to the datashares through the SHOW GRANTS SQL command.
+ *Cluster encryption management for data sharing* – To share data across AWS accounts, both the producer and consumer clusters must be encrypted.
  + If both the producer and consumer clusters and serverless namespaces are in the same account, they must have the same encryption type (either both unencrypted or both encrypted). In every other case, including Lake Formation managed datashares, both the consumer and producer must be encrypted for security purposes. However, they don't need to share the same encryption key.
  + To protect data in transit, all data is encrypted in transit through the encryption schema of the producer cluster. The consumer cluster adopts this encryption schema when data is loaded and then operates as a normal encrypted cluster. Communications between the producer and consumer are also encrypted using a shared key schema. For more information about encryption in transit, see [Encryption in transit](https://docs.aws.amazon.com/redshift/latest/mgmt/security-encryption-in-transit.html).
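To make the connection options above concrete, here is a minimal sketch; the database, schema, table, and namespace identifiers are all hypothetical:

```sql
-- Create a local database from a datashare, then query it with
-- three-part notation while staying connected to the local database.
CREATE DATABASE sales_db FROM DATASHARE salesshare
OF NAMESPACE '86b5169f-01dc-4a6f-9fbb-e2e24359e9a8';

SELECT count(*) FROM sales_db.public.orders;

-- To write, either connect directly to the datashare database or
-- switch the session to it with USE (not supported on external tables).
USE sales_db;
INSERT INTO public.orders VALUES (1001, '2025-01-15', 149.99);
```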

# Considerations for data sharing reads and writes in Amazon Redshift
<a name="considerations-datashare-reads-writes"></a>

**Note**  
Amazon Redshift multi-warehouse writes using data sharing are only supported on Amazon Redshift patch 186 for provisioned clusters on current track version 1.0.78881 or greater, and for Amazon Redshift Serverless workgroups on version 1.0.78890 or greater.

The following are considerations when working with datashare reads and writes in Amazon Redshift:
+ You can only share SQL UDFs through datashares. Python and Lambda UDFs aren't supported.
+ If the producer database has a specific collation, use the same collation settings for the consumer database.
+ Amazon Redshift doesn't support nested SQL user-defined functions on producer clusters.
+ Amazon Redshift doesn't support sharing tables with interleaved sort keys and views that refer to tables with interleaved sort keys.
+ Amazon Redshift doesn't support accessing a datashare object on which concurrent DDL ran between the prepare and execute phases of the access.
+ Amazon Redshift doesn't support sharing stored procedures through datashares.
+ Amazon Redshift doesn't support sharing metadata system views and system tables.
+ *Compute type* – You must use Serverless workgroups, ra3.large clusters, ra3.xlplus clusters, ra3.4xl clusters, or ra3.16xl clusters to use this feature. 
+ *Isolation level* – Your database’s isolation level must be snapshot isolation in order to allow other Serverless workgroups and provisioned clusters to write to it.
+ *Multi-statement queries and transactions* – Multi-statement queries outside of a transaction block aren't currently supported. As a result, if you're using a query editor like DBeaver and you have multiple write queries, you need to wrap your queries in an explicit BEGIN...END transaction block.

  When multi-command statements are used outside of transactions, if the first command is a write to a producer database, subsequent write commands in the statement are only allowed to that producer database. If the first command is a read, subsequent write commands are only allowed to the database set by the USE command, if one is set, and otherwise to the local database. Note that writes in a transaction are only supported to a single database.
+ *Consumer sizing* – Consumer clusters must have at least 64 slices to perform writes using data sharing.
+ *Views and materialized views* – You can't create, update, or alter views or materialized views on a datashare database.
+ *Security* – You can't attach security policies such as column-level security (CLS), row-level security (RLS), and dynamic data masking (DDM) to datashare objects, or remove such policies from them.
+ *Manageability* – Consumer warehouses can't add datashare objects, or views referencing datashare objects, to another datashare. Consumers also can't modify or drop an existing datashare.
+ *Truncate operations* – Datashare writes support transactional truncates for remote tables. This differs from truncates that you run locally on a cluster, which auto-commit. For more information about the SQL command, see [TRUNCATE](https://docs.aws.amazon.com/redshift/latest/dg/r_TRUNCATE.html).
+ *Cloning* – CREATE TABLE with LIKE clause statements support cloning from a single parent table when you write from consumer warehouses to producers.
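As a sketch of the transaction guidance above, the following wraps multiple consumer-side writes in an explicit transaction; all object names are illustrative:

```sql
-- Multi-statement writes must run inside an explicit transaction block.
BEGIN;
INSERT INTO sales_db.public.orders VALUES (1002, '2025-01-16', 20.00);
UPDATE sales_db.public.orders SET total = 25.00 WHERE order_id = 1002;
COMMIT;
```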

# Considerations for data sharing in Amazon Redshift Serverless restore
<a name="considerations-datashare-serverless-restore"></a>

Consider the following when working with datashares during Amazon Redshift Serverless restore operations:

**Restore a snapshot to an Amazon Redshift Serverless producer namespace:**
+ Restoring a snapshot to an Amazon Redshift Serverless producer namespace replaces the current datashares with the datashares in the snapshot. However, a datashare permission is only maintained if ALL of the following conditions are met:
  + The snapshot was taken on the same namespace that you're restoring to, AND
  + The datashare permission exists on the current namespace, AND
  + The datashare permission existed on the namespace when the snapshot was taken.
+ If a datashare was granted to a consumer when taking the snapshot, but was dropped or the grant was revoked on the latest namespace, the datashare will be restored. However, datashare permissions granted to consumer clusters are no longer valid after restore. To revive datashare access, re-grant usage permissions of datashares to desired consumer clusters.
+ If a datashare is granted to a consumer on the latest namespace, but was not granted when taking the snapshot, the datashare access won't be maintained after restore.
+ Datashare authorization/association scope won't change on the namespace after restore. For example, before restore if a datashare is associated to the entire AWS account, after restore the association scope will remain at account level regardless of the association scope when taking the snapshot.
+ Restoring from a snapshot will revert the publicly accessible setting to its state at the time the snapshot was created, regardless of any subsequent changes.

**Restore a snapshot to an Amazon Redshift Serverless consumer namespace:**
+ Restoring a consumer namespace preserves datashare access without requiring the producer administrator to re-grant usage. However, if the database created from the datashare no longer exists after restore, the consumer must recreate it from the datashare.

# Considerations for data sharing with data lake tables in Amazon Redshift
<a name="considerations-datashare-datalake"></a>

The following are considerations when working with data lake tables in Amazon Redshift:
+ Data sharing of data lake tables does not support customer managed AWS KMS keys for Amazon S3 bucket encryption. You can use AWS managed keys for encryption. For more information, see [Using server-side encryption with Amazon S3 managed keys (SSE-S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingServerSideEncryption.html).
+ To data share data lake tables from an encrypted AWS Glue catalog, you must delegate AWS KMS operations to an IAM role by following the instructions in [Encrypting your Data Catalog](https://docs.aws.amazon.com/glue/latest/dg/encrypt-glue-data-catalog.html).
+ External tables that explicitly specify manifest files in the `LOCATION` clause aren't supported for data sharing. This includes the following table formats that Amazon Redshift Spectrum supports:
  + Delta Lake
  + Hudi

# Considerations for data sharing with AWS Lake Formation in Amazon Redshift
<a name="lake-formation-considerations"></a>

The following are considerations and limitations for sharing Amazon Redshift data with Lake Formation. For information on data sharing considerations and limitations, see [Considerations when using data sharing in Amazon Redshift](https://docs.aws.amazon.com/redshift/latest/dg/datashare-considerations.html). For information about Lake Formation limitations, see [Notes on working with Amazon Redshift datashares in Lake Formation](https://docs.aws.amazon.com/lake-formation/latest/dg/notes-rs-datashare.html).
+ Sharing a datashare to Lake Formation across Regions is currently unsupported.
+ If column-level filters are defined for a user on a shared relation, performing a `SELECT *` operation returns only the columns the user has access to.
+ Cell-level filters from Lake Formation are unsupported.
+ If you created and shared a view and its tables to Lake Formation, you can configure filters to manage access to the tables. Amazon Redshift enforces Lake Formation defined policies when consumer cluster users access shared objects. When a user accesses a view shared with Lake Formation, Redshift enforces only the Lake Formation policies defined on the view, not on the tables contained within the view. However, when users directly access the table, Redshift enforces the Lake Formation policies defined on the table.
+ You can't create materialized views on the consumer based on a shared table if the table has Lake Formation filters configured.
+ The Lake Formation administrator must have [ data lake administrator](https://docs.aws.amazon.com/lake-formation/latest/dg/getting-started-setup.html#create-data-lake-admin) permissions and the [ required permissions to accept a datashare](https://docs.aws.amazon.com/lake-formation/latest/dg/redshift-ds-prereqs.html).
+ The producer cluster must be an RA3 cluster with the latest Amazon Redshift cluster version, or a serverless workgroup, to share datashares via Lake Formation.
+ Both the producer and consumer clusters must be encrypted.
+ Redshift row-level and column-level access control policies implemented in the producer cluster or workgroup are ignored when the datashare is shared to Lake Formation. The Lake Formation administrator must configure these policies in Lake Formation. The producer cluster or workgroup administrator can turn off RLS for a table by using the [ALTER TABLE](https://docs.aws.amazon.com/redshift/latest/dg/r_ALTER_TABLE.html) command.
+ Sharing datashares via Lake Formation is only available to users who have access to both Redshift and Lake Formation.
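For the row-level security point above, a producer administrator might turn off RLS on a table before sharing it to Lake Formation, so that access policies are managed in Lake Formation instead; the table name is hypothetical:

```sql
-- Turn off Redshift row-level security on the shared table;
-- Lake Formation policies then govern consumer access.
ALTER TABLE public.orders ROW LEVEL SECURITY OFF;
```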

# Considerations for data sharing with AWS Data Exchange in Amazon Redshift
<a name="adx-considerations"></a>

When using AWS Data Exchange for Amazon Redshift, consider the following:
+ Both producers and consumers must use the RA3 instance types to use Amazon Redshift datashares. Producers must use the RA3 instance types with the latest Amazon Redshift cluster version.
+ Both the producer and consumer clusters must be encrypted.
+ You must be registered as an AWS Data Exchange provider to list products on AWS Data Exchange, including products that contain AWS Data Exchange datashares. For more information, see [Getting started as a provider](https://docs.aws.amazon.com/data-exchange/latest/userguide/provider-getting-started.html).
+ You don't need to be a registered AWS Data Exchange provider to find, subscribe to, and query Amazon Redshift data through AWS Data Exchange.
+ To control access to your data, create AWS Data Exchange datashares with the publicly accessible setting turned on. To alter an AWS Data Exchange datashare to turn off the publicly accessible setting, set the session variable that allows running ALTER DATASHARE SET PUBLICACCESSIBLE FALSE. For more information, see [ALTER DATASHARE usage notes](r_ALTER_DATASHARE.md#r_ALTER_DATASHARE_usage).
+ Producers can't manually add or remove consumers from AWS Data Exchange datashares because access to the datashares is granted based on having an active subscription to an AWS Data Exchange product that contains the AWS Data Exchange datashare.
+ Producers can't view the SQL queries that consumers run. They can only view metadata, such as the number of queries or the objects consumers query, through Amazon Redshift tables that only the producer can access. For more information, see [Monitoring and auditing data sharing in Amazon Redshift](auditing.md).
+ We recommend that you make your datashares publicly accessible. If you don't, subscribers on AWS Data Exchange with publicly accessible consumer clusters won't be able to use your datashare.
+ We recommend that you don't delete an AWS Data Exchange datashare shared to other AWS accounts using the DROP DATASHARE statement. If you do, the AWS accounts that have access to the datashare will lose access. This action is irreversible. Performing this type of alteration can breach data product terms in AWS Data Exchange. If you want to delete an AWS Data Exchange datashare, see [DROP DATASHARE usage notes](r_DROP_DATASHARE.md#r_DROP_DATASHARE_usage).
+ For cross-Region data sharing, you can create AWS Data Exchange datashares to share licensed data.
+ When consuming data from a different Region, the consumer pays the Cross-Region data transfer fee from the producer Region to the consumer Region.
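As a sketch of the publicly accessible setting described above, assuming a hypothetical datashare name:

```sql
-- Create a datashare with the publicly accessible setting turned on.
-- (A datashare intended for AWS Data Exchange is also created with
-- the MANAGEDBY ADX clause.)
CREATE DATASHARE salesshare SET PUBLICACCESSIBLE TRUE;

-- Turning the setting back off on an AWS Data Exchange datashare
-- requires first setting the session variable described in the
-- ALTER DATASHARE usage notes.
ALTER DATASHARE salesshare SET PUBLICACCESSIBLE FALSE;
```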

# Permissions you can grant to datashares
<a name="permissions-datashares"></a>

The following are the object types and the permissions you can grant on them in a data sharing context.

Databases:
+ CREATE
+ USAGE
+ ALTER
+ DROP

Schemas:
+ CREATE
+ USAGE
+ ALTER
+ DROP

Tables:
+ SELECT
+ INSERT
+ UPDATE
+ DELETE
+ TRUNCATE
+ DROP
+ ALTER
+ REFERENCES

Functions:
+ EXECUTE
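A minimal sketch of granting these permissions on the producer side; the schema, table, datashare, and namespace identifiers are all hypothetical:

```sql
-- Grant read and write permissions on shared objects to the datashare.
GRANT USAGE ON SCHEMA public TO DATASHARE salesshare;
GRANT SELECT, INSERT ON TABLE public.orders TO DATASHARE salesshare;

-- Then grant the datashare itself to a consumer namespace.
GRANT USAGE ON DATASHARE salesshare
TO NAMESPACE '86b5169f-01dc-4a6f-9fbb-e2e24359e9a8';
```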

# Supported SQL statements for data sharing writes on consumers
<a name="multi-warehouse-writes-sql-statements"></a>

The following Data Definition Language (DDL) statements are supported for data sharing with writes:
+ ALTER TABLE RENAME TO
+ ALTER TABLE RENAME COLUMN TO
+ ALTER TABLE ADD/DROP COLUMN
+ ALTER SCHEMA RENAME TO
+ { CREATE | DROP } SCHEMA
+ { CREATE | DROP | SHOW } TABLE
+ CREATE TABLE table\_name AS
+ { BEGIN | START TRANSACTION }
+ { END | COMMIT | ROLLBACK }
+ TRUNCATE
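As a sketch of the supported DDL above, run from a consumer connected to the datashare database; all names are illustrative:

```sql
-- CTAS, rename, and truncate against shared objects.
CREATE TABLE public.orders_2025 AS
SELECT * FROM public.orders WHERE order_year = 2025;

ALTER TABLE public.orders_2025 RENAME TO orders_current;
TRUNCATE public.staging_orders;
```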

The following Data Manipulation Language (DML) statements are supported for data sharing with writes:
+ SELECT
+ INSERT
+ INSERT INTO SELECT
+ UPDATE
+ DELETE
+ MERGE
+ COPY without COMPUPDATE
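For example, a consumer-side COPY into a shared table must turn compression analysis off; the bucket path and IAM role ARN are placeholders:

```sql
COPY public.orders
FROM 's3://amzn-s3-demo-bucket/orders/'
IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole'
FORMAT AS CSV
COMPUPDATE OFF;
```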

The following analyze statements are supported for data sharing with writes:
+ ANALYZE. The consumer runs USER ANALYZE and sends the stats to the producer. 
+ Analyze activated by CTAS/COPY running on the consumer. This includes multiplexing for small consumers.
+ Auto-analyze run on the producer after COPY.

The following permission statements are supported for data sharing with writes:
+ { GRANT | REVOKE } privilege\_name ON OBJECT\_TYPE object\_name TO consumer\_user
+ SHOW GRANTS. For more information, see [SHOW GRANTS](r_SHOW_GRANTS.md).
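A quick sketch of checking the permissions on a hypothetical datashare:

```sql
SHOW GRANTS ON DATASHARE salesshare;
```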

# Unsupported SQL statements for data sharing writes on consumers
<a name="multi-warehouse-writes-sql-statements-unsupported"></a>

The following aren't supported:
+ Multi-statement queries to consumer warehouses when writing to producers.
+ Multi-statement queries to consumer warehouses in a different database, if the previous command is a read statement.
+ Object references other than three-dot notation, such as one-dot or two-dot notation, if you're not connected to the shared database.
+ Concurrency scaling queries writing from consumers to producers.
+ Auto-copy jobs writing from consumers to producers.
+ Streaming jobs writing from consumers to producers.
+ Consumers creating zero-ETL integration tables on producer clusters. For more information about zero-ETL integrations, see [Working with zero-ETL integrations](https://docs.aws.amazon.com/redshift/latest/mgmt/zero-etl-using.html).
+ Writing to a table with an interleaved sort key through a datashare.
+ Writing to a stored procedure through a datashare.
+ Writing to a SQL user-defined function (UDF) through a datashare. This includes nested, Python, and Lambda UDFs.
+ UPDATE, INSERT, or COPY statements on identity columns to consumer warehouses with more compute slices than the producer.
+ MERGE statements on non-RMS external tables to consumer warehouses, when writing to producers.
+ CREATE TABLE statements with:
  + DEFAULT expression set to data type VARBYTE. The VARBYTE data type can't be implicitly converted to any other data type. For more information, see [CAST function](r_CAST_function.md).
  + AS OF SELECT statements with a NULL parameter to consumer warehouses, when writing to producers.