

 Amazon Redshift will no longer support creation of new Python UDFs starting with Patch 198. Existing Python UDFs will continue to function until June 30, 2026. For more information, see the [blog post](https://aws.amazon.com/blogs/big-data/amazon-redshift-python-user-defined-functions-will-reach-end-of-support-after-june-30-2026/). 

# What is Amazon Redshift?
<a name="welcome"></a>

Welcome to the *Amazon Redshift Management Guide*. Amazon Redshift is a fully managed, petabyte-scale data warehouse service in the cloud. Amazon Redshift Serverless lets you access and analyze data without the setup and configuration of a provisioned data warehouse. Resources are automatically provisioned and data warehouse capacity is intelligently scaled to deliver fast performance for even the most demanding and unpredictable workloads. You don't incur charges when the data warehouse is idle, so you only pay for what you use. You can load data and start querying right away in the Amazon Redshift query editor v2 or in your favorite business intelligence (BI) tool. Enjoy the best price performance and familiar SQL features in an easy-to-use, zero-administration environment.

Regardless of the size of the dataset, Amazon Redshift offers fast query performance using the same SQL-based tools and business intelligence applications that you use today.

## Are you a first-time Amazon Redshift user?
<a name="are-you-a-firsttime-redshift-user"></a>

 If you are a first-time user of Amazon Redshift, we recommend that you begin by reading the following sections: 
+ [Service Highlights and Pricing](https://aws.amazon.com/redshift/redshift-serverless) – This product detail page provides the Amazon Redshift value proposition, service highlights, and pricing.
+ [Get started with Amazon Redshift Serverless data warehouses](https://docs.aws.amazon.com/redshift/latest/gsg/new-user-serverless.html) – This topic walks you through the process of setting up a serverless data warehouse, creating resources, and querying sample data.
+ [Amazon Redshift Database Developer Guide](https://docs.aws.amazon.com/redshift/latest/dg/) – If you are a database developer, this guide explains how to design, build, query, and maintain the databases that make up your data warehouse.

If you prefer to manage your Amazon Redshift resources manually, you can create provisioned clusters for your data querying needs. For more information, see [Amazon Redshift clusters](https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-clusters.html).

As an application developer, you can use the Amazon Redshift API or the AWS Software Development Kit (SDK) libraries to manage clusters programmatically. If you use the Amazon Redshift API, you must authenticate every HTTP or HTTPS request to the API by signing it. For more information about signing requests, go to [Signing an HTTP request](amazon-redshift-signing-requests.md).

 For information about the API, CLI, and SDKs, go to the following links: 
+ [Amazon Redshift Serverless API Reference](https://docs.aws.amazon.com/redshift-serverless/latest/APIReference/Welcome.html)
+ [Amazon Redshift API Reference](https://docs.aws.amazon.com/redshift/latest/APIReference/)
+ [Amazon Redshift Data API Reference](https://docs.aws.amazon.com/redshift-data/latest/APIReference/Welcome.html)
+ [AWS CLI Command Reference](https://docs.aws.amazon.com/cli/latest/reference/)
+ SDK References in [Tools for Amazon Web Services](https://aws.amazon.com/tools/).
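The request-signing requirement mentioned above uses AWS Signature Version 4. As a minimal standard-library sketch of the key-derivation step (the example secret key is the well-known placeholder from AWS documentation, not a real credential):

```python
import hashlib
import hmac

def sigv4_signing_key(secret_key, date_stamp, region, service):
    """Derive an AWS Signature Version 4 signing key.

    The key is produced by chaining HMAC-SHA256 over the date, Region,
    service name, and the literal string "aws4_request".
    """
    def sign(key, msg):
        return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

    k_date = sign(("AWS4" + secret_key).encode("utf-8"), date_stamp)
    k_region = sign(k_date, region)
    k_service = sign(k_region, service)
    return sign(k_service, "aws4_request")

# Throwaway example key (never embed real credentials in code):
key = sigv4_signing_key("wJalrXUtnFEMI/K7MDENG+bPxRfiCYEXAMPLEKEY",
                        "20250101", "us-east-1", "redshift")
print(key.hex())
```

The derived key then signs the string-to-sign for each request; in practice the AWS SDKs and CLI perform all of this for you.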

# Amazon Redshift Serverless feature overview
<a name="serverless-considerations"></a>

Most of the features supported by an Amazon Redshift provisioned data warehouse are also supported by Amazon Redshift Serverless. The following are some of its key capabilities. 


| Feature | Description | 
| --- | --- | 
| **Snapshots** | You can restore a snapshot of Amazon Redshift Serverless or a provisioned data warehouse to Amazon Redshift Serverless. For more information, see [Snapshots and recovery points](serverless-snapshots-recovery-points.md). | 
| **Recovery points** | Amazon Redshift Serverless automatically creates a point of recovery every 30 minutes. These recovery points are kept for 24 hours. You can use them to restore after accidental writes or deletes. When you restore from a recovery point, all the data in your Amazon Redshift Serverless database is restored to an earlier point in time. You can also create a snapshot from a recovery point if you need to keep a point of recovery for a longer period. For more information, see [Snapshots and recovery points](serverless-snapshots-recovery-points.md).  | 
| **Base RPU capacity** | You can set a base capacity in Redshift Processing Units (RPUs). One RPU provides 16 GB of memory. This setting gives you the ability to control the balance between resources in use and cost for your workload. You can increase this value to grow resources available and improve query performance, or lower the value to limit your spending. The default is 128 RPUs. You can also set usage limits, such as RPUs used per day, to control costs. For more information, see [Billing for Amazon Redshift Serverless](serverless-billing.md). | 
| **Usage limits of data sharing** | You can limit the amount of data transferred from a producer Region to a consumer Region using the console or the API. These data transfer costs differ by AWS Region, and are measured in terabytes. For more information about data sharing, see [Getting started data sharing using the console](https://docs.aws.amazon.com/redshift/latest/dg/getting-started-datashare-console.html) in the *Amazon Redshift Database Developer Guide*. | 
| **User-defined functions (UDFs)** | You can run user-defined functions (UDFs) in Amazon Redshift Serverless. For more information, see [Creating user-defined functions](https://docs.aws.amazon.com/redshift/latest/dg/user-defined-functions.html) in the *Amazon Redshift Database Developer Guide*. | 
| **Stored procedures** | You can run stored procedures in Amazon Redshift Serverless. For more information, see [Creating stored procedures](https://docs.aws.amazon.com/redshift/latest/dg/stored-procedure-overview.html) in the *Amazon Redshift Database Developer Guide*. | 
| **Materialized views** | You can create materialized views in Amazon Redshift Serverless. For more information, see [Creating materialized views](https://docs.aws.amazon.com/redshift/latest/dg/materialized-view-overview.html) in the *Amazon Redshift Database Developer Guide*. | 
| **Spatial functions** | You can run spatial functions in Amazon Redshift Serverless. For more information, see [Querying spatial data](https://docs.aws.amazon.com/redshift/latest/dg/geospatial-overview.html) in the *Amazon Redshift Database Developer Guide*. | 
| **Federated queries** | You can run queries to join data with Aurora DB clusters and Amazon RDS databases from Amazon Redshift Serverless. For more information, see [Querying data with federated queries](https://docs.aws.amazon.com/redshift/latest/dg/federated-overview.html) in the *Amazon Redshift Database Developer Guide*. | 
| **Data lake queries** | You can run queries to join data from your Amazon S3 data lake with Amazon Redshift Serverless. For more information, see [Querying a data lake](https://docs.aws.amazon.com/redshift/latest/mgmt/query-editor-v2-querying-data-lake.html) in the *Amazon Redshift Management Guide*. | 
| **HyperLogLog** | You can run HyperLogLog functions in Amazon Redshift Serverless. For more information, see [Using HyperLogLog sketches](https://docs.aws.amazon.com/redshift/latest/dg/hyperloglog-overview.html) in the *Amazon Redshift Database Developer Guide*. | 
| **Querying data across databases** | You can query data across databases with Amazon Redshift Serverless. For more information, see [Querying data across databases](https://docs.aws.amazon.com/redshift/latest/dg/cross-database-overview.html) in the *Amazon Redshift Database Developer Guide*. | 
| **Data sharing** | You can access datashares on provisioned data warehouses with Amazon Redshift Serverless. For more information, see [Sharing data across clusters](https://docs.aws.amazon.com/redshift/latest/dg/datashare-overview.html) in the *Amazon Redshift Database Developer Guide*. | 
| **Semi-structured data querying** | You can ingest and store semi-structured data with the `SUPER` data type with Amazon Redshift Serverless. For more information, see [Ingesting and querying semi-structured data](https://docs.aws.amazon.com/redshift/latest/dg/super-overview.html) in the *Amazon Redshift Database Developer Guide*. | 
| **Tagging resources** | You can use the AWS CLI or the Amazon Redshift Serverless API to tag resources with metadata related to the resource. For more information, see [Tagging resources](https://docs.aws.amazon.com/redshift/latest/mgmt/serverless-tagging-resources.html). | 
| **Machine learning** | You can use Amazon Redshift machine learning with Amazon Redshift Serverless. For more information, see [Using machine learning](https://docs.aws.amazon.com/redshift/latest/dg/machine_learning.html) in the *Amazon Redshift Database Developer Guide*. | 
| **SQL commands and functions** | With a few exceptions (such as `REBOOT_CLUSTER`), you can use Amazon Redshift SQL commands and functions with Amazon Redshift Serverless. For more information, see [SQL reference](https://docs.aws.amazon.com/redshift/latest/dg/cm_chap_SQLCommandRef.html) in the *Amazon Redshift Database Developer Guide*. | 
| **CloudFormation resources** | Using CloudFormation templates, you can deploy and update Amazon Redshift Serverless resources. This integration means you can spend less time managing resources and focus on your applications. For more information about CloudFormation resources in Amazon Redshift Serverless, see [Amazon Redshift Serverless resource type reference](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/AWS_RedshiftServerless.html). | 
| **CloudTrail resources** | Amazon Redshift Serverless is integrated with AWS CloudTrail to provide a record of actions taken in Amazon Redshift Serverless. CloudTrail captures all API calls for Amazon Redshift Serverless as events. For more information, see [CloudTrail for Amazon Redshift Serverless](https://docs.aws.amazon.com/redshift/latest/mgmt/logging-with-cloudtrail.html). | 
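To make the RPU-based pricing model concrete, the following sketch estimates compute cost for a single workload, assuming per-second billing in RPU-hours with a 60-second minimum charge. The price per RPU-hour is an illustrative placeholder; check current pricing for your Region:

```python
def serverless_compute_cost(rpus, runtime_seconds, price_per_rpu_hour):
    """Estimate Amazon Redshift Serverless compute cost for one workload.

    Billing accrues in RPU-hours on a per-second basis, with a
    60-second minimum charge.
    """
    billable_seconds = max(runtime_seconds, 60)  # 60-second minimum
    rpu_hours = rpus * billable_seconds / 3600
    return rpu_hours * price_per_rpu_hour

# A 45-second query at the default base capacity of 128 RPUs, with an
# assumed placeholder price of $0.375 per RPU-hour:
cost = serverless_compute_cost(128, 45, 0.375)  # 45 s rounds up to 60 s
print(f"${cost:.4f}")
```

Lowering the base RPU capacity reduces the cost per second of runtime, but queries may run longer; the balance between the two is the price/performance tradeoff the base capacity setting controls.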

# Amazon Redshift provisioned clusters overview
<a name="overview"></a>

The Amazon Redshift service manages all of the work of setting up, operating, and scaling a data warehouse. These tasks include provisioning capacity, monitoring and backing up the cluster, and applying patches and upgrades to the Amazon Redshift engine.

The following video shows you how to create a cluster and query data using the Amazon Redshift query editor v2.

[![AWS Videos](https://img.youtube.com/vi/8b58xGDHIog/0.jpg)](https://www.youtube.com/watch?v=8b58xGDHIog)


## Cluster management
<a name="rs-overview-cluster-management"></a>

An Amazon Redshift cluster is a set of nodes, which consists of a leader node and one or more compute nodes. The type and number of compute nodes that you need depend on the size of your data, the number of queries you run, and the query runtime performance that you need.

### Creating and managing clusters
<a name="rs-overview-create-and-manage-clusters"></a>

Depending on your data warehousing needs, you can start with a small, single-node cluster and easily scale up to a larger, multi-node cluster as your requirements change. You can add or remove compute nodes to the cluster without any interruption to the service. For more information, see [Amazon Redshift provisioned clusters](working-with-clusters.md).

### Reserving compute nodes
<a name="rs-overview-reserve-compute-nodes"></a>

If you intend to keep your cluster running for a year or longer, you can save money by reserving compute nodes for a one-year or three-year period. Reserving compute nodes offers significant savings compared to the hourly rates that you pay when you provision compute nodes on demand. For more information, see [Reserved nodes](purchase-reserved-node-instance.md).
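As a back-of-the-envelope sketch of the reserved-versus-on-demand tradeoff, the comparison is simple arithmetic. All dollar figures below are illustrative placeholders, not actual Amazon Redshift rates:

```python
HOURS_PER_YEAR = 24 * 365  # 8,760 hours

def yearly_cost_on_demand(hourly_rate, hours=HOURS_PER_YEAR):
    """Cost of running a compute node on demand for the given hours."""
    return hourly_rate * hours

def yearly_cost_reserved(upfront, effective_hourly, hours=HOURS_PER_YEAR):
    """Cost of a reserved node: upfront payment plus discounted hourly rate."""
    return upfront + effective_hourly * hours

# Placeholder prices: $1.00/hour on demand vs. $2,000 upfront + $0.50/hour.
on_demand = yearly_cost_on_demand(1.00)
reserved = yearly_cost_reserved(2000, 0.50)
print(f"on demand: ${on_demand:,.0f}/year, reserved: ${reserved:,.0f}/year")
```

With these placeholder numbers the reserved node is cheaper if the cluster runs all year; for a cluster that runs only part of the year, repeat the calculation with the actual hours you expect.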

### Creating cluster snapshots
<a name="rs-overview-create-cluster-snapshots"></a>

Snapshots are point-in-time backups of a cluster. There are two types of snapshots: automated and manual. Amazon Redshift stores these snapshots internally in Amazon Simple Storage Service (Amazon S3) by using an encrypted Secure Sockets Layer (SSL) connection. If you need to restore from a snapshot, Amazon Redshift creates a new cluster and imports data from the snapshot that you specify. For more information about snapshots, see [Amazon Redshift snapshots and backups](working-with-snapshots.md).

## Cluster access and security
<a name="rs-overview-cluster-access-and-security"></a>

There are several features related to cluster access and security in Amazon Redshift. These features help you to control access to your cluster, define connectivity rules, and encrypt data and connections. These features are in addition to features related to database access and security in Amazon Redshift. For more information about database security, see [Managing Database Security](https://docs.aws.amazon.com/redshift/latest/dg/r_Database_objects.html) in the *Amazon Redshift Database Developer Guide*.

### AWS accounts and IAM credentials
<a name="rs-overview-aws-accounts-and-iam-credentials"></a>

By default, an Amazon Redshift cluster is only accessible to the AWS account that creates the cluster. The cluster is locked down so that no one else has access. Within your AWS account, you use the AWS Identity and Access Management (IAM) service to create user accounts and manage permissions for those accounts to control cluster operations. For more information, see [Security in Amazon Redshift](iam-redshift-user-mgmt.md). For more information about managing IAM identities, including guidance and best practices for IAM roles, see [Identity and access management in Amazon Redshift](redshift-iam-authentication-access-control.md).

### Security groups
<a name="rs-overview-security-groups"></a>

By default, any cluster that you create is closed to everyone. IAM credentials only control access to the Amazon Redshift API-related resources: the Amazon Redshift console, command line interface (CLI), API, and SDK. To enable access to the cluster from SQL client tools via JDBC or ODBC, you use security groups: 
+ If you are using the EC2-VPC platform for your Amazon Redshift cluster, you must use VPC security groups. We recommend that you launch your cluster in an EC2-VPC platform.

  You cannot move a cluster to a VPC after it has been launched with EC2-Classic. However, you can restore an EC2-Classic snapshot to an EC2-VPC cluster using the Amazon Redshift console. For more information, see [Restoring a cluster from a snapshot](working-with-snapshot-restore-cluster-from-snapshot.md).
+ If you are using the EC2-Classic platform for your Amazon Redshift cluster, you must use Amazon Redshift security groups.

In either case, you add rules to the security group to grant explicit inbound access to a specific range of CIDR/IP addresses or to an Amazon Elastic Compute Cloud (Amazon EC2) security group if your SQL client runs on an Amazon EC2 instance. For more information, see [Amazon Redshift security groups](security-network-isolation.md#working-with-security-groups).
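As an illustration of what a CIDR-based inbound rule covers, the check is simple set membership. This sketch uses the Python standard library, with a documentation IP range standing in for your client network:

```python
import ipaddress

# Suppose an inbound rule grants access to 203.0.113.0/24 (a reserved
# documentation range, used here as a stand-in for your office or VPN CIDR).
allowed = ipaddress.ip_network("203.0.113.0/24")

def rule_covers(client_ip):
    """Return True if the rule's CIDR range covers the client address."""
    return ipaddress.ip_address(client_ip) in allowed

print(rule_covers("203.0.113.45"))   # inside the /24
print(rule_covers("198.51.100.7"))   # outside it
```

Keeping the CIDR range narrow (for example, a /24 rather than 0.0.0.0/0) limits inbound access to the clients that actually need it.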

In addition to the inbound access rules, you create database users to provide credentials to authenticate to the database within the cluster itself. For more information, see [Databases](#rs-overview-databases) in this topic.

### Encryption
<a name="rs-overview-encryption"></a>

When you provision the cluster, you can optionally choose to encrypt the cluster for additional security. When you enable encryption, Amazon Redshift stores all data in user-created tables in an encrypted format. You can use AWS Key Management Service (AWS KMS) to manage your Amazon Redshift encryption keys. 

Encryption is an immutable property of the cluster. The only way to switch from an encrypted cluster to a cluster that is not encrypted is to unload the data and reload it into a new cluster. Encryption applies to the cluster and any backups. When you restore a cluster from an encrypted snapshot, the new cluster is encrypted as well.

For more information about encryption, keys, and hardware security modules, see [Amazon Redshift database encryption](working-with-db-encryption.md).

### SSL connections
<a name="rs-overview-ssl-connections"></a>

You can use Secure Sockets Layer (SSL) encryption to encrypt the connection between your SQL client and your cluster. For more information, see [Configuring security options for connections](connecting-ssl-support.md).
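For example, a JDBC connection that enforces SSL adds the driver's `ssl` and `sslmode` options to the connection URL. A minimal sketch, with a hypothetical cluster endpoint (see the driver documentation for the full option list):

```python
def redshift_jdbc_url(host, port, database, ssl_mode="verify-full"):
    """Build a Redshift JDBC URL that requires an encrypted connection.

    ssl=true makes the driver use SSL; sslmode=verify-full additionally
    verifies the server certificate and hostname.
    """
    return f"jdbc:redshift://{host}:{port}/{database}?ssl=true&sslmode={ssl_mode}"

# Hypothetical endpoint for illustration:
url = redshift_jdbc_url(
    "examplecluster.abc123xyz789.us-west-2.redshift.amazonaws.com", 5439, "dev")
print(url)
```

The `verify-full` mode requires the Amazon Redshift certificate bundle to be available to the driver so the server's identity can be verified.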

## Monitoring clusters
<a name="rs-overview-monitoring-clusters"></a>

There are several features related to monitoring in Amazon Redshift. You can use database audit logging to generate activity logs, and configure events and notification subscriptions to track information of interest. Use the metrics in Amazon Redshift and Amazon CloudWatch to learn about the health and performance of your clusters and databases.

### Database audit logging
<a name="rs-overview-database-audit-logging"></a>

You can use the database audit logging feature to track information about authentication attempts, connections, disconnections, changes to database user definitions, and queries run in the database. This information is useful for security and troubleshooting purposes in Amazon Redshift. The logs are stored in Amazon S3 buckets. For more information, see [Database audit logging](db-auditing.md).

### Events and notifications
<a name="rs-overview-events-and-notifications"></a>

Amazon Redshift tracks events and retains information about them for a period of several weeks in your AWS account. For each event, Amazon Redshift reports information such as the date the event occurred, a description, the event source (for example, a cluster, a parameter group, or a snapshot), and the source ID. You can create Amazon Redshift event notification subscriptions that specify a set of event filters. When an event occurs that matches the filter criteria, Amazon Redshift uses Amazon Simple Notification Service to inform you that the event has occurred. For more information about events and notifications, see [Amazon Redshift events](working-with-events.md).

### Performance
<a name="rs-overview-performance"></a>

Amazon Redshift provides performance metrics and data so that you can track the health and performance of your clusters and databases. Amazon Redshift uses Amazon CloudWatch metrics to monitor the physical aspects of the cluster, such as CPU utilization, latency, and throughput. Amazon Redshift also provides query and load performance data to help you monitor the database activity in your cluster. For more information about performance metrics and monitoring, see [Monitoring Amazon Redshift cluster performance](metrics.md).

## Databases
<a name="rs-overview-databases"></a>

Amazon Redshift creates one database when you provision a cluster. This is the database that you use to load data and run queries on your data. You can create additional databases as needed by running a SQL command. For more information about creating additional databases, go to [Step 1: Create a database](https://docs.aws.amazon.com/redshift/latest/dg/t_creating_database.html) in the *Amazon Redshift Database Developer Guide*.

When you provision a cluster, you specify an admin user who has access to all of the databases that are created within the cluster. This admin user is a superuser who is the only user with access to the database initially, though this user can create additional superusers and users. For more information, go to [Superusers](https://docs.aws.amazon.com/redshift/latest/dg/r_superusers.html) and [Users](https://docs.aws.amazon.com/redshift/latest/dg/r_Users.html) in the *Amazon Redshift Database Developer Guide*.

Amazon Redshift uses parameter groups to define the behavior of all databases in a cluster, such as date presentation style and floating-point precision. If you don’t specify a parameter group when you provision your cluster, Amazon Redshift associates a default parameter group with the cluster. For more information, see [Amazon Redshift parameter groups](working-with-parameter-groups.md).

For more information about databases in Amazon Redshift, go to the [Amazon Redshift Database Developer Guide](https://docs.aws.amazon.com/redshift/latest/dg/).

# Comparing Amazon Redshift Serverless to an Amazon Redshift provisioned data warehouse
<a name="serverless-console-comparison"></a>

In Amazon Redshift Serverless, some concepts and features differ from their counterparts in an Amazon Redshift provisioned data warehouse. For instance, Amazon Redshift Serverless doesn't have the concept of a cluster or node. The following table describes features and behavior in Amazon Redshift Serverless and explains how they differ from the equivalent feature in a provisioned data warehouse.


| Feature | Description | Serverless | Provisioned | 
| --- | --- | --- | --- | 
| **Workgroup and Namespace** | To isolate workloads and manage different resources in Amazon Redshift Serverless, you can create namespaces and workgroups in order to manage storage and compute resources separately. | A namespace is a collection of database objects and users. A workgroup is a collection of compute resources. For more information, see [Amazon Redshift Serverless](working-with-serverless.md) to understand the design for Amazon Redshift Serverless. | A provisioned cluster is a collection of compute nodes and a leader node, which you manage directly. For more information, see [Amazon Redshift provisioned clusters](working-with-clusters.md). | 
| **Node types** | When you work with Amazon Redshift Serverless, you don't choose node types or specify node count like you do with a provisioned Amazon Redshift cluster.  | Amazon Redshift Serverless automatically provisions and manages capacity for you. You can optionally specify base data warehouse capacity to select the right price/performance balance for your workloads. You can also specify maximum RPU hours to set cost controls to make sure that costs are predictable. For more information, see [Compute capacity for Amazon Redshift Serverless](serverless-capacity.md). | You build a cluster with node types that meet your cost and performance specifications. For more information, see [Amazon Redshift provisioned clusters](working-with-clusters.md). | 
| **Workload management and concurrency scaling** | Amazon Redshift can scale for periods of heavy load. Amazon Redshift Serverless also can scale to meet intermittent periods of high load. | Amazon Redshift Serverless automatically manages resources efficiently and scales, based on workloads, within the thresholds of cost controls. For more information, see [Billing for compute capacity](serverless-billing.md#serverless-rpu-billing). | With a provisioned data warehouse, you enable concurrency scaling on your cluster to handle periods of heavy load. For more information, see [Concurrency scaling](https://docs.aws.amazon.com/redshift/latest/dg/concurrency-scaling.html). | 
| **Port** | The port number that you use to connect. | With Amazon Redshift Serverless, you can change to another port from the port range of 5431–5455 or 8191–8215. For more information, see [Connecting to Amazon Redshift Serverless](serverless-connecting.md). | With a provisioned data warehouse, you can choose any port to connect. | 
| **Resizing** | Add or remove compute resources to perform well for the workload. | Resizing is not applicable in Amazon Redshift Serverless. You can, however, change the base data warehouse RPU capacity, based on your price and performance requirements. For more information, see [Compute capacity for Amazon Redshift Serverless](serverless-capacity.md). | With a provisioned cluster, you perform a cluster resize to add nodes or remove nodes. For more information, see [Overview of managing clusters in Amazon Redshift](https://docs.aws.amazon.com/redshift/latest/mgmt/managing-cluster-operations.html). | 
| **Pausing and resuming** | You can pause a provisioned cluster when you don't have workloads to run, to save cost. | With Amazon Redshift Serverless, you pay only when queries run, so there is no need to pause or resume. For more information, see [Billing for compute capacity](serverless-billing.md#serverless-rpu-billing). | You pause and resume a cluster manually, based on an assessment of your workload at various times. For more information, see [Overview of managing clusters in Amazon Redshift](https://docs.aws.amazon.com/redshift/latest/mgmt/managing-cluster-operations.html). | 
| **Querying external data with Spectrum queries** | You can query data in Amazon S3 buckets, in a variety of formats, such as JSON. | Billing accrues when compute resources process workloads. Also, billing accrues when external Redshift Spectrum data is queried, like any other transaction. For more information, see [Billing for compute capacity](serverless-billing.md#serverless-rpu-billing). | With a provisioned data warehouse, Amazon Redshift Spectrum capacity exists on separate servers that are queried from the Amazon Redshift cluster. For more information, see [Querying external data using Amazon Redshift Spectrum](https://docs.aws.amazon.com/redshift/latest/dg/c-using-spectrum.html). | 
| **Compute-resource billing** | How billing accrues for Amazon Redshift vs Amazon Redshift Serverless. | With Amazon Redshift Serverless, you pay for the workloads you run, in RPU-hours on a per-second basis, with a 60-second minimum charge. This includes queries that access data in open file formats in Amazon S3. For more information, see [Billing for compute capacity](serverless-billing.md#serverless-rpu-billing). | With a provisioned cluster, billing occurs per second when the cluster isn't paused. | 
| **Maintenance window** | How server maintenance works. | With Amazon Redshift Serverless, there is no maintenance window. Updates are handled seamlessly. For more information, see [What is Amazon Redshift Serverless?](https://docs.aws.amazon.com/redshift/latest/mgmt/serverless-whatis.html) | With a provisioned cluster, you specify a maintenance window when patching occurs. (Typically, you choose a recurring time when use is low.)  | 
| **Encryption** | You can enable database encryption. | Amazon Redshift Serverless is always encrypted with AWS KMS, with AWS managed or customer managed keys.  | The data in a provisioned data warehouse can be encrypted with AWS KMS (with AWS managed or customer managed keys), or unencrypted. See [Amazon Redshift database encryption](working-with-db-encryption.md). | 
| **Storage billing** | How billing for storage works. | For Amazon Redshift Serverless, storage is billed at a rate per GB per month. See [Billing for Amazon Redshift Serverless](serverless-billing.md). |  Storage is billed separately from compute resources for a provisioned cluster with RA3 nodes. | 
| **User management** | How users are managed. | For Amazon Redshift Serverless, users are IAM or Redshift users. For more information, see [Identity and access management in Amazon Redshift Serverless](serverless-iam.md). For more information about managing IAM identities, including best practices for IAM roles, see [Identity and access management in Amazon Redshift](redshift-iam-authentication-access-control.md). | For a provisioned data warehouse, users are IAM or Redshift users. For more information, see [Managing database security](https://docs.aws.amazon.com/redshift/latest/dg/r_Database_objects.html) in the *Amazon Redshift Database Developer Guide*. For more information about managing IAM identities, including best practices for IAM roles, see [Identity and access management in Amazon Redshift](redshift-iam-authentication-access-control.md). | 
| **JDBC and ODBC tools and compatibility** | How client connections work. | Amazon Redshift Serverless is compatible with any JDBC or ODBC compliant tool or client application. For more information about drivers, see [Configuring connections](https://docs.aws.amazon.com/redshift/latest/mgmt/configuring-connections.html) in the *Amazon Redshift Management Guide*. For information about connecting to Amazon Redshift Serverless, see [Connecting to Redshift Serverless](https://docs.aws.amazon.com/redshift/latest/mgmt/serverless-connecting.html). | Amazon Redshift provisioned is compatible with any JDBC or ODBC compliant tool or client application. For more information about drivers, see [Configuring connections](https://docs.aws.amazon.com/redshift/latest/mgmt/configuring-connections.html) in the *Amazon Redshift Management Guide*. For information about connecting to clusters, see [Connecting to an Amazon Redshift data warehouse using SQL client tools](https://docs.aws.amazon.com/redshift/latest/mgmt/connecting-to-cluster.html). | 
| **Requirement for credentials on sign in** | How credentials are handled. | For Amazon Redshift Serverless, you don't have to enter credentials in every instance. For more information, see [Connecting to Amazon Redshift Serverless](serverless-connecting.md#serverless-connecting-endpoint). | Access to Amazon Redshift requires sign-in credentials from a user associated with an IAM role. The IAM role has specific permissions attached for a provisioned data warehouse. Once authenticated, the user can connect directly to the database, to the Redshift console, and to query editor v2. | 
| **Data API** | You can access data from web services and other applications. | Amazon Redshift Serverless supports the Amazon Redshift Data API. With Amazon Redshift Serverless, you use the `workgroup-name` parameter instead of the `cluster-identifier` parameter. For more information about calling the Data API, see [Using the Amazon Redshift Data API](data-api.md). | Amazon Redshift provisioned supports the Amazon Redshift Data API. With Amazon Redshift clusters, you use the `cluster-identifier` parameter instead of the `workgroup-name` parameter. For more information about calling the Data API, see [Using the Amazon Redshift Data API](data-api.md). | 
| **Snapshots** | Provides point-in-time recovery. | Amazon Redshift Serverless supports snapshots and recovery points. For more information about snapshots and recovery points for a namespace, see [Snapshots and recovery points](serverless-snapshots-recovery-points.md). | Provisioned clusters support snapshots. For more information, see [Managing snapshots using the console](https://docs.aws.amazon.com/redshift/latest/mgmt/managing-snapshots-console.html). | 
| **Data Sharing** | Provides the ability to share data between databases in the same account or in different accounts. | Amazon Redshift Serverless supports all of the data sharing features that a provisioned data warehouse does. It also supports data sharing between Amazon Redshift Serverless and a provisioned data warehouse, tool, or client application.  | Provisioned clusters support cross database, cross account, cross-Region, and AWS Data Exchange data sharing. For more information, see [Sharing data across clusters in Amazon Redshift](https://docs.aws.amazon.com/redshift/latest/dg/datashare-overview.html). | 
| **Tracks** | Provides a schedule for software updates. | Amazon Redshift Serverless has no concept of a track. Versions and updates are handled by the service. For more information about the design of Amazon Redshift Serverless, see [Snapshots and recovery points](serverless-snapshots-recovery-points.md). | Provisioned clusters support switching between current and trailing tracks. | 
| **System tables and views** | Provides a way to monitor your resources and system metadata. | Amazon Redshift Serverless supports new system tables and views. For more information about system tables, see [Monitoring queries and workloads with Amazon Redshift Serverless](serverless-monitoring.md). For information about how to migrate your queries from using the older provisioned system tables and views to the new views, see [Migrating to SYS monitoring views](https://docs.aws.amazon.com/redshift/latest/dg/sys_view_migration.html). | A provisioned data warehouse supports the existing set of system tables and views for monitoring and other tasks that require system metadata. | 
| **Parameter groups** | This is a group of parameters that apply to all of the databases created in a cluster. These parameters configure database settings such as query timeout and date style. | Amazon Redshift Serverless does not have the concept of a parameter group. | Provisioned data warehouses support parameter groups. For more information about parameter groups for a provisioned cluster, see [Amazon Redshift parameter groups](working-with-parameter-groups.md). | 
| **Query monitoring** | Provides a time-based view of queries run. | Query monitoring in Amazon Redshift Serverless requires users to connect to the database to use system tables, so query monitoring and system tables are in sync. Queries of system tables in Amazon Redshift Serverless run as the database user that is mapped to the IAM user. For more information about monitoring queries, see [Monitoring queries and workloads with Amazon Redshift Serverless](https://docs.aws.amazon.com/redshift/latest/mgmt/serverless-monitoring.html). | Query monitoring in provisioned clusters does not show all data in system tables. | 
| **Audit logging** | Provides information about connections and user activities in the database. | With Amazon Redshift Serverless, CloudWatch is a destination for audit logs. Amazon S3-based audit log delivery is not supported for Amazon Redshift Serverless. For more information, see [Audit logging for Amazon Redshift Serverless](https://docs.aws.amazon.com/redshift/latest/mgmt/serverless-audit-logging.html). | For a provisioned cluster, Amazon S3-based audit log delivery has historically been the norm. Delivery of audit logs to CloudWatch has now been extended to cover provisioned data warehouses as well. | 
| **Event notifications** | Amazon EventBridge is a serverless event bus service that you can use to connect your applications with event data from a variety of sources. | Amazon Redshift Serverless uses Amazon EventBridge to manage event notifications to keep you up-to-date regarding changes in your data warehouse. For more information, see [Amazon Redshift Serverless event notifications with Amazon EventBridge](serverless-event-notifications-eventbridge.md). | For a provisioned cluster, you manage event notifications using the Amazon Redshift console to create event subscriptions. For more information, see [Creating an event notification subscription](event-subscribe.md). | 
| **Cursor constraints** | Amazon Redshift enforces constraints on the size of all cursor result sets. | Amazon Redshift Serverless has a cursor maximum total result set size of 150,000 MB. | For a provisioned cluster, the cursor maximum total result set size depends on the cluster type. For more information, see [ Cursor constraints](https://docs.aws.amazon.com/redshift/latest/dg/declare.html#declare-constraints). | 
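The Data API difference noted in the table can be illustrated with the parameter shapes of an `ExecuteStatement` call. This is a hedged sketch: the workgroup, cluster, and database names are placeholder values, and the dictionaries only show how the query target is identified in each deployment model.

```python
# Sketch of Data API ExecuteStatement parameters (all values are placeholders).
# A serverless target is identified by WorkgroupName; a provisioned target
# is identified by ClusterIdentifier. The rest of the call is the same.
serverless_params = {
    "WorkgroupName": "my-workgroup",    # serverless: workgroup, not cluster
    "Database": "dev",
    "Sql": "SELECT current_user;",
}

provisioned_params = {
    "ClusterIdentifier": "my-cluster",  # provisioned: cluster identifier
    "Database": "dev",
    "Sql": "SELECT current_user;",
}
```

With an SDK such as boto3, either dictionary could be passed to the `redshift-data` client's `execute_statement` call; only the parameter that names the target differs.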

# Using the Amazon Redshift management interfaces for provisioned clusters
<a name="using-aws-sdk"></a>

**Note**  
This topic focuses on Amazon Redshift management interfaces for provisioned clusters. There are similar management interfaces for Amazon Redshift Serverless and Amazon Redshift Data API.

Amazon Redshift supports several management interfaces that you can use to create, manage, and delete Amazon Redshift clusters: the AWS SDKs, the AWS Command Line Interface (AWS CLI), and the Amazon Redshift management API.

**The Amazon Redshift API** – You can call the Amazon Redshift management API by submitting HTTP or HTTPS requests that use the `GET` or `POST` verb with a parameter named `Action`. Calling the Amazon Redshift API is the most direct way to access the Amazon Redshift service. However, it requires that your application handle low-level details such as error handling and generating a hash to sign the request.
+ For information about building and signing an Amazon Redshift API request, see [Signing an HTTP request](amazon-redshift-signing-requests.md).
+ For information about the Amazon Redshift API actions and data types for Amazon Redshift, see the [Amazon Redshift API reference](https://docs.aws.amazon.com/redshift/latest/APIReference/Welcome.html).

**AWS SDKs** – You can use the AWS SDKs to perform Amazon Redshift cluster-related operations. Several of the SDK libraries wrap the underlying Amazon Redshift API. They integrate the API functionality into the specific programming language and handle many of the low-level details, such as calculating signatures, handling request retries, and error handling. Calling the wrapper functions in the SDK libraries can greatly simplify the process of writing an application to manage an Amazon Redshift cluster.
+ Amazon Redshift is supported by the AWS SDKs for Java, .NET, PHP, Python, Ruby, and Node.js. The wrapper functions for Amazon Redshift are documented in the reference manual for each SDK. For a list of the AWS SDKs and links to their documentation, see [Tools for Amazon Web Services](https://aws.amazon.com/tools/).
+ This guide provides examples of working with Amazon Redshift using the Java SDK. For more general AWS SDK code examples, see [Code examples for Amazon Redshift using AWS SDKs](service_code_examples.md). 

**AWS CLI** – The CLI provides a set of command line tools that you can use to manage AWS services from Windows, Mac, and Linux computers. The AWS CLI includes commands based on the Amazon Redshift API actions.
+ For information about installing and setting up the Amazon Redshift CLI, see [Setting up the Amazon Redshift CLI](setting-up-rs-cli.md).
+ For reference material on the Amazon Redshift CLI commands, see [Amazon Redshift](https://docs.aws.amazon.com/cli/latest/reference/redshift/index.html) in the *AWS CLI Reference.*

# Using this service with an AWS SDK
<a name="sdk-general-information-section"></a>

AWS software development kits (SDKs) are available for many popular programming languages. Each SDK provides an API, code examples, and documentation that make it easier for developers to build applications in their preferred language.


| SDK documentation | Code examples | 
| --- | --- | 
| [AWS SDK for C++](https://docs.aws.amazon.com/sdk-for-cpp) | [AWS SDK for C++ code examples](https://github.com/awsdocs/aws-doc-sdk-examples/tree/main/cpp) | 
| [AWS CLI](https://docs.aws.amazon.com/cli) | [AWS CLI code examples](https://docs.aws.amazon.com/code-library/latest/ug/cli_2_code_examples.html) | 
| [AWS SDK for Go](https://docs.aws.amazon.com/sdk-for-go) | [AWS SDK for Go code examples](https://github.com/awsdocs/aws-doc-sdk-examples/tree/main/gov2) | 
| [AWS SDK for Java](https://docs.aws.amazon.com/sdk-for-java) | [AWS SDK for Java code examples](https://github.com/awsdocs/aws-doc-sdk-examples/tree/main/javav2) | 
| [AWS SDK for JavaScript](https://docs.aws.amazon.com/sdk-for-javascript) | [AWS SDK for JavaScript code examples](https://github.com/awsdocs/aws-doc-sdk-examples/tree/main/javascriptv3) | 
| [AWS SDK for Kotlin](https://docs.aws.amazon.com/sdk-for-kotlin) | [AWS SDK for Kotlin code examples](https://github.com/awsdocs/aws-doc-sdk-examples/tree/main/kotlin) | 
| [AWS SDK for .NET](https://docs.aws.amazon.com/sdk-for-net) | [AWS SDK for .NET code examples](https://github.com/awsdocs/aws-doc-sdk-examples/tree/main/dotnetv3) | 
| [AWS SDK for PHP](https://docs.aws.amazon.com/sdk-for-php) | [AWS SDK for PHP code examples](https://github.com/awsdocs/aws-doc-sdk-examples/tree/main/php) | 
| [AWS Tools for PowerShell](https://docs.aws.amazon.com/powershell) | [AWS Tools for PowerShell code examples](https://docs.aws.amazon.com/code-library/latest/ug/powershell_5_code_examples.html) | 
| [AWS SDK for Python (Boto3)](https://docs.aws.amazon.com/pythonsdk) | [AWS SDK for Python (Boto3) code examples](https://github.com/awsdocs/aws-doc-sdk-examples/tree/main/python) | 
| [AWS SDK for Ruby](https://docs.aws.amazon.com/sdk-for-ruby) | [AWS SDK for Ruby code examples](https://github.com/awsdocs/aws-doc-sdk-examples/tree/main/ruby) | 
| [AWS SDK for Rust](https://docs.aws.amazon.com/sdk-for-rust) | [AWS SDK for Rust code examples](https://github.com/awsdocs/aws-doc-sdk-examples/tree/main/rustv1) | 
| [AWS SDK for SAP ABAP](https://docs.aws.amazon.com/sdk-for-sapabap) | [AWS SDK for SAP ABAP code examples](https://github.com/awsdocs/aws-doc-sdk-examples/tree/main/sap-abap) | 
| [AWS SDK for Swift](https://docs.aws.amazon.com/sdk-for-swift) | [AWS SDK for Swift code examples](https://github.com/awsdocs/aws-doc-sdk-examples/tree/main/swift) | 

**Example availability**  
Can't find what you need? Request a code example by using the **Provide feedback** link at the bottom of this page.

# Signing an HTTP request
<a name="amazon-redshift-signing-requests"></a>

Amazon Redshift requires that every request you send to the management API be authenticated with a signature. This topic explains how to sign your requests. 

If you are using one of the AWS Software Development Kits (SDKs) or the AWS Command Line Interface, request signing is handled automatically, and you can skip this section. For more information about using AWS SDKs, see [Using the Amazon Redshift management interfaces for provisioned clusters](using-aws-sdk.md). For more information about using the Amazon Redshift Command Line Interface, go to [Amazon Redshift command line reference](https://docs.aws.amazon.com/cli/latest/reference/redshift/index.html).

To sign a request, you calculate a digital signature by using a cryptographic hash function. A cryptographic hash is a function that returns a unique hash value that is based on the input. The input to the hash function includes the text of your request and your secret access key that you can get from temporary credentials. The hash function returns a hash value that you include in the request as your signature. The signature is part of the `Authorization` header of your request.
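As a minimal sketch of this primitive, the following standard-library Python snippet shows a keyed HMAC-SHA256 hash, the building block used for request signing. The key and message here are made-up illustration values, not real credentials or request text.

```python
import hashlib
import hmac

def keyed_hash(secret: bytes, message: bytes) -> str:
    """Return the hex HMAC-SHA256 digest of message under secret."""
    return hmac.new(secret, message, hashlib.sha256).hexdigest()

# The same key and message always produce the same 64-character hex digest;
# changing either the key or the message changes the digest.
digest = keyed_hash(b"example-secret-key", b"Action=DescribeClusters")
```

Because the receiver holds the same secret, it can recompute the digest and compare it with the one you sent, which is exactly how Amazon Redshift validates a signed request.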

**Note**  
Users need programmatic access if they want to interact with AWS outside of the AWS Management Console. The way to grant programmatic access depends on the type of user that's accessing AWS.  
To grant users programmatic access, choose one of the following options.  



| Which user needs programmatic access? | To | By | 
| --- | --- | --- | 
| IAM | (Recommended) Use console credentials as temporary credentials to sign programmatic requests to the AWS CLI, AWS SDKs, or AWS APIs. |  Following the instructions for the interface that you want to use. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/mgmt/amazon-redshift-signing-requests.html)  | 
|  Workforce identity (Users managed in IAM Identity Center)  | Use temporary credentials to sign programmatic requests to the AWS CLI, AWS SDKs, or AWS APIs. |  Following the instructions for the interface that you want to use. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/mgmt/amazon-redshift-signing-requests.html)  | 
| IAM | Use temporary credentials to sign programmatic requests to the AWS CLI, AWS SDKs, or AWS APIs. | Following the instructions in [Using temporary credentials with AWS resources](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_use-resources.html) in the IAM User Guide. | 
| IAM | (Not recommended) Use long-term credentials to sign programmatic requests to the AWS CLI, AWS SDKs, or AWS APIs. |  Following the instructions for the interface that you want to use. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/mgmt/amazon-redshift-signing-requests.html)  | 

After Amazon Redshift receives your request, it recalculates the signature by using the same hash function and input that you used to sign the request. If the resulting signature matches the signature in the request, Amazon Redshift processes the request; otherwise, the request is rejected. 

Amazon Redshift supports authentication using [AWS signature version 4](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_aws-signing.html). The process for calculating a signature is composed of three tasks. These tasks are illustrated in the example that follows.
+   [Task 1: Create a canonical request](https://docs.aws.amazon.com/IAM/latest/UserGuide/create-signed-request.html#create-canonical-request)

  Rearrange your HTTP request into a canonical form. Using a canonical form is necessary because Amazon Redshift uses the same canonical form to calculate the signature it compares with the one you sent. 
+   [Task 2: Create a string to sign](https://docs.aws.amazon.com/IAM/latest/UserGuide/create-signed-request.html#create-string-to-sign)

  Create a string that you will use as one of the input values to your cryptographic hash function. The string, called the *string to sign*, is a concatenation of the name of the hash algorithm, the request date, a *credential scope* string, and the canonicalized request from the previous task. The *credential scope* string itself is a concatenation of date, region, and service information.
+   [Task 3: Calculate a signature](https://docs.aws.amazon.com/IAM/latest/UserGuide/create-signed-request.html#calculate-signature)

  Calculate a signature for your request by using a cryptographic hash function that accepts two input strings, your string to sign and a *derived key*. The derived key is calculated by starting with your secret access key and using the credential scope string to create a series of hash-based message authentication codes (HMAC-SHA256). 

## Example signature calculation
<a name="example-signature-calculation"></a>

The following example walks you through the details of creating a signature for a [CreateCluster](https://docs.aws.amazon.com/redshift/latest/APIReference/API_CreateCluster.html) request. You can use this example as a reference to check your own signature calculation method. Other reference calculations are included in the [Request signature examples section](https://docs.aws.amazon.com/IAM/latest/UserGuide/signature-v4-examples.html) of the IAM User Guide.

You can use a GET or POST request to send requests to Amazon Redshift. The difference between the two is that for a GET request, the parameters are sent as query string parameters, while for a POST request, they are included in the body of the request. The following example shows a POST request.

The example assumes the following:
+ The time stamp of the request is `Fri, 07 Dec 2012 00:00:00 GMT`.
+ The endpoint is US East (Northern Virginia) Region, `us-east-1`.

The general request syntax is: 

```
https://redshift.us-east-1.amazonaws.com/
   ?Action=CreateCluster
   &ClusterIdentifier=examplecluster
   &MasterUsername=masteruser
   &MasterUserPassword=12345678Aa
   &NumberOfNodes=2
   &NodeType=dc2.large
   &Version=2012-12-01
   &x-amz-algorithm=AWS4-HMAC-SHA256
   &x-amz-credential=AKIAIOSFODNN7EXAMPLE/20121207/us-east-1/redshift/aws4_request
   &x-amz-date=20121207T000000Z
   &x-amz-signedheaders=content-type;host;x-amz-date
```

The canonical form of the request calculated for [Task 1: Create a Canonical Request](#SignatureCalculationTask1) is:

```
POST
/

content-type:application/x-www-form-urlencoded; charset=utf-8
host:redshift.us-east-1.amazonaws.com
x-amz-date:20121207T000000Z

content-type;host;x-amz-date
55141b5d2aff6042ccd9d2af808fdf95ac78255e25b823d2dbd720226de1625d
```

The last line of the canonical request is the hash of the request body. The third line of the canonical request is empty because this is a POST request, so there is no query string. 

The string to sign for [Task 2: Create a String to Sign](#SignatureCalculationTask2) is:

```
AWS4-HMAC-SHA256
20121207T000000Z
20121207/us-east-1/redshift/aws4_request
06b6bef4f4f060a5558b60c627cc6c5b5b5a959b9902b5ac2187be80cbac0714
```

The first line of the *string to sign* is the algorithm, the second line is the time stamp, the third line is the *credential scope*, and the last line is a hash of the canonical request from [Task 1: Create a Canonical Request](#SignatureCalculationTask1). The service name to use in the credential scope is `redshift`.

For [Task 3: Calculate a Signature](#SignatureCalculationTask3), the derived key can be represented as:

```
derived key = HMAC(HMAC(HMAC(HMAC("AWS4" + YourSecretAccessKey,"20121207"),"us-east-1"),"redshift"),"aws4_request")
```

The derived key is calculated as a series of hash functions. Starting from the inner HMAC statement in the formula above, you concatenate the phrase **AWS4** with your secret access key and use this as the key to hash the date "20121207". The result of this hash becomes the key for the next hash function, which hashes "us-east-1", and so on outward through the service name and the terminator "aws4_request". 

After you calculate the derived key, you use it in a hash function that accepts two input strings, your string to sign and the derived key. For example, if you use the secret access key `wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY` and the string to sign given earlier, then the calculated signature is as follows:

```
9a6b557aa9f38dea83d9215d8f0eae54100877f3e0735d38498d7ae489117920
```
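The derived-key chain and final signature can be reproduced with the Python standard library. The following is a sketch that uses only the example values from this walkthrough (the demonstration secret key, date, region, service, and string to sign); a production signer would also construct the canonical request and string to sign itself.

```python
import hashlib
import hmac

def hmac_sha256(key: bytes, msg: str) -> bytes:
    """One step of the SigV4 key derivation: HMAC-SHA256(key, msg)."""
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

secret = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"  # demonstration key only

# Task 3: derive the signing key, working outward from the innermost HMAC.
k_date = hmac_sha256(("AWS4" + secret).encode("utf-8"), "20121207")
k_region = hmac_sha256(k_date, "us-east-1")
k_service = hmac_sha256(k_region, "redshift")
k_signing = hmac_sha256(k_service, "aws4_request")

# The string to sign from Task 2, exactly as shown in this walkthrough.
string_to_sign = "\n".join([
    "AWS4-HMAC-SHA256",
    "20121207T000000Z",
    "20121207/us-east-1/redshift/aws4_request",
    "06b6bef4f4f060a5558b60c627cc6c5b5b5a959b9902b5ac2187be80cbac0714",
])

# The final signature is the hex HMAC of the string to sign under the
# derived signing key.
signature = hmac.new(k_signing, string_to_sign.encode("utf-8"),
                     hashlib.sha256).hexdigest()
```

If the example values in this walkthrough are internally consistent, the computed hex digest matches the signature shown in the preceding block, and it is the value placed in the `Signature` component of the `Authorization` header.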

The final step is to construct the `Authorization` header. For the demonstration access key `AKIAIOSFODNN7EXAMPLE`, the header (with line breaks added for readability) is:

```
Authorization: AWS4-HMAC-SHA256 Credential=AKIAIOSFODNN7EXAMPLE/20121207/us-east-1/redshift/aws4_request, 
SignedHeaders=content-type;host;x-amz-date, 
Signature=9a6b557aa9f38dea83d9215d8f0eae54100877f3e0735d38498d7ae489117920
```

# Setting up the Amazon Redshift CLI
<a name="setting-up-rs-cli"></a>

This section explains how to set up and run the AWS CLI command line tools for use in managing Amazon Redshift. The Amazon Redshift command line tools run on the AWS Command Line Interface (AWS CLI), which in turn uses Python ([https://www.python.org/](https://www.python.org)). The AWS CLI can be run on any operating system that supports Python.

## Installing the AWS Command Line Interface
<a name="setting-up.installing-the-tools"></a>

To begin using the Amazon Redshift command line tools, you first set up the AWS CLI, and then you add configuration files that define the Amazon Redshift CLI options.

If you have already installed and configured the AWS CLI for another AWS service, you can skip this procedure.

**To install the AWS Command Line Interface**

1. Go to [Install or update to the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-set-up.html), and then follow the instructions for installing the AWS CLI.

   For CLI access, you need an access key ID and a secret access key. Use temporary credentials instead of long-term access keys when possible. Temporary credentials include an access key ID, a secret access key, and a security token that indicates when the credentials expire. For more information, see [ Using temporary credentials with AWS resources](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_use-resources.html) in the *IAM User Guide*.

1. Create a file containing configuration information such as your access keys, default region, and command output format. Then set the `AWS_CONFIG_FILE` environment variable to reference that file. For detailed instructions, go to [Configuring the AWS command line interface](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html) in the AWS Command Line Interface User Guide.

1. Run a test command to confirm that the AWS CLI interface is working. For example, the following command should display help information for the AWS CLI:

   ```
   aws help
   ```

   The following command should display help information for Amazon Redshift:

   ```
   aws redshift help
   ```
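As an illustration of step 2 above, the following shell sketch writes a minimal configuration file and points `AWS_CONFIG_FILE` at it. The file path, profile name, Region, and output format are placeholder values; credentials would normally come from temporary credentials or a separate credentials file rather than this file.

```shell
# Write a minimal AWS CLI config file (placeholder values).
cat > ./example-aws-config <<'EOF'
[default]
region = us-east-1
output = json
EOF

# Tell the AWS CLI to read configuration from this file instead of
# the default ~/.aws/config.
export AWS_CONFIG_FILE=./example-aws-config
```

After this, commands such as `aws redshift help` read their default Region and output format from the file named by `AWS_CONFIG_FILE`.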

For reference material on the Amazon Redshift CLI commands, go to [Amazon Redshift](https://docs.aws.amazon.com/cli/latest/reference/redshift/index.html) in the AWS CLI Reference.