

Amazon Redshift will no longer support the creation of new Python UDFs starting with Patch 198. Existing Python UDFs will continue to function until June 30, 2026. For more information, see the [blog post](https://aws.amazon.com/blogs/big-data/amazon-redshift-python-user-defined-functions-will-reach-end-of-support-after-june-30-2026/).

# Amazon Redshift Serverless
<a name="working-with-serverless"></a>

Amazon Redshift Serverless makes it convenient for you to run and scale analytics without having to provision and manage data warehouse infrastructure. With Amazon Redshift Serverless, data analysts, developers, and data scientists can use Amazon Redshift to get insights from data in seconds by loading data into and querying records from the data warehouse in the cloud. Amazon Redshift automatically provisions and scales data warehouse capacity to deliver fast performance for demanding and unpredictable workloads. You pay only for the capacity that you use. You can benefit from this simplicity without changing your existing analytics and business intelligence applications.

For information about Amazon Redshift Serverless SLAs, see [Amazon Redshift Service Level Agreement](https://aws.amazon.com/redshift/sla/).

# What is Amazon Redshift Serverless?
<a name="serverless-whatis"></a>

Amazon Redshift Serverless automatically provisions data warehouse capacity and intelligently scales the underlying resources. Amazon Redshift Serverless adjusts capacity in seconds to deliver consistently high performance and simplified operations for even the most demanding and volatile workloads. 

With Amazon Redshift Serverless, you can benefit from the following features:
+ Access and analyze data without the need to set up, tune, and manage Amazon Redshift provisioned clusters.
+ Use the superior Amazon Redshift SQL capabilities, industry-leading performance, and data-lake integration to seamlessly query across a data warehouse, a data lake, and operational data sources.
+ Deliver consistently high performance and simplified operations for the most demanding and volatile workloads with intelligent and automatic scaling.
+ Use workgroups and namespaces to organize compute resources and data with granular cost controls.
+ Pay only when the data warehouse is in use.

With Amazon Redshift Serverless, you can use the console to access a serverless data warehouse, or use APIs to build applications. Through the data warehouse, you can access your Amazon Redshift managed storage and your Amazon S3 data lake.

This video shows you how Amazon Redshift Serverless makes it easy to run and scale analytics without having to manage data warehouse infrastructure:

[![AWS Videos](https://img.youtube.com/vi/XcRJjXudIf8/0.jpg)](https://www.youtube.com/watch?v=XcRJjXudIf8)


# Amazon Redshift Serverless console
<a name="serverless-console"></a>

To learn how to get started with the Amazon Redshift Serverless console, watch the following video. 

[![AWS Videos](https://img.youtube.com/vi/eq4o26Hpuac/0.jpg)](https://www.youtube.com/watch?v=eq4o26Hpuac)


## Serverless dashboard
<a name="serverless-console-dashboard"></a>

On the **Serverless dashboard** page, you can view a summary of your resources and graphs of your usage.
+ **Namespace overview** – This section shows the number of snapshots and datashares within your namespace.
+ **Workgroups** – This section shows all of the workgroups within Amazon Redshift Serverless.
+ **Queries metrics** – This section shows query activity for the last hour. 
+ **RPU capacity used** – This section shows capacity used for the last hour. 
+ **Free trial** – This section shows the free trial credits remaining in your AWS account. This covers all usage of Amazon Redshift Serverless resources and operations, including snapshots, storage, workgroups, and so on, under the same account.
+ **Alarms** – This section shows the alarms you configured in Amazon Redshift Serverless.

## Data backup
<a name="serverless-console-data-backup"></a>

On the **Data backup** tab, you can work with the following:
+ **Snapshots** – You can create, delete, and manage snapshots of your Amazon Redshift Serverless data. Snapshots are retained indefinitely by default, but you can configure a retention period of any value from 1 to 3653 days. You can authorize AWS accounts to restore namespaces from a snapshot.
+ **Recovery points** – Displays the recovery points that are automatically created so you can recover from an accidental write or delete within the last 24 hours. To recover data, you can restore a recovery point to any available namespace. If you want to keep a point of recovery for a longer time period, you can create a snapshot from a recovery point. Snapshots created this way are also retained indefinitely by default, with a configurable retention period of 1 to 3653 days.

## Data access
<a name="serverless-console-data-access"></a>

On the **Data access** tab, you can work with the following:
+ **Network and security** settings – You can view VPC-related values, AWS KMS encryption values, and audit logging values. You can update only audit logging.
+ **AWS KMS key** – The AWS KMS key used to encrypt resources in Amazon Redshift Serverless. 
+ **Permissions** – You can manage the IAM roles that Amazon Redshift Serverless can assume to use resources on your behalf. For more information, see [Identity and access management in Amazon Redshift Serverless](serverless-iam.md).
+ **Redshift-managed VPC endpoints** – You can access your Amazon Redshift Serverless instance from another VPC or subnet. For more information, see [Connecting to Amazon Redshift Serverless from other VPC endpoints](serverless-connecting.md#serverless-vpc-connect).

## Limits
<a name="serverless-console-limits"></a>

On the **Limits** tab, you can work with the following:
+ **Base capacity in Redshift processing units (RPUs)** settings – You can set the base capacity used to process your workload. To improve query performance, increase your RPU value. 
+ **Usage limits** – The maximum compute resources that your Amazon Redshift Serverless instance can use in a time period before an action is initiated. You can limit the amount of resources that Amazon Redshift Serverless uses to run your workload. Usage is measured in Redshift Processing Unit (RPU) hours. An RPU hour is the number of RPUs used multiplied by the time they run, in hours. You determine an action to occur when you reach a limit that you set, as follows: 
  + Send an alert.
  + Log an entry to a system table.
  + Turn off user queries.

  You can set up to four limits.
+  **Query limits** – You can add limits to monitor and control query performance. For more information about query monitoring rules, see [WLM query monitoring rules](https://docs.aws.amazon.com/redshift/latest/dg/cm-c-wlm-query-monitoring-rules.html). 

For more information, see [Compute capacity for Amazon Redshift Serverless](serverless-capacity.md).
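The RPU-hour accounting behind usage limits can be sketched as follows. This is an illustrative Python sketch, not an AWS API: the function names are hypothetical, and actual limits are configured in the console or through the Redshift Serverless API.

```python
# Hypothetical sketch of RPU-hour accounting for usage limits.
# Not an AWS API; names and logic are illustrative only.

def rpu_hours(rpu_capacity: float, duration_seconds: float) -> float:
    """RPU hours = RPUs in use multiplied by the fraction of an hour they ran."""
    return rpu_capacity * (duration_seconds / 3600)

def check_usage_limit(total_rpu_hours: float, limit_rpu_hours: float, action: str) -> str:
    """Return the configured action ('alert', 'log', or 'deactivate') once a limit is reached."""
    if total_rpu_hours >= limit_rpu_hours:
        return action
    return "none"

# A workgroup running at 128 RPUs for 30 minutes consumes 64 RPU hours.
usage = rpu_hours(128, 1800)
print(usage)                                   # 64.0
print(check_usage_limit(usage, 60, "alert"))   # alert
```

Because usage is capacity multiplied by time, a short burst at high capacity can consume as many RPU hours as a long run at low capacity.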

## Datashares
<a name="serverless-console-datashares"></a>

On the **Datashares** tab, you can work with the following:
+ **Datashares created in my namespace** settings – You can create a datashare and share it with other namespaces and AWS accounts. 
+ **Datashares from other namespaces and AWS accounts** – You can create a database from a datashare from other namespaces and AWS accounts. 

For more information about data sharing, see [Data sharing in Amazon Redshift Serverless](serverless-datasharing.md).

## Query and database monitoring
<a name="serverless-console-database-monitoring"></a>

On the **Query and database monitoring** page, you can view graphs of your **Query history** and **Database performance**.

On the **Query history** tab, you see the following graphs (you can choose between **Query list** and **Resource metrics**):
+ **Query runtime** – This graph shows which queries are running in the same timeframe. Choose a bar in the graph to view more query execution details. 
+ **Queries and loads** – This section lists queries and loads by **Query ID**. 
+ **RPU capacity used** – This graph shows overall capacity in Redshift Processing Units (RPUs). 
+ **Database connections** – This graph shows the number of active database connections. 

## Database performance
<a name="serverless-console-database-performance"></a>

On the **Database performance** tab, you see the following graphs:
+ **Queries completed per second** – This graph shows the average number of queries completed per second. 
+ **Queries duration** – This graph shows the average amount of time to complete a query. 
+ **Database connections** – This graph shows the number of active database connections. 
+ **Running queries** – This graph shows the total number of running queries at a given time. 
+ **Queued queries** – This graph shows the total number of queries queued at a given time. 
+ **Query run time breakdown** – This graph shows the total time queries spent running by query type. 

## Resource monitoring
<a name="serverless-console-resource-monitoring"></a>

On the **Resource monitoring** page, you can view graphs of your consumed resources. You can filter the data based on several facets.
+ **Metric filter** – You can use metric filters to filter data for a specific workgroup, and to choose the time range and time interval.
+ **RPU capacity used** – This graph shows the overall capacity in Redshift processing units (RPUs). 
+ **Compute usage** – This graph shows the usage of RPU hours by period for the selected time range. For time ranges of less than 6 hours, RPU hours are shown in exact time. For time ranges of 6 hours or more, RPU hours are shown as averages.
+ **Extra compute for automatic optimizations charged seconds** – This graph shows the number of RPU-seconds charged for automatic database optimizations for the selected time range. You're charged for automatic optimizations when Amazon Redshift uses extra compute resources to run them. For more information, see [Allocating extra compute resources for automatic database optimization](https://docs.aws.amazon.com/redshift/latest/dg/t_extra-compute-autonomics.html).

## Datashares
<a name="serverless-console-datashares"></a>

On the **Datashares** page, you can manage datashares **In my account** and **From other accounts**. For more information about data sharing, see [Data sharing in Amazon Redshift Serverless](serverless-datasharing.md).

## AWS Glue Data Catalog
<a name="serverless-console-gdc"></a>

On the **AWS Glue Data Catalog** tab, you can view the registration status of your namespace in the AWS Glue Data Catalog. This tab only appears after you've started the registration process. For more information about registering namespaces to the AWS Glue Data Catalog, see [Apache Iceberg compatibility for Amazon Redshift](https://docs.aws.amazon.com/redshift/latest/dg/iceberg-integration_overview.html) in the *Amazon Redshift Database Developer Guide*.

# Considerations when using Amazon Redshift Serverless
<a name="serverless-usage-considerations"></a>

For a list of AWS Regions where Amazon Redshift Serverless is available, see the endpoints listed for the [Redshift Serverless API](https://docs.aws.amazon.com/general/latest/gr/redshift-service.html) in the *Amazon Web Services General Reference*.

Some resources used by Amazon Redshift Serverless are subject to quotas. For more information, see [Quotas for Amazon Redshift Serverless objects](amazon-redshift-limits.md#serverless-limits-account). 

When you DECLARE a cursor, the result-set size limits for Amazon Redshift Serverless are described in [DECLARE](https://docs.aws.amazon.com/redshift/latest/dg/declare.html). Amazon Redshift Serverless has a maximum total cursor result set size of 150,000 MB.

*Online patching* – Amazon Redshift Serverless offers automatic software updates without requiring traditional maintenance windows. When a new update is available, the system applies it within 14 days of release, during idle periods. The update process typically takes up to 15 minutes. If no 15-minute idle period occurs within 14 days, your Serverless endpoint may experience brief unavailability. During this time, application connections to endpoints may fail. You can monitor Redshift patch releases in the "Cluster versions for Amazon Redshift" documentation. For information about Amazon Redshift Serverless SLAs, see [Amazon Redshift Service Level Agreement](https://aws.amazon.com/redshift/sla/).

*Track* – When Amazon Redshift releases a new workgroup version, your workgroup is updated automatically. You can control whether your workgroup is updated to the most recent release or to the previous release. For information about tracks, see [Tracks for Amazon Redshift provisioned clusters and serverless workgroups](tracks.md).

*Availability Zone IDs* – When you configure your Amazon Redshift Serverless instance, open **Additional considerations**, and make sure that the subnet IDs provided in **Subnet** contain at least two of the supported Availability Zone IDs. 
+ For workgroups without Enhanced VPC Routing (EVR), you need two Availability Zones (AZs).
+ For workgroups with EVR, you need three AZs.

To see the subnet to Availability Zone ID mapping, go to the VPC console and choose **Subnets** to see the list of subnet IDs with their Availability Zone IDs. Verify that your subnet is mapped to a supported Availability Zone ID. To create a subnet, see [Create a subnet in your VPC](https://docs.aws.amazon.com/vpc/latest/userguide/working-with-vpcs.html#AddaSubnet) in the *Amazon VPC User Guide*. 

*Two subnets (without EVR)* – You must have at least two subnets, and they must span across two Availability Zones. 

*Three subnets (with EVR ONLY)* – You must have at least three subnets when you use EVR, and they must span across three or more Availability Zones. 

*Free IP address requirements* – When using Redshift Serverless without enhanced VPC routing (EVR) enabled, you must have at least three free IP addresses available in each subnet. This is required for the proper functioning of the service.

When updating the RPUs for Redshift Serverless deployment, at least three free IP addresses must be available in each subnet to accommodate the service's operational requirements.

For more information about allocating IP addresses and understanding IP addressing in Amazon VPC, see [IP addressing for your VPCs and subnets](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-ip-addressing.html) in the *Amazon VPC User Guide*.

------
#### [ Without EVR  ]

If you don't use enhanced VPC routing, you must have at least three free IP addresses for each subnet, regardless of the base RPU size (4 to 1024 RPUs) or the RPU usage of your workgroup or workgroups. The three-IP-address requirement also applies to workgroups that have AI-driven scaling and optimization capabilities enabled. 

------
#### [ With Enhanced VPC Routing (EVR)  ]

If you use enhanced VPC routing with Redshift Serverless, the minimum number of IP addresses required when creating a workgroup is as follows:


| Redshift Processing Units (RPUs) | Free IP addresses required | Minimum CIDR size | 
| --- | --- | --- | 
| 4 | 9 | /27 | 
| 8 | 9 | /27 | 
| 16 | 13 | /27 | 
| 32 | 13 | /27 | 
| 64 | 21 | /27 | 
| 128 | 37 | /26 | 
| 256 | 69 | /25 | 
| 512 | 133 | /24 | 
| 1024 | 261 | /23 | 

With EVR, you also need free IP addresses when updating your workgroup to use more RPUs. The number of free IP addresses required when updating the subnets for a workgroup is as follows: 


| Redshift Processing Units (RPUs) | Updated Redshift Processing Units (RPUs) | Free IP addresses required | 
| --- | --- | --- | 
| 4 | 8 | 10 | 
| 8 | 16 | 10 | 
| 16 | 32 | 13 | 
| 32 | 64 | 16 | 
| 64 | 128 | 28 | 
| 128 | 256 | 52 | 
| 256 | 512 | 100 | 
| 512 | 1024 | 197 | 

**Note**  
The maximum base RPU capacity of 1024 is only available in the following AWS Regions:
+ US East (N. Virginia)
+ US East (Ohio)
+ US West (Oregon)
+ Europe (Ireland)
+ Europe (London)
------

For more information on allocating IP addresses, see [IP addressing](https://docs.aws.amazon.com/vpc/latest/userguide/how-it-works.html#vpc-ip-addressing) in the *Amazon VPC User Guide*.
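As a rough planning aid, the free IP address rules above can be encoded in a small sketch. This is illustrative Python, not an AWS API; the names are hypothetical, and the values simply restate the workgroup-creation table above along with the fixed three-address rule for workgroups without EVR.

```python
# Hypothetical planning helper for subnet sizing; not an AWS API.
# Values restate the EVR workgroup-creation table in this section.

# Free IPs and minimum CIDR size per subnet when creating an EVR workgroup,
# keyed by base RPUs.
EVR_CREATE_REQUIREMENTS = {
    4: (9, "/27"),    8: (9, "/27"),    16: (13, "/27"),
    32: (13, "/27"),  64: (21, "/27"),  128: (37, "/26"),
    256: (69, "/25"), 512: (133, "/24"), 1024: (261, "/23"),
}

def free_ips_required(rpus: int, evr_enabled: bool) -> int:
    """Free IP addresses needed per subnet when creating a workgroup."""
    if not evr_enabled:
        return 3  # fixed requirement without EVR, regardless of RPUs
    ips, _min_cidr = EVR_CREATE_REQUIREMENTS[rpus]
    return ips

print(free_ips_required(128, evr_enabled=True))   # 37
print(free_ips_required(128, evr_enabled=False))  # 3
```

A lookup like this makes it easy to check whether an existing subnet's free address count and CIDR size can accommodate a planned base capacity before you create or update the workgroup.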

*Storage space after migration* – When migrating small Amazon Redshift provisioned clusters to Amazon Redshift Serverless, you might see an increase in storage-space allocation after migration. This is a result of optimized storage-space allocation, resulting in preallocated storage space. This space is used over a period of time as data grows in Amazon Redshift Serverless.

*Datasharing between Amazon Redshift Serverless and Amazon Redshift provisioned clusters* – When sharing data where Amazon Redshift Serverless is the producer and a provisioned cluster is the consumer, the provisioned cluster must have a cluster version later than 1.0.38214. If you use an earlier cluster version, an error occurs when you run a query. You can view the cluster version on the Amazon Redshift console on the **Maintenance** tab. You can also run `SELECT version();`.

*Max query execution time* – Elapsed execution time for a query, in seconds. Execution time doesn't include time spent waiting in a queue. If a query exceeds the set execution time, Amazon Redshift Serverless stops the query. Valid values are 0–86,399.

*Migrating tables with interleaved sort keys* – When migrating Amazon Redshift provisioned clusters to Amazon Redshift Serverless, Redshift converts tables with interleaved sort keys and DISTSTYLE KEY to compound sort keys. The DISTSTYLE doesn't change. For more information on distribution styles, see [Working with data distribution styles](https://docs.aws.amazon.com/redshift/latest/dg/t_Distributing_data.html) in the *Amazon Redshift Database Developer Guide*. For more information on sort keys, see [Working with sort keys](https://docs.aws.amazon.com/redshift/latest/dg/t_Sorting_data.html). 

*VPC sharing* – You can create Amazon Redshift Serverless workgroups in a shared VPC. If you do so, we recommend that you don't delete the resource share as it can result in the workgroup becoming unavailable.

*IPv6 support* – Amazon Redshift Serverless supports configuring your Amazon Redshift workgroups with both IPv4 and IPv6 addresses (dual-stack) or IPv4-only configurations within your AWS Virtual Private Clouds (VPCs). You can enable IPv6 support when creating new Amazon Redshift Serverless workgroups or modify existing workgroups to support IPv6 addressing. With this capability, you can deploy Amazon Redshift Serverless warehouses in IPv6-enabled VPC subnets and configure network settings to support the expanding address space requirements of your applications. Your applications can communicate with Amazon Redshift Serverless warehouses using either IPv4 or IPv6 protocols, ensuring compatibility with both existing and future network architectures.

# Compute capacity for Amazon Redshift Serverless
<a name="serverless-capacity"></a>

With Amazon Redshift Serverless, compute capacity scales up and down automatically to match your workload requirements. Compute capacity refers to the processing power and memory allocated to your Amazon Redshift Serverless workloads. Common use cases include handling peak traffic periods, running complex analytics, or processing large volumes of data efficiently. The following terms provide details on how Amazon Redshift manages compute capacity.

**RPUs**

Amazon Redshift Serverless measures data warehouse capacity in Redshift Processing Units (RPUs). RPUs are resources used to handle workloads. One RPU provides 16 GB of memory.

**Base capacity**

This setting specifies the base data warehouse capacity Amazon Redshift uses to serve queries, specified in Redshift Processing Units (RPUs). Setting a higher base capacity improves query performance, especially for data processing jobs that require a lot of resources. The default base capacity for Amazon Redshift Serverless is 128 RPUs. You can adjust the **Base capacity** setting from 4 RPUs to 512 RPUs. You can set this value to 4 RPUs, or in units of 8 at or above 8 RPUs (8, 16, 24, ..., 512). You can set this value using the AWS console, the `UpdateWorkgroup` API operation, or the `update-workgroup` operation in the AWS CLI. 

With a minimum base capacity of 4 RPUs, you have the flexibility to run simple to complex workloads based on your data warehouse cost and capacity requirements. The 4 RPU base capacity is targeted at warehouses that contain less than 32 TB of data, and the 8, 16, and 24 RPU base capacities are targeted at workloads that require less than 128 TB of data. If your data requirements are greater than 128 TB, you must use a minimum of 32 base RPUs. In addition, for workloads that have tables with a large number of columns and higher concurrency, we recommend using 32 or more base RPUs.

The maximum base RPUs available, 1024, adds the highest level of computing resources to your workloads. This provides more flexibility to support workloads of large complexity and accelerates loading and querying data. 

**Note**  
An expanded maximum base RPU capacity of 1024 is available in the following AWS Regions. In other Regions, the maximum base capacity is 512 RPUs.
+ US East (N. Virginia)
+ US East (Ohio)
+ US West (Oregon)
+ Europe (Ireland)
+ Europe (Frankfurt)

You can increment or decrement RPUs in units of 32 when setting a base capacity between 512 and 1024 RPUs.
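The valid base capacity values described above (4 RPUs; multiples of 8 from 8 through 512; and, in Regions that support it, multiples of 32 up to 1024) can be expressed as a small validation helper, together with the 16 GB of memory each RPU provides. This is an illustrative Python sketch with hypothetical names, not part of any AWS SDK.

```python
# Hypothetical validator for the base capacity rules in this section.
# Not an AWS API; names and logic are illustrative only.

def is_valid_base_capacity(rpus: int, max_rpu_region: int = 512) -> bool:
    """True if rpus is a valid base capacity setting.

    Valid values: 4; any multiple of 8 from 8 through 512; and, in Regions
    that support up to 1024, multiples of 32 between 512 and 1024.
    """
    if rpus == 4:
        return True
    if 8 <= rpus <= 512 and rpus % 8 == 0:
        return True
    if 512 < rpus <= max_rpu_region and rpus % 32 == 0:
        return True
    return False

def memory_gib(rpus: int) -> int:
    """Memory available at a given capacity; one RPU provides 16 GB."""
    return rpus * 16

print(is_valid_base_capacity(128))                        # True
print(is_valid_base_capacity(12))                         # False: not a multiple of 8
print(is_valid_base_capacity(1024, max_rpu_region=1024))  # True
print(memory_gib(4))                                      # 64
```

The `max_rpu_region` parameter is a stand-in for the Region-dependent maximum noted above; pass 1024 only for Regions that support the expanded capacity.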

If you manage larger and more complex workloads, consider increasing the size of your Redshift Serverless data warehouse. Larger warehouses have access to more compute resources, allowing them to process queries more efficiently. 

Following are some instances where having a higher base capacity is beneficial: 
+ You have complex queries that take a long time to run.
+ Your tables have a large number of columns.
+ Your queries have a high number of JOINs.
+ Your queries aggregate or scan large amounts of data from an external source, such as a data lake.

For more information about Amazon Redshift Serverless quotas and limits, see [Quotas for Amazon Redshift Serverless objects](amazon-redshift-limits.md#serverless-limits-account). 

## Considerations and limitations for Amazon Redshift Serverless capacity
<a name="serverless-rpu-capacity-considerations"></a>

The following are considerations and limitations for Amazon Redshift Serverless capacity. For general Redshift Serverless considerations, see [Considerations when using Amazon Redshift Serverless](serverless-usage-considerations.md).
+ Configurations of 4 base RPUs support managed storage capacity of up to 32 TB. If you're using more than 32 TB of managed storage, you can't set the base RPU to less than 8 RPUs.
+ Configurations of 8 or 16 base RPUs support Redshift managed storage capacity of up to 128 TB. If you're using more than 128 TB of managed storage, you can't set the base capacity to less than 32 RPUs.
+ Editing your workgroup's base capacity might cancel some of the queries running on your workgroup.
+ Redshift Serverless scales RPUs for your data warehouse using these increments:
  + 4 to 8 RPUs: Increases in steps of 4 RPUs.
  + 8 to 512 RPUs: Increases in steps of 8 RPUs.
  + 512 to 1024 RPUs: Increases in steps of 32 RPUs.
+ Vacuum boost is supported only for 8 RPUs and above. For fewer than 8 RPUs, use the following command instead:

  ```
  VACUUM [FULL | SORT ONLY | DELETE ONLY | REINDEX | RECLUSTER] [table_name] [TO threshold PERCENT]
  ```
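The scaling increments and storage-based minimums above can be sketched as follows. This is an illustrative Python sketch with hypothetical function names; the actual scaling decisions are made by the service.

```python
# Hypothetical sketch of the RPU scaling increments and storage-based
# base-capacity minimums in this section. Not an AWS API.

def next_scale_step(current_rpus: int) -> int:
    """Next RPU value when scaling up, per the increments listed above."""
    if current_rpus < 8:
        return current_rpus + 4    # 4 to 8 RPUs: steps of 4
    if current_rpus < 512:
        return current_rpus + 8    # 8 to 512 RPUs: steps of 8
    return min(current_rpus + 32, 1024)  # 512 to 1024 RPUs: steps of 32

def min_base_rpus_for_storage(storage_tb: float) -> int:
    """Minimum base RPUs allowed for a given amount of managed storage."""
    if storage_tb > 128:
        return 32
    if storage_tb > 32:
        return 8
    return 4

print(next_scale_step(4))             # 8
print(next_scale_step(8))             # 16
print(next_scale_step(512))           # 544
print(min_base_rpus_for_storage(50))  # 8
```
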

### Redshift Serverless with 4 Redshift Processing Units (RPUs) capacity
<a name="serverless-rpu-capacity-considerations-4rpu"></a>

Redshift Serverless with a 4 RPU base capacity is ideal for smaller or less demanding workloads, offering a flexible and cost-effective entry point. This entry-level configuration supports data warehouses with up to the following resources:
+ 32 TB of Redshift managed storage.
+ 100 columns per table.
+ 64 GB of memory.

If you need to exceed these limitations, you must increase your base capacity manually rather than relying on autoscaling. After you scale your data warehouse beyond 4 RPUs, it continues to use more RPUs, and Amazon Redshift won't scale it back down to 4 RPUs.

**Note**  
You can create tables with more than 100 columns when using 4 base RPUs; however, we recommend that you limit tables to 100 columns. Exceeding this limit may cause your data warehouse to exhaust its memory during query execution, which decreases performance. 

You can create data warehouses that use 4 RPUs in the following AWS Regions:
+ US East (Ohio)
+ US East (N. Virginia)
+ US West (N. California)
+ US West (Oregon)
+ Asia Pacific (Mumbai)
+ Asia Pacific (Singapore)
+ Asia Pacific (Sydney)
+ Asia Pacific (Tokyo)
+ Europe (Ireland)
+ Europe (Stockholm)

## AI-driven scaling and optimization
<a name="serverless-auto-optimization"></a>

The AI-driven scaling and optimization feature is available in all AWS Regions where Amazon Redshift Serverless is available. 

Amazon Redshift Serverless offers an advanced AI-driven scaling and optimization feature to meet diverse workload requirements. Data warehouses may have the following provisioning issues:
+ Data warehouses may be over-provisioned to improve performance of resource-intensive queries.
+ Data warehouses may be under-provisioned to save costs.

Striking the right balance between performance and cost for data warehouse workloads is challenging, especially with ad hoc queries and growing data volumes. When running mixed workloads comprising both low and high resource-intensive queries, there is a need for intelligent scaling. The AI-driven scaling and optimization feature automatically scales Serverless compute, or RPUs, in response to data growth. This feature also helps maintain query performance within targeted price-performance objectives, dynamically allocating compute resources as data volumes increase so that queries continue to meet performance targets. This allows the service to adapt seamlessly to changing workload requirements, without the need for manual intervention or complex capacity planning.

Amazon Redshift Serverless provides a more comprehensive and responsive scaling solution based on factors such as query complexity and data volume. This feature allows for optimizing workload price-performance while maintaining the flexibility to handle varying workloads and growing datasets efficiently. Amazon Redshift Serverless can automatically make AI-driven optimizations to your Amazon Redshift Serverless endpoint to meet your specified price-performance targets for your Serverless workgroup. This automatic price-performance optimization is especially helpful if you don't know what base capacity to set for your workloads, or if some parts of your workload might benefit from more allocated resources.

**Example**

If your organization typically runs workloads that only require 32 RPU but suddenly introduces a more complex query, you might not know the appropriate base capacity. Setting a higher base capacity yields better performance but also incurs higher costs, so the cost might not match your expectations. Using AI-driven scaling and resource optimization, Amazon Redshift Serverless automatically adjusts the RPUs to meet your price-performance targets while keeping costs optimized for your organization. This automatic optimization is useful regardless of workload size. The automatic optimization can help you meet your organization's price-performance targets if you have any number of complex queries. 

**Note**  
Price-performance targets are a workgroup-specific setting. Different workgroups can have different price-performance targets.

To keep costs predictable, set a limit of maximum capacity that Amazon Redshift Serverless is allowed to allocate to your workloads.

To configure price-performance targets, use the AWS console. You must enable your price-performance target explicitly when you create your Serverless workgroup. You can also modify the price-performance target after you create the Serverless workgroup. When you enable the price-performance target, it is set to **Balanced** by default.

**To edit the price-performance target for your workgroup**

1. In the Amazon Redshift Serverless console, choose **Workgroup configuration**.

1. Choose the workgroup for which you want to edit the price-performance target. Choose the **Performance** tab, then choose **Edit**.

1. Choose **Price-performance target**, and adjust the slider to your desired setting.

1. Choose **Save changes**.

1. To update the maximum amount of RPUs that Amazon Redshift Serverless can allocate to your workload, choose the **Limits** tab of the **Workgroup Configuration** section.

You can use the **Price-performance target** slider to set your desired balance between cost and performance. By moving the slider, you can choose one of the following options:
+ **Optimizes for cost** — This setting prioritizes cost savings. Amazon Redshift Serverless attempts to automatically scale up compute capacity when doing so doesn’t incur additional charges. Amazon Redshift Serverless also attempts to scale down compute resources for lower cost, possibly increasing query runtimes.
+ **Balanced** — This setting creates a balance between performance and cost. Amazon Redshift Serverless scales for performance, and may result in a moderate cost increase or decrease. This is the recommended setting for most Amazon Redshift Serverless data warehouses.
+ **Optimizes for performance** — This setting prioritizes performance. Amazon Redshift scales aggressively for high performance, potentially incurring higher costs.
+ **Intermediate positions** — You can also set the slider to one of two intermediate positions between **Balanced** and **Optimizes for cost** or **Optimizes for performance**. Use these settings if full optimization for cost or performance is too extreme.

### Considerations when choosing your price-performance target
<a name="serverless-auto-optimization-considerations"></a>

You can use the price-performance slider to choose your desired price-performance target for your workload. The AI-driven scaling and optimization algorithm learns over time from your workload history, and improves prediction and decision accuracy.

**Example**

For this example, assume a query that takes seven minutes and costs \$17. The following figure shows the query runtimes and cost with no scaling.

![\[Graph for example query for Amazon Redshift Serverless autoscaling.\]](http://docs.aws.amazon.com/redshift/latest/mgmt/images/autoscale_example_query.png)


A given query might scale in a few different ways, as shown below. Based on the price-performance target you choose, AI-driven scaling predicts how the query trades off performance and cost, and scales it accordingly. Choosing the different slider options yields the following results: 

![\[Graph for example query for Amazon Redshift Serverless autoscaling.\]](http://docs.aws.amazon.com/redshift/latest/mgmt/images/autoscale_example_scaling.png)

+ **Optimizes for Cost** — With the **Optimizes for Cost** option, your data warehouse scales favoring choices that lower your costs. In the preceding example, the superlinear scaling approach demonstrates this behavior. Scaling only occurs if it can be done in a cost-effective manner according to the scaling model predictions. If the scaling models predict that cost-optimized scaling isn't possible for the given workload, then the data warehouse won't scale.
+ **Balanced** — With the **Balanced** option, the system scales while balancing both cost and performance considerations, with a potential limited increase in cost. The **Balanced** option performs superlinear, linear, and possibly sublinear workload scaling. 
+ **Optimizes for Performance** — With the **Optimizes for Performance** option, in addition to the previous methods for improving performance, the system also scales even if the costs are higher and possibly not proportional to the runtime improvement. With **Optimizes for Performance**, the system performs superlinear, linear, and sublinear scaling where possible. The closer the slider is to the **Optimizes for Performance** position, the more Amazon Redshift Serverless permits sublinear scaling.

Note the following when setting the **Price-Performance** slider:
+ You can change the price-performance setting at any time, but workload scaling won't change immediately. Scaling behavior changes over time as the system learns about the current workload. We suggest monitoring a Serverless workgroup for 1-3 days to verify the impact of the new setting.
+ The price-performance target works together with the **Max capacity** and **Max RPU-hours** settings. **Max capacity** limits the maximum RPUs that Amazon Redshift Serverless allows the data warehouse to scale to, and **Max RPU-hours** limits the maximum RPU hours that it allows the data warehouse to consume. Amazon Redshift Serverless always honors and enforces these settings, regardless of the price-performance target setting.

### Monitoring resource autoscaling
<a name="serverless-auto-optimization-monitoring"></a>

You can monitor the AI-driven RPU scaling in the following ways:
+ Review the RPU capacity used graph on the Amazon Redshift console.
+ Monitor the `ComputeCapacity` metric under `AWS/Redshift-Serverless` and `Workgroup` in CloudWatch.
+ Query the [SYS\_QUERY\_HISTORY](https://docs.aws.amazon.com/redshift/latest/dg/SYS_QUERY_HISTORY.html) view. Provide the specific query ID or query text to identify the time period. Use this time period to query the [SYS\_SERVERLESS\_USAGE](https://docs.aws.amazon.com/redshift/latest/dg/SYS_SERVERLESS_USAGE.html) system view to find the `compute_capacity` value. The `compute_capacity` field shows the RPUs scaled during the query runtime.
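
As a sketch, you can also pull the `ComputeCapacity` metric with the AWS CLI. The workgroup name and time range below are placeholders, and the `Workgroup` dimension name follows the namespace described above:

```
# Fetch average RPU capacity, per minute, for one workgroup over one hour.
# Workgroup name and timestamps are placeholder values.
aws cloudwatch get-metric-statistics \
  --namespace AWS/Redshift-Serverless \
  --metric-name ComputeCapacity \
  --dimensions Name=Workgroup,Value=myworkgroup \
  --start-time 2025-06-01T00:00:00Z \
  --end-time 2025-06-01T01:00:00Z \
  --period 60 \
  --statistics Average
```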

Use the following example to query the `SYS_QUERY_HISTORY` view. Replace the sample value with your query text.

```
select query_id, query_text, start_time, end_time, elapsed_time/1000000.0 as duration_in_seconds
from sys_query_history
where query_text like '<query_text>'
and query_text not like '%sys_query_history%'
order by start_time desc
```

 Run the following query to see how `compute_capacity` scaled during the period from `start_time` to `end_time`. Replace `start_time` and `end_time` in the following query with the output of the preceding query:

```
select * from sys_serverless_usage
where end_time >= 'start_time'
and end_time <= DATEADD(minute,1,'end_time')
order by end_time asc
```

For step-by-step instructions for using these features, see [ Configure monitoring, limits, and alarms in Amazon Redshift Serverless to keep costs predictable ](https://aws.amazon.com/blogs/big-data/configure-monitoring-limits-and-alarms-in-amazon-redshift-serverless-to-keep-costs-predictable/).

### Considerations when using AI-driven scaling and optimization
<a name="serverless-auto-optimization-considerations"></a>

Consider the following when using AI-driven scaling and optimization:
+ For existing workloads on Amazon Redshift Serverless requiring 32 to 512 Base RPUs, we recommend using Amazon Redshift Serverless AI-driven scaling and optimization for optimal results. We don't recommend using this feature for workloads below 32 or above 512 Base RPUs.
+ Price-performance targets automatically optimize the workload, though results may vary. We recommend using this feature over time so the system can learn your specific patterns by running a representative workload.
+ AI-driven scaling and optimization applies optimizations to Serverless workgroups at optimal times, depending on the workload running on your Amazon Redshift Serverless instance.

To learn more about AI-driven optimizations and resource scaling, watch the following video.

[![AWS Videos](http://img.youtube.com/vi/U3f2FObbvKc/0.jpg)](https://www.youtube.com/watch?v=U3f2FObbvKc)




# Billing for Amazon Redshift Serverless
<a name="serverless-billing"></a>

## Billing for compute capacity
<a name="serverless-rpu-billing"></a>

You can purchase capacity for Amazon Redshift Serverless in two ways:
+ **You can purchase on-demand capacity** – When you choose on-demand compute capacity, you pay for resources as you go. This is the best choice if you're just beginning to use Amazon Redshift Serverless or if you don't have a good sense yet of your steady usage patterns. On-demand offers the most flexibility. For more information, see [Billing for on-demand compute capacity](serverless-billing-on-demand.md).
+ **You can purchase reservations** – A reservation provides a discount when you buy a preset amount of compute resources for a specific amount of time, for example for a year. It's a good idea when you know you're going to use an amount of capacity steadily. It's helpful for saving money when you can forecast some of your capacity needs. For more information, see [Billing for serverless reservations](serverless-billing-reserved.md).

You can use reservations and on-demand resources together. It isn't required that you use one or the other.

For detailed pricing information, see [Amazon Redshift pricing](https://aws.amazon.com/redshift/pricing/).

# Billing for on-demand compute capacity
<a name="serverless-billing-on-demand"></a>

**Base capacity and its effect on billing**

When queries run, you're billed according to the capacity used in a given duration, in RPU hours on a per-second basis. When no queries are running, you aren't billed for compute capacity. You are also charged for Redshift Managed Storage (RMS), based on the amount of data stored. 
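
To make the per-second model concrete, the following sketch estimates the compute charge for a single short query. The RPU count, runtime, and price are hypothetical example values, not published rates; billing is metered per second with a 60-second minimum charge, as noted in the billing usage notes later in this topic.

```
# Estimate the on-demand compute charge for one query run.
# All three inputs are hypothetical example values, not published rates.
rpus=8                     # capacity used while the query ran
runtime_seconds=42         # actual query runtime
price_per_rpu_hour=0.375   # placeholder price per RPU-hour

# Billing is metered per second with a 60-second minimum charge.
billed_seconds=$(( runtime_seconds < 60 ? 60 : runtime_seconds ))
cost=$(awk -v r="$rpus" -v s="$billed_seconds" -v p="$price_per_rpu_hour" \
  'BEGIN { printf "%.4f", r * s / 3600 * p }')
echo "Billed ${billed_seconds}s at ${rpus} RPUs: \$${cost}"
```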

When you create your workgroup, you have the option to set the **Base capacity** for computing. To meet the price/performance requirements of your workload at a workgroup level, adjust the base capacity higher or lower for an existing workgroup. Select the workgroup from **Workgroup configuration** and choose the **Limits** tab to change the base capacity using the console.

As the number of queries increases, Amazon Redshift Serverless scales automatically to provide consistent performance.
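
Besides the console, you can adjust base capacity with the AWS CLI. The workgroup name and RPU value below are examples:

```
# Set the base capacity (in RPUs) for an existing workgroup.
aws redshift-serverless update-workgroup \
  --workgroup-name myworkgroup \
  --base-capacity 64

# Confirm the new setting by inspecting the workgroup.
aws redshift-serverless get-workgroup --workgroup-name myworkgroup
```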

**Maximum RPU hours usage limit**

To keep costs predictable for Amazon Redshift Serverless, you can set the **Maximum RPU hours** used per day, per week, or per month. You can set it using the console or with the API. When a limit is reached, you can specify that a log entry is written to a system table, or you receive an alert, or user queries are turned off. Setting the maximum RPU hours helps keep your cost under control. Settings for maximum RPU hours apply to your workgroup for both queries that access data in your data warehouse and queries that access external data, such as in an external table in Amazon S3.

The following is an example:

Assume you set a limit for 100 hours for each week. To do this on the console, you do the following:

1. Choose your workgroup and then choose **Manage usage limits** under the **Limits** tab.

1. Add a usage limit, choosing the **Weekly** frequency, a duration of **100** hours, and setting the action to **Turn off user queries**.

In this example, if you reach the 100 RPU hour limit for a week, queries are turned off.
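
The same limit can be created with the AWS CLI using the `create-usage-limit` command. In this sketch, the workgroup ARN is a placeholder, `--amount` is in RPU hours, and the `deactivate` breach action corresponds to **Turn off user queries** in the console:

```
# Create a weekly 100 RPU-hour limit that turns off user queries when reached.
# Replace the ARN with your own workgroup's ARN.
aws redshift-serverless create-usage-limit \
  --resource-arn arn:aws:redshift-serverless:us-east-1:123456789012:workgroup/myworkgroup \
  --usage-type serverless-compute \
  --amount 100 \
  --period weekly \
  --breach-action deactivate
```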

Setting the maximum RPU hours for the workgroup doesn't limit the performance or compute resources for the workgroup. You can adjust the settings at any time with no effect on query processing. The goal for setting maximum RPU hours is to help you meet your price and performance requirements. For more information about serverless billing, see [Amazon Redshift pricing](https://aws.amazon.com/redshift/pricing/).



Another way to keep the cost for Amazon Redshift Serverless predictable is to use AWS [Cost Anomaly Detection](https://aws.amazon.com/aws-cost-management/aws-cost-anomaly-detection/) to reduce the chance for billing surprises and provide more control.

**Note**  
The [Amazon Redshift pricing calculator](https://calculator.aws/#/addService/Redshift) is helpful for estimating pricing. You enter the compute resources you need and it provides a preview of the cost.

## Setting Max capacity to control costs for compute resources
<a name="serverless-maximum-rpu-setting-billing"></a>

The Max capacity setting serves as the RPU ceiling that Amazon Redshift Serverless can scale up to. It helps control your cost for compute resources. In a similar way to how base capacity sets a minimum amount of compute resources available, Max capacity sets a ceiling on RPU usage. That way, it helps your spending adhere to your plans. Max capacity applies specifically to each workgroup and it limits compute usage at all times.

### How Max capacity differs from RPU hour usage limits
<a name="serverless-maximum-setting-difference"></a>

 The purpose of both maximum RPU hour limits and the Max capacity setting is to control cost. But they achieve this through different means. The following points explain the difference: 
+ *Max capacity* – This setting establishes the highest count of RPUs that Amazon Redshift Serverless uses for scaling purposes. When automatic compute scaling is required, having a higher value for Max capacity can enhance query throughput. When the Max capacity limit is reached, the workgroup doesn't scale up resources any further. 
+ *Maximum RPU hours usage limit* – Unlike Max capacity, this setting doesn't set a ceiling on capacity. But it does perform other actions to help you limit costs. These include adding an entry to a log, notifying you, or stopping queries from running, if you choose. 

You can use Max capacity exclusively, or you can complement it with actions from maximum RPU hour usage limits.

### A Max capacity use case
<a name="serverless-maximum-setting-billing-scenario"></a>

Each workgroup can have a different Max capacity setting. It helps you enforce budgeting requirements. To illustrate how this works, assume the following: 
+ You have a workgroup with the base capacity set to 256 RPUs. You have steady workloads at just over 256 RPUs for most of the month.
+ Max capacity is set to 512 RPUs.

Assume you have unexpected high use over a three-day period to generate ad-hoc statistical reports. In this case, you have Max capacity set to avoid compute costs beyond those for 512 RPUs. When you do this, you can be sure that compute capacity won't exceed this upper bound.

### Usage notes for Max capacity
<a name="serverless-maximum-setting-how-to"></a>

These notes can help you set Max capacity appropriately:
+ Each Amazon Redshift Serverless workgroup can have a different Max capacity setting.
+ If you have a period of very high resource usage and Max capacity is set to a low RPU level, it can delay workload processing and result in a user experience that isn't optimal.
+ Configuring the Max capacity setting doesn't interfere with running queries, even during times of high RPU usage. It doesn't work like a usage limit, which can stop queries from running. It only limits compute resources available to the workgroup. You can view capacity used over a period of time on the Amazon Redshift Serverless dashboard. For more information about viewing summary data, see [Checking Amazon Redshift Serverless summary data using the dashboard](https://docs.aws.amazon.com/redshift/latest/mgmt/serverless-dashboard.html).
+ The top Max capacity setting is 5632 RPUs.

### How to set Max capacity
<a name="serverless-maximum-rpu-setting-how-to"></a>

You can set Max capacity in the console. For an existing workgroup, you can change the setting under **Workgroup configuration**. You can also use the CLI to set it by using a command like the following sample:

```
aws redshift-serverless update-workgroup --workgroup-name myworkgroup --max-capacity 512
```

This sets the Max capacity setting for the workgroup with the given name. After setting it, you can check the value on the console to verify it. You can also check the value using the CLI by running the `get-workgroup` command.
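
For example, the following sketch reads the value back with `get-workgroup`. The `workgroup.maxCapacity` output path is an assumption about the response shape; inspect the full JSON output if your CLI version returns a different structure:

```
# Read back the Max capacity setting for a workgroup (name is a placeholder).
aws redshift-serverless get-workgroup \
  --workgroup-name myworkgroup \
  --query 'workgroup.maxCapacity' \
  --output text
```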

You can turn off the Max capacity setting by setting it to `-1`, like the following:

```
aws redshift-serverless update-workgroup --workgroup-name myworkgroup --max-capacity -1
```

## Monitoring Amazon Redshift Serverless usage and cost
<a name="serverless-billing-visualizing"></a>

There are several ways you can estimate usage and billing for Amazon Redshift Serverless. System views can be helpful because the system metadata, including query and usage data, is timely and you don't have to do any setup to query it. CloudWatch can also be useful for monitoring usage for your Amazon Redshift Serverless instance, and has additional features to provide insights and set actions.

### Visualizing usage by querying a system view
<a name="serverless-billing-visualizing-sysview"></a>

Query the SYS\_SERVERLESS\_USAGE system table to track usage and get the charges for queries:

```
select trunc(start_time) "Day",
(sum(charged_seconds)/3600::double precision) * <Price for 1 RPU> as cost_incurred
from sys_serverless_usage
group by 1
order by 1
```

 This query provides the cost per day incurred for Amazon Redshift Serverless, based on usage. 

#### Usage notes for determining usage and cost
<a name="serverless-billing-visualizing-usage"></a>
+ You pay for the workloads you run in RPU-hours on a per-second basis, with a 60-second minimum charge.
+ Records from the sys\_serverless\_usage system table show cost incurred in 1-minute time intervals. Understanding the following columns is important:

  The charged\_seconds column:
  + Provides the compute unit (RPU) seconds that were charged during the time interval. The results include any minimum charges in Amazon Redshift Serverless.
  + Has information about compute-resource usage after transactions complete. Thus, this column value may be 0 if transactions haven't finished.

  The compute\_seconds column:
  + Provides real-time compute usage information. This doesn't include any minimum charges in Amazon Redshift Serverless. Thus it can differ to some degree from the charged seconds billed during the interval.
  + Shows usage information during each transaction (even if a transaction hasn’t ended), and hence the data provided is real-time.
+ There are situations where compute\_seconds is 0 but charged\_seconds is greater than 0, or vice versa. This is normal behavior resulting from the way data is recorded in the system view. For a more accurate representation of serverless usage details, we recommend aggregating the data in SYS\_SERVERLESS\_USAGE.

 For more information about monitoring tables and views, see [Monitoring queries and workloads with Amazon Redshift Serverless](https://docs.aws.amazon.com/redshift/latest/mgmt/serverless-monitoring.html). 

### Visualizing usage with CloudWatch
<a name="serverless-billing-visualizing-cw"></a>

You can use the metrics available in CloudWatch to track usage. The metrics generated for CloudWatch are `ComputeSeconds`, indicating the total RPU seconds used in the current minute, and `ComputeCapacity`, indicating the total compute capacity for that minute. Usage metrics can also be found on the Redshift console on the Redshift **Serverless dashboard**. For more information about CloudWatch, see [What is Amazon CloudWatch?](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/WhatIsCloudWatch.html)

# Billing for serverless reservations
<a name="serverless-billing-reserved"></a>

Amazon Redshift Serverless allows you to run and scale analytics without having to provision and manage clusters with a pay-as-you-go pricing model. Now with serverless reservations, you can further optimize your compute costs and improve cost predictability of existing and new workloads on Redshift Serverless. 

Amazon Redshift manages serverless reservations at the AWS payer account level, and reservations can be shared between multiple AWS accounts, letting you reduce your compute costs by up to 24% on all Redshift Serverless workloads in your AWS account. Amazon Redshift bills serverless reservations hourly and meters reservations per second, offering a consistent billing model, 24 hours a day, seven days a week, while maintaining the flexibility offered by Redshift Serverless. Amazon Redshift charges any usage exceeding the specified RPU level at standard on-demand rates.

**Note**  
If you want to limit on-demand usage, you can use the **Max capacity** setting to set resource-usage limits for your workgroups. For more information, see [Billing for Amazon Redshift Serverless](serverless-billing.md).

## Benefits of serverless reservations
<a name="serverless-billing-reserved-benefits"></a>

Serverless reservations are a discounted pricing option for Amazon Redshift Serverless. Serverless reservations give you the option to commit to a specified number of Redshift Processing Units (RPUs) for a year at a discount from on-demand (OD) rates, with no upfront payment. You can receive a greater discount with an upfront payment. With serverless reservations, you can optimize your compute costs and improve cost predictability of existing and new workloads on Serverless.

Each serverless reservation is purchased at the AWS account level and can be shared between multiple Amazon Redshift Serverless workgroups in the same payer account. This gives you flexibility in how the discount is applied. Multiple workgroups with different workload patterns can share the reservation.

## How a serverless reservation works
<a name="serverless-billing-reserved-works"></a>

Reserving RPUs is a simple process that takes only a few minutes: you specify the RPU level to reserve and the payment type. Amazon Redshift Serverless uses the standard AWS billing and cost management tools, which help you determine the reservation level you need and monitor your usage continuously. Serverless reservations are managed at the AWS payer account level, can be shared under the same payer account, and let you reduce your compute costs by up to 24% on all Redshift Serverless workloads in your AWS account. Serverless reservations are billed hourly and metered per second, offering a consistent billing model, 24 hours a day, seven days a week, while maintaining the flexibility offered by Redshift Serverless. Any usage exceeding the specified RPU level is charged at standard Redshift Serverless on-demand rates.

You can purchase multiple serverless reservations within the same AWS account. When you purchase additional serverless reservations, they layer on each other. For instance, if you purchase two reservations and choose 100 RPUs for each, it gives you a total of 200 RPUs at a discounted rate.

**Note**  
If you want to set a limit for on-demand usage, you can set the maximum RPUs in the Amazon Redshift Serverless console for a workgroup by choosing the **Limits** tab and then selecting **Manage usage limits**.

After you purchase a serverless reservation, it goes into effect immediately and shows up in the Redshift console in the Serverless reservations dashboard.

## Analyzing your RPU (Redshift Processing Unit) use to determine the reservation level you need
<a name="serverless-billing-reserved-analyzing"></a>

Redshift Serverless Reservations let you lock in predictable, lower compute costs by committing to a specific number of Redshift Processing Units (RPUs) for one year, giving you discounts over on-demand pricing. These discounts can be up to 20 percent with the no-upfront option, or up to 24 percent when you pay all upfront. You purchase Redshift Serverless Reservations at the AWS payer-account level, and your savings automatically apply to any Redshift Serverless workgroup in any linked AWS account, so you can centrally manage budgets while supporting multiple teams. Redshift Serverless meters usage at per-second granularity, averages it across each hour, and then bills hourly, ensuring you pay only for the capacity you use. Redshift Serverless Reservations combine flexible application across accounts with term-based savings, giving you predictable analytics prices without sacrificing the agility of Redshift Serverless.

### Analyzing RPU Use for Reservations
<a name="serverless-billing-reserved-analyzing-howto"></a>

You can determine your RPU usage levels in one of two ways: use the Redshift Serverless dashboard for a seven-day view, or use Cost Explorer for long-term analysis. The following procedures demonstrate how to analyze your RPU use:

**Method 1: Redshift Serverless Dashboard (7-day view)**

1. Sign in to the AWS Management Console and open the Amazon Redshift console at [https://console.aws.amazon.com/redshiftv2/](https://console.aws.amazon.com/redshiftv2/).

1. Open the Serverless dashboard.

1. Choose your workgroup.

1. View RPU capacity usage for a period ranging from the last hour up to one week.

**Method 2: AWS Cost Explorer (Long-term analysis)**

1. Sign in to the AWS Management Console and open the Cost Explorer console at [https://console.aws.amazon.com/costmanagement/](https://console.aws.amazon.com/costmanagement/).

1. Set the granularity to **Hourly**.

1. Group by **Usage type**.

1. Apply the following filters:
   + Service: Redshift
   + Region: Your local region
   + Usage type: Filter for **Redshift:ServerlessUsage**

1. Review the **Cost and usage** graph for hourly serverless usage in your chosen Region.
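
If you prefer the command line, a similar view is available through the Cost Explorer API with `aws ce get-cost-and-usage`. In this sketch, the time range and filter values are examples to adapt to your account, and hourly granularity requires that hourly data is enabled in your Cost Explorer settings:

```
# Hourly Redshift usage grouped by usage type. Time range and filter values
# are examples; hourly granularity must be enabled in Cost Explorer settings.
aws ce get-cost-and-usage \
  --time-period Start=2025-06-01T00:00:00Z,End=2025-06-02T00:00:00Z \
  --granularity HOURLY \
  --metrics UsageQuantity \
  --filter '{"Dimensions":{"Key":"SERVICE","Values":["Amazon Redshift"]}}' \
  --group-by Type=DIMENSION,Key=USAGE_TYPE
```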

## Purchasing a serverless reservation using the console
<a name="serverless-billing-reserved-setting"></a>

 When you purchase a reservation, you choose the RPU level that will be discounted. Prior to selecting your RPU level, it’s good to know your base capacity and the on-demand capacity you use over time. This section shows you how to determine your capacity and reserve a serverless reservation. 

To start, in the Redshift console, choose **Serverless**, and then **Serverless reservations** from the menu.

![\[Amazon Redshift console showing Serverless dashboard with Serverless reservations option highlighted.\]](http://docs.aws.amazon.com/redshift/latest/mgmt/images/capacity-reservations-menu-selection.png)


The console shows a description of the feature and a list of existing reservations. From here you can purchase a reservation, or you can use the reports and monitoring tools available to check your current usage. These help you determine your RPU levels and how many RPUs are appropriate to reserve.

To purchase a reservation, complete the following steps:

1. Choose **Purchase serverless reservations**.  
![\[Reservation overview showing 1 RPU total, 0 expiring, with option to purchase Serverless reservations.\]](http://docs.aws.amazon.com/redshift/latest/mgmt/images/capacity-reservations-list-purchase.png)

1. A walkthrough appears with a series of selections. Enter the **Serverless reservation** RPU level to reserve. If you're unsure what this level should be, you can use the tools described further along in this section.  
![\[Input field for entering reserved RPU capacity, with a range from 1 to any number.\]](http://docs.aws.amazon.com/redshift/latest/mgmt/images/capacity-reservations-RPU-level.png)

1. Set the payment type. You can choose to pay upfront for your reserved RPUs, or you can pay monthly. If you choose to pay up front, you get a bigger discount.  
![\[Payment type options: All Upfront with 24% discount or No Upfront with 20% discount.\]](http://docs.aws.amazon.com/redshift/latest/mgmt/images/capacity-reservations-payment-type.png)

1. When you finish making the selections, choose **Purchase serverless reservations** and then **Confirm**.

After you confirm the reservation, it appears in the list of reservations.

![\[Serverless reservations table showing one payment-pending reservation with details.\]](http://docs.aws.amazon.com/redshift/latest/mgmt/images/capacity-reservations-list-created.png)


## Usage notes
<a name="serverless-billing-reservations-notes"></a>


+ You can't change or delete a reservation. But you can create additional reservations to get more coverage.
+ Redshift Serverless uses reserved RPUs for a workload prior to using on-demand RPUs, to ensure cost savings. If you exceed the number of RPUs that you have reserved, you begin to accrue charges for those additional RPUs at the Redshift Serverless on-demand rate.
+ Free credits for Amazon Redshift Serverless aren't applied to serverless reservations, only to RPUs billed on-demand.

## Serverless reservation examples
<a name="serverless-billing-reserved-examples"></a>

In this scenario, your AWS payer or linked account has two Amazon Redshift workgroups:
+ Workgroup 1 has steady state usage, such as for a business-intelligence team.
+ Workgroup 2 has unpredictable workloads with spikes in usage, such as for ETL operations. 

You want to optimize the costs for these workgroups, so you purchase a one-year serverless reservation. Based on historical data, you determine that the two workgroups together consume 64 RPUs (32 RPUs each) at a steady state. Workgroup 2, however, occasionally increases from 32 RPUs to 48 RPUs and drops to 8 RPUs for short periods. You set the RPU level of your reservation at 64 RPUs to start, which aligns with historical trends. The per-hour billing details are as follows:
+ For the first hour, similar to historical usage trends, both workgroups use 32 RPUs for total account usage of 64 RPUs. For this hour, all the RPUs are charged at the serverless reservations discounted rate. This is because the usage level of 64 RPUs is equal to the 64 RPU serverless reservation.
+ For the second hour, workgroup 1 continues to use 32 RPUs. However, workgroup 2 spikes to 48 RPUs, for a total account usage of 80 RPUs. For this hour, 64 RPUs are charged at the serverless reservations discounted rate, and 16 RPUs are charged at the Redshift Serverless on-demand rate.
+ For the third hour, workgroup 1 continues to consume 32 RPUs and workgroup 2 decreases to 8 RPUs. In this hour, the account is charged at the 64 RPU serverless reservation rate, even though the account total is 40 RPU.

See the following diagram for workgroup usage evolution, and on-demand and serverless reservation rates billing details:

![\[Graph showing total account usage, on-demand usage, and workgroup trends over three time periods.\]](http://docs.aws.amazon.com/redshift/latest/mgmt/images/capacity-reservation-example.png)
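
The hour-by-hour split above can be sketched with simple arithmetic: each hour, the reserved 64 RPUs are billed at the reservation rate, and only the usage above that level is billed on demand.

```
# Reserved capacity vs. on-demand overage for the three example hours.
reserved=64
for usage in 64 80 40; do
  overage=$(( usage > reserved ? usage - reserved : 0 ))
  echo "account usage ${usage} RPUs -> reserved billed: ${reserved}, on-demand: ${overage}"
done
```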


## Purchasing a serverless reservation using the AWS CLI or Amazon Redshift API
<a name="serverless-billing-reservations-api"></a>

You use `create-reservation` to create an RPU reservation. The following shows the command:

```
create-reservation
--capacity
--offering-id
```

You set `capacity` to the number of RPUs you want to reserve.
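
As a sketch, a complete purchase might look like the following. The `list-reservation-offerings` command name is an assumption about how to discover offering IDs, and the offering ID itself is a placeholder; check `aws redshift-serverless help` for the exact commands available in your CLI version:

```
# List available reservation offerings to find an offering ID (assumed command).
aws redshift-serverless list-reservation-offerings

# Reserve 64 RPUs using the chosen offering ID (placeholder value).
aws redshift-serverless create-reservation \
  --capacity 64 \
  --offering-id <offering-id>
```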

## Billing for storage
<a name="serverless-storage-billing"></a>

Primary storage capacity is billed as Redshift Managed Storage (RMS). Storage is billed by GB / month. Storage billing is separate from billing for compute capacity. Storage used for user snapshots is billed at the standard backup billing rates, depending on your usage tier.

Data transfer costs and machine learning (ML) costs apply separately, the same as provisioned clusters. Snapshot replication and data sharing across AWS Regions are billed at the transfer rates outlined on the pricing page. For more information, see [Amazon Redshift pricing](https://aws.amazon.com//redshift/pricing/).

### Visualizing billing usage with CloudWatch
<a name="db-serverless-billing-storage-cw"></a>

The metric `SnapshotStorage`, which tracks snapshot storage usage, is generated and sent to CloudWatch. For more information about CloudWatch, see [What is Amazon CloudWatch?](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/WhatIsCloudWatch.html)

## Using the Amazon Redshift Serverless free trial
<a name="db-serverless-billing-free-trial"></a>

Amazon Redshift Serverless offers a free trial. If you participate in the free trial, you can view the free trial credit balance in the Redshift console, and check free trial usage in the [SYS\_SERVERLESS\_USAGE](https://docs.aws.amazon.com/redshift/latest/dg/SYS_SERVERLESS_USAGE.html) system view. Note that billing details for free trial usage don't appear in the billing console. You can view usage in the billing console only after the free trial ends. For more information about the Amazon Redshift Serverless free trial, see [Amazon Redshift Serverless free trial](https://aws.amazon.com//redshift/free-trial/).

## Billing usage notes
<a name="db-serverless-billing-details"></a>
+ **Recording usage** - A query or transaction is only metered and recorded after the transaction completes, is rolled back, or is stopped. For instance, if a transaction runs for two days, RPU usage is recorded after it completes. You can monitor ongoing use in real time by querying `sys_serverless_usage`. Transaction recording may appear as RPU usage variation and affect costs for specific hours and for daily use.
+ **Writing explicit transactions** - As a best practice, it's important to end transactions. If you don't end or roll back an open transaction, Amazon Redshift Serverless continues to use RPUs. For example, if you write an explicit `BEGIN TRAN`, it's important to have a corresponding `COMMIT` or `ROLLBACK` statement.
+ **Cancelled queries** - If you run a query and cancel it before it finishes, you are still billed for the time the query ran. 
+ **Scaling** - The Amazon Redshift Serverless instance may initiate scaling for handling periods of higher load, in order to maintain consistent performance. Your Amazon Redshift Serverless billing includes both base compute and scaled capacity at the same RPU rate.
+ **Scaling down** - Amazon Redshift Serverless scales up from its base RPU capacity to handle periods of higher load. In some cases, RPU capacity can remain at a higher setting for a period after query load falls. We recommend that you set maximum RPU hours in the console to guard against unexpected cost.
+ **System tables** - When you query a system table, the query time is billed. 
+ **Redshift Spectrum** - When you have Amazon Redshift Serverless, and you run queries, there isn't a separate charge for data-lake queries. For queries on data stored in Amazon S3, the charge is the same, by transaction time, as queries on local data.
+ **Federated queries** - Federated queries are charged in terms of RPUs used over a specific time interval, in the same manner as queries on the data warehouse or data lake.
+ **Storage** - Storage is billed separately, by GB / month.
+ **Minimum charge** - The minimum charge is for 60 seconds of resource usage, metered on a per-second basis.
+ **Snapshot billing** - Snapshot billing doesn't change. It's charged according to storage, billed at a rate of GB / month. You can restore your data warehouse to specific points in the last 24 hours at a 30 minute granularity, free of charge. For more information, see [Amazon Redshift pricing](https://aws.amazon.com//redshift/pricing/).
+ **Automatic optimizations run using extra compute resources** - Amazon Redshift Serverless usually runs automatic optimization operations alongside user queries. These operations are known as autonomics, and you aren't charged for them. 

  If you enable allocating extra compute resources, Amazon Redshift will run autonomics when necessary even in periods of high user activity. In such cases, you can be billed for the time spent running autonomics. For more information, see [ Allocating extra compute resources for automatic database optimization ](https://docs.aws.amazon.com/redshift/latest/dg/t_extra-compute-autonomics.html) in the *Amazon Redshift Database Developer Guide*.
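Because an unclosed transaction continues to consume RPUs, it helps to make transaction completion structural rather than relying on every code path remembering to commit. The following is a minimal sketch of that pattern; `DummyConnection` is a stand-in for illustration, but any driver connection exposing `commit()` and `rollback()` (for example, from `redshift_connector`) works the same way.

```python
# Sketch: a context manager that guarantees every explicit transaction ends
# with COMMIT or ROLLBACK, so an open transaction can't keep consuming RPUs.
# DummyConnection is a hypothetical stand-in used only for this example.
from contextlib import contextmanager

class DummyConnection:
    """Stand-in connection that records transaction statements."""
    def __init__(self):
        self.history = []
    def execute(self, sql):
        self.history.append(sql)
    def commit(self):
        self.history.append("COMMIT")
    def rollback(self):
        self.history.append("ROLLBACK")

@contextmanager
def transaction(conn):
    conn.execute("BEGIN")
    try:
        yield conn
        conn.commit()      # normal path: end the transaction
    except Exception:
        conn.rollback()    # error path: never leave the transaction open
        raise

conn = DummyConnection()
with transaction(conn) as tx:
    tx.execute("INSERT INTO sales VALUES (1)")
print(conn.history)  # ['BEGIN', 'INSERT INTO sales VALUES (1)', 'COMMIT']
```

The same structure covers the error-handling best practice described later in this section: a raised exception rolls back before propagating, so no idle open transaction is left behind.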

### Amazon Redshift Serverless best practices for keeping billing predictable
<a name="db-serverless-billing-session-timeout"></a>

The following are best practices and built-in settings that help keep your billing consistent.
+ Make sure to end each transaction. When you use `BEGIN` to start a transaction, it's important to `END` it as well.
+ Use best-practice error handling to respond gracefully to errors and end each transaction. Minimizing open transactions helps to avoid unnecessary RPU use.
+ Use `SESSION TIMEOUT` to help end open transactions and idle sessions. Any session kept idle or inactive for more than 3600 seconds (1 hour) times out, and any transaction kept open and inactive for more than 21600 seconds (6 hours) times out. You can change this timeout explicitly for a specific user, such as when you want to keep a session open for a long-running query. The topic [CREATE USER](https://docs.aws.amazon.com/redshift/latest/dg/r_CREATE_USER.html) shows how to adjust `SESSION TIMEOUT` for a user.
  + In most cases, we recommend that you don't extend the `SESSION TIMEOUT` value unless you have a use case that specifically requires it. If a session remains idle with an open transaction, RPUs are used until the session is closed, which results in unnecessary cost.
  + Amazon Redshift Serverless has a maximum time of 86,399 seconds (24 hours) for a running query. The maximum period of inactivity for an open transaction is six hours before Amazon Redshift Serverless ends the session associated with the transaction. For more information, see [Quotas for Amazon Redshift Serverless objects](amazon-redshift-limits.md#serverless-limits-account).

## Amazon Redshift Serverless billing with connection pooling
<a name="db-serverless-billing-connection-pooling"></a>

Amazon Redshift Serverless treats all incoming queries as billable user activity, including lightweight health-check queries sent by connection pools. This behavior applies regardless of whether the query originates from an application, a JDBC/ODBC driver, or a connection pooling framework. Each health-check query triggers compute usage, and charges are incurred regardless of query purpose or origin. As a result, maintaining open connection pools can generate costs even when no actual user workloads are running.

Connection pooling maintains a pool of persistent connections between applications and the Amazon Redshift Serverless endpoint. To ensure these connections remain healthy and available, pooling mechanisms often send lightweight or empty queries (for example, `SELECT 1`) at regular intervals. These automated queries verify connection status.

When you use connection pooling, consider these best practices to minimize unintended charges:
+ Adjust health check frequency by reviewing and optimizing the frequency of health check or keep-alive queries in your connection pooling configuration.
+ Optimize idle system settings by configuring connection pooling to minimize unnecessary connection churn or background query activity during system idle times.
+ Implement application-level pooling or improved connection lifecycle management if it can reduce overhead.
+ Disable heartbeat or validation queries if your connection pooling configuration allows it. Check your specific connection string parameters or configuration files to adjust these settings.
+ Fine-tune TCP keepalive settings: If you can't disable the driver's internal heartbeat mechanisms, adjust Transmission Control Protocol (TCP) keepalive settings at the operating system or application level to address connection timeout issues. Refer to your operating system, JDBC/ODBC driver, or connection pool documentation for details.
+ Optimize database connection pooling: Configure your connection pool (for example, HikariCP or Apache Commons DBCP) to manage connections and minimize connection overhead. Focus on parameters such as maximum connections, idle timeout, and validation queries (if necessary). This optimization helps align Amazon Redshift Serverless compute usage with actual workload demand, potentially reducing costs.
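To see why health-check frequency matters for cost, it helps to count the queries a pool generates. The following sketch estimates how many billable keep-alive queries a pool issues per hour; the pool sizes and intervals are illustrative assumptions, not Redshift or pool defaults.

```python
# Sketch: estimate billable keep-alive queries per hour for a connection
# pool. Each health check (for example, SELECT 1) counts as user activity,
# so reducing frequency directly reduces billed time. All numbers below
# are illustrative assumptions.

def health_checks_per_hour(pool_size: int, interval_seconds: int) -> int:
    """Assume each pooled connection sends one validation query per interval."""
    return pool_size * (3600 // interval_seconds)

# A 20-connection pool validating every 5 seconds vs. every 5 minutes:
aggressive = health_checks_per_hour(pool_size=20, interval_seconds=5)    # 14400
relaxed = health_checks_per_hour(pool_size=20, interval_seconds=300)     # 240
print(aggressive, relaxed)
```

Under these assumptions, stretching the validation interval from 5 seconds to 5 minutes cuts the background query count by a factor of 60, which is the kind of adjustment the best practices above are aiming at.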

## Cost optimization for Amazon Redshift Serverless with zero-ETL
<a name="db-serverless-zetl"></a>

To optimize costs while running zero-ETL integrations on Amazon Redshift Serverless, you can right-size your environments and adjust your refresh settings based on workload needs. Consider making the following adjustments:
+ Use the lower base capacity of 8 RPUs for your workloads where it's available.
+ Configure the `REFRESH_INTERVAL` of your target Redshift instance to balance freshness with cost. Shorter intervals ensure near real-time updates but drive up compute costs. Longer intervals (5 minutes or longer) reduce charges for workloads where immediate freshness is not critical, such as reporting or historical analysis. To edit your Redshift target `REFRESH_INTERVAL`, see the refresh interval clause in the [ALTER DATABASE](https://docs.aws.amazon.com/redshift/latest/dg/r_ALTER_DATABASE.html) description.
+ Maximize utilization of your Amazon Redshift Serverless environment by concurrently running analytics workloads while zero-ETL data is being ingested. This ensures that compute capacity is actively serving multiple business purposes.

# Connecting to Amazon Redshift Serverless
<a name="serverless-connecting"></a>

After you've set up your Amazon Redshift Serverless instance, you can connect to it using a variety of methods, outlined below. If you have multiple teams or projects and want to manage costs separately, you can use separate AWS accounts.

For a list of AWS Regions where Amazon Redshift Serverless is available, see the endpoints listed for [Redshift Serverless API](https://docs.aws.amazon.com/general/latest/gr/redshift-service.html) in the *Amazon Web Services General Reference*.

Amazon Redshift Serverless connects to the serverless environment in your AWS account in the current AWS Region. Amazon Redshift Serverless runs in a VPC and accepts connections within the port ranges 5431-5455 and 8191-8215. The default port is 5439. Currently, you can change ports only with the API operation `UpdateWorkgroup` and the AWS CLI operation `update-workgroup`.

## Connecting to Amazon Redshift Serverless
<a name="serverless-connecting-endpoint"></a>

You can connect to a database (named `dev`) in Amazon Redshift Serverless with the following syntax.

```
workgroup-name.account-number.aws-region.redshift-serverless.amazonaws.com:port/dev
```

For example, the following connection string specifies Region us-east-1.

```
default.123456789012.us-east-1.redshift-serverless.amazonaws.com:5439/dev
```
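The endpoint above is built from the workgroup name, account ID, Region, port, and database name. A minimal sketch, assuming the standard dotted endpoint format (`workgroup.account.region.redshift-serverless.amazonaws.com`):

```python
# Sketch: assemble an Amazon Redshift Serverless endpoint from its parts,
# assuming the standard dotted endpoint format. The dev database and
# port 5439 are the defaults used in this section's examples.

def serverless_endpoint(workgroup: str, account: str, region: str,
                        port: int = 5439, database: str = "dev") -> str:
    host = f"{workgroup}.{account}.{region}.redshift-serverless.amazonaws.com"
    return f"{host}:{port}/{database}"

print(serverless_endpoint("default", "123456789012", "us-east-1"))
# default.123456789012.us-east-1.redshift-serverless.amazonaws.com:5439/dev
```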

## Connecting to Amazon Redshift Serverless through JDBC drivers
<a name="serverless-connecting-driver"></a>

You can use one of the following methods to connect to Amazon Redshift Serverless with your preferred SQL client using the Amazon Redshift-provided JDBC driver version 2.x.

To connect with sign-in credentials for database authentication using the JDBC driver version 2.x, use the following syntax. The port number is optional; if not included, Amazon Redshift Serverless defaults to port number 5439. You can change to another port from the port range of 5431-5455 or 8191-8215. To change the default port for a serverless endpoint, use the AWS CLI and Amazon Redshift API.

```
jdbc:redshift://workgroup-name.account-number.aws-region.redshift-serverless.amazonaws.com:5439/dev
```

For example, the following connection string specifies the workgroup default, the account ID 123456789012, and the Region us-east-2.

```
jdbc:redshift://default.123456789012.us-east-2.redshift-serverless.amazonaws.com:5439/dev
```

To connect with IAM using the JDBC driver version 2.x, use the following syntax. The port number is optional; if not included, Amazon Redshift Serverless defaults to port number 5439. You can change to another port from the port range of 5431-5455 or 8191-8215. To change the default port for a serverless endpoint, use the AWS CLI and Amazon Redshift API.

```
jdbc:redshift:iam://workgroup-name.account-number.aws-region.redshift-serverless.amazonaws.com:5439/dev
```

For example, the following connection string specifies the workgroup default, the account ID 123456789012, and the Region us-east-2.

```
jdbc:redshift:iam://default.123456789012.us-east-2.redshift-serverless.amazonaws.com:5439/dev
```

For ODBC, use the following syntax.

```
Driver={Amazon Redshift (x64)}; Server=workgroup-name.account-number.aws-region.redshift-serverless.amazonaws.com; Database=dev
```
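The JDBC and ODBC strings above differ only in their prefixes and separators, so they can be assembled from the same parts. A minimal sketch, assuming the standard dotted endpoint format; the `iam` flag switches between the database-credential and IAM authentication prefixes:

```python
# Sketch: build the JDBC and ODBC connection strings described above,
# assuming the standard dotted endpoint format. Helper names are
# illustrative, not part of any driver API.

def jdbc_url(workgroup: str, account: str, region: str,
             port: int = 5439, database: str = "dev", iam: bool = False) -> str:
    scheme = "jdbc:redshift:iam" if iam else "jdbc:redshift"
    host = f"{workgroup}.{account}.{region}.redshift-serverless.amazonaws.com"
    return f"{scheme}://{host}:{port}/{database}"

def odbc_conn_string(workgroup: str, account: str, region: str,
                     database: str = "dev") -> str:
    host = f"{workgroup}.{account}.{region}.redshift-serverless.amazonaws.com"
    return f"Driver={{Amazon Redshift (x64)}}; Server={host}; Database={database}"

print(jdbc_url("default", "123456789012", "us-east-2", iam=True))
print(odbc_conn_string("default", "123456789012", "us-east-2"))
```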

If you are using a JDBC driver version prior to 2.1.0.9 and connecting with IAM, you will need to use the following syntax.

```
jdbc:redshift:iam://redshift-serverless-<name>:aws-region/database-name
```

For example, the following connection string specifies the workgroup default and the AWS Region us-east-1.

```
jdbc:redshift:iam://redshift-serverless-default:us-east-1/dev
```

For more information about drivers, see [Configuring connections in Amazon Redshift](configuring-connections.md). 

### Finding your JDBC and ODBC connection string
<a name="serverless-connecting-jdbc-odbc-string"></a>

To connect to your workgroup with your SQL client tool, you must have the JDBC or ODBC connection string. You can find the connection string in the Amazon Redshift Serverless console, on a workgroup's details page.

**To find the connection string for a workgroup**

1. Sign in to the AWS Management Console and open the Amazon Redshift console at [https://console.aws.amazon.com/redshiftv2/](https://console.aws.amazon.com/redshiftv2/).

1. On the navigation menu, choose **Redshift Serverless**.

1. On the navigation menu, choose **Workgroup configuration**, then choose the workgroup name from the list to open its details.

1. The **JDBC URL** and **ODBC URL** connection strings are available, along with additional details, in the **General information** section. Each string is based on the AWS Region where the workgroup runs. Choose the icon next to the appropriate connection string to copy the connection string.

## Connecting to Amazon Redshift Serverless with the Data API
<a name="serverless-data-api"></a>

You can also use the Amazon Redshift Data API to connect to Amazon Redshift Serverless. Use the `workgroup-name` parameter instead of the `cluster-identifier` parameter in your AWS CLI calls. 

For more information about the Data API, see [Using the Amazon Redshift Data API](data-api.md). For example code calling the Data API in Python and other examples, see [Getting Started with Redshift Data API](https://github.com/aws-samples/getting-started-with-amazon-redshift-data-api#readme) and look in the `quick-start` and `use-cases` folders in *GitHub*. 
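The same parameter substitution applies when you call the Data API from an SDK: pass `WorkgroupName` where a provisioned cluster would use `ClusterIdentifier`. A minimal sketch; the helper name is hypothetical, and the commented-out boto3 call shows where the arguments would be used:

```python
# Sketch: build arguments for an Amazon Redshift Data API call against a
# serverless workgroup. With Amazon Redshift Serverless you pass
# WorkgroupName where a provisioned cluster would use ClusterIdentifier.

def execute_statement_args(workgroup: str, database: str, sql: str) -> dict:
    return {"WorkgroupName": workgroup, "Database": database, "Sql": sql}

args = execute_statement_args("default", "dev", "SELECT 1")
# To run the statement for real (requires AWS credentials):
# import boto3
# client = boto3.client("redshift-data")
# response = client.execute_statement(**args)
print(args)
```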

## Connecting with SSL to Amazon Redshift Serverless
<a name="serverless-secure-ssl"></a>

### Configuring a secure connection to Amazon Redshift Serverless
<a name="serverless_secure-ssl"></a>

To support SSL connections, Redshift Serverless creates and installs an [AWS Certificate Manager (ACM)](https://aws.amazon.com/certificate-manager/) issued SSL certificate for each workgroup. ACM certificates are publicly trusted by most operating systems, web browsers, and clients. You might need to download a certificate bundle if your SQL clients or applications connect to Redshift Serverless using SSL with the `sslmode` connection option set to `require`, `verify-ca`, or `verify-full`. If your client needs a certificate, Redshift Serverless provides a bundle certificate as follows:
+ Download the bundle from [https://s3.amazonaws.com/redshift-downloads/amazon-trust-ca-bundle.crt](https://s3.amazonaws.com/redshift-downloads/amazon-trust-ca-bundle.crt). 
  + The expected MD5 checksum is 418dea9b6d5d5de7a8f1ac42e164cdcf.
  + The expected SHA-256 checksum is 36dba8e4b8041cd14b9d60158893963301bcbb92e1c456847784de2acb5bd550.

  Don't use the previous certificate bundle that was located at `https://s3.amazonaws.com/redshift-downloads/redshift-ca-bundle.crt`. 
+  In the China AWS Region, download the bundle from [https://s3---cn-north-1.amazonaws.com.rproxy.goskope.com.cn/redshift-downloads-cn/amazon-trust-ca-bundle.crt](https://s3---cn-north-1.amazonaws.com.rproxy.goskope.com.cn/redshift-downloads-cn/amazon-trust-ca-bundle.crt). 
  + The expected MD5 checksum is 418dea9b6d5d5de7a8f1ac42e164cdcf.
  + The expected SHA-256 checksum is 36dba8e4b8041cd14b9d60158893963301bcbb92e1c456847784de2acb5bd550.

  Don't use the previous certificate bundles that were located at `https://s3---cn-north-1.amazonaws.com.rproxy.goskope.com.cn/redshift-downloads-cn/redshift-ca-bundle.crt` and `https://s3---cn-north-1.amazonaws.com.rproxy.goskope.com.cn/redshift-downloads-cn/redshift-ssl-ca-cert.pem`.
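After downloading the bundle, you can verify it against the published SHA-256 checksum before adding it to your trust store. A minimal sketch; the file path is hypothetical:

```python
# Sketch: verify a downloaded certificate bundle against the published
# SHA-256 checksum before trusting it. The path below is hypothetical.
import hashlib

def sha256_of(path: str) -> str:
    """Compute the SHA-256 hex digest of a file, reading in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

EXPECTED = "36dba8e4b8041cd14b9d60158893963301bcbb92e1c456847784de2acb5bd550"
# Uncomment after downloading the bundle:
# assert sha256_of("amazon-trust-ca-bundle.crt") == EXPECTED, "checksum mismatch"
```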

**Important**  
Redshift Serverless has changed the way that SSL certificates are managed. You might need to update your current trust root CA certificates to continue to connect to your workgroups using SSL. For more information about ACM certificates for SSL connections, see [Transitioning to ACM certificates for SSL connections](connecting-transitioning-to-acm-certs.md).

By default, workgroup databases accept a connection whether it uses SSL or not. 

To create a new workgroup that only accepts SSL connections, use the `create-workgroup` command and set the `require_ssl` parameter to `true`. To use the following example, replace *yourNamespaceName* with the name of your namespace and replace *yourWorkgroupName* with the name of your workgroup.

```
aws redshift-serverless create-workgroup \
--namespace-name yourNamespaceName \
--workgroup-name yourWorkgroupName \
--config-parameters parameterKey=require_ssl,parameterValue=true
```

To update an existing workgroup to only accept SSL connections, use the `update-workgroup` command and set the `require_ssl` parameter to `true`. Note that Redshift Serverless will restart your workgroup when you update the `require_ssl` parameter. To use the following example, replace *yourWorkgroupName* with the name of your workgroup.

```
aws redshift-serverless update-workgroup \
--workgroup-name yourWorkgroupName \
--config-parameters parameterKey=require_ssl,parameterValue=true
```

Amazon Redshift supports the Elliptic Curve Diffie-Hellman Ephemeral (ECDHE) key agreement protocol. With ECDHE, the client and server each have an elliptic curve public-private key pair that is used to establish a shared secret over an insecure channel. You don't need to configure anything in Amazon Redshift to enable ECDHE. If you connect from a SQL client tool that uses ECDHE to encrypt communication between the client and server, Amazon Redshift uses the provided cipher list to make the appropriate connection. For more information, see [Elliptic curve Diffie-Hellman](https://en.wikipedia.org/wiki/Elliptic_curve_Diffie%E2%80%93Hellman) on Wikipedia and [Ciphers](https://www.openssl.org/) on the OpenSSL website. 

#### Configuring a FIPS-compliant SSL connection to Amazon Redshift Serverless
<a name="serverless_secure-fips-ssl"></a>

To create a new workgroup that uses a FIPS-compliant SSL connection, use the `create-workgroup` command and set the `use_fips_ssl` and `require_ssl` parameters to `true`. To use the following example, replace *yourNamespaceName* with the name of your namespace and replace *yourWorkgroupName* with the name of your workgroup.

```
aws redshift-serverless create-workgroup \
--namespace-name yourNamespaceName \
--workgroup-name yourWorkgroupName \
--config-parameters '[{"parameterKey": "require_ssl", "parameterValue": "true"}, {"parameterKey": "use_fips_ssl", "parameterValue": "true"}]'
```

To update an existing workgroup to use a FIPS-compliant SSL connection, use the `update-workgroup` command and set the `use_fips_ssl` and `require_ssl` parameters to `true`. Note that Redshift Serverless will restart your workgroup when you update the `use_fips_ssl` parameter. To use the following example, replace *yourWorkgroupName* with the name of your workgroup.

```
aws redshift-serverless update-workgroup \
--workgroup-name yourWorkgroupName \
--config-parameters '[{"parameterKey": "require_ssl", "parameterValue": "true"}, {"parameterKey": "use_fips_ssl", "parameterValue": "true"}]'
```

For more information about configuring Redshift Serverless to use FIPS-compliant connections, see [use\_fips\_ssl](https://docs.aws.amazon.com/redshift/latest/dg/use_fips_ssl) in the *Amazon Redshift Database Developer Guide*.

## Connecting to Amazon Redshift Serverless from an Amazon Redshift managed VPC endpoint
<a name="serverless-secure-vpc"></a>

### Connecting to Amazon Redshift Serverless from other VPC endpoints
<a name="serverless-vpc-connect"></a>

For information about setting up or configuring a managed-VPC endpoint for an Amazon Redshift Serverless workgroup, see [Working with Redshift-managed VPC endpoints](https://docs.aws.amazon.com/redshift/latest/mgmt/managing-cluster-cross-vpc.html).

## Connecting to Amazon Redshift Serverless from an interface VPC endpoint (AWS PrivateLink)
<a name="serverless-secure-vpc-privatelink"></a>

For information about connecting to Amazon Redshift Serverless from an interface VPC endpoint (AWS PrivateLink), see [Interface VPC endpoints](security-network-isolation.md#security-private-link).

## Connecting to Amazon Redshift Serverless from a Redshift VPC endpoint in another account
<a name="serverless-cross-vpc"></a>

### Connecting to Amazon Redshift Serverless from a cross VPC endpoint
<a name="database-connect-from-cross-account-vpc-endpoint"></a>

Amazon Redshift Serverless is provisioned in a VPC. You can grant a VPC in another account access to Amazon Redshift Serverless in your account. This is similar to a connection from a managed VPC endpoint, but in this case the connection originates, for example, from a database client in another account. You can perform the following operations:
+ A database owner can grant access to a VPC containing Amazon Redshift Serverless to another account in the same Region. 
+ A database owner can revoke Amazon Redshift Serverless access.

The main benefit of cross-account access is allowing easier database collaboration. Users don't have to be provisioned in the account that contains the database to access it, which reduces configuration steps and saves time.

#### Permissions required to grant access to a VPC in another account
<a name="database-connect-from-cross-account-vpc-endpoint-permissions"></a>

To grant access or change the access allowed, the grantor requires an assigned permissions policy with the following permissions:
+ redshift-serverless:PutResourcePolicy
+ redshift-serverless:GetResourcePolicy
+ redshift-serverless:DeleteResourcePolicy
+ ec2:CreateVpcEndpoint
+ ec2:ModifyVpcEndpoint

You might need other permissions that are specified in the AWS managed policy *AmazonRedshiftFullAccess*. For more information, see [Granting permissions to Amazon Redshift Serverless](https://docs.aws.amazon.com/redshift/latest/mgmt/serverless-iam.html#serverless-security-other-services).

The grantee requires an assigned permissions policy with the following permissions:
+ redshift-serverless:ListWorkgroups
+ redshift-serverless:CreateEndpointAccess
+ redshift-serverless:UpdateEndpointAccess
+ redshift-serverless:GetEndpointAccess
+ redshift-serverless:ListEndpointAccess
+ redshift-serverless:DeleteEndpointAccess

As a best practice, we recommend attaching permissions policies to an IAM role and then assigning it to users and groups as needed. For more information, see [Identity and access management in Amazon Redshift](https://docs.aws.amazon.com/redshift/latest/mgmt/redshift-iam-authentication-access-control.html).

This is a sample resource policy used to configure cross-VPC access:

------
#### [ JSON ]


```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Sid": "CrossAccountCrossVPCAccess",
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                    "123456789012",
                    "234567890123"
                ]
            },
            "Action": [
                "redshift-serverless:CreateEndpointAccess",
                "redshift-serverless:UpdateEndpointAccess",
                "redshift-serverless:DeleteEndpointAccess",
                "redshift-serverless:GetEndpointAccess"
            ],
            "Resource": "arn:aws:redshift-serverless:*:*:*",
            "Condition": {
                "StringEquals": {
                    "aws:RequestedRegion": "us-east-1"
                },
                "StringLike": {
                    "aws:SourceVpc": [
                       "vpc-11223344556677889",
                       "vpc-12345678"
                    ]
                }
            }
        }
    ]
}
```

------
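The `put-resource-policy` operation described below expects the policy as a JSON string, so it can be convenient to build it programmatically. A minimal sketch using the sample account IDs and VPC IDs from the policy above; the helper name is illustrative:

```python
# Sketch: build the cross-VPC resource policy shown above and serialize
# it as the JSON string that put-resource-policy expects. Account and
# VPC IDs are the sample values from this section.
import json

def cross_vpc_policy(accounts: list, vpcs: list, region: str) -> dict:
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "CrossAccountCrossVPCAccess",
            "Effect": "Allow",
            "Principal": {"AWS": accounts},
            "Action": [
                "redshift-serverless:CreateEndpointAccess",
                "redshift-serverless:UpdateEndpointAccess",
                "redshift-serverless:DeleteEndpointAccess",
                "redshift-serverless:GetEndpointAccess",
            ],
            "Resource": "arn:aws:redshift-serverless:*:*:*",
            "Condition": {
                "StringEquals": {"aws:RequestedRegion": region},
                "StringLike": {"aws:SourceVpc": vpcs},
            },
        }],
    }

policy_json = json.dumps(cross_vpc_policy(
    ["123456789012", "234567890123"],
    ["vpc-11223344556677889", "vpc-12345678"],
    "us-east-1"))
print(policy_json)
```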

The procedures that follow in this section assume that the user performing them has the appropriate assigned permissions, for example, by means of an assigned IAM role that has the permissions listed. The procedures also assume that the workgroup has an IAM role attached with appropriate resource permissions.

#### Granting VPC access to other accounts, using the console
<a name="database-connect-from-cross-account-vpc-console-tasks"></a>

This procedure shows the steps for configuring database access when you're the database owner, and you want to grant access to it. 

**Granting access from the owner account**

1. In the properties for the Amazon Redshift Serverless workgroup, on the **Data access** tab, there is a list called **Granted accounts**. It shows accounts and VPCs granted access to the workgroup. Find the list and choose **Grant access** to add an account to the list.

1. A window appears where you can add the grantee information. Enter the AWS account ID, which is the 12-digit ID of the account that you want to grant access to.

1. Grant access to all VPCs for the grantee, or to specific VPCs. If you grant access only to specific VPCs, you can add the IDs for these by entering each one and choosing **Add VPC**.

1. **Save changes** when you're finished.

When you save the changes, the account appears in the list of **Granted accounts**. The entry shows the **Account ID** and the list of VPCs granted access.

The database owner can also revoke an account's access at any time.

**Revoking access to an account**

1. In the list of granted accounts, select one or more accounts.

1. Choose **Revoke access**.

After access is granted, a database administrator for the grantee can check the console to determine if they have access.

**Using the console to confirm that access is granted for you to access another account**

1. In the Amazon Redshift Serverless workgroup properties, on the **Data access** tab, there is a list called **Authorized accounts**. It shows accounts that can be accessed from this workgroup. The grantee can't use the workgroup's endpoint URL to access the workgroup directly. To access the workgroup, you as the grantee go to the **endpoint** section and choose **create an endpoint**. 

1. Then, as the grantee, you provide an endpoint name and a VPC to access the workgroup.

1. After the endpoint is created successfully, it appears in the **endpoint** section and there is an endpoint URL for it. You can use this endpoint URL to access the workgroup.

#### Granting access to other accounts, using CLI commands
<a name="database-connect-from-cross-account-vpc-api"></a>

The granting account must first authorize another account to connect by using `put-resource-policy`. The database owner calls `put-resource-policy` to authorize another account to create connections to the workgroup. The grantee account can then use `create-endpoint-access` to create connections to the workgroup through their allowed VPCs.

The following shows the properties for `put-resource-policy`, which you can call to allow access to a specific account and VPC.

```
aws redshift-serverless put-resource-policy
--resource-arn <value> 
--policy <value>
```

After calling the command, you can call `get-resource-policy`, specifying the `resource-arn` to see which accounts and VPCs are allowed to access the resource.

The following call can be made by the grantee. It shows information about the granted access. Specifically, it returns a list that contains the VPCs granted access.

```
aws redshift-serverless list-workgroups
--owner-account <value>
```

The purpose of this is for the grantee to get information from the granting account about endpoint authorizations. The `owner-account` is the sharing account. When you run this, it returns the `CrossAccountVpcs` for each workgroup, which is a list of allowed VPCs. For reference, the following shows all of the properties available for a workgroup:

```
Output: workgroup (Object)
workgroupId: String,
workgroupArn: String,
workgroupName: String,
status: String,
namespaceName: String,
baseCapacity: Integer, (not applicable)
enhancedVpcRouting: Boolean,
configParameters: List,
securityGroupIds: List,
subnetIds: List,
endpoint: String,
publiclyAccessible: Boolean,
creationDate: Timestamp,
port: Integer,
CrossAccountVpcs: List
```
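On the grantee side, extracting the allowed VPCs from the response is a simple traversal. A minimal sketch; the sample response below is a trimmed illustration shaped after the properties listed above, and a real call would use the AWS SDK (for example, boto3's `redshift-serverless` client) with the owner account specified:

```python
# Sketch: pull the allowed VPC list out of a list-workgroups response.
# The sample dict below is a trimmed, hypothetical response shaped after
# the workgroup properties listed above.

def allowed_vpcs(workgroups: list) -> dict:
    """Map each workgroup name to its CrossAccountVpcs list."""
    return {wg["workgroupName"]: wg.get("CrossAccountVpcs", [])
            for wg in workgroups}

sample = [
    {"workgroupName": "default", "status": "AVAILABLE",
     "CrossAccountVpcs": ["vpc-11223344556677889"]},
    {"workgroupName": "analytics", "status": "AVAILABLE"},
]
print(allowed_vpcs(sample))
# {'default': ['vpc-11223344556677889'], 'analytics': []}
```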

**Note**  
As a reminder, [cluster relocation](https://docs.aws.amazon.com/redshift/latest/mgmt/managing-cluster-recovery.html) isn't a prerequisite for configuring additional Redshift networking features. You also don't need to turn it on to enable the following:  
**Connecting from a cross-account or cross-region VPC to Redshift** – You can connect from one AWS virtual private cloud (VPC) to another that contains a Redshift database, as described in this section.
**Setting up a custom domain name** – You can create a custom domain name, also known as a custom URL, for your Amazon Redshift cluster or Amazon Redshift Serverless workgroup, to make the endpoint name more memorable and simple. For more information, see [Using a custom domain name for client connections](https://docs.aws.amazon.com/redshift/latest/mgmt/connecting-connection-CNAME.html).

## Additional resources
<a name="serverless-connecting-additional"></a>

Instructions for setting your network traffic settings are available in [Public accessibility with default or custom security group configuration](https://docs.aws.amazon.com/redshift/latest/mgmt/rs-security-group-public-private.html#rs-security-group-public-default). This includes a use case where the cluster is publicly accessible.

Instructions for setting your network traffic settings are available in [Private accessibility with default or custom security group configuration](https://docs.aws.amazon.com/redshift/latest/mgmt/rs-security-group-public-private.html#rs-security-group-private). This includes a use case where the cluster isn't available to the internet.

For more information about secure connections to Amazon Redshift Serverless, including granting permissions, authorizing access to additional services, and creating IAM roles, see [Identity and access management in Amazon Redshift Serverless](serverless-iam.md).

# Defining database roles to grant to federated users in Amazon Redshift Serverless
<a name="redshift-iam-access-federated-db-roles"></a>

When you're part of an organization, you have a collection of associated roles. For instance, you have roles for your job function, like *programmer* and *manager*. Your roles determine which applications and data you have access to. Most organizations use an identity provider, such as Microsoft Active Directory, to assign roles to users and groups. The use of roles to control resource access has grown, because organizations don't have to do as much management of individual users.

Role-based access control is available in Amazon Redshift Serverless. Using database roles, you can secure access to data and objects, such as schemas or tables. Or you can use roles to define a set of elevated permissions, such as for a system monitor or database administrator. But after you grant resource permissions to database roles, there is an additional step: connecting a user's roles from the organization to the database roles. You can assign each user to their database roles upon initial sign-in by running SQL statements, but doing so takes a lot of effort. An easier way is to define the database roles to grant and pass them to Amazon Redshift Serverless. This has the advantage of simplifying the initial sign-in process.

You can pass roles to Amazon Redshift Serverless using `GetCredentials`. When a user signs in for the first time to an Amazon Redshift Serverless database, an associated database user is created and mapped to the matching database roles. This topic details the mechanism for passing roles to Amazon Redshift Serverless.

Passing database roles has a couple of primary use cases:
+ When a user signs in through a third-party identity provider, typically with federation configured, and passes the roles by means of a session tag.
+ When a user signs in through IAM sign-in credentials, and their roles are passed by means of a tag key and value. 

For more information about role-based access control, see [Role-based access control (RBAC)](https://docs.aws.amazon.com/redshift/latest/dg/t_Roles.html).

## Defining database roles
<a name="redshift-iam-access-federated-db-roles-configuration"></a>

Before you can pass roles to Amazon Redshift Serverless, you must configure database roles in your database and grant them appropriate permissions on database resources. For instance, in a simple scenario, you can create a database role named *sales* and grant it access to query tables with sales data. For more information about how to create database roles and grant permissions, see [CREATE ROLE](https://docs.aws.amazon.com/redshift/latest/dg/r_CREATE_ROLE.html) and [GRANT](https://docs.aws.amazon.com/redshift/latest/dg/r_GRANT.html).

## Use cases for defining database roles to grant to federated users
<a name="redshift-iam-access-federated-db-roles-use-cases"></a>

These sections outline a couple use cases where passing database roles to Amazon Redshift Serverless can simplify access to database resources.

### Signing in using an identity provider
<a name="redshift-iam-access-federated-db-roles-idp-principal"></a>

The first use case assumes that your organization has user identities in an identity and access management service. This service can be cloud based, for example JumpCloud or Okta, or on-premises, such as Microsoft Active Directory. The goal is to automatically map a user's roles from the identity provider to your database roles when they sign in to a client, such as query editor v2 or a JDBC client. To set this up, you must complete a couple of configuration tasks, which include the following:

1. Configure federated integration with your identity provider (IdP) using a trust relationship. This is a prerequisite. When you set this up, the identity provider is responsible for authenticating the user via a SAML assertion and providing sign-in credentials. For more information, see [Integrating third party SAML solution providers with AWS](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_saml_3rd-party.html). You can also find more information at [Federate access to Amazon Redshift query editor V2 with Active Directory Federation Services (AD FS)](https://aws.amazon.com/blogs//big-data/federate-access-to-amazon-redshift-query-editor-v2-with-active-directory-federation-services-ad-fs-part-3/) or [Federate single sign-on access to Amazon Redshift query editor v2 with Okta](https://aws.amazon.com/blogs//big-data/federate-single-sign-on-access-to-amazon-redshift-query-editor-v2-with-okta/).

1. The user must have the following policy permissions: 
   + `redshift-serverless:GetCredentials` – Provides credentials for temporary authorization to log in to Amazon Redshift Serverless.
   + `sts:AssumeRoleWithSAML` – Provides a mechanism for tying an enterprise identity store or directory to role-based AWS access.
   + `sts:TagSession` – Allows the tag-session action on the identity provider principal, so that session tags can be passed with the authentication request.

    In this case, `AssumeRoleWithSAML` returns a set of security credentials for users who have been authenticated via a SAML authentication response. This operation provides a mechanism for tying an identity store or directory to role-based AWS access without user-specific credentials. For users with permission to call `AssumeRoleWithSAML`, the identity provider is responsible for managing the SAML assertion that is used to pass the role information.

   As a best practice, we recommend attaching permissions policies to an IAM role and then assigning it to users and groups as needed. For more information, see [Identity and access management in Amazon Redshift](https://docs.aws.amazon.com/redshift/latest/mgmt/redshift-iam-authentication-access-control.html).

1. Configure the tag `RedshiftDbRoles` with colon-separated role values, in the format *role1:role2*. For example, `manager:engineer`. The roles can be retrieved from a session-tag implementation configured in your identity provider. The SAML authentication request passes the roles programmatically. For more information about passing session tags, see [Passing session tags in AWS STS](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_session-tags.html).

   If you pass a role name that doesn't exist in the database, it's ignored.

In this use case, when a user signs in using a federated identity, their roles are passed in the authorization request through the session tag key and value. Then, following authorization, `GetCredentials` passes the roles to the database. Upon a successful connection, the database roles are mapped and the user can perform database tasks corresponding to their roles. The essential part of the operation is that the `RedshiftDbRoles` session tag is assigned the roles in the initial authorization request. For more information about passing session tags, see [Passing session tags using AssumeRoleWithSAML](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_session-tags.html#id_session-tags_adding-assume-role-saml).
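
The tag value itself is just the database role names joined with colons. As a minimal sketch (the role names here are hypothetical), you could assemble it like this:

```python
# Build the value for the RedshiftDbRoles session tag.
# Amazon Redshift Serverless expects the roles as one colon-separated string.
roles = ["manager", "engineer"]
tag_value = ":".join(roles)
print(tag_value)  # manager:engineer
```

Any role name in the string that doesn't exist in the database is ignored at sign-in.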

### Signing in using IAM credentials
<a name="redshift-iam-access-federated-db-roles-iam-creds"></a>

In the second use case, you can pass roles for a user who accesses a database client application with IAM credentials.

1. In this case, the user who signs in must have policy permissions for the following actions:
   + `tag:GetResources` – Returns tagged resources associated with specified tags.
   + `tag:GetTagKeys` – Returns tag keys currently in use.

     As a best practice, we recommend attaching permissions policies to an IAM role and then assigning it to users and groups as needed. For more information, see [Identity and access management in Amazon Redshift](https://docs.aws.amazon.com/redshift/latest/mgmt/redshift-iam-authentication-access-control.html).

1. The user also requires Allow permissions to access the database service, in this case Amazon Redshift Serverless.

1. For this use case, configure the tag values for your roles in AWS Identity and Access Management. Choose **Edit tags** and create a tag key named *RedshiftDbRoles* with a tag value string that contains the roles. For example, *manager:engineer*.

When a user signs in, their roles are added to the authorization request, passed to the database, and mapped to existing database roles.
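
For illustration, you could also attach the tag from the AWS CLI instead of the console. The user name and role values below are hypothetical:

```
aws iam tag-user \
    --user-name example-analyst \
    --tags Key=RedshiftDbRoles,Value=manager:engineer
```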

## Additional resources
<a name="redshift-iam-access-federated-db-roles-resources"></a>

As mentioned in the use cases, you can configure the trust relationship between your IdP and AWS. For more information, see [Configuring your SAML 2.0 IdP with relying party trust and adding claims](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_create_saml_relying-party.html). 

# Identity and access management in Amazon Redshift Serverless
<a name="serverless-iam"></a>

Access to Amazon Redshift requires credentials that AWS can use to authenticate your requests. Those credentials must have permissions to access AWS resources, such as Amazon Redshift Serverless. 

The following sections provide details about how you can use AWS Identity and Access Management (IAM) and Amazon Redshift to help secure your resources by controlling who can access them. For more information, see [Identity and access management in Amazon Redshift](redshift-iam-authentication-access-control.md).

# Granting permissions to Amazon Redshift Serverless
<a name="serverless-security-other-services"></a>

Some Amazon Redshift features require Amazon Redshift Serverless to access other AWS services on your behalf. For your Amazon Redshift Serverless instance to act for you, supply security credentials to it. The preferred method is to specify an AWS Identity and Access Management (IAM) role. You can also create an IAM role through the Amazon Redshift console and set it as the default. For more information, see [Creating an IAM role as default for Amazon Redshift](#serverless-default-iam-role).

To access other AWS services, create an IAM role with the appropriate permissions. You also need to associate the role with Amazon Redshift Serverless. In addition, either specify the Amazon Resource Name (ARN) of the role when you run the Amazon Redshift command or specify the `default` keyword.

When changing the trust relationship for the IAM role in the [IAM console](https://console.aws.amazon.com/iam/), make sure that it contains `redshift-serverless.amazonaws.com` and `redshift.amazonaws.com` as principal service names. For information about how to manage IAM roles to access other AWS services on your behalf, see [Authorizing Amazon Redshift to access AWS services on your behalf](authorizing-redshift-service.md).
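
As a sketch, a trust policy that names both service principals might look like the following. Verify the exact principals and any condition keys against your own security requirements before using it:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": [
                    "redshift.amazonaws.com",
                    "redshift-serverless.amazonaws.com"
                ]
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
```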

## Creating an IAM role as default for Amazon Redshift
<a name="serverless-default-iam-role"></a>

When you create IAM roles through the Amazon Redshift console, Amazon Redshift programmatically creates the roles in your AWS account. Amazon Redshift also automatically attaches existing AWS managed policies to them. This approach means that you can stay within the Amazon Redshift console and don't have to switch to the IAM console for role creation.

The IAM role that you create through the console has the `AmazonRedshiftAllCommandsFullAccess` managed policy automatically attached. This IAM role allows Amazon Redshift to copy, unload, query, and analyze data for AWS resources in your AWS account. The related commands include COPY, UNLOAD, CREATE EXTERNAL FUNCTION, CREATE EXTERNAL TABLE, CREATE EXTERNAL SCHEMA, CREATE MODEL, and CREATE LIBRARY.

To get started creating an IAM role as default for Amazon Redshift, open the AWS Management Console, choose the Amazon Redshift console, and then choose **Redshift Serverless** in the menu. From the Serverless dashboard, you can create a new workgroup. The creation steps walk you through selecting an existing IAM role or configuring a new one.

When you have an existing Amazon Redshift Serverless workgroup and you want to configure IAM roles for it, open the AWS Management Console. Choose the Amazon Redshift console, and then choose **Redshift Serverless**. On the Amazon Redshift Serverless console, choose **Namespace configuration** for an existing workgroup. Under **Security and encryption**, you can edit the permissions.

### Assigning IAM roles to a namespace
<a name="serverless-endpoint-iam-role-namespace"></a>

Each IAM role is an AWS identity with permissions policies that determine what actions each role can perform in AWS. The role is intended to be assumable by anyone who needs it. Additionally, each namespace is a collection of objects, like tables and schemas, and users. When you use Amazon Redshift Serverless, you can associate multiple IAM roles with your namespace. This makes it easier to structure your permissions appropriately for a collection of database objects, so roles can perform actions on both internal and external data. For example, a role can run a `COPY` command in an Amazon Redshift database to retrieve data from Amazon S3 and populate a Redshift table.

You can associate multiple roles with a namespace using the console, as described previously in this section. You can also use the API operation `CreateNamespace` or the CLI command `create-namespace`. With the API or CLI, you assign IAM roles to the namespace by populating `IAMRoles` with one or more role ARNs.
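
For example, with the AWS CLI you might attach a role when creating a namespace. The names and ARN below are hypothetical placeholders:

```
aws redshift-serverless create-namespace \
    --namespace-name example-namespace \
    --iam-roles arn:aws:iam::111122223333:role/example-redshift-role
```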

#### Managing namespace associated IAM roles
<a name="serverless-endpoint-iam-role-namespace-console"></a>

On the AWS Management Console you can manage permissions policies for roles in AWS Identity and Access Management. You can manage IAM roles for the namespace, using settings available under **Namespace configuration**. For more information about namespaces and their use in Amazon Redshift Serverless, see [Workgroups and namespaces](serverless-workgroup-namespace.md).

# Getting started with IAM credentials for Amazon Redshift
<a name="serverless-iam-credentials"></a>

When you sign in to the Amazon Redshift console for the first time and first try out Amazon Redshift Serverless, we recommend that you sign in as a user with an IAM role attached that has the required policies. After you get started creating an Amazon Redshift Serverless instance, Amazon Redshift records the IAM role name that you used to sign in. You can use the same credentials to sign in to the Amazon Redshift console and the Amazon Redshift Serverless console.

While creating the Amazon Redshift Serverless instance, you can create a database. Use the query editor v2 to connect to the database with the temporary credentials option.

To add a new admin user name and password that persist for the database, choose **Customize admin user credentials** and enter a new admin user name and admin user password. 

To get started using Amazon Redshift Serverless and create a workgroup and namespace in the console for the first time, use an IAM role with a permissions policy attached. Make sure that this role has either the administrator policy `arn:aws:iam::aws:policy/AdministratorAccess` or the full Amazon Redshift policy `arn:aws:iam::aws:policy/AmazonRedshiftFullAccess` attached.

The following scenarios outline how your IAM credentials are used by Amazon Redshift Serverless when you get started on the Amazon Redshift Serverless console:
+ If you choose **Use default settings** – Amazon Redshift Serverless translates your current IAM identity to a database superuser. You can use the same IAM identity with the Amazon Redshift Serverless console to perform superuser actions in your database in Amazon Redshift Serverless.
+ If you choose **Customize settings** without specifying an **Admin user name** and password – your current IAM credentials are used as your default admin user credentials.
+ If you choose **Customize settings** and specify an **Admin user name** and password – Amazon Redshift Serverless translates your current IAM identity to a database superuser. Amazon Redshift Serverless also creates a long-term username and password pair, also as a superuser. You can use either your current IAM identity or the created username and password pair to log in to your database as a superuser.

# Accessing Amazon Redshift Serverless database objects with database-role permissions
<a name="serverless-iam-credentials-use-case"></a>

This procedure shows how to grant permission to query a table through an [Amazon Redshift database role](https://docs.aws.amazon.com/redshift/latest/dg/t_Roles.html). The role is assigned by means of a tag that's attached to a user in IAM and passed to Amazon Redshift when they sign in. It's an explanation by example of the concepts in [Defining database roles to grant to federated users in Amazon Redshift Serverless](https://docs.aws.amazon.com/redshift/latest/mgmt/redshift-iam-access-federated-db-roles.html). The benefit of completing these steps is that you can associate a user with a database role and avoid setting their permissions for each database object. It simplifies managing the user's ability to query, modify, or add data to tables and to perform other actions.

The procedure assumes you have already set up an Amazon Redshift Serverless database and you have the ability to grant permissions in the database. It also assumes you have permissions to create an IAM user in the AWS console, to create an IAM role, and to assign policy permissions.

1. Create an IAM user, using the IAM console. Later, you will connect to the database with this user.

1. Create a Redshift database role, using query editor v2 or another SQL client. For more information on creating database roles, see [CREATE ROLE](https://docs.aws.amazon.com/redshift/latest/dg/r_CREATE_ROLE.html).

   ```
   CREATE ROLE urban_planning;
   ```

   Query the [SVV\_ROLES](https://docs.aws.amazon.com/redshift/latest/dg/r_SVV_ROLES.html) system view to check that your role was created. The view also returns system roles.

   ```
   SELECT * FROM SVV_ROLES;
   ```

1. Grant the database role you created permission to select from a table. (The IAM user you created will eventually sign in and select records from the table by means of the database role.) The role name and table name in the following code example are samples. Here, permission is granted to select from a table named `cities`.

   ```
   GRANT SELECT on TABLE cities to ROLE urban_planning;
   ```

1. Use the AWS Identity and Access Management console to create an IAM role. This role grants permission to use query editor v2. Create a new IAM role and, for the trusted entity type, choose **AWS account**. Then choose **This account**. Give the role the following policy permissions:
   + `AmazonRedshiftReadOnlyAccess`
   + `tag:GetResources`
   + `tag:GetTagKeys`
   + All actions for sqlworkbench, including `sqlworkbench:ListDatabases` and `sqlworkbench:UpdateConnection`.

1. In the IAM console, add a tag with the **Key** `RedshiftDbRoles` to the IAM user you created previously. The tag's value should match the database role you created in step 2. In this sample, it's `urban_planning`.

After you complete these steps, assign the IAM role to the user you created in the IAM console. When the user signs in to the database with query editor v2, their database role name in the tag is passed to Amazon Redshift and associated with them. Thus, they can query the appropriate tables by means of the database role. To illustrate, the user in this sample can query the `cities` table through the `urban_planning` database role.

# Migrating a provisioned cluster to Amazon Redshift Serverless
<a name="serverless-migration"></a>

You can migrate your existing provisioned clusters to Amazon Redshift Serverless, enabling on-demand and automatic scaling of compute resources. Migrating a provisioned cluster to Amazon Redshift Serverless allows you to optimize costs by paying only for the resources you use and automatically scaling capacity based on workload demands. Common use cases for the migration include running ad-hoc queries, periodic data processing jobs, or handling unpredictable workloads without over-provisioning resources. Perform the following set of tasks to migrate your provisioned Amazon Redshift cluster to the serverless deployment option.

## Creating a snapshot of your provisioned cluster
<a name="serverless-migration-snapshots"></a>

**Note**  
Amazon Redshift automatically converts interleaved keys to compound keys when you restore a provisioned cluster snapshot to a serverless namespace.

 To transfer data from your provisioned cluster to Amazon Redshift Serverless, create a snapshot of your provisioned cluster, and then restore the snapshot in Amazon Redshift Serverless. 

**Note**  
Before you migrate your data to a serverless workgroup, ensure that the RPU capacity you choose in Amazon Redshift Serverless is compatible with the needs of your provisioned cluster.

**To create a snapshot of your provisioned cluster**

1. Sign in to the AWS Management Console and open the Amazon Redshift console at [https://console.aws.amazon.com/redshiftv2/](https://console.aws.amazon.com/redshiftv2/).

1. On the navigation menu, choose **Clusters**, **Snapshots**, and then choose **Create snapshot**.

1. Enter the properties of the snapshot definition, then choose **Create snapshot**. It might take some time for the snapshot to be available. 

**To restore a provisioned cluster snapshot to a serverless namespace**

1. Sign in to the AWS Management Console and open the Amazon Redshift console at [https://console.aws.amazon.com/redshiftv2/](https://console.aws.amazon.com/redshiftv2/).

1. Start on the Amazon Redshift provisioned cluster console and navigate to the **Clusters**, **Snapshots** page.

1. Choose a snapshot to use.

1. Choose **Restore snapshot**, **Restore to serverless namespace**.

1. Choose a namespace to restore your snapshot to.

1. Confirm you want to restore from your snapshot. This action replaces all the databases in your serverless endpoint with the data from your provisioned cluster. Choose **Restore**.

For more information about provisioned cluster snapshots, see [Amazon Redshift snapshots](https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-snapshots.html).

## Connecting to Amazon Redshift Serverless using a driver
<a name="serverless-migration-drivers"></a>

To connect to Amazon Redshift Serverless with your preferred SQL client, you can use the Amazon Redshift [JDBC driver version 2.x](https://docs.aws.amazon.com/redshift/latest/mgmt/jdbc20-install.html). We recommend connecting with the latest version of the Amazon Redshift JDBC driver version 2.x. The port number is optional; if you don't include it, Amazon Redshift Serverless defaults to port 5439. You can change to another port in the range 5431-5455 or 8191-8215. To change the default port for a serverless endpoint, use the AWS CLI or the Amazon Redshift Serverless API.
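
For example, the serverless port can be changed with the AWS CLI `update-workgroup` command. The workgroup name here is a hypothetical placeholder:

```
aws redshift-serverless update-workgroup \
    --workgroup-name example-workgroup \
    --port 5455
```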

To find the exact endpoint to use for the JDBC, ODBC, or Python driver, see **Workgroup configuration** in Amazon Redshift Serverless. You can also use the Amazon Redshift Serverless API operation `GetWorkgroup` or the AWS CLI command `get-workgroup` to return information about your workgroup, and then connect.
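
For example, the following AWS CLI sketch retrieves just the endpoint address; the workgroup name is a hypothetical placeholder:

```
aws redshift-serverless get-workgroup \
    --workgroup-name example-workgroup \
    --query 'workgroup.endpoint.address' \
    --output text
```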

### Connecting using password-based authentication
<a name="serverless-migration-drivers-password-auth"></a>

To establish a connection using the Amazon Redshift JDBC driver version 2.x with password-based authentication, use the following syntax:

```
jdbc:redshift://<workgroup-name>.<account-number>.<aws-region>.redshift-serverless.amazonaws.com:5439/?username=username&password=password
```
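
The endpoint host name in the URL follows a fixed pattern. As an illustrative sketch (the workgroup, account, and Region values are hypothetical placeholders):

```python
# Compose the default Amazon Redshift Serverless endpoint host and JDBC URL
# from a workgroup name, AWS account number, and Region.
workgroup = "example-workgroup"
account = "111122223333"
region = "us-east-1"

host = f"{workgroup}.{account}.{region}.redshift-serverless.amazonaws.com"
jdbc_url = f"jdbc:redshift://{host}:5439/"
print(jdbc_url)
```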

To establish a connection using the Amazon Redshift Python connector with password-based authentication, use the following syntax:

```
import redshift_connector
with redshift_connector.connect(
    host='<workgroup-name>.<account-number>.<aws-region>.redshift-serverless.amazonaws.com',
    database='<database-name>',
    user='username',
    password='password'
    # port value of 5439 is specified by default
) as conn:
    pass
```

To establish a connection using the Amazon Redshift ODBC driver version 2.x with password-based authentication, use the following syntax:

```
Driver={Amazon Redshift ODBC Driver (x64)}; Server=<workgroup-name>.<account-number>.<aws-region>.redshift-serverless.amazonaws.com; Database=database-name; User=username; Password=password
```

### Connecting using IAM
<a name="serverless-migration-drivers-iam"></a>

If you prefer logging in with IAM, use the Amazon Redshift Serverless [GetCredentials](https://docs.aws.amazon.com//redshift-serverless/latest/APIReference/API_GetCredentials.html) API operation.

To use IAM authentication, add `iam:` to the Amazon Redshift JDBC URL following `jdbc:redshift:`, as shown in the following example.

```
jdbc:redshift:iam://<workgroup-name>.<account-number>.<aws-region>.redshift-serverless.amazonaws.com:5439/<database-name>
```

This Amazon Redshift Serverless endpoint doesn’t support customizing dbUser, dbGroup, or auto-create. By default, the driver automatically creates database users at login. It then assigns the users to Amazon Redshift database roles based on the tags specified in IAM, or based on the groups defined in your identity provider (IdP).

Ensure that your AWS identity has the correct IAM policy for the `redshift-serverless:GetCredentials` action. The following is an example IAM policy that grants the correct permissions to an AWS identity to connect to Amazon Redshift Serverless. For more information about IAM permissions, see [Adding and removing IAM identity permissions](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-attach-detach.html) in the *IAM User Guide*.


```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "",
            "Effect": "Allow",
            "Action": "redshift-serverless:GetCredentials",
            "Resource": "*"
        }
    ]
}
```


To establish a connection using the Amazon Redshift Python connector with IAM-based authentication, use `iam=True` in your code, as shown in the following syntax:

```
import redshift_connector
with redshift_connector.connect(
    iam=True,
    host='<workgroup-name>.<account-number>.<aws-region>.redshift-serverless.amazonaws.com',
    database='<database-name>',
    <IAM credentials>
) as conn:
    pass
```

For `IAM credentials`, you can use any of the following:
+ AWS profile configuration.
+ IAM credentials (an access key ID, secret access key, and optionally a session token).
+ Identity provider federation.

To establish a connection using the Amazon Redshift ODBC driver version 2.x with IAM-based authentication and a profile, use the following syntax:

```
Driver={Amazon Redshift ODBC Driver (x64)}; IAM=true; Server=<workgroup-name>.<account-number>.<aws-region>.redshift-serverless.amazonaws.com; Database=database-name; Profile=aws-profile-name;
```

### Connecting using IAM with the GetClusterCredentials API
<a name="serverless-migration-drivers-iam-dbuser-group"></a>

**Note**  
When connecting to Amazon Redshift Serverless, we recommend that you use the [GetCredentials](https://docs.aws.amazon.com//redshift-serverless/latest/APIReference/API_GetCredentials.html) API. This API offers comprehensive role-based access control (RBAC) functionality as well as other new features that aren't available in `GetClusterCredentials`. We support the `GetClusterCredentials` API to simplify the transition from provisioned clusters to serverless workgroups, but we strongly recommend migrating to `GetCredentials` as soon as possible for optimal compatibility.

You can establish a connection to Amazon Redshift Serverless using the [GetClusterCredentials](https://docs.aws.amazon.com//redshift/latest/APIReference/API_GetClusterCredentials.html) API. To implement this authentication method, modify your client or application by incorporating the following parameters:
+  `iam=true` 
+  `clusterid/cluster_identifier=redshift-serverless-<workgroup-name>` 
+  `region=<aws-region>` 

The following examples demonstrate the BrowserSAML plugin across all three drivers. This represents one of several available authentication approaches. The examples can be modified to use alternative authentication methods or plugins according to your specific requirements.

#### IAM policy permissions for GetClusterCredentials
<a name="serverless-migration-drivers-iam-dbuser-group-iam"></a>

Following is a sample IAM policy with the permissions required to use `GetClusterCredentials` with Amazon Redshift Serverless: 


```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "redshift-serverless:GetWorkgroup",
                "redshift-serverless:GetNamespace",
                "redshift:GetClusterCredentials",
                "redshift:CreateClusterUser",
                "redshift:JoinGroup",
                "redshift:ExecuteQuery",
                "redshift:FetchResults",
                "redshift:DescribeClusters",
                "redshift:DescribeTable"
            ],
            "Resource": [
                "arn:aws:redshift-serverless:us-east-1:111122223333:workgroup/workgroup-name",
                "arn:aws:redshift-serverless:us-east-1:111122223333:namespace/namespace-name",
                "arn:aws:redshift:us-east-1:111122223333:cluster:cluster-name",
                "arn:aws:redshift:us-east-1:111122223333:dbuser:database-name/${redshift:DbUser}"
            ]
        }
    ]
}
```


To establish a connection using the Amazon Redshift JDBC driver version 2.x with `GetClusterCredentials`, use the following syntax:

```
jdbc:redshift:iam://redshift-serverless-<workgroup-name>:<aws-region>/<database-name>?plugin_name=com.amazon.redshift.plugin.BrowserSamlCredentialsProvider&login_url=<single sign-on URL from IdP>
```

To establish a connection using the Amazon Redshift Python connector with `GetClusterCredentials`, use the following syntax:

```
import redshift_connector
with redshift_connector.connect(
    iam=True,
    cluster_identifier='redshift-serverless-<workgroup-name>',
    region='<aws-region>',
    database='<database-name>',
    credentials_provider='BrowserSamlCredentialsProvider',
    login_url='<single sign-on URL from IdP>'
    # port value of 5439 is specified by default
) as conn:
    pass
```

To establish a connection using the Amazon Redshift ODBC driver version 2.x with `GetClusterCredentials`, use the following syntax:

```
Driver={Amazon Redshift ODBC Driver (x64)}; IAM=true; isServerless=true; ClusterId=redshift-serverless-<workgroup-name>; region=<aws-region>; plugin_name=BrowserSAML; login_url=<single sign-on URL from IdP>
```

Following is an example ODBC DSN configuration in Windows:

![\[The Connection tab in the Amazon Redshift ODBC Driver for Windows. Fields corresponding to the sample syntax above are filled out.\]](http://docs.aws.amazon.com/redshift/latest/mgmt/images/GetClusterCredentials-odbc-windows.png)


## Using the Amazon Redshift Serverless SDK
<a name="serverless-migration-sdk"></a>

If you wrote any management scripts using the Amazon Redshift SDK, you must use the new Amazon Redshift Serverless SDK to manage Amazon Redshift Serverless and associated resources. For more information about available API operations, see the [Amazon Redshift Serverless API Reference guide](https://docs.aws.amazon.com/redshift-serverless/latest/APIReference/Welcome.html).

# Workgroups and namespaces
<a name="serverless-workgroup-namespace"></a>

To isolate workloads and manage different resources in Amazon Redshift Serverless, you can create namespaces and workgroups and manage storage and compute resources separately.

A namespace is a collection of database objects and users. The storage-related namespace groups together schemas, tables, users, and AWS Key Management Service keys for encrypting data. Storage properties include the database name, the admin user's credentials, permissions, and encryption and security settings. Other resources grouped under namespaces include datashares, recovery points, and usage limits. You can configure these storage properties using the Amazon Redshift Serverless console, the AWS Command Line Interface, or the Amazon Redshift Serverless APIs for the specific resource.

A workgroup is a collection of compute resources. The compute-related workgroup groups together resources like RPUs, VPC subnet groups, and security groups. Workgroup properties include network and security settings. Other resources grouped under workgroups include access and usage limits. You can configure these compute properties using the Amazon Redshift Serverless console, the AWS Command Line Interface, or the Amazon Redshift Serverless APIs.

You can create one or more namespaces and workgroups. Each namespace can have only one workgroup associated with it. Conversely, each workgroup can be associated with only one namespace.

## Workgroups and namespaces using the console
<a name="serverless-workgroups-and-namespaces-console"></a>

Setting up Amazon Redshift Serverless involves walking through several configuration steps. When you follow the steps, you create a namespace and workgroup and associate them with each other. To get started configuring Amazon Redshift Serverless in the console, choose **Get started with Amazon Redshift Serverless**. You can choose an environment with default settings, which makes for quicker setup, or explicitly configure the settings according to your organization's requirements. During this process, you specify settings for your workgroup and namespace.

After you set up the environment, [Workgroup properties](serverless-console-workgroups.md#serverless-workgroup-describe) and [Namespace properties](serverless-console-configure-namespace-working.md#serverless-console-namespace-config) help you get familiar with the settings.

## Workgroups and namespaces using the AWS Command Line Interface and Amazon Redshift Serverless API
<a name="serverless-workgroups-and-namespaces-cli"></a>

Aside from using the AWS console, you can also use the AWS CLI or the Amazon Redshift Serverless API to interact with workgroups and namespaces. The following table lists the API operations and CLI commands you can use to manage workgroups and namespaces.


| API operation | CLI command | Description | 
| --- | --- | --- | 
| [CreateNamespace](https://docs.aws.amazon.com/redshift-serverless/latest/APIReference/API_CreateNamespace.html) | create-namespace | Creates a namespace. By default, Amazon Redshift Serverless creates namespaces with a default AWS Key Management Service key, but you can specify another key to encrypt your data. You can also create a namespace by restoring a snapshot. See [Working with snapshots and recovery points](https://docs.aws.amazon.com/redshift/latest/mgmt/serverless-snapshots-recovery.html) for more information. | 
| [UpdateNamespace](https://docs.aws.amazon.com/redshift-serverless/latest/APIReference/API_UpdateNamespace.html) | update-namespace | Updates a namespace. | 
| [GetNamespace](https://docs.aws.amazon.com/redshift-serverless/latest/APIReference/API_GetNamespace.html) | get-namespace | Retrieves information about a namespace. | 
| [ListNamespaces](https://docs.aws.amazon.com/redshift-serverless/latest/APIReference/API_ListNamespaces.html) | list-namespaces | Retrieves information about a list of namespaces. | 
| [DeleteNamespace](https://docs.aws.amazon.com/redshift-serverless/latest/APIReference/API_DeleteNamespace.html) | delete-namespace | Deletes a namespace. | 
| [CreateWorkgroup](https://docs.aws.amazon.com/redshift-serverless/latest/APIReference/API_CreateWorkgroup.html) | create-workgroup | Creates a workgroup. When creating a workgroup, make sure that you have an existing namespace that you can associate with the workgroup. When creating the workgroup, you can specify compute resources such as subnets, security groups, and RPUs. | 
| [UpdateWorkgroup](https://docs.aws.amazon.com/redshift-serverless/latest/APIReference/API_UpdateWorkgroup.html) | update-workgroup | Updates a workgroup. | 
| [GetWorkgroup](https://docs.aws.amazon.com/redshift-serverless/latest/APIReference/API_GetWorkgroup.html) | get-workgroup | Retrieves information about a workgroup. | 
| [ListWorkgroups](https://docs.aws.amazon.com/redshift-serverless/latest/APIReference/API_ListWorkgroups.html) | list-workgroups | Retrieves information about a list of workgroups. | 
| [DeleteWorkgroup](https://docs.aws.amazon.com/redshift-serverless/latest/APIReference/API_DeleteWorkgroup.html) | delete-workgroup | Deletes a workgroup. | 
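As a minimal sketch of how these operations fit together, the following AWS CLI commands create a namespace and then create a workgroup associated with it. The names, subnet IDs, and security group ID are placeholders for illustration; substitute values from your own VPC.

```
# Create a namespace. By default, it is encrypted with a default
# AWS Key Management Service key unless you specify another key.
aws redshift-serverless create-namespace \
    --namespace-name my-namespace

# Create a workgroup and associate it with the namespace.
# The subnet and security group IDs below are placeholders.
aws redshift-serverless create-workgroup \
    --workgroup-name my-workgroup \
    --namespace-name my-namespace \
    --base-capacity 32 \
    --subnet-ids subnet-0123456789abcdef0 subnet-0fedcba9876543210 \
    --security-group-ids sg-0123456789abcdef0
```

The namespace must exist before you create the workgroup, because `create-workgroup` requires an existing namespace to associate with.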

# Workgroups
<a name="serverless-console-configure-workgroup-working"></a>

With Amazon Redshift Serverless, you can create and manage workgroups to isolate and control compute resources for different workloads or users. Workgroups let you set configuration options, such as memory and concurrency scaling limits, and prioritize query execution across workloads. A workgroup groups together compute resources such as RPUs, VPC subnets, and security groups.

# Creating a workgroup with a namespace
<a name="serverless-console-workgroups-create-workgroup-wizard"></a>

Complete the following steps to create a workgroup. For more information about workgroup configuration, see [Workgroup properties](serverless-console-workgroups.md#serverless-workgroup-describe).

1. Choose the **Serverless dashboard**. Then choose **Create workgroup**.

1. Enter the workgroup name.

1. Choose an **IP address type** for the workgroup. Choices include: 
   + **IPv4** – With this option, your AWS resources only communicate over the IPv4 addressing protocol.
   + **Dual-stack mode** – With this option, your AWS resources can communicate over IPv4, IPv6, or both. You must also associate an IPv6 CIDR block with the VPC and subnets used for your workgroup. You can use the Amazon VPC console to create an Amazon VPC, or update an existing one, to use IPv6 addressing. For more information, see [IPv6 support for your VPC](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-migrate-ipv6.html) in the *Amazon VPC User Guide*. 

1. Choose a **Virtual private cloud (VPC)** for Amazon Redshift Serverless. This assigns the workgroup to a specific virtual network in your AWS environment. When using **dual-stack mode**, the Amazon VPC you choose must support IPv6 addressing. For more information, see [Overview of VPCs and subnets](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Subnets.html).

1. Choose one or more **VPC security groups**. For more information, see [Control traffic to resources using security groups](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html).

1. Choose whether to enable extra compute resources for automatic optimizations. For more information, see [ Allocating extra compute resources for automatic database optimization ](https://docs.aws.amazon.com/redshift/latest/dg/t_extra-compute-autonomics.html) in the *Amazon Redshift Database Developer Guide*.

1. Under **Subnet**, specify one or more subnets to associate with your database. These subnets are contained in the Amazon VPC you chose previously and must be in three distinct Availability Zones. For more information, see [Considerations when using Amazon Redshift Serverless](https://docs.aws.amazon.com/redshift/latest/mgmt/serverless-usage-considerations.html).

1. Select the base RPU capacity that meets your requirements.

## Choose a namespace
<a name="serverless-console-workgroups-choose-namespace"></a>

1. Choose **Create a new namespace** and enter a namespace name, or choose **Add to an existing namespace** and select a namespace from the drop-down list.

1. For **Database name and password**, specify the name of the first database. You can also specify an admin other than your default console admin by editing the **Admin user credentials**.

1. For **Permissions**, choose **Associate IAM role** to associate specific IAM roles with your namespace and workgroup. For more information about associating IAM roles with Amazon Redshift, see [Identity and access management in Amazon Redshift](https://docs.aws.amazon.com/redshift/latest/mgmt/redshift-iam-authentication-access-control.html).

1. You can customize your encryption settings by creating a new key or choosing a key other than the default. For **Audit logging**, choose the logs to export. Each log type specifies different metadata. Choose **Continue** to review your choices.

## Review workgroup selections
<a name="serverless-console-workgroups-review-workgroup"></a>

1. Review your settings under **Review and create**. It shows the settings you chose in the previous steps.

1. Choose **Save**.

After you create the workgroup, it's added to the **Workgroups** list.

# Viewing properties for a workgroup
<a name="serverless-console-workgroups"></a>

In Amazon Redshift Serverless, a workgroup is a collection of compute resources available for use. When you choose Amazon Redshift Serverless in the AWS console, you can choose **Workgroup configuration** from the navigation menu to view a list of workgroups. You can use the **Search** box to find workgroups that meet your search criteria. Each workgroup entry displays a few properties:
+ **Workgroup** - The name of the workgroup. You can select it to view and edit the workgroup's properties.
+ **Status** - Shows whether the workgroup is available.
+ **Namespace** - The namespace associated with the workgroup. Each workgroup is associated with one namespace.
+ **Creation date** - The date (UTC) that the workgroup was created.
+ **Tags** - Tags associated with the workgroup.

Additionally, **Workgroup configuration** has another list for managed workgroups, which are Amazon Redshift Serverless workgroups managed by AWS Glue. For more information on managed workgroups, see [Managed workgroups](https://docs.aws.amazon.com/redshift/latest/dg/iceberg-integration-managed-workgroups.html) in the Amazon Redshift Database Developer Guide.

## Workgroup properties
<a name="serverless-workgroup-describe"></a>

You can list workgroups by choosing **Workgroup configuration** in the left menu and then choosing a workgroup from the list. Several panels show properties for the workgroup, and you can also perform actions on it. **General information** displays the following:
+ **Workgroup** - The name of the workgroup.
+ **Namespace** - The namespace associated with the workgroup. You can choose it to view its properties. A workgroup is associated with a single namespace.
+ **Date created** - When the workgroup was created.
+ **Status** - Indicates whether the workgroup resources are available. If the workgroup is available, you can connect to the Amazon Redshift Serverless instance with a client to query data or create database resources, or you can connect with query editor v2.
+ **Endpoint** - The endpoint URL for the workgroup.
+ **JDBC URL** - The URL to establish JDBC client connections. You can use this URL to connect with a JDBC driver for Amazon Redshift. For more information, see [Configuring a connection for JDBC driver version 2.x for Amazon Redshift](https://docs.aws.amazon.com/redshift/latest/mgmt/jdbc20-install.html).
+ **ODBC URL** - The URL to establish ODBC client connections. It contains properties, like the database and user ID, and their values.
+ **Workgroup version and Patch version** - Amazon Redshift Serverless regularly releases new versions and patches. You can use the workgroup version and Patch version numbers to track software updates to your Amazon Redshift Serverless workgroup. For more information about changes and features in specific patches, see [Cluster versions for Amazon Redshift](https://docs.aws.amazon.com/redshift/latest/mgmt/cluster-versions.html).
+ **Workgroup ARN** ‐ The Amazon Resource Name for the workgroup.
+ **Extra compute resources for automatic optimizations** ‐ Whether Amazon Redshift is allocating extra compute resources to perform automatic optimizations, even during periods of heavy usage. For more information, see [ Allocating extra compute resources for automatic database optimization ](https://docs.aws.amazon.com/redshift/latest/dg/t_extra-compute-autonomics.html) in the *Amazon Redshift Database Developer Guide*.

The **Data access** tab contains several panels:
+ **Network and security** - You can see network properties, such as the **Virtual private cloud (VPC)** identifier, the **VPC security group** list, **Enhanced VPC routing**, **IP address type**, and the **Publicly accessible** setting. If you choose **Edit**, you can change these settings. Additionally, you can select **Turn on enhanced VPC routing**, which routes network traffic between your serverless database and data repositories through a VPC, for enhanced privacy and security. You can also select **Turn on Publicly accessible**, which makes the database publicly accessible from outside the VPC so that instances and devices can connect.

  The **IP address type** can be set to dual-stack mode to support access to workgroups over both IPv4 and IPv6 at the same time. For more information about the network-layer Internet Protocol (IP), see [Internet Protocol](https://en.wikipedia.org/wiki/Internet_Protocol) in *Wikipedia*.
+ **Redshift managed VPC endpoints** - You can create managed VPC endpoints to access Amazon Redshift Serverless from another VPC.

The **Limits** tab has settings for controlling capacity and use limits for Amazon Redshift Serverless. It contains the following panels:
+ **Base capacity in Redshift processing units (RPUs)** - You can set the base capacity of the compute resources used to process your workload. For more information, see [Compute capacity for Amazon Redshift Serverless](serverless-capacity.md).
+ **Usage limits** - You can set up to four limits on the maximum compute resources that your Amazon Redshift Serverless instance can use in a time period, and select actions for Amazon Redshift Serverless to perform when it reaches those limits. For example, you can set two limits for your workgroup, one of 500 RPU hours and one of 900 RPU hours. You can have Amazon Redshift Serverless send you an alert when it reaches the first limit of 500 RPU hours, then turn off user queries when it reaches the second limit of 900 RPU hours. These limits help control your costs and make them more predictable.
+ **Query limits** - You can set limits on queries, like the timeout setting. These limits help you optimize cost and performance.

The **Tags** tab has the **Tags** panel, which shows any tags that you created for your workgroup. For more information about tagging resources, see [Tagging resources in Amazon Redshift Serverless](serverless-tagging-resources.md).

## Managed workgroup properties
<a name="serverless-managed-workgroup-describe"></a>

You can also choose workgroups managed by the AWS Glue Data Catalog under the **Managed workgroups** list.

Managed workgroups have different properties from regular workgroups. For more information on managed workgroups, see [Managed workgroups](https://docs.aws.amazon.com/redshift/latest/dg/iceberg-integration-managed-workgroups.html) in the Amazon Redshift Database Developer Guide.

**General information** displays the following: 
+ **Workgroup** - The name of the managed workgroup.
+ **Date created** - The date (UTC) that the managed workgroup was created.
+ **Catalog ARN** - The Amazon Resource Name (ARN) for the managed workgroup in the AWS Glue Data Catalog.
+ **Status** - Indicates if the managed workgroup's compute resources are available. If the resources are available, you can connect to the catalog that uses the managed workgroup with an Apache Iceberg-compatible SQL client in order to query data or create database resources. You can also connect to the catalog using the Amazon Redshift query editor v2. 

**Query and database monitoring** contains the **Managed workgroup performance** graph, showing the average elapsed time of all queries from the workgroup over time. 

The **Query history** tab is a list of all queries from the managed workgroup. Its details include information such as the user who ran the query, the client engine from which the query originated, and the query's ID and status. The **Users** tab is a list of all the users in the workgroup. The **Performance metrics** tab shows various metrics, such as average query time, number of completed queries, and percentage of storage capacity used.

# Deleting a workgroup
<a name="serverless_delete-workgroup"></a>

You can delete a workgroup using the console. Before you do, make sure that you have backed up your data and have snapshots in place. In many cases, resources deleted along with the workgroup can't be retrieved.

Complete the following steps:

1. Choose **Amazon Redshift Serverless**, choose **Workgroup configuration**, and then choose **Delete Amazon Redshift Serverless instance**.

1. A dialog box opens. Deleting the workgroup removes all usage limits, all VPC endpoints, and access to those endpoints.

   Type *delete* and select **Delete** to confirm.

After you complete the steps, the status of the workgroup is *Deleting* and a banner indicates that the workgroup is being deleted. While the delete process is in progress, some features under the **Serverless dashboard** are disabled, but you can still configure provisioned clusters on the **Provisioned clusters dashboard**.

After you delete the workgroup, it doesn't appear with the namespace. You can choose the **Create workgroup** button to create a new one.

You can delete an existing workgroup and associate a new workgroup with a different configuration to the same namespace. When creating the new workgroup, choose the base capacity that works with the size of the data associated with the namespace.

You can associate a workgroup with a namespace that was created with a customer-managed key (CMK). For more information about AWS KMS, see [AWS KMS concepts](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html).

# Namespaces
<a name="serverless-console-configure-namespace-working"></a>

In Amazon Redshift Serverless, a namespace defines a logical container for database objects. It can hold tables and other database resources. If you haven't created a workgroup and a namespace, and you are looking for instructions on how to get started with Amazon Redshift Serverless, see [Setting up Amazon Redshift Serverless for the first time](https://docs.aws.amazon.com/redshift/latest/mgmt/serverless-console-first-time-setup.html).

## Namespace properties
<a name="serverless-console-namespace-config"></a>

In Amazon Redshift Serverless, a namespace defines a container for database objects. You can choose **Namespace configuration** from the navigation list, choose a namespace from the list, and edit its settings.

General information for the namespace includes the following:
+ **Namespace** - The name.
+ **Namespace ID** - The unique identifier.
+ **ARN** - The Amazon Resource Name, a unique identifier used to specify the resource across AWS. It contains properties such as the Region and the service.
+ **Status** - The status, such as **Available**.
+ **Date created** - The date (UTC) that the namespace was created.
+ **Storage used** - The storage space used by the namespace and all of its objects.
+ **Admin user name** - The admin account. This is typically the account used to create the namespace.
+ **Database name** - The name of the database contained by the namespace.
+ **Total table count** - The count of tables in all schemas.

Additional settings and properties for the namespace are on several tabs. These include the following:
+ **Workgroup** - Shows the workgroup associated with the namespace.
+ **Data back up** - On this panel, you can configure and create snapshots, and configure recovery points.
+ **Security and encryption** - You can manage IAM role permissions and view or edit your security and encryption settings. These include your encryption key status, and the setting to turn on audit logging. For more information about audit logging for Amazon Redshift Serverless, see [Audit logging for Amazon Redshift Serverless](https://docs.aws.amazon.com/redshift/latest/mgmt/serverless-audit-logging.html).
+ **Datashares** - Shows datashares. With data sharing, you can provide access to data without the need to copy it or move it. For more information about data sharing, see [Data sharing in Amazon Redshift Serverless](https://docs.aws.amazon.com/redshift/latest/mgmt/serverless-datasharing.html).

# Searching for a namespace
<a name="serverless-console-configure-namespace"></a>

From the Amazon Redshift menu, you can choose a namespace from the **Namespaces** list to view or edit its properties. Information on the console includes the namespace name, the admin name, and other properties.

A namespace's settings and properties are on several tabs. These include the following:
+ **Workgroup** - Shows workgroups associated with the namespace.
+ **Data back up** - You can configure and create snapshots, and configure recovery points.
+ **Security and encryption** - You can manage IAM role permissions and view or edit your security and encryption settings. These include your encryption key status and your audit logging settings.
+ **Datashares** - Shows datashares.

# Editing security and encryption
<a name="serverless-console-configuration-edit-network-settings"></a>

Amazon Redshift Serverless is secured with AWS KMS encryption. You can update encryption settings from the console:

1. Choose **Namespace configuration** from the main menu on the console, choose the namespace to edit, and choose **Edit** on the **Security and encryption** tab. A dialog appears.

1. You can select **Customize encryption settings** and then **Choose an AWS customer managed key** to change the key used to encrypt your resources.

1. For **Audit logging**, choose the logs to export. Each log type specifies different metadata.

1. To complete the configuration update, choose **Save changes**.

# Changing the AWS KMS key for a namespace
<a name="serverless-workgroups-and-namespaces-rotate-kms-key"></a>

In Amazon Redshift, encryption protects data at rest. Amazon Redshift Serverless uses AWS KMS key encryption automatically to encrypt both your Amazon Redshift Serverless resources and snapshots. As a best practice, most organizations review the type of data they store and have a plan to rotate encryption keys on a schedule. The frequency for rotating keys can vary, depending on your policies for data security. Amazon Redshift Serverless supports changing the AWS KMS key for the namespace so you can adhere to your organization's security policies.

When you change the AWS KMS key, the data remains unchanged.

## Changing an AWS KMS key using the console
<a name="serverless-workgroups-and-namespaces-rotate-kms-key-console"></a>


1. Sign in to the AWS Management Console and open the Amazon Redshift console at [https://console.aws.amazon.com/redshiftv2/](https://console.aws.amazon.com/redshiftv2/).

1. On the navigation menu, choose **Namespace configuration**. Choose your namespace from the list.

1. From the **Security and encryption** tab, choose **Edit**.

1. Choose **Customize encryption settings** and then choose a key for the namespace. You can optionally create a new key.

## Changing AWS KMS encryption keys using the AWS CLI
<a name="serverless-workgroups-and-namespaces-rotate-kms-key-cli"></a>

Use `update-namespace` to change the AWS KMS key for the namespace. The following shows the syntax for the command:

```
aws redshift-serverless update-namespace
--namespace-name <namespace-name>
[--kms-key-id <id-of-kms-key>]
# other parameters omitted here
```

The namespace must already exist, or the CLI command results in an error.

The time it takes to change the key depends on the amount of data in Amazon Redshift Serverless. It typically takes fifteen minutes per 8 TB of stored data.
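For example, the following command changes the key for a namespace. The namespace name and key ARN here are placeholders for illustration; substitute your own values.

```
# Change the AWS KMS key for an existing namespace.
# The namespace name and key ARN are placeholders.
aws redshift-serverless update-namespace \
    --namespace-name my-namespace \
    --kms-key-id arn:aws:kms:us-east-1:111122223333:key/eeeeeeee-1111-2222-3333-444444444444
```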

## Limitations
<a name="serverless-workgroups-and-namespaces-rotate-kms-key-limitations"></a>

You can’t change from a customer managed KMS key to an AWS KMS key. In that case, you have to create a new namespace.

You can’t perform other actions while the key is being changed.

# Deleting a namespace
<a name="serverless-console-namespace-delete"></a>

If you want to delete a namespace with an associated workgroup, you have to first delete the workgroup.

On the Amazon Redshift Serverless console, complete the following steps:

1. Choose **Namespace configuration** from the left menu and then choose the namespace you want to delete from the list.

1. Choose **Actions** and select **Delete namespace**.

1. A dialog box opens. You can keep your data by creating a manual snapshot before completing the delete operation.

   Type *delete* and select **Delete** to confirm.

# Monitoring queries and workloads with Amazon Redshift Serverless
<a name="serverless-monitoring"></a>

You can monitor your Amazon Redshift Serverless queries and workload with the provided system views. 

*Monitoring views* are system views in Amazon Redshift Serverless that you use to monitor query and workload usage. These views are located in the `pg_catalog` schema. They are designed to give you the information you need to monitor Amazon Redshift Serverless, which requires much less monitoring detail than provisioned clusters. The SYS system views are designed to work with Amazon Redshift Serverless. To display the information provided by these views, run SQL SELECT statements.

System views are defined to support the following monitoring objectives.

**Workload monitoring**  
You can monitor your query activities over time to:  
+ Understand workload patterns, so you know what is normal (baseline) and what is within business service level agreements (SLAs).
+ Rapidly identify deviation from normal, which might be a transient issue or something that warrants further action.

**Data load and unload monitoring**  
Data movement in and out of Amazon Redshift Serverless is a critical function. You use COPY and UNLOAD to load or unload data, and you must monitor progress closely in terms of bytes and rows transferred and files completed to track adherence to business SLAs. This is normally done by running system table queries frequently (for example, every minute) to track progress and raise alerts for investigation or corrective action if significant deviations are detected.

**Failure and problem diagnostics**  
There are cases where you must take action for query or runtime failures. Developers rely on system tables to self-diagnose issues and determine correct remedies.

**Performance tuning**  
You might need to tune queries that don't meet SLA requirements, either from the start or because they have degraded over time. To tune them, you must have runtime details, including the run plan, statistics, duration, and resource consumption. You need baseline data for offending queries to determine the cause of the deviation and to guide you in improving performance.

**User objects event monitoring**  
You need to monitor actions and activities on user objects, such as refreshing materialized views, vacuum, and analyze. This includes system-managed events like auto-refresh for materialized views. For user-initiated events, you monitor when the event ends; for system-initiated events, you monitor the last successful run.

**Usage tracking for billing**  
You can monitor your usage trends over time to:  
+ Inform budget planning and business expansion estimates.
+ Identify potential cost-saving opportunities like removing cold data.

Use the SYS system views to monitor Amazon Redshift Serverless. For more information about the SYS monitoring views, see [SYS monitoring views](https://docs.aws.amazon.com//redshift/latest/dg/serverless_views-monitoring.html) in the Amazon Redshift Database Developer Guide.
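As a sketch of this kind of monitoring, the following example uses the Amazon Redshift Data API from the AWS CLI to run a SELECT statement against the `SYS_QUERY_HISTORY` monitoring view. The workgroup and database names are placeholders for illustration.

```
# Query recent query activity from the SYS_QUERY_HISTORY monitoring view.
# The workgroup and database names are placeholders.
aws redshift-data execute-statement \
    --workgroup-name my-workgroup \
    --database dev \
    --sql "SELECT query_id, query_text, elapsed_time FROM sys_query_history ORDER BY start_time DESC LIMIT 10;"
```

The statement runs asynchronously; use `aws redshift-data describe-statement` and `aws redshift-data get-statement-result` with the returned statement ID to retrieve the results.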

# Adding a query monitoring policy
<a name="serverless-monitor-access"></a>

A superuser can provide access to users who aren't superusers so that they can perform query monitoring for all users. First, you add a policy for a user or a role to provide query monitoring access. Then, you grant query monitoring permission to the user or role. 

**To add the query monitoring policy**

1. Open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

1. Under **Access management**, choose **Policies**.

1. Choose **Create Policy**.

1. Choose **JSON** and paste the following policy definition.

   ```
   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Effect": "Allow",
               "Action": [
                   "redshift-data:ExecuteStatement",
                   "redshift-data:DescribeStatement",
                   "redshift-data:GetStatementResult",
                   "redshift-data:ListDatabases"
               ],
               "Resource": "*"
           },
           {
               "Effect": "Allow",
               "Action": "redshift-serverless:GetCredentials",
               "Resource": "*"
           }
       ]
   }
   ```


1. Choose **Review policy**.

1. For **Name**, enter a name for the policy, such as `query-monitoring`.

1. Choose **Create policy**.

After you create the policy, you can grant the appropriate permissions.
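If you prefer the AWS CLI, you can create an equivalent policy with `aws iam create-policy`. The policy name and file path below are examples; save the policy JSON shown in the steps above to the file first.

```
# Create the IAM policy from a local JSON file.
# The policy name and file path are examples.
aws iam create-policy \
    --policy-name query-monitoring \
    --policy-document file://policy.json
```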

To provide access, add permissions to your users, groups, or roles:
+ Users and groups in AWS IAM Identity Center:

  Create a permission set. Follow the instructions in [Create a permission set](https://docs.aws.amazon.com//singlesignon/latest/userguide/howtocreatepermissionset.html) in the *AWS IAM Identity Center User Guide*.
+ Users managed in IAM through an identity provider:

  Create a role for identity federation. Follow the instructions in [Create a role for a third-party identity provider (federation)](https://docs.aws.amazon.com//IAM/latest/UserGuide/id_roles_create_for-idp.html) in the *IAM User Guide*.
+ IAM users:
  + Create a role that your user can assume. Follow the instructions in [Create a role for an IAM user](https://docs.aws.amazon.com//IAM/latest/UserGuide/id_roles_create_for-user.html) in the *IAM User Guide*.
  + (Not recommended) Attach a policy directly to a user or add a user to a user group. Follow the instructions in [Adding permissions to a user (console)](https://docs.aws.amazon.com//IAM/latest/UserGuide/id_users_change-permissions.html#users_change_permissions-add-console) in the *IAM User Guide*.

# Granting query monitoring permissions for a user
<a name="serverless-monitor-access-user"></a>

Users with `sys:monitor` permission can view all queries. In addition, users with `sys:operator` permission can cancel queries, analyze query history, and perform vacuum operations.

**To grant query monitoring permission for a user**

1. Enter the following command to provide system monitor access, where *user-name* is the name of the user for whom you want to provide access.

   ```
   grant role sys:monitor to "IAM:user-name";
   ```

1. (Optional) Enter the following command to provide system operator access, where *user-name* is the name of the user for whom you want to provide access.

   ```
   grant role sys:operator to "IAM:user-name";
   ```

# Granting query monitoring permissions for a role
<a name="serverless-monitor-access-role"></a>

Users with a role that has `sys:monitor` permission can view all queries. In addition, users with a role that has `sys:operator` permission can cancel queries, analyze query history, and perform vacuum operations.

**To grant query monitoring permission for a role**

1. Enter the following command to provide system monitor access, where *role-name* is the name of the role for which you want to provide access.

   ```
   grant role sys:monitor to "IAMR:role-name";
   ```

1. (Optional) Enter the following command to provide system operator access, where *role-name* is the name of the role for which you want to provide access.

   ```
   grant role sys:operator to "IAMR:role-name";
   ```

# Setting usage limits, including RPU limits
<a name="serverless-workgroup-max-rpu"></a>

Under the **Limits** tab for a workgroup, you can add one or more usage limits to control the maximum RPUs you use in a given time period, or to set a data sharing usage limit.

1. Choose **Manage usage limits**. The limits section appears at the bottom of the **Compute usage by period** panel.

1. Set a usage limit in number of RPU hours.

1. Choose a **Frequency**, which is either **Daily**, **Weekly**, or **Monthly**. This sets the time period for the usage limit. Choosing **Daily** gives you the most granular control.

1. Set the action. These are the following:
   + **Log to system table** - Adds a record to the system view [SYS\_QUERY\_HISTORY](https://docs.aws.amazon.com/redshift/latest/dg/SYS_QUERY_HISTORY.html). You can query the `usage_limit` column in this view to determine whether a query exceeded the limit.
   + **Alert** - Uses Amazon SNS to set up notification subscriptions and send notifications if a limit is breached. You can choose an existing Amazon SNS topic or create a new one.
   + **Turn off user queries** - Disables queries to stop use of Amazon Redshift Serverless. It also sends a notification.

   The first two actions are informational, but the last turns off query processing.

1. Optionally, you can set a **Cross-Region data sharing usage limit**, which limits how much data, transferred from the producer Region to the consumer Region, consumers can query. To do this, choose **Add limit** and follow the steps.

1. Choose **Save changes** at the bottom of the page to save the limit.

1. Set up to three more limits as needed.
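You can also create a usage limit from the AWS CLI with the `create-usage-limit` command. The following is a minimal sketch; the workgroup ARN, amount, and Region are placeholder values to substitute with your own:

```shell
# Sketch: create a daily RPU-hour usage limit that logs to the system table
# when breached. The workgroup ARN below is a placeholder.
aws redshift-serverless create-usage-limit \
  --resource-arn arn:aws:redshift-serverless:us-east-1:123456789012:workgroup/my-workgroup-id \
  --usage-type serverless-compute \
  --amount 500 \
  --period daily \
  --breach-action log
```

Here `--breach-action log` corresponds to the **Log to system table** console action, and `deactivate` corresponds to **Turn off user queries**.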

For more conceptual information about RPUs and billing, see [Billing for Amazon Redshift Serverless](https://docs.aws.amazon.com/redshift/latest/mgmt/serverless-billing.html).

# Setting query limits
<a name="serverless-workgroup-query-limits"></a>

Under the **Limits** tab for a workgroup, you can add a limit to monitor performance and limits. For more information about query monitoring limits, see [WLM query monitoring rules](https://docs.aws.amazon.com/redshift/latest/dg/cm-c-wlm-query-monitoring-rules.html).

1. Choose **Manage query limits**. Choose **Add new limit** on the **Manage query limits** dialog.

1. Choose the limit type you want to set and enter a value for its corresponding limit.

1. Choose **Save changes** to save the limit.

When you change your query limit and configuration parameters, your database will restart.

# Setting query queues
<a name="serverless-workgroup-query-queues"></a>

Amazon Redshift Serverless supports queue-based query resource management. You can create dedicated query queues with customized monitoring rules for different workloads. This feature provides granular control over resource usage.

Query monitoring rules (QMR) apply only at the Amazon Redshift Serverless workgroup level, affecting all queries run in the workgroup uniformly. The queue-based approach lets you create queues with distinct monitoring rules. You can assign these queues to specific user roles and query groups. Each queue operates independently, with rules affecting only the queries within that queue.

Queues let you set metrics-based predicates and automated responses. For example, you can configure rules to automatically abort queries that exceed time limits or consume too many resources.

## Considerations
<a name="serverless-workgroup-query-queues-considerations"></a>

Consider the following when using serverless queues:
+ The following Workload Management (WLM) configuration keys used in Amazon Redshift provisioned clusters are not supported in Redshift Serverless queues: `max_execution_time`, `short_query_queue`, `auto_wlm`, `concurrency_scaling`, `priority`, `queue_type`, `query_concurrency`, `memory_percent_to_use`, `user_group`, `user_group_wild_card`.

  Additionally, the `change_query_priority` action is not supported in Amazon Redshift Serverless.
+ The hop Action (moving queries between queues) is not supported in Amazon Redshift Serverless.
+ Queue priorities are supported only for Amazon Redshift provisioned clusters.
+ Amazon Redshift Serverless automatically manages scaling and resource allocation for optimal performance, so you don't need to manually configure queue priorities.

## Setting up query queues
<a name="serverless-workgroup-query-queues-setup"></a>

You can create queues under the **Limits** tab for a serverless workgroup using the AWS Management Console, AWS CLI, or Redshift Serverless API.

------
#### [ Console ]

Follow these steps to create a queue for your serverless workgroup.

1. Navigate to your Redshift Serverless workgroup.

1. Select the **Limits** tab.

1. Under **Query Queues**, choose **Enable Queues**.
**Important**  
Enabling query queues is a permanent change. You cannot revert to queue-less monitoring once enabled.

1. Configure your queues using the following parameters:

   **Queue level parameters**
   + `name` - Queue identifier (required, unique, non-empty)
   + `user_role` - Array of user roles (optional)
   + `query_group` - Array of query groups (optional)
   + `query_group_wild_card` - 0 or 1 to enable wildcard matching (optional)
   + `user_group_wild_card` - 0 or 1 to enable wildcard matching (optional)
   + `rules` - Array of monitoring rules (optional)

   **Rule level parameters**
   + `rule_name` - Unique identifier, max 32 chars (required)
   + `predicate` - Array of conditions, 1-3 predicates (required)
   + `action` - "abort" or "log" (required)

   **Predicate level parameters**
   + `metric_name` - Metric to monitor (required)
   + `operator` - "=", "<", or ">" (required)
   + `value` - Numeric threshold (required)

   **Limits**
   + Max 8 queues
   + Max 25 rules across all queues
   + Max 3 predicates per rule
   + Rule names must be unique globally

**Example Configuration**

The following example defines three queues: one for dashboard queries with a short timeout, one for ETL queries with a long timeout, and an admin queue:

```
[
  {
    "name": "dashboard",
    "user_role": ["analyst", "viewer"],
    "query_group": ["reporting"],
    "query_group_wild_card": 1,
    "rules": [
      {
        "rule_name": "short_timeout",
        "predicate": [
          {
            "metric_name": "query_execution_time",
            "operator": ">",
            "value": 60
          }
        ],
        "action": "abort"
      }
    ]
  },
  {
    "name": "ETL",
    "user_role": ["data_scientist"],
    "query_group": ["analytics", "ml"],
    "rules": [
      {
        "rule_name": "long_timeout",
        "predicate": [
          {
            "metric_name": "query_execution_time",
            "operator": ">",
            "value": 3600
          }
        ],
        "action": "log"
      },
      {
        "rule_name": "memory_limit",
        "predicate": [
          {
            "metric_name": "query_temp_blocks_to_disk",
            "operator": ">",
            "value": 100000
          }
        ],
        "action": "abort"
      }
    ]
  },
  {
    "name": "admin_queue",
    "user_role": ["admin"],
    "query_group": ["admin"]
  }
]
```

In this example:
+ Dashboard queries are aborted if they run for more than 60 seconds
+ ETL queries are logged if they run for more than an hour
+ The admin queue doesn't have any resource limits

------
#### [ CLI ]

You can manage queues using the `create-workgroup` or `update-workgroup` AWS CLI commands (the CreateWorkgroup and UpdateWorkgroup API operations) with the `wlm_json_configuration` configuration parameter to specify queues in JSON format.

```
aws redshift-serverless create-workgroup \
  --workgroup-name test-workgroup \
  --namespace-name test-namespace \
  --config-parameters '[{"parameterKey": "wlm_json_configuration", "parameterValue": "[{\"name\":\"dashboard\",\"user_role\":[\"analyst\",\"viewer\"],\"query_group\":[\"reporting\"],\"query_group_wild_card\":1,\"rules\":[{\"rule_name\":\"short_timeout\",\"predicate\":[{\"metric_name\":\"query_execution_time\",\"operator\":\">\",\"value\":60}],\"action\":\"abort\"}]},{\"name\":\"ETL\",\"user_role\":[\"data_scientist\"],\"query_group\":[\"analytics\",\"ml\"],\"rules\":[{\"rule_name\":\"long_timeout\",\"predicate\":[{\"metric_name\":\"query_execution_time\",\"operator\":\">\",\"value\":3600}],\"action\":\"log\"},{\"rule_name\":\"memory_limit\",\"predicate\":[{\"metric_name\":\"query_temp_blocks_to_disk\",\"operator\":\">\",\"value\":100000}],\"action\":\"abort\"}]},{\"name\":\"admin_queue\",\"user_role\":[\"admin\"],\"query_group\":[\"admin\"]}]"}]'
```

------

## Best practices
<a name="serverless-workgroup-query-queues-best-practices"></a>

Keep the following best practices in mind when you use serverless queues.
+ Use separate queues for workloads with distinct limit requirements (e.g., ETL, reporting, or ad-hoc analysis).
+ Start with simple thresholds and adjust based on query behavior and usage patterns. You can monitor query usage patterns using the tables and views documented in [System tables and views for query monitoring rules](https://docs.aws.amazon.com/redshift/latest/dg/cm-c-wlm-query-monitoring-rules.html#cm-c-wlm-qmr-tables-and-views).

# Checking Amazon Redshift Serverless summary data using the dashboard
<a name="serverless-dashboard"></a>

The Amazon Redshift Serverless dashboard contains a collection of panels that show at-a-glance metrics and information about your workgroup and namespace. These panels include the following: 
+ **Resources summary** - Displays high-level information about Amazon Redshift Serverless, such as the storage used and other metrics.
+ **Query summary** - Displays information about queries, including completed queries and running queries. Choose **View details** to go to a screen that has additional filters.
+ **RPU capacity used** - Displays the overall capacity used over a given time period, such as the previous ten hours.
+ **Datashares** - Shows the count of datashares, which are used to share data between, for example, AWS accounts. The metrics show which datashares require authorization, and other information.
+ **Total compute usage** - Shows your total consumed RPU hours for the selected workgroup over a selected time range, up to the last 7 days.

From the dashboard, you can quickly drill into the available metrics to check details about Amazon Redshift Serverless, review queries, or track work items.

# Audit logging for Amazon Redshift Serverless
<a name="serverless-audit-logging"></a>

You can configure Amazon Redshift Serverless to export connection, user, and user-activity log data to a log group in Amazon CloudWatch Logs. With Amazon CloudWatch Logs, you can perform real-time analysis of the log data and use CloudWatch to create alarms and view metrics. You can use CloudWatch Logs to store your log records in durable storage.

You can create CloudWatch alarms to track your metrics using the Amazon Redshift console. For more information on creating alarms, see [Managing alarms](https://docs.aws.amazon.com/redshift/latest/mgmt/performance-metrics-alarms.html).

To export generated log data to Amazon CloudWatch Logs, the respective logs must be selected for export in your Amazon Redshift Serverless configuration settings, on the console. You can do this by choosing the **Namespace configuration** settings, under **Security and encryption**. 

## Log events in CloudWatch
<a name="db-auditing-manage-logs-cloudwatch-monitoring"></a>

After selecting which Redshift logs to export, you can monitor events in Amazon CloudWatch Logs. A new log group is automatically created for Amazon Redshift Serverless, in which `log_type` represents the log type.

```
/aws/redshift/<namespace>/<log_type>
```

When you create your first workgroup and namespace, the namespace is named *default*. The log group name varies according to what you name the namespace.

For example, if you export the connection log, log data is stored in the following log group.

```
/aws/redshift/default/connectionlog
```
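Once exported, you can inspect log events from the command line with the CloudWatch Logs CLI. A minimal sketch, assuming a namespace named `default` and that the connection log has already been exported (the time arithmetic uses epoch seconds, so it works in bash):

```shell
# Sketch: list recent connection-log events for the "default" namespace.
# The log group name follows the /aws/redshift/<namespace>/<log_type> pattern.
# CloudWatch Logs expects --start-time in milliseconds since the epoch.
aws logs filter-log-events \
  --log-group-name /aws/redshift/default/connectionlog \
  --start-time "$(($(date +%s) - 3600))000" \
  --limit 25
```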

Log events are exported to a log group using the serverless log stream. The behavior depends on which of the following conditions are true:
+ **A log group with the specified name exists.** Redshift exports log data using the existing log group. To create log groups with predefined log-retention periods, metric filters, and customer access, you can use automated configuration, such as that provided by **AWS CloudFormation**.
+ **A log group with the specified name doesn't exist.** When a matching log entry is detected in the log for the instance, Amazon Redshift Serverless creates a new log group in Amazon CloudWatch Logs automatically. The log group uses the default log-retention period of *Never Expire*. To change the log-retention period, use the Amazon CloudWatch Logs console, the AWS CLI, or the Amazon CloudWatch Logs API. For more information about changing log-retention periods in CloudWatch Logs, see *Change log data retention* in [Working with log groups and log streams](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Working-with-log-groups-and-streams.html).

To search for information within log events, use the Amazon CloudWatch Logs console, the AWS CLI, or the Amazon CloudWatch Logs API. For more information about searching and filtering log data, see [Searching and filtering log data](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/MonitoringLogData.html). 

## CloudWatch metrics
<a name="db-auditing-manage-logs-cloudwatch-monitoring-metrics"></a>

Amazon Redshift Serverless metrics are divided into compute metrics and data and storage metrics, falling under the workgroup and namespace dimension sets, respectively. For more information about workgroups and namespaces, see [ Workgroups and namespaces](https://docs.aws.amazon.com/redshift/latest/mgmt/serverless-workgroups-and-namespaces.html).

CloudWatch compute metrics are the following:


| Metric name | Units | Description | Dimension sets | 
| --- | --- | --- | --- | 
| QueriesCompletedPerSecond | Number of queries | The number of queries completed each second. | {Database, LatencyRange, Workgroup}, {LatencyRange, Workgroup} | 
| QueryDuration | Microseconds | The average amount of time to complete a query. | {Database, LatencyRange, Workgroup}, {LatencyRange, Workgroup} | 
| QueriesRunning | Number of queries | The number of running queries at a point in time. | {Database, QueryType, Workgroup}, {QueryType, Workgroup} | 
| QueriesQueued | Number of queries | The number of queries in the queue at a point in time. | {Database, QueryType, Workgroup}, {QueryType, Workgroup} | 
| DatabaseConnections | Number of connections | The number of connections to a database at a point in time. | {Database, Workgroup}, {Workgroup} | 
| QueryRuntimeBreakdown | Milliseconds | The total time queries ran, by query stage. | {Database, Stage, Workgroup}, {Stage, Workgroup} | 
| ComputeCapacity | RPU | Average number of compute units allocated during the past 30 minutes, rounded up to the nearest integer. | {Workgroup} | 
| ComputeSeconds | RPU-seconds | Accumulated compute-unit seconds used in the last 30 minutes. | {Workgroup} | 
| QueriesSucceeded | Number of queries | The number of queries that succeeded in the last 5 minutes. | {Database, QueryType, Workgroup}, {QueryType, Workgroup} | 
| QueriesFailed | Number of queries | The number of queries that failed in the last 5 minutes. | {Database, QueryType, Workgroup}, {QueryType, Workgroup} | 
| UsageLimitAvailable | RPU-hours or TBs | Depending on the UsageType, UsageLimitAvailable returns the following: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/mgmt/serverless-audit-logging.html)  | {UsageLimitId, UsageType, Workgroup} | 
| UsageLimitConsumed | RPU-hours or TBs | Depending on the UsageType, UsageLimitConsumed returns the following: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/mgmt/serverless-audit-logging.html)  | {UsageLimitId, UsageType, Workgroup} | 
| ExtraComputeForAutomaticOptimizationChargedSeconds | RPU-seconds | Number of compute-unit seconds charged for automatic optimization operations in the last 30 minutes.  | {Workgroup} | 
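You can retrieve these metrics programmatically as well as through the console. The following sketch fetches average `ComputeCapacity` for a workgroup; the `AWS/Redshift-Serverless` CloudWatch namespace and the workgroup name `default` are assumptions to adapt to your environment, and the `date -d` flags assume GNU date:

```shell
# Sketch: fetch average RPU capacity for the last hour in 5-minute periods.
# Dimension sets above determine which Name=...,Value=... pairs you can pass.
aws cloudwatch get-metric-statistics \
  --namespace AWS/Redshift-Serverless \
  --metric-name ComputeCapacity \
  --dimensions Name=Workgroup,Value=default \
  --statistics Average \
  --period 300 \
  --start-time "$(date -u -d '1 hour ago' +%Y-%m-%dT%H:%M:%SZ)" \
  --end-time "$(date -u +%Y-%m-%dT%H:%M:%SZ)"
```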

CloudWatch data and storage metrics are the following:


| Metric name | Units | Description | Dimension sets | 
| --- | --- | --- | --- | 
| TotalTableCount | Number of tables | The number of user tables existing at a point in time. This total doesn't include Amazon Redshift Spectrum tables. | {Database, Namespace} | 
| DataStorage | Megabytes | The number of megabytes used, in disk or storage space, for Redshift data. | {Namespace} | 

The `SnapshotStorage` metric is namespace- and workgroup-agnostic:


| Metric name | Units | Description | Dimension sets | 
| --- | --- | --- | --- | 
| SnapshotStorage | Megabytes | The number of megabytes used, in disk or storage space, for snapshots. | {} | 

Dimension sets are the grouping dimensions applied to your metrics. You can use these dimension groups to specify how your statistics are retrieved.

The following table details dimensions and dimension values for specific metrics:


| Dimension | Description and values | 
| --- | --- | 
| DatabaseName | The name of the database. A custom value. | 
| Latency | Possible values are as follows: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/mgmt/serverless-audit-logging.html)  | 
| QueryType | Possible values are INSERT, DELETE, UPDATE, UNLOAD, LOAD, SELECT, CTAS, and OTHER. | 
| Stage | The execution stages for a query. Possible values are as follows: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/mgmt/serverless-audit-logging.html) | 
| Namespace | The name of the namespace. A custom value. | 
| Workgroup | The name of the workgroup. A custom value. | 
| UsageLimitId | The identifier of the usage limit. | 
| UsageType | The Amazon Redshift Serverless feature being limited. Possible values are as follows: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/mgmt/serverless-audit-logging.html)  | 

# Snapshots and recovery points
<a name="serverless-snapshots-recovery-points"></a>

A backup in Amazon Redshift Serverless is a point-in-time representation of the objects and data in your namespace. There are two types of backups: snapshots that are manually created and recovery points that Amazon Redshift Serverless automatically creates for you. 

Amazon Redshift Serverless automatically creates recovery points every 30 minutes or after every 5 GB of data changes per node, whichever happens first. For larger datasets (more than 5 GB × number of nodes), the minimum interval between recovery points is 15 minutes. All recovery points are kept for 24 hours. 

**Note**  
You cannot create your own snapshot schedule to control when recovery points are created.

Amazon Redshift Serverless creates snapshots in Redshift Managed Storage (RMS). For more information, see [Compute capacity for Amazon Redshift Serverless](serverless-capacity.md).

**Note**  
No-backup tables aren't supported for RA3 provisioned clusters and Amazon Redshift Serverless workgroups. A table marked as no-backup in an RA3 cluster or serverless workgroup is treated as a permanent table that will always be backed up while taking a snapshot, and always restored when restoring from a snapshot. To avoid snapshot costs for no-backup tables, truncate them before taking a snapshot.

If you find that you want to retrieve the data in a snapshot or a recovery point, you can restore a snapshot to a serverless namespace or to a provisioned cluster. There are three scenarios in which you can restore snapshots:
+ Restore a serverless snapshot to a serverless namespace.
+ Restore a serverless snapshot to a provisioned cluster.
+ Restore a provisioned cluster snapshot to a serverless namespace.

When you restore a serverless snapshot to a provisioned cluster, you must choose the node type to use, such as RA3, and the number of nodes, letting you control settings at the cluster or node level.

To restore a provisioned cluster snapshot to a serverless namespace, start from the Redshift provisioned console, choose the snapshot to restore, then choose **Restore from snapshot**, **Restore to serverless namespace**. Amazon Redshift converts tables with interleaved keys into compound sort keys when you restore a provisioned cluster snapshot to a serverless namespace. For more information about sort keys, see [Working with sort keys](https://docs.aws.amazon.com//redshift/latest/dg/t_Sorting_data.html). 

If you want to add additional context, you can tag snapshots and recovery points with key-value pairs that provide metadata and information to snapshots and recovery points. For more information about tagging resources, see [Tagging resources overview](https://docs.aws.amazon.com/redshift/latest/mgmt/serverless-tagging-resources.html).

Finally, you can also share snapshots with other AWS accounts, which lets them access data within the snapshot and run queries.

## AWS Backup integration
<a name="serverless-backup"></a>

You can also create and restore snapshots using AWS Backup, a fully managed service that helps you centralize and automate data protection across AWS services, in the cloud, and on premises. For more information, see [AWS Backup integration with Amazon Redshift](managing-aws-backup.md). For information on AWS Backup, see [What is AWS Backup?](https://docs.aws.amazon.com/aws-backup/latest/devguide/whatisbackup.html) in the *AWS Backup Developer Guide*. 

# Creating a snapshot
<a name="serverless-snapshots"></a>

**Note**  
No-backup tables aren't supported for RA3 provisioned clusters and Amazon Redshift Serverless workgroups. A table marked as no-backup in an RA3 cluster or serverless workgroup is treated as a permanent table that will always be backed up while taking a snapshot, and always restored when restoring from a snapshot. To avoid snapshot costs for no-backup tables, truncate them before taking a snapshot.

To create a snapshot, perform the steps in the following procedure.

**To create a snapshot**

1. On the Amazon Redshift Serverless console, choose **Data backup**.

1. Choose **Create snapshot**.

1. Choose a namespace to create a snapshot of.

1. Enter a snapshot identifier.

1. (Optional) Choose a retention period. If you choose **Custom value**, choose the number of days, which must be between 1 and 3653 days, inclusive. The default is to retain the snapshot indefinitely.

1. Choose **Create**.
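The same operation is available from the AWS CLI through the `create-snapshot` command. A minimal sketch, with placeholder namespace and snapshot names and a 7-day retention:

```shell
# Sketch: snapshot the "default" namespace, retaining the snapshot for 7 days.
# Omit --retention-period to keep the snapshot indefinitely.
aws redshift-serverless create-snapshot \
  --namespace-name default \
  --snapshot-name my-manual-snapshot \
  --retention-period 7
```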

**To create a snapshot from namespace configuration**

1. On the Amazon Redshift Serverless console, choose **Namespace configuration**.

1. Choose the namespace to create a snapshot of. You can only create snapshots of namespaces that are associated with a workgroup and whose statuses are Available.

1. Choose the **Data backup** tab.

1. Choose **Create snapshot**.

1. Enter a snapshot identifier.

1. (Optional) Choose a retention period. If you choose **Custom value**, choose the number of days, which must be between 1 and 3653 days, inclusive.

1. Choose **Create**.

# Creating a final snapshot
<a name="serverless-snapshot-create-final"></a>

To create a final snapshot of all data within a namespace before deleting the namespace, perform the steps in the following procedure.

**To create a final snapshot**

1. On the Amazon Redshift Serverless console, choose **Namespace configuration**.

1. Choose the namespace to delete.

1. Choose **Actions**, **Delete**.

1. Choose **Create final snapshot**.

1. Enter a name for the snapshot.

1. Enter **delete** in the text input field.

1. Choose **Delete**.

# Sharing a snapshot or removing snapshot permissions
<a name="serverless-snapshot-share"></a>

To share a snapshot with another AWS account or remove an account's access to a snapshot, perform the following procedure.

**To share or remove access to a snapshot**

1. On the Amazon Redshift Serverless console, choose **Data backup**.

1. Choose a snapshot to share.

1. Choose **Actions**, **Manage access**.

1. To share a snapshot with another account, enter an **AWS account ID**. To remove access from an account, choose **Remove**.

1. Choose **Save changes**.

# Scheduling a snapshot
<a name="serverless-snapshot-scheduling"></a>

To precisely control when to take a snapshot, you can create a snapshot schedule for specific namespaces. When scheduling snapshot creation, you can create either a one-time event or use a Unix cron expression to create a recurring schedule. Cron expressions support three fields, separated by white space.

```
cron(Minutes Hours Day-of-week)
```


| **Fields** | **Values** | **Wildcards** | 
| --- | --- | --- | 
|  Minutes  |  0–59  |  , - \* /   | 
|  Hours  |  0–23  |  , - \* /   | 
|  Day-of-month  |  1–31  |  , - \* ? / L W  | 
|  Month  |  1–12 or JAN-DEC  |  , - \* /  | 
|  Day-of-week  |  1–7 or SUN-SAT  |  , - \* ? L #  | 
|  Year  |  1970–2199  |  , - \* /  | 

**Wildcards**
+ The **,** (comma) wildcard includes additional values. In the `Day-of-week` field, `MON,WED,FRI` would include Monday, Wednesday, and Friday. Total values are limited to 24 per field.
+ The **-** (dash) wildcard specifies ranges. In the `Hour` field, 1–15 would include hours 1 through 15 of the specified day.
+ The **\*** (asterisk) wildcard includes all values in the field. In the `Hours` field, **\*** would include every hour.
+ The **/** (forward slash) wildcard specifies increments. In the `Hours` field, you could enter **1/10** to specify every 10th hour, starting from the first hour of the day (for example, at 01:00, 11:00, and 21:00).
+ The **?** (question mark) wildcard specifies one or another. In the `Day-of-month` field, you could enter **7**, and if you didn't care what day of the week the seventh was, you could enter **?** in the `Day-of-week` field.
+ The **L** wildcard in the `Day-of-month` or `Day-of-week` field specifies the last day of the month or week.
+ The **W** wildcard in the `Day-of-month` field specifies a weekday. In the `Day-of-month` field, `3W` specifies the weekday closest to the third day of the month.
+ The **#** wildcard in the `Day-of-week` field specifies a certain instance of the specified day of the week within a month. For example, 3#2 would be the second Tuesday of the month: the 3 refers to Tuesday because it is the third day of each week, and the 2 refers to the second day of that type within the month.
**Note**  
If you use a '#' character, you can define only one expression in the `Day-of-week` field. For example, "3#1,6#3" is not valid because it is interpreted as two expressions. 

**Limits**
+ You can't specify the `Day-of-month` and `Day-of-week` fields in the same cron expression. If you specify a value in one of the fields, you must use a **?** (question mark) in the other.
+ Snapshot schedules don't support the following frequencies: 
  + Snapshots scheduled more frequently than 1 per hour.
  + Snapshots scheduled less frequently than 1 per day (24 hours).

  If you have overlapping schedules that result in scheduling snapshots within a 1 hour window, a validation error results.

The following table has some sample cron strings.


| Minutes | Hours | Day of week | Meaning | 
| --- | --- | --- | --- | 
|  0  |  14-20/1  |  TUE  |  Every hour between 2pm and 8pm on Tuesday.  | 
|  0  |  21  |  MON-FRI  |  Every night at 9pm Monday–Friday.  | 
|  30  |  0/6  |  SAT-SUN  |  Every 6 hour increment on Saturday and Sunday starting at 30 minutes after midnight (00:30) that day. This results in a snapshot at [00:30, 06:30, 12:30, and 18:30] each day.  | 
|  30  |  12/4  |  \*  |  Every 4 hour increment starting at 12:30 each day. This resolves to [12:30, 16:30, 20:30].  | 

The following example demonstrates how to create a schedule that runs in 2-hour increments starting at 15:15 each day.

```
cron(15 15/2 *)
```

You can use the Amazon Redshift Serverless console, API, or AWS CLI to create a snapshot schedule.
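From the CLI, a recurring snapshot schedule is expressed as a scheduled action. The following is a hedged sketch using the `create-scheduled-action` command; the role ARN, names, and snapshot prefix are placeholders, and you should check the exact JSON shapes against the current AWS CLI reference:

```shell
# Sketch: take a snapshot every 2 hours starting at 15:15 each day,
# keeping each snapshot for 1 day. The IAM role (placeholder ARN) must
# allow Amazon Redshift Serverless to create snapshots on your behalf.
aws redshift-serverless create-scheduled-action \
  --scheduled-action-name two-hourly-snapshots \
  --namespace-name default \
  --role-arn arn:aws:iam::123456789012:role/RedshiftServerlessSnapshotRole \
  --schedule '{"cron": "(15 15/2 *)"}' \
  --target-action '{"createSnapshot": {"namespaceName": "default", "snapshotNamePrefix": "scheduled", "retentionPeriod": 1}}'
```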

**To schedule a snapshot**

1. On the Amazon Redshift Serverless console, choose **Data backup**.

1. Choose **Snapshot schedules**.

1. Choose **Create schedule**.

1. Enter a name for the snapshot schedule.

1. Select the namespace to create snapshots for.

1. Enter a cron expression for the schedule or use the schedule builder to create one.

1. (Optional) Choose a retention period. If you choose **Custom value**, specify the number of days.

1. Choose **Create schedule**.

# Updating a snapshot retention period
<a name="serverless-snapshot-update"></a>

To update a snapshot retention period, perform the following procedure.

**To update a snapshot retention period**

1. On the Amazon Redshift Serverless console, choose **Data backup**.

1. Choose a snapshot to update.

1. Choose **Actions**, **Set manual snapshot settings.**

1. Choose a retention period. If you choose **Custom value**, choose the number of days.

1. Choose **Save changes**.

# Deleting a snapshot
<a name="serverless-snapshot-delete"></a>

To delete a snapshot, perform the following procedure.

**To delete a snapshot**
**Note**  
You can't delete a snapshot that's been shared with another account. You must first remove that account's access to the snapshot before deleting the snapshot.

1. On the Amazon Redshift Serverless console, choose **Data backup**.

1. Choose a snapshot to delete.

1. Choose **Actions**, **Delete**.

1. Choose **Delete**.

# Restoring a snapshot
<a name="serverless-snapshot-restore"></a>

**Note**  
No-backup tables aren't supported for RA3 provisioned clusters and Amazon Redshift Serverless workgroups. A table marked as no-backup in an RA3 cluster or serverless workgroup is treated as a permanent table that will always be backed up while taking a snapshot, and always restored when restoring from a snapshot. To avoid snapshot costs for no-backup tables, truncate them before taking a snapshot.

Restoring a snapshot to a serverless namespace replaces the current database with the database in the snapshot.

Restoring a snapshot to a serverless namespace is completed in two phases. The first phase completes in a few minutes, restores the data to your namespace, and makes it available for queries. The second phase of restoration is where your database is tuned, which can cause minor performance issues. This second phase can last from a few hours to several days, and in some cases, a couple of weeks. The amount of time depends on the size of the data, but performance progressively improves as the database gets tuned. At the end of this phase, your serverless namespace is fully tuned, and you can submit queries without performance issues.

**To restore a snapshot to a serverless namespace**

1. On the Amazon Redshift Serverless console, choose **Data backup**.

1. Choose the snapshot to restore. You can only restore one snapshot at a time.

1. Choose **Actions**, **Restore to serverless namespace**.

1. Choose an available namespace to restore to. You can only restore to namespaces whose statuses are Available.

1. Choose **Restore**.
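The console steps above map to the `restore-from-snapshot` CLI command. A minimal sketch, with placeholder namespace, workgroup, and snapshot names:

```shell
# Sketch: restore a snapshot into an existing serverless namespace.
# Restoring replaces the namespace's current data with the snapshot's data.
aws redshift-serverless restore-from-snapshot \
  --namespace-name default \
  --workgroup-name default \
  --snapshot-name my-manual-snapshot
```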

**To restore a snapshot to a provisioned cluster**

1. On the Amazon Redshift Serverless console, choose **Data backup**.

1. Choose a snapshot to restore.

1. Choose **Action**, **Restore to provisioned cluster**.

1. Enter a cluster identifier.

1. Choose a **Node type**. The number of nodes depends on the node type.

1. Follow the instructions on the console page to enter the properties for **Cluster configuration**. For more information, see [ Creating a cluster](https://docs.aws.amazon.com//redshift/latest/mgmt/create-cluster.html).

For more information about snapshots on provisioned clusters, see [Amazon Redshift snapshots and backups](https://docs.aws.amazon.com//redshift/latest/mgmt/working-with-snapshots.html).

# Converting a recovery point
<a name="serverless-recovery-point-convert"></a>

Recovery points in Amazon Redshift Serverless are created approximately every 30 minutes and saved for 24 hours. To convert a recovery point to a snapshot, perform the steps in the following procedure.

**To convert a recovery point to a snapshot**

1. On the Amazon Redshift Serverless console, choose **Data backup**.

1. Under **Recovery points**, choose the **Creation time** of the recovery point that you want to convert to a snapshot.

1. Choose **Create snapshot from recovery point**.

1. Enter a **Snapshot identifier**.

1. Choose **Create**.
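You can also convert a recovery point from the AWS CLI with the `convert-recovery-point-to-snapshot` command. The recovery point ID below is a placeholder; you can list IDs with `list-recovery-points`:

```shell
# Sketch: convert a recovery point into a snapshot you can retain and share.
aws redshift-serverless convert-recovery-point-to-snapshot \
  --recovery-point-id 12345678-abcd-1234-abcd-123456789012 \
  --snapshot-name snapshot-from-recovery-point
```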

# Restoring a recovery point
<a name="serverless-recovery-point-restore"></a>

Recovery points in Amazon Redshift Serverless are created approximately every 30 minutes and saved for 24 hours. To restore a recovery point to a serverless namespace, perform the steps in the following procedure.

**To restore a recovery point to a serverless namespace**

1. On the Amazon Redshift Serverless console, choose **Data backup**.

1. Under **Recovery points**, choose the **Creation time** of the recovery point that you want to restore.

1. Choose **Restore**. You can only restore to namespaces whose statuses are Available.

1. Enter **restore** in the text input field and choose **Restore**.

# Copying backups to another AWS Region
<a name="serverless-backup-copy"></a>

 You can configure Amazon Redshift Serverless to automatically copy snapshots and recovery points to another AWS Region. When you create a snapshot in the *source* AWS Region, it's copied to a *destination* Region. You can configure your namespace so that it only copies snapshots and recovery points to one destination AWS Region at a time. For a list of AWS Regions where Amazon Redshift Serverless is available, see the endpoints listed for [Redshift Serverless API](https://docs.aws.amazon.com/general/latest/gr/redshift-service.html) in the *Amazon Web Services General Reference*. 

When you configure copying backups, you can also specify a retention period that determines how long Amazon Redshift Serverless keeps the copied snapshot. You can't change the retention period of recovery points, which must be 1 day. The retention period of a snapshot in the destination Region is separate from the retention period of the snapshot in the source Region. By default, copied snapshots are retained indefinitely. If you choose **Custom value**, specify the number of days, which must be between 1 and 3653, inclusive.

To change the destination Region to copy snapshots to, first disable copying backups, and then specify the new destination Region when you re-enable copying.

Once a snapshot or recovery point is copied to a destination Region, you can use it to restore data to the Region.

By default, your data is encrypted with a key that AWS manages for you. To use a different key, choose the key that you want to use when configuring backup copying in the source AWS Region, and Amazon Redshift Serverless automatically creates a grant, which enables snapshot encryption in the destination AWS Region.

To copy backups to another Region, make sure that you have the following IAM permissions:

```
redshift-serverless:CreateSnapshotCopyConfiguration
redshift-serverless:UpdateSnapshotCopyConfiguration
redshift-serverless:ListSnapshotCopyConfigurations
redshift-serverless:DeleteSnapshotCopyConfiguration
```

If you're using your own KMS key to encrypt your backups, you also need the following permissions:

```
kms:CreateGrant
kms:DescribeKey
```

**To configure copying your snapshots or recovery points to another AWS Region**

1. On the Amazon Redshift Serverless console, choose the namespace for which you want to configure copying snapshots or recovery points.

1. Choose **Actions**, **Configure cross-Region backup**.

1. Choose the destination AWS Region to copy the snapshot to.

1. (Optional) Choose how long to retain the snapshot. If you choose **Custom value**, specify the number of days, which must be between 1 and 3653, inclusive. The default is to retain the snapshot indefinitely.

1. (Optional) Choose a different AWS KMS key to use for encryption in the destination Region.

1. Choose **Save configuration**.
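These console choices correspond to the parameters of the `CreateSnapshotCopyConfiguration` API operation. The following sketch builds the request and enforces the retention bounds described above; the namespace name and Region are hypothetical examples:

```python
def copy_configuration(namespace, destination_region,
                       retention_days=None, kms_key_id=None):
    """Build keyword arguments for the redshift-serverless
    CreateSnapshotCopyConfiguration operation.
    retention_days=None keeps copied snapshots indefinitely (the default)."""
    params = {
        "namespaceName": namespace,
        "destinationRegion": destination_region,
    }
    if retention_days is not None:
        if not 1 <= retention_days <= 3653:
            raise ValueError("retention must be between 1 and 3653 days")
        params["snapshotRetentionPeriod"] = retention_days
    if kms_key_id is not None:
        # Redshift Serverless creates the KMS grant needed for
        # encryption in the destination Region on your behalf.
        params["destinationKmsKeyId"] = kms_key_id
    return params

# To apply the configuration (requires AWS credentials):
# import boto3
# boto3.client("redshift-serverless").create_snapshot_copy_configuration(
#     **copy_configuration("my-namespace", "us-west-2", retention_days=30))
```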

# Restoring a table
<a name="serverless-table-restore"></a>

You can also restore a specific table from a snapshot or a recovery point. When you do so, you specify the source snapshot or recovery point, database, schema, and table, as well as the target database, schema, and new table name. This new table can't have the same name as an existing table. To replace an existing table by restoring a table, first rename or drop the existing table, and then restore the table.

**Note**  
No-backup tables aren't supported for RA3 provisioned clusters and Amazon Redshift Serverless workgroups. A table marked as no-backup in an RA3 cluster or serverless workgroup is treated as a permanent table that will always be backed up while taking a snapshot, and always restored when restoring from a snapshot. However, selective restoration of no-backup tables isn't supported.

 The target table is created using the source table's column definitions, table attributes, and column attributes except for foreign keys. To prevent conflicts due to dependencies, the target table doesn't inherit foreign keys from the source table. Any dependencies, such as views or permissions granted on the source table, aren't applied to the target table. 

If the owner of the source table exists, then that user is the owner of the restored table, provided that the user has sufficient permissions to become the owner of a relation in the specified database and schema. Otherwise, the restored table is owned by the admin user that was created when the cluster was launched.

The restored table returns to the state it was in at the time the backup was taken. This includes the transaction visibility rules that follow from Amazon Redshift's adherence to [serializable isolation](https://docs.aws.amazon.com/redshift/latest/dg/c_serial_isolation.html), meaning that data is immediately visible to in-flight transactions started after the backup.

 You can use the Amazon Redshift Serverless console to restore tables from a snapshot. 

Restoring a table from data backup has the following limitations:
+ You can only restore one table at a time.
+ Any dependencies, such as views or permissions granted on the source table, aren't applied to the target table.
+ If row-level security is turned on for a table being restored, Amazon Redshift Serverless restores the table with row-level security turned on.

**To restore a table using the Amazon Redshift Serverless console**

1. On the Amazon Redshift Serverless console, choose **Data backup**.

1. Choose the snapshot or recovery point that has the table to restore.

1. Choose **Actions**, **Restore table from snapshot** or **Restore table from recovery point**.

1. Enter information about the source snapshot or recovery point and target table, then choose **Restore table**.
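The console form collects the same fields as the `RestoreTableFromSnapshot` API operation. The following sketch assembles the request parameters; all of the names used are hypothetical examples:

```python
def restore_table_request(namespace, workgroup, snapshot,
                          source_db, source_schema, source_table,
                          new_table, target_db=None, target_schema=None):
    """Build keyword arguments for the redshift-serverless
    RestoreTableFromSnapshot operation. Only one table can be restored
    per call, and new_table must not name an existing table."""
    return {
        "namespaceName": namespace,
        "workgroupName": workgroup,
        "snapshotName": snapshot,
        "sourceDatabaseName": source_db,
        "sourceSchemaName": source_schema,
        "sourceTableName": source_table,
        # Target location defaults to the source location when not given.
        "targetDatabaseName": target_db or source_db,
        "targetSchemaName": target_schema or source_schema,
        "newTableName": new_table,
    }

# To execute the restore (requires AWS credentials):
# import boto3
# boto3.client("redshift-serverless").restore_table_from_snapshot(
#     **restore_table_request("my-namespace", "my-workgroup", "my-snapshot",
#                             "dev", "public", "orders", "orders_restored"))
```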

# Data sharing in Amazon Redshift Serverless
<a name="serverless-datasharing"></a>

With *data sharing*, you have live access to data so that your users can see the most up-to-date and consistent information as it's updated in Amazon Redshift Serverless.

You can share data for read purposes across different Amazon Redshift Serverless instances within or across AWS accounts.

You can get started with data sharing by using either the SQL interface or the Amazon Redshift console. For more information, see [Data sharing in Amazon Redshift](https://docs.aws.amazon.com//redshift/latest/dg/datashare-overview.html) in the *Amazon Redshift Database Developer Guide*.

With data sharing, Amazon Redshift Serverless namespaces and provisioned clusters can share live data with each other, whether they are within an AWS account, across AWS accounts, or across AWS Regions. For more information, see [Regions where data sharing is available](https://docs.aws.amazon.com//redshift/latest/dg/data_sharing_regions.html).

To get started sharing data within an AWS account, open the AWS Management Console, and then choose the Amazon Redshift console. Choose **Namespace configuration** and then **Datashares**. 

To start querying data in a datashare, choose a namespace that has a workgroup associated with it, and create a database in that namespace from the datashare.
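
For example, a consumer admin can create a database from a datashare and then query its objects with regular SQL. The datashare name, producer namespace identifier, and table below are hypothetical:

```
CREATE DATABASE sales_db FROM DATASHARE salesshare
OF NAMESPACE 'producer-namespace-guid';

SELECT count(*) FROM sales_db.public.orders;
```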

## Considerations
<a name="getting_started_serverless_datasharing_usage"></a>

Consider the following when working with data sharing in Amazon Redshift Serverless:
+ Amazon Redshift only supports provisioned clusters of instance type ra3.16xlarge, ra3.4xlarge, and ra3.xlplus, and Amazon Redshift Serverless workgroups, as data sharing producers or consumers.
+ Amazon Redshift Serverless is encrypted by default.

For a list of data sharing limitations, including supported database objects, encryption requirements, and sort-key requirements, see [Considerations for data sharing in Amazon Redshift](https://docs.aws.amazon.com/redshift/latest/dg/datashare-considerations.html) in the *Amazon Redshift Database Developer Guide*. 

# Granting access to view datashares
<a name="serverless_datasharing_permissions"></a>

A superuser can provide access to users who aren't superusers so that they can view the datashares created by all users. 

To grant a user access to a datashare, use the following command, where datashare_name is the name of the datashare and test_user is the name of the user that you want to grant access to.

```
grant share on datashare datashare_name to "IAM:test_user";
```

To grant access to a datashare for a user group, first create a user group with users. For information on how to create user groups, see [CREATE GROUP](https://docs.aws.amazon.com//redshift/latest/dg/r_CREATE_GROUP.html). Then, grant datashare access to the group using the following command, where datashare_name is the name of the datashare and user_group is the name of the user group to which you want to grant access.

```
grant share on datashare datashare_name to group user_group;
```

For information on how to use the GRANT statement, see [GRANT](https://docs.aws.amazon.com//redshift/latest/dg/r_GRANT.html).

# Registering namespaces to the AWS Glue Data Catalog
<a name="serverless_datasharing-register-namespace"></a>

You can register entire namespaces to the AWS Glue Data Catalog and create catalogs managed by AWS Glue. You can access these catalogs with any SQL engine that supports the Apache Iceberg REST API. For more information on creating Apache Iceberg-compatible catalogs from Amazon Redshift, see [ Apache Iceberg compatibility for Amazon Redshift](https://docs.aws.amazon.com/redshift/latest/dg/iceberg-integration_overview.html) in the *Amazon Redshift Database Developer Guide*.

**To register a serverless namespace to the AWS Glue Data Catalog**

1. Sign in to the AWS Management Console and open the Amazon Redshift console at [https://console.aws.amazon.com/redshiftv2/](https://console.aws.amazon.com/redshiftv2/).

1. On the navigation menu, choose **Redshift Serverless**. The Serverless dashboard appears. The **Namespaces/Workgroups** section lists the namespaces and workgroups for your account in the current AWS Region. If you don't have any namespaces, choose **Create workgroup** to create a workgroup and its corresponding namespace.

1. Choose the name of the namespace that you want to register.

1.  From **Actions**, choose **Register to AWS Glue Data Catalog**. The **Register to AWS Glue Data Catalog** pop-up box appears. 

1. Enter the AWS account ID that you want to register the namespace to under **Destination account ID**. This is the account ID that will hold the catalog in the AWS Glue Data Catalog.

1.  Enter a name under **Register namespace as**. This will be the namespace’s name in the Data Catalog. 

1.  Choose **Register**. You’ll be taken to the AWS Lake Formation console. 

1.  Follow the catalog creation process in AWS Lake Formation. For information about creating a catalog, see [ Bringing Amazon Redshift data into the AWS Glue Data Catalog](https://docs.aws.amazon.com/lake-formation/latest/dg/managing-namespaces-datacatalog.html) in the *AWS Lake Formation Developer Guide*. 

# Tagging resources in Amazon Redshift Serverless
<a name="serverless-tagging-resources"></a>

In AWS, tags are user-defined labels that consist of key-value pairs. Amazon Redshift Serverless supports tagging to provide metadata about resources at a glance. 

Tags are not required for resources, but they help provide context. You might want to tag resources with metadata related to each resource. For example, suppose that you want to track which resources belong to a test environment and a production environment. You could create a key named `environment` and provide the value `test` or `production` to identify the resources used in each environment. If you use tagging in other AWS services or have standard categories for your business, we recommend that you create the same key-value pairs for resources for consistency. 

 If you delete a resource, any associated tags are deleted. You can use both the AWS CLI and Amazon Redshift Serverless console to tag serverless resources. Available API operations are `TagResource`, `UntagResource`, and `ListTagsForResource`. 

Each resource has one tag set, which is a collection of one or more tags assigned to the resource. Each resource can have up to 50 tags per tag set. You can add tags when you create a resource and after a resource has been created. You can add tags to the following serverless resource types: 
+ Workgroups
+ Namespaces
+ Snapshots
+ Recovery points

Tags have the following requirements:
+ Keys can't be prefixed with `aws:`.
+ Keys must be unique per tag set.
+ A key must be between 1 and 128 allowed characters.
+ A value must be between 0 and 256 allowed characters.
+ Values do not need to be unique per tag set.
+ Allowed characters for keys and values are Unicode letters, digits, white space, and any of the following symbols: _ . : / = + - @. 
+ Keys and values are case sensitive.
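
The requirements above can be checked client-side before calling `TagResource`. The following is a minimal sketch; the regular expression is an assumption derived from the allowed-character list:

```python
import re

# \w covers Unicode letters, digits, and underscore; \s covers white space;
# the remainder are the explicitly allowed symbols (. : / = + - @).
_ALLOWED = re.compile(r"[\w\s.:/=+\-@]*\Z")

def validate_tag(key, value=""):
    """Raise ValueError if a tag key/value pair breaks the rules above."""
    if key.lower().startswith("aws:"):
        raise ValueError("keys can't be prefixed with 'aws:'")
    if not 1 <= len(key) <= 128:
        raise ValueError("keys must be between 1 and 128 characters")
    if len(value) > 256:
        raise ValueError("values must be at most 256 characters")
    for text in (key, value):
        if not _ALLOWED.match(text):
            raise ValueError(f"disallowed characters in {text!r}")
```

Validating locally gives a clearer error than a rejected API call, but the service remains the authority on what it accepts.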

**To manage tags of your Amazon Redshift Serverless resources**

1. On the Amazon Redshift Serverless console, choose **Manage Tags**.

1. Enter the resource type to search for and choose **Search resources**. Choose the resource for which you want to manage tags, then choose **Manage tags**.

1. Specify the keys and optional values you want to add to the resource. When modifying a tag, you can change the tag's value, but not the key.

1. After you're done adding, removing, or modifying tags, choose **Save changes**, then choose **Apply** to save your changes.