

Amazon Redshift will no longer support the creation of new Python UDFs starting with Patch 198. Existing Python UDFs will continue to function until June 30, 2026. For more information, see the [blog post](https://aws.amazon.com/blogs/big-data/amazon-redshift-python-user-defined-functions-will-reach-end-of-support-after-june-30-2026/).

# Database security
<a name="r_Database_objects"></a>

You manage database security by controlling which users have access to which database objects. Users can be assigned to roles or groups, and the permissions that you grant to users, roles, or groups determine which database objects they can access.

**Topics**
+ [Amazon Redshift security overview](c_security-overview.md)
+ [Default database user permissions](r_Privileges.md)
+ [Superusers](r_superusers.md)
+ [Users](r_Users.md)
+ [Groups](r_Groups.md)
+ [Schemas](r_Schemas_and_tables.md)
+ [Role-based access control (RBAC)](t_Roles.md)
+ [Row-level security](t_rls.md)
+ [Metadata security](t_metadata_security.md)
+ [Dynamic data masking](t_ddm.md)
+ [Scoped permissions](t_scoped-permissions.md)

Access to database objects depends on the permissions that you grant to users or roles. The following guidelines summarize how database security works:
+ By default, permissions are granted only to the object owner.
+ Amazon Redshift database users are named users that can connect to a database. A user is granted permissions in two ways: explicitly, by having those permissions assigned directly to the account, or implicitly, by being a member of a group that is granted permissions.
+ Groups are collections of users that can be collectively assigned permissions for streamlined security maintenance.
+ Schemas are collections of database tables and other database objects. Schemas are similar to file system directories, except that schemas cannot be nested. Users can be granted access to a single schema or to multiple schemas.

Additionally, Amazon Redshift employs the following features to give you finer control over which users have access to which database objects:
+ Role-based access control (RBAC) lets you assign permissions to roles, which you can then apply to users, letting you control permissions for large groups of users. Unlike groups, roles can inherit permissions from other roles.
+ Row-level security (RLS) lets you define policies that restrict access to rows of your choosing, then apply those policies to users or groups.
+ Dynamic data masking (DDM) further protects your data by transforming it at query runtime so that you can allow users access to data without exposing sensitive details.

For examples of security implementation, see [Example for controlling user and group access](t_user_group_examples.md).

For more information about protecting your data, see [Security in Amazon Redshift](https://docs.aws.amazon.com/redshift/latest/mgmt/iam-redshift-user-mgmt.html) in the *Amazon Redshift Management Guide*.

# Amazon Redshift security overview
<a name="c_security-overview"></a>



Amazon Redshift database security is distinct from other types of Amazon Redshift security. In addition to database security, which is described in this section, Amazon Redshift provides these features to manage security:
+ **Sign-in credentials** — Access to the Amazon Redshift console is controlled by your AWS account permissions. For more information, see [Sign-in credentials](https://docs.aws.amazon.com/general/latest/gr/aws-security-credentials.html).
+ **Access management** — To control access to specific Amazon Redshift resources, you define AWS Identity and Access Management (IAM) accounts. For more information, see [Controlling access to Amazon Redshift resources](https://docs.aws.amazon.com/redshift/latest/mgmt/iam-redshift-user-mgmt.html).
+ **Cluster security groups** — To grant other users inbound access to an Amazon Redshift cluster, you define a cluster security group and associate it with a cluster. For more information, see [Amazon Redshift cluster security groups](https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-security-groups.html).
+ **VPC** — To protect access to your cluster by using a virtual networking environment, you can launch your cluster in an Amazon Virtual Private Cloud (VPC). For more information, see [Managing clusters in Virtual Private Cloud (VPC)](https://docs.aws.amazon.com/redshift/latest/mgmt/managing-clusters-vpc.html).
+ **Cluster encryption** — To encrypt the data in all your user-created tables, you can turn on cluster encryption when you launch the cluster. For more information, see [Amazon Redshift clusters](https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-clusters.html).
+ **SSL connections** — To encrypt the connection between your SQL client and your cluster, you can use Secure Sockets Layer (SSL) encryption. For more information, see [Connect to your cluster using SSL](https://docs.aws.amazon.com/redshift/latest/mgmt/connecting-ssl-support.html).
+  **Load data encryption** — To encrypt your table load data files when you upload them to Amazon S3, you can use either server-side encryption or client-side encryption. When you load from server-side encrypted data, Amazon S3 handles decryption transparently. When you load from client-side encrypted data, the Amazon Redshift COPY command decrypts the data as it loads the table. For more information, see [Uploading encrypted data to Amazon S3](t_uploading-encrypted-data.md).
+ **Data in transit** — To protect your data in transit within the AWS Cloud, Amazon Redshift uses hardware accelerated SSL to communicate with Amazon S3 or Amazon DynamoDB for COPY, UNLOAD, backup, and restore operations.
+ **Column-level access control** — To have column-level access control for data in Amazon Redshift, use column-level grant and revoke statements without having to implement views-based access control or use another system.
+ **Row-level security control** — To have row-level security control for data in Amazon Redshift, create and attach policies to roles or users that restrict access to rows defined in the policy.

# Default database user permissions
<a name="r_Privileges"></a>

When you create a database object, you are its owner. By default, only a superuser or the owner of an object can query, modify, or grant permissions on the object. For users to use an object, you must grant the necessary permissions to the user or the group that contains the user. Database superusers have the same permissions as database owners.

Amazon Redshift supports the following permissions: SELECT, INSERT, UPDATE, DELETE, REFERENCES, CREATE, TEMPORARY, and USAGE. Different permissions are associated with different object types. For information about database object permissions supported by Amazon Redshift, see the [GRANT](r_GRANT.md) command.

Only the owner has permission to modify or destroy an object.

By default, all users have CREATE and USAGE permissions on the PUBLIC schema of a database. To disallow users from creating objects in the PUBLIC schema of a database, use the REVOKE command to remove that permission.

To revoke a permission that was previously granted, use the [REVOKE](r_REVOKE.md) command. The permissions of the object owner, such as DROP, GRANT, and REVOKE permissions, are implicit and cannot be granted or revoked. Object owners can revoke their own ordinary permissions, for example, to make a table read-only for themselves and others. Superusers retain all permissions regardless of GRANT and REVOKE commands.
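As a sketch of this grant-and-revoke flow (the table and group names here are hypothetical examples, not objects from this guide):

```
-- Hypothetical names: the SALES table and the analysts group are examples only.
grant select on table sales to group analysts;

-- Later, remove the permission.
revoke select on table sales from group analysts;

-- Prevent all users from creating objects in the PUBLIC schema.
revoke create on schema public from public;
```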

# Superusers
<a name="r_superusers"></a>

<a name="def_superusers"></a>Database superusers have the same permissions as database owners for all databases.

The *admin* user, which is the user you created when you launched the cluster, is a superuser.

You must be a superuser to create a superuser.

Amazon Redshift system tables and system views are either visible only to superusers or visible to all users. Only superusers can query system tables and system views that are designated "visible to superusers." For information, see [SYS monitoring views](serverless_views-monitoring.md).

Superusers can view all catalog tables. For information, see [System catalog tables](c_intro_catalog_views.md).

A database superuser bypasses all permission checks. Superusers retain all permissions regardless of GRANT and REVOKE commands. Be careful when using a superuser role. We recommend that you do most of your work as a role that is not a superuser. You can create an administrator role with more restrictive permissions. For more information about creating roles, see [Role-based access control (RBAC)](t_Roles.md).

To create a new database superuser, log on to the database as a superuser and issue a CREATE USER command or an ALTER USER command with the CREATEUSER permission.

```
CREATE USER adminuser CREATEUSER PASSWORD '1234Admin';
ALTER USER adminuser CREATEUSER;
```

To create, alter, or drop a superuser, use the same commands to manage users. For more information, see [Creating, altering, and deleting users](r_Users-creatingaltering-and-deleting-users.md).

# Users
<a name="r_Users"></a>

You can create and manage database users using the Amazon Redshift SQL commands CREATE USER and ALTER USER. Or you can configure your SQL client with custom Amazon Redshift JDBC or ODBC drivers. These manage the process of creating database users and temporary passwords as part of the database logon process.

The drivers authenticate database users based on AWS Identity and Access Management (IAM) authentication. If you already manage user identities outside of AWS, you can use a SAML 2.0-compliant identity provider (IdP) to manage access to Amazon Redshift resources. You use an IAM role to configure your IdP and AWS to permit your federated users to generate temporary database credentials and log on to Amazon Redshift databases. For more information, see [Using IAM authentication to generate database user credentials](https://docs.aws.amazon.com/redshift/latest/mgmt/generating-user-credentials.html).

Amazon Redshift users can only be created and dropped by a database superuser. Users are authenticated when they log on to Amazon Redshift. They can own databases and database objects (for example, tables). They can also grant permissions on those objects to users, groups, and schemas to control who has access to which object. Users with CREATE DATABASE rights can create databases and grant permissions to those databases. Superusers have database ownership permissions for all databases.

# Creating, altering, and deleting users
<a name="r_Users-creatingaltering-and-deleting-users"></a>

Database users are global across a data warehouse cluster (not specific to an individual database).
+  To create a user, use the [CREATE USER](r_CREATE_USER.md) command. 
+  To create a superuser, use the [CREATE USER](r_CREATE_USER.md) command with the CREATEUSER option. 
+ To remove an existing user, use the [DROP USER](r_DROP_USER.md) command. 
+ To change a user, for example changing a password, use the [ALTER USER](r_ALTER_USER.md) command. 
+ To view a list of users, query the PG_USER catalog table.

  ```
  select * from pg_user;
  
    usename   | usesysid | usecreatedb | usesuper | usecatupd |  passwd  | valuntil | useconfig
  ------------+----------+-------------+----------+-----------+----------+----------+-----------
   rdsdb      |        1 | t           | t        | t         | ******** |          |
   masteruser |      100 | t           | t        | f         | ******** |          |
   dwuser     |      101 | f           | f        | f         | ******** |          |
   simpleuser |      102 | f           | f        | f         | ******** |          |
   poweruser  |      103 | f           | t        | f         | ******** |          |
   dbuser     |      104 | t           | f        | f         | ******** |          |
  (6 rows)
  ```
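The commands above can be combined into a typical user-management sequence; the user name and passwords in this sketch are examples only:

```
-- Create a user (hypothetical name and password).
create user dbuser password 'Example1234';

-- Change the user's password.
alter user dbuser password 'NewExample1234';

-- Remove the user.
drop user dbuser;
```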

# Groups
<a name="r_Groups"></a>

Groups are collections of users who are all granted whatever permissions are associated with the group. You can use groups to assign permissions. For example, you can create different groups for sales, administration, and support and give the users in each group the appropriate access to the data they need for their work. You can grant or revoke permissions at the group level, and those changes will apply to all members of the group, except for superusers.

To view all user groups, query the PG_GROUP system catalog table:

```
select * from pg_group;
```

For example, to list all database users by group, run the following SQL.

```
SELECT u.usesysid,
       g.groname,
       u.usename
FROM pg_user u
LEFT JOIN pg_group g ON u.usesysid = ANY(g.grolist);
```

# Creating, altering, and deleting groups
<a name="r_Groups-creating-altering-and-deleting-groups"></a>

Only a superuser can create, alter, or drop groups.

You can perform the following actions:
+ To create a group, use the [CREATE GROUP](r_CREATE_GROUP.md) command.
+ To add users to or remove users from an existing group, use the [ALTER GROUP](r_ALTER_GROUP.md) command.
+ To delete a group, use the [DROP GROUP](r_DROP_GROUP.md) command. This command only drops the group, not its member users.
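A minimal sketch of these commands, using hypothetical group and user names:

```
-- Create a group with an initial member (names are examples only).
create group analysts with user dbuser1;

-- Add and remove members.
alter group analysts add user dbuser2;
alter group analysts drop user dbuser1;

-- Drop the group; its member users are not dropped.
drop group analysts;
```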

# Example for controlling user and group access
<a name="t_user_group_examples"></a>

This example creates user groups and users and then grants them various permissions for an Amazon Redshift database that connects to a web application client. This example assumes three groups of users: regular users of a web application, power users of a web application, and web developers.

For information about how to remove a user from a group, see [ALTER GROUP](r_ALTER_GROUP.md).

1. Create the groups where the users will be assigned. The following set of commands creates three different user groups: 

   ```
   create group webappusers;
   
   create group webpowerusers;
   
   create group webdevusers;
   ```

1.  Create several database users with different permissions and add them to the groups.  

   1.  Create two users and add them to the WEBAPPUSERS group:  

      ```
      create user webappuser1 password 'webAppuser1pass'
      in group webappusers;
      
      create user webappuser2 password 'webAppuser2pass'
      in group webappusers;
      ```

   1.  Create a web developer user and add it to the WEBDEVUSERS group:  

      ```
      create user webdevuser1 password 'webDevuser1pass'
      in group webdevusers;
      ```

   1.  Create a superuser. This user will have administrative rights to create other users:  

      ```
      create user webappadmin password 'webAppadminpass1'
      createuser;
      ```

1.  Create a schema to be associated with the database tables used by the web application, and grant the various user groups access to this schema:  

   1.  Create the WEBAPP schema:  

      ```
      create schema webapp;
      ```

   1.  Grant USAGE permissions to the WEBAPPUSERS group:  

      ```
      grant usage on schema webapp to group webappusers;
      ```

   1.  Grant USAGE permissions to the WEBPOWERUSERS group:  

      ```
      grant usage on schema webapp to group webpowerusers;
      ```

   1.  Grant ALL permissions to the WEBDEVUSERS group:  

      ```
      grant all on schema webapp to group webdevusers;
      ```

   The basic users and groups are now set up. You can now alter the users and groups. 

1.  For example, the following command alters the search_path parameter for WEBAPPUSER1.

   ```
   alter user webappuser1 set search_path to webapp, public;
   ```

   The search_path parameter specifies the schema search order for database objects, such as tables and functions, when the object is referenced by a simple name with no schema specified.

1.  You can also add users to a group after creating the group, such as adding WEBAPPUSER2 to the WEBPOWERUSERS group:  

   ```
   alter group webpowerusers add user webappuser2;
   ```

# Schemas
<a name="r_Schemas_and_tables"></a>

A database contains one or more named schemas. Each schema in a database contains tables and other kinds of named objects. By default, a database has a single schema, which is named PUBLIC. You can use schemas to group database objects under a common name. Schemas are similar to file system directories, except that schemas cannot be nested.

Identical database object names can be used in different schemas in the same database without conflict. For example, both MY_SCHEMA and YOUR_SCHEMA can contain a table named MYTABLE. Users with the necessary permissions can access objects across multiple schemas in a database.
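For example, the following sketch creates a table with the same name in two different schemas; the schema and column names are illustrative:

```
create schema my_schema;
create schema your_schema;

-- The same table name can exist in each schema without conflict.
create table my_schema.mytable (id int);
create table your_schema.mytable (id int);

-- Qualify the table name with the schema to pick one.
select count(*) from my_schema.mytable;
select count(*) from your_schema.mytable;
```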

By default, an object is created within the first schema in the search path of the database. For information, see [Search path](#c_Search_path) later in this section.

Schemas can help with organization and concurrency issues in a multiuser environment in the following ways:
+ To let many developers work in the same database without interfering with each other.
+ To organize database objects into logical groups to make them more manageable.
+ To give applications the ability to put their objects into separate schemas so that their names will not collide with the names of objects used by other applications.

## Search path
<a name="c_Search_path"></a>

The search path is defined in the search_path parameter with a comma-separated list of schema names. The search path specifies the order in which schemas are searched when an object, such as a table or function, is referenced by a simple name that does not include a schema qualifier.

If an object is created without specifying a target schema, the object is added to the first schema that is listed in search path. When objects with identical names exist in different schemas, an object name that does not specify a schema will refer to the first schema in the search path that contains an object with that name.

To change the default schema for the current session, use the [SET](r_SET.md) command.
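For example, assuming a schema named webapp exists (the name is an assumption for illustration), the following statement makes it the first schema searched for the current session:

```
-- Put the webapp schema first in the search path for this session.
set search_path to webapp, public;

-- Verify the current setting.
show search_path;
```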

For more information, see the [search_path](r_search_path.md) description in the Configuration Reference.

# Creating, altering, and deleting schemas
<a name="r_Schemas_and_tables-creating-altering-and-deleting-schemas"></a>

Any user can create schemas and alter or drop schemas they own.

You can perform the following actions:
+ To create a schema, use the [CREATE SCHEMA](r_CREATE_SCHEMA.md) command.
+ To change the owner of a schema, use the [ALTER SCHEMA](r_ALTER_SCHEMA.md) command.
+ To delete a schema and its objects, use the [DROP SCHEMA](r_DROP_SCHEMA.md) command.
+ To create a table within a schema, create the table with the format *schema_name.table_name*.

To view a list of all schemas, query the PG_NAMESPACE system catalog table:

```
select * from pg_namespace;
```

To view a list of tables that belong to a schema, query the PG_TABLE_DEF system catalog table. For example, the following query returns a list of tables in the PG_CATALOG schema.

```
select distinct(tablename) from pg_table_def
where schemaname = 'pg_catalog';
```

# Schema-based permissions
<a name="r_Schemas_and_tables-schema-based-privileges"></a>

 Schema-based permissions are determined by the owner of the schema: 
+ By default, all users have CREATE and USAGE permissions on the PUBLIC schema of a database. To disallow users from creating objects in the PUBLIC schema of a database, use the [REVOKE](r_REVOKE.md) command to remove that permission.
+ Unless they are granted the USAGE permission by the object owner, users cannot access any objects in schemas they do not own. 
+ If users have been granted the CREATE permission to a schema that was created by another user, those users can create objects in that schema.
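These rules can be sketched as follows, with hypothetical schema, group, and user names:

```
-- Let a group access objects in a schema it does not own.
grant usage on schema reporting to group report_readers;

-- Let a specific user create objects in that schema.
grant create on schema reporting to etl_user;
```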

# Role-based access control (RBAC)
<a name="t_Roles"></a>

By using role-based access control (RBAC) to manage database permissions, you can simplify the management of security permissions in Amazon Redshift. You can secure access to sensitive data by controlling what users can do at both a broad and a fine level. You can also control user access to tasks that are normally restricted to superusers. By assigning different permissions to different roles and assigning those roles to different users, you can have more granular control of user access.

Users with an assigned role can perform only the tasks that the role permits. For example, a user whose assigned role has only the CREATE TABLE and DROP TABLE permissions is authorized to perform only those tasks. You can control user access by granting different levels of security permissions to different users to access the data they require for their work.

RBAC applies the principle of least permissions to users based on their role requirements, regardless of the types of objects that are involved. Granting and revoking of permissions is performed at the role level, without the need to update permissions on individual database objects.

With RBAC, you can create roles with permissions to run commands that used to require superuser permissions. Users can run these commands, as long as they are authorized with a role that includes these permissions. Similarly, you can also create roles to limit the access to certain commands, and assign the role to either superusers or users that have been authorized with the role.

To learn how Amazon Redshift RBAC works, watch the following video. 

[![AWS Videos](http://img.youtube.com/vi/IhHQ7mZ-tp4/0.jpg)](http://www.youtube.com/watch?v=IhHQ7mZ-tp4)


# Role hierarchy
<a name="t_role_hierarchy"></a>

*Roles* are collections of permissions that you can assign to a user or another role. You can assign system or database permissions to a role. A user inherits permissions from an assigned role. 

In RBAC, users can have nested roles. You can grant roles to both users and roles. When you grant a role r1 to a user, you authorize that user with all the permissions that r1 includes, in addition to any permissions the user already has.

When granting a role (r1) to another role (r2), you authorize r2 with all the permissions from r1. Also, when granting r2 to another role (r3), the permissions of r3 are the combination of the permissions from r1 and r2. Role hierarchy has r2 inherit permissions from r1. Amazon Redshift propagates permissions with each role authorization. Granting r1 to r2 and then r2 to r3 authorizes r3 with all the permissions from the three roles. Thus, by granting r3 to a user, the user has all the permissions from the three roles. 

Amazon Redshift doesn't allow the creation of a role authorization cycle. A role authorization cycle happens when a nested role is assigned back to a role earlier in the role hierarchy, such as r3 being assigned back to r1. For more information about how to create roles and manage role assignments, see [Managing roles in RBAC](r_roles-managing.md). 
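The r1/r2/r3 hierarchy described above can be sketched as follows (the user name is a hypothetical example):

```
create role r1;
create role r2;
create role r3;

-- r2 inherits the permissions of r1; r3 inherits from r2 (and therefore r1).
grant role r1 to role r2;
grant role r2 to role r3;

-- The user receives the combined permissions of r1, r2, and r3.
grant role r3 to user1;

-- This would fail: it creates a role authorization cycle.
-- grant role r3 to role r1;
```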

# Role assignment
<a name="t_role_assignment"></a>

Superusers and regular users with the CREATE ROLE permissions can use the CREATE ROLE statement to create roles. Superusers and role administrators can use the GRANT ROLE statement to grant a role to others. They can use the REVOKE ROLE statement to revoke a role from others, and the DROP ROLE statement to drop roles. Role administrators include role owners and users who have been granted the role with the ADMIN OPTION permission.

Only superusers or role administrators can grant and revoke roles. You can grant or revoke one or more roles to or from one or more roles or users. Use the WITH ADMIN OPTION option in the GRANT ROLE statement to provide the administration options for all the granted roles to all the grantees. 

Amazon Redshift supports different combinations of role assignments, such as granting multiple roles or having multiple grantees. The WITH ADMIN OPTION only applies to users and not to roles. Similarly, use the ADMIN OPTION FOR clause in the REVOKE ROLE statement to remove the administrative authorization from the grantee; only the administrative authorization is revoked, not the role itself.
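For example, the following statement grants the `sample_role2` role to `user2` together with the administrative authorization to grant it onward:

```
GRANT ROLE sample_role2 TO user2 WITH ADMIN OPTION;
```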

The following example revokes the administrative authorization of the `sample_role2` role from `user2`.

```
REVOKE ADMIN OPTION FOR sample_role2 FROM user2;
```

For more information about how to create roles and manage role assignments, see [Managing roles in RBAC](r_roles-managing.md).

# Amazon Redshift system-defined roles
<a name="r_roles-default"></a>

Amazon Redshift provides a few system-defined roles that are defined with specific permissions. System-specific roles start with a `sys:` prefix. Only users with appropriate access can alter system-defined roles or create custom system-defined roles. You can't use the `sys:` prefix for a custom system-defined role. 

The following table summarizes the roles and their permissions.

[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_roles-default.html)

## System-defined roles and users for data sharing
<a name="r_roles-datashare"></a>

 Amazon Redshift creates roles and users for internal use that correspond to datashares and datashare consumers. Each internal role name and user name has the reserved namespace prefix `ds:`. They have the following format: 

[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_roles-default.html)

 A data sharing role is created for each datashare. It holds all permissions currently granted to the datashare. A data sharing user is created for each consumer of a datashare. It is granted permission to a single data sharing role. A consumer added to multiple datashares will have a data sharing user created for each datashare. 

These users and roles are required for data sharing to work properly. They cannot be modified or dropped, and they cannot be accessed or used for any tasks run by customers. You can safely ignore them. For more information about data sharing, see [Sharing data across clusters in Amazon Redshift](https://docs.aws.amazon.com/redshift/latest/dg/datashare-overview.html).

**Note**  
You can't use the `ds:` prefix to create user-defined roles or users.

# System permissions for RBAC
<a name="r_roles-system-privileges"></a>

Following is a list of system permissions that you can grant to or revoke from a role.

[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_roles-system-privileges.html)

# Database object permissions
<a name="r_roles-database-privileges"></a>

Apart from system permissions, Amazon Redshift includes database object permissions that define access options. These include such options as the ability to read data in tables and views, write data, create tables, and drop tables. For more information, see [GRANT](r_GRANT.md).

By using RBAC, you can assign database object permissions to roles, similarly to how you can with system permissions. Then you can assign roles to users, authorize users with system permissions, and authorize users with database permissions.

# ALTER DEFAULT PRIVILEGES for RBAC
<a name="r_roles-alter-default-privileges"></a>

Use the ALTER DEFAULT PRIVILEGES statement to define the default set of access permissions to be applied to objects that are created in the future by the specified user. By default, users can change only their own default access permissions. With RBAC, you can set the default access permissions for roles. For more information, see the [ALTER DEFAULT PRIVILEGES](r_ALTER_DEFAULT_PRIVILEGES.md) command.
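As a sketch, assuming a schema named webapp and a role named sales_ro already exist (both names are assumptions for illustration), the following statement gives the role SELECT on tables created in that schema in the future:

```
-- Future tables created in webapp will be readable by the sales_ro role.
alter default privileges in schema webapp
grant select on tables to role sales_ro;
```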

RBAC enables you to assign database object permissions to roles, similarly to system permissions. Then you can assign roles to users, authorize users with system and/or database permissions.

# Considerations for role usage in RBAC
<a name="r_role-usage-notes"></a>

When working with RBAC roles, consider the following:
+ Amazon Redshift doesn't allow cycles of role authorizations. You can't grant r1 to r2 and then grant r2 to r1.
+ RBAC works for both native Amazon Redshift objects and Amazon Redshift Spectrum tables.
+ As an Amazon Redshift administrator, you can turn on RBAC by upgrading your cluster to the latest maintenance patch to get started. 
+ Only superusers and users with the CREATE ROLE system permission can create roles.
+ Only superusers and role administrators can modify or drop roles.
+ A role name can't be the same as a user name.
+ A role name can't contain invalid characters, such as “:/\n.”
+ A role name can't be a reserved word, such as PUBLIC.
+ The role name can't start with the reserved prefix for default roles, `sys:`.
+ You can't drop a role that has the RESTRICT parameter when it is granted to another role. The default setting is RESTRICT. Amazon Redshift throws an error when you try to drop a role that has inherited another role.
+ Users that don't have admin permissions on a role can't grant or revoke a role.
+ RBAC isn't fully supported for system tables and views. RBAC permissions for system tables and views don't persist through upgrades, downgrades, or resizes. We recommend using [Amazon Redshift system-defined roles](r_roles-default.md) to manage system table and view permissions. For more information on system tables, go to [System tables and views reference](cm_chap_system-tables.md).

# Managing roles in RBAC
<a name="r_roles-managing"></a>

To perform the following actions, use the following commands:
+ To create a role, use the [CREATE ROLE](r_CREATE_ROLE.md) command.
+ To rename a role or change the owner of the role, use the [ALTER ROLE](r_ALTER_ROLE.md) command.
+ To delete a role, use the [DROP ROLE](r_DROP_ROLE.md) command. 
+ To grant a role to a user, use the [GRANT](r_GRANT.md) command. 
+ To revoke a role from a user, use the [REVOKE](r_REVOKE.md) command. 
+ To grant system permissions to a role, use the [GRANT](r_GRANT.md) command. 
+ To revoke system permissions from a role, use the [REVOKE](r_REVOKE.md) command. 

To view a list of roles in your cluster or workgroup, see [SVV_ROLES](r_SVV_ROLES.md).
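Taken together, these commands might look like the following sketch; the role and user names are hypothetical:

```
create role sales_admin;

-- Grant a system permission to the role, then grant the role to a user.
grant create user to role sales_admin;
grant role sales_admin to user1;

-- Undo the assignments and remove the role.
revoke role sales_admin from user1;
revoke create user from role sales_admin;
drop role sales_admin;
```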

# Tutorial: Creating roles and querying with RBAC
<a name="r_tutorial-RBAC"></a>

With RBAC, you can create roles with permissions to run commands that used to require superuser permissions. Users can run these commands, as long as they are authorized with a role that includes these permissions.

In this tutorial, you use role-based access control (RBAC) to manage permissions in a database you create. You then connect to the database and query the database from two different roles to test the functionality of RBAC.

The two roles that you create and use to query the database are `sales_ro` and `sales_rw`. You create the `sales_ro` role and query data as a user with the `sales_ro` role. The `sales_ro` user can use the SELECT command but not the UPDATE command. Then, you create the `sales_rw` role and query data as a user with the `sales_rw` role. The `sales_rw` user can use both the SELECT and UPDATE commands.

Additionally, you can create roles to limit the access to certain commands, and assign the role to either superusers or users.
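The two roles the tutorial builds can be previewed in a sketch; the schema name here is an assumption for illustration, and the tutorial steps below use the actual TICKIT objects:

```
-- Read-only role: can read tables in a sales schema (name is illustrative).
create role sales_ro;
grant usage on schema sales to role sales_ro;
grant select on all tables in schema sales to role sales_ro;

-- Read-write role: inherits read access and adds write permissions.
create role sales_rw;
grant role sales_ro to role sales_rw;
grant update, insert on all tables in schema sales to role sales_rw;
```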

**Tasks**
+ [Prerequisites](#tutorial-rbac-prereqs)
+ [Step 1: Create an administrator user](#tutorial-rbac-step1)
+ [Step 2: Set up schemas](#tutorial-rbac-step2)
+ [Step 3: Create a read-only user](#tutorial-rbac-step3)
+ [Step 4: Query the data as the read-only user](#tutorial-rbac-step4)
+ [Step 5: Create a read-write user](#tutorial-rbac-step5)
+ [Step 6: Query the data as the user with the inherited read-only role](#tutorial-rbac-step6)
+ [Step 7: Grant update and insert permissions to the read-write role](#tutorial-rbac-step7)
+ [Step 8: Query the data as the read-write user](#tutorial-rbac-step8)
+ [Step 9: Analyze and vacuum tables in a database as the administrator user](#tutorial-rbac-step9)
+ [Step 10: Truncate tables as the read-write user](#tutorial-rbac-step10)
+ [System functions for RBAC (optional)](#tutorial-rbac-system-functions)
+ [System views for RBAC (optional)](#tutorial-rbac-system-views)
+ [Use row-level security with RBAC (optional)](#tutorial-rbac-rls)

## Prerequisites
<a name="tutorial-rbac-prereqs"></a>
+ Create an Amazon Redshift cluster or serverless workgroup that is loaded with the TICKIT sample database. To create a serverless workgroup, see [Get started with Redshift Serverless data warehouses](https://docs.aws.amazon.com/redshift/latest/gsg/new-user-serverless.html). To create a cluster, see [Create a sample Amazon Redshift cluster](https://docs.aws.amazon.com/redshift/latest/gsg/rs-gsg-launch-sample-cluster.html). For more information about the TICKIT sample database, see [Sample database](c_sampledb.md).
+ Have access to a user with superuser or role administrator permissions. Only superusers or role administrators can grant or revoke roles. For more information about permissions required for RBAC, see [System permissions for RBAC](r_roles-system-privileges.md).
+ Review the [Considerations for role usage in RBAC](r_role-usage-notes.md).

## Step 1: Create an administrator user
<a name="tutorial-rbac-step1"></a>

In this step, you set up for this tutorial by creating a database administrator role and attaching it to a database administrator user. You must create the database administrator as a superuser or role administrator.

Run all queries in the Amazon Redshift [query editor v2](https://docs.aws.amazon.com/redshift/latest/mgmt/query-editor-v2-using.html).

1. To create the administrator role db\_admin, use the following example.

   ```
   CREATE ROLE db_admin;
   ```

1. To create a database user named dbadmin, use the following example.

   ```
   CREATE USER dbadmin PASSWORD 'Test12345';
   ```

1. To grant the system-defined role named sys:dba to the db\_admin role, use the following example. When granted the sys:dba role, the dbadmin user can create schemas and tables. For more information, see [Amazon Redshift system-defined roles](r_roles-default.md).
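   The example this step refers to isn't shown on the page. Based on the GRANT ROLE syntax used elsewhere in this tutorial, the grants would likely take the following form; verify the exact statements against the [GRANT](r_GRANT.md) reference before running them.

   ```
   GRANT ROLE sys:dba TO ROLE db_admin;
   GRANT ROLE db_admin TO dbadmin;
   ```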

## Step 2: Set up schemas
<a name="tutorial-rbac-step2"></a>

In this step, you connect to your database as the database administrator. Then, you create two schemas and add data to them.

1. Connect to the dev database as the dbadmin user using query editor v2. For more information about connecting to a database, see [Working with query editor v2](https://docs.aws.amazon.com/redshift/latest/mgmt/query-editor-v2-using.html).

1. To create the sales and marketing database schemas, use the following example.

   ```
   CREATE SCHEMA sales;
   CREATE SCHEMA marketing;
   ```

1. To create and insert values into tables in the sales schema, use the following example.

   ```
   CREATE TABLE sales.cat(
   catid smallint,
   catgroup varchar(10),
   catname varchar(10),
   catdesc varchar(50)
   );
   INSERT INTO sales.cat(SELECT * FROM category);
   
   CREATE TABLE sales.dates(
   dateid smallint,
   caldate date,
   day char(3),
   week smallint,
   month char(5),
   qtr char(5),
   year smallint,
   holiday boolean
   );
   INSERT INTO sales.dates(SELECT * FROM date);
   
   CREATE TABLE sales.events(
   eventid integer,
   venueid smallint,
   catid smallint,
   dateid smallint,
   eventname varchar(200),
   starttime timestamp
   );
   INSERT INTO sales.events(SELECT * FROM event);
   
   CREATE TABLE sales.sale(
   salesid integer,
   listid integer,
   sellerid integer,
   buyerid integer,
   eventid integer,
   dateid smallint,
   qtysold smallint,
   pricepaid decimal(8,2),
   commission decimal(8,2),
   saletime timestamp
   );
   INSERT INTO sales.sale(SELECT * FROM sales);
   ```

1. To create and insert values into tables in the marketing schema, use the following example.

   ```
   CREATE TABLE marketing.cat(
   catid smallint,
   catgroup varchar(10),
   catname varchar(10),
   catdesc varchar(50)
   );
   INSERT INTO marketing.cat(SELECT * FROM category);
   
   CREATE TABLE marketing.dates(
   dateid smallint,
   caldate date,
   day char(3),
   week smallint,
   month char(5),
   qtr char(5),
   year smallint,
   holiday boolean
   );
   INSERT INTO marketing.dates(SELECT * FROM date);
   
   CREATE TABLE marketing.events(
   eventid integer,
   venueid smallint,
   catid smallint,
   dateid smallint,
   eventname varchar(200),
   starttime timestamp
   );
   INSERT INTO marketing.events(SELECT * FROM event);
   
   CREATE TABLE marketing.sale(
   marketingid integer,
   listid integer,
   sellerid integer,
   buyerid integer,
   eventid integer,
   dateid smallint,
   qtysold smallint,
   pricepaid decimal(8,2),
   commission decimal(8,2),
   saletime timestamp
   );
   INSERT INTO marketing.sale(SELECT * FROM sales);
   ```

## Step 3: Create a read-only user
<a name="tutorial-rbac-step3"></a>

In this step, you create a read-only role and a salesanalyst user for the read-only role. The sales analyst only needs read-only access to the tables in the sales schema to accomplish their assigned task of finding the events that resulted in the largest commissions.

1. Connect to the database as the dbadmin user.

1. To create the sales\_ro role, use the following example.

   ```
   CREATE ROLE sales_ro;
   ```

1. To create the salesanalyst user, use the following example.

   ```
   CREATE USER salesanalyst PASSWORD 'Test12345';
   ```

1. To grant the sales\_ro role USAGE permission on the sales schema and SELECT permission on all tables in the schema, use the following example.

   ```
   GRANT USAGE ON SCHEMA sales TO ROLE sales_ro;
   GRANT SELECT ON ALL TABLES IN SCHEMA sales TO ROLE sales_ro;
   ```

1. To grant the sales\_ro role to the salesanalyst user, use the following example.

   ```
   GRANT ROLE sales_ro TO salesanalyst;
   ```

## Step 4: Query the data as the read-only user
<a name="tutorial-rbac-step4"></a>

In this step, the salesanalyst user queries data from the sales schema. Then, the salesanalyst user attempts to update a table and read tables in the marketing schema.

1. Connect to the database as the salesanalyst user.

1. To find the 10 sales with the highest commissions, use the following example.

   ```
   SET SEARCH_PATH TO sales;
   SELECT DISTINCT events.dateid, sale.commission, cat.catname
   FROM sale, events, dates, cat   
   WHERE events.dateid=dates.dateid AND events.dateid=sale.dateid AND events.catid = cat.catid
   ORDER BY 2 DESC LIMIT 10;
                  
   +--------+------------+----------+
   | dateid | commission | catname  |
   +--------+------------+----------+
   |   1880 |     1893.6 | Pop      |
   |   1880 |     1893.6 | Opera    |
   |   1880 |     1893.6 | Plays    |
   |   1880 |     1893.6 | Musicals |
   |   1861 |       1500 | Plays    |
   |   2003 |       1500 | Pop      |
   |   1861 |       1500 | Opera    |
   |   2003 |       1500 | Plays    |
   |   1861 |       1500 | Musicals |
   |   1861 |       1500 | Pop      |
   +--------+------------+----------+
   ```

1. To select 10 events from the events table in the sales schema, use the following example.

   ```
   SELECT * FROM sales.events LIMIT 10;
                  
   +---------+---------+-------+--------+--------------------+---------------------+
   | eventid | venueid | catid | dateid |     eventname      |      starttime      |
   +---------+---------+-------+--------+--------------------+---------------------+
   |    4836 |      73 |     9 |   1871 | Soulfest           | 2008-02-14 19:30:00 |
   |    5739 |      41 |     9 |   1871 | Fab Faux           | 2008-02-14 19:30:00 |
   |     627 |     229 |     6 |   1872 | High Society       | 2008-02-15 14:00:00 |
   |    2563 |     246 |     7 |   1872 | Hamlet             | 2008-02-15 20:00:00 |
   |    7703 |      78 |     9 |   1872 | Feist              | 2008-02-15 14:00:00 |
   |    7903 |      90 |     9 |   1872 | Little Big Town    | 2008-02-15 19:30:00 |
   |    7925 |     101 |     9 |   1872 | Spoon              | 2008-02-15 19:00:00 |
   |    8113 |      17 |     9 |   1872 | Santana            | 2008-02-15 15:00:00 |
   |     463 |     303 |     8 |   1873 | Tristan und Isolde | 2008-02-16 19:00:00 |
   |     613 |     236 |     6 |   1873 | Pal Joey           | 2008-02-16 15:00:00 |
   +---------+---------+-------+--------+--------------------+---------------------+
   ```

1. To attempt to update the eventname for eventid 1, run the following example. This example results in a permission denied error because the salesanalyst user only has SELECT permissions on the events table in the sales schema. To update the events table, you must grant the sales\_ro role UPDATE permissions. For more information about granting permissions to update a table, see the UPDATE parameter for [GRANT](r_GRANT.md). For more information about the UPDATE command, see [UPDATE](r_UPDATE.md).

   ```
   UPDATE sales.events
   SET eventname = 'Comment event'
   WHERE eventid = 1;
                     
   ERROR: permission denied for relation events
   ```

1. To attempt to select all from the events table in the marketing schema, use the following example. This example results in a permission denied error because the salesanalyst user has no permissions on the marketing schema. To select data from the events table in the marketing schema, you must grant the sales\_ro role USAGE permission on the marketing schema and SELECT permission on the events table in that schema.

   ```
   SELECT * FROM marketing.events;

   ERROR: permission denied for schema marketing
   ```

## Step 5: Create a read-write user
<a name="tutorial-rbac-step5"></a>

In this step, you create a user for the sales engineer who is responsible for building the extract, transform, and load (ETL) pipeline for data processing in the sales schema. This user first receives read-only access, and is later given read and write access to perform their tasks.

1. Connect to the database as the dbadmin user.

1. To create the sales\_rw role, use the following example.

   ```
   CREATE ROLE sales_rw;
   ```

1. To create the salesengineer user, use the following example.

   ```
   CREATE USER salesengineer PASSWORD 'Test12345';
   ```

1. To grant the sales\_rw role usage and select access to objects of the sales schema by granting it the sales\_ro role, use the following example. For more information on how roles inherit permissions in Amazon Redshift, see [Role hierarchy](t_role_hierarchy.md).

   ```
   GRANT ROLE sales_ro TO ROLE sales_rw;
   ```

1. To assign the sales\_rw role to the salesengineer user, use the following example.

   ```
   GRANT ROLE sales_rw TO salesengineer;
   ```

## Step 6: Query the data as the user with the inherited read-only role
<a name="tutorial-rbac-step6"></a>

In this step, the salesengineer user queries the sales schema with the inherited read-only role and attempts to update the events table before being granted write permissions.

1. Connect to the database as the salesengineer user.

1. The salesengineer user can successfully read data from the events table of the sales schema. To select the event with eventid 1 from the events table in the sales schema, use the following example.

   ```
   SELECT * FROM sales.events where eventid=1;
                     
   +---------+---------+-------+--------+-----------------+---------------------+
   | eventid | venueid | catid | dateid |    eventname    |      starttime      |
   +---------+---------+-------+--------+-----------------+---------------------+
   |       1 |     305 |     8 |   1851 | Gotterdammerung | 2008-01-25 14:30:00 |
   +---------+---------+-------+--------+-----------------+---------------------+
   ```

1. To attempt to select all from the events table in the marketing schema, use the following example. The salesengineer user doesn't have permissions on the marketing schema, so this query results in a permission denied error. To select data from the events table in the marketing schema, you must grant the sales\_rw role USAGE permission on the marketing schema and SELECT permission on the events table in that schema.

   ```
   SELECT * FROM marketing.events;
   
   ERROR: permission denied for schema marketing
   ```

1. To attempt to update the eventname for eventid 1, run the following example. This example results in a permission denied error because the salesengineer user only has SELECT permissions on the events table in the sales schema. To update the events table, you must grant the sales\_rw role UPDATE permissions.

   ```
   UPDATE sales.events
   SET eventname = 'Comment event'
   WHERE eventid = 1;
   
   ERROR: permission denied for relation events
   ```

## Step 7: Grant update and insert permissions to the read-write role
<a name="tutorial-rbac-step7"></a>

In this step, you grant update and insert permissions to the sales\_rw role.

1. Connect to the database as the dbadmin user.

1. To grant UPDATE and INSERT permissions to the sales\_rw role, use the following example.

   ```
   GRANT UPDATE, INSERT ON ALL TABLES IN SCHEMA sales TO ROLE sales_rw;
   ```

## Step 8: Query the data as the read-write user
<a name="tutorial-rbac-step8"></a>

In this step, the salesengineer successfully updates the table after their role is granted insert and update permissions. Next, the salesengineer attempts to analyze and vacuum the events table but fails to do so.

1. Connect to the database as the salesengineer user.

1. To update the eventname for eventid 1, run the following example.

   ```
   UPDATE sales.events
   SET eventname = 'Comment event'
   WHERE eventid = 1;
   ```

1. To view the change made in the previous query, use the following example to select the event with eventid 1 from the events table in the sales schema.

   ```
   SELECT * FROM sales.events WHERE eventid=1;
   
   +---------+---------+-------+--------+---------------+---------------------+
   | eventid | venueid | catid | dateid |   eventname   |      starttime      |
   +---------+---------+-------+--------+---------------+---------------------+
   |       1 |     305 |     8 |   1851 | Comment event | 2008-01-25 14:30:00 |
   +---------+---------+-------+--------+---------------+---------------------+
   ```

1. To analyze the updated events table in the sales schema, use the following example. This example results in an error because the salesengineer user doesn't have the necessary permissions and isn't the owner of the events table in the sales schema. To analyze the events table, you must grant the sales\_rw role permissions to ANALYZE using the GRANT command. For more information about the ANALYZE command, see [ANALYZE](r_ANALYZE.md).

   ```
   ANALYZE sales.events;

   ERROR: skipping "events" --- only table or database owner can analyze
   ```

1. To vacuum the updated events table, use the following example. This example results in an error because the salesengineer user doesn't have the necessary permissions and isn't the owner of the events table in the sales schema. To vacuum the events table, you must grant the sales\_rw role permissions to VACUUM using the GRANT command. For more information about the VACUUM command, see [VACUUM](r_VACUUM_command.md).

   ```
   VACUUM sales.events;
                     
   ERROR: skipping "events" --- only table or database owner can vacuum it
   ```

## Step 9: Analyze and vacuum tables in a database as the administrator user
<a name="tutorial-rbac-step9"></a>

In this step, the dbadmin user analyzes and vacuums all of the tables. The user has administrator permissions on this database, so they are able to run these commands.

1. Connect to the database as the dbadmin user.

1. To analyze the events table in the sales schema, use the following example. 

   ```
   ANALYZE sales.events;
   ```

1. To vacuum the events table in the sales schema, use the following example.

   ```
   VACUUM sales.events;
   ```

1. To analyze the events table in the marketing schema, use the following example. 

   ```
   ANALYZE marketing.events;
   ```

1. To vacuum the events table in the marketing schema, use the following example.

   ```
   VACUUM marketing.events;
   ```

## Step 10: Truncate tables as the read-write user
<a name="tutorial-rbac-step10"></a>

In this step, the salesengineer user attempts to truncate the events table in the sales schema, but only succeeds when granted truncate permissions by the dbadmin user. 

1. Connect to the database as the salesengineer user.

1. To try to delete all of the rows from the events table in the sales schema, use the following example. This example results in an error because the salesengineer user doesn't have the necessary permissions and isn't the owner of the events table in the sales schema. To truncate the events table, you must grant the sales\_rw role the TRUNCATE TABLE permission using the GRANT command. For more information about the TRUNCATE command, see [TRUNCATE](r_TRUNCATE.md).

   ```
   TRUNCATE sales.events;
                  
   ERROR: must be owner of relation events
   ```

1. Connect to the database as the dbadmin user.

1. To grant the TRUNCATE TABLE permission to the sales\_rw role, use the following example.

   ```
   GRANT TRUNCATE TABLE TO ROLE sales_rw;
   ```

1. Connect to the database as the salesengineer user using query editor v2.

1. To read the first 10 events from the events table in the sales schema, use the following example.

   ```
   SELECT * FROM sales.events ORDER BY eventid LIMIT 10;
                  
   +---------+---------+-------+--------+-----------------------------+---------------------+
   | eventid | venueid | catid | dateid |          eventname          |      starttime      |
   +---------+---------+-------+--------+-----------------------------+---------------------+
   |       1 |     305 |     8 |   1851 | Comment event               | 2008-01-25 14:30:00 |
   |       2 |     306 |     8 |   2114 | Boris Godunov               | 2008-10-15 20:00:00 |
   |       3 |     302 |     8 |   1935 | Salome                      | 2008-04-19 14:30:00 |
   |       4 |     309 |     8 |   2090 | La Cenerentola (Cinderella) | 2008-09-21 14:30:00 |
   |       5 |     302 |     8 |   1982 | Il Trovatore                | 2008-06-05 19:00:00 |
   |       6 |     308 |     8 |   2109 | L Elisir d Amore            | 2008-10-10 19:30:00 |
   |       7 |     309 |     8 |   1891 | Doctor Atomic               | 2008-03-06 14:00:00 |
   |       8 |     302 |     8 |   1832 | The Magic Flute             | 2008-01-06 20:00:00 |
   |       9 |     308 |     8 |   2087 | The Fly                     | 2008-09-18 19:30:00 |
   |      10 |     305 |     8 |   2079 | Rigoletto                   | 2008-09-10 15:00:00 |
   +---------+---------+-------+--------+-----------------------------+---------------------+
   ```

1. To truncate the events table in the sales schema, use the following example.

   ```
   TRUNCATE sales.events;
   ```

1. To read the data from the updated events table in the sales schema, use the following example.

   ```
   SELECT * FROM sales.events ORDER BY eventid LIMIT 10;
                  
   +---------+---------+-------+--------+-----------------------------+---------------------+
   | eventid | venueid | catid | dateid |          eventname          |      starttime      |
   +---------+---------+-------+--------+-----------------------------+---------------------+
   ```

### Create read-only and read-write roles for the marketing schema (optional)
<a name="tutorial-rbac-create-marketing-schema"></a>

In this step, you create read-only and read-write roles for the marketing schema.

1. Connect to the database as the dbadmin user.

1. To create read-only and read-write roles for the marketing schema, use the following example.

   ```
   CREATE ROLE marketing_ro;
   
   CREATE ROLE marketing_rw;
   
   GRANT USAGE ON SCHEMA marketing TO ROLE marketing_ro, ROLE marketing_rw;
   
   GRANT SELECT ON ALL TABLES IN SCHEMA marketing TO ROLE marketing_ro;
   
   GRANT ROLE marketing_ro TO ROLE marketing_rw;
   
   GRANT INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA marketing TO ROLE marketing_rw;
   
   CREATE USER marketinganalyst PASSWORD 'Test12345';
   
   CREATE USER marketingengineer PASSWORD 'Test12345';
   
   GRANT ROLE marketing_ro TO marketinganalyst;
   
   GRANT ROLE marketing_rw TO marketingengineer;
   ```

## System functions for RBAC (optional)
<a name="tutorial-rbac-system-functions"></a>

Amazon Redshift provides two functions that return information about role membership: role\_is\_member\_of, which checks whether one role is a member of another role, and user\_is\_member\_of, which checks whether a user is a member of a role or group. These functions are available to superusers and regular users. Superusers can check all role memberships. Regular users can only check membership for roles that they have been granted access to.

To use the role\_is\_member\_of function

1. Connect to the database as the salesengineer user.

1. To check if the sales\_rw role is a member of the sales\_ro role, use the following example.

   ```
   SELECT role_is_member_of('sales_rw', 'sales_ro');
                  
   +-------------------+
   | role_is_member_of |
   +-------------------+
   | true              |
   +-------------------+
   ```

1. To check if the sales\_ro role is a member of the sales\_rw role, use the following example.

   ```
   SELECT role_is_member_of('sales_ro', 'sales_rw');
                  
   +-------------------+
   | role_is_member_of |
   +-------------------+
   | false             |
   +-------------------+
   ```

To use the user\_is\_member\_of function

1. Connect to the database as the salesengineer user.

1. The following example attempts to check the user membership for the salesanalyst user. This query results in an error because salesengineer does not have access to salesanalyst. To run this command successfully, connect to the database as the salesanalyst user and run the example.

   ```
   SELECT user_is_member_of('salesanalyst', 'sales_ro');
                  
   ERROR
   ```

1. Connect to the database as a superuser.

1. To check the membership of the salesanalyst user when connected as a superuser, use the following example.

   ```
   SELECT user_is_member_of('salesanalyst', 'sales_ro');
                  
   +-------------------+
   | user_is_member_of |
   +-------------------+
   | true              |
   +-------------------+
   ```

1. Connect to the database as the dbadmin user.

1. To check the membership of the salesengineer user, use the following example. 

   ```
   SELECT user_is_member_of('salesengineer', 'sales_ro');
                  
   +-------------------+
   | user_is_member_of |
   +-------------------+
   | true              |
   +-------------------+
                  
   SELECT user_is_member_of('salesengineer', 'marketing_ro');
   
   +-------------------+
   | user_is_member_of |
   +-------------------+
   | false             |
   +-------------------+
                  
   SELECT user_is_member_of('marketinganalyst', 'sales_ro');
                  
   +-------------------+
   | user_is_member_of |
   +-------------------+
   | false             |
   +-------------------+
   ```

## System views for RBAC (optional)
<a name="tutorial-rbac-system-views"></a>

To view the roles, the assignment of roles to users, the role hierarchy, and the privileges for database objects via roles, use the system views for Amazon Redshift. These views are available to superusers and regular users. Superusers can check all role details. Regular users can only check details for roles that they have been granted access to.

1. To view a list of users that are explicitly granted roles in the cluster, use the following example.

   ```
   SELECT * FROM svv_user_grants;
   ```

1. To view a list of roles that are explicitly granted roles in the cluster, use the following example.

   ```
   SELECT * FROM svv_role_grants;
   ```

For the full list of system views, refer to [SVV metadata views](svv_views.md).

## Use row-level security with RBAC (optional)
<a name="tutorial-rbac-rls"></a>

To have granular access control over your sensitive data, use row-level security (RLS). For more information about RLS, see [Row-level security](t_rls.md).

In this section, you create an RLS policy that gives the `salesengineer` user permissions to view only the rows in the `cat` table that have the `catdesc` value of Major League Baseball. You then query the database as the `salesengineer` user.

1. Connect to the database as the `salesengineer` user.

1. To view the first 5 entries in the `cat` table, use the following example.

   ```
   SELECT * 
   FROM sales.cat
   ORDER BY catid ASC
   LIMIT 5;
                     
   +-------+----------+---------+---------------------------------+
   | catid | catgroup | catname |             catdesc             |
   +-------+----------+---------+---------------------------------+
   |     1 | Sports   | MLB     | Major League Baseball           |
   |     2 | Sports   | NHL     | National Hockey League          |
   |     3 | Sports   | NFL     | National Football League        |
   |     4 | Sports   | NBA     | National Basketball Association |
   |     5 | Sports   | MLS     | Major League Soccer             |
   +-------+----------+---------+---------------------------------+
   ```

1. Connect to the database as the `dbadmin` user.

1. To create an RLS policy for the `catdesc` column in the `cat` table, use the following example.

   ```
   CREATE RLS POLICY policy_mlb_engineer
   WITH (catdesc VARCHAR(50)) 
   USING (catdesc = 'Major League Baseball');
   ```

1. To attach the RLS policy to the `sales_rw` role, use the following example.

   ```
   ATTACH RLS POLICY policy_mlb_engineer ON sales.cat TO ROLE sales_rw; 
   ```

1. To alter the table to turn on RLS, use the following example.

   ```
   ALTER TABLE sales.cat ROW LEVEL SECURITY ON; 
   ```

1. Connect to the database as the `salesengineer` user.

1. To attempt to view the first 5 entries in the `cat` table, use the following example. Note that entries appear only when the `catdesc` column is `Major League Baseball`.

   ```
   SELECT * 
   FROM sales.cat
   ORDER BY catid ASC
   LIMIT 5;
                  
   +-------+----------+---------+-----------------------+
   | catid | catgroup | catname |        catdesc        |
   +-------+----------+---------+-----------------------+
   |     1 | Sports   | MLB     | Major League Baseball |
   +-------+----------+---------+-----------------------+
   ```

1. Connect to the database as the `salesanalyst` user.

1. To attempt to view the first 5 entries in the `cat` table, use the following example. Note that no entries appear because the default deny all policy is applied.

   ```
   SELECT * 
   FROM sales.cat
   ORDER BY catid ASC
   LIMIT 5;
                  
   +-------+----------+---------+-----------------------+
   | catid | catgroup | catname |        catdesc        |
   +-------+----------+---------+-----------------------+
   ```

1. Connect to the database as the `dbadmin` user.

1. To grant the IGNORE RLS permission to the `sales_ro` role, use the following example. This gives the `salesanalyst` user permission to ignore RLS policies because they are a member of the `sales_ro` role.

   ```
   GRANT IGNORE RLS TO ROLE sales_ro; 
   ```

1. Connect to the database as the `salesanalyst` user.

1. To view the first 5 entries in the `cat` table, use the following example.

   ```
   SELECT * 
   FROM sales.cat
   ORDER BY catid ASC
   LIMIT 5;
                  
   +-------+----------+---------+---------------------------------+
   | catid | catgroup | catname |             catdesc             |
   +-------+----------+---------+---------------------------------+
   |     1 | Sports   | MLB     | Major League Baseball           |
   |     2 | Sports   | NHL     | National Hockey League          |
   |     3 | Sports   | NFL     | National Football League        |
   |     4 | Sports   | NBA     | National Basketball Association |
   |     5 | Sports   | MLS     | Major League Soccer             |
   +-------+----------+---------+---------------------------------+
   ```

1. Connect to the database as the `dbadmin` user.

1. To revoke the IGNORE RLS permission from the `sales_ro` role, use the following example.

   ```
   REVOKE IGNORE RLS FROM ROLE sales_ro;
   ```

1. Connect to the database as the `salesanalyst` user.

1. To attempt to view the first 5 entries in the `cat` table, use the following example. Note that no entries appear because the default deny all policy is applied.

   ```
   SELECT * 
   FROM sales.cat
   ORDER BY catid ASC
   LIMIT 5;
                  
   +-------+----------+---------+-----------------------+
   | catid | catgroup | catname |        catdesc        |
   +-------+----------+---------+-----------------------+
   ```

1. Connect to the database as the `dbadmin` user.

1. To detach the RLS policy from the `cat` table, use the following example.

   ```
   DETACH RLS POLICY policy_mlb_engineer ON sales.cat FROM ROLE sales_rw;
   ```

1. Connect to the database as the `salesanalyst` user.

1. To view the first 5 entries in the `cat` table, use the following example. Note that all entries now appear because the RLS policy has been detached from the table.

   ```
   SELECT * 
   FROM sales.cat
   ORDER BY catid ASC
   LIMIT 5;
                  
   +-------+----------+---------+---------------------------------+
   | catid | catgroup | catname |             catdesc             |
   +-------+----------+---------+---------------------------------+
   |     1 | Sports   | MLB     | Major League Baseball           |
   |     2 | Sports   | NHL     | National Hockey League          |
   |     3 | Sports   | NFL     | National Football League        |
   |     4 | Sports   | NBA     | National Basketball Association |
   |     5 | Sports   | MLS     | Major League Soccer             |
   +-------+----------+---------+---------------------------------+
   ```

1. Connect to the database as the `dbadmin` user.

1. To drop the RLS policy, use the following example.

   ```
   DROP RLS POLICY policy_mlb_engineer;
   ```

1. To turn off RLS on the table, use the following example.

   ```
   ALTER TABLE sales.cat ROW LEVEL SECURITY OFF;
   ```

## Related topics
<a name="tutorial-rbac-related-topics"></a>

For more information about RBAC, see the following documentation:
+ [Role hierarchy](t_role_hierarchy.md)
+ [Role assignment](t_role_assignment.md)
+ [Database object permissions](r_roles-database-privileges.md)
+ [ALTER DEFAULT PRIVILEGES for RBAC](r_roles-alter-default-privileges.md)

# Row-level security
<a name="t_rls"></a>

Using row-level security (RLS) in Amazon Redshift, you can have granular access control over your sensitive data. You can decide which users or roles can access specific records of data within schemas or tables, based on security policies that are defined at the database object level. In addition to column-level security, where you can grant users permissions to a subset of columns, use RLS policies to further restrict access to particular rows of the visible columns. For more information about column-level security, see [Usage notes for column-level access control](r_GRANT-usage-notes.md#r_GRANT-usage-notes-clp).

When you enforce RLS policies on tables, you can restrict returned result sets when users run queries.

When creating RLS policies, you can specify expressions that dictate whether Amazon Redshift returns any existing rows in a table in a query. By creating RLS policies to limit access, you don't have to add or externalize additional conditions in your queries. 

When creating RLS policies, we recommend keeping them simple: avoid complex statements and policy definitions that require many table joins.

When a policy refers to a lookup table, Amazon Redshift scans that additional table in addition to the table on which the policy exists. As a result, the same query can perform differently for a user with an RLS policy attached than for a user without one.

# Using RLS policies in SQL statements
<a name="t_rls_statements"></a>

When using RLS policies in SQL statements, Amazon Redshift applies the following rules:
+ Amazon Redshift applies RLS policies to the SELECT, UPDATE, and DELETE statements by default. 
+ For SELECT and UNLOAD, Amazon Redshift filters rows according to your defined policy.
+ For UPDATE, Amazon Redshift updates only the rows that are visible to you. You can't update rows that a policy hides from you.
+ For DELETE, you can delete only the rows that are visible to you. You can't delete rows that a policy hides from you. For TRUNCATE, you can still truncate the table.
+ For CREATE TABLE LIKE, tables created with the LIKE options don't inherit permission settings from the source table. Similarly, the target table doesn't inherit the RLS policies of the source table.
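For example, a table created with CREATE TABLE LIKE starts without RLS protection, so a policy must be attached to it again if the copy needs the same restrictions. The following sketch assumes a `category` table with an attached policy named `policy_sports` and a role named `analyst`:

```
-- The new table inherits neither permissions nor RLS policies from category.
CREATE TABLE category_copy (LIKE category);

-- Reattach the policy and turn RLS on explicitly to protect the copy.
ATTACH RLS POLICY policy_sports ON category_copy TO ROLE analyst;
ALTER TABLE category_copy ROW LEVEL SECURITY ON;
```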

# Combining multiple policies per user
<a name="t_rls_combine_policies"></a>

RLS in Amazon Redshift supports attaching multiple policies per user and object. When there are multiple policies defined for a user, Amazon Redshift applies all the policies with either AND or OR syntax depending on the RLS CONJUNCTION TYPE setting for the table. For more information about conjunction type, see [ALTER TABLE](r_ALTER_TABLE.md). 

Multiple policies on a table can apply to you either because they are attached to you directly, or because you belong to multiple roles that have different policies attached to them.

When multiple policies should each restrict row access in a given relation, you can set the relation's RLS CONJUNCTION TYPE to AND. Consider the following example, where Alice can see only Sports events whose `catname` is NBA, as the policies specify.

```
-- Create an analyst role and grant it to a user named Alice.
CREATE ROLE analyst;
CREATE USER alice WITH PASSWORD 'Name_is_alice_1';
GRANT ROLE analyst TO alice;

-- Create an RLS policy that only lets the user see sports.
CREATE RLS POLICY policy_sports
WITH (catgroup VARCHAR(10))
USING (catgroup = 'Sports');

-- Create an RLS policy that only lets the user see NBA.
CREATE RLS POLICY policy_nba
WITH (catname VARCHAR(10))
USING (catname = 'NBA');

-- Attach both to the analyst role.
ATTACH RLS POLICY policy_sports ON category TO ROLE analyst;
ATTACH RLS POLICY policy_nba ON category TO ROLE analyst;

-- Activate RLS on the category table with AND CONJUNCTION TYPE. 
ALTER TABLE category ROW LEVEL SECURITY ON CONJUNCTION TYPE AND;

-- Change session to Alice.
SET SESSION AUTHORIZATION alice;

-- Select all from the category table.
SELECT catgroup, catname
FROM category;

 catgroup | catname 
---------+---------
 Sports   | NBA
(1 row)
```

When multiple policies should together permit users to see more rows in a given relation, you can set the relation's RLS CONJUNCTION TYPE to OR. Consider the following example, where Alice can see only the Concerts and Sports categories, as the policies specify.

```
-- Create an analyst role and grant it to a user named Alice.
CREATE ROLE analyst;
CREATE USER alice WITH PASSWORD 'Name_is_alice_1';
GRANT ROLE analyst TO alice;

-- Create an RLS policy that only lets the user see concerts.
CREATE RLS POLICY policy_concerts
WITH (catgroup VARCHAR(10))
USING (catgroup = 'Concerts');

-- Create an RLS policy that only lets the user see sports.
CREATE RLS POLICY policy_sports
WITH (catgroup VARCHAR(10))
USING (catgroup = 'Sports');

-- Attach both to the analyst role.
ATTACH RLS POLICY policy_concerts ON category TO ROLE analyst;
ATTACH RLS POLICY policy_sports ON category TO ROLE analyst;

-- Activate RLS on the category table with OR CONJUNCTION TYPE. 
ALTER TABLE category ROW LEVEL SECURITY ON CONJUNCTION TYPE OR;

-- Change session to Alice.
SET SESSION AUTHORIZATION alice;

-- Select all from the category table.
SELECT catgroup, count(*)
FROM category
GROUP BY catgroup ORDER BY catgroup;

 catgroup | count 
---------+-------
 Concerts |  3
 Sports   |  5
(2 rows)
```

# RLS policy ownership and management
<a name="t_rls_ownership"></a>

As a superuser, security administrator, or user that has the sys:secadmin role, you can create, modify, attach, and detach RLS policies. RLS policies can be attached to tables, views, late binding views (LBVs), and materialized views (MVs). At the object level, you can turn row-level security on or off without modifying the schema definition for tables.

To get started with row-level security, you can use the following SQL statements:
+ Use the ALTER TABLE statement to turn on or off RLS on a table, view, or late binding view. For more information, see [ALTER TABLE](r_ALTER_TABLE.md).
+ Use the ALTER MATERIALIZED VIEW statement to turn on or off RLS on a materialized view (MV). For more information, see [ALTER MATERIALIZED VIEW](r_ALTER_MATERIALIZED_VIEW.md).
+ Use the CREATE RLS POLICY statement to create a security policy for one or more tables, and specify one or more users or roles in the policy. 

  For more information, see [CREATE RLS POLICY](r_CREATE_RLS_POLICY.md).
+ Use the ALTER RLS POLICY statement to alter the policy, such as changing the policy definition. You can use the same policy for multiple tables or views.

  For more information, see [ALTER RLS POLICY](r_ALTER_RLS_POLICY.md).
+ Use the ATTACH RLS POLICY statement to attach a policy to one or more relations, to one or more users, or to roles.

  For more information, see [ATTACH RLS POLICY](r_ATTACH_RLS_POLICY.md).
+ Use the DETACH RLS POLICY statement to detach a policy from one or more relations, from one or more users, or from roles.

  For more information, see [DETACH RLS POLICY](r_DETACH_RLS_POLICY.md).
+ Use the DROP RLS POLICY statement to drop a policy.

  For more information, see [DROP RLS POLICY](r_DROP_RLS_POLICY.md).
+ Use the GRANT and REVOKE statements to explicitly grant and revoke SELECT permissions to RLS policies that reference lookup tables. For more information, see [GRANT](r_GRANT.md) and [REVOKE](r_REVOKE.md).

To monitor the policies created, sys:secadmin can view the [SVV\_RLS\_POLICY](r_SVV_RLS_POLICY.md) and [SVV\_RLS\_ATTACHED\_POLICY](r_SVV_RLS_ATTACHED_POLICY.md) system views.

To list RLS-protected relations, sys:secadmin can view [SVV\_RLS\_RELATION](r_SVV_RLS_RELATION.md).

To trace the application of RLS policies on queries that reference RLS-protected relations, a superuser, sys:operator, or any user with the system permission ACCESS SYSTEM TABLE can view [SVV\_RLS\_APPLIED\_POLICY](r_SVV_RLS_APPLIED_POLICY.md). Note that sys:secadmin is not granted these permissions by default.

To allow users full access to an RLS-protected relation, you can grant the IGNORE RLS permission. Superusers or sys:secadmin are automatically granted IGNORE RLS. For more information, see [GRANT](r_GRANT.md).

To explain the RLS policy filters of a query in the EXPLAIN plan to troubleshoot RLS-related queries, you can grant the permission EXPLAIN RLS to any user. For more information, see [GRANT](r_GRANT.md) and [EXPLAIN](r_EXPLAIN.md). 
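Taken together, these statements form the typical lifecycle of a policy. The following sketch uses hypothetical names (policy `policy_demo`, table `t1`, role `r1`):

```
-- Create the policy and attach it to a relation and a role.
CREATE RLS POLICY policy_demo
WITH (col VARCHAR(10))
USING (col = 'value');
ATTACH RLS POLICY policy_demo ON t1 TO ROLE r1;

-- Turn row-level security on for the relation.
ALTER TABLE t1 ROW LEVEL SECURITY ON;

-- Later, remove the policy in reverse order.
DETACH RLS POLICY policy_demo ON t1 FROM ROLE r1;
DROP RLS POLICY policy_demo;
ALTER TABLE t1 ROW LEVEL SECURITY OFF;
```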

# Policy-dependent objects and principles
<a name="t_rls_object_dependency"></a>

To provide security for applications and to prevent policy objects from becoming stale or invalid, Amazon Redshift doesn't permit dropping or altering objects referenced by RLS policies.

The following lists the schema object dependencies that Amazon Redshift tracks for RLS policies.
+ When tracking schema object dependency for the target table, Amazon Redshift follows these rules:
  + Amazon Redshift detaches the policy from a relation, user, role, or public when you drop a target table.
  + When you rename a target table, there is no impact on the attached policies.
  + You can only drop the columns of the target table referenced inside the policy definition if you drop or detach the policy first. This also applies when the CASCADE option is specified. You can drop other columns in the target table.
  + You can't rename the referred columns of the target table. To rename referred columns, detach the policy first. This also applies when the CASCADE option is specified.
  + You can't change the type of the referred column, even when you specify the CASCADE option.
+ When tracking schema object dependency for the lookup table, Amazon Redshift follows these rules:
  + You can't drop a lookup table. To drop a lookup table, first drop the policy in which the lookup table is referred.
  + You can't rename a lookup table. To rename a lookup table, first drop the policy in which the lookup table is referred. This also applies when the CASCADE option is specified.
  + You can't drop the lookup table columns used in the policy definition. To drop the lookup table columns used in the policy definition, first drop the policy in which the lookup table is referred. This also applies when the CASCADE option is specified in the ALTER TABLE DROP COLUMN statement. You can drop other columns in the lookup table.
  + You can't rename the referred columns of the lookup table. To rename referred columns, first drop the policy in which the lookup table is referred. This also applies when the CASCADE option is specified.
  + You can't change the type of the referred column.
+ When a user or role is dropped, Amazon Redshift detaches all policies attached to the user or role automatically.
+ When you use the CASCADE option in the DROP SCHEMA statement, Amazon Redshift also drops the relations in the schema. It also drops the relations in any other schemas that are dependent on relations in the dropped schema. For a relation that is a lookup table in a policy, Amazon Redshift fails the DROP SCHEMA DDL. For any relations dropped by the DROP SCHEMA statement, Amazon Redshift detaches all policies that are attached to those relations.
+ You can only drop a lookup function (a function that is referred inside a policy definition) when you also drop the policy. This also applies when the CASCADE option is specified.
+ When a policy is attached to a table, Amazon Redshift checks if this table is a lookup table in a different policy. If this is the case, Amazon Redshift won't allow attaching a policy to this table.
+ While creating an RLS policy, Amazon Redshift checks if this table is a target table for any other RLS policy. If this is the case, Amazon Redshift won't allow creating a policy on this table.

## Example
<a name="t_rls_object_dependency-example"></a>

The following example illustrates how schema dependency is tracked.

```
-- The CREATE and ATTACH policy statements for `policy_events` references some
-- target and lookup tables.
-- Target tables are tickit_event_redshift and target_schema.target_event_table.
-- Lookup table is tickit_sales_redshift.
-- Policy `policy_events` has following dependencies:
--   table tickit_sales_redshift column eventid, qtysold
--   table tickit_event_redshift column eventid
--   table target_event_table column eventid
--   schema public and target_schema
CREATE RLS POLICY policy_events
WITH (eventid INTEGER)
USING (
    eventid IN (SELECT eventid FROM tickit_sales_redshift WHERE qtysold <3)
);

ATTACH RLS POLICY policy_events ON tickit_event_redshift TO ROLE analyst;

ATTACH RLS POLICY policy_events ON target_schema.target_event_table TO ROLE consumer;
```

# Considerations and limitations using RLS policies
<a name="t_rls_usage"></a>

## Considerations
<a name="t_rls_considerations"></a>

Following are considerations for working with RLS policies:
+ Amazon Redshift applies RLS policies to SELECT, UPDATE, and DELETE statements.
+ Amazon Redshift doesn't apply RLS policies to INSERT, COPY, or ALTER TABLE APPEND statements.
+ RLS policies can be attached to tables, views, late binding views (LBVs), and materialized views (MVs).
+ Row-level security works with column-level security to protect your data.
+ When RLS is turned on for the source relation, Amazon Redshift supports the ALTER TABLE APPEND statement for superusers, users that have been explicitly granted the system permission IGNORE RLS, or the sys:secadmin role. In this case, you can run the ALTER TABLE APPEND statement to append rows to a target table by moving data from an existing source table. Amazon Redshift moves all tuples from the source relation into the target relation. The RLS status of the target relation doesn't affect the ALTER TABLE APPEND statement.
+ To facilitate migration from other data warehouse systems, you can set and retrieve customized session context variables for a connection by specifying the variable name and value.

  The following example sets session context variables for a row-level security (RLS) policy.

  ```
  -- Set a customized context variable.
  SELECT set_config('app.category', 'Concerts', FALSE);

  -- Create an RLS policy using current_setting() to get the value of a customized context variable.
  CREATE RLS POLICY policy_categories
  WITH (catgroup VARCHAR(10))
  USING (catgroup = current_setting('app.category', FALSE));
  
  -- Set correct roles and attach the policy on the target table to one or more roles.
  ATTACH RLS POLICY policy_categories ON tickit_category_redshift TO ROLE analyst, ROLE dbadmin;
  ```

  For details on how to set and retrieve customized session context variables, go to [SET](r_SET.md), [SET\_CONFIG](r_SET_CONFIG.md), [SHOW](r_SHOW.md), [CURRENT\_SETTING](r_CURRENT_SETTING.md), and [RESET](r_RESET.md). For more information on modifying the server configuration in general, go to [Modifying the server configuration](cm_chap_ConfigurationRef.md#t_Modifying_the_default_settings).
**Important**  
 When using session context variables within RLS policies, the security policy is reliant on the user or role that invokes the policy. Be careful to avoid security vulnerabilities when using session context variables in RLS policies. 
+ Changing session user using SET SESSION AUTHORIZATION between DECLARE and FETCH or between subsequent FETCH statements won't refresh the already prepared plan based on the user policies at DECLARE time. Avoid changing session user when cursors are used with RLS-protected tables.
+ When the base objects inside a view object are RLS-protected, policies attached to the user running the query are applied on the respective base objects. This is different from object-level permission checks, where the view owner's permissions are checked against the view base objects. You can view the RLS-protected relations of a query in its EXPLAIN plan output.
+ When a user-defined function (UDF) is referenced in an RLS policy of a relation attached to a user, the user must have the EXECUTE permission over the UDF to query the relation.
+  Row-level security might limit query optimization. We recommend carefully evaluating query performance before deploying RLS-protected views on large datasets. 
+  Row-level security policies applied to late-binding views might be pushed into federated tables. These RLS policies might be visible in external processing engine logs. 
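For the ALTER TABLE APPEND consideration above, the statement itself is unchanged when the source is RLS-protected; only who can run it is restricted. A minimal sketch, assuming hypothetical tables `sales` and `sales_staging` with identical columns:

```
-- Run as a superuser, a user granted IGNORE RLS, or a user with the sys:secadmin role.
-- Moves all tuples from the RLS-protected source into the target.
ALTER TABLE sales APPEND FROM sales_staging;
```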

## Limitations
<a name="t_rls_limitations"></a>

Following are the limitations when working with RLS policies:
+ RLS policies can't be attached to external tables and several other relation types. For more information, see [ATTACH RLS POLICY](r_ATTACH_RLS_POLICY.md).
+ Amazon Redshift supports SELECT statements for certain RLS policies with lookups that have complex joins, but doesn't support UPDATE or DELETE statements. In cases with UPDATE or DELETE statements, Amazon Redshift returns the following error:

  ```
  ERROR: One of the RLS policies on target relation is not supported in UPDATE/DELETE.
  ```
+ Whenever a user-defined function (UDF) is referenced in an RLS policy of a relation attached to a user, the user must have the EXECUTE permission over the UDF to query the relation.
+ Correlated subqueries aren't supported. Amazon Redshift returns the following error:

  ```
  ERROR: RLS policy could not be rewritten.
  ```
+ Amazon Redshift doesn't support datasharing with RLS. If a relation doesn't have RLS turned off for datashares, the query fails on the consumer cluster with the following error:

  ```
  RLS-protected relation "rls_protected_table" cannot be accessed via datasharing query.
  ```

  You can turn off RLS for datashares using the ALTER TABLE command with the parameter ROW LEVEL SECURITY OFF FOR DATASHARES. For more information about using ALTER TABLE to enable or disable RLS, go to [ALTER TABLE](r_ALTER_TABLE.md).
+ In cross-database queries, Amazon Redshift blocks reads to RLS-protected relations. Users with the IGNORE RLS permission can access the protected relation using cross-database queries. When a user without the IGNORE RLS permission accesses RLS-protected relation through a cross-database query, the following error appears:

  ```
  RLS-protected relation "rls_protected_table" cannot be accessed via cross-database query.
  ```
+ ALTER RLS POLICY only supports modifying an RLS policy using the USING ( using\_predicate\_exp ) clause. You can't modify an RLS policy with a WITH clause when running ALTER RLS POLICY.
+ You can't query relations that have row-level security turned on if the values for any of the following configuration options don't match the default value of the session:
  +  `enable_case_sensitive_super_attribute` 
  +  `enable_case_sensitive_identifier` 
  +  `downcase_delimited_identifier` 

  Consider resetting your session’s configuration options if you attempt to query a relation with row-level security on and see the message "RLS protected relation does not support session level config on case sensitivity being different from its default value."
+  When your provisioned cluster or serverless namespace has any row-level security policies, the following commands are blocked for regular users: 

  ```
  ALTER <current_user> SET enable_case_sensitive_super_attribute/enable_case_sensitive_identifier/downcase_delimited_identifier
  ```

  When you create RLS policies, we recommend that you change the default configuration option settings for regular users to match the session’s configuration option settings at the time the policy was created. Superusers and users with the ALTER USER privilege can do this by using parameter group settings or the ALTER USER command. For information about parameter groups, see [Amazon Redshift parameter groups](https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-parameter-groups.html) in the *Amazon Redshift Management Guide*. For information about the ALTER USER command, see [ALTER USER](r_ALTER_USER.md).
+ Views and late-binding views with row-level security policies can't be replaced by regular users using the [CREATE VIEW](r_CREATE_VIEW.md) command. To replace views or LBVs with RLS policies, first detach any RLS policies attached to them, replace the views or LBVs, and then reattach the policies. Superusers and users with the `sys:secadmin` role can use CREATE VIEW on views or LBVs with RLS policies without detaching the policies.
+  Views with row-level security policies can't reference system tables and system views. 
+  A late-binding view that's referenced by a regular view can't be RLS protected. 
+  RLS-protected relations and nested data from data lakes can't be accessed in the same query. 
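For the view-replacement limitation above, a regular user can detach the policy, replace the view, and reattach the policy. A sketch with hypothetical names (policy `policy_concerts`, view `v_events`, role `analyst`, base table `events_base`):

```
-- 1. Detach the RLS policy from the view.
DETACH RLS POLICY policy_concerts ON v_events FROM ROLE analyst;

-- 2. Replace the view definition.
CREATE OR REPLACE VIEW v_events AS SELECT * FROM events_base;

-- 3. Reattach the policy.
ATTACH RLS POLICY policy_concerts ON v_events TO ROLE analyst;
```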

# Best practices for RLS performance
<a name="t_rls_performance"></a>

Following are best practices to ensure better performance from Amazon Redshift on tables protected by RLS.

## Safety of operators and functions
<a name="t_rls_safe_operators"></a>

When querying RLS-protected tables, the use of certain operators or functions can lead to performance degradation. Amazon Redshift classifies operators and functions as either safe or unsafe for querying RLS-protected tables. A function or operator is classified as RLS-safe when it has no observable side effects that depend on its inputs. In particular, an RLS-safe function or operator can't do either of the following:
+ Output an input value, or any value that depends on the input value, with or without an error message.
+ Fail or return errors that depend on the input value.

RLS-unsafe operators include:
+ Arithmetic operators — +, -, /, *, %.
+ Text operators — LIKE and SIMILAR TO.
+ Some cast operators. Note that certain casts are classified as safe, including date-to-timestamp and timestamp-to-date conversions, integer widening casts (such as INT2 to INT8 or INT4 to INT8), and integer-to-text casts.
+ UDFs.

Use the following SELECT statement to check the safety of operators and functions.

```
SELECT proname, proc_is_rls_safe(oid) FROM pg_proc;
```

Amazon Redshift imposes restrictions on the order of evaluation of user predicates that contain RLS-unsafe operators and functions when planning queries on RLS-protected tables. Queries that reference RLS-unsafe operators or functions might therefore run more slowly against RLS-protected tables. Performance can degrade significantly when Amazon Redshift can't push RLS-unsafe predicates down to base table scans to take advantage of sort keys, so avoid RLS-unsafe predicates on sort key columns. To verify whether Amazon Redshift can push down operators and functions, use EXPLAIN statements in combination with the system permission EXPLAIN RLS.

### Conditionally safe functions
<a name="t_rls_conditional_safety"></a>

Some functions are classified as unsafe in general but become safe when specific arguments are constant values (literal values, not column references). When the relevant arguments are constants, Amazon Redshift can push these predicates down to base table scans, improving query performance.

For example, `DATE_TRUNC('day', timestamp_col)` is conditionally safe because `'day'` is a constant literal. However, `DATE_TRUNC(datepart_col, timestamp_col)` is not conditionally safe because `datepart_col` is a column reference.

The following table lists the categories of conditionally safe functions and which arguments must be constant for the function to be considered safe.


| Function category | Functions | Argument that must be constant | 
| --- | --- | --- | 
| DATE\_TRUNC | DATE\_TRUNC(datepart, timestamp), DATE\_TRUNC(datepart, timestamptz) | datepart (first argument) | 
| EXTRACT / DATE\_PART | EXTRACT(field FROM source), DATE\_PART(field, source) for timestamp, timestamptz, date, time, timetz, interval, and datetime types | field (first argument) | 
| DATEDIFF | DATEDIFF(datepart, start, end) for timestamp and time types | datepart (first argument) | 
| TO\_CHAR | TO\_CHAR(timestamp, format), TO\_CHAR(timestamptz, format) | format (second argument) | 
| CONVERT\_TIMEZONE | CONVERT\_TIMEZONE(source\_tz, target\_tz, timestamp), CONVERT\_TIMEZONE(target\_tz, timestamp) | timezone arguments | 
| LEFT / RIGHT | LEFT(string, length), RIGHT(string, length) | length (second argument) | 
| SUBSTRING | SUBSTRING(string, start, length) for text, bytea, and varbyte types | length (third argument) | 
| SPLIT\_PART | SPLIT\_PART(string, delimiter, part) | part (third argument) | 

Use the following SELECT statement to check which functions are conditionally safe and which argument positions must be constant.

```
SELECT proname, proc_is_rls_conditionally_safe(oid) FROM pg_proc
WHERE proc_is_rls_conditionally_safe(oid) IS NOT NULL;
```

The function returns an array of 0-indexed argument positions that must be constant, or NULL if the function is not conditionally safe.

## Result caching
<a name="t_rls_result_cache"></a>

To reduce query runtime and improve system performance, Amazon Redshift caches the results of certain types of queries in the memory on the leader node.

Amazon Redshift uses cached results for a new query scanning RLS-protected tables when all the conditions for unprotected tables are true and when all of the following are true:
+ The tables or views in the policy haven't been modified.
+ The policy doesn't use a function that must be evaluated each time it's run, such as GETDATE or CURRENT\_USER.

For better performance, avoid using policy predicates that don't satisfy the preceding conditions.

For more information about result caching in Amazon Redshift, see [Result caching](c_challenges_achieving_high_performance_queries.md#result-caching).
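For example, a policy predicate that calls GETDATE must be evaluated on every run, so queries against the protected table can't reuse cached results; a constant predicate avoids that cost. A sketch with hypothetical policy names:

```
-- Cache-unfriendly: GETDATE() must be re-evaluated each time a query runs.
CREATE RLS POLICY policy_recent
WITH (saletime TIMESTAMP)
USING (saletime > DATEADD(day, -30, GETDATE()));

-- Cache-friendly: the predicate is stable across runs.
CREATE RLS POLICY policy_sports
WITH (catgroup VARCHAR(10))
USING (catgroup = 'Sports');
```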

## Complex policies
<a name="t_rls_complex_policies"></a>

For better performance, avoid using complex policies with subqueries that join multiple tables.

## Using constant arguments for better pushdown
<a name="t_rls_constant_args"></a>

When using functions such as DATE\_TRUNC, EXTRACT, DATEDIFF, TO\_CHAR, CONVERT\_TIMEZONE, LEFT, RIGHT, SUBSTRING, or SPLIT\_PART in queries on RLS-protected tables, use constant literal arguments instead of column references for the arguments that control error behavior. This allows Amazon Redshift to classify these functions as safe and push predicates down to base table scans, which can significantly improve performance.

For example, the following query allows predicate pushdown because the datepart argument is a constant:

```
SELECT * FROM rls_protected_table
WHERE DATE_TRUNC('month', event_date) = '2024-01-01';
```

The following query prevents predicate pushdown because the datepart argument is a column reference:

```
SELECT * FROM rls_protected_table
WHERE DATE_TRUNC(datepart_col, event_date) = '2024-01-01';
```

For the full list of conditionally safe functions and which arguments must be constant, see [Conditionally safe functions](#t_rls_conditional_safety).

# Row-level security end-to-end example
<a name="t_rls-example"></a>

The following is an end-to-end example to illustrate how a superuser creates some users and roles. Then, a user with the secadmin role creates, attaches, detaches, and drops RLS policies. This example uses the tickit sample database. For more information, see [Load data from Amazon S3 to Amazon Redshift](https://docs.aws.amazon.com/redshift/latest/gsg/rs-gsg-create-sample-db.html) in the *Amazon Redshift Getting Started Guide*.

```
-- Create users and roles referenced in the policy statements.
CREATE ROLE analyst;
CREATE ROLE consumer;
CREATE ROLE dbadmin;
CREATE ROLE auditor;
CREATE USER bob WITH PASSWORD 'Name_is_bob_1';
CREATE USER alice WITH PASSWORD 'Name_is_alice_1';
CREATE USER joe WITH PASSWORD 'Name_is_joe_1';
CREATE USER molly WITH PASSWORD 'Name_is_molly_1';
CREATE USER bruce WITH PASSWORD 'Name_is_bruce_1';
GRANT ROLE sys:secadmin TO bob;
GRANT ROLE analyst TO alice;
GRANT ROLE consumer TO joe;
GRANT ROLE dbadmin TO molly;
GRANT ROLE auditor TO bruce;
GRANT ALL ON TABLE tickit_category_redshift TO PUBLIC;
GRANT ALL ON TABLE tickit_sales_redshift TO PUBLIC;
GRANT ALL ON TABLE tickit_event_redshift TO PUBLIC;

-- Create table and schema referenced in the policy statements.
CREATE SCHEMA target_schema;
GRANT ALL ON SCHEMA target_schema TO PUBLIC;
CREATE TABLE target_schema.target_event_table (LIKE tickit_event_redshift);
GRANT ALL ON TABLE target_schema.target_event_table TO PUBLIC;

-- Change session to analyst alice.
SET SESSION AUTHORIZATION alice;

-- Check the tuples visible to analyst alice.
-- Should contain all 3 categories.
SELECT catgroup, count(*)
FROM tickit_category_redshift
GROUP BY catgroup ORDER BY catgroup;

-- Change session to security administrator bob.
SET SESSION AUTHORIZATION bob;

CREATE RLS POLICY policy_concerts
WITH (catgroup VARCHAR(10))
USING (catgroup = 'Concerts');

SELECT poldb, polname, polalias, polatts, polqual, polenabled, polmodifiedby FROM svv_rls_policy WHERE poldb = CURRENT_DATABASE();

ATTACH RLS POLICY policy_concerts ON tickit_category_redshift TO ROLE analyst, ROLE dbadmin;

ALTER TABLE tickit_category_redshift ROW LEVEL SECURITY ON;

SELECT * FROM svv_rls_attached_policy;

-- Change session to analyst alice.
SET SESSION AUTHORIZATION alice;

-- Check that only tuples in the `Concerts` category are visible to analyst alice.
SELECT catgroup, count(*)
FROM tickit_category_redshift
GROUP BY catgroup ORDER BY catgroup;

-- Change session to consumer joe.
SET SESSION AUTHORIZATION joe;

-- Although the policy is attached to a different role, no tuples will be
-- visible to consumer joe because the default deny all policy is applied.
SELECT catgroup, count(*)
FROM tickit_category_redshift
GROUP BY catgroup ORDER BY catgroup;

-- Change session to dbadmin molly.
SET SESSION AUTHORIZATION molly;

-- Check that only tuples in the `Concerts` category are visible to dbadmin molly.
SELECT catgroup, count(*)
FROM tickit_category_redshift
GROUP BY catgroup ORDER BY catgroup;

-- Check that EXPLAIN output contains RLS SecureScan to prevent disclosure of
-- sensitive information such as RLS filters.
EXPLAIN SELECT catgroup, count(*) FROM tickit_category_redshift GROUP BY catgroup ORDER BY catgroup;

-- Change session to security administrator bob.
SET SESSION AUTHORIZATION bob;

-- Grant IGNORE RLS permission so that RLS policies are not applied to role dbadmin.
GRANT IGNORE RLS TO ROLE dbadmin;

-- Grant EXPLAIN RLS permission so that anyone in role auditor can view complete EXPLAIN output.
GRANT EXPLAIN RLS TO ROLE auditor;

-- Change session to dbadmin molly.
SET SESSION AUTHORIZATION molly;

-- Check that all tuples are visible to dbadmin molly because `IGNORE RLS` is granted to role dbadmin.
SELECT catgroup, count(*)
FROM tickit_category_redshift
GROUP BY catgroup ORDER BY catgroup;

-- Change session to auditor bruce.
SET SESSION AUTHORIZATION bruce;

-- Check explain plan is visible to auditor bruce because `EXPLAIN RLS` is granted to role auditor.
EXPLAIN SELECT catgroup, count(*) FROM tickit_category_redshift GROUP BY catgroup ORDER BY catgroup;

-- Change session to security administrator bob.
SET SESSION AUTHORIZATION bob;

DETACH RLS POLICY policy_concerts ON tickit_category_redshift FROM ROLE analyst, ROLE dbadmin;

-- Change session to analyst alice.
SET SESSION AUTHORIZATION alice;

-- Check that no tuples are visible to analyst alice.
-- Although the policy is detached, no tuples will be visible to analyst alice
-- because the default deny-all policy is applied while the table has RLS on.
SELECT catgroup, count(*)
FROM tickit_category_redshift
GROUP BY catgroup ORDER BY catgroup;

-- Change session to security administrator bob.
SET SESSION AUTHORIZATION bob;

CREATE RLS POLICY policy_events
WITH (eventid INTEGER) AS ev
USING (
    ev.eventid IN (SELECT eventid FROM tickit_sales_redshift WHERE qtysold <3)
);

ATTACH RLS POLICY policy_events ON tickit_event_redshift TO ROLE analyst;
ATTACH RLS POLICY policy_events ON target_schema.target_event_table TO ROLE consumer;

RESET SESSION AUTHORIZATION;

-- Cannot alter the type of a dependent column.
ALTER TABLE target_schema.target_event_table ALTER COLUMN eventid TYPE float;
ALTER TABLE tickit_event_redshift ALTER COLUMN eventid TYPE float;
ALTER TABLE tickit_sales_redshift ALTER COLUMN eventid TYPE float;
ALTER TABLE tickit_sales_redshift ALTER COLUMN qtysold TYPE float;

-- Cannot rename a dependent column.
ALTER TABLE target_schema.target_event_table RENAME COLUMN eventid TO renamed_eventid;
ALTER TABLE tickit_event_redshift RENAME COLUMN eventid TO renamed_eventid;
ALTER TABLE tickit_sales_redshift RENAME COLUMN eventid TO renamed_eventid;
ALTER TABLE tickit_sales_redshift RENAME COLUMN qtysold TO renamed_qtysold;

-- Cannot drop a dependent column.
ALTER TABLE target_schema.target_event_table DROP COLUMN eventid CASCADE;
ALTER TABLE tickit_event_redshift DROP COLUMN eventid CASCADE;
ALTER TABLE tickit_sales_redshift DROP COLUMN eventid CASCADE;
ALTER TABLE tickit_sales_redshift DROP COLUMN qtysold CASCADE;

-- Cannot drop the lookup table.
DROP TABLE tickit_sales_redshift CASCADE;

-- Change session to security administrator bob.
SET SESSION AUTHORIZATION bob;

DROP RLS POLICY policy_concerts;
DROP RLS POLICY IF EXISTS policy_events;

ALTER TABLE tickit_category_redshift ROW LEVEL SECURITY OFF;

RESET SESSION AUTHORIZATION;

-- Drop users and roles.
DROP USER bob;
DROP USER alice;
DROP USER joe;
DROP USER molly;
DROP USER bruce;
DROP ROLE analyst;
DROP ROLE consumer;
DROP ROLE auditor FORCE;
DROP ROLE dbadmin FORCE;
```

# Metadata security
<a name="t_metadata_security"></a>

Like Amazon Redshift’s row-level security, metadata security gives you more granular control over your metadata. When metadata security is enabled for your provisioned cluster or serverless workgroup, users can see metadata only for the objects that they have viewing access to. Metadata security lets you separate visibility based on your needs. For example, you can use a single data warehouse to centralize all of your data storage. However, if you store data for multiple sectors, managing security can become difficult. With metadata security enabled, you can configure visibility per sector: users in one sector can see their own objects, while viewing access for users in another sector stays restricted. Metadata security supports all object types, such as schemas, tables, views, materialized views, stored procedures, user-defined functions, and machine learning models.

Users can see metadata of objects under the following circumstances:
+ Object access is granted to the user.
+ Object access is granted to a group or role that the user is a member of.
+ The object is public.
+ The user is the owner of the object.
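Taken together, these rules form a simple OR: if any one of them holds, the metadata is visible. The following sketch illustrates that logic; the `DbObject` structure and `is_visible` function are illustrative assumptions, not a Redshift API.

```python
from dataclasses import dataclass, field

@dataclass
class DbObject:
    owner: str
    is_public: bool = False
    granted_users: set = field(default_factory=set)
    granted_roles: set = field(default_factory=set)

def is_visible(user: str, user_roles: set, obj: DbObject) -> bool:
    """Metadata is visible when any one of the four rules holds."""
    return (
        user in obj.granted_users                # direct grant to the user
        or bool(user_roles & obj.granted_roles)  # grant via a group or role
        or obj.is_public                         # public object
        or obj.owner == user                     # object ownership
    )
```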

To enable metadata security, use the [ALTER SYSTEM](https://docs.aws.amazon.com/redshift/latest/dg/r_ALTER_SYSTEM.html) command. The following is the syntax for using the ALTER SYSTEM command with metadata security.

```
ALTER SYSTEM SET metadata_security=[true|t|on|false|f|off];
```

When you enable metadata security, all users who have the necessary permissions can see the relevant metadata of objects that they have access to. If you want only certain users to be able to see metadata, grant the `ACCESS CATALOG` permission to a role, and then assign the role to those users. For more information about using roles to better control security, see [Role-based access control](https://docs.aws.amazon.com/redshift/latest/dg/t_Roles.html).

The following example demonstrates how to grant the `ACCESS CATALOG` permission to a role, and then assign the role to a user. For more information about granting permissions, see the [GRANT](https://docs.aws.amazon.com/redshift/latest/dg/r_GRANT.html) command.

```
CREATE ROLE sample_metadata_viewer;

GRANT ACCESS CATALOG TO ROLE sample_metadata_viewer;

GRANT ROLE sample_metadata_viewer TO salesadmin;
```

If you prefer to use already defined roles, the [system-defined roles](https://docs.aws.amazon.com/redshift/latest/dg/r_roles-default.html) `operator`, `secadmin`, `dba`, and `superuser` all have the necessary permissions to view object metadata. By default, superusers can see the complete catalog.

```
GRANT ROLE operator TO sample_user;
```

If you’re using roles to control metadata security, you have access to all of the system views and functions that come with role-based access control. For example, you can query the [SVV\_ROLES](https://docs.aws.amazon.com/redshift/latest/dg/r_SVV_ROLES.html) view to see all roles. To see if a user is a member of a role or group, use the [USER\_IS\_MEMBER\_OF](https://docs.aws.amazon.com/redshift/latest/dg/r_USER_IS_MEMBER_OF.html) function. For a full list of SVV views, see [SVV metadata views](https://docs.aws.amazon.com/redshift/latest/dg/svv_views.html). For a list of system information functions, see [System information functions](https://docs.aws.amazon.com/redshift/latest/dg/r_System_information_functions.html).

# Dynamic data masking
<a name="t_ddm"></a>

**Note**  
Amazon Redshift automatically masks certain system table columns when logging information about queries made to Data Catalog views to prevent exposure of sensitive metadata. For more information, see [ Secure logging](https://docs.aws.amazon.com/redshift/latest/mgmt/db-auditing-secure-logging.html) in the *Amazon Redshift Management Guide*.

Using dynamic data masking (DDM) in Amazon Redshift, you can protect sensitive data in your data warehouse. You can manipulate how Amazon Redshift shows sensitive data to the user at query time, without transforming it in the database. You control access to data through masking policies that apply custom obfuscation rules to a given user or role. In that way, you can respond to changing privacy requirements without altering underlying data or editing SQL queries.

Dynamic data masking policies hide, obfuscate, or pseudonymize data that matches a given format. When attached to a table, the masking expression is applied to one or more of its columns. You can further modify masking policies to only apply them to certain users, or to user-defined roles that you can create with [Role-based access control (RBAC)](t_Roles.md). Additionally, you can apply DDM on the cell level by using conditional columns when creating your masking policy. For more information about conditional masking, see [Conditional dynamic data masking](t_ddm-conditional.md).

You can apply multiple masking policies with varying levels of obfuscation to the same column in a table and assign them to different roles. To avoid conflicts when you have different roles with different policies applying to one column, you can set priorities for each application. In that way, you can control what data a given user or role can access. DDM policies can partially or completely redact data, or hash it by using user-defined functions written in SQL, Python, or with AWS Lambda. By masking data using hashes, you can apply joins on this data without access to potentially sensitive information.
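The join-friendliness of hash masking can be checked outside the database. The following Python sketch (the salt, card numbers, and table layout are illustrative) applies the same salted SHA-256 mask to both sides of a join: masked rows still match, but the plaintext never appears in the result.

```python
import hashlib

def mask(card: str, salt: str = "testSalt") -> str:
    # Same idea as a sha2(card || salt) masking expression.
    return hashlib.sha256((card + salt).encode()).hexdigest()

# Two illustrative "tables" that share a credit card column.
customers = {"4532993817514842": "cust-100"}
charges = [("4532993817514842", 19.99)]

# Mask both sides before joining, as a DDM hash policy would.
masked_customers = {mask(card): cust for card, cust in customers.items()}
joined = [
    (masked_customers[mask(card)], amount)
    for card, amount in charges
    if mask(card) in masked_customers
]
```

Because the mask is deterministic for a given salt, equality joins on the masked values produce the same matches as joins on the plaintext.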

# SQL commands for managing dynamic data masking policies
<a name="r_ddm-procedures"></a>

You can perform the following actions to create, attach, detach, and delete dynamic data masking policies:
+ To create a DDM policy, use the [CREATE MASKING POLICY](r_CREATE_MASKING_POLICY.md) command.

  The following is an example of creating a masking policy using a SHA-2 hash function.

  ```
  CREATE MASKING POLICY hash_credit 
  WITH (credit_card varchar(256)) 
  USING (sha2(credit_card + 'testSalt', 256));
  ```
+ To alter an existing DDM policy, use the [ALTER MASKING POLICY](r_ALTER_MASKING_POLICY.md) command.

  The following is an example of altering an existing masking policy.

  ```
  ALTER MASKING POLICY hash_credit
  USING (sha2(credit_card + 'otherTestSalt', 256));
  ```
+ To attach a DDM policy on a table to one or more users or roles, use the [ATTACH MASKING POLICY](r_ATTACH_MASKING_POLICY.md) command.

  The following is an example of attaching a masking policy to a column/role pair.

  ```
  ATTACH MASKING POLICY hash_credit
  ON credit_cards (credit_card) 
  TO ROLE science_role 
  PRIORITY 30;
  ```

  The PRIORITY clause determines which masking policy applies to a user session when multiple policies are attached to the same column. For example, if the user in the preceding example has another masking policy attached to the same credit card column with a priority of 20, the `science_role` policy is the one that applies, because it has the higher priority of 30.
+ To detach a DDM policy on a table from one or more users or roles, use the [DETACH MASKING POLICY](r_DETACH_MASKING_POLICY.md) command.

  The following is an example of detaching a masking policy from a column/role pair.

  ```
  DETACH MASKING POLICY hash_credit 
  ON credit_cards(credit_card) 
  FROM ROLE science_role;
  ```
+ To drop a DDM policy from all databases, use the [DROP MASKING POLICY](r_DROP_MASKING_POLICY.md) command.

  The following is an example of dropping a masking policy from all databases.

  ```
  DROP MASKING POLICY hash_credit;  
  ```
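The PRIORITY resolution described above amounts to taking the attachment with the largest priority value for a column. A minimal sketch, with illustrative function and tuple shapes:

```python
def effective_policy(attachments):
    """attachments: (policy_name, priority) pairs attached to one
    column for the current session; the highest priority wins."""
    return max(attachments, key=lambda a: a[1])[0]
```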

# Dynamic data masking policy hierarchy
<a name="t_ddm-hierarchy"></a>

When attaching multiple masking policies, consider the following:
+ You can attach multiple masking policies to a single column.
+ When multiple masking policies are applicable to a query, the highest priority policy attached to each respective column applies. Consider the following example. 

  ```
  ATTACH MASKING POLICY partial_hash
  ON credit_cards(address, credit_card)
  TO ROLE analytics_role 
  PRIORITY 20;
  
  ATTACH MASKING POLICY full_hash
  ON credit_cards(credit_card, ssn)
  TO ROLE auditor_role 
  PRIORITY 30;
  
  SELECT address, credit_card, ssn
  FROM credit_cards;
  ```

  When running the SELECT statement, a user with both the analytics and auditor roles sees the address column with the `partial_hash` masking policy applied. They see the credit card and SSN columns with the `full_hash` masking policy applied because the `full_hash` policy has the higher priority on the credit card column.
+  If you don't specify a priority when attaching a masking policy, the default priority is 0. 
+ You can't attach two policies to the same column with equal priority. 
+ You can't attach two policies to the same combination of user and column or role and column.
+ When multiple masking policies are applicable along the same SUPER path while attached to the same user or role, only the highest priority attachment takes effect. Consider the following examples. 

  The first example shows two masking policies attached on the same path, with the higher priority policy taking effect. 

  ```
  ATTACH MASKING POLICY hide_name
  ON employees(col_person.name)
  TO PUBLIC
  PRIORITY 20;
  
  ATTACH MASKING POLICY hide_last_name
  ON employees(col_person.name.last)
  TO PUBLIC
  PRIORITY 30;
  
  --Only the hide_last_name policy takes effect.
  SELECT employees.col_person.name FROM employees;
  ```

  The second example shows two masking policies attached to different paths in the same SUPER object, with no conflict between the policies. Both attachments apply at the same time.

  ```
  ATTACH MASKING POLICY hide_first_name
  ON employees(col_person.name.first)
  TO PUBLIC
  PRIORITY 20;
  
  ATTACH MASKING POLICY hide_last_name
  ON employees(col_person.name.last)
  TO PUBLIC
  PRIORITY 20;
  
  --Both col_person.name.first and col_person.name.last are masked.
  SELECT employees.col_person.name FROM employees;
  ```

To confirm which masking policy applies to a given user and column or role and column combination, users with the [sys:secadmin](https://docs.aws.amazon.com/redshift/latest/dg/r_roles-default.html) role can look up the column/role or column/user pair in the [SVV\_ATTACHED\_MASKING\_POLICY](r_SVV_ATTACHED_MASKING_POLICY.md) system view. For more information, see [Dynamic data masking system views](r_ddm-svv.md).
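The per-column resolution from the `partial_hash`/`full_hash` example can be sketched as follows. The function and row shapes are illustrative, not system-view columns.

```python
def resolve(attachments):
    """attachments: (policy, column, priority) rows applicable to the
    session; returns the winning policy for each column."""
    winners = {}
    for policy, column, priority in attachments:
        if column not in winners or priority > winners[column][1]:
            winners[column] = (policy, priority)
    return {column: policy for column, (policy, _) in winners.items()}

# Rows mirroring the earlier ATTACH statements, for a user who holds
# both the analytics and auditor roles.
rows = [
    ("partial_hash", "address", 20),
    ("partial_hash", "credit_card", 20),
    ("full_hash", "credit_card", 30),
    ("full_hash", "ssn", 30),
]
```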

# Using dynamic data masking with SUPER data type paths
<a name="t_ddm-super"></a>

 Amazon Redshift supports attaching dynamic data masking policies to paths of SUPER type columns. For more information about the SUPER data type, see [Semi-structured data in Amazon Redshift](super-overview.md). 

When attaching masking policies to paths of SUPER type columns, consider the following.
+ When attaching a masking policy to a path on a column, that column must be defined as the SUPER data type. You can only apply masking policies to *scalar* values on the SUPER path. You can't apply masking policies to complex structures or arrays. 
+ You can apply different masking policies to multiple scalar values on a single SUPER column, as long as the SUPER paths don't conflict. For example, the SUPER paths `a.b` and `a.b.c` conflict because they are on the same path, with `a.b` being the parent of `a.b.c`. The SUPER paths `a.b.c` and `a.b.d` don’t conflict.
+ Amazon Redshift can’t check that the paths that a masking policy attaches to exist in the data and are of the expected type until the policy is applied at user query runtime. For example, when you attach a masking policy that masks TEXT values to a SUPER path that contains an INT value, Amazon Redshift will attempt to cast the type of the value at the path.

  In such situations, the behavior of Amazon Redshift at runtime depends on your configuration settings for querying SUPER objects. By default, Amazon Redshift is in lax mode, and will resolve missing paths and invalid casts as `NULL` for the given SUPER path. For more information about SUPER-related configuration settings, see [SUPER configurations](super-configurations.md).
+ SUPER is a schemaless type, which means that Amazon Redshift can’t confirm the existence of the value at a given SUPER path. If you attach a masking policy to a SUPER path that doesn’t exist and Amazon Redshift is in lax mode, Amazon Redshift will resolve the path to a `NULL` value. We recommend that you consider the expected format of SUPER objects and the likelihood of them having unexpected attributes when attaching masking policies to paths of SUPER columns. If you think there might be an unexpected schema in your SUPER column, consider attaching your masking policies directly to the SUPER column. You can use SUPER type information functions to check attributes and types, and use `OBJECT_TRANSFORM` to mask the values. For more information about SUPER type information functions, see [SUPER type information functions](c_Type_Info_Functions.md).
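The conflict rule for SUPER paths is a prefix relationship. The following hypothetical helper illustrates the rule from the considerations above; it is not a Redshift function.

```python
def conflicts(path_a: str, path_b: str) -> bool:
    """Two dotted SUPER paths conflict when one is a prefix of the other."""
    a, b = path_a.split("."), path_b.split(".")
    n = min(len(a), len(b))
    return a[:n] == b[:n]
```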

## Examples
<a name="t_ddm-super-examples"></a>

**Attaching masking policies to SUPER paths**  
The following example attaches multiple masking policies onto multiple SUPER type paths in one column.

```
CREATE TABLE employees (
    col_person SUPER
);

INSERT INTO employees
VALUES
    (
        json_parse('
            {
                "name": {
                    "first": "John",
                    "last": "Doe"
                },
                "age": 25,
                "ssn": "111-22-3333",
                "company": "Company Inc."
            }
        ')
    ),
    (
        json_parse('
            {
                "name": {
                    "first": "Jane",
                    "last": "Appleseed"
                },
                "age": 34,
                "ssn": "444-55-7777",
                "company": "Organization Org."
            }
        ')
    )
;
GRANT ALL ON ALL TABLES IN SCHEMA "public" TO PUBLIC;

-- Create the masking policies.

-- This policy converts the given name to all uppercase letters.
CREATE MASKING POLICY mask_first_name
WITH(first_name TEXT)
USING ( UPPER(first_name) );

-- This policy replaces the given name with the fixed string 'XXXX'.
CREATE MASKING POLICY mask_last_name
WITH(last_name TEXT)
USING ( 'XXXX'::TEXT );

-- This policy rounds down the given age to the nearest 10.
CREATE MASKING POLICY mask_age
WITH(age INT)
USING ( (FLOOR(age::FLOAT / 10) * 10)::INT );

-- This policy converts the first five digits of the given SSN to 'XXX-XX'.
CREATE MASKING POLICY mask_ssn
WITH(ssn TEXT)
USING ( 'XXX-XX-'::TEXT || SUBSTRING(ssn::TEXT FROM 8 FOR 4) );

-- Attach the masking policies to the employees table.
ATTACH MASKING POLICY mask_first_name
ON employees(col_person.name.first)
TO PUBLIC;

ATTACH MASKING POLICY mask_last_name
ON employees(col_person.name.last)
TO PUBLIC;

ATTACH MASKING POLICY mask_age
ON employees(col_person.age)
TO PUBLIC;

ATTACH MASKING POLICY mask_ssn
ON employees(col_person.ssn)
TO PUBLIC;

-- Verify that your masking policies are attached.
SELECT
    policy_name,
    TABLE_NAME,
    priority,
    input_columns,
    output_columns
FROM
    svv_attached_masking_policy;

   policy_name   | table_name | priority |           input_columns           |          output_columns
-----------------+------------+----------+-----------------------------------+-----------------------------------
 mask_age        | employees  |        0 | ["col_person.\"age\""]            | ["col_person.\"age\""]
 mask_first_name | employees  |        0 | ["col_person.\"name\".\"first\""] | ["col_person.\"name\".\"first\""]
 mask_last_name  | employees  |        0 | ["col_person.\"name\".\"last\""]  | ["col_person.\"name\".\"last\""]
 mask_ssn        | employees  |        0 | ["col_person.\"ssn\""]            | ["col_person.\"ssn\""]
(4 rows)

-- Observe the masking policies taking effect.
SELECT col_person FROM employees ORDER BY col_person.age;

-- This result is formatted for ease of reading.
         col_person
--------------------------------
{
    "name": {
        "first": "JOHN",
        "last": "XXXX"
    },
    "age": 20,
    "ssn": "XXX-XX-3333",
    "company": "Company Inc."
}
{
    "name": {
        "first": "JANE",
        "last": "XXXX"
    },
    "age": 30,
    "ssn": "XXX-XX-7777",
    "company": "Organization Org."
}
```

Following are some examples of invalid masking policy attachments to SUPER paths.

```
-- This attachment fails because there is already a policy
-- with equal priority attached to employees.name.last, which is
-- on the same SUPER path as employees.name.
ATTACH MASKING POLICY mask_ssn
ON employees(col_person.name)
TO PUBLIC;
ERROR:  DDM policy "mask_last_name" is already attached on relation "employees" column "col_person."name"."last"" with same priority
               
-- Create a masking policy that takes a DATETIME input.
CREATE MASKING POLICY mask_date
WITH(INPUT DATETIME)
USING ( INPUT );
               
-- This attachment fails because SUPER type columns can't contain DATETIME objects.
ATTACH MASKING POLICY mask_date
ON employees(col_person.company)
TO PUBLIC;
ERROR:  cannot attach masking policy for output of type "timestamp without time zone" to column "col_person."company"" of type "super
```

Following is an example of attaching a masking policy to a SUPER path that doesn’t exist. By default, Amazon Redshift will resolve the path to `NULL`.

```
ATTACH MASKING POLICY mask_first_name
ON employees(col_person.not_exists)
TO PUBLIC;

SELECT col_person FROM employees LIMIT 1;

-- This result is formatted for ease of reading.
         col_person
-----------------------------------
{
    "name": {
        "first": "JOHN",
        "last": "XXXX"
    },
    "age": 20,
    "ssn": "XXX-XX-3333",
    "company": "Company Inc.",
    "not_exists": null
}
```

# Conditional dynamic data masking
<a name="t_ddm-conditional"></a>

You can mask data at the cell level by creating masking policies with conditional expressions in the masking expression. For example, you can create a masking policy that applies different masks to a value, depending on another column's value in that row.

The following is an example of using conditional data masking to create and attach a masking policy that partially redacts credit card numbers involved in fraud, while completely hiding all other credit card numbers. You must be a superuser or have the [sys:secadmin](https://docs.aws.amazon.com/redshift/latest/dg/r_roles-default.html) role to run this example.

```
--Create an analyst role.
CREATE ROLE analyst;

--Create a credit card table. The table contains an is_fraud boolean column,
--which is TRUE if the credit card number in that row was involved in a fraudulent transaction.
CREATE TABLE credit_cards (id INT, is_fraud BOOLEAN, credit_card_number VARCHAR(16));

--Create a function that partially redacts credit card numbers.
CREATE FUNCTION REDACT_CREDIT_CARD (credit_card VARCHAR(16))
RETURNS VARCHAR(16) IMMUTABLE
AS $$
    import re
    regexp = re.compile("^([0-9]{6})[0-9]{5,6}([0-9]{4})")
 
    match = regexp.search(credit_card)
    if match is not None:
        first = match.group(1)
        last = match.group(2)
    else:
        first = "000000"
        last = "0000"
    
    return "{}XXXXX{}".format(first, last)
$$ LANGUAGE plpythonu;

--Create a masking policy that partially redacts credit card numbers if the is_fraud value for that row is TRUE,
--and otherwise blanks out the credit card number completely.
CREATE MASKING POLICY card_number_conditional_mask
    WITH (fraudulent BOOLEAN, pan varchar(16)) 
    USING (CASE WHEN fraudulent THEN REDACT_CREDIT_CARD(pan)
                ELSE Null
           END);

--Attach the masking policy to the credit_cards/analyst table/role pair. 
ATTACH MASKING POLICY card_number_conditional_mask ON credit_cards (credit_card_number)
 USING (is_fraud, credit_card_number)
 TO ROLE analyst PRIORITY 100;
```
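The redaction logic in the UDF above can be exercised outside the database. The following Python sketch copies the `REDACT_CREDIT_CARD` regular expression and mirrors the policy's CASE expression, with `None` standing in for SQL NULL:

```python
import re

# Same pattern as the REDACT_CREDIT_CARD UDF above.
_PAN = re.compile(r"^([0-9]{6})[0-9]{5,6}([0-9]{4})")

def redact_credit_card(card: str) -> str:
    # Keep the first six and last four digits; mask the middle.
    match = _PAN.search(card)
    first, last = (match.group(1), match.group(2)) if match else ("000000", "0000")
    return "{}XXXXX{}".format(first, last)

def conditional_mask(is_fraud: bool, card: str):
    # Mirrors: CASE WHEN fraudulent THEN REDACT_CREDIT_CARD(pan) ELSE NULL END
    return redact_credit_card(card) if is_fraud else None
```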

# Dynamic data masking system views
<a name="r_ddm-svv"></a>

Superusers, users with the `sys:operator` role, and users with the ACCESS SYSTEM TABLE permission can access the following DDM-related system views.
+  [SVV\_MASKING\_POLICY](r_SVV_MASKING_POLICY.md)

  Use SVV\_MASKING\_POLICY to view all masking policies created on the cluster or workgroup.
+  [SVV\_ATTACHED\_MASKING\_POLICY](r_SVV_ATTACHED_MASKING_POLICY.md)

  Use SVV\_ATTACHED\_MASKING\_POLICY to view all the relations and users or roles with policies attached on the currently connected database.
+  [SYS\_APPLIED\_MASKING\_POLICY\_LOG](SYS_APPLIED_MASKING_POLICY_LOG.md)

  Use SYS\_APPLIED\_MASKING\_POLICY\_LOG to trace the application of masking policies on queries that reference DDM-protected relations.

Following are some examples of the information that you can find using system views.

```
--Select all policies associated with specific users, as opposed to roles
SELECT policy_name,
       schema_name,
       table_name,
       grantee
FROM svv_attached_masking_policy
WHERE grantee_type = 'user';     

--Select all policies attached to a specific user
SELECT policy_name,
       schema_name,
       table_name,
       grantee
FROM svv_attached_masking_policy
WHERE grantee = 'target_grantee_name';
            
--Select all policies attached to a given table
SELECT policy_name,
       schema_name,
       table_name,
       grantee
FROM svv_attached_masking_policy
WHERE table_name = 'target_table_name'
      AND schema_name = 'target_schema_name';            
            
--Select the highest priority policy attachment for a given role
SELECT samp.policy_name,
       samp.priority,
       samp.grantee,
       smp.policy_expression
FROM svv_masking_policy AS smp
JOIN svv_attached_masking_policy AS samp
    ON samp.policy_name = smp.policy_name
WHERE
    samp.grantee_type = 'role' AND
    samp.policy_name = mask_get_policy_for_role_on_column(
        'target_schema_name', 
        'target_table_name', 
        'target_column_name', 
        'target_role_name')
ORDER BY samp.priority desc
LIMIT 1;         

--See which policy a specific user will see on a specific column in a given relation
SELECT samp.policy_name,
       samp.priority,
       samp.grantee,
       smp.policy_expression
FROM svv_masking_policy AS smp
JOIN svv_attached_masking_policy AS samp
    ON samp.policy_name = smp.policy_name
WHERE
    samp.grantee_type = 'role' AND
    samp.policy_name = mask_get_policy_for_user_on_column(
        'target_schema_name',
        'target_table_name',
        'target_column_name',
        'target_user_name')
ORDER BY samp.priority desc; 
         
--Select all policies attached to a given relation.
SELECT policy_name,
       schema_name,
       relation_name,
       database_name
FROM sys_applied_masking_policy_log
WHERE relation_name = 'relation_name'
      AND schema_name = 'schema_name';
```

# Considerations when using dynamic data masking
<a name="t_ddm-considerations"></a>

When using dynamic data masking, consider the following: 
+  When querying objects created from tables, such as views, users will see results based on their own masking policies, not the policies of the user who created the objects. For example, a user with the analyst role querying a view created by a secadmin would see results with masking policies attached to the analyst role. 
+  To prevent the EXPLAIN command from potentially exposing sensitive masking policy filters, only users with the SYS\_EXPLAIN\_DDM permission can see masking policies applied in EXPLAIN outputs. Users don't have the SYS\_EXPLAIN\_DDM permission by default.

  The following is the syntax for granting the permission to a role.

  ```
  GRANT EXPLAIN MASKING TO ROLE rolename
  ```

   For more information about the EXPLAIN command, see [EXPLAIN](r_EXPLAIN.md). 
+  Users with different roles can see differing results based on the filter conditions or join conditions used. For example, running a SELECT command on a table using a specific column value will fail if the user running the command has a masking policy applied that obfuscates that column. 
+  DDM policies must be applied ahead of any predicate operations or projections. Masking policies can include the following:
  + Low-cost constant operations, such as converting a value to null
  + Moderate-cost operations, such as HMAC hashing
  + High-cost operations, such as calls to external Lambda user-defined functions

  As such, we recommend that you use simple masking expressions when possible. 
+  You can use DDM policies for roles with row-level security policies, but note that RLS policies are applied before DDM. A dynamic data masking expression won't be able to read a row that was protected by RLS. For more information about RLS, see [Row-level security](t_rls.md). 
+  When using the [COPY](r_COPY.md) command to copy from Parquet files to DDM-protected target tables, explicitly specify the columns in the COPY statement. For more information about mapping columns with COPY, see [Column mapping options](copy-parameters-column-mapping.md).
+  DDM policies can't attach to the following relations:
  +  System tables and catalogs 
  +  External tables 
  +  Datasharing tables
  +  Cross-DB relations 
  +  Temporary tables 
  +  Correlated queries 
+  DDM policies can contain lookup tables. Lookup tables can be present in the USING clause. The following relation types can’t be used as lookup tables:
  +  System tables and catalogs 
  +  External tables 
  +  Datasharing tables 
  +  Views, materialized views, and late-binding views 
  +  Cross-DB relations 
  +  Temporary tables 
  +  Correlated queries 

  Following is an example of a masking policy that references a lookup table.

  ```
  --Create a masking policy referencing a lookup table
  CREATE MASKING POLICY lookup_mask_credit_card WITH (credit_card TEXT) USING (
    CASE
      WHEN
        credit_card IN (SELECT credit_card_lookup FROM credit_cards_lookup)      
      THEN '000000XXXX0000'
      ELSE REDACT_CREDIT_CARD(credit_card)
      END
    ); 
    
  --Provides access to the lookup table via a policy attached to a role
  GRANT SELECT ON TABLE credit_cards_lookup TO MASKING POLICY lookup_mask_credit_card;
  ```
+  You can't attach a masking policy that would produce an output incompatible with the target column's type and size. For example, you can’t attach a masking policy that outputs a 12 character long string to a VARCHAR(10) column. Amazon Redshift supports the following exceptions: 
  +  A masking policy with the input type INTN can be attached to a column of type INTM, as long as M < N. For example, a BIGINT (INT8) input policy can be attached to an INT (INT4) column.
  +  A masking policy with the input type NUMERIC or DECIMAL can always be attached to a FLOAT column. 
+ You can't use DDM policies with data sharing. If the datashare's data producer attaches a DDM policy to a table in the datashare, the table becomes inaccessible to users from the data consumer who are trying to query the table. Attempting to add the relation to a datashare fails on the producer-side cluster or namespace with the following error:

  ```
  <ddm_protected_relation> or a relation dependent on it is protected by a masking policy and cannot be added to a datashare
  ```

  If you attach a masking policy to a relation on the producer side and the relation is already included in a datashare, attempting to query the relation on the consumer side fails with the following error:

  ```
  cross-cluster query of the masked relation <ddm_protected_relation> is not supported.
  ```

  You can turn off DDM for datashares using the ALTER TABLE command with the MASKING OFF FOR DATASHARES parameter. For more information, see [ALTER TABLE](r_ALTER_TABLE.md).
+ You can't query relations that have attached DDM policies if your values for any of the following configuration options don't match the default value of the session:
  +  `enable_case_sensitive_super_attribute` 
  +  `enable_case_sensitive_identifier` 
  +  `downcase_delimited_identifier` 

  Consider resetting your session’s configuration options if you attempt to query a relation with a DDM policy attached and see the message "DDM protected relation does not support session level config on case sensitivity being different from its default value."
+  When your provisioned cluster or serverless namespace has any dynamic data masking policies, the following commands are blocked for regular users: 

  ```
  ALTER <current_user> SET enable_case_sensitive_super_attribute/enable_case_sensitive_identifier/downcase_delimited_identifier
  ```

  When you create DDM policies, we recommend that you change the default configuration option settings for regular users to match the session’s configuration option settings at the time the policy was created. Superusers and users with the ALTER USER privilege can do this by using parameter group settings or the ALTER USER command. For information about parameter groups, see [Amazon Redshift parameter groups](https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-parameter-groups.html) in the *Amazon Redshift Management Guide*. For information about the ALTER USER command, see [ALTER USER](r_ALTER_USER.md).
+ Views and late-binding views with attached DDM policies can't be replaced by regular users using the [CREATE VIEW](r_CREATE_VIEW.md) command. To replace views or LBVs with DDM policies, first detach any DDM policies attached to them, replace the views or LBVs, and reattach the policies. Superusers and users with the `sys:secadmin` permission can use CREATE VIEW on views or LBVs with DDM policies without detaching the policies.
+ Views with attached DDM policies can't reference system tables and views. Late-binding views can reference system tables and views.
+ Late-binding views with attached DDM policies can't reference nested data in data lakes, such as JSON documents.
+  Late-binding views can't have DDM policies attached if that late-binding view is referenced by any view.
+  DDM policies attached to late-binding views are attached by column name. At query time, Amazon Redshift validates that all masking policies attached to the late-binding view have been applied successfully, and that the late-binding view's output column type matches the types in the attached masking policies. If the validation fails, Amazon Redshift returns an error for the query.
+ You can use customized session context variables when creating DDM policies. The following example sets session context variables for a DDM policy.

  ```
  -- Set a customized context variable.
  SELECT set_config('app.city', 'XXXX', FALSE);
  
  -- Create a MASKING policy using current_setting() to get the value of a customized context variable.
  CREATE MASKING POLICY city_mask
  WITH (city VARCHAR(30))
  USING (current_setting('app.city')::VARCHAR(30));
  
  -- Attach the policy on the target table to one or more roles.
  ATTACH MASKING POLICY city_mask 
  ON tickit_users_redshift(city) 
  TO ROLE analyst, ROLE dbadmin;
  ```

  For details on how to set and retrieve customized session context variables, go to [SET](r_SET.md), [SET\_CONFIG](r_SET_CONFIG.md), [SHOW](r_SHOW.md), [CURRENT\_SETTING](r_CURRENT_SETTING.md), and [RESET](r_RESET.md). For more information on modifying the server configuration in general, go to [Modifying the server configuration](cm_chap_ConfigurationRef.md#t_Modifying_the_default_settings).
**Important**  
 When you use session context variables within DDM policies, the masking behavior depends on values that the user or role invoking the policy can set. Be careful to avoid security vulnerabilities when using session context variables in DDM policies. 
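
 As an illustration of that risk, consider the `city_mask` policy shown earlier. Any user can typically change a session context variable within their own session, so a policy that reads such a variable returns values steered by whoever runs the query. This is a hypothetical sketch, not a recommended pattern:

 ```
 -- The querying user overwrites the context variable in their own session...
 SELECT set_config('app.city', 'Seattle', FALSE);

 -- ...so the value substituted by the city_mask policy now reflects
 -- a value chosen by the querying user, not by the policy author.
 SELECT city FROM tickit_users_redshift;
 ```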

# Dynamic data masking end-to-end example
<a name="ddm-example"></a>

The following is an end-to-end example showing how you can create and attach masking policies to a column. These policies let users access a column and see different values, depending on the degree of obfuscation in the policies attached to their roles. You must be a superuser or have the [sys:secadmin](https://docs.aws.amazon.com/redshift/latest/dg/r_roles-default.html) role to run this example.

## Creating a masking policy
<a name="ddm-example-create"></a>

First, create a table and populate it with credit card values.

```
--create the table         
CREATE TABLE credit_cards (
  customer_id INT,
  credit_card TEXT
);

--populate the table with sample values
INSERT INTO credit_cards
VALUES
  (100, '4532993817514842'),
  (100, '4716002041425888'),
  (102, '5243112427642649'),
  (102, '6011720771834675'),
  (102, '6011378662059710'),
  (103, '373611968625635')
;

--run GRANT to grant permission to use the SELECT statement on the table
GRANT SELECT ON credit_cards TO PUBLIC;

--create two users
CREATE USER regular_user WITH PASSWORD '1234Test!';

CREATE USER analytics_user WITH PASSWORD '1234Test!';

--create the analytics_role role and grant it to analytics_user
--regular_user does not have a role
CREATE ROLE analytics_role;

GRANT ROLE analytics_role TO analytics_user;
```

Next, create a masking policy to apply to the analytics role.

```
--create a masking policy that fully masks the credit card number
CREATE MASKING POLICY mask_credit_card_full
WITH (credit_card VARCHAR(256))
USING ('000000XXXX0000'::TEXT);

--create a user-defined function that partially obfuscates credit card data
CREATE FUNCTION REDACT_CREDIT_CARD (credit_card TEXT)
RETURNS TEXT IMMUTABLE
AS $$
    import re
    regexp = re.compile("^([0-9]{6})[0-9]{5,6}([0-9]{4})")
 
    match = regexp.search(credit_card)
    if match != None:
        first = match.group(1)
        last = match.group(2)
    else:
        first = "000000"
        last = "0000"
    
    return "{}XXXXX{}".format(first, last)
$$ LANGUAGE plpythonu;

--create a masking policy that applies the REDACT_CREDIT_CARD function
CREATE MASKING POLICY mask_credit_card_partial
WITH (credit_card VARCHAR(256))
USING (REDACT_CREDIT_CARD(credit_card));

--confirm the masking policies using the associated system views
SELECT * FROM svv_masking_policy;

SELECT * FROM svv_attached_masking_policy;
```
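
If you want to sanity-check the redaction logic outside the database, the body of the `REDACT_CREDIT_CARD` UDF above can be run as plain Python. The following is a standalone sketch of the same logic, not a Redshift API:

```python
import re

# Same pattern as the UDF above: capture the first 6 and last 4 digits.
_CARD_RE = re.compile(r"^([0-9]{6})[0-9]{5,6}([0-9]{4})")

def redact_credit_card(credit_card: str) -> str:
    """Keep the first 6 and last 4 digits; mask the middle with X characters."""
    match = _CARD_RE.search(credit_card)
    if match is not None:
        first, last = match.group(1), match.group(2)
    else:
        # Values that don't look like card numbers are fully masked.
        first, last = "000000", "0000"
    return "{}XXXXX{}".format(first, last)

print(redact_credit_card("4532993817514842"))  # 453299XXXXX4842
print(redact_credit_card("373611968625635"))   # 373611XXXXX5635
print(redact_credit_card("not-a-card"))        # 000000XXXXX0000
```

For example, the 16-digit value `4532993817514842` redacts to `453299XXXXX4842`, while a value that doesn't match the pattern falls back to the fully masked `000000XXXXX0000`.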

## Attaching a masking policy
<a name="ddm-example-attach"></a>

Attach the masking policies to the credit card table.

```
--attach mask_credit_card_full to the credit card table as the default policy
--all users will see this masking policy unless a higher priority masking policy is attached to them or their role
ATTACH MASKING POLICY mask_credit_card_full
ON credit_cards(credit_card)
TO PUBLIC;

--attach mask_credit_card_partial to the analytics role
--users with the analytics role can see partial credit card information
ATTACH MASKING POLICY mask_credit_card_partial
ON credit_cards(credit_card)
TO ROLE analytics_role
PRIORITY 10;

--confirm the masking policies are applied to the table and role in the associated system view
SELECT * FROM svv_attached_masking_policy;

--confirm the full masking policy is in place for normal users by selecting from the credit card table as regular_user
SET SESSION AUTHORIZATION regular_user;

SELECT * FROM credit_cards;

--confirm the partial masking policy is in place for users with the analytics role by selecting from the credit card table as analytics_user
SET SESSION AUTHORIZATION analytics_user;

SELECT * FROM credit_cards;
```

## Altering a masking policy
<a name="ddm-example-alter"></a>

The following section shows how to alter a dynamic data masking policy.

```
--reset session authorization to the default
RESET SESSION AUTHORIZATION;

--alter the mask_credit_card_full policy
ALTER MASKING POLICY mask_credit_card_full
USING ('00000000000000'::TEXT);

--confirm the full masking policy is in place after altering the policy, and that results are altered from '000000XXXX0000' to '00000000000000'
SELECT * FROM credit_cards;
```

## Detaching and dropping a masking policy
<a name="ddm-example-detach"></a>

The following section shows how to remove all dynamic data masking from the table by detaching and then dropping the masking policies.

```
--reset session authorization to the default
RESET SESSION AUTHORIZATION;

--detach both masking policies from the credit_cards table
DETACH MASKING POLICY mask_credit_card_full 
ON credit_cards(credit_card) 
FROM PUBLIC;

DETACH MASKING POLICY mask_credit_card_partial 
ON credit_cards(credit_card) 
FROM ROLE analytics_role;

--drop both masking policies
DROP MASKING POLICY mask_credit_card_full;

DROP MASKING POLICY mask_credit_card_partial;
```

# Scoped permissions
<a name="t_scoped-permissions"></a>

Scoped permissions let you grant permissions to a user or role on all objects of a type within a database or schema. Users and roles with scoped permissions have the specified permissions on all current and future objects within the database or schema.

You can view the scope of database-level scoped permissions in [SVV\_DATABASE\_PRIVILEGES](r_SVV_DATABASE_PRIVILEGES.md). You can view the scope of schema-level scoped permissions in [SVV\_SCHEMA\_PRIVILEGES](r_SVV_SCHEMA_PRIVILEGES.md).

 For more information on applying scoped permissions, see [GRANT](r_GRANT.md) and [REVOKE](r_REVOKE.md).
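
To make the scope concrete, the following sketch grants a schema-level scoped SELECT permission, which also covers tables created later. The schema and role names are hypothetical; for the authoritative syntax, see [GRANT](r_GRANT.md).

```
-- Grant scoped SELECT on all current and future tables in sales_schema.
GRANT SELECT FOR TABLES IN SCHEMA sales_schema TO ROLE analyst_role;

-- A table created afterward is covered automatically.
CREATE TABLE sales_schema.q3_orders (order_id INT);

-- Revoking removes the scoped permission for all current and future tables.
REVOKE SELECT FOR TABLES IN SCHEMA sales_schema FROM ROLE analyst_role;
```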

# Considerations for using scoped permissions
<a name="t_scoped-permissions-considerations"></a>

When using scoped permissions, consider the following:
+ You can use scoped permissions to grant or revoke permissions on a database or schema scope to or from a specified user or role. 
+ You can't grant scoped permissions to user groups. 
+ Granting or revoking scoped permissions changes permissions for all current and future objects in the scope.
+ Scoped permissions and object-level permissions operate independently of each other. For example, a user retains permissions on a table in both of the following cases.
  + The user is granted SELECT on the table schema1.table1 and a scoped SELECT permission on schema1. The scoped SELECT permission for all tables in schema1 is then revoked. The user retains SELECT on schema1.table1 through the object-level grant.
  + The user is granted SELECT on the table schema1.table1 and a scoped SELECT permission on schema1. SELECT on schema1.table1 is then revoked. The user retains SELECT on schema1.table1 through the scoped permission on schema1.
+ To grant or revoke scoped permissions, you must meet one of the following criteria:
  + You are a superuser.
  + You have the grant option for that permission. For more information on grant options, go to the WITH GRANT OPTION parameter in [GRANT](r_GRANT.md).
+ You can grant or revoke scoped permissions only on the connected database or on databases imported from a datashare.
+ You can use scoped permissions to set the default permissions on a database created from a datashare. A consumer-side datashare user who is granted scoped permissions on a shared database will automatically gain those permissions for any new object added to the datashare on the producer side.
+ Producers can grant scoped permissions on objects within a schema to a datashare. (preview) 
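
The independence of scoped and object-level permissions described above can be sketched as follows. The object, user, and schema names are hypothetical; see [GRANT](r_GRANT.md) and [REVOKE](r_REVOKE.md) for the authoritative syntax.

```
-- Grant both an object-level and a schema-scoped SELECT permission.
GRANT SELECT ON schema1.table1 TO user1;
GRANT SELECT FOR TABLES IN SCHEMA schema1 TO user1;

-- Case 1: revoking the scoped permission leaves the object-level grant intact.
REVOKE SELECT FOR TABLES IN SCHEMA schema1 FROM user1;
-- user1 can still run SELECT on schema1.table1.

-- Case 2: revoking the object-level grant leaves the scoped permission intact.
GRANT SELECT FOR TABLES IN SCHEMA schema1 TO user1;
REVOKE SELECT ON schema1.table1 FROM user1;
-- user1 can still run SELECT on schema1.table1 through the schema scope.
```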