

# Security with Amazon Aurora PostgreSQL
<a name="AuroraPostgreSQL.Security"></a>

For a general overview of Aurora security, see [Security in Amazon Aurora](UsingWithRDS.md). You can manage security for Amazon Aurora PostgreSQL at a few different levels:
+ To control who can perform Amazon RDS management actions on Aurora PostgreSQL DB clusters and DB instances, use AWS Identity and Access Management (IAM). IAM handles the authentication of user identity before the user can access the service. It also handles authorization, that is, whether the user is allowed to do what they're trying to do. IAM database authentication is an additional authentication method that you can choose when you create your Aurora PostgreSQL DB cluster. For more information, see [Identity and access management for Amazon Aurora](UsingWithRDS.IAM.md).

  If you use IAM with your Aurora PostgreSQL DB cluster, sign in to the AWS Management Console with your IAM credentials before opening the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).
+ Make sure to create Aurora DB clusters in a virtual private cloud (VPC) based on the Amazon VPC service. To control which devices and Amazon EC2 instances can open connections to the endpoint and port of the DB instance for Aurora DB clusters in a VPC, use a VPC security group. You can make these endpoint and port connections by using Secure Sockets Layer (SSL). In addition, firewall rules at your company can control whether devices running at your company can open connections to a DB instance. For more information on VPCs, see [Amazon VPC and Amazon Aurora](USER_VPC.md).

  The supported VPC tenancy depends on the DB instance class used by your Aurora PostgreSQL DB clusters. With `default` VPC tenancy, the DB cluster runs on shared hardware. With `dedicated` VPC tenancy, the DB cluster runs on a dedicated hardware instance. The burstable performance DB instance classes support default VPC tenancy only. The burstable performance DB instance classes include the db.t3 and db.t4g DB instance classes. All other Aurora PostgreSQL DB instance classes support both default and dedicated VPC tenancy.

  For more information about instance classes, see [Amazon Aurora DB instance classes](Concepts.DBInstanceClass.md). For more information about `default` and `dedicated` VPC tenancy, see [Dedicated instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/dedicated-instance.html) in the *Amazon Elastic Compute Cloud User Guide*.
+ To grant permissions to the PostgreSQL databases running on your Amazon Aurora DB cluster, you can take the same general approach as with stand-alone instances of PostgreSQL. Commands such as `CREATE ROLE`, `ALTER ROLE`, `GRANT`, and `REVOKE` work just as they do in on-premises databases, as does directly modifying databases, schemas, and tables.

  PostgreSQL manages privileges by using *roles*. The `rds_superuser` role is the most privileged role on an Aurora PostgreSQL DB cluster. This role is created automatically, and it's granted to the user that creates the DB cluster (the master user account, `postgres` by default). To learn more, see [Understanding PostgreSQL roles and permissions](Appendix.PostgreSQL.CommonDBATasks.Roles.md). 

All available Aurora PostgreSQL versions (10, 11, 12, 13, 14, and higher releases) support the Salted Challenge Response Authentication Mechanism (SCRAM) for passwords as an alternative to message digest (MD5). We recommend that you use SCRAM because it's more secure than MD5. For more information, including how to migrate database user passwords from MD5 to SCRAM, see [Using SCRAM for PostgreSQL password encryption](PostgreSQL_Password_Encryption_configuration.md).

# Understanding PostgreSQL roles and permissions
<a name="Appendix.PostgreSQL.CommonDBATasks.Roles"></a>

When you create an Aurora PostgreSQL DB cluster using the AWS Management Console, an administrator account is created at the same time. By default, its name is `postgres`, as shown in the following screenshot:

![\[The default login identity for Credentials in the Create database page is postgres.\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/default-login-identity-apg-rpg.png)


You can choose another name rather than accept the default (`postgres`). If you do, the name you choose must start with a letter and be between 1 and 16 alphanumeric characters. For simplicity's sake, we refer to this main user account by its default value (`postgres`) throughout this guide.

If you use the `create-db-cluster` AWS CLI command rather than the AWS Management Console, you specify the user name by passing it with the `--master-username` parameter. For more information, see [Step 2: Create an Aurora PostgreSQL DB cluster](CHAP_GettingStartedAurora.AuroraPostgreSQL.FullConfig.md#CHAP_GettingStarted.AuroraPostgreSQL.CreateDBCluster).
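
A minimal sketch of such a command follows. The cluster identifier, user name, and password are placeholder values, and a real call needs additional options (such as network and storage settings) appropriate to your environment:

```
aws rds create-db-cluster \
    --db-cluster-identifier my-apg-cluster \
    --engine aurora-postgresql \
    --master-username labadmin \
    --master-user-password 'choose-a-strong-password'
```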

Whether you use the AWS Management Console, the AWS CLI, or the Amazon RDS API, and whether you use the default `postgres` name or choose a different name, this first database user account is a member of the `rds_superuser` group and has `rds_superuser` privileges.

**Topics**
+ [Understanding the rds\_superuser role](Appendix.PostgreSQL.CommonDBATasks.Roles.rds_superuser.md)
+ [Controlling user access to the PostgreSQL database](Appendix.PostgreSQL.CommonDBATasks.Access.md)
+ [Delegating and controlling user password management](Appendix.PostgreSQL.CommonDBATasks.RestrictPasswordMgmt.md)
+ [Using SCRAM for PostgreSQL password encryption](PostgreSQL_Password_Encryption_configuration.md)

# Understanding the rds\_superuser role
<a name="Appendix.PostgreSQL.CommonDBATasks.Roles.rds_superuser"></a>

In PostgreSQL, a *role* can define a user, a group, or a set of specific permissions granted to a group or user for various objects in the database. The PostgreSQL `CREATE USER` and `CREATE GROUP` commands have been replaced by the more general `CREATE ROLE` command, which takes specific properties to distinguish database users. A database user can be thought of as a role with the `LOGIN` privilege.

**Note**  
The `CREATE USER` and `CREATE GROUP` commands can still be used. For more information, see [Database Roles](https://www.postgresql.org/docs/current/user-manag.html) in the PostgreSQL documentation.
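
For example, the following two statements create equivalent login roles; the role names are only illustrative:

```
CREATE ROLE app_reader WITH LOGIN PASSWORD 'change_me';
CREATE USER app_reader2 WITH PASSWORD 'change_me';  -- CREATE USER implies LOGIN
```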

The `postgres` user is the most highly privileged database user on your Aurora PostgreSQL DB cluster. It has the characteristics defined by the following `CREATE ROLE` statement. 

```
CREATE ROLE postgres WITH LOGIN NOSUPERUSER INHERIT CREATEDB CREATEROLE NOREPLICATION VALID UNTIL 'infinity'
```

The properties `NOSUPERUSER`, `NOREPLICATION`, `INHERIT`, and `VALID UNTIL 'infinity'` are the default options for CREATE ROLE, unless otherwise specified. 
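
You can confirm these attributes by querying the `pg_roles` catalog view; the output looks similar to the following:

```
postgres=> SELECT rolname, rolsuper, rolcreaterole, rolcreatedb, rolcanlogin
           FROM pg_roles WHERE rolname = 'postgres';
 rolname  | rolsuper | rolcreaterole | rolcreatedb | rolcanlogin
----------+----------+---------------+-------------+-------------
 postgres | f        | t             | t           | t
(1 row)
```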

By default, `postgres` has privileges granted to the `rds_superuser` role, and permissions to create roles and databases. The `rds_superuser` role allows the `postgres` user to do the following: 
+ Add extensions that are available for use with Aurora PostgreSQL. For more information, see [Working with extensions and foreign data wrappers](Appendix.PostgreSQL.CommonDBATasks.md). 
+ Create roles for users and grant privileges to users. For more information, see [CREATE ROLE](https://www.postgresql.org/docs/current/sql-createrole.html) and [GRANT](https://www.postgresql.org/docs/14/sql-grant.html) in the PostgreSQL documentation. 
+ Create databases. For more information, see [CREATE DATABASE](https://www.postgresql.org/docs/14/sql-createdatabase.html) in the PostgreSQL documentation.
+ Grant `rds_superuser` privileges to user roles that don't have these privileges, and revoke privileges as needed. We recommend that you grant this role only to those users who perform superuser tasks. In other words, you can grant this role to database administrators (DBAs) or system administrators.
+ Grant (and revoke) the `rds_replication` role to database users that don't have the `rds_superuser` role. 
+ Grant (and revoke) the `rds_password` role to database users that don't have the `rds_superuser` role. 
+ Obtain status information about all database connections by using the `pg_stat_activity` view. When needed, `rds_superuser` can stop any connections by using `pg_terminate_backend` or `pg_cancel_backend`. 
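
For example, a member of `rds_superuser` can list active sessions and then end one of them. The process ID (`pid`) shown here is illustrative:

```
postgres=> SELECT pid, usename, datname, state FROM pg_stat_activity WHERE state = 'active';
postgres=> SELECT pg_terminate_backend(12345);
```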

In the `CREATE ROLE postgres...` statement, you can see that the `postgres` user role specifically disallows PostgreSQL `superuser` permissions. Aurora PostgreSQL is a managed service, so you can't access the host OS, and you can't connect using the PostgreSQL `superuser` account. Many of the tasks that require `superuser` access on a stand-alone PostgreSQL installation are managed automatically by Aurora.

For more information about granting privileges, see [GRANT](http://www.postgresql.org/docs/current/sql-grant.html) in the PostgreSQL documentation.

The `rds_superuser` role is one of several *predefined* roles in an Aurora PostgreSQL DB cluster. 

**Note**  
In PostgreSQL 13 and earlier releases, *predefined* roles are known as *default* roles.

In the following list, you can find some of the other predefined roles that are created automatically for a new Aurora PostgreSQL DB cluster. You can't drop, rename, or modify the privileges of these predefined roles. Attempting to do so results in an error.
+ **rds\_password** – A role that can change passwords and set up password constraints for database users. The `rds_superuser` role is granted this role by default, and can grant it to database users. For more information, see [Controlling user access to the PostgreSQL database](Appendix.PostgreSQL.CommonDBATasks.Access.md).
  + For RDS for PostgreSQL versions older than 14, the `rds_password` role can change passwords and set up password constraints for database users and for users with the `rds_superuser` role. From RDS for PostgreSQL version 14 and later, the `rds_password` role can change passwords and set up password constraints only for database users. Only users with the `rds_superuser` role can perform these actions on other users with the `rds_superuser` role.
+ **rdsadmin** – A role that's created to handle many of the management tasks that the administrator with `superuser` privileges would perform on a standalone PostgreSQL database. This role is used internally by Aurora PostgreSQL for many management tasks. 

# Viewing roles and their privileges
<a name="Appendix.PostgreSQL.CommonDBATasks.Roles.View"></a>

You can view predefined roles and their privileges in your RDS for PostgreSQL DB instance using different commands, depending on your PostgreSQL version. To see all predefined roles, connect to your RDS for PostgreSQL DB instance and run the following commands using `psql`.

**For `psql` 15 and earlier versions**

Connect to your RDS for PostgreSQL DB instance and use the `\du` command in psql:

```
postgres=> \du
                                                               List of roles
    Role name    |                         Attributes                         |                          Member of
-----------------+------------------------------------------------------------+------------------------------------------------------
 postgres        | Create role, Create DB                                    +| {rds_superuser}
                 | Password valid until infinity                              |
 rds_ad          | Cannot login                                               | {}
 rds_iam         | Cannot login                                               | {}
 rds_password    | Cannot login                                               | {}
 rds_replication | Cannot login                                               | {}
 rds_superuser   | Cannot login                                               | {pg_monitor,pg_signal_backend,rds_password,rds_replication}
 rdsadmin        | Superuser, Create role, Create DB, Replication, Bypass RLS+| {}
                 | Password valid until infinity                              |
```

**For `psql` 16 and later versions**

Connect to your RDS for PostgreSQL DB instance and use the `\drg+` command in psql:

```
postgres=> \drg+
                             List of role grants
   Role name   |          Member of          |       Options       | Grantor
---------------+-----------------------------+---------------------+----------
 postgres      | rds_superuser               | INHERIT, SET        | rdsadmin
 rds_superuser | pg_checkpoint               | ADMIN, INHERIT, SET | rdsadmin
 rds_superuser | pg_monitor                  | ADMIN, INHERIT, SET | rdsadmin
 rds_superuser | pg_signal_backend           | ADMIN, INHERIT, SET | rdsadmin
 rds_superuser | pg_use_reserved_connections | ADMIN, INHERIT, SET | rdsadmin
 rds_superuser | rds_password                | ADMIN, INHERIT, SET | rdsadmin
 rds_superuser | rds_replication             | ADMIN, INHERIT, SET | rdsadmin
```

To check role membership without version dependency, you can use the following SQL query:

```
SELECT m.rolname AS "Role name", r.rolname AS "Member of"
FROM pg_catalog.pg_roles m
JOIN pg_catalog.pg_auth_members pam ON (pam.member = m.oid)
LEFT JOIN pg_catalog.pg_roles r ON (pam.roleid = r.oid)
LEFT JOIN pg_catalog.pg_roles g ON (pam.grantor = g.oid)
WHERE m.rolname !~ '^pg_'
ORDER BY 1, 2;
```

In the output, you can see that `rds_superuser` isn't a database user role (it can't log in), but it has the privileges of many other roles. You can also see that the database user `postgres` is a member of the `rds_superuser` role. As mentioned previously, `postgres` is the default value in the Amazon RDS console's **Create database** page. If you chose another name, that name is shown in the list of roles instead.

**Note**  
Aurora PostgreSQL versions 15.2 and 14.7 introduced more restrictive behavior for the `rds_superuser` role. To connect to a database, an Aurora PostgreSQL user must be granted the `CONNECT` privilege on that database, even if the user has been granted the `rds_superuser` role. Before Aurora PostgreSQL versions 14.7 and 15.2, a user granted the `rds_superuser` role could connect to any database and system table. This restrictive behavior aligns with the AWS and Amazon Aurora commitment to continuous security improvement.  
If this change affects your applications, update their connection logic accordingly.
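
On those versions, you can restore access for an administrative role by granting `CONNECT` explicitly. The database and role names below are examples only:

```
postgres=> GRANT CONNECT ON DATABASE reporting_db TO dba_role;
GRANT
```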

# Controlling user access to the PostgreSQL database
<a name="Appendix.PostgreSQL.CommonDBATasks.Access"></a>

New databases in PostgreSQL are always created with a default set of privileges in the database's `public` schema that allow all database users and roles to create objects. These privileges allow database users to connect to the database, for example, and create temporary tables while connected.

To better control user access to the databases that you create on your Aurora PostgreSQL DB cluster's primary instance, we recommend that you revoke these default `public` privileges. After doing so, grant specific privileges to database users on a more granular basis, as shown in the following procedure.

**To set up roles and privileges for a new database instance**

Suppose you're setting up a database on a newly created Aurora PostgreSQL DB cluster for use by several researchers, all of whom need read-write access to the database. 

1. Use `psql` (or pgAdmin) to connect to the primary DB instance on your Aurora PostgreSQL DB cluster: 

   ```
   psql --host=your-cluster-instance-1.666666666666.aws-region.rds.amazonaws.com --port=5432 --username=postgres --password
   ```

   When prompted, enter your password. The `psql` client connects and displays the prompt for the default administrative database, `postgres=>`.

1. To prevent database users from creating objects in the `public` schema, do the following:

   ```
   postgres=> REVOKE CREATE ON SCHEMA public FROM PUBLIC;
   REVOKE
   ```

1. Next, create a new database:

   ```
   postgres=> CREATE DATABASE lab_db;
   CREATE DATABASE
   ```

1. Revoke all privileges on this new database from the `PUBLIC` role.

   ```
   postgres=> REVOKE ALL ON DATABASE lab_db FROM public;
   REVOKE
   ```

1. Create a role for database users.

   ```
   postgres=> CREATE ROLE lab_tech;
   CREATE ROLE
   ```

1. Give database users that have this role the ability to connect to the database.

   ```
   postgres=> GRANT CONNECT ON DATABASE lab_db TO lab_tech;
   GRANT
   ```

1. Grant all users with the `lab_tech` role all privileges on this database.

   ```
   postgres=> GRANT ALL PRIVILEGES ON DATABASE lab_db TO lab_tech;
   GRANT
   ```

1. Create database users, as follows:

   ```
   postgres=> CREATE ROLE lab_user1 LOGIN PASSWORD 'change_me';
   CREATE ROLE
   postgres=> CREATE ROLE lab_user2 LOGIN PASSWORD 'change_me';
   CREATE ROLE
   ```

1. Grant these two users the privileges associated with the `lab_tech` role:

   ```
   postgres=> GRANT lab_tech TO lab_user1;
   GRANT ROLE
   postgres=> GRANT lab_tech TO lab_user2;
   GRANT ROLE
   ```

At this point, `lab_user1` and `lab_user2` can connect to the `lab_db` database. This example doesn't follow best practices for enterprise usage, which might include creating multiple databases, different schemas, and granting limited permissions. For more complete information and additional scenarios, see [Managing PostgreSQL Users and Roles](https://aws.amazon.com/blogs/database/managing-postgresql-users-and-roles/).
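
To confirm the setup, you can check effective privileges with the `has_database_privilege` function. For example:

```
postgres=> SELECT has_database_privilege('lab_user1', 'lab_db', 'CONNECT');
 has_database_privilege
------------------------
 t
(1 row)
```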

For more information about privileges in PostgreSQL databases, see the [GRANT](https://www.postgresql.org/docs/current/static/sql-grant.html) command in the PostgreSQL documentation.

# Delegating and controlling user password management
<a name="Appendix.PostgreSQL.CommonDBATasks.RestrictPasswordMgmt"></a>

As a DBA, you might want to delegate the management of user passwords. Or, you might want to prevent database users from changing their passwords or reconfiguring password constraints, such as password lifetime. To ensure that only the database users that you choose can change password settings, you can turn on the restricted password management feature. When you activate this feature, only those database users that have been granted the `rds_password` role can manage passwords. 

**Note**  
To use restricted password management, your Aurora PostgreSQL DB cluster must be running Amazon Aurora PostgreSQL 10.6 or higher.

By default, this feature is `off`, as shown in the following:

```
postgres=> SHOW rds.restrict_password_commands;
  rds.restrict_password_commands
--------------------------------
 off
(1 row)
```

To turn on this feature, use a custom parameter group and change the setting for `rds.restrict_password_commands` to 1. Be sure to reboot the primary DB instance of your Aurora PostgreSQL DB cluster so that the setting takes effect.
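
As a sketch, assuming your cluster already uses a custom DB cluster parameter group named `docs-lab-password-rules` (a hypothetical name), the AWS CLI change looks like the following. The parameter is static, so it takes effect at the next reboot:

```
aws rds modify-db-cluster-parameter-group \
    --db-cluster-parameter-group-name 'docs-lab-password-rules' \
    --parameters 'ParameterName=rds.restrict_password_commands,ParameterValue=1,ApplyMethod=pending-reboot'
```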

With this feature active, `rds_password` privileges are needed for the following SQL commands:

```
CREATE ROLE myrole WITH PASSWORD 'mypassword';
CREATE ROLE myrole WITH PASSWORD 'mypassword' VALID UNTIL '2023-01-01';
ALTER ROLE myrole WITH PASSWORD 'mypassword' VALID UNTIL '2023-01-01';
ALTER ROLE myrole WITH PASSWORD 'mypassword';
ALTER ROLE myrole VALID UNTIL '2023-01-01';
ALTER ROLE myrole RENAME TO myrole2;
```

Renaming a role (`ALTER ROLE myrole RENAME TO newname`) is also restricted if the password uses the MD5 hashing algorithm. 

With this feature active, attempting any of these SQL commands without the `rds_password` role permissions generates the following error: 

```
ERROR: must be a member of rds_password to alter passwords
```

We recommend that you grant the `rds_password` role to only a few roles that you use solely for password management. If you grant `rds_password` privileges to database users that don't have `rds_superuser` privileges, you also need to grant them the `CREATEROLE` attribute.

Make sure that you verify password requirements, such as expiration and required complexity, on the client side. If you use your own client-side utility for password-related changes, the utility's role needs to be a member of `rds_password` and have the `CREATEROLE` attribute.
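
Putting this together, a minimal sketch of delegating password management to a dedicated role (the names here are hypothetical) might look like the following:

```
postgres=> CREATE ROLE password_admin WITH LOGIN PASSWORD 'change_me' CREATEROLE;
CREATE ROLE
postgres=> GRANT rds_password TO password_admin;
GRANT ROLE
```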

# Using SCRAM for PostgreSQL password encryption
<a name="PostgreSQL_Password_Encryption_configuration"></a>

The *Salted Challenge Response Authentication Mechanism (SCRAM)* is an alternative to PostgreSQL's default message digest (MD5) algorithm for encrypting passwords. The SCRAM authentication mechanism is considered more secure than MD5. To learn more about these two different approaches to securing passwords, see [Password Authentication](https://www.postgresql.org/docs/14/auth-password.html) in the PostgreSQL documentation.

We recommend that you use SCRAM rather than MD5 as the password encryption scheme for your Aurora PostgreSQL DB cluster. SCRAM is supported in Aurora PostgreSQL version 10 and all higher major and minor versions. It's a cryptographic challenge-response mechanism that uses the scram-sha-256 algorithm for password authentication and encryption. 

You might need to update libraries for your client applications to support SCRAM. For example, JDBC versions before 42.2.0 don't support SCRAM. For more information, see [PostgreSQL JDBC Driver](https://jdbc.postgresql.org/changelogs/2018-01-17-42.2.0-release/) in the PostgreSQL JDBC Driver documentation. For a list of other PostgreSQL drivers and SCRAM support, see [List of drivers](https://wiki.postgresql.org/wiki/List_of_drivers) in the PostgreSQL documentation.

Aurora PostgreSQL version 14 and higher support scram-sha-256 for password encryption by default for new DB clusters. For these versions, the default DB cluster parameter group (`default.aurora-postgresql14`) has its `password_encryption` value set to scram-sha-256. SCRAM isn't supported for Aurora Serverless v1.

## Setting up Aurora PostgreSQL DB cluster to require SCRAM
<a name="PostgreSQL_Password_Encryption_configuration.preliminary"></a>

For Aurora PostgreSQL 14.3 and higher versions, you can require the Aurora PostgreSQL DB cluster to accept only passwords that use the scram-sha-256 algorithm.

**Important**  
For existing RDS Proxies with PostgreSQL databases, if you modify the database authentication to use `SCRAM` only, the proxy becomes unavailable for up to 60 seconds. To avoid the issue, do one of the following:  
+ Ensure that the database allows both `SCRAM` and `MD5` authentication.
+ To use only `SCRAM` authentication, create a new proxy, migrate your application traffic to the new proxy, then delete the proxy previously associated with the database.

Before making changes to your system, be sure you understand the complete process, as follows:
+ Get information about all roles and password encryption for all database users. 
+ Double-check the parameter settings for your Aurora PostgreSQL DB cluster for the parameters that control password encryption.
+ If your Aurora PostgreSQL DB cluster uses a default parameter group, you need to create a custom DB cluster parameter group and apply it to your Aurora PostgreSQL DB cluster so that you can modify parameters when needed. If your Aurora PostgreSQL DB cluster uses a custom parameter group, you can modify the necessary parameters later in the process, as needed. 
+ Change the `password_encryption` parameter to `scram-sha-256`.
+ Notify all database users that they need to update their passwords. Do the same for your `postgres` account. The new passwords are encrypted and stored using the scram-sha-256 algorithm.
+ Verify that all passwords are encrypted using `scram-sha-256` as the type of encryption. 
+ If all passwords use scram-sha-256, you can change the `rds.accepted_password_auth_method` parameter from `md5+scram` to `scram`. 

**Warning**  
After you change `rds.accepted_password_auth_method` to `scram` alone, any users (roles) with `md5`-encrypted passwords can't connect. 

### Getting ready to require SCRAM for your Aurora PostgreSQL DB cluster
<a name="PostgreSQL_Password_Encryption_configuration.getting-ready"></a>

Before making any changes to your Aurora PostgreSQL DB cluster, check all existing database user accounts. Also, check the type of encryption used for passwords. You can do these tasks by using the `rds_tools` extension. To see which PostgreSQL versions support `rds_tools`, see [Extension versions for Amazon RDS for PostgreSQL](https://docs.aws.amazon.com/AmazonRDS/latest/PostgreSQLReleaseNotes/postgresql-extensions.html).

**To get a list of database users (roles) and password encryption methods**

1. Use `psql` to connect to the primary instance of your Aurora PostgreSQL DB cluster, as shown in the following.

   ```
   psql --host=cluster-name-instance-1.111122223333.aws-region.rds.amazonaws.com --port=5432 --username=postgres --password
   ```

1. Install the `rds_tools` extension.

   ```
   postgres=> CREATE EXTENSION rds_tools;
   CREATE EXTENSION
   ```

1. Get a listing of roles and encryption.

   ```
   postgres=> SELECT * FROM rds_tools.role_password_encryption_type();
   ```

   You see output similar to the following.

   ```
          rolname        | encryption_type
   ----------------------+-----------------
    pg_monitor           |
    pg_read_all_settings |
    pg_read_all_stats    |
    pg_stat_scan_tables  |
    pg_signal_backend    |
    lab_tester           | md5
    user_465             | md5
    postgres             | md5
   (8 rows)
   ```

### Creating a custom DB cluster parameter group
<a name="PostgreSQL_Password_Encryption_configuration.custom-parameter-group"></a>

**Note**  
If your Aurora PostgreSQL DB cluster already uses a custom parameter group, you don't need to create a new one. 

For an overview of parameter groups for Aurora, see [Creating a DB cluster parameter group in Amazon Aurora](USER_WorkingWithParamGroups.CreatingCluster.md). 

The password encryption type used for passwords is set in one parameter, `password_encryption`. The encryption that the Aurora PostgreSQL DB cluster allows is set in another parameter, `rds.accepted_password_auth_method`. Changing either of these from the default values requires that you create a custom DB cluster parameter group and apply it to your cluster. 
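
You can inspect the current values of both parameters with the AWS CLI before deciding what to change. This sketch queries the default parameter group for Aurora PostgreSQL 13:

```
aws rds describe-db-cluster-parameters \
    --db-cluster-parameter-group-name default.aurora-postgresql13 \
    --query 'Parameters[?ParameterName==`password_encryption` || ParameterName==`rds.accepted_password_auth_method`].[ParameterName,ParameterValue]'
```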

You can also use the AWS Management Console or the RDS API to create a custom DB cluster parameter group. For more information, see [Creating a DB cluster parameter group in Amazon Aurora](USER_WorkingWithParamGroups.CreatingCluster.md). 

**To create a custom DB cluster parameter group**

1. Use the `[create-db-cluster-parameter-group](https://docs.aws.amazon.com/cli/latest/reference/rds/create-db-cluster-parameter-group.html)` CLI command to create the custom parameter group for the cluster. The following example uses `aurora-postgresql13` as the source for this custom parameter group. 

   For Linux, macOS, or Unix:

   ```
   aws rds create-db-cluster-parameter-group --db-cluster-parameter-group-name 'docs-lab-scram-passwords' \
     --db-parameter-group-family aurora-postgresql13  --description 'Custom DB cluster parameter group for SCRAM'
   ```

   For Windows:

   ```
   aws rds create-db-cluster-parameter-group --db-cluster-parameter-group-name "docs-lab-scram-passwords" ^
     --db-parameter-group-family aurora-postgresql13  --description "Custom DB cluster parameter group for SCRAM"
   ```

   You can now associate the custom parameter group with your cluster. 

1. Use the `[modify-db-cluster](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-cluster.html)` CLI command to apply this custom parameter group to your Aurora PostgreSQL DB cluster.

   For Linux, macOS, or Unix:

   ```
   aws rds modify-db-cluster --db-cluster-identifier 'your-cluster-name' \
           --db-cluster-parameter-group-name 'docs-lab-scram-passwords'
   ```

   For Windows:

   ```
   aws rds modify-db-cluster --db-cluster-identifier "your-cluster-name" ^
           --db-cluster-parameter-group-name "docs-lab-scram-passwords"
   ```

   To resynchronize your Aurora PostgreSQL DB cluster with your custom DB cluster parameter group, reboot the primary and all other instances of the cluster. 

### Configuring password encryption to use SCRAM
<a name="PostgreSQL_Password_Encryption_configuration.configure-password-encryption"></a>

The password encryption mechanism used by an Aurora PostgreSQL DB cluster is set in the DB cluster parameter group in the `password_encryption` parameter. Allowed values are unset, `md5`, or `scram-sha-256`. The default value depends on the Aurora PostgreSQL version, as follows:
+ Aurora PostgreSQL 14 – Default is `scram-sha-256`
+ Aurora PostgreSQL 13 – Default is `md5`

With a custom DB cluster parameter group attached to your Aurora PostgreSQL DB cluster, you can modify values for the password encryption parameter.

![\[Following, the RDS console shows the default values for the password_encryption parameters for Aurora PostgreSQL.\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/apg-pwd-encryption-md5-scram-1.png)


**To change password encryption setting to scram-sha-256**
+ Change the value of the password encryption to scram-sha-256, as shown following. The change can be applied immediately because the parameter is dynamic, so a restart isn't required for the change to take effect. 

  For Linux, macOS, or Unix:

  ```
  aws rds modify-db-cluster-parameter-group --db-cluster-parameter-group-name 'docs-lab-scram-passwords' \
    --parameters 'ParameterName=password_encryption,ParameterValue=scram-sha-256,ApplyMethod=immediate'
  ```

  For Windows:

  ```
  aws rds modify-db-cluster-parameter-group --db-cluster-parameter-group-name "docs-lab-scram-passwords" ^
    --parameters "ParameterName=password_encryption,ParameterValue=scram-sha-256,ApplyMethod=immediate"
  ```

### Migrating passwords for user roles to SCRAM
<a name="PostgreSQL_Password_Encryption_configuration.migrating-users"></a>

You can migrate passwords for user roles to SCRAM as described following.

**To migrate database user (role) passwords from MD5 to SCRAM**

1. Log in as the administrator user (default user name, `postgres`) as shown following.

   ```
   psql --host=cluster-name-instance-1.111122223333.aws-region.rds.amazonaws.com --port=5432 --username=postgres --password
   ```

1. Check the setting of the `password_encryption` parameter on your RDS for PostgreSQL DB instance by using the following command.

   ```
   postgres=> SHOW password_encryption;
    password_encryption
   ---------------------
    md5
    (1 row)
   ```

1. Change the value of this parameter to scram-sha-256. For more information, see [Configuring password encryption to use SCRAM](#PostgreSQL_Password_Encryption_configuration.configure-password-encryption). 

1. Check the value again to make sure that it's now set to `scram-sha-256`, as follows. 

   ```
   postgres=> SHOW password_encryption;
    password_encryption
   ---------------------
    scram-sha-256
    (1 row)
   ```

1. Notify all database users to change their passwords. Be sure to also change your own password for account `postgres` (the database user with `rds_superuser` privileges). 

   ```
   labdb=> ALTER ROLE postgres WITH LOGIN PASSWORD 'change_me';
   ALTER ROLE
   ```

1. Repeat the process for all databases on your Aurora PostgreSQL DB cluster. 
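
Alternatively, database users can change their own passwords with psql's `\password` meta-command, which hashes the new password on the client according to the server's current `password_encryption` setting before sending it, so the cleartext never appears in server logs:

```
labdb=> \password
```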

### Changing parameter to require SCRAM
<a name="PostgreSQL_Password_Encryption_configuration.require-scram"></a>

This is the final step in the process. After you make the change in the following procedure, any user accounts (roles) that still use `md5` encryption for passwords can't log in to the Aurora PostgreSQL DB cluster. 

The `rds.accepted_password_auth_method` parameter specifies the encryption method that the Aurora PostgreSQL DB cluster accepts for a user password during the login process. The default value is `md5+scram`, meaning that either method is accepted. In the following image, you can find the default setting for this parameter.

![\[The RDS console showing the default and allowed values for the rds.accepted_password_auth_method parameters.\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/pwd-encryption-md5-scram-2.png)


The allowed values for this parameter are `md5+scram` or `scram` alone. Changing this parameter value to `scram` makes SCRAM a requirement. 

**To change the parameter value to require SCRAM authentication for passwords**

1. Verify that all database user passwords for all databases on your Aurora PostgreSQL DB cluster use `scram-sha-256` for password encryption. To do so, query `rds_tools` for the role (user) and encryption type, as follows. 

   ```
   postgres=> SELECT * FROM rds_tools.role_password_encryption_type();
          rolname        | encryption_type
   ----------------------+-----------------
    pg_monitor           |
    pg_read_all_settings |
    pg_read_all_stats    |
    pg_stat_scan_tables  |
    pg_signal_backend    |
    lab_tester           | scram-sha-256
    user_465             | scram-sha-256
    postgres             | scram-sha-256
   (8 rows)
   ```

1. Repeat the query across all DB instances in your Aurora PostgreSQL DB cluster. 

   If all passwords use `scram-sha-256`, you can proceed. 

1. Change the value of the `rds.accepted_password_auth_method` parameter to `scram`, as follows.

   For Linux, macOS, or Unix:

   ```
   aws rds modify-db-cluster-parameter-group --db-cluster-parameter-group-name 'docs-lab-scram-passwords' \
     --parameters 'ParameterName=rds.accepted_password_auth_method,ParameterValue=scram,ApplyMethod=immediate'
   ```

   For Windows:

   ```
   aws rds modify-db-cluster-parameter-group --db-cluster-parameter-group-name "docs-lab-scram-passwords" ^
     --parameters "ParameterName=rds.accepted_password_auth_method,ParameterValue=scram,ApplyMethod=immediate"
   ```
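If your cluster has many roles, you can script the verification from the first step of this procedure. The following Python sketch is illustrative only; it assumes you have already fetched the rows returned by `rds_tools.role_password_encryption_type()` with a driver such as psycopg, and it flags any roles that would be locked out once only SCRAM is accepted.

```python
def roles_not_on_scram(rows):
    """Given (rolname, encryption_type) pairs from
    rds_tools.role_password_encryption_type(), return the role names
    that would be locked out once only SCRAM is accepted.

    Roles with no stored password (encryption_type is None or empty,
    such as the built-in pg_* roles) are skipped, because they never
    log in with a password.
    """
    return [name for name, enc in rows if enc and enc != "scram-sha-256"]

# Sample rows as they might come back from the query above
rows = [
    ("pg_monitor", None),
    ("lab_tester", "scram-sha-256"),
    ("legacy_app", "md5"),
    ("postgres", "scram-sha-256"),
]
print(roles_not_on_scram(rows))  # ['legacy_app']
```

If the list is empty for every database, all passwords use SCRAM and you can change `rds.accepted_password_auth_method`.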

## Securing Aurora PostgreSQL data with SSL/TLS
<a name="AuroraPostgreSQL.Security.SSL"></a>

Amazon RDS supports Secure Sockets Layer (SSL) and Transport Layer Security (TLS) encryption for Aurora PostgreSQL DB clusters. Using SSL/TLS, you can encrypt a connection between your applications and your Aurora PostgreSQL DB clusters. You can also force all connections to your Aurora PostgreSQL DB cluster to use SSL/TLS. Aurora PostgreSQL supports TLS versions 1.1 and 1.2. We recommend using TLS 1.2 for encrypted connections. Support for TLS 1.3 is available starting with the following Aurora PostgreSQL versions:
+ 15.3 and all higher versions
+ 14.8 and higher 14 versions
+ 13.11 and higher 13 versions
+ 12.15 and higher 12 versions
+ 11.20 and higher 11 versions

For general information about SSL/TLS support and PostgreSQL databases, see [SSL support](https://www.postgresql.org/docs/current/libpq-ssl.html) in the PostgreSQL documentation. For information about using an SSL/TLS connection over JDBC, see [Configuring the client](https://jdbc.postgresql.org/documentation/head/ssl-client.html) in the PostgreSQL documentation.

**Topics**
+ [Requiring an SSL/TLS connection to an Aurora PostgreSQL DB cluster](#AuroraPostgreSQL.Security.SSL.Requiring)
+ [Determining the SSL/TLS connection status](#AuroraPostgreSQL.Security.SSL.Status)
+ [Configuring cipher suites for connections to Aurora PostgreSQL DB clusters](#AuroraPostgreSQL.Security.SSL.ConfiguringCipherSuites)

SSL/TLS support is available in all AWS Regions for Aurora PostgreSQL. Amazon RDS creates an SSL/TLS certificate for your Aurora PostgreSQL DB cluster when the DB cluster is created. If you enable SSL/TLS certificate verification, the SSL/TLS certificate includes the DB cluster endpoint as the Common Name (CN) to guard against spoofing attacks. 

**To connect to an Aurora PostgreSQL DB cluster over SSL/TLS**

1. Download the certificate.

   For information about downloading certificates, see [Using SSL/TLS to encrypt a connection to a DB cluster](UsingWithRDS.SSL.md).

1. Import the certificate into your operating system.

1. Connect to your Aurora PostgreSQL DB cluster over SSL/TLS.

   When you connect using SSL/TLS, your client can choose to verify the certificate chain or not. If your connection parameters specify `sslmode=verify-ca` or `sslmode=verify-full`, then your client requires the RDS CA certificates to be in its trust store or referenced in the connection URL. This requirement is to verify the certificate chain that signs your database certificate.

   When a client, such as psql or JDBC, is configured with SSL/TLS support, the client first tries to connect to the database with SSL/TLS by default. If the client can't connect with SSL/TLS, it reverts to connecting without SSL/TLS. By default, the `sslmode` option for JDBC and libpq-based clients is set to `prefer`. 

   Use the `sslrootcert` parameter to reference the certificate, for example `sslrootcert=rds-ssl-ca-cert.pem`.

The following is an example of using psql to connect to an Aurora PostgreSQL DB cluster.

```
$ psql -h testpg.cdhmuqifdpib.us-east-1.rds.amazonaws.com -p 5432 \
    "dbname=testpg user=testuser sslrootcert=rds-ca-2015-root.pem sslmode=verify-full"
```
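The `sslmode` fallback behavior described in the preceding procedure can be sketched as follows. This Python snippet is an illustration only, not part of any driver; it models the order in which a libpq-based client attempts a connection for each `sslmode` value.

```python
def connection_attempts(sslmode="prefer"):
    """Model the order in which a libpq-based client tries to connect
    for a given sslmode (simplified; see the PostgreSQL documentation
    for the full semantics, including certificate verification in the
    verify-ca and verify-full modes)."""
    if sslmode == "disable":
        return ["plaintext"]
    if sslmode == "allow":
        return ["plaintext", "ssl"]       # tries SSL only if required
    if sslmode == "prefer":
        return ["ssl", "plaintext"]       # falls back without SSL
    if sslmode in ("require", "verify-ca", "verify-full"):
        return ["ssl"]                    # never falls back
    raise ValueError(f"unknown sslmode: {sslmode}")

print(connection_attempts())               # ['ssl', 'plaintext']
print(connection_attempts("verify-full"))  # ['ssl']
```

The default `prefer` mode explains why a misconfigured client can silently connect without encryption; to guarantee encryption, use `require` or stronger on the client, or set `rds.force_ssl` on the cluster.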

### Requiring an SSL/TLS connection to an Aurora PostgreSQL DB cluster
<a name="AuroraPostgreSQL.Security.SSL.Requiring"></a>

To require SSL/TLS connections to your Aurora PostgreSQL DB cluster, use the `rds.force_ssl` parameter.
+ To require SSL/TLS connections, set the `rds.force_ssl` parameter value to 1 (on).
+ To turn off required SSL/TLS connections, set the `rds.force_ssl` parameter value to 0 (off).

The default value of this parameter depends on the Aurora PostgreSQL version:
+ For Aurora PostgreSQL versions 17 and later: The default value is 1 (on).
+ For Aurora PostgreSQL versions 16 and older: The default value is 0 (off).

**Note**  
When you perform a major version upgrade from Aurora PostgreSQL version 16 or earlier to version 17 or later, the default value of the parameter changes from 0 (off) to 1 (on). This change may cause connectivity failures for applications that are not configured for SSL. You can revert to the previous default behavior by setting this parameter to 0 (off).

For more information on handling parameters, see [Parameter groups for Amazon Aurora](USER_WorkingWithParamGroups.md).

Updating the `rds.force_ssl` parameter also sets the PostgreSQL `ssl` parameter to 1 (on) and modifies your DB cluster's `pg_hba.conf` file to support the new SSL/TLS configuration.

When the `rds.force_ssl` parameter is set to 1 for a DB cluster, you see output similar to the following when you connect, indicating that SSL/TLS is now required:

```
$ psql postgres -h SOMEHOST.amazonaws.com -p 8192 -U someuser
psql (9.3.12, server 9.4.4)
WARNING: psql major version 9.3, server major version 9.4.
Some psql features might not work.
SSL connection (cipher: DHE-RSA-AES256-SHA, bits: 256)
Type "help" for help.

postgres=>
```

### Determining the SSL/TLS connection status
<a name="AuroraPostgreSQL.Security.SSL.Status"></a>

The encrypted status of your connection is shown in the logon banner when you connect to the DB cluster.

```
Password for user master: 
psql (9.3.12) 
SSL connection (cipher: DHE-RSA-AES256-SHA, bits: 256) 
Type "help" for help.   

postgres=>
```

You can also load the `sslinfo` extension and then call the `ssl_is_used()` function to determine if SSL/TLS is being used. The function returns `t` if the connection is using SSL/TLS, otherwise it returns `f`.

```
postgres=> create extension sslinfo;
CREATE EXTENSION

postgres=> select ssl_is_used();
 ssl_is_used
---------
t
(1 row)
```

You can use the `select ssl_cipher()` command to determine the SSL/TLS cipher:

```
postgres=> select ssl_cipher();
ssl_cipher
--------------------
DHE-RSA-AES256-SHA
(1 row)
```

If you turn on `rds.force_ssl` and restart your DB cluster, non-SSL connections are refused with the following message:

```
$ export PGSSLMODE=disable
$ psql postgres -h SOMEHOST.amazonaws.com -p 8192 -U someuser
psql: FATAL: no pg_hba.conf entry for host "host.ip", user "someuser", database "postgres", SSL off
$
```

For information about the `sslmode` option, see [Database connection control functions](https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNECT-SSLMODE) in the PostgreSQL documentation.

### Configuring cipher suites for connections to Aurora PostgreSQL DB clusters
<a name="AuroraPostgreSQL.Security.SSL.ConfiguringCipherSuites"></a>

By using configurable cipher suites, you can have more control over the security of your database connections. You can specify a list of cipher suites that you want to allow to secure client SSL/TLS connections to your database. With configurable cipher suites, you can control the connection encryption that your database server accepts. Doing this helps prevent the use of insecure or deprecated ciphers.

Configurable cipher suites are supported in Aurora PostgreSQL versions 11.8 and higher.

To specify the list of permissible ciphers for encrypting connections, modify the `ssl_ciphers` cluster parameter. Set the `ssl_ciphers` parameter to a string of comma-separated cipher values in a cluster parameter group using the AWS Management Console, the AWS CLI, or the RDS API. To set cluster parameters, see [Modifying parameters in a DB cluster parameter group in Amazon Aurora](USER_WorkingWithParamGroups.ModifyingCluster.md).

The following table shows the supported ciphers for the valid Aurora PostgreSQL engine versions.


| Aurora PostgreSQL engine versions | Supported ciphers | TLS 1.1 | TLS 1.2 | TLS 1.3 | 
| --- | --- | --- | --- | --- | 
| 9.6, 10.20 and lower, 11.15 and lower, 12.10 and lower, 13.6 and lower | DHE-RSA-AES128-SHA DHE-RSA-AES128-SHA256 DHE-RSA-AES128-GCM-SHA256 DHE-RSA-AES256-SHA DHE-RSA-AES256-SHA256 DHE-RSA-AES256-GCM-SHA384 ECDHE-ECDSA-AES256-SHA ECDHE-ECDSA-AES256-GCM-SHA384 ECDHE-RSA-AES256-SHA384 ECDHE-RSA-AES128-SHA ECDHE-RSA-AES128-SHA256 ECDHE-RSA-AES128-GCM-SHA256 ECDHE-RSA-AES256-SHA ECDHE-RSA-AES256-GCM-SHA384 | Yes No No No No No Yes No No Yes No No Yes No | No Yes Yes Yes Yes Yes Yes Yes Yes Yes Yes Yes Yes Yes |  No No No No No No No No No No No No No No  | 
| 10.21, 11.16, 12.11, 13.7, 14.3 and 14.4 |  TLS\_ECDHE\_RSA\_WITH\_AES\_128\_CBC\_SHA TLS\_ECDHE\_RSA\_WITH\_AES\_128\_GCM\_SHA256 TLS\_ECDHE\_RSA\_WITH\_AES\_256\_CBC\_SHA TLS\_ECDHE\_RSA\_WITH\_AES\_256\_GCM\_SHA384 TLS\_ECDHE\_RSA\_WITH\_AES\_128\_CBC\_SHA TLS\_ECDHE\_RSA\_WITH\_AES\_128\_GCM\_SHA256 TLS\_ECDHE\_RSA\_WITH\_AES\_256\_CBC\_SHA TLS\_ECDHE\_RSA\_WITH\_AES\_256\_GCM\_SHA384 TLS\_RSA\_WITH\_AES\_256\_GCM\_SHA384 TLS\_RSA\_WITH\_AES\_256\_CBC\_SHA TLS\_RSA\_WITH\_AES\_128\_GCM\_SHA256 TLS\_RSA\_WITH\_AES\_128\_CBC\_SHA TLS\_ECDHE\_RSA\_WITH\_CHACHA20\_POLY1305\_SHA256 | Yes No Yes No Yes No Yes No No Yes No Yes No | Yes Yes Yes Yes Yes Yes Yes Yes Yes Yes Yes Yes Yes | No No No No No No No No No No No No No | 
| 10.22, 11.17, 12.12, 13.8, 14.5, and 15.2 |  TLS\_ECDHE\_RSA\_WITH\_AES\_128\_CBC\_SHA TLS\_ECDHE\_RSA\_WITH\_AES\_128\_CBC\_SHA256 TLS\_ECDHE\_RSA\_WITH\_AES\_128\_GCM\_SHA256 TLS\_ECDHE\_RSA\_WITH\_AES\_256\_CBC\_SHA TLS\_ECDHE\_RSA\_WITH\_AES\_256\_GCM\_SHA384 TLS\_ECDHE\_RSA\_WITH\_AES\_128\_CBC\_SHA TLS\_ECDHE\_RSA\_WITH\_AES\_128\_CBC\_SHA256 TLS\_ECDHE\_RSA\_WITH\_AES\_128\_GCM\_SHA256 TLS\_ECDHE\_RSA\_WITH\_AES\_256\_CBC\_SHA TLS\_ECDHE\_RSA\_WITH\_AES\_256\_GCM\_SHA384 TLS\_RSA\_WITH\_AES\_256\_GCM\_SHA384 TLS\_RSA\_WITH\_AES\_256\_CBC\_SHA TLS\_RSA\_WITH\_AES\_128\_GCM\_SHA256 TLS\_RSA\_WITH\_AES\_128\_CBC\_SHA256 TLS\_RSA\_WITH\_AES\_128\_CBC\_SHA TLS\_ECDHE\_RSA\_WITH\_CHACHA20\_POLY1305\_SHA256 |  Yes No No Yes No Yes No No Yes No No Yes No Yes Yes No  | Yes Yes Yes Yes Yes Yes Yes Yes Yes Yes Yes Yes Yes Yes Yes Yes | No No No No No No No No No No No No No No No No | 
| 11.20, 12.15, 13.11, 14.8, 15.3, 16.1 and higher | TLS\_ECDHE\_RSA\_WITH\_AES\_128\_CBC\_SHA TLS\_ECDHE\_RSA\_WITH\_AES\_128\_CBC\_SHA256 TLS\_ECDHE\_RSA\_WITH\_AES\_128\_GCM\_SHA256 TLS\_ECDHE\_RSA\_WITH\_AES\_256\_CBC\_SHA TLS\_ECDHE\_RSA\_WITH\_AES\_256\_GCM\_SHA384 TLS\_ECDHE\_RSA\_WITH\_AES\_128\_CBC\_SHA TLS\_ECDHE\_RSA\_WITH\_AES\_128\_CBC\_SHA256 TLS\_ECDHE\_RSA\_WITH\_AES\_128\_GCM\_SHA256 TLS\_ECDHE\_RSA\_WITH\_AES\_256\_CBC\_SHA TLS\_ECDHE\_RSA\_WITH\_AES\_256\_GCM\_SHA384 TLS\_RSA\_WITH\_AES\_256\_GCM\_SHA384 TLS\_RSA\_WITH\_AES\_256\_CBC\_SHA TLS\_RSA\_WITH\_AES\_128\_GCM\_SHA256 TLS\_RSA\_WITH\_AES\_128\_CBC\_SHA256 TLS\_RSA\_WITH\_AES\_128\_CBC\_SHA TLS\_ECDHE\_RSA\_WITH\_CHACHA20\_POLY1305\_SHA256 TLS\_AES\_128\_GCM\_SHA256 TLS\_AES\_256\_GCM\_SHA384  | Yes No No Yes No Yes No No Yes No No Yes No Yes Yes No No No | Yes Yes Yes Yes Yes Yes Yes Yes Yes Yes Yes Yes Yes Yes Yes Yes No No |  No No No No No No No No No No No No No No No No Yes Yes  | 

You can also use the [describe-engine-default-cluster-parameters](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-engine-default-cluster-parameters.html) CLI command to determine which cipher suites are currently supported for a specific parameter group family. The following example shows how to get the allowed values for the `ssl_ciphers` cluster parameter for Aurora PostgreSQL 11.

```
aws rds describe-engine-default-cluster-parameters --db-parameter-group-family aurora-postgresql11
                
    ...some output truncated...
	{
		"ParameterName": "ssl_ciphers",
		"Description": "Sets the list of allowed TLS ciphers to be used on secure connections.",
		"Source": "engine-default",
		"ApplyType": "dynamic",
		"DataType": "list",
		"AllowedValues": "DHE-RSA-AES128-SHA,DHE-RSA-AES128-SHA256,DHE-RSA-AES128-GCM-SHA256,DHE-RSA-AES256-SHA,DHE-RSA-AES256-SHA256,DHE-RSA-AES256-GCM-SHA384,
		ECDHE-RSA-AES128-SHA,ECDHE-RSA-AES128-SHA256,ECDHE-RSA-AES128-GCM-SHA256,ECDHE-RSA-AES256-SHA,ECDHE-RSA-AES256-SHA384,ECDHE-RSA-AES256-GCM-SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,
		TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,
		TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA",
		"IsModifiable": true,
		"MinimumEngineVersion": "11.8",
		"SupportedEngineModes": [
			"provisioned"
		]
	},
    ...some output truncated...
```

The `ssl_ciphers` parameter defaults to all allowed cipher suites. For more information about ciphers, see the [ssl\_ciphers](https://www.postgresql.org/docs/current/runtime-config-connection.html#GUC-SSL-CIPHERS) variable in the PostgreSQL documentation. 

# Using dynamic masking with Aurora PostgreSQL
<a name="AuroraPostgreSQL.Security.DynamicMasking"></a>

Dynamic data masking is a security feature that protects sensitive data in Aurora PostgreSQL databases by controlling how data appears to users at query time. Aurora implements it through the `pg_columnmask` extension. `pg_columnmask` provides column-level data protection that complements PostgreSQL's native row-level security and granular access control mechanisms.

With `pg_columnmask`, you create masking policies that determine data visibility based on user roles. When users query tables with masking policies, Aurora PostgreSQL applies the appropriate masking function at query time based on the user's role and policy weight. The underlying data remains unchanged in storage.

`pg_columnmask` supports the following capabilities:
+ **Built-in and custom masking functions** – Use pre-built functions for common patterns like email and text masking, or create your own custom functions to protect sensitive data (PII) through SQL-based masking policies.
+ **Multiple masking strategies** – Completely hide information, replace partial values with wildcards, or define custom masking approaches.
+ **Policy prioritization** – Define multiple policies for a single column. Use weights to determine which masking policy should be used when multiple policies apply to a column. Aurora PostgreSQL applies policies based on weight and user role membership. 
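To make the weight rule concrete, the following Python sketch shows how the highest-weight applicable policy is selected for a user. The data structures are hypothetical, and the sketch considers only direct role membership; the extension's actual resolution logic runs inside the server.

```python
def select_policy(policies, user_roles):
    """Pick the masking policy that applies to the querying user.

    policies: list of dicts with 'name', 'roles', and 'weight' keys,
    mirroring a pg_columnmask policy definition. A policy applies if
    the user is a member of any of its roles (PUBLIC applies to all).
    Of the applicable policies, the highest-weight policy wins;
    weights are unique per table, so there are no ties.
    """
    applicable = [
        p for p in policies
        if "PUBLIC" in p["roles"] or set(p["roles"]) & set(user_roles)
    ]
    if not applicable:
        return None  # no policy applies; the user sees unmasked data
    return max(applicable, key=lambda p: p["weight"])

policies = [
    {"name": "strict",   "roles": ["intern_role"],  "weight": 10},
    {"name": "moderate", "roles": ["support_role"], "weight": 50},
]
# A user in both roles gets the higher-weight policy:
print(select_policy(policies, ["intern_role", "support_role"])["name"])
```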

`pg_columnmask` is available on Aurora PostgreSQL version 16.10 and higher, and version 17.6 and higher. It is available at no additional cost.

# Getting started with dynamic masking
<a name="AuroraPostgreSQL.Security.DynamicMasking.GetStarted"></a>

To dynamically mask data, you install the `pg_columnmask` extension in your database and create masking policies for your tables. The setup process involves prerequisite verification, extension installation, role configuration, policy creation, and validation testing.

## Extension installation and configuration
<a name="AuroraPostgreSQL.Security.DynamicMasking.GetStarted.Installation"></a>

Connect to your Aurora PostgreSQL DB cluster using the RDS console query editor or a PostgreSQL client such as psql with `rds_superuser` (master user) credentials.

Execute the extension creation command to enable `pg_columnmask` functionality:

```
CREATE EXTENSION pg_columnmask;
```

This command installs the `pg_columnmask` extension, creates the necessary catalog tables, and registers the built-in masking functions. The extension installation is database-specific, meaning you must install it separately in each database where the functionality is required.

**Note**  
Connections established before you install this extension continue to show unmasked data. Have those sessions disconnect and reconnect so that masking policies apply.

Verify the extension installation by checking the available masking functions:

```
SELECT proname FROM pg_proc
    WHERE pronamespace = 'pgcolumnmask'::regnamespace AND proname LIKE 'mask_%';
    proname
----------------
 mask_email
 mask_text
 mask_timestamp
(3 rows)
```

# Procedures for managing data masking policies
<a name="AuroraPostgreSQL.Security.DynamicMasking.Procedures"></a>

You can manage masking policies using procedures provided by the `pg_columnmask` extension. To create, modify, or drop masking policies, you must have one of the following privileges:
+ Owner of the table on which you are creating the `pg_columnmask` policy.
+ Member of `rds_superuser`.
+ Member of the `pg_columnmask` policy manager role, which is set by the `pgcolumnmask.policy_admin_rolname` parameter.

The following command creates a table that is used in subsequent sections:

```
CREATE TABLE public.customers (
    id SERIAL PRIMARY KEY,
    name TEXT,
    phone TEXT,
    address TEXT,
    email TEXT
);
```

## CREATE\_MASKING\_POLICY
<a name="AuroraPostgreSQL.Security.DynamicMasking.Procedures.CreateMaskingPolicy"></a>

The following procedure creates a new masking policy for a user table:

**Syntax**

```
create_masking_policy(
    policy_name,
    table_name,
    masking_expressions,
    roles,
    weight)
```

**Arguments**


| Parameter | Datatype | Description | 
| --- | --- | --- | 
| policy\_name | NAME |  The name of the masking policy. Must be unique per table.  | 
| table\_name | REGCLASS |  The qualified or unqualified name, or the OID, of the table to which the masking policy applies.  | 
| masking\_expressions | JSONB |  JSON object containing column name and masking function pairs. Each key is a column name, and its value is the masking expression to apply to that column.  | 
| roles | NAME[] |  The roles to which this masking policy applies. Default is PUBLIC.  | 
| weight | INT |  Weight of the masking policy. When multiple policies apply to a given user's query, the policy with the highest weight (higher integer) is applied to each masked column. Default is 0. No two masking policies on the same table can have the same weight.  | 

**Return type**

None

**Example of creating a masking policy that masks the email column for the `test_user` role:**  

```
CALL pgcolumnmask.create_masking_policy(
    'customer_mask',
    'public.customers',
    JSON_OBJECT('{
        "email", "pgcolumnmask.mask_email(email)"
    }')::JSONB,
    ARRAY['test_user'],
    100
);
```

## ALTER\_MASKING\_POLICY
<a name="AuroraPostgreSQL.Security.DynamicMasking.Procedures.AlterMaskingPolicy"></a>

This procedure modifies an existing masking policy. `ALTER_MASKING_POLICY` can change the policy's masking expressions, the set of roles to which the policy applies, and the policy's weight. When one of these parameters is passed as NULL, the corresponding part of the policy is unchanged.

**Syntax**

```
alter_masking_policy(
    policy_name,
    table_name,
    masking_expressions,
    roles,
    weight)
```

**Arguments**


| Parameter | Datatype | Description | 
| --- | --- | --- | 
| policy\_name | NAME |  The name of an existing masking policy.  | 
| table\_name | REGCLASS |  The qualified or unqualified name, or the OID, of the table containing the masking policy.  | 
| masking\_expressions | JSONB |  A new JSON object containing column name and masking function pairs, or NULL to leave the expressions unchanged.  | 
| roles | NAME[] |  The new list of roles to which this masking policy applies, or NULL to leave the roles unchanged.  | 
| weight | INT |  A new weight for the masking policy, or NULL to leave the weight unchanged.  | 

**Return type**

None

**Example of adding the analyst role to an existing masking policy without changing other policy attributes.**  

```
CALL pgcolumnmask.alter_masking_policy(
    'customer_mask',
    'public.customers',
    NULL,
    ARRAY['test_user', 'analyst'],
    NULL 
);

-- Alter the weight of the policy without altering other details
CALL pgcolumnmask.alter_masking_policy(
    'customer_mask',
    'customers',
    NULL,
    NULL,
    4
);
```

## DROP\_MASKING\_POLICY
<a name="AuroraPostgreSQL.Security.DynamicMasking.Procedures.DropMaskingPolicy"></a>

This procedure removes an existing masking policy.

**Syntax**

```
drop_masking_policy(
        policy_name,
        table_name)
```

**Arguments**


| Parameter | Datatype | Description | 
| --- | --- | --- | 
| policy\_name | NAME |  The name of an existing masking policy.  | 
| table\_name | REGCLASS |  The qualified or unqualified name, or the OID, of the table containing the masking policy.  | 

**Return type**

None

**Example of dropping the masking policy customer\_mask**  

```
-- Drop a masking policy
CALL pgcolumnmask.drop_masking_policy(
    'customer_mask',
    'public.customers'
);
```

# Escaping identifiers in masking policy DDL procedure
<a name="AuroraPostgreSQL.Security.DynamicMasking.EscapeIdentifiers"></a>

When creating data masking policies with quoted identifiers, proper escaping is required to ensure correct object references and policy application. To use quoted identifiers inside the `pg_columnmask` masking policy management procedures:
+ **Policy name** – Must be enclosed in double quotes.
+ **Table name** – Both schema name and table name must be enclosed in double quotes individually when required.
+ **Masking expressions** – Column and function names in masking expressions must be enclosed in double quotes and the quotes themselves must be escaped using a backslash.
+ **Roles** – The array of role names is automatically quoted. The role name should exactly match the name as seen in `pg_roles` including case sensitivity.

**Example of escaping and quoting syntax**  
This example shows the proper escaping and quoting syntax when creating masking policies for tables, columns, functions, and roles that use mixed-case names or require quoted identifiers in Aurora PostgreSQL.  

```
-- Create a table and columns with mixed case name 
CREATE TABLE public."Employees" (
    "Name" TEXT,
    "Email" TEXT,
    ssn VARCHAR(20)
);

-- Create a role with mixed case name
CREATE ROLE "Masked_user";

-- Create a function with mixed case name
CREATE OR REPLACE FUNCTION public."MaskEmail"(text)
    RETURNS character varying
    LANGUAGE plpgsql
    IMMUTABLE PARALLEL SAFE
    AS $$ BEGIN
        RETURN 'XXXXXXXX'::text;
    END $$;

-- Now use these objects with mixed case names in
-- masking policy management procedures
CALL pgcolumnmask.create_masking_policy(
    '"Policy1"',  -- policy name must be surrounded with double quotes for quoting
    'public."Employees"', -- schema and table names must be individually
                          -- surrounded with double quotes for quoting
    JSON_OBJECT('{
        "\"Email\"", "\"MaskEmail\"(\"Email\")"
    }')::JSONB, -- masking expressions need double quotes around function
                -- and column names when required; the double quotes themselves
                -- must be escaped using \ (backslash) since this is a JSON string
    ARRAY['Masked_user'], -- role names do not need quoting
                          -- (this behavior might change in a future release)
    100
);

SELECT * FROM pgcolumnmask.pg_columnmask_policies
    WHERE tablename = 'Employees';
-[ RECORD 1 ]-----+-------------------------------------
schemaname        | public
tablename         | Employees
policyname        | Policy1
roles             | {Masked_user}
masked_columns    | {Email}
masking_functions | {"(\"MaskEmail\"(\"Email\"))::text"}
weight            | 100
```
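If you build masking-expression JSON programmatically, letting a JSON library produce the backslash escapes is less error-prone than writing them by hand. This Python sketch is illustrative; it reuses the quoted identifiers from the example above and generates the escaped string that you could pass as the `masking_expressions` argument.

```python
import json

# Build the masking expression for a quoted column and function;
# json.dumps inserts the backslash escapes around the inner quotes.
expr = json.dumps({'"Email"': '"MaskEmail"("Email")'})
print(expr)
# {"\"Email\"": "\"MaskEmail\"(\"Email\")"}
```

The resulting string round-trips cleanly: parsing it back yields the original quoted identifiers, so the backslashes exist only in the serialized form that the procedure receives.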

## Administrative views
<a name="AuroraPostgreSQL.Security.DynamicMasking.AdminViews"></a>

You can review all `pg_columnmask` policies using the publicly accessible `pgcolumnmask.pg_columnmask_policies` administrative view. The following information is available in this view. The view returns only the masking policies owned by the current user.


| Column name | Data type | Description | 
| --- | --- | --- | 
|  schemaname  | NAME |  Schema of the relation to which the policy is attached  | 
|  tablename  | NAME |  Name of the relation to which the policy is attached  | 
|  policyname  | NAME |  Name of the masking policy; all masking policies have unique names  | 
|  roles  | TEXT[] |  Roles to which the policy applies  | 
|  masked\_columns  | TEXT[] |  Masked columns  | 
|  masking\_functions  | TEXT[] |  Masking functions  | 
| weight | INT |  Weight of the attached policy  | 

# Pre-defined data masking functions
<a name="AuroraPostgreSQL.Security.DynamicMasking.PredefinedMaskingFunctions"></a>

The `pg_columnmask` extension provides built-in utility functions, written in C for faster execution, that you can use as masking expressions in `pg_columnmask` policies.

**mask\_text**

A function to mask text data with configurable visibility options.

**Arguments**


| Parameter | Datatype | Description | 
| --- | --- | --- | 
| input | TEXT |  The original text string to be masked  | 
| mask\_char | CHAR(1) |  Character used for masking (default: 'X')  | 
| visible\_prefix | INT |  Number of characters at the beginning of the input text that remain unmasked (default: 0)  | 
| visible\_suffix | INT |  Number of characters at the end of the input text that remain unmasked (default: 0)  | 
| use\_hash\_mask | BOOLEAN |  If TRUE, uses hash-based masking instead of mask\_char (default: FALSE)  | 

**Example of using different masking options**  
Mask the entire input string with the default 'X' character  

```
postgres=> SELECT pgcolumnmask.mask_text('Hello World');
  mask_text  
-------------
 XXXXXXXXXXX
```
Use the `mask_char` argument to mask text input using a different character  

```
postgres=> SELECT pgcolumnmask.mask_text('Hello World', '*');
  mask_text  
-------------
 ***********
```
Use `visible_prefix` and `visible_suffix` parameters to control how many characters remain unmasked at the start and end of the text  

```
postgres=> SELECT pgcolumnmask.mask_text('Hello World', '*', 5, 1);
  mask_text  
-------------
 Hello*****d
```
When `use_hash_mask` is true, the input string is masked using random characters. The `mask_char` argument is ignored, but `visible_prefix` and `visible_suffix` are still honored.  

```
postgres=> SELECT pgcolumnmask.mask_text('Hello World', '*', 2, 2, true);
  mask_text  
-------------
 Hex36dOHild
```
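For reference, the character-masking behavior shown above can be approximated in a few lines of Python. This is an illustration only; the actual `mask_text` function is implemented in C inside the extension, and the hash-based mode is omitted because its output isn't reproducible character for character.

```python
def mask_text(text, mask_char="X", visible_prefix=0, visible_suffix=0):
    """Approximate pgcolumnmask.mask_text: keep the first
    visible_prefix and last visible_suffix characters of the input
    and replace everything in between with mask_char."""
    n = len(text)
    prefix = min(visible_prefix, n)
    suffix = min(visible_suffix, n - prefix)  # clamp so windows never overlap
    masked = mask_char * (n - prefix - suffix)
    return text[:prefix] + masked + (text[n - suffix:] if suffix else "")

print(mask_text("Hello World"))             # XXXXXXXXXXX
print(mask_text("Hello World", "*", 5, 1))  # Hello*****d
```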

**mask\_timestamp**

A function to mask all or part of a timestamp value.


| Parameter | Datatype | Description | 
| --- | --- | --- | 
| ts\_to\_mask | TIMESTAMP |  The original timestamp to be masked  | 
| mask\_part | TEXT |  Specifies which part of the timestamp to mask (default: 'all'). Valid values: 'year', 'month', 'day', 'hour', 'minute', 'second', 'all'  | 
| mask\_value | TIMESTAMP |  The timestamp value to use for masking (default: '1900-01-01 00:00:00')  | 

**Example of using `mask_timestamp`**  
These examples demonstrate complete timestamp masking to a default value, partial masking of specific timestamp components (year only), and masking with a custom replacement value.  
Completely mask input value to the default timestamp  

```
postgres=> SELECT pgcolumnmask.mask_timestamp('2023-06-15 14:30:00');
   mask_timestamp    
---------------------
 1900-01-01 00:00:00
```
To mask only one part of the timestamp, for example only the year  

```
postgres=> SELECT pgcolumnmask.mask_timestamp('2023-06-15 14:30:00', 'year');
   mask_timestamp    
---------------------
 1900-06-15 14:30:00
```
To change the replacement timestamp, use the `mask_value` argument  

```
postgres=> SELECT pgcolumnmask.mask_timestamp('2023-06-15 14:30:00', 'all', '2012-12-12 12:12:12');
   mask_timestamp    
---------------------
 2012-12-12 12:12:12
```
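A rough Python equivalent of `mask_timestamp` (illustrative only; the actual function is implemented in C inside the extension) shows how the `mask_part` and `mask_value` arguments interact.

```python
from datetime import datetime

DEFAULT_MASK = datetime(1900, 1, 1, 0, 0, 0)

def mask_timestamp(ts, mask_part="all", mask_value=DEFAULT_MASK):
    """Approximate pgcolumnmask.mask_timestamp: replace either the
    whole timestamp or one named component (year, month, day, hour,
    minute, second) with the corresponding component of mask_value."""
    if mask_part == "all":
        return mask_value
    return ts.replace(**{mask_part: getattr(mask_value, mask_part)})

ts = datetime(2023, 6, 15, 14, 30, 0)
print(mask_timestamp(ts))          # 1900-01-01 00:00:00
print(mask_timestamp(ts, "year"))  # 1900-06-15 14:30:00
```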

**mask\_email**

A function to mask email addresses while preserving email structure.


| Parameter | Datatype | Description | 
| --- | --- | --- | 
| input | TEXT |  The original email address to be masked  | 
| mask\$1char | CHAR(1) |  Character used for masking (default: 'X')  | 
| mask\$1local | BOOLEAN |  If TRUE, masks the local part of email (before @) (default: TRUE)  | 
| mask\$1domain | BOOLEAN |  If TRUE, masks the domain part of email (after @) (default: TRUE)  | 

**Example of using `mask_email`**  
These examples demonstrate complete email masking, custom mask characters, and selective masking of either the local part or domain part of the email address.  
Complete masking  

```
postgres=> SELECT pgcolumnmask.mask_email('user@example.com');
    mask_email    
------------------
 XXXX@XXXXXXX.com
```
Use `mask_char` to change the character used for masking  

```
postgres=> SELECT pgcolumnmask.mask_email('user@example.com', '*');
    mask_email    
------------------
 ****@*******.com
```
Use `mask_local` and `mask_domain` to control masking on local and domain  

```
postgres=> SELECT pgcolumnmask.mask_email('user@example.com', '*', true, false);
    mask_email    
------------------
 ****@example.com

postgres=> SELECT pgcolumnmask.mask_email('user@example.com', '*', false, true);
    mask_email    
------------------
 user@*******.com
```
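A rough Python equivalent of `mask_email` (illustrative only; the actual function is implemented in C inside the extension) shows how the local part, the domain labels, and the top-level domain are treated in the examples above.

```python
def mask_email(email, mask_char="X", mask_local=True, mask_domain=True):
    """Approximate pgcolumnmask.mask_email: mask the local part and/or
    the domain while keeping the '@', the dots, and the top-level
    domain visible, matching the examples above."""
    local, _, domain = email.partition("@")
    if mask_local:
        local = mask_char * len(local)
    if mask_domain:
        labels = domain.split(".")
        # Mask every domain label except the last one (the TLD stays).
        labels[:-1] = [mask_char * len(label) for label in labels[:-1]]
        domain = ".".join(labels)
    return local + "@" + domain

print(mask_email("user@example.com"))                    # XXXX@XXXXXXX.com
print(mask_email("user@example.com", "*", True, False))  # ****@example.com
```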

# Implementing pg\_columnmask in an end-to-end workflow
<a name="AuroraPostgreSQL.Security.DynamicMasking.WorkflowExample"></a>

This section demonstrates a complete implementation of `pg_columnmask` using a sample employee table with sensitive data. You'll learn how to create custom masking functions, define multiple masking policies with different weight levels for various roles (intern, support, analyst), and observe how users with single or multiple role memberships see different levels of masked data. The examples also cover masking behavior in DML statements with RETURNING clauses, triggers on tables versus views, and policy management operations including renaming, altering weights, and cleanup.

1. Create a sample table with some sensitive data:

   ```
   CREATE SCHEMA hr;
   
   CREATE TABLE hr.employees (
       id INT PRIMARY KEY,
       name TEXT NOT NULL,
       email TEXT,
       ssn TEXT,
       salary NUMERIC(10,2)
    );
   
   INSERT INTO hr.employees VALUES
       (1, 'John Doe', 'john.doe@example.com', '123-45-6789', 50000.00),
       (2, 'Jane Smith', 'jane.smith@example.com', '987-65-4321', 60000.00);
   ```

1. Create custom masking functions:

   ```
   CREATE OR REPLACE FUNCTION public.mask_ssn(ssn TEXT)
       RETURNS TEXT AS $$
       BEGIN
           RETURN 'XXX-XX-' || RIGHT(ssn, 4);
       END;
       $$ LANGUAGE plpgsql;
   
   CREATE OR REPLACE FUNCTION public.mask_salary(salary NUMERIC, multiplier NUMERIC DEFAULT 0.0)
       RETURNS NUMERIC AS $$
       BEGIN
           RETURN salary * multiplier;
       END;
       $$ LANGUAGE plpgsql;
   ```

1. Create multiple policies with different masking levels based on user roles:

   ```
   -- Create different roles
   CREATE ROLE analyst_role;
   CREATE ROLE support_role;
   CREATE ROLE intern_role;
   
   GRANT USAGE ON SCHEMA hr TO analyst_role, support_role, intern_role;
   GRANT SELECT ON hr.employees TO analyst_role, support_role, intern_role;
   ----------------------------------------------------------------------
   
   -- Low-Weight Policy (Intern)
   CALL pgcolumnmask.create_masking_policy(
       'employee_mask_strict',
       'hr.employees',
       JSON_BUILD_OBJECT('name', 'pgcolumnmask.mask_text(name, ''*'')',
                         'email', 'pgcolumnmask.mask_email(email)',
                         'ssn', 'pgcolumnmask.mask_text(ssn, ''*'')',
                         'salary', 'public.mask_salary(salary)')::JSONB,
       ARRAY['intern_role'],
       10  -- Lowest weight
   );
   
   ----------------------------------------------------------------------
   -- Medium-Weight Policy (Support)
   CALL pgcolumnmask.create_masking_policy(
       'employee_mask_moderate',
       'hr.employees',
       JSON_BUILD_OBJECT('email', 'pgcolumnmask.mask_email(email, ''#'')',
                         'ssn', 'public.mask_ssn(ssn)',
                         'salary', 'public.mask_salary(salary)')::JSONB,
       ARRAY['support_role'],
       50   -- Medium weight
   );
   
   ----------------------------------------------------------------------
   -- High-Weight Policy (Analyst)
   CALL pgcolumnmask.create_masking_policy(
       'employee_mask_light',
       'hr.employees',
       JSON_BUILD_OBJECT('ssn', 'public.mask_ssn(ssn)',
                         'salary', 'public.mask_salary(salary, 0.9)')::JSONB,
       ARRAY['analyst_role'],
       100   -- Highest weight
   );
   ```

1. The following examples demonstrate how different users see data based on their role membership and policy weights.

   ```
   -- Create users
   CREATE USER sarah_intern;
   GRANT intern_role TO sarah_intern;
   
   CREATE USER lisa_support;
   GRANT support_role TO lisa_support;
   
   CREATE USER mike_analyst;
   GRANT analyst_role TO mike_analyst;
   
   CREATE USER ethan_support_intern;
   GRANT support_role, intern_role TO ethan_support_intern;
   
   CREATE USER john_analyst_intern;
   GRANT analyst_role, intern_role TO john_analyst_intern;
   ```

   As an intern (strictest masking):

   ```
   SET ROLE sarah_intern;
   
   SELECT * FROM hr.employees;
    id |    name    |         email          |     ssn     | salary 
   ----+------------+------------------------+-------------+--------
     1 | ********   | XXXXXXXX@XXXXXXX.com   | *********** |   0.00
     2 | ********** | XXXXXXXXXX@XXXXXXX.com | *********** |   0.00
   ```

   As a support user (moderate masking):

   ```
   SET ROLE lisa_support;
   
   SELECT * FROM hr.employees;
    id |    name    |         email          |     ssn     | salary 
   ----+------------+------------------------+-------------+--------
     1 | John Doe   | ########@#######.com   | XXX-XX-6789 |   0.00
     2 | Jane Smith | ##########@#######.com | XXX-XX-4321 |   0.00
   ```

   As an analyst (lightest masking):

   ```
   SET ROLE mike_analyst;
   
   SELECT * FROM hr.employees;
    id |    name    |         email          |     ssn     |  salary  
   ----+------------+------------------------+-------------+----------
     1 | John Doe   | john.doe@example.com   | XXX-XX-6789 | 45000.00
     2 | Jane Smith | jane.smith@example.com | XXX-XX-4321 | 54000.00
   ```

   As `ethan_support_intern` (a member of both the intern and support roles):

   ```
   SET ROLE ethan_support_intern;
   
   -- Masking policies applicable to this user: employee_mask_strict and employee_mask_moderate
   -- id : unmasked because no applicable policy masks this column
   -- name : masked because of the employee_mask_strict policy
   -- email, ssn, salary : both employee_mask_strict and employee_mask_moderate mask these
   --                      columns, but employee_mask_moderate is used because of its higher weight
   
   SELECT * FROM hr.employees;
    id |    name    |         email          |     ssn     | salary 
   ----+------------+------------------------+-------------+--------
     1 | ********   | ########@#######.com   | XXX-XX-6789 |   0.00
     2 | ********** | ##########@#######.com | XXX-XX-4321 |   0.00
   ```

   As `john_analyst_intern` (a member of both the intern and analyst roles):

   ```
   SET ROLE john_analyst_intern;
   
   -- Masking policies applicable to this user: employee_mask_strict and employee_mask_light
   -- id : unmasked because no applicable policy masks this column
   -- name, email : masked because of the employee_mask_strict policy
   -- ssn, salary : both employee_mask_strict and employee_mask_light mask these
   --               columns, but employee_mask_light is used because of its higher weight
   
   SELECT * FROM hr.employees;
    id |    name    |         email          |     ssn     |  salary  
   ----+------------+------------------------+-------------+----------
     1 | ********   | XXXXXXXX@XXXXXXX.com   | XXX-XX-6789 | 45000.00
     2 | ********** | XXXXXXXXXX@XXXXXXX.com | XXX-XX-4321 | 54000.00
   ```
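The per-column resolution described in the comments above can be modeled compactly. The following Python sketch is an illustrative model (not the extension's actual resolution code): for each column, it picks the highest-weight policy among those whose role list intersects the user's roles.

```python
# Illustrative model of how pg_columnmask chooses a masking expression per
# column: among policies whose role list intersects the user's roles, the
# highest-weight policy masking that column wins.
policies = [
    # (name, roles, weight, masked columns)
    ("employee_mask_strict",   {"intern_role"},  10,  {"name", "email", "ssn", "salary"}),
    ("employee_mask_moderate", {"support_role"}, 50,  {"email", "ssn", "salary"}),
    ("employee_mask_light",    {"analyst_role"}, 100, {"ssn", "salary"}),
]

def winning_policy(user_roles, column):
    applicable = [p for p in policies if p[1] & user_roles and column in p[3]]
    return max(applicable, key=lambda p: p[2])[0] if applicable else None

# ethan is both support and intern: the moderate policy (weight 50) wins for
# shared columns, while the strict policy still masks 'name' because no
# higher-weight applicable policy covers that column
roles = {"support_role", "intern_role"}
print(winning_policy(roles, "name"))  # employee_mask_strict
print(winning_policy(roles, "ssn"))   # employee_mask_moderate
print(winning_policy(roles, "id"))    # None -> column stays unmasked
```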

# Understanding masking behavior in DML operations
<a name="AuroraPostgreSQL.Security.DynamicMasking.DMLMasking"></a>

`pg_columnmask` applies consistently across all DML operations, including INSERT, UPDATE, DELETE, and MERGE statements. When you execute these operations, Aurora PostgreSQL masks data according to a core principle – any data read from storage is masked according to the current user's applicable policies.

Masking affects the following query components:
+ WHERE clauses
+ JOIN conditions
+ Subqueries
+ RETURNING clauses

All of these components operate on masked values, not the original data. While data is written to storage unmasked, users only see their masked view when reading it back.

Aurora PostgreSQL enforces all database constraints (NOT NULL, UNIQUE, CHECK, FOREIGN KEY) on the actual stored values, not masked values. This can occasionally create apparent inconsistencies if masking functions aren't carefully designed.

Masking works alongside column-level permissions:
+ Users without SELECT privileges cannot read columns
+ Users with SELECT privileges see masked values according to their applicable policies
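The read/write asymmetry can be illustrated with a toy model. The following Python sketch is hypothetical (not Aurora internals): it stores raw values but applies masking on every read path, including predicate evaluation, which is why a masked user's WHERE clause can fail to match rows that exist in storage.

```python
# Minimal model of the read/write asymmetry: writes store raw values, while
# every read path -- including WHERE evaluation -- sees the masked view.
storage = []

def insert(salary):
    storage.append(salary)  # written to storage unmasked

def mask(salary):
    return 0.0              # e.g. public.mask_salary(salary) with default multiplier

def select_where(pred):
    # For a masked user, predicates are evaluated against masked values
    return [mask(s) for s in storage if pred(mask(s))]

insert(50000.0)
insert(60000.0)
# The rows exist, but the masked user's predicate sees 0.0 and matches nothing
print(select_where(lambda s: s > 40000))  # []
print(select_where(lambda s: s == 0.0))   # [0.0, 0.0]
```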

# Understanding masking behavior in trigger functions
<a name="AuroraPostgreSQL.Security.DynamicMasking.TriggerFunctionMasking"></a>

When `pg_columnmask` policies are applied to tables, it's important to understand how masking interacts with trigger functions. Triggers are database functions that execute automatically in response to certain events on a table, such as INSERT, UPDATE, or DELETE operations.

By default, DDM applies different masking rules depending on the type of trigger:

Table triggers  
**Transition tables are unmasked** – Trigger functions on tables have access to unmasked data in their transition tables for both old and new row versions  
Table owners create triggers and own the data, so they have full access to manage their tables effectively

View triggers (INSTEAD OF triggers)  
**Transition tables are masked** – Trigger functions on views see masked data according to the current user's permissions  
View owners may differ from base table owners and should respect masking policies on underlying tables

Two server-level configuration parameters control trigger behavior with masked tables. These can only be set by `rds_superuser`:
+ **Restrict Triggers on Masked Tables** – Prevents trigger execution when a masked user performs DML operations on tables with applicable masking policies.
+ **Restrict Triggers on Views with Masked Tables** – Prevents trigger execution on views when the view definition includes tables with masking policies applicable to the current user.

**Example of differences between function application to table and view**  
The following example creates a trigger function that prints old and new row values, then demonstrates how the same function behaves differently when attached to a table versus a view.  

```
-- Create trigger function
CREATE OR REPLACE FUNCTION print_changes()
    RETURNS TRIGGER AS
    $$
        BEGIN
        RAISE NOTICE 'Old row: name=%, email=%, ssn=%, salary=%',
            OLD.name, OLD.email, OLD.ssn, OLD.salary;
        
        RAISE NOTICE 'New row: name=%, email=%, ssn=%, salary=%',
            NEW.name, NEW.email, NEW.ssn, NEW.salary;
        
        RETURN NEW;
        END;
    $$ LANGUAGE plpgsql;

-- Create trigger
CREATE TRIGGER print_changes_trigger
    BEFORE UPDATE ON hr.employees
    FOR EACH ROW
    EXECUTE FUNCTION print_changes();

-- Grant update to analyst role
GRANT UPDATE ON hr.employees TO analyst_role;

-- Even for a masked user, the OLD and NEW rows passed to the trigger function
-- contain unmasked data
BEGIN;
SET ROLE mike_analyst;
UPDATE hr.employees SET id = id + 10 RETURNING *;
NOTICE:  Old row: name=John Doe, email=john.doe@example.com, ssn=123-45-6789, salary=50000.00
NOTICE:  New row: name=John Doe, email=john.doe@example.com, ssn=123-45-6789, salary=50000.00
NOTICE:  Old row: name=Jane Smith, email=jane.smith@example.com, ssn=987-65-4321, salary=60000.00
NOTICE:  New row: name=Jane Smith, email=jane.smith@example.com, ssn=987-65-4321, salary=60000.00
 id |    name    |         email          |     ssn     |  salary  
----+------------+------------------------+-------------+----------
 11 | John Doe   | john.doe@example.com   | XXX-XX-6789 | 45000.00
 12 | Jane Smith | jane.smith@example.com | XXX-XX-4321 | 54000.00
(2 rows)

ROLLBACK;


-- Triggers on views (which see masked data for the OLD and NEW rows)
CREATE VIEW hr.view_over_employees AS SELECT * FROM hr.employees;
GRANT UPDATE, SELECT ON hr.view_over_employees TO analyst_role;

-- Create trigger for this view
CREATE TRIGGER print_changes_trigger
    INSTEAD OF UPDATE ON hr.view_over_employees
    FOR EACH ROW
    EXECUTE FUNCTION print_changes();

-- Masked OLD and NEW rows are passed to the trigger function when the trigger is on a view
BEGIN;
SET ROLE mike_analyst;
UPDATE hr.view_over_employees SET id = id + 10 RETURNING *;
NOTICE:  Old row: name=John Doe, email=john.doe@example.com, ssn=XXX-XX-6789, salary=45000.00
NOTICE:  New row: name=John Doe, email=john.doe@example.com, ssn=XXX-XX-6789, salary=45000.00
NOTICE:  Old row: name=Jane Smith, email=jane.smith@example.com, ssn=XXX-XX-4321, salary=54000.00
NOTICE:  New row: name=Jane Smith, email=jane.smith@example.com, ssn=XXX-XX-4321, salary=54000.00
 id |    name    |         email          |     ssn     |  salary  
----+------------+------------------------+-------------+----------
 11 | John Doe   | john.doe@example.com   | XXX-XX-6789 | 45000.00
 12 | Jane Smith | jane.smith@example.com | XXX-XX-4321 | 54000.00
(2 rows)
ROLLBACK;
```
We recommend reviewing trigger behavior before implementing triggers on masked tables. Table triggers have access to unmasked data in transition tables, while view triggers see masked data.
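The rule this example demonstrates reduces to a one-line decision. The following Python sketch is hypothetical (not Aurora internals): table triggers receive raw transition rows, while INSTEAD OF triggers on views receive rows masked for the current user. The `mask_row` helper mimics the `public.mask_ssn` function used earlier.

```python
# Hypothetical model of transition-row visibility under pg_columnmask:
# table triggers see raw rows; view (INSTEAD OF) triggers see masked rows.
def mask_row(row):
    masked = dict(row)
    masked["ssn"] = "XXX-XX-" + row["ssn"][-4:]  # like public.mask_ssn
    return masked

def transition_row(row, trigger_on_view, user_is_masked):
    return mask_row(row) if trigger_on_view and user_is_masked else row

row = {"name": "John Doe", "ssn": "123-45-6789"}
# Table trigger: unmasked transition data, even for a masked user
print(transition_row(row, trigger_on_view=False, user_is_masked=True)["ssn"])  # 123-45-6789
# View trigger: masked transition data for the same user
print(transition_row(row, trigger_on_view=True, user_is_masked=True)["ssn"])   # XXX-XX-6789
```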

**Example of renaming masking policy**  
The following example demonstrates how to rename existing policies using the `rename_masking_policy` procedure.  

```
-- Rename the strict policy
CALL pgcolumnmask.rename_masking_policy(
    'employee_mask_strict',
    'hr.employees',
    'intern_protection_policy'
);

-- Verify the rename
SELECT policyname, roles, weight
    FROM pgcolumnmask.pg_columnmask_policies
    WHERE tablename = 'employees'
    ORDER BY weight DESC;

        policyname        |     roles      | weight 
--------------------------+----------------+--------
 employee_mask_light      | {analyst_role} |    100
 employee_mask_moderate   | {support_role} |     50
 intern_protection_policy | {intern_role}  |     10
```

**Example of altering policy weight**  
The following example demonstrates how to change a policy's weight using the `alter_masking_policy` procedure.  

```
-- Change weight of moderate policy
CALL pgcolumnmask.alter_masking_policy(
    'employee_mask_moderate'::NAME,
    'hr.employees'::REGCLASS,
    NULL,    -- Keep existing masking expressions
    NULL,    -- Keep existing roles
    75       -- New weight
);

-- Verify the changes
SELECT policyname, roles, weight
    FROM pgcolumnmask.pg_columnmask_policies
    WHERE tablename = 'employees'
    ORDER BY weight DESC;
        policyname        |     roles      | weight 
--------------------------+----------------+--------
 employee_mask_light      | {analyst_role} |    100
 employee_mask_moderate   | {support_role} |     75
 intern_protection_policy | {intern_role}  |     10
```

**Example of cleaning up**  
The following example demonstrates how to drop the policies, view, table, functions, roles, and users created in this workflow.  

```
-- Drop policies
CALL pgcolumnmask.drop_masking_policy(
    'intern_protection_policy',
    'hr.employees'
);

CALL pgcolumnmask.drop_masking_policy(
    'employee_mask_moderate',
    'hr.employees'
);

CALL pgcolumnmask.drop_masking_policy(
    'employee_mask_light',
    'hr.employees'
);

-- Drop table and functions
DROP VIEW IF EXISTS hr.view_over_employees;
DROP TABLE IF EXISTS hr.employees;
DROP SCHEMA IF EXISTS hr;
DROP FUNCTION IF EXISTS public.mask_ssn(text);
DROP FUNCTION IF EXISTS public.mask_salary(numeric, numeric);

-- Drop users
DROP USER sarah_intern, lisa_support, mike_analyst,
    ethan_support_intern, john_analyst_intern;
DROP ROLE intern_role, support_role, analyst_role;
```

# Configuring masking policy management role
<a name="AuroraPostgreSQL.Security.DynamicMasking.PolicyManagementRole"></a>

The PostgreSQL column masking extension, `pg_columnmask`, allows you to delegate the management of masking policies to a specific role, rather than requiring `rds_superuser` or table owner privileges. This provides more granular control over who can create, alter, and drop masking policies.

To configure the role that will have masking policy management privileges, follow these steps:

1. Create the policy admin role – As an `rds_superuser`, create a new role responsible for managing masking policies:

   ```
   CREATE ROLE mask_admin NOLOGIN;
   ```

1. Configure the PostgreSQL parameter – In your custom DB cluster parameter group, set the `pgcolumnmask.policy_admin_rolname` engine configuration parameter to the name of the role you created:

   ```
   pgcolumnmask.policy_admin_rolname = mask_admin
   ```

   You can set this engine configuration parameter in a DB cluster parameter group, and it doesn't require an instance reboot. For details on updating parameters, see [Modifying parameters in a DB cluster parameter group in Amazon Aurora](USER_WorkingWithParamGroups.ModifyingCluster.md).

1. Grant the role to users – As an `rds_superuser`, grant the `mask_admin` role to the users who should be able to manage masking policies:

   ```
   CREATE USER alice LOGIN;
   CREATE USER bob LOGIN;
   GRANT mask_admin TO alice, bob;
   ```

   Additionally, ensure that the users have USAGE privilege on the schemas where they will be managing masking policies:

   ```
   GRANT USAGE ON SCHEMA hr TO alice, bob;
   ```

Now, when users `alice` and `bob` connect to the database, they can use the standard `pg_columnmask` extension functions to create, alter, and drop masking policies on tables in any schema where they have the `USAGE` privilege.

# Best practices for secure pg\_columnmask implementation
<a name="AuroraPostgreSQL.Security.DynamicMasking.BestPractices"></a>

The following section provides security best practices for implementing `pg_columnmask` in your Aurora PostgreSQL environment. Follow these recommendations to:
+ Establish a secure role-based access control architecture
+ Develop masking functions that prevent security vulnerabilities
+ Understand and control trigger behavior with masked data

## Role-based security architecture
<a name="AuroraPostgreSQL.Security.DynamicMasking.BestPractices.architecture"></a>

Define a role hierarchy to implement access controls in your database. Aurora PostgreSQL `pg_columnmask` augments these controls by providing an additional layer for fine-grained data masking within those roles.

Create dedicated roles that align with organizational functions rather than granting permissions to individual users. This approach provides better auditability and simplifies permission management as your organizational structure evolves.

**Example of creating an organizational role hierarchy**  
The following example creates an organizational role hierarchy with dedicated roles for different functions, then assigns individual users to the appropriate roles. In this example, organizational roles (analyst\_role, support\_role) are created first, then individual users are granted membership in these roles. This structure allows you to manage permissions at the role level rather than for each individual user.  

```
-- Create organizational role hierarchy
CREATE ROLE data_admin_role;
CREATE ROLE security_admin_role;
CREATE ROLE analyst_role;
CREATE ROLE support_role;
CREATE ROLE developer_role;

-- Specify security_admin_role as masking policy manager in the DB cluster parameter
-- group pgcolumnmask.policy_admin_rolname = 'security_admin_role'

-- Create specific users and assign to appropriate roles
CREATE USER security_manager;
CREATE USER data_analyst1, data_analyst2;
CREATE USER support_agent1, support_agent2;

GRANT security_admin_role TO security_manager;
GRANT analyst_role TO data_analyst1, data_analyst2;
GRANT support_role TO support_agent1, support_agent2;
```
Implement the principle of least privilege by granting only the minimum permissions necessary for each role. Avoid granting broad permissions that could be exploited if credentials are compromised.  

```
-- Grant specific table permissions rather than schema-wide access
GRANT SELECT ON sensitive_data.customers TO analyst_role;
GRANT SELECT ON sensitive_data.transactions TO analyst_role;
-- Do not grant: GRANT ALL ON SCHEMA sensitive_data TO analyst_role;
```
Policy administrators require `USAGE` privileges on schemas where they manage masking policies. Grant these privileges selectively, following the principle of least privilege. Conduct regular reviews of schema access permissions to ensure only authorized personnel maintain policy management capabilities.  
The policy admin role parameter configuration is restricted to database administrators only. This parameter cannot be modified at the database or session level, preventing unprivileged users from overriding policy admin assignments. This restriction ensures that masking policy control remains centralized and secure.  
Assign the policy admin role to specific individuals rather than groups. This targeted approach ensures selective access to masking policy management, as policy administrators have the ability to mask all tables within the database. 

## Secure masking function development
<a name="AuroraPostgreSQL.Security.DynamicMasking.BestPractices.MaskingDevelopment"></a>

Develop masking functions using early-binding semantics to ensure proper dependency tracking and prevent late-binding vulnerabilities such as search path modification at runtime. We recommend using the `BEGIN ATOMIC` syntax for SQL functions to enable compile-time validation (that is, early binding) and dependency management.

```
-- Example - Secure masking function with early binding
CREATE OR REPLACE FUNCTION secure_mask_ssn(input_ssn TEXT)
    RETURNS TEXT
    LANGUAGE SQL
    IMMUTABLE PARALLEL SAFE STRICT
    BEGIN ATOMIC
        SELECT CASE
            WHEN input_ssn IS NULL THEN NULL
            WHEN length(input_ssn) < 4 THEN repeat('X', length(input_ssn))
            ELSE repeat('X', length(input_ssn) - 4) || right(input_ssn, 4)
        END;
    END;
```

Alternatively, create functions that are immune to search path changes by explicitly schema qualifying all object references, ensuring consistent behavior across different user sessions.

```
-- Function immune to search path changes
CREATE OR REPLACE FUNCTION data_masking.secure_phone_mask(phone_number TEXT)
    RETURNS TEXT
    LANGUAGE SQL
    IMMUTABLE PARALLEL SAFE STRICT
    AS $$
    SELECT CASE
        WHEN phone_number IS NULL THEN NULL
        WHEN pg_catalog.length(pg_catalog.regexp_replace(phone_number, '[^0-9]', '', 'g')) < 10 THEN 'XXX-XXX-XXXX'
        ELSE pg_catalog.regexp_replace(
            phone_number,
            '([0-9]{3})[0-9]{3}([0-9]{4})',
            '\1-XXX-\2'
        )
    END;
    $$;
```

Implement input validation within masking functions to handle edge cases and prevent unexpected behavior. Always include NULL handling and validate input formats to ensure consistent masking behavior. 

```
-- Robust masking function with comprehensive input validation
CREATE OR REPLACE FUNCTION secure_mask_phone(phone_number TEXT)
    RETURNS TEXT
    LANGUAGE SQL
    IMMUTABLE PARALLEL SAFE STRICT
    BEGIN ATOMIC
        SELECT CASE
            WHEN phone_number IS NULL THEN NULL
            WHEN length(trim(phone_number)) = 0 THEN phone_number
            WHEN length(regexp_replace(phone_number, '[^0-9]', '', 'g')) < 10 THEN 'XXX-XXX-XXXX'
            ELSE regexp_replace(phone_number, '([0-9]{3})[0-9]{3}([0-9]{4})', '\1-XXX-\2')
        END;
    END;
```

## DML trigger behavior with pg\_columnmask
<a name="AuroraPostgreSQL.Security.DynamicMasking.BestPractices.DMLTriggerBehavior"></a>

For table triggers, transition tables are fully unmasked. For view triggers (INSTEAD OF triggers), transition tables are masked according to the masking policies applicable to the current user.

Table triggers with pg\_columnmask  
Triggers are passed transition data that contains the old and new versions of the rows modified by the firing DML query. Depending on when the trigger fires, Aurora PostgreSQL populates the old and new rows. For example, a `BEFORE INSERT` trigger has only new versions of the rows and empty old versions, because there is no old version to refer to.  
`pg_columnmask` does not mask transition tables inside triggers on tables. Triggers can use masked columns inside their body, and they see unmasked data. Trigger creators should consider how their triggers execute on behalf of masked users. The following example works correctly in this case.  

```
-- Example for table trigger uses masked column in its definition
-- Create a table and insert some rows
CREATE TABLE public.credit_card_table (
    name TEXT,
    credit_card_no VARCHAR(16),
    is_fraud BOOL
);

INSERT INTO public.credit_card_table (name, credit_card_no, is_fraud)
    VALUES
    ('John Doe', '4532015112830366', false),
    ('Jane Smith', '5410000000000000', true),
    ('Brad Smith', '1234567891234567', true);

-- Create a role which will see masked data and grant it privileges
CREATE ROLE intern_user;
GRANT SELECT, DELETE ON public.credit_card_table TO intern_user;

-- Trigger which silently skips deletes of non-fraudulent credit cards
CREATE OR REPLACE FUNCTION prevent_non_fraud_delete()
    RETURNS TRIGGER AS
    $$
    BEGIN
        IF OLD.is_fraud = false THEN
            RETURN NULL;
        END IF;
        RETURN OLD;
    END;
    $$ LANGUAGE plpgsql;

CREATE TRIGGER prevent_non_fraud_delete
    BEFORE DELETE ON credit_card_table
    FOR EACH ROW
    EXECUTE FUNCTION prevent_non_fraud_delete();

CREATE OR REPLACE FUNCTION public.return_false()
    RETURNS BOOLEAN
    LANGUAGE SQL
    IMMUTABLE PARALLEL SAFE STRICT
    BEGIN ATOMIC
      SELECT false;
    END;

-- A masking policy that masks both the credit card number and the is_fraud column.
-- If masking were applied inside the trigger, the prevent_non_fraud_delete trigger
-- would allow the masked user to delete more rows (even non-fraudulent ones).
CALL pgcolumnmask.create_masking_policy(
    'mask_credit_card_no_&_is_fraud'::NAME,
    'public.credit_card_table'::REGCLASS,
    JSON_BUILD_OBJECT('credit_card_no', 'pgcolumnmask.mask_text(credit_card_no)',
                      'is_fraud', 'public.return_false()')::JSONB,
    ARRAY['intern_user']::NAME[],
    10::INT
);

-- Test trigger behaviour using intern_user
BEGIN;
SET ROLE intern_user;
-- credit card number & is_fraud is completely masked from intern_user
SELECT * FROM public.credit_card_table;
    name    |  credit_card_no  | is_fraud 
------------+------------------+----------
 John Doe   | XXXXXXXXXXXXXXXX | f
 Jane Smith | XXXXXXXXXXXXXXXX | f
 Brad Smith | XXXXXXXXXXXXXXXX | f
(3 rows)

-- The delete trigger lets the intern user delete the rows for Jane and Brad even though
-- intern_user sees is_fraud = false for them, because the table trigger works with the
-- original unmasked value
DELETE FROM public.credit_card_table RETURNING *;
    name    |  credit_card_no  | is_fraud 
------------+------------------+----------
 Jane Smith | XXXXXXXXXXXXXXXX | f
 Brad Smith | XXXXXXXXXXXXXXXX | f
(2 rows)

COMMIT;
```
A trigger creator can leak unmasked data to users if they aren't careful about the statements they use in the trigger body. For example, using `RAISE NOTICE '%', masked_column;` prints the unmasked column value to the current user.  

```
-- Example showing table trigger leaking column value to current user
CREATE OR REPLACE FUNCTION leaky_trigger_func()
    RETURNS TRIGGER AS
    $$
    BEGIN
        RAISE NOTICE 'Old credit card number was: %', OLD.credit_card_no;
        RAISE NOTICE 'New credit card number is %', NEW.credit_card_no;
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

CREATE TRIGGER leaky_trigger
    AFTER UPDATE ON public.credit_card_table
    FOR EACH ROW
    EXECUTE FUNCTION leaky_trigger_func();

-- Grant update on column is_fraud to auditor role
-- auditor will NOT HAVE PERMISSION TO READ DATA
CREATE ROLE auditor;
GRANT UPDATE (is_fraud) ON public.credit_card_table TO auditor;

-- Also add auditor role to existing masking policy on credit card table
CALL pgcolumnmask.alter_masking_policy(
    'mask_credit_card_no_&_is_fraud'::NAME,
    'public.credit_card_table'::REGCLASS,
    NULL::JSONB,
    ARRAY['intern_user', 'auditor']::NAME[],
    NULL::INT
);

-- Log in as auditor
-- [auditor]
-- Update will fail if trying to read data from the table
UPDATE public.credit_card_table
    SET is_fraud = true
    WHERE credit_card_no = '4532015112830366';
ERROR:  permission denied for table credit_card_table

-- [auditor]
-- But the leaky update trigger still prints the credit card numbers even though
-- the current user does not have permission to select from public.credit_card_table
UPDATE public.credit_card_table SET is_fraud = true;
NOTICE:  Old credit card number was: 4532015112830366
NOTICE:  New credit card number is 4532015112830366
```

Triggers on views with pg\_columnmask (INSTEAD OF triggers)  
In PostgreSQL, INSTEAD OF triggers can only be created on views. They are used for running DML statements on views that are not updatable. Transition tables are always masked inside INSTEAD OF triggers, because the view and the base tables referenced in the view query can have different owners. In that case, the base tables might have masking policies applicable to the view owner, and the view owner must always see masked data from the base tables inside its triggers. This differs from triggers on tables, where the trigger creator and the data inside the table are owned by the same user.  

```
-- Create a view over credit card table
CREATE OR REPLACE VIEW public.credit_card_view
    AS
    SELECT * FROM public.credit_card_table;

-- Truncate credit card table and insert fresh data
TRUNCATE TABLE public.credit_card_table;
INSERT INTO public.credit_card_table (name, credit_card_no, is_fraud)
    VALUES
    ('John Doe', '4532015112830366', false),
    ('Jane Smith', '5410000000000000', true),
    ('Brad Smith', '1234567891234567', true);

CREATE OR REPLACE FUNCTION public.print_changes()
    RETURNS TRIGGER AS
    $$
    BEGIN
        RAISE NOTICE 'Old row: name=%, credit card number=%, is fraud=%',
            OLD.name, OLD.credit_card_no, OLD.is_fraud;
    
        RAISE NOTICE 'New row: name=%, credit card number=%, is fraud=%',
            NEW.name, NEW.credit_card_no, NEW.is_fraud;
    
    RETURN NEW;
   END;
   $$ LANGUAGE plpgsql;

CREATE TRIGGER print_changes_trigger
    INSTEAD OF UPDATE ON public.credit_card_view
    FOR EACH ROW
    EXECUTE FUNCTION public.print_changes();

GRANT SELECT, UPDATE ON public.credit_card_view TO auditor;

-- [auditor]
-- Login as auditor role
BEGIN;

-- Any data coming out from the table will be masked in instead of triggers
-- according to masking policies applicable to current user
UPDATE public.credit_card_view
    SET name = CONCAT(name, '_new_name')
    RETURNING *;
NOTICE:  Old row: name=John Doe, credit card number=XXXXXXXXXXXXXXXX, is fraud=f
NOTICE:  New row: name=John Doe_new_name, credit card number=XXXXXXXXXXXXXXXX, is fraud=f
NOTICE:  Old row: name=Jane Smith, credit card number=XXXXXXXXXXXXXXXX, is fraud=f
NOTICE:  New row: name=Jane Smith_new_name, credit card number=XXXXXXXXXXXXXXXX, is fraud=f
NOTICE:  Old row: name=Brad Smith, credit card number=XXXXXXXXXXXXXXXX, is fraud=f
NOTICE:  New row: name=Brad Smith_new_name, credit card number=XXXXXXXXXXXXXXXX, is fraud=f
        name         |  credit_card_no  | is_fraud 
---------------------+------------------+----------
 John Doe_new_name   | XXXXXXXXXXXXXXXX | f
 Jane Smith_new_name | XXXXXXXXXXXXXXXX | f
 Brad Smith_new_name | XXXXXXXXXXXXXXXX | f
 
-- Any new data going into the table using an INSERT or UPDATE command is unmasked
UPDATE public.credit_card_view
    SET credit_card_no = '9876987698769876'
    RETURNING *;
NOTICE:  Old row: name=John Doe, credit card number=XXXXXXXXXXXXXXXX, is fraud=f
NOTICE:  New row: name=John Doe, credit card number=9876987698769876, is fraud=f
NOTICE:  Old row: name=Jane Smith, credit card number=XXXXXXXXXXXXXXXX, is fraud=f
NOTICE:  New row: name=Jane Smith, credit card number=9876987698769876, is fraud=f
NOTICE:  Old row: name=Brad Smith, credit card number=XXXXXXXXXXXXXXXX, is fraud=f
NOTICE:  New row: name=Brad Smith, credit card number=9876987698769876, is fraud=f
    name    |  credit_card_no  | is_fraud 
------------+------------------+----------
 John Doe   | 9876987698769876 | f
 Jane Smith | 9876987698769876 | f
 Brad Smith | 9876987698769876 | f
 
COMMIT;
```

Database and user level GUCs to control trigger behavior  
Two configuration parameters control trigger execution behavior for users with applicable masking policies. Use these parameters to prevent triggers from executing on masked tables or views when additional security restrictions are required. Both parameters are disabled by default, allowing triggers to execute normally.  
**First GUC: Trigger firing restriction on masked tables**  
Specifications:  
+ Name: `pgcolumnmask.restrict_dml_triggers_for_masked_users`
+ Type: `boolean`
+ Default: `false` (triggers are allowed to be executed)
When set to TRUE, prevents trigger execution on masked tables for masked users; `pg_columnmask` raises an error instead.  
**Second GUC: Trigger firing restriction on views with masked tables**  
Specifications:  
+ Name: `pgcolumnmask.restrict_iot_triggers_for_masked_users`
+ Type: `boolean`
+ Default: `false` (triggers are allowed to be executed)
When set to TRUE, prevents trigger execution for masked users on views whose definition includes masked tables.

These parameters operate independently and are configurable like standard database configuration parameters.
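Because these are standard PostgreSQL configuration parameters, you can scope them to a database or to a role with the usual `ALTER` commands. A minimal sketch; the database name `example_db` and role name `masked_app_user` are placeholders:

```sql
-- Block trigger execution on masked tables for every session in one database
ALTER DATABASE example_db
    SET pgcolumnmask.restrict_dml_triggers_for_masked_users = true;

-- Block trigger execution on views over masked tables for one role only
ALTER ROLE masked_app_user
    SET pgcolumnmask.restrict_iot_triggers_for_masked_users = true;

-- Verify the effective value in the current session
SHOW pgcolumnmask.restrict_dml_triggers_for_masked_users;
```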

# Aurora PostgreSQL pg\_columnmask data movement scenarios
<a name="AuroraPostgreSQL.Security.DynamicMasking.DataMovement"></a>

`pg_columnmask` behavior varies across different data movement operations depending on whether the operation occurs at the storage, logical, or application layer. Storage-level operations (such as cloning) behave differently from logical operations (such as `pg_dump`) and application-level operations (such as FDW queries). This section describes masking behavior for common scenarios including replication, backups, exports, and migrations, and explains the security implications for each.

**Topics**
+ [Aurora Global Database and Read Replicas](#AuroraPostgreSQL.Security.DynamicMasking.DataMovement.RR)
+ [Database clone and snapshot restore](#AuroraPostgreSQL.Security.DynamicMasking.DataMovement.Clones)
+ [Logical replication](#AuroraPostgreSQL.Security.DynamicMasking.DataMovement.LogRep)
+ [Blue/Green deployments](#AuroraPostgreSQL.Security.DynamicMasking.DataMovement.BlueGreen)
+ [Zero-ETL and CDC streams](#AuroraPostgreSQL.Security.DynamicMasking.DataMovement.ZETL)
+ [AWS Database Migration Service](#AuroraPostgreSQL.Security.DynamicMasking.DataMovement.DMS)
+ [Data exports](#AuroraPostgreSQL.Security.DynamicMasking.DataMovement.DataExport)
+ [Views and materialized views](#AuroraPostgreSQL.Security.DynamicMasking.DataMovement.Views)
+ [Data dump and restore](#AuroraPostgreSQL.Security.DynamicMasking.DataMovement.DDR)
+ [Foreign data wrapper](#AuroraPostgreSQL.Security.DynamicMasking.DataMovement.FDQ)

## Aurora Global Database and Read Replicas
<a name="AuroraPostgreSQL.Security.DynamicMasking.DataMovement.RR"></a>

Aurora `pg_columnmask` policies are stored in database system tables within the cluster volume. All replicas access the same policies and return consistently masked results. For Aurora Global Database deployments, `pg_columnmask` policies replicate to secondary AWS Regions along with other database system tables, ensuring consistent data protection across regions. During failover scenarios, all `pg_columnmask` policies remain intact and functional.

## Database clone and snapshot restore
<a name="AuroraPostgreSQL.Security.DynamicMasking.DataMovement.Clones"></a>

Aurora Fast Clone and snapshot restore operations preserve all `pg_columnmask` policies, roles, and configurations as part of the database system tables. The cloned or restored database inherits all existing policies from the source cluster. After cloning or restoration, each database cluster maintains independent `pg_columnmask` policies.

## Logical replication
<a name="AuroraPostgreSQL.Security.DynamicMasking.DataMovement.LogRep"></a>

During initial synchronization, logical replication uses standard SQL COPY operations, and `pg_columnmask` policies are enforced based on the replication user's permissions. During ongoing CDC (change data capture), masking policies are not applied and unmasked data is replicated through WAL records. Users with `pg_create_subscription` privileges can potentially exfiltrate unmasked data by setting up replication to a system they control.
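Given that exfiltration risk, you might want to limit which roles can create subscriptions. A minimal sketch, assuming a role named `analyst_role` that shouldn't be able to replicate data out (`pg_create_subscription` is a predefined role in PostgreSQL 16 and higher):

```sql
-- Revoking membership prevents the role from creating subscriptions
REVOKE pg_create_subscription FROM analyst_role;

-- Review which roles still hold the privilege
SELECT r.rolname
  FROM pg_auth_members m
  JOIN pg_roles r ON r.oid = m.member
 WHERE m.roleid = 'pg_create_subscription'::regrole;
```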

## Blue/Green deployments
<a name="AuroraPostgreSQL.Security.DynamicMasking.DataMovement.BlueGreen"></a>

During snapshot restoration, `pg_columnmask` policies are automatically included, so the green environment starts with an identical copy of all policies from the blue environment. Data replicated from blue to green is not masked. Subsequent masking policy changes (DDL commands) on the blue cluster do not replicate to the green cluster and can invalidate the RDS blue/green deployment.

## Zero-ETL and CDC streams
<a name="AuroraPostgreSQL.Security.DynamicMasking.DataMovement.ZETL"></a>

Data replication is not affected by `pg_columnmask` policies. Zero-ETL supports DDL replication but doesn't replicate `pg_columnmask` or RLS policies. No masking policies are applied to replicated data in Zero-ETL.

## AWS Database Migration Service
<a name="AuroraPostgreSQL.Security.DynamicMasking.DataMovement.DMS"></a>

Initial data sync is masked or unmasked depending on the user selected for the AWS DMS task. CDC data is always unmasked. While `pg_columnmask`-related internal RLS policies may be migrated, they won't function on targets that don't have `pg_columnmask` enabled.

## Data exports
<a name="AuroraPostgreSQL.Security.DynamicMasking.DataMovement.DataExport"></a>

`pg_columnmask` treats exports like any other query operation—masking is applied based on the executing user's permissions. This applies to SQL commands like COPY, SELECT INTO, CREATE TABLE AS, and Aurora PostgreSQL's S3 export functionality. 

**Note**  
When masked users export data, the resulting files contain masked values that may violate database constraints when restored.
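For example, an S3 export run by a user with a masking policy produces masked values in the output file. An illustrative sketch using Aurora's `aws_s3` export function; the view comes from the earlier example, while the bucket name and Region are placeholders:

```sql
-- The executing user's masking policies apply to the query result,
-- so the exported CSV contains masked values for a masked user
SELECT * FROM aws_s3.query_export_to_s3(
    'SELECT * FROM public.credit_card_view',
    aws_commons.create_s3_uri('amzn-s3-demo-bucket', 'exports/cards.csv', 'us-east-1'),
    options := 'format csv'
);
```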

## Views and materialized views
<a name="AuroraPostgreSQL.Security.DynamicMasking.DataMovement.Views"></a>

Keep the following considerations in mind when using views:
+ **Regular views** – Always use `INVOKER` semantics. The current user's masking policies apply when querying the view, regardless of who created the view.
+ **Materialized views** – When refreshed, the masking policies of the materialized view owner apply, not the policies of the user performing the refresh. If the owner has masking policies, the materialized view always contains masked data.
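A short sketch of the materialized-view behavior, assuming an owner role `mask_owner` with a masking policy and a base table named `public.credit_card` (both names are illustrative):

```sql
-- Created and owned by mask_owner, who is subject to masking.
-- REFRESH re-runs the defining query as the owner, so the stored
-- rows contain masked values even if an unmasked user runs the refresh.
CREATE MATERIALIZED VIEW card_summary AS
  SELECT name, credit_card_no FROM public.credit_card;

REFRESH MATERIALIZED VIEW card_summary;
```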

## Data dump and restore
<a name="AuroraPostgreSQL.Security.DynamicMasking.DataMovement.DDR"></a>

`pg_dump` operates as a regular database user and applies masking policies based on the connecting user's permissions. If a masked user performs a dump, the backup file contains masked data. `pg_columnmask` policies are included in the dump as part of the database schema. Successful restoration requires that all referenced roles exist in the target database and that the target has the `pg_columnmask` extension installed.
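For instance, a dump taken while connected as a masked user yields a backup whose data section contains masked values. The endpoint, database, and user names below are placeholders:

```
# Connecting as masked_app_user: masking policies apply to the dumped rows
pg_dump -h mycluster.cluster-abc123.us-east-1.rds.amazonaws.com \
        -U masked_app_user -d mydb -Fc -f masked_backup.dump
```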

**Note**  
Starting with PostgreSQL 18, `pg_dump` supports the `--no-policies` option, which excludes both row-level security (RLS) policies and `pg_columnmask` masking policies from database dumps. For more information, see [pg\_dump](https://www.postgresql.org/docs/current/app-pgdump.html).

## Foreign data wrapper
<a name="AuroraPostgreSQL.Security.DynamicMasking.DataMovement.FDQ"></a>

When you use foreign data wrappers, masking policies on remote tables are applied based on the mapped user's permissions on the source server, not on the local querying user's permissions. You can access masked remote data through FDW, but you can't create DDM or RLS policies directly on foreign tables in your local database.
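Because masking follows the mapped remote user, the user mapping determines what every local user sees. A sketch with placeholder server and user names, using standard `postgres_fdw` syntax:

```sql
-- All local users mapped to remote_masked_user see the same masked data,
-- regardless of their local privileges
CREATE USER MAPPING FOR PUBLIC
  SERVER remote_server
  OPTIONS (user 'remote_masked_user', password 'xxxx');
```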