Getting started with Amazon RDS zero-ETL integrations with Amazon Redshift

Before you create a zero-ETL integration with Amazon Redshift, configure your RDS database and your Amazon Redshift data warehouse with the required parameters and permissions. During setup, you'll complete the following steps:

  • Step 1: Create a custom DB parameter group

  • Step 2: Select or create a source database

  • Step 3: Create a target Amazon Redshift data warehouse

After you complete these tasks, continue to Creating Amazon RDS zero-ETL integrations with Amazon Redshift.

Tip

You can have RDS complete these setup steps for you while you're creating the integration, rather than performing them manually. To immediately start creating an integration, see Creating Amazon RDS zero-ETL integrations with Amazon Redshift.

Step 1: Create a custom DB parameter group

Amazon RDS zero-ETL integrations with Amazon Redshift require specific values for the DB parameters that control binary logging (binlog). To configure binary logging, you must first create a custom DB parameter group, and then associate it with the source database. For instructions to create a parameter group, see DB parameter groups for Amazon RDS DB instances. Configure the following parameter values:

  • binlog_format=ROW

  • binlog_row_image=full

In addition, make sure that the binlog_row_value_options parameter is not set to PARTIAL_JSON. If the source database is a Multi-AZ DB cluster, make sure that the binlog_transaction_compression parameter is not set to ON.
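
If you use the AWS CLI, the following sketch shows one way to create and configure such a parameter group. The group name is a placeholder, and the parameter group family should match your engine version.

aws rds create-db-parameter-group \
    --db-parameter-group-name my-source-param-group \
    --db-parameter-group-family mysql8.0 \
    --description "Binlog settings for zero-ETL integrations"

aws rds modify-db-parameter-group \
    --db-parameter-group-name my-source-param-group \
    --parameters "ParameterName=binlog_format,ParameterValue=ROW,ApplyMethod=pending-reboot" \
                 "ParameterName=binlog_row_image,ParameterValue=full,ApplyMethod=pending-reboot"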

Step 2: Select or create a source database

After you create a custom DB parameter group, choose or create an RDS for MySQL database. This database will be the source of data replication to Amazon Redshift. For instructions to create a Single-AZ or Multi-AZ DB instance, see Creating an Amazon RDS DB instance. For instructions to create a Multi-AZ DB cluster, see Creating a Multi-AZ DB cluster for Amazon RDS.

The database must be running a supported DB engine version. For a list of supported versions, see Supported Regions and DB engines for Amazon RDS zero-ETL integrations with Amazon Redshift.

When you create the database, under Additional configuration, change the default DB parameter group to the custom parameter group that you created in the previous step.

Note

If you associate the parameter group with the database after the database is already created, you must reboot the database to apply the changes before you can create a zero-ETL integration. For instructions, see Rebooting a DB instance or Rebooting a Multi-AZ DB cluster and reader DB instances for Amazon RDS.

In addition, make sure that automated backups are enabled on the database. For more information, see Enabling automated backups.
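 
As a rough AWS CLI sketch, the following create-db-instance call creates a source instance that's associated with the custom parameter group and has automated backups enabled (a nonzero backup retention period). The identifier, credentials, and sizing values are placeholders, not requirements.

aws rds create-db-instance \
    --db-instance-identifier my-source-db \
    --db-instance-class db.m5.large \
    --engine mysql \
    --engine-version 8.0.32 \
    --allocated-storage 15 \
    --master-username username \
    --master-user-password 'Password01**' \
    --db-parameter-group-name my-source-param-group \
    --backup-retention-period 7

# If you associated the parameter group after creating the database,
# reboot so the static binlog parameters take effect:
aws rds reboot-db-instance --db-instance-identifier my-source-db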

Step 3: Create a target Amazon Redshift data warehouse

After you create your source database, you must create and configure a target data warehouse in Amazon Redshift. The data warehouse must meet the following requirements:

  • It uses an RA3 node type with at least two nodes, or it's a Redshift Serverless workgroup.

  • It's encrypted, if it's a provisioned cluster. For more information, see Amazon Redshift database encryption.

For instructions to create a data warehouse, see Creating a cluster for provisioned clusters, or Creating a workgroup with a namespace for Redshift Serverless.
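
As an AWS CLI sketch, the following call creates a provisioned cluster that meets both requirements: an encrypted, two-node RA3 cluster. The identifier and credentials are placeholders.

aws redshift create-cluster \
    --cluster-identifier my-target-cluster \
    --node-type ra3.4xlarge \
    --number-of-nodes 2 \
    --encrypted \
    --master-username username \
    --master-user-password 'Password01**'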

Enable case sensitivity on the data warehouse

For the integration to be successful, the case sensitivity parameter (enable_case_sensitive_identifier) must be enabled for the data warehouse. By default, case sensitivity is disabled on all provisioned clusters and Redshift Serverless workgroups.

To enable case sensitivity, perform the following steps depending on your data warehouse type:

  • Provisioned cluster – To enable case sensitivity on a provisioned cluster, create a custom parameter group with the enable_case_sensitive_identifier parameter enabled. Then, associate the parameter group with the cluster. For instructions, see Managing parameter groups using the console or Configuring parameter values using the AWS CLI. For example CLI commands, see the sketch after this list.

    Note

    Remember to reboot the cluster after you associate the custom parameter group with it.

  • Serverless workgroup – To enable case sensitivity on a Redshift Serverless workgroup, you must use the AWS CLI. The Amazon Redshift console doesn't currently support modifying Redshift Serverless parameter values. Send the following update-workgroup request:

    aws redshift-serverless update-workgroup \
        --workgroup-name target-workgroup \
        --config-parameters parameterKey=enable_case_sensitive_identifier,parameterValue=true

    You don't need to reboot a workgroup after you modify its parameter values.
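
For a provisioned cluster, the equivalent AWS CLI sequence might look like the following sketch. The parameter group and cluster names are placeholders, and the console procedure linked above remains the authoritative reference.

aws redshift create-cluster-parameter-group \
    --parameter-group-name my-target-param-group \
    --parameter-group-family redshift-1.0 \
    --description "Case-sensitive identifiers for zero-ETL integrations"

aws redshift modify-cluster-parameter-group \
    --parameter-group-name my-target-param-group \
    --parameters ParameterName=enable_case_sensitive_identifier,ParameterValue=true

aws redshift modify-cluster \
    --cluster-identifier my-target-cluster \
    --cluster-parameter-group-name my-target-param-group

# Reboot so the parameter change takes effect
aws redshift reboot-cluster --cluster-identifier my-target-cluster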

Configure authorization for the data warehouse

After you create a data warehouse, you must configure the source RDS database as an authorized integration source. For instructions, see Configure authorization for your Amazon Redshift data warehouse.
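
The linked topic contains the authoritative procedure. As an illustrative sketch, authorization amounts to attaching a resource policy to the target Redshift namespace that lets the redshift.amazonaws.com service principal authorize inbound integrations from your source ARN, and lets your account create them. All ARNs and the account ID below are placeholders. Save a policy such as the following as authorization-policy.json:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "redshift.amazonaws.com" },
      "Action": "redshift:AuthorizeInboundIntegration",
      "Condition": { "StringEquals": { "aws:SourceArn": "arn:aws:rds:us-east-1:123456789012:db:my-source-db" } }
    },
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::123456789012:root" },
      "Action": "redshift:CreateInboundIntegration"
    }
  ]
}

Then attach it to the target namespace:

aws redshift put-resource-policy \
    --resource-arn arn:aws:redshift:us-east-1:123456789012:namespace:11111111-2222-3333-4444-555555555555 \
    --policy file://authorization-policy.json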

Set up an integration using the AWS SDKs

Rather than setting up each resource manually, you can run the following Python script to automatically set up the required resources for you. The code example uses the AWS SDK for Python (Boto3) to create a source RDS for MySQL DB instance and target Amazon Redshift data warehouse, each with the required parameter values. It then waits for the databases to be available before creating a zero-ETL integration between them. You can comment out different functions depending on which resources you need to set up.

To install the required dependency, run the following command. The time module that the script also uses is part of the Python standard library, so you don't need to install it.

pip install boto3

Within the script, optionally modify the names of the source database, target cluster, and parameter groups. The final function creates an integration named my-integration after the resources are set up.

import boto3
import time

# Build the client using the default credential configuration.
# You can use the CLI and run 'aws configure' to set access key, secret
# key, and default Region.

rds = boto3.client('rds')
redshift = boto3.client('redshift')
sts = boto3.client('sts')

source_db_name = 'my-source-db' # A name for the source database
source_param_group_name = 'my-source-param-group' # A name for the source parameter group
target_cluster_name = 'my-target-cluster' # A name for the target cluster
target_param_group_name = 'my-target-param-group' # A name for the target parameter group

def create_source_db(*args):
    """Creates a source RDS for MySQL DB instance"""

    response = rds.create_db_parameter_group(
        DBParameterGroupName=source_param_group_name,
        DBParameterGroupFamily='mysql8.0',
        Description='RDS for MySQL zero-ETL integrations'
    )
    print('Created source parameter group: ' + response['DBParameterGroup']['DBParameterGroupName'])

    # Set the binlog parameters that zero-ETL integrations require
    response = rds.modify_db_parameter_group(
        DBParameterGroupName=source_param_group_name,
        Parameters=[
            {
                'ParameterName': 'binlog_format',
                'ParameterValue': 'ROW',
                'ApplyMethod': 'pending-reboot'
            },
            {
                'ParameterName': 'binlog_row_image',
                'ParameterValue': 'full',
                'ApplyMethod': 'pending-reboot'
            }
        ]
    )
    print('Modified source parameter group: ' + response['DBParameterGroupName'])

    response = rds.create_db_instance(
        DBInstanceIdentifier=source_db_name,
        DBParameterGroupName=source_param_group_name,
        Engine='mysql',
        EngineVersion='8.0.32',
        DBName='mydb',
        DBInstanceClass='db.m5.large',
        AllocatedStorage=15,
        MasterUsername='username',
        MasterUserPassword='Password01**'
    )
    print('Creating source database: ' + response['DBInstance']['DBInstanceIdentifier'])

    source_arn = response['DBInstance']['DBInstanceArn']
    create_target_cluster(target_cluster_name, source_arn, target_param_group_name)
    return response

def create_target_cluster(target_cluster_name, source_arn, target_param_group_name):
    """Creates a target Redshift cluster"""

    response = redshift.create_cluster_parameter_group(
        ParameterGroupName=target_param_group_name,
        ParameterGroupFamily='redshift-1.0',
        Description='RDS for MySQL zero-ETL integrations'
    )
    print('Created target parameter group: ' + response['ClusterParameterGroup']['ParameterGroupName'])

    # Enable case-sensitive identifiers, which zero-ETL integrations require
    response = redshift.modify_cluster_parameter_group(
        ParameterGroupName=target_param_group_name,
        Parameters=[
            {
                'ParameterName': 'enable_case_sensitive_identifier',
                'ParameterValue': 'true'
            }
        ]
    )
    print('Modified target parameter group: ' + response['ParameterGroupName'])

    response = redshift.create_cluster(
        ClusterIdentifier=target_cluster_name,
        NodeType='ra3.4xlarge',
        NumberOfNodes=2,
        Encrypted=True,
        MasterUsername='username',
        MasterUserPassword='Password01**',
        ClusterParameterGroupName=target_param_group_name
    )
    print('Creating target cluster: ' + response['Cluster']['ClusterIdentifier'])

    # Retrieve the target cluster ARN
    response = redshift.describe_clusters(
        ClusterIdentifier=target_cluster_name
    )
    target_arn = response['Clusters'][0]['ClusterNamespaceArn']

    # Retrieve the current user's account ID
    response = sts.get_caller_identity()
    account_id = response['Account']

    # Create a resource policy granting access to source database and account ID
    response = redshift.put_resource_policy(
        ResourceArn=target_arn,
        Policy='''
        {
            "Version":"2012-10-17",
            "Statement":[
                {
                    "Effect":"Allow",
                    "Principal":{
                        "Service":"redshift.amazonaws.com"
                    },
                    "Action":["redshift:AuthorizeInboundIntegration"],
                    "Condition":{
                        "StringEquals":{
                            "aws:SourceArn":"%s"
                        }
                    }
                },
                {
                    "Effect":"Allow",
                    "Principal":{
                        "AWS":"arn:aws:iam::%s:root"
                    },
                    "Action":"redshift:CreateInboundIntegration"
                }
            ]
        }
        ''' % (source_arn, account_id)
    )
    return response

def wait_for_db_availability(*args):
    """Waits for both databases to be available"""

    print('Waiting for source and target to be available...')
    response = rds.describe_db_instances(
        DBInstanceIdentifier=source_db_name
    )
    source_status = response['DBInstances'][0]['DBInstanceStatus']
    source_arn = response['DBInstances'][0]['DBInstanceArn']

    response = redshift.describe_clusters(
        ClusterIdentifier=target_cluster_name
    )
    target_status = response['Clusters'][0]['ClusterStatus']
    target_arn = response['Clusters'][0]['ClusterNamespaceArn']

    # Every 60 seconds, check whether the databases are available
    if source_status != 'available' or target_status != 'available':
        time.sleep(60)
        response = wait_for_db_availability(
            source_db_name, target_cluster_name)
    else:
        print('Databases available. Ready to create zero-ETL integration.')
        create_integration(source_arn, target_arn)
    return

def create_integration(source_arn, target_arn):
    """Creates a zero-ETL integration using the source and target databases"""

    response = rds.create_integration(
        SourceArn=source_arn,
        TargetArn=target_arn,
        IntegrationName='my-integration'
    )
    print('Creating integration: ' + response['IntegrationName'])

def main():
    """main function"""

    create_source_db(source_db_name, source_param_group_name)
    wait_for_db_availability(source_db_name, target_cluster_name)

if __name__ == "__main__":
    main()

Next steps

With a source RDS database and an Amazon Redshift target data warehouse, you can now create a zero-ETL integration and replicate data. For instructions, see Creating Amazon RDS zero-ETL integrations with Amazon Redshift.