Reading from and writing to Amazon Redshift
The following code examples use PySpark to read and write sample data to and from an Amazon Redshift database, both with the data source API and with SparkSQL.
- Data source API
Use PySpark to read and write sample data to and from an Amazon Redshift database with the data source API.
import boto3
from pyspark import SparkContext
from pyspark.sql import SQLContext

sc = SparkContext.getOrCreate()  # or reuse your existing SparkContext
sql_context = SQLContext(sc)

url = "jdbc:redshift:iam://redshifthost:5439/database"
aws_iam_role_arn = "arn:aws:iam::account-id:role/role-name"

# Read the source table from Amazon Redshift
df = sql_context.read \
    .format("io.github.spark_redshift_community.spark.redshift") \
    .option("url", url) \
    .option("dbtable", "table-name") \
    .option("tempdir", "s3://path/for/temp/data") \
    .option("aws_iam_role", aws_iam_role_arn) \
    .load()

# Write the data to a copy of the table
df.write \
    .format("io.github.spark_redshift_community.spark.redshift") \
    .option("url", url) \
    .option("dbtable", "table-name-copy") \
    .option("tempdir", "s3://path/for/temp/data") \
    .option("aws_iam_role", aws_iam_role_arn) \
    .mode("error") \
    .save()
") \ .mode("error") \ .save() - SparkSQL
Use PySpark to read and write sample data to and from an Amazon Redshift database with SparkSQL.
import boto3
import json
import sys
import os
from pyspark.sql import SparkSession

spark = SparkSession \
    .builder \
    .enableHiveSupport() \
    .getOrCreate()

url = "jdbc:redshift:iam://redshifthost:5439/database"
aws_iam_role_arn = "arn:aws:iam::account-id:role/role-name"
bucket = "s3://path/for/temp/data"
tableName = "table-name"  # Redshift table name

# Register the Redshift table with Spark SQL through the connector
s = f"""CREATE TABLE IF NOT EXISTS {tableName} (country string, data string)
    USING io.github.spark_redshift_community.spark.redshift
    OPTIONS (dbtable '{tableName}', tempdir '{bucket}', url '{url}', aws_iam_role '{aws_iam_role_arn}');"""
spark.sql(s)

columns = ["country", "data"]
data = [("test-country", "test-data")]
df = spark.sparkContext.parallelize(data).toDF(columns)

# Insert data into the table
df.write.insertInto(tableName, overwrite=False)

# Read the data back
df = spark.sql(f"SELECT * FROM {tableName}")
df.show()
Authenticating with Amazon Redshift
Considerations