Set up IAM roles for service accounts (IRSA) for spark-submit
The following sections describe how to set up IAM roles for service accounts (IRSA) to authenticate and authorize Kubernetes service accounts so that you can run Spark applications stored in Amazon S3.
Prerequisites
Before you try any of the examples in this documentation, make sure that you have completed the following prerequisite:
- Create an S3 bucket and upload the jar of your Spark application.
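If you have not yet created the bucket or uploaded the jar, a minimal AWS CLI sketch might look like the following. The bucket name my-spark-jar-bucket matches the placeholder used in the policy later in this topic, but the Region and the local jar path are assumptions you should replace with your own values.

# Hypothetical example: create the bucket and upload the Spark application jar.
aws s3 mb s3://my-spark-jar-bucket --region us-west-2
aws s3 cp ./spark-examples.jar s3://my-spark-jar-bucket/spark-examples.jar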
Configure a Kubernetes service account to assume an IAM role
The following steps cover how to configure a Kubernetes service account to assume an AWS Identity and Access Management (IAM) role. After you configure the pods to use the service account, they can access any AWS service that the role has permission to access.
- Create a policy file that allows read-only access to the Amazon S3 objects that you uploaded:

cat >my-policy.json <<EOF
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::<my-spark-jar-bucket>",
                "arn:aws:s3:::<my-spark-jar-bucket>/*"
            ]
        }
    ]
}
EOF
- Create the IAM policy.

aws iam create-policy --policy-name my-policy --policy-document file://my-policy.json
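The next step references the policy ARN with the example account ID 111122223333. As an optional convenience that is not part of the documented procedure, you can derive the ARN from your own account instead of hard-coding it:

# Optional sketch: build the policy ARN from your own account ID.
ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
POLICY_ARN="arn:aws:iam::${ACCOUNT_ID}:policy/my-policy"
echo "${POLICY_ARN}"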
- Create an IAM role and associate it with a Kubernetes service account for the Spark driver.

eksctl create iamserviceaccount --name my-spark-driver-sa --namespace spark-operator \
  --cluster my-cluster --role-name "my-role" \
  --attach-policy-arn arn:aws:iam::111122223333:policy/my-policy --approve
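To verify the association, an optional check that is not part of the documented procedure is to inspect the service account and the role that eksctl created; the service account should carry an eks.amazonaws.com/role-arn annotation that points to my-role.

# Optional check: inspect the annotated service account and the IAM role.
kubectl describe serviceaccount my-spark-driver-sa -n spark-operator
aws iam get-role --role-name my-role --query Role.Arn --output text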
- Create a YAML file with the permissions that the Spark driver service account requires:

cat >spark-rbac.yaml <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: emr-containers-role-spark
rules:
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - "*"
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - "*"
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - "*"
  - apiGroups:
      - ""
    resources:
      - persistentvolumeclaims
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: spark-role-binding
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: emr-containers-role-spark
subjects:
  - kind: ServiceAccount
    name: emr-containers-sa-spark
    namespace: default
EOF
- Apply the role binding configuration.

kubectl apply -f spark-rbac.yaml
- The kubectl command should return confirmation of the created resources:

serviceaccount/emr-containers-sa-spark created
clusterrolebinding.rbac.authorization.k8s.io/emr-containers-role-spark configured
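You can optionally confirm the RBAC objects before moving on. This check is a suggestion rather than part of the documented procedure, and it assumes the default namespace used above.

# Optional check: list the Role and RoleBinding created from spark-rbac.yaml,
# along with the service accounts in the namespace.
kubectl get role,rolebinding,serviceaccount -n default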
Run a Spark application
Amazon EMR 6.10.0 and higher supports spark-submit for running Spark applications on an Amazon EKS cluster. Complete the following steps to run a Spark application:
- Make sure that you have completed the steps in Setting up spark-submit for Amazon EMR on EKS.
- Set the values for the following environment variables:

export SPARK_HOME=spark-home
export MASTER_URL=k8s://Amazon EKS-cluster-endpoint
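Here, spark-home and Amazon EKS-cluster-endpoint are placeholders that you replace with your own values. One way to look up the endpoint, assuming your cluster is named my-cluster as in the earlier eksctl command, is the following sketch:

# Hypothetical helper: resolve the EKS API server endpoint for MASTER_URL.
EKS_ENDPOINT=$(aws eks describe-cluster --name my-cluster --query cluster.endpoint --output text)
export MASTER_URL="k8s://${EKS_ENDPOINT}"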
- Now, submit the Spark application with the following command:

$SPARK_HOME/bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master $MASTER_URL \
  --conf spark.kubernetes.container.image=895885662937.dkr.ecr.us-west-2.amazonaws.com/spark/emr-6.15.0:latest \
  --conf spark.kubernetes.authenticate.driver.serviceAccountName=emr-containers-sa-spark \
  --deploy-mode cluster \
  --conf spark.kubernetes.namespace=default \
  --conf "spark.driver.extraClassPath=/usr/lib/hadoop-lzo/lib/*:/usr/lib/hadoop/hadoop-aws.jar:/usr/share/aws/aws-java-sdk/*:/usr/share/aws/emr/emrfs/conf:/usr/share/aws/emr/emrfs/lib/*:/usr/share/aws/emr/emrfs/auxlib/*:/usr/share/aws/emr/security/conf:/usr/share/aws/emr/security/lib/*:/usr/share/aws/hmclient/lib/aws-glue-datacatalog-spark-client.jar:/usr/share/java/Hive-JSON-Serde/hive-openx-serde.jar:/usr/share/aws/sagemaker-spark-sdk/lib/sagemaker-spark-sdk.jar:/home/hadoop/extrajars/*" \
  --conf "spark.driver.extraLibraryPath=/usr/lib/hadoop/lib/native:/usr/lib/hadoop-lzo/lib/native:/docker/usr/lib/hadoop/lib/native:/docker/usr/lib/hadoop-lzo/lib/native" \
  --conf "spark.executor.extraClassPath=/usr/lib/hadoop-lzo/lib/*:/usr/lib/hadoop/hadoop-aws.jar:/usr/share/aws/aws-java-sdk/*:/usr/share/aws/emr/emrfs/conf:/usr/share/aws/emr/emrfs/lib/*:/usr/share/aws/emr/emrfs/auxlib/*:/usr/share/aws/emr/security/conf:/usr/share/aws/emr/security/lib/*:/usr/share/aws/hmclient/lib/aws-glue-datacatalog-spark-client.jar:/usr/share/java/Hive-JSON-Serde/hive-openx-serde.jar:/usr/share/aws/sagemaker-spark-sdk/lib/sagemaker-spark-sdk.jar:/home/hadoop/extrajars/*" \
  --conf "spark.executor.extraLibraryPath=/usr/lib/hadoop/lib/native:/usr/lib/hadoop-lzo/lib/native:/docker/usr/lib/hadoop/lib/native:/docker/usr/lib/hadoop-lzo/lib/native" \
  --conf spark.hadoop.fs.s3.customAWSCredentialsProvider=com.amazonaws.auth.WebIdentityTokenCredentialsProvider \
  --conf spark.hadoop.fs.s3.impl=com.amazon.ws.emr.hadoop.fs.EmrFileSystem \
  --conf spark.hadoop.fs.AbstractFileSystem.s3.impl=org.apache.hadoop.fs.s3.EMRFSDelegate \
  --conf spark.hadoop.fs.s3.buffer.dir=/mnt/s3 \
  --conf spark.hadoop.fs.s3.getObject.initialSocketTimeoutMilliseconds="2000" \
  --conf spark.hadoop.mapreduce.fileoutputcommitter.algorithm.version.emr_internal_use_only.EmrFileSystem="2" \
  --conf spark.hadoop.mapreduce.fileoutputcommitter.cleanup-failures.ignored.emr_internal_use_only.EmrFileSystem="true" \
  s3://my-pod-bucket/spark-examples.jar 20
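While the job runs, you can watch the driver and executor pods from a second terminal. The following commands are an optional sketch; driver-pod is a placeholder for whatever driver pod name appears in the pod list (it ends in -driver for this example).

# Optional: watch the Spark driver and executor pods while the job runs.
kubectl get pods -n default -w
# Follow the driver log; replace driver-pod with the pod name shown above.
kubectl logs -f -n default driver-pod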
- After the Spark driver finishes the Spark job, you should see log lines near the end of the submission output indicating that the Spark job has finished:

23/11/24 17:02:14 INFO LoggingPodStatusWatcherImpl: Application org.apache.spark.examples.SparkPi with submission ID default:org-apache-spark-examples-sparkpi-4980808c03ff3115-driver finished
23/11/24 17:02:14 INFO ShutdownHookManager: Shutdown hook called
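Because the SparkPi example writes its result to the driver log, you can also retrieve it from the completed driver pod. This is an optional check; the pod name below is taken from the submission ID in the log output above and will differ in your run.

# Optional: read the result from the completed driver pod's log.
kubectl logs -n default org-apache-spark-examples-sparkpi-4980808c03ff3115-driver | grep -i "pi is roughly"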
Clean up
When you are done running your applications, you can clean up with the following command.
kubectl delete -f spark-rbac.yaml
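If you also want to remove the IAM resources created earlier in this walkthrough, the following sketch is an optional extension of the documented cleanup; it assumes the same names and the example account ID used above.

# Optional additional cleanup: remove the IRSA service account, the role eksctl created, and the policy.
eksctl delete iamserviceaccount --name my-spark-driver-sa --namespace spark-operator --cluster my-cluster
aws iam delete-policy --policy-arn arn:aws:iam::111122223333:policy/my-policy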