Using the RAPIDS Accelerator for Apache Spark with Amazon EMR on EKS
With Amazon EMR on EKS, you can run jobs that use the Nvidia RAPIDS Accelerator for Apache Spark. This tutorial covers how to run a Spark job with RAPIDS on EC2 graphics processing unit (GPU) instance types. The tutorial uses the following versions:

- Amazon EMR on EKS release 6.9.0 and later
- Apache Spark 3.x
You can accelerate Spark with Amazon EC2 GPU instance types by using the Nvidia RAPIDS Accelerator for Apache Spark. Before you begin, make sure you have the following resources:

- Amazon EMR on EKS virtual cluster
- Amazon EKS cluster with a GPU enabled node group
An Amazon EMR on EKS virtual cluster is a registered handle to a Kubernetes namespace on an Amazon EKS cluster, and is managed by Amazon EMR on EKS. The handle allows Amazon EMR to use the Kubernetes namespace as a destination for running jobs. For more information on how to set up a virtual cluster, see Setting up Amazon EMR on EKS in this guide.
You must configure the virtual cluster with a node group that has GPU instances, and the nodes must be configured with the Nvidia device plugin. To learn more, see Managed node groups.
To set up your Amazon EKS cluster to add GPU enabled node groups, perform the following procedure.

Add GPU enabled node groups
1. Create a GPU enabled node group with the following create-nodegroup command. Be sure to substitute the correct parameters for your Amazon EKS cluster; the capitalized values below are placeholders for your own settings. Use an instance type that supports Spark RAPIDS, such as P4, P3, G5, or G4dn, and choose scaling-config values (minSize, maxSize, desiredSize) that are appropriate for your workload.

   aws eks create-nodegroup \
     --cluster-name EKS_CLUSTER_NAME \
     --nodegroup-name NODEGROUP_NAME \
     --scaling-config minSize=0,maxSize=5,desiredSize=2 \
     --ami-type AL2_x86_64_GPU \
     --node-role NODE_ROLE \
     --subnets SUBNETS_SPACE_DELIMITED \
     --remote-access ec2SshKey=SSH_KEY \
     --instance-types GPU_INSTANCE_TYPE \
     --disk-size DISK_SIZE \
     --region AWS_REGION
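Node group creation can take several minutes. One way to block until it is ready is to poll the same EKS DescribeNodegroup API that the CLI uses. The helper below is an illustrative sketch, not part of the EMR documentation: the function name is mine, and `eks_client` is assumed to be a boto3-style EKS client (for example, `boto3.client("eks")`).

```python
import time

def wait_for_nodegroup_active(eks_client, cluster_name, nodegroup_name,
                              poll_seconds=30, max_attempts=40):
    """Poll DescribeNodegroup until the node group reaches ACTIVE.

    `eks_client` is any object with a describe_nodegroup method,
    such as boto3.client("eks"). Returns True when ACTIVE, False if
    max_attempts is exhausted, and raises on a failed creation.
    """
    for _ in range(max_attempts):
        status = eks_client.describe_nodegroup(
            clusterName=cluster_name, nodegroupName=nodegroup_name
        )["nodegroup"]["status"]
        if status == "ACTIVE":
            return True
        if status in ("CREATE_FAILED", "DEGRADED"):
            raise RuntimeError(f"node group entered status {status}")
        time.sleep(poll_seconds)
    return False
```

The equivalent one-liner with the AWS CLI is `aws eks wait nodegroup-active --cluster-name ... --nodegroup-name ...`.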
2. Install the Nvidia device plugin in your cluster so that each node advertises the number of GPUs it has and the cluster can run GPU enabled containers. Run the following command to install the plugin:

   kubectl apply -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v0.9.0/nvidia-device-plugin.yml
3. To validate how many GPUs are available on each node of your cluster, run the following command:

   kubectl get nodes "-o=custom-columns=NAME:.metadata.name,GPU:.status.allocatable.nvidia\.com/gpu"
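If you want to total the GPUs across the cluster, the custom-columns output above is easy to post-process. The following helper is a small sketch, assuming the two-column NAME/GPU format produced by the command (the function name and sample node names are hypothetical):

```python
def total_allocatable_gpus(kubectl_output: str) -> int:
    """Sum the allocatable nvidia.com/gpu counts from the
    custom-columns output of the kubectl command above."""
    total = 0
    # Skip the NAME/GPU header row, then read the GPU column.
    for line in kubectl_output.strip().splitlines()[1:]:
        parts = line.split()
        # Nodes without the device plugin report "<none>", which we skip.
        if len(parts) == 2 and parts[1].isdigit():
            total += int(parts[1])
    return total

sample = """NAME                          GPU
ip-192-168-1-10.ec2.internal  4
ip-192-168-1-11.ec2.internal  4
ip-192-168-2-12.ec2.internal  <none>"""
print(total_allocatable_gpus(sample))  # 8
```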
Run a Spark RAPIDS job
1. Submit a Spark RAPIDS job to your Amazon EMR on EKS cluster. The following command shows an example that starts the job; the capitalized values are placeholders for your own settings. The first time you run the job, it might take a few minutes to download the image and cache it on the node.

   aws emr-containers start-job-run \
     --virtual-cluster-id VIRTUAL_CLUSTER_ID \
     --execution-role-arn JOB_EXECUTION_ROLE \
     --release-label emr-6.9.0-spark-rapids-latest \
     --job-driver '{"sparkSubmitJobDriver": {"entryPoint": "local:///usr/lib/spark/examples/jars/spark-examples.jar", "entryPointArguments": ["10000"], "sparkSubmitParameters": "--class org.apache.spark.examples.SparkPi"}}' \
     --configuration-overrides '{"applicationConfiguration": [{"classification": "spark-defaults", "properties": {"spark.executor.instances": "2", "spark.executor.memory": "2G"}}], "monitoringConfiguration": {"cloudWatchMonitoringConfiguration": {"logGroupName": "LOG_GROUP_NAME"}, "s3MonitoringConfiguration": {"logUri": "LOG_GROUP_STREAM"}}}'
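The quoting inside the --configuration-overrides argument is easy to get wrong when it is written inline. One way to avoid that, sketched below, is to build the structure as a Python dictionary and serialize it with json.dumps before passing it to the CLI or to an SDK call; the placeholder values mirror the command above and must be replaced with your own log group and S3 URI.

```python
import json

# Mirrors the --configuration-overrides payload of the start-job-run
# command above. LOG_GROUP_NAME and LOG_GROUP_STREAM are placeholders.
configuration_overrides = {
    "applicationConfiguration": [
        {
            "classification": "spark-defaults",
            "properties": {
                "spark.executor.instances": "2",
                "spark.executor.memory": "2G",
            },
        }
    ],
    "monitoringConfiguration": {
        "cloudWatchMonitoringConfiguration": {"logGroupName": "LOG_GROUP_NAME"},
        "s3MonitoringConfiguration": {"logUri": "LOG_GROUP_STREAM"},
    },
}

# Serialize to the single JSON string expected by the CLI flag.
print(json.dumps(configuration_overrides))
```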
2. To verify that the Spark RAPIDS Accelerator is enabled, check the Spark driver logs. These logs are stored in CloudWatch or in the S3 location that you specified when you ran the start-job-run command. The following example shows what the log lines generally look like:

   22/11/15 00:12:44 INFO RapidsPluginUtils: RAPIDS Accelerator build: {version=22.08.0-amzn-0, user=release, url=, date=2022-11-03T03:32:45Z, revision=, cudf_version=22.08.0, branch=}
   22/11/15 00:12:44 INFO RapidsPluginUtils: RAPIDS Accelerator JNI build: {version=22.08.0, user=, url=https://github.com/NVIDIA/spark-rapids-jni.git, date=2022-08-18T04:14:34Z, revision=a1b23cd_sample, branch=HEAD}
   22/11/15 00:12:44 INFO RapidsPluginUtils: cudf build: {version=22.08.0, user=, url=https://github.com/rapidsai/cudf.git, date=2022-08-18T04:14:34Z, revision=a1b23ce_sample, branch=HEAD}
   22/11/15 00:12:44 WARN RapidsPluginUtils: RAPIDS Accelerator 22.08.0-amzn-0 using cudf 22.08.0.
   22/11/15 00:12:44 WARN RapidsPluginUtils: spark.rapids.sql.multiThreadedRead.numThreads is set to 20.
   22/11/15 00:12:44 WARN RapidsPluginUtils: RAPIDS Accelerator is enabled, to disable GPU support set `spark.rapids.sql.enabled` to false.
   22/11/15 00:12:44 WARN RapidsPluginUtils: spark.rapids.sql.explain is set to `NOT_ON_GPU`. Set it to 'NONE' to suppress the diagnostics logging about the query placement on the GPU.
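When checking driver logs programmatically rather than by eye, a simple scan for the confirmation message shown above is enough. The helper below is a minimal sketch, assuming log lines like the sample; the function name is mine, not part of any EMR or RAPIDS API:

```python
def rapids_enabled(log_lines) -> bool:
    """Return True if the driver log contains the RapidsPluginUtils
    message confirming that the RAPIDS Accelerator is active."""
    return any("RAPIDS Accelerator is enabled" in line for line in log_lines)

# Sample lines taken from the driver log excerpt above.
sample = [
    "22/11/15 00:12:44 WARN RapidsPluginUtils: RAPIDS Accelerator 22.08.0-amzn-0 using cudf 22.08.0.",
    "22/11/15 00:12:44 WARN RapidsPluginUtils: RAPIDS Accelerator is enabled, to disable GPU support set `spark.rapids.sql.enabled` to false.",
]
print(rapids_enabled(sample))  # True
```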
3. To see the operations that will run on the GPU, enable additional logging by running the following command. Note the "spark.rapids.sql.explain": "ALL" configuration.

   aws emr-containers start-job-run \
     --virtual-cluster-id VIRTUAL_CLUSTER_ID \
     --execution-role-arn JOB_EXECUTION_ROLE \
     --release-label emr-6.9.0-spark-rapids-latest \
     --job-driver '{"sparkSubmitJobDriver": {"entryPoint": "local:///usr/lib/spark/examples/jars/spark-examples.jar", "entryPointArguments": ["10000"], "sparkSubmitParameters": "--class org.apache.spark.examples.SparkPi"}}' \
     --configuration-overrides '{"applicationConfiguration": [{"classification": "spark-defaults", "properties": {"spark.rapids.sql.explain": "ALL", "spark.executor.instances": "2", "spark.executor.memory": "2G"}}], "monitoringConfiguration": {"cloudWatchMonitoringConfiguration": {"logGroupName": "LOG_GROUP_NAME"}, "s3MonitoringConfiguration": {"logUri": "LOG_GROUP_STREAM"}}}'

   The previous command is an example of a job that uses the GPU. Its output looks similar to the example below. Use this key to help understand the output:
- * marks an operation that will run on the GPU
- ! marks an operation that cannot run on the GPU
- @ marks an operation that could run on the GPU, but won't because it is inside a plan that cannot run on the GPU
22/11/15 01:22:58 INFO GpuOverrides: Plan conversion to the GPU took 118.64 ms
22/11/15 01:22:58 INFO GpuOverrides: Plan conversion to the GPU took 4.20 ms
22/11/15 01:22:58 INFO GpuOverrides: GPU plan transition optimization took 8.37 ms
22/11/15 01:22:59 WARN GpuOverrides: *Exec <ProjectExec> will run on GPU
*Expression <Alias> substring(cast(date#149 as string), 0, 7) AS month#310 will run on GPU
*Expression <Substring> substring(cast(date#149 as string), 0, 7) will run on GPU
*Expression <Cast> cast(date#149 as string) will run on GPU
*Exec <SortExec> will run on GPU
*Expression <SortOrder> date#149 ASC NULLS FIRST will run on GPU
*Exec <ShuffleExchangeExec> will run on GPU
*Partitioning <RangePartitioning> will run on GPU
*Expression <SortOrder> date#149 ASC NULLS FIRST will run on GPU
*Exec <UnionExec> will run on GPU
!Exec <ProjectExec> cannot run on GPU because not all expressions can be replaced
@Expression <AttributeReference> customerID#0 could run on GPU
@Expression <Alias> Charge AS kind#126 could run on GPU
@Expression <Literal> Charge could run on GPU
@Expression <AttributeReference> value#129 could run on GPU
@Expression <Alias> add_months(2022-11-15, cast(-(cast(_we0#142 as bigint) + last_month#128L) as int)) AS date#149 could run on GPU
! <AddMonths> add_months(2022-11-15, cast(-(cast(_we0#142 as bigint) + last_month#128L) as int)) cannot run on GPU because GPU does not currently support the operator class org.apache.spark.sql.catalyst.expressions.AddMonths
@Expression <Literal> 2022-11-15 could run on GPU
@Expression <Cast> cast(-(cast(_we0#142 as bigint) + last_month#128L) as int) could run on GPU
@Expression <UnaryMinus> -(cast(_we0#142 as bigint) + last_month#128L) could run on GPU
@Expression <Add> (cast(_we0#142 as bigint) + last_month#128L) could run on GPU
@Expression <Cast> cast(_we0#142 as bigint) could run on GPU
@Expression <AttributeReference> _we0#142 could run on GPU
@Expression <AttributeReference> last_month#128L could run on GPU
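For a long explain output, it can help to summarize how many plan nodes fall under each marker. The sketch below applies the key above to GpuOverrides lines; the function name and sample lines (taken from the output above) are illustrative only:

```python
from collections import Counter

# Markers used by spark.rapids.sql.explain output, per the key above.
MARKERS = {
    "*": "will run on GPU",
    "!": "cannot run on GPU",
    "@": "could run on GPU, but is blocked by its parent plan",
}

def classify_plan_lines(lines) -> Counter:
    """Count GpuOverrides plan lines by their leading placement marker."""
    counts = Counter()
    for line in lines:
        stripped = line.lstrip()
        if stripped[:1] in MARKERS:
            counts[stripped[0]] += 1
    return counts

sample = [
    "*Exec <ProjectExec> will run on GPU",
    "!Exec <ProjectExec> cannot run on GPU because not all expressions can be replaced",
    "@Expression <AttributeReference> customerID#0 could run on GPU",
    "@Expression <Alias> Charge AS kind#126 could run on GPU",
]
print(classify_plan_lines(sample))  # counts: 1 '*', 1 '!', 2 '@'
```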