
Running PySpark jobs

As the member who can query, you can run a PySpark job on a configured table by using an approved PySpark analysis template.

Prerequisites

Before you run a PySpark job, you must have:

  • An active membership in an AWS Clean Rooms collaboration

  • Access to at least one analysis template in the collaboration

  • Access to at least one configured table in the collaboration

  • Permissions to write the results of a PySpark job to a specified S3 bucket

    For information about creating the required service role, see Create a service role to write results of a PySpark job. A minimal sketch of such a role appears after this list.

  • Confirmation that the member who is responsible for paying compute costs has joined the collaboration as an active member
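
As a rough illustration of the service-role prerequisite above, the following boto3 sketch creates an IAM role that AWS Clean Rooms can assume and attaches an inline policy that allows writing results to an S3 bucket. The role name, bucket name, service principal, and permission set are assumptions made for illustration; Create a service role to write results of a PySpark job is the authoritative reference for the required trust and permissions policies.

import json

import boto3

iam = boto3.client("iam")

# Trust policy letting the AWS Clean Rooms service assume the role.
# The service principal below is an assumption; confirm it against
# "Create a service role to write results of a PySpark job".
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "cleanrooms.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

role_name = "CleanRoomsPySparkResultsRole"  # hypothetical role name
bucket = "amzn-s3-demo-bucket"              # hypothetical results bucket

iam.create_role(
    RoleName=role_name,
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Inline policy allowing the role to write job results to the bucket.
# The exact actions required may differ; verify in the linked topic.
results_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:PutObject", "s3:GetBucketLocation", "s3:ListBucket"],
        "Resource": [
            f"arn:aws:s3:::{bucket}",
            f"arn:aws:s3:::{bucket}/*",
        ],
    }],
}

iam.put_role_policy(
    RoleName=role_name,
    PolicyName="WritePySparkJobResults",
    PolicyDocument=json.dumps(results_policy),
)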

For information about how to run jobs by calling the AWS Clean Rooms StartProtectedJob API operation directly or by using the AWS SDKs, see the AWS Clean Rooms API Reference.
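If you call the API through an SDK, a minimal boto3 sketch might look like the following. The membership identifier and analysis template ARN are placeholders, and the request and response shapes (start_protected_job, get_protected_job, the protectedJob id and status fields) follow my reading of the API Reference; verify them there before relying on this sketch.

import time

import boto3

cleanrooms = boto3.client("cleanrooms")

# Placeholder values; substitute your membership ID and the ARN of an
# approved PySpark analysis template.
MEMBERSHIP_ID = "membership-id"
TEMPLATE_ARN = (
    "arn:aws:cleanrooms:us-east-1:111122223333:"
    "membership/membership-id/analysistemplate/template-id"
)

# Start a PySpark job from the approved analysis template.
job = cleanrooms.start_protected_job(
    membershipIdentifier=MEMBERSHIP_ID,
    type="PYSPARK",
    jobParameters={"analysisTemplateArn": TEMPLATE_ARN},
)
job_id = job["protectedJob"]["id"]

# Poll until the job reaches a terminal state. The status values below
# are assumptions based on the API Reference; confirm them there.
while True:
    status = cleanrooms.get_protected_job(
        membershipIdentifier=MEMBERSHIP_ID,
        protectedJobIdentifier=job_id,
    )["protectedJob"]["status"]
    if status in ("SUCCESS", "FAILED", "CANCELLED"):
        break
    time.sleep(30)

print(f"Protected job {job_id} finished with status: {status}")

When the job succeeds, results are delivered according to the job's result configuration; see Receiving and using analysis results, referenced below.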

For information about job logging, see Analysis logging in AWS Clean Rooms.

For information about receiving job results, see Receiving and using analysis results.

The following topics explain how to run a PySpark job on a configured table in a collaboration using the AWS Clean Rooms console.
