
Reading from Asana entities


Prerequisites

An Asana object you would like to read from. Refer to the supported entities table below to check the available entities.

Supported entities for source

Entity          Can be Filtered   Supports Limit   Supports Order By   Supports Select *   Supports Partitioning
Workspace       No                Yes              No                  Yes                 No
Tag             No                Yes              No                  Yes                 No
User            No                Yes              No                  Yes                 No
Portfolio       No                Yes              No                  Yes                 No
Team            No                Yes              No                  Yes                 No
Project         Yes               Yes              No                  Yes                 No
Section         No                Yes              No                  Yes                 No
Task            Yes               No               No                  Yes                 Yes
Goal            Yes               Yes              No                  Yes                 No
AuditLogEvent   Yes               Yes              No                  Yes                 No
Status Update   Yes               Yes              No                  Yes                 No
Custom Field    No                Yes              No                  Yes                 No
Project Brief   Yes               No               No                  Yes                 Yes

Example

read_read = glueContext.create_dynamic_frame.from_options(
    connection_type="Asana",
    connection_options={
        "connectionName": "connectionName",
        "ENTITY_NAME": "task/workspace:xxxx",
        "API_VERSION": "1.0",
        "PARTITION_FIELD": "created_at",
        "LOWER_BOUND": "2024-02-05T14:09:30.115Z",
        "UPPER_BOUND": "2024-06-07T13:30:00.134Z",
        "NUM_PARTITIONS": "3"
    }
)

Asana entity and field details

Partitioning queries

Additional Spark options PARTITION_FIELD, LOWER_BOUND, UPPER_BOUND, and NUM_PARTITIONS can be provided if you want to utilize concurrency in Spark. With these parameters, the original query is split into NUM_PARTITIONS sub-queries that Spark tasks can execute concurrently.

  • PARTITION_FIELD: the name of the field to be used to partition the query.

  • LOWER_BOUND: an inclusive lower bound value of the chosen partition field.

    For date, we accept the Spark date format used in Spark SQL queries. Example of valid values: 2024-06-07T13:30:00.134Z.

  • UPPER_BOUND: an exclusive upper bound value of the chosen partition field.

  • NUM_PARTITIONS: number of partitions.
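Conceptually, the split described above divides the [LOWER_BOUND, UPPER_BOUND) range into NUM_PARTITIONS contiguous sub-ranges, one per sub-query. The following plain-Python sketch illustrates that behavior; it is not AWS Glue's internal implementation:

```python
from datetime import datetime

def split_partition_range(lower: str, upper: str, num_partitions: int):
    """Split [lower, upper) into num_partitions contiguous sub-ranges,
    mirroring how a partitioned read divides a date range."""
    fmt = "%Y-%m-%dT%H:%M:%S.%fZ"  # Spark SQL date format used in the examples
    lo = datetime.strptime(lower, fmt)
    hi = datetime.strptime(upper, fmt)
    step = (hi - lo) / num_partitions
    # bounds[i] is the inclusive lower bound of sub-range i;
    # bounds[i + 1] is its exclusive upper bound.
    bounds = [lo + step * i for i in range(num_partitions)] + [hi]
    return [(bounds[i], bounds[i + 1]) for i in range(num_partitions)]

ranges = split_partition_range(
    "2024-02-05T14:09:30.115Z", "2024-06-07T13:30:00.134Z", 3
)
for lo, hi in ranges:
    print(lo, "->", hi)
```

Each sub-range corresponds to one sub-query that a Spark task can execute independently.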

Entity-wise partitioning field support details are captured in the following table.

Entity Name   Partitioning Field   Data Type
Task          created_at           DateTime
Task          modified_at          DateTime
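A hypothetical helper (not part of AWS Glue) can encode the table above to check a chosen PARTITION_FIELD before building connection options; only Task appears in the partitioning-field table:

```python
# Mirrors the partitioning-field table above (illustrative, not a Glue API).
SUPPORTED_PARTITION_FIELDS = {
    "task": {"created_at": "DateTime", "modified_at": "DateTime"},
}

def validate_partition_field(entity: str, field: str) -> str:
    """Return the field's data type, or raise if the entity/field pair
    is not listed as supporting partitioning."""
    fields = SUPPORTED_PARTITION_FIELDS.get(entity.lower(), {})
    if field not in fields:
        raise ValueError(
            f"Entity {entity!r} does not support partitioning on {field!r}"
        )
    return fields[field]

print(validate_partition_field("Task", "created_at"))  # DateTime
```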
