CfnJobProps
- class aws_cdk.aws_glue.CfnJobProps(*, command, role, allocated_capacity=None, connections=None, default_arguments=None, description=None, execution_class=None, execution_property=None, glue_version=None, log_uri=None, max_capacity=None, max_retries=None, name=None, non_overridable_arguments=None, notification_property=None, number_of_workers=None, security_configuration=None, tags=None, timeout=None, worker_type=None)
Bases: object
Properties for defining a CfnJob.
- Parameters:
  - command (Union[IResolvable, JobCommandProperty, Dict[str, Any]]) – The code that executes a job.
  - role (str) – The name or Amazon Resource Name (ARN) of the IAM role associated with this job.
  - allocated_capacity (Union[int, float, None]) – This parameter is no longer supported. Use MaxCapacity instead. The number of capacity units that are allocated to this job.
  - connections (Union[IResolvable, ConnectionsListProperty, Dict[str, Any], None]) – The connections used for this job.
  - default_arguments (Optional[Any]) – The default arguments for this job, specified as name-value pairs. You can specify arguments here that your own job-execution script consumes, in addition to arguments that AWS Glue itself consumes. For information about how to specify and consume your own job arguments, see Calling AWS Glue APIs in Python in the AWS Glue Developer Guide. For information about the key-value pairs that AWS Glue consumes to set up your job, see Special Parameters Used by AWS Glue in the AWS Glue Developer Guide.
  - description (Optional[str]) – A description of the job.
  - execution_class (Optional[str]) – Indicates whether the job is run with a standard or flexible execution class. The standard execution class is ideal for time-sensitive workloads that require fast job startup and dedicated resources. The flexible execution class is appropriate for time-insensitive jobs whose start and completion times may vary. Only jobs with AWS Glue version 3.0 and above and command type glueetl will be allowed to set ExecutionClass to FLEX. The flexible execution class is available for Spark jobs.
  - execution_property (Union[IResolvable, ExecutionPropertyProperty, Dict[str, Any], None]) – The maximum number of concurrent runs that are allowed for this job.
  - glue_version (Optional[str]) – Glue version determines the versions of Apache Spark and Python that AWS Glue supports. The Python version indicates the version supported for jobs of type Spark. For more information about the available AWS Glue versions and corresponding Spark and Python versions, see Glue version in the developer guide. Jobs that are created without specifying a Glue version default to Glue 0.9.
  - log_uri (Optional[str]) – This field is reserved for future use.
  - max_capacity (Union[int, float, None]) – The number of AWS Glue data processing units (DPUs) that can be allocated when this job runs. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. Do not set MaxCapacity if using WorkerType and NumberOfWorkers. The value that can be allocated for MaxCapacity depends on whether you are running a Python shell job or an Apache Spark ETL job: When you specify a Python shell job (JobCommand.Name = "pythonshell"), you can allocate either 0.0625 or 1 DPU. The default is 0.0625 DPU. When you specify an Apache Spark ETL job (JobCommand.Name = "glueetl"), you can allocate from 2 to 100 DPUs. The default is 10 DPUs. This job type cannot have a fractional DPU allocation.
  - max_retries (Union[int, float, None]) – The maximum number of times to retry this job after a JobRun fails.
  - name (Optional[str]) – The name you assign to this job definition.
  - non_overridable_arguments (Optional[Any]) – Non-overridable arguments for this job, specified as name-value pairs.
  - notification_property (Union[IResolvable, NotificationPropertyProperty, Dict[str, Any], None]) – Specifies configuration properties of a notification.
  - number_of_workers (Union[int, float, None]) – The number of workers of a defined workerType that are allocated when a job runs. The maximum number of workers you can define is 299 for G.1X and 149 for G.2X.
  - security_configuration (Optional[str]) – The name of the SecurityConfiguration structure to be used with this job.
  - tags (Optional[Any]) – The tags to use with this job.
  - timeout (Union[int, float, None]) – The job timeout in minutes. This is the maximum time that a job run can consume resources before it is terminated and enters TIMEOUT status. The default is 2,880 minutes (48 hours).
  - worker_type (Optional[str]) – The type of predefined worker that is allocated when a job runs. Accepts a value of Standard, G.1X, or G.2X. For the Standard worker type, each worker provides 4 vCPU, 16 GB of memory, a 50 GB disk, and 2 executors per worker. For the G.1X worker type, each worker maps to 1 DPU (4 vCPU, 16 GB of memory, 64 GB disk) and provides 1 executor per worker. We recommend this worker type for memory-intensive jobs. For the G.2X worker type, each worker maps to 2 DPU (8 vCPU, 32 GB of memory, 128 GB disk) and provides 1 executor per worker. We recommend this worker type for memory-intensive jobs.
- Link:
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-glue-job.html
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_glue as glue

# default_arguments: Any
# non_overridable_arguments: Any
# tags: Any

cfn_job_props = glue.CfnJobProps(
    command=glue.CfnJob.JobCommandProperty(
        name="name",
        python_version="pythonVersion",
        runtime="runtime",
        script_location="scriptLocation"
    ),
    role="role",

    # the properties below are optional
    allocated_capacity=123,
    connections=glue.CfnJob.ConnectionsListProperty(
        connections=["connections"]
    ),
    default_arguments=default_arguments,
    description="description",
    execution_class="executionClass",
    execution_property=glue.CfnJob.ExecutionPropertyProperty(
        max_concurrent_runs=123
    ),
    glue_version="glueVersion",
    log_uri="logUri",
    max_capacity=123,
    max_retries=123,
    name="name",
    non_overridable_arguments=non_overridable_arguments,
    notification_property=glue.CfnJob.NotificationPropertyProperty(
        notify_delay_after=123
    ),
    number_of_workers=123,
    security_configuration="securityConfiguration",
    tags=tags,
    timeout=123,
    worker_type="workerType"
)
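The generated snippet above only builds the props object. As a rough usage sketch (not part of the generated reference), the same keyword arguments accepted by CfnJobProps can be passed directly to glue.CfnJob when declaring the resource inside a stack. The imports assume CDK v2 (aws-cdk-lib); the stack name, construct id, role ARN, and script location are placeholders.

import aws_cdk as cdk
import aws_cdk.aws_glue as glue

class MyGlueStack(cdk.Stack):
    def __init__(self, scope, construct_id, **kwargs):
        super().__init__(scope, construct_id, **kwargs)

        # Declare the L1 (CloudFormation) Glue job resource.
        glue.CfnJob(
            self, "ExampleJob",
            command=glue.CfnJob.JobCommandProperty(
                name="glueetl",
                python_version="3",
                script_location="s3://my-bucket/scripts/job.py"  # placeholder
            ),
            role="arn:aws:iam::123456789012:role/MyGlueJobRole",  # placeholder
            glue_version="3.0",
            max_retries=1,
            timeout=120
        )

app = cdk.App()
MyGlueStack(app, "MyGlueStack")
app.synth()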
Attributes
- allocated_capacity
This parameter is no longer supported. Use MaxCapacity instead. The number of capacity units that are allocated to this job.
- command
The code that executes a job.
- connections
The connections used for this job.
- default_arguments
The default arguments for this job, specified as name-value pairs.
You can specify arguments here that your own job-execution script consumes, in addition to arguments that AWS Glue itself consumes.
For information about how to specify and consume your own job arguments, see Calling AWS Glue APIs in Python in the AWS Glue Developer Guide .
For information about the key-value pairs that AWS Glue consumes to set up your job, see Special Parameters Used by AWS Glue in the AWS Glue Developer Guide .
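As a rough sketch of the expected name-value shape (not part of the reference above): "--TempDir" and "--job-bookmark-option" are AWS Glue special parameters, "--my_input_path" is a hypothetical argument consumed by your own script, and the bucket names and role are placeholders.

import aws_cdk.aws_glue as glue

default_arguments = {
    "--TempDir": "s3://my-bucket/temp/",             # placeholder bucket
    "--job-bookmark-option": "job-bookmark-enable",  # Glue special parameter
    "--my_input_path": "s3://my-bucket/input/"       # hypothetical custom argument
}

props = glue.CfnJobProps(
    command=glue.CfnJob.JobCommandProperty(
        name="glueetl",
        python_version="3",
        script_location="s3://my-bucket/scripts/job.py"  # placeholder
    ),
    role="role",
    default_arguments=default_arguments
)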
- description
A description of the job.
- execution_class
Indicates whether the job is run with a standard or flexible execution class.
The standard execution class is ideal for time-sensitive workloads that require fast job startup and dedicated resources.
The flexible execution class is appropriate for time-insensitive jobs whose start and completion times may vary.
Only jobs with AWS Glue version 3.0 and above and command type glueetl will be allowed to set ExecutionClass to FLEX. The flexible execution class is available for Spark jobs.
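A minimal sketch (not part of the reference above) of a configuration that satisfies these constraints; the script location and role are placeholders.

import aws_cdk.aws_glue as glue

flex_props = glue.CfnJobProps(
    command=glue.CfnJob.JobCommandProperty(
        name="glueetl",                                       # FLEX requires a glueetl command
        python_version="3",
        script_location="s3://my-bucket/scripts/etl_job.py"  # placeholder
    ),
    role="role",
    glue_version="3.0",       # FLEX requires Glue 3.0 or above
    execution_class="FLEX"
)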
- execution_property
The maximum number of concurrent runs that are allowed for this job.
- glue_version
Glue version determines the versions of Apache Spark and Python that AWS Glue supports.
The Python version indicates the version supported for jobs of type Spark.
For more information about the available AWS Glue versions and corresponding Spark and Python versions, see Glue version in the developer guide.
Jobs that are created without specifying a Glue version default to Glue 0.9.
- log_uri
This field is reserved for future use.
- max_capacity
The number of AWS Glue data processing units (DPUs) that can be allocated when this job runs.
A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory.
Do not set MaxCapacity if using WorkerType and NumberOfWorkers.
The value that can be allocated for MaxCapacity depends on whether you are running a Python shell job or an Apache Spark ETL job:
- When you specify a Python shell job (JobCommand.Name = "pythonshell"), you can allocate either 0.0625 or 1 DPU. The default is 0.0625 DPU.
- When you specify an Apache Spark ETL job (JobCommand.Name = "glueetl"), you can allocate from 2 to 100 DPUs. The default is 10 DPUs. This job type cannot have a fractional DPU allocation.
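Minimal sketches (not part of the reference above) of the two cases; script locations and the role are placeholders.

import aws_cdk.aws_glue as glue

# Python shell job: only 0.0625 or 1 DPU is allowed.
python_shell_props = glue.CfnJobProps(
    command=glue.CfnJob.JobCommandProperty(
        name="pythonshell",
        python_version="3",
        script_location="s3://my-bucket/scripts/shell_job.py"  # placeholder
    ),
    role="role",
    max_capacity=0.0625
)

# Apache Spark ETL job: whole numbers from 2 to 100 DPUs.
spark_etl_props = glue.CfnJobProps(
    command=glue.CfnJob.JobCommandProperty(
        name="glueetl",
        python_version="3",
        script_location="s3://my-bucket/scripts/etl_job.py"  # placeholder
    ),
    role="role",
    max_capacity=10
)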
- max_retries
The maximum number of times to retry this job after a JobRun fails.
- name
The name you assign to this job definition.
- non_overridable_arguments
Non-overridable arguments for this job, specified as name-value pairs.
- notification_property
Specifies configuration properties of a notification.
- number_of_workers
The number of workers of a defined workerType that are allocated when a job runs.
The maximum number of workers you can define is 299 for G.1X and 149 for G.2X.
- role
The name or Amazon Resource Name (ARN) of the IAM role associated with this job.
- security_configuration
The name of the SecurityConfiguration structure to be used with this job.
- tags
The tags to use with this job.
- timeout
The job timeout in minutes.
This is the maximum time that a job run can consume resources before it is terminated and enters TIMEOUT status. The default is 2,880 minutes (48 hours).
- worker_type
The type of predefined worker that is allocated when a job runs.
Accepts a value of Standard, G.1X, or G.2X.
- For the Standard worker type, each worker provides 4 vCPU, 16 GB of memory, a 50 GB disk, and 2 executors per worker.
- For the G.1X worker type, each worker maps to 1 DPU (4 vCPU, 16 GB of memory, 64 GB disk) and provides 1 executor per worker. We recommend this worker type for memory-intensive jobs.
- For the G.2X worker type, each worker maps to 2 DPU (8 vCPU, 32 GB of memory, 128 GB disk) and provides 1 executor per worker. We recommend this worker type for memory-intensive jobs.
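A minimal sketch (not part of the reference above) of sizing a job with worker_type and number_of_workers rather than max_capacity, since the two approaches are mutually exclusive; the script location and role are placeholders.

import aws_cdk.aws_glue as glue

sized_props = glue.CfnJobProps(
    command=glue.CfnJob.JobCommandProperty(
        name="glueetl",
        python_version="3",
        script_location="s3://my-bucket/scripts/etl_job.py"  # placeholder
    ),
    role="role",
    glue_version="3.0",
    worker_type="G.2X",    # each G.2X worker maps to 2 DPU
    number_of_workers=10   # well under the 149-worker limit for G.2X
)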