Class: Aws::Glue::Client

Inherits:
Seahorse::Client::Base
Includes:
ClientStubs
Defined in:
gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb

Overview

An API client for Glue. To construct a client, you need to configure a :region and :credentials.

client = Aws::Glue::Client.new(
  region: region_name,
  credentials: credentials,
  # ...
)

For details on configuring the region and credentials, see the developer guide.

See #initialize for a full list of supported configuration options.
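
For example, a minimal construction sketch with static credentials (the region and key values shown are placeholders, not real credentials):

# Construct a client with static, non-refreshing credentials.
# Replace the placeholder values with your own.
require 'aws-sdk-glue'

client = Aws::Glue::Client.new(
  region: 'us-east-1',
  credentials: Aws::Credentials.new('ACCESS_KEY_ID', 'SECRET_ACCESS_KEY')
)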

Instance Attribute Summary

Attributes inherited from Seahorse::Client::Base

#config, #handlers

API Operations

Instance Method Summary

Methods included from ClientStubs

#api_requests, #stub_data, #stub_responses

Methods inherited from Seahorse::Client::Base

add_plugin, api, clear_plugins, define, new, #operation_names, plugins, remove_plugin, set_api, set_plugins

Methods included from Seahorse::Client::HandlerBuilder

#handle, #handle_request, #handle_response

Constructor Details

#initialize(options) ⇒ Client

Returns a new instance of Client.

Parameters:

  • options (Hash)

Options Hash (options):

  • :plugins (Array<Seahorse::Client::Plugin>) — default: []

    A list of plugins to apply to the client. Each plugin is either a class name or an instance of a plugin class.

  • :credentials (required, Aws::CredentialProvider)

    Your AWS credentials. This can be an instance of any one of the following classes:

    • Aws::Credentials - Used for configuring static, non-refreshing credentials.

    • Aws::SharedCredentials - Used for loading static credentials from a shared file, such as ~/.aws/config.

    • Aws::AssumeRoleCredentials - Used when you need to assume a role.

    • Aws::AssumeRoleWebIdentityCredentials - Used when you need to assume a role after providing credentials via the web.

    • Aws::SSOCredentials - Used for loading credentials from AWS SSO using an access token generated from aws login.

    • Aws::ProcessCredentials - Used for loading credentials from a process that outputs to stdout.

    • Aws::InstanceProfileCredentials - Used for loading credentials from an EC2 IMDS on an EC2 instance.

    • Aws::ECSCredentials - Used for loading credentials from instances running in ECS.

    • Aws::CognitoIdentityCredentials - Used for loading credentials from the Cognito Identity service.

    When :credentials are not configured directly, the following locations will be searched for credentials:

    • Aws.config[:credentials]
    • The :access_key_id, :secret_access_key, :session_token, and :account_id options.
    • ENV['AWS_ACCESS_KEY_ID'], ENV['AWS_SECRET_ACCESS_KEY'], ENV['AWS_SESSION_TOKEN'], and ENV['AWS_ACCOUNT_ID']
    • ~/.aws/credentials
    • ~/.aws/config
    • EC2/ECS IMDS instance profile - When used by default, the timeouts are very aggressive. Construct and pass an instance of Aws::InstanceProfileCredentials or Aws::ECSCredentials to enable retries and extended timeouts (a configuration sketch follows the end of this options list). Instance profile credential fetching can be disabled by setting ENV['AWS_EC2_METADATA_DISABLED'] to true.
  • :region (required, String)

    The AWS region to connect to. The configured :region is used to determine the service :endpoint. When not passed, a default :region is searched for in the following locations:

    • Aws.config[:region]
    • ENV['AWS_REGION']
    • ENV['AMAZON_REGION']
    • ENV['AWS_DEFAULT_REGION']
    • ~/.aws/credentials
    • ~/.aws/config
  • :access_key_id (String)
  • :account_id (String)
  • :active_endpoint_cache (Boolean) — default: false

    When set to true, a background thread will poll for endpoints every 60 seconds by default. Defaults to false.

  • :adaptive_retry_wait_to_fill (Boolean) — default: true

    Used only in adaptive retry mode. When true, the request will sleep until there is sufficient client-side capacity to retry the request. When false, the request will not sleep; instead it will raise a RetryCapacityNotAvailableError and will not retry.

  • :client_side_monitoring (Boolean) — default: false

    When true, client-side metrics will be collected for all API requests from this client.

  • :client_side_monitoring_client_id (String) — default: ""

    Allows you to provide an identifier for this client which will be attached to all generated client side metrics. Defaults to an empty string.

  • :client_side_monitoring_host (String) — default: "127.0.0.1"

    Allows you to specify the DNS hostname or IPv4 or IPv6 address that the client side monitoring agent is running on, where client metrics will be published via UDP.

  • :client_side_monitoring_port (Integer) — default: 31000

    Required for publishing client metrics. The port that the client side monitoring agent is running on, where client metrics will be published via UDP.

  • :client_side_monitoring_publisher (Aws::ClientSideMonitoring::Publisher) — default: Aws::ClientSideMonitoring::Publisher

    Allows you to provide a custom client-side monitoring publisher class. By default, will use the Client Side Monitoring Agent Publisher.

  • :convert_params (Boolean) — default: true

    When true, an attempt is made to coerce request parameters into the required types.

  • :correct_clock_skew (Boolean) — default: true

    Used only in standard and adaptive retry modes. Specifies whether to apply a clock skew correction and retry requests with skewed client clocks.

  • :defaults_mode (String) — default: "legacy"

    See DefaultsModeConfiguration for a list of the accepted modes and the configuration defaults that are included.

  • :disable_host_prefix_injection (Boolean) — default: false

    Set to true to prevent the SDK from automatically adding a host prefix to the default service endpoint when available.

  • :disable_request_compression (Boolean) — default: false

    When set to 'true', the request body will not be compressed for supported operations.

  • :endpoint (String, URI::HTTPS, URI::HTTP)

    Normally you should not configure the :endpoint option directly; it is constructed from the :region option. Configuring :endpoint is typically reserved for connecting to test or custom endpoints. The endpoint should be a URI formatted like:

    'http://example.com'
    'https://example.com'
    'http://example.com:123'
    
  • :endpoint_cache_max_entries (Integer) — default: 1000

    The maximum size of the LRU cache that stores endpoint data for endpoint-discovery-enabled operations. Defaults to 1000.

  • :endpoint_cache_max_threads (Integer) — default: 10

    The maximum number of threads used to poll endpoints for caching. Defaults to 10.

  • :endpoint_cache_poll_interval (Integer) — default: 60

    When :endpoint_discovery and :active_endpoint_cache are enabled, use this option to configure the interval, in seconds, between requests that fetch endpoint information. Defaults to 60 seconds.

  • :endpoint_discovery (Boolean) — default: false

    When set to true, endpoint discovery will be enabled for operations when available.

  • :ignore_configured_endpoint_urls (Boolean)

    Setting to true disables use of endpoint URLs provided via environment variables and the shared configuration file.

  • :log_formatter (Aws::Log::Formatter) — default: Aws::Log::Formatter.default

    The log formatter.

  • :log_level (Symbol) — default: :info

    The log level to send messages to the :logger at.

  • :logger (Logger)

    The Logger instance to send log messages to. If this option is not set, logging will be disabled.

  • :max_attempts (Integer) — default: 3

    An integer representing the maximum number of attempts that will be made for a single request, including the initial attempt. For example, setting this value to 5 will result in a request being retried up to 4 times. Used in standard and adaptive retry modes.

  • :profile (String) — default: "default"

    Used when loading credentials from the shared credentials file at ~/.aws/credentials. When not specified, 'default' is used.

  • :request_checksum_calculation (String) — default: "when_supported"

    Determines when a checksum will be calculated for request payloads. Values are:

    • when_supported - (default) When set, a checksum will be calculated for all request payloads of operations modeled with the httpChecksum trait where requestChecksumRequired is true and/or a requestAlgorithmMember is modeled.
    • when_required - When set, a checksum will only be calculated for request payloads of operations modeled with the httpChecksum trait where requestChecksumRequired is true or where a requestAlgorithmMember is modeled and supplied.
  • :request_min_compression_size_bytes (Integer) — default: 10240

    The minimum size in bytes that triggers compression for request bodies. The value must be a non-negative integer between 0 and 10485780 bytes, inclusive.

  • :response_checksum_validation (String) — default: "when_supported"

    Determines when checksum validation will be performed on response payloads. Values are:

    • when_supported - (default) When set, checksum validation is performed on all response payloads of operations modeled with the httpChecksum trait where responseAlgorithms is modeled, except when no modeled checksum algorithms are supported.
    • when_required - When set, checksum validation is not performed on response payloads of operations unless the checksum algorithm is supported and the requestValidationModeMember member is set to ENABLED.
  • :retry_backoff (Proc)

    A proc or lambda used for backoff. Defaults to 2**retries * retry_base_delay. This option is only used in the legacy retry mode.

  • :retry_base_delay (Float) — default: 0.3

    The base delay in seconds used by the default backoff function. This option is only used in the legacy retry mode.

  • :retry_jitter (Symbol) — default: :none

    A delay randomiser function used by the default backoff function. Some predefined functions can be referenced by name - :none, :equal, :full, otherwise a Proc that takes and returns a number. This option is only used in the legacy retry mode.

    See https://www.awsarchitectureblog.com/2015/03/backoff.html

  • :retry_limit (Integer) — default: 3

    The maximum number of times to retry failed requests. Only ~ 500 level server errors and certain ~ 400 level client errors are retried. Generally, these are throttling errors, data checksum errors, networking errors, timeout errors, auth errors, endpoint discovery, and errors from expired credentials. This option is only used in the legacy retry mode.

  • :retry_max_delay (Integer) — default: 0

    The maximum number of seconds to delay between retries (0 for no limit) used by the default backoff function. This option is only used in the legacy retry mode.

  • :retry_mode (String) — default: "legacy"

    Specifies which retry algorithm to use. Values are:

    • legacy - The pre-existing retry behavior. This is the default value if no retry mode is provided.

    • standard - A standardized set of retry rules across the AWS SDKs. This includes support for retry quotas, which limit the number of unsuccessful retries a client can make.

    • adaptive - An experimental retry mode that includes all the functionality of standard mode along with automatic client side throttling. This is a provisional mode that may change behavior in the future.

  • :sdk_ua_app_id (String)

    A unique and opaque application ID that is appended to the User-Agent header as app/sdk_ua_app_id. It should have a maximum length of 50. This variable is sourced from environment variable AWS_SDK_UA_APP_ID or the shared config profile attribute sdk_ua_app_id.

  • :secret_access_key (String)
  • :session_token (String)
  • :sigv4a_signing_region_set (Array)

    A list of regions that should be signed with SigV4a signing. When not passed, a default :sigv4a_signing_region_set is searched for in the following locations:

    • Aws.config[:sigv4a_signing_region_set]
    • ENV['AWS_SIGV4A_SIGNING_REGION_SET']
    • ~/.aws/config
  • :simple_json (Boolean) — default: false

    Disables request parameter conversion, validation, and formatting. Also disables response data type conversions. The request parameters hash must be formatted exactly as the API expects. This option is useful when you want to ensure the highest level of performance by avoiding the overhead of walking request parameters and response data structures.

  • :stub_responses (Boolean) — default: false

    Causes the client to return stubbed responses. By default fake responses are generated and returned. You can specify the response data to return or errors to raise by calling ClientStubs#stub_responses. See ClientStubs for more information.

    Please note: when response stubbing is enabled, no HTTP requests are made, and retries are disabled. (A stubbing sketch follows the constructor source below.)

  • :telemetry_provider (Aws::Telemetry::TelemetryProviderBase) — default: Aws::Telemetry::NoOpTelemetryProvider

    Allows you to provide a telemetry provider, which is used to emit telemetry data. By default, uses NoOpTelemetryProvider which will not record or emit any telemetry data. The SDK supports the following telemetry providers:

    • OpenTelemetry (OTel) - To use the OTel provider, install and require the opentelemetry-sdk gem and then pass in an instance of Aws::Telemetry::OTelProvider as the telemetry provider.
  • :token_provider (Aws::TokenProvider)

    A Bearer Token Provider. This can be an instance of any one of the following classes:

    • Aws::StaticTokenProvider - Used for configuring static, non-refreshing tokens.

    • Aws::SSOTokenProvider - Used for loading tokens from AWS SSO using an access token generated from aws login.

    When :token_provider is not configured directly, the Aws::TokenProviderChain will be used to search for tokens configured for your profile in shared configuration files.

  • :use_dualstack_endpoint (Boolean)

    When set to true, dualstack enabled endpoints (with .aws TLD) will be used if available.

  • :use_fips_endpoint (Boolean)

    When set to true, FIPS-compatible endpoints will be used if available. When a FIPS region is used, the region is normalized and this config is set to true.

  • :validate_params (Boolean) — default: true

    When true, request parameters are validated before sending the request.

  • :endpoint_provider (Aws::Glue::EndpointProvider)

    The endpoint provider used to resolve endpoints. Any object that responds to #resolve_endpoint(parameters) where parameters is a Struct similar to Aws::Glue::EndpointParameters.

  • :http_continue_timeout (Float) — default: 1

    The number of seconds to wait for a 100-continue response before sending the request body. This option has no effect unless the request has an "Expect" header set to "100-continue". Set to nil to disable this behaviour. This value can safely be set per request on the session.

  • :http_idle_timeout (Float) — default: 5

    The number of seconds a connection is allowed to sit idle before it is considered stale. Stale connections are closed and removed from the pool before making a request.

  • :http_open_timeout (Float) — default: 15

    The number of seconds to wait when opening an HTTP session before raising a Net::OpenTimeout.

  • :http_proxy (URI::HTTP, String)

    A proxy to send requests through. Formatted like 'http://proxy.com:123'.

  • :http_read_timeout (Float) — default: 60

    The default number of seconds to wait for response data. This value can safely be set per-request on the session.

  • :http_wire_trace (Boolean) — default: false

    When true, HTTP debug output will be sent to the :logger.

  • :on_chunk_received (Proc)

    When a Proc object is provided, it will be used as callback when each chunk of the response body is received. It provides three arguments: the chunk, the number of bytes received, and the total number of bytes in the response (or nil if the server did not send a content-length).

  • :on_chunk_sent (Proc)

    When a Proc object is provided, it will be used as callback when each chunk of the request body is sent. It provides three arguments: the chunk, the number of bytes read from the body, and the total number of bytes in the body.

  • :raise_response_errors (Boolean) — default: true

    When true, response errors are raised.

  • :ssl_ca_bundle (String)

    Full path to the SSL certificate authority bundle file that should be used when verifying peer certificates. If you do not pass :ssl_ca_bundle or :ssl_ca_directory, the system default will be used if available.

  • :ssl_ca_directory (String)

    Full path of the directory that contains the unbundled SSL certificate authority files for verifying peer certificates. If you do not pass :ssl_ca_bundle or :ssl_ca_directory, the system default will be used if available.

  • :ssl_ca_store (String)

    Sets the X509::Store used to verify peer certificates.

  • :ssl_cert (OpenSSL::X509::Certificate)

    Sets a client certificate when creating http connections.

  • :ssl_key (OpenSSL::PKey)

    Sets a client key when creating http connections.

  • :ssl_timeout (Float)

    Sets the SSL timeout in seconds.

  • :ssl_verify_peer (Boolean) — default: true

    When true, SSL peer certificates are verified when establishing a connection.
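
As a combined illustration of several options documented above, here is a hedged configuration sketch; every value is an example rather than a recommendation, and the instance profile credentials assume the code runs on an EC2 host:

# Configuration sketch combining retry, logging, and timeout options.
require 'aws-sdk-glue'
require 'logger'

client = Aws::Glue::Client.new(
  region: 'us-west-2',
  # Explicit construction enables retries and extended timeouts for IMDS.
  credentials: Aws::InstanceProfileCredentials.new(retries: 3),
  retry_mode: 'standard',      # standardized retry rules with retry quotas
  max_attempts: 5,             # the initial attempt plus up to 4 retries
  logger: Logger.new($stdout), # enable request logging
  http_open_timeout: 15,       # seconds to wait when opening a session
  http_read_timeout: 60        # seconds to wait for response data
)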

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 474

def initialize(*args)
  super
end
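
When :stub_responses is enabled, no HTTP requests are made and responses can be canned with ClientStubs#stub_responses. A minimal stubbing sketch (the stub data is illustrative):

# Stubbing sketch: useful in tests; no real service is contacted.
client = Aws::Glue::Client.new(region: 'us-east-1', stub_responses: true)
client.stub_responses(:batch_delete_connection, {
  succeeded: ['my-connection'], # illustrative data
  errors: {}
})
resp = client.batch_delete_connection(connection_name_list: ['my-connection'])
resp.succeeded #=> ["my-connection"]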

Instance Method Details

#batch_create_partition(params = {}) ⇒ Types::BatchCreatePartitionResponse

Creates one or more partitions in a batch operation.

Examples:

Request syntax with placeholder values


resp = client.batch_create_partition({
  catalog_id: "CatalogIdString",
  database_name: "NameString", # required
  table_name: "NameString", # required
  partition_input_list: [ # required
    {
      values: ["ValueString"],
      last_access_time: Time.now,
      storage_descriptor: {
        columns: [
          {
            name: "NameString", # required
            type: "ColumnTypeString",
            comment: "CommentString",
            parameters: {
              "KeyString" => "ParametersMapValue",
            },
          },
        ],
        location: "LocationString",
        additional_locations: ["LocationString"],
        input_format: "FormatString",
        output_format: "FormatString",
        compressed: false,
        number_of_buckets: 1,
        serde_info: {
          name: "NameString",
          serialization_library: "NameString",
          parameters: {
            "KeyString" => "ParametersMapValue",
          },
        },
        bucket_columns: ["NameString"],
        sort_columns: [
          {
            column: "NameString", # required
            sort_order: 1, # required
          },
        ],
        parameters: {
          "KeyString" => "ParametersMapValue",
        },
        skewed_info: {
          skewed_column_names: ["NameString"],
          skewed_column_values: ["ColumnValuesString"],
          skewed_column_value_location_maps: {
            "ColumnValuesString" => "ColumnValuesString",
          },
        },
        stored_as_sub_directories: false,
        schema_reference: {
          schema_id: {
            schema_arn: "GlueResourceArn",
            schema_name: "SchemaRegistryNameString",
            registry_name: "SchemaRegistryNameString",
          },
          schema_version_id: "SchemaVersionIdString",
          schema_version_number: 1,
        },
      },
      parameters: {
        "KeyString" => "ParametersMapValue",
      },
      last_analyzed_time: Time.now,
    },
  ],
})

Response structure


resp.errors #=> Array
resp.errors[0].partition_values #=> Array
resp.errors[0].partition_values[0] #=> String
resp.errors[0].error_detail.error_code #=> String
resp.errors[0].error_detail.error_message #=> String

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :catalog_id (String)

    The ID of the catalog in which the partition is to be created. Currently, this should be the Amazon Web Services account ID.

  • :database_name (required, String)

    The name of the metadata database in which the partition is to be created.

  • :table_name (required, String)

    The name of the metadata table in which the partition is to be created.

  • :partition_input_list (required, Array<Types::PartitionInput>)

    A list of PartitionInput structures that define the partitions to be created.

Returns:

  • (Types::BatchCreatePartitionResponse)

See Also:

  • AWS API Documentation

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 584

def batch_create_partition(params = {}, options = {})
  req = build_request(:batch_create_partition, params)
  req.send_request(options)
end
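
A brief usage sketch for this operation; the database, table, and partition values are hypothetical:

# Create one partition and surface any per-partition errors.
resp = client.batch_create_partition(
  database_name: 'my_database', # hypothetical
  table_name: 'my_table',       # hypothetical
  partition_input_list: [
    { values: ['2024-01-01'] }  # one value per partition key column
  ]
)
resp.errors.each do |err|
  warn "#{err.partition_values.join('/')}: #{err.error_detail.error_message}"
end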

#batch_delete_connection(params = {}) ⇒ Types::BatchDeleteConnectionResponse

Deletes a list of connection definitions from the Data Catalog.

Examples:

Request syntax with placeholder values


resp = client.batch_delete_connection({
  catalog_id: "CatalogIdString",
  connection_name_list: ["NameString"], # required
})

Response structure


resp.succeeded #=> Array
resp.succeeded[0] #=> String
resp.errors #=> Hash
resp.errors["NameString"].error_code #=> String
resp.errors["NameString"].error_message #=> String

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :catalog_id (String)

    The ID of the Data Catalog in which the connections reside. If none is provided, the Amazon Web Services account ID is used by default.

  • :connection_name_list (required, Array<String>)

    A list of names of the connections to delete.

Returns:

  • (Types::BatchDeleteConnectionResponse)

See Also:

  • AWS API Documentation

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 622

def batch_delete_connection(params = {}, options = {})
  req = build_request(:batch_delete_connection, params)
  req.send_request(options)
end
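
A brief usage sketch; the connection names are placeholders:

# Delete two connections and report per-connection failures.
resp = client.batch_delete_connection(
  connection_name_list: ['conn-a', 'conn-b'] # hypothetical names
)
puts "deleted: #{resp.succeeded.join(', ')}"
resp.errors.each do |name, detail|
  warn "#{name}: #{detail.error_code} #{detail.error_message}"
end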

#batch_delete_partition(params = {}) ⇒ Types::BatchDeletePartitionResponse

Deletes one or more partitions in a batch operation.

Examples:

Request syntax with placeholder values


resp = client.batch_delete_partition({
  catalog_id: "CatalogIdString",
  database_name: "NameString", # required
  table_name: "NameString", # required
  partitions_to_delete: [ # required
    {
      values: ["ValueString"], # required
    },
  ],
})

Response structure


resp.errors #=> Array
resp.errors[0].partition_values #=> Array
resp.errors[0].partition_values[0] #=> String
resp.errors[0].error_detail.error_code #=> String
resp.errors[0].error_detail.error_message #=> String

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :catalog_id (String)

    The ID of the Data Catalog where the partition to be deleted resides. If none is provided, the Amazon Web Services account ID is used by default.

  • :database_name (required, String)

    The name of the catalog database in which the table in question resides.

  • :table_name (required, String)

    The name of the table that contains the partitions to be deleted.

  • :partitions_to_delete (required, Array<Types::PartitionValueList>)

    A list of PartitionValueList structures that define the partitions to be deleted.

Returns:

  • (Types::BatchDeletePartitionResponse)

See Also:

  • AWS API Documentation

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 674

def batch_delete_partition(params = {}, options = {})
  req = build_request(:batch_delete_partition, params)
  req.send_request(options)
end
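
A brief usage sketch; the identifiers and partition values are placeholders:

# Delete one partition by its key values and check for errors.
resp = client.batch_delete_partition(
  database_name: 'my_database', # hypothetical
  table_name: 'my_table',       # hypothetical
  partitions_to_delete: [{ values: ['2024-01-01'] }]
)
warn 'some partitions were not deleted' unless resp.errors.empty?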

#batch_delete_table(params = {}) ⇒ Types::BatchDeleteTableResponse

Deletes multiple tables at once.

After completing this operation, you no longer have access to the table versions and partitions that belong to the deleted table. Glue deletes these "orphaned" resources asynchronously in a timely manner, at the discretion of the service.

To ensure the immediate deletion of all related resources before calling BatchDeleteTable, use DeleteTableVersion or BatchDeleteTableVersion, and DeletePartition or BatchDeletePartition, to delete any resources that belong to the table (a sketch of this order follows the method definition below).

Examples:

Request syntax with placeholder values


resp = client.batch_delete_table({
  catalog_id: "CatalogIdString",
  database_name: "NameString", # required
  tables_to_delete: ["NameString"], # required
  transaction_id: "TransactionIdString",
})

Response structure


resp.errors #=> Array
resp.errors[0].table_name #=> String
resp.errors[0].error_detail.error_code #=> String
resp.errors[0].error_detail.error_message #=> String

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :catalog_id (String)

    The ID of the Data Catalog where the table resides. If none is provided, the Amazon Web Services account ID is used by default.

  • :database_name (required, String)

    The name of the catalog database in which the tables to delete reside. For Hive compatibility, this name is entirely lowercase.

  • :tables_to_delete (required, Array<String>)

    A list of the tables to delete.

  • :transaction_id (String)

    The transaction ID at which to delete the table contents.

Returns:

  • (Types::BatchDeleteTableResponse)

See Also:

  • AWS API Documentation

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 732

def batch_delete_table(params = {}, options = {})
  req = build_request(:batch_delete_table, params)
  req.send_request(options)
end
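
A hedged sketch of the deletion order recommended above; the names are placeholders, and in practice the version IDs and partition values would come from calls such as GetTableVersions and GetPartitions:

# Delete dependent versions and partitions first, then the table itself.
client.batch_delete_table_version(
  database_name: 'my_database', table_name: 'my_table',
  version_ids: ['1'] # illustrative version ID
)
client.batch_delete_partition(
  database_name: 'my_database', table_name: 'my_table',
  partitions_to_delete: [{ values: ['2024-01-01'] }] # illustrative values
)
resp = client.batch_delete_table(
  database_name: 'my_database', tables_to_delete: ['my_table']
)
resp.errors.each { |e| warn "#{e.table_name}: #{e.error_detail.error_message}" }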

#batch_delete_table_version(params = {}) ⇒ Types::BatchDeleteTableVersionResponse

Deletes a specified batch of versions of a table.

Examples:

Request syntax with placeholder values


resp = client.batch_delete_table_version({
  catalog_id: "CatalogIdString",
  database_name: "NameString", # required
  table_name: "NameString", # required
  version_ids: ["VersionString"], # required
})

Response structure


resp.errors #=> Array
resp.errors[0].table_name #=> String
resp.errors[0].version_id #=> String
resp.errors[0].error_detail.error_code #=> String
resp.errors[0].error_detail.error_message #=> String

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :catalog_id (String)

    The ID of the Data Catalog where the tables reside. If none is provided, the Amazon Web Services account ID is used by default.

  • :database_name (required, String)

    The database in the catalog in which the table resides. For Hive compatibility, this name is entirely lowercase.

  • :table_name (required, String)

    The name of the table. For Hive compatibility, this name is entirely lowercase.

  • :version_ids (required, Array<String>)

    A list of the IDs of versions to be deleted. A VersionId is a string representation of an integer. Each version is incremented by 1.

Returns:

  • (Types::BatchDeleteTableVersionResponse)

See Also:

  • AWS API Documentation

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 780

def batch_delete_table_version(params = {}, options = {})
  req = build_request(:batch_delete_table_version, params)
  req.send_request(options)
end
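
A brief usage sketch; the version IDs are illustrative:

# Delete two specific versions of a table.
resp = client.batch_delete_table_version(
  database_name: 'my_database', # hypothetical
  table_name: 'my_table',       # hypothetical
  version_ids: ['1', '2']
)
resp.errors.each do |e|
  warn "#{e.table_name} v#{e.version_id}: #{e.error_detail.error_message}"
end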

#batch_get_blueprints(params = {}) ⇒ Types::BatchGetBlueprintsResponse

Retrieves information about a list of blueprints.

Examples:

Request syntax with placeholder values


resp = client.batch_get_blueprints({
  names: ["OrchestrationNameString"], # required
  include_blueprint: false,
  include_parameter_spec: false,
})

Response structure


resp.blueprints #=> Array
resp.blueprints[0].name #=> String
resp.blueprints[0].description #=> String
resp.blueprints[0].created_on #=> Time
resp.blueprints[0].last_modified_on #=> Time
resp.blueprints[0].parameter_spec #=> String
resp.blueprints[0].blueprint_location #=> String
resp.blueprints[0].blueprint_service_location #=> String
resp.blueprints[0].status #=> String, one of "CREATING", "ACTIVE", "UPDATING", "FAILED"
resp.blueprints[0].error_message #=> String
resp.blueprints[0].last_active_definition.description #=> String
resp.blueprints[0].last_active_definition.last_modified_on #=> Time
resp.blueprints[0].last_active_definition.parameter_spec #=> String
resp.blueprints[0].last_active_definition.blueprint_location #=> String
resp.blueprints[0].last_active_definition.blueprint_service_location #=> String
resp.missing_blueprints #=> Array
resp.missing_blueprints[0] #=> String

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :names (required, Array<String>)

    A list of blueprint names.

  • :include_blueprint (Boolean)

    Specifies whether or not to include the blueprint in the response.

  • :include_parameter_spec (Boolean)

    Specifies whether or not to include the parameters, as a JSON string, for the blueprint in the response.

Returns:

  • (Types::BatchGetBlueprintsResponse)

See Also:

  • AWS API Documentation

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 834

def batch_get_blueprints(params = {}, options = {})
  req = build_request(:batch_get_blueprints, params)
  req.send_request(options)
end
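
A brief usage sketch; the blueprint name is a placeholder:

# Fetch blueprint metadata and report any names that were not found.
resp = client.batch_get_blueprints(names: ['my-blueprint']) # hypothetical name
resp.blueprints.each { |b| puts "#{b.name}: #{b.status}" }
warn "missing: #{resp.missing_blueprints.join(', ')}" unless resp.missing_blueprints.empty?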

#batch_get_crawlers(params = {}) ⇒ Types::BatchGetCrawlersResponse

Returns a list of resource metadata for a given list of crawler names. After calling the ListCrawlers operation, you can call this operation to access the data to which you have been granted permissions. This operation supports all IAM permissions, including permission conditions that use tags.

Examples:

Request syntax with placeholder values


resp = client.batch_get_crawlers({
  crawler_names: ["NameString"], # required
})

Response structure


resp.crawlers #=> Array
resp.crawlers[0].name #=> String
resp.crawlers[0].role #=> String
resp.crawlers[0].targets.s3_targets #=> Array
resp.crawlers[0].targets.s3_targets[0].path #=> String
resp.crawlers[0].targets.s3_targets[0].exclusions #=> Array
resp.crawlers[0].targets.s3_targets[0].exclusions[0] #=> String
resp.crawlers[0].targets.s3_targets[0].connection_name #=> String
resp.crawlers[0].targets.s3_targets[0].sample_size #=> Integer
resp.crawlers[0].targets.s3_targets[0].event_queue_arn #=> String
resp.crawlers[0].targets.s3_targets[0].dlq_event_queue_arn #=> String
resp.crawlers[0].targets.jdbc_targets #=> Array
resp.crawlers[0].targets.jdbc_targets[0].connection_name #=> String
resp.crawlers[0].targets.jdbc_targets[0].path #=> String
resp.crawlers[0].targets.jdbc_targets[0].exclusions #=> Array
resp.crawlers[0].targets.jdbc_targets[0].exclusions[0] #=> String
resp.crawlers[0].targets.jdbc_targets[0].enable_additional_metadata #=> Array
resp.crawlers[0].targets.jdbc_targets[0].enable_additional_metadata[0] #=> String, one of "COMMENTS", "RAWTYPES"
resp.crawlers[0].targets.mongo_db_targets #=> Array
resp.crawlers[0].targets.mongo_db_targets[0].connection_name #=> String
resp.crawlers[0].targets.mongo_db_targets[0].path #=> String
resp.crawlers[0].targets.mongo_db_targets[0].scan_all #=> Boolean
resp.crawlers[0].targets.dynamo_db_targets #=> Array
resp.crawlers[0].targets.dynamo_db_targets[0].path #=> String
resp.crawlers[0].targets.dynamo_db_targets[0].scan_all #=> Boolean
resp.crawlers[0].targets.dynamo_db_targets[0].scan_rate #=> Float
resp.crawlers[0].targets.catalog_targets #=> Array
resp.crawlers[0].targets.catalog_targets[0].database_name #=> String
resp.crawlers[0].targets.catalog_targets[0].tables #=> Array
resp.crawlers[0].targets.catalog_targets[0].tables[0] #=> String
resp.crawlers[0].targets.catalog_targets[0].connection_name #=> String
resp.crawlers[0].targets.catalog_targets[0].event_queue_arn #=> String
resp.crawlers[0].targets.catalog_targets[0].dlq_event_queue_arn #=> String
resp.crawlers[0].targets.delta_targets #=> Array
resp.crawlers[0].targets.delta_targets[0].delta_tables #=> Array
resp.crawlers[0].targets.delta_targets[0].delta_tables[0] #=> String
resp.crawlers[0].targets.delta_targets[0].connection_name #=> String
resp.crawlers[0].targets.delta_targets[0].write_manifest #=> Boolean
resp.crawlers[0].targets.delta_targets[0].create_native_delta_table #=> Boolean
resp.crawlers[0].targets.iceberg_targets #=> Array
resp.crawlers[0].targets.iceberg_targets[0].paths #=> Array
resp.crawlers[0].targets.iceberg_targets[0].paths[0] #=> String
resp.crawlers[0].targets.iceberg_targets[0].connection_name #=> String
resp.crawlers[0].targets.iceberg_targets[0].exclusions #=> Array
resp.crawlers[0].targets.iceberg_targets[0].exclusions[0] #=> String
resp.crawlers[0].targets.iceberg_targets[0].maximum_traversal_depth #=> Integer
resp.crawlers[0].targets.hudi_targets #=> Array
resp.crawlers[0].targets.hudi_targets[0].paths #=> Array
resp.crawlers[0].targets.hudi_targets[0].paths[0] #=> String
resp.crawlers[0].targets.hudi_targets[0].connection_name #=> String
resp.crawlers[0].targets.hudi_targets[0].exclusions #=> Array
resp.crawlers[0].targets.hudi_targets[0].exclusions[0] #=> String
resp.crawlers[0].targets.hudi_targets[0].maximum_traversal_depth #=> Integer
resp.crawlers[0].database_name #=> String
resp.crawlers[0].description #=> String
resp.crawlers[0].classifiers #=> Array
resp.crawlers[0].classifiers[0] #=> String
resp.crawlers[0].recrawl_policy.recrawl_behavior #=> String, one of "CRAWL_EVERYTHING", "CRAWL_NEW_FOLDERS_ONLY", "CRAWL_EVENT_MODE"
resp.crawlers[0].schema_change_policy.update_behavior #=> String, one of "LOG", "UPDATE_IN_DATABASE"
resp.crawlers[0].schema_change_policy.delete_behavior #=> String, one of "LOG", "DELETE_FROM_DATABASE", "DEPRECATE_IN_DATABASE"
resp.crawlers[0].lineage_configuration.crawler_lineage_settings #=> String, one of "ENABLE", "DISABLE"
resp.crawlers[0].state #=> String, one of "READY", "RUNNING", "STOPPING"
resp.crawlers[0].table_prefix #=> String
resp.crawlers[0].schedule.schedule_expression #=> String
resp.crawlers[0].schedule.state #=> String, one of "SCHEDULED", "NOT_SCHEDULED", "TRANSITIONING"
resp.crawlers[0].crawl_elapsed_time #=> Integer
resp.crawlers[0].creation_time #=> Time
resp.crawlers[0].last_updated #=> Time
resp.crawlers[0].last_crawl.status #=> String, one of "SUCCEEDED", "CANCELLED", "FAILED"
resp.crawlers[0].last_crawl.error_message #=> String
resp.crawlers[0].last_crawl.log_group #=> String
resp.crawlers[0].last_crawl.log_stream #=> String
resp.crawlers[0].last_crawl.message_prefix #=> String
resp.crawlers[0].last_crawl.start_time #=> Time
resp.crawlers[0].version #=> Integer
resp.crawlers[0].configuration #=> String
resp.crawlers[0].crawler_security_configuration #=> String
resp.crawlers[0].lake_formation_configuration.use_lake_formation_credentials #=> Boolean
resp.crawlers[0].lake_formation_configuration.account_id #=> String
resp.crawlers_not_found #=> Array
resp.crawlers_not_found[0] #=> String

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :crawler_names (required, Array<String>)

    A list of crawler names, which might be the names returned from the ListCrawlers operation.

Returns:

  • (Types::BatchGetCrawlersResponse)

See Also:

  • AWS API Documentation

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 948

def batch_get_crawlers(params = {}, options = {})
  req = build_request(:batch_get_crawlers, params)
  req.send_request(options)
end
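
A brief usage sketch pairing this call with ListCrawlers as described above; for brevity it assumes all crawler names fit on the first page of results:

# List crawler names, then fetch their full metadata in one batch.
names = client.list_crawlers.crawler_names # first page only, for brevity
resp = client.batch_get_crawlers(crawler_names: names)
resp.crawlers.each { |c| puts "#{c.name}: #{c.state}" }
warn "not found: #{resp.crawlers_not_found.join(', ')}" unless resp.crawlers_not_found.empty?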

#batch_get_custom_entity_types(params = {}) ⇒ Types::BatchGetCustomEntityTypesResponse

Retrieves the details for the custom patterns specified by a list of names.

Examples:

Request syntax with placeholder values


resp = client.batch_get_custom_entity_types({
  names: ["NameString"], # required
})

Response structure


resp.custom_entity_types #=> Array
resp.custom_entity_types[0].name #=> String
resp.custom_entity_types[0].regex_string #=> String
resp.custom_entity_types[0].context_words #=> Array
resp.custom_entity_types[0].context_words[0] #=> String
resp.custom_entity_types_not_found #=> Array
resp.custom_entity_types_not_found[0] #=> String

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :names (required, Array<String>)

    A list of names of the custom patterns that you want to retrieve.

Returns:

  • (Types::BatchGetCustomEntityTypesResponse)

See Also:

  • AWS API Documentation

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 984

def batch_get_custom_entity_types(params = {}, options = {})
  req = build_request(:batch_get_custom_entity_types, params)
  req.send_request(options)
end
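
A brief usage sketch; the pattern name is a placeholder:

# Retrieve custom pattern details and inspect each regex.
resp = client.batch_get_custom_entity_types(names: ['MY_PATTERN']) # hypothetical
resp.custom_entity_types.each { |t| puts "#{t.name}: #{t.regex_string}" }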

#batch_get_data_quality_result(params = {}) ⇒ Types::BatchGetDataQualityResultResponse

Retrieves a list of data quality results for the specified result IDs.

Examples:

Request syntax with placeholder values


resp = client.batch_get_data_quality_result({
  result_ids: ["HashString"], # required
})

Response structure


resp.results #=> Array
resp.results[0].result_id #=> String
resp.results[0].profile_id #=> String
resp.results[0].score #=> Float
resp.results[0].data_source.glue_table.database_name #=> String
resp.results[0].data_source.glue_table.table_name #=> String
resp.results[0].data_source.glue_table.catalog_id #=> String
resp.results[0].data_source.glue_table.connection_name #=> String
resp.results[0].data_source.glue_table.additional_options #=> Hash
resp.results[0].data_source.glue_table.additional_options["NameString"] #=> String
resp.results[0].ruleset_name #=> String
resp.results[0].evaluation_context #=> String
resp.results[0].started_on #=> Time
resp.results[0].completed_on #=> Time
resp.results[0].job_name #=> String
resp.results[0].job_run_id #=> String
resp.results[0].ruleset_evaluation_run_id #=> String
resp.results[0].rule_results #=> Array
resp.results[0].rule_results[0].name #=> String
resp.results[0].rule_results[0].description #=> String
resp.results[0].rule_results[0].evaluation_message #=> String
resp.results[0].rule_results[0].result #=> String, one of "PASS", "FAIL", "ERROR"
resp.results[0].rule_results[0].evaluated_metrics #=> Hash
resp.results[0].rule_results[0].evaluated_metrics["NameString"] #=> Float
resp.results[0].rule_results[0].evaluated_rule #=> String
resp.results[0].analyzer_results #=> Array
resp.results[0].analyzer_results[0].name #=> String
resp.results[0].analyzer_results[0].description #=> String
resp.results[0].analyzer_results[0].evaluation_message #=> String
resp.results[0].analyzer_results[0].evaluated_metrics #=> Hash
resp.results[0].analyzer_results[0].evaluated_metrics["NameString"] #=> Float
resp.results[0].observations #=> Array
resp.results[0].observations[0].description #=> String
resp.results[0].observations[0].metric_based_observation.metric_name #=> String
resp.results[0].observations[0].metric_based_observation.statistic_id #=> String
resp.results[0].observations[0].metric_based_observation.metric_values.actual_value #=> Float
resp.results[0].observations[0].metric_based_observation.metric_values.expected_value #=> Float
resp.results[0].observations[0].metric_based_observation.metric_values.lower_limit #=> Float
resp.results[0].observations[0].metric_based_observation.metric_values.upper_limit #=> Float
resp.results[0].observations[0].metric_based_observation.new_rules #=> Array
resp.results[0].observations[0].metric_based_observation.new_rules[0] #=> String
resp.results_not_found #=> Array
resp.results_not_found[0] #=> String

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :result_ids (required, Array<String>)

    A list of unique result IDs for the data quality results.

Returns:

  • (Types::BatchGetDataQualityResultResponse)

See Also:

  • AWS API Documentation

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 1055

def batch_get_data_quality_result(params = {}, options = {})
  req = build_request(:batch_get_data_quality_result, params)
  req.send_request(options)
end
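
A brief usage sketch; in practice the result IDs would come from an earlier call such as ListDataQualityResults, and the ID shown is a placeholder:

# Fetch data quality results and summarize rule outcomes.
resp = client.batch_get_data_quality_result(result_ids: ['dqresult-abc123']) # hypothetical ID
resp.results.each do |r|
  puts "#{r.ruleset_name}: score #{r.score}"
  r.rule_results.each { |rr| puts "  #{rr.name}: #{rr.result}" }
end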

#batch_get_dev_endpoints(params = {}) ⇒ Types::BatchGetDevEndpointsResponse

Returns a list of resource metadata for a given list of development endpoint names. After calling the ListDevEndpoints operation, you can call this operation to access the data to which you have been granted permissions. This operation supports all IAM permissions, including permission conditions that use tags.

Examples:

Request syntax with placeholder values


resp = client.batch_get_dev_endpoints({
  dev_endpoint_names: ["GenericString"], # required
})

Response structure


resp.dev_endpoints #=> Array
resp.dev_endpoints[0].endpoint_name #=> String
resp.dev_endpoints[0].role_arn #=> String
resp.dev_endpoints[0].security_group_ids #=> Array
resp.dev_endpoints[0].security_group_ids[0] #=> String
resp.dev_endpoints[0].subnet_id #=> String
resp.dev_endpoints[0].yarn_endpoint_address #=> String
resp.dev_endpoints[0].private_address #=> String
resp.dev_endpoints[0].zeppelin_remote_spark_interpreter_port #=> Integer
resp.dev_endpoints[0].public_address #=> String
resp.dev_endpoints[0].status #=> String
resp.dev_endpoints[0].worker_type #=> String, one of "Standard", "G.1X", "G.2X", "G.025X", "G.4X", "G.8X", "Z.2X"
resp.dev_endpoints[0].glue_version #=> String
resp.dev_endpoints[0].number_of_workers #=> Integer
resp.dev_endpoints[0].number_of_nodes #=> Integer
resp.dev_endpoints[0].availability_zone #=> String
resp.dev_endpoints[0].vpc_id #=> String
resp.dev_endpoints[0].extra_python_libs_s3_path #=> String
resp.dev_endpoints[0].extra_jars_s3_path #=> String
resp.dev_endpoints[0].failure_reason #=> String
resp.dev_endpoints[0].last_update_status #=> String
resp.dev_endpoints[0].created_timestamp #=> Time
resp.dev_endpoints[0].last_modified_timestamp #=> Time
resp.dev_endpoints[0].public_key #=> String
resp.dev_endpoints[0].public_keys #=> Array
resp.dev_endpoints[0].public_keys[0] #=> String
resp.dev_endpoints[0].security_configuration #=> String
resp.dev_endpoints[0].arguments #=> Hash
resp.dev_endpoints[0].arguments["GenericString"] #=> String
resp.dev_endpoints_not_found #=> Array
resp.dev_endpoints_not_found[0] #=> String

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :dev_endpoint_names (required, Array<String>)

    The list of DevEndpoint names, which might be the names returned from the ListDevEndpoints operation.

Returns:

  • (Types::BatchGetDevEndpointsResponse)

See Also:

  • AWS API Documentation

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 1119

def batch_get_dev_endpoints(params = {}, options = {})
  req = build_request(:batch_get_dev_endpoints, params)
  req.send_request(options)
end
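
A brief usage sketch; the endpoint name is a placeholder:

# Fetch development endpoint metadata and flag any missing names.
resp = client.batch_get_dev_endpoints(dev_endpoint_names: ['my-endpoint']) # hypothetical
resp.dev_endpoints.each { |d| puts "#{d.endpoint_name}: #{d.status}" }
warn "missing: #{resp.dev_endpoints_not_found.join(', ')}" unless resp.dev_endpoints_not_found.empty?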

#batch_get_jobs(params = {}) ⇒ Types::BatchGetJobsResponse

Returns a list of resource metadata for a given list of job names. After calling the ListJobs operation, you can call this operation to access the data to which you have been granted permissions. This operation supports all IAM permissions, including permission conditions that use tags.

Examples:

Request syntax with placeholder values


resp = client.batch_get_jobs({
  job_names: ["NameString"], # required
})

Response structure


resp.jobs #=> Array
resp.jobs[0].name #=> String
resp.jobs[0].job_mode #=> String, one of "SCRIPT", "VISUAL", "NOTEBOOK"
resp.jobs[0].job_run_queuing_enabled #=> Boolean
resp.jobs[0].description #=> String
resp.jobs[0].log_uri #=> String
resp.jobs[0].role #=> String
resp.jobs[0].created_on #=> Time
resp.jobs[0].last_modified_on #=> Time
resp.jobs[0].execution_property.max_concurrent_runs #=> Integer
resp.jobs[0].command.name #=> String
resp.jobs[0].command.script_location #=> String
resp.jobs[0].command.python_version #=> String
resp.jobs[0].command.runtime #=> String
resp.jobs[0].default_arguments #=> Hash
resp.jobs[0].default_arguments["GenericString"] #=> String
resp.jobs[0].non_overridable_arguments #=> Hash
resp.jobs[0].non_overridable_arguments["GenericString"] #=> String
resp.jobs[0].connections.connections #=> Array
resp.jobs[0].connections.connections[0] #=> String
resp.jobs[0].max_retries #=> Integer
resp.jobs[0].allocated_capacity #=> Integer
resp.jobs[0].timeout #=> Integer
resp.jobs[0].max_capacity #=> Float
resp.jobs[0].worker_type #=> String, one of "Standard", "G.1X", "G.2X", "G.025X", "G.4X", "G.8X", "Z.2X"
resp.jobs[0].number_of_workers #=> Integer
resp.jobs[0].security_configuration #=> String
resp.jobs[0].notification_property.notify_delay_after #=> Integer
resp.jobs[0].glue_version #=> String
resp.jobs[0].code_gen_configuration_nodes #=> Hash
resp.jobs[0].code_gen_configuration_nodes["NodeId"].athena_connector_source.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].athena_connector_source.connection_name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].athena_connector_source.connector_name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].athena_connector_source.connection_type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].athena_connector_source.connection_table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].athena_connector_source.schema_name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].athena_connector_source.output_schemas #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].athena_connector_source.output_schemas[0].columns #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].athena_connector_source.output_schemas[0].columns[0].name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].athena_connector_source.output_schemas[0].columns[0].type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_source.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_source.connection_name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_source.connector_name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_source.connection_type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_source.additional_options.filter_predicate #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_source.additional_options.partition_column #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_source.additional_options.lower_bound #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_source.additional_options.upper_bound #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_source.additional_options.num_partitions #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_source.additional_options.job_bookmark_keys #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_source.additional_options.job_bookmark_keys[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_source.additional_options.job_bookmark_keys_sort_order #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_source.additional_options.data_type_mapping #=> Hash
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_source.additional_options.data_type_mapping["JDBCDataType"] #=> String, one of "DATE", "STRING", "TIMESTAMP", "INT", "FLOAT", "LONG", "BIGDECIMAL", "BYTE", "SHORT", "DOUBLE"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_source.connection_table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_source.query #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_source.output_schemas #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_source.output_schemas[0].columns #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_source.output_schemas[0].columns[0].name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_source.output_schemas[0].columns[0].type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_connector_source.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_connector_source.connection_name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_connector_source.connector_name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_connector_source.connection_type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_connector_source.additional_options #=> Hash
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_connector_source.additional_options["EnclosedInStringProperty"] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_connector_source.output_schemas #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_connector_source.output_schemas[0].columns #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_connector_source.output_schemas[0].columns[0].name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_connector_source.output_schemas[0].columns[0].type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_source.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_source.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_source.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].redshift_source.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].redshift_source.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].redshift_source.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].redshift_source.redshift_tmp_dir #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].redshift_source.tmp_dir_iam_role #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_source.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_source.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_source.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_source.partition_predicate #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_source.additional_options.bounded_size #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_source.additional_options.bounded_files #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_csv_source.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_csv_source.paths #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_csv_source.paths[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_csv_source.compression_type #=> String, one of "gzip", "bzip2"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_csv_source.exclusions #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_csv_source.exclusions[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_csv_source.group_size #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_csv_source.group_files #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_csv_source.recurse #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_csv_source.max_band #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_csv_source.max_files_in_band #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_csv_source.additional_options.bounded_size #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_csv_source.additional_options.bounded_files #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_csv_source.additional_options.enable_sample_path #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_csv_source.additional_options.sample_path #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_csv_source.separator #=> String, one of "comma", "ctrla", "pipe", "semicolon", "tab"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_csv_source.escaper #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_csv_source.quote_char #=> String, one of "quote", "quillemet", "single_quote", "disabled"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_csv_source.multiline #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_csv_source.with_header #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_csv_source.write_header #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_csv_source.skip_first #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_csv_source.optimize_performance #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_csv_source.output_schemas #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_csv_source.output_schemas[0].columns #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_csv_source.output_schemas[0].columns[0].name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_csv_source.output_schemas[0].columns[0].type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_json_source.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_json_source.paths #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_json_source.paths[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_json_source.compression_type #=> String, one of "gzip", "bzip2"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_json_source.exclusions #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_json_source.exclusions[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_json_source.group_size #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_json_source.group_files #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_json_source.recurse #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_json_source.max_band #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_json_source.max_files_in_band #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_json_source.additional_options.bounded_size #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_json_source.additional_options.bounded_files #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_json_source.additional_options.enable_sample_path #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_json_source.additional_options.sample_path #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_json_source.json_path #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_json_source.multiline #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_json_source.output_schemas #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_json_source.output_schemas[0].columns #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_json_source.output_schemas[0].columns[0].name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_json_source.output_schemas[0].columns[0].type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_parquet_source.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_parquet_source.paths #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_parquet_source.paths[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_parquet_source.compression_type #=> String, one of "snappy", "lzo", "gzip", "uncompressed", "none"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_parquet_source.exclusions #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_parquet_source.exclusions[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_parquet_source.group_size #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_parquet_source.group_files #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_parquet_source.recurse #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_parquet_source.max_band #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_parquet_source.max_files_in_band #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_parquet_source.additional_options.bounded_size #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_parquet_source.additional_options.bounded_files #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_parquet_source.additional_options.enable_sample_path #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_parquet_source.additional_options.sample_path #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_parquet_source.output_schemas #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_parquet_source.output_schemas[0].columns #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_parquet_source.output_schemas[0].columns[0].name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_parquet_source.output_schemas[0].columns[0].type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].relational_catalog_source.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].relational_catalog_source.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].relational_catalog_source.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].dynamo_db_catalog_source.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].dynamo_db_catalog_source.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].dynamo_db_catalog_source.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_target.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_target.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_target.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_target.connection_name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_target.connection_table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_target.connector_name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_target.connection_type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_target.additional_options #=> Hash
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_target.additional_options["EnclosedInStringProperty"] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_target.output_schemas #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_target.output_schemas[0].columns #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_target.output_schemas[0].columns[0].name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_target.output_schemas[0].columns[0].type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_connector_target.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_connector_target.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_connector_target.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_connector_target.connection_name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_connector_target.connector_name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_connector_target.connection_type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_connector_target.additional_options #=> Hash
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_connector_target.additional_options["EnclosedInStringProperty"] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_connector_target.output_schemas #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_connector_target.output_schemas[0].columns #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_connector_target.output_schemas[0].columns[0].name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_connector_target.output_schemas[0].columns[0].type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_target.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_target.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_target.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_target.partition_keys #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_target.partition_keys[0] #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_target.partition_keys[0][0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_target.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_target.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].redshift_target.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].redshift_target.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].redshift_target.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].redshift_target.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].redshift_target.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].redshift_target.redshift_tmp_dir #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].redshift_target.tmp_dir_iam_role #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].redshift_target.upsert_redshift_options.table_location #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].redshift_target.upsert_redshift_options.connection_name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].redshift_target.upsert_redshift_options.upsert_keys #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].redshift_target.upsert_redshift_options.upsert_keys[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_target.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_target.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_target.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_target.partition_keys #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_target.partition_keys[0] #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_target.partition_keys[0][0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_target.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_target.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_target.schema_change_policy.enable_update_catalog #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_target.schema_change_policy.update_behavior #=> String, one of "UPDATE_IN_DATABASE", "LOG"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_glue_parquet_target.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_glue_parquet_target.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_glue_parquet_target.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_glue_parquet_target.partition_keys #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_glue_parquet_target.partition_keys[0] #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_glue_parquet_target.partition_keys[0][0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_glue_parquet_target.path #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_glue_parquet_target.compression #=> String, one of "snappy", "lzo", "gzip", "uncompressed", "none"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_glue_parquet_target.schema_change_policy.enable_update_catalog #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_glue_parquet_target.schema_change_policy.update_behavior #=> String, one of "UPDATE_IN_DATABASE", "LOG"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_glue_parquet_target.schema_change_policy.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_glue_parquet_target.schema_change_policy.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_direct_target.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_direct_target.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_direct_target.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_direct_target.partition_keys #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_direct_target.partition_keys[0] #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_direct_target.partition_keys[0][0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_direct_target.path #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_direct_target.compression #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_direct_target.format #=> String, one of "json", "csv", "avro", "orc", "parquet", "hudi", "delta"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_direct_target.schema_change_policy.enable_update_catalog #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_direct_target.schema_change_policy.update_behavior #=> String, one of "UPDATE_IN_DATABASE", "LOG"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_direct_target.schema_change_policy.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_direct_target.schema_change_policy.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].apply_mapping.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].apply_mapping.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].apply_mapping.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].apply_mapping.mapping #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].apply_mapping.mapping[0].to_key #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].apply_mapping.mapping[0].from_path #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].apply_mapping.mapping[0].from_path[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].apply_mapping.mapping[0].from_type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].apply_mapping.mapping[0].to_type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].apply_mapping.mapping[0].dropped #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].apply_mapping.mapping[0].children #=> Types::Mappings
resp.jobs[0].code_gen_configuration_nodes["NodeId"].select_fields.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].select_fields.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].select_fields.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].select_fields.paths #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].select_fields.paths[0] #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].select_fields.paths[0][0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].drop_fields.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].drop_fields.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].drop_fields.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].drop_fields.paths #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].drop_fields.paths[0] #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].drop_fields.paths[0][0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].rename_field.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].rename_field.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].rename_field.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].rename_field.source_path #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].rename_field.source_path[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].rename_field.target_path #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].rename_field.target_path[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spigot.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spigot.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spigot.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spigot.path #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spigot.topk #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spigot.prob #=> Float
resp.jobs[0].code_gen_configuration_nodes["NodeId"].join.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].join.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].join.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].join.join_type #=> String, one of "equijoin", "left", "right", "outer", "leftsemi", "leftanti"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].join.columns #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].join.columns[0].from #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].join.columns[0].keys #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].join.columns[0].keys[0] #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].join.columns[0].keys[0][0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].split_fields.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].split_fields.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].split_fields.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].split_fields.paths #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].split_fields.paths[0] #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].split_fields.paths[0][0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].select_from_collection.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].select_from_collection.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].select_from_collection.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].select_from_collection.index #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].fill_missing_values.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].fill_missing_values.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].fill_missing_values.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].fill_missing_values.imputed_path #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].fill_missing_values.filled_path #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].filter.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].filter.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].filter.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].filter.logical_operator #=> String, one of "AND", "OR"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].filter.filters #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].filter.filters[0].operation #=> String, one of "EQ", "LT", "GT", "LTE", "GTE", "REGEX", "ISNULL"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].filter.filters[0].negated #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].filter.filters[0].values #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].filter.filters[0].values[0].type #=> String, one of "COLUMNEXTRACTED", "CONSTANT"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].filter.filters[0].values[0].value #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].filter.filters[0].values[0].value[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].custom_code.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].custom_code.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].custom_code.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].custom_code.code #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].custom_code.class_name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].custom_code.output_schemas #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].custom_code.output_schemas[0].columns #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].custom_code.output_schemas[0].columns[0].name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].custom_code.output_schemas[0].columns[0].type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_sql.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_sql.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_sql.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_sql.sql_query #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_sql.sql_aliases #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_sql.sql_aliases[0].from #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_sql.sql_aliases[0].alias #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_sql.output_schemas #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_sql.output_schemas[0].columns #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_sql.output_schemas[0].columns[0].name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_sql.output_schemas[0].columns[0].type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kinesis_source.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kinesis_source.window_size #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kinesis_source.detect_schema #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kinesis_source.streaming_options.endpoint_url #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kinesis_source.streaming_options.stream_name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kinesis_source.streaming_options.classification #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kinesis_source.streaming_options.delimiter #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kinesis_source.streaming_options.starting_position #=> String, one of "latest", "trim_horizon", "earliest", "timestamp"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kinesis_source.streaming_options.max_fetch_time_in_ms #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kinesis_source.streaming_options.max_fetch_records_per_shard #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kinesis_source.streaming_options.max_record_per_read #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kinesis_source.streaming_options.add_idle_time_between_reads #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kinesis_source.streaming_options.idle_time_between_reads_in_ms #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kinesis_source.streaming_options.describe_shard_interval #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kinesis_source.streaming_options.num_retries #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kinesis_source.streaming_options.retry_interval_ms #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kinesis_source.streaming_options.max_retry_interval_ms #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kinesis_source.streaming_options.avoid_empty_batches #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kinesis_source.streaming_options.stream_arn #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kinesis_source.streaming_options.role_arn #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kinesis_source.streaming_options.role_session_name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kinesis_source.streaming_options.add_record_timestamp #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kinesis_source.streaming_options.emit_consumer_lag_metrics #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kinesis_source.streaming_options.starting_timestamp #=> Time
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kinesis_source.data_preview_options.polling_time #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kinesis_source.data_preview_options.record_polling_limit #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kafka_source.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kafka_source.streaming_options.bootstrap_servers #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kafka_source.streaming_options.security_protocol #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kafka_source.streaming_options.connection_name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kafka_source.streaming_options.topic_name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kafka_source.streaming_options.assign #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kafka_source.streaming_options.subscribe_pattern #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kafka_source.streaming_options.classification #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kafka_source.streaming_options.delimiter #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kafka_source.streaming_options.starting_offsets #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kafka_source.streaming_options.ending_offsets #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kafka_source.streaming_options.poll_timeout_ms #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kafka_source.streaming_options.num_retries #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kafka_source.streaming_options.retry_interval_ms #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kafka_source.streaming_options.max_offsets_per_trigger #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kafka_source.streaming_options.min_partitions #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kafka_source.streaming_options.include_headers #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kafka_source.streaming_options.add_record_timestamp #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kafka_source.streaming_options.emit_consumer_lag_metrics #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kafka_source.streaming_options.starting_timestamp #=> Time
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kafka_source.window_size #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kafka_source.detect_schema #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kafka_source.data_preview_options.polling_time #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kafka_source.data_preview_options.record_polling_limit #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.window_size #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.detect_schema #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.streaming_options.endpoint_url #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.streaming_options.stream_name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.streaming_options.classification #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.streaming_options.delimiter #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.streaming_options.starting_position #=> String, one of "latest", "trim_horizon", "earliest", "timestamp"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.streaming_options.max_fetch_time_in_ms #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.streaming_options.max_fetch_records_per_shard #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.streaming_options.max_record_per_read #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.streaming_options.add_idle_time_between_reads #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.streaming_options.idle_time_between_reads_in_ms #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.streaming_options.describe_shard_interval #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.streaming_options.num_retries #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.streaming_options.retry_interval_ms #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.streaming_options.max_retry_interval_ms #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.streaming_options.avoid_empty_batches #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.streaming_options.stream_arn #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.streaming_options.role_arn #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.streaming_options.role_session_name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.streaming_options.add_record_timestamp #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.streaming_options.emit_consumer_lag_metrics #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.streaming_options.starting_timestamp #=> Time
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.data_preview_options.polling_time #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.data_preview_options.record_polling_limit #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kafka_source.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kafka_source.window_size #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kafka_source.detect_schema #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kafka_source.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kafka_source.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kafka_source.streaming_options.bootstrap_servers #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kafka_source.streaming_options.security_protocol #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kafka_source.streaming_options.connection_name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kafka_source.streaming_options.topic_name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kafka_source.streaming_options.assign #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kafka_source.streaming_options.subscribe_pattern #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kafka_source.streaming_options.classification #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kafka_source.streaming_options.delimiter #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kafka_source.streaming_options.starting_offsets #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kafka_source.streaming_options.ending_offsets #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kafka_source.streaming_options.poll_timeout_ms #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kafka_source.streaming_options.num_retries #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kafka_source.streaming_options.retry_interval_ms #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kafka_source.streaming_options.max_offsets_per_trigger #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kafka_source.streaming_options.min_partitions #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kafka_source.streaming_options.include_headers #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kafka_source.streaming_options.add_record_timestamp #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kafka_source.streaming_options.emit_consumer_lag_metrics #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kafka_source.streaming_options.starting_timestamp #=> Time
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kafka_source.data_preview_options.polling_time #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kafka_source.data_preview_options.record_polling_limit #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].drop_null_fields.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].drop_null_fields.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].drop_null_fields.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].drop_null_fields.null_check_box_list.is_empty #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].drop_null_fields.null_check_box_list.is_null_string #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].drop_null_fields.null_check_box_list.is_neg_one #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].drop_null_fields.null_text_list #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].drop_null_fields.null_text_list[0].value #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].drop_null_fields.null_text_list[0].datatype.id #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].drop_null_fields.null_text_list[0].datatype.label #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].merge.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].merge.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].merge.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].merge.source #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].merge.primary_keys #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].merge.primary_keys[0] #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].merge.primary_keys[0][0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].union.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].union.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].union.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].union.union_type #=> String, one of "ALL", "DISTINCT"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].pii_detection.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].pii_detection.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].pii_detection.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].pii_detection.pii_type #=> String, one of "RowAudit", "RowMasking", "ColumnAudit", "ColumnMasking"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].pii_detection.entity_types_to_detect #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].pii_detection.entity_types_to_detect[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].pii_detection.output_column_name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].pii_detection.sample_fraction #=> Float
resp.jobs[0].code_gen_configuration_nodes["NodeId"].pii_detection.threshold_fraction #=> Float
resp.jobs[0].code_gen_configuration_nodes["NodeId"].pii_detection.mask_value #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].aggregate.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].aggregate.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].aggregate.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].aggregate.groups #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].aggregate.groups[0] #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].aggregate.groups[0][0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].aggregate.aggs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].aggregate.aggs[0].column #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].aggregate.aggs[0].column[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].aggregate.aggs[0].agg_func #=> String, one of "avg", "countDistinct", "count", "first", "last", "kurtosis", "max", "min", "skewness", "stddev_samp", "stddev_pop", "sum", "sumDistinct", "var_samp", "var_pop"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].drop_duplicates.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].drop_duplicates.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].drop_duplicates.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].drop_duplicates.columns #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].drop_duplicates.columns[0] #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].drop_duplicates.columns[0][0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].governed_catalog_target.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].governed_catalog_target.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].governed_catalog_target.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].governed_catalog_target.partition_keys #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].governed_catalog_target.partition_keys[0] #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].governed_catalog_target.partition_keys[0][0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].governed_catalog_target.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].governed_catalog_target.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].governed_catalog_target.schema_change_policy.enable_update_catalog #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].governed_catalog_target.schema_change_policy.update_behavior #=> String, one of "UPDATE_IN_DATABASE", "LOG"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].governed_catalog_source.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].governed_catalog_source.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].governed_catalog_source.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].governed_catalog_source.partition_predicate #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].governed_catalog_source.additional_options.bounded_size #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].governed_catalog_source.additional_options.bounded_files #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].microsoft_sql_server_catalog_source.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].microsoft_sql_server_catalog_source.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].microsoft_sql_server_catalog_source.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].my_sql_catalog_source.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].my_sql_catalog_source.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].my_sql_catalog_source.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].oracle_sql_catalog_source.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].oracle_sql_catalog_source.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].oracle_sql_catalog_source.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].postgre_sql_catalog_source.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].postgre_sql_catalog_source.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].postgre_sql_catalog_source.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].microsoft_sql_server_catalog_target.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].microsoft_sql_server_catalog_target.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].microsoft_sql_server_catalog_target.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].microsoft_sql_server_catalog_target.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].microsoft_sql_server_catalog_target.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].my_sql_catalog_target.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].my_sql_catalog_target.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].my_sql_catalog_target.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].my_sql_catalog_target.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].my_sql_catalog_target.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].oracle_sql_catalog_target.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].oracle_sql_catalog_target.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].oracle_sql_catalog_target.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].oracle_sql_catalog_target.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].oracle_sql_catalog_target.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].postgre_sql_catalog_target.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].postgre_sql_catalog_target.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].postgre_sql_catalog_target.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].postgre_sql_catalog_target.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].postgre_sql_catalog_target.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].dynamic_transform.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].dynamic_transform.transform_name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].dynamic_transform.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].dynamic_transform.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].dynamic_transform.parameters #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].dynamic_transform.parameters[0].name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].dynamic_transform.parameters[0].type #=> String, one of "str", "int", "float", "complex", "bool", "list", "null"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].dynamic_transform.parameters[0].validation_rule #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].dynamic_transform.parameters[0].validation_message #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].dynamic_transform.parameters[0].value #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].dynamic_transform.parameters[0].value[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].dynamic_transform.parameters[0].list_type #=> String, one of "str", "int", "float", "complex", "bool", "list", "null"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].dynamic_transform.parameters[0].is_optional #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].dynamic_transform.function_name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].dynamic_transform.path #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].dynamic_transform.version #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].dynamic_transform.output_schemas #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].dynamic_transform.output_schemas[0].columns #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].dynamic_transform.output_schemas[0].columns[0].name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].dynamic_transform.output_schemas[0].columns[0].type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].evaluate_data_quality.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].evaluate_data_quality.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].evaluate_data_quality.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].evaluate_data_quality.ruleset #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].evaluate_data_quality.output #=> String, one of "PrimaryInput", "EvaluationResults"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].evaluate_data_quality.publishing_options.evaluation_context #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].evaluate_data_quality.publishing_options.results_s3_prefix #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].evaluate_data_quality.publishing_options.cloud_watch_metrics_enabled #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].evaluate_data_quality.publishing_options.results_publishing_enabled #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].evaluate_data_quality.stop_job_on_failure_options.stop_job_on_failure_timing #=> String, one of "Immediate", "AfterDataLoad"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_hudi_source.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_hudi_source.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_hudi_source.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_hudi_source.additional_hudi_options #=> Hash
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_hudi_source.additional_hudi_options["EnclosedInStringProperty"] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_hudi_source.output_schemas #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_hudi_source.output_schemas[0].columns #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_hudi_source.output_schemas[0].columns[0].name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_hudi_source.output_schemas[0].columns[0].type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_hudi_source.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_hudi_source.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_hudi_source.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_hudi_source.additional_hudi_options #=> Hash
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_hudi_source.additional_hudi_options["EnclosedInStringProperty"] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_hudi_source.output_schemas #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_hudi_source.output_schemas[0].columns #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_hudi_source.output_schemas[0].columns[0].name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_hudi_source.output_schemas[0].columns[0].type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_hudi_source.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_hudi_source.paths #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_hudi_source.paths[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_hudi_source.additional_hudi_options #=> Hash
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_hudi_source.additional_hudi_options["EnclosedInStringProperty"] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_hudi_source.additional_options.bounded_size #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_hudi_source.additional_options.bounded_files #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_hudi_source.additional_options.enable_sample_path #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_hudi_source.additional_options.sample_path #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_hudi_source.output_schemas #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_hudi_source.output_schemas[0].columns #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_hudi_source.output_schemas[0].columns[0].name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_hudi_source.output_schemas[0].columns[0].type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_hudi_catalog_target.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_hudi_catalog_target.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_hudi_catalog_target.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_hudi_catalog_target.partition_keys #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_hudi_catalog_target.partition_keys[0] #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_hudi_catalog_target.partition_keys[0][0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_hudi_catalog_target.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_hudi_catalog_target.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_hudi_catalog_target.additional_options #=> Hash
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_hudi_catalog_target.additional_options["EnclosedInStringProperty"] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_hudi_catalog_target.schema_change_policy.enable_update_catalog #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_hudi_catalog_target.schema_change_policy.update_behavior #=> String, one of "UPDATE_IN_DATABASE", "LOG"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_hudi_direct_target.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_hudi_direct_target.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_hudi_direct_target.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_hudi_direct_target.path #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_hudi_direct_target.compression #=> String, one of "gzip", "lzo", "uncompressed", "snappy"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_hudi_direct_target.partition_keys #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_hudi_direct_target.partition_keys[0] #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_hudi_direct_target.partition_keys[0][0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_hudi_direct_target.format #=> String, one of "json", "csv", "avro", "orc", "parquet", "hudi", "delta"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_hudi_direct_target.additional_options #=> Hash
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_hudi_direct_target.additional_options["EnclosedInStringProperty"] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_hudi_direct_target.schema_change_policy.enable_update_catalog #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_hudi_direct_target.schema_change_policy.update_behavior #=> String, one of "UPDATE_IN_DATABASE", "LOG"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_hudi_direct_target.schema_change_policy.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_hudi_direct_target.schema_change_policy.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_jdbc_source.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_jdbc_source.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_jdbc_source.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_jdbc_source.connection_name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_jdbc_source.connection_type #=> String, one of "sqlserver", "mysql", "oracle", "postgresql", "redshift"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_jdbc_source.redshift_tmp_dir #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_delta_source.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_delta_source.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_delta_source.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_delta_source.additional_delta_options #=> Hash
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_delta_source.additional_delta_options["EnclosedInStringProperty"] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_delta_source.output_schemas #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_delta_source.output_schemas[0].columns #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_delta_source.output_schemas[0].columns[0].name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_delta_source.output_schemas[0].columns[0].type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_delta_source.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_delta_source.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_delta_source.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_delta_source.additional_delta_options #=> Hash
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_delta_source.additional_delta_options["EnclosedInStringProperty"] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_delta_source.output_schemas #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_delta_source.output_schemas[0].columns #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_delta_source.output_schemas[0].columns[0].name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_delta_source.output_schemas[0].columns[0].type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_delta_source.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_delta_source.paths #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_delta_source.paths[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_delta_source.additional_delta_options #=> Hash
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_delta_source.additional_delta_options["EnclosedInStringProperty"] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_delta_source.additional_options.bounded_size #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_delta_source.additional_options.bounded_files #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_delta_source.additional_options.enable_sample_path #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_delta_source.additional_options.sample_path #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_delta_source.output_schemas #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_delta_source.output_schemas[0].columns #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_delta_source.output_schemas[0].columns[0].name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_delta_source.output_schemas[0].columns[0].type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_delta_catalog_target.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_delta_catalog_target.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_delta_catalog_target.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_delta_catalog_target.partition_keys #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_delta_catalog_target.partition_keys[0] #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_delta_catalog_target.partition_keys[0][0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_delta_catalog_target.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_delta_catalog_target.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_delta_catalog_target.additional_options #=> Hash
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_delta_catalog_target.additional_options["EnclosedInStringProperty"] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_delta_catalog_target.schema_change_policy.enable_update_catalog #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_delta_catalog_target.schema_change_policy.update_behavior #=> String, one of "UPDATE_IN_DATABASE", "LOG"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_delta_direct_target.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_delta_direct_target.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_delta_direct_target.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_delta_direct_target.partition_keys #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_delta_direct_target.partition_keys[0] #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_delta_direct_target.partition_keys[0][0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_delta_direct_target.path #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_delta_direct_target.compression #=> String, one of "uncompressed", "snappy"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_delta_direct_target.format #=> String, one of "json", "csv", "avro", "orc", "parquet", "hudi", "delta"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_delta_direct_target.additional_options #=> Hash
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_delta_direct_target.additional_options["EnclosedInStringProperty"] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_delta_direct_target.schema_change_policy.enable_update_catalog #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_delta_direct_target.schema_change_policy.update_behavior #=> String, one of "UPDATE_IN_DATABASE", "LOG"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_delta_direct_target.schema_change_policy.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_delta_direct_target.schema_change_policy.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.access_type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.source_type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.connection.value #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.connection.label #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.connection.description #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.schema.value #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.schema.label #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.schema.description #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.table.value #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.table.label #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.table.description #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.catalog_database.value #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.catalog_database.label #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.catalog_database.description #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.catalog_table.value #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.catalog_table.label #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.catalog_table.description #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.catalog_redshift_schema #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.catalog_redshift_table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.temp_dir #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.iam_role.value #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.iam_role.label #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.iam_role.description #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.advanced_options #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.advanced_options[0].key #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.advanced_options[0].value #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.sample_query #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.pre_action #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.post_action #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.action #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.table_prefix #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.upsert #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.merge_action #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.merge_when_matched #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.merge_when_not_matched #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.merge_clause #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.crawler_connection #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.table_schema #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.table_schema[0].value #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.table_schema[0].label #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.table_schema[0].description #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.staging_table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.selected_columns #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.selected_columns[0].value #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.selected_columns[0].label #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.selected_columns[0].description #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.access_type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.source_type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.connection.value #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.connection.label #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.connection.description #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.schema.value #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.schema.label #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.schema.description #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.table.value #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.table.label #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.table.description #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.catalog_database.value #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.catalog_database.label #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.catalog_database.description #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.catalog_table.value #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.catalog_table.label #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.catalog_table.description #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.catalog_redshift_schema #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.catalog_redshift_table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.temp_dir #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.iam_role.value #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.iam_role.label #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.iam_role.description #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.advanced_options #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.advanced_options[0].key #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.advanced_options[0].value #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.sample_query #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.pre_action #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.post_action #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.action #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.table_prefix #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.upsert #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.merge_action #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.merge_when_matched #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.merge_when_not_matched #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.merge_clause #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.crawler_connection #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.table_schema #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.table_schema[0].value #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.table_schema[0].label #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.table_schema[0].description #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.staging_table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.selected_columns #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.selected_columns[0].value #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.selected_columns[0].label #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.selected_columns[0].description #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].evaluate_data_quality_multi_frame.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].evaluate_data_quality_multi_frame.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].evaluate_data_quality_multi_frame.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].evaluate_data_quality_multi_frame.additional_data_sources #=> Hash
resp.jobs[0].code_gen_configuration_nodes["NodeId"].evaluate_data_quality_multi_frame.additional_data_sources["NodeName"] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].evaluate_data_quality_multi_frame.ruleset #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].evaluate_data_quality_multi_frame.publishing_options.evaluation_context #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].evaluate_data_quality_multi_frame.publishing_options.results_s3_prefix #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].evaluate_data_quality_multi_frame.publishing_options.cloud_watch_metrics_enabled #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].evaluate_data_quality_multi_frame.publishing_options.results_publishing_enabled #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].evaluate_data_quality_multi_frame.additional_options #=> Hash
resp.jobs[0].code_gen_configuration_nodes["NodeId"].evaluate_data_quality_multi_frame.additional_options["AdditionalOptionKeys"] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].evaluate_data_quality_multi_frame.stop_job_on_failure_options.stop_job_on_failure_timing #=> String, one of "Immediate", "AfterDataLoad"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].recipe.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].recipe.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].recipe.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].recipe.recipe_reference.recipe_arn #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].recipe.recipe_reference.recipe_version #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].recipe.recipe_steps #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].recipe.recipe_steps[0].action.operation #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].recipe.recipe_steps[0].action.parameters #=> Hash
resp.jobs[0].code_gen_configuration_nodes["NodeId"].recipe.recipe_steps[0].action.parameters["ParameterName"] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].recipe.recipe_steps[0].condition_expressions #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].recipe.recipe_steps[0].condition_expressions[0].condition #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].recipe.recipe_steps[0].condition_expressions[0].value #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].recipe.recipe_steps[0].condition_expressions[0].target_column #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_source.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_source.data.source_type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_source.data.connection.value #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_source.data.connection.label #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_source.data.connection.description #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_source.data.schema #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_source.data.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_source.data.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_source.data.temp_dir #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_source.data.iam_role.value #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_source.data.iam_role.label #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_source.data.iam_role.description #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_source.data.additional_options #=> Hash
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_source.data.additional_options["EnclosedInStringProperty"] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_source.data.sample_query #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_source.data.pre_action #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_source.data.post_action #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_source.data.action #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_source.data.upsert #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_source.data.merge_action #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_source.data.merge_when_matched #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_source.data.merge_when_not_matched #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_source.data.merge_clause #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_source.data.staging_table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_source.data.selected_columns #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_source.data.selected_columns[0].value #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_source.data.selected_columns[0].label #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_source.data.selected_columns[0].description #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_source.data.auto_pushdown #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_source.data.table_schema #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_source.data.table_schema[0].value #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_source.data.table_schema[0].label #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_source.data.table_schema[0].description #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_source.output_schemas #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_source.output_schemas[0].columns #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_source.output_schemas[0].columns[0].name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_source.output_schemas[0].columns[0].type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_target.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_target.data.source_type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_target.data.connection.value #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_target.data.connection.label #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_target.data.connection.description #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_target.data.schema #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_target.data.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_target.data.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_target.data.temp_dir #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_target.data.iam_role.value #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_target.data.iam_role.label #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_target.data.iam_role.description #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_target.data.additional_options #=> Hash
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_target.data.additional_options["EnclosedInStringProperty"] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_target.data.sample_query #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_target.data.pre_action #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_target.data.post_action #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_target.data.action #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_target.data.upsert #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_target.data.merge_action #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_target.data.merge_when_matched #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_target.data.merge_when_not_matched #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_target.data.merge_clause #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_target.data.staging_table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_target.data.selected_columns #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_target.data.selected_columns[0].value #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_target.data.selected_columns[0].label #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_target.data.selected_columns[0].description #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_target.data.auto_pushdown #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_target.data.table_schema #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_target.data.table_schema[0].value #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_target.data.table_schema[0].label #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_target.data.table_schema[0].description #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_target.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_target.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].connector_data_source.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].connector_data_source.connection_type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].connector_data_source.data #=> Hash
resp.jobs[0].code_gen_configuration_nodes["NodeId"].connector_data_source.data["GenericString"] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].connector_data_source.output_schemas #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].connector_data_source.output_schemas[0].columns #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].connector_data_source.output_schemas[0].columns[0].name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].connector_data_source.output_schemas[0].columns[0].type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].connector_data_target.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].connector_data_target.connection_type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].connector_data_target.data #=> Hash
resp.jobs[0].code_gen_configuration_nodes["NodeId"].connector_data_target.data["GenericString"] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].connector_data_target.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].connector_data_target.inputs[0] #=> String
resp.jobs[0].execution_class #=> String, one of "FLEX", "STANDARD"
resp.jobs[0].source_control_details.provider #=> String, one of "GITHUB", "GITLAB", "BITBUCKET", "AWS_CODE_COMMIT"
resp.jobs[0].source_control_details.repository #=> String
resp.jobs[0].source_control_details.owner #=> String
resp.jobs[0].source_control_details.branch #=> String
resp.jobs[0].source_control_details.folder #=> String
resp.jobs[0].source_control_details.last_commit_id #=> String
resp.jobs[0].source_control_details.auth_strategy #=> String, one of "PERSONAL_ACCESS_TOKEN", "AWS_SECRETS_MANAGER"
resp.jobs[0].source_control_details.auth_token #=> String
resp.jobs[0].maintenance_window #=> String
resp.jobs[0].profile_name #=> String
resp.jobs_not_found #=> Array
resp.jobs_not_found[0] #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :job_names (required, Array<String>)

    A list of job names, which may be the names returned from the ListJobs operation.

Returns:

  • (Types::BatchGetJobsResponse)

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 2051

def batch_get_jobs(params = {}, options = {})
  req = build_request(:batch_get_jobs, params)
  req.send_request(options)
end
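
As a usage sketch (assuming a configured client; the job names are hypothetical), the returned definitions and the not-found list can be handled together:


resp = client.batch_get_jobs(job_names: ["etl-daily", "etl-backfill"])

resp.jobs.each do |job|
  puts "#{job.name}: execution class #{job.execution_class}"
end

# Names that did not resolve to a job definition.
warn "Not found: #{resp.jobs_not_found.join(', ')}" unless resp.jobs_not_found.empty?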

#batch_get_partition(params = {}) ⇒ Types::BatchGetPartitionResponse

Retrieves partitions in a batch request.

Examples:

Request syntax with placeholder values


resp = client.batch_get_partition({
  catalog_id: "CatalogIdString",
  database_name: "NameString", # required
  table_name: "NameString", # required
  partitions_to_get: [ # required
    {
      values: ["ValueString"], # required
    },
  ],
})

Response structure


resp.partitions #=> Array
resp.partitions[0].values #=> Array
resp.partitions[0].values[0] #=> String
resp.partitions[0].database_name #=> String
resp.partitions[0].table_name #=> String
resp.partitions[0].creation_time #=> Time
resp.partitions[0].last_access_time #=> Time
resp.partitions[0].storage_descriptor.columns #=> Array
resp.partitions[0].storage_descriptor.columns[0].name #=> String
resp.partitions[0].storage_descriptor.columns[0].type #=> String
resp.partitions[0].storage_descriptor.columns[0].comment #=> String
resp.partitions[0].storage_descriptor.columns[0].parameters #=> Hash
resp.partitions[0].storage_descriptor.columns[0].parameters["KeyString"] #=> String
resp.partitions[0].storage_descriptor.location #=> String
resp.partitions[0].storage_descriptor.additional_locations #=> Array
resp.partitions[0].storage_descriptor.additional_locations[0] #=> String
resp.partitions[0].storage_descriptor.input_format #=> String
resp.partitions[0].storage_descriptor.output_format #=> String
resp.partitions[0].storage_descriptor.compressed #=> Boolean
resp.partitions[0].storage_descriptor.number_of_buckets #=> Integer
resp.partitions[0].storage_descriptor.serde_info.name #=> String
resp.partitions[0].storage_descriptor.serde_info.serialization_library #=> String
resp.partitions[0].storage_descriptor.serde_info.parameters #=> Hash
resp.partitions[0].storage_descriptor.serde_info.parameters["KeyString"] #=> String
resp.partitions[0].storage_descriptor.bucket_columns #=> Array
resp.partitions[0].storage_descriptor.bucket_columns[0] #=> String
resp.partitions[0].storage_descriptor.sort_columns #=> Array
resp.partitions[0].storage_descriptor.sort_columns[0].column #=> String
resp.partitions[0].storage_descriptor.sort_columns[0].sort_order #=> Integer
resp.partitions[0].storage_descriptor.parameters #=> Hash
resp.partitions[0].storage_descriptor.parameters["KeyString"] #=> String
resp.partitions[0].storage_descriptor.skewed_info.skewed_column_names #=> Array
resp.partitions[0].storage_descriptor.skewed_info.skewed_column_names[0] #=> String
resp.partitions[0].storage_descriptor.skewed_info.skewed_column_values #=> Array
resp.partitions[0].storage_descriptor.skewed_info.skewed_column_values[0] #=> String
resp.partitions[0].storage_descriptor.skewed_info.skewed_column_value_location_maps #=> Hash
resp.partitions[0].storage_descriptor.skewed_info.skewed_column_value_location_maps["ColumnValuesString"] #=> String
resp.partitions[0].storage_descriptor.stored_as_sub_directories #=> Boolean
resp.partitions[0].storage_descriptor.schema_reference.schema_id.schema_arn #=> String
resp.partitions[0].storage_descriptor.schema_reference.schema_id.schema_name #=> String
resp.partitions[0].storage_descriptor.schema_reference.schema_id.registry_name #=> String
resp.partitions[0].storage_descriptor.schema_reference.schema_version_id #=> String
resp.partitions[0].storage_descriptor.schema_reference.schema_version_number #=> Integer
resp.partitions[0].parameters #=> Hash
resp.partitions[0].parameters["KeyString"] #=> String
resp.partitions[0].last_analyzed_time #=> Time
resp.partitions[0].catalog_id #=> String
resp.unprocessed_keys #=> Array
resp.unprocessed_keys[0].values #=> Array
resp.unprocessed_keys[0].values[0] #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :catalog_id (String)

    The ID of the Data Catalog where the partitions in question reside. If none is supplied, the Amazon Web Services account ID is used by default.

  • :database_name (required, String)

    The name of the catalog database where the partitions reside.

  • :table_name (required, String)

    The name of the partitions' table.

  • :partitions_to_get (required, Array<Types::PartitionValueList>)

    A list of partition values identifying the partitions to retrieve.

Returns:

  • (Types::BatchGetPartitionResponse)

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 2147

def batch_get_partition(params = {}, options = {})
  req = build_request(:batch_get_partition, params)
  req.send_request(options)
end
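
A minimal sketch, assuming a configured client and a hypothetical table partitioned by year and month; keys the service could not process are collected for a retry:


keys = [["2024", "01"], ["2024", "02"]]

resp = client.batch_get_partition({
  database_name: "sales_db",   # hypothetical database
  table_name: "orders",        # hypothetical table
  partitions_to_get: keys.map { |values| { values: values } },
})

resp.partitions.each do |p|
  puts "#{p.values.join('/')} -> #{p.storage_descriptor.location}"
end

# Retry these keys in a follow-up call if any were left unprocessed.
retry_keys = resp.unprocessed_keys.map { |k| { values: k.values } }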

#batch_get_table_optimizer(params = {}) ⇒ Types::BatchGetTableOptimizerResponse

Returns the configuration for the specified table optimizers.

Examples:

Request syntax with placeholder values


resp = client.batch_get_table_optimizer({
  entries: [ # required
    {
      catalog_id: "CatalogIdString",
      database_name: "databaseNameString",
      table_name: "tableNameString",
      type: "compaction", # accepts compaction, retention, orphan_file_deletion
    },
  ],
})

Response structure


resp.table_optimizers #=> Array
resp.table_optimizers[0].catalog_id #=> String
resp.table_optimizers[0].database_name #=> String
resp.table_optimizers[0].table_name #=> String
resp.table_optimizers[0].table_optimizer.type #=> String, one of "compaction", "retention", "orphan_file_deletion"
resp.table_optimizers[0].table_optimizer.configuration.role_arn #=> String
resp.table_optimizers[0].table_optimizer.configuration.enabled #=> Boolean
resp.table_optimizers[0].table_optimizer.configuration.vpc_configuration.glue_connection_name #=> String
resp.table_optimizers[0].table_optimizer.configuration.retention_configuration.iceberg_configuration.snapshot_retention_period_in_days #=> Integer
resp.table_optimizers[0].table_optimizer.configuration.retention_configuration.iceberg_configuration.number_of_snapshots_to_retain #=> Integer
resp.table_optimizers[0].table_optimizer.configuration.retention_configuration.iceberg_configuration.clean_expired_files #=> Boolean
resp.table_optimizers[0].table_optimizer.configuration.orphan_file_deletion_configuration.iceberg_configuration.orphan_file_retention_period_in_days #=> Integer
resp.table_optimizers[0].table_optimizer.configuration.orphan_file_deletion_configuration.iceberg_configuration.location #=> String
resp.table_optimizers[0].table_optimizer.last_run.event_type #=> String, one of "starting", "completed", "failed", "in_progress"
resp.table_optimizers[0].table_optimizer.last_run.start_timestamp #=> Time
resp.table_optimizers[0].table_optimizer.last_run.end_timestamp #=> Time
resp.table_optimizers[0].table_optimizer.last_run.metrics.number_of_bytes_compacted #=> String
resp.table_optimizers[0].table_optimizer.last_run.metrics.number_of_files_compacted #=> String
resp.table_optimizers[0].table_optimizer.last_run.metrics.number_of_dpus #=> String
resp.table_optimizers[0].table_optimizer.last_run.metrics.job_duration_in_hour #=> String
resp.table_optimizers[0].table_optimizer.last_run.error #=> String
resp.table_optimizers[0].table_optimizer.last_run.compaction_metrics.iceberg_metrics.number_of_bytes_compacted #=> Integer
resp.table_optimizers[0].table_optimizer.last_run.compaction_metrics.iceberg_metrics.number_of_files_compacted #=> Integer
resp.table_optimizers[0].table_optimizer.last_run.compaction_metrics.iceberg_metrics.number_of_dpus #=> Integer
resp.table_optimizers[0].table_optimizer.last_run.compaction_metrics.iceberg_metrics.job_duration_in_hour #=> Float
resp.table_optimizers[0].table_optimizer.last_run.retention_metrics.iceberg_metrics.number_of_data_files_deleted #=> Integer
resp.table_optimizers[0].table_optimizer.last_run.retention_metrics.iceberg_metrics.number_of_manifest_files_deleted #=> Integer
resp.table_optimizers[0].table_optimizer.last_run.retention_metrics.iceberg_metrics.number_of_manifest_lists_deleted #=> Integer
resp.table_optimizers[0].table_optimizer.last_run.retention_metrics.iceberg_metrics.number_of_dpus #=> Integer
resp.table_optimizers[0].table_optimizer.last_run.retention_metrics.iceberg_metrics.job_duration_in_hour #=> Float
resp.table_optimizers[0].table_optimizer.last_run.orphan_file_deletion_metrics.iceberg_metrics.number_of_orphan_files_deleted #=> Integer
resp.table_optimizers[0].table_optimizer.last_run.orphan_file_deletion_metrics.iceberg_metrics.number_of_dpus #=> Integer
resp.table_optimizers[0].table_optimizer.last_run.orphan_file_deletion_metrics.iceberg_metrics.job_duration_in_hour #=> Float
resp.failures #=> Array
resp.failures[0].error.error_code #=> String
resp.failures[0].error.error_message #=> String
resp.failures[0].catalog_id #=> String
resp.failures[0].database_name #=> String
resp.failures[0].table_name #=> String
resp.failures[0].type #=> String, one of "compaction", "retention", "orphan_file_deletion"

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :entries (required, Array<Types::BatchGetTableOptimizerEntry>)

    A list of BatchGetTableOptimizerEntry objects specifying the table optimizers to retrieve.

Returns:

  • (Types::BatchGetTableOptimizerResponse)

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 2223

def batch_get_table_optimizer(params = {}, options = {})
  req = build_request(:batch_get_table_optimizer, params)
  req.send_request(options)
end
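
For illustration, a hedged sketch (hypothetical database and table names) that reads back a compaction optimizer's configuration and reports per-entry failures:


resp = client.batch_get_table_optimizer({
  entries: [
    { database_name: "sales_db", table_name: "orders", type: "compaction" },
  ],
})

resp.table_optimizers.each do |entry|
  opt = entry.table_optimizer
  puts "#{entry.table_name} (#{opt.type}): enabled=#{opt.configuration.enabled}"
end

resp.failures.each { |f| warn "#{f.table_name}: #{f.error.error_message}" }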

#batch_get_triggers(params = {}) ⇒ Types::BatchGetTriggersResponse

Returns a list of resource metadata for a given list of trigger names. After calling the ListTriggers operation, you can call this operation to access the data to which you have been granted permissions. This operation supports all IAM permissions, including permission conditions that use tags.

Examples:

Request syntax with placeholder values


resp = client.batch_get_triggers({
  trigger_names: ["NameString"], # required
})

Response structure


resp.triggers #=> Array
resp.triggers[0].name #=> String
resp.triggers[0].workflow_name #=> String
resp.triggers[0].id #=> String
resp.triggers[0].type #=> String, one of "SCHEDULED", "CONDITIONAL", "ON_DEMAND", "EVENT"
resp.triggers[0].state #=> String, one of "CREATING", "CREATED", "ACTIVATING", "ACTIVATED", "DEACTIVATING", "DEACTIVATED", "DELETING", "UPDATING"
resp.triggers[0].description #=> String
resp.triggers[0].schedule #=> String
resp.triggers[0].actions #=> Array
resp.triggers[0].actions[0].job_name #=> String
resp.triggers[0].actions[0].arguments #=> Hash
resp.triggers[0].actions[0].arguments["GenericString"] #=> String
resp.triggers[0].actions[0].timeout #=> Integer
resp.triggers[0].actions[0].security_configuration #=> String
resp.triggers[0].actions[0].notification_property.notify_delay_after #=> Integer
resp.triggers[0].actions[0].crawler_name #=> String
resp.triggers[0].predicate.logical #=> String, one of "AND", "ANY"
resp.triggers[0].predicate.conditions #=> Array
resp.triggers[0].predicate.conditions[0].logical_operator #=> String, one of "EQUALS"
resp.triggers[0].predicate.conditions[0].job_name #=> String
resp.triggers[0].predicate.conditions[0].state #=> String, one of "STARTING", "RUNNING", "STOPPING", "STOPPED", "SUCCEEDED", "FAILED", "TIMEOUT", "ERROR", "WAITING", "EXPIRED"
resp.triggers[0].predicate.conditions[0].crawler_name #=> String
resp.triggers[0].predicate.conditions[0].crawl_state #=> String, one of "RUNNING", "CANCELLING", "CANCELLED", "SUCCEEDED", "FAILED", "ERROR"
resp.triggers[0].event_batching_condition.batch_size #=> Integer
resp.triggers[0].event_batching_condition.batch_window #=> Integer
resp.triggers_not_found #=> Array
resp.triggers_not_found[0] #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :trigger_names (required, Array<String>)

    A list of trigger names, which may be the names returned from the ListTriggers operation.

Returns:

  • (Types::BatchGetTriggersResponse)

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 2283

def batch_get_triggers(params = {}, options = {})
  req = build_request(:batch_get_triggers, params)
  req.send_request(options)
end
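
A short sketch of the ListTriggers-then-BatchGetTriggers flow described above, assuming a configured client and that a single page of trigger names suffices:


names = client.list_triggers.trigger_names

unless names.empty?
  resp = client.batch_get_triggers(trigger_names: names)
  resp.triggers.each { |t| puts "#{t.name}: #{t.type} (#{t.state})" }
  warn "Missing: #{resp.triggers_not_found.join(', ')}" unless resp.triggers_not_found.empty?
end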

#batch_get_workflows(params = {}) ⇒ Types::BatchGetWorkflowsResponse

Returns a list of resource metadata for a given list of workflow names. After calling the ListWorkflows operation, you can call this operation to access the data to which you have been granted permissions. This operation supports all IAM permissions, including permission conditions that use tags.

Examples:

Request syntax with placeholder values


resp = client.batch_get_workflows({
  names: ["NameString"], # required
  include_graph: false,
})

Response structure


resp.workflows #=> Array
resp.workflows[0].name #=> String
resp.workflows[0].description #=> String
resp.workflows[0].default_run_properties #=> Hash
resp.workflows[0].default_run_properties["IdString"] #=> String
resp.workflows[0].created_on #=> Time
resp.workflows[0].last_modified_on #=> Time
resp.workflows[0].last_run.name #=> String
resp.workflows[0].last_run.workflow_run_id #=> String
resp.workflows[0].last_run.previous_run_id #=> String
resp.workflows[0].last_run.workflow_run_properties #=> Hash
resp.workflows[0].last_run.workflow_run_properties["IdString"] #=> String
resp.workflows[0].last_run.started_on #=> Time
resp.workflows[0].last_run.completed_on #=> Time
resp.workflows[0].last_run.status #=> String, one of "RUNNING", "COMPLETED", "STOPPING", "STOPPED", "ERROR"
resp.workflows[0].last_run.error_message #=> String
resp.workflows[0].last_run.statistics.total_actions #=> Integer
resp.workflows[0].last_run.statistics.timeout_actions #=> Integer
resp.workflows[0].last_run.statistics.failed_actions #=> Integer
resp.workflows[0].last_run.statistics.stopped_actions #=> Integer
resp.workflows[0].last_run.statistics.succeeded_actions #=> Integer
resp.workflows[0].last_run.statistics.running_actions #=> Integer
resp.workflows[0].last_run.statistics.errored_actions #=> Integer
resp.workflows[0].last_run.statistics.waiting_actions #=> Integer
resp.workflows[0].last_run.graph.nodes #=> Array
resp.workflows[0].last_run.graph.nodes[0].type #=> String, one of "CRAWLER", "JOB", "TRIGGER"
resp.workflows[0].last_run.graph.nodes[0].name #=> String
resp.workflows[0].last_run.graph.nodes[0].unique_id #=> String
resp.workflows[0].last_run.graph.nodes[0].trigger_details.trigger.name #=> String
resp.workflows[0].last_run.graph.nodes[0].trigger_details.trigger.workflow_name #=> String
resp.workflows[0].last_run.graph.nodes[0].trigger_details.trigger.id #=> String
resp.workflows[0].last_run.graph.nodes[0].trigger_details.trigger.type #=> String, one of "SCHEDULED", "CONDITIONAL", "ON_DEMAND", "EVENT"
resp.workflows[0].last_run.graph.nodes[0].trigger_details.trigger.state #=> String, one of "CREATING", "CREATED", "ACTIVATING", "ACTIVATED", "DEACTIVATING", "DEACTIVATED", "DELETING", "UPDATING"
resp.workflows[0].last_run.graph.nodes[0].trigger_details.trigger.description #=> String
resp.workflows[0].last_run.graph.nodes[0].trigger_details.trigger.schedule #=> String
resp.workflows[0].last_run.graph.nodes[0].trigger_details.trigger.actions #=> Array
resp.workflows[0].last_run.graph.nodes[0].trigger_details.trigger.actions[0].job_name #=> String
resp.workflows[0].last_run.graph.nodes[0].trigger_details.trigger.actions[0].arguments #=> Hash
resp.workflows[0].last_run.graph.nodes[0].trigger_details.trigger.actions[0].arguments["GenericString"] #=> String
resp.workflows[0].last_run.graph.nodes[0].trigger_details.trigger.actions[0].timeout #=> Integer
resp.workflows[0].last_run.graph.nodes[0].trigger_details.trigger.actions[0].security_configuration #=> String
resp.workflows[0].last_run.graph.nodes[0].trigger_details.trigger.actions[0].notification_property.notify_delay_after #=> Integer
resp.workflows[0].last_run.graph.nodes[0].trigger_details.trigger.actions[0].crawler_name #=> String
resp.workflows[0].last_run.graph.nodes[0].trigger_details.trigger.predicate.logical #=> String, one of "AND", "ANY"
resp.workflows[0].last_run.graph.nodes[0].trigger_details.trigger.predicate.conditions #=> Array
resp.workflows[0].last_run.graph.nodes[0].trigger_details.trigger.predicate.conditions[0].logical_operator #=> String, one of "EQUALS"
resp.workflows[0].last_run.graph.nodes[0].trigger_details.trigger.predicate.conditions[0].job_name #=> String
resp.workflows[0].last_run.graph.nodes[0].trigger_details.trigger.predicate.conditions[0].state #=> String, one of "STARTING", "RUNNING", "STOPPING", "STOPPED", "SUCCEEDED", "FAILED", "TIMEOUT", "ERROR", "WAITING", "EXPIRED"
resp.workflows[0].last_run.graph.nodes[0].trigger_details.trigger.predicate.conditions[0].crawler_name #=> String
resp.workflows[0].last_run.graph.nodes[0].trigger_details.trigger.predicate.conditions[0].crawl_state #=> String, one of "RUNNING", "CANCELLING", "CANCELLED", "SUCCEEDED", "FAILED", "ERROR"
resp.workflows[0].last_run.graph.nodes[0].trigger_details.trigger.event_batching_condition.batch_size #=> Integer
resp.workflows[0].last_run.graph.nodes[0].trigger_details.trigger.event_batching_condition.batch_window #=> Integer
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs #=> Array
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].id #=> String
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].attempt #=> Integer
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].previous_run_id #=> String
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].trigger_name #=> String
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].job_name #=> String
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].job_mode #=> String, one of "SCRIPT", "VISUAL", "NOTEBOOK"
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].job_run_queuing_enabled #=> Boolean
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].started_on #=> Time
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].last_modified_on #=> Time
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].completed_on #=> Time
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].job_run_state #=> String, one of "STARTING", "RUNNING", "STOPPING", "STOPPED", "SUCCEEDED", "FAILED", "TIMEOUT", "ERROR", "WAITING", "EXPIRED"
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].arguments #=> Hash
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].arguments["GenericString"] #=> String
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].error_message #=> String
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].predecessor_runs #=> Array
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].predecessor_runs[0].job_name #=> String
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].predecessor_runs[0].run_id #=> String
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].allocated_capacity #=> Integer
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].execution_time #=> Integer
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].timeout #=> Integer
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].max_capacity #=> Float
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].worker_type #=> String, one of "Standard", "G.1X", "G.2X", "G.025X", "G.4X", "G.8X", "Z.2X"
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].number_of_workers #=> Integer
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].security_configuration #=> String
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].log_group_name #=> String
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].notification_property.notify_delay_after #=> Integer
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].glue_version #=> String
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].dpu_seconds #=> Float
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].execution_class #=> String, one of "FLEX", "STANDARD"
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].maintenance_window #=> String
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].profile_name #=> String
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].state_detail #=> String
resp.workflows[0].last_run.graph.nodes[0].crawler_details.crawls #=> Array
resp.workflows[0].last_run.graph.nodes[0].crawler_details.crawls[0].state #=> String, one of "RUNNING", "CANCELLING", "CANCELLED", "SUCCEEDED", "FAILED", "ERROR"
resp.workflows[0].last_run.graph.nodes[0].crawler_details.crawls[0].started_on #=> Time
resp.workflows[0].last_run.graph.nodes[0].crawler_details.crawls[0].completed_on #=> Time
resp.workflows[0].last_run.graph.nodes[0].crawler_details.crawls[0].error_message #=> String
resp.workflows[0].last_run.graph.nodes[0].crawler_details.crawls[0].log_group #=> String
resp.workflows[0].last_run.graph.nodes[0].crawler_details.crawls[0].log_stream #=> String
resp.workflows[0].last_run.graph.edges #=> Array
resp.workflows[0].last_run.graph.edges[0].source_id #=> String
resp.workflows[0].last_run.graph.edges[0].destination_id #=> String
resp.workflows[0].last_run.starting_event_batch_condition.batch_size #=> Integer
resp.workflows[0].last_run.starting_event_batch_condition.batch_window #=> Integer
resp.workflows[0].graph.nodes #=> Array
resp.workflows[0].graph.nodes[0].type #=> String, one of "CRAWLER", "JOB", "TRIGGER"
resp.workflows[0].graph.nodes[0].name #=> String
resp.workflows[0].graph.nodes[0].unique_id #=> String
resp.workflows[0].graph.nodes[0].trigger_details.trigger.name #=> String
resp.workflows[0].graph.nodes[0].trigger_details.trigger.workflow_name #=> String
resp.workflows[0].graph.nodes[0].trigger_details.trigger.id #=> String
resp.workflows[0].graph.nodes[0].trigger_details.trigger.type #=> String, one of "SCHEDULED", "CONDITIONAL", "ON_DEMAND", "EVENT"
resp.workflows[0].graph.nodes[0].trigger_details.trigger.state #=> String, one of "CREATING", "CREATED", "ACTIVATING", "ACTIVATED", "DEACTIVATING", "DEACTIVATED", "DELETING", "UPDATING"
resp.workflows[0].graph.nodes[0].trigger_details.trigger.description #=> String
resp.workflows[0].graph.nodes[0].trigger_details.trigger.schedule #=> String
resp.workflows[0].graph.nodes[0].trigger_details.trigger.actions #=> Array
resp.workflows[0].graph.nodes[0].trigger_details.trigger.actions[0].job_name #=> String
resp.workflows[0].graph.nodes[0].trigger_details.trigger.actions[0].arguments #=> Hash
resp.workflows[0].graph.nodes[0].trigger_details.trigger.actions[0].arguments["GenericString"] #=> String
resp.workflows[0].graph.nodes[0].trigger_details.trigger.actions[0].timeout #=> Integer
resp.workflows[0].graph.nodes[0].trigger_details.trigger.actions[0].security_configuration #=> String
resp.workflows[0].graph.nodes[0].trigger_details.trigger.actions[0].notification_property.notify_delay_after #=> Integer
resp.workflows[0].graph.nodes[0].trigger_details.trigger.actions[0].crawler_name #=> String
resp.workflows[0].graph.nodes[0].trigger_details.trigger.predicate.logical #=> String, one of "AND", "ANY"
resp.workflows[0].graph.nodes[0].trigger_details.trigger.predicate.conditions #=> Array
resp.workflows[0].graph.nodes[0].trigger_details.trigger.predicate.conditions[0].logical_operator #=> String, one of "EQUALS"
resp.workflows[0].graph.nodes[0].trigger_details.trigger.predicate.conditions[0].job_name #=> String
resp.workflows[0].graph.nodes[0].trigger_details.trigger.predicate.conditions[0].state #=> String, one of "STARTING", "RUNNING", "STOPPING", "STOPPED", "SUCCEEDED", "FAILED", "TIMEOUT", "ERROR", "WAITING", "EXPIRED"
resp.workflows[0].graph.nodes[0].trigger_details.trigger.predicate.conditions[0].crawler_name #=> String
resp.workflows[0].graph.nodes[0].trigger_details.trigger.predicate.conditions[0].crawl_state #=> String, one of "RUNNING", "CANCELLING", "CANCELLED", "SUCCEEDED", "FAILED", "ERROR"
resp.workflows[0].graph.nodes[0].trigger_details.trigger.event_batching_condition.batch_size #=> Integer
resp.workflows[0].graph.nodes[0].trigger_details.trigger.event_batching_condition.batch_window #=> Integer
resp.workflows[0].graph.nodes[0].job_details.job_runs #=> Array
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].id #=> String
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].attempt #=> Integer
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].previous_run_id #=> String
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].trigger_name #=> String
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].job_name #=> String
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].job_mode #=> String, one of "SCRIPT", "VISUAL", "NOTEBOOK"
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].job_run_queuing_enabled #=> Boolean
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].started_on #=> Time
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].last_modified_on #=> Time
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].completed_on #=> Time
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].job_run_state #=> String, one of "STARTING", "RUNNING", "STOPPING", "STOPPED", "SUCCEEDED", "FAILED", "TIMEOUT", "ERROR", "WAITING", "EXPIRED"
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].arguments #=> Hash
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].arguments["GenericString"] #=> String
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].error_message #=> String
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].predecessor_runs #=> Array
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].predecessor_runs[0].job_name #=> String
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].predecessor_runs[0].run_id #=> String
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].allocated_capacity #=> Integer
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].execution_time #=> Integer
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].timeout #=> Integer
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].max_capacity #=> Float
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].worker_type #=> String, one of "Standard", "G.1X", "G.2X", "G.025X", "G.4X", "G.8X", "Z.2X"
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].number_of_workers #=> Integer
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].security_configuration #=> String
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].log_group_name #=> String
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].notification_property.notify_delay_after #=> Integer
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].glue_version #=> String
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].dpu_seconds #=> Float
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].execution_class #=> String, one of "FLEX", "STANDARD"
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].maintenance_window #=> String
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].profile_name #=> String
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].state_detail #=> String
resp.workflows[0].graph.nodes[0].crawler_details.crawls #=> Array
resp.workflows[0].graph.nodes[0].crawler_details.crawls[0].state #=> String, one of "RUNNING", "CANCELLING", "CANCELLED", "SUCCEEDED", "FAILED", "ERROR"
resp.workflows[0].graph.nodes[0].crawler_details.crawls[0].started_on #=> Time
resp.workflows[0].graph.nodes[0].crawler_details.crawls[0].completed_on #=> Time
resp.workflows[0].graph.nodes[0].crawler_details.crawls[0].error_message #=> String
resp.workflows[0].graph.nodes[0].crawler_details.crawls[0].log_group #=> String
resp.workflows[0].graph.nodes[0].crawler_details.crawls[0].log_stream #=> String
resp.workflows[0].graph.edges #=> Array
resp.workflows[0].graph.edges[0].source_id #=> String
resp.workflows[0].graph.edges[0].destination_id #=> String
resp.workflows[0].max_concurrent_runs #=> Integer
resp.workflows[0].blueprint_details.blueprint_name #=> String
resp.workflows[0].blueprint_details.run_id #=> String
resp.missing_workflows #=> Array
resp.missing_workflows[0] #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :names (required, Array<String>)

    A list of workflow names, which may be the names returned from the ListWorkflows operation.

  • :include_graph (Boolean)

    Specifies whether to include a graph when returning the workflow resource metadata.

Returns:

  • (Types::BatchGetWorkflowsResponse)

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 2494

def batch_get_workflows(params = {}, options = {})
  req = build_request(:batch_get_workflows, params)
  req.send_request(options)
end
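
As a sketch (the workflow name is hypothetical), requesting the graph makes it possible to walk the run's node edges directly:


resp = client.batch_get_workflows(names: ["nightly-pipeline"], include_graph: true)

resp.workflows.each do |wf|
  puts "#{wf.name}: last run #{wf.last_run&.status || 'never run'}"
  (wf.graph&.edges || []).each do |edge|
    puts "  #{edge.source_id} -> #{edge.destination_id}"
  end
end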

#batch_put_data_quality_statistic_annotation(params = {}) ⇒ Types::BatchPutDataQualityStatisticAnnotationResponse

Annotates datapoints over time for a specific data quality statistic.

Examples:

Request syntax with placeholder values


resp = client.batch_put_data_quality_statistic_annotation({
  inclusion_annotations: [ # required
    {
      profile_id: "HashString",
      statistic_id: "HashString",
      inclusion_annotation: "INCLUDE", # accepts INCLUDE, EXCLUDE
    },
  ],
  client_token: "HashString",
})

Response structure


resp.failed_inclusion_annotations #=> Array
resp.failed_inclusion_annotations[0].profile_id #=> String
resp.failed_inclusion_annotations[0].statistic_id #=> String
resp.failed_inclusion_annotations[0].failure_reason #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :inclusion_annotations (required, Array<Types::DatapointInclusionAnnotation>)

    A list of DatapointInclusionAnnotation objects to apply.

  • :client_token (String)

    Client token.

Returns:

  • (Types::BatchPutDataQualityStatisticAnnotationResponse)

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 2535

def batch_put_data_quality_statistic_annotation(params = {}, options = {})
  req = build_request(:batch_put_data_quality_statistic_annotation, params)
  req.send_request(options)
end
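
A minimal sketch with hypothetical profile and statistic IDs; failures are reported per annotation in the response rather than raised:


resp = client.batch_put_data_quality_statistic_annotation({
  inclusion_annotations: [
    {
      profile_id: "dqprofile-0123",   # hypothetical profile ID
      statistic_id: "dqstat-4567",    # hypothetical statistic ID
      inclusion_annotation: "EXCLUDE",
    },
  ],
})

resp.failed_inclusion_annotations.each { |f| warn "#{f.statistic_id}: #{f.failure_reason}" }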

#batch_stop_job_run(params = {}) ⇒ Types::BatchStopJobRunResponse

Stops one or more job runs for a specified job definition.

Examples:

Request syntax with placeholder values


resp = client.batch_stop_job_run({
  job_name: "NameString", # required
  job_run_ids: ["IdString"], # required
})

Response structure


resp.successful_submissions #=> Array
resp.successful_submissions[0].job_name #=> String
resp.successful_submissions[0].job_run_id #=> String
resp.errors #=> Array
resp.errors[0].job_name #=> String
resp.errors[0].job_run_id #=> String
resp.errors[0].error_detail.error_code #=> String
resp.errors[0].error_detail.error_message #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :job_name (required, String)

    The name of the job definition for which to stop job runs.

  • :job_run_ids (required, Array<String>)

    A list of the JobRunIds that should be stopped for that job definition.

Returns:

  • (Types::BatchStopJobRunResponse)

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 2576

def batch_stop_job_run(params = {}, options = {})
  req = build_request(:batch_stop_job_run, params)
  req.send_request(options)
end
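
A usage sketch with a hypothetical job name and run IDs; successful submissions and per-run errors come back in separate lists:


resp = client.batch_stop_job_run({
  job_name: "etl-daily",              # hypothetical job name
  job_run_ids: ["jr_001", "jr_002"],  # hypothetical run IDs
})

resp.successful_submissions.each { |s| puts "stop requested for #{s.job_run_id}" }
resp.errors.each { |e| warn "#{e.job_run_id}: #{e.error_detail.error_message}" }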

#batch_update_partition(params = {}) ⇒ Types::BatchUpdatePartitionResponse

Updates one or more partitions in a batch operation.

Examples:

Request syntax with placeholder values


resp = client.batch_update_partition({
  catalog_id: "CatalogIdString",
  database_name: "NameString", # required
  table_name: "NameString", # required
  entries: [ # required
    {
      partition_value_list: ["ValueString"], # required
      partition_input: { # required
        values: ["ValueString"],
        last_access_time: Time.now,
        storage_descriptor: {
          columns: [
            {
              name: "NameString", # required
              type: "ColumnTypeString",
              comment: "CommentString",
              parameters: {
                "KeyString" => "ParametersMapValue",
              },
            },
          ],
          location: "LocationString",
          additional_locations: ["LocationString"],
          input_format: "FormatString",
          output_format: "FormatString",
          compressed: false,
          number_of_buckets: 1,
          serde_info: {
            name: "NameString",
            serialization_library: "NameString",
            parameters: {
              "KeyString" => "ParametersMapValue",
            },
          },
          bucket_columns: ["NameString"],
          sort_columns: [
            {
              column: "NameString", # required
              sort_order: 1, # required
            },
          ],
          parameters: {
            "KeyString" => "ParametersMapValue",
          },
          skewed_info: {
            skewed_column_names: ["NameString"],
            skewed_column_values: ["ColumnValuesString"],
            skewed_column_value_location_maps: {
              "ColumnValuesString" => "ColumnValuesString",
            },
          },
          stored_as_sub_directories: false,
          schema_reference: {
            schema_id: {
              schema_arn: "GlueResourceArn",
              schema_name: "SchemaRegistryNameString",
              registry_name: "SchemaRegistryNameString",
            },
            schema_version_id: "SchemaVersionIdString",
            schema_version_number: 1,
          },
        },
        parameters: {
          "KeyString" => "ParametersMapValue",
        },
        last_analyzed_time: Time.now,
      },
    },
  ],
})

Response structure


resp.errors #=> Array
resp.errors[0].partition_value_list #=> Array
resp.errors[0].partition_value_list[0] #=> String
resp.errors[0].error_detail.error_code #=> String
resp.errors[0].error_detail.error_message #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :catalog_id (String)

    The ID of the catalog in which the partition is to be updated. Currently, this should be the Amazon Web Services account ID.

  • :database_name (required, String)

    The name of the metadata database in which the partition is to be updated.

  • :table_name (required, String)

    The name of the metadata table in which the partition is to be updated.

  • :entries (required, Array<Types::BatchUpdatePartitionRequestEntry>)

    A list of up to 100 BatchUpdatePartitionRequestEntry objects to update.

Returns:

  • (Types::BatchUpdatePartitionResponse)

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 2688

def batch_update_partition(params = {}, options = {})
  req = build_request(:batch_update_partition, params)
  req.send_request(options)
end
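
Because :entries is capped at 100 per call, larger updates are typically chunked. A sketch, assuming all_entries already holds BatchUpdatePartitionRequestEntry hashes for a hypothetical table:


all_entries.each_slice(100) do |chunk|
  resp = client.batch_update_partition({
    database_name: "sales_db",  # hypothetical database
    table_name: "orders",       # hypothetical table
    entries: chunk,
  })
  resp.errors.each do |e|
    warn "#{e.partition_value_list.join('/')}: #{e.error_detail.error_message}"
  end
end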

#cancel_data_quality_rule_recommendation_run(params = {}) ⇒ Struct

Cancels the specified recommendation run that was being used to generate rules.

Examples:

Request syntax with placeholder values


resp = client.cancel_data_quality_rule_recommendation_run({
  run_id: "HashString", # required
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :run_id (required, String)

    The unique run identifier associated with this run.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 2711

def cancel_data_quality_rule_recommendation_run(params = {}, options = {})
  req = build_request(:cancel_data_quality_rule_recommendation_run, params)
  req.send_request(options)
end

#cancel_data_quality_ruleset_evaluation_run(params = {}) ⇒ Struct

Cancels a run where a ruleset is being evaluated against a data source.

Examples:

Request syntax with placeholder values


resp = client.cancel_data_quality_ruleset_evaluation_run({
  run_id: "HashString", # required
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :run_id (required, String)

    The unique run identifier associated with this run.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 2734

def cancel_data_quality_ruleset_evaluation_run(params = {}, options = {})
  req = build_request(:cancel_data_quality_ruleset_evaluation_run, params)
  req.send_request(options)
end

#cancel_ml_task_run(params = {}) ⇒ Types::CancelMLTaskRunResponse

Cancels (stops) a task run. Machine learning task runs are asynchronous tasks that Glue runs on your behalf as part of various machine learning workflows. You can cancel a machine learning task run at any time by calling CancelMLTaskRun with the TransformID of the task run's parent transform and the task run's TaskRunId.

Examples:

Request syntax with placeholder values


resp = client.cancel_ml_task_run({
  transform_id: "HashString", # required
  task_run_id: "HashString", # required
})

Response structure


resp.transform_id #=> String
resp.task_run_id #=> String
resp.status #=> String, one of "STARTING", "RUNNING", "STOPPING", "STOPPED", "SUCCEEDED", "FAILED", "TIMEOUT"

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :transform_id (required, String)

    The unique identifier of the machine learning transform.

  • :task_run_id (required, String)

    A unique identifier for the task run.

Returns:

  • (Types::CancelMLTaskRunResponse)

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 2774

def cancel_ml_task_run(params = {}, options = {})
  req = build_request(:cancel_ml_task_run, params)
  req.send_request(options)
end

#cancel_statement(params = {}) ⇒ Struct

Cancels the statement.

Examples:

Request syntax with placeholder values


resp = client.cancel_statement({
  session_id: "NameString", # required
  id: 1, # required
  request_origin: "OrchestrationNameString",
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :session_id (required, String)

    The Session ID of the statement to be cancelled.

  • :id (required, Integer)

    The ID of the statement to be cancelled.

  • :request_origin (String)

    The origin of the request to cancel the statement.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 2804

def cancel_statement(params = {}, options = {})
  req = build_request(:cancel_statement, params)
  req.send_request(options)
end

#check_schema_version_validity(params = {}) ⇒ Types::CheckSchemaVersionValidityResponse

Validates the supplied schema. This call has no side effects; it simply validates the supplied schema, using DataFormat as the format. Since it does not take a schema set name, no compatibility checks are performed.

Examples:

Request syntax with placeholder values


resp = client.check_schema_version_validity({
  data_format: "AVRO", # required, accepts AVRO, JSON, PROTOBUF
  schema_definition: "SchemaDefinitionString", # required
})

Response structure


resp.valid #=> Boolean
resp.error #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :data_format (required, String)

    The data format of the schema definition. Currently AVRO, JSON and PROTOBUF are supported.

  • :schema_definition (required, String)

    The definition of the schema that has to be validated.

Returns:

  • (Types::CheckSchemaVersionValidityResponse)

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 2842

def check_schema_version_validity(params = {}, options = {})
  req = build_request(:check_schema_version_validity, params)
  req.send_request(options)
end

#create_blueprint(params = {}) ⇒ Types::CreateBlueprintResponse

Registers a blueprint with Glue.

Examples:

Request syntax with placeholder values


resp = client.create_blueprint({
  name: "OrchestrationNameString", # required
  description: "Generic512CharString",
  blueprint_location: "OrchestrationS3Location", # required
  tags: {
    "TagKey" => "TagValue",
  },
})

Response structure


resp.name #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :name (required, String)

    The name of the blueprint.

  • :description (String)

    A description of the blueprint.

  • :blueprint_location (required, String)

    Specifies a path in Amazon S3 where the blueprint is published.

  • :tags (Hash<String,String>)

    The tags to be applied to this blueprint.

Returns:

  • (Types::CreateBlueprintResponse)

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 2884

def create_blueprint(params = {}, options = {})
  req = build_request(:create_blueprint, params)
  req.send_request(options)
end

#create_catalog(params = {}) ⇒ Struct

Creates a new catalog in the Glue Data Catalog.

Examples:

Request syntax with placeholder values


resp = client.create_catalog({
  name: "CatalogNameString", # required
  catalog_input: { # required
    description: "DescriptionString",
    federated_catalog: {
      identifier: "FederationIdentifier",
      connection_name: "NameString",
    },
    parameters: {
      "KeyString" => "ParametersMapValue",
    },
    target_redshift_catalog: {
      catalog_arn: "ResourceArnString", # required
    },
    catalog_properties: {
      data_lake_access_properties: {
        data_lake_access: false,
        data_transfer_role: "IAMRoleArn",
        kms_key: "ResourceArnString",
        catalog_type: "NameString",
      },
      custom_properties: {
        "KeyString" => "ParametersMapValue",
      },
    },
    create_table_default_permissions: [
      {
        principal: {
          data_lake_principal_identifier: "DataLakePrincipalString",
        },
        permissions: ["ALL"], # accepts ALL, SELECT, ALTER, DROP, DELETE, INSERT, CREATE_DATABASE, CREATE_TABLE, DATA_LOCATION_ACCESS
      },
    ],
    create_database_default_permissions: [
      {
        principal: {
          data_lake_principal_identifier: "DataLakePrincipalString",
        },
        permissions: ["ALL"], # accepts ALL, SELECT, ALTER, DROP, DELETE, INSERT, CREATE_DATABASE, CREATE_TABLE, DATA_LOCATION_ACCESS
      },
    ],
  },
  tags: {
    "TagKey" => "TagValue",
  },
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :name (required, String)

    The name of the catalog to create.

  • :catalog_input (required, Types::CatalogInput)

    A CatalogInput object that defines the metadata for the catalog.

  • :tags (Hash<String,String>)

    A map of key-value pairs, with no more than 50 pairs. Each key is a UTF-8 string between 1 and 128 bytes long. Each value is a UTF-8 string no more than 256 bytes long. These are the tags you assign to the catalog.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 2958

def create_catalog(params = {}, options = {})
  req = build_request(:create_catalog, params)
  req.send_request(options)
end

#create_classifier(params = {}) ⇒ Struct

Creates a classifier in the user's account. This can be a GrokClassifier, an XMLClassifier, a JsonClassifier, or a CsvClassifier, depending on which field of the request is present.

Examples:

Request syntax with placeholder values


resp = client.create_classifier({
  grok_classifier: {
    classification: "Classification", # required
    name: "NameString", # required
    grok_pattern: "GrokPattern", # required
    custom_patterns: "CustomPatterns",
  },
  xml_classifier: {
    classification: "Classification", # required
    name: "NameString", # required
    row_tag: "RowTag",
  },
  json_classifier: {
    name: "NameString", # required
    json_path: "JsonPath", # required
  },
  csv_classifier: {
    name: "NameString", # required
    delimiter: "CsvColumnDelimiter",
    quote_symbol: "CsvQuoteSymbol",
    contains_header: "UNKNOWN", # accepts UNKNOWN, PRESENT, ABSENT
    header: ["NameString"],
    disable_value_trimming: false,
    allow_single_column: false,
    custom_datatype_configured: false,
    custom_datatypes: ["NameString"],
    serde: "OpenCSVSerDe", # accepts OpenCSVSerDe, LazySimpleSerDe, None
  },
})
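
Since only one classifier field is given per request, here is a concrete sketch creating just a CSV classifier (all values are hypothetical):


client.create_classifier({
  csv_classifier: {
    name: "pipe-delimited-csv",  # hypothetical classifier name
    delimiter: "|",
    contains_header: "PRESENT",
  },
})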

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :grok_classifier (Types::CreateGrokClassifierRequest)

    A GrokClassifier object specifying the classifier to create.

  • :xml_classifier (Types::CreateXMLClassifierRequest)

    An XMLClassifier object specifying the classifier to create.

  • :json_classifier (Types::CreateJsonClassifierRequest)

    A JsonClassifier object specifying the classifier to create.

  • :csv_classifier (Types::CreateCsvClassifierRequest)

    A CsvClassifier object specifying the classifier to create.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 3017

def create_classifier(params = {}, options = {})
  req = build_request(:create_classifier, params)
  req.send_request(options)
end

#create_column_statistics_task_settings(params = {}) ⇒ Struct

Creates settings for a column statistics task.

Examples:

Request syntax with placeholder values


resp = client.create_column_statistics_task_settings({
  database_name: "NameString", # required
  table_name: "NameString", # required
  role: "NameString", # required
  schedule: "CronExpression",
  column_name_list: ["NameString"],
  sample_size: 1.0,
  catalog_id: "NameString",
  security_configuration: "NameString",
  tags: {
    "TagKey" => "TagValue",
  },
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :database_name (required, String)

    The name of the database where the table resides.

  • :table_name (required, String)

    The name of the table for which to generate column statistics.

  • :role (required, String)

    The role used for running the column statistics.

  • :schedule (String)

    A schedule for running the column statistics, specified in CRON syntax.

  • :column_name_list (Array<String>)

    A list of column names for which to run statistics.

  • :sample_size (Float)

    The percentage of data to sample.

  • :catalog_id (String)

    The ID of the Data Catalog in which the database resides.

  • :security_configuration (String)

    Name of the security configuration that is used to encrypt CloudWatch logs.

  • :tags (Hash<String,String>)

    A map of tags.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 3075

def create_column_statistics_task_settings(params = {}, options = {})
  req = build_request(:create_column_statistics_task_settings, params)
  req.send_request(options)
end

#create_connection(params = {}) ⇒ Types::CreateConnectionResponse

Creates a connection definition in the Data Catalog.

Connections used for creating federated resources require the IAM glue:PassConnection permission.

Examples:

Request syntax with placeholder values


resp = client.create_connection({
  catalog_id: "CatalogIdString",
  connection_input: { # required
    name: "NameString", # required
    description: "DescriptionString",
    connection_type: "JDBC", # required, accepts JDBC, SFTP, MONGODB, KAFKA, NETWORK, MARKETPLACE, CUSTOM, SALESFORCE, VIEW_VALIDATION_REDSHIFT, VIEW_VALIDATION_ATHENA, GOOGLEADS, GOOGLESHEETS, GOOGLEANALYTICS4, SERVICENOW, MARKETO, SAPODATA, ZENDESK, JIRACLOUD, NETSUITEERP, HUBSPOT, FACEBOOKADS, INSTAGRAMADS, ZOHOCRM, SALESFORCEPARDOT, SALESFORCEMARKETINGCLOUD, SLACK, STRIPE, INTERCOM, SNAPCHATADS
    match_criteria: ["NameString"],
    connection_properties: { # required
      "HOST" => "ValueString",
    },
    spark_properties: {
      "PropertyKey" => "PropertyValue",
    },
    athena_properties: {
      "PropertyKey" => "PropertyValue",
    },
    python_properties: {
      "PropertyKey" => "PropertyValue",
    },
    physical_connection_requirements: {
      subnet_id: "NameString",
      security_group_id_list: ["NameString"],
      availability_zone: "NameString",
    },
    authentication_configuration: {
      authentication_type: "BASIC", # accepts BASIC, OAUTH2, CUSTOM, IAM
      o_auth_2_properties: {
        o_auth_2_grant_type: "AUTHORIZATION_CODE", # accepts AUTHORIZATION_CODE, CLIENT_CREDENTIALS, JWT_BEARER
        o_auth_2_client_application: {
          user_managed_client_application_client_id: "UserManagedClientApplicationClientId",
          aws_managed_client_application_reference: "AWSManagedClientApplicationReference",
        },
        token_url: "TokenUrl",
        token_url_parameters_map: {
          "TokenUrlParameterKey" => "TokenUrlParameterValue",
        },
        authorization_code_properties: {
          authorization_code: "AuthorizationCode",
          redirect_uri: "RedirectUri",
        },
        o_auth_2_credentials: {
          user_managed_client_application_client_secret: "UserManagedClientApplicationClientSecret",
          access_token: "AccessToken",
          refresh_token: "RefreshToken",
          jwt_token: "JwtToken",
        },
      },
      secret_arn: "SecretArn",
      kms_key_arn: "KmsKeyArn",
      basic_authentication_credentials: {
        username: "Username",
        password: "Password",
      },
      custom_authentication_credentials: {
        "CredentialKey" => "CredentialValue",
      },
    },
    validate_credentials: false,
    validate_for_compute_environments: ["SPARK"], # accepts SPARK, ATHENA, PYTHON
  },
  tags: {
    "TagKey" => "TagValue",
  },
})

Response structure


resp.create_connection_status #=> String, one of "READY", "IN_PROGRESS", "FAILED"
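
A minimal JDBC sketch, with hypothetical connection properties (in practice, prefer referencing credentials in Secrets Manager over passing them in plaintext):


client.create_connection({
  connection_input: {
    name: "my-jdbc-connection",  # hypothetical
    connection_type: "JDBC",
    connection_properties: {
      "JDBC_CONNECTION_URL" => "jdbc:mysql://example-host:3306/mydb",  # hypothetical endpoint
      "USERNAME" => "admin",
      "PASSWORD" => "example-password",
    },
  },
})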

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :catalog_id (String)

    The ID of the Data Catalog in which to create the connection. If none is provided, the Amazon Web Services account ID is used by default.

  • :connection_input (required, Types::ConnectionInput)

    A ConnectionInput object defining the connection to create.

  • :tags (Hash<String,String>)

    The tags you assign to the connection.

Returns:

  • (Types::CreateConnectionResponse)

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 3174

def create_connection(params = {}, options = {})
  req = build_request(:create_connection, params)
  req.send_request(options)
end

#create_crawler(params = {}) ⇒ Struct

Creates a new crawler with specified targets, role, configuration, and optional schedule. At least one crawl target must be specified, in the s3Targets field, the jdbcTargets field, or the DynamoDBTargets field.

Examples:

Request syntax with placeholder values


resp = client.create_crawler({
  name: "NameString", # required
  role: "Role", # required
  database_name: "DatabaseName",
  description: "DescriptionString",
  targets: { # required
    s3_targets: [
      {
        path: "Path",
        exclusions: ["Path"],
        connection_name: "ConnectionName",
        sample_size: 1,
        event_queue_arn: "EventQueueArn",
        dlq_event_queue_arn: "EventQueueArn",
      },
    ],
    jdbc_targets: [
      {
        connection_name: "ConnectionName",
        path: "Path",
        exclusions: ["Path"],
        enable_additional_metadata: ["COMMENTS"], # accepts COMMENTS, RAWTYPES
      },
    ],
    mongo_db_targets: [
      {
        connection_name: "ConnectionName",
        path: "Path",
        scan_all: false,
      },
    ],
    dynamo_db_targets: [
      {
        path: "Path",
        scan_all: false,
        scan_rate: 1.0,
      },
    ],
    catalog_targets: [
      {
        database_name: "NameString", # required
        tables: ["NameString"], # required
        connection_name: "ConnectionName",
        event_queue_arn: "EventQueueArn",
        dlq_event_queue_arn: "EventQueueArn",
      },
    ],
    delta_targets: [
      {
        delta_tables: ["Path"],
        connection_name: "ConnectionName",
        write_manifest: false,
        create_native_delta_table: false,
      },
    ],
    iceberg_targets: [
      {
        paths: ["Path"],
        connection_name: "ConnectionName",
        exclusions: ["Path"],
        maximum_traversal_depth: 1,
      },
    ],
    hudi_targets: [
      {
        paths: ["Path"],
        connection_name: "ConnectionName",
        exclusions: ["Path"],
        maximum_traversal_depth: 1,
      },
    ],
  },
  schedule: "CronExpression",
  classifiers: ["NameString"],
  table_prefix: "TablePrefix",
  schema_change_policy: {
    update_behavior: "LOG", # accepts LOG, UPDATE_IN_DATABASE
    delete_behavior: "LOG", # accepts LOG, DELETE_FROM_DATABASE, DEPRECATE_IN_DATABASE
  },
  recrawl_policy: {
    recrawl_behavior: "CRAWL_EVERYTHING", # accepts CRAWL_EVERYTHING, CRAWL_NEW_FOLDERS_ONLY, CRAWL_EVENT_MODE
  },
  lineage_configuration: {
    crawler_lineage_settings: "ENABLE", # accepts ENABLE, DISABLE
  },
  lake_formation_configuration: {
    use_lake_formation_credentials: false,
    account_id: "AccountId",
  },
  configuration: "CrawlerConfiguration",
  crawler_security_configuration: "CrawlerSecurityConfiguration",
  tags: {
    "TagKey" => "TagValue",
  },
})
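
A minimal sketch with a single Amazon S3 target (crawler name, role, database, and bucket are hypothetical):


client.create_crawler({
  name: "my-s3-crawler",
  role: "GlueCrawlerRole",
  database_name: "crawled_db",
  targets: {
    s3_targets: [{ path: "s3://amzn-s3-demo-bucket/data/" }],
  },
  schedule: "cron(15 12 * * ? *)",  # optional: every day at 12:15 UTC
})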

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :name (required, String)

    Name of the new crawler.

  • :role (required, String)

    The IAM role or Amazon Resource Name (ARN) of an IAM role used by the new crawler to access customer resources.

  • :database_name (String)

    The Glue database where results are written, such as: arn:aws:daylight:us-east-1::database/sometable/*.

  • :description (String)

    A description of the new crawler.

  • :targets (required, Types::CrawlerTargets)

    A list of collections of targets to crawl.

  • :schedule (String)

    A cron expression used to specify the schedule (see Time-Based Schedules for Jobs and Crawlers). For example, to run something every day at 12:15 UTC, you would specify: cron(15 12 * * ? *).

  • :classifiers (Array<String>)

    A list of custom classifiers that the user has registered. By default, all built-in classifiers are included in a crawl, but these custom classifiers always override the default classifiers for a given classification.

  • :table_prefix (String)

    The table prefix used for catalog tables that are created.

  • :schema_change_policy (Types::SchemaChangePolicy)

    The policy for the crawler's update and deletion behavior.

  • :recrawl_policy (Types::RecrawlPolicy)

    A policy that specifies whether to crawl the entire dataset again, or to crawl only folders that were added since the last crawler run.

  • :lineage_configuration (Types::LineageConfiguration)

    Specifies data lineage configuration settings for the crawler.

  • :lake_formation_configuration (Types::LakeFormationConfiguration)

    Specifies Lake Formation configuration settings for the crawler.

  • :configuration (String)

    Crawler configuration information. This versioned JSON string allows users to specify aspects of a crawler's behavior. For more information, see Setting crawler configuration options.

  • :crawler_security_configuration (String)

    The name of the SecurityConfiguration structure to be used by this crawler.

  • :tags (Hash<String,String>)

    The tags to use with this crawler request. You may use tags to limit access to the crawler. For more information about tags in Glue, see Amazon Web Services Tags in Glue in the developer guide.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 3358

def create_crawler(params = {}, options = {})
  req = build_request(:create_crawler, params)
  req.send_request(options)
end

#create_custom_entity_type(params = {}) ⇒ Types::CreateCustomEntityTypeResponse

Creates a custom pattern that is used to detect sensitive data across the columns and rows of your structured data.

Each custom pattern you create specifies a regular expression and an optional list of context words. If no context words are passed, only a regular expression is checked.

Examples:

Request syntax with placeholder values


resp = client.create_custom_entity_type({
  name: "NameString", # required
  regex_string: "NameString", # required
  context_words: ["NameString"],
  tags: {
    "TagKey" => "TagValue",
  },
})

Response structure


resp.name #=> String
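
A concrete sketch for a hypothetical employee-ID pattern; with context words supplied, a regex match counts as sensitive only when one of those words appears in its vicinity:


client.create_custom_entity_type({
  name: "EMPLOYEE_ID",        # hypothetical pattern name
  regex_string: "E-\\d{6}",   # matches strings such as E-123456
  context_words: ["employee", "badge"],
})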

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :name (required, String)

    A name for the custom pattern that allows it to be retrieved or deleted later. This name must be unique per Amazon Web Services account.

  • :regex_string (required, String)

    A regular expression string that is used for detecting sensitive data in a custom pattern.

  • :context_words (Array<String>)

    A list of context words. If none of these context words are found within the vicinity of the regular expression, the data will not be detected as sensitive data.

    If no context words are passed, only a regular expression is checked.

  • :tags (Hash<String,String>)

    A list of tags applied to the custom entity type.

Returns:

  • (Types::CreateCustomEntityTypeResponse)

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 3412

def create_custom_entity_type(params = {}, options = {})
  req = build_request(:create_custom_entity_type, params)
  req.send_request(options)
end

#create_data_quality_ruleset(params = {}) ⇒ Types::CreateDataQualityRulesetResponse

Creates a data quality ruleset with DQDL rules applied to a specified Glue table.

You create the ruleset using the Data Quality Definition Language (DQDL). For more information, see the Glue developer guide.

Examples:

Request syntax with placeholder values


resp = client.create_data_quality_ruleset({
  name: "NameString", # required
  description: "DescriptionString",
  ruleset: "DataQualityRulesetString", # required
  tags: {
    "TagKey" => "TagValue",
  },
  target_table: {
    table_name: "NameString", # required
    database_name: "NameString", # required
    catalog_id: "NameString",
  },
  data_quality_security_configuration: "NameString",
  client_token: "HashString",
})

Response structure


resp.name #=> String
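
A concrete sketch with a minimal DQDL ruleset (the ruleset name, rules, table, and database below are hypothetical):


resp = client.create_data_quality_ruleset({
  name: "orders-ruleset",
  ruleset: 'Rules = [ RowCount > 0, IsComplete "order_id" ]',  # minimal DQDL example
  target_table: {
    table_name: "orders",
    database_name: "sales_db",
  },
})
resp.name #=> "orders-ruleset"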

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :name (required, String)

    A unique name for the data quality ruleset.

  • :description (String)

    A description of the data quality ruleset.

  • :ruleset (required, String)

    A Data Quality Definition Language (DQDL) ruleset. For more information, see the Glue developer guide.

  • :tags (Hash<String,String>)

    A list of tags applied to the data quality ruleset.

  • :target_table (Types::DataQualityTargetTable)

    A target table associated with the data quality ruleset.

  • :data_quality_security_configuration (String)

    The name of the security configuration created with the data quality encryption option.

  • :client_token (String)

    Used for idempotency and is recommended to be set to a random ID (such as a UUID) to avoid creating or starting multiple instances of the same resource.

Returns:

  • (Types::CreateDataQualityRulesetResponse)

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 3478

def create_data_quality_ruleset(params = {}, options = {})
  req = build_request(:create_data_quality_ruleset, params)
  req.send_request(options)
end

#create_database(params = {}) ⇒ Struct

Creates a new database in a Data Catalog.

Examples:

Request syntax with placeholder values


resp = client.create_database({
  catalog_id: "CatalogIdString",
  database_input: { # required
    name: "NameString", # required
    description: "DescriptionString",
    location_uri: "URI",
    parameters: {
      "KeyString" => "ParametersMapValue",
    },
    create_table_default_permissions: [
      {
        principal: {
          data_lake_principal_identifier: "DataLakePrincipalString",
        },
        permissions: ["ALL"], # accepts ALL, SELECT, ALTER, DROP, DELETE, INSERT, CREATE_DATABASE, CREATE_TABLE, DATA_LOCATION_ACCESS
      },
    ],
    target_database: {
      catalog_id: "CatalogIdString",
      database_name: "NameString",
      region: "NameString",
    },
    federated_database: {
      identifier: "FederationIdentifier",
      connection_name: "NameString",
    },
  },
  tags: {
    "TagKey" => "TagValue",
  },
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :catalog_id (String)

    The ID of the Data Catalog in which to create the database. If none is provided, the Amazon Web Services account ID is used by default.

  • :database_input (required, Types::DatabaseInput)

    The metadata for the database.

  • :tags (Hash<String,String>)

    The tags you assign to the database.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 3535

def create_database(params = {}, options = {})
  req = build_request(:create_database, params)
  req.send_request(options)
end

#create_dev_endpoint(params = {}) ⇒ Types::CreateDevEndpointResponse

Creates a new development endpoint.

Examples:

Request syntax with placeholder values


resp = client.create_dev_endpoint({
  endpoint_name: "GenericString", # required
  role_arn: "RoleArn", # required
  security_group_ids: ["GenericString"],
  subnet_id: "GenericString",
  public_key: "GenericString",
  public_keys: ["GenericString"],
  number_of_nodes: 1,
  worker_type: "Standard", # accepts Standard, G.1X, G.2X, G.025X, G.4X, G.8X, Z.2X
  glue_version: "GlueVersionString",
  number_of_workers: 1,
  extra_python_libs_s3_path: "GenericString",
  extra_jars_s3_path: "GenericString",
  security_configuration: "NameString",
  tags: {
    "TagKey" => "TagValue",
  },
  arguments: {
    "GenericString" => "GenericString",
  },
})

Response structure


resp.endpoint_name #=> String
resp.status #=> String
resp.security_group_ids #=> Array
resp.security_group_ids[0] #=> String
resp.subnet_id #=> String
resp.role_arn #=> String
resp.yarn_endpoint_address #=> String
resp.zeppelin_remote_spark_interpreter_port #=> Integer
resp.number_of_nodes #=> Integer
resp.worker_type #=> String, one of "Standard", "G.1X", "G.2X", "G.025X", "G.4X", "G.8X", "Z.2X"
resp.glue_version #=> String
resp.number_of_workers #=> Integer
resp.availability_zone #=> String
resp.vpc_id #=> String
resp.extra_python_libs_s3_path #=> String
resp.extra_jars_s3_path #=> String
resp.failure_reason #=> String
resp.security_configuration #=> String
resp.created_timestamp #=> Time
resp.arguments #=> Hash
resp.arguments["GenericString"] #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :endpoint_name (required, String)

    The name to be assigned to the new DevEndpoint.

  • :role_arn (required, String)

    The IAM role for the DevEndpoint.

  • :security_group_ids (Array<String>)

    Security group IDs for the security groups to be used by the new DevEndpoint.

  • :subnet_id (String)

    The subnet ID for the new DevEndpoint to use.

  • :public_key (String)

    The public key to be used by this DevEndpoint for authentication. This attribute is provided for backward compatibility because the recommended attribute to use is public keys.

  • :public_keys (Array<String>)

    A list of public keys to be used by the development endpoints for authentication. The use of this attribute is preferred over a single public key because the public keys allow you to have a different private key per client.

    If you previously created an endpoint with a public key, you must remove that key to be able to set a list of public keys. Call the UpdateDevEndpoint API with the public key content in the deletePublicKeys attribute, and the list of new keys in the addPublicKeys attribute (a sketch follows this parameter list).

  • :number_of_nodes (Integer)

    The number of Glue Data Processing Units (DPUs) to allocate to this DevEndpoint.

  • :worker_type (String)

    The type of predefined worker that is allocated to the development endpoint. Accepts a value of Standard, G.1X, or G.2X.

    • For the Standard worker type, each worker provides 4 vCPU, 16 GB of memory and a 50GB disk, and 2 executors per worker.

    • For the G.1X worker type, each worker maps to 1 DPU (4 vCPU, 16 GB of memory, 64 GB disk), and provides 1 executor per worker. We recommend this worker type for memory-intensive jobs.

    • For the G.2X worker type, each worker maps to 2 DPU (8 vCPU, 32 GB of memory, 128 GB disk), and provides 1 executor per worker. We recommend this worker type for memory-intensive jobs.

    Known issue: when a development endpoint is created with the G.2X WorkerType configuration, the Spark drivers for the development endpoint will run on 4 vCPU, 16 GB of memory, and a 64 GB disk.

  • :glue_version (String)

    Glue version determines the versions of Apache Spark and Python that Glue supports. The Python version indicates the version supported for running your ETL scripts on development endpoints.

    For more information about the available Glue versions and corresponding Spark and Python versions, see Glue version in the developer guide.

    Development endpoints that are created without specifying a Glue version default to Glue 0.9.

    You can specify a version of Python support for development endpoints by using the Arguments parameter in the CreateDevEndpoint or UpdateDevEndpoint APIs. If no arguments are provided, the version defaults to Python 2.

  • :number_of_workers (Integer)

    The number of workers of a defined workerType that are allocated to the development endpoint.

    The maximum number of workers you can define is 299 for G.1X, and 149 for G.2X.

  • :extra_python_libs_s3_path (String)

    The paths to one or more Python libraries in an Amazon S3 bucket that should be loaded in your DevEndpoint. Multiple values must be complete paths separated by a comma.

    You can only use pure Python libraries with a DevEndpoint. Libraries that rely on C extensions, such as the pandas Python data analysis library, are not yet supported.

  • :extra_jars_s3_path (String)

    The path to one or more Java .jar files in an S3 bucket that should be loaded in your DevEndpoint.

  • :security_configuration (String)

    The name of the SecurityConfiguration structure to be used with this DevEndpoint.

  • :tags (Hash<String,String>)

    The tags to use with this DevEndpoint. You may use tags to limit access to the DevEndpoint. For more information about tags in Glue, see Amazon Web Services Tags in Glue in the developer guide.

  • :arguments (Hash<String,String>)

    A map of arguments used to configure the DevEndpoint.
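
As referenced in the :public_keys notes above, a sketch of rotating keys on an existing endpoint with UpdateDevEndpoint (endpoint name and key material are hypothetical):


client.update_dev_endpoint({
  endpoint_name: "my-endpoint",
  delete_public_keys: ["ssh-rsa AAAA...old-key"],
  add_public_keys: ["ssh-rsa AAAA...key-per-client-1", "ssh-rsa AAAA...key-per-client-2"],
})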

Returns:

  • (Types::CreateDevEndpointResponse)

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 3734

def create_dev_endpoint(params = {}, options = {})
  req = build_request(:create_dev_endpoint, params)
  req.send_request(options)
end

#create_integration(params = {}) ⇒ Types::CreateIntegrationResponse

Creates a Zero-ETL integration in the caller's account between two resources with Amazon Resource Names (ARNs): the SourceArn and TargetArn.

Examples:

Request syntax with placeholder values


resp = client.create_integration({
  integration_name: "String128", # required
  source_arn: "String128", # required
  target_arn: "String128", # required
  description: "IntegrationDescription",
  data_filter: "String2048",
  kms_key_id: "String2048",
  additional_encryption_context: {
    "IntegrationString" => "IntegrationString",
  },
  tags: [
    {
      key: "TagKey",
      value: "TagValue",
    },
  ],
})

Response structure


resp.source_arn #=> String
resp.target_arn #=> String
resp.integration_name #=> String
resp.description #=> String
resp.integration_arn #=> String
resp.kms_key_id #=> String
resp.additional_encryption_context #=> Hash
resp.additional_encryption_context["IntegrationString"] #=> String
resp.tags #=> Array
resp.tags[0].key #=> String
resp.tags[0].value #=> String
resp.status #=> String, one of "CREATING", "ACTIVE", "MODIFYING", "FAILED", "DELETING", "SYNCING", "NEEDS_ATTENTION"
resp.create_time #=> Time
resp.errors #=> Array
resp.errors[0].error_code #=> String
resp.errors[0].error_message #=> String
resp.data_filter #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :integration_name (required, String)

    A unique name for an integration in Glue.

  • :source_arn (required, String)

    The ARN of the source resource for the integration.

  • :target_arn (required, String)

    The ARN of the target resource for the integration.

  • :description (String)

    A description of the integration.

  • :data_filter (String)

    Selects source tables for the integration using Maxwell filter syntax.

  • :kms_key_id (String)

    The ARN of a KMS key used for encrypting the channel.

  • :additional_encryption_context (Hash<String,String>)

    An optional set of non-secret key-value pairs that contains additional contextual information for encryption. This can only be provided if KMSKeyId is provided.

  • :tags (Array<Types::Tag>)

    Metadata assigned to the resource consisting of a list of key-value pairs.

Returns:

  • (Types::CreateIntegrationResponse)

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 3829

def create_integration(params = {}, options = {})
  req = build_request(:create_integration, params)
  req.send_request(options)
end

#create_integration_resource_property(params = {}) ⇒ Types::CreateIntegrationResourcePropertyResponse

This API can be used to set up the ResourceProperty of the Glue connection (for the source) or the Glue database ARN (for the target). These properties can include the role to access the connection or database. To set both source and target properties, invoke this API twice: once with the Glue connection ARN as ResourceArn together with SourceProcessingProperties, and once with the Glue database ARN as ResourceArn together with TargetProcessingProperties.

Examples:

Request syntax with placeholder values


resp = client.create_integration_resource_property({
  resource_arn: "String128", # required
  source_processing_properties: {
    role_arn: "String128",
  },
  target_processing_properties: {
    role_arn: "String128",
    kms_arn: "String2048",
    connection_name: "String128",
    event_bus_arn: "String2048",
  },
})

Response structure


resp.resource_arn #=> String
resp.source_processing_properties.role_arn #=> String
resp.target_processing_properties.role_arn #=> String
resp.target_processing_properties.kms_arn #=> String
resp.target_processing_properties.connection_name #=> String
resp.target_processing_properties.event_bus_arn #=> String
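
A sketch of the two-call pattern described above, with hypothetical ARNs:


# Source side: the Glue connection ARN with SourceProcessingProperties
client.create_integration_resource_property({
  resource_arn: "arn:aws:glue:us-east-1:123456789012:connection/my-source-connection",
  source_processing_properties: {
    role_arn: "arn:aws:iam::123456789012:role/SourceAccessRole",
  },
})

# Target side: the Glue database ARN with TargetProcessingProperties
client.create_integration_resource_property({
  resource_arn: "arn:aws:glue:us-east-1:123456789012:database/my-target-db",
  target_processing_properties: {
    role_arn: "arn:aws:iam::123456789012:role/TargetAccessRole",
  },
})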

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :resource_arn (required, String)

    The connection ARN of the source, or the database ARN of the target.

  • :source_processing_properties (Types::SourceProcessingProperties)

    The resource properties associated with the integration source.

  • :target_processing_properties (Types::TargetProcessingProperties)

    The resource properties associated with the integration target.

Returns:

  • (Types::CreateIntegrationResourcePropertyResponse)

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 3885

def create_integration_resource_property(params = {}, options = {})
  req = build_request(:create_integration_resource_property, params)
  req.send_request(options)
end

#create_integration_table_properties(params = {}) ⇒ Struct

This API is used to provide optional override properties for the tables that need to be replicated. These properties can include properties for filtering and partitioning for the source and target tables. To set both source and target properties, the same API needs to be invoked with the Glue connection ARN as ResourceArn with SourceTableConfig, and the Glue database ARN as ResourceArn with TargetTableConfig respectively.

Examples:

Request syntax with placeholder values


resp = client.create_integration_table_properties({
  resource_arn: "String128", # required
  table_name: "String128", # required
  source_table_config: {
    fields: ["String128"],
    filter_predicate: "String128",
    primary_key: ["String128"],
    record_update_field: "String128",
  },
  target_table_config: {
    unnest_spec: "TOPLEVEL", # accepts TOPLEVEL, FULL, NOUNNEST
    partition_spec: [
      {
        field_name: "String128",
        function_spec: "String128",
      },
    ],
    target_table_name: "String128",
  },
})
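
As with the resource properties, source and target table configurations are set in separate calls; a sketch with hypothetical ARNs and a hypothetical filter predicate:


# Source filtering: invoked with the Glue connection ARN and SourceTableConfig
client.create_integration_table_properties({
  resource_arn: "arn:aws:glue:us-east-1:123456789012:connection/my-source-connection",
  table_name: "orders",
  source_table_config: { filter_predicate: "region = 'EU'" },
})

# Target partitioning: invoked with the Glue database ARN and TargetTableConfig
client.create_integration_table_properties({
  resource_arn: "arn:aws:glue:us-east-1:123456789012:database/my-target-db",
  table_name: "orders",
  target_table_config: {
    unnest_spec: "TOPLEVEL",
    target_table_name: "orders_replicated",
  },
})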

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :resource_arn (required, String)

    The connection ARN of the source, or the database ARN of the target.

  • :table_name (required, String)

    The name of the table to be replicated.

  • :source_table_config (Types::SourceTableConfig)

    A structure for the source table configuration.

  • :target_table_config (Types::TargetTableConfig)

    A structure for the target table configuration.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 3939

def create_integration_table_properties(params = {}, options = {})
  req = build_request(:create_integration_table_properties, params)
  req.send_request(options)
end

#create_job(params = {}) ⇒ Types::CreateJobResponse

Creates a new job definition.

Examples:
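
Request syntax with placeholder values (a minimal sketch; the full parameter set, including :code_gen_configuration_nodes, is too large to reproduce here, and all values below are hypothetical)


resp = client.create_job({
  name: "my-etl-job",
  role: "GlueJobRole",  # name or ARN of an IAM role
  command: {
    name: "glueetl",
    script_location: "s3://amzn-s3-demo-bucket/scripts/my_script.py",
    python_version: "3",
  },
  glue_version: "4.0",
  worker_type: "G.1X",
  number_of_workers: 2,
})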

Response structure


resp.name #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :name (required, String)

    The name you assign to this job definition. It must be unique in your account.

  • :job_mode (String)

    A mode that describes how a job was created. Valid values are:

    • SCRIPT - The job was created using the Glue Studio script editor.

    • VISUAL - The job was created using the Glue Studio visual editor.

    • NOTEBOOK - The job was created using an interactive sessions notebook.

    When the JobMode field is missing or null, SCRIPT is assigned as the default value.

  • :job_run_queuing_enabled (Boolean)

    Specifies whether job run queuing is enabled for the job runs for this job.

    A value of true means job run queuing is enabled for the job runs. If false or not populated, the job runs will not be considered for queuing.

    If this field does not match the value set in the job run, then the value from the job run field will be used.

  • :description (String)

    Description of the job being defined.

  • :log_uri (String)

    This field is reserved for future use.

  • :role (required, String)

    The name or Amazon Resource Name (ARN) of the IAM role associated with this job.

  • :execution_property (Types::ExecutionProperty)

    An ExecutionProperty specifying the maximum number of concurrent runs allowed for this job.

  • :command (required, Types::JobCommand)

    The JobCommand that runs this job.

  • :default_arguments (Hash<String,String>)

    The default arguments for every run of this job, specified as name-value pairs.

    You can specify arguments here that your own job-execution script consumes, as well as arguments that Glue itself consumes.

    Job arguments may be logged. Do not pass plaintext secrets as arguments. Retrieve secrets from a Glue Connection, Secrets Manager or other secret management mechanism if you intend to keep them within the Job.

    For information about how to specify and consume your own Job arguments, see the Calling Glue APIs in Python topic in the developer guide.

    For information about the arguments you can provide to this field when configuring Spark jobs, see the Special Parameters Used by Glue topic in the developer guide.

    For information about the arguments you can provide to this field when configuring Ray jobs, see Using job parameters in Ray jobs in the developer guide.

  • :non_overridable_arguments (Hash<String,String>)

    Arguments for this job that are not overridden when providing job arguments in a job run, specified as name-value pairs.

  • :connections (Types::ConnectionsList)

    The connections used for this job.

  • :max_retries (Integer)

    The maximum number of times to retry this job if it fails.

  • :allocated_capacity (Integer)

    This parameter is deprecated. Use MaxCapacity instead.

    The number of Glue data processing units (DPUs) to allocate to this Job. You can allocate a minimum of 2 DPUs; the default is 10. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.

  • :timeout (Integer)

    The job timeout in minutes. This is the maximum time that a job run can consume resources before it is terminated and enters TIMEOUT status.

    Jobs must have timeout values less than 7 days or 10080 minutes. Otherwise, the jobs will throw an exception.

    When the value is left blank, the timeout is defaulted to 2880 minutes.

    Any existing Glue jobs that had a timeout value greater than 7 days will be defaulted to 7 days. For instance, if you have specified a timeout of 20 days for a batch job, it will be stopped on the 7th day.

    For streaming jobs, if you have set up a maintenance window, it will be restarted during the maintenance window after 7 days.

  • :max_capacity (Float)

    For Glue version 1.0 or earlier jobs, using the standard worker type, the number of Glue data processing units (DPUs) that can be allocated when this job runs. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.

    For Glue version 2.0+ jobs, you cannot specify a Maximum capacity. Instead, you should specify a Worker type and the Number of workers.

    Do not set MaxCapacity if using WorkerType and NumberOfWorkers.

    The value that can be allocated for MaxCapacity depends on whether you are running a Python shell job, an Apache Spark ETL job, or an Apache Spark streaming ETL job:

    • When you specify a Python shell job (JobCommand.Name="pythonshell"), you can allocate either 0.0625 or 1 DPU. The default is 0.0625 DPU.

    • When you specify an Apache Spark ETL job (JobCommand.Name="glueetl") or Apache Spark streaming ETL job (JobCommand.Name="gluestreaming"), you can allocate from 2 to 100 DPUs. The default is 10 DPUs. This job type cannot have a fractional DPU allocation.

  • :security_configuration (String)

    The name of the SecurityConfiguration structure to be used with this job.

  • :tags (Hash<String,String>)

    The tags to use with this job. You may use tags to limit access to the job. For more information about tags in Glue, see Amazon Web Services Tags in Glue in the developer guide.

  • :notification_property (Types::NotificationProperty)

    Specifies configuration properties of a job notification.

  • :glue_version (String)

    In Spark jobs, GlueVersion determines the versions of Apache Spark and Python that Glue makes available in a job. The Python version indicates the version supported for jobs of type Spark.

    Ray jobs should set GlueVersion to 4.0 or greater. However, the versions of Ray, Python and additional libraries available in your Ray job are determined by the Runtime parameter of the Job command.

    For more information about the available Glue versions and corresponding Spark and Python versions, see Glue version in the developer guide.

    Jobs that are created without specifying a Glue version default to Glue 0.9.

  • :number_of_workers (Integer)

    The number of workers of a defined workerType that are allocated when a job runs.

  • :worker_type (String)

    The type of predefined worker that is allocated when a job runs. Accepts a value of G.1X, G.2X, G.4X, G.8X or G.025X for Spark jobs. Accepts the value Z.2X for Ray jobs.

    • For the G.1X worker type, each worker maps to 1 DPU (4 vCPUs, 16 GB of memory) with 94GB disk, and provides 1 executor per worker. We recommend this worker type for workloads such as data transforms, joins, and queries, as it offers a scalable and cost-effective way to run most jobs.

    • For the G.2X worker type, each worker maps to 2 DPU (8 vCPUs, 32 GB of memory) with 138GB disk, and provides 1 executor per worker. We recommend this worker type for workloads such as data transforms, joins, and queries, as it offers a scalable and cost-effective way to run most jobs.

    • For the G.4X worker type, each worker maps to 4 DPU (16 vCPUs, 64 GB of memory) with 256GB disk, and provides 1 executor per worker. We recommend this worker type for jobs whose workloads contain your most demanding transforms, aggregations, joins, and queries. This worker type is available only for Glue version 3.0 or later Spark ETL jobs in the following Amazon Web Services Regions: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm).

    • For the G.8X worker type, each worker maps to 8 DPU (32 vCPUs, 128 GB of memory) with 512GB disk, and provides 1 executor per worker. We recommend this worker type for jobs whose workloads contain your most demanding transforms, aggregations, joins, and queries. This worker type is available only for Glue version 3.0 or later Spark ETL jobs, in the same Amazon Web Services Regions as supported for the G.4X worker type.

    • For the G.025X worker type, each worker maps to 0.25 DPU (2 vCPUs, 4 GB of memory) with 84GB disk, and provides 1 executor per worker. We recommend this worker type for low volume streaming jobs. This worker type is only available for Glue version 3.0 or later streaming jobs.

    • For the Z.2X worker type, each worker maps to 2 M-DPU (8vCPUs, 64 GB of memory) with 128 GB disk, and provides up to 8 Ray workers based on the autoscaler.

  • :code_gen_configuration_nodes (Hash<String,Types::CodeGenConfigurationNode>)

    The representation of a directed acyclic graph on which both the Glue Studio visual component and Glue Studio code generation is based.

  • :execution_class (String)

    Indicates whether the job is run with a standard or flexible execution class. The standard execution-class is ideal for time-sensitive workloads that require fast job startup and dedicated resources.

    The flexible execution class is appropriate for time-insensitive jobs whose start and completion times may vary.

    Only jobs with Glue version 3.0 and above and command type glueetl will be allowed to set ExecutionClass to FLEX. The flexible execution class is available for Spark jobs.

  • :source_control_details (Types::SourceControlDetails)

    The details for a source control configuration for a job, allowing synchronization of job artifacts to or from a remote repository.

  • :maintenance_window (String)

    This field specifies a day of the week and hour for a maintenance window for streaming jobs. Glue periodically performs maintenance activities. During these maintenance windows, Glue will need to restart your streaming jobs.

    Glue will restart the job within 3 hours of the specified maintenance window. For instance, if you set up the maintenance window for Monday at 10:00AM GMT, your jobs will be restarted between 10:00AM GMT and 1:00PM GMT.

Returns:

  • (Types::CreateJobResponse)

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 4222

def create_job(params = {}, options = {})
  req = build_request(:create_job, params)
  req.send_request(options)
end

#create_ml_transform(params = {}) ⇒ Types::CreateMLTransformResponse

Creates a Glue machine learning transform. This operation creates the transform and all the necessary parameters to train it.

Call this operation as the first step in the process of using a machine learning transform (such as the FindMatches transform) for deduplicating data. You can provide an optional Description, in addition to the parameters that you want to use for your algorithm.

You must also specify certain parameters for the tasks that Glue runs on your behalf as part of learning from your data and creating a high-quality machine learning transform. These parameters include Role, and optionally, AllocatedCapacity, Timeout, and MaxRetries. For more information, see Jobs.

Examples:

Request syntax with placeholder values


resp = client.create_ml_transform({
  name: "NameString", # required
  description: "DescriptionString",
  input_record_tables: [ # required
    {
      database_name: "NameString", # required
      table_name: "NameString", # required
      catalog_id: "NameString",
      connection_name: "NameString",
      additional_options: {
        "NameString" => "DescriptionString",
      },
    },
  ],
  parameters: { # required
    transform_type: "FIND_MATCHES", # required, accepts FIND_MATCHES
    find_matches_parameters: {
      primary_key_column_name: "ColumnNameString",
      precision_recall_tradeoff: 1.0,
      accuracy_cost_tradeoff: 1.0,
      enforce_provided_labels: false,
    },
  },
  role: "RoleString", # required
  glue_version: "GlueVersionString",
  max_capacity: 1.0,
  worker_type: "Standard", # accepts Standard, G.1X, G.2X, G.025X, G.4X, G.8X, Z.2X
  number_of_workers: 1,
  timeout: 1,
  max_retries: 1,
  tags: {
    "TagKey" => "TagValue",
  },
  transform_encryption: {
    ml_user_data_encryption: {
      ml_user_data_encryption_mode: "DISABLED", # required, accepts DISABLED, SSE-KMS
      kms_key_id: "NameString",
    },
    task_run_security_configuration_name: "NameString",
  },
})

Response structure


resp.transform_id #=> String
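
Note that max_capacity is mutually exclusive with worker_type and number_of_workers (see the parameter notes below); a sketch using worker-based sizing, with hypothetical names:


client.create_ml_transform({
  name: "find-matching-customers",
  input_record_tables: [
    { database_name: "sales_db", table_name: "customers" },
  ],
  parameters: {
    transform_type: "FIND_MATCHES",
    find_matches_parameters: { primary_key_column_name: "customer_id" },
  },
  role: "GlueMLRole",
  worker_type: "G.1X",
  number_of_workers: 10,  # required when worker_type is set; omit max_capacity
})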

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :name (required, String)

    The unique name that you give the transform when you create it.

  • :description (String)

    A description of the machine learning transform that is being defined. The default is an empty string.

  • :input_record_tables (required, Array<Types::GlueTable>)

    A list of Glue table definitions used by the transform.

  • :parameters (required, Types::TransformParameters)

    The algorithmic parameters that are specific to the transform type used. Conditionally dependent on the transform type.

  • :role (required, String)

    The name or Amazon Resource Name (ARN) of the IAM role with the required permissions. The required permissions include both Glue service role permissions to Glue resources, and Amazon S3 permissions required by the transform.

    • This role needs Glue service role permissions to allow access to resources in Glue. See Attach a Policy to IAM Users That Access Glue.

    • This role needs permission to your Amazon Simple Storage Service (Amazon S3) sources, targets, temporary directory, scripts, and any libraries used by the task run for this transform.

  • :glue_version (String)

    This value determines which version of Glue this machine learning transform is compatible with. Glue 1.0 is recommended for most customers. If the value is not set, the Glue compatibility defaults to Glue 0.9. For more information, see Glue Versions in the developer guide.

  • :max_capacity (Float)

    The number of Glue data processing units (DPUs) that are allocated to task runs for this transform. You can allocate from 2 to 100 DPUs; the default is 10. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.

    MaxCapacity is a mutually exclusive option with NumberOfWorkers and WorkerType.

    • If either NumberOfWorkers or WorkerType is set, then MaxCapacity cannot be set.

    • If MaxCapacity is set, then neither NumberOfWorkers nor WorkerType can be set.

    • If WorkerType is set, then NumberOfWorkers is required (and vice versa).

    • MaxCapacity and NumberOfWorkers must both be at least 1.

    When the WorkerType field is set to a value other than Standard, the MaxCapacity field is set automatically and becomes read-only.

  • :worker_type (String)

    The type of predefined worker that is allocated when this task runs. Accepts a value of Standard, G.1X, or G.2X.

    • For the Standard worker type, each worker provides 4 vCPU, 16 GB of memory and a 50GB disk, and 2 executors per worker.

    • For the G.1X worker type, each worker provides 4 vCPU, 16 GB of memory and a 64GB disk, and 1 executor per worker.

    • For the G.2X worker type, each worker provides 8 vCPU, 32 GB of memory and a 128GB disk, and 1 executor per worker.

    MaxCapacity is a mutually exclusive option with NumberOfWorkers and WorkerType.

    • If either NumberOfWorkers or WorkerType is set, then MaxCapacity cannot be set.

    • If MaxCapacity is set, then neither NumberOfWorkers nor WorkerType can be set.

    • If WorkerType is set, then NumberOfWorkers is required (and vice versa).

    • MaxCapacity and NumberOfWorkers must both be at least 1.

  • :number_of_workers (Integer)

    The number of workers of a defined workerType that are allocated when this task runs.

    If WorkerType is set, then NumberOfWorkers is required (and vice versa).

  • :timeout (Integer)

    The timeout of the task run for this transform in minutes. This is the maximum time that a task run for this transform can consume resources before it is terminated and enters TIMEOUT status. The default is 2,880 minutes (48 hours).

  • :max_retries (Integer)

    The maximum number of times to retry a task for this transform after a task run fails.

  • :tags (Hash<String,String>)

    The tags to use with this machine learning transform. You may use tags to limit access to the machine learning transform. For more information about tags in Glue, see Amazon Web Services Tags in Glue in the developer guide.

  • :transform_encryption (Types::TransformEncryption)

    The encryption-at-rest settings of the transform that apply to accessing user data. Machine learning transforms can access user data encrypted in Amazon S3 using KMS.

Returns:

  • (Types::CreateMLTransformResponse)

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 4434

def create_ml_transform(params = {}, options = {})
  req = build_request(:create_ml_transform, params)
  req.send_request(options)
end

#create_partition(params = {}) ⇒ Struct

Creates a new partition.

Examples:

Request syntax with placeholder values


resp = client.create_partition({
  catalog_id: "CatalogIdString",
  database_name: "NameString", # required
  table_name: "NameString", # required
  partition_input: { # required
    values: ["ValueString"],
    last_access_time: Time.now,
    storage_descriptor: {
      columns: [
        {
          name: "NameString", # required
          type: "ColumnTypeString",
          comment: "CommentString",
          parameters: {
            "KeyString" => "ParametersMapValue",
          },
        },
      ],
      location: "LocationString",
      additional_locations: ["LocationString"],
      input_format: "FormatString",
      output_format: "FormatString",
      compressed: false,
      number_of_buckets: 1,
      serde_info: {
        name: "NameString",
        serialization_library: "NameString",
        parameters: {
          "KeyString" => "ParametersMapValue",
        },
      },
      bucket_columns: ["NameString"],
      sort_columns: [
        {
          column: "NameString", # required
          sort_order: 1, # required
        },
      ],
      parameters: {
        "KeyString" => "ParametersMapValue",
      },
      skewed_info: {
        skewed_column_names: ["NameString"],
        skewed_column_values: ["ColumnValuesString"],
        skewed_column_value_location_maps: {
          "ColumnValuesString" => "ColumnValuesString",
        },
      },
      stored_as_sub_directories: false,
      schema_reference: {
        schema_id: {
          schema_arn: "GlueResourceArn",
          schema_name: "SchemaRegistryNameString",
          registry_name: "SchemaRegistryNameString",
        },
        schema_version_id: "SchemaVersionIdString",
        schema_version_number: 1,
      },
    },
    parameters: {
      "KeyString" => "ParametersMapValue",
    },
    last_analyzed_time: Time.now,
  },
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :catalog_id (String)

    The Amazon Web Services account ID of the catalog in which the partition is to be created.

  • :database_name (required, String)

    The name of the metadata database in which the partition is to be created.

  • :table_name (required, String)

    The name of the metadata table in which the partition is to be created.

  • :partition_input (required, Types::PartitionInput)

    A PartitionInput structure defining the partition to be created.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 4530

def create_partition(params = {}, options = {})
  req = build_request(:create_partition, params)
  req.send_request(options)
end

#create_partition_index(params = {}) ⇒ Struct

Creates a specified partition index in an existing table.

Examples:

Request syntax with placeholder values


resp = client.create_partition_index({
  catalog_id: "CatalogIdString",
  database_name: "NameString", # required
  table_name: "NameString", # required
  partition_index: { # required
    keys: ["NameString"], # required
    index_name: "NameString", # required
  },
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :catalog_id (String)

    The catalog ID where the table resides.

  • :database_name (required, String)

    Specifies the name of a database in which you want to create a partition index.

  • :table_name (required, String)

    Specifies the name of a table in which you want to create a partition index.

  • :partition_index (required, Types::PartitionIndex)

    Specifies a PartitionIndex structure to create a partition index in an existing table.
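
    A hedged sketch, assuming a hypothetical "events" table partitioned on a dt key: the index lets partition lookups that filter on dt avoid a full partition scan. The keys must name existing partition keys of the table.

    client.create_partition_index({
      database_name: "analytics",      # hypothetical database
      table_name: "events",            # hypothetical table
      partition_index: {
        keys: ["dt"],                  # must be existing partition key names
        index_name: "dt_idx",          # hypothetical index name
      },
    })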

Returns:

  • (Struct)

    Returns an empty response.

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 4570

def create_partition_index(params = {}, options = {})
  req = build_request(:create_partition_index, params)
  req.send_request(options)
end

#create_registry(params = {}) ⇒ Types::CreateRegistryResponse

Creates a new registry which may be used to hold a collection of schemas.

Examples:

Request syntax with placeholder values


resp = client.create_registry({
  registry_name: "SchemaRegistryNameString", # required
  description: "DescriptionString",
  tags: {
    "TagKey" => "TagValue",
  },
})

Response structure


resp.registry_arn #=> String
resp.registry_name #=> String
resp.description #=> String
resp.tags #=> Hash
resp.tags["TagKey"] #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :registry_name (required, String)

    Name of the registry to be created, with a maximum length of 255 characters. The name may only contain letters, numbers, hyphens, underscores, dollar signs, or hash marks, and no whitespace.

  • :description (String)

    A description of the registry. If a description is not provided, no default value is applied.

  • :tags (Hash<String,String>)

    Amazon Web Services tags that contain a key-value pair and may be searched by console, command line, or API.
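
    A minimal sketch with hypothetical names: create a registry and keep its ARN for later schema operations.

    resp = client.create_registry({
      registry_name: "orders-registry",                 # hypothetical name
      description: "Schemas for the orders pipeline",
      tags: { "team" => "data-platform" },
    })
    resp.registry_arn #=> ARN used to reference the new registry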

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 4620

def create_registry(params = {}, options = {})
  req = build_request(:create_registry, params)
  req.send_request(options)
end

#create_schema(params = {}) ⇒ Types::CreateSchemaResponse

Creates a new schema set and registers the schema definition. Returns an error if the schema set already exists, without actually registering the version.

When the schema set is created, a version checkpoint will be set to the first version. Compatibility mode "DISABLED" restricts any additional schema versions from being added after the first schema version. For all other compatibility modes, validation of compatibility settings will be applied only from the second version onwards when the RegisterSchemaVersion API is used.

When this API is called without a RegistryId, this will create an entry for a "default-registry" in the registry database tables, if it is not already present.

Examples:

Request syntax with placeholder values


resp = client.create_schema({
  registry_id: {
    registry_name: "SchemaRegistryNameString",
    registry_arn: "GlueResourceArn",
  },
  schema_name: "SchemaRegistryNameString", # required
  data_format: "AVRO", # required, accepts AVRO, JSON, PROTOBUF
  compatibility: "NONE", # accepts NONE, DISABLED, BACKWARD, BACKWARD_ALL, FORWARD, FORWARD_ALL, FULL, FULL_ALL
  description: "DescriptionString",
  tags: {
    "TagKey" => "TagValue",
  },
  schema_definition: "SchemaDefinitionString",
})

Response structure


resp.registry_name #=> String
resp.registry_arn #=> String
resp.schema_name #=> String
resp.schema_arn #=> String
resp.description #=> String
resp.data_format #=> String, one of "AVRO", "JSON", "PROTOBUF"
resp.compatibility #=> String, one of "NONE", "DISABLED", "BACKWARD", "BACKWARD_ALL", "FORWARD", "FORWARD_ALL", "FULL", "FULL_ALL"
resp.schema_checkpoint #=> Integer
resp.latest_schema_version #=> Integer
resp.next_schema_version #=> Integer
resp.schema_status #=> String, one of "AVAILABLE", "PENDING", "DELETING"
resp.tags #=> Hash
resp.tags["TagKey"] #=> String
resp.schema_version_id #=> String
resp.schema_version_status #=> String, one of "AVAILABLE", "PENDING", "FAILURE", "DELETING"

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :registry_id (Types::RegistryId)

    This is a wrapper shape that contains the registry identity fields. If this is not provided, the default registry is used, whose ARN has the format: arn:aws:glue:us-east-2:<customer id>:registry/default-registry:random-5-letter-id.

  • :schema_name (required, String)

    Name of the schema to be created, with a maximum length of 255 characters. The name may only contain letters, numbers, hyphens, underscores, dollar signs, or hash marks, and no whitespace.

  • :data_format (required, String)

    The data format of the schema definition. Currently AVRO, JSON and PROTOBUF are supported.

  • :compatibility (String)

    The compatibility mode of the schema. The possible values are:

    • NONE: No compatibility mode applies. You can use this choice in development scenarios or if you do not know the compatibility mode that you want to apply to schemas. Any new version added will be accepted without undergoing a compatibility check.

    • DISABLED: This compatibility choice prevents versioning for a particular schema. You can use this choice to prevent future versioning of a schema.

    • BACKWARD: This compatibility choice is recommended as it allows data receivers to read both the current and one previous schema version. For example, a new schema version cannot drop data fields or change the type of these fields, because readers using the previous version would no longer be able to read the data.

    • BACKWARD_ALL: This compatibility choice allows data receivers to read both the current and all previous schema versions. You can use this choice when you need to delete fields or add optional fields, and check compatibility against all previous schema versions.

    • FORWARD: This compatibility choice allows data receivers to read both the current and one next schema version, but not necessarily later versions. You can use this choice when you need to add fields or delete optional fields, but only check compatibility against the last schema version.

    • FORWARD_ALL: This compatibility choice allows data receivers to read data written by producers of any new registered schema. You can use this choice when you need to add fields or delete optional fields, and check compatibility against all previous schema versions.

    • FULL: This compatibility choice allows data receivers to read data written by producers using the previous or next version of the schema, but not necessarily earlier or later versions. You can use this choice when you need to add or remove optional fields, but only check compatibility against the last schema version.

    • FULL_ALL: This compatibility choice allows data receivers to read data written by producers using all previous schema versions. You can use this choice when you need to add or remove optional fields, and check compatibility against all previous schema versions.

  • :description (String)

    An optional description of the schema. If a description is not provided, no default value is applied automatically.

  • :tags (Hash<String,String>)

    Amazon Web Services tags that contain a key-value pair and may be searched by console, command line, or API. If specified, follows the Amazon Web Services tags-on-create pattern.

  • :schema_definition (String)

    The schema definition using the DataFormat setting for SchemaName.
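
    A hedged sketch: registering the first version of an Avro schema with BACKWARD compatibility in the hypothetical "orders-registry" created earlier. The inline record definition is illustrative only.

    require "json"

    definition = {
      type: "record",
      name: "Order",
      fields: [
        { name: "id", type: "string" },
        { name: "amount", type: "double" },
      ],
    }.to_json

    resp = client.create_schema({
      registry_id: { registry_name: "orders-registry" }, # hypothetical registry
      schema_name: "order-event",                        # hypothetical schema
      data_format: "AVRO",
      compatibility: "BACKWARD",
      schema_definition: definition,
    })
    resp.schema_version_id #=> ID of the first (checkpoint) version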

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 4768

def create_schema(params = {}, options = {})
  req = build_request(:create_schema, params)
  req.send_request(options)
end

#create_script(params = {}) ⇒ Types::CreateScriptResponse

Transforms a directed acyclic graph (DAG) into code.

Examples:

Request syntax with placeholder values


resp = client.create_script({
  dag_nodes: [
    {
      id: "CodeGenIdentifier", # required
      node_type: "CodeGenNodeType", # required
      args: [ # required
        {
          name: "CodeGenArgName", # required
          value: "CodeGenArgValue", # required
          param: false,
        },
      ],
      line_number: 1,
    },
  ],
  dag_edges: [
    {
      source: "CodeGenIdentifier", # required
      target: "CodeGenIdentifier", # required
      target_parameter: "CodeGenArgName",
    },
  ],
  language: "PYTHON", # accepts PYTHON, SCALA
})

Response structure


resp.python_script #=> String
resp.scala_code #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :dag_nodes (Array<Types::CodeGenNode>)

    A list of the nodes in the DAG.

  • :dag_edges (Array<Types::CodeGenEdge>)

    A list of the edges in the DAG.

  • :language (String)

    The programming language of the resulting code from the DAG.
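
    A minimal sketch of a two-node DAG, a data source feeding a drop-fields transform, rendered as Python. The node types, argument names, and quoted values here are illustrative assumptions, not the only valid CodeGen shapes.

    resp = client.create_script({
      dag_nodes: [
        {
          id: "datasource0",
          node_type: "DataSource",                 # illustrative node type
          args: [
            { name: "database", value: '"analytics"' },
            { name: "table_name", value: '"events"' },
          ],
        },
        {
          id: "dropfields1",
          node_type: "DropFields",                 # illustrative node type
          args: [
            { name: "paths", value: '["internal_id"]' },
          ],
        },
      ],
      dag_edges: [
        { source: "datasource0", target: "dropfields1" },
      ],
      language: "PYTHON",
    })
    puts resp.python_script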

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 4825

def create_script(params = {}, options = {})
  req = build_request(:create_script, params)
  req.send_request(options)
end

#create_security_configuration(params = {}) ⇒ Types::CreateSecurityConfigurationResponse

Creates a new security configuration. A security configuration is a set of security properties that can be used by Glue. You can use a security configuration to encrypt data at rest. For information about using security configurations in Glue, see Encrypting Data Written by Crawlers, Jobs, and Development Endpoints.

Examples:

Request syntax with placeholder values


resp = client.create_security_configuration({
  name: "NameString", # required
  encryption_configuration: { # required
    s3_encryption: [
      {
        s3_encryption_mode: "DISABLED", # accepts DISABLED, SSE-KMS, SSE-S3
        kms_key_arn: "KmsKeyArn",
      },
    ],
    cloud_watch_encryption: {
      cloud_watch_encryption_mode: "DISABLED", # accepts DISABLED, SSE-KMS
      kms_key_arn: "KmsKeyArn",
    },
    job_bookmarks_encryption: {
      job_bookmarks_encryption_mode: "DISABLED", # accepts DISABLED, CSE-KMS
      kms_key_arn: "KmsKeyArn",
    },
    data_quality_encryption: {
      data_quality_encryption_mode: "DISABLED", # accepts DISABLED, SSE-KMS
      kms_key_arn: "KmsKeyArn",
    },
  },
})

Response structure


resp.name #=> String
resp.created_timestamp #=> Time

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :name (required, String)

    The name for the new security configuration.

  • :encryption_configuration (required, Types::EncryptionConfiguration)

    The encryption configuration for the new security configuration.
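
    A short sketch: encrypt S3 output with SSE-S3 and leave the other modes disabled. The name ("s3-sse-only", purely illustrative) is what you later pass as the :security_configuration of jobs, sessions, or development endpoints.

    client.create_security_configuration({
      name: "s3-sse-only",
      encryption_configuration: {
        s3_encryption: [
          { s3_encryption_mode: "SSE-S3" },
        ],
        cloud_watch_encryption: { cloud_watch_encryption_mode: "DISABLED" },
        job_bookmarks_encryption: { job_bookmarks_encryption_mode: "DISABLED" },
      },
    })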

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 4886

def create_security_configuration(params = {}, options = {})
  req = build_request(:create_security_configuration, params)
  req.send_request(options)
end

#create_session(params = {}) ⇒ Types::CreateSessionResponse

Creates a new session.

Examples:

Request syntax with placeholder values


resp = client.create_session({
  id: "NameString", # required
  description: "DescriptionString",
  role: "OrchestrationRoleArn", # required
  command: { # required
    name: "NameString",
    python_version: "PythonVersionString",
  },
  timeout: 1,
  idle_timeout: 1,
  default_arguments: {
    "OrchestrationNameString" => "OrchestrationArgumentsValue",
  },
  connections: {
    connections: ["GenericString"],
  },
  max_capacity: 1.0,
  number_of_workers: 1,
  worker_type: "Standard", # accepts Standard, G.1X, G.2X, G.025X, G.4X, G.8X, Z.2X
  security_configuration: "NameString",
  glue_version: "GlueVersionString",
  tags: {
    "TagKey" => "TagValue",
  },
  request_origin: "OrchestrationNameString",
})

Response structure


resp.session.id #=> String
resp.session.created_on #=> Time
resp.session.status #=> String, one of "PROVISIONING", "READY", "FAILED", "TIMEOUT", "STOPPING", "STOPPED"
resp.session.error_message #=> String
resp.session.description #=> String
resp.session.role #=> String
resp.session.command.name #=> String
resp.session.command.python_version #=> String
resp.session.default_arguments #=> Hash
resp.session.default_arguments["OrchestrationNameString"] #=> String
resp.session.connections.connections #=> Array
resp.session.connections.connections[0] #=> String
resp.session.progress #=> Float
resp.session.max_capacity #=> Float
resp.session.security_configuration #=> String
resp.session.glue_version #=> String
resp.session.number_of_workers #=> Integer
resp.session.worker_type #=> String, one of "Standard", "G.1X", "G.2X", "G.025X", "G.4X", "G.8X", "Z.2X"
resp.session.completed_on #=> Time
resp.session.execution_time #=> Float
resp.session.dpu_seconds #=> Float
resp.session.idle_timeout #=> Integer
resp.session.profile_name #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :id (required, String)

    The ID of the session request.

  • :description (String)

    The description of the session.

  • :role (required, String)

    The IAM role ARN.

  • :command (required, Types::SessionCommand)

    The SessionCommand that runs the job.

  • :timeout (Integer)

    The number of minutes before the session times out. The default for Spark ETL jobs is 48 hours (2,880 minutes). Consult the documentation for other job types.

  • :idle_timeout (Integer)

    The number of minutes of idle time before the session times out. For Spark ETL jobs, the default is the value of Timeout. Consult the documentation for other job types.

  • :default_arguments (Hash<String,String>)

    A map of key-value pairs, with a maximum of 75 pairs.

  • :connections (Types::ConnectionsList)

    The connections to use for the session.

  • :max_capacity (Float)

    The number of Glue data processing units (DPUs) that can be allocated when the job runs. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB memory.

  • :number_of_workers (Integer)

    The number of workers of a defined WorkerType to use for the session.

  • :worker_type (String)

    The type of predefined worker that is allocated when a job runs. Accepts a value of G.1X, G.2X, G.4X, or G.8X for Spark jobs. Accepts the value Z.2X for Ray notebooks.

    • For the G.1X worker type, each worker maps to 1 DPU (4 vCPUs, 16 GB of memory) with 94GB disk, and provides 1 executor per worker. We recommend this worker type for workloads such as data transforms, joins, and queries, as it offers a scalable and cost-effective way to run most jobs.

    • For the G.2X worker type, each worker maps to 2 DPU (8 vCPUs, 32 GB of memory) with 138GB disk, and provides 1 executor per worker. We recommend this worker type for workloads such as data transforms, joins, and queries, as it offers a scalable and cost-effective way to run most jobs.

    • For the G.4X worker type, each worker maps to 4 DPU (16 vCPUs, 64 GB of memory) with 256GB disk, and provides 1 executor per worker. We recommend this worker type for jobs whose workloads contain your most demanding transforms, aggregations, joins, and queries. This worker type is available only for Glue version 3.0 or later Spark ETL jobs in the following Amazon Web Services Regions: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm).

    • For the G.8X worker type, each worker maps to 8 DPU (32 vCPUs, 128 GB of memory) with 512GB disk, and provides 1 executor per worker. We recommend this worker type for jobs whose workloads contain your most demanding transforms, aggregations, joins, and queries. This worker type is available only for Glue version 3.0 or later Spark ETL jobs, in the same Amazon Web Services Regions as supported for the G.4X worker type.

    • For the Z.2X worker type, each worker maps to 2 M-DPU (8 vCPUs, 64 GB of memory) with 128 GB disk, and provides up to 8 Ray workers based on the autoscaler.

  • :security_configuration (String)

    The name of the SecurityConfiguration structure to be used with the session.

  • :glue_version (String)

    The Glue version determines the versions of Apache Spark and Python that Glue supports. The GlueVersion must be greater than 2.0.

  • :tags (Hash<String,String>)

    The map of key value pairs (tags) belonging to the session.

  • :request_origin (String)

    The origin of the request.
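
    A hedged sketch: an interactive Glue 4.0 Spark session with two G.1X workers that times out after 30 idle minutes. The session ID and role ARN are placeholders.

    resp = client.create_session({
      id: "my-spark-session",
      role: "arn:aws:iam::123456789012:role/GlueSessionRole", # hypothetical role
      command: { name: "glueetl", python_version: "3" },
      glue_version: "4.0",
      worker_type: "G.1X",
      number_of_workers: 2,
      idle_timeout: 30,
    })
    resp.session.status #=> "PROVISIONING" until the session becomes READY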

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 5047

def create_session(params = {}, options = {})
  req = build_request(:create_session, params)
  req.send_request(options)
end

#create_table(params = {}) ⇒ Struct

Creates a new table definition in the Data Catalog.

Examples:

Request syntax with placeholder values


resp = client.create_table({
  catalog_id: "CatalogIdString",
  database_name: "NameString", # required
  table_input: { # required
    name: "NameString", # required
    description: "DescriptionString",
    owner: "NameString",
    last_access_time: Time.now,
    last_analyzed_time: Time.now,
    retention: 1,
    storage_descriptor: {
      columns: [
        {
          name: "NameString", # required
          type: "ColumnTypeString",
          comment: "CommentString",
          parameters: {
            "KeyString" => "ParametersMapValue",
          },
        },
      ],
      location: "LocationString",
      additional_locations: ["LocationString"],
      input_format: "FormatString",
      output_format: "FormatString",
      compressed: false,
      number_of_buckets: 1,
      serde_info: {
        name: "NameString",
        serialization_library: "NameString",
        parameters: {
          "KeyString" => "ParametersMapValue",
        },
      },
      bucket_columns: ["NameString"],
      sort_columns: [
        {
          column: "NameString", # required
          sort_order: 1, # required
        },
      ],
      parameters: {
        "KeyString" => "ParametersMapValue",
      },
      skewed_info: {
        skewed_column_names: ["NameString"],
        skewed_column_values: ["ColumnValuesString"],
        skewed_column_value_location_maps: {
          "ColumnValuesString" => "ColumnValuesString",
        },
      },
      stored_as_sub_directories: false,
      schema_reference: {
        schema_id: {
          schema_arn: "GlueResourceArn",
          schema_name: "SchemaRegistryNameString",
          registry_name: "SchemaRegistryNameString",
        },
        schema_version_id: "SchemaVersionIdString",
        schema_version_number: 1,
      },
    },
    partition_keys: [
      {
        name: "NameString", # required
        type: "ColumnTypeString",
        comment: "CommentString",
        parameters: {
          "KeyString" => "ParametersMapValue",
        },
      },
    ],
    view_original_text: "ViewTextString",
    view_expanded_text: "ViewTextString",
    table_type: "TableTypeString",
    parameters: {
      "KeyString" => "ParametersMapValue",
    },
    target_table: {
      catalog_id: "CatalogIdString",
      database_name: "NameString",
      name: "NameString",
      region: "NameString",
    },
    view_definition: {
      is_protected: false,
      definer: "ArnString",
      representations: [
        {
          dialect: "REDSHIFT", # accepts REDSHIFT, ATHENA, SPARK
          dialect_version: "ViewDialectVersionString",
          view_original_text: "ViewTextString",
          validation_connection: "NameString",
          view_expanded_text: "ViewTextString",
        },
      ],
      sub_objects: ["ArnString"],
    },
  },
  partition_indexes: [
    {
      keys: ["NameString"], # required
      index_name: "NameString", # required
    },
  ],
  transaction_id: "TransactionIdString",
  open_table_format_input: {
    iceberg_input: {
      metadata_operation: "CREATE", # required, accepts CREATE
      version: "VersionString",
    },
  },
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :catalog_id (String)

    The ID of the Data Catalog in which to create the Table. If none is supplied, the Amazon Web Services account ID is used by default.

  • :database_name (required, String)

    The catalog database in which to create the new table. For Hive compatibility, this name is entirely lowercase.

  • :table_input (required, Types::TableInput)

    The TableInput object that defines the metadata table to create in the catalog.

  • :partition_indexes (Array<Types::PartitionIndex>)

    A list of partition indexes, PartitionIndex structures, to create in the table.

  • :transaction_id (String)

    The ID of the transaction.

  • :open_table_format_input (Types::OpenTableFormatInput)

    Specifies an OpenTableFormatInput structure when creating an open format table.
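
    For orientation, a pared-down sketch of TableInput: an external, CSV-backed table with one string partition key. The names and S3 location are illustrative; OpenCSVSerde is a standard Hive serde class.

    client.create_table({
      database_name: "analytics",                   # hypothetical database
      table_input: {
        name: "events",
        table_type: "EXTERNAL_TABLE",
        partition_keys: [{ name: "dt", type: "string" }],
        storage_descriptor: {
          columns: [
            { name: "id", type: "string" },
            { name: "amount", type: "double" },
          ],
          location: "s3://example-bucket/events/",
          input_format: "org.apache.hadoop.mapred.TextInputFormat",
          output_format: "org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat",
          serde_info: {
            serialization_library: "org.apache.hadoop.hive.serde2.OpenCSVSerde",
          },
        },
      },
    })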

Returns:

  • (Struct)

    Returns an empty response.

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 5199

def create_table(params = {}, options = {})
  req = build_request(:create_table, params)
  req.send_request(options)
end

#create_table_optimizer(params = {}) ⇒ Struct

Creates a new table optimizer for a specific function.

Examples:

Request syntax with placeholder values


resp = client.create_table_optimizer({
  catalog_id: "CatalogIdString", # required
  database_name: "NameString", # required
  table_name: "NameString", # required
  type: "compaction", # required, accepts compaction, retention, orphan_file_deletion
  table_optimizer_configuration: { # required
    role_arn: "ArnString",
    enabled: false,
    vpc_configuration: {
      glue_connection_name: "glueConnectionNameString",
    },
    retention_configuration: {
      iceberg_configuration: {
        snapshot_retention_period_in_days: 1,
        number_of_snapshots_to_retain: 1,
        clean_expired_files: false,
      },
    },
    orphan_file_deletion_configuration: {
      iceberg_configuration: {
        orphan_file_retention_period_in_days: 1,
        location: "MessageString",
      },
    },
  },
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :catalog_id (required, String)

    The Catalog ID of the table.

  • :database_name (required, String)

    The name of the database in the catalog in which the table resides.

  • :table_name (required, String)

    The name of the table.

  • :type (required, String)

    The type of table optimizer.

  • :table_optimizer_configuration (required, Types::TableOptimizerConfiguration)

    A TableOptimizerConfiguration object representing the configuration of a table optimizer.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 5257

def create_table_optimizer(params = {}, options = {})
  req = build_request(:create_table_optimizer, params)
  req.send_request(options)
end

#create_trigger(params = {}) ⇒ Types::CreateTriggerResponse

Creates a new trigger.

Job arguments may be logged. Do not pass plaintext secrets as arguments. Retrieve secrets from a Glue Connection, Amazon Web Services Secrets Manager, or another secret management mechanism if you intend to keep them within the Job.

Examples:

Request syntax with placeholder values


resp = client.create_trigger({
  name: "NameString", # required
  workflow_name: "NameString",
  type: "SCHEDULED", # required, accepts SCHEDULED, CONDITIONAL, ON_DEMAND, EVENT
  schedule: "GenericString",
  predicate: {
    logical: "AND", # accepts AND, ANY
    conditions: [
      {
        logical_operator: "EQUALS", # accepts EQUALS
        job_name: "NameString",
        state: "STARTING", # accepts STARTING, RUNNING, STOPPING, STOPPED, SUCCEEDED, FAILED, TIMEOUT, ERROR, WAITING, EXPIRED
        crawler_name: "NameString",
        crawl_state: "RUNNING", # accepts RUNNING, CANCELLING, CANCELLED, SUCCEEDED, FAILED, ERROR
      },
    ],
  },
  actions: [ # required
    {
      job_name: "NameString",
      arguments: {
        "GenericString" => "GenericString",
      },
      timeout: 1,
      security_configuration: "NameString",
      notification_property: {
        notify_delay_after: 1,
      },
      crawler_name: "NameString",
    },
  ],
  description: "DescriptionString",
  start_on_creation: false,
  tags: {
    "TagKey" => "TagValue",
  },
  event_batching_condition: {
    batch_size: 1, # required
    batch_window: 1,
  },
})

Response structure


resp.name #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :name (required, String)

    The name of the trigger.

  • :workflow_name (String)

    The name of the workflow associated with the trigger.

  • :type (required, String)

    The type of the new trigger.

  • :schedule (String)

    A cron expression used to specify the schedule (see Time-Based Schedules for Jobs and Crawlers). For example, to run something every day at 12:15 UTC, you would specify: cron(15 12 * * ? *).

    This field is required when the trigger type is SCHEDULED.

  • :predicate (Types::Predicate)

    A predicate to specify when the new trigger should fire.

    This field is required when the trigger type is CONDITIONAL.

  • :actions (required, Array<Types::Action>)

    The actions initiated by this trigger when it fires.

  • :description (String)

    A description of the new trigger.

  • :start_on_creation (Boolean)

    Set to true to start SCHEDULED and CONDITIONAL triggers when created. True is not supported for ON_DEMAND triggers.

  • :tags (Hash<String,String>)

    The tags to use with this trigger. You may use tags to limit access to the trigger. For more information about tags in Glue, see Amazon Web Services Tags in Glue in the developer guide.

  • :event_batching_condition (Types::EventBatchingCondition)

    A batch condition that must be met (a specified number of events received, or the batch time window expiring) before the EventBridge event trigger fires.
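
    A minimal sketch tying the pieces together: a scheduled trigger that starts a hypothetical job every day at 12:15 UTC and is activated as soon as it is created.

    client.create_trigger({
      name: "daily-events-trigger",            # hypothetical trigger name
      type: "SCHEDULED",
      schedule: "cron(15 12 * * ? *)",
      actions: [{ job_name: "events-etl" }],   # hypothetical job name
      start_on_creation: true,
    })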

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 5373

def create_trigger(params = {}, options = {})
  req = build_request(:create_trigger, params)
  req.send_request(options)
end

#create_usage_profile(params = {}) ⇒ Types::CreateUsageProfileResponse

Creates a Glue usage profile.

Examples:

Request syntax with placeholder values


resp = client.create_usage_profile({
  name: "NameString", # required
  description: "DescriptionString",
  configuration: { # required
    session_configuration: {
      "NameString" => {
        default_value: "ConfigValueString",
        allowed_values: ["ConfigValueString"],
        min_value: "ConfigValueString",
        max_value: "ConfigValueString",
      },
    },
    job_configuration: {
      "NameString" => {
        default_value: "ConfigValueString",
        allowed_values: ["ConfigValueString"],
        min_value: "ConfigValueString",
        max_value: "ConfigValueString",
      },
    },
  },
  tags: {
    "TagKey" => "TagValue",
  },
})

Response structure


resp.name #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :name (required, String)

    The name of the usage profile.

  • :description (String)

    A description of the usage profile.

  • :configuration (required, Types::ProfileConfiguration)

    A ProfileConfiguration object specifying the job and session values for the profile.

  • :tags (Hash<String,String>)

    A map of tags applied to the usage profile.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 5433

def create_usage_profile(params = {}, options = {})
  req = build_request(:create_usage_profile, params)
  req.send_request(options)
end

#create_user_defined_function(params = {}) ⇒ Struct

Creates a new function definition in the Data Catalog.

Examples:

Request syntax with placeholder values


resp = client.create_user_defined_function({
  catalog_id: "CatalogIdString",
  database_name: "NameString", # required
  function_input: { # required
    function_name: "NameString",
    class_name: "NameString",
    owner_name: "NameString",
    owner_type: "USER", # accepts USER, ROLE, GROUP
    resource_uris: [
      {
        resource_type: "JAR", # accepts JAR, FILE, ARCHIVE
        uri: "URI",
      },
    ],
  },
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :catalog_id (String)

    The ID of the Data Catalog in which to create the function. If none is provided, the Amazon Web Services account ID is used by default.

  • :database_name (required, String)

    The name of the catalog database in which to create the function.

  • :function_input (required, Types::UserDefinedFunctionInput)

    A FunctionInput object that defines the function to create in the Data Catalog.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 5476

def create_user_defined_function(params = {}, options = {})
  req = build_request(:create_user_defined_function, params)
  req.send_request(options)
end

#create_workflow(params = {}) ⇒ Types::CreateWorkflowResponse

Creates a new workflow.

Examples:

Request syntax with placeholder values


resp = client.create_workflow({
  name: "NameString", # required
  description: "GenericString",
  default_run_properties: {
    "IdString" => "GenericString",
  },
  tags: {
    "TagKey" => "TagValue",
  },
  max_concurrent_runs: 1,
})

Response structure


resp.name #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :name (required, String)

    The name to be assigned to the workflow. It should be unique within your account.

  • :description (String)

    A description of the workflow.

  • :default_run_properties (Hash<String,String>)

    A collection of properties to be used as part of each execution of the workflow.

    Run properties may be logged. Do not pass plaintext secrets as properties. Retrieve secrets from a Glue Connection, Amazon Web Services Secrets Manager, or another secret management mechanism if you intend to use them within the workflow run.

  • :tags (Hash<String,String>)

    The tags to be used with this workflow.

  • :max_concurrent_runs (Integer)

    You can use this parameter to prevent unwanted multiple updates to data, to control costs, or in some cases, to prevent exceeding the maximum number of concurrent runs of any of the component jobs. If you leave this parameter blank, there is no limit to the number of concurrent workflow runs.
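
    A short sketch with illustrative names: a workflow capped at one concurrent run. Triggers are then attached to it via create_trigger's :workflow_name option.

    client.create_workflow({
      name: "nightly-etl",
      description: "Nightly load of the events tables",
      max_concurrent_runs: 1,
    })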

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 5535

def create_workflow(params = {}, options = {})
  req = build_request(:create_workflow, params)
  req.send_request(options)
end

#delete_blueprint(params = {}) ⇒ Types::DeleteBlueprintResponse

Deletes an existing blueprint.

Examples:

Request syntax with placeholder values


resp = client.delete_blueprint({
  name: "NameString", # required
})

Response structure


resp.name #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :name (required, String)

    The name of the blueprint to delete.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 5563

def delete_blueprint(params = {}, options = {})
  req = build_request(:delete_blueprint, params)
  req.send_request(options)
end

#delete_catalog(params = {}) ⇒ Struct

Removes the specified catalog from the Glue Data Catalog.

After completing this operation, you no longer have access to the databases, tables (and all table versions and partitions that might belong to the tables) and the user-defined functions in the deleted catalog. Glue deletes these "orphaned" resources asynchronously in a timely manner, at the discretion of the service.

To ensure the immediate deletion of all related resources before calling the DeleteCatalog operation, use DeleteTableVersion (or BatchDeleteTableVersion), DeletePartition (or BatchDeletePartition), DeleteTable (or BatchDeleteTable), DeleteUserDefinedFunction and DeleteDatabase to delete any resources that belong to the catalog.

Examples:

Request syntax with placeholder values


resp = client.delete_catalog({
  catalog_id: "CatalogIdString", # required
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :catalog_id (required, String)

    The ID of the catalog.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 5598

def delete_catalog(params = {}, options = {})
  req = build_request(:delete_catalog, params)
  req.send_request(options)
end

#delete_classifier(params = {}) ⇒ Struct

Removes a classifier from the Data Catalog.

Examples:

Request syntax with placeholder values


resp = client.delete_classifier({
  name: "NameString", # required
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :name (required, String)

    Name of the classifier to remove.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 5620

def delete_classifier(params = {}, options = {})
  req = build_request(:delete_classifier, params)
  req.send_request(options)
end

#delete_column_statistics_for_partition(params = {}) ⇒ Struct

Deletes the partition column statistics of a specified column.

The Identity and Access Management (IAM) permission required for this operation is DeletePartition.

Examples:

Request syntax with placeholder values


resp = client.delete_column_statistics_for_partition({
  catalog_id: "CatalogIdString",
  database_name: "NameString", # required
  table_name: "NameString", # required
  partition_values: ["ValueString"], # required
  column_name: "NameString", # required
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :catalog_id (String)

    The ID of the Data Catalog where the partitions in question reside. If none is supplied, the Amazon Web Services account ID is used by default.

  • :database_name (required, String)

    The name of the catalog database where the partitions reside.

  • :table_name (required, String)

    The name of the partitions' table.

  • :partition_values (required, Array<String>)

    A list of partition values identifying the partition.

  • :column_name (required, String)

    Name of the column.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 5663

def delete_column_statistics_for_partition(params = {}, options = {})
  req = build_request(:delete_column_statistics_for_partition, params)
  req.send_request(options)
end

#delete_column_statistics_for_table(params = {}) ⇒ Struct

Deletes the table statistics of a specified column.

The Identity and Access Management (IAM) permission required for this operation is DeleteTable.

Examples:

Request syntax with placeholder values


resp = client.delete_column_statistics_for_table({
  catalog_id: "CatalogIdString",
  database_name: "NameString", # required
  table_name: "NameString", # required
  column_name: "NameString", # required
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :catalog_id (String)

    The ID of the Data Catalog where the partitions in question reside. If none is supplied, the Amazon Web Services account ID is used by default.

  • :database_name (required, String)

    The name of the catalog database where the partitions reside.

  • :table_name (required, String)

    The name of the partitions' table.

  • :column_name (required, String)

    The name of the column.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 5702

def delete_column_statistics_for_table(params = {}, options = {})
  req = build_request(:delete_column_statistics_for_table, params)
  req.send_request(options)
end

#delete_column_statistics_task_settings(params = {}) ⇒ Struct

Deletes settings for a column statistics task.

Examples:

Request syntax with placeholder values


resp = client.delete_column_statistics_task_settings({
  database_name: "NameString", # required
  table_name: "NameString", # required
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :database_name (required, String)

    The name of the database where the table resides.

  • :table_name (required, String)

    The name of the table for which to delete column statistics.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 5728

def delete_column_statistics_task_settings(params = {}, options = {})
  req = build_request(:delete_column_statistics_task_settings, params)
  req.send_request(options)
end

#delete_connection(params = {}) ⇒ Struct

Deletes a connection from the Data Catalog.

Examples:

Request syntax with placeholder values


resp = client.delete_connection({
  catalog_id: "CatalogIdString",
  connection_name: "NameString", # required
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :catalog_id (String)

    The ID of the Data Catalog in which the connection resides. If none is provided, the Amazon Web Services account ID is used by default.

  • :connection_name (required, String)

    The name of the connection to delete.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 5755

def delete_connection(params = {}, options = {})
  req = build_request(:delete_connection, params)
  req.send_request(options)
end

#delete_crawler(params = {}) ⇒ Struct

Removes a specified crawler from the Glue Data Catalog, unless the crawler state is RUNNING.

Examples:

Request syntax with placeholder values


resp = client.delete_crawler({
  name: "NameString", # required
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :name (required, String)

    The name of the crawler to remove.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 5778

def delete_crawler(params = {}, options = {})
  req = build_request(:delete_crawler, params)
  req.send_request(options)
end

#delete_custom_entity_type(params = {}) ⇒ Types::DeleteCustomEntityTypeResponse

Deletes a custom pattern by specifying its name.

Examples:

Request syntax with placeholder values


resp = client.delete_custom_entity_type({
  name: "NameString", # required
})

Response structure


resp.name #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :name (required, String)

    The name of the custom pattern that you want to delete.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 5806

def delete_custom_entity_type(params = {}, options = {})
  req = build_request(:delete_custom_entity_type, params)
  req.send_request(options)
end

#delete_data_quality_ruleset(params = {}) ⇒ Struct

Deletes a data quality ruleset.

Examples:

Request syntax with placeholder values


resp = client.delete_data_quality_ruleset({
  name: "NameString", # required
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :name (required, String)

    A name for the data quality ruleset.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 5828

def delete_data_quality_ruleset(params = {}, options = {})
  req = build_request(:delete_data_quality_ruleset, params)
  req.send_request(options)
end

#delete_database(params = {}) ⇒ Struct

Removes a specified database from a Data Catalog.

After completing this operation, you no longer have access to the tables (and all table versions and partitions that might belong to the tables) and the user-defined functions in the deleted database. Glue deletes these "orphaned" resources asynchronously in a timely manner, at the discretion of the service.

To ensure the immediate deletion of all related resources, before calling DeleteDatabase, use DeleteTableVersion or BatchDeleteTableVersion, DeletePartition or BatchDeletePartition, DeleteUserDefinedFunction, and DeleteTable or BatchDeleteTable, to delete any resources that belong to the database.
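
A sketch of that suggested order, assuming a hypothetical "analytics" database and ignoring get_tables pagination for brevity: delete the tables first so their resources are removed immediately, then drop the database itself.

db = "analytics"
tables = client.get_tables(database_name: db).table_list.map(&:name)
client.batch_delete_table(database_name: db, tables_to_delete: tables) unless tables.empty?
client.delete_database(name: db)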

Examples:

Request syntax with placeholder values


resp = client.delete_database({
  catalog_id: "CatalogIdString",
  name: "NameString", # required
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :catalog_id (String)

    The ID of the Data Catalog in which the database resides. If none is provided, the Amazon Web Services account ID is used by default.

  • :name (required, String)

    The name of the database to delete. For Hive compatibility, this must be all lowercase.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 5871

def delete_database(params = {}, options = {})
  req = build_request(:delete_database, params)
  req.send_request(options)
end

#delete_dev_endpoint(params = {}) ⇒ Struct

Deletes a specified development endpoint.

Examples:

Request syntax with placeholder values


resp = client.delete_dev_endpoint({
  endpoint_name: "GenericString", # required
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :endpoint_name (required, String)

    The name of the DevEndpoint.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 5893

def delete_dev_endpoint(params = {}, options = {})
  req = build_request(:delete_dev_endpoint, params)
  req.send_request(options)
end

#delete_integration(params = {}) ⇒ Types::DeleteIntegrationResponse

Deletes the specified Zero-ETL integration.

Examples:

Request syntax with placeholder values


resp = client.delete_integration({
  integration_identifier: "String128", # required
})

Response structure


resp.source_arn #=> String
resp.target_arn #=> String
resp.integration_name #=> String
resp.description #=> String
resp.integration_arn #=> String
resp.kms_key_id #=> String
resp.additional_encryption_context #=> Hash
resp.additional_encryption_context["IntegrationString"] #=> String
resp.tags #=> Array
resp.tags[0].key #=> String
resp.tags[0].value #=> String
resp.status #=> String, one of "CREATING", "ACTIVE", "MODIFYING", "FAILED", "DELETING", "SYNCING", "NEEDS_ATTENTION"
resp.create_time #=> Time
resp.errors #=> Array
resp.errors[0].error_code #=> String
resp.errors[0].error_message #=> String
resp.data_filter #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :integration_identifier (required, String)

    The Amazon Resource Name (ARN) for the integration.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 5948

def delete_integration(params = {}, options = {})
  req = build_request(:delete_integration, params)
  req.send_request(options)
end

#delete_integration_table_properties(params = {}) ⇒ Struct

Deletes the table properties that have been created for the tables that need to be replicated.

Examples:

Request syntax with placeholder values


resp = client.delete_integration_table_properties({
  resource_arn: "String128", # required
  table_name: "String128", # required
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :resource_arn (required, String)

    The connection ARN of the source, or the database ARN of the target.

  • :table_name (required, String)

    The name of the table to be replicated.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 5975

def delete_integration_table_properties(params = {}, options = {})
  req = build_request(:delete_integration_table_properties, params)
  req.send_request(options)
end

#delete_job(params = {}) ⇒ Types::DeleteJobResponse

Deletes a specified job definition. If the job definition is not found, no exception is thrown.

Examples:

Request syntax with placeholder values


resp = client.delete_job({
  job_name: "NameString", # required
})

Response structure


resp.job_name #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :job_name (required, String)

    The name of the job definition to delete.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 6004

def delete_job(params = {}, options = {})
  req = build_request(:delete_job, params)
  req.send_request(options)
end

#delete_ml_transform(params = {}) ⇒ Types::DeleteMLTransformResponse

Deletes a Glue machine learning transform. Machine learning transforms are a special type of transform that use machine learning to learn the details of the transformation to be performed by learning from examples provided by humans. These transformations are then saved by Glue. If you no longer need a transform, you can delete it by calling DeleteMLTransform. However, any Glue jobs that still reference the deleted transform will no longer succeed.

Examples:

Request syntax with placeholder values


resp = client.delete_ml_transform({
  transform_id: "HashString", # required
})

Response structure


resp.transform_id #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :transform_id (required, String)

    The unique identifier of the transform to delete.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 6038

def delete_ml_transform(params = {}, options = {})
  req = build_request(:delete_ml_transform, params)
  req.send_request(options)
end

#delete_partition(params = {}) ⇒ Struct

Deletes a specified partition.

Examples:

Request syntax with placeholder values


resp = client.delete_partition({
  catalog_id: "CatalogIdString",
  database_name: "NameString", # required
  table_name: "NameString", # required
  partition_values: ["ValueString"], # required
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :catalog_id (String)

    The ID of the Data Catalog where the partition to be deleted resides. If none is provided, the Amazon Web Services account ID is used by default.

  • :database_name (required, String)

    The name of the catalog database in which the table in question resides.

  • :table_name (required, String)

    The name of the table that contains the partition to be deleted.

  • :partition_values (required, Array<String>)

    The values that define the partition.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 6075

def delete_partition(params = {}, options = {})
  req = build_request(:delete_partition, params)
  req.send_request(options)
end

#delete_partition_index(params = {}) ⇒ Struct

Deletes a specified partition index from an existing table.

Examples:

Request syntax with placeholder values


resp = client.delete_partition_index({
  catalog_id: "CatalogIdString",
  database_name: "NameString", # required
  table_name: "NameString", # required
  index_name: "NameString", # required
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :catalog_id (String)

    The catalog ID where the table resides.

  • :database_name (required, String)

    Specifies the name of a database from which you want to delete a partition index.

  • :table_name (required, String)

    Specifies the name of a table from which you want to delete a partition index.

  • :index_name (required, String)

    The name of the partition index to be deleted.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 6111

def delete_partition_index(params = {}, options = {})
  req = build_request(:delete_partition_index, params)
  req.send_request(options)
end

#delete_registry(params = {}) ⇒ Types::DeleteRegistryResponse

Deletes the entire registry, including the schemas and all of their versions. To get the status of the delete operation, you can call the GetRegistry API after the asynchronous call. Deleting a registry deactivates all online operations for the registry, such as the UpdateRegistry, CreateSchema, UpdateSchema, and RegisterSchemaVersion APIs.
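
Because the delete is asynchronous, a typical sketch (with a hypothetical registry name) deletes and then checks the status via GetRegistry until the registry is gone:

resp = client.delete_registry({
  registry_id: { registry_name: "orders-registry" }, # hypothetical name
})
resp.status #=> "DELETING"

# Later, check progress; once fully deleted, get_registry raises
# Aws::Glue::Errors::EntityNotFoundException.
client.get_registry(registry_id: { registry_name: "orders-registry" }).status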

Examples:

Request syntax with placeholder values


resp = client.delete_registry({
  registry_id: { # required
    registry_name: "SchemaRegistryNameString",
    registry_arn: "GlueResourceArn",
  },
})

Response structure


resp.registry_name #=> String
resp.registry_arn #=> String
resp.status #=> String, one of "AVAILABLE", "DELETING"

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :registry_id (required, Types::RegistryId)

    This is a wrapper structure that may contain the registry name and Amazon Resource Name (ARN).

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 6152

def delete_registry(params = {}, options = {})
  req = build_request(:delete_registry, params)
  req.send_request(options)
end

#delete_resource_policy(params = {}) ⇒ Struct

Deletes a specified policy.

Examples:

Request syntax with placeholder values


resp = client.delete_resource_policy({
  policy_hash_condition: "HashString",
  resource_arn: "GlueResourceArn",
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :policy_hash_condition (String)

    The hash value returned when this policy was set.

  • :resource_arn (String)

    The ARN of the Glue resource for the resource policy to be deleted.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 6178

def delete_resource_policy(params = {}, options = {})
  req = build_request(:delete_resource_policy, params)
  req.send_request(options)
end

#delete_schema(params = {}) ⇒ Types::DeleteSchemaResponse

Deletes the entire schema set, including all of its versions. To get the status of the delete operation, you can call the GetSchema API after the asynchronous call. Deleting a schema deactivates all online operations for it, such as the GetSchemaByDefinition and RegisterSchemaVersion APIs.

Examples:

Request syntax with placeholder values


resp = client.delete_schema({
  schema_id: { # required
    schema_arn: "GlueResourceArn",
    schema_name: "SchemaRegistryNameString",
    registry_name: "SchemaRegistryNameString",
  },
})

Response structure


resp.schema_arn #=> String
resp.schema_name #=> String
resp.status #=> String, one of "AVAILABLE", "PENDING", "DELETING"

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :schema_id (required, Types::SchemaId)

    This is a wrapper structure that may contain the schema name and Amazon Resource Name (ARN).

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 6219

def delete_schema(params = {}, options = {})
  req = build_request(:delete_schema, params)
  req.send_request(options)
end

#delete_schema_versions(params = {}) ⇒ Types::DeleteSchemaVersionsResponse

Removes versions from the specified schema. A version number or range may be supplied. If the compatibility mode forbids deleting a version that is necessary, such as BACKWARDS_FULL, an error is returned. Calling the GetSchemaVersions API after this call lists the status of the deleted versions.

When the range of version numbers contains a checkpointed version, the API returns a 409 conflict and does not proceed with the deletion. You must first remove the checkpoint, using the DeleteSchemaCheckpoint API, before calling this API.

You cannot use the DeleteSchemaVersions API to delete the first schema version in the schema set. The first schema version can only be deleted by the DeleteSchema API. This operation will also delete the attached SchemaVersionMetadata under the schema versions. Hard deletes will be enforced on the database.


Examples:

Request syntax with placeholder values


resp = client.delete_schema_versions({
  schema_id: { # required
    schema_arn: "GlueResourceArn",
    schema_name: "SchemaRegistryNameString",
    registry_name: "SchemaRegistryNameString",
  },
  versions: "VersionsString", # required
})

Response structure


resp.schema_version_errors #=> Array
resp.schema_version_errors[0].version_number #=> Integer
resp.schema_version_errors[0].error_details.error_code #=> String
resp.schema_version_errors[0].error_details.error_message #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :schema_id (required, Types::SchemaId)

    This is a wrapper structure that may contain the schema name and Amazon Resource Name (ARN).

  • :versions (required, String)

    A version number or range of version numbers may be supplied, in one of the following formats:

    • a single version number, 5

    • a range, 5-8: deletes versions 5, 6, 7, and 8
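
    A brief sketch using the hypothetical schema from earlier: remove versions 2 through 4 and surface any per-version errors (for example, a checkpointed version inside the range).

    resp = client.delete_schema_versions({
      schema_id: {
        schema_name: "order-event",          # hypothetical schema
        registry_name: "orders-registry",    # hypothetical registry
      },
      versions: "2-4",
    })
    resp.schema_version_errors.each do |e|
      warn "version #{e.version_number}: #{e.error_details.error_message}"
    end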

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 6281

def delete_schema_versions(params = {}, options = {})
  req = build_request(:delete_schema_versions, params)
  req.send_request(options)
end

#delete_security_configuration(params = {}) ⇒ Struct

Deletes a specified security configuration.

Examples:

Request syntax with placeholder values


resp = client.delete_security_configuration({
  name: "NameString", # required
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :name (required, String)

    The name of the security configuration to delete.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 6303

def delete_security_configuration(params = {}, options = {})
  req = build_request(:delete_security_configuration, params)
  req.send_request(options)
end

#delete_session(params = {}) ⇒ Types::DeleteSessionResponse

Deletes the session.

Examples:

Request syntax with placeholder values


resp = client.delete_session({
  id: "NameString", # required
  request_origin: "OrchestrationNameString",
})

Response structure


resp.id #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :id (required, String)

    The ID of the session to be deleted.

  • :request_origin (String)

    The name of the origin of the delete session request.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 6335

def delete_session(params = {}, options = {})
  req = build_request(:delete_session, params)
  req.send_request(options)
end

#delete_table(params = {}) ⇒ Struct

Removes a table definition from the Data Catalog.

After completing this operation, you no longer have access to the table versions and partitions that belong to the deleted table. Glue deletes these "orphaned" resources asynchronously in a timely manner, at the discretion of the service.

To ensure the immediate deletion of all related resources, before calling DeleteTable, use DeleteTableVersion or BatchDeleteTableVersion, and DeletePartition or BatchDeletePartition, to delete any resources that belong to the table.

Examples:

Request syntax with placeholder values


resp = client.delete_table({
  catalog_id: "CatalogIdString",
  database_name: "NameString", # required
  name: "NameString", # required
  transaction_id: "TransactionIdString",
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :catalog_id (String)

    The ID of the Data Catalog where the table resides. If none is provided, the Amazon Web Services account ID is used by default.

  • :database_name (required, String)

    The name of the catalog database in which the table resides. For Hive compatibility, this name is entirely lowercase.

  • :name (required, String)

    The name of the table to be deleted. For Hive compatibility, this name is entirely lowercase.

  • :transaction_id (String)

    The transaction ID at which to delete the table contents.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 6385

def delete_table(params = {}, options = {})
  req = build_request(:delete_table, params)
  req.send_request(options)
end
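
A minimal sketch of the cleanup order described above (the database, table, version ID, and partition values are hypothetical placeholders): delete the table versions and partitions explicitly, then remove the table definition itself.


client = Aws::Glue::Client.new(region: "us-east-1")

# Remove a known table version first (see #delete_table_version).
client.delete_table_version({
  database_name: "my_database", # hypothetical database
  table_name: "my_table",       # hypothetical table
  version_id: "1",              # hypothetical version ID
})

# Remove a known partition (see #delete_partition).
client.delete_partition({
  database_name: "my_database",
  table_name: "my_table",
  partition_values: ["2024", "01"], # hypothetical partition key values
})

# Finally, remove the table definition itself.
client.delete_table({
  database_name: "my_database",
  name: "my_table",
})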

#delete_table_optimizer(params = {}) ⇒ Struct

Deletes an optimizer and all associated metadata for a table. The optimization will no longer be performed on the table.

Examples:

Request syntax with placeholder values


resp = client.delete_table_optimizer({
  catalog_id: "CatalogIdString", # required
  database_name: "NameString", # required
  table_name: "NameString", # required
  type: "compaction", # required, accepts compaction, retention, orphan_file_deletion
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :catalog_id (required, String)

    The Catalog ID of the table.

  • :database_name (required, String)

    The name of the database in the catalog in which the table resides.

  • :table_name (required, String)

    The name of the table.

  • :type (required, String)

    The type of table optimizer.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 6420

def delete_table_optimizer(params = {}, options = {})
  req = build_request(:delete_table_optimizer, params)
  req.send_request(options)
end

#delete_table_version(params = {}) ⇒ Struct

Deletes a specified version of a table.

Examples:

Request syntax with placeholder values


resp = client.delete_table_version({
  catalog_id: "CatalogIdString",
  database_name: "NameString", # required
  table_name: "NameString", # required
  version_id: "VersionString", # required
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :catalog_id (String)

    The ID of the Data Catalog where the tables reside. If none is provided, the Amazon Web Services account ID is used by default.

  • :database_name (required, String)

    The database in the catalog in which the table resides. For Hive compatibility, this name is entirely lowercase.

  • :table_name (required, String)

    The name of the table. For Hive compatibility, this name is entirely lowercase.

  • :version_id (required, String)

    The ID of the table version to be deleted. A VersionID is a string representation of an integer. Each version is incremented by 1.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 6458

def delete_table_version(params = {}, options = {})
  req = build_request(:delete_table_version, params)
  req.send_request(options)
end

#delete_trigger(params = {}) ⇒ Types::DeleteTriggerResponse

Deletes a specified trigger. If the trigger is not found, no exception is thrown.

Examples:

Request syntax with placeholder values


resp = client.delete_trigger({
  name: "NameString", # required
})

Response structure


resp.name #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :name (required, String)

    The name of the trigger to delete.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 6487

def delete_trigger(params = {}, options = {})
  req = build_request(:delete_trigger, params)
  req.send_request(options)
end

#delete_usage_profile(params = {}) ⇒ Struct

Deletes the specified Glue usage profile.

Examples:

Request syntax with placeholder values


resp = client.delete_usage_profile({
  name: "NameString", # required
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :name (required, String)

    The name of the usage profile to delete.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 6509

def delete_usage_profile(params = {}, options = {})
  req = build_request(:delete_usage_profile, params)
  req.send_request(options)
end

#delete_user_defined_function(params = {}) ⇒ Struct

Deletes an existing function definition from the Data Catalog.

Examples:

Request syntax with placeholder values


resp = client.delete_user_defined_function({
  catalog_id: "CatalogIdString",
  database_name: "NameString", # required
  function_name: "NameString", # required
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :catalog_id (String)

    The ID of the Data Catalog where the function to be deleted is located. If none is supplied, the Amazon Web Services account ID is used by default.

  • :database_name (required, String)

    The name of the catalog database where the function is located.

  • :function_name (required, String)

    The name of the function definition to be deleted.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 6541

def delete_user_defined_function(params = {}, options = {})
  req = build_request(:delete_user_defined_function, params)
  req.send_request(options)
end

#delete_workflow(params = {}) ⇒ Types::DeleteWorkflowResponse

Deletes a workflow.

Examples:

Request syntax with placeholder values


resp = client.delete_workflow({
  name: "NameString", # required
})

Response structure


resp.name #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :name (required, String)

    Name of the workflow to be deleted.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 6569

def delete_workflow(params = {}, options = {})
  req = build_request(:delete_workflow, params)
  req.send_request(options)
end

#describe_connection_type(params = {}) ⇒ Types::DescribeConnectionTypeResponse

The DescribeConnectionType API provides full details of the supported options for a given connection type in Glue.

Examples:

Request syntax with placeholder values


resp = client.describe_connection_type({
  connection_type: "NameString", # required
})

Response structure


resp.connection_type #=> String
resp.description #=> String
resp.capabilities.supported_authentication_types #=> Array
resp.capabilities.supported_authentication_types[0] #=> String, one of "BASIC", "OAUTH2", "CUSTOM", "IAM"
resp.capabilities.supported_data_operations #=> Array
resp.capabilities.supported_data_operations[0] #=> String, one of "READ", "WRITE"
resp.capabilities.supported_compute_environments #=> Array
resp.capabilities.supported_compute_environments[0] #=> String, one of "SPARK", "ATHENA", "PYTHON"
resp.connection_properties #=> Hash
resp.connection_properties["PropertyName"].name #=> String
resp.connection_properties["PropertyName"].description #=> String
resp.connection_properties["PropertyName"].required #=> Boolean
resp.connection_properties["PropertyName"].default_value #=> String
resp.connection_properties["PropertyName"].property_types #=> Array
resp.connection_properties["PropertyName"].property_types[0] #=> String, one of "USER_INPUT", "SECRET", "READ_ONLY", "UNUSED", "SECRET_OR_USER_INPUT"
resp.connection_properties["PropertyName"].allowed_values #=> Array
resp.connection_properties["PropertyName"].allowed_values[0].description #=> String
resp.connection_properties["PropertyName"].allowed_values[0].value #=> String
resp.connection_properties["PropertyName"].data_operation_scopes #=> Array
resp.connection_properties["PropertyName"].data_operation_scopes[0] #=> String, one of "READ", "WRITE"
resp.connection_options #=> Hash
resp.connection_options["PropertyName"].name #=> String
resp.connection_options["PropertyName"].description #=> String
resp.connection_options["PropertyName"].required #=> Boolean
resp.connection_options["PropertyName"].default_value #=> String
resp.connection_options["PropertyName"].property_types #=> Array
resp.connection_options["PropertyName"].property_types[0] #=> String, one of "USER_INPUT", "SECRET", "READ_ONLY", "UNUSED", "SECRET_OR_USER_INPUT"
resp.connection_options["PropertyName"].allowed_values #=> Array
resp.connection_options["PropertyName"].allowed_values[0].description #=> String
resp.connection_options["PropertyName"].allowed_values[0].value #=> String
resp.connection_options["PropertyName"].data_operation_scopes #=> Array
resp.connection_options["PropertyName"].data_operation_scopes[0] #=> String, one of "READ", "WRITE"
resp.authentication_configuration.authentication_type.name #=> String
resp.authentication_configuration.authentication_type.description #=> String
resp.authentication_configuration.authentication_type.required #=> Boolean
resp.authentication_configuration.authentication_type.default_value #=> String
resp.authentication_configuration.authentication_type.property_types #=> Array
resp.authentication_configuration.authentication_type.property_types[0] #=> String, one of "USER_INPUT", "SECRET", "READ_ONLY", "UNUSED", "SECRET_OR_USER_INPUT"
resp.authentication_configuration.authentication_type.allowed_values #=> Array
resp.authentication_configuration.authentication_type.allowed_values[0].description #=> String
resp.authentication_configuration.authentication_type.allowed_values[0].value #=> String
resp.authentication_configuration.authentication_type.data_operation_scopes #=> Array
resp.authentication_configuration.authentication_type.data_operation_scopes[0] #=> String, one of "READ", "WRITE"
resp.authentication_configuration.secret_arn.name #=> String
resp.authentication_configuration.secret_arn.description #=> String
resp.authentication_configuration.secret_arn.required #=> Boolean
resp.authentication_configuration.secret_arn.default_value #=> String
resp.authentication_configuration.secret_arn.property_types #=> Array
resp.authentication_configuration.secret_arn.property_types[0] #=> String, one of "USER_INPUT", "SECRET", "READ_ONLY", "UNUSED", "SECRET_OR_USER_INPUT"
resp.authentication_configuration.secret_arn.allowed_values #=> Array
resp.authentication_configuration.secret_arn.allowed_values[0].description #=> String
resp.authentication_configuration.secret_arn.allowed_values[0].value #=> String
resp.authentication_configuration.secret_arn.data_operation_scopes #=> Array
resp.authentication_configuration.secret_arn.data_operation_scopes[0] #=> String, one of "READ", "WRITE"
resp.authentication_configuration.o_auth_2_properties #=> Hash
resp.authentication_configuration.o_auth_2_properties["PropertyName"].name #=> String
resp.authentication_configuration.o_auth_2_properties["PropertyName"].description #=> String
resp.authentication_configuration.o_auth_2_properties["PropertyName"].required #=> Boolean
resp.authentication_configuration.o_auth_2_properties["PropertyName"].default_value #=> String
resp.authentication_configuration.o_auth_2_properties["PropertyName"].property_types #=> Array
resp.authentication_configuration.o_auth_2_properties["PropertyName"].property_types[0] #=> String, one of "USER_INPUT", "SECRET", "READ_ONLY", "UNUSED", "SECRET_OR_USER_INPUT"
resp.authentication_configuration.o_auth_2_properties["PropertyName"].allowed_values #=> Array
resp.authentication_configuration.o_auth_2_properties["PropertyName"].allowed_values[0].description #=> String
resp.authentication_configuration.o_auth_2_properties["PropertyName"].allowed_values[0].value #=> String
resp.authentication_configuration.o_auth_2_properties["PropertyName"].data_operation_scopes #=> Array
resp.authentication_configuration.o_auth_2_properties["PropertyName"].data_operation_scopes[0] #=> String, one of "READ", "WRITE"
resp.authentication_configuration.basic_authentication_properties #=> Hash
resp.authentication_configuration.basic_authentication_properties["PropertyName"].name #=> String
resp.authentication_configuration.basic_authentication_properties["PropertyName"].description #=> String
resp.authentication_configuration.basic_authentication_properties["PropertyName"].required #=> Boolean
resp.authentication_configuration.basic_authentication_properties["PropertyName"].default_value #=> String
resp.authentication_configuration.basic_authentication_properties["PropertyName"].property_types #=> Array
resp.authentication_configuration.basic_authentication_properties["PropertyName"].property_types[0] #=> String, one of "USER_INPUT", "SECRET", "READ_ONLY", "UNUSED", "SECRET_OR_USER_INPUT"
resp.authentication_configuration.basic_authentication_properties["PropertyName"].allowed_values #=> Array
resp.authentication_configuration.basic_authentication_properties["PropertyName"].allowed_values[0].description #=> String
resp.authentication_configuration.basic_authentication_properties["PropertyName"].allowed_values[0].value #=> String
resp.authentication_configuration.basic_authentication_properties["PropertyName"].data_operation_scopes #=> Array
resp.authentication_configuration.basic_authentication_properties["PropertyName"].data_operation_scopes[0] #=> String, one of "READ", "WRITE"
resp.authentication_configuration.custom_authentication_properties #=> Hash
resp.authentication_configuration.custom_authentication_properties["PropertyName"].name #=> String
resp.authentication_configuration.custom_authentication_properties["PropertyName"].description #=> String
resp.authentication_configuration.custom_authentication_properties["PropertyName"].required #=> Boolean
resp.authentication_configuration.custom_authentication_properties["PropertyName"].default_value #=> String
resp.authentication_configuration.custom_authentication_properties["PropertyName"].property_types #=> Array
resp.authentication_configuration.custom_authentication_properties["PropertyName"].property_types[0] #=> String, one of "USER_INPUT", "SECRET", "READ_ONLY", "UNUSED", "SECRET_OR_USER_INPUT"
resp.authentication_configuration.custom_authentication_properties["PropertyName"].allowed_values #=> Array
resp.authentication_configuration.custom_authentication_properties["PropertyName"].allowed_values[0].description #=> String
resp.authentication_configuration.custom_authentication_properties["PropertyName"].allowed_values[0].value #=> String
resp.authentication_configuration.custom_authentication_properties["PropertyName"].data_operation_scopes #=> Array
resp.authentication_configuration.custom_authentication_properties["PropertyName"].data_operation_scopes[0] #=> String, one of "READ", "WRITE"
resp.compute_environment_configurations #=> Hash
resp.compute_environment_configurations["ComputeEnvironmentName"].name #=> String
resp.compute_environment_configurations["ComputeEnvironmentName"].description #=> String
resp.compute_environment_configurations["ComputeEnvironmentName"].compute_environment #=> String, one of "SPARK", "ATHENA", "PYTHON"
resp.compute_environment_configurations["ComputeEnvironmentName"].supported_authentication_types #=> Array
resp.compute_environment_configurations["ComputeEnvironmentName"].supported_authentication_types[0] #=> String, one of "BASIC", "OAUTH2", "CUSTOM", "IAM"
resp.compute_environment_configurations["ComputeEnvironmentName"].connection_options #=> Hash
resp.compute_environment_configurations["ComputeEnvironmentName"].connection_options["PropertyName"].name #=> String
resp.compute_environment_configurations["ComputeEnvironmentName"].connection_options["PropertyName"].description #=> String
resp.compute_environment_configurations["ComputeEnvironmentName"].connection_options["PropertyName"].required #=> Boolean
resp.compute_environment_configurations["ComputeEnvironmentName"].connection_options["PropertyName"].default_value #=> String
resp.compute_environment_configurations["ComputeEnvironmentName"].connection_options["PropertyName"].property_types #=> Array
resp.compute_environment_configurations["ComputeEnvironmentName"].connection_options["PropertyName"].property_types[0] #=> String, one of "USER_INPUT", "SECRET", "READ_ONLY", "UNUSED", "SECRET_OR_USER_INPUT"
resp.compute_environment_configurations["ComputeEnvironmentName"].connection_options["PropertyName"].allowed_values #=> Array
resp.compute_environment_configurations["ComputeEnvironmentName"].connection_options["PropertyName"].allowed_values[0].description #=> String
resp.compute_environment_configurations["ComputeEnvironmentName"].connection_options["PropertyName"].allowed_values[0].value #=> String
resp.compute_environment_configurations["ComputeEnvironmentName"].connection_options["PropertyName"].data_operation_scopes #=> Array
resp.compute_environment_configurations["ComputeEnvironmentName"].connection_options["PropertyName"].data_operation_scopes[0] #=> String, one of "READ", "WRITE"
resp.compute_environment_configurations["ComputeEnvironmentName"].connection_property_name_overrides #=> Hash
resp.compute_environment_configurations["ComputeEnvironmentName"].connection_property_name_overrides["PropertyName"] #=> String
resp.compute_environment_configurations["ComputeEnvironmentName"].connection_option_name_overrides #=> Hash
resp.compute_environment_configurations["ComputeEnvironmentName"].connection_option_name_overrides["PropertyName"] #=> String
resp.compute_environment_configurations["ComputeEnvironmentName"].connection_properties_required_overrides #=> Array
resp.compute_environment_configurations["ComputeEnvironmentName"].connection_properties_required_overrides[0] #=> String
resp.compute_environment_configurations["ComputeEnvironmentName"].physical_connection_properties_required #=> Boolean
resp.physical_connection_requirements #=> Hash
resp.physical_connection_requirements["PropertyName"].name #=> String
resp.physical_connection_requirements["PropertyName"].description #=> String
resp.physical_connection_requirements["PropertyName"].required #=> Boolean
resp.physical_connection_requirements["PropertyName"].default_value #=> String
resp.physical_connection_requirements["PropertyName"].property_types #=> Array
resp.physical_connection_requirements["PropertyName"].property_types[0] #=> String, one of "USER_INPUT", "SECRET", "READ_ONLY", "UNUSED", "SECRET_OR_USER_INPUT"
resp.physical_connection_requirements["PropertyName"].allowed_values #=> Array
resp.physical_connection_requirements["PropertyName"].allowed_values[0].description #=> String
resp.physical_connection_requirements["PropertyName"].allowed_values[0].value #=> String
resp.physical_connection_requirements["PropertyName"].data_operation_scopes #=> Array
resp.physical_connection_requirements["PropertyName"].data_operation_scopes[0] #=> String, one of "READ", "WRITE"
resp.athena_connection_properties #=> Hash
resp.athena_connection_properties["PropertyName"].name #=> String
resp.athena_connection_properties["PropertyName"].description #=> String
resp.athena_connection_properties["PropertyName"].required #=> Boolean
resp.athena_connection_properties["PropertyName"].default_value #=> String
resp.athena_connection_properties["PropertyName"].property_types #=> Array
resp.athena_connection_properties["PropertyName"].property_types[0] #=> String, one of "USER_INPUT", "SECRET", "READ_ONLY", "UNUSED", "SECRET_OR_USER_INPUT"
resp.athena_connection_properties["PropertyName"].allowed_values #=> Array
resp.athena_connection_properties["PropertyName"].allowed_values[0].description #=> String
resp.athena_connection_properties["PropertyName"].allowed_values[0].value #=> String
resp.athena_connection_properties["PropertyName"].data_operation_scopes #=> Array
resp.athena_connection_properties["PropertyName"].data_operation_scopes[0] #=> String, one of "READ", "WRITE"
resp.python_connection_properties #=> Hash
resp.python_connection_properties["PropertyName"].name #=> String
resp.python_connection_properties["PropertyName"].description #=> String
resp.python_connection_properties["PropertyName"].required #=> Boolean
resp.python_connection_properties["PropertyName"].default_value #=> String
resp.python_connection_properties["PropertyName"].property_types #=> Array
resp.python_connection_properties["PropertyName"].property_types[0] #=> String, one of "USER_INPUT", "SECRET", "READ_ONLY", "UNUSED", "SECRET_OR_USER_INPUT"
resp.python_connection_properties["PropertyName"].allowed_values #=> Array
resp.python_connection_properties["PropertyName"].allowed_values[0].description #=> String
resp.python_connection_properties["PropertyName"].allowed_values[0].value #=> String
resp.python_connection_properties["PropertyName"].data_operation_scopes #=> Array
resp.python_connection_properties["PropertyName"].data_operation_scopes[0] #=> String, one of "READ", "WRITE"
resp.spark_connection_properties #=> Hash
resp.spark_connection_properties["PropertyName"].name #=> String
resp.spark_connection_properties["PropertyName"].description #=> String
resp.spark_connection_properties["PropertyName"].required #=> Boolean
resp.spark_connection_properties["PropertyName"].default_value #=> String
resp.spark_connection_properties["PropertyName"].property_types #=> Array
resp.spark_connection_properties["PropertyName"].property_types[0] #=> String, one of "USER_INPUT", "SECRET", "READ_ONLY", "UNUSED", "SECRET_OR_USER_INPUT"
resp.spark_connection_properties["PropertyName"].allowed_values #=> Array
resp.spark_connection_properties["PropertyName"].allowed_values[0].description #=> String
resp.spark_connection_properties["PropertyName"].allowed_values[0].value #=> String
resp.spark_connection_properties["PropertyName"].data_operation_scopes #=> Array
resp.spark_connection_properties["PropertyName"].data_operation_scopes[0] #=> String, one of "READ", "WRITE"

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :connection_type (required, String)

    The name of the connection type to be described.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 6770

def describe_connection_type(params = {}, options = {})
  req = build_request(:describe_connection_type, params)
  req.send_request(options)
end
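
Because the response is large, a common pattern is to inspect only the connection properties. A minimal sketch (the connection type value is illustrative; substitute one supported in your account):


client = Aws::Glue::Client.new(region: "us-east-1")

resp = client.describe_connection_type({
  connection_type: "MYSQL", # illustrative value only
})

# Print each property with whether it is required and its default value.
resp.connection_properties.each do |name, prop|
  puts "#{name}: required=#{prop.required}, default=#{prop.default_value.inspect}"
end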

#describe_entity(params = {}) ⇒ Types::DescribeEntityResponse

Provides details regarding the entity used with the connection type, with a description of the data model for each field in the selected entity.

The response includes all the fields which make up the entity.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.describe_entity({
  connection_name: "NameString", # required
  catalog_id: "CatalogIdString",
  entity_name: "EntityName", # required
  next_token: "NextToken",
  data_store_api_version: "ApiVersion",
})

Response structure


resp.fields #=> Array
resp.fields[0].field_name #=> String
resp.fields[0].label #=> String
resp.fields[0].description #=> String
resp.fields[0].field_type #=> String, one of "INT", "SMALLINT", "BIGINT", "FLOAT", "LONG", "DATE", "BOOLEAN", "MAP", "ARRAY", "STRING", "TIMESTAMP", "DECIMAL", "BYTE", "SHORT", "DOUBLE", "STRUCT"
resp.fields[0].is_primary_key #=> Boolean
resp.fields[0].is_nullable #=> Boolean
resp.fields[0].is_retrievable #=> Boolean
resp.fields[0].is_filterable #=> Boolean
resp.fields[0].is_partitionable #=> Boolean
resp.fields[0].is_createable #=> Boolean
resp.fields[0].is_updateable #=> Boolean
resp.fields[0].is_upsertable #=> Boolean
resp.fields[0].is_default_on_create #=> Boolean
resp.fields[0].supported_values #=> Array
resp.fields[0].supported_values[0] #=> String
resp.fields[0].supported_filter_operators #=> Array
resp.fields[0].supported_filter_operators[0] #=> String, one of "LESS_THAN", "GREATER_THAN", "BETWEEN", "EQUAL_TO", "NOT_EQUAL_TO", "GREATER_THAN_OR_EQUAL_TO", "LESS_THAN_OR_EQUAL_TO", "CONTAINS", "ORDER_BY"
resp.fields[0].parent_field #=> String
resp.fields[0].native_data_type #=> String
resp.fields[0].custom_properties #=> Hash
resp.fields[0].custom_properties["String"] #=> String
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :connection_name (required, String)

    The name of the connection that contains the connection type credentials.

  • :catalog_id (String)

    The catalog ID of the catalog that contains the connection. This can be null. By default, the Amazon Web Services account ID is the catalog ID.

  • :entity_name (required, String)

    The name of the entity that you want to describe from the connection type.

  • :next_token (String)

    A continuation token, included if this is a continuation call.

  • :data_store_api_version (String)

    The version of the API used for the data store.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 6847

def describe_entity(params = {}, options = {})
  req = build_request(:describe_entity, params)
  req.send_request(options)
end
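
Because the response is pageable, the SDK can enumerate every page for you. A minimal sketch (the connection and entity names are hypothetical):


client = Aws::Glue::Client.new(region: "us-east-1")

resp = client.describe_entity({
  connection_name: "my-connection", # hypothetical connection
  entity_name: "Account",           # hypothetical entity
})

# Each page of the pageable response is yielded in turn.
resp.each_page do |page|
  page.fields.each do |field|
    puts "#{field.field_name} (#{field.field_type}) nullable=#{field.is_nullable}"
  end
end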

#describe_inbound_integrations(params = {}) ⇒ Types::DescribeInboundIntegrationsResponse

Returns a list of inbound integrations for the specified integration.

Examples:

Request syntax with placeholder values


resp = client.describe_inbound_integrations({
  integration_arn: "String128",
  marker: "String128",
  max_records: 1,
  target_arn: "String128",
})

Response structure


resp.inbound_integrations #=> Array
resp.inbound_integrations[0].source_arn #=> String
resp.inbound_integrations[0].target_arn #=> String
resp.inbound_integrations[0].integration_arn #=> String
resp.inbound_integrations[0].status #=> String, one of "CREATING", "ACTIVE", "MODIFYING", "FAILED", "DELETING", "SYNCING", "NEEDS_ATTENTION"
resp.inbound_integrations[0].create_time #=> Time
resp.inbound_integrations[0].errors #=> Array
resp.inbound_integrations[0].errors[0].error_code #=> String
resp.inbound_integrations[0].errors[0].error_message #=> String
resp.marker #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :integration_arn (String)

    The Amazon Resource Name (ARN) of the integration.

  • :marker (String)

    A token to specify where to start paginating. This is the marker from a previously truncated response.

  • :max_records (Integer)

    The total number of items to return in the output.

  • :target_arn (String)

    The Amazon Resource Name (ARN) of the target resource in the integration.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 6899

def describe_inbound_integrations(params = {}, options = {})
  req = build_request(:describe_inbound_integrations, params)
  req.send_request(options)
end
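
This operation pages with a marker rather than a continuation token: pass the returned marker back until no marker is returned. A minimal sketch (the target ARN is a hypothetical placeholder):


client = Aws::Glue::Client.new(region: "us-east-1")

params = {
  target_arn: "arn:aws:glue:us-east-1:123456789012:catalog", # hypothetical ARN
}

loop do
  resp = client.describe_inbound_integrations(params)
  resp.inbound_integrations.each do |integration|
    puts "#{integration.integration_arn}: #{integration.status}"
  end
  break unless resp.marker # no marker means the listing is complete
  params[:marker] = resp.marker
end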

#describe_integrations(params = {}) ⇒ Types::DescribeIntegrationsResponse

Retrieves a list of integrations.

Examples:

Request syntax with placeholder values


resp = client.describe_integrations({
  integration_identifier: "String128",
  marker: "String128",
  max_records: 1,
  filters: [
    {
      name: "String128",
      values: ["String128"],
    },
  ],
})

Response structure


resp.integrations #=> Array
resp.integrations[0].source_arn #=> String
resp.integrations[0].target_arn #=> String
resp.integrations[0].description #=> String
resp.integrations[0].integration_name #=> String
resp.integrations[0].integration_arn #=> String
resp.integrations[0].kms_key_id #=> String
resp.integrations[0].additional_encryption_context #=> Hash
resp.integrations[0].additional_encryption_context["IntegrationString"] #=> String
resp.integrations[0].tags #=> Array
resp.integrations[0].tags[0].key #=> String
resp.integrations[0].tags[0].value #=> String
resp.integrations[0].status #=> String, one of "CREATING", "ACTIVE", "MODIFYING", "FAILED", "DELETING", "SYNCING", "NEEDS_ATTENTION"
resp.integrations[0].create_time #=> Time
resp.integrations[0].errors #=> Array
resp.integrations[0].errors[0].error_code #=> String
resp.integrations[0].errors[0].error_message #=> String
resp.integrations[0].data_filter #=> String
resp.marker #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :integration_identifier (String)

    The Amazon Resource Name (ARN) for the integration.

  • :marker (String)

    A value that indicates the starting point for the next set of response records in a subsequent request.

  • :max_records (Integer)

    The total number of items to return in the output.

  • :filters (Array<Types::IntegrationFilter>)

    A list of keys and values used to filter down the results. Supported keys are "Status", "IntegrationName", and "SourceArn". "IntegrationName" is limited to only one value.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 6966

def describe_integrations(params = {}, options = {})
  req = build_request(:describe_integrations, params)
  req.send_request(options)
end
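
A minimal sketch that uses the documented filter keys to list only active integrations:


client = Aws::Glue::Client.new(region: "us-east-1")

resp = client.describe_integrations({
  filters: [
    { name: "Status", values: ["ACTIVE"] }, # filter key from the list above
  ],
})

resp.integrations.each do |integration|
  puts "#{integration.integration_name}: #{integration.status}"
end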

#get_blueprint(params = {}) ⇒ Types::GetBlueprintResponse

Retrieves the details of a blueprint.

Examples:

Request syntax with placeholder values


resp = client.get_blueprint({
  name: "NameString", # required
  include_blueprint: false,
  include_parameter_spec: false,
})

Response structure


resp.blueprint.name #=> String
resp.blueprint.description #=> String
resp.blueprint.created_on #=> Time
resp.blueprint.last_modified_on #=> Time
resp.blueprint.parameter_spec #=> String
resp.blueprint.blueprint_location #=> String
resp.blueprint.blueprint_service_location #=> String
resp.blueprint.status #=> String, one of "CREATING", "ACTIVE", "UPDATING", "FAILED"
resp.blueprint.error_message #=> String
resp.blueprint.last_active_definition.description #=> String
resp.blueprint.last_active_definition.last_modified_on #=> Time
resp.blueprint.last_active_definition.parameter_spec #=> String
resp.blueprint.last_active_definition.blueprint_location #=> String
resp.blueprint.last_active_definition.blueprint_service_location #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :name (required, String)

    The name of the blueprint.

  • :include_blueprint (Boolean)

    Specifies whether or not to include the blueprint in the response.

  • :include_parameter_spec (Boolean)

    Specifies whether or not to include the parameter specification.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 7015

def get_blueprint(params = {}, options = {})
  req = build_request(:get_blueprint, params)
  req.send_request(options)
end

#get_blueprint_run(params = {}) ⇒ Types::GetBlueprintRunResponse

Retrieves the details of a blueprint run.

Examples:

Request syntax with placeholder values


resp = client.get_blueprint_run({
  blueprint_name: "OrchestrationNameString", # required
  run_id: "IdString", # required
})

Response structure


resp.blueprint_run.blueprint_name #=> String
resp.blueprint_run.run_id #=> String
resp.blueprint_run.workflow_name #=> String
resp.blueprint_run.state #=> String, one of "RUNNING", "SUCCEEDED", "FAILED", "ROLLING_BACK"
resp.blueprint_run.started_on #=> Time
resp.blueprint_run.completed_on #=> Time
resp.blueprint_run.error_message #=> String
resp.blueprint_run.rollback_error_message #=> String
resp.blueprint_run.parameters #=> String
resp.blueprint_run.role_arn #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :blueprint_name (required, String)

    The name of the blueprint.

  • :run_id (required, String)

    The run ID for the blueprint run you want to retrieve.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 7056

def get_blueprint_run(params = {}, options = {})
  req = build_request(:get_blueprint_run, params)
  req.send_request(options)
end
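
Because a blueprint run moves through the RUNNING, SUCCEEDED, FAILED, and ROLLING_BACK states shown above, a simple way to wait for completion is to poll. A minimal sketch (the blueprint name and run ID are hypothetical):


client = Aws::Glue::Client.new(region: "us-east-1")

run = nil
loop do
  run = client.get_blueprint_run({
    blueprint_name: "my-blueprint", # hypothetical blueprint
    run_id: "run-0123456789",       # hypothetical run ID
  }).blueprint_run
  break unless run.state == "RUNNING"
  sleep 15 # polling interval is a tuning choice, not an API requirement
end

puts "final state: #{run.state}"
warn run.error_message if run.error_message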

#get_blueprint_runs(params = {}) ⇒ Types::GetBlueprintRunsResponse

Retrieves the details of blueprint runs for a specified blueprint.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.get_blueprint_runs({
  blueprint_name: "NameString", # required
  next_token: "GenericString",
  max_results: 1,
})

Response structure


resp.blueprint_runs #=> Array
resp.blueprint_runs[0].blueprint_name #=> String
resp.blueprint_runs[0].run_id #=> String
resp.blueprint_runs[0].workflow_name #=> String
resp.blueprint_runs[0].state #=> String, one of "RUNNING", "SUCCEEDED", "FAILED", "ROLLING_BACK"
resp.blueprint_runs[0].started_on #=> Time
resp.blueprint_runs[0].completed_on #=> Time
resp.blueprint_runs[0].error_message #=> String
resp.blueprint_runs[0].rollback_error_message #=> String
resp.blueprint_runs[0].parameters #=> String
resp.blueprint_runs[0].role_arn #=> String
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :blueprint_name (required, String)

    The name of the blueprint.

  • :next_token (String)

    A continuation token, if this is a continuation request.

  • :max_results (Integer)

    The maximum size of a list to return.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 7106

def get_blueprint_runs(params = {}, options = {})
  req = build_request(:get_blueprint_runs, params)
  req.send_request(options)
end

#get_catalog(params = {}) ⇒ Types::GetCatalogResponse

Retrieves the catalog with the specified catalog ID from the Glue Data Catalog.

Examples:

Request syntax with placeholder values


resp = client.get_catalog({
  catalog_id: "CatalogIdString", # required
})

Response structure


resp.catalog.catalog_id #=> String
resp.catalog.name #=> String
resp.catalog.resource_arn #=> String
resp.catalog.description #=> String
resp.catalog.parameters #=> Hash
resp.catalog.parameters["KeyString"] #=> String
resp.catalog.create_time #=> Time
resp.catalog.update_time #=> Time
resp.catalog.target_redshift_catalog.catalog_arn #=> String
resp.catalog.federated_catalog.identifier #=> String
resp.catalog.federated_catalog.connection_name #=> String
resp.catalog.catalog_properties.data_lake_access_properties.data_lake_access #=> Boolean
resp.catalog.catalog_properties.data_lake_access_properties.data_transfer_role #=> String
resp.catalog.catalog_properties.data_lake_access_properties.kms_key #=> String
resp.catalog.catalog_properties.data_lake_access_properties.managed_workgroup_name #=> String
resp.catalog.catalog_properties.data_lake_access_properties.managed_workgroup_status #=> String
resp.catalog.catalog_properties.data_lake_access_properties.redshift_database_name #=> String
resp.catalog.catalog_properties.data_lake_access_properties.status_message #=> String
resp.catalog.catalog_properties.data_lake_access_properties.catalog_type #=> String
resp.catalog.catalog_properties.custom_properties #=> Hash
resp.catalog.catalog_properties.custom_properties["KeyString"] #=> String
resp.catalog.create_table_default_permissions #=> Array
resp.catalog.create_table_default_permissions[0].principal.data_lake_principal_identifier #=> String
resp.catalog.create_table_default_permissions[0].permissions #=> Array
resp.catalog.create_table_default_permissions[0].permissions[0] #=> String, one of "ALL", "SELECT", "ALTER", "DROP", "DELETE", "INSERT", "CREATE_DATABASE", "CREATE_TABLE", "DATA_LOCATION_ACCESS"
resp.catalog.create_database_default_permissions #=> Array
resp.catalog.create_database_default_permissions[0].principal.data_lake_principal_identifier #=> String
resp.catalog.create_database_default_permissions[0].permissions #=> Array
resp.catalog.create_database_default_permissions[0].permissions[0] #=> String, one of "ALL", "SELECT", "ALTER", "DROP", "DELETE", "INSERT", "CREATE_DATABASE", "CREATE_TABLE", "DATA_LOCATION_ACCESS"

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :catalog_id (required, String)

    The ID of the parent catalog in which the catalog resides. If none is provided, the Amazon Web Services Account Number is used by default.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 7163

def get_catalog(params = {}, options = {})
  req = build_request(:get_catalog, params)
  req.send_request(options)
end

#get_catalog_import_status(params = {}) ⇒ Types::GetCatalogImportStatusResponse

Retrieves the status of a migration operation.

Examples:

Request syntax with placeholder values


resp = client.get_catalog_import_status({
  catalog_id: "CatalogIdString",
})

Response structure


resp.import_status.import_completed #=> Boolean
resp.import_status.import_time #=> Time
resp.import_status.imported_by #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :catalog_id (String)

    The ID of the catalog to migrate. Currently, this should be the Amazon Web Services account ID.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 7194

def get_catalog_import_status(params = {}, options = {})
  req = build_request(:get_catalog_import_status, params)
  req.send_request(options)
end

#get_catalogs(params = {}) ⇒ Types::GetCatalogsResponse

Retrieves all catalogs defined in a catalog in the Glue Data Catalog. For a Redshift-federated catalog use case, this operation returns the list of catalogs mapped to Redshift databases in the Redshift namespace catalog.

Examples:

Request syntax with placeholder values


resp = client.get_catalogs({
  parent_catalog_id: "CatalogIdString",
  next_token: "Token",
  max_results: 1,
  recursive: false,
  include_root: false,
})

Response structure


resp.catalog_list #=> Array
resp.catalog_list[0].catalog_id #=> String
resp.catalog_list[0].name #=> String
resp.catalog_list[0].resource_arn #=> String
resp.catalog_list[0].description #=> String
resp.catalog_list[0].parameters #=> Hash
resp.catalog_list[0].parameters["KeyString"] #=> String
resp.catalog_list[0].create_time #=> Time
resp.catalog_list[0].update_time #=> Time
resp.catalog_list[0].target_redshift_catalog.catalog_arn #=> String
resp.catalog_list[0].federated_catalog.identifier #=> String
resp.catalog_list[0].federated_catalog.connection_name #=> String
resp.catalog_list[0].catalog_properties.data_lake_access_properties.data_lake_access #=> Boolean
resp.catalog_list[0].catalog_properties.data_lake_access_properties.data_transfer_role #=> String
resp.catalog_list[0].catalog_properties.data_lake_access_properties.kms_key #=> String
resp.catalog_list[0].catalog_properties.data_lake_access_properties.managed_workgroup_name #=> String
resp.catalog_list[0].catalog_properties.data_lake_access_properties.managed_workgroup_status #=> String
resp.catalog_list[0].catalog_properties.data_lake_access_properties.redshift_database_name #=> String
resp.catalog_list[0].catalog_properties.data_lake_access_properties.status_message #=> String
resp.catalog_list[0].catalog_properties.data_lake_access_properties.catalog_type #=> String
resp.catalog_list[0].catalog_properties.custom_properties #=> Hash
resp.catalog_list[0].catalog_properties.custom_properties["KeyString"] #=> String
resp.catalog_list[0].create_table_default_permissions #=> Array
resp.catalog_list[0].create_table_default_permissions[0].principal.data_lake_principal_identifier #=> String
resp.catalog_list[0].create_table_default_permissions[0].permissions #=> Array
resp.catalog_list[0].create_table_default_permissions[0].permissions[0] #=> String, one of "ALL", "SELECT", "ALTER", "DROP", "DELETE", "INSERT", "CREATE_DATABASE", "CREATE_TABLE", "DATA_LOCATION_ACCESS"
resp.catalog_list[0].create_database_default_permissions #=> Array
resp.catalog_list[0].create_database_default_permissions[0].principal.data_lake_principal_identifier #=> String
resp.catalog_list[0].create_database_default_permissions[0].permissions #=> Array
resp.catalog_list[0].create_database_default_permissions[0].permissions[0] #=> String, one of "ALL", "SELECT", "ALTER", "DROP", "DELETE", "INSERT", "CREATE_DATABASE", "CREATE_TABLE", "DATA_LOCATION_ACCESS"
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :parent_catalog_id (String)

    The ID of the parent catalog in which the catalog resides. If none is provided, the Amazon Web Services Account Number is used by default.

  • :next_token (String)

    A continuation token, if this is a continuation call.

  • :max_results (Integer)

    The maximum number of catalogs to return in one response.

  • :recursive (Boolean)

    Whether to list all catalogs across the catalog hierarchy, starting from the ParentCatalogId. Defaults to false. When true, all catalog objects in the ParentCatalogId hierarchy are enumerated in the response.

  • :include_root (Boolean)

    Whether to list the default catalog in the account and region in the response. Defaults to false. When true and ParentCatalogId is null or equal to the Amazon Web Services account ID, all catalogs and the default catalog are enumerated in the response.

    When ParentCatalogId is not null and this attribute is passed (whether false or true), an InvalidInputException is thrown.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 7282

def get_catalogs(params = {}, options = {})
  req = build_request(:get_catalogs, params)
  req.send_request(options)
end
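
A minimal sketch that enumerates the whole hierarchy plus the default catalog by combining :recursive and :include_root, leaving :parent_catalog_id unset as the :include_root documentation requires:


client = Aws::Glue::Client.new(region: "us-east-1")

resp = client.get_catalogs({
  recursive: true,    # walk the whole hierarchy from the account root
  include_root: true, # also list the default catalog
})

resp.catalog_list.each do |catalog|
  puts "#{catalog.catalog_id}: #{catalog.name}"
end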

#get_classifier(params = {}) ⇒ Types::GetClassifierResponse

Retrieves a classifier by name.

Examples:

Request syntax with placeholder values


resp = client.get_classifier({
  name: "NameString", # required
})

Response structure


resp.classifier.grok_classifier.name #=> String
resp.classifier.grok_classifier.classification #=> String
resp.classifier.grok_classifier.creation_time #=> Time
resp.classifier.grok_classifier.last_updated #=> Time
resp.classifier.grok_classifier.version #=> Integer
resp.classifier.grok_classifier.grok_pattern #=> String
resp.classifier.grok_classifier.custom_patterns #=> String
resp.classifier.xml_classifier.name #=> String
resp.classifier.xml_classifier.classification #=> String
resp.classifier.xml_classifier.creation_time #=> Time
resp.classifier.xml_classifier.last_updated #=> Time
resp.classifier.xml_classifier.version #=> Integer
resp.classifier.xml_classifier.row_tag #=> String
resp.classifier.json_classifier.name #=> String
resp.classifier.json_classifier.creation_time #=> Time
resp.classifier.json_classifier.last_updated #=> Time
resp.classifier.json_classifier.version #=> Integer
resp.classifier.json_classifier.json_path #=> String
resp.classifier.csv_classifier.name #=> String
resp.classifier.csv_classifier.creation_time #=> Time
resp.classifier.csv_classifier.last_updated #=> Time
resp.classifier.csv_classifier.version #=> Integer
resp.classifier.csv_classifier.delimiter #=> String
resp.classifier.csv_classifier.quote_symbol #=> String
resp.classifier.csv_classifier.contains_header #=> String, one of "UNKNOWN", "PRESENT", "ABSENT"
resp.classifier.csv_classifier.header #=> Array
resp.classifier.csv_classifier.header[0] #=> String
resp.classifier.csv_classifier.disable_value_trimming #=> Boolean
resp.classifier.csv_classifier.allow_single_column #=> Boolean
resp.classifier.csv_classifier.custom_datatype_configured #=> Boolean
resp.classifier.csv_classifier.custom_datatypes #=> Array
resp.classifier.csv_classifier.custom_datatypes[0] #=> String
resp.classifier.csv_classifier.serde #=> String, one of "OpenCSVSerDe", "LazySimpleSerDe", "None"

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :name (required, String)

    Name of the classifier to retrieve.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 7342

def get_classifier(params = {}, options = {})
  req = build_request(:get_classifier, params)
  req.send_request(options)
end
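
The returned classifier is a union-like structure: only one of the grok, xml, json, or csv members is populated. A minimal sketch that dispatches on whichever member is set (the classifier name is hypothetical):


client = Aws::Glue::Client.new(region: "us-east-1")

c = client.get_classifier({ name: "my-classifier" }).classifier

if c.grok_classifier
  puts "grok pattern: #{c.grok_classifier.grok_pattern}"
elsif c.xml_classifier
  puts "xml row tag: #{c.xml_classifier.row_tag}"
elsif c.json_classifier
  puts "json path: #{c.json_classifier.json_path}"
elsif c.csv_classifier
  puts "csv delimiter: #{c.csv_classifier.delimiter}"
end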

#get_classifiers(params = {}) ⇒ Types::GetClassifiersResponse

Lists all classifier objects in the Data Catalog.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.get_classifiers({
  max_results: 1,
  next_token: "Token",
})

Response structure


resp.classifiers #=> Array
resp.classifiers[0].grok_classifier.name #=> String
resp.classifiers[0].grok_classifier.classification #=> String
resp.classifiers[0].grok_classifier.creation_time #=> Time
resp.classifiers[0].grok_classifier.last_updated #=> Time
resp.classifiers[0].grok_classifier.version #=> Integer
resp.classifiers[0].grok_classifier.grok_pattern #=> String
resp.classifiers[0].grok_classifier.custom_patterns #=> String
resp.classifiers[0].xml_classifier.name #=> String
resp.classifiers[0].xml_classifier.classification #=> String
resp.classifiers[0].xml_classifier.creation_time #=> Time
resp.classifiers[0].xml_classifier.last_updated #=> Time
resp.classifiers[0].xml_classifier.version #=> Integer
resp.classifiers[0].xml_classifier.row_tag #=> String
resp.classifiers[0].json_classifier.name #=> String
resp.classifiers[0].json_classifier.creation_time #=> Time
resp.classifiers[0].json_classifier.last_updated #=> Time
resp.classifiers[0].json_classifier.version #=> Integer
resp.classifiers[0].json_classifier.json_path #=> String
resp.classifiers[0].csv_classifier.name #=> String
resp.classifiers[0].csv_classifier.creation_time #=> Time
resp.classifiers[0].csv_classifier.last_updated #=> Time
resp.classifiers[0].csv_classifier.version #=> Integer
resp.classifiers[0].csv_classifier.delimiter #=> String
resp.classifiers[0].csv_classifier.quote_symbol #=> String
resp.classifiers[0].csv_classifier.contains_header #=> String, one of "UNKNOWN", "PRESENT", "ABSENT"
resp.classifiers[0].csv_classifier.header #=> Array
resp.classifiers[0].csv_classifier.header[0] #=> String
resp.classifiers[0].csv_classifier.disable_value_trimming #=> Boolean
resp.classifiers[0].csv_classifier.allow_single_column #=> Boolean
resp.classifiers[0].csv_classifier.custom_datatype_configured #=> Boolean
resp.classifiers[0].csv_classifier.custom_datatypes #=> Array
resp.classifiers[0].csv_classifier.custom_datatypes[0] #=> String
resp.classifiers[0].csv_classifier.serde #=> String, one of "OpenCSVSerDe", "LazySimpleSerDe", "None"
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :max_results (Integer)

    The size of the list to return (optional).

  • :next_token (String)

    An optional continuation token.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 7411

def get_classifiers(params = {}, options = {})
  req = build_request(:get_classifiers, params)
  req.send_request(options)
end

#get_column_statistics_for_partition(params = {}) ⇒ Types::GetColumnStatisticsForPartitionResponse

Retrieves partition statistics of columns.

The Identity and Access Management (IAM) permission required for this operation is GetPartition.

Examples:

Request syntax with placeholder values


resp = client.get_column_statistics_for_partition({
  catalog_id: "CatalogIdString",
  database_name: "NameString", # required
  table_name: "NameString", # required
  partition_values: ["ValueString"], # required
  column_names: ["NameString"], # required
})

Response structure


resp.column_statistics_list #=> Array
resp.column_statistics_list[0].column_name #=> String
resp.column_statistics_list[0].column_type #=> String
resp.column_statistics_list[0].analyzed_time #=> Time
resp.column_statistics_list[0].statistics_data.type #=> String, one of "BOOLEAN", "DATE", "DECIMAL", "DOUBLE", "LONG", "STRING", "BINARY"
resp.column_statistics_list[0].statistics_data.boolean_column_statistics_data.number_of_trues #=> Integer
resp.column_statistics_list[0].statistics_data.boolean_column_statistics_data.number_of_falses #=> Integer
resp.column_statistics_list[0].statistics_data.boolean_column_statistics_data.number_of_nulls #=> Integer
resp.column_statistics_list[0].statistics_data.date_column_statistics_data.minimum_value #=> Time
resp.column_statistics_list[0].statistics_data.date_column_statistics_data.maximum_value #=> Time
resp.column_statistics_list[0].statistics_data.date_column_statistics_data.number_of_nulls #=> Integer
resp.column_statistics_list[0].statistics_data.date_column_statistics_data.number_of_distinct_values #=> Integer
resp.column_statistics_list[0].statistics_data.decimal_column_statistics_data.minimum_value.unscaled_value #=> String
resp.column_statistics_list[0].statistics_data.decimal_column_statistics_data.minimum_value.scale #=> Integer
resp.column_statistics_list[0].statistics_data.decimal_column_statistics_data.maximum_value.unscaled_value #=> String
resp.column_statistics_list[0].statistics_data.decimal_column_statistics_data.maximum_value.scale #=> Integer
resp.column_statistics_list[0].statistics_data.decimal_column_statistics_data.number_of_nulls #=> Integer
resp.column_statistics_list[0].statistics_data.decimal_column_statistics_data.number_of_distinct_values #=> Integer
resp.column_statistics_list[0].statistics_data.double_column_statistics_data.minimum_value #=> Float
resp.column_statistics_list[0].statistics_data.double_column_statistics_data.maximum_value #=> Float
resp.column_statistics_list[0].statistics_data.double_column_statistics_data.number_of_nulls #=> Integer
resp.column_statistics_list[0].statistics_data.double_column_statistics_data.number_of_distinct_values #=> Integer
resp.column_statistics_list[0].statistics_data.long_column_statistics_data.minimum_value #=> Integer
resp.column_statistics_list[0].statistics_data.long_column_statistics_data.maximum_value #=> Integer
resp.column_statistics_list[0].statistics_data.long_column_statistics_data.number_of_nulls #=> Integer
resp.column_statistics_list[0].statistics_data.long_column_statistics_data.number_of_distinct_values #=> Integer
resp.column_statistics_list[0].statistics_data.string_column_statistics_data.maximum_length #=> Integer
resp.column_statistics_list[0].statistics_data.string_column_statistics_data.average_length #=> Float
resp.column_statistics_list[0].statistics_data.string_column_statistics_data.number_of_nulls #=> Integer
resp.column_statistics_list[0].statistics_data.string_column_statistics_data.number_of_distinct_values #=> Integer
resp.column_statistics_list[0].statistics_data.binary_column_statistics_data.maximum_length #=> Integer
resp.column_statistics_list[0].statistics_data.binary_column_statistics_data.average_length #=> Float
resp.column_statistics_list[0].statistics_data.binary_column_statistics_data.number_of_nulls #=> Integer
resp.errors #=> Array
resp.errors[0].column_name #=> String
resp.errors[0].error.error_code #=> String
resp.errors[0].error.error_message #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :catalog_id (String)

    The ID of the Data Catalog where the partitions in question reside. If none is supplied, the Amazon Web Services account ID is used by default.

  • :database_name (required, String)

    The name of the catalog database where the partitions reside.

  • :table_name (required, String)

    The name of the partitions' table.

  • :partition_values (required, Array<String>)

    A list of partition values identifying the partition.

  • :column_names (required, Array<String>)

    A list of the column names.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 7497

def get_column_statistics_for_partition(params = {}, options = {})
  req = build_request(:get_column_statistics_for_partition, params)
  req.send_request(options)
end

#get_column_statistics_for_table(params = {}) ⇒ Types::GetColumnStatisticsForTableResponse

Retrieves table statistics of columns.

The Identity and Access Management (IAM) permission required for this operation is GetTable.

Examples:

Request syntax with placeholder values


resp = client.get_column_statistics_for_table({
  catalog_id: "CatalogIdString",
  database_name: "NameString", # required
  table_name: "NameString", # required
  column_names: ["NameString"], # required
})

Response structure


resp.column_statistics_list #=> Array
resp.column_statistics_list[0].column_name #=> String
resp.column_statistics_list[0].column_type #=> String
resp.column_statistics_list[0].analyzed_time #=> Time
resp.column_statistics_list[0].statistics_data.type #=> String, one of "BOOLEAN", "DATE", "DECIMAL", "DOUBLE", "LONG", "STRING", "BINARY"
resp.column_statistics_list[0].statistics_data.boolean_column_statistics_data.number_of_trues #=> Integer
resp.column_statistics_list[0].statistics_data.boolean_column_statistics_data.number_of_falses #=> Integer
resp.column_statistics_list[0].statistics_data.boolean_column_statistics_data.number_of_nulls #=> Integer
resp.column_statistics_list[0].statistics_data.date_column_statistics_data.minimum_value #=> Time
resp.column_statistics_list[0].statistics_data.date_column_statistics_data.maximum_value #=> Time
resp.column_statistics_list[0].statistics_data.date_column_statistics_data.number_of_nulls #=> Integer
resp.column_statistics_list[0].statistics_data.date_column_statistics_data.number_of_distinct_values #=> Integer
resp.column_statistics_list[0].statistics_data.decimal_column_statistics_data.minimum_value.unscaled_value #=> String
resp.column_statistics_list[0].statistics_data.decimal_column_statistics_data.minimum_value.scale #=> Integer
resp.column_statistics_list[0].statistics_data.decimal_column_statistics_data.maximum_value.unscaled_value #=> String
resp.column_statistics_list[0].statistics_data.decimal_column_statistics_data.maximum_value.scale #=> Integer
resp.column_statistics_list[0].statistics_data.decimal_column_statistics_data.number_of_nulls #=> Integer
resp.column_statistics_list[0].statistics_data.decimal_column_statistics_data.number_of_distinct_values #=> Integer
resp.column_statistics_list[0].statistics_data.double_column_statistics_data.minimum_value #=> Float
resp.column_statistics_list[0].statistics_data.double_column_statistics_data.maximum_value #=> Float
resp.column_statistics_list[0].statistics_data.double_column_statistics_data.number_of_nulls #=> Integer
resp.column_statistics_list[0].statistics_data.double_column_statistics_data.number_of_distinct_values #=> Integer
resp.column_statistics_list[0].statistics_data.long_column_statistics_data.minimum_value #=> Integer
resp.column_statistics_list[0].statistics_data.long_column_statistics_data.maximum_value #=> Integer
resp.column_statistics_list[0].statistics_data.long_column_statistics_data.number_of_nulls #=> Integer
resp.column_statistics_list[0].statistics_data.long_column_statistics_data.number_of_distinct_values #=> Integer
resp.column_statistics_list[0].statistics_data.string_column_statistics_data.maximum_length #=> Integer
resp.column_statistics_list[0].statistics_data.string_column_statistics_data.average_length #=> Float
resp.column_statistics_list[0].statistics_data.string_column_statistics_data.number_of_nulls #=> Integer
resp.column_statistics_list[0].statistics_data.string_column_statistics_data.number_of_distinct_values #=> Integer
resp.column_statistics_list[0].statistics_data.binary_column_statistics_data.maximum_length #=> Integer
resp.column_statistics_list[0].statistics_data.binary_column_statistics_data.average_length #=> Float
resp.column_statistics_list[0].statistics_data.binary_column_statistics_data.number_of_nulls #=> Integer
resp.errors #=> Array
resp.errors[0].column_name #=> String
resp.errors[0].error.error_code #=> String
resp.errors[0].error.error_message #=> String

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :catalog_id (String)

    The ID of the Data Catalog where the table in question resides. If none is supplied, the Amazon Web Services account ID is used by default.

  • :database_name (required, String)

    The name of the catalog database where the table resides.

  • :table_name (required, String)

    The name of the table.

  • :column_names (required, Array<String>)

    A list of the column names.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 7579

def get_column_statistics_for_table(params = {}, options = {})
  req = build_request(:get_column_statistics_for_table, params)
  req.send_request(options)
end
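
Statistics are returned per column, so a problem with one column surfaces in resp.errors rather than raising an exception. A minimal handling sketch (the database, table, and column names below are hypothetical):

resp = client.get_column_statistics_for_table({
  database_name: "sales_db",           # hypothetical database
  table_name: "orders",                # hypothetical table
  column_names: ["order_id", "total"], # hypothetical columns
})

resp.column_statistics_list.each do |stats|
  puts "#{stats.column_name} (#{stats.column_type}) analyzed at #{stats.analyzed_time}"
end

resp.errors.each do |err|
  warn "#{err.column_name}: #{err.error.error_code} - #{err.error.error_message}"
end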

#get_column_statistics_task_run(params = {}) ⇒ Types::GetColumnStatisticsTaskRunResponse

Gets the associated metadata/information for a task run, given a task run ID.

Examples:

Request syntax with placeholder values


resp = client.get_column_statistics_task_run({
  column_statistics_task_run_id: "HashString", # required
})

Response structure


resp.column_statistics_task_run.customer_id #=> String
resp.column_statistics_task_run.column_statistics_task_run_id #=> String
resp.column_statistics_task_run.database_name #=> String
resp.column_statistics_task_run.table_name #=> String
resp.column_statistics_task_run.column_name_list #=> Array
resp.column_statistics_task_run.column_name_list[0] #=> String
resp.column_statistics_task_run.catalog_id #=> String
resp.column_statistics_task_run.role #=> String
resp.column_statistics_task_run.sample_size #=> Float
resp.column_statistics_task_run.security_configuration #=> String
resp.column_statistics_task_run.number_of_workers #=> Integer
resp.column_statistics_task_run.worker_type #=> String
resp.column_statistics_task_run.computation_type #=> String, one of "FULL", "INCREMENTAL"
resp.column_statistics_task_run.status #=> String, one of "STARTING", "RUNNING", "SUCCEEDED", "FAILED", "STOPPED"
resp.column_statistics_task_run.creation_time #=> Time
resp.column_statistics_task_run.last_updated #=> Time
resp.column_statistics_task_run.start_time #=> Time
resp.column_statistics_task_run.end_time #=> Time
resp.column_statistics_task_run.error_message #=> String
resp.column_statistics_task_run.dpu_seconds #=> Float

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :column_statistics_task_run_id (required, String)

    The identifier for the particular column statistics task run.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 7627

def get_column_statistics_task_run(params = {}, options = {})
  req = build_request(:get_column_statistics_task_run, params)
  req.send_request(options)
end
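
A run reports "STARTING" or "RUNNING" until it reaches a terminal status, so callers typically poll this operation. A sketch of such a loop (the run ID and sleep interval are illustrative, not prescribed by the API):

run_id = "example-task-run-id" # hypothetical ID from a start call

run = nil
loop do
  run = client.get_column_statistics_task_run(
    column_statistics_task_run_id: run_id
  ).column_statistics_task_run
  break unless %w[STARTING RUNNING].include?(run.status)
  sleep 30 # illustrative polling interval
end
puts "run finished with status #{run.status}"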

#get_column_statistics_task_runs(params = {}) ⇒ Types::GetColumnStatisticsTaskRunsResponse

Retrieves information about all runs associated with the specified table.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.get_column_statistics_task_runs({
  database_name: "DatabaseName", # required
  table_name: "NameString", # required
  max_results: 1,
  next_token: "Token",
})

Response structure


resp.column_statistics_task_runs #=> Array
resp.column_statistics_task_runs[0].customer_id #=> String
resp.column_statistics_task_runs[0].column_statistics_task_run_id #=> String
resp.column_statistics_task_runs[0].database_name #=> String
resp.column_statistics_task_runs[0].table_name #=> String
resp.column_statistics_task_runs[0].column_name_list #=> Array
resp.column_statistics_task_runs[0].column_name_list[0] #=> String
resp.column_statistics_task_runs[0].catalog_id #=> String
resp.column_statistics_task_runs[0].role #=> String
resp.column_statistics_task_runs[0].sample_size #=> Float
resp.column_statistics_task_runs[0].security_configuration #=> String
resp.column_statistics_task_runs[0].number_of_workers #=> Integer
resp.column_statistics_task_runs[0].worker_type #=> String
resp.column_statistics_task_runs[0].computation_type #=> String, one of "FULL", "INCREMENTAL"
resp.column_statistics_task_runs[0].status #=> String, one of "STARTING", "RUNNING", "SUCCEEDED", "FAILED", "STOPPED"
resp.column_statistics_task_runs[0].creation_time #=> Time
resp.column_statistics_task_runs[0].last_updated #=> Time
resp.column_statistics_task_runs[0].start_time #=> Time
resp.column_statistics_task_runs[0].end_time #=> Time
resp.column_statistics_task_runs[0].error_message #=> String
resp.column_statistics_task_runs[0].dpu_seconds #=> Float
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :database_name (required, String)

    The name of the database where the table resides.

  • :table_name (required, String)

    The name of the table.

  • :max_results (Integer)

    The maximum number of task runs to return in one response.

  • :next_token (String)

    A continuation token, if this is a continuation call.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 7692

def get_column_statistics_task_runs(params = {}, options = {})
  req = build_request(:get_column_statistics_task_runs, params)
  req.send_request(options)
end
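
Because the response is pageable, iterating with each follows next_token automatically and yields one page per iteration. A minimal sketch (the database and table names are hypothetical):

client.get_column_statistics_task_runs(
  database_name: "sales_db", # hypothetical database
  table_name: "orders"       # hypothetical table
).each do |page|
  page.column_statistics_task_runs.each do |run|
    puts "#{run.column_statistics_task_run_id}: #{run.status}"
  end
end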

#get_column_statistics_task_settings(params = {}) ⇒ Types::GetColumnStatisticsTaskSettingsResponse

Gets settings for a column statistics task.

Examples:

Request syntax with placeholder values


resp = client.get_column_statistics_task_settings({
  database_name: "NameString", # required
  table_name: "NameString", # required
})

Response structure


resp.column_statistics_task_settings.database_name #=> String
resp.column_statistics_task_settings.table_name #=> String
resp.column_statistics_task_settings.schedule.schedule_expression #=> String
resp.column_statistics_task_settings.schedule.state #=> String, one of "SCHEDULED", "NOT_SCHEDULED", "TRANSITIONING"
resp.column_statistics_task_settings.column_name_list #=> Array
resp.column_statistics_task_settings.column_name_list[0] #=> String
resp.column_statistics_task_settings.catalog_id #=> String
resp.column_statistics_task_settings.role #=> String
resp.column_statistics_task_settings.sample_size #=> Float
resp.column_statistics_task_settings.security_configuration #=> String
resp.column_statistics_task_settings.schedule_type #=> String, one of "CRON", "AUTO"
resp.column_statistics_task_settings.setting_source #=> String, one of "CATALOG", "TABLE"
resp.column_statistics_task_settings.last_execution_attempt.status #=> String, one of "FAILED", "STARTED"
resp.column_statistics_task_settings.last_execution_attempt.column_statistics_task_run_id #=> String
resp.column_statistics_task_settings.last_execution_attempt.execution_timestamp #=> Time
resp.column_statistics_task_settings.last_execution_attempt.error_message #=> String

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :database_name (required, String)

    The name of the database where the table resides.

  • :table_name (required, String)

    The name of the table for which to retrieve column statistics.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 7739

def get_column_statistics_task_settings(params = {}, options = {})
  req = build_request(:get_column_statistics_task_settings, params)
  req.send_request(options)
end

#get_connection(params = {}) ⇒ Types::GetConnectionResponse

Retrieves a connection definition from the Data Catalog.

Examples:

Request syntax with placeholder values


resp = client.get_connection({
  catalog_id: "CatalogIdString",
  name: "NameString", # required
  hide_password: false,
  apply_override_for_compute_environment: "SPARK", # accepts SPARK, ATHENA, PYTHON
})

Response structure


resp.connection.name #=> String
resp.connection.description #=> String
resp.connection.connection_type #=> String, one of "JDBC", "SFTP", "MONGODB", "KAFKA", "NETWORK", "MARKETPLACE", "CUSTOM", "SALESFORCE", "VIEW_VALIDATION_REDSHIFT", "VIEW_VALIDATION_ATHENA", "GOOGLEADS", "GOOGLESHEETS", "GOOGLEANALYTICS4", "SERVICENOW", "MARKETO", "SAPODATA", "ZENDESK", "JIRACLOUD", "NETSUITEERP", "HUBSPOT", "FACEBOOKADS", "INSTAGRAMADS", "ZOHOCRM", "SALESFORCEPARDOT", "SALESFORCEMARKETINGCLOUD", "SLACK", "STRIPE", "INTERCOM", "SNAPCHATADS"
resp.connection.match_criteria #=> Array
resp.connection.match_criteria[0] #=> String
resp.connection.connection_properties #=> Hash
resp.connection.connection_properties["ConnectionPropertyKey"] #=> String
resp.connection.spark_properties #=> Hash
resp.connection.spark_properties["PropertyKey"] #=> String
resp.connection.athena_properties #=> Hash
resp.connection.athena_properties["PropertyKey"] #=> String
resp.connection.python_properties #=> Hash
resp.connection.python_properties["PropertyKey"] #=> String
resp.connection.physical_connection_requirements.subnet_id #=> String
resp.connection.physical_connection_requirements.security_group_id_list #=> Array
resp.connection.physical_connection_requirements.security_group_id_list[0] #=> String
resp.connection.physical_connection_requirements.availability_zone #=> String
resp.connection.creation_time #=> Time
resp.connection.last_updated_time #=> Time
resp.connection.last_updated_by #=> String
resp.connection.status #=> String, one of "READY", "IN_PROGRESS", "FAILED"
resp.connection.status_reason #=> String
resp.connection.last_connection_validation_time #=> Time
resp.connection.authentication_configuration.authentication_type #=> String, one of "BASIC", "OAUTH2", "CUSTOM", "IAM"
resp.connection.authentication_configuration.secret_arn #=> String
resp.connection.authentication_configuration.o_auth_2_properties.o_auth_2_grant_type #=> String, one of "AUTHORIZATION_CODE", "CLIENT_CREDENTIALS", "JWT_BEARER"
resp.connection.authentication_configuration.o_auth_2_properties.o_auth_2_client_application.user_managed_client_application_client_id #=> String
resp.connection.authentication_configuration.o_auth_2_properties.o_auth_2_client_application.aws_managed_client_application_reference #=> String
resp.connection.authentication_configuration.o_auth_2_properties.token_url #=> String
resp.connection.authentication_configuration.o_auth_2_properties.token_url_parameters_map #=> Hash
resp.connection.authentication_configuration.o_auth_2_properties.token_url_parameters_map["TokenUrlParameterKey"] #=> String
resp.connection.connection_schema_version #=> Integer
resp.connection.compatible_compute_environments #=> Array
resp.connection.compatible_compute_environments[0] #=> String, one of "SPARK", "ATHENA", "PYTHON"

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :catalog_id (String)

    The ID of the Data Catalog in which the connection resides. If none is provided, the Amazon Web Services account ID is used by default.

  • :name (required, String)

    The name of the connection definition to retrieve.

  • :hide_password (Boolean)

    Allows you to retrieve the connection metadata without returning the password. For instance, the Glue console uses this flag to retrieve the connection without displaying the password. Set this parameter when the caller might not have permission to use the KMS key to decrypt the password, but does have permission to access the rest of the connection properties.

  • :apply_override_for_compute_environment (String)

    For connections that may be used in multiple services, specifies returning properties for the specified compute environment.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 7819

def get_connection(params = {}, options = {})
  req = build_request(:get_connection, params)
  req.send_request(options)
end

#get_connections(params = {}) ⇒ Types::GetConnectionsResponse

Retrieves a list of connection definitions from the Data Catalog.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.get_connections({
  catalog_id: "CatalogIdString",
  filter: {
    match_criteria: ["NameString"],
    connection_type: "JDBC", # accepts JDBC, SFTP, MONGODB, KAFKA, NETWORK, MARKETPLACE, CUSTOM, SALESFORCE, VIEW_VALIDATION_REDSHIFT, VIEW_VALIDATION_ATHENA, GOOGLEADS, GOOGLESHEETS, GOOGLEANALYTICS4, SERVICENOW, MARKETO, SAPODATA, ZENDESK, JIRACLOUD, NETSUITEERP, HUBSPOT, FACEBOOKADS, INSTAGRAMADS, ZOHOCRM, SALESFORCEPARDOT, SALESFORCEMARKETINGCLOUD, SLACK, STRIPE, INTERCOM, SNAPCHATADS
    connection_schema_version: 1,
  },
  hide_password: false,
  next_token: "Token",
  max_results: 1,
})

Response structure


resp.connection_list #=> Array
resp.connection_list[0].name #=> String
resp.connection_list[0].description #=> String
resp.connection_list[0].connection_type #=> String, one of "JDBC", "SFTP", "MONGODB", "KAFKA", "NETWORK", "MARKETPLACE", "CUSTOM", "SALESFORCE", "VIEW_VALIDATION_REDSHIFT", "VIEW_VALIDATION_ATHENA", "GOOGLEADS", "GOOGLESHEETS", "GOOGLEANALYTICS4", "SERVICENOW", "MARKETO", "SAPODATA", "ZENDESK", "JIRACLOUD", "NETSUITEERP", "HUBSPOT", "FACEBOOKADS", "INSTAGRAMADS", "ZOHOCRM", "SALESFORCEPARDOT", "SALESFORCEMARKETINGCLOUD", "SLACK", "STRIPE", "INTERCOM", "SNAPCHATADS"
resp.connection_list[0].match_criteria #=> Array
resp.connection_list[0].match_criteria[0] #=> String
resp.connection_list[0].connection_properties #=> Hash
resp.connection_list[0].connection_properties["ConnectionPropertyKey"] #=> String
resp.connection_list[0].spark_properties #=> Hash
resp.connection_list[0].spark_properties["PropertyKey"] #=> String
resp.connection_list[0].athena_properties #=> Hash
resp.connection_list[0].athena_properties["PropertyKey"] #=> String
resp.connection_list[0].python_properties #=> Hash
resp.connection_list[0].python_properties["PropertyKey"] #=> String
resp.connection_list[0].physical_connection_requirements.subnet_id #=> String
resp.connection_list[0].physical_connection_requirements.security_group_id_list #=> Array
resp.connection_list[0].physical_connection_requirements.security_group_id_list[0] #=> String
resp.connection_list[0].physical_connection_requirements.availability_zone #=> String
resp.connection_list[0].creation_time #=> Time
resp.connection_list[0].last_updated_time #=> Time
resp.connection_list[0].last_updated_by #=> String
resp.connection_list[0].status #=> String, one of "READY", "IN_PROGRESS", "FAILED"
resp.connection_list[0].status_reason #=> String
resp.connection_list[0].last_connection_validation_time #=> Time
resp.connection_list[0].authentication_configuration.authentication_type #=> String, one of "BASIC", "OAUTH2", "CUSTOM", "IAM"
resp.connection_list[0].authentication_configuration.secret_arn #=> String
resp.connection_list[0].authentication_configuration.o_auth_2_properties.o_auth_2_grant_type #=> String, one of "AUTHORIZATION_CODE", "CLIENT_CREDENTIALS", "JWT_BEARER"
resp.connection_list[0].authentication_configuration.o_auth_2_properties.o_auth_2_client_application.user_managed_client_application_client_id #=> String
resp.connection_list[0].authentication_configuration.o_auth_2_properties.o_auth_2_client_application.aws_managed_client_application_reference #=> String
resp.connection_list[0].authentication_configuration.o_auth_2_properties.token_url #=> String
resp.connection_list[0].authentication_configuration.o_auth_2_properties.token_url_parameters_map #=> Hash
resp.connection_list[0].authentication_configuration.o_auth_2_properties.token_url_parameters_map["TokenUrlParameterKey"] #=> String
resp.connection_list[0].connection_schema_version #=> Integer
resp.connection_list[0].compatible_compute_environments #=> Array
resp.connection_list[0].compatible_compute_environments[0] #=> String, one of "SPARK", "ATHENA", "PYTHON"
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :catalog_id (String)

    The ID of the Data Catalog in which the connections reside. If none is provided, the Amazon Web Services account ID is used by default.

  • :filter (Types::GetConnectionsFilter)

    A filter that controls which connections are returned.

  • :hide_password (Boolean)

    Allows you to retrieve the connection metadata without returning the password. For instance, the Glue console uses this flag to retrieve the connection without displaying the password. Set this parameter when the caller might not have permission to use the KMS key to decrypt the password, but does have permission to access the rest of the connection properties.

  • :next_token (String)

    A continuation token, if this is a continuation call.

  • :max_results (Integer)

    The maximum number of connections to return in one response.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 7911

def get_connections(params = {}, options = {})
  req = build_request(:get_connections, params)
  req.send_request(options)
end
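
The filter narrows the listing server-side. For example, a sketch that lists only JDBC connections with passwords omitted (assuming a configured client):

client.get_connections(
  filter: { connection_type: "JDBC" },
  hide_password: true
).each do |page|
  page.connection_list.each { |conn| puts conn.name }
end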

#get_crawler(params = {}) ⇒ Types::GetCrawlerResponse

Retrieves metadata for a specified crawler.

Examples:

Request syntax with placeholder values


resp = client.get_crawler({
  name: "NameString", # required
})

Response structure


resp.crawler.name #=> String
resp.crawler.role #=> String
resp.crawler.targets.s3_targets #=> Array
resp.crawler.targets.s3_targets[0].path #=> String
resp.crawler.targets.s3_targets[0].exclusions #=> Array
resp.crawler.targets.s3_targets[0].exclusions[0] #=> String
resp.crawler.targets.s3_targets[0].connection_name #=> String
resp.crawler.targets.s3_targets[0].sample_size #=> Integer
resp.crawler.targets.s3_targets[0].event_queue_arn #=> String
resp.crawler.targets.s3_targets[0].dlq_event_queue_arn #=> String
resp.crawler.targets.jdbc_targets #=> Array
resp.crawler.targets.jdbc_targets[0].connection_name #=> String
resp.crawler.targets.jdbc_targets[0].path #=> String
resp.crawler.targets.jdbc_targets[0].exclusions #=> Array
resp.crawler.targets.jdbc_targets[0].exclusions[0] #=> String
resp.crawler.targets.jdbc_targets[0].enable_additional_metadata #=> Array
resp.crawler.targets.jdbc_targets[0].enable_additional_metadata[0] #=> String, one of "COMMENTS", "RAWTYPES"
resp.crawler.targets.mongo_db_targets #=> Array
resp.crawler.targets.mongo_db_targets[0].connection_name #=> String
resp.crawler.targets.mongo_db_targets[0].path #=> String
resp.crawler.targets.mongo_db_targets[0].scan_all #=> Boolean
resp.crawler.targets.dynamo_db_targets #=> Array
resp.crawler.targets.dynamo_db_targets[0].path #=> String
resp.crawler.targets.dynamo_db_targets[0].scan_all #=> Boolean
resp.crawler.targets.dynamo_db_targets[0].scan_rate #=> Float
resp.crawler.targets.catalog_targets #=> Array
resp.crawler.targets.catalog_targets[0].database_name #=> String
resp.crawler.targets.catalog_targets[0].tables #=> Array
resp.crawler.targets.catalog_targets[0].tables[0] #=> String
resp.crawler.targets.catalog_targets[0].connection_name #=> String
resp.crawler.targets.catalog_targets[0].event_queue_arn #=> String
resp.crawler.targets.catalog_targets[0].dlq_event_queue_arn #=> String
resp.crawler.targets.delta_targets #=> Array
resp.crawler.targets.delta_targets[0].delta_tables #=> Array
resp.crawler.targets.delta_targets[0].delta_tables[0] #=> String
resp.crawler.targets.delta_targets[0].connection_name #=> String
resp.crawler.targets.delta_targets[0].write_manifest #=> Boolean
resp.crawler.targets.delta_targets[0].create_native_delta_table #=> Boolean
resp.crawler.targets.iceberg_targets #=> Array
resp.crawler.targets.iceberg_targets[0].paths #=> Array
resp.crawler.targets.iceberg_targets[0].paths[0] #=> String
resp.crawler.targets.iceberg_targets[0].connection_name #=> String
resp.crawler.targets.iceberg_targets[0].exclusions #=> Array
resp.crawler.targets.iceberg_targets[0].exclusions[0] #=> String
resp.crawler.targets.iceberg_targets[0].maximum_traversal_depth #=> Integer
resp.crawler.targets.hudi_targets #=> Array
resp.crawler.targets.hudi_targets[0].paths #=> Array
resp.crawler.targets.hudi_targets[0].paths[0] #=> String
resp.crawler.targets.hudi_targets[0].connection_name #=> String
resp.crawler.targets.hudi_targets[0].exclusions #=> Array
resp.crawler.targets.hudi_targets[0].exclusions[0] #=> String
resp.crawler.targets.hudi_targets[0].maximum_traversal_depth #=> Integer
resp.crawler.database_name #=> String
resp.crawler.description #=> String
resp.crawler.classifiers #=> Array
resp.crawler.classifiers[0] #=> String
resp.crawler.recrawl_policy.recrawl_behavior #=> String, one of "CRAWL_EVERYTHING", "CRAWL_NEW_FOLDERS_ONLY", "CRAWL_EVENT_MODE"
resp.crawler.schema_change_policy.update_behavior #=> String, one of "LOG", "UPDATE_IN_DATABASE"
resp.crawler.schema_change_policy.delete_behavior #=> String, one of "LOG", "DELETE_FROM_DATABASE", "DEPRECATE_IN_DATABASE"
resp.crawler.lineage_configuration.crawler_lineage_settings #=> String, one of "ENABLE", "DISABLE"
resp.crawler.state #=> String, one of "READY", "RUNNING", "STOPPING"
resp.crawler.table_prefix #=> String
resp.crawler.schedule.schedule_expression #=> String
resp.crawler.schedule.state #=> String, one of "SCHEDULED", "NOT_SCHEDULED", "TRANSITIONING"
resp.crawler.crawl_elapsed_time #=> Integer
resp.crawler.creation_time #=> Time
resp.crawler.last_updated #=> Time
resp.crawler.last_crawl.status #=> String, one of "SUCCEEDED", "CANCELLED", "FAILED"
resp.crawler.last_crawl.error_message #=> String
resp.crawler.last_crawl.log_group #=> String
resp.crawler.last_crawl.log_stream #=> String
resp.crawler.last_crawl.message_prefix #=> String
resp.crawler.last_crawl.start_time #=> Time
resp.crawler.version #=> Integer
resp.crawler.configuration #=> String
resp.crawler.crawler_security_configuration #=> String
resp.crawler.lake_formation_configuration.use_lake_formation_credentials #=> Boolean
resp.crawler.lake_formation_configuration.account_id #=> String

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :name (required, String)

    The name of the crawler to retrieve metadata for.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 8016

def get_crawler(params = {}, options = {})
  req = build_request(:get_crawler, params)
  req.send_request(options)
end
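
Because the client includes ClientStubs, a response like the one above can be faked in tests without network calls. A minimal sketch (the crawler name and stub data are hypothetical):

client = Aws::Glue::Client.new(stub_responses: true)
client.stub_responses(:get_crawler, {
  crawler: { name: "nightly-crawler", state: "READY" }
})

resp = client.get_crawler(name: "nightly-crawler")
resp.crawler.state #=> "READY"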

#get_crawler_metrics(params = {}) ⇒ Types::GetCrawlerMetricsResponse

Retrieves metrics about specified crawlers.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.get_crawler_metrics({
  crawler_name_list: ["NameString"],
  max_results: 1,
  next_token: "Token",
})

Response structure


resp.crawler_metrics_list #=> Array
resp.crawler_metrics_list[0].crawler_name #=> String
resp.crawler_metrics_list[0].time_left_seconds #=> Float
resp.crawler_metrics_list[0].still_estimating #=> Boolean
resp.crawler_metrics_list[0].last_runtime_seconds #=> Float
resp.crawler_metrics_list[0].median_runtime_seconds #=> Float
resp.crawler_metrics_list[0].tables_created #=> Integer
resp.crawler_metrics_list[0].tables_updated #=> Integer
resp.crawler_metrics_list[0].tables_deleted #=> Integer
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :crawler_name_list (Array<String>)

    A list of the names of crawlers about which to retrieve metrics.

  • :max_results (Integer)

    The maximum number of crawler metrics to return in one response.

  • :next_token (String)

    A continuation token, if this is a continuation call.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 8064

def get_crawler_metrics(params = {}, options = {})
  req = build_request(:get_crawler_metrics, params)
  req.send_request(options)
end

#get_crawlers(params = {}) ⇒ Types::GetCrawlersResponse

Retrieves metadata for all crawlers defined in the customer account.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.get_crawlers({
  max_results: 1,
  next_token: "Token",
})

Response structure


resp.crawlers #=> Array
resp.crawlers[0].name #=> String
resp.crawlers[0].role #=> String
resp.crawlers[0].targets.s3_targets #=> Array
resp.crawlers[0].targets.s3_targets[0].path #=> String
resp.crawlers[0].targets.s3_targets[0].exclusions #=> Array
resp.crawlers[0].targets.s3_targets[0].exclusions[0] #=> String
resp.crawlers[0].targets.s3_targets[0].connection_name #=> String
resp.crawlers[0].targets.s3_targets[0].sample_size #=> Integer
resp.crawlers[0].targets.s3_targets[0].event_queue_arn #=> String
resp.crawlers[0].targets.s3_targets[0].dlq_event_queue_arn #=> String
resp.crawlers[0].targets.jdbc_targets #=> Array
resp.crawlers[0].targets.jdbc_targets[0].connection_name #=> String
resp.crawlers[0].targets.jdbc_targets[0].path #=> String
resp.crawlers[0].targets.jdbc_targets[0].exclusions #=> Array
resp.crawlers[0].targets.jdbc_targets[0].exclusions[0] #=> String
resp.crawlers[0].targets.jdbc_targets[0].enable_additional_metadata #=> Array
resp.crawlers[0].targets.jdbc_targets[0].enable_additional_metadata[0] #=> String, one of "COMMENTS", "RAWTYPES"
resp.crawlers[0].targets.mongo_db_targets #=> Array
resp.crawlers[0].targets.mongo_db_targets[0].connection_name #=> String
resp.crawlers[0].targets.mongo_db_targets[0].path #=> String
resp.crawlers[0].targets.mongo_db_targets[0].scan_all #=> Boolean
resp.crawlers[0].targets.dynamo_db_targets #=> Array
resp.crawlers[0].targets.dynamo_db_targets[0].path #=> String
resp.crawlers[0].targets.dynamo_db_targets[0].scan_all #=> Boolean
resp.crawlers[0].targets.dynamo_db_targets[0].scan_rate #=> Float
resp.crawlers[0].targets.catalog_targets #=> Array
resp.crawlers[0].targets.catalog_targets[0].database_name #=> String
resp.crawlers[0].targets.catalog_targets[0].tables #=> Array
resp.crawlers[0].targets.catalog_targets[0].tables[0] #=> String
resp.crawlers[0].targets.catalog_targets[0].connection_name #=> String
resp.crawlers[0].targets.catalog_targets[0].event_queue_arn #=> String
resp.crawlers[0].targets.catalog_targets[0].dlq_event_queue_arn #=> String
resp.crawlers[0].targets.delta_targets #=> Array
resp.crawlers[0].targets.delta_targets[0].delta_tables #=> Array
resp.crawlers[0].targets.delta_targets[0].delta_tables[0] #=> String
resp.crawlers[0].targets.delta_targets[0].connection_name #=> String
resp.crawlers[0].targets.delta_targets[0].write_manifest #=> Boolean
resp.crawlers[0].targets.delta_targets[0].create_native_delta_table #=> Boolean
resp.crawlers[0].targets.iceberg_targets #=> Array
resp.crawlers[0].targets.iceberg_targets[0].paths #=> Array
resp.crawlers[0].targets.iceberg_targets[0].paths[0] #=> String
resp.crawlers[0].targets.iceberg_targets[0].connection_name #=> String
resp.crawlers[0].targets.iceberg_targets[0].exclusions #=> Array
resp.crawlers[0].targets.iceberg_targets[0].exclusions[0] #=> String
resp.crawlers[0].targets.iceberg_targets[0].maximum_traversal_depth #=> Integer
resp.crawlers[0].targets.hudi_targets #=> Array
resp.crawlers[0].targets.hudi_targets[0].paths #=> Array
resp.crawlers[0].targets.hudi_targets[0].paths[0] #=> String
resp.crawlers[0].targets.hudi_targets[0].connection_name #=> String
resp.crawlers[0].targets.hudi_targets[0].exclusions #=> Array
resp.crawlers[0].targets.hudi_targets[0].exclusions[0] #=> String
resp.crawlers[0].targets.hudi_targets[0].maximum_traversal_depth #=> Integer
resp.crawlers[0].database_name #=> String
resp.crawlers[0].description #=> String
resp.crawlers[0].classifiers #=> Array
resp.crawlers[0].classifiers[0] #=> String
resp.crawlers[0].recrawl_policy.recrawl_behavior #=> String, one of "CRAWL_EVERYTHING", "CRAWL_NEW_FOLDERS_ONLY", "CRAWL_EVENT_MODE"
resp.crawlers[0].schema_change_policy.update_behavior #=> String, one of "LOG", "UPDATE_IN_DATABASE"
resp.crawlers[0].schema_change_policy.delete_behavior #=> String, one of "LOG", "DELETE_FROM_DATABASE", "DEPRECATE_IN_DATABASE"
resp.crawlers[0].lineage_configuration.crawler_lineage_settings #=> String, one of "ENABLE", "DISABLE"
resp.crawlers[0].state #=> String, one of "READY", "RUNNING", "STOPPING"
resp.crawlers[0].table_prefix #=> String
resp.crawlers[0].schedule.schedule_expression #=> String
resp.crawlers[0].schedule.state #=> String, one of "SCHEDULED", "NOT_SCHEDULED", "TRANSITIONING"
resp.crawlers[0].crawl_elapsed_time #=> Integer
resp.crawlers[0].creation_time #=> Time
resp.crawlers[0].last_updated #=> Time
resp.crawlers[0].last_crawl.status #=> String, one of "SUCCEEDED", "CANCELLED", "FAILED"
resp.crawlers[0].last_crawl.error_message #=> String
resp.crawlers[0].last_crawl.log_group #=> String
resp.crawlers[0].last_crawl.log_stream #=> String
resp.crawlers[0].last_crawl.message_prefix #=> String
resp.crawlers[0].last_crawl.start_time #=> Time
resp.crawlers[0].version #=> Integer
resp.crawlers[0].configuration #=> String
resp.crawlers[0].crawler_security_configuration #=> String
resp.crawlers[0].lake_formation_configuration.use_lake_formation_credentials #=> Boolean
resp.crawlers[0].lake_formation_configuration.account_id #=> String
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :max_results (Integer)

    The number of crawlers to return on each call.

  • :next_token (String)

    A continuation token, if this is a continuation request.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 8178

def get_crawlers(params = {}, options = {})
  req = build_request(:get_crawlers, params)
  req.send_request(options)
end
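
Since the pageable response is Enumerable, all crawler names can be collected across pages in a single expression. A sketch assuming a configured client:

names = client.get_crawlers.flat_map { |page| page.crawlers.map(&:name) }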

#get_custom_entity_type(params = {}) ⇒ Types::GetCustomEntityTypeResponse

Retrieves the details of a custom pattern by specifying its name.

Examples:

Request syntax with placeholder values


resp = client.get_custom_entity_type({
  name: "NameString", # required
})

Response structure


resp.name #=> String
resp.regex_string #=> String
resp.context_words #=> Array
resp.context_words[0] #=> String

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :name (required, String)

    The name of the custom pattern that you want to retrieve.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 8211

def get_custom_entity_type(params = {}, options = {})
  req = build_request(:get_custom_entity_type, params)
  req.send_request(options)
end

#get_data_catalog_encryption_settings(params = {}) ⇒ Types::GetDataCatalogEncryptionSettingsResponse

Retrieves the security configuration for a specified catalog.

Examples:

Request syntax with placeholder values


resp = client.get_data_catalog_encryption_settings({
  catalog_id: "CatalogIdString",
})

Response structure


resp.data_catalog_encryption_settings.encryption_at_rest.catalog_encryption_mode #=> String, one of "DISABLED", "SSE-KMS", "SSE-KMS-WITH-SERVICE-ROLE"
resp.data_catalog_encryption_settings.encryption_at_rest.sse_aws_kms_key_id #=> String
resp.data_catalog_encryption_settings.encryption_at_rest.catalog_encryption_service_role #=> String
resp.data_catalog_encryption_settings.connection_password_encryption.return_connection_password_encrypted #=> Boolean
resp.data_catalog_encryption_settings.connection_password_encryption.aws_kms_key_id #=> String

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :catalog_id (String)

    The ID of the Data Catalog to retrieve the security configuration for. If none is provided, the Amazon Web Services account ID is used by default.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 8245

def get_data_catalog_encryption_settings(params = {}, options = {})
  req = build_request(:get_data_catalog_encryption_settings, params)
  req.send_request(options)
end

#get_data_quality_model(params = {}) ⇒ Types::GetDataQualityModelResponse

Retrieves the training status of the model along with more information (CompletedOn, StartedOn, FailureReason).

Examples:

Request syntax with placeholder values


resp = client.get_data_quality_model({
  statistic_id: "HashString",
  profile_id: "HashString", # required
})

Response structure


resp.status #=> String, one of "RUNNING", "SUCCEEDED", "FAILED"
resp.started_on #=> Time
resp.completed_on #=> Time
resp.failure_reason #=> String

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :statistic_id (String)

    The Statistic ID.

  • :profile_id (required, String)

    The Profile ID.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 8284

def get_data_quality_model(params = {}, options = {})
  req = build_request(:get_data_quality_model, params)
  req.send_request(options)
end

#get_data_quality_model_result(params = {}) ⇒ Types::GetDataQualityModelResultResponse

Retrieves a statistic's predictions for a given Profile ID.

Examples:

Request syntax with placeholder values


resp = client.get_data_quality_model_result({
  statistic_id: "HashString", # required
  profile_id: "HashString", # required
})

Response structure


resp.completed_on #=> Time
resp.model #=> Array
resp.model[0].lower_bound #=> Float
resp.model[0].upper_bound #=> Float
resp.model[0].predicted_value #=> Float
resp.model[0].actual_value #=> Float
resp.model[0].date #=> Time
resp.model[0].inclusion_annotation #=> String, one of "INCLUDE", "EXCLUDE"

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :statistic_id (required, String)

    The Statistic ID.

  • :profile_id (required, String)

    The Profile ID.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 8324

def get_data_quality_model_result(params = {}, options = {})
  req = build_request(:get_data_quality_model_result, params)
  req.send_request(options)
end

#get_data_quality_result(params = {}) ⇒ Types::GetDataQualityResultResponse

Retrieves the result of a data quality rule evaluation.

Examples:

Request syntax with placeholder values


resp = client.get_data_quality_result({
  result_id: "HashString", # required
})

Response structure


resp.result_id #=> String
resp.profile_id #=> String
resp.score #=> Float
resp.data_source.glue_table.database_name #=> String
resp.data_source.glue_table.table_name #=> String
resp.data_source.glue_table.catalog_id #=> String
resp.data_source.glue_table.connection_name #=> String
resp.data_source.glue_table.additional_options #=> Hash
resp.data_source.glue_table.additional_options["NameString"] #=> String
resp.ruleset_name #=> String
resp.evaluation_context #=> String
resp.started_on #=> Time
resp.completed_on #=> Time
resp.job_name #=> String
resp.job_run_id #=> String
resp.ruleset_evaluation_run_id #=> String
resp.rule_results #=> Array
resp.rule_results[0].name #=> String
resp.rule_results[0].description #=> String
resp.rule_results[0].evaluation_message #=> String
resp.rule_results[0].result #=> String, one of "PASS", "FAIL", "ERROR"
resp.rule_results[0].evaluated_metrics #=> Hash
resp.rule_results[0].evaluated_metrics["NameString"] #=> Float
resp.rule_results[0].evaluated_rule #=> String
resp.analyzer_results #=> Array
resp.analyzer_results[0].name #=> String
resp.analyzer_results[0].description #=> String
resp.analyzer_results[0].evaluation_message #=> String
resp.analyzer_results[0].evaluated_metrics #=> Hash
resp.analyzer_results[0].evaluated_metrics["NameString"] #=> Float
resp.observations #=> Array
resp.observations[0].description #=> String
resp.observations[0].metric_based_observation.metric_name #=> String
resp.observations[0].metric_based_observation.statistic_id #=> String
resp.observations[0].metric_based_observation.metric_values.actual_value #=> Float
resp.observations[0].metric_based_observation.metric_values.expected_value #=> Float
resp.observations[0].metric_based_observation.metric_values.lower_limit #=> Float
resp.observations[0].metric_based_observation.metric_values.upper_limit #=> Float
resp.observations[0].metric_based_observation.new_rules #=> Array
resp.observations[0].metric_based_observation.new_rules[0] #=> String

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :result_id (required, String)

    A unique result ID for the data quality result.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 8404

def get_data_quality_result(params = {}, options = {})
  req = build_request(:get_data_quality_result, params)
  req.send_request(options)
end
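
The overall score and the per-rule results arrive together, so a short summary can be derived directly from the response. A sketch (the result ID is hypothetical):

resp = client.get_data_quality_result(result_id: "example-result-id")

failed = resp.rule_results.select { |r| r.result == "FAIL" }
puts "score: #{resp.score}; failed rules: #{failed.map(&:name).join(', ')}"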

#get_data_quality_rule_recommendation_run(params = {}) ⇒ Types::GetDataQualityRuleRecommendationRunResponse

Gets the specified recommendation run that was used to generate rules.

Examples:

Request syntax with placeholder values


resp = client.get_data_quality_rule_recommendation_run({
  run_id: "HashString", # required
})

Response structure


resp.run_id #=> String
resp.data_source.glue_table.database_name #=> String
resp.data_source.glue_table.table_name #=> String
resp.data_source.glue_table.catalog_id #=> String
resp.data_source.glue_table.connection_name #=> String
resp.data_source.glue_table.additional_options #=> Hash
resp.data_source.glue_table.additional_options["NameString"] #=> String
resp.role #=> String
resp.number_of_workers #=> Integer
resp.timeout #=> Integer
resp.status #=> String, one of "STARTING", "RUNNING", "STOPPING", "STOPPED", "SUCCEEDED", "FAILED", "TIMEOUT"
resp.error_string #=> String
resp.started_on #=> Time
resp.last_modified_on #=> Time
resp.completed_on #=> Time
resp.execution_time #=> Integer
resp.recommended_ruleset #=> String
resp.created_ruleset_name #=> String
resp.data_quality_security_configuration #=> String

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :run_id (required, String)

    The unique run identifier associated with this run.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 8463

def get_data_quality_rule_recommendation_run(params = {}, options = {})
  req = build_request(:get_data_quality_rule_recommendation_run, params)
  req.send_request(options)
end

#get_data_quality_ruleset(params = {}) ⇒ Types::GetDataQualityRulesetResponse

Returns an existing ruleset by identifier or name.

Examples:

Request syntax with placeholder values


resp = client.get_data_quality_ruleset({
  name: "NameString", # required
})

Response structure


resp.name #=> String
resp.description #=> String
resp.ruleset #=> String
resp.target_table.table_name #=> String
resp.target_table.database_name #=> String
resp.target_table.catalog_id #=> String
resp.created_on #=> Time
resp.last_modified_on #=> Time
resp.recommendation_run_id #=> String
resp.data_quality_security_configuration #=> String

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :name (required, String)

    The name of the ruleset.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 8507

def get_data_quality_ruleset(params = {}, options = {})
  req = build_request(:get_data_quality_ruleset, params)
  req.send_request(options)
end

#get_data_quality_ruleset_evaluation_run(params = {}) ⇒ Types::GetDataQualityRulesetEvaluationRunResponse

Retrieves a specific run where a ruleset is evaluated against a data source.

Examples:

Request syntax with placeholder values


resp = client.get_data_quality_ruleset_evaluation_run({
  run_id: "HashString", # required
})

Response structure


resp.run_id #=> String
resp.data_source.glue_table.database_name #=> String
resp.data_source.glue_table.table_name #=> String
resp.data_source.glue_table.catalog_id #=> String
resp.data_source.glue_table.connection_name #=> String
resp.data_source.glue_table.additional_options #=> Hash
resp.data_source.glue_table.additional_options["NameString"] #=> String
resp.role #=> String
resp.number_of_workers #=> Integer
resp.timeout #=> Integer
resp.additional_run_options.cloud_watch_metrics_enabled #=> Boolean
resp.additional_run_options.results_s3_prefix #=> String
resp.additional_run_options.composite_rule_evaluation_method #=> String, one of "COLUMN", "ROW"
resp.status #=> String, one of "STARTING", "RUNNING", "STOPPING", "STOPPED", "SUCCEEDED", "FAILED", "TIMEOUT"
resp.error_string #=> String
resp.started_on #=> Time
resp.last_modified_on #=> Time
resp.completed_on #=> Time
resp.execution_time #=> Integer
resp.ruleset_names #=> Array
resp.ruleset_names[0] #=> String
resp.result_ids #=> Array
resp.result_ids[0] #=> String
resp.additional_data_sources #=> Hash
resp.additional_data_sources["NameString"].glue_table.database_name #=> String
resp.additional_data_sources["NameString"].glue_table.table_name #=> String
resp.additional_data_sources["NameString"].glue_table.catalog_id #=> String
resp.additional_data_sources["NameString"].glue_table.connection_name #=> String
resp.additional_data_sources["NameString"].glue_table.additional_options #=> Hash
resp.additional_data_sources["NameString"].glue_table.additional_options["NameString"] #=> String

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :run_id (required, String)

    The unique run identifier associated with this run.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 8579

def get_data_quality_ruleset_evaluation_run(params = {}, options = {})
  req = build_request(:get_data_quality_ruleset_evaluation_run, params)
  req.send_request(options)
end
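
Per the status enum above, "STOPPED", "SUCCEEDED", "FAILED", and "TIMEOUT" are terminal, so evaluation runs are usually polled until one of them is reached. A sketch (the run ID and interval are illustrative):

TERMINAL_STATUSES = %w[STOPPED SUCCEEDED FAILED TIMEOUT].freeze
run_id = "example-run-id" # hypothetical run ID

resp = client.get_data_quality_ruleset_evaluation_run(run_id: run_id)
until TERMINAL_STATUSES.include?(resp.status)
  sleep 15 # illustrative polling interval
  resp = client.get_data_quality_ruleset_evaluation_run(run_id: run_id)
end
puts "run #{resp.run_id} finished: #{resp.status}, results: #{resp.result_ids.inspect}"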

#get_database(params = {}) ⇒ Types::GetDatabaseResponse

Retrieves the definition of a specified database.

Examples:

Request syntax with placeholder values


resp = client.get_database({
  catalog_id: "CatalogIdString",
  name: "NameString", # required
})

Response structure


resp.database.name #=> String
resp.database.description #=> String
resp.database.location_uri #=> String
resp.database.parameters #=> Hash
resp.database.parameters["KeyString"] #=> String
resp.database.create_time #=> Time
resp.database.create_table_default_permissions #=> Array
resp.database.create_table_default_permissions[0].principal.data_lake_principal_identifier #=> String
resp.database.create_table_default_permissions[0].permissions #=> Array
resp.database.create_table_default_permissions[0].permissions[0] #=> String, one of "ALL", "SELECT", "ALTER", "DROP", "DELETE", "INSERT", "CREATE_DATABASE", "CREATE_TABLE", "DATA_LOCATION_ACCESS"
resp.database.target_database.catalog_id #=> String
resp.database.target_database.database_name #=> String
resp.database.target_database.region #=> String
resp.database.catalog_id #=> String
resp.database.federated_database.identifier #=> String
resp.database.federated_database.connection_name #=> String

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :catalog_id (String)

    The ID of the Data Catalog in which the database resides. If none is provided, the Amazon Web Services account ID is used by default.

  • :name (required, String)

    The name of the database to retrieve. For Hive compatibility, this should be all lowercase.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 8628

def get_database(params = {}, options = {})
  req = build_request(:get_database, params)
  req.send_request(options)
end

#get_databases(params = {}) ⇒ Types::GetDatabasesResponse

Retrieves all databases defined in a given Data Catalog.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.get_databases({
  catalog_id: "CatalogIdString",
  next_token: "Token",
  max_results: 1,
  resource_share_type: "FOREIGN", # accepts FOREIGN, ALL, FEDERATED
  attributes_to_get: ["NAME"], # accepts NAME
})

Response structure


resp.database_list #=> Array
resp.database_list[0].name #=> String
resp.database_list[0].description #=> String
resp.database_list[0].location_uri #=> String
resp.database_list[0].parameters #=> Hash
resp.database_list[0].parameters["KeyString"] #=> String
resp.database_list[0].create_time #=> Time
resp.database_list[0].create_table_default_permissions #=> Array
resp.database_list[0].create_table_default_permissions[0].principal.data_lake_principal_identifier #=> String
resp.database_list[0].create_table_default_permissions[0].permissions #=> Array
resp.database_list[0].create_table_default_permissions[0].permissions[0] #=> String, one of "ALL", "SELECT", "ALTER", "DROP", "DELETE", "INSERT", "CREATE_DATABASE", "CREATE_TABLE", "DATA_LOCATION_ACCESS"
resp.database_list[0].target_database.catalog_id #=> String
resp.database_list[0].target_database.database_name #=> String
resp.database_list[0].target_database.region #=> String
resp.database_list[0].catalog_id #=> String
resp.database_list[0].federated_database.identifier #=> String
resp.database_list[0].federated_database.connection_name #=> String
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :catalog_id (String)

    The ID of the Data Catalog from which to retrieve Databases. If none is provided, the Amazon Web Services account ID is used by default.

  • :next_token (String)

    A continuation token, if this is a continuation call.

  • :max_results (Integer)

    The maximum number of databases to return in one response.

  • :resource_share_type (String)

    Allows you to specify that you want to list the databases shared with your account. The allowable values are FEDERATED, FOREIGN, or ALL.

    • If set to FEDERATED, lists the federated databases (referencing an external entity) shared with your account.

    • If set to FOREIGN, lists the databases shared with your account.

    • If set to ALL, lists the databases shared with your account, as well as the databases in your local account.

  • :attributes_to_get (Array<String>)

    Specifies the database fields returned by the GetDatabases call. This parameter doesn’t accept an empty list. The request must include NAME.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 8706

def get_databases(params = {}, options = {})
  req = build_request(:get_databases, params)
  req.send_request(options)
end
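
When only names are needed, attributes_to_get trims each entry while the pageable response gathers every database. A sketch assuming a configured client:

names = client.get_databases(attributes_to_get: ["NAME"])
              .flat_map { |page| page.database_list.map(&:name) }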

#get_dataflow_graph(params = {}) ⇒ Types::GetDataflowGraphResponse

Transforms a Python script into a directed acyclic graph (DAG).

Examples:

Request syntax with placeholder values


resp = client.get_dataflow_graph({
  python_script: "PythonScript",
})

Response structure


resp.dag_nodes #=> Array
resp.dag_nodes[0].id #=> String
resp.dag_nodes[0].node_type #=> String
resp.dag_nodes[0].args #=> Array
resp.dag_nodes[0].args[0].name #=> String
resp.dag_nodes[0].args[0].value #=> String
resp.dag_nodes[0].args[0].param #=> Boolean
resp.dag_nodes[0].line_number #=> Integer
resp.dag_edges #=> Array
resp.dag_edges[0].source #=> String
resp.dag_edges[0].target #=> String
resp.dag_edges[0].target_parameter #=> String

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :python_script (String)

    The Python script to transform.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 8746

def get_dataflow_graph(params = {}, options = {})
  req = build_request(:get_dataflow_graph, params)
  req.send_request(options)
end
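
The script is passed as a string, so a local file is typically read and submitted directly. A sketch (the file path is hypothetical):

script = File.read("etl_job.py") # hypothetical local Glue ETL script
resp = client.get_dataflow_graph(python_script: script)

resp.dag_nodes.each do |node|
  puts "#{node.id} (#{node.node_type}) at line #{node.line_number}"
end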

#get_dev_endpoint(params = {}) ⇒ Types::GetDevEndpointResponse

Retrieves information about a specified development endpoint.

When you create a development endpoint in a virtual private cloud (VPC), Glue returns only a private IP address, and the public IP address field is not populated. When you create a non-VPC development endpoint, Glue returns only a public IP address.

Examples:

Request syntax with placeholder values


resp = client.get_dev_endpoint({
  endpoint_name: "GenericString", # required
})

Response structure


resp.dev_endpoint.endpoint_name #=> String
resp.dev_endpoint.role_arn #=> String
resp.dev_endpoint.security_group_ids #=> Array
resp.dev_endpoint.security_group_ids[0] #=> String
resp.dev_endpoint.subnet_id #=> String
resp.dev_endpoint.yarn_endpoint_address #=> String
resp.dev_endpoint.private_address #=> String
resp.dev_endpoint.zeppelin_remote_spark_interpreter_port #=> Integer
resp.dev_endpoint.public_address #=> String
resp.dev_endpoint.status #=> String
resp.dev_endpoint.worker_type #=> String, one of "Standard", "G.1X", "G.2X", "G.025X", "G.4X", "G.8X", "Z.2X"
resp.dev_endpoint.glue_version #=> String
resp.dev_endpoint.number_of_workers #=> Integer
resp.dev_endpoint.number_of_nodes #=> Integer
resp.dev_endpoint.availability_zone #=> String
resp.dev_endpoint.vpc_id #=> String
resp.dev_endpoint.extra_python_libs_s3_path #=> String
resp.dev_endpoint.extra_jars_s3_path #=> String
resp.dev_endpoint.failure_reason #=> String
resp.dev_endpoint.last_update_status #=> String
resp.dev_endpoint.created_timestamp #=> Time
resp.dev_endpoint.last_modified_timestamp #=> Time
resp.dev_endpoint.public_key #=> String
resp.dev_endpoint.public_keys #=> Array
resp.dev_endpoint.public_keys[0] #=> String
resp.dev_endpoint.security_configuration #=> String
resp.dev_endpoint.arguments #=> Hash
resp.dev_endpoint.arguments["GenericString"] #=> String

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :endpoint_name (required, String)

    Name of the DevEndpoint to retrieve information for.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 8808

def get_dev_endpoint(params = {}, options = {})
  req = build_request(:get_dev_endpoint, params)
  req.send_request(options)
end

#get_dev_endpoints(params = {}) ⇒ Types::GetDevEndpointsResponse

Retrieves all the development endpoints in this Amazon Web Services account.

When you create a development endpoint in a virtual private cloud (VPC), Glue returns only a private IP address and the public IP address field is not populated. When you create a non-VPC development endpoint, Glue returns only a public IP address.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.get_dev_endpoints({
  max_results: 1,
  next_token: "GenericString",
})

Response structure


resp.dev_endpoints #=> Array
resp.dev_endpoints[0].endpoint_name #=> String
resp.dev_endpoints[0].role_arn #=> String
resp.dev_endpoints[0].security_group_ids #=> Array
resp.dev_endpoints[0].security_group_ids[0] #=> String
resp.dev_endpoints[0].subnet_id #=> String
resp.dev_endpoints[0].yarn_endpoint_address #=> String
resp.dev_endpoints[0].private_address #=> String
resp.dev_endpoints[0].zeppelin_remote_spark_interpreter_port #=> Integer
resp.dev_endpoints[0].public_address #=> String
resp.dev_endpoints[0].status #=> String
resp.dev_endpoints[0].worker_type #=> String, one of "Standard", "G.1X", "G.2X", "G.025X", "G.4X", "G.8X", "Z.2X"
resp.dev_endpoints[0].glue_version #=> String
resp.dev_endpoints[0].number_of_workers #=> Integer
resp.dev_endpoints[0].number_of_nodes #=> Integer
resp.dev_endpoints[0].availability_zone #=> String
resp.dev_endpoints[0].vpc_id #=> String
resp.dev_endpoints[0].extra_python_libs_s3_path #=> String
resp.dev_endpoints[0].extra_jars_s3_path #=> String
resp.dev_endpoints[0].failure_reason #=> String
resp.dev_endpoints[0].last_update_status #=> String
resp.dev_endpoints[0].created_timestamp #=> Time
resp.dev_endpoints[0].last_modified_timestamp #=> Time
resp.dev_endpoints[0].public_key #=> String
resp.dev_endpoints[0].public_keys #=> Array
resp.dev_endpoints[0].public_keys[0] #=> String
resp.dev_endpoints[0].security_configuration #=> String
resp.dev_endpoints[0].arguments #=> Hash
resp.dev_endpoints[0].arguments["GenericString"] #=> String
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :max_results (Integer)

    The maximum number of development endpoints to return in one response.

  • :next_token (String)

    A continuation token, if this is a continuation call.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 8880

def get_dev_endpoints(params = {}, options = {})
  req = build_request(:get_dev_endpoints, params)
  req.send_request(options)
end

#get_entity_records(params = {}) ⇒ Types::GetEntityRecordsResponse

This API is used to query preview data from a given connection type or from a native Amazon S3 based Glue Data Catalog.

Returns records as an array of JSON blobs. Each record is formatted using Jackson JsonNode based on the field type defined by the DescribeEntity API.

Spark connectors generate schemas according to the same data type mapping as in the DescribeEntity API. Spark connectors convert data to the appropriate data types matching the schema when returning rows.

Examples:

Request syntax with placeholder values


resp = client.get_entity_records({
  connection_name: "NameString",
  catalog_id: "CatalogIdString",
  entity_name: "EntityName", # required
  next_token: "NextToken",
  data_store_api_version: "ApiVersion",
  connection_options: {
    "OptionKey" => "OptionValue",
  },
  filter_predicate: "FilterPredicate",
  limit: 1, # required
  order_by: "String",
  selected_fields: ["EntityFieldName"],
})

Response structure


resp.records #=> Array
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :connection_name (String)

    The name of the connection that contains the connection type credentials.

  • :catalog_id (String)

    The catalog ID of the catalog that contains the connection. This can be null; by default, the Amazon Web Services account ID is the catalog ID.

  • :entity_name (required, String)

    The name of the entity for which to query preview data from the given connection type.

  • :next_token (String)

    A continuation token, included if this is a continuation call.

  • :data_store_api_version (String)

    The API version of the SaaS connector.

  • :connection_options (Hash<String,String>)

    Connector options that are required to query the data.

  • :filter_predicate (String)

    A filter predicate that you can apply in the query request.

  • :limit (required, Integer)

    Limits the number of records fetched with the request.

  • :order_by (String)

    A parameter that orders the response preview data.

  • :selected_fields (Array<String>)

    A list of fields to fetch as part of the preview data.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 8961

def get_entity_records(params = {}, options = {})
  req = build_request(:get_entity_records, params)
  req.send_request(options)
end
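
A typical preview pulls a handful of records for one entity through an existing connection. A sketch (the connection and entity names are hypothetical):

resp = client.get_entity_records(
  connection_name: "my-saas-connection", # hypothetical connection
  entity_name: "Account",                # hypothetical entity
  limit: 10
)

resp.records.each { |record| puts record } # each record is a JSON blob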

#get_integration_resource_property(params = {}) ⇒ Types::GetIntegrationResourcePropertyResponse

This API fetches the ResourceProperty of the Glue connection (for the source) or of the Glue database ARN (for the target).

Examples:

Request syntax with placeholder values


resp = client.get_integration_resource_property({
  resource_arn: "String128", # required
})

Response structure


resp.resource_arn #=> String
resp.source_processing_properties.role_arn #=> String
resp.target_processing_properties.role_arn #=> String
resp.target_processing_properties.kms_arn #=> String
resp.target_processing_properties.connection_name #=> String
resp.target_processing_properties.event_bus_arn #=> String

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :resource_arn (required, String)

    The connection ARN of the source, or the database ARN of the target.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 8997

def get_integration_resource_property(params = {}, options = {})
  req = build_request(:get_integration_resource_property, params)
  req.send_request(options)
end

#get_integration_table_properties(params = {}) ⇒ Types::GetIntegrationTablePropertiesResponse

This API is used to retrieve optional override properties for the tables that need to be replicated. These properties can include filtering and partitioning settings for the source and target tables.

Examples:

Request syntax with placeholder values


resp = client.get_integration_table_properties({
  resource_arn: "String128", # required
  table_name: "String128", # required
})

Response structure


resp.resource_arn #=> String
resp.table_name #=> String
resp.source_table_config.fields #=> Array
resp.source_table_config.fields[0] #=> String
resp.source_table_config.filter_predicate #=> String
resp.source_table_config.primary_key #=> Array
resp.source_table_config.primary_key[0] #=> String
resp.source_table_config.record_update_field #=> String
resp.target_table_config.unnest_spec #=> String, one of "TOPLEVEL", "FULL", "NOUNNEST"
resp.target_table_config.partition_spec #=> Array
resp.target_table_config.partition_spec[0].field_name #=> String
resp.target_table_config.partition_spec[0].function_spec #=> String
resp.target_table_config.target_table_name #=> String
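
Example: inspecting replication overrides

A minimal sketch; the resource ARN and table name below are hypothetical placeholders. It prints a few of the optional override properties, using safe navigation because either table config may be absent.


resp = client.get_integration_table_properties({
  resource_arn: "arn:aws:glue:us-east-1:123456789012:connection/my-source", # hypothetical ARN
  table_name: "orders",                                                     # hypothetical table
})
src = resp.source_table_config
puts "filter predicate: #{src&.filter_predicate}"
puts "primary key:      #{src&.primary_key&.join(', ')}"
puts "unnest spec:      #{resp.target_table_config&.unnest_spec}"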

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :resource_arn (required, String)

    The connection ARN of the source, or the database ARN of the target.

  • :table_name (required, String)

    The name of the table to be replicated.

Returns:

  • (Types::GetIntegrationTablePropertiesResponse)

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 9046

def get_integration_table_properties(params = {}, options = {})
  req = build_request(:get_integration_table_properties, params)
  req.send_request(options)
end

#get_job(params = {}) ⇒ Types::GetJobResponse

Retrieves an existing job definition.

Examples:

Request syntax with placeholder values


resp = client.get_job({
  job_name: "NameString", # required
})
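
Example: reading common job fields

A minimal sketch with a hypothetical job name, showing a few commonly inspected members of the returned job; the full response structure follows below.


resp = client.get_job(job_name: "my-etl-job") # hypothetical job name
job = resp.job
puts "#{job.name} (Glue #{job.glue_version}): #{job.worker_type} x #{job.number_of_workers}"
puts "script: #{job.command&.script_location}"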

Response structure


resp.job.name #=> String
resp.job.job_mode #=> String, one of "SCRIPT", "VISUAL", "NOTEBOOK"
resp.job.job_run_queuing_enabled #=> Boolean
resp.job.description #=> String
resp.job.log_uri #=> String
resp.job.role #=> String
resp.job.created_on #=> Time
resp.job.last_modified_on #=> Time
resp.job.execution_property.max_concurrent_runs #=> Integer
resp.job.command.name #=> String
resp.job.command.script_location #=> String
resp.job.command.python_version #=> String
resp.job.command.runtime #=> String
resp.job.default_arguments #=> Hash
resp.job.default_arguments["GenericString"] #=> String
resp.job.non_overridable_arguments #=> Hash
resp.job.non_overridable_arguments["GenericString"] #=> String
resp.job.connections.connections #=> Array
resp.job.connections.connections[0] #=> String
resp.job.max_retries #=> Integer
resp.job.allocated_capacity #=> Integer
resp.job.timeout #=> Integer
resp.job.max_capacity #=> Float
resp.job.worker_type #=> String, one of "Standard", "G.1X", "G.2X", "G.025X", "G.4X", "G.8X", "Z.2X"
resp.job.number_of_workers #=> Integer
resp.job.security_configuration #=> String
resp.job.notification_property.notify_delay_after #=> Integer
resp.job.glue_version #=> String
resp.job.code_gen_configuration_nodes #=> Hash
resp.job.code_gen_configuration_nodes["NodeId"].athena_connector_source.name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].athena_connector_source.connection_name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].athena_connector_source.connector_name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].athena_connector_source.connection_type #=> String
resp.job.code_gen_configuration_nodes["NodeId"].athena_connector_source.connection_table #=> String
resp.job.code_gen_configuration_nodes["NodeId"].athena_connector_source.schema_name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].athena_connector_source.output_schemas #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].athena_connector_source.output_schemas[0].columns #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].athena_connector_source.output_schemas[0].columns[0].name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].athena_connector_source.output_schemas[0].columns[0].type #=> String
resp.job.code_gen_configuration_nodes["NodeId"].jdbc_connector_source.name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].jdbc_connector_source.connection_name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].jdbc_connector_source.connector_name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].jdbc_connector_source.connection_type #=> String
resp.job.code_gen_configuration_nodes["NodeId"].jdbc_connector_source.additional_options.filter_predicate #=> String
resp.job.code_gen_configuration_nodes["NodeId"].jdbc_connector_source.additional_options.partition_column #=> String
resp.job.code_gen_configuration_nodes["NodeId"].jdbc_connector_source.additional_options.lower_bound #=> Integer
resp.job.code_gen_configuration_nodes["NodeId"].jdbc_connector_source.additional_options.upper_bound #=> Integer
resp.job.code_gen_configuration_nodes["NodeId"].jdbc_connector_source.additional_options.num_partitions #=> Integer
resp.job.code_gen_configuration_nodes["NodeId"].jdbc_connector_source.additional_options.job_bookmark_keys #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].jdbc_connector_source.additional_options.job_bookmark_keys[0] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].jdbc_connector_source.additional_options.job_bookmark_keys_sort_order #=> String
resp.job.code_gen_configuration_nodes["NodeId"].jdbc_connector_source.additional_options.data_type_mapping #=> Hash
resp.job.code_gen_configuration_nodes["NodeId"].jdbc_connector_source.additional_options.data_type_mapping["JDBCDataType"] #=> String, one of "DATE", "STRING", "TIMESTAMP", "INT", "FLOAT", "LONG", "BIGDECIMAL", "BYTE", "SHORT", "DOUBLE"
resp.job.code_gen_configuration_nodes["NodeId"].jdbc_connector_source.connection_table #=> String
resp.job.code_gen_configuration_nodes["NodeId"].jdbc_connector_source.query #=> String
resp.job.code_gen_configuration_nodes["NodeId"].jdbc_connector_source.output_schemas #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].jdbc_connector_source.output_schemas[0].columns #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].jdbc_connector_source.output_schemas[0].columns[0].name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].jdbc_connector_source.output_schemas[0].columns[0].type #=> String
resp.job.code_gen_configuration_nodes["NodeId"].spark_connector_source.name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].spark_connector_source.connection_name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].spark_connector_source.connector_name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].spark_connector_source.connection_type #=> String
resp.job.code_gen_configuration_nodes["NodeId"].spark_connector_source.additional_options #=> Hash
resp.job.code_gen_configuration_nodes["NodeId"].spark_connector_source.additional_options["EnclosedInStringProperty"] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].spark_connector_source.output_schemas #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].spark_connector_source.output_schemas[0].columns #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].spark_connector_source.output_schemas[0].columns[0].name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].spark_connector_source.output_schemas[0].columns[0].type #=> String
resp.job.code_gen_configuration_nodes["NodeId"].catalog_source.name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].catalog_source.database #=> String
resp.job.code_gen_configuration_nodes["NodeId"].catalog_source.table #=> String
resp.job.code_gen_configuration_nodes["NodeId"].redshift_source.name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].redshift_source.database #=> String
resp.job.code_gen_configuration_nodes["NodeId"].redshift_source.table #=> String
resp.job.code_gen_configuration_nodes["NodeId"].redshift_source.redshift_tmp_dir #=> String
resp.job.code_gen_configuration_nodes["NodeId"].redshift_source.tmp_dir_iam_role #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_catalog_source.name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_catalog_source.database #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_catalog_source.table #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_catalog_source.partition_predicate #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_catalog_source.additional_options.bounded_size #=> Integer
resp.job.code_gen_configuration_nodes["NodeId"].s3_catalog_source.additional_options.bounded_files #=> Integer
resp.job.code_gen_configuration_nodes["NodeId"].s3_csv_source.name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_csv_source.paths #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].s3_csv_source.paths[0] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_csv_source.compression_type #=> String, one of "gzip", "bzip2"
resp.job.code_gen_configuration_nodes["NodeId"].s3_csv_source.exclusions #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].s3_csv_source.exclusions[0] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_csv_source.group_size #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_csv_source.group_files #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_csv_source.recurse #=> Boolean
resp.job.code_gen_configuration_nodes["NodeId"].s3_csv_source.max_band #=> Integer
resp.job.code_gen_configuration_nodes["NodeId"].s3_csv_source.max_files_in_band #=> Integer
resp.job.code_gen_configuration_nodes["NodeId"].s3_csv_source.additional_options.bounded_size #=> Integer
resp.job.code_gen_configuration_nodes["NodeId"].s3_csv_source.additional_options.bounded_files #=> Integer
resp.job.code_gen_configuration_nodes["NodeId"].s3_csv_source.additional_options.enable_sample_path #=> Boolean
resp.job.code_gen_configuration_nodes["NodeId"].s3_csv_source.additional_options.sample_path #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_csv_source.separator #=> String, one of "comma", "ctrla", "pipe", "semicolon", "tab"
resp.job.code_gen_configuration_nodes["NodeId"].s3_csv_source.escaper #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_csv_source.quote_char #=> String, one of "quote", "quillemet", "single_quote", "disabled"
resp.job.code_gen_configuration_nodes["NodeId"].s3_csv_source.multiline #=> Boolean
resp.job.code_gen_configuration_nodes["NodeId"].s3_csv_source.with_header #=> Boolean
resp.job.code_gen_configuration_nodes["NodeId"].s3_csv_source.write_header #=> Boolean
resp.job.code_gen_configuration_nodes["NodeId"].s3_csv_source.skip_first #=> Boolean
resp.job.code_gen_configuration_nodes["NodeId"].s3_csv_source.optimize_performance #=> Boolean
resp.job.code_gen_configuration_nodes["NodeId"].s3_csv_source.output_schemas #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].s3_csv_source.output_schemas[0].columns #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].s3_csv_source.output_schemas[0].columns[0].name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_csv_source.output_schemas[0].columns[0].type #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_json_source.name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_json_source.paths #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].s3_json_source.paths[0] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_json_source.compression_type #=> String, one of "gzip", "bzip2"
resp.job.code_gen_configuration_nodes["NodeId"].s3_json_source.exclusions #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].s3_json_source.exclusions[0] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_json_source.group_size #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_json_source.group_files #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_json_source.recurse #=> Boolean
resp.job.code_gen_configuration_nodes["NodeId"].s3_json_source.max_band #=> Integer
resp.job.code_gen_configuration_nodes["NodeId"].s3_json_source.max_files_in_band #=> Integer
resp.job.code_gen_configuration_nodes["NodeId"].s3_json_source.additional_options.bounded_size #=> Integer
resp.job.code_gen_configuration_nodes["NodeId"].s3_json_source.additional_options.bounded_files #=> Integer
resp.job.code_gen_configuration_nodes["NodeId"].s3_json_source.additional_options.enable_sample_path #=> Boolean
resp.job.code_gen_configuration_nodes["NodeId"].s3_json_source.additional_options.sample_path #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_json_source.json_path #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_json_source.multiline #=> Boolean
resp.job.code_gen_configuration_nodes["NodeId"].s3_json_source.output_schemas #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].s3_json_source.output_schemas[0].columns #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].s3_json_source.output_schemas[0].columns[0].name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_json_source.output_schemas[0].columns[0].type #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_parquet_source.name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_parquet_source.paths #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].s3_parquet_source.paths[0] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_parquet_source.compression_type #=> String, one of "snappy", "lzo", "gzip", "uncompressed", "none"
resp.job.code_gen_configuration_nodes["NodeId"].s3_parquet_source.exclusions #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].s3_parquet_source.exclusions[0] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_parquet_source.group_size #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_parquet_source.group_files #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_parquet_source.recurse #=> Boolean
resp.job.code_gen_configuration_nodes["NodeId"].s3_parquet_source.max_band #=> Integer
resp.job.code_gen_configuration_nodes["NodeId"].s3_parquet_source.max_files_in_band #=> Integer
resp.job.code_gen_configuration_nodes["NodeId"].s3_parquet_source.additional_options.bounded_size #=> Integer
resp.job.code_gen_configuration_nodes["NodeId"].s3_parquet_source.additional_options.bounded_files #=> Integer
resp.job.code_gen_configuration_nodes["NodeId"].s3_parquet_source.additional_options.enable_sample_path #=> Boolean
resp.job.code_gen_configuration_nodes["NodeId"].s3_parquet_source.additional_options.sample_path #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_parquet_source.output_schemas #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].s3_parquet_source.output_schemas[0].columns #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].s3_parquet_source.output_schemas[0].columns[0].name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_parquet_source.output_schemas[0].columns[0].type #=> String
resp.job.code_gen_configuration_nodes["NodeId"].relational_catalog_source.name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].relational_catalog_source.database #=> String
resp.job.code_gen_configuration_nodes["NodeId"].relational_catalog_source.table #=> String
resp.job.code_gen_configuration_nodes["NodeId"].dynamo_db_catalog_source.name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].dynamo_db_catalog_source.database #=> String
resp.job.code_gen_configuration_nodes["NodeId"].dynamo_db_catalog_source.table #=> String
resp.job.code_gen_configuration_nodes["NodeId"].jdbc_connector_target.name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].jdbc_connector_target.inputs #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].jdbc_connector_target.inputs[0] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].jdbc_connector_target.connection_name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].jdbc_connector_target.connection_table #=> String
resp.job.code_gen_configuration_nodes["NodeId"].jdbc_connector_target.connector_name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].jdbc_connector_target.connection_type #=> String
resp.job.code_gen_configuration_nodes["NodeId"].jdbc_connector_target.additional_options #=> Hash
resp.job.code_gen_configuration_nodes["NodeId"].jdbc_connector_target.additional_options["EnclosedInStringProperty"] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].jdbc_connector_target.output_schemas #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].jdbc_connector_target.output_schemas[0].columns #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].jdbc_connector_target.output_schemas[0].columns[0].name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].jdbc_connector_target.output_schemas[0].columns[0].type #=> String
resp.job.code_gen_configuration_nodes["NodeId"].spark_connector_target.name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].spark_connector_target.inputs #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].spark_connector_target.inputs[0] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].spark_connector_target.connection_name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].spark_connector_target.connector_name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].spark_connector_target.connection_type #=> String
resp.job.code_gen_configuration_nodes["NodeId"].spark_connector_target.additional_options #=> Hash
resp.job.code_gen_configuration_nodes["NodeId"].spark_connector_target.additional_options["EnclosedInStringProperty"] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].spark_connector_target.output_schemas #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].spark_connector_target.output_schemas[0].columns #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].spark_connector_target.output_schemas[0].columns[0].name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].spark_connector_target.output_schemas[0].columns[0].type #=> String
resp.job.code_gen_configuration_nodes["NodeId"].catalog_target.name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].catalog_target.inputs #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].catalog_target.inputs[0] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].catalog_target.partition_keys #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].catalog_target.partition_keys[0] #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].catalog_target.partition_keys[0][0] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].catalog_target.database #=> String
resp.job.code_gen_configuration_nodes["NodeId"].catalog_target.table #=> String
resp.job.code_gen_configuration_nodes["NodeId"].redshift_target.name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].redshift_target.inputs #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].redshift_target.inputs[0] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].redshift_target.database #=> String
resp.job.code_gen_configuration_nodes["NodeId"].redshift_target.table #=> String
resp.job.code_gen_configuration_nodes["NodeId"].redshift_target.redshift_tmp_dir #=> String
resp.job.code_gen_configuration_nodes["NodeId"].redshift_target.tmp_dir_iam_role #=> String
resp.job.code_gen_configuration_nodes["NodeId"].redshift_target.upsert_redshift_options.table_location #=> String
resp.job.code_gen_configuration_nodes["NodeId"].redshift_target.upsert_redshift_options.connection_name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].redshift_target.upsert_redshift_options.upsert_keys #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].redshift_target.upsert_redshift_options.upsert_keys[0] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_catalog_target.name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_catalog_target.inputs #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].s3_catalog_target.inputs[0] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_catalog_target.partition_keys #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].s3_catalog_target.partition_keys[0] #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].s3_catalog_target.partition_keys[0][0] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_catalog_target.table #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_catalog_target.database #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_catalog_target.schema_change_policy.enable_update_catalog #=> Boolean
resp.job.code_gen_configuration_nodes["NodeId"].s3_catalog_target.schema_change_policy.update_behavior #=> String, one of "UPDATE_IN_DATABASE", "LOG"
resp.job.code_gen_configuration_nodes["NodeId"].s3_glue_parquet_target.name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_glue_parquet_target.inputs #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].s3_glue_parquet_target.inputs[0] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_glue_parquet_target.partition_keys #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].s3_glue_parquet_target.partition_keys[0] #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].s3_glue_parquet_target.partition_keys[0][0] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_glue_parquet_target.path #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_glue_parquet_target.compression #=> String, one of "snappy", "lzo", "gzip", "uncompressed", "none"
resp.job.code_gen_configuration_nodes["NodeId"].s3_glue_parquet_target.schema_change_policy.enable_update_catalog #=> Boolean
resp.job.code_gen_configuration_nodes["NodeId"].s3_glue_parquet_target.schema_change_policy.update_behavior #=> String, one of "UPDATE_IN_DATABASE", "LOG"
resp.job.code_gen_configuration_nodes["NodeId"].s3_glue_parquet_target.schema_change_policy.table #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_glue_parquet_target.schema_change_policy.database #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_direct_target.name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_direct_target.inputs #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].s3_direct_target.inputs[0] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_direct_target.partition_keys #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].s3_direct_target.partition_keys[0] #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].s3_direct_target.partition_keys[0][0] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_direct_target.path #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_direct_target.compression #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_direct_target.format #=> String, one of "json", "csv", "avro", "orc", "parquet", "hudi", "delta"
resp.job.code_gen_configuration_nodes["NodeId"].s3_direct_target.schema_change_policy.enable_update_catalog #=> Boolean
resp.job.code_gen_configuration_nodes["NodeId"].s3_direct_target.schema_change_policy.update_behavior #=> String, one of "UPDATE_IN_DATABASE", "LOG"
resp.job.code_gen_configuration_nodes["NodeId"].s3_direct_target.schema_change_policy.table #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_direct_target.schema_change_policy.database #=> String
resp.job.code_gen_configuration_nodes["NodeId"].apply_mapping.name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].apply_mapping.inputs #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].apply_mapping.inputs[0] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].apply_mapping.mapping #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].apply_mapping.mapping[0].to_key #=> String
resp.job.code_gen_configuration_nodes["NodeId"].apply_mapping.mapping[0].from_path #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].apply_mapping.mapping[0].from_path[0] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].apply_mapping.mapping[0].from_type #=> String
resp.job.code_gen_configuration_nodes["NodeId"].apply_mapping.mapping[0].to_type #=> String
resp.job.code_gen_configuration_nodes["NodeId"].apply_mapping.mapping[0].dropped #=> Boolean
resp.job.code_gen_configuration_nodes["NodeId"].apply_mapping.mapping[0].children #=> Types::Mappings
resp.job.code_gen_configuration_nodes["NodeId"].select_fields.name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].select_fields.inputs #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].select_fields.inputs[0] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].select_fields.paths #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].select_fields.paths[0] #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].select_fields.paths[0][0] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].drop_fields.name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].drop_fields.inputs #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].drop_fields.inputs[0] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].drop_fields.paths #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].drop_fields.paths[0] #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].drop_fields.paths[0][0] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].rename_field.name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].rename_field.inputs #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].rename_field.inputs[0] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].rename_field.source_path #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].rename_field.source_path[0] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].rename_field.target_path #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].rename_field.target_path[0] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].spigot.name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].spigot.inputs #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].spigot.inputs[0] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].spigot.path #=> String
resp.job.code_gen_configuration_nodes["NodeId"].spigot.topk #=> Integer
resp.job.code_gen_configuration_nodes["NodeId"].spigot.prob #=> Float
resp.job.code_gen_configuration_nodes["NodeId"].join.name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].join.inputs #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].join.inputs[0] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].join.join_type #=> String, one of "equijoin", "left", "right", "outer", "leftsemi", "leftanti"
resp.job.code_gen_configuration_nodes["NodeId"].join.columns #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].join.columns[0].from #=> String
resp.job.code_gen_configuration_nodes["NodeId"].join.columns[0].keys #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].join.columns[0].keys[0] #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].join.columns[0].keys[0][0] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].split_fields.name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].split_fields.inputs #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].split_fields.inputs[0] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].split_fields.paths #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].split_fields.paths[0] #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].split_fields.paths[0][0] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].select_from_collection.name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].select_from_collection.inputs #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].select_from_collection.inputs[0] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].select_from_collection.index #=> Integer
resp.job.code_gen_configuration_nodes["NodeId"].fill_missing_values.name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].fill_missing_values.inputs #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].fill_missing_values.inputs[0] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].fill_missing_values.imputed_path #=> String
resp.job.code_gen_configuration_nodes["NodeId"].fill_missing_values.filled_path #=> String
resp.job.code_gen_configuration_nodes["NodeId"].filter.name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].filter.inputs #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].filter.inputs[0] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].filter.logical_operator #=> String, one of "AND", "OR"
resp.job.code_gen_configuration_nodes["NodeId"].filter.filters #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].filter.filters[0].operation #=> String, one of "EQ", "LT", "GT", "LTE", "GTE", "REGEX", "ISNULL"
resp.job.code_gen_configuration_nodes["NodeId"].filter.filters[0].negated #=> Boolean
resp.job.code_gen_configuration_nodes["NodeId"].filter.filters[0].values #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].filter.filters[0].values[0].type #=> String, one of "COLUMNEXTRACTED", "CONSTANT"
resp.job.code_gen_configuration_nodes["NodeId"].filter.filters[0].values[0].value #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].filter.filters[0].values[0].value[0] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].custom_code.name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].custom_code.inputs #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].custom_code.inputs[0] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].custom_code.code #=> String
resp.job.code_gen_configuration_nodes["NodeId"].custom_code.class_name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].custom_code.output_schemas #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].custom_code.output_schemas[0].columns #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].custom_code.output_schemas[0].columns[0].name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].custom_code.output_schemas[0].columns[0].type #=> String
resp.job.code_gen_configuration_nodes["NodeId"].spark_sql.name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].spark_sql.inputs #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].spark_sql.inputs[0] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].spark_sql.sql_query #=> String
resp.job.code_gen_configuration_nodes["NodeId"].spark_sql.sql_aliases #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].spark_sql.sql_aliases[0].from #=> String
resp.job.code_gen_configuration_nodes["NodeId"].spark_sql.sql_aliases[0].alias #=> String
resp.job.code_gen_configuration_nodes["NodeId"].spark_sql.output_schemas #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].spark_sql.output_schemas[0].columns #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].spark_sql.output_schemas[0].columns[0].name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].spark_sql.output_schemas[0].columns[0].type #=> String
resp.job.code_gen_configuration_nodes["NodeId"].direct_kinesis_source.name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].direct_kinesis_source.window_size #=> Integer
resp.job.code_gen_configuration_nodes["NodeId"].direct_kinesis_source.detect_schema #=> Boolean
resp.job.code_gen_configuration_nodes["NodeId"].direct_kinesis_source.streaming_options.endpoint_url #=> String
resp.job.code_gen_configuration_nodes["NodeId"].direct_kinesis_source.streaming_options.stream_name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].direct_kinesis_source.streaming_options.classification #=> String
resp.job.code_gen_configuration_nodes["NodeId"].direct_kinesis_source.streaming_options.delimiter #=> String
resp.job.code_gen_configuration_nodes["NodeId"].direct_kinesis_source.streaming_options.starting_position #=> String, one of "latest", "trim_horizon", "earliest", "timestamp"
resp.job.code_gen_configuration_nodes["NodeId"].direct_kinesis_source.streaming_options.max_fetch_time_in_ms #=> Integer
resp.job.code_gen_configuration_nodes["NodeId"].direct_kinesis_source.streaming_options.max_fetch_records_per_shard #=> Integer
resp.job.code_gen_configuration_nodes["NodeId"].direct_kinesis_source.streaming_options.max_record_per_read #=> Integer
resp.job.code_gen_configuration_nodes["NodeId"].direct_kinesis_source.streaming_options.add_idle_time_between_reads #=> Boolean
resp.job.code_gen_configuration_nodes["NodeId"].direct_kinesis_source.streaming_options.idle_time_between_reads_in_ms #=> Integer
resp.job.code_gen_configuration_nodes["NodeId"].direct_kinesis_source.streaming_options.describe_shard_interval #=> Integer
resp.job.code_gen_configuration_nodes["NodeId"].direct_kinesis_source.streaming_options.num_retries #=> Integer
resp.job.code_gen_configuration_nodes["NodeId"].direct_kinesis_source.streaming_options.retry_interval_ms #=> Integer
resp.job.code_gen_configuration_nodes["NodeId"].direct_kinesis_source.streaming_options.max_retry_interval_ms #=> Integer
resp.job.code_gen_configuration_nodes["NodeId"].direct_kinesis_source.streaming_options.avoid_empty_batches #=> Boolean
resp.job.code_gen_configuration_nodes["NodeId"].direct_kinesis_source.streaming_options.stream_arn #=> String
resp.job.code_gen_configuration_nodes["NodeId"].direct_kinesis_source.streaming_options.role_arn #=> String
resp.job.code_gen_configuration_nodes["NodeId"].direct_kinesis_source.streaming_options.role_session_name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].direct_kinesis_source.streaming_options.add_record_timestamp #=> String
resp.job.code_gen_configuration_nodes["NodeId"].direct_kinesis_source.streaming_options.emit_consumer_lag_metrics #=> String
resp.job.code_gen_configuration_nodes["NodeId"].direct_kinesis_source.streaming_options.starting_timestamp #=> Time
resp.job.code_gen_configuration_nodes["NodeId"].direct_kinesis_source.data_preview_options.polling_time #=> Integer
resp.job.code_gen_configuration_nodes["NodeId"].direct_kinesis_source.data_preview_options.record_polling_limit #=> Integer
resp.job.code_gen_configuration_nodes["NodeId"].direct_kafka_source.name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].direct_kafka_source.streaming_options.bootstrap_servers #=> String
resp.job.code_gen_configuration_nodes["NodeId"].direct_kafka_source.streaming_options.security_protocol #=> String
resp.job.code_gen_configuration_nodes["NodeId"].direct_kafka_source.streaming_options.connection_name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].direct_kafka_source.streaming_options.topic_name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].direct_kafka_source.streaming_options.assign #=> String
resp.job.code_gen_configuration_nodes["NodeId"].direct_kafka_source.streaming_options.subscribe_pattern #=> String
resp.job.code_gen_configuration_nodes["NodeId"].direct_kafka_source.streaming_options.classification #=> String
resp.job.code_gen_configuration_nodes["NodeId"].direct_kafka_source.streaming_options.delimiter #=> String
resp.job.code_gen_configuration_nodes["NodeId"].direct_kafka_source.streaming_options.starting_offsets #=> String
resp.job.code_gen_configuration_nodes["NodeId"].direct_kafka_source.streaming_options.ending_offsets #=> String
resp.job.code_gen_configuration_nodes["NodeId"].direct_kafka_source.streaming_options.poll_timeout_ms #=> Integer
resp.job.code_gen_configuration_nodes["NodeId"].direct_kafka_source.streaming_options.num_retries #=> Integer
resp.job.code_gen_configuration_nodes["NodeId"].direct_kafka_source.streaming_options.retry_interval_ms #=> Integer
resp.job.code_gen_configuration_nodes["NodeId"].direct_kafka_source.streaming_options.max_offsets_per_trigger #=> Integer
resp.job.code_gen_configuration_nodes["NodeId"].direct_kafka_source.streaming_options.min_partitions #=> Integer
resp.job.code_gen_configuration_nodes["NodeId"].direct_kafka_source.streaming_options.include_headers #=> Boolean
resp.job.code_gen_configuration_nodes["NodeId"].direct_kafka_source.streaming_options.add_record_timestamp #=> String
resp.job.code_gen_configuration_nodes["NodeId"].direct_kafka_source.streaming_options.emit_consumer_lag_metrics #=> String
resp.job.code_gen_configuration_nodes["NodeId"].direct_kafka_source.streaming_options.starting_timestamp #=> Time
resp.job.code_gen_configuration_nodes["NodeId"].direct_kafka_source.window_size #=> Integer
resp.job.code_gen_configuration_nodes["NodeId"].direct_kafka_source.detect_schema #=> Boolean
resp.job.code_gen_configuration_nodes["NodeId"].direct_kafka_source.data_preview_options.polling_time #=> Integer
resp.job.code_gen_configuration_nodes["NodeId"].direct_kafka_source.data_preview_options.record_polling_limit #=> Integer
resp.job.code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.window_size #=> Integer
resp.job.code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.detect_schema #=> Boolean
resp.job.code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.table #=> String
resp.job.code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.database #=> String
resp.job.code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.streaming_options.endpoint_url #=> String
resp.job.code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.streaming_options.stream_name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.streaming_options.classification #=> String
resp.job.code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.streaming_options.delimiter #=> String
resp.job.code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.streaming_options.starting_position #=> String, one of "latest", "trim_horizon", "earliest", "timestamp"
resp.job.code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.streaming_options.max_fetch_time_in_ms #=> Integer
resp.job.code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.streaming_options.max_fetch_records_per_shard #=> Integer
resp.job.code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.streaming_options.max_record_per_read #=> Integer
resp.job.code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.streaming_options.add_idle_time_between_reads #=> Boolean
resp.job.code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.streaming_options.idle_time_between_reads_in_ms #=> Integer
resp.job.code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.streaming_options.describe_shard_interval #=> Integer
resp.job.code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.streaming_options.num_retries #=> Integer
resp.job.code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.streaming_options.retry_interval_ms #=> Integer
resp.job.code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.streaming_options.max_retry_interval_ms #=> Integer
resp.job.code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.streaming_options.avoid_empty_batches #=> Boolean
resp.job.code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.streaming_options.stream_arn #=> String
resp.job.code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.streaming_options.role_arn #=> String
resp.job.code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.streaming_options.role_session_name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.streaming_options.add_record_timestamp #=> String
resp.job.code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.streaming_options.emit_consumer_lag_metrics #=> String
resp.job.code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.streaming_options.starting_timestamp #=> Time
resp.job.code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.data_preview_options.polling_time #=> Integer
resp.job.code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.data_preview_options.record_polling_limit #=> Integer
resp.job.code_gen_configuration_nodes["NodeId"].catalog_kafka_source.name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].catalog_kafka_source.window_size #=> Integer
resp.job.code_gen_configuration_nodes["NodeId"].catalog_kafka_source.detect_schema #=> Boolean
resp.job.code_gen_configuration_nodes["NodeId"].catalog_kafka_source.table #=> String
resp.job.code_gen_configuration_nodes["NodeId"].catalog_kafka_source.database #=> String
resp.job.code_gen_configuration_nodes["NodeId"].catalog_kafka_source.streaming_options.bootstrap_servers #=> String
resp.job.code_gen_configuration_nodes["NodeId"].catalog_kafka_source.streaming_options.security_protocol #=> String
resp.job.code_gen_configuration_nodes["NodeId"].catalog_kafka_source.streaming_options.connection_name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].catalog_kafka_source.streaming_options.topic_name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].catalog_kafka_source.streaming_options.assign #=> String
resp.job.code_gen_configuration_nodes["NodeId"].catalog_kafka_source.streaming_options.subscribe_pattern #=> String
resp.job.code_gen_configuration_nodes["NodeId"].catalog_kafka_source.streaming_options.classification #=> String
resp.job.code_gen_configuration_nodes["NodeId"].catalog_kafka_source.streaming_options.delimiter #=> String
resp.job.code_gen_configuration_nodes["NodeId"].catalog_kafka_source.streaming_options.starting_offsets #=> String
resp.job.code_gen_configuration_nodes["NodeId"].catalog_kafka_source.streaming_options.ending_offsets #=> String
resp.job.code_gen_configuration_nodes["NodeId"].catalog_kafka_source.streaming_options.poll_timeout_ms #=> Integer
resp.job.code_gen_configuration_nodes["NodeId"].catalog_kafka_source.streaming_options.num_retries #=> Integer
resp.job.code_gen_configuration_nodes["NodeId"].catalog_kafka_source.streaming_options.retry_interval_ms #=> Integer
resp.job.code_gen_configuration_nodes["NodeId"].catalog_kafka_source.streaming_options.max_offsets_per_trigger #=> Integer
resp.job.code_gen_configuration_nodes["NodeId"].catalog_kafka_source.streaming_options.min_partitions #=> Integer
resp.job.code_gen_configuration_nodes["NodeId"].catalog_kafka_source.streaming_options.include_headers #=> Boolean
resp.job.code_gen_configuration_nodes["NodeId"].catalog_kafka_source.streaming_options.add_record_timestamp #=> String
resp.job.code_gen_configuration_nodes["NodeId"].catalog_kafka_source.streaming_options.emit_consumer_lag_metrics #=> String
resp.job.code_gen_configuration_nodes["NodeId"].catalog_kafka_source.streaming_options.starting_timestamp #=> Time
resp.job.code_gen_configuration_nodes["NodeId"].catalog_kafka_source.data_preview_options.polling_time #=> Integer
resp.job.code_gen_configuration_nodes["NodeId"].catalog_kafka_source.data_preview_options.record_polling_limit #=> Integer
resp.job.code_gen_configuration_nodes["NodeId"].drop_null_fields.name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].drop_null_fields.inputs #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].drop_null_fields.inputs[0] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].drop_null_fields.null_check_box_list.is_empty #=> Boolean
resp.job.code_gen_configuration_nodes["NodeId"].drop_null_fields.null_check_box_list.is_null_string #=> Boolean
resp.job.code_gen_configuration_nodes["NodeId"].drop_null_fields.null_check_box_list.is_neg_one #=> Boolean
resp.job.code_gen_configuration_nodes["NodeId"].drop_null_fields.null_text_list #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].drop_null_fields.null_text_list[0].value #=> String
resp.job.code_gen_configuration_nodes["NodeId"].drop_null_fields.null_text_list[0].datatype.id #=> String
resp.job.code_gen_configuration_nodes["NodeId"].drop_null_fields.null_text_list[0].datatype.label #=> String
resp.job.code_gen_configuration_nodes["NodeId"].merge.name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].merge.inputs #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].merge.inputs[0] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].merge.source #=> String
resp.job.code_gen_configuration_nodes["NodeId"].merge.primary_keys #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].merge.primary_keys[0] #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].merge.primary_keys[0][0] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].union.name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].union.inputs #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].union.inputs[0] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].union.union_type #=> String, one of "ALL", "DISTINCT"
resp.job.code_gen_configuration_nodes["NodeId"].pii_detection.name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].pii_detection.inputs #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].pii_detection.inputs[0] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].pii_detection.pii_type #=> String, one of "RowAudit", "RowMasking", "ColumnAudit", "ColumnMasking"
resp.job.code_gen_configuration_nodes["NodeId"].pii_detection.entity_types_to_detect #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].pii_detection.entity_types_to_detect[0] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].pii_detection.output_column_name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].pii_detection.sample_fraction #=> Float
resp.job.code_gen_configuration_nodes["NodeId"].pii_detection.threshold_fraction #=> Float
resp.job.code_gen_configuration_nodes["NodeId"].pii_detection.mask_value #=> String
resp.job.code_gen_configuration_nodes["NodeId"].aggregate.name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].aggregate.inputs #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].aggregate.inputs[0] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].aggregate.groups #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].aggregate.groups[0] #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].aggregate.groups[0][0] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].aggregate.aggs #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].aggregate.aggs[0].column #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].aggregate.aggs[0].column[0] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].aggregate.aggs[0].agg_func #=> String, one of "avg", "countDistinct", "count", "first", "last", "kurtosis", "max", "min", "skewness", "stddev_samp", "stddev_pop", "sum", "sumDistinct", "var_samp", "var_pop"
resp.job.code_gen_configuration_nodes["NodeId"].drop_duplicates.name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].drop_duplicates.inputs #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].drop_duplicates.inputs[0] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].drop_duplicates.columns #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].drop_duplicates.columns[0] #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].drop_duplicates.columns[0][0] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].governed_catalog_target.name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].governed_catalog_target.inputs #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].governed_catalog_target.inputs[0] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].governed_catalog_target.partition_keys #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].governed_catalog_target.partition_keys[0] #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].governed_catalog_target.partition_keys[0][0] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].governed_catalog_target.table #=> String
resp.job.code_gen_configuration_nodes["NodeId"].governed_catalog_target.database #=> String
resp.job.code_gen_configuration_nodes["NodeId"].governed_catalog_target.schema_change_policy.enable_update_catalog #=> Boolean
resp.job.code_gen_configuration_nodes["NodeId"].governed_catalog_target.schema_change_policy.update_behavior #=> String, one of "UPDATE_IN_DATABASE", "LOG"
resp.job.code_gen_configuration_nodes["NodeId"].governed_catalog_source.name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].governed_catalog_source.database #=> String
resp.job.code_gen_configuration_nodes["NodeId"].governed_catalog_source.table #=> String
resp.job.code_gen_configuration_nodes["NodeId"].governed_catalog_source.partition_predicate #=> String
resp.job.code_gen_configuration_nodes["NodeId"].governed_catalog_source.additional_options.bounded_size #=> Integer
resp.job.code_gen_configuration_nodes["NodeId"].governed_catalog_source.additional_options.bounded_files #=> Integer
resp.job.code_gen_configuration_nodes["NodeId"].microsoft_sql_server_catalog_source.name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].microsoft_sql_server_catalog_source.database #=> String
resp.job.code_gen_configuration_nodes["NodeId"].microsoft_sql_server_catalog_source.table #=> String
resp.job.code_gen_configuration_nodes["NodeId"].my_sql_catalog_source.name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].my_sql_catalog_source.database #=> String
resp.job.code_gen_configuration_nodes["NodeId"].my_sql_catalog_source.table #=> String
resp.job.code_gen_configuration_nodes["NodeId"].oracle_sql_catalog_source.name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].oracle_sql_catalog_source.database #=> String
resp.job.code_gen_configuration_nodes["NodeId"].oracle_sql_catalog_source.table #=> String
resp.job.code_gen_configuration_nodes["NodeId"].postgre_sql_catalog_source.name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].postgre_sql_catalog_source.database #=> String
resp.job.code_gen_configuration_nodes["NodeId"].postgre_sql_catalog_source.table #=> String
resp.job.code_gen_configuration_nodes["NodeId"].microsoft_sql_server_catalog_target.name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].microsoft_sql_server_catalog_target.inputs #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].microsoft_sql_server_catalog_target.inputs[0] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].microsoft_sql_server_catalog_target.database #=> String
resp.job.code_gen_configuration_nodes["NodeId"].microsoft_sql_server_catalog_target.table #=> String
resp.job.code_gen_configuration_nodes["NodeId"].my_sql_catalog_target.name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].my_sql_catalog_target.inputs #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].my_sql_catalog_target.inputs[0] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].my_sql_catalog_target.database #=> String
resp.job.code_gen_configuration_nodes["NodeId"].my_sql_catalog_target.table #=> String
resp.job.code_gen_configuration_nodes["NodeId"].oracle_sql_catalog_target.name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].oracle_sql_catalog_target.inputs #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].oracle_sql_catalog_target.inputs[0] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].oracle_sql_catalog_target.database #=> String
resp.job.code_gen_configuration_nodes["NodeId"].oracle_sql_catalog_target.table #=> String
resp.job.code_gen_configuration_nodes["NodeId"].postgre_sql_catalog_target.name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].postgre_sql_catalog_target.inputs #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].postgre_sql_catalog_target.inputs[0] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].postgre_sql_catalog_target.database #=> String
resp.job.code_gen_configuration_nodes["NodeId"].postgre_sql_catalog_target.table #=> String
resp.job.code_gen_configuration_nodes["NodeId"].dynamic_transform.name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].dynamic_transform.transform_name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].dynamic_transform.inputs #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].dynamic_transform.inputs[0] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].dynamic_transform.parameters #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].dynamic_transform.parameters[0].name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].dynamic_transform.parameters[0].type #=> String, one of "str", "int", "float", "complex", "bool", "list", "null"
resp.job.code_gen_configuration_nodes["NodeId"].dynamic_transform.parameters[0].validation_rule #=> String
resp.job.code_gen_configuration_nodes["NodeId"].dynamic_transform.parameters[0].validation_message #=> String
resp.job.code_gen_configuration_nodes["NodeId"].dynamic_transform.parameters[0].value #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].dynamic_transform.parameters[0].value[0] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].dynamic_transform.parameters[0].list_type #=> String, one of "str", "int", "float", "complex", "bool", "list", "null"
resp.job.code_gen_configuration_nodes["NodeId"].dynamic_transform.parameters[0].is_optional #=> Boolean
resp.job.code_gen_configuration_nodes["NodeId"].dynamic_transform.function_name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].dynamic_transform.path #=> String
resp.job.code_gen_configuration_nodes["NodeId"].dynamic_transform.version #=> String
resp.job.code_gen_configuration_nodes["NodeId"].dynamic_transform.output_schemas #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].dynamic_transform.output_schemas[0].columns #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].dynamic_transform.output_schemas[0].columns[0].name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].dynamic_transform.output_schemas[0].columns[0].type #=> String
resp.job.code_gen_configuration_nodes["NodeId"].evaluate_data_quality.name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].evaluate_data_quality.inputs #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].evaluate_data_quality.inputs[0] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].evaluate_data_quality.ruleset #=> String
resp.job.code_gen_configuration_nodes["NodeId"].evaluate_data_quality.output #=> String, one of "PrimaryInput", "EvaluationResults"
resp.job.code_gen_configuration_nodes["NodeId"].evaluate_data_quality.publishing_options.evaluation_context #=> String
resp.job.code_gen_configuration_nodes["NodeId"].evaluate_data_quality.publishing_options.results_s3_prefix #=> String
resp.job.code_gen_configuration_nodes["NodeId"].evaluate_data_quality.publishing_options.cloud_watch_metrics_enabled #=> Boolean
resp.job.code_gen_configuration_nodes["NodeId"].evaluate_data_quality.publishing_options.results_publishing_enabled #=> Boolean
resp.job.code_gen_configuration_nodes["NodeId"].evaluate_data_quality.stop_job_on_failure_options.stop_job_on_failure_timing #=> String, one of "Immediate", "AfterDataLoad"
resp.job.code_gen_configuration_nodes["NodeId"].s3_catalog_hudi_source.name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_catalog_hudi_source.database #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_catalog_hudi_source.table #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_catalog_hudi_source.additional_hudi_options #=> Hash
resp.job.code_gen_configuration_nodes["NodeId"].s3_catalog_hudi_source.additional_hudi_options["EnclosedInStringProperty"] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_catalog_hudi_source.output_schemas #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].s3_catalog_hudi_source.output_schemas[0].columns #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].s3_catalog_hudi_source.output_schemas[0].columns[0].name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_catalog_hudi_source.output_schemas[0].columns[0].type #=> String
resp.job.code_gen_configuration_nodes["NodeId"].catalog_hudi_source.name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].catalog_hudi_source.database #=> String
resp.job.code_gen_configuration_nodes["NodeId"].catalog_hudi_source.table #=> String
resp.job.code_gen_configuration_nodes["NodeId"].catalog_hudi_source.additional_hudi_options #=> Hash
resp.job.code_gen_configuration_nodes["NodeId"].catalog_hudi_source.additional_hudi_options["EnclosedInStringProperty"] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].catalog_hudi_source.output_schemas #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].catalog_hudi_source.output_schemas[0].columns #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].catalog_hudi_source.output_schemas[0].columns[0].name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].catalog_hudi_source.output_schemas[0].columns[0].type #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_hudi_source.name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_hudi_source.paths #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].s3_hudi_source.paths[0] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_hudi_source.additional_hudi_options #=> Hash
resp.job.code_gen_configuration_nodes["NodeId"].s3_hudi_source.additional_hudi_options["EnclosedInStringProperty"] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_hudi_source.additional_options.bounded_size #=> Integer
resp.job.code_gen_configuration_nodes["NodeId"].s3_hudi_source.additional_options.bounded_files #=> Integer
resp.job.code_gen_configuration_nodes["NodeId"].s3_hudi_source.additional_options.enable_sample_path #=> Boolean
resp.job.code_gen_configuration_nodes["NodeId"].s3_hudi_source.additional_options.sample_path #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_hudi_source.output_schemas #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].s3_hudi_source.output_schemas[0].columns #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].s3_hudi_source.output_schemas[0].columns[0].name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_hudi_source.output_schemas[0].columns[0].type #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_hudi_catalog_target.name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_hudi_catalog_target.inputs #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].s3_hudi_catalog_target.inputs[0] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_hudi_catalog_target.partition_keys #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].s3_hudi_catalog_target.partition_keys[0] #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].s3_hudi_catalog_target.partition_keys[0][0] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_hudi_catalog_target.table #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_hudi_catalog_target.database #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_hudi_catalog_target.additional_options #=> Hash
resp.job.code_gen_configuration_nodes["NodeId"].s3_hudi_catalog_target.additional_options["EnclosedInStringProperty"] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_hudi_catalog_target.schema_change_policy.enable_update_catalog #=> Boolean
resp.job.code_gen_configuration_nodes["NodeId"].s3_hudi_catalog_target.schema_change_policy.update_behavior #=> String, one of "UPDATE_IN_DATABASE", "LOG"
resp.job.code_gen_configuration_nodes["NodeId"].s3_hudi_direct_target.name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_hudi_direct_target.inputs #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].s3_hudi_direct_target.inputs[0] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_hudi_direct_target.path #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_hudi_direct_target.compression #=> String, one of "gzip", "lzo", "uncompressed", "snappy"
resp.job.code_gen_configuration_nodes["NodeId"].s3_hudi_direct_target.partition_keys #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].s3_hudi_direct_target.partition_keys[0] #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].s3_hudi_direct_target.partition_keys[0][0] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_hudi_direct_target.format #=> String, one of "json", "csv", "avro", "orc", "parquet", "hudi", "delta"
resp.job.code_gen_configuration_nodes["NodeId"].s3_hudi_direct_target.additional_options #=> Hash
resp.job.code_gen_configuration_nodes["NodeId"].s3_hudi_direct_target.additional_options["EnclosedInStringProperty"] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_hudi_direct_target.schema_change_policy.enable_update_catalog #=> Boolean
resp.job.code_gen_configuration_nodes["NodeId"].s3_hudi_direct_target.schema_change_policy.update_behavior #=> String, one of "UPDATE_IN_DATABASE", "LOG"
resp.job.code_gen_configuration_nodes["NodeId"].s3_hudi_direct_target.schema_change_policy.table #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_hudi_direct_target.schema_change_policy.database #=> String
resp.job.code_gen_configuration_nodes["NodeId"].direct_jdbc_source.name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].direct_jdbc_source.database #=> String
resp.job.code_gen_configuration_nodes["NodeId"].direct_jdbc_source.table #=> String
resp.job.code_gen_configuration_nodes["NodeId"].direct_jdbc_source.connection_name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].direct_jdbc_source.connection_type #=> String, one of "sqlserver", "mysql", "oracle", "postgresql", "redshift"
resp.job.code_gen_configuration_nodes["NodeId"].direct_jdbc_source.redshift_tmp_dir #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_catalog_delta_source.name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_catalog_delta_source.database #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_catalog_delta_source.table #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_catalog_delta_source.additional_delta_options #=> Hash
resp.job.code_gen_configuration_nodes["NodeId"].s3_catalog_delta_source.additional_delta_options["EnclosedInStringProperty"] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_catalog_delta_source.output_schemas #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].s3_catalog_delta_source.output_schemas[0].columns #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].s3_catalog_delta_source.output_schemas[0].columns[0].name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_catalog_delta_source.output_schemas[0].columns[0].type #=> String
resp.job.code_gen_configuration_nodes["NodeId"].catalog_delta_source.name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].catalog_delta_source.database #=> String
resp.job.code_gen_configuration_nodes["NodeId"].catalog_delta_source.table #=> String
resp.job.code_gen_configuration_nodes["NodeId"].catalog_delta_source.additional_delta_options #=> Hash
resp.job.code_gen_configuration_nodes["NodeId"].catalog_delta_source.additional_delta_options["EnclosedInStringProperty"] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].catalog_delta_source.output_schemas #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].catalog_delta_source.output_schemas[0].columns #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].catalog_delta_source.output_schemas[0].columns[0].name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].catalog_delta_source.output_schemas[0].columns[0].type #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_delta_source.name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_delta_source.paths #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].s3_delta_source.paths[0] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_delta_source.additional_delta_options #=> Hash
resp.job.code_gen_configuration_nodes["NodeId"].s3_delta_source.additional_delta_options["EnclosedInStringProperty"] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_delta_source.additional_options.bounded_size #=> Integer
resp.job.code_gen_configuration_nodes["NodeId"].s3_delta_source.additional_options.bounded_files #=> Integer
resp.job.code_gen_configuration_nodes["NodeId"].s3_delta_source.additional_options.enable_sample_path #=> Boolean
resp.job.code_gen_configuration_nodes["NodeId"].s3_delta_source.additional_options.sample_path #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_delta_source.output_schemas #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].s3_delta_source.output_schemas[0].columns #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].s3_delta_source.output_schemas[0].columns[0].name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_delta_source.output_schemas[0].columns[0].type #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_delta_catalog_target.name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_delta_catalog_target.inputs #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].s3_delta_catalog_target.inputs[0] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_delta_catalog_target.partition_keys #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].s3_delta_catalog_target.partition_keys[0] #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].s3_delta_catalog_target.partition_keys[0][0] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_delta_catalog_target.table #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_delta_catalog_target.database #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_delta_catalog_target.additional_options #=> Hash
resp.job.code_gen_configuration_nodes["NodeId"].s3_delta_catalog_target.additional_options["EnclosedInStringProperty"] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_delta_catalog_target.schema_change_policy.enable_update_catalog #=> Boolean
resp.job.code_gen_configuration_nodes["NodeId"].s3_delta_catalog_target.schema_change_policy.update_behavior #=> String, one of "UPDATE_IN_DATABASE", "LOG"
resp.job.code_gen_configuration_nodes["NodeId"].s3_delta_direct_target.name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_delta_direct_target.inputs #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].s3_delta_direct_target.inputs[0] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_delta_direct_target.partition_keys #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].s3_delta_direct_target.partition_keys[0] #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].s3_delta_direct_target.partition_keys[0][0] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_delta_direct_target.path #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_delta_direct_target.compression #=> String, one of "uncompressed", "snappy"
resp.job.code_gen_configuration_nodes["NodeId"].s3_delta_direct_target.format #=> String, one of "json", "csv", "avro", "orc", "parquet", "hudi", "delta"
resp.job.code_gen_configuration_nodes["NodeId"].s3_delta_direct_target.additional_options #=> Hash
resp.job.code_gen_configuration_nodes["NodeId"].s3_delta_direct_target.additional_options["EnclosedInStringProperty"] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_delta_direct_target.schema_change_policy.enable_update_catalog #=> Boolean
resp.job.code_gen_configuration_nodes["NodeId"].s3_delta_direct_target.schema_change_policy.update_behavior #=> String, one of "UPDATE_IN_DATABASE", "LOG"
resp.job.code_gen_configuration_nodes["NodeId"].s3_delta_direct_target.schema_change_policy.table #=> String
resp.job.code_gen_configuration_nodes["NodeId"].s3_delta_direct_target.schema_change_policy.database #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_source.name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.access_type #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.source_type #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.connection.value #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.connection.label #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.connection.description #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.schema.value #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.schema.label #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.schema.description #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.table.value #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.table.label #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.table.description #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.catalog_database.value #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.catalog_database.label #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.catalog_database.description #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.catalog_table.value #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.catalog_table.label #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.catalog_table.description #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.catalog_redshift_schema #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.catalog_redshift_table #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.temp_dir #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.iam_role.value #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.iam_role.label #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.iam_role.description #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.advanced_options #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.advanced_options[0].key #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.advanced_options[0].value #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.sample_query #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.pre_action #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.post_action #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.action #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.table_prefix #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.upsert #=> Boolean
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.merge_action #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.merge_when_matched #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.merge_when_not_matched #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.merge_clause #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.crawler_connection #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.table_schema #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.table_schema[0].value #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.table_schema[0].label #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.table_schema[0].description #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.staging_table #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.selected_columns #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.selected_columns[0].value #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.selected_columns[0].label #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.selected_columns[0].description #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_target.name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.access_type #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.source_type #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.connection.value #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.connection.label #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.connection.description #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.schema.value #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.schema.label #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.schema.description #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.table.value #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.table.label #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.table.description #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.catalog_database.value #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.catalog_database.label #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.catalog_database.description #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.catalog_table.value #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.catalog_table.label #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.catalog_table.description #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.catalog_redshift_schema #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.catalog_redshift_table #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.temp_dir #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.iam_role.value #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.iam_role.label #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.iam_role.description #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.advanced_options #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.advanced_options[0].key #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.advanced_options[0].value #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.sample_query #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.pre_action #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.post_action #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.action #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.table_prefix #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.upsert #=> Boolean
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.merge_action #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.merge_when_matched #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.merge_when_not_matched #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.merge_clause #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.crawler_connection #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.table_schema #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.table_schema[0].value #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.table_schema[0].label #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.table_schema[0].description #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.staging_table #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.selected_columns #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.selected_columns[0].value #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.selected_columns[0].label #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.selected_columns[0].description #=> String
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_target.inputs #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].amazon_redshift_target.inputs[0] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].evaluate_data_quality_multi_frame.name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].evaluate_data_quality_multi_frame.inputs #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].evaluate_data_quality_multi_frame.inputs[0] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].evaluate_data_quality_multi_frame.additional_data_sources #=> Hash
resp.job.code_gen_configuration_nodes["NodeId"].evaluate_data_quality_multi_frame.additional_data_sources["NodeName"] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].evaluate_data_quality_multi_frame.ruleset #=> String
resp.job.code_gen_configuration_nodes["NodeId"].evaluate_data_quality_multi_frame.publishing_options.evaluation_context #=> String
resp.job.code_gen_configuration_nodes["NodeId"].evaluate_data_quality_multi_frame.publishing_options.results_s3_prefix #=> String
resp.job.code_gen_configuration_nodes["NodeId"].evaluate_data_quality_multi_frame.publishing_options.cloud_watch_metrics_enabled #=> Boolean
resp.job.code_gen_configuration_nodes["NodeId"].evaluate_data_quality_multi_frame.publishing_options.results_publishing_enabled #=> Boolean
resp.job.code_gen_configuration_nodes["NodeId"].evaluate_data_quality_multi_frame.additional_options #=> Hash
resp.job.code_gen_configuration_nodes["NodeId"].evaluate_data_quality_multi_frame.additional_options["AdditionalOptionKeys"] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].evaluate_data_quality_multi_frame.stop_job_on_failure_options.stop_job_on_failure_timing #=> String, one of "Immediate", "AfterDataLoad"
resp.job.code_gen_configuration_nodes["NodeId"].recipe.name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].recipe.inputs #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].recipe.inputs[0] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].recipe.recipe_reference.recipe_arn #=> String
resp.job.code_gen_configuration_nodes["NodeId"].recipe.recipe_reference.recipe_version #=> String
resp.job.code_gen_configuration_nodes["NodeId"].recipe.recipe_steps #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].recipe.recipe_steps[0].action.operation #=> String
resp.job.code_gen_configuration_nodes["NodeId"].recipe.recipe_steps[0].action.parameters #=> Hash
resp.job.code_gen_configuration_nodes["NodeId"].recipe.recipe_steps[0].action.parameters["ParameterName"] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].recipe.recipe_steps[0].condition_expressions #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].recipe.recipe_steps[0].condition_expressions[0].condition #=> String
resp.job.code_gen_configuration_nodes["NodeId"].recipe.recipe_steps[0].condition_expressions[0].value #=> String
resp.job.code_gen_configuration_nodes["NodeId"].recipe.recipe_steps[0].condition_expressions[0].target_column #=> String
resp.job.code_gen_configuration_nodes["NodeId"].snowflake_source.name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].snowflake_source.data.source_type #=> String
resp.job.code_gen_configuration_nodes["NodeId"].snowflake_source.data.connection.value #=> String
resp.job.code_gen_configuration_nodes["NodeId"].snowflake_source.data.connection.label #=> String
resp.job.code_gen_configuration_nodes["NodeId"].snowflake_source.data.connection.description #=> String
resp.job.code_gen_configuration_nodes["NodeId"].snowflake_source.data.schema #=> String
resp.job.code_gen_configuration_nodes["NodeId"].snowflake_source.data.table #=> String
resp.job.code_gen_configuration_nodes["NodeId"].snowflake_source.data.database #=> String
resp.job.code_gen_configuration_nodes["NodeId"].snowflake_source.data.temp_dir #=> String
resp.job.code_gen_configuration_nodes["NodeId"].snowflake_source.data.iam_role.value #=> String
resp.job.code_gen_configuration_nodes["NodeId"].snowflake_source.data.iam_role.label #=> String
resp.job.code_gen_configuration_nodes["NodeId"].snowflake_source.data.iam_role.description #=> String
resp.job.code_gen_configuration_nodes["NodeId"].snowflake_source.data.additional_options #=> Hash
resp.job.code_gen_configuration_nodes["NodeId"].snowflake_source.data.additional_options["EnclosedInStringProperty"] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].snowflake_source.data.sample_query #=> String
resp.job.code_gen_configuration_nodes["NodeId"].snowflake_source.data.pre_action #=> String
resp.job.code_gen_configuration_nodes["NodeId"].snowflake_source.data.post_action #=> String
resp.job.code_gen_configuration_nodes["NodeId"].snowflake_source.data.action #=> String
resp.job.code_gen_configuration_nodes["NodeId"].snowflake_source.data.upsert #=> Boolean
resp.job.code_gen_configuration_nodes["NodeId"].snowflake_source.data.merge_action #=> String
resp.job.code_gen_configuration_nodes["NodeId"].snowflake_source.data.merge_when_matched #=> String
resp.job.code_gen_configuration_nodes["NodeId"].snowflake_source.data.merge_when_not_matched #=> String
resp.job.code_gen_configuration_nodes["NodeId"].snowflake_source.data.merge_clause #=> String
resp.job.code_gen_configuration_nodes["NodeId"].snowflake_source.data.staging_table #=> String
resp.job.code_gen_configuration_nodes["NodeId"].snowflake_source.data.selected_columns #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].snowflake_source.data.selected_columns[0].value #=> String
resp.job.code_gen_configuration_nodes["NodeId"].snowflake_source.data.selected_columns[0].label #=> String
resp.job.code_gen_configuration_nodes["NodeId"].snowflake_source.data.selected_columns[0].description #=> String
resp.job.code_gen_configuration_nodes["NodeId"].snowflake_source.data.auto_pushdown #=> Boolean
resp.job.code_gen_configuration_nodes["NodeId"].snowflake_source.data.table_schema #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].snowflake_source.data.table_schema[0].value #=> String
resp.job.code_gen_configuration_nodes["NodeId"].snowflake_source.data.table_schema[0].label #=> String
resp.job.code_gen_configuration_nodes["NodeId"].snowflake_source.data.table_schema[0].description #=> String
resp.job.code_gen_configuration_nodes["NodeId"].snowflake_source.output_schemas #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].snowflake_source.output_schemas[0].columns #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].snowflake_source.output_schemas[0].columns[0].name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].snowflake_source.output_schemas[0].columns[0].type #=> String
resp.job.code_gen_configuration_nodes["NodeId"].snowflake_target.name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].snowflake_target.data.source_type #=> String
resp.job.code_gen_configuration_nodes["NodeId"].snowflake_target.data.connection.value #=> String
resp.job.code_gen_configuration_nodes["NodeId"].snowflake_target.data.connection.label #=> String
resp.job.code_gen_configuration_nodes["NodeId"].snowflake_target.data.connection.description #=> String
resp.job.code_gen_configuration_nodes["NodeId"].snowflake_target.data.schema #=> String
resp.job.code_gen_configuration_nodes["NodeId"].snowflake_target.data.table #=> String
resp.job.code_gen_configuration_nodes["NodeId"].snowflake_target.data.database #=> String
resp.job.code_gen_configuration_nodes["NodeId"].snowflake_target.data.temp_dir #=> String
resp.job.code_gen_configuration_nodes["NodeId"].snowflake_target.data.iam_role.value #=> String
resp.job.code_gen_configuration_nodes["NodeId"].snowflake_target.data.iam_role.label #=> String
resp.job.code_gen_configuration_nodes["NodeId"].snowflake_target.data.iam_role.description #=> String
resp.job.code_gen_configuration_nodes["NodeId"].snowflake_target.data.additional_options #=> Hash
resp.job.code_gen_configuration_nodes["NodeId"].snowflake_target.data.additional_options["EnclosedInStringProperty"] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].snowflake_target.data.sample_query #=> String
resp.job.code_gen_configuration_nodes["NodeId"].snowflake_target.data.pre_action #=> String
resp.job.code_gen_configuration_nodes["NodeId"].snowflake_target.data.post_action #=> String
resp.job.code_gen_configuration_nodes["NodeId"].snowflake_target.data.action #=> String
resp.job.code_gen_configuration_nodes["NodeId"].snowflake_target.data.upsert #=> Boolean
resp.job.code_gen_configuration_nodes["NodeId"].snowflake_target.data.merge_action #=> String
resp.job.code_gen_configuration_nodes["NodeId"].snowflake_target.data.merge_when_matched #=> String
resp.job.code_gen_configuration_nodes["NodeId"].snowflake_target.data.merge_when_not_matched #=> String
resp.job.code_gen_configuration_nodes["NodeId"].snowflake_target.data.merge_clause #=> String
resp.job.code_gen_configuration_nodes["NodeId"].snowflake_target.data.staging_table #=> String
resp.job.code_gen_configuration_nodes["NodeId"].snowflake_target.data.selected_columns #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].snowflake_target.data.selected_columns[0].value #=> String
resp.job.code_gen_configuration_nodes["NodeId"].snowflake_target.data.selected_columns[0].label #=> String
resp.job.code_gen_configuration_nodes["NodeId"].snowflake_target.data.selected_columns[0].description #=> String
resp.job.code_gen_configuration_nodes["NodeId"].snowflake_target.data.auto_pushdown #=> Boolean
resp.job.code_gen_configuration_nodes["NodeId"].snowflake_target.data.table_schema #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].snowflake_target.data.table_schema[0].value #=> String
resp.job.code_gen_configuration_nodes["NodeId"].snowflake_target.data.table_schema[0].label #=> String
resp.job.code_gen_configuration_nodes["NodeId"].snowflake_target.data.table_schema[0].description #=> String
resp.job.code_gen_configuration_nodes["NodeId"].snowflake_target.inputs #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].snowflake_target.inputs[0] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].connector_data_source.name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].connector_data_source.connection_type #=> String
resp.job.code_gen_configuration_nodes["NodeId"].connector_data_source.data #=> Hash
resp.job.code_gen_configuration_nodes["NodeId"].connector_data_source.data["GenericString"] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].connector_data_source.output_schemas #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].connector_data_source.output_schemas[0].columns #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].connector_data_source.output_schemas[0].columns[0].name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].connector_data_source.output_schemas[0].columns[0].type #=> String
resp.job.code_gen_configuration_nodes["NodeId"].connector_data_target.name #=> String
resp.job.code_gen_configuration_nodes["NodeId"].connector_data_target.connection_type #=> String
resp.job.code_gen_configuration_nodes["NodeId"].connector_data_target.data #=> Hash
resp.job.code_gen_configuration_nodes["NodeId"].connector_data_target.data["GenericString"] #=> String
resp.job.code_gen_configuration_nodes["NodeId"].connector_data_target.inputs #=> Array
resp.job.code_gen_configuration_nodes["NodeId"].connector_data_target.inputs[0] #=> String
resp.job.execution_class #=> String, one of "FLEX", "STANDARD"
resp.job.source_control_details.provider #=> String, one of "GITHUB", "GITLAB", "BITBUCKET", "AWS_CODE_COMMIT"
resp.job.source_control_details.repository #=> String
resp.job.source_control_details.owner #=> String
resp.job.source_control_details.branch #=> String
resp.job.source_control_details.folder #=> String
resp.job.source_control_details.last_commit_id #=> String
resp.job.source_control_details.auth_strategy #=> String, one of "PERSONAL_ACCESS_TOKEN", "AWS_SECRETS_MANAGER"
resp.job.source_control_details.auth_token #=> String
resp.job.maintenance_window #=> String
resp.job.profile_name #=> String

Parameters:

  • params (Hash) (defaults to: {})


Options Hash (params):

  • :job_name (required, String)

    The name of the job definition to retrieve.

Returns:

  • (Types::GetJobResponse)

    Returns a Response object which responds to the following methods:

    • #job => Types::Job

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 9969

def get_job(params = {}, options = {})
  req = build_request(:get_job, params)
  req.send_request(options)
end
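
As a usage sketch, a retrieved job definition can be inspected directly; the job name below is a hypothetical placeholder:


resp = client.get_job(job_name: "my-etl-job")
job = resp.job
# Print a few commonly inspected fields of the job definition.
puts job.name
puts job.role
puts job.command.script_location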

#get_job_bookmark(params = {}) ⇒ Types::GetJobBookmarkResponse

Returns information on a job bookmark entry.

For more information about enabling and using job bookmarks, see the Tracking processed data using job bookmarks topic in the Glue developer guide.

Examples:

Request syntax with placeholder values


resp = client.get_job_bookmark({
  job_name: "JobName", # required
  run_id: "RunId",
})

Response structure


resp.job_bookmark_entry.job_name #=> String
resp.job_bookmark_entry.version #=> Integer
resp.job_bookmark_entry.run #=> Integer
resp.job_bookmark_entry.attempt #=> Integer
resp.job_bookmark_entry.previous_run_id #=> String
resp.job_bookmark_entry.run_id #=> String
resp.job_bookmark_entry.job_bookmark #=> String

Parameters:

  • params (Hash) (defaults to: {})


Options Hash (params):

  • :job_name (required, String)

    The name of the job in question.

  • :run_id (String)

    The unique run identifier associated with this job run.

Returns:

  • (Types::GetJobBookmarkResponse)

    Returns a Response object which responds to the following methods:

    • #job_bookmark_entry => Types::JobBookmarkEntry

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 10021

def get_job_bookmark(params = {}, options = {})
  req = build_request(:get_job_bookmark, params)
  req.send_request(options)
end
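
A minimal sketch of reading a bookmark entry, assuming a job with bookmarks enabled (the job name is a hypothetical placeholder):


resp = client.get_job_bookmark(job_name: "my-etl-job")
entry = resp.job_bookmark_entry
# The version and run counters advance as bookmarked runs complete.
puts "version=#{entry.version} run=#{entry.run} attempt=#{entry.attempt}"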

#get_job_run(params = {}) ⇒ Types::GetJobRunResponse

Retrieves the metadata for a given job run. Job run history remains accessible for 365 days, for both workflows and job runs.

Examples:

Request syntax with placeholder values


resp = client.get_job_run({
  job_name: "NameString", # required
  run_id: "IdString", # required
  predecessors_included: false,
})

Response structure


resp.job_run.id #=> String
resp.job_run.attempt #=> Integer
resp.job_run.previous_run_id #=> String
resp.job_run.trigger_name #=> String
resp.job_run.job_name #=> String
resp.job_run.job_mode #=> String, one of "SCRIPT", "VISUAL", "NOTEBOOK"
resp.job_run.job_run_queuing_enabled #=> Boolean
resp.job_run.started_on #=> Time
resp.job_run.last_modified_on #=> Time
resp.job_run.completed_on #=> Time
resp.job_run.job_run_state #=> String, one of "STARTING", "RUNNING", "STOPPING", "STOPPED", "SUCCEEDED", "FAILED", "TIMEOUT", "ERROR", "WAITING", "EXPIRED"
resp.job_run.arguments #=> Hash
resp.job_run.arguments["GenericString"] #=> String
resp.job_run.error_message #=> String
resp.job_run.predecessor_runs #=> Array
resp.job_run.predecessor_runs[0].job_name #=> String
resp.job_run.predecessor_runs[0].run_id #=> String
resp.job_run.allocated_capacity #=> Integer
resp.job_run.execution_time #=> Integer
resp.job_run.timeout #=> Integer
resp.job_run.max_capacity #=> Float
resp.job_run.worker_type #=> String, one of "Standard", "G.1X", "G.2X", "G.025X", "G.4X", "G.8X", "Z.2X"
resp.job_run.number_of_workers #=> Integer
resp.job_run.security_configuration #=> String
resp.job_run.log_group_name #=> String
resp.job_run.notification_property.notify_delay_after #=> Integer
resp.job_run.glue_version #=> String
resp.job_run.dpu_seconds #=> Float
resp.job_run.execution_class #=> String, one of "FLEX", "STANDARD"
resp.job_run.maintenance_window #=> String
resp.job_run.profile_name #=> String
resp.job_run.state_detail #=> String

Parameters:

  • params (Hash) (defaults to: {})


Options Hash (params):

  • :job_name (required, String)

    Name of the job definition being run.

  • :run_id (required, String)

    The ID of the job run.

  • :predecessors_included (Boolean)

    True if a list of predecessor runs should be returned.

Returns:

  • (Types::GetJobRunResponse)

    Returns a Response object which responds to the following methods:

    • #job_run => Types::JobRun

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 10089

def get_job_run(params = {}, options = {})
  req = build_request(:get_job_run, params)
  req.send_request(options)
end
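
Because job_run_state reports the states listed above, a caller can poll #get_job_run until a run reaches a terminal state. A minimal polling sketch, with hypothetical job_name and run_id placeholders (a real run ID comes from a prior StartJobRun call):


# Both values are placeholders for illustration only.
job_name = "my-etl-job"
run_id   = "jr_0123456789abcdef"

# States after which a run will not change again.
TERMINAL_STATES = %w[SUCCEEDED FAILED STOPPED TIMEOUT ERROR EXPIRED].freeze

state = nil
loop do
  state = client.get_job_run(job_name: job_name, run_id: run_id).job_run.job_run_state
  break if TERMINAL_STATES.include?(state)
  sleep 30 # back off between polls to avoid hammering the API
end
puts "run finished in state #{state}"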

#get_job_runs(params = {}) ⇒ Types::GetJobRunsResponse

Retrieves metadata for all runs of a given job definition.

GetJobRuns returns the job runs in reverse chronological order, with the newest runs returned first.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.get_job_runs({
  job_name: "NameString", # required
  next_token: "GenericString",
  max_results: 1,
})

Response structure


resp.job_runs #=> Array
resp.job_runs[0].id #=> String
resp.job_runs[0].attempt #=> Integer
resp.job_runs[0].previous_run_id #=> String
resp.job_runs[0].trigger_name #=> String
resp.job_runs[0].job_name #=> String
resp.job_runs[0].job_mode #=> String, one of "SCRIPT", "VISUAL", "NOTEBOOK"
resp.job_runs[0].job_run_queuing_enabled #=> Boolean
resp.job_runs[0].started_on #=> Time
resp.job_runs[0].last_modified_on #=> Time
resp.job_runs[0].completed_on #=> Time
resp.job_runs[0].job_run_state #=> String, one of "STARTING", "RUNNING", "STOPPING", "STOPPED", "SUCCEEDED", "FAILED", "TIMEOUT", "ERROR", "WAITING", "EXPIRED"
resp.job_runs[0].arguments #=> Hash
resp.job_runs[0].arguments["GenericString"] #=> String
resp.job_runs[0].error_message #=> String
resp.job_runs[0].predecessor_runs #=> Array
resp.job_runs[0].predecessor_runs[0].job_name #=> String
resp.job_runs[0].predecessor_runs[0].run_id #=> String
resp.job_runs[0].allocated_capacity #=> Integer
resp.job_runs[0].execution_time #=> Integer
resp.job_runs[0].timeout #=> Integer
resp.job_runs[0].max_capacity #=> Float
resp.job_runs[0].worker_type #=> String, one of "Standard", "G.1X", "G.2X", "G.025X", "G.4X", "G.8X", "Z.2X"
resp.job_runs[0].number_of_workers #=> Integer
resp.job_runs[0].security_configuration #=> String
resp.job_runs[0].log_group_name #=> String
resp.job_runs[0].notification_property.notify_delay_after #=> Integer
resp.job_runs[0].glue_version #=> String
resp.job_runs[0].dpu_seconds #=> Float
resp.job_runs[0].execution_class #=> String, one of "FLEX", "STANDARD"
resp.job_runs[0].maintenance_window #=> String
resp.job_runs[0].profile_name #=> String
resp.job_runs[0].state_detail #=> String
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})


Options Hash (params):

  • :job_name (required, String)

    The name of the job definition for which to retrieve all job runs.

  • :next_token (String)

    A continuation token, if this is a continuation call.

  • :max_results (Integer)

The maximum number of job runs to return in the response.

Returns:

  • (Types::GetJobRunsResponse)

    Returns a Response object which responds to the following methods:

    • #job_runs => Array<Types::JobRun>
    • #next_token => String

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 10164

def get_job_runs(params = {}, options = {})
  req = build_request(:get_job_runs, params)
  req.send_request(options)
end
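
Because the returned response is pageable and Enumerable, every run can be traversed without managing next_token by hand. A minimal sketch, with a hypothetical job name:


# Each iteration yields one page of results; the SDK fetches
# subsequent pages automatically.
client.get_job_runs(job_name: "my-etl-job").each do |page|
  page.job_runs.each do |run|
    puts "#{run.id}: #{run.job_run_state} (started #{run.started_on})"
  end
end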

#get_jobs(params = {}) ⇒ Types::GetJobsResponse

Retrieves all current job definitions.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.
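
As with #get_job_runs, the pages can be enumerated directly. A minimal sketch that lists every job name, with pagination handled by the SDK:


client.get_jobs.each do |page|
  page.jobs.each { |job| puts job.name }
end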

Examples:

Request syntax with placeholder values


resp = client.get_jobs({
  next_token: "GenericString",
  max_results: 1,
})

Response structure


resp.jobs #=> Array
resp.jobs[0].name #=> String
resp.jobs[0].job_mode #=> String, one of "SCRIPT", "VISUAL", "NOTEBOOK"
resp.jobs[0].job_run_queuing_enabled #=> Boolean
resp.jobs[0].description #=> String
resp.jobs[0].log_uri #=> String
resp.jobs[0].role #=> String
resp.jobs[0].created_on #=> Time
resp.jobs[0].last_modified_on #=> Time
resp.jobs[0].execution_property.max_concurrent_runs #=> Integer
resp.jobs[0].command.name #=> String
resp.jobs[0].command.script_location #=> String
resp.jobs[0].command.python_version #=> String
resp.jobs[0].command.runtime #=> String
resp.jobs[0].default_arguments #=> Hash
resp.jobs[0].default_arguments["GenericString"] #=> String
resp.jobs[0].non_overridable_arguments #=> Hash
resp.jobs[0].non_overridable_arguments["GenericString"] #=> String
resp.jobs[0].connections.connections #=> Array
resp.jobs[0].connections.connections[0] #=> String
resp.jobs[0].max_retries #=> Integer
resp.jobs[0].allocated_capacity #=> Integer
resp.jobs[0].timeout #=> Integer
resp.jobs[0].max_capacity #=> Float
resp.jobs[0].worker_type #=> String, one of "Standard", "G.1X", "G.2X", "G.025X", "G.4X", "G.8X", "Z.2X"
resp.jobs[0].number_of_workers #=> Integer
resp.jobs[0].security_configuration #=> String
resp.jobs[0].notification_property.notify_delay_after #=> Integer
resp.jobs[0].glue_version #=> String
resp.jobs[0].code_gen_configuration_nodes #=> Hash
resp.jobs[0].code_gen_configuration_nodes["NodeId"].athena_connector_source.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].athena_connector_source.connection_name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].athena_connector_source.connector_name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].athena_connector_source.connection_type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].athena_connector_source.connection_table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].athena_connector_source.schema_name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].athena_connector_source.output_schemas #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].athena_connector_source.output_schemas[0].columns #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].athena_connector_source.output_schemas[0].columns[0].name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].athena_connector_source.output_schemas[0].columns[0].type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_source.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_source.connection_name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_source.connector_name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_source.connection_type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_source.additional_options.filter_predicate #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_source.additional_options.partition_column #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_source.additional_options.lower_bound #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_source.additional_options.upper_bound #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_source.additional_options.num_partitions #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_source.additional_options.job_bookmark_keys #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_source.additional_options.job_bookmark_keys[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_source.additional_options.job_bookmark_keys_sort_order #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_source.additional_options.data_type_mapping #=> Hash
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_source.additional_options.data_type_mapping["JDBCDataType"] #=> String, one of "DATE", "STRING", "TIMESTAMP", "INT", "FLOAT", "LONG", "BIGDECIMAL", "BYTE", "SHORT", "DOUBLE"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_source.connection_table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_source.query #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_source.output_schemas #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_source.output_schemas[0].columns #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_source.output_schemas[0].columns[0].name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_source.output_schemas[0].columns[0].type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_connector_source.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_connector_source.connection_name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_connector_source.connector_name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_connector_source.connection_type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_connector_source.additional_options #=> Hash
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_connector_source.additional_options["EnclosedInStringProperty"] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_connector_source.output_schemas #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_connector_source.output_schemas[0].columns #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_connector_source.output_schemas[0].columns[0].name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_connector_source.output_schemas[0].columns[0].type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_source.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_source.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_source.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].redshift_source.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].redshift_source.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].redshift_source.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].redshift_source.redshift_tmp_dir #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].redshift_source.tmp_dir_iam_role #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_source.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_source.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_source.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_source.partition_predicate #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_source.additional_options.bounded_size #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_source.additional_options.bounded_files #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_csv_source.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_csv_source.paths #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_csv_source.paths[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_csv_source.compression_type #=> String, one of "gzip", "bzip2"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_csv_source.exclusions #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_csv_source.exclusions[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_csv_source.group_size #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_csv_source.group_files #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_csv_source.recurse #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_csv_source.max_band #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_csv_source.max_files_in_band #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_csv_source.additional_options.bounded_size #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_csv_source.additional_options.bounded_files #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_csv_source.additional_options.enable_sample_path #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_csv_source.additional_options.sample_path #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_csv_source.separator #=> String, one of "comma", "ctrla", "pipe", "semicolon", "tab"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_csv_source.escaper #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_csv_source.quote_char #=> String, one of "quote", "quillemet", "single_quote", "disabled"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_csv_source.multiline #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_csv_source.with_header #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_csv_source.write_header #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_csv_source.skip_first #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_csv_source.optimize_performance #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_csv_source.output_schemas #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_csv_source.output_schemas[0].columns #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_csv_source.output_schemas[0].columns[0].name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_csv_source.output_schemas[0].columns[0].type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_json_source.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_json_source.paths #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_json_source.paths[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_json_source.compression_type #=> String, one of "gzip", "bzip2"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_json_source.exclusions #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_json_source.exclusions[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_json_source.group_size #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_json_source.group_files #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_json_source.recurse #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_json_source.max_band #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_json_source.max_files_in_band #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_json_source.additional_options.bounded_size #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_json_source.additional_options.bounded_files #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_json_source.additional_options.enable_sample_path #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_json_source.additional_options.sample_path #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_json_source.json_path #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_json_source.multiline #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_json_source.output_schemas #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_json_source.output_schemas[0].columns #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_json_source.output_schemas[0].columns[0].name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_json_source.output_schemas[0].columns[0].type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_parquet_source.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_parquet_source.paths #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_parquet_source.paths[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_parquet_source.compression_type #=> String, one of "snappy", "lzo", "gzip", "uncompressed", "none"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_parquet_source.exclusions #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_parquet_source.exclusions[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_parquet_source.group_size #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_parquet_source.group_files #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_parquet_source.recurse #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_parquet_source.max_band #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_parquet_source.max_files_in_band #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_parquet_source.additional_options.bounded_size #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_parquet_source.additional_options.bounded_files #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_parquet_source.additional_options.enable_sample_path #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_parquet_source.additional_options.sample_path #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_parquet_source.output_schemas #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_parquet_source.output_schemas[0].columns #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_parquet_source.output_schemas[0].columns[0].name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_parquet_source.output_schemas[0].columns[0].type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].relational_catalog_source.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].relational_catalog_source.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].relational_catalog_source.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].dynamo_db_catalog_source.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].dynamo_db_catalog_source.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].dynamo_db_catalog_source.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_target.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_target.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_target.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_target.connection_name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_target.connection_table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_target.connector_name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_target.connection_type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_target.additional_options #=> Hash
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_target.additional_options["EnclosedInStringProperty"] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_target.output_schemas #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_target.output_schemas[0].columns #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_target.output_schemas[0].columns[0].name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_target.output_schemas[0].columns[0].type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_connector_target.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_connector_target.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_connector_target.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_connector_target.connection_name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_connector_target.connector_name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_connector_target.connection_type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_connector_target.additional_options #=> Hash
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_connector_target.additional_options["EnclosedInStringProperty"] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_connector_target.output_schemas #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_connector_target.output_schemas[0].columns #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_connector_target.output_schemas[0].columns[0].name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_connector_target.output_schemas[0].columns[0].type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_target.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_target.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_target.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_target.partition_keys #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_target.partition_keys[0] #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_target.partition_keys[0][0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_target.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_target.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].redshift_target.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].redshift_target.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].redshift_target.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].redshift_target.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].redshift_target.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].redshift_target.redshift_tmp_dir #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].redshift_target.tmp_dir_iam_role #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].redshift_target.upsert_redshift_options.table_location #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].redshift_target.upsert_redshift_options.connection_name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].redshift_target.upsert_redshift_options.upsert_keys #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].redshift_target.upsert_redshift_options.upsert_keys[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_target.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_target.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_target.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_target.partition_keys #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_target.partition_keys[0] #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_target.partition_keys[0][0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_target.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_target.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_target.schema_change_policy.enable_update_catalog #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_target.schema_change_policy.update_behavior #=> String, one of "UPDATE_IN_DATABASE", "LOG"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_glue_parquet_target.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_glue_parquet_target.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_glue_parquet_target.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_glue_parquet_target.partition_keys #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_glue_parquet_target.partition_keys[0] #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_glue_parquet_target.partition_keys[0][0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_glue_parquet_target.path #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_glue_parquet_target.compression #=> String, one of "snappy", "lzo", "gzip", "uncompressed", "none"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_glue_parquet_target.schema_change_policy.enable_update_catalog #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_glue_parquet_target.schema_change_policy.update_behavior #=> String, one of "UPDATE_IN_DATABASE", "LOG"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_glue_parquet_target.schema_change_policy.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_glue_parquet_target.schema_change_policy.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_direct_target.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_direct_target.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_direct_target.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_direct_target.partition_keys #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_direct_target.partition_keys[0] #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_direct_target.partition_keys[0][0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_direct_target.path #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_direct_target.compression #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_direct_target.format #=> String, one of "json", "csv", "avro", "orc", "parquet", "hudi", "delta"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_direct_target.schema_change_policy.enable_update_catalog #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_direct_target.schema_change_policy.update_behavior #=> String, one of "UPDATE_IN_DATABASE", "LOG"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_direct_target.schema_change_policy.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_direct_target.schema_change_policy.database #=> String
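# Illustrative sketch: summarize every direct S3 target's destination path,
# format, and compression. Assumes `resp` as before.
resp.jobs[0].code_gen_configuration_nodes.each_value do |node|
  tgt = node.s3_direct_target
  next unless tgt
  puts "#{tgt.name}: #{tgt.path} (#{tgt.format}, #{tgt.compression || 'no compression'})"
end
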
resp.jobs[0].code_gen_configuration_nodes["NodeId"].apply_mapping.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].apply_mapping.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].apply_mapping.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].apply_mapping.mapping #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].apply_mapping.mapping[0].to_key #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].apply_mapping.mapping[0].from_path #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].apply_mapping.mapping[0].from_path[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].apply_mapping.mapping[0].from_type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].apply_mapping.mapping[0].to_type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].apply_mapping.mapping[0].dropped #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].apply_mapping.mapping[0].children #=> Types::Mappings
resp.jobs[0].code_gen_configuration_nodes["NodeId"].select_fields.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].select_fields.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].select_fields.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].select_fields.paths #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].select_fields.paths[0] #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].select_fields.paths[0][0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].drop_fields.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].drop_fields.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].drop_fields.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].drop_fields.paths #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].drop_fields.paths[0] #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].drop_fields.paths[0][0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].rename_field.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].rename_field.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].rename_field.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].rename_field.source_path #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].rename_field.source_path[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].rename_field.target_path #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].rename_field.target_path[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spigot.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spigot.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spigot.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spigot.path #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spigot.topk #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spigot.prob #=> Float
resp.jobs[0].code_gen_configuration_nodes["NodeId"].join.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].join.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].join.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].join.join_type #=> String, one of "equijoin", "left", "right", "outer", "leftsemi", "leftanti"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].join.columns #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].join.columns[0].from #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].join.columns[0].keys #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].join.columns[0].keys[0] #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].join.columns[0].keys[0][0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].split_fields.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].split_fields.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].split_fields.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].split_fields.paths #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].split_fields.paths[0] #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].split_fields.paths[0][0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].select_from_collection.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].select_from_collection.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].select_from_collection.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].select_from_collection.index #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].fill_missing_values.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].fill_missing_values.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].fill_missing_values.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].fill_missing_values.imputed_path #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].fill_missing_values.filled_path #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].filter.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].filter.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].filter.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].filter.logical_operator #=> String, one of "AND", "OR"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].filter.filters #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].filter.filters[0].operation #=> String, one of "EQ", "LT", "GT", "LTE", "GTE", "REGEX", "ISNULL"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].filter.filters[0].negated #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].filter.filters[0].values #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].filter.filters[0].values[0].type #=> String, one of "COLUMNEXTRACTED", "CONSTANT"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].filter.filters[0].values[0].value #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].filter.filters[0].values[0].value[0] #=> String
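# Illustrative sketch: render a Filter node's expressions as readable text,
# e.g. "NOT EQ(price) AND GT(qty)". "NodeId" is the placeholder key used
# throughout the listing; joining value paths with '.' is purely for display.
if (f = resp.jobs[0].code_gen_configuration_nodes["NodeId"].filter)
  clauses = f.filters.map do |expr|
    operands = expr.values.map { |v| Array(v.value).join('.') }.join(', ')
    "#{'NOT ' if expr.negated}#{expr.operation}(#{operands})"
  end
  puts clauses.join(" #{f.logical_operator} ")
end
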
resp.jobs[0].code_gen_configuration_nodes["NodeId"].custom_code.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].custom_code.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].custom_code.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].custom_code.code #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].custom_code.class_name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].custom_code.output_schemas #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].custom_code.output_schemas[0].columns #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].custom_code.output_schemas[0].columns[0].name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].custom_code.output_schemas[0].columns[0].type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_sql.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_sql.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_sql.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_sql.sql_query #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_sql.sql_aliases #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_sql.sql_aliases[0].from #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_sql.sql_aliases[0].alias #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_sql.output_schemas #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_sql.output_schemas[0].columns #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_sql.output_schemas[0].columns[0].name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_sql.output_schemas[0].columns[0].type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kinesis_source.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kinesis_source.window_size #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kinesis_source.detect_schema #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kinesis_source.streaming_options.endpoint_url #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kinesis_source.streaming_options.stream_name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kinesis_source.streaming_options.classification #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kinesis_source.streaming_options.delimiter #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kinesis_source.streaming_options.starting_position #=> String, one of "latest", "trim_horizon", "earliest", "timestamp"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kinesis_source.streaming_options.max_fetch_time_in_ms #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kinesis_source.streaming_options.max_fetch_records_per_shard #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kinesis_source.streaming_options.max_record_per_read #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kinesis_source.streaming_options.add_idle_time_between_reads #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kinesis_source.streaming_options.idle_time_between_reads_in_ms #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kinesis_source.streaming_options.describe_shard_interval #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kinesis_source.streaming_options.num_retries #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kinesis_source.streaming_options.retry_interval_ms #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kinesis_source.streaming_options.max_retry_interval_ms #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kinesis_source.streaming_options.avoid_empty_batches #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kinesis_source.streaming_options.stream_arn #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kinesis_source.streaming_options.role_arn #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kinesis_source.streaming_options.role_session_name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kinesis_source.streaming_options.add_record_timestamp #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kinesis_source.streaming_options.emit_consumer_lag_metrics #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kinesis_source.streaming_options.starting_timestamp #=> Time
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kinesis_source.data_preview_options.polling_time #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kinesis_source.data_preview_options.record_polling_limit #=> Integer
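# Illustrative sketch: show where each direct Kinesis source reads from and
# its starting position. Either stream_name or stream_arn may be set, per the
# structure above.
resp.jobs[0].code_gen_configuration_nodes.each_value do |node|
  src = node.direct_kinesis_source
  next unless src && src.streaming_options
  opts = src.streaming_options
  puts "#{src.name}: #{opts.stream_arn || opts.stream_name} (#{opts.starting_position})"
end
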
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kafka_source.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kafka_source.streaming_options.bootstrap_servers #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kafka_source.streaming_options.security_protocol #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kafka_source.streaming_options.connection_name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kafka_source.streaming_options.topic_name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kafka_source.streaming_options.assign #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kafka_source.streaming_options.subscribe_pattern #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kafka_source.streaming_options.classification #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kafka_source.streaming_options.delimiter #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kafka_source.streaming_options.starting_offsets #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kafka_source.streaming_options.ending_offsets #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kafka_source.streaming_options.poll_timeout_ms #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kafka_source.streaming_options.num_retries #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kafka_source.streaming_options.retry_interval_ms #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kafka_source.streaming_options.max_offsets_per_trigger #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kafka_source.streaming_options.min_partitions #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kafka_source.streaming_options.include_headers #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kafka_source.streaming_options.add_record_timestamp #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kafka_source.streaming_options.emit_consumer_lag_metrics #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kafka_source.streaming_options.starting_timestamp #=> Time
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kafka_source.window_size #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kafka_source.detect_schema #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kafka_source.data_preview_options.polling_time #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kafka_source.data_preview_options.record_polling_limit #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.window_size #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.detect_schema #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.streaming_options.endpoint_url #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.streaming_options.stream_name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.streaming_options.classification #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.streaming_options.delimiter #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.streaming_options.starting_position #=> String, one of "latest", "trim_horizon", "earliest", "timestamp"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.streaming_options.max_fetch_time_in_ms #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.streaming_options.max_fetch_records_per_shard #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.streaming_options.max_record_per_read #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.streaming_options.add_idle_time_between_reads #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.streaming_options.idle_time_between_reads_in_ms #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.streaming_options.describe_shard_interval #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.streaming_options.num_retries #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.streaming_options.retry_interval_ms #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.streaming_options.max_retry_interval_ms #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.streaming_options.avoid_empty_batches #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.streaming_options.stream_arn #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.streaming_options.role_arn #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.streaming_options.role_session_name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.streaming_options.add_record_timestamp #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.streaming_options.emit_consumer_lag_metrics #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.streaming_options.starting_timestamp #=> Time
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.data_preview_options.polling_time #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.data_preview_options.record_polling_limit #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kafka_source.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kafka_source.window_size #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kafka_source.detect_schema #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kafka_source.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kafka_source.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kafka_source.streaming_options.bootstrap_servers #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kafka_source.streaming_options.security_protocol #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kafka_source.streaming_options.connection_name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kafka_source.streaming_options.topic_name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kafka_source.streaming_options.assign #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kafka_source.streaming_options.subscribe_pattern #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kafka_source.streaming_options.classification #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kafka_source.streaming_options.delimiter #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kafka_source.streaming_options.starting_offsets #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kafka_source.streaming_options.ending_offsets #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kafka_source.streaming_options.poll_timeout_ms #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kafka_source.streaming_options.num_retries #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kafka_source.streaming_options.retry_interval_ms #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kafka_source.streaming_options.max_offsets_per_trigger #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kafka_source.streaming_options.min_partitions #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kafka_source.streaming_options.include_headers #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kafka_source.streaming_options.add_record_timestamp #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kafka_source.streaming_options.emit_consumer_lag_metrics #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kafka_source.streaming_options.starting_timestamp #=> Time
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kafka_source.data_preview_options.polling_time #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kafka_source.data_preview_options.record_polling_limit #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].drop_null_fields.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].drop_null_fields.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].drop_null_fields.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].drop_null_fields.null_check_box_list.is_empty #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].drop_null_fields.null_check_box_list.is_null_string #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].drop_null_fields.null_check_box_list.is_neg_one #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].drop_null_fields.null_text_list #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].drop_null_fields.null_text_list[0].value #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].drop_null_fields.null_text_list[0].datatype.id #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].drop_null_fields.null_text_list[0].datatype.label #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].merge.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].merge.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].merge.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].merge.source #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].merge.primary_keys #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].merge.primary_keys[0] #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].merge.primary_keys[0][0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].union.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].union.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].union.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].union.union_type #=> String, one of "ALL", "DISTINCT"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].pii_detection.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].pii_detection.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].pii_detection.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].pii_detection.pii_type #=> String, one of "RowAudit", "RowMasking", "ColumnAudit", "ColumnMasking"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].pii_detection.entity_types_to_detect #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].pii_detection.entity_types_to_detect[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].pii_detection.output_column_name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].pii_detection.sample_fraction #=> Float
resp.jobs[0].code_gen_configuration_nodes["NodeId"].pii_detection.threshold_fraction #=> Float
resp.jobs[0].code_gen_configuration_nodes["NodeId"].pii_detection.mask_value #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].aggregate.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].aggregate.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].aggregate.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].aggregate.groups #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].aggregate.groups[0] #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].aggregate.groups[0][0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].aggregate.aggs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].aggregate.aggs[0].column #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].aggregate.aggs[0].column[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].aggregate.aggs[0].agg_func #=> String, one of "avg", "countDistinct", "count", "first", "last", "kurtosis", "max", "min", "skewness", "stddev_samp", "stddev_pop", "sum", "sumDistinct", "var_samp", "var_pop"
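# Illustrative sketch: print an Aggregate node as "func(column) GROUP BY ...".
# Column and group entries are string paths (arrays of strings), as shown above.
resp.jobs[0].code_gen_configuration_nodes.each_value do |node|
  agg = node.aggregate
  next unless agg
  funcs  = agg.aggs.map { |a| "#{a.agg_func}(#{Array(a.column).join('.')})" }
  groups = agg.groups.map { |g| Array(g).join('.') }
  puts "#{agg.name}: #{funcs.join(', ')} GROUP BY #{groups.join(', ')}"
end
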
resp.jobs[0].code_gen_configuration_nodes["NodeId"].drop_duplicates.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].drop_duplicates.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].drop_duplicates.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].drop_duplicates.columns #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].drop_duplicates.columns[0] #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].drop_duplicates.columns[0][0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].governed_catalog_target.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].governed_catalog_target.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].governed_catalog_target.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].governed_catalog_target.partition_keys #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].governed_catalog_target.partition_keys[0] #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].governed_catalog_target.partition_keys[0][0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].governed_catalog_target.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].governed_catalog_target.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].governed_catalog_target.schema_change_policy.enable_update_catalog #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].governed_catalog_target.schema_change_policy.update_behavior #=> String, one of "UPDATE_IN_DATABASE", "LOG"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].governed_catalog_source.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].governed_catalog_source.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].governed_catalog_source.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].governed_catalog_source.partition_predicate #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].governed_catalog_source.additional_options.bounded_size #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].governed_catalog_source.additional_options.bounded_files #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].microsoft_sql_server_catalog_source.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].microsoft_sql_server_catalog_source.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].microsoft_sql_server_catalog_source.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].my_sql_catalog_source.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].my_sql_catalog_source.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].my_sql_catalog_source.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].oracle_sql_catalog_source.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].oracle_sql_catalog_source.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].oracle_sql_catalog_source.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].postgre_sql_catalog_source.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].postgre_sql_catalog_source.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].postgre_sql_catalog_source.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].microsoft_sql_server_catalog_target.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].microsoft_sql_server_catalog_target.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].microsoft_sql_server_catalog_target.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].microsoft_sql_server_catalog_target.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].microsoft_sql_server_catalog_target.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].my_sql_catalog_target.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].my_sql_catalog_target.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].my_sql_catalog_target.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].my_sql_catalog_target.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].my_sql_catalog_target.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].oracle_sql_catalog_target.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].oracle_sql_catalog_target.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].oracle_sql_catalog_target.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].oracle_sql_catalog_target.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].oracle_sql_catalog_target.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].postgre_sql_catalog_target.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].postgre_sql_catalog_target.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].postgre_sql_catalog_target.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].postgre_sql_catalog_target.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].postgre_sql_catalog_target.table #=> String
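# Illustrative sketch: the four JDBC catalog targets above (SQL Server, MySQL,
# Oracle, PostgreSQL) share the same shape, so they can be inspected uniformly.
resp.jobs[0].code_gen_configuration_nodes.each_value do |node|
  [node.microsoft_sql_server_catalog_target, node.my_sql_catalog_target,
   node.oracle_sql_catalog_target, node.postgre_sql_catalog_target].compact.each do |tgt|
    puts "#{tgt.name}: #{tgt.database}.#{tgt.table}"
  end
end
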
resp.jobs[0].code_gen_configuration_nodes["NodeId"].dynamic_transform.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].dynamic_transform.transform_name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].dynamic_transform.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].dynamic_transform.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].dynamic_transform.parameters #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].dynamic_transform.parameters[0].name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].dynamic_transform.parameters[0].type #=> String, one of "str", "int", "float", "complex", "bool", "list", "null"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].dynamic_transform.parameters[0].validation_rule #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].dynamic_transform.parameters[0].validation_message #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].dynamic_transform.parameters[0].value #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].dynamic_transform.parameters[0].value[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].dynamic_transform.parameters[0].list_type #=> String, one of "str", "int", "float", "complex", "bool", "list", "null"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].dynamic_transform.parameters[0].is_optional #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].dynamic_transform.function_name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].dynamic_transform.path #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].dynamic_transform.version #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].dynamic_transform.output_schemas #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].dynamic_transform.output_schemas[0].columns #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].dynamic_transform.output_schemas[0].columns[0].name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].dynamic_transform.output_schemas[0].columns[0].type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].evaluate_data_quality.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].evaluate_data_quality.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].evaluate_data_quality.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].evaluate_data_quality.ruleset #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].evaluate_data_quality.output #=> String, one of "PrimaryInput", "EvaluationResults"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].evaluate_data_quality.publishing_options.evaluation_context #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].evaluate_data_quality.publishing_options.results_s3_prefix #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].evaluate_data_quality.publishing_options.cloud_watch_metrics_enabled #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].evaluate_data_quality.publishing_options.results_publishing_enabled #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].evaluate_data_quality.stop_job_on_failure_options.stop_job_on_failure_timing #=> String, one of "Immediate", "AfterDataLoad"
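# Illustrative sketch: check whether a data quality node publishes its results
# and where they land in S3, using the publishing_options members shown above.
resp.jobs[0].code_gen_configuration_nodes.each_value do |node|
  dq = node.evaluate_data_quality
  next unless dq
  pub = dq.publishing_options
  if pub && pub.results_publishing_enabled
    puts "#{dq.name}: results -> #{pub.results_s3_prefix}"
  end
end
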
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_hudi_source.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_hudi_source.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_hudi_source.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_hudi_source.additional_hudi_options #=> Hash
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_hudi_source.additional_hudi_options["EnclosedInStringProperty"] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_hudi_source.output_schemas #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_hudi_source.output_schemas[0].columns #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_hudi_source.output_schemas[0].columns[0].name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_hudi_source.output_schemas[0].columns[0].type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_hudi_source.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_hudi_source.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_hudi_source.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_hudi_source.additional_hudi_options #=> Hash
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_hudi_source.additional_hudi_options["EnclosedInStringProperty"] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_hudi_source.output_schemas #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_hudi_source.output_schemas[0].columns #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_hudi_source.output_schemas[0].columns[0].name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_hudi_source.output_schemas[0].columns[0].type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_hudi_source.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_hudi_source.paths #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_hudi_source.paths[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_hudi_source.additional_hudi_options #=> Hash
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_hudi_source.additional_hudi_options["EnclosedInStringProperty"] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_hudi_source.additional_options.bounded_size #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_hudi_source.additional_options.bounded_files #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_hudi_source.additional_options.enable_sample_path #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_hudi_source.additional_options.sample_path #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_hudi_source.output_schemas #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_hudi_source.output_schemas[0].columns #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_hudi_source.output_schemas[0].columns[0].name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_hudi_source.output_schemas[0].columns[0].type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_hudi_catalog_target.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_hudi_catalog_target.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_hudi_catalog_target.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_hudi_catalog_target.partition_keys #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_hudi_catalog_target.partition_keys[0] #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_hudi_catalog_target.partition_keys[0][0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_hudi_catalog_target.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_hudi_catalog_target.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_hudi_catalog_target.additional_options #=> Hash
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_hudi_catalog_target.additional_options["EnclosedInStringProperty"] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_hudi_catalog_target.schema_change_policy.enable_update_catalog #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_hudi_catalog_target.schema_change_policy.update_behavior #=> String, one of "UPDATE_IN_DATABASE", "LOG"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_hudi_direct_target.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_hudi_direct_target.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_hudi_direct_target.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_hudi_direct_target.path #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_hudi_direct_target.compression #=> String, one of "gzip", "lzo", "uncompressed", "snappy"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_hudi_direct_target.partition_keys #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_hudi_direct_target.partition_keys[0] #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_hudi_direct_target.partition_keys[0][0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_hudi_direct_target.format #=> String, one of "json", "csv", "avro", "orc", "parquet", "hudi", "delta"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_hudi_direct_target.additional_options #=> Hash
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_hudi_direct_target.additional_options["EnclosedInStringProperty"] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_hudi_direct_target.schema_change_policy.enable_update_catalog #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_hudi_direct_target.schema_change_policy.update_behavior #=> String, one of "UPDATE_IN_DATABASE", "LOG"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_hudi_direct_target.schema_change_policy.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_hudi_direct_target.schema_change_policy.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_jdbc_source.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_jdbc_source.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_jdbc_source.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_jdbc_source.connection_name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_jdbc_source.connection_type #=> String, one of "sqlserver", "mysql", "oracle", "postgresql", "redshift"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_jdbc_source.redshift_tmp_dir #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_delta_source.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_delta_source.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_delta_source.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_delta_source.additional_delta_options #=> Hash
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_delta_source.additional_delta_options["EnclosedInStringProperty"] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_delta_source.output_schemas #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_delta_source.output_schemas[0].columns #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_delta_source.output_schemas[0].columns[0].name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_delta_source.output_schemas[0].columns[0].type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_delta_source.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_delta_source.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_delta_source.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_delta_source.additional_delta_options #=> Hash
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_delta_source.additional_delta_options["EnclosedInStringProperty"] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_delta_source.output_schemas #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_delta_source.output_schemas[0].columns #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_delta_source.output_schemas[0].columns[0].name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_delta_source.output_schemas[0].columns[0].type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_delta_source.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_delta_source.paths #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_delta_source.paths[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_delta_source.additional_delta_options #=> Hash
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_delta_source.additional_delta_options["EnclosedInStringProperty"] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_delta_source.additional_options.bounded_size #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_delta_source.additional_options.bounded_files #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_delta_source.additional_options.enable_sample_path #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_delta_source.additional_options.sample_path #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_delta_source.output_schemas #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_delta_source.output_schemas[0].columns #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_delta_source.output_schemas[0].columns[0].name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_delta_source.output_schemas[0].columns[0].type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_delta_catalog_target.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_delta_catalog_target.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_delta_catalog_target.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_delta_catalog_target.partition_keys #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_delta_catalog_target.partition_keys[0] #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_delta_catalog_target.partition_keys[0][0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_delta_catalog_target.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_delta_catalog_target.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_delta_catalog_target.additional_options #=> Hash
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_delta_catalog_target.additional_options["EnclosedInStringProperty"] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_delta_catalog_target.schema_change_policy.enable_update_catalog #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_delta_catalog_target.schema_change_policy.update_behavior #=> String, one of "UPDATE_IN_DATABASE", "LOG"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_delta_direct_target.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_delta_direct_target.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_delta_direct_target.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_delta_direct_target.partition_keys #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_delta_direct_target.partition_keys[0] #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_delta_direct_target.partition_keys[0][0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_delta_direct_target.path #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_delta_direct_target.compression #=> String, one of "uncompressed", "snappy"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_delta_direct_target.format #=> String, one of "json", "csv", "avro", "orc", "parquet", "hudi", "delta"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_delta_direct_target.additional_options #=> Hash
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_delta_direct_target.additional_options["EnclosedInStringProperty"] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_delta_direct_target.schema_change_policy.enable_update_catalog #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_delta_direct_target.schema_change_policy.update_behavior #=> String, one of "UPDATE_IN_DATABASE", "LOG"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_delta_direct_target.schema_change_policy.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_delta_direct_target.schema_change_policy.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.access_type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.source_type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.connection.value #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.connection.label #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.connection.description #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.schema.value #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.schema.label #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.schema.description #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.table.value #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.table.label #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.table.description #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.catalog_database.value #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.catalog_database.label #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.catalog_database.description #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.catalog_table.value #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.catalog_table.label #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.catalog_table.description #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.catalog_redshift_schema #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.catalog_redshift_table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.temp_dir #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.iam_role.value #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.iam_role.label #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.iam_role.description #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.advanced_options #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.advanced_options[0].key #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.advanced_options[0].value #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.sample_query #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.pre_action #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.post_action #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.action #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.table_prefix #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.upsert #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.merge_action #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.merge_when_matched #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.merge_when_not_matched #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.merge_clause #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.crawler_connection #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.table_schema #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.table_schema[0].value #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.table_schema[0].label #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.table_schema[0].description #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.staging_table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.selected_columns #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.selected_columns[0].value #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.selected_columns[0].label #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_source.data.selected_columns[0].description #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.access_type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.source_type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.connection.value #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.connection.label #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.connection.description #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.schema.value #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.schema.label #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.schema.description #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.table.value #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.table.label #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.table.description #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.catalog_database.value #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.catalog_database.label #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.catalog_database.description #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.catalog_table.value #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.catalog_table.label #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.catalog_table.description #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.catalog_redshift_schema #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.catalog_redshift_table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.temp_dir #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.iam_role.value #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.iam_role.label #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.iam_role.description #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.advanced_options #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.advanced_options[0].key #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.advanced_options[0].value #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.sample_query #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.pre_action #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.post_action #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.action #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.table_prefix #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.upsert #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.merge_action #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.merge_when_matched #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.merge_when_not_matched #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.merge_clause #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.crawler_connection #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.table_schema #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.table_schema[0].value #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.table_schema[0].label #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.table_schema[0].description #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.staging_table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.selected_columns #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.selected_columns[0].value #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.selected_columns[0].label #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.data.selected_columns[0].description #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].amazon_redshift_target.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].evaluate_data_quality_multi_frame.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].evaluate_data_quality_multi_frame.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].evaluate_data_quality_multi_frame.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].evaluate_data_quality_multi_frame.additional_data_sources #=> Hash
resp.jobs[0].code_gen_configuration_nodes["NodeId"].evaluate_data_quality_multi_frame.additional_data_sources["NodeName"] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].evaluate_data_quality_multi_frame.ruleset #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].evaluate_data_quality_multi_frame.publishing_options.evaluation_context #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].evaluate_data_quality_multi_frame.publishing_options.results_s3_prefix #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].evaluate_data_quality_multi_frame.publishing_options.cloud_watch_metrics_enabled #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].evaluate_data_quality_multi_frame.publishing_options.results_publishing_enabled #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].evaluate_data_quality_multi_frame.additional_options #=> Hash
resp.jobs[0].code_gen_configuration_nodes["NodeId"].evaluate_data_quality_multi_frame.additional_options["AdditionalOptionKeys"] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].evaluate_data_quality_multi_frame.stop_job_on_failure_options.stop_job_on_failure_timing #=> String, one of "Immediate", "AfterDataLoad"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].recipe.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].recipe.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].recipe.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].recipe.recipe_reference.recipe_arn #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].recipe.recipe_reference.recipe_version #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].recipe.recipe_steps #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].recipe.recipe_steps[0].action.operation #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].recipe.recipe_steps[0].action.parameters #=> Hash
resp.jobs[0].code_gen_configuration_nodes["NodeId"].recipe.recipe_steps[0].action.parameters["ParameterName"] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].recipe.recipe_steps[0].condition_expressions #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].recipe.recipe_steps[0].condition_expressions[0].condition #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].recipe.recipe_steps[0].condition_expressions[0].value #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].recipe.recipe_steps[0].condition_expressions[0].target_column #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_source.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_source.data.source_type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_source.data.connection.value #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_source.data.connection.label #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_source.data.connection.description #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_source.data.schema #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_source.data.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_source.data.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_source.data.temp_dir #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_source.data.iam_role.value #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_source.data.iam_role.label #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_source.data.iam_role.description #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_source.data.additional_options #=> Hash
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_source.data.additional_options["EnclosedInStringProperty"] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_source.data.sample_query #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_source.data.pre_action #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_source.data.post_action #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_source.data.action #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_source.data.upsert #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_source.data.merge_action #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_source.data.merge_when_matched #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_source.data.merge_when_not_matched #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_source.data.merge_clause #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_source.data.staging_table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_source.data.selected_columns #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_source.data.selected_columns[0].value #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_source.data.selected_columns[0].label #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_source.data.selected_columns[0].description #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_source.data.auto_pushdown #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_source.data.table_schema #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_source.data.table_schema[0].value #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_source.data.table_schema[0].label #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_source.data.table_schema[0].description #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_source.output_schemas #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_source.output_schemas[0].columns #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_source.output_schemas[0].columns[0].name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_source.output_schemas[0].columns[0].type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_target.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_target.data.source_type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_target.data.connection.value #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_target.data.connection.label #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_target.data.connection.description #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_target.data.schema #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_target.data.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_target.data.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_target.data.temp_dir #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_target.data.iam_role.value #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_target.data.iam_role.label #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_target.data.iam_role.description #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_target.data.additional_options #=> Hash
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_target.data.additional_options["EnclosedInStringProperty"] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_target.data.sample_query #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_target.data.pre_action #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_target.data.post_action #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_target.data.action #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_target.data.upsert #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_target.data.merge_action #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_target.data.merge_when_matched #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_target.data.merge_when_not_matched #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_target.data.merge_clause #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_target.data.staging_table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_target.data.selected_columns #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_target.data.selected_columns[0].value #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_target.data.selected_columns[0].label #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_target.data.selected_columns[0].description #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_target.data.auto_pushdown #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_target.data.table_schema #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_target.data.table_schema[0].value #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_target.data.table_schema[0].label #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_target.data.table_schema[0].description #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_target.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].snowflake_target.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].connector_data_source.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].connector_data_source.connection_type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].connector_data_source.data #=> Hash
resp.jobs[0].code_gen_configuration_nodes["NodeId"].connector_data_source.data["GenericString"] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].connector_data_source.output_schemas #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].connector_data_source.output_schemas[0].columns #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].connector_data_source.output_schemas[0].columns[0].name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].connector_data_source.output_schemas[0].columns[0].type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].connector_data_target.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].connector_data_target.connection_type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].connector_data_target.data #=> Hash
resp.jobs[0].code_gen_configuration_nodes["NodeId"].connector_data_target.data["GenericString"] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].connector_data_target.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].connector_data_target.inputs[0] #=> String
resp.jobs[0].execution_class #=> String, one of "FLEX", "STANDARD"
resp.jobs[0].source_control_details.provider #=> String, one of "GITHUB", "GITLAB", "BITBUCKET", "AWS_CODE_COMMIT"
resp.jobs[0].source_control_details.repository #=> String
resp.jobs[0].source_control_details.owner #=> String
resp.jobs[0].source_control_details.branch #=> String
resp.jobs[0].source_control_details.folder #=> String
resp.jobs[0].source_control_details.last_commit_id #=> String
resp.jobs[0].source_control_details.auth_strategy #=> String, one of "PERSONAL_ACCESS_TOKEN", "AWS_SECRETS_MANAGER"
resp.jobs[0].source_control_details.auth_token #=> String
resp.jobs[0].maintenance_window #=> String
resp.jobs[0].profile_name #=> String
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :next_token (String)

    A continuation token, if this is a continuation call.

  • :max_results (Integer)

    The maximum number of jobs to return in the response.

Returns:

  • (Types::GetJobsResponse)

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 11096

def get_jobs(params = {}, options = {})
  req = build_request(:get_jobs, params)
  req.send_request(options)
end
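
For example, a minimal sketch that follows the continuation token to collect every job name (the page size is illustrative):


params = { max_results: 100 }
job_names = []
loop do
  resp = client.get_jobs(params)
  job_names.concat(resp.jobs.map(&:name))
  break unless resp.next_token  # nil token means the last page was returned
  params[:next_token] = resp.next_token
end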

#get_mapping(params = {}) ⇒ Types::GetMappingResponse

Creates mappings between the specified source table and target tables.

Examples:

Request syntax with placeholder values


resp = client.get_mapping({
  source: { # required
    database_name: "NameString", # required
    table_name: "NameString", # required
  },
  sinks: [
    {
      database_name: "NameString", # required
      table_name: "NameString", # required
    },
  ],
  location: {
    jdbc: [
      {
        name: "CodeGenArgName", # required
        value: "CodeGenArgValue", # required
        param: false,
      },
    ],
    s3: [
      {
        name: "CodeGenArgName", # required
        value: "CodeGenArgValue", # required
        param: false,
      },
    ],
    dynamo_db: [
      {
        name: "CodeGenArgName", # required
        value: "CodeGenArgValue", # required
        param: false,
      },
    ],
  },
})

Response structure


resp.mapping #=> Array
resp.mapping[0].source_table #=> String
resp.mapping[0].source_path #=> String
resp.mapping[0].source_type #=> String
resp.mapping[0].target_table #=> String
resp.mapping[0].target_path #=> String
resp.mapping[0].target_type #=> String

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :source (required, Types::CatalogEntry)

    Specifies the source table.

  • :sinks (Array<Types::CatalogEntry>)

    A list of target tables.

  • :location (Types::Location)

    Parameters for the mapping.

Returns:

  • (Types::GetMappingResponse)

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 11517

def get_mapping(params = {}, options = {})
  req = build_request(:get_mapping, params)
  req.send_request(options)
end
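
As a hedged sketch, the suggested mapping entries returned here can be converted to hashes and fed to #get_plan as its :mapping parameter (the database and table names below are placeholders):


source = { database_name: "my_database", table_name: "src_table" }
# Each returned entry is a struct; to_h turns it back into request params.
mapping = client.get_mapping(source: source).mapping
plan = client.get_plan(mapping: mapping.map(&:to_h), source: source)
puts plan.python_script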

#get_ml_task_run(params = {}) ⇒ Types::GetMLTaskRunResponse

Gets details for a specific task run on a machine learning transform. Machine learning task runs are asynchronous tasks that Glue runs on your behalf as part of various machine learning workflows. You can check the status of any task run by calling GetMLTaskRun with the TaskRunID and its parent transform's TransformID.

Examples:

Request syntax with placeholder values


resp = client.get_ml_task_run({
  transform_id: "HashString", # required
  task_run_id: "HashString", # required
})

Response structure


resp.transform_id #=> String
resp.task_run_id #=> String
resp.status #=> String, one of "STARTING", "RUNNING", "STOPPING", "STOPPED", "SUCCEEDED", "FAILED", "TIMEOUT"
resp.log_group_name #=> String
resp.properties.task_type #=> String, one of "EVALUATION", "LABELING_SET_GENERATION", "IMPORT_LABELS", "EXPORT_LABELS", "FIND_MATCHES"
resp.properties.import_labels_task_run_properties.input_s3_path #=> String
resp.properties.import_labels_task_run_properties.replace #=> Boolean
resp.properties.export_labels_task_run_properties.output_s3_path #=> String
resp.properties.labeling_set_generation_task_run_properties.output_s3_path #=> String
resp.properties.find_matches_task_run_properties.job_id #=> String
resp.properties.find_matches_task_run_properties.job_name #=> String
resp.properties.find_matches_task_run_properties.job_run_id #=> String
resp.error_string #=> String
resp.started_on #=> Time
resp.last_modified_on #=> Time
resp.completed_on #=> Time
resp.execution_time #=> Integer

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :transform_id (required, String)

    The unique identifier of the machine learning transform.

  • :task_run_id (required, String)

    The unique identifier of the task run.

Returns:

  • (Types::GetMLTaskRunResponse)

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 11157

def get_ml_task_run(params = {}, options = {})
  req = build_request(:get_ml_task_run, params)
  req.send_request(options)
end
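
For example, a minimal polling sketch that waits for a task run to reach a terminal status (the identifiers and sleep interval are placeholders):


terminal = %w[STOPPED SUCCEEDED FAILED TIMEOUT]
loop do
  resp = client.get_ml_task_run(
    transform_id: "my-transform-id",
    task_run_id: "my-task-run-id",
  )
  if terminal.include?(resp.status)
    puts "task run finished with status #{resp.status}"
    break
  end
  sleep 30 # poll interval; tune to taste
end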

#get_ml_task_runs(params = {}) ⇒ Types::GetMLTaskRunsResponse

Gets a list of runs for a machine learning transform. Machine learning task runs are asynchronous tasks that Glue runs on your behalf as part of various machine learning workflows. You can get a sortable, filterable list of machine learning task runs by calling GetMLTaskRuns with their parent transform's TransformID and other optional parameters as documented in this section.

This operation returns a list of historic runs and must be paginated.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.get_ml_task_runs({
  transform_id: "HashString", # required
  next_token: "PaginationToken",
  max_results: 1,
  filter: {
    task_run_type: "EVALUATION", # accepts EVALUATION, LABELING_SET_GENERATION, IMPORT_LABELS, EXPORT_LABELS, FIND_MATCHES
    status: "STARTING", # accepts STARTING, RUNNING, STOPPING, STOPPED, SUCCEEDED, FAILED, TIMEOUT
    started_before: Time.now,
    started_after: Time.now,
  },
  sort: {
    column: "TASK_RUN_TYPE", # required, accepts TASK_RUN_TYPE, STATUS, STARTED
    sort_direction: "DESCENDING", # required, accepts DESCENDING, ASCENDING
  },
})

Response structure


resp.task_runs #=> Array
resp.task_runs[0].transform_id #=> String
resp.task_runs[0].task_run_id #=> String
resp.task_runs[0].status #=> String, one of "STARTING", "RUNNING", "STOPPING", "STOPPED", "SUCCEEDED", "FAILED", "TIMEOUT"
resp.task_runs[0].log_group_name #=> String
resp.task_runs[0].properties.task_type #=> String, one of "EVALUATION", "LABELING_SET_GENERATION", "IMPORT_LABELS", "EXPORT_LABELS", "FIND_MATCHES"
resp.task_runs[0].properties.import_labels_task_run_properties.input_s3_path #=> String
resp.task_runs[0].properties.import_labels_task_run_properties.replace #=> Boolean
resp.task_runs[0].properties.export_labels_task_run_properties.output_s3_path #=> String
resp.task_runs[0].properties.labeling_set_generation_task_run_properties.output_s3_path #=> String
resp.task_runs[0].properties.find_matches_task_run_properties.job_id #=> String
resp.task_runs[0].properties.find_matches_task_run_properties.job_name #=> String
resp.task_runs[0].properties.find_matches_task_run_properties.job_run_id #=> String
resp.task_runs[0].error_string #=> String
resp.task_runs[0].started_on #=> Time
resp.task_runs[0].last_modified_on #=> Time
resp.task_runs[0].completed_on #=> Time
resp.task_runs[0].execution_time #=> Integer
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :transform_id (required, String)

    The unique identifier of the machine learning transform.

  • :next_token (String)

    A token for pagination of the results. The default is empty.

  • :max_results (Integer)

    The maximum number of results to return.

  • :filter (Types::TaskRunFilterCriteria)

    The filter criteria, in the TaskRunFilterCriteria structure, for the task run.

  • :sort (Types::TaskRunSortCriteria)

    The sorting criteria, in the TaskRunSortCriteria structure, for the task run.

Returns:

  • (Types::GetMLTaskRunsResponse)

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 11239

def get_ml_task_runs(params = {}, options = {})
  req = build_request(:get_ml_task_runs, params)
  req.send_request(options)
end
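
Because the returned response is pageable and Enumerable, you can iterate every page without handling next_token yourself; a minimal sketch (the transform ID is a placeholder):


client.get_ml_task_runs(transform_id: "my-transform-id").each do |page|
  page.task_runs.each do |run|
    puts "#{run.task_run_id}: #{run.status}"
  end
end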

#get_ml_transform(params = {}) ⇒ Types::GetMLTransformResponse

Gets a Glue machine learning transform artifact and all its corresponding metadata. Machine learning transforms are a special type of transform that uses machine learning to learn the details of the transformation to be performed by learning from examples provided by humans. These transformations are then saved by Glue. You can retrieve their metadata by calling GetMLTransform.

Examples:

Request syntax with placeholder values


resp = client.get_ml_transform({
  transform_id: "HashString", # required
})

Response structure


resp.transform_id #=> String
resp.name #=> String
resp.description #=> String
resp.status #=> String, one of "NOT_READY", "READY", "DELETING"
resp.created_on #=> Time
resp.last_modified_on #=> Time
resp.input_record_tables #=> Array
resp.input_record_tables[0].database_name #=> String
resp.input_record_tables[0].table_name #=> String
resp.input_record_tables[0].catalog_id #=> String
resp.input_record_tables[0].connection_name #=> String
resp.input_record_tables[0].additional_options #=> Hash
resp.input_record_tables[0].additional_options["NameString"] #=> String
resp.parameters.transform_type #=> String, one of "FIND_MATCHES"
resp.parameters.find_matches_parameters.primary_key_column_name #=> String
resp.parameters.find_matches_parameters.precision_recall_tradeoff #=> Float
resp.parameters.find_matches_parameters.accuracy_cost_tradeoff #=> Float
resp.parameters.find_matches_parameters.enforce_provided_labels #=> Boolean
resp.evaluation_metrics.transform_type #=> String, one of "FIND_MATCHES"
resp.evaluation_metrics.find_matches_metrics.area_under_pr_curve #=> Float
resp.evaluation_metrics.find_matches_metrics.precision #=> Float
resp.evaluation_metrics.find_matches_metrics.recall #=> Float
resp.evaluation_metrics.find_matches_metrics.f1 #=> Float
resp.evaluation_metrics.find_matches_metrics.confusion_matrix.num_true_positives #=> Integer
resp.evaluation_metrics.find_matches_metrics.confusion_matrix.num_false_positives #=> Integer
resp.evaluation_metrics.find_matches_metrics.confusion_matrix.num_true_negatives #=> Integer
resp.evaluation_metrics.find_matches_metrics.confusion_matrix.num_false_negatives #=> Integer
resp.evaluation_metrics.find_matches_metrics.column_importances #=> Array
resp.evaluation_metrics.find_matches_metrics.column_importances[0].column_name #=> String
resp.evaluation_metrics.find_matches_metrics.column_importances[0].importance #=> Float
resp.label_count #=> Integer
resp.schema #=> Array
resp.schema[0].name #=> String
resp.schema[0].data_type #=> String
resp.role #=> String
resp.glue_version #=> String
resp.max_capacity #=> Float
resp.worker_type #=> String, one of "Standard", "G.1X", "G.2X", "G.025X", "G.4X", "G.8X", "Z.2X"
resp.number_of_workers #=> Integer
resp.timeout #=> Integer
resp.max_retries #=> Integer
resp.transform_encryption.ml_user_data_encryption.ml_user_data_encryption_mode #=> String, one of "DISABLED", "SSE-KMS"
resp.transform_encryption.ml_user_data_encryption.kms_key_id #=> String
resp.transform_encryption.task_run_security_configuration_name #=> String

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :transform_id (required, String)

    The unique identifier of the transform, generated at the time that the transform was created.

Returns:

  • (Types::GetMLTransformResponse)

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 11334

def get_ml_transform(params = {}, options = {})
  req = build_request(:get_ml_transform, params)
  req.send_request(options)
end
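
For example, a hedged sketch that reads the FIND_MATCHES quality metrics of a transform, assuming it has already been evaluated (the transform ID is a placeholder):


resp = client.get_ml_transform(transform_id: "my-transform-id")
# evaluation_metrics may be absent if the transform was never evaluated.
metrics = resp.evaluation_metrics && resp.evaluation_metrics.find_matches_metrics
if metrics
  puts format("precision=%.3f recall=%.3f f1=%.3f",
              metrics.precision, metrics.recall, metrics.f1)
end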

#get_ml_transforms(params = {}) ⇒ Types::GetMLTransformsResponse

Gets a sortable, filterable list of existing Glue machine learning transforms. Machine learning transforms are a special type of transform that uses machine learning to learn the details of the transformation to be performed by learning from examples provided by humans. These transformations are then saved by Glue, and you can retrieve their metadata by calling GetMLTransforms.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.get_ml_transforms({
  next_token: "PaginationToken",
  max_results: 1,
  filter: {
    name: "NameString",
    transform_type: "FIND_MATCHES", # accepts FIND_MATCHES
    status: "NOT_READY", # accepts NOT_READY, READY, DELETING
    glue_version: "GlueVersionString",
    created_before: Time.now,
    created_after: Time.now,
    last_modified_before: Time.now,
    last_modified_after: Time.now,
    schema: [
      {
        name: "ColumnNameString",
        data_type: "ColumnTypeString",
      },
    ],
  },
  sort: {
    column: "NAME", # required, accepts NAME, TRANSFORM_TYPE, STATUS, CREATED, LAST_MODIFIED
    sort_direction: "DESCENDING", # required, accepts DESCENDING, ASCENDING
  },
})

Response structure


resp.transforms #=> Array
resp.transforms[0].transform_id #=> String
resp.transforms[0].name #=> String
resp.transforms[0].description #=> String
resp.transforms[0].status #=> String, one of "NOT_READY", "READY", "DELETING"
resp.transforms[0].created_on #=> Time
resp.transforms[0].last_modified_on #=> Time
resp.transforms[0].input_record_tables #=> Array
resp.transforms[0].input_record_tables[0].database_name #=> String
resp.transforms[0].input_record_tables[0].table_name #=> String
resp.transforms[0].input_record_tables[0].catalog_id #=> String
resp.transforms[0].input_record_tables[0].connection_name #=> String
resp.transforms[0].input_record_tables[0].additional_options #=> Hash
resp.transforms[0].input_record_tables[0].additional_options["NameString"] #=> String
resp.transforms[0].parameters.transform_type #=> String, one of "FIND_MATCHES"
resp.transforms[0].parameters.find_matches_parameters.primary_key_column_name #=> String
resp.transforms[0].parameters.find_matches_parameters.precision_recall_tradeoff #=> Float
resp.transforms[0].parameters.find_matches_parameters.accuracy_cost_tradeoff #=> Float
resp.transforms[0].parameters.find_matches_parameters.enforce_provided_labels #=> Boolean
resp.transforms[0].evaluation_metrics.transform_type #=> String, one of "FIND_MATCHES"
resp.transforms[0].evaluation_metrics.find_matches_metrics.area_under_pr_curve #=> Float
resp.transforms[0].evaluation_metrics.find_matches_metrics.precision #=> Float
resp.transforms[0].evaluation_metrics.find_matches_metrics.recall #=> Float
resp.transforms[0].evaluation_metrics.find_matches_metrics.f1 #=> Float
resp.transforms[0].evaluation_metrics.find_matches_metrics.confusion_matrix.num_true_positives #=> Integer
resp.transforms[0].evaluation_metrics.find_matches_metrics.confusion_matrix.num_false_positives #=> Integer
resp.transforms[0].evaluation_metrics.find_matches_metrics.confusion_matrix.num_true_negatives #=> Integer
resp.transforms[0].evaluation_metrics.find_matches_metrics.confusion_matrix.num_false_negatives #=> Integer
resp.transforms[0].evaluation_metrics.find_matches_metrics.column_importances #=> Array
resp.transforms[0].evaluation_metrics.find_matches_metrics.column_importances[0].column_name #=> String
resp.transforms[0].evaluation_metrics.find_matches_metrics.column_importances[0].importance #=> Float
resp.transforms[0].label_count #=> Integer
resp.transforms[0].schema #=> Array
resp.transforms[0].schema[0].name #=> String
resp.transforms[0].schema[0].data_type #=> String
resp.transforms[0].role #=> String
resp.transforms[0].glue_version #=> String
resp.transforms[0].max_capacity #=> Float
resp.transforms[0].worker_type #=> String, one of "Standard", "G.1X", "G.2X", "G.025X", "G.4X", "G.8X", "Z.2X"
resp.transforms[0].number_of_workers #=> Integer
resp.transforms[0].timeout #=> Integer
resp.transforms[0].max_retries #=> Integer
resp.transforms[0].transform_encryption.ml_user_data_encryption.ml_user_data_encryption_mode #=> String, one of "DISABLED", "SSE-KMS"
resp.transforms[0].transform_encryption.ml_user_data_encryption.kms_key_id #=> String
resp.transforms[0].transform_encryption.task_run_security_configuration_name #=> String
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :next_token (String)

    A paginated token to offset the results.

  • :max_results (Integer)

    The maximum number of results to return.

  • :filter (Types::TransformFilterCriteria)

    The filter transformation criteria.

  • :sort (Types::TransformSortCriteria)

    The sorting criteria.

Returns:

  • (Types::GetMLTransformsResponse)

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 11445

def get_ml_transforms(params = {}, options = {})
  req = build_request(:get_ml_transforms, params)
  req.send_request(options)
end
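
For example, a minimal sketch that lists only transforms that are ready to run, newest first (the filter and sort values come from the request syntax above):


resp = client.get_ml_transforms(
  filter: { status: "READY" },
  sort: { column: "CREATED", sort_direction: "DESCENDING" },
)
resp.each do |page|
  page.transforms.each { |t| puts "#{t.transform_id} #{t.name}" }
end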

#get_partition(params = {}) ⇒ Types::GetPartitionResponse

Retrieves information about a specified partition.

Examples:

Request syntax with placeholder values


resp = client.get_partition({
  catalog_id: "CatalogIdString",
  database_name: "NameString", # required
  table_name: "NameString", # required
  partition_values: ["ValueString"], # required
})

Response structure


resp.data.partition.values #=> Array
resp.data.partition.values[0] #=> String
resp.data.partition.database_name #=> String
resp.data.partition.table_name #=> String
resp.data.partition.creation_time #=> Time
resp.data.partition.last_access_time #=> Time
resp.data.partition.storage_descriptor.columns #=> Array
resp.data.partition.storage_descriptor.columns[0].name #=> String
resp.data.partition.storage_descriptor.columns[0].type #=> String
resp.data.partition.storage_descriptor.columns[0].comment #=> String
resp.data.partition.storage_descriptor.columns[0].parameters #=> Hash
resp.data.partition.storage_descriptor.columns[0].parameters["KeyString"] #=> String
resp.data.partition.storage_descriptor.location #=> String
resp.data.partition.storage_descriptor.additional_locations #=> Array
resp.data.partition.storage_descriptor.additional_locations[0] #=> String
resp.data.partition.storage_descriptor.input_format #=> String
resp.data.partition.storage_descriptor.output_format #=> String
resp.data.partition.storage_descriptor.compressed #=> Boolean
resp.data.partition.storage_descriptor.number_of_buckets #=> Integer
resp.data.partition.storage_descriptor.serde_info.name #=> String
resp.data.partition.storage_descriptor.serde_info.serialization_library #=> String
resp.data.partition.storage_descriptor.serde_info.parameters #=> Hash
resp.data.partition.storage_descriptor.serde_info.parameters["KeyString"] #=> String
resp.data.partition.storage_descriptor.bucket_columns #=> Array
resp.data.partition.storage_descriptor.bucket_columns[0] #=> String
resp.data.partition.storage_descriptor.sort_columns #=> Array
resp.data.partition.storage_descriptor.sort_columns[0].column #=> String
resp.data.partition.storage_descriptor.sort_columns[0].sort_order #=> Integer
resp.data.partition.storage_descriptor.parameters #=> Hash
resp.data.partition.storage_descriptor.parameters["KeyString"] #=> String
resp.data.partition.storage_descriptor.skewed_info.skewed_column_names #=> Array
resp.data.partition.storage_descriptor.skewed_info.skewed_column_names[0] #=> String
resp.data.partition.storage_descriptor.skewed_info.skewed_column_values #=> Array
resp.data.partition.storage_descriptor.skewed_info.skewed_column_values[0] #=> String
resp.data.partition.storage_descriptor.skewed_info.skewed_column_value_location_maps #=> Hash
resp.data.partition.storage_descriptor.skewed_info.skewed_column_value_location_maps["ColumnValuesString"] #=> String
resp.data.partition.storage_descriptor.stored_as_sub_directories #=> Boolean
resp.data.partition.storage_descriptor.schema_reference.schema_id.schema_arn #=> String
resp.data.partition.storage_descriptor.schema_reference.schema_id.schema_name #=> String
resp.data.partition.storage_descriptor.schema_reference.schema_id.registry_name #=> String
resp.data.partition.storage_descriptor.schema_reference.schema_version_id #=> String
resp.data.partition.storage_descriptor.schema_reference.schema_version_number #=> Integer
resp.data.partition.parameters #=> Hash
resp.data.partition.parameters["KeyString"] #=> String
resp.data.partition.last_analyzed_time #=> Time
resp.data.partition.catalog_id #=> String

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :catalog_id (String)

    The ID of the Data Catalog where the partition in question resides. If none is provided, the Amazon Web Services account ID is used by default.

  • :database_name (required, String)

    The name of the catalog database where the partition resides.

  • :table_name (required, String)

    The name of the partition's table.

  • :partition_values (required, Array<String>)

    The values that define the partition.

Returns:

  • (Types::GetPartitionResponse)

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 11604

def get_partition(params = {}, options = {})
  req = build_request(:get_partition, params)
  req.send_request(options)
end

#get_partition_indexes(params = {}) ⇒ Types::GetPartitionIndexesResponse

Retrieves the partition indexes associated with a table.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.get_partition_indexes({
  catalog_id: "CatalogIdString",
  database_name: "NameString", # required
  table_name: "NameString", # required
  next_token: "Token",
})

Response structure


resp.partition_index_descriptor_list #=> Array
resp.partition_index_descriptor_list[0].index_name #=> String
resp.partition_index_descriptor_list[0].keys #=> Array
resp.partition_index_descriptor_list[0].keys[0].name #=> String
resp.partition_index_descriptor_list[0].keys[0].type #=> String
resp.partition_index_descriptor_list[0].index_status #=> String, one of "CREATING", "ACTIVE", "DELETING", "FAILED"
resp.partition_index_descriptor_list[0].backfill_errors #=> Array
resp.partition_index_descriptor_list[0].backfill_errors[0].code #=> String, one of "ENCRYPTED_PARTITION_ERROR", "INTERNAL_ERROR", "INVALID_PARTITION_TYPE_DATA_ERROR", "MISSING_PARTITION_VALUE_ERROR", "UNSUPPORTED_PARTITION_CHARACTER_ERROR"
resp.partition_index_descriptor_list[0].backfill_errors[0].partitions #=> Array
resp.partition_index_descriptor_list[0].backfill_errors[0].partitions[0].values #=> Array
resp.partition_index_descriptor_list[0].backfill_errors[0].partitions[0].values[0] #=> String
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :catalog_id (String)

    The catalog ID where the table resides.

  • :database_name (required, String)

    Specifies the name of a database from which you want to retrieve partition indexes.

  • :table_name (required, String)

    Specifies the name of a table for which you want to retrieve the partition indexes.

  • :next_token (String)

    A continuation token, included if this is a continuation call.

Returns:

  • (Types::GetPartitionIndexesResponse)

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 11660

def get_partition_indexes(params = {}, options = {})
  req = build_request(:get_partition_indexes, params)
  req.send_request(options)
end

#get_partitions(params = {}) ⇒ Types::GetPartitionsResponse

Retrieves information about the partitions in a table.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.get_partitions({
  catalog_id: "CatalogIdString",
  database_name: "NameString", # required
  table_name: "NameString", # required
  expression: "PredicateString",
  next_token: "Token",
  segment: {
    segment_number: 1, # required
    total_segments: 1, # required
  },
  max_results: 1,
  exclude_column_schema: false,
  transaction_id: "TransactionIdString",
  query_as_of_time: Time.now,
})

Response structure


resp.partitions #=> Array
resp.partitions[0].values #=> Array
resp.partitions[0].values[0] #=> String
resp.partitions[0].database_name #=> String
resp.partitions[0].table_name #=> String
resp.partitions[0].creation_time #=> Time
resp.partitions[0].last_access_time #=> Time
resp.partitions[0].storage_descriptor.columns #=> Array
resp.partitions[0].storage_descriptor.columns[0].name #=> String
resp.partitions[0].storage_descriptor.columns[0].type #=> String
resp.partitions[0].storage_descriptor.columns[0].comment #=> String
resp.partitions[0].storage_descriptor.columns[0].parameters #=> Hash
resp.partitions[0].storage_descriptor.columns[0].parameters["KeyString"] #=> String
resp.partitions[0].storage_descriptor.location #=> String
resp.partitions[0].storage_descriptor.additional_locations #=> Array
resp.partitions[0].storage_descriptor.additional_locations[0] #=> String
resp.partitions[0].storage_descriptor.input_format #=> String
resp.partitions[0].storage_descriptor.output_format #=> String
resp.partitions[0].storage_descriptor.compressed #=> Boolean
resp.partitions[0].storage_descriptor.number_of_buckets #=> Integer
resp.partitions[0].storage_descriptor.serde_info.name #=> String
resp.partitions[0].storage_descriptor.serde_info.serialization_library #=> String
resp.partitions[0].storage_descriptor.serde_info.parameters #=> Hash
resp.partitions[0].storage_descriptor.serde_info.parameters["KeyString"] #=> String
resp.partitions[0].storage_descriptor.bucket_columns #=> Array
resp.partitions[0].storage_descriptor.bucket_columns[0] #=> String
resp.partitions[0].storage_descriptor.sort_columns #=> Array
resp.partitions[0].storage_descriptor.sort_columns[0].column #=> String
resp.partitions[0].storage_descriptor.sort_columns[0].sort_order #=> Integer
resp.partitions[0].storage_descriptor.parameters #=> Hash
resp.partitions[0].storage_descriptor.parameters["KeyString"] #=> String
resp.partitions[0].storage_descriptor.skewed_info.skewed_column_names #=> Array
resp.partitions[0].storage_descriptor.skewed_info.skewed_column_names[0] #=> String
resp.partitions[0].storage_descriptor.skewed_info.skewed_column_values #=> Array
resp.partitions[0].storage_descriptor.skewed_info.skewed_column_values[0] #=> String
resp.partitions[0].storage_descriptor.skewed_info.skewed_column_value_location_maps #=> Hash
resp.partitions[0].storage_descriptor.skewed_info.skewed_column_value_location_maps["ColumnValuesString"] #=> String
resp.partitions[0].storage_descriptor.stored_as_sub_directories #=> Boolean
resp.partitions[0].storage_descriptor.schema_reference.schema_id.schema_arn #=> String
resp.partitions[0].storage_descriptor.schema_reference.schema_id.schema_name #=> String
resp.partitions[0].storage_descriptor.schema_reference.schema_id.registry_name #=> String
resp.partitions[0].storage_descriptor.schema_reference.schema_version_id #=> String
resp.partitions[0].storage_descriptor.schema_reference.schema_version_number #=> Integer
resp.partitions[0].parameters #=> Hash
resp.partitions[0].parameters["KeyString"] #=> String
resp.partitions[0].last_analyzed_time #=> Time
resp.partitions[0].catalog_id #=> String
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :catalog_id (String)

    The ID of the Data Catalog where the partitions in question reside. If none is provided, the Amazon Web Services account ID is used by default.

  • :database_name (required, String)

    The name of the catalog database where the partitions reside.

  • :table_name (required, String)

    The name of the partitions' table.

  • :expression (String)

    An expression that filters the partitions to be returned.

    The expression uses SQL syntax similar to the SQL WHERE filter clause. The SQL statement parser JSQLParser parses the expression.

    Operators: The following are the operators that you can use in the Expression API call. In the examples below, assume variable a holds 10 and variable b holds 20.

    • = : Checks whether the values of the two operands are equal; if yes, the condition becomes true. Example: (a = b) is not true.

    • < > : Checks whether the values of the two operands are not equal; if they are not equal, the condition becomes true. Example: (a < > b) is true.

    • > : Checks whether the value of the left operand is greater than the value of the right operand; if yes, the condition becomes true. Example: (a > b) is not true.

    • < : Checks whether the value of the left operand is less than the value of the right operand; if yes, the condition becomes true. Example: (a < b) is true.

    • >= : Checks whether the value of the left operand is greater than or equal to the value of the right operand; if yes, the condition becomes true. Example: (a >= b) is not true.

    • <= : Checks whether the value of the left operand is less than or equal to the value of the right operand; if yes, the condition becomes true. Example: (a <= b) is true.

    • AND, OR, IN, BETWEEN, LIKE, NOT, IS NULL : Logical operators.

    Supported Partition Key Types: The following are the supported partition keys.

    • string

    • date

    • timestamp

    • int

    • bigint

    • long

    • tinyint

    • smallint

    • decimal

    If a type is encountered that is not valid, an exception is thrown.

    The following list shows the valid operators on each type. When you define a crawler, the partitionKey type is created as a STRING, to be compatible with the catalog partitions.

    Sample API Call: see the hedged expression example after this method's source listing below.

  • :next_token (String)

    A continuation token, if this is not the first call to retrieve these partitions.

  • :segment (Types::Segment)

    The segment of the table's partitions to scan in this request.

  • :max_results (Integer)

    The maximum number of partitions to return in a single response.

  • :exclude_column_schema (Boolean)

    When true, specifies not returning the partition column schema. Useful when you are interested only in other partition attributes such as partition values or location. This approach avoids the problem of a large response by not returning duplicate data.

  • :transaction_id (String)

    The transaction ID at which to read the partition contents.

  • :query_as_of_time (Time, DateTime, Date, Integer, String)

    The time as of when to read the partition contents. If not set, the most recent transaction commit time will be used. Cannot be specified along with TransactionId.

Returns:

  • (Types::GetPartitionsResponse)

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 11875

def get_partitions(params = {}, options = {})
  req = build_request(:get_partitions, params)
  req.send_request(options)
end
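
Sample API call, as referenced above: a hedged sketch assuming a table partitioned on string keys year and month (the database and table names are placeholders):


resp = client.get_partitions(
  database_name: "my_database",
  table_name: "my_table",
  expression: "year = '2023' AND month IN ('01', '02')",
  exclude_column_schema: true, # skip column schemas; only values are needed
)
resp.each do |page|
  page.partitions.each { |p| puts p.values.join("/") }
end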

#get_plan(params = {}) ⇒ Types::GetPlanResponse

Gets code to perform a specified mapping.

Examples:

Request syntax with placeholder values


resp = client.get_plan({
  mapping: [ # required
    {
      source_table: "TableName",
      source_path: "SchemaPathString",
      source_type: "FieldType",
      target_table: "TableName",
      target_path: "SchemaPathString",
      target_type: "FieldType",
    },
  ],
  source: { # required
    database_name: "NameString", # required
    table_name: "NameString", # required
  },
  sinks: [
    {
      database_name: "NameString", # required
      table_name: "NameString", # required
    },
  ],
  location: {
    jdbc: [
      {
        name: "CodeGenArgName", # required
        value: "CodeGenArgValue", # required
        param: false,
      },
    ],
    s3: [
      {
        name: "CodeGenArgName", # required
        value: "CodeGenArgValue", # required
        param: false,
      },
    ],
    dynamo_db: [
      {
        name: "CodeGenArgName", # required
        value: "CodeGenArgValue", # required
        param: false,
      },
    ],
  },
  language: "PYTHON", # accepts PYTHON, SCALA
  additional_plan_options_map: {
    "GenericString" => "GenericString",
  },
})

Response structure


resp.python_script #=> String
resp.scala_code #=> String

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :mapping (required, Array<Types::MappingEntry>)

    The list of mappings from a source table to target tables.

  • :source (required, Types::CatalogEntry)

    The source table.

  • :sinks (Array<Types::CatalogEntry>)

    The target tables.

  • :location (Types::Location)

    The parameters for the mapping.

  • :language (String)

    The programming language of the code to perform the mapping.

  • :additional_plan_options_map (Hash<String,String>)

    A map to hold additional optional key-value parameters.

    Currently, these key-value pairs are supported:

    • inferSchema  —  Specifies whether to set inferSchema to true or false for the default script generated by a Glue job. For example, to set inferSchema to true, pass the following key value pair:

      --additional-plan-options-map '{"inferSchema":"true"}'

Returns:

  • (Types::GetPlanResponse)

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 11974

def get_plan(params = {}, options = {})
  req = build_request(:get_plan, params)
  req.send_request(options)
end
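
For example, a minimal sketch that requests Scala output and sets inferSchema through the additional plan options (the mapping entry and table names are placeholders):


resp = client.get_plan(
  mapping: [{
    source_table: "src_table", source_path: "id", source_type: "int",
    target_table: "dst_table", target_path: "id", target_type: "int",
  }],
  source: { database_name: "my_database", table_name: "src_table" },
  language: "SCALA",
  additional_plan_options_map: { "inferSchema" => "true" },
)
File.write("plan.scala", resp.scala_code)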

#get_registry(params = {}) ⇒ Types::GetRegistryResponse

Describes the specified registry in detail.

Examples:

Request syntax with placeholder values


resp = client.get_registry({
  registry_id: { # required
    registry_name: "SchemaRegistryNameString",
    registry_arn: "GlueResourceArn",
  },
})

Response structure


resp.registry_name #=> String
resp.registry_arn #=> String
resp.description #=> String
resp.status #=> String, one of "AVAILABLE", "DELETING"
resp.created_time #=> String
resp.updated_time #=> String

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :registry_id (required, Types::RegistryId)

    This is a wrapper structure that may contain the registry name and Amazon Resource Name (ARN).

Returns:

  • (Types::GetRegistryResponse)

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 12016

def get_registry(params = {}, options = {})
  req = build_request(:get_registry, params)
  req.send_request(options)
end

#get_resource_policies(params = {}) ⇒ Types::GetResourcePoliciesResponse

Retrieves the resource policies set on individual resources by Resource Access Manager during cross-account permission grants. Also retrieves the Data Catalog resource policy.

If you enabled metadata encryption in Data Catalog settings, and you do not have permission on the KMS key, the operation can't return the Data Catalog resource policy.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.get_resource_policies({
  next_token: "Token",
  max_results: 1,
})

Response structure


resp.get_resource_policies_response_list #=> Array
resp.get_resource_policies_response_list[0].policy_in_json #=> String
resp.get_resource_policies_response_list[0].policy_hash #=> String
resp.get_resource_policies_response_list[0].create_time #=> Time
resp.get_resource_policies_response_list[0].update_time #=> Time
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :next_token (String)

    A continuation token, if this is a continuation request.

  • :max_results (Integer)

    The maximum size of a list to return.

Returns:

  • (Types::GetResourcePoliciesResponse)

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 12062

def get_resource_policies(params = {}, options = {})
  req = build_request(:get_resource_policies, params)
  req.send_request(options)
end

#get_resource_policy(params = {}) ⇒ Types::GetResourcePolicyResponse

Retrieves a specified resource policy.

Examples:

Request syntax with placeholder values


resp = client.get_resource_policy({
  resource_arn: "GlueResourceArn",
})

Response structure


resp.policy_in_json #=> String
resp.policy_hash #=> String
resp.create_time #=> Time
resp.update_time #=> Time

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :resource_arn (String)

    The ARN of the Glue resource for which to retrieve the resource policy. If not supplied, the Data Catalog resource policy is returned. Use GetResourcePolicies to view all existing resource policies. For more information see Specifying Glue Resource ARNs.

Returns:

  • (Types::GetResourcePolicyResponse)

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 12103

def get_resource_policy(params = {}, options = {})
  req = build_request(:get_resource_policy, params)
  req.send_request(options)
end

#get_schema(params = {}) ⇒ Types::GetSchemaResponse

Describes the specified schema in detail.

Examples:

Request syntax with placeholder values


resp = client.get_schema({
  schema_id: { # required
    schema_arn: "GlueResourceArn",
    schema_name: "SchemaRegistryNameString",
    registry_name: "SchemaRegistryNameString",
  },
})

Response structure


resp.registry_name #=> String
resp.registry_arn #=> String
resp.schema_name #=> String
resp.schema_arn #=> String
resp.description #=> String
resp.data_format #=> String, one of "AVRO", "JSON", "PROTOBUF"
resp.compatibility #=> String, one of "NONE", "DISABLED", "BACKWARD", "BACKWARD_ALL", "FORWARD", "FORWARD_ALL", "FULL", "FULL_ALL"
resp.schema_checkpoint #=> Integer
resp.latest_schema_version #=> Integer
resp.next_schema_version #=> Integer
resp.schema_status #=> String, one of "AVAILABLE", "PENDING", "DELETING"
resp.created_time #=> String
resp.updated_time #=> String

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :schema_id (required, Types::SchemaId)

    This is a wrapper structure to contain schema identity fields. The structure contains:

    • SchemaId$SchemaArn: The Amazon Resource Name (ARN) of the schema. Either SchemaArn or SchemaName and RegistryName has to be provided.

    • SchemaId$SchemaName: The name of the schema. Either SchemaArn or SchemaName and RegistryName has to be provided.

Returns:

  • (Types::GetSchemaResponse)

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 12167

def get_schema(params = {}, options = {})
  req = build_request(:get_schema, params)
  req.send_request(options)
end

#get_schema_by_definition(params = {}) ⇒ Types::GetSchemaByDefinitionResponse

Retrieves a schema by the SchemaDefinition. The schema definition is sent to the Schema Registry, canonicalized, and hashed. If the hash is matched within the scope of the SchemaName or ARN (or the default registry, if none is supplied), that schema’s metadata is returned. Otherwise, a 404 or NotFound error is returned. Schema versions in Deleted statuses will not be included in the results.

Examples:

Request syntax with placeholder values


resp = client.get_schema_by_definition({
  schema_id: { # required
    schema_arn: "GlueResourceArn",
    schema_name: "SchemaRegistryNameString",
    registry_name: "SchemaRegistryNameString",
  },
  schema_definition: "SchemaDefinitionString", # required
})

Response structure


resp.schema_version_id #=> String
resp.schema_arn #=> String
resp.data_format #=> String, one of "AVRO", "JSON", "PROTOBUF"
resp.status #=> String, one of "AVAILABLE", "PENDING", "FAILURE", "DELETING"
resp.created_time #=> String

Parameters:

  • params (Hash) (defaults to: {})


Options Hash (params):

  • :schema_id (required, Types::SchemaId)

    This is a wrapper structure to contain schema identity fields. The structure contains:

    • SchemaId$SchemaArn: The Amazon Resource Name (ARN) of the schema. One of SchemaArn or SchemaName must be provided.

    • SchemaId$SchemaName: The name of the schema. One of SchemaArn or SchemaName must be provided.

  • :schema_definition (required, String)

    The definition of the schema for which schema details are required.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 12223

def get_schema_by_definition(params = {}, options = {})
  req = build_request(:get_schema_by_definition, params)
  req.send_request(options)
end

#get_schema_version(params = {}) ⇒ Types::GetSchemaVersionResponse

Get the specified schema by its unique ID assigned when a version of the schema is created or registered. Schema versions in Deleted status will not be included in the results.
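
A version can be addressed in three ways: by its SchemaVersionId alone, or by a SchemaId combined with either an explicit version number or latest_version. A minimal sketch with a stubbed client (all identifiers are hypothetical):

require 'aws-sdk-glue'

client = Aws::Glue::Client.new(stub_responses: true)

# By the version's own ID...
client.get_schema_version(schema_version_id: "12345678-1234-1234-1234-123456789012")

# ...by schema identity plus an explicit version number...
client.get_schema_version(
  schema_id: { schema_name: "orders", registry_name: "analytics" },
  schema_version_number: { version_number: 2 },
)

# ...or by schema identity plus the latest version.
client.get_schema_version(
  schema_id: { schema_name: "orders", registry_name: "analytics" },
  schema_version_number: { latest_version: true },
)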

Examples:

Request syntax with placeholder values


resp = client.get_schema_version({
  schema_id: {
    schema_arn: "GlueResourceArn",
    schema_name: "SchemaRegistryNameString",
    registry_name: "SchemaRegistryNameString",
  },
  schema_version_id: "SchemaVersionIdString",
  schema_version_number: {
    latest_version: false,
    version_number: 1,
  },
})

Response structure


resp.schema_version_id #=> String
resp.schema_definition #=> String
resp.data_format #=> String, one of "AVRO", "JSON", "PROTOBUF"
resp.schema_arn #=> String
resp.version_number #=> Integer
resp.status #=> String, one of "AVAILABLE", "PENDING", "FAILURE", "DELETING"
resp.created_time #=> String

Parameters:

  • params (Hash) (defaults to: {})


Options Hash (params):

  • :schema_id (Types::SchemaId)

    This is a wrapper structure to contain schema identity fields. The structure contains:

    • SchemaId$SchemaArn: The Amazon Resource Name (ARN) of the schema. Either SchemaArn, or both SchemaName and RegistryName, must be provided.

    • SchemaId$SchemaName: The name of the schema. Either SchemaArn, or both SchemaName and RegistryName, must be provided.

  • :schema_version_id (String)

    The SchemaVersionId of the schema version. This field is required for fetching by schema ID. Either this or the SchemaId wrapper must be provided.

  • :schema_version_number (Types::SchemaVersionNumber)

    The version number of the schema.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 12290

def get_schema_version(params = {}, options = {})
  req = build_request(:get_schema_version, params)
  req.send_request(options)
end

#get_schema_versions_diff(params = {}) ⇒ Types::GetSchemaVersionsDiffResponse

Fetches the schema version difference in the specified difference type between two stored schema versions in the Schema Registry.

This API lets you compare the definitions of two stored versions of the same schema.
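
A minimal sketch comparing the first version of a schema against the latest (stubbed client, hypothetical names):

require 'aws-sdk-glue'

client = Aws::Glue::Client.new(stub_responses: true)

resp = client.get_schema_versions_diff(
  schema_id: { schema_name: "orders", registry_name: "analytics" },
  first_schema_version_number: { version_number: 1 },
  second_schema_version_number: { latest_version: true },
  schema_diff_type: "SYNTAX_DIFF", # currently the only supported diff type
)
resp.diff # a textual diff of the two definitions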

Examples:

Request syntax with placeholder values


resp = client.get_schema_versions_diff({
  schema_id: { # required
    schema_arn: "GlueResourceArn",
    schema_name: "SchemaRegistryNameString",
    registry_name: "SchemaRegistryNameString",
  },
  first_schema_version_number: { # required
    latest_version: false,
    version_number: 1,
  },
  second_schema_version_number: { # required
    latest_version: false,
    version_number: 1,
  },
  schema_diff_type: "SYNTAX_DIFF", # required, accepts SYNTAX_DIFF
})

Response structure


resp.diff #=> String

Parameters:

  • params (Hash) (defaults to: {})


Options Hash (params):

  • :schema_id (required, Types::SchemaId)

    This is a wrapper structure to contain schema identity fields. The structure contains:

    • SchemaId$SchemaArn: The Amazon Resource Name (ARN) of the schema. One of SchemaArn or SchemaName must be provided.

    • SchemaId$SchemaName: The name of the schema. One of SchemaArn or SchemaName must be provided.

  • :first_schema_version_number (required, Types::SchemaVersionNumber)

    The first of the two schema versions to be compared.

  • :second_schema_version_number (required, Types::SchemaVersionNumber)

    The second of the two schema versions to be compared.

  • :schema_diff_type (required, String)

    Refers to SYNTAX_DIFF, which is the currently supported diff type.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 12351

def get_schema_versions_diff(params = {}, options = {})
  req = build_request(:get_schema_versions_diff, params)
  req.send_request(options)
end

#get_security_configuration(params = {}) ⇒ Types::GetSecurityConfigurationResponse

Retrieves a specified security configuration.

Examples:

Request syntax with placeholder values


resp = client.get_security_configuration({
  name: "NameString", # required
})

Response structure


resp.security_configuration.name #=> String
resp.security_configuration.created_time_stamp #=> Time
resp.security_configuration.encryption_configuration.s3_encryption #=> Array
resp.security_configuration.encryption_configuration.s3_encryption[0].s3_encryption_mode #=> String, one of "DISABLED", "SSE-KMS", "SSE-S3"
resp.security_configuration.encryption_configuration.s3_encryption[0].kms_key_arn #=> String
resp.security_configuration.encryption_configuration.cloud_watch_encryption.cloud_watch_encryption_mode #=> String, one of "DISABLED", "SSE-KMS"
resp.security_configuration.encryption_configuration.cloud_watch_encryption.kms_key_arn #=> String
resp.security_configuration.encryption_configuration.job_bookmarks_encryption.job_bookmarks_encryption_mode #=> String, one of "DISABLED", "CSE-KMS"
resp.security_configuration.encryption_configuration.job_bookmarks_encryption.kms_key_arn #=> String
resp.security_configuration.encryption_configuration.data_quality_encryption.data_quality_encryption_mode #=> String, one of "DISABLED", "SSE-KMS"
resp.security_configuration.encryption_configuration.data_quality_encryption.kms_key_arn #=> String

Parameters:

  • params (Hash) (defaults to: {})


Options Hash (params):

  • :name (required, String)

    The name of the security configuration to retrieve.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 12389

def get_security_configuration(params = {}, options = {})
  req = build_request(:get_security_configuration, params)
  req.send_request(options)
end

#get_security_configurations(params = {}) ⇒ Types::GetSecurityConfigurationsResponse

Retrieves a list of all security configurations.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.
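
For example, a minimal sketch that prints every configuration name across all pages (stubbed client; pages are fetched lazily as they are consumed):

require 'aws-sdk-glue'

client = Aws::Glue::Client.new(stub_responses: true)

client.get_security_configurations(max_results: 10).each do |page|
  page.security_configurations.each { |config| puts config.name }
end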

Examples:

Request syntax with placeholder values


resp = client.get_security_configurations({
  max_results: 1,
  next_token: "GenericString",
})

Response structure


resp.security_configurations #=> Array
resp.security_configurations[0].name #=> String
resp.security_configurations[0].created_time_stamp #=> Time
resp.security_configurations[0].encryption_configuration.s3_encryption #=> Array
resp.security_configurations[0].encryption_configuration.s3_encryption[0].s3_encryption_mode #=> String, one of "DISABLED", "SSE-KMS", "SSE-S3"
resp.security_configurations[0].encryption_configuration.s3_encryption[0].kms_key_arn #=> String
resp.security_configurations[0].encryption_configuration.cloud_watch_encryption.cloud_watch_encryption_mode #=> String, one of "DISABLED", "SSE-KMS"
resp.security_configurations[0].encryption_configuration.cloud_watch_encryption.kms_key_arn #=> String
resp.security_configurations[0].encryption_configuration.job_bookmarks_encryption.job_bookmarks_encryption_mode #=> String, one of "DISABLED", "CSE-KMS"
resp.security_configurations[0].encryption_configuration.job_bookmarks_encryption.kms_key_arn #=> String
resp.security_configurations[0].encryption_configuration.data_quality_encryption.data_quality_encryption_mode #=> String, one of "DISABLED", "SSE-KMS"
resp.security_configurations[0].encryption_configuration.data_quality_encryption.kms_key_arn #=> String
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})


Options Hash (params):

  • :max_results (Integer)

    The maximum number of results to return.

  • :next_token (String)

    A continuation token, if this is a continuation call.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 12436

def get_security_configurations(params = {}, options = {})
  req = build_request(:get_security_configurations, params)
  req.send_request(options)
end

#get_session(params = {}) ⇒ Types::GetSessionResponse

Retrieves the session.

Examples:

Request syntax with placeholder values


resp = client.get_session({
  id: "NameString", # required
  request_origin: "OrchestrationNameString",
})

Response structure


resp.session.id #=> String
resp.session.created_on #=> Time
resp.session.status #=> String, one of "PROVISIONING", "READY", "FAILED", "TIMEOUT", "STOPPING", "STOPPED"
resp.session.error_message #=> String
resp.session.description #=> String
resp.session.role #=> String
resp.session.command.name #=> String
resp.session.command.python_version #=> String
resp.session.default_arguments #=> Hash
resp.session.default_arguments["OrchestrationNameString"] #=> String
resp.session.connections.connections #=> Array
resp.session.connections.connections[0] #=> String
resp.session.progress #=> Float
resp.session.max_capacity #=> Float
resp.session.security_configuration #=> String
resp.session.glue_version #=> String
resp.session.number_of_workers #=> Integer
resp.session.worker_type #=> String, one of "Standard", "G.1X", "G.2X", "G.025X", "G.4X", "G.8X", "Z.2X"
resp.session.completed_on #=> Time
resp.session.execution_time #=> Float
resp.session.dpu_seconds #=> Float
resp.session.idle_timeout #=> Integer
resp.session.profile_name #=> String

Parameters:

  • params (Hash) (defaults to: {})


Options Hash (params):

  • :id (required, String)

    The ID of the session.

  • :request_origin (String)

    The origin of the request.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 12490

def get_session(params = {}, options = {})
  req = build_request(:get_session, params)
  req.send_request(options)
end

#get_statement(params = {}) ⇒ Types::GetStatementResponse

Retrieves the statement.
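
A statement typically passes through the WAITING and RUNNING states before reaching a terminal state such as AVAILABLE or ERROR, so callers usually poll. A minimal sketch with stubbed responses standing in for a live session (IDs are hypothetical):

require 'aws-sdk-glue'

client = Aws::Glue::Client.new(stub_responses: true)
client.stub_responses(:get_statement,
  { statement: { id: 1, state: "RUNNING" } },
  { statement: { id: 1, state: "AVAILABLE" } })

resp = client.get_statement(session_id: "my-session", id: 1)
while %w[WAITING RUNNING].include?(resp.statement.state)
  sleep 1 # back off between polls
  resp = client.get_statement(session_id: "my-session", id: 1)
end
resp.statement.state #=> "AVAILABLE"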

Examples:

Request syntax with placeholder values


resp = client.get_statement({
  session_id: "NameString", # required
  id: 1, # required
  request_origin: "OrchestrationNameString",
})

Response structure


resp.statement.id #=> Integer
resp.statement.code #=> String
resp.statement.state #=> String, one of "WAITING", "RUNNING", "AVAILABLE", "CANCELLING", "CANCELLED", "ERROR"
resp.statement.output.data.text_plain #=> String
resp.statement.output.execution_count #=> Integer
resp.statement.output.status #=> String, one of "WAITING", "RUNNING", "AVAILABLE", "CANCELLING", "CANCELLED", "ERROR"
resp.statement.output.error_name #=> String
resp.statement.output.error_value #=> String
resp.statement.output.traceback #=> Array
resp.statement.output.traceback[0] #=> String
resp.statement.progress #=> Float
resp.statement.started_on #=> Integer
resp.statement.completed_on #=> Integer

Parameters:

  • params (Hash) (defaults to: {})


Options Hash (params):

  • :session_id (required, String)

    The Session ID of the statement.

  • :id (required, Integer)

    The ID of the statement.

  • :request_origin (String)

    The origin of the request.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 12538

def get_statement(params = {}, options = {})
  req = build_request(:get_statement, params)
  req.send_request(options)
end

#get_table(params = {}) ⇒ Types::GetTableResponse

Retrieves the Table definition in a Data Catalog for a specified table.
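
When reading a governed table, the definition can also be fetched as of a point in time via :query_as_of_time (mutually exclusive with :transaction_id). A minimal sketch with a stubbed client (hypothetical names):

require 'aws-sdk-glue'

client = Aws::Glue::Client.new(stub_responses: true)

resp = client.get_table(
  database_name: "sales",
  name: "orders",
  query_as_of_time: Time.now - 3600, # the definition as of one hour ago
)
resp.table # a Types::Table snapshot as of that time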

Examples:

Request syntax with placeholder values


resp = client.get_table({
  catalog_id: "CatalogIdString",
  database_name: "NameString", # required
  name: "NameString", # required
  transaction_id: "TransactionIdString",
  query_as_of_time: Time.now,
  include_status_details: false,
})

Response structure


resp.table.name #=> String
resp.table.database_name #=> String
resp.table.description #=> String
resp.table.owner #=> String
resp.table.create_time #=> Time
resp.table.update_time #=> Time
resp.table.last_access_time #=> Time
resp.table.last_analyzed_time #=> Time
resp.table.retention #=> Integer
resp.table.storage_descriptor.columns #=> Array
resp.table.storage_descriptor.columns[0].name #=> String
resp.table.storage_descriptor.columns[0].type #=> String
resp.table.storage_descriptor.columns[0].comment #=> String
resp.table.storage_descriptor.columns[0].parameters #=> Hash
resp.table.storage_descriptor.columns[0].parameters["KeyString"] #=> String
resp.table.storage_descriptor.location #=> String
resp.table.storage_descriptor.additional_locations #=> Array
resp.table.storage_descriptor.additional_locations[0] #=> String
resp.table.storage_descriptor.input_format #=> String
resp.table.storage_descriptor.output_format #=> String
resp.table.storage_descriptor.compressed #=> Boolean
resp.table.storage_descriptor.number_of_buckets #=> Integer
resp.table.storage_descriptor.serde_info.name #=> String
resp.table.storage_descriptor.serde_info.serialization_library #=> String
resp.table.storage_descriptor.serde_info.parameters #=> Hash
resp.table.storage_descriptor.serde_info.parameters["KeyString"] #=> String
resp.table.storage_descriptor.bucket_columns #=> Array
resp.table.storage_descriptor.bucket_columns[0] #=> String
resp.table.storage_descriptor.sort_columns #=> Array
resp.table.storage_descriptor.sort_columns[0].column #=> String
resp.table.storage_descriptor.sort_columns[0].sort_order #=> Integer
resp.table.storage_descriptor.parameters #=> Hash
resp.table.storage_descriptor.parameters["KeyString"] #=> String
resp.table.storage_descriptor.skewed_info.skewed_column_names #=> Array
resp.table.storage_descriptor.skewed_info.skewed_column_names[0] #=> String
resp.table.storage_descriptor.skewed_info.skewed_column_values #=> Array
resp.table.storage_descriptor.skewed_info.skewed_column_values[0] #=> String
resp.table.storage_descriptor.skewed_info.skewed_column_value_location_maps #=> Hash
resp.table.storage_descriptor.skewed_info.skewed_column_value_location_maps["ColumnValuesString"] #=> String
resp.table.storage_descriptor.stored_as_sub_directories #=> Boolean
resp.table.storage_descriptor.schema_reference.schema_id.schema_arn #=> String
resp.table.storage_descriptor.schema_reference.schema_id.schema_name #=> String
resp.table.storage_descriptor.schema_reference.schema_id.registry_name #=> String
resp.table.storage_descriptor.schema_reference.schema_version_id #=> String
resp.table.storage_descriptor.schema_reference.schema_version_number #=> Integer
resp.table.partition_keys #=> Array
resp.table.partition_keys[0].name #=> String
resp.table.partition_keys[0].type #=> String
resp.table.partition_keys[0].comment #=> String
resp.table.partition_keys[0].parameters #=> Hash
resp.table.partition_keys[0].parameters["KeyString"] #=> String
resp.table.view_original_text #=> String
resp.table.view_expanded_text #=> String
resp.table.table_type #=> String
resp.table.parameters #=> Hash
resp.table.parameters["KeyString"] #=> String
resp.table.created_by #=> String
resp.table.is_registered_with_lake_formation #=> Boolean
resp.table.target_table.catalog_id #=> String
resp.table.target_table.database_name #=> String
resp.table.target_table.name #=> String
resp.table.target_table.region #=> String
resp.table.catalog_id #=> String
resp.table.version_id #=> String
resp.table.federated_table.identifier #=> String
resp.table.federated_table.database_identifier #=> String
resp.table.federated_table.connection_name #=> String
resp.table.view_definition.is_protected #=> Boolean
resp.table.view_definition.definer #=> String
resp.table.view_definition.sub_objects #=> Array
resp.table.view_definition.sub_objects[0] #=> String
resp.table.view_definition.representations #=> Array
resp.table.view_definition.representations[0].dialect #=> String, one of "REDSHIFT", "ATHENA", "SPARK"
resp.table.view_definition.representations[0].dialect_version #=> String
resp.table.view_definition.representations[0].view_original_text #=> String
resp.table.view_definition.representations[0].view_expanded_text #=> String
resp.table.view_definition.representations[0].validation_connection #=> String
resp.table.view_definition.representations[0].is_stale #=> Boolean
resp.table.is_multi_dialect_view #=> Boolean
resp.table.status.requested_by #=> String
resp.table.status.updated_by #=> String
resp.table.status.request_time #=> Time
resp.table.status.update_time #=> Time
resp.table.status.action #=> String, one of "UPDATE", "CREATE"
resp.table.status.state #=> String, one of "QUEUED", "IN_PROGRESS", "SUCCESS", "STOPPED", "FAILED"
resp.table.status.error.error_code #=> String
resp.table.status.error.error_message #=> String
resp.table.status.details.requested_change #=> Types::Table
resp.table.status.details.view_validations #=> Array
resp.table.status.details.view_validations[0].dialect #=> String, one of "REDSHIFT", "ATHENA", "SPARK"
resp.table.status.details.view_validations[0].dialect_version #=> String
resp.table.status.details.view_validations[0].view_validation_text #=> String
resp.table.status.details.view_validations[0].update_time #=> Time
resp.table.status.details.view_validations[0].state #=> String, one of "QUEUED", "IN_PROGRESS", "SUCCESS", "STOPPED", "FAILED"
resp.table.status.details.view_validations[0].error.error_code #=> String
resp.table.status.details.view_validations[0].error.error_message #=> String

Parameters:

  • params (Hash) (defaults to: {})


Options Hash (params):

  • :catalog_id (String)

    The ID of the Data Catalog where the table resides. If none is provided, the Amazon Web Services account ID is used by default.

  • :database_name (required, String)

    The name of the database in the catalog in which the table resides. For Hive compatibility, this name is entirely lowercase.

  • :name (required, String)

    The name of the table for which to retrieve the definition. For Hive compatibility, this name is entirely lowercase.

  • :transaction_id (String)

    The transaction ID at which to read the table contents.

  • :query_as_of_time (Time, DateTime, Date, Integer, String)

    The time as of when to read the table contents. If not set, the most recent transaction commit time will be used. Cannot be specified along with TransactionId.

  • :include_status_details (Boolean)

    Specifies whether to include status details related to a request to create or update a Glue Data Catalog view.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 12688

def get_table(params = {}, options = {})
  req = build_request(:get_table, params)
  req.send_request(options)
end

#get_table_optimizer(params = {}) ⇒ Types::GetTableOptimizerResponse

Returns the configuration of all optimizers associated with a specified table.

Examples:

Request syntax with placeholder values


resp = client.get_table_optimizer({
  catalog_id: "CatalogIdString", # required
  database_name: "NameString", # required
  table_name: "NameString", # required
  type: "compaction", # required, accepts compaction, retention, orphan_file_deletion
})

Response structure


resp.catalog_id #=> String
resp.database_name #=> String
resp.table_name #=> String
resp.table_optimizer.type #=> String, one of "compaction", "retention", "orphan_file_deletion"
resp.table_optimizer.configuration.role_arn #=> String
resp.table_optimizer.configuration.enabled #=> Boolean
resp.table_optimizer.configuration.vpc_configuration.glue_connection_name #=> String
resp.table_optimizer.configuration.retention_configuration.iceberg_configuration.snapshot_retention_period_in_days #=> Integer
resp.table_optimizer.configuration.retention_configuration.iceberg_configuration.number_of_snapshots_to_retain #=> Integer
resp.table_optimizer.configuration.retention_configuration.iceberg_configuration.clean_expired_files #=> Boolean
resp.table_optimizer.configuration.orphan_file_deletion_configuration.iceberg_configuration.orphan_file_retention_period_in_days #=> Integer
resp.table_optimizer.configuration.orphan_file_deletion_configuration.iceberg_configuration.location #=> String
resp.table_optimizer.last_run.event_type #=> String, one of "starting", "completed", "failed", "in_progress"
resp.table_optimizer.last_run.start_timestamp #=> Time
resp.table_optimizer.last_run.end_timestamp #=> Time
resp.table_optimizer.last_run.metrics.number_of_bytes_compacted #=> String
resp.table_optimizer.last_run.metrics.number_of_files_compacted #=> String
resp.table_optimizer.last_run.metrics.number_of_dpus #=> String
resp.table_optimizer.last_run.metrics.job_duration_in_hour #=> String
resp.table_optimizer.last_run.error #=> String
resp.table_optimizer.last_run.compaction_metrics.iceberg_metrics.number_of_bytes_compacted #=> Integer
resp.table_optimizer.last_run.compaction_metrics.iceberg_metrics.number_of_files_compacted #=> Integer
resp.table_optimizer.last_run.compaction_metrics.iceberg_metrics.number_of_dpus #=> Integer
resp.table_optimizer.last_run.compaction_metrics.iceberg_metrics.job_duration_in_hour #=> Float
resp.table_optimizer.last_run.retention_metrics.iceberg_metrics.number_of_data_files_deleted #=> Integer
resp.table_optimizer.last_run.retention_metrics.iceberg_metrics.number_of_manifest_files_deleted #=> Integer
resp.table_optimizer.last_run.retention_metrics.iceberg_metrics.number_of_manifest_lists_deleted #=> Integer
resp.table_optimizer.last_run.retention_metrics.iceberg_metrics.number_of_dpus #=> Integer
resp.table_optimizer.last_run.retention_metrics.iceberg_metrics.job_duration_in_hour #=> Float
resp.table_optimizer.last_run.orphan_file_deletion_metrics.iceberg_metrics.number_of_orphan_files_deleted #=> Integer
resp.table_optimizer.last_run.orphan_file_deletion_metrics.iceberg_metrics.number_of_dpus #=> Integer
resp.table_optimizer.last_run.orphan_file_deletion_metrics.iceberg_metrics.job_duration_in_hour #=> Float

Parameters:

  • params (Hash) (defaults to: {})


Options Hash (params):

  • :catalog_id (required, String)

    The Catalog ID of the table.

  • :database_name (required, String)

    The name of the database in the catalog in which the table resides.

  • :table_name (required, String)

    The name of the table.

  • :type (required, String)

    The type of table optimizer.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 12763

def get_table_optimizer(params = {}, options = {})
  req = build_request(:get_table_optimizer, params)
  req.send_request(options)
end

#get_table_version(params = {}) ⇒ Types::GetTableVersionResponse

Retrieves a specified version of a table.

Examples:

Request syntax with placeholder values


resp = client.get_table_version({
  catalog_id: "CatalogIdString",
  database_name: "NameString", # required
  table_name: "NameString", # required
  version_id: "VersionString",
})

Response structure


resp.table_version.table.name #=> String
resp.table_version.table.database_name #=> String
resp.table_version.table.description #=> String
resp.table_version.table.owner #=> String
resp.table_version.table.create_time #=> Time
resp.table_version.table.update_time #=> Time
resp.table_version.table.last_access_time #=> Time
resp.table_version.table.last_analyzed_time #=> Time
resp.table_version.table.retention #=> Integer
resp.table_version.table.storage_descriptor.columns #=> Array
resp.table_version.table.storage_descriptor.columns[0].name #=> String
resp.table_version.table.storage_descriptor.columns[0].type #=> String
resp.table_version.table.storage_descriptor.columns[0].comment #=> String
resp.table_version.table.storage_descriptor.columns[0].parameters #=> Hash
resp.table_version.table.storage_descriptor.columns[0].parameters["KeyString"] #=> String
resp.table_version.table.storage_descriptor.location #=> String
resp.table_version.table.storage_descriptor.additional_locations #=> Array
resp.table_version.table.storage_descriptor.additional_locations[0] #=> String
resp.table_version.table.storage_descriptor.input_format #=> String
resp.table_version.table.storage_descriptor.output_format #=> String
resp.table_version.table.storage_descriptor.compressed #=> Boolean
resp.table_version.table.storage_descriptor.number_of_buckets #=> Integer
resp.table_version.table.storage_descriptor.serde_info.name #=> String
resp.table_version.table.storage_descriptor.serde_info.serialization_library #=> String
resp.table_version.table.storage_descriptor.serde_info.parameters #=> Hash
resp.table_version.table.storage_descriptor.serde_info.parameters["KeyString"] #=> String
resp.table_version.table.storage_descriptor.bucket_columns #=> Array
resp.table_version.table.storage_descriptor.bucket_columns[0] #=> String
resp.table_version.table.storage_descriptor.sort_columns #=> Array
resp.table_version.table.storage_descriptor.sort_columns[0].column #=> String
resp.table_version.table.storage_descriptor.sort_columns[0].sort_order #=> Integer
resp.table_version.table.storage_descriptor.parameters #=> Hash
resp.table_version.table.storage_descriptor.parameters["KeyString"] #=> String
resp.table_version.table.storage_descriptor.skewed_info.skewed_column_names #=> Array
resp.table_version.table.storage_descriptor.skewed_info.skewed_column_names[0] #=> String
resp.table_version.table.storage_descriptor.skewed_info.skewed_column_values #=> Array
resp.table_version.table.storage_descriptor.skewed_info.skewed_column_values[0] #=> String
resp.table_version.table.storage_descriptor.skewed_info.skewed_column_value_location_maps #=> Hash
resp.table_version.table.storage_descriptor.skewed_info.skewed_column_value_location_maps["ColumnValuesString"] #=> String
resp.table_version.table.storage_descriptor.stored_as_sub_directories #=> Boolean
resp.table_version.table.storage_descriptor.schema_reference.schema_id.schema_arn #=> String
resp.table_version.table.storage_descriptor.schema_reference.schema_id.schema_name #=> String
resp.table_version.table.storage_descriptor.schema_reference.schema_id.registry_name #=> String
resp.table_version.table.storage_descriptor.schema_reference.schema_version_id #=> String
resp.table_version.table.storage_descriptor.schema_reference.schema_version_number #=> Integer
resp.table_version.table.partition_keys #=> Array
resp.table_version.table.partition_keys[0].name #=> String
resp.table_version.table.partition_keys[0].type #=> String
resp.table_version.table.partition_keys[0].comment #=> String
resp.table_version.table.partition_keys[0].parameters #=> Hash
resp.table_version.table.partition_keys[0].parameters["KeyString"] #=> String
resp.table_version.table.view_original_text #=> String
resp.table_version.table.view_expanded_text #=> String
resp.table_version.table.table_type #=> String
resp.table_version.table.parameters #=> Hash
resp.table_version.table.parameters["KeyString"] #=> String
resp.table_version.table.created_by #=> String
resp.table_version.table.is_registered_with_lake_formation #=> Boolean
resp.table_version.table.target_table.catalog_id #=> String
resp.table_version.table.target_table.database_name #=> String
resp.table_version.table.target_table.name #=> String
resp.table_version.table.target_table.region #=> String
resp.table_version.table.catalog_id #=> String
resp.table_version.table.version_id #=> String
resp.table_version.table.federated_table.identifier #=> String
resp.table_version.table.federated_table.database_identifier #=> String
resp.table_version.table.federated_table.connection_name #=> String
resp.table_version.table.view_definition.is_protected #=> Boolean
resp.table_version.table.view_definition.definer #=> String
resp.table_version.table.view_definition.sub_objects #=> Array
resp.table_version.table.view_definition.sub_objects[0] #=> String
resp.table_version.table.view_definition.representations #=> Array
resp.table_version.table.view_definition.representations[0].dialect #=> String, one of "REDSHIFT", "ATHENA", "SPARK"
resp.table_version.table.view_definition.representations[0].dialect_version #=> String
resp.table_version.table.view_definition.representations[0].view_original_text #=> String
resp.table_version.table.view_definition.representations[0].view_expanded_text #=> String
resp.table_version.table.view_definition.representations[0].validation_connection #=> String
resp.table_version.table.view_definition.representations[0].is_stale #=> Boolean
resp.table_version.table.is_multi_dialect_view #=> Boolean
resp.table_version.table.status.requested_by #=> String
resp.table_version.table.status.updated_by #=> String
resp.table_version.table.status.request_time #=> Time
resp.table_version.table.status.update_time #=> Time
resp.table_version.table.status.action #=> String, one of "UPDATE", "CREATE"
resp.table_version.table.status.state #=> String, one of "QUEUED", "IN_PROGRESS", "SUCCESS", "STOPPED", "FAILED"
resp.table_version.table.status.error.error_code #=> String
resp.table_version.table.status.error.error_message #=> String
resp.table_version.table.status.details.requested_change #=> Types::Table
resp.table_version.table.status.details.view_validations #=> Array
resp.table_version.table.status.details.view_validations[0].dialect #=> String, one of "REDSHIFT", "ATHENA", "SPARK"
resp.table_version.table.status.details.view_validations[0].dialect_version #=> String
resp.table_version.table.status.details.view_validations[0].view_validation_text #=> String
resp.table_version.table.status.details.view_validations[0].update_time #=> Time
resp.table_version.table.status.details.view_validations[0].state #=> String, one of "QUEUED", "IN_PROGRESS", "SUCCESS", "STOPPED", "FAILED"
resp.table_version.table.status.details.view_validations[0].error.error_code #=> String
resp.table_version.table.status.details.view_validations[0].error.error_message #=> String
resp.table_version.version_id #=> String

Parameters:

  • params (Hash) (defaults to: {})


Options Hash (params):

  • :catalog_id (String)

    The ID of the Data Catalog where the tables reside. If none is provided, the Amazon Web Services account ID is used by default.

  • :database_name (required, String)

    The database in the catalog in which the table resides. For Hive compatibility, this name is entirely lowercase.

  • :table_name (required, String)

    The name of the table. For Hive compatibility, this name is entirely lowercase.

  • :version_id (String)

    The ID value of the table version to be retrieved. A VersionID is a string representation of an integer. Each version is incremented by 1.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 12903

def get_table_version(params = {}, options = {})
  req = build_request(:get_table_version, params)
  req.send_request(options)
end

#get_table_versions(params = {}) ⇒ Types::GetTableVersionsResponse

Retrieves a list of strings that identify available versions of a specified table.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.
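
A minimal sketch that collects every version ID across all pages (stubbed client, hypothetical names); any collected ID can then be passed to #get_table_version as :version_id:

require 'aws-sdk-glue'

client = Aws::Glue::Client.new(stub_responses: true)

version_ids = client.get_table_versions(
  database_name: "sales",
  table_name: "orders",
).flat_map { |page| page.table_versions.map(&:version_id) }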

Examples:

Request syntax with placeholder values


resp = client.get_table_versions({
  catalog_id: "CatalogIdString",
  database_name: "NameString", # required
  table_name: "NameString", # required
  next_token: "Token",
  max_results: 1,
})

Response structure


resp.table_versions #=> Array
resp.table_versions[0].table.name #=> String
resp.table_versions[0].table.database_name #=> String
resp.table_versions[0].table.description #=> String
resp.table_versions[0].table.owner #=> String
resp.table_versions[0].table.create_time #=> Time
resp.table_versions[0].table.update_time #=> Time
resp.table_versions[0].table.last_access_time #=> Time
resp.table_versions[0].table.last_analyzed_time #=> Time
resp.table_versions[0].table.retention #=> Integer
resp.table_versions[0].table.storage_descriptor.columns #=> Array
resp.table_versions[0].table.storage_descriptor.columns[0].name #=> String
resp.table_versions[0].table.storage_descriptor.columns[0].type #=> String
resp.table_versions[0].table.storage_descriptor.columns[0].comment #=> String
resp.table_versions[0].table.storage_descriptor.columns[0].parameters #=> Hash
resp.table_versions[0].table.storage_descriptor.columns[0].parameters["KeyString"] #=> String
resp.table_versions[0].table.storage_descriptor.location #=> String
resp.table_versions[0].table.storage_descriptor.additional_locations #=> Array
resp.table_versions[0].table.storage_descriptor.additional_locations[0] #=> String
resp.table_versions[0].table.storage_descriptor.input_format #=> String
resp.table_versions[0].table.storage_descriptor.output_format #=> String
resp.table_versions[0].table.storage_descriptor.compressed #=> Boolean
resp.table_versions[0].table.storage_descriptor.number_of_buckets #=> Integer
resp.table_versions[0].table.storage_descriptor.serde_info.name #=> String
resp.table_versions[0].table.storage_descriptor.serde_info.serialization_library #=> String
resp.table_versions[0].table.storage_descriptor.serde_info.parameters #=> Hash
resp.table_versions[0].table.storage_descriptor.serde_info.parameters["KeyString"] #=> String
resp.table_versions[0].table.storage_descriptor.bucket_columns #=> Array
resp.table_versions[0].table.storage_descriptor.bucket_columns[0] #=> String
resp.table_versions[0].table.storage_descriptor.sort_columns #=> Array
resp.table_versions[0].table.storage_descriptor.sort_columns[0].column #=> String
resp.table_versions[0].table.storage_descriptor.sort_columns[0].sort_order #=> Integer
resp.table_versions[0].table.storage_descriptor.parameters #=> Hash
resp.table_versions[0].table.storage_descriptor.parameters["KeyString"] #=> String
resp.table_versions[0].table.storage_descriptor.skewed_info.skewed_column_names #=> Array
resp.table_versions[0].table.storage_descriptor.skewed_info.skewed_column_names[0] #=> String
resp.table_versions[0].table.storage_descriptor.skewed_info.skewed_column_values #=> Array
resp.table_versions[0].table.storage_descriptor.skewed_info.skewed_column_values[0] #=> String
resp.table_versions[0].table.storage_descriptor.skewed_info.skewed_column_value_location_maps #=> Hash
resp.table_versions[0].table.storage_descriptor.skewed_info.skewed_column_value_location_maps["ColumnValuesString"] #=> String
resp.table_versions[0].table.storage_descriptor.stored_as_sub_directories #=> Boolean
resp.table_versions[0].table.storage_descriptor.schema_reference.schema_id.schema_arn #=> String
resp.table_versions[0].table.storage_descriptor.schema_reference.schema_id.schema_name #=> String
resp.table_versions[0].table.storage_descriptor.schema_reference.schema_id.registry_name #=> String
resp.table_versions[0].table.storage_descriptor.schema_reference.schema_version_id #=> String
resp.table_versions[0].table.storage_descriptor.schema_reference.schema_version_number #=> Integer
resp.table_versions[0].table.partition_keys #=> Array
resp.table_versions[0].table.partition_keys[0].name #=> String
resp.table_versions[0].table.partition_keys[0].type #=> String
resp.table_versions[0].table.partition_keys[0].comment #=> String
resp.table_versions[0].table.partition_keys[0].parameters #=> Hash
resp.table_versions[0].table.partition_keys[0].parameters["KeyString"] #=> String
resp.table_versions[0].table.view_original_text #=> String
resp.table_versions[0].table.view_expanded_text #=> String
resp.table_versions[0].table.table_type #=> String
resp.table_versions[0].table.parameters #=> Hash
resp.table_versions[0].table.parameters["KeyString"] #=> String
resp.table_versions[0].table.created_by #=> String
resp.table_versions[0].table.is_registered_with_lake_formation #=> Boolean
resp.table_versions[0].table.target_table.catalog_id #=> String
resp.table_versions[0].table.target_table.database_name #=> String
resp.table_versions[0].table.target_table.name #=> String
resp.table_versions[0].table.target_table.region #=> String
resp.table_versions[0].table.catalog_id #=> String
resp.table_versions[0].table.version_id #=> String
resp.table_versions[0].table.federated_table.identifier #=> String
resp.table_versions[0].table.federated_table.database_identifier #=> String
resp.table_versions[0].table.federated_table.connection_name #=> String
resp.table_versions[0].table.view_definition.is_protected #=> Boolean
resp.table_versions[0].table.view_definition.definer #=> String
resp.table_versions[0].table.view_definition.sub_objects #=> Array
resp.table_versions[0].table.view_definition.sub_objects[0] #=> String
resp.table_versions[0].table.view_definition.representations #=> Array
resp.table_versions[0].table.view_definition.representations[0].dialect #=> String, one of "REDSHIFT", "ATHENA", "SPARK"
resp.table_versions[0].table.view_definition.representations[0].dialect_version #=> String
resp.table_versions[0].table.view_definition.representations[0].view_original_text #=> String
resp.table_versions[0].table.view_definition.representations[0].view_expanded_text #=> String
resp.table_versions[0].table.view_definition.representations[0].validation_connection #=> String
resp.table_versions[0].table.view_definition.representations[0].is_stale #=> Boolean
resp.table_versions[0].table.is_multi_dialect_view #=> Boolean
resp.table_versions[0].table.status.requested_by #=> String
resp.table_versions[0].table.status.updated_by #=> String
resp.table_versions[0].table.status.request_time #=> Time
resp.table_versions[0].table.status.update_time #=> Time
resp.table_versions[0].table.status.action #=> String, one of "UPDATE", "CREATE"
resp.table_versions[0].table.status.state #=> String, one of "QUEUED", "IN_PROGRESS", "SUCCESS", "STOPPED", "FAILED"
resp.table_versions[0].table.status.error.error_code #=> String
resp.table_versions[0].table.status.error.error_message #=> String
resp.table_versions[0].table.status.details.requested_change #=> Types::Table
resp.table_versions[0].table.status.details.view_validations #=> Array
resp.table_versions[0].table.status.details.view_validations[0].dialect #=> String, one of "REDSHIFT", "ATHENA", "SPARK"
resp.table_versions[0].table.status.details.view_validations[0].dialect_version #=> String
resp.table_versions[0].table.status.details.view_validations[0].view_validation_text #=> String
resp.table_versions[0].table.status.details.view_validations[0].update_time #=> Time
resp.table_versions[0].table.status.details.view_validations[0].state #=> String, one of "QUEUED", "IN_PROGRESS", "SUCCESS", "STOPPED", "FAILED"
resp.table_versions[0].table.status.details.view_validations[0].error.error_code #=> String
resp.table_versions[0].table.status.details.view_validations[0].error.error_message #=> String
resp.table_versions[0].version_id #=> String
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})


Options Hash (params):

  • :catalog_id (String)

    The ID of the Data Catalog where the tables reside. If none is provided, the Amazon Web Services account ID is used by default.

  • :database_name (required, String)

    The database in the catalog in which the table resides. For Hive compatibility, this name is entirely lowercase.

  • :table_name (required, String)

    The name of the table. For Hive compatibility, this name is entirely lowercase.

  • :next_token (String)

    A continuation token, if this is not the first call.

  • :max_results (Integer)

    The maximum number of table versions to return in one response.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 13052

def get_table_versions(params = {}, options = {})
  req = build_request(:get_table_versions, params)
  req.send_request(options)
end

#get_tables(params = {}) ⇒ Types::GetTablesResponse

Retrieves the definitions of some or all of the tables in a given Database.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.
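
For example, a minimal sketch that lists matching tables while fetching only their names and types (stubbed client; the database name and pattern are hypothetical):

require 'aws-sdk-glue'

client = Aws::Glue::Client.new(stub_responses: true)

client.get_tables(
  database_name: "sales",
  expression: "orders_.*",               # regex filter on table names
  attributes_to_get: %w[NAME TABLE_TYPE],
).each do |page|
  page.table_list.each { |t| puts "#{t.name} (#{t.table_type})" }
end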

Examples:

Request syntax with placeholder values


resp = client.get_tables({
  catalog_id: "CatalogIdString",
  database_name: "NameString", # required
  expression: "FilterString",
  next_token: "Token",
  max_results: 1,
  transaction_id: "TransactionIdString",
  query_as_of_time: Time.now,
  include_status_details: false,
  attributes_to_get: ["NAME"], # accepts NAME, TABLE_TYPE
})

Response structure


resp.table_list #=> Array
resp.table_list[0].name #=> String
resp.table_list[0].database_name #=> String
resp.table_list[0].description #=> String
resp.table_list[0].owner #=> String
resp.table_list[0].create_time #=> Time
resp.table_list[0].update_time #=> Time
resp.table_list[0].last_access_time #=> Time
resp.table_list[0].last_analyzed_time #=> Time
resp.table_list[0].retention #=> Integer
resp.table_list[0].storage_descriptor.columns #=> Array
resp.table_list[0].storage_descriptor.columns[0].name #=> String
resp.table_list[0].storage_descriptor.columns[0].type #=> String
resp.table_list[0].storage_descriptor.columns[0].comment #=> String
resp.table_list[0].storage_descriptor.columns[0].parameters #=> Hash
resp.table_list[0].storage_descriptor.columns[0].parameters["KeyString"] #=> String
resp.table_list[0].storage_descriptor.location #=> String
resp.table_list[0].storage_descriptor.additional_locations #=> Array
resp.table_list[0].storage_descriptor.additional_locations[0] #=> String
resp.table_list[0].storage_descriptor.input_format #=> String
resp.table_list[0].storage_descriptor.output_format #=> String
resp.table_list[0].storage_descriptor.compressed #=> Boolean
resp.table_list[0].storage_descriptor.number_of_buckets #=> Integer
resp.table_list[0].storage_descriptor.serde_info.name #=> String
resp.table_list[0].storage_descriptor.serde_info.serialization_library #=> String
resp.table_list[0].storage_descriptor.serde_info.parameters #=> Hash
resp.table_list[0].storage_descriptor.serde_info.parameters["KeyString"] #=> String
resp.table_list[0].storage_descriptor.bucket_columns #=> Array
resp.table_list[0].storage_descriptor.bucket_columns[0] #=> String
resp.table_list[0].storage_descriptor.sort_columns #=> Array
resp.table_list[0].storage_descriptor.sort_columns[0].column #=> String
resp.table_list[0].storage_descriptor.sort_columns[0].sort_order #=> Integer
resp.table_list[0].storage_descriptor.parameters #=> Hash
resp.table_list[0].storage_descriptor.parameters["KeyString"] #=> String
resp.table_list[0].storage_descriptor.skewed_info.skewed_column_names #=> Array
resp.table_list[0].storage_descriptor.skewed_info.skewed_column_names[0] #=> String
resp.table_list[0].storage_descriptor.skewed_info.skewed_column_values #=> Array
resp.table_list[0].storage_descriptor.skewed_info.skewed_column_values[0] #=> String
resp.table_list[0].storage_descriptor.skewed_info.skewed_column_value_location_maps #=> Hash
resp.table_list[0].storage_descriptor.skewed_info.skewed_column_value_location_maps["ColumnValuesString"] #=> String
resp.table_list[0].storage_descriptor.stored_as_sub_directories #=> Boolean
resp.table_list[0].storage_descriptor.schema_reference.schema_id.schema_arn #=> String
resp.table_list[0].storage_descriptor.schema_reference.schema_id.schema_name #=> String
resp.table_list[0].storage_descriptor.schema_reference.schema_id.registry_name #=> String
resp.table_list[0].storage_descriptor.schema_reference.schema_version_id #=> String
resp.table_list[0].storage_descriptor.schema_reference.schema_version_number #=> Integer
resp.table_list[0].partition_keys #=> Array
resp.table_list[0].partition_keys[0].name #=> String
resp.table_list[0].partition_keys[0].type #=> String
resp.table_list[0].partition_keys[0].comment #=> String
resp.table_list[0].partition_keys[0].parameters #=> Hash
resp.table_list[0].partition_keys[0].parameters["KeyString"] #=> String
resp.table_list[0].view_original_text #=> String
resp.table_list[0].view_expanded_text #=> String
resp.table_list[0].table_type #=> String
resp.table_list[0].parameters #=> Hash
resp.table_list[0].parameters["KeyString"] #=> String
resp.table_list[0].created_by #=> String
resp.table_list[0].is_registered_with_lake_formation #=> Boolean
resp.table_list[0].target_table.catalog_id #=> String
resp.table_list[0].target_table.database_name #=> String
resp.table_list[0].target_table.name #=> String
resp.table_list[0].target_table.region #=> String
resp.table_list[0].catalog_id #=> String
resp.table_list[0].version_id #=> String
resp.table_list[0].federated_table.identifier #=> String
resp.table_list[0].federated_table.database_identifier #=> String
resp.table_list[0].federated_table.connection_name #=> String
resp.table_list[0].view_definition.is_protected #=> Boolean
resp.table_list[0].view_definition.definer #=> String
resp.table_list[0].view_definition.sub_objects #=> Array
resp.table_list[0].view_definition.sub_objects[0] #=> String
resp.table_list[0].view_definition.representations #=> Array
resp.table_list[0].view_definition.representations[0].dialect #=> String, one of "REDSHIFT", "ATHENA", "SPARK"
resp.table_list[0].view_definition.representations[0].dialect_version #=> String
resp.table_list[0].view_definition.representations[0].view_original_text #=> String
resp.table_list[0].view_definition.representations[0].view_expanded_text #=> String
resp.table_list[0].view_definition.representations[0].validation_connection #=> String
resp.table_list[0].view_definition.representations[0].is_stale #=> Boolean
resp.table_list[0].is_multi_dialect_view #=> Boolean
resp.table_list[0].status.requested_by #=> String
resp.table_list[0].status.updated_by #=> String
resp.table_list[0].status.request_time #=> Time
resp.table_list[0].status.update_time #=> Time
resp.table_list[0].status.action #=> String, one of "UPDATE", "CREATE"
resp.table_list[0].status.state #=> String, one of "QUEUED", "IN_PROGRESS", "SUCCESS", "STOPPED", "FAILED"
resp.table_list[0].status.error.error_code #=> String
resp.table_list[0].status.error.error_message #=> String
resp.table_list[0].status.details.requested_change #=> Types::Table
resp.table_list[0].status.details.view_validations #=> Array
resp.table_list[0].status.details.view_validations[0].dialect #=> String, one of "REDSHIFT", "ATHENA", "SPARK"
resp.table_list[0].status.details.view_validations[0].dialect_version #=> String
resp.table_list[0].status.details.view_validations[0].view_validation_text #=> String
resp.table_list[0].status.details.view_validations[0].update_time #=> Time
resp.table_list[0].status.details.view_validations[0].state #=> String, one of "QUEUED", "IN_PROGRESS", "SUCCESS", "STOPPED", "FAILED"
resp.table_list[0].status.details.view_validations[0].error.error_code #=> String
resp.table_list[0].status.details.view_validations[0].error.error_message #=> String
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})


Options Hash (params):

  • :catalog_id (String)

    The ID of the Data Catalog where the tables reside. If none is provided, the Amazon Web Services account ID is used by default.

  • :database_name (required, String)

    The database in the catalog whose tables to list. For Hive compatibility, this name is entirely lowercase.

  • :expression (String)

    A regular expression pattern. If present, only those tables whose names match the pattern are returned.

  • :next_token (String)

    A continuation token, included if this is a continuation call.

  • :max_results (Integer)

    The maximum number of tables to return in a single response.

  • :transaction_id (String)

    The transaction ID at which to read the table contents.

  • :query_as_of_time (Time, DateTime, Date, Integer, String)

    The time as of when to read the table contents. If not set, the most recent transaction commit time will be used. Cannot be specified along with TransactionId.

  • :include_status_details (Boolean)

    Specifies whether to include status details related to a request to create or update a Glue Data Catalog view.

  • :attributes_to_get (Array<String>)

    Specifies the table fields returned by the GetTables call. This parameter doesn’t accept an empty list. The request must include NAME.

    The following are the valid combinations of values:

    • NAME - Names of all tables in the database.

    • NAME, TABLE_TYPE - Names of all tables and the table types.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 13227

def get_tables(params = {}, options = {})
  req = build_request(:get_tables, params)
  req.send_request(options)
end

#get_tags(params = {}) ⇒ Types::GetTagsResponse

Retrieves a list of tags associated with a resource.

Examples:

Request syntax with placeholder values


resp = client.get_tags({
  resource_arn: "GlueResourceArn", # required
})

Response structure


resp.tags #=> Hash
resp.tags["TagKey"] #=> String

Parameters:

  • params (Hash) (defaults to: {})


Options Hash (params):

  • :resource_arn (required, String)

    The Amazon Resource Name (ARN) of the resource for which to retrieve tags.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 13257

def get_tags(params = {}, options = {})
  req = build_request(:get_tags, params)
  req.send_request(options)
end

#get_trigger(params = {}) ⇒ Types::GetTriggerResponse

Retrieves the definition of a trigger.

Examples:

Request syntax with placeholder values


resp = client.get_trigger({
  name: "NameString", # required
})

Response structure


resp.trigger.name #=> String
resp.trigger.workflow_name #=> String
resp.trigger.id #=> String
resp.trigger.type #=> String, one of "SCHEDULED", "CONDITIONAL", "ON_DEMAND", "EVENT"
resp.trigger.state #=> String, one of "CREATING", "CREATED", "ACTIVATING", "ACTIVATED", "DEACTIVATING", "DEACTIVATED", "DELETING", "UPDATING"
resp.trigger.description #=> String
resp.trigger.schedule #=> String
resp.trigger.actions #=> Array
resp.trigger.actions[0].job_name #=> String
resp.trigger.actions[0].arguments #=> Hash
resp.trigger.actions[0].arguments["GenericString"] #=> String
resp.trigger.actions[0].timeout #=> Integer
resp.trigger.actions[0].security_configuration #=> String
resp.trigger.actions[0].notification_property.notify_delay_after #=> Integer
resp.trigger.actions[0].crawler_name #=> String
resp.trigger.predicate.logical #=> String, one of "AND", "ANY"
resp.trigger.predicate.conditions #=> Array
resp.trigger.predicate.conditions[0].logical_operator #=> String, one of "EQUALS"
resp.trigger.predicate.conditions[0].job_name #=> String
resp.trigger.predicate.conditions[0].state #=> String, one of "STARTING", "RUNNING", "STOPPING", "STOPPED", "SUCCEEDED", "FAILED", "TIMEOUT", "ERROR", "WAITING", "EXPIRED"
resp.trigger.predicate.conditions[0].crawler_name #=> String
resp.trigger.predicate.conditions[0].crawl_state #=> String, one of "RUNNING", "CANCELLING", "CANCELLED", "SUCCEEDED", "FAILED", "ERROR"
resp.trigger.event_batching_condition.batch_size #=> Integer
resp.trigger.event_batching_condition.batch_window #=> Integer

Parameters:

  • params (Hash) (defaults to: {})


Options Hash (params):

  • :name (required, String)

    The name of the trigger to retrieve.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 13308

def get_trigger(params = {}, options = {})
  req = build_request(:get_trigger, params)
  req.send_request(options)
end

#get_triggers(params = {}) ⇒ Types::GetTriggersResponse

Gets all the triggers associated with a job.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.get_triggers({
  next_token: "GenericString",
  dependent_job_name: "NameString",
  max_results: 1,
})

Response structure


resp.triggers #=> Array
resp.triggers[0].name #=> String
resp.triggers[0].workflow_name #=> String
resp.triggers[0].id #=> String
resp.triggers[0].type #=> String, one of "SCHEDULED", "CONDITIONAL", "ON_DEMAND", "EVENT"
resp.triggers[0].state #=> String, one of "CREATING", "CREATED", "ACTIVATING", "ACTIVATED", "DEACTIVATING", "DEACTIVATED", "DELETING", "UPDATING"
resp.triggers[0].description #=> String
resp.triggers[0].schedule #=> String
resp.triggers[0].actions #=> Array
resp.triggers[0].actions[0].job_name #=> String
resp.triggers[0].actions[0].arguments #=> Hash
resp.triggers[0].actions[0].arguments["GenericString"] #=> String
resp.triggers[0].actions[0].timeout #=> Integer
resp.triggers[0].actions[0].security_configuration #=> String
resp.triggers[0].actions[0].notification_property.notify_delay_after #=> Integer
resp.triggers[0].actions[0].crawler_name #=> String
resp.triggers[0].predicate.logical #=> String, one of "AND", "ANY"
resp.triggers[0].predicate.conditions #=> Array
resp.triggers[0].predicate.conditions[0].logical_operator #=> String, one of "EQUALS"
resp.triggers[0].predicate.conditions[0].job_name #=> String
resp.triggers[0].predicate.conditions[0].state #=> String, one of "STARTING", "RUNNING", "STOPPING", "STOPPED", "SUCCEEDED", "FAILED", "TIMEOUT", "ERROR", "WAITING", "EXPIRED"
resp.triggers[0].predicate.conditions[0].crawler_name #=> String
resp.triggers[0].predicate.conditions[0].crawl_state #=> String, one of "RUNNING", "CANCELLING", "CANCELLED", "SUCCEEDED", "FAILED", "ERROR"
resp.triggers[0].event_batching_condition.batch_size #=> Integer
resp.triggers[0].event_batching_condition.batch_window #=> Integer
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})


Options Hash (params):

  • :next_token (String)

    A continuation token, if this is a continuation call.

  • :dependent_job_name (String)

    The name of the job to retrieve triggers for. The trigger that can start this job is returned, and if there is no such trigger, all triggers are returned.

  • :max_results (Integer)

    The maximum size of the response.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 13374

def get_triggers(params = {}, options = {})
  req = build_request(:get_triggers, params)
  req.send_request(options)
end

#get_unfiltered_partition_metadata(params = {}) ⇒ Types::GetUnfilteredPartitionMetadataResponse

Retrieves partition metadata from the Data Catalog that contains unfiltered metadata.

For IAM authorization, the public IAM action associated with this API is glue:GetPartition.

Examples:

Request syntax with placeholder values


resp = client.get_unfiltered_partition_metadata({
  region: "ValueString",
  catalog_id: "CatalogIdString", # required
  database_name: "NameString", # required
  table_name: "NameString", # required
  partition_values: ["ValueString"], # required
  audit_context: {
    additional_audit_context: "AuditContextString",
    requested_columns: ["ColumnNameString"],
    all_columns_requested: false,
  },
  supported_permission_types: ["COLUMN_PERMISSION"], # required, accepts COLUMN_PERMISSION, CELL_FILTER_PERMISSION, NESTED_PERMISSION, NESTED_CELL_PERMISSION
  query_session_context: {
    query_id: "HashString",
    query_start_time: Time.now,
    cluster_id: "NullableString",
    query_authorization_id: "HashString",
    additional_context: {
      "ContextKey" => "ContextValue",
    },
  },
})

Response structure


resp.partition.values #=> Array
resp.partition.values[0] #=> String
resp.partition.database_name #=> String
resp.partition.table_name #=> String
resp.partition.creation_time #=> Time
resp.partition.last_access_time #=> Time
resp.partition.storage_descriptor.columns #=> Array
resp.partition.storage_descriptor.columns[0].name #=> String
resp.partition.storage_descriptor.columns[0].type #=> String
resp.partition.storage_descriptor.columns[0].comment #=> String
resp.partition.storage_descriptor.columns[0].parameters #=> Hash
resp.partition.storage_descriptor.columns[0].parameters["KeyString"] #=> String
resp.partition.storage_descriptor.location #=> String
resp.partition.storage_descriptor.additional_locations #=> Array
resp.partition.storage_descriptor.additional_locations[0] #=> String
resp.partition.storage_descriptor.input_format #=> String
resp.partition.storage_descriptor.output_format #=> String
resp.partition.storage_descriptor.compressed #=> Boolean
resp.partition.storage_descriptor.number_of_buckets #=> Integer
resp.partition.storage_descriptor.serde_info.name #=> String
resp.partition.storage_descriptor.serde_info.serialization_library #=> String
resp.partition.storage_descriptor.serde_info.parameters #=> Hash
resp.partition.storage_descriptor.serde_info.parameters["KeyString"] #=> String
resp.partition.storage_descriptor.bucket_columns #=> Array
resp.partition.storage_descriptor.bucket_columns[0] #=> String
resp.partition.storage_descriptor.sort_columns #=> Array
resp.partition.storage_descriptor.sort_columns[0].column #=> String
resp.partition.storage_descriptor.sort_columns[0].sort_order #=> Integer
resp.partition.storage_descriptor.parameters #=> Hash
resp.partition.storage_descriptor.parameters["KeyString"] #=> String
resp.partition.storage_descriptor.skewed_info.skewed_column_names #=> Array
resp.partition.storage_descriptor.skewed_info.skewed_column_names[0] #=> String
resp.partition.storage_descriptor.skewed_info.skewed_column_values #=> Array
resp.partition.storage_descriptor.skewed_info.skewed_column_values[0] #=> String
resp.partition.storage_descriptor.skewed_info.skewed_column_value_location_maps #=> Hash
resp.partition.storage_descriptor.skewed_info.skewed_column_value_location_maps["ColumnValuesString"] #=> String
resp.partition.storage_descriptor.stored_as_sub_directories #=> Boolean
resp.partition.storage_descriptor.schema_reference.schema_id.schema_arn #=> String
resp.partition.storage_descriptor.schema_reference.schema_id.schema_name #=> String
resp.partition.storage_descriptor.schema_reference.schema_id.registry_name #=> String
resp.partition.storage_descriptor.schema_reference.schema_version_id #=> String
resp.partition.storage_descriptor.schema_reference.schema_version_number #=> Integer
resp.partition.parameters #=> Hash
resp.partition.parameters["KeyString"] #=> String
resp.partition.last_analyzed_time #=> Time
resp.partition.catalog_id #=> String
resp.authorized_columns #=> Array
resp.authorized_columns[0] #=> String
resp.is_registered_with_lake_formation #=> Boolean

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :region (String)

    Specified only if the base tables belong to a different Amazon Web Services Region.

  • :catalog_id (required, String)

    The catalog ID where the partition resides.

  • :database_name (required, String)

    Specifies the name of a database that contains the partition.

  • :table_name (required, String)

    Specifies the name of a table that contains the partition.

  • :partition_values (required, Array<String>)

    A list of partition key values.

  • :audit_context (Types::AuditContext)

    A structure containing Lake Formation audit context information.

  • :supported_permission_types (required, Array<String>)

    A list of supported permission types.

  • :query_session_context (Types::QuerySessionContext)

    A structure used as a protocol between query engines and Lake Formation or Glue. Contains both a Lake Formation generated authorization identifier and information from the request's authorization context.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 13501

def get_unfiltered_partition_metadata(params = {}, options = {})
  req = build_request(:get_unfiltered_partition_metadata, params)
  req.send_request(options)
end
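
As a sketch of consuming the response, authorized_columns can be used to keep only the columns the caller may read; all identifiers below are placeholders:


resp = client.get_unfiltered_partition_metadata(
  catalog_id: "123456789012",        # placeholder account ID
  database_name: "my_database",      # placeholder database name
  table_name: "my_table",            # placeholder table name
  partition_values: ["2024", "01"],  # placeholder partition key values
  supported_permission_types: ["COLUMN_PERMISSION"],
)
visible = resp.partition.storage_descriptor.columns.select do |col|
  resp.authorized_columns.include?(col.name)
end
visible.each { |col| puts "#{col.name}: #{col.type}" }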

#get_unfiltered_partitions_metadata(params = {}) ⇒ Types::GetUnfilteredPartitionsMetadataResponse

Retrieves partition metadata from the Data Catalog that contains unfiltered metadata.

For IAM authorization, the public IAM action associated with this API is glue:GetPartitions.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.get_unfiltered_partitions_metadata({
  region: "ValueString",
  catalog_id: "CatalogIdString", # required
  database_name: "NameString", # required
  table_name: "NameString", # required
  expression: "PredicateString",
  audit_context: {
    additional_audit_context: "AuditContextString",
    requested_columns: ["ColumnNameString"],
    all_columns_requested: false,
  },
  supported_permission_types: ["COLUMN_PERMISSION"], # required, accepts COLUMN_PERMISSION, CELL_FILTER_PERMISSION, NESTED_PERMISSION, NESTED_CELL_PERMISSION
  next_token: "Token",
  segment: {
    segment_number: 1, # required
    total_segments: 1, # required
  },
  max_results: 1,
  query_session_context: {
    query_id: "HashString",
    query_start_time: Time.now,
    cluster_id: "NullableString",
    query_authorization_id: "HashString",
    additional_context: {
      "ContextKey" => "ContextValue",
    },
  },
})

Response structure


resp.unfiltered_partitions #=> Array
resp.unfiltered_partitions[0].partition.values #=> Array
resp.unfiltered_partitions[0].partition.values[0] #=> String
resp.unfiltered_partitions[0].partition.database_name #=> String
resp.unfiltered_partitions[0].partition.table_name #=> String
resp.unfiltered_partitions[0].partition.creation_time #=> Time
resp.unfiltered_partitions[0].partition.last_access_time #=> Time
resp.unfiltered_partitions[0].partition.storage_descriptor.columns #=> Array
resp.unfiltered_partitions[0].partition.storage_descriptor.columns[0].name #=> String
resp.unfiltered_partitions[0].partition.storage_descriptor.columns[0].type #=> String
resp.unfiltered_partitions[0].partition.storage_descriptor.columns[0].comment #=> String
resp.unfiltered_partitions[0].partition.storage_descriptor.columns[0].parameters #=> Hash
resp.unfiltered_partitions[0].partition.storage_descriptor.columns[0].parameters["KeyString"] #=> String
resp.unfiltered_partitions[0].partition.storage_descriptor.location #=> String
resp.unfiltered_partitions[0].partition.storage_descriptor.additional_locations #=> Array
resp.unfiltered_partitions[0].partition.storage_descriptor.additional_locations[0] #=> String
resp.unfiltered_partitions[0].partition.storage_descriptor.input_format #=> String
resp.unfiltered_partitions[0].partition.storage_descriptor.output_format #=> String
resp.unfiltered_partitions[0].partition.storage_descriptor.compressed #=> Boolean
resp.unfiltered_partitions[0].partition.storage_descriptor.number_of_buckets #=> Integer
resp.unfiltered_partitions[0].partition.storage_descriptor.serde_info.name #=> String
resp.unfiltered_partitions[0].partition.storage_descriptor.serde_info.serialization_library #=> String
resp.unfiltered_partitions[0].partition.storage_descriptor.serde_info.parameters #=> Hash
resp.unfiltered_partitions[0].partition.storage_descriptor.serde_info.parameters["KeyString"] #=> String
resp.unfiltered_partitions[0].partition.storage_descriptor.bucket_columns #=> Array
resp.unfiltered_partitions[0].partition.storage_descriptor.bucket_columns[0] #=> String
resp.unfiltered_partitions[0].partition.storage_descriptor.sort_columns #=> Array
resp.unfiltered_partitions[0].partition.storage_descriptor.sort_columns[0].column #=> String
resp.unfiltered_partitions[0].partition.storage_descriptor.sort_columns[0].sort_order #=> Integer
resp.unfiltered_partitions[0].partition.storage_descriptor.parameters #=> Hash
resp.unfiltered_partitions[0].partition.storage_descriptor.parameters["KeyString"] #=> String
resp.unfiltered_partitions[0].partition.storage_descriptor.skewed_info.skewed_column_names #=> Array
resp.unfiltered_partitions[0].partition.storage_descriptor.skewed_info.skewed_column_names[0] #=> String
resp.unfiltered_partitions[0].partition.storage_descriptor.skewed_info.skewed_column_values #=> Array
resp.unfiltered_partitions[0].partition.storage_descriptor.skewed_info.skewed_column_values[0] #=> String
resp.unfiltered_partitions[0].partition.storage_descriptor.skewed_info.skewed_column_value_location_maps #=> Hash
resp.unfiltered_partitions[0].partition.storage_descriptor.skewed_info.skewed_column_value_location_maps["ColumnValuesString"] #=> String
resp.unfiltered_partitions[0].partition.storage_descriptor.stored_as_sub_directories #=> Boolean
resp.unfiltered_partitions[0].partition.storage_descriptor.schema_reference.schema_id.schema_arn #=> String
resp.unfiltered_partitions[0].partition.storage_descriptor.schema_reference.schema_id.schema_name #=> String
resp.unfiltered_partitions[0].partition.storage_descriptor.schema_reference.schema_id.registry_name #=> String
resp.unfiltered_partitions[0].partition.storage_descriptor.schema_reference.schema_version_id #=> String
resp.unfiltered_partitions[0].partition.storage_descriptor.schema_reference.schema_version_number #=> Integer
resp.unfiltered_partitions[0].partition.parameters #=> Hash
resp.unfiltered_partitions[0].partition.parameters["KeyString"] #=> String
resp.unfiltered_partitions[0].partition.last_analyzed_time #=> Time
resp.unfiltered_partitions[0].partition.catalog_id #=> String
resp.unfiltered_partitions[0].authorized_columns #=> Array
resp.unfiltered_partitions[0].authorized_columns[0] #=> String
resp.unfiltered_partitions[0].is_registered_with_lake_formation #=> Boolean
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :region (String)

    Specified only if the base tables belong to a different Amazon Web Services Region.

  • :catalog_id (required, String)

    The ID of the Data Catalog where the partitions in question reside. If none is provided, the AWS account ID is used by default.

  • :database_name (required, String)

    The name of the catalog database where the partitions reside.

  • :table_name (required, String)

    The name of the table that contains the partition.

  • :expression (String)

    An expression that filters the partitions to be returned.

    The expression uses SQL syntax similar to the SQL WHERE filter clause. The SQL statement parser JSQLParser parses the expression.

    Operators: The following are the operators that you can use in the Expression API call:

    =

    Checks whether the values of the two operands are equal; if so, the condition becomes true.

    Example: Assume 'variable a' holds 10 and 'variable b' holds 20.

    (a = b) is not true.

    < >

    Checks whether the values of the two operands are equal; if they are not equal, the condition becomes true.

    Example: (a < > b) is true.

    >

    Checks whether the value of the left operand is greater than the value of the right operand; if so, the condition becomes true.

    Example: (a > b) is not true.

    <

    Checks whether the value of the left operand is less than the value of the right operand; if so, the condition becomes true.

    Example: (a < b) is true.

    >=

    Checks whether the value of the left operand is greater than or equal to the value of the right operand; if so, the condition becomes true.

    Example: (a >= b) is not true.

    <=

    Checks whether the value of the left operand is less than or equal to the value of the right operand; if so, the condition becomes true.

    Example: (a <= b) is true.

    AND, OR, IN, BETWEEN, LIKE, NOT, IS NULL

    Logical operators.

    Supported Partition Key Types: The following are the supported partition keys.

    • string

    • date

    • timestamp

    • int

    • bigint

    • long

    • tinyint

    • smallint

    • decimal

    If a type that is not valid is encountered, an exception is thrown. (A short filter-expression example follows this method's definition below.)

  • :audit_context (Types::AuditContext)

    A structure containing Lake Formation audit context information.

  • :supported_permission_types (required, Array<String>)

    A list of supported permission types.

  • :next_token (String)

    A continuation token, if this is not the first call to retrieve these partitions.

  • :segment (Types::Segment)

    The segment of the table's partitions to scan in this request.

  • :max_results (Integer)

    The maximum number of partitions to return in a single response.

  • :query_session_context (Types::QuerySessionContext)

    A structure used as a protocol between query engines and Lake Formation or Glue. Contains both a Lake Formation generated authorization identifier and information from the request's authorization context.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 13731

def get_unfiltered_partitions_metadata(params = {}, options = {})
  req = build_request(:get_unfiltered_partitions_metadata, params)
  req.send_request(options)
end
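
A sketch of the :expression filter described above; the partition keys year and month, like the other identifiers, are placeholders:


resp = client.get_unfiltered_partitions_metadata(
  catalog_id: "123456789012",    # placeholder account ID
  database_name: "my_database",  # placeholder database name
  table_name: "my_table",        # placeholder table name
  expression: "year = '2024' AND month IN ('01', '02')",
  supported_permission_types: ["COLUMN_PERMISSION"],
)
resp.unfiltered_partitions.each do |up|
  puts up.partition.values.join("/")
end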

#get_unfiltered_table_metadata(params = {}) ⇒ Types::GetUnfilteredTableMetadataResponse

Allows a third-party analytical engine to retrieve unfiltered table metadata from the Data Catalog.

For IAM authorization, the public IAM action associated with this API is glue:GetTable.

Examples:

Request syntax with placeholder values


resp = client.get_unfiltered_table_metadata({
  region: "ValueString",
  catalog_id: "CatalogIdString", # required
  database_name: "NameString", # required
  name: "NameString", # required
  audit_context: {
    additional_audit_context: "AuditContextString",
    requested_columns: ["ColumnNameString"],
    all_columns_requested: false,
  },
  supported_permission_types: ["COLUMN_PERMISSION"], # required, accepts COLUMN_PERMISSION, CELL_FILTER_PERMISSION, NESTED_PERMISSION, NESTED_CELL_PERMISSION
  parent_resource_arn: "ArnString",
  root_resource_arn: "ArnString",
  supported_dialect: {
    dialect: "REDSHIFT", # accepts REDSHIFT, ATHENA, SPARK
    dialect_version: "ViewDialectVersionString",
  },
  permissions: ["ALL"], # accepts ALL, SELECT, ALTER, DROP, DELETE, INSERT, CREATE_DATABASE, CREATE_TABLE, DATA_LOCATION_ACCESS
  query_session_context: {
    query_id: "HashString",
    query_start_time: Time.now,
    cluster_id: "NullableString",
    query_authorization_id: "HashString",
    additional_context: {
      "ContextKey" => "ContextValue",
    },
  },
})

Response structure


resp.table.name #=> String
resp.table.database_name #=> String
resp.table.description #=> String
resp.table.owner #=> String
resp.table.create_time #=> Time
resp.table.update_time #=> Time
resp.table.last_access_time #=> Time
resp.table.last_analyzed_time #=> Time
resp.table.retention #=> Integer
resp.table.storage_descriptor.columns #=> Array
resp.table.storage_descriptor.columns[0].name #=> String
resp.table.storage_descriptor.columns[0].type #=> String
resp.table.storage_descriptor.columns[0].comment #=> String
resp.table.storage_descriptor.columns[0].parameters #=> Hash
resp.table.storage_descriptor.columns[0].parameters["KeyString"] #=> String
resp.table.storage_descriptor.location #=> String
resp.table.storage_descriptor.additional_locations #=> Array
resp.table.storage_descriptor.additional_locations[0] #=> String
resp.table.storage_descriptor.input_format #=> String
resp.table.storage_descriptor.output_format #=> String
resp.table.storage_descriptor.compressed #=> Boolean
resp.table.storage_descriptor.number_of_buckets #=> Integer
resp.table.storage_descriptor.serde_info.name #=> String
resp.table.storage_descriptor.serde_info.serialization_library #=> String
resp.table.storage_descriptor.serde_info.parameters #=> Hash
resp.table.storage_descriptor.serde_info.parameters["KeyString"] #=> String
resp.table.storage_descriptor.bucket_columns #=> Array
resp.table.storage_descriptor.bucket_columns[0] #=> String
resp.table.storage_descriptor.sort_columns #=> Array
resp.table.storage_descriptor.sort_columns[0].column #=> String
resp.table.storage_descriptor.sort_columns[0].sort_order #=> Integer
resp.table.storage_descriptor.parameters #=> Hash
resp.table.storage_descriptor.parameters["KeyString"] #=> String
resp.table.storage_descriptor.skewed_info.skewed_column_names #=> Array
resp.table.storage_descriptor.skewed_info.skewed_column_names[0] #=> String
resp.table.storage_descriptor.skewed_info.skewed_column_values #=> Array
resp.table.storage_descriptor.skewed_info.skewed_column_values[0] #=> String
resp.table.storage_descriptor.skewed_info.skewed_column_value_location_maps #=> Hash
resp.table.storage_descriptor.skewed_info.skewed_column_value_location_maps["ColumnValuesString"] #=> String
resp.table.storage_descriptor.stored_as_sub_directories #=> Boolean
resp.table.storage_descriptor.schema_reference.schema_id.schema_arn #=> String
resp.table.storage_descriptor.schema_reference.schema_id.schema_name #=> String
resp.table.storage_descriptor.schema_reference.schema_id.registry_name #=> String
resp.table.storage_descriptor.schema_reference.schema_version_id #=> String
resp.table.storage_descriptor.schema_reference.schema_version_number #=> Integer
resp.table.partition_keys #=> Array
resp.table.partition_keys[0].name #=> String
resp.table.partition_keys[0].type #=> String
resp.table.partition_keys[0].comment #=> String
resp.table.partition_keys[0].parameters #=> Hash
resp.table.partition_keys[0].parameters["KeyString"] #=> String
resp.table.view_original_text #=> String
resp.table.view_expanded_text #=> String
resp.table.table_type #=> String
resp.table.parameters #=> Hash
resp.table.parameters["KeyString"] #=> String
resp.table.created_by #=> String
resp.table.is_registered_with_lake_formation #=> Boolean
resp.table.target_table.catalog_id #=> String
resp.table.target_table.database_name #=> String
resp.table.target_table.name #=> String
resp.table.target_table.region #=> String
resp.table.catalog_id #=> String
resp.table.version_id #=> String
resp.table.federated_table.identifier #=> String
resp.table.federated_table.database_identifier #=> String
resp.table.federated_table.connection_name #=> String
resp.table.view_definition.is_protected #=> Boolean
resp.table.view_definition.definer #=> String
resp.table.view_definition.sub_objects #=> Array
resp.table.view_definition.sub_objects[0] #=> String
resp.table.view_definition.representations #=> Array
resp.table.view_definition.representations[0].dialect #=> String, one of "REDSHIFT", "ATHENA", "SPARK"
resp.table.view_definition.representations[0].dialect_version #=> String
resp.table.view_definition.representations[0].view_original_text #=> String
resp.table.view_definition.representations[0].view_expanded_text #=> String
resp.table.view_definition.representations[0].validation_connection #=> String
resp.table.view_definition.representations[0].is_stale #=> Boolean
resp.table.is_multi_dialect_view #=> Boolean
resp.table.status.requested_by #=> String
resp.table.status.updated_by #=> String
resp.table.status.request_time #=> Time
resp.table.status.update_time #=> Time
resp.table.status.action #=> String, one of "UPDATE", "CREATE"
resp.table.status.state #=> String, one of "QUEUED", "IN_PROGRESS", "SUCCESS", "STOPPED", "FAILED"
resp.table.status.error.error_code #=> String
resp.table.status.error.error_message #=> String
resp.table.status.details.requested_change #=> Types::Table
resp.table.status.details.view_validations #=> Array
resp.table.status.details.view_validations[0].dialect #=> String, one of "REDSHIFT", "ATHENA", "SPARK"
resp.table.status.details.view_validations[0].dialect_version #=> String
resp.table.status.details.view_validations[0].view_validation_text #=> String
resp.table.status.details.view_validations[0].update_time #=> Time
resp.table.status.details.view_validations[0].state #=> String, one of "QUEUED", "IN_PROGRESS", "SUCCESS", "STOPPED", "FAILED"
resp.table.status.details.view_validations[0].error.error_code #=> String
resp.table.status.details.view_validations[0].error.error_message #=> String
resp.authorized_columns #=> Array
resp.authorized_columns[0] #=> String
resp.is_registered_with_lake_formation #=> Boolean
resp.cell_filters #=> Array
resp.cell_filters[0].column_name #=> String
resp.cell_filters[0].row_filter_expression #=> String
resp.query_authorization_id #=> String
resp.is_multi_dialect_view #=> Boolean
resp.resource_arn #=> String
resp.is_protected #=> Boolean
resp.permissions #=> Array
resp.permissions[0] #=> String, one of "ALL", "SELECT", "ALTER", "DROP", "DELETE", "INSERT", "CREATE_DATABASE", "CREATE_TABLE", "DATA_LOCATION_ACCESS"
resp.row_filter #=> String

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :region (String)

    Specified only if the base tables belong to a different Amazon Web Services Region.

  • :catalog_id (required, String)

    The catalog ID where the table resides.

  • :database_name (required, String)

    Specifies the name of a database that contains the table.

  • :name (required, String)

    Specifies the name of a table for which you are requesting metadata.

  • :audit_context (Types::AuditContext)

    A structure containing Lake Formation audit context information.

  • :supported_permission_types (required, Array<String>)

    Indicates the level of filtering a third-party analytical engine is capable of enforcing when calling the GetUnfilteredTableMetadata API operation. Accepted values are:

    • COLUMN_PERMISSION - Column permissions ensure that users can access only specific columns in the table. If particular columns contain sensitive data, data lake administrators can define column filters that exclude access to those columns.

    • CELL_FILTER_PERMISSION - Cell-level filtering combines column filtering (include or exclude columns) and row filter expressions to restrict access to individual elements in the table.

    • NESTED_PERMISSION - Nested permissions combine cell-level filtering and nested column filtering to restrict access to columns and nested columns in specific rows, based on row filter expressions.

    • NESTED_CELL_PERMISSION - Nested cell permissions combine nested permissions with nested cell-level filtering, allowing different subsets of nested columns to be restricted based on an array of row filter expressions.

    Note: Each of these permission types follows a hierarchical order where each subsequent permission type includes all permissions of the previous type.

    Important: If you provide a supported permission type that doesn't match the user's level of permissions on the table, Lake Formation raises an exception. For example, if the third-party engine calling the GetUnfilteredTableMetadata operation can enforce only column-level filtering, but the user has nested cell filtering applied on the table, Lake Formation throws an exception and does not return unfiltered table metadata and data access credentials. (A short example requesting cell-level filtering follows this method's definition below.)

  • :parent_resource_arn (String)

    The resource ARN of the view.

  • :root_resource_arn (String)

    The resource ARN of the root view in a chain of nested views.

  • :supported_dialect (Types::SupportedDialect)

    A structure specifying the dialect and dialect version used by the query engine.

  • :permissions (Array<String>)

    The Lake Formation data permissions of the caller on the table. Used to authorize the call when no view context is found.

  • :query_session_context (Types::QuerySessionContext)

    A structure used as a protocol between query engines and Lake Formation or Glue. Contains both a Lake Formation generated authorization identifier and information from the request's authorization context.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 13975

def get_unfiltered_table_metadata(params = {}, options = {})
  req = build_request(:get_unfiltered_table_metadata, params)
  req.send_request(options)
end
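
A sketch that requests cell-level filtering and inspects the returned filters; identifiers are placeholders:


resp = client.get_unfiltered_table_metadata(
  catalog_id: "123456789012",    # placeholder account ID
  database_name: "my_database",  # placeholder database name
  name: "my_table",              # placeholder table name
  supported_permission_types: ["CELL_FILTER_PERMISSION"],
)
puts "registered with Lake Formation: #{resp.is_registered_with_lake_formation}"
resp.cell_filters.each do |filter|
  puts "#{filter.column_name}: #{filter.row_filter_expression}"
end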

#get_usage_profile(params = {}) ⇒ Types::GetUsageProfileResponse

Retrieves information about the specified Glue usage profile.

Examples:

Request syntax with placeholder values


resp = client.get_usage_profile({
  name: "NameString", # required
})

Response structure


resp.name #=> String
resp.description #=> String
resp.configuration.session_configuration #=> Hash
resp.configuration.session_configuration["NameString"].default_value #=> String
resp.configuration.session_configuration["NameString"].allowed_values #=> Array
resp.configuration.session_configuration["NameString"].allowed_values[0] #=> String
resp.configuration.session_configuration["NameString"].min_value #=> String
resp.configuration.session_configuration["NameString"].max_value #=> String
resp.configuration.job_configuration #=> Hash
resp.configuration.job_configuration["NameString"].default_value #=> String
resp.configuration.job_configuration["NameString"].allowed_values #=> Array
resp.configuration.job_configuration["NameString"].allowed_values[0] #=> String
resp.configuration.job_configuration["NameString"].min_value #=> String
resp.configuration.job_configuration["NameString"].max_value #=> String
resp.created_on #=> Time
resp.last_modified_on #=> Time

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :name (required, String)

    The name of the usage profile to retrieve.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 14022

def get_usage_profile(params = {}, options = {})
  req = build_request(:get_usage_profile, params)
  req.send_request(options)
end
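
A sketch that prints the job configuration limits from a profile; the profile name is a placeholder:


resp = client.get_usage_profile(name: "my-profile")
resp.configuration.job_configuration.each do |param, cfg|
  puts "#{param}: default=#{cfg.default_value} min=#{cfg.min_value} max=#{cfg.max_value}"
end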

#get_user_defined_function(params = {}) ⇒ Types::GetUserDefinedFunctionResponse

Retrieves a specified function definition from the Data Catalog.

Examples:

Request syntax with placeholder values


resp = client.get_user_defined_function({
  catalog_id: "CatalogIdString",
  database_name: "NameString", # required
  function_name: "NameString", # required
})

Response structure


resp.user_defined_function.function_name #=> String
resp.user_defined_function.database_name #=> String
resp.user_defined_function.class_name #=> String
resp.user_defined_function.owner_name #=> String
resp.user_defined_function.owner_type #=> String, one of "USER", "ROLE", "GROUP"
resp.user_defined_function.create_time #=> Time
resp.user_defined_function.resource_uris #=> Array
resp.user_defined_function.resource_uris[0].resource_type #=> String, one of "JAR", "FILE", "ARCHIVE"
resp.user_defined_function.resource_uris[0].uri #=> String
resp.user_defined_function.catalog_id #=> String

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :catalog_id (String)

    The ID of the Data Catalog where the function to be retrieved is located. If none is provided, the Amazon Web Services account ID is used by default.

  • :database_name (required, String)

    The name of the catalog database where the function is located.

  • :function_name (required, String)

    The name of the function.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 14069

def get_user_defined_function(params = {}, options = {})
  req = build_request(:get_user_defined_function, params)
  req.send_request(options)
end

#get_user_defined_functions(params = {}) ⇒ Types::GetUserDefinedFunctionsResponse

Retrieves multiple function definitions from the Data Catalog.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.get_user_defined_functions({
  catalog_id: "CatalogIdString",
  database_name: "NameString",
  pattern: "NameString", # required
  next_token: "Token",
  max_results: 1,
})

Response structure


resp.user_defined_functions #=> Array
resp.user_defined_functions[0].function_name #=> String
resp.user_defined_functions[0].database_name #=> String
resp.user_defined_functions[0].class_name #=> String
resp.user_defined_functions[0].owner_name #=> String
resp.user_defined_functions[0].owner_type #=> String, one of "USER", "ROLE", "GROUP"
resp.user_defined_functions[0].create_time #=> Time
resp.user_defined_functions[0].resource_uris #=> Array
resp.user_defined_functions[0].resource_uris[0].resource_type #=> String, one of "JAR", "FILE", "ARCHIVE"
resp.user_defined_functions[0].resource_uris[0].uri #=> String
resp.user_defined_functions[0].catalog_id #=> String
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :catalog_id (String)

    The ID of the Data Catalog where the functions to be retrieved are located. If none is provided, the Amazon Web Services account ID is used by default.

  • :database_name (String)

    The name of the catalog database where the functions are located. If none is provided, functions from all the databases across the catalog will be returned.

  • :pattern (required, String)

    A function-name pattern string that filters the function definitions returned.

  • :next_token (String)

    A continuation token, if this is a continuation call.

  • :max_results (Integer)

    The maximum number of functions to return in one response.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 14132

def get_user_defined_functions(params = {}, options = {})
  req = build_request(:get_user_defined_functions, params)
  req.send_request(options)
end
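
Because the response is Enumerable, iterating yields one page per API call. A sketch, using a placeholder pattern:


client.get_user_defined_functions(pattern: "*").each do |page|
  page.user_defined_functions.each do |fn|
    puts "#{fn.database_name}.#{fn.function_name} -> #{fn.class_name}"
  end
end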

#get_workflow(params = {}) ⇒ Types::GetWorkflowResponse

Retrieves resource metadata for a workflow.

Examples:

Request syntax with placeholder values


resp = client.get_workflow({
  name: "NameString", # required
  include_graph: false,
})

Response structure


resp.workflow.name #=> String
resp.workflow.description #=> String
resp.workflow.default_run_properties #=> Hash
resp.workflow.default_run_properties["IdString"] #=> String
resp.workflow.created_on #=> Time
resp.workflow.last_modified_on #=> Time
resp.workflow.last_run.name #=> String
resp.workflow.last_run.workflow_run_id #=> String
resp.workflow.last_run.previous_run_id #=> String
resp.workflow.last_run.workflow_run_properties #=> Hash
resp.workflow.last_run.workflow_run_properties["IdString"] #=> String
resp.workflow.last_run.started_on #=> Time
resp.workflow.last_run.completed_on #=> Time
resp.workflow.last_run.status #=> String, one of "RUNNING", "COMPLETED", "STOPPING", "STOPPED", "ERROR"
resp.workflow.last_run.error_message #=> String
resp.workflow.last_run.statistics.total_actions #=> Integer
resp.workflow.last_run.statistics.timeout_actions #=> Integer
resp.workflow.last_run.statistics.failed_actions #=> Integer
resp.workflow.last_run.statistics.stopped_actions #=> Integer
resp.workflow.last_run.statistics.succeeded_actions #=> Integer
resp.workflow.last_run.statistics.running_actions #=> Integer
resp.workflow.last_run.statistics.errored_actions #=> Integer
resp.workflow.last_run.statistics.waiting_actions #=> Integer
resp.workflow.last_run.graph.nodes #=> Array
resp.workflow.last_run.graph.nodes[0].type #=> String, one of "CRAWLER", "JOB", "TRIGGER"
resp.workflow.last_run.graph.nodes[0].name #=> String
resp.workflow.last_run.graph.nodes[0].unique_id #=> String
resp.workflow.last_run.graph.nodes[0].trigger_details.trigger.name #=> String
resp.workflow.last_run.graph.nodes[0].trigger_details.trigger.workflow_name #=> String
resp.workflow.last_run.graph.nodes[0].trigger_details.trigger.id #=> String
resp.workflow.last_run.graph.nodes[0].trigger_details.trigger.type #=> String, one of "SCHEDULED", "CONDITIONAL", "ON_DEMAND", "EVENT"
resp.workflow.last_run.graph.nodes[0].trigger_details.trigger.state #=> String, one of "CREATING", "CREATED", "ACTIVATING", "ACTIVATED", "DEACTIVATING", "DEACTIVATED", "DELETING", "UPDATING"
resp.workflow.last_run.graph.nodes[0].trigger_details.trigger.description #=> String
resp.workflow.last_run.graph.nodes[0].trigger_details.trigger.schedule #=> String
resp.workflow.last_run.graph.nodes[0].trigger_details.trigger.actions #=> Array
resp.workflow.last_run.graph.nodes[0].trigger_details.trigger.actions[0].job_name #=> String
resp.workflow.last_run.graph.nodes[0].trigger_details.trigger.actions[0].arguments #=> Hash
resp.workflow.last_run.graph.nodes[0].trigger_details.trigger.actions[0].arguments["GenericString"] #=> String
resp.workflow.last_run.graph.nodes[0].trigger_details.trigger.actions[0].timeout #=> Integer
resp.workflow.last_run.graph.nodes[0].trigger_details.trigger.actions[0].security_configuration #=> String
resp.workflow.last_run.graph.nodes[0].trigger_details.trigger.actions[0].notification_property.notify_delay_after #=> Integer
resp.workflow.last_run.graph.nodes[0].trigger_details.trigger.actions[0].crawler_name #=> String
resp.workflow.last_run.graph.nodes[0].trigger_details.trigger.predicate.logical #=> String, one of "AND", "ANY"
resp.workflow.last_run.graph.nodes[0].trigger_details.trigger.predicate.conditions #=> Array
resp.workflow.last_run.graph.nodes[0].trigger_details.trigger.predicate.conditions[0].logical_operator #=> String, one of "EQUALS"
resp.workflow.last_run.graph.nodes[0].trigger_details.trigger.predicate.conditions[0].job_name #=> String
resp.workflow.last_run.graph.nodes[0].trigger_details.trigger.predicate.conditions[0].state #=> String, one of "STARTING", "RUNNING", "STOPPING", "STOPPED", "SUCCEEDED", "FAILED", "TIMEOUT", "ERROR", "WAITING", "EXPIRED"
resp.workflow.last_run.graph.nodes[0].trigger_details.trigger.predicate.conditions[0].crawler_name #=> String
resp.workflow.last_run.graph.nodes[0].trigger_details.trigger.predicate.conditions[0].crawl_state #=> String, one of "RUNNING", "CANCELLING", "CANCELLED", "SUCCEEDED", "FAILED", "ERROR"
resp.workflow.last_run.graph.nodes[0].trigger_details.trigger.event_batching_condition.batch_size #=> Integer
resp.workflow.last_run.graph.nodes[0].trigger_details.trigger.event_batching_condition.batch_window #=> Integer
resp.workflow.last_run.graph.nodes[0].job_details.job_runs #=> Array
resp.workflow.last_run.graph.nodes[0].job_details.job_runs[0].id #=> String
resp.workflow.last_run.graph.nodes[0].job_details.job_runs[0].attempt #=> Integer
resp.workflow.last_run.graph.nodes[0].job_details.job_runs[0].previous_run_id #=> String
resp.workflow.last_run.graph.nodes[0].job_details.job_runs[0].trigger_name #=> String
resp.workflow.last_run.graph.nodes[0].job_details.job_runs[0].job_name #=> String
resp.workflow.last_run.graph.nodes[0].job_details.job_runs[0].job_mode #=> String, one of "SCRIPT", "VISUAL", "NOTEBOOK"
resp.workflow.last_run.graph.nodes[0].job_details.job_runs[0].job_run_queuing_enabled #=> Boolean
resp.workflow.last_run.graph.nodes[0].job_details.job_runs[0].started_on #=> Time
resp.workflow.last_run.graph.nodes[0].job_details.job_runs[0].last_modified_on #=> Time
resp.workflow.last_run.graph.nodes[0].job_details.job_runs[0].completed_on #=> Time
resp.workflow.last_run.graph.nodes[0].job_details.job_runs[0].job_run_state #=> String, one of "STARTING", "RUNNING", "STOPPING", "STOPPED", "SUCCEEDED", "FAILED", "TIMEOUT", "ERROR", "WAITING", "EXPIRED"
resp.workflow.last_run.graph.nodes[0].job_details.job_runs[0].arguments #=> Hash
resp.workflow.last_run.graph.nodes[0].job_details.job_runs[0].arguments["GenericString"] #=> String
resp.workflow.last_run.graph.nodes[0].job_details.job_runs[0].error_message #=> String
resp.workflow.last_run.graph.nodes[0].job_details.job_runs[0].predecessor_runs #=> Array
resp.workflow.last_run.graph.nodes[0].job_details.job_runs[0].predecessor_runs[0].job_name #=> String
resp.workflow.last_run.graph.nodes[0].job_details.job_runs[0].predecessor_runs[0].run_id #=> String
resp.workflow.last_run.graph.nodes[0].job_details.job_runs[0].allocated_capacity #=> Integer
resp.workflow.last_run.graph.nodes[0].job_details.job_runs[0].execution_time #=> Integer
resp.workflow.last_run.graph.nodes[0].job_details.job_runs[0].timeout #=> Integer
resp.workflow.last_run.graph.nodes[0].job_details.job_runs[0].max_capacity #=> Float
resp.workflow.last_run.graph.nodes[0].job_details.job_runs[0].worker_type #=> String, one of "Standard", "G.1X", "G.2X", "G.025X", "G.4X", "G.8X", "Z.2X"
resp.workflow.last_run.graph.nodes[0].job_details.job_runs[0].number_of_workers #=> Integer
resp.workflow.last_run.graph.nodes[0].job_details.job_runs[0].security_configuration #=> String
resp.workflow.last_run.graph.nodes[0].job_details.job_runs[0].log_group_name #=> String
resp.workflow.last_run.graph.nodes[0].job_details.job_runs[0].notification_property.notify_delay_after #=> Integer
resp.workflow.last_run.graph.nodes[0].job_details.job_runs[0].glue_version #=> String
resp.workflow.last_run.graph.nodes[0].job_details.job_runs[0].dpu_seconds #=> Float
resp.workflow.last_run.graph.nodes[0].job_details.job_runs[0].execution_class #=> String, one of "FLEX", "STANDARD"
resp.workflow.last_run.graph.nodes[0].job_details.job_runs[0].maintenance_window #=> String
resp.workflow.last_run.graph.nodes[0].job_details.job_runs[0].profile_name #=> String
resp.workflow.last_run.graph.nodes[0].job_details.job_runs[0].state_detail #=> String
resp.workflow.last_run.graph.nodes[0].crawler_details.crawls #=> Array
resp.workflow.last_run.graph.nodes[0].crawler_details.crawls[0].state #=> String, one of "RUNNING", "CANCELLING", "CANCELLED", "SUCCEEDED", "FAILED", "ERROR"
resp.workflow.last_run.graph.nodes[0].crawler_details.crawls[0].started_on #=> Time
resp.workflow.last_run.graph.nodes[0].crawler_details.crawls[0].completed_on #=> Time
resp.workflow.last_run.graph.nodes[0].crawler_details.crawls[0].error_message #=> String
resp.workflow.last_run.graph.nodes[0].crawler_details.crawls[0].log_group #=> String
resp.workflow.last_run.graph.nodes[0].crawler_details.crawls[0].log_stream #=> String
resp.workflow.last_run.graph.edges #=> Array
resp.workflow.last_run.graph.edges[0].source_id #=> String
resp.workflow.last_run.graph.edges[0].destination_id #=> String
resp.workflow.last_run.starting_event_batch_condition.batch_size #=> Integer
resp.workflow.last_run.starting_event_batch_condition.batch_window #=> Integer
resp.workflow.graph.nodes #=> Array
resp.workflow.graph.nodes[0].type #=> String, one of "CRAWLER", "JOB", "TRIGGER"
resp.workflow.graph.nodes[0].name #=> String
resp.workflow.graph.nodes[0].unique_id #=> String
resp.workflow.graph.nodes[0].trigger_details.trigger.name #=> String
resp.workflow.graph.nodes[0].trigger_details.trigger.workflow_name #=> String
resp.workflow.graph.nodes[0].trigger_details.trigger.id #=> String
resp.workflow.graph.nodes[0].trigger_details.trigger.type #=> String, one of "SCHEDULED", "CONDITIONAL", "ON_DEMAND", "EVENT"
resp.workflow.graph.nodes[0].trigger_details.trigger.state #=> String, one of "CREATING", "CREATED", "ACTIVATING", "ACTIVATED", "DEACTIVATING", "DEACTIVATED", "DELETING", "UPDATING"
resp.workflow.graph.nodes[0].trigger_details.trigger.description #=> String
resp.workflow.graph.nodes[0].trigger_details.trigger.schedule #=> String
resp.workflow.graph.nodes[0].trigger_details.trigger.actions #=> Array
resp.workflow.graph.nodes[0].trigger_details.trigger.actions[0].job_name #=> String
resp.workflow.graph.nodes[0].trigger_details.trigger.actions[0].arguments #=> Hash
resp.workflow.graph.nodes[0].trigger_details.trigger.actions[0].arguments["GenericString"] #=> String
resp.workflow.graph.nodes[0].trigger_details.trigger.actions[0].timeout #=> Integer
resp.workflow.graph.nodes[0].trigger_details.trigger.actions[0].security_configuration #=> String
resp.workflow.graph.nodes[0].trigger_details.trigger.actions[0].notification_property.notify_delay_after #=> Integer
resp.workflow.graph.nodes[0].trigger_details.trigger.actions[0].crawler_name #=> String
resp.workflow.graph.nodes[0].trigger_details.trigger.predicate.logical #=> String, one of "AND", "ANY"
resp.workflow.graph.nodes[0].trigger_details.trigger.predicate.conditions #=> Array
resp.workflow.graph.nodes[0].trigger_details.trigger.predicate.conditions[0].logical_operator #=> String, one of "EQUALS"
resp.workflow.graph.nodes[0].trigger_details.trigger.predicate.conditions[0].job_name #=> String
resp.workflow.graph.nodes[0].trigger_details.trigger.predicate.conditions[0].state #=> String, one of "STARTING", "RUNNING", "STOPPING", "STOPPED", "SUCCEEDED", "FAILED", "TIMEOUT", "ERROR", "WAITING", "EXPIRED"
resp.workflow.graph.nodes[0].trigger_details.trigger.predicate.conditions[0].crawler_name #=> String
resp.workflow.graph.nodes[0].trigger_details.trigger.predicate.conditions[0].crawl_state #=> String, one of "RUNNING", "CANCELLING", "CANCELLED", "SUCCEEDED", "FAILED", "ERROR"
resp.workflow.graph.nodes[0].trigger_details.trigger.event_batching_condition.batch_size #=> Integer
resp.workflow.graph.nodes[0].trigger_details.trigger.event_batching_condition.batch_window #=> Integer
resp.workflow.graph.nodes[0].job_details.job_runs #=> Array
resp.workflow.graph.nodes[0].job_details.job_runs[0].id #=> String
resp.workflow.graph.nodes[0].job_details.job_runs[0].attempt #=> Integer
resp.workflow.graph.nodes[0].job_details.job_runs[0].previous_run_id #=> String
resp.workflow.graph.nodes[0].job_details.job_runs[0].trigger_name #=> String
resp.workflow.graph.nodes[0].job_details.job_runs[0].job_name #=> String
resp.workflow.graph.nodes[0].job_details.job_runs[0].job_mode #=> String, one of "SCRIPT", "VISUAL", "NOTEBOOK"
resp.workflow.graph.nodes[0].job_details.job_runs[0].job_run_queuing_enabled #=> Boolean
resp.workflow.graph.nodes[0].job_details.job_runs[0].started_on #=> Time
resp.workflow.graph.nodes[0].job_details.job_runs[0].last_modified_on #=> Time
resp.workflow.graph.nodes[0].job_details.job_runs[0].completed_on #=> Time
resp.workflow.graph.nodes[0].job_details.job_runs[0].job_run_state #=> String, one of "STARTING", "RUNNING", "STOPPING", "STOPPED", "SUCCEEDED", "FAILED", "TIMEOUT", "ERROR", "WAITING", "EXPIRED"
resp.workflow.graph.nodes[0].job_details.job_runs[0].arguments #=> Hash
resp.workflow.graph.nodes[0].job_details.job_runs[0].arguments["GenericString"] #=> String
resp.workflow.graph.nodes[0].job_details.job_runs[0].error_message #=> String
resp.workflow.graph.nodes[0].job_details.job_runs[0].predecessor_runs #=> Array
resp.workflow.graph.nodes[0].job_details.job_runs[0].predecessor_runs[0].job_name #=> String
resp.workflow.graph.nodes[0].job_details.job_runs[0].predecessor_runs[0].run_id #=> String
resp.workflow.graph.nodes[0].job_details.job_runs[0].allocated_capacity #=> Integer
resp.workflow.graph.nodes[0].job_details.job_runs[0].execution_time #=> Integer
resp.workflow.graph.nodes[0].job_details.job_runs[0].timeout #=> Integer
resp.workflow.graph.nodes[0].job_details.job_runs[0].max_capacity #=> Float
resp.workflow.graph.nodes[0].job_details.job_runs[0].worker_type #=> String, one of "Standard", "G.1X", "G.2X", "G.025X", "G.4X", "G.8X", "Z.2X"
resp.workflow.graph.nodes[0].job_details.job_runs[0].number_of_workers #=> Integer
resp.workflow.graph.nodes[0].job_details.job_runs[0].security_configuration #=> String
resp.workflow.graph.nodes[0].job_details.job_runs[0].log_group_name #=> String
resp.workflow.graph.nodes[0].job_details.job_runs[0].notification_property.notify_delay_after #=> Integer
resp.workflow.graph.nodes[0].job_details.job_runs[0].glue_version #=> String
resp.workflow.graph.nodes[0].job_details.job_runs[0].dpu_seconds #=> Float
resp.workflow.graph.nodes[0].job_details.job_runs[0].execution_class #=> String, one of "FLEX", "STANDARD"
resp.workflow.graph.nodes[0].job_details.job_runs[0].maintenance_window #=> String
resp.workflow.graph.nodes[0].job_details.job_runs[0].profile_name #=> String
resp.workflow.graph.nodes[0].job_details.job_runs[0].state_detail #=> String
resp.workflow.graph.nodes[0].crawler_details.crawls #=> Array
resp.workflow.graph.nodes[0].crawler_details.crawls[0].state #=> String, one of "RUNNING", "CANCELLING", "CANCELLED", "SUCCEEDED", "FAILED", "ERROR"
resp.workflow.graph.nodes[0].crawler_details.crawls[0].started_on #=> Time
resp.workflow.graph.nodes[0].crawler_details.crawls[0].completed_on #=> Time
resp.workflow.graph.nodes[0].crawler_details.crawls[0].error_message #=> String
resp.workflow.graph.nodes[0].crawler_details.crawls[0].log_group #=> String
resp.workflow.graph.nodes[0].crawler_details.crawls[0].log_stream #=> String
resp.workflow.graph.edges #=> Array
resp.workflow.graph.edges[0].source_id #=> String
resp.workflow.graph.edges[0].destination_id #=> String
resp.workflow.max_concurrent_runs #=> Integer
resp.workflow.blueprint_details.blueprint_name #=> String
resp.workflow.blueprint_details.run_id #=> String

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :name (required, String)

    The name of the workflow to retrieve.

  • :include_graph (Boolean)

    Specifies whether to include a graph when returning the workflow resource metadata.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 14334

def get_workflow(params = {}, options = {})
  req = build_request(:get_workflow, params)
  req.send_request(options)
end
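
A sketch that walks the returned workflow graph; the workflow name is a placeholder:


resp = client.get_workflow(name: "my-workflow", include_graph: true)
resp.workflow.graph.nodes.each do |node|
  puts "#{node.type}: #{node.name}"
end
resp.workflow.graph.edges.each do |edge|
  puts "#{edge.source_id} -> #{edge.destination_id}"
end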

#get_workflow_run(params = {}) ⇒ Types::GetWorkflowRunResponse

Retrieves the metadata for a given workflow run. Job run history is accessible for 90 days for your workflow and job run.

Examples:

Request syntax with placeholder values


resp = client.get_workflow_run({
  name: "NameString", # required
  run_id: "IdString", # required
  include_graph: false,
})

Response structure


resp.run.name #=> String
resp.run.workflow_run_id #=> String
resp.run.previous_run_id #=> String
resp.run.workflow_run_properties #=> Hash
resp.run.workflow_run_properties["IdString"] #=> String
resp.run.started_on #=> Time
resp.run.completed_on #=> Time
resp.run.status #=> String, one of "RUNNING", "COMPLETED", "STOPPING", "STOPPED", "ERROR"
resp.run.error_message #=> String
resp.run.statistics.total_actions #=> Integer
resp.run.statistics.timeout_actions #=> Integer
resp.run.statistics.failed_actions #=> Integer
resp.run.statistics.stopped_actions #=> Integer
resp.run.statistics.succeeded_actions #=> Integer
resp.run.statistics.running_actions #=> Integer
resp.run.statistics.errored_actions #=> Integer
resp.run.statistics.waiting_actions #=> Integer
resp.run.graph.nodes #=> Array
resp.run.graph.nodes[0].type #=> String, one of "CRAWLER", "JOB", "TRIGGER"
resp.run.graph.nodes[0].name #=> String
resp.run.graph.nodes[0].unique_id #=> String
resp.run.graph.nodes[0].trigger_details.trigger.name #=> String
resp.run.graph.nodes[0].trigger_details.trigger.workflow_name #=> String
resp.run.graph.nodes[0].trigger_details.trigger.id #=> String
resp.run.graph.nodes[0].trigger_details.trigger.type #=> String, one of "SCHEDULED", "CONDITIONAL", "ON_DEMAND", "EVENT"
resp.run.graph.nodes[0].trigger_details.trigger.state #=> String, one of "CREATING", "CREATED", "ACTIVATING", "ACTIVATED", "DEACTIVATING", "DEACTIVATED", "DELETING", "UPDATING"
resp.run.graph.nodes[0].trigger_details.trigger.description #=> String
resp.run.graph.nodes[0].trigger_details.trigger.schedule #=> String
resp.run.graph.nodes[0].trigger_details.trigger.actions #=> Array
resp.run.graph.nodes[0].trigger_details.trigger.actions[0].job_name #=> String
resp.run.graph.nodes[0].trigger_details.trigger.actions[0].arguments #=> Hash
resp.run.graph.nodes[0].trigger_details.trigger.actions[0].arguments["GenericString"] #=> String
resp.run.graph.nodes[0].trigger_details.trigger.actions[0].timeout #=> Integer
resp.run.graph.nodes[0].trigger_details.trigger.actions[0].security_configuration #=> String
resp.run.graph.nodes[0].trigger_details.trigger.actions[0].notification_property.notify_delay_after #=> Integer
resp.run.graph.nodes[0].trigger_details.trigger.actions[0].crawler_name #=> String
resp.run.graph.nodes[0].trigger_details.trigger.predicate.logical #=> String, one of "AND", "ANY"
resp.run.graph.nodes[0].trigger_details.trigger.predicate.conditions #=> Array
resp.run.graph.nodes[0].trigger_details.trigger.predicate.conditions[0].logical_operator #=> String, one of "EQUALS"
resp.run.graph.nodes[0].trigger_details.trigger.predicate.conditions[0].job_name #=> String
resp.run.graph.nodes[0].trigger_details.trigger.predicate.conditions[0].state #=> String, one of "STARTING", "RUNNING", "STOPPING", "STOPPED", "SUCCEEDED", "FAILED", "TIMEOUT", "ERROR", "WAITING", "EXPIRED"
resp.run.graph.nodes[0].trigger_details.trigger.predicate.conditions[0].crawler_name #=> String
resp.run.graph.nodes[0].trigger_details.trigger.predicate.conditions[0].crawl_state #=> String, one of "RUNNING", "CANCELLING", "CANCELLED", "SUCCEEDED", "FAILED", "ERROR"
resp.run.graph.nodes[0].trigger_details.trigger.event_batching_condition.batch_size #=> Integer
resp.run.graph.nodes[0].trigger_details.trigger.event_batching_condition.batch_window #=> Integer
resp.run.graph.nodes[0].job_details.job_runs #=> Array
resp.run.graph.nodes[0].job_details.job_runs[0].id #=> String
resp.run.graph.nodes[0].job_details.job_runs[0].attempt #=> Integer
resp.run.graph.nodes[0].job_details.job_runs[0].previous_run_id #=> String
resp.run.graph.nodes[0].job_details.job_runs[0].trigger_name #=> String
resp.run.graph.nodes[0].job_details.job_runs[0].job_name #=> String
resp.run.graph.nodes[0].job_details.job_runs[0].job_mode #=> String, one of "SCRIPT", "VISUAL", "NOTEBOOK"
resp.run.graph.nodes[0].job_details.job_runs[0].job_run_queuing_enabled #=> Boolean
resp.run.graph.nodes[0].job_details.job_runs[0].started_on #=> Time
resp.run.graph.nodes[0].job_details.job_runs[0].last_modified_on #=> Time
resp.run.graph.nodes[0].job_details.job_runs[0].completed_on #=> Time
resp.run.graph.nodes[0].job_details.job_runs[0].job_run_state #=> String, one of "STARTING", "RUNNING", "STOPPING", "STOPPED", "SUCCEEDED", "FAILED", "TIMEOUT", "ERROR", "WAITING", "EXPIRED"
resp.run.graph.nodes[0].job_details.job_runs[0].arguments #=> Hash
resp.run.graph.nodes[0].job_details.job_runs[0].arguments["GenericString"] #=> String
resp.run.graph.nodes[0].job_details.job_runs[0].error_message #=> String
resp.run.graph.nodes[0].job_details.job_runs[0].predecessor_runs #=> Array
resp.run.graph.nodes[0].job_details.job_runs[0].predecessor_runs[0].job_name #=> String
resp.run.graph.nodes[0].job_details.job_runs[0].predecessor_runs[0].run_id #=> String
resp.run.graph.nodes[0].job_details.job_runs[0].allocated_capacity #=> Integer
resp.run.graph.nodes[0].job_details.job_runs[0].execution_time #=> Integer
resp.run.graph.nodes[0].job_details.job_runs[0].timeout #=> Integer
resp.run.graph.nodes[0].job_details.job_runs[0].max_capacity #=> Float
resp.run.graph.nodes[0].job_details.job_runs[0].worker_type #=> String, one of "Standard", "G.1X", "G.2X", "G.025X", "G.4X", "G.8X", "Z.2X"
resp.run.graph.nodes[0].job_details.job_runs[0].number_of_workers #=> Integer
resp.run.graph.nodes[0].job_details.job_runs[0].security_configuration #=> String
resp.run.graph.nodes[0].job_details.job_runs[0].log_group_name #=> String
resp.run.graph.nodes[0].job_details.job_runs[0].notification_property.notify_delay_after #=> Integer
resp.run.graph.nodes[0].job_details.job_runs[0].glue_version #=> String
resp.run.graph.nodes[0].job_details.job_runs[0].dpu_seconds #=> Float
resp.run.graph.nodes[0].job_details.job_runs[0].execution_class #=> String, one of "FLEX", "STANDARD"
resp.run.graph.nodes[0].job_details.job_runs[0].maintenance_window #=> String
resp.run.graph.nodes[0].job_details.job_runs[0].profile_name #=> String
resp.run.graph.nodes[0].job_details.job_runs[0].state_detail #=> String
resp.run.graph.nodes[0].crawler_details.crawls #=> Array
resp.run.graph.nodes[0].crawler_details.crawls[0].state #=> String, one of "RUNNING", "CANCELLING", "CANCELLED", "SUCCEEDED", "FAILED", "ERROR"
resp.run.graph.nodes[0].crawler_details.crawls[0].started_on #=> Time
resp.run.graph.nodes[0].crawler_details.crawls[0].completed_on #=> Time
resp.run.graph.nodes[0].crawler_details.crawls[0].error_message #=> String
resp.run.graph.nodes[0].crawler_details.crawls[0].log_group #=> String
resp.run.graph.nodes[0].crawler_details.crawls[0].log_stream #=> String
resp.run.graph.edges #=> Array
resp.run.graph.edges[0].source_id #=> String
resp.run.graph.edges[0].destination_id #=> String
resp.run.starting_event_batch_condition.batch_size #=> Integer
resp.run.starting_event_batch_condition.batch_window #=> Integer

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :name (required, String)

    Name of the workflow being run.

  • :run_id (required, String)

    The ID of the workflow run.

  • :include_graph (Boolean)

    Specifies whether to include the workflow graph in the response.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 14460

def get_workflow_run(params = {}, options = {})
  req = build_request(:get_workflow_run, params)
  req.send_request(options)
end
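
A sketch that summarizes a run from its statistics; both arguments are placeholders:


resp = client.get_workflow_run(name: "my-workflow", run_id: "placeholder-run-id")
stats = resp.run.statistics
puts "#{resp.run.status}: #{stats.succeeded_actions} succeeded, " \
     "#{stats.failed_actions} failed, #{stats.running_actions} running"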

#get_workflow_run_properties(params = {}) ⇒ Types::GetWorkflowRunPropertiesResponse

Retrieves the workflow run properties which were set during the run.

Examples:

Request syntax with placeholder values


resp = client.get_workflow_run_properties({
  name: "NameString", # required
  run_id: "IdString", # required
})

Response structure


resp.run_properties #=> Hash
resp.run_properties["IdString"] #=> String

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :name (required, String)

    Name of the workflow which was run.

  • :run_id (required, String)

    The ID of the workflow run whose run properties should be returned.

Returns:

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 14493

def get_workflow_run_properties(params = {}, options = {})
  req = build_request(:get_workflow_run_properties, params)
  req.send_request(options)
end
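
A sketch that dumps the run properties; both arguments are placeholders:


props = client.get_workflow_run_properties(
  name: "my-workflow",           # placeholder workflow name
  run_id: "placeholder-run-id",  # placeholder run ID
).run_properties
props.each { |key, value| puts "#{key} => #{value}" }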

#get_workflow_runs(params = {}) ⇒ Types::GetWorkflowRunsResponse

Retrieves metadata for all runs of a given workflow.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.get_workflow_runs({
  name: "NameString", # required
  include_graph: false,
  next_token: "GenericString",
  max_results: 1,
})

Response structure


resp.runs #=> Array
resp.runs[0].name #=> String
resp.runs[0].workflow_run_id #=> String
resp.runs[0].previous_run_id #=> String
resp.runs[0].workflow_run_properties #=> Hash
resp.runs[0].workflow_run_properties["IdString"] #=> String
resp.runs[0].started_on #=> Time
resp.runs[0].completed_on #=> Time
resp.runs[0].status #=> String, one of "RUNNING", "COMPLETED", "STOPPING", "STOPPED", "ERROR"
resp.runs[0].error_message #=> String
resp.runs[0].statistics.total_actions #=> Integer
resp.runs[0].statistics.timeout_actions #=> Integer
resp.runs[0].statistics.failed_actions #=> Integer
resp.runs[0].statistics.stopped_actions #=> Integer
resp.runs[0].statistics.succeeded_actions #=> Integer
resp.runs[0].statistics.running_actions #=> Integer
resp.runs[0].statistics.errored_actions #=> Integer
resp.runs[0].statistics.waiting_actions #=> Integer
resp.runs[0].graph.nodes #=> Array
resp.runs[0].graph.nodes[0].type #=> String, one of "CRAWLER", "JOB", "TRIGGER"
resp.runs[0].graph.nodes[0].name #=> String
resp.runs[0].graph.nodes[0].unique_id #=> String
resp.runs[0].graph.nodes[0].trigger_details.trigger.name #=> String
resp.runs[0].graph.nodes[0].trigger_details.trigger.workflow_name #=> String
resp.runs[0].graph.nodes[0].trigger_details.trigger.id #=> String
resp.runs[0].graph.nodes[0].trigger_details.trigger.type #=> String, one of "SCHEDULED", "CONDITIONAL", "ON_DEMAND", "EVENT"
resp.runs[0].graph.nodes[0].trigger_details.trigger.state #=> String, one of "CREATING", "CREATED", "ACTIVATING", "ACTIVATED", "DEACTIVATING", "DEACTIVATED", "DELETING", "UPDATING"
resp.runs[0].graph.nodes[0].trigger_details.trigger.description #=> String
resp.runs[0].graph.nodes[0].trigger_details.trigger.schedule #=> String
resp.runs[0].graph.nodes[0].trigger_details.trigger.actions #=> Array
resp.runs[0].graph.nodes[0].trigger_details.trigger.actions[0].job_name #=> String
resp.runs[0].graph.nodes[0].trigger_details.trigger.actions[0].arguments #=> Hash
resp.runs[0].graph.nodes[0].trigger_details.trigger.actions[0].arguments["GenericString"] #=> String
resp.runs[0].graph.nodes[0].trigger_details.trigger.actions[0].timeout #=> Integer
resp.runs[0].graph.nodes[0].trigger_details.trigger.actions[0].security_configuration #=> String
resp.runs[0].graph.nodes[0].trigger_details.trigger.actions[0].notification_property.notify_delay_after #=> Integer
resp.runs[0].graph.nodes[0].trigger_details.trigger.actions[0].crawler_name #=> String
resp.runs[0].graph.nodes[0].trigger_details.trigger.predicate.logical #=> String, one of "AND", "ANY"
resp.runs[0].graph.nodes[0].trigger_details.trigger.predicate.conditions #=> Array
resp.runs[0].graph.nodes[0].trigger_details.trigger.predicate.conditions[0].logical_operator #=> String, one of "EQUALS"
resp.runs[0].graph.nodes[0].trigger_details.trigger.predicate.conditions[0].job_name #=> String
resp.runs[0].graph.nodes[0].trigger_details.trigger.predicate.conditions[0].state #=> String, one of "STARTING", "RUNNING", "STOPPING", "STOPPED", "SUCCEEDED", "FAILED", "TIMEOUT", "ERROR", "WAITING", "EXPIRED"
resp.runs[0].graph.nodes[0].trigger_details.trigger.predicate.conditions[0].crawler_name #=> String
resp.runs[0].graph.nodes[0].trigger_details.trigger.predicate.conditions[0].crawl_state #=> String, one of "RUNNING", "CANCELLING", "CANCELLED", "SUCCEEDED", "FAILED", "ERROR"
resp.runs[0].graph.nodes[0].trigger_details.trigger.event_batching_condition.batch_size #=> Integer
resp.runs[0].graph.nodes[0].trigger_details.trigger.event_batching_condition.batch_window #=> Integer
resp.runs[0].graph.nodes[0].job_details.job_runs #=> Array
resp.runs[0].graph.nodes[0].job_details.job_runs[0].id #=> String
resp.runs[0].graph.nodes[0].job_details.job_runs[0].attempt #=> Integer
resp.runs[0].graph.nodes[0].job_details.job_runs[0].previous_run_id #=> String
resp.runs[0].graph.nodes[0].job_details.job_runs[0].trigger_name #=> String
resp.runs[0].graph.nodes[0].job_details.job_runs[0].job_name #=> String
resp.runs[0].graph.nodes[0].job_details.job_runs[0].job_mode #=> String, one of "SCRIPT", "VISUAL", "NOTEBOOK"
resp.runs[0].graph.nodes[0].job_details.job_runs[0].job_run_queuing_enabled #=> Boolean
resp.runs[0].graph.nodes[0].job_details.job_runs[0].started_on #=> Time
resp.runs[0].graph.nodes[0].job_details.job_runs[0].last_modified_on #=> Time
resp.runs[0].graph.nodes[0].job_details.job_runs[0].completed_on #=> Time
resp.runs[0].graph.nodes[0].job_details.job_runs[0].job_run_state #=> String, one of "STARTING", "RUNNING", "STOPPING", "STOPPED", "SUCCEEDED", "FAILED", "TIMEOUT", "ERROR", "WAITING", "EXPIRED"
resp.runs[0].graph.nodes[0].job_details.job_runs[0].arguments #=> Hash
resp.runs[0].graph.nodes[0].job_details.job_runs[0].arguments["GenericString"] #=> String
resp.runs[0].graph.nodes[0].job_details.job_runs[0].error_message #=> String
resp.runs[0].graph.nodes[0].job_details.job_runs[0].predecessor_runs #=> Array
resp.runs[0].graph.nodes[0].job_details.job_runs[0].predecessor_runs[0].job_name #=> String
resp.runs[0].graph.nodes[0].job_details.job_runs[0].predecessor_runs[0].run_id #=> String
resp.runs[0].graph.nodes[0].job_details.job_runs[0].allocated_capacity #=> Integer
resp.runs[0].graph.nodes[0].job_details.job_runs[0].execution_time #=> Integer
resp.runs[0].graph.nodes[0].job_details.job_runs[0].timeout #=> Integer
resp.runs[0].graph.nodes[0].job_details.job_runs[0].max_capacity #=> Float
resp.runs[0].graph.nodes[0].job_details.job_runs[0].worker_type #=> String, one of "Standard", "G.1X", "G.2X", "G.025X", "G.4X", "G.8X", "Z.2X"
resp.runs[0].graph.nodes[0].job_details.job_runs[0].number_of_workers #=> Integer
resp.runs[0].graph.nodes[0].job_details.job_runs[0].security_configuration #=> String
resp.runs[0].graph.nodes[0].job_details.job_runs[0].log_group_name #=> String
resp.runs[0].graph.nodes[0].job_details.job_runs[0].notification_property.notify_delay_after #=> Integer
resp.runs[0].graph.nodes[0].job_details.job_runs[0].glue_version #=> String
resp.runs[0].graph.nodes[0].job_details.job_runs[0].dpu_seconds #=> Float
resp.runs[0].graph.nodes[0].job_details.job_runs[0].execution_class #=> String, one of "FLEX", "STANDARD"
resp.runs[0].graph.nodes[0].job_details.job_runs[0].maintenance_window #=> String
resp.runs[0].graph.nodes[0].job_details.job_runs[0].profile_name #=> String
resp.runs[0].graph.nodes[0].job_details.job_runs[0].state_detail #=> String
resp.runs[0].graph.nodes[0].crawler_details.crawls #=> Array
resp.runs[0].graph.nodes[0].crawler_details.crawls[0].state #=> String, one of "RUNNING", "CANCELLING", "CANCELLED", "SUCCEEDED", "FAILED", "ERROR"
resp.runs[0].graph.nodes[0].crawler_details.crawls[0].started_on #=> Time
resp.runs[0].graph.nodes[0].crawler_details.crawls[0].completed_on #=> Time
resp.runs[0].graph.nodes[0].crawler_details.crawls[0].error_message #=> String
resp.runs[0].graph.nodes[0].crawler_details.crawls[0].log_group #=> String
resp.runs[0].graph.nodes[0].crawler_details.crawls[0].log_stream #=> String
resp.runs[0].graph.edges #=> Array
resp.runs[0].graph.edges[0].source_id #=> String
resp.runs[0].graph.edges[0].destination_id #=> String
resp.runs[0].starting_event_batch_condition.batch_size #=> Integer
resp.runs[0].starting_event_batch_condition.batch_window #=> Integer
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :name (required, String)

    Name of the workflow whose run metadata should be returned.

  • :include_graph (Boolean)

    Specifies whether to include the workflow graph in the response or not.

  • :next_token (String)

    A continuation token, if this is a continuation request.

  • :max_results (Integer)

    The maximum number of workflow runs to be included in the response.

Returns:

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 14627

def get_workflow_runs(params = {}, options = {})
  req = build_request(:get_workflow_runs, params)
  req.send_request(options)
end

#import_catalog_to_glue(params = {}) ⇒ Struct

Imports an existing Amazon Athena Data Catalog to Glue.

Examples:

Request syntax with placeholder values


resp = client.import_catalog_to_glue({
  catalog_id: "CatalogIdString",
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :catalog_id (String)

    The ID of the catalog to import. Currently, this should be the Amazon Web Services account ID.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 14650

def import_catalog_to_glue(params = {}, options = {})
  req = build_request(:import_catalog_to_glue, params)
  req.send_request(options)
end

#list_blueprints(params = {}) ⇒ Types::ListBlueprintsResponse

Lists all the blueprint names in an account.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.list_blueprints({
  next_token: "GenericString",
  max_results: 1,
  tags: {
    "TagKey" => "TagValue",
  },
})

Response structure


resp.blueprints #=> Array
resp.blueprints[0] #=> String
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :next_token (String)

    A continuation token, if this is a continuation request.

  • :max_results (Integer)

    The maximum size of a list to return.

  • :tags (Hash<String,String>)

    Filters the list by an Amazon Web Services resource tag.

Returns:

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 14693

def list_blueprints(params = {}, options = {})
  req = build_request(:list_blueprints, params)
  req.send_request(options)
end

#list_column_statistics_task_runs(params = {}) ⇒ Types::ListColumnStatisticsTaskRunsResponse

Lists all task runs for a particular account.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.list_column_statistics_task_runs({
  max_results: 1,
  next_token: "Token",
})

Response structure


resp.column_statistics_task_run_ids #=> Array
resp.column_statistics_task_run_ids[0] #=> String
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :max_results (Integer)

    The maximum number of task runs to return in the response.

  • :next_token (String)

    A continuation token, if this is a continuation call.

Returns:

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 14730

def list_column_statistics_task_runs(params = {}, options = {})
  req = build_request(:list_column_statistics_task_runs, params)
  req.send_request(options)
end

#list_connection_types(params = {}) ⇒ Types::ListConnectionTypesResponse

The ListConnectionTypes API provides a discovery mechanism to learn available connection types in Glue. The response contains a list of connection types with high-level details of what is supported for each connection type. The connection types listed are the set of supported options for the ConnectionType value in the CreateConnection API.
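
As a hedged sketch of that discovery flow (printing only the first page of results):

resp = client.list_connection_types
resp.connection_types.each do |ct|
  # Each entry names a ConnectionType value usable with CreateConnection.
  puts "#{ct.connection_type}: #{ct.capabilities.supported_data_operations.join(', ')}"
end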

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.list_connection_types({
  max_results: 1,
  next_token: "NextToken",
})

Response structure


resp.connection_types #=> Array
resp.connection_types[0].connection_type #=> String, one of "JDBC", "SFTP", "MONGODB", "KAFKA", "NETWORK", "MARKETPLACE", "CUSTOM", "SALESFORCE", "VIEW_VALIDATION_REDSHIFT", "VIEW_VALIDATION_ATHENA", "GOOGLEADS", "GOOGLESHEETS", "GOOGLEANALYTICS4", "SERVICENOW", "MARKETO", "SAPODATA", "ZENDESK", "JIRACLOUD", "NETSUITEERP", "HUBSPOT", "FACEBOOKADS", "INSTAGRAMADS", "ZOHOCRM", "SALESFORCEPARDOT", "SALESFORCEMARKETINGCLOUD", "SLACK", "STRIPE", "INTERCOM", "SNAPCHATADS"
resp.connection_types[0].description #=> String
resp.connection_types[0].capabilities.supported_authentication_types #=> Array
resp.connection_types[0].capabilities.supported_authentication_types[0] #=> String, one of "BASIC", "OAUTH2", "CUSTOM", "IAM"
resp.connection_types[0].capabilities.supported_data_operations #=> Array
resp.connection_types[0].capabilities.supported_data_operations[0] #=> String, one of "READ", "WRITE"
resp.connection_types[0].capabilities.supported_compute_environments #=> Array
resp.connection_types[0].capabilities.supported_compute_environments[0] #=> String, one of "SPARK", "ATHENA", "PYTHON"
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :max_results (Integer)

    The maximum number of results to return.

  • :next_token (String)

    A continuation token, if this is a continuation call.

Returns:

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 14778

def list_connection_types(params = {}, options = {})
  req = build_request(:list_connection_types, params)
  req.send_request(options)
end

#list_crawlers(params = {}) ⇒ Types::ListCrawlersResponse

Retrieves the names of all crawler resources in this Amazon Web Services account, or the resources with the specified tag. This operation allows you to see which resources are available in your account, and their names.

This operation takes the optional Tags field, which you can use as a filter on the response so that tagged resources can be retrieved as a group. If you choose to use tag filtering, only resources with the tag are retrieved.
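
For instance, a minimal sketch that retrieves only crawlers carrying a hypothetical team tag:

resp = client.list_crawlers(tags: { "team" => "data-eng" }) # tag key and value are placeholders
resp.crawler_names.each { |name| puts name }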

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.list_crawlers({
  max_results: 1,
  next_token: "Token",
  tags: {
    "TagKey" => "TagValue",
  },
})

Response structure


resp.crawler_names #=> Array
resp.crawler_names[0] #=> String
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :max_results (Integer)

    The maximum size of a list to return.

  • :next_token (String)

    A continuation token, if this is a continuation request.

  • :tags (Hash<String,String>)

    Specifies to return only these tagged resources.

Returns:

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 14829

def list_crawlers(params = {}, options = {})
  req = build_request(:list_crawlers, params)
  req.send_request(options)
end

#list_crawls(params = {}) ⇒ Types::ListCrawlsResponse

Returns all the crawls of a specified crawler. Returns only the crawls that have occurred since the launch date of the crawler history feature, and only retains up to 12 months of crawls. Older crawls will not be returned.

You may use this API to:

  • Retrieve all the crawls of a specified crawler.

  • Retrieve all the crawls of a specified crawler within a limited count.

  • Retrieve all the crawls of a specified crawler in a specific time range.

  • Retrieve all the crawls of a specified crawler with a particular state, crawl ID, or DPU hour value, as in the sketch below.
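
A minimal sketch of that last case, filtering crawls by state (the crawler name is a placeholder):

resp = client.list_crawls(
  crawler_name: "my_crawler",
  filters: [{ field_name: "STATE", filter_operator: "EQ", field_value: "FAILED" }]
)
resp.crawls.each { |c| puts "#{c.crawl_id}: #{c.error_message}" }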

Examples:

Request syntax with placeholder values


resp = client.list_crawls({
  crawler_name: "NameString", # required
  max_results: 1,
  filters: [
    {
      field_name: "CRAWL_ID", # accepts CRAWL_ID, STATE, START_TIME, END_TIME, DPU_HOUR
      filter_operator: "GT", # accepts GT, GE, LT, LE, EQ, NE
      field_value: "GenericString",
    },
  ],
  next_token: "Token",
})

Response structure


resp.crawls #=> Array
resp.crawls[0].crawl_id #=> String
resp.crawls[0].state #=> String, one of "RUNNING", "COMPLETED", "FAILED", "STOPPED"
resp.crawls[0].start_time #=> Time
resp.crawls[0].end_time #=> Time
resp.crawls[0].summary #=> String
resp.crawls[0].error_message #=> String
resp.crawls[0].log_group #=> String
resp.crawls[0].log_stream #=> String
resp.crawls[0].message_prefix #=> String
resp.crawls[0].dpu_hour #=> Float
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :crawler_name (required, String)

    The name of the crawler whose runs you want to retrieve.

  • :max_results (Integer)

    The maximum number of results to return. The default is 20, and the maximum is 100.

  • :filters (Array<Types::CrawlsFilter>)

    Filters the crawls by the criteria you specify in a list of CrawlsFilter objects.

  • :next_token (String)

    A continuation token, if this is a continuation call.

Returns:

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 14905

def list_crawls(params = {}, options = {})
  req = build_request(:list_crawls, params)
  req.send_request(options)
end

#list_custom_entity_types(params = {}) ⇒ Types::ListCustomEntityTypesResponse

Lists all the custom patterns that have been created.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.list_custom_entity_types({
  next_token: "PaginationToken",
  max_results: 1,
  tags: {
    "TagKey" => "TagValue",
  },
})

Response structure


resp.custom_entity_types #=> Array
resp.custom_entity_types[0].name #=> String
resp.custom_entity_types[0].regex_string #=> String
resp.custom_entity_types[0].context_words #=> Array
resp.custom_entity_types[0].context_words[0] #=> String
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :next_token (String)

    A paginated token to offset the results.

  • :max_results (Integer)

    The maximum number of results to return.

  • :tags (Hash<String,String>)

    A list of key-value pair tags.

Returns:

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 14951

def list_custom_entity_types(params = {}, options = {})
  req = build_request(:list_custom_entity_types, params)
  req.send_request(options)
end

#list_data_quality_results(params = {}) ⇒ Types::ListDataQualityResultsResponse

Returns all data quality execution results for your account.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.list_data_quality_results({
  filter: {
    data_source: {
      glue_table: { # required
        database_name: "NameString", # required
        table_name: "NameString", # required
        catalog_id: "NameString",
        connection_name: "NameString",
        additional_options: {
          "NameString" => "DescriptionString",
        },
      },
    },
    job_name: "NameString",
    job_run_id: "HashString",
    started_after: Time.now,
    started_before: Time.now,
  },
  next_token: "PaginationToken",
  max_results: 1,
})

Response structure


resp.results #=> Array
resp.results[0].result_id #=> String
resp.results[0].data_source.glue_table.database_name #=> String
resp.results[0].data_source.glue_table.table_name #=> String
resp.results[0].data_source.glue_table.catalog_id #=> String
resp.results[0].data_source.glue_table.connection_name #=> String
resp.results[0].data_source.glue_table.additional_options #=> Hash
resp.results[0].data_source.glue_table.additional_options["NameString"] #=> String
resp.results[0].job_name #=> String
resp.results[0].job_run_id #=> String
resp.results[0].started_on #=> Time
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :filter (Types::DataQualityResultFilterCriteria)

    The filter criteria.

  • :next_token (String)

    A paginated token to offset the results.

  • :max_results (Integer)

    The maximum number of results to return.

Returns:

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 15017

def list_data_quality_results(params = {}, options = {})
  req = build_request(:list_data_quality_results, params)
  req.send_request(options)
end

#list_data_quality_rule_recommendation_runs(params = {}) ⇒ Types::ListDataQualityRuleRecommendationRunsResponse

Lists the recommendation runs meeting the filter criteria.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.list_data_quality_rule_recommendation_runs({
  filter: {
    data_source: { # required
      glue_table: { # required
        database_name: "NameString", # required
        table_name: "NameString", # required
        catalog_id: "NameString",
        connection_name: "NameString",
        additional_options: {
          "NameString" => "DescriptionString",
        },
      },
    },
    started_before: Time.now,
    started_after: Time.now,
  },
  next_token: "PaginationToken",
  max_results: 1,
})

Response structure


resp.runs #=> Array
resp.runs[0].run_id #=> String
resp.runs[0].status #=> String, one of "STARTING", "RUNNING", "STOPPING", "STOPPED", "SUCCEEDED", "FAILED", "TIMEOUT"
resp.runs[0].started_on #=> Time
resp.runs[0].data_source.glue_table.database_name #=> String
resp.runs[0].data_source.glue_table.table_name #=> String
resp.runs[0].data_source.glue_table.catalog_id #=> String
resp.runs[0].data_source.glue_table.connection_name #=> String
resp.runs[0].data_source.glue_table.additional_options #=> Hash
resp.runs[0].data_source.glue_table.additional_options["NameString"] #=> String
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :filter (Types::DataQualityRuleRecommendationRunFilter)

    The filter criteria.

  • :next_token (String)

    A paginated token to offset the results.

  • :max_results (Integer)

    The maximum number of results to return.

Returns:

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 15080

def list_data_quality_rule_recommendation_runs(params = {}, options = {})
  req = build_request(:list_data_quality_rule_recommendation_runs, params)
  req.send_request(options)
end

#list_data_quality_ruleset_evaluation_runs(params = {}) ⇒ Types::ListDataQualityRulesetEvaluationRunsResponse

Lists all the runs meeting the filter criteria, where a ruleset is evaluated against a data source.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.list_data_quality_ruleset_evaluation_runs({
  filter: {
    data_source: { # required
      glue_table: { # required
        database_name: "NameString", # required
        table_name: "NameString", # required
        catalog_id: "NameString",
        connection_name: "NameString",
        additional_options: {
          "NameString" => "DescriptionString",
        },
      },
    },
    started_before: Time.now,
    started_after: Time.now,
  },
  next_token: "PaginationToken",
  max_results: 1,
})

Response structure


resp.runs #=> Array
resp.runs[0].run_id #=> String
resp.runs[0].status #=> String, one of "STARTING", "RUNNING", "STOPPING", "STOPPED", "SUCCEEDED", "FAILED", "TIMEOUT"
resp.runs[0].started_on #=> Time
resp.runs[0].data_source.glue_table.database_name #=> String
resp.runs[0].data_source.glue_table.table_name #=> String
resp.runs[0].data_source.glue_table.catalog_id #=> String
resp.runs[0].data_source.glue_table.connection_name #=> String
resp.runs[0].data_source.glue_table.additional_options #=> Hash
resp.runs[0].data_source.glue_table.additional_options["NameString"] #=> String
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :filter (Types::DataQualityRulesetEvaluationRunFilter)

    The filter criteria.

  • :next_token (String)

    A paginated token to offset the results.

  • :max_results (Integer)

    The maximum number of results to return.

Returns:

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 15144

def list_data_quality_ruleset_evaluation_runs(params = {}, options = {})
  req = build_request(:list_data_quality_ruleset_evaluation_runs, params)
  req.send_request(options)
end

#list_data_quality_rulesets(params = {}) ⇒ Types::ListDataQualityRulesetsResponse

Returns a paginated list of rulesets for the specified list of Glue tables.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.list_data_quality_rulesets({
  next_token: "PaginationToken",
  max_results: 1,
  filter: {
    name: "NameString",
    description: "DescriptionString",
    created_before: Time.now,
    created_after: Time.now,
    last_modified_before: Time.now,
    last_modified_after: Time.now,
    target_table: {
      table_name: "NameString", # required
      database_name: "NameString", # required
      catalog_id: "NameString",
    },
  },
  tags: {
    "TagKey" => "TagValue",
  },
})

Response structure


resp.rulesets #=> Array
resp.rulesets[0].name #=> String
resp.rulesets[0].description #=> String
resp.rulesets[0].created_on #=> Time
resp.rulesets[0].last_modified_on #=> Time
resp.rulesets[0].target_table.table_name #=> String
resp.rulesets[0].target_table.database_name #=> String
resp.rulesets[0].target_table.catalog_id #=> String
resp.rulesets[0].recommendation_run_id #=> String
resp.rulesets[0].rule_count #=> Integer
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :next_token (String)

    A paginated token to offset the results.

  • :max_results (Integer)

    The maximum number of results to return.

  • :filter (Types::DataQualityRulesetFilterCriteria)

    The filter criteria.

  • :tags (Hash<String,String>)

    A list of key-value pair tags.

Returns:

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 15212

def list_data_quality_rulesets(params = {}, options = {})
  req = build_request(:list_data_quality_rulesets, params)
  req.send_request(options)
end

#list_data_quality_statistic_annotations(params = {}) ⇒ Types::ListDataQualityStatisticAnnotationsResponse

Retrieves annotations for a data quality statistic.
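
Since no pageable note applies to this operation, a manual next_token loop is one way to collect every page; a sketch, with the profile ID as a placeholder:

params = { profile_id: "my_profile_id" }
loop do
  resp = client.list_data_quality_statistic_annotations(params)
  resp.annotations.each { |a| puts a.statistic_id }
  break unless resp.next_token
  params[:next_token] = resp.next_token # request the next page
end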

Examples:

Request syntax with placeholder values


resp = client.list_data_quality_statistic_annotations({
  statistic_id: "HashString",
  profile_id: "HashString",
  timestamp_filter: {
    recorded_before: Time.now,
    recorded_after: Time.now,
  },
  max_results: 1,
  next_token: "PaginationToken",
})

Response structure


resp.annotations #=> Array
resp.annotations[0].profile_id #=> String
resp.annotations[0].statistic_id #=> String
resp.annotations[0].statistic_recorded_on #=> Time
resp.annotations[0].inclusion_annotation.value #=> String, one of "INCLUDE", "EXCLUDE"
resp.annotations[0].inclusion_annotation.last_modified_on #=> Time
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :statistic_id (String)

    The Statistic ID.

  • :profile_id (String)

    The Profile ID.

  • :timestamp_filter (Types::TimestampFilter)

    A timestamp filter.

  • :max_results (Integer)

    The maximum number of results to return in this request.

  • :next_token (String)

    A pagination token to retrieve the next set of results.

Returns:

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 15266

def list_data_quality_statistic_annotations(params = {}, options = {})
  req = build_request(:list_data_quality_statistic_annotations, params)
  req.send_request(options)
end

#list_data_quality_statistics(params = {}) ⇒ Types::ListDataQualityStatisticsResponse

Retrieves a list of data quality statistics.

Examples:

Request syntax with placeholder values


resp = client.list_data_quality_statistics({
  statistic_id: "HashString",
  profile_id: "HashString",
  timestamp_filter: {
    recorded_before: Time.now,
    recorded_after: Time.now,
  },
  max_results: 1,
  next_token: "PaginationToken",
})

Response structure


resp.statistics #=> Array
resp.statistics[0].statistic_id #=> String
resp.statistics[0].profile_id #=> String
resp.statistics[0].run_identifier.run_id #=> String
resp.statistics[0].run_identifier.job_run_id #=> String
resp.statistics[0].statistic_name #=> String
resp.statistics[0].double_value #=> Float
resp.statistics[0].evaluation_level #=> String, one of "Dataset", "Column", "Multicolumn"
resp.statistics[0].columns_referenced #=> Array
resp.statistics[0].columns_referenced[0] #=> String
resp.statistics[0].referenced_datasets #=> Array
resp.statistics[0].referenced_datasets[0] #=> String
resp.statistics[0].statistic_properties #=> Hash
resp.statistics[0].statistic_properties["NameString"] #=> String
resp.statistics[0].recorded_on #=> Time
resp.statistics[0].inclusion_annotation.value #=> String, one of "INCLUDE", "EXCLUDE"
resp.statistics[0].inclusion_annotation.last_modified_on #=> Time
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :statistic_id (String)

    The Statistic ID.

  • :profile_id (String)

    The Profile ID.

  • :timestamp_filter (Types::TimestampFilter)

    A timestamp filter.

  • :max_results (Integer)

    The maximum number of results to return in this request.

  • :next_token (String)

    A pagination token to request the next page of results.

Returns:

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 15331

def list_data_quality_statistics(params = {}, options = {})
  req = build_request(:list_data_quality_statistics, params)
  req.send_request(options)
end

#list_dev_endpoints(params = {}) ⇒ Types::ListDevEndpointsResponse

Retrieves the names of all DevEndpoint resources in this Amazon Web Services account, or the resources with the specified tag. This operation allows you to see which resources are available in your account, and their names.

This operation takes the optional Tags field, which you can use as a filter on the response so that tagged resources can be retrieved as a group. If you choose to use tag filtering, only resources with the tag are retrieved.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.list_dev_endpoints({
  next_token: "GenericString",
  max_results: 1,
  tags: {
    "TagKey" => "TagValue",
  },
})

Response structure


resp.dev_endpoint_names #=> Array
resp.dev_endpoint_names[0] #=> String
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :next_token (String)

    A continuation token, if this is a continuation request.

  • :max_results (Integer)

    The maximum size of a list to return.

  • :tags (Hash<String,String>)

    Specifies to return only these tagged resources.

Returns:

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 15382

def list_dev_endpoints(params = {}, options = {})
  req = build_request(:list_dev_endpoints, params)
  req.send_request(options)
end

#list_entities(params = {}) ⇒ Types::ListEntitiesResponse

Returns the available entities supported by the connection type.
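
A hedged sketch that lists the top-level entities for a connection (the connection name is a placeholder):

resp = client.list_entities(connection_name: "my_connection")
resp.entities.each do |e|
  # Entities flagged as parents can be descended into via :parent_entity_name.
  puts "#{e.entity_name} (parent: #{e.is_parent_entity})"
end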

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.list_entities({
  connection_name: "NameString",
  catalog_id: "CatalogIdString",
  parent_entity_name: "EntityName",
  next_token: "NextToken",
  data_store_api_version: "ApiVersion",
})

Response structure


resp.entities #=> Array
resp.entities[0].entity_name #=> String
resp.entities[0].label #=> String
resp.entities[0].is_parent_entity #=> Boolean
resp.entities[0].description #=> String
resp.entities[0].category #=> String
resp.entities[0].custom_properties #=> Hash
resp.entities[0].custom_properties["String"] #=> String
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :connection_name (String)

    A name for the connection that has required credentials to query any connection type.

  • :catalog_id (String)

    The catalog ID of the catalog that contains the connection. This can be null. By default, the Amazon Web Services account ID is the catalog ID.

  • :parent_entity_name (String)

    Name of the parent entity for which you want to list the children. This parameter takes a fully-qualified path of the entity in order to list the child entities.

  • :next_token (String)

    A continuation token, included if this is a continuation call.

  • :data_store_api_version (String)

    The API version of the SaaS connector.

Returns:

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 15442

def list_entities(params = {}, options = {})
  req = build_request(:list_entities, params)
  req.send_request(options)
end

#list_jobs(params = {}) ⇒ Types::ListJobsResponse

Retrieves the names of all job resources in this Amazon Web Services account, or the resources with the specified tag. This operation allows you to see which resources are available in your account, and their names.

This operation takes the optional Tags field, which you can use as a filter on the response so that tagged resources can be retrieved as a group. If you choose to use tag filtering, only resources with the tag are retrieved.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.list_jobs({
  next_token: "GenericString",
  max_results: 1,
  tags: {
    "TagKey" => "TagValue",
  },
})

Response structure


resp.job_names #=> Array
resp.job_names[0] #=> String
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :next_token (String)

    A continuation token, if this is a continuation request.

  • :max_results (Integer)

    The maximum size of a list to return.

  • :tags (Hash<String,String>)

    Specifies to return only these tagged resources.

Returns:

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 15493

def list_jobs(params = {}, options = {})
  req = build_request(:list_jobs, params)
  req.send_request(options)
end

#list_ml_transforms(params = {}) ⇒ Types::ListMLTransformsResponse

Retrieves a sortable, filterable list of existing Glue machine learning transforms in this Amazon Web Services account, or the resources with the specified tag. This operation takes the optional Tags field, which you can use as a filter of the responses so that tagged resources can be retrieved as a group. If you choose to use tag filtering, only resources with the tags are retrieved.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.list_ml_transforms({
  next_token: "PaginationToken",
  max_results: 1,
  filter: {
    name: "NameString",
    transform_type: "FIND_MATCHES", # accepts FIND_MATCHES
    status: "NOT_READY", # accepts NOT_READY, READY, DELETING
    glue_version: "GlueVersionString",
    created_before: Time.now,
    created_after: Time.now,
    last_modified_before: Time.now,
    last_modified_after: Time.now,
    schema: [
      {
        name: "ColumnNameString",
        data_type: "ColumnTypeString",
      },
    ],
  },
  sort: {
    column: "NAME", # required, accepts NAME, TRANSFORM_TYPE, STATUS, CREATED, LAST_MODIFIED
    sort_direction: "DESCENDING", # required, accepts DESCENDING, ASCENDING
  },
  tags: {
    "TagKey" => "TagValue",
  },
})

Response structure


resp.transform_ids #=> Array
resp.transform_ids[0] #=> String
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :next_token (String)

    A continuation token, if this is a continuation request.

  • :max_results (Integer)

    The maximum size of a list to return.

  • :filter (Types::TransformFilterCriteria)

    A TransformFilterCriteria used to filter the machine learning transforms.

  • :sort (Types::TransformSortCriteria)

    A TransformSortCriteria used to sort the machine learning transforms.

  • :tags (Hash<String,String>)

    Specifies to return only these tagged resources.

Returns:

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 15569

def list_ml_transforms(params = {}, options = {})
  req = build_request(:list_ml_transforms, params)
  req.send_request(options)
end

#list_registries(params = {}) ⇒ Types::ListRegistriesResponse

Returns a list of registries that you have created, with minimal registry information. Registries in the Deleting status will not be included in the results. Empty results will be returned if there are no registries available.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.list_registries({
  max_results: 1,
  next_token: "SchemaRegistryTokenString",
})

Response structure


resp.registries #=> Array
resp.registries[0].registry_name #=> String
resp.registries[0].registry_arn #=> String
resp.registries[0].description #=> String
resp.registries[0].status #=> String, one of "AVAILABLE", "DELETING"
resp.registries[0].created_time #=> String
resp.registries[0].updated_time #=> String
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :max_results (Integer)

    Maximum number of results required per page. If the value is not supplied, it defaults to 25 per page.

  • :next_token (String)

    A continuation token, if this is a continuation call.

Returns:

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 15615

def list_registries(params = {}, options = {})
  req = build_request(:list_registries, params)
  req.send_request(options)
end

#list_schema_versions(params = {}) ⇒ Types::ListSchemaVersionsResponse

Returns a list of schema versions that you have created, with minimal information. Schema versions in Deleted status will not be included in the results. Empty results will be returned if there are no schema versions available.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.list_schema_versions({
  schema_id: { # required
    schema_arn: "GlueResourceArn",
    schema_name: "SchemaRegistryNameString",
    registry_name: "SchemaRegistryNameString",
  },
  max_results: 1,
  next_token: "SchemaRegistryTokenString",
})

Response structure


resp.schemas #=> Array
resp.schemas[0].schema_arn #=> String
resp.schemas[0].schema_version_id #=> String
resp.schemas[0].version_number #=> Integer
resp.schemas[0].status #=> String, one of "AVAILABLE", "PENDING", "FAILURE", "DELETING"
resp.schemas[0].created_time #=> String
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :schema_id (required, Types::SchemaId)

    This is a wrapper structure to contain schema identity fields. The structure contains:

    • SchemaId$SchemaArn: The Amazon Resource Name (ARN) of the schema. Either SchemaArn or SchemaName and RegistryName has to be provided.

    • SchemaId$SchemaName: The name of the schema. Either SchemaArn or SchemaName and RegistryName has to be provided. A sketch of both forms follows this list.

  • :max_results (Integer)

    Maximum number of results required per page. If the value is not supplied, it defaults to 25 per page.

  • :next_token (String)

    A continuation token, if this is a continuation call.
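
A sketch of the two equivalent ways to identify the schema (the ARN and names are placeholders):

client.list_schema_versions(schema_id: { schema_arn: "arn:aws:glue:us-east-1:123456789012:schema/my-registry/my-schema" })
client.list_schema_versions(schema_id: { schema_name: "my-schema", registry_name: "my-registry" })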

Returns:

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 15676

def list_schema_versions(params = {}, options = {})
  req = build_request(:list_schema_versions, params)
  req.send_request(options)
end

#list_schemas(params = {}) ⇒ Types::ListSchemasResponse

Returns a list of schemas with minimal details. Schemas in Deleting status will not be included in the results. Empty results will be returned if there are no schemas available.

When the RegistryId is not provided, all the schemas across registries will be part of the API response.
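
A minimal sketch of both scopes (the registry name is a placeholder):

client.list_schemas                                                # schemas across all registries
client.list_schemas(registry_id: { registry_name: "my-registry" }) # one registry only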

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.list_schemas({
  registry_id: {
    registry_name: "SchemaRegistryNameString",
    registry_arn: "GlueResourceArn",
  },
  max_results: 1,
  next_token: "SchemaRegistryTokenString",
})

Response structure


resp.schemas #=> Array
resp.schemas[0].registry_name #=> String
resp.schemas[0].schema_name #=> String
resp.schemas[0].schema_arn #=> String
resp.schemas[0].description #=> String
resp.schemas[0].schema_status #=> String, one of "AVAILABLE", "PENDING", "DELETING"
resp.schemas[0].created_time #=> String
resp.schemas[0].updated_time #=> String
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :registry_id (Types::RegistryId)

    A wrapper structure that may contain the registry name and Amazon Resource Name (ARN).

  • :max_results (Integer)

    Maximum number of results required per page. If the value is not supplied, it defaults to 25 per page.

  • :next_token (String)

    A continuation token, if this is a continuation call.

Returns:

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 15733

def list_schemas(params = {}, options = {})
  req = build_request(:list_schemas, params)
  req.send_request(options)
end

#list_sessions(params = {}) ⇒ Types::ListSessionsResponse

Retrieves a list of sessions.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.list_sessions({
  next_token: "OrchestrationToken",
  max_results: 1,
  tags: {
    "TagKey" => "TagValue",
  },
  request_origin: "OrchestrationNameString",
})

Response structure


resp.ids #=> Array
resp.ids[0] #=> String
resp.sessions #=> Array
resp.sessions[0].id #=> String
resp.sessions[0].created_on #=> Time
resp.sessions[0].status #=> String, one of "PROVISIONING", "READY", "FAILED", "TIMEOUT", "STOPPING", "STOPPED"
resp.sessions[0].error_message #=> String
resp.sessions[0].description #=> String
resp.sessions[0].role #=> String
resp.sessions[0].command.name #=> String
resp.sessions[0].command.python_version #=> String
resp.sessions[0].default_arguments #=> Hash
resp.sessions[0].default_arguments["OrchestrationNameString"] #=> String
resp.sessions[0].connections.connections #=> Array
resp.sessions[0].connections.connections[0] #=> String
resp.sessions[0].progress #=> Float
resp.sessions[0].max_capacity #=> Float
resp.sessions[0].security_configuration #=> String
resp.sessions[0].glue_version #=> String
resp.sessions[0].number_of_workers #=> Integer
resp.sessions[0].worker_type #=> String, one of "Standard", "G.1X", "G.2X", "G.025X", "G.4X", "G.8X", "Z.2X"
resp.sessions[0].completed_on #=> Time
resp.sessions[0].execution_time #=> Float
resp.sessions[0].dpu_seconds #=> Float
resp.sessions[0].idle_timeout #=> Integer
resp.sessions[0].profile_name #=> String
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :next_token (String)

    The token for the next set of results, or null if there are no more results.

  • :max_results (Integer)

    The maximum number of results.

  • :tags (Hash<String,String>)

    Tags belonging to the session.

  • :request_origin (String)

    The origin of the request.

Returns:

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 15806

def list_sessions(params = {}, options = {})
  req = build_request(:list_sessions, params)
  req.send_request(options)
end

#list_statements(params = {}) ⇒ Types::ListStatementsResponse

Lists statements for the session.

Examples:

Request syntax with placeholder values


resp = client.list_statements({
  session_id: "NameString", # required
  request_origin: "OrchestrationNameString",
  next_token: "OrchestrationToken",
})

Response structure


resp.statements #=> Array
resp.statements[0].id #=> Integer
resp.statements[0].code #=> String
resp.statements[0].state #=> String, one of "WAITING", "RUNNING", "AVAILABLE", "CANCELLING", "CANCELLED", "ERROR"
resp.statements[0].output.data.text_plain #=> String
resp.statements[0].output.execution_count #=> Integer
resp.statements[0].output.status #=> String, one of "WAITING", "RUNNING", "AVAILABLE", "CANCELLING", "CANCELLED", "ERROR"
resp.statements[0].output.error_name #=> String
resp.statements[0].output.error_value #=> String
resp.statements[0].output.traceback #=> Array
resp.statements[0].output.traceback[0] #=> String
resp.statements[0].progress #=> Float
resp.statements[0].started_on #=> Integer
resp.statements[0].completed_on #=> Integer
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :session_id (required, String)

    The Session ID of the statements.

  • :request_origin (String)

    The origin of the request to list statements.

  • :next_token (String)

    A continuation token, if this is a continuation call.

Returns:

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 15857

def list_statements(params = {}, options = {})
  req = build_request(:list_statements, params)
  req.send_request(options)
end

#list_table_optimizer_runs(params = {}) ⇒ Types::ListTableOptimizerRunsResponse

Lists the history of previous optimizer runs for a specific table.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.list_table_optimizer_runs({
  catalog_id: "CatalogIdString", # required
  database_name: "NameString", # required
  table_name: "NameString", # required
  type: "compaction", # required, accepts compaction, retention, orphan_file_deletion
  max_results: 1,
  next_token: "ListTableOptimizerRunsToken",
})

Response structure


resp.catalog_id #=> String
resp.database_name #=> String
resp.table_name #=> String
resp.next_token #=> String
resp.table_optimizer_runs #=> Array
resp.table_optimizer_runs[0].event_type #=> String, one of "starting", "completed", "failed", "in_progress"
resp.table_optimizer_runs[0].start_timestamp #=> Time
resp.table_optimizer_runs[0].end_timestamp #=> Time
resp.table_optimizer_runs[0].metrics.number_of_bytes_compacted #=> String
resp.table_optimizer_runs[0].metrics.number_of_files_compacted #=> String
resp.table_optimizer_runs[0].metrics.number_of_dpus #=> String
resp.table_optimizer_runs[0].metrics.job_duration_in_hour #=> String
resp.table_optimizer_runs[0].error #=> String
resp.table_optimizer_runs[0].compaction_metrics.iceberg_metrics.number_of_bytes_compacted #=> Integer
resp.table_optimizer_runs[0].compaction_metrics.iceberg_metrics.number_of_files_compacted #=> Integer
resp.table_optimizer_runs[0].compaction_metrics.iceberg_metrics.number_of_dpus #=> Integer
resp.table_optimizer_runs[0].compaction_metrics.iceberg_metrics.job_duration_in_hour #=> Float
resp.table_optimizer_runs[0].retention_metrics.iceberg_metrics.number_of_data_files_deleted #=> Integer
resp.table_optimizer_runs[0].retention_metrics.iceberg_metrics.number_of_manifest_files_deleted #=> Integer
resp.table_optimizer_runs[0].retention_metrics.iceberg_metrics.number_of_manifest_lists_deleted #=> Integer
resp.table_optimizer_runs[0].retention_metrics.iceberg_metrics.number_of_dpus #=> Integer
resp.table_optimizer_runs[0].retention_metrics.iceberg_metrics.job_duration_in_hour #=> Float
resp.table_optimizer_runs[0].orphan_file_deletion_metrics.iceberg_metrics.number_of_orphan_files_deleted #=> Integer
resp.table_optimizer_runs[0].orphan_file_deletion_metrics.iceberg_metrics.number_of_dpus #=> Integer
resp.table_optimizer_runs[0].orphan_file_deletion_metrics.iceberg_metrics.job_duration_in_hour #=> Float

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :catalog_id (required, String)

    The Catalog ID of the table.

  • :database_name (required, String)

    The name of the database in the catalog in which the table resides.

  • :table_name (required, String)

    The name of the table.

  • :type (required, String)

    The type of table optimizer.

  • :max_results (Integer)

    The maximum number of optimizer runs to return on each call.

  • :next_token (String)

    A continuation token, if this is a continuation call.

Returns:

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 15935

def list_table_optimizer_runs(params = {}, options = {})
  req = build_request(:list_table_optimizer_runs, params)
  req.send_request(options)
end

#list_triggers(params = {}) ⇒ Types::ListTriggersResponse

Retrieves the names of all trigger resources in this Amazon Web Services account, or the resources with the specified tag. This operation allows you to see which resources are available in your account, and their names.

This operation takes the optional Tags field, which you can use as a filter on the response so that tagged resources can be retrieved as a group. If you choose to use tag filtering, only resources with the tag are retrieved.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.list_triggers({
  next_token: "GenericString",
  dependent_job_name: "NameString",
  max_results: 1,
  tags: {
    "TagKey" => "TagValue",
  },
})

Response structure


resp.trigger_names #=> Array
resp.trigger_names[0] #=> String
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :next_token (String)

    A continuation token, if this is a continuation request.

  • :dependent_job_name (String)

    The name of the job for which to retrieve triggers. The trigger that can start this job is returned. If there is no such trigger, all triggers are returned. A sketch follows this list.

  • :max_results (Integer)

    The maximum size of a list to return.

  • :tags (Hash<String,String>)

    Specifies to return only these tagged resources.
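
A sketch of the dependent-job lookup described above (the job name is a placeholder):

resp = client.list_triggers(dependent_job_name: "my_etl_job")
puts resp.trigger_names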

Returns:

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 15992

def list_triggers(params = {}, options = {})
  req = build_request(:list_triggers, params)
  req.send_request(options)
end

#list_usage_profiles(params = {}) ⇒ Types::ListUsageProfilesResponse

Lists all the Glue usage profiles.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.list_usage_profiles({
  next_token: "OrchestrationToken",
  max_results: 1,
})

Response structure


resp.profiles #=> Array
resp.profiles[0].name #=> String
resp.profiles[0].description #=> String
resp.profiles[0].created_on #=> Time
resp.profiles[0].last_modified_on #=> Time
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :next_token (String)

    A continuation token, included if this is a continuation call.

  • :max_results (Integer)

    The maximum number of usage profiles to return in a single response.

Returns:

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 16032

def list_usage_profiles(params = {}, options = {})
  req = build_request(:list_usage_profiles, params)
  req.send_request(options)
end

#list_workflows(params = {}) ⇒ Types::ListWorkflowsResponse

Lists names of workflows created in the account.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.list_workflows({
  next_token: "GenericString",
  max_results: 1,
})

Response structure


resp.workflows #=> Array
resp.workflows[0] #=> String
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :next_token (String)

    A continuation token, if this is a continuation request.

  • :max_results (Integer)

    The maximum size of a list to return.

Returns:

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 16069

def list_workflows(params = {}, options = {})
  req = build_request(:list_workflows, params)
  req.send_request(options)
end

#modify_integration(params = {}) ⇒ Types::ModifyIntegrationResponse

Modifies a Zero-ETL integration in the caller's account.

Examples:

Request syntax with placeholder values


resp = client.modify_integration({
  integration_identifier: "String128", # required
  description: "IntegrationDescription",
  data_filter: "String2048",
  integration_name: "String128",
})

Response structure


resp.source_arn #=> String
resp.target_arn #=> String
resp.integration_name #=> String
resp.description #=> String
resp.integration_arn #=> String
resp.kms_key_id #=> String
resp.additional_encryption_context #=> Hash
resp.additional_encryption_context["IntegrationString"] #=> String
resp.tags #=> Array
resp.tags[0].key #=> String
resp.tags[0].value #=> String
resp.status #=> String, one of "CREATING", "ACTIVE", "MODIFYING", "FAILED", "DELETING", "SYNCING", "NEEDS_ATTENTION"
resp.create_time #=> Time
resp.errors #=> Array
resp.errors[0].error_code #=> String
resp.errors[0].error_message #=> String
resp.data_filter #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :integration_identifier (required, String)

    The Amazon Resource Name (ARN) for the integration.

  • :description (String)

    A description of the integration.

  • :data_filter (String)

    Selects source tables for the integration using Maxwell filter syntax.

  • :integration_name (String)

    A unique name for an integration in Glue.

Returns:

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 16136

def modify_integration(params = {}, options = {})
  req = build_request(:modify_integration, params)
  req.send_request(options)
end

#put_data_catalog_encryption_settings(params = {}) ⇒ Struct

Sets the security configuration for a specified catalog. After the configuration has been set, the specified encryption is applied to every catalog write thereafter.
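
For example, a minimal sketch that turns on SSE-KMS for subsequent catalog writes (the key alias is a placeholder):

client.put_data_catalog_encryption_settings(
  data_catalog_encryption_settings: {
    encryption_at_rest: {
      catalog_encryption_mode: "SSE-KMS",
      sse_aws_kms_key_id: "alias/my-glue-key"
    }
  }
)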

Examples:

Request syntax with placeholder values


resp = client.put_data_catalog_encryption_settings({
  catalog_id: "CatalogIdString",
  data_catalog_encryption_settings: { # required
    encryption_at_rest: {
      catalog_encryption_mode: "DISABLED", # required, accepts DISABLED, SSE-KMS, SSE-KMS-WITH-SERVICE-ROLE
      sse_aws_kms_key_id: "NameString",
      catalog_encryption_service_role: "IAMRoleArn",
    },
    connection_password_encryption: {
      return_connection_password_encrypted: false, # required
      aws_kms_key_id: "NameString",
    },
  },
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :catalog_id (String)

    The ID of the Data Catalog to set the security configuration for. If none is provided, the Amazon Web Services account ID is used by default.

  • :data_catalog_encryption_settings (required, Types::DataCatalogEncryptionSettings)

    The security configuration to set.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 16176

def put_data_catalog_encryption_settings(params = {}, options = {})
  req = build_request(:put_data_catalog_encryption_settings, params)
  req.send_request(options)
end

#put_data_quality_profile_annotation(params = {}) ⇒ Struct

Annotates all datapoints for a profile.

Examples:

Request syntax with placeholder values


resp = client.put_data_quality_profile_annotation({
  profile_id: "HashString", # required
  inclusion_annotation: "INCLUDE", # required, accepts INCLUDE, EXCLUDE
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :profile_id (required, String)

    The ID of the data quality monitoring profile to annotate.

  • :inclusion_annotation (required, String)

    The inclusion annotation value to apply to the profile.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 16202

def put_data_quality_profile_annotation(params = {}, options = {})
  req = build_request(:put_data_quality_profile_annotation, params)
  req.send_request(options)
end

#put_resource_policy(params = {}) ⇒ Types::PutResourcePolicyResponse

Sets the Data Catalog resource policy for access control.

Examples:

Request syntax with placeholder values


resp = client.put_resource_policy({
  policy_in_json: "PolicyJsonString", # required
  resource_arn: "GlueResourceArn",
  policy_hash_condition: "HashString",
  policy_exists_condition: "MUST_EXIST", # accepts MUST_EXIST, NOT_EXIST, NONE
  enable_hybrid: "TRUE", # accepts TRUE, FALSE
})

Response structure


resp.policy_hash #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :policy_in_json (required, String)

    Contains the policy document to set, in JSON format.

  • :resource_arn (String)

    Do not use. For internal use only.

  • :policy_hash_condition (String)

    The hash value returned when the previous policy was set using PutResourcePolicy. Its purpose is to prevent concurrent modifications of a policy. Do not use this parameter if no previous policy has been set.

  • :policy_exists_condition (String)

    A value of MUST_EXIST is used to update a policy. A value of NOT_EXIST is used to create a new policy. If a value of NONE or a null value is used, the call does not depend on the existence of a policy. A sketch of the create-then-update flow follows this list.

  • :enable_hybrid (String)

    If 'TRUE', indicates that you are using both methods to grant cross-account access to Data Catalog resources:

    • By directly updating the resource policy with PutResourcePolicy

    • By using the Grant permissions command on the Amazon Web Services Management Console.

    Must be set to 'TRUE' if you have already used the Management Console to grant cross-account access, otherwise the call fails. Default is 'FALSE'.
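
A minimal sketch of the create-then-update flow referenced above (the policy JSON strings are hypothetical placeholders):

created = client.put_resource_policy(
  policy_in_json: policy_json,          # hypothetical policy document
  policy_exists_condition: "NOT_EXIST"  # create; fails if a policy already exists
)
client.put_resource_policy(
  policy_in_json: updated_policy_json,        # hypothetical revised document
  policy_exists_condition: "MUST_EXIST",      # update an existing policy
  policy_hash_condition: created.policy_hash  # guards against concurrent edits
)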

Returns:

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 16262

def put_resource_policy(params = {}, options = {})
  req = build_request(:put_resource_policy, params)
  req.send_request(options)
end

#put_schema_version_metadata(params = {}) ⇒ Types::PutSchemaVersionMetadataResponse

Puts the metadata key-value pair for a specified schema version ID. A maximum of 10 key-value pairs is allowed per schema version. They can be added over one or more calls.

Examples:

Request syntax with placeholder values


resp = client.put_schema_version_metadata({
  schema_id: {
    schema_arn: "GlueResourceArn",
    schema_name: "SchemaRegistryNameString",
    registry_name: "SchemaRegistryNameString",
  },
  schema_version_number: {
    latest_version: false,
    version_number: 1,
  },
  schema_version_id: "SchemaVersionIdString",
  metadata_key_value: { # required
    metadata_key: "MetadataKeyString",
    metadata_value: "MetadataValueString",
  },
})

Response structure


resp.schema_arn #=> String
resp.schema_name #=> String
resp.registry_name #=> String
resp.latest_version #=> Boolean
resp.version_number #=> Integer
resp.schema_version_id #=> String
resp.metadata_key #=> String
resp.metadata_value #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :schema_id (Types::SchemaId)

    A wrapper structure that may contain the schema name and Amazon Resource Name (ARN).

  • :schema_version_number (Types::SchemaVersionNumber)

    The version number of the schema.

  • :schema_version_id (String)

    The unique version ID of the schema version.

  • :metadata_key_value (required, Types::MetadataKeyValuePair)

    The metadata key's corresponding value.

Returns:

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 16328

def put_schema_version_metadata(params = {}, options = {})
  req = build_request(:put_schema_version_metadata, params)
  req.send_request(options)
end
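
A short sketch of staying within the 10-pair limit by adding metadata over multiple calls; the schema name, registry name, and keys are hypothetical.

# Sketch: one key-value pair per call against the latest schema version.
{ "owner" => "data-platform", "pii" => "false" }.each do |key, value|
  client.put_schema_version_metadata(
    schema_id: { schema_name: "orders", registry_name: "default-registry" },
    schema_version_number: { latest_version: true },
    metadata_key_value: { metadata_key: key, metadata_value: value },
  )
end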

#put_workflow_run_properties(params = {}) ⇒ Struct

Puts the specified workflow run properties for the given workflow run. If a property already exists for the specified run, it overrides the value; otherwise, it adds the property to the existing properties.

Examples:

Request syntax with placeholder values


resp = client.put_workflow_run_properties({
  name: "NameString", # required
  run_id: "IdString", # required
  run_properties: { # required
    "IdString" => "GenericString",
  },
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :name (required, String)

    Name of the workflow which was run.

  • :run_id (required, String)

    The ID of the workflow run for which the run properties should be updated.

  • :run_properties (required, Hash<String,String>)

    The properties to put for the specified run.

    Run properties may be logged. Do not pass plaintext secrets as properties. Retrieve secrets from a Glue Connection, Amazon Web Services Secrets Manager or other secret management mechanism if you intend to use them within the workflow run.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 16368

def put_workflow_run_properties(params = {}, options = {})
  req = build_request(:put_workflow_run_properties, params)
  req.send_request(options)
end
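
For context, a minimal sketch (with a hypothetical workflow name) that starts a run and then attaches a run property to it:

# Sketch: start a workflow run, then set a property on that specific run.
run_id = client.start_workflow_run(name: "nightly-etl").run_id

client.put_workflow_run_properties(
  name: "nightly-etl",
  run_id: run_id,
  run_properties: { "target_date" => "2024-01-01" }, # not a secret
)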

#query_schema_version_metadata(params = {}) ⇒ Types::QuerySchemaVersionMetadataResponse

Queries for the schema version metadata information.

Examples:

Request syntax with placeholder values


resp = client.query_schema_version_metadata({
  schema_id: {
    schema_arn: "GlueResourceArn",
    schema_name: "SchemaRegistryNameString",
    registry_name: "SchemaRegistryNameString",
  },
  schema_version_number: {
    latest_version: false,
    version_number: 1,
  },
  schema_version_id: "SchemaVersionIdString",
  metadata_list: [
    {
      metadata_key: "MetadataKeyString",
      metadata_value: "MetadataValueString",
    },
  ],
  max_results: 1,
  next_token: "SchemaRegistryTokenString",
})

Response structure


resp.metadata_info_map #=> Hash
resp.metadata_info_map["MetadataKeyString"].metadata_value #=> String
resp.metadata_info_map["MetadataKeyString"].created_time #=> String
resp.metadata_info_map["MetadataKeyString"].other_metadata_value_list #=> Array
resp.metadata_info_map["MetadataKeyString"].other_metadata_value_list[0].metadata_value #=> String
resp.metadata_info_map["MetadataKeyString"].other_metadata_value_list[0].created_time #=> String
resp.schema_version_id #=> String
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :schema_id (Types::SchemaId)

    A wrapper structure that may contain the schema name and Amazon Resource Name (ARN).

  • :schema_version_number (Types::SchemaVersionNumber)

    The version number of the schema.

  • :schema_version_id (String)

    The unique version ID of the schema version.

  • :metadata_list (Array<Types::MetadataKeyValuePair>)

    Search key-value pairs for metadata. If they are not provided, all of the metadata information will be fetched.

  • :max_results (Integer)

    Maximum number of results required per page. If the value is not supplied, it defaults to 25 per page.

  • :next_token (String)

    A continuation token, if this is a continuation call.

Returns:

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 16440

def query_schema_version_metadata(params = {}, options = {})
  req = build_request(:query_schema_version_metadata, params)
  req.send_request(options)
end

#register_schema_version(params = {}) ⇒ Types::RegisterSchemaVersionResponse

Adds a new version to the existing schema. Returns an error if the new version does not meet the compatibility requirements of the schema set. This API will not create a new schema set and will return a 404 error if the schema set is not already present in the Schema Registry.

If this is the first schema definition to be registered in the Schema Registry, this API will store the schema version and return immediately. Otherwise, this call has the potential to run longer than other operations due to compatibility modes. You can call the GetSchemaVersion API with the SchemaVersionId to check compatibility modes.

If the same schema definition is already stored in Schema Registry as a version, the schema ID of the existing schema is returned to the caller.

Examples:

Request syntax with placeholder values


resp = client.register_schema_version({
  schema_id: { # required
    schema_arn: "GlueResourceArn",
    schema_name: "SchemaRegistryNameString",
    registry_name: "SchemaRegistryNameString",
  },
  schema_definition: "SchemaDefinitionString", # required
})

Response structure


resp.schema_version_id #=> String
resp.version_number #=> Integer
resp.status #=> String, one of "AVAILABLE", "PENDING", "FAILURE", "DELETING"

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :schema_id (required, Types::SchemaId)

    This is a wrapper structure to contain schema identity fields. The structure contains:

    • SchemaId$SchemaArn: The Amazon Resource Name (ARN) of the schema. Either SchemaArn or SchemaName and RegistryName has to be provided.

    • SchemaId$SchemaName: The name of the schema. Either SchemaArn or SchemaName and RegistryName has to be provided.

  • :schema_definition (required, String)

    The schema definition using the DataFormat setting for the SchemaName.

Returns:

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 16504

def register_schema_version(params = {}, options = {})
  req = build_request(:register_schema_version, params)
  req.send_request(options)
end
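
Because compatibility checks can leave a new version in PENDING status, a common pattern (sketched here with a hypothetical schema and definition file) is to register and then poll GetSchemaVersion until the version is AVAILABLE:

# Sketch: register a new version and wait for the compatibility check.
definition_json = File.read("orders.avsc") # hypothetical Avro definition
reg = client.register_schema_version(
  schema_id: { schema_name: "orders", registry_name: "default-registry" },
  schema_definition: definition_json, # a string in the schema's DataFormat
)

status = reg.status
while status == "PENDING"
  sleep 1
  status = client.get_schema_version(schema_version_id: reg.schema_version_id).status
end
raise "schema version not usable: #{status}" unless status == "AVAILABLE"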

#remove_schema_version_metadata(params = {}) ⇒ Types::RemoveSchemaVersionMetadataResponse

Removes a key-value pair from the schema version metadata for the specified schema version ID.

Examples:

Request syntax with placeholder values


resp = client.remove_schema_version_metadata({
  schema_id: {
    schema_arn: "GlueResourceArn",
    schema_name: "SchemaRegistryNameString",
    registry_name: "SchemaRegistryNameString",
  },
  schema_version_number: {
    latest_version: false,
    version_number: 1,
  },
  schema_version_id: "SchemaVersionIdString",
  metadata_key_value: { # required
    metadata_key: "MetadataKeyString",
    metadata_value: "MetadataValueString",
  },
})

Response structure


resp.schema_arn #=> String
resp.schema_name #=> String
resp.registry_name #=> String
resp.latest_version #=> Boolean
resp.version_number #=> Integer
resp.schema_version_id #=> String
resp.metadata_key #=> String
resp.metadata_value #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :schema_id (Types::SchemaId)

    A wrapper structure that may contain the schema name and Amazon Resource Name (ARN).

  • :schema_version_number (Types::SchemaVersionNumber)

    The version number of the schema.

  • :schema_version_id (String)

    The unique version ID of the schema version.

  • :metadata_key_value (required, Types::MetadataKeyValuePair)

    The value of the metadata key.

Returns:

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 16570

def remove_schema_version_metadata(params = {}, options = {})
  req = build_request(:remove_schema_version_metadata, params)
  req.send_request(options)
end

#reset_job_bookmark(params = {}) ⇒ Types::ResetJobBookmarkResponse

Resets a bookmark entry.

For more information about enabling and using job bookmarks, see:

  • Tracking processed data using job bookmarks

  • Job parameters used by Glue

  • JobBookmarkEntry structure

Examples:

Request syntax with placeholder values


resp = client.reset_job_bookmark({
  job_name: "JobName", # required
  run_id: "RunId",
})

Response structure


resp.job_bookmark_entry.job_name #=> String
resp.job_bookmark_entry.version #=> Integer
resp.job_bookmark_entry.run #=> Integer
resp.job_bookmark_entry.attempt #=> Integer
resp.job_bookmark_entry.previous_run_id #=> String
resp.job_bookmark_entry.run_id #=> String
resp.job_bookmark_entry.job_bookmark #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :job_name (required, String)

    The name of the job in question.

  • :run_id (String)

    The unique run identifier associated with this job run.

Returns:

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 16622

def reset_job_bookmark(params = {}, options = {})
  req = build_request(:reset_job_bookmark, params)
  req.send_request(options)
end

#resume_workflow_run(params = {}) ⇒ Types::ResumeWorkflowRunResponse

Restarts selected nodes of a previous partially completed workflow run and resumes the workflow run. The selected nodes and all nodes that are downstream from the selected nodes are run.

Examples:

Request syntax with placeholder values


resp = client.resume_workflow_run({
  name: "NameString", # required
  run_id: "IdString", # required
  node_ids: ["NameString"], # required
})

Response structure


resp.run_id #=> String
resp.node_ids #=> Array
resp.node_ids[0] #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :name (required, String)

    The name of the workflow to resume.

  • :run_id (required, String)

    The ID of the workflow run to resume.

  • :node_ids (required, Array<String>)

    A list of the node IDs for the nodes you want to restart. The nodes that are to be restarted must have a run attempt in the original run.

Returns:

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 16664

def resume_workflow_run(params = {}, options = {})
  req = build_request(:resume_workflow_run, params)
  req.send_request(options)
end
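
One way to choose node_ids, sketched under the assumption that the workflow run's graph (from GetWorkflowRun with include_graph) exposes job run states via Types::Node and Types::JobDetails, is to resume only the job nodes whose runs failed:

# Sketch: resume only the nodes whose job runs failed in a previous run.
run_id = "wr_1234567890abcdef" # hypothetical ID of the partially failed run
run = client.get_workflow_run(
  name: "nightly-etl", run_id: run_id, include_graph: true
).run

failed_ids = run.graph.nodes.select do |node|
  node.type == "JOB" &&
    node.job_details.job_runs.to_a.any? { |jr| jr.job_run_state == "FAILED" }
end.map(&:unique_id)

client.resume_workflow_run(name: "nightly-etl", run_id: run_id, node_ids: failed_ids)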

#run_statement(params = {}) ⇒ Types::RunStatementResponse

Executes the statement.

Examples:

Request syntax with placeholder values


resp = client.run_statement({
  session_id: "NameString", # required
  code: "OrchestrationStatementCodeString", # required
  request_origin: "OrchestrationNameString",
})

Response structure


resp.id #=> Integer

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :session_id (required, String)

    The session ID of the statement to be run.

  • :code (required, String)

    The statement code to be run.

  • :request_origin (String)

    The origin of the request.

Returns:

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 16700

def run_statement(params = {}, options = {})
  req = build_request(:run_statement, params)
  req.send_request(options)
end
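
In practice the returned statement ID is polled with GetStatement until the statement completes; a minimal sketch (hypothetical session ID) follows:

# Sketch: run a statement in an existing READY session and wait for output.
stmt_id = client.run_statement(
  session_id: "my-session", # hypothetical interactive session
  code: "print(spark.version)",
).id

loop do
  stmt = client.get_statement(session_id: "my-session", id: stmt_id).statement
  case stmt.state
  when "AVAILABLE"
    puts stmt.output.data.text_plain
    break
  when "ERROR", "CANCELLED"
    raise "statement finished in state #{stmt.state}"
  else
    sleep 1
  end
end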

#search_tables(params = {}) ⇒ Types::SearchTablesResponse

Searches a set of tables based on properties in the table metadata as well as on the parent database. You can search against text or filter conditions.

You can only get tables that you have access to based on the security policies defined in Lake Formation. You need at least read-only access to the table for it to be returned. If you do not have access to all the columns in the table, these columns will not be searched against when returning the list of tables back to you. If you have access to the columns but not the data in the columns, those columns and the associated metadata for those columns will be included in the search.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.search_tables({
  catalog_id: "CatalogIdString",
  next_token: "Token",
  filters: [
    {
      key: "ValueString",
      value: "ValueString",
      comparator: "EQUALS", # accepts EQUALS, GREATER_THAN, LESS_THAN, GREATER_THAN_EQUALS, LESS_THAN_EQUALS
    },
  ],
  search_text: "ValueString",
  sort_criteria: [
    {
      field_name: "ValueString",
      sort: "ASC", # accepts ASC, DESC
    },
  ],
  max_results: 1,
  resource_share_type: "FOREIGN", # accepts FOREIGN, ALL, FEDERATED
  include_status_details: false,
})

Response structure


resp.next_token #=> String
resp.table_list #=> Array
resp.table_list[0].name #=> String
resp.table_list[0].database_name #=> String
resp.table_list[0].description #=> String
resp.table_list[0].owner #=> String
resp.table_list[0].create_time #=> Time
resp.table_list[0].update_time #=> Time
resp.table_list[0].last_access_time #=> Time
resp.table_list[0].last_analyzed_time #=> Time
resp.table_list[0].retention #=> Integer
resp.table_list[0].storage_descriptor.columns #=> Array
resp.table_list[0].storage_descriptor.columns[0].name #=> String
resp.table_list[0].storage_descriptor.columns[0].type #=> String
resp.table_list[0].storage_descriptor.columns[0].comment #=> String
resp.table_list[0].storage_descriptor.columns[0].parameters #=> Hash
resp.table_list[0].storage_descriptor.columns[0].parameters["KeyString"] #=> String
resp.table_list[0].storage_descriptor.location #=> String
resp.table_list[0].storage_descriptor.additional_locations #=> Array
resp.table_list[0].storage_descriptor.additional_locations[0] #=> String
resp.table_list[0].storage_descriptor.input_format #=> String
resp.table_list[0].storage_descriptor.output_format #=> String
resp.table_list[0].storage_descriptor.compressed #=> Boolean
resp.table_list[0].storage_descriptor.number_of_buckets #=> Integer
resp.table_list[0].storage_descriptor.serde_info.name #=> String
resp.table_list[0].storage_descriptor.serde_info.serialization_library #=> String
resp.table_list[0].storage_descriptor.serde_info.parameters #=> Hash
resp.table_list[0].storage_descriptor.serde_info.parameters["KeyString"] #=> String
resp.table_list[0].storage_descriptor.bucket_columns #=> Array
resp.table_list[0].storage_descriptor.bucket_columns[0] #=> String
resp.table_list[0].storage_descriptor.sort_columns #=> Array
resp.table_list[0].storage_descriptor.sort_columns[0].column #=> String
resp.table_list[0].storage_descriptor.sort_columns[0].sort_order #=> Integer
resp.table_list[0].storage_descriptor.parameters #=> Hash
resp.table_list[0].storage_descriptor.parameters["KeyString"] #=> String
resp.table_list[0].storage_descriptor.skewed_info.skewed_column_names #=> Array
resp.table_list[0].storage_descriptor.skewed_info.skewed_column_names[0] #=> String
resp.table_list[0].storage_descriptor.skewed_info.skewed_column_values #=> Array
resp.table_list[0].storage_descriptor.skewed_info.skewed_column_values[0] #=> String
resp.table_list[0].storage_descriptor.skewed_info.skewed_column_value_location_maps #=> Hash
resp.table_list[0].storage_descriptor.skewed_info.skewed_column_value_location_maps["ColumnValuesString"] #=> String
resp.table_list[0].storage_descriptor.stored_as_sub_directories #=> Boolean
resp.table_list[0].storage_descriptor.schema_reference.schema_id.schema_arn #=> String
resp.table_list[0].storage_descriptor.schema_reference.schema_id.schema_name #=> String
resp.table_list[0].storage_descriptor.schema_reference.schema_id.registry_name #=> String
resp.table_list[0].storage_descriptor.schema_reference.schema_version_id #=> String
resp.table_list[0].storage_descriptor.schema_reference.schema_version_number #=> Integer
resp.table_list[0].partition_keys #=> Array
resp.table_list[0].partition_keys[0].name #=> String
resp.table_list[0].partition_keys[0].type #=> String
resp.table_list[0].partition_keys[0].comment #=> String
resp.table_list[0].partition_keys[0].parameters #=> Hash
resp.table_list[0].partition_keys[0].parameters["KeyString"] #=> String
resp.table_list[0].view_original_text #=> String
resp.table_list[0].view_expanded_text #=> String
resp.table_list[0].table_type #=> String
resp.table_list[0].parameters #=> Hash
resp.table_list[0].parameters["KeyString"] #=> String
resp.table_list[0].created_by #=> String
resp.table_list[0].is_registered_with_lake_formation #=> Boolean
resp.table_list[0].target_table.catalog_id #=> String
resp.table_list[0].target_table.database_name #=> String
resp.table_list[0].target_table.name #=> String
resp.table_list[0].target_table.region #=> String
resp.table_list[0].catalog_id #=> String
resp.table_list[0].version_id #=> String
resp.table_list[0].federated_table.identifier #=> String
resp.table_list[0].federated_table.database_identifier #=> String
resp.table_list[0].federated_table.connection_name #=> String
resp.table_list[0].view_definition.is_protected #=> Boolean
resp.table_list[0].view_definition.definer #=> String
resp.table_list[0].view_definition.sub_objects #=> Array
resp.table_list[0].view_definition.sub_objects[0] #=> String
resp.table_list[0].view_definition.representations #=> Array
resp.table_list[0].view_definition.representations[0].dialect #=> String, one of "REDSHIFT", "ATHENA", "SPARK"
resp.table_list[0].view_definition.representations[0].dialect_version #=> String
resp.table_list[0].view_definition.representations[0].view_original_text #=> String
resp.table_list[0].view_definition.representations[0].view_expanded_text #=> String
resp.table_list[0].view_definition.representations[0].validation_connection #=> String
resp.table_list[0].view_definition.representations[0].is_stale #=> Boolean
resp.table_list[0].is_multi_dialect_view #=> Boolean
resp.table_list[0].status.requested_by #=> String
resp.table_list[0].status.updated_by #=> String
resp.table_list[0].status.request_time #=> Time
resp.table_list[0].status.update_time #=> Time
resp.table_list[0].status.action #=> String, one of "UPDATE", "CREATE"
resp.table_list[0].status.state #=> String, one of "QUEUED", "IN_PROGRESS", "SUCCESS", "STOPPED", "FAILED"
resp.table_list[0].status.error.error_code #=> String
resp.table_list[0].status.error.error_message #=> String
resp.table_list[0].status.details.requested_change #=> Types::Table
resp.table_list[0].status.details.view_validations #=> Array
resp.table_list[0].status.details.view_validations[0].dialect #=> String, one of "REDSHIFT", "ATHENA", "SPARK"
resp.table_list[0].status.details.view_validations[0].dialect_version #=> String
resp.table_list[0].status.details.view_validations[0].view_validation_text #=> String
resp.table_list[0].status.details.view_validations[0].update_time #=> Time
resp.table_list[0].status.details.view_validations[0].state #=> String, one of "QUEUED", "IN_PROGRESS", "SUCCESS", "STOPPED", "FAILED"
resp.table_list[0].status.details.view_validations[0].error.error_code #=> String
resp.table_list[0].status.details.view_validations[0].error.error_message #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :catalog_id (String)

    A unique identifier, consisting of account_id.

  • :next_token (String)

    A continuation token, included if this is a continuation call.

  • :filters (Array<Types::PropertyPredicate>)

    A list of key-value pairs, and a comparator used to filter the search results. Returns all entities matching the predicate.

    The Comparator member of the PropertyPredicate struct is used only for time fields, and can be omitted for other field types. Also, when comparing string values, such as when Key=Name, a fuzzy match algorithm is used. The Key field (for example, the value of the Name field) is split on certain punctuation characters, for example, -, :, #, etc. into tokens. Then each token is exact-match compared with the Value member of PropertyPredicate. For example, if Key=Name and Value=link, tables named customer-link and xx-link-yy are returned, but xxlinkyy is not returned.

  • :search_text (String)

    A string used for a text search.

    Specifying a value in quotes filters based on an exact match to the value.

  • :sort_criteria (Array<Types::SortCriterion>)

    A list of criteria for sorting the results by a field name, in an ascending or descending order.

  • :max_results (Integer)

    The maximum number of tables to return in a single response.

  • :resource_share_type (String)

    Allows you to specify that you want to search the tables shared with your account. The allowable values are FOREIGN or ALL.

    • If set to FOREIGN, will search the tables shared with your account.

    • If set to ALL, will search the tables shared with your account, as well as the tables in your local account.

  • :include_status_details (Boolean)

    Specifies whether to include status details related to a request to create or update a Glue Data Catalog view.

Returns:

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 16901

def search_tables(params = {}, options = {})
  req = build_request(:search_tables, params)
  req.send_request(options)
end
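
Since the response is pageable and Enumerable, you can iterate over pages without handling next_token yourself; a small sketch:

# Sketch: enumerate every page of matches for a text search.
client.search_tables(search_text: "link", max_results: 100).each_page do |page|
  page.table_list.each do |table|
    puts "#{table.database_name}.#{table.name}"
  end
end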

#start_blueprint_run(params = {}) ⇒ Types::StartBlueprintRunResponse

Starts a new run of the specified blueprint.

Examples:

Request syntax with placeholder values


resp = client.start_blueprint_run({
  blueprint_name: "OrchestrationNameString", # required
  parameters: "BlueprintParameters",
  role_arn: "OrchestrationIAMRoleArn", # required
})

Response structure


resp.run_id #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :blueprint_name (required, String)

    The name of the blueprint.

  • :parameters (String)

    Specifies the parameters as a BlueprintParameters object.

  • :role_arn (required, String)

    Specifies the IAM role used to create the workflow.

Returns:

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 16937

def start_blueprint_run(params = {}, options = {})
  req = build_request(:start_blueprint_run, params)
  req.send_request(options)
end

#start_column_statistics_task_run(params = {}) ⇒ Types::StartColumnStatisticsTaskRunResponse

Starts a column statistics task run for a specified table and columns.

Examples:

Request syntax with placeholder values


resp = client.start_column_statistics_task_run({
  database_name: "NameString", # required
  table_name: "NameString", # required
  column_name_list: ["NameString"],
  role: "NameString", # required
  sample_size: 1.0,
  catalog_id: "NameString",
  security_configuration: "NameString",
})

Response structure


resp.column_statistics_task_run_id #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :database_name (required, String)

    The name of the database where the table resides.

  • :table_name (required, String)

    The name of the table for which to generate statistics.

  • :column_name_list (Array<String>)

    A list of the column names for which to generate statistics. If none is supplied, all column names for the table will be used by default.

  • :role (required, String)

    The IAM role that the service assumes to generate statistics.

  • :sample_size (Float)

    The percentage of rows used to generate statistics. If none is supplied, the entire table will be used to generate stats.

  • :catalog_id (String)

    The ID of the Data Catalog where the table resides. If none is supplied, the Amazon Web Services account ID is used by default.

  • :security_configuration (String)

    Name of the security configuration that is used to encrypt CloudWatch logs for the column stats task run.

Returns:

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 16994

def start_column_statistics_task_run(params = {}, options = {})
  req = build_request(:start_column_statistics_task_run, params)
  req.send_request(options)
end

#start_column_statistics_task_run_schedule(params = {}) ⇒ Struct

Starts a column statistics task run schedule.

Examples:

Request syntax with placeholder values


resp = client.start_column_statistics_task_run_schedule({
  database_name: "NameString", # required
  table_name: "NameString", # required
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :database_name (required, String)

    The name of the database where the table resides.

  • :table_name (required, String)

    The name of the table for which to start a column statistics task run schedule.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 17021

def start_column_statistics_task_run_schedule(params = {}, options = {})
  req = build_request(:start_column_statistics_task_run_schedule, params)
  req.send_request(options)
end

#start_crawler(params = {}) ⇒ Struct

Starts a crawl using the specified crawler, regardless of what is scheduled. If the crawler is already running, returns a CrawlerRunningException.

Examples:

Request syntax with placeholder values


resp = client.start_crawler({
  name: "NameString", # required
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :name (required, String)

    Name of the crawler to start.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 17049

def start_crawler(params = {}, options = {})
  req = build_request(:start_crawler, params)
  req.send_request(options)
end

#start_crawler_schedule(params = {}) ⇒ Struct

Changes the schedule state of the specified crawler to SCHEDULED, unless the crawler is already running or the schedule state is already SCHEDULED.

Examples:

Request syntax with placeholder values


resp = client.start_crawler_schedule({
  crawler_name: "NameString", # required
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :crawler_name (required, String)

    Name of the crawler to schedule.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 17073

def start_crawler_schedule(params = {}, options = {})
  req = build_request(:start_crawler_schedule, params)
  req.send_request(options)
end

#start_data_quality_rule_recommendation_run(params = {}) ⇒ Types::StartDataQualityRuleRecommendationRunResponse

Starts a recommendation run that is used to generate rules when you don't know what rules to write. Glue Data Quality analyzes the data and comes up with recommendations for a potential ruleset. You can then triage the ruleset and modify the generated ruleset to your liking.

Recommendation runs are automatically deleted after 90 days.

Examples:

Request syntax with placeholder values


resp = client.start_data_quality_rule_recommendation_run({
  data_source: { # required
    glue_table: { # required
      database_name: "NameString", # required
      table_name: "NameString", # required
      catalog_id: "NameString",
      connection_name: "NameString",
      additional_options: {
        "NameString" => "DescriptionString",
      },
    },
  },
  role: "RoleString", # required
  number_of_workers: 1,
  timeout: 1,
  created_ruleset_name: "NameString",
  data_quality_security_configuration: "NameString",
  client_token: "HashString",
})

Response structure


resp.run_id #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :data_source (required, Types::DataSource)

    The data source (Glue table) associated with this run.

  • :role (required, String)

    An IAM role supplied to encrypt the results of the run.

  • :number_of_workers (Integer)

    The number of G.1X workers to be used in the run. The default is 5.

  • :timeout (Integer)

    The timeout for a run in minutes. This is the maximum time that a run can consume resources before it is terminated and enters TIMEOUT status. The default is 2,880 minutes (48 hours).

  • :created_ruleset_name (String)

    A name for the ruleset.

  • :data_quality_security_configuration (String)

    The name of the security configuration created with the data quality encryption option.

  • :client_token (String)

    Used for idempotency and is recommended to be set to a random ID (such as a UUID) to avoid creating or starting multiple instances of the same resource.

Returns:

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 17146

def start_data_quality_rule_recommendation_run(params = {}, options = {})
  req = build_request(:start_data_quality_rule_recommendation_run, params)
  req.send_request(options)
end
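
A minimal sketch of the end-to-end flow: start the recommendation run, poll it, then read the generated ruleset. The database, table, and role names are hypothetical.

# Sketch: recommend rules for a table, then fetch the generated ruleset.
run_id = client.start_data_quality_rule_recommendation_run(
  data_source: { glue_table: { database_name: "sales", table_name: "orders" } },
  role: "arn:aws:iam::123456789012:role/GlueDataQualityRole", # hypothetical role
).run_id

run = nil
loop do
  run = client.get_data_quality_rule_recommendation_run(run_id: run_id)
  break unless %w[STARTING RUNNING].include?(run.status)
  sleep 10
end
puts run.recommended_ruleset if run.status == "SUCCEEDED"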

#start_data_quality_ruleset_evaluation_run(params = {}) ⇒ Types::StartDataQualityRulesetEvaluationRunResponse

Once you have a ruleset definition (either recommended or your own), you call this operation to evaluate the ruleset against a data source (Glue table). The evaluation computes results which you can retrieve with the GetDataQualityResult API.

Examples:

Request syntax with placeholder values


resp = client.start_data_quality_ruleset_evaluation_run({
  data_source: { # required
    glue_table: { # required
      database_name: "NameString", # required
      table_name: "NameString", # required
      catalog_id: "NameString",
      connection_name: "NameString",
      additional_options: {
        "NameString" => "DescriptionString",
      },
    },
  },
  role: "RoleString", # required
  number_of_workers: 1,
  timeout: 1,
  client_token: "HashString",
  additional_run_options: {
    cloud_watch_metrics_enabled: false,
    results_s3_prefix: "UriString",
    composite_rule_evaluation_method: "COLUMN", # accepts COLUMN, ROW
  },
  ruleset_names: ["NameString"], # required
  additional_data_sources: {
    "NameString" => {
      glue_table: { # required
        database_name: "NameString", # required
        table_name: "NameString", # required
        catalog_id: "NameString",
        connection_name: "NameString",
        additional_options: {
          "NameString" => "DescriptionString",
        },
      },
    },
  },
})

Response structure


resp.run_id #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :data_source (required, Types::DataSource)

    The data source (Glue table) associated with this run.

  • :role (required, String)

    An IAM role supplied to encrypt the results of the run.

  • :number_of_workers (Integer)

    The number of G.1X workers to be used in the run. The default is 5.

  • :timeout (Integer)

    The timeout for a run in minutes. This is the maximum time that a run can consume resources before it is terminated and enters TIMEOUT status. The default is 2,880 minutes (48 hours).

  • :client_token (String)

    Used for idempotency and is recommended to be set to a random ID (such as a UUID) to avoid creating or starting multiple instances of the same resource.

  • :additional_run_options (Types::DataQualityEvaluationRunAdditionalRunOptions)

    Additional run options you can specify for an evaluation run.

  • :ruleset_names (required, Array<String>)

    A list of ruleset names.

  • :additional_data_sources (Hash<String,Types::DataSource>)

    A map of reference strings to additional data sources you can specify for an evaluation run.

Returns:

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 17236

def start_data_quality_ruleset_evaluation_run(params = {}, options = {})
  req = build_request(:start_data_quality_ruleset_evaluation_run, params)
  req.send_request(options)
end
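
A sketch of retrieving results: poll the evaluation run for its result IDs, then call GetDataQualityResult for each (names again hypothetical):

# Sketch: evaluate a ruleset and print each rule's outcome.
run_id = client.start_data_quality_ruleset_evaluation_run(
  data_source: { glue_table: { database_name: "sales", table_name: "orders" } },
  role: "arn:aws:iam::123456789012:role/GlueDataQualityRole", # hypothetical role
  ruleset_names: ["orders-ruleset"],
).run_id

run = nil
loop do
  run = client.get_data_quality_ruleset_evaluation_run(run_id: run_id)
  break unless %w[STARTING RUNNING].include?(run.status)
  sleep 10
end

run.result_ids.to_a.each do |result_id|
  result = client.get_data_quality_result(result_id: result_id)
  result.rule_results.each { |r| puts "#{r.name}: #{r.result}" }
end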

#start_export_labels_task_run(params = {}) ⇒ Types::StartExportLabelsTaskRunResponse

Begins an asynchronous task to export all labeled data for a particular transform. This task is the only label-related API call that is not part of the typical active learning workflow. You typically use StartExportLabelsTaskRun when you want to work with all of your existing labels at the same time, such as when you want to remove or change labels that were previously submitted as truth. This API operation accepts the TransformId whose labels you want to export and an Amazon Simple Storage Service (Amazon S3) path to export the labels to. The operation returns a TaskRunId. You can check on the status of your task run by calling the GetMLTaskRun API.

Examples:

Request syntax with placeholder values


resp = client.start_export_labels_task_run({
  transform_id: "HashString", # required
  output_s3_path: "UriString", # required
})

Response structure


resp.task_run_id #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :transform_id (required, String)

    The unique identifier of the machine learning transform.

  • :output_s3_path (required, String)

    The Amazon S3 path where you export the labels.

Returns:

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 17277

def start_export_labels_task_run(params = {}, options = {})
  req = build_request(:start_export_labels_task_run, params)
  req.send_request(options)
end

#start_import_labels_task_run(params = {}) ⇒ Types::StartImportLabelsTaskRunResponse

Enables you to provide additional labels (examples of truth) to be used to teach the machine learning transform and improve its quality. This API operation is generally used as part of the active learning workflow that starts with the StartMLLabelingSetGenerationTaskRun call and that ultimately results in improving the quality of your machine learning transform.

After the StartMLLabelingSetGenerationTaskRun finishes, Glue machine learning will have generated a series of questions for humans to answer. (Answering these questions is often called 'labeling' in machine learning workflows.) In the case of the FindMatches transform, these questions are of the form, “What is the correct way to group these rows together into groups composed entirely of matching records?” After the labeling process is finished, users upload their answers/labels with a call to StartImportLabelsTaskRun. After StartImportLabelsTaskRun finishes, all future runs of the machine learning transform use the new and improved labels and perform a higher-quality transformation.

By default, StartImportLabelsTaskRun continually learns from and combines all labels that you upload unless you set Replace to true. If you set Replace to true, StartImportLabelsTaskRun deletes and forgets all previously uploaded labels and learns only from the exact set that you upload. Replacing labels can be helpful if you realize that you previously uploaded incorrect labels, and you believe that they are having a negative effect on your transform quality.

You can check on the status of your task run by calling the GetMLTaskRun operation.

Examples:

Request syntax with placeholder values


resp = client.start_import_labels_task_run({
  transform_id: "HashString", # required
  input_s3_path: "UriString", # required
  replace_all_labels: false,
})

Response structure


resp.task_run_id #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :transform_id (required, String)

    The unique identifier of the machine learning transform.

  • :input_s3_path (required, String)

    The Amazon Simple Storage Service (Amazon S3) path from where you import the labels.

  • :replace_all_labels (Boolean)

    Indicates whether to overwrite your existing labels.

Returns:

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 17343

def start_import_labels_task_run(params = {}, options = {})
  req = build_request(:start_import_labels_task_run, params)
  req.send_request(options)
end
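
The active learning loop described above, condensed into a hedged sketch (transform ID and S3 paths are hypothetical):

# Sketch of the active learning workflow for a FindMatches transform.
transform_id = "tfm-1234567890abcdef" # hypothetical transform
client.start_ml_labeling_set_generation_task_run(
  transform_id: transform_id,
  output_s3_path: "s3://my-bucket/labeling-sets/", # hypothetical bucket
)
# ... humans answer the generated questions and produce a labels file ...

imp = client.start_import_labels_task_run(
  transform_id: transform_id,
  input_s3_path: "s3://my-bucket/labels/answers.csv", # hypothetical path
  replace_all_labels: false, # combine with previously uploaded labels
)

status = client.get_ml_task_run(transform_id: transform_id,
                                task_run_id: imp.task_run_id).status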

#start_job_run(params = {}) ⇒ Types::StartJobRunResponse

Starts a job run using a job definition.

Examples:

Request syntax with placeholder values


resp = client.start_job_run({
  job_name: "NameString", # required
  job_run_queuing_enabled: false,
  job_run_id: "IdString",
  arguments: {
    "GenericString" => "GenericString",
  },
  allocated_capacity: 1,
  timeout: 1,
  max_capacity: 1.0,
  security_configuration: "NameString",
  notification_property: {
    notify_delay_after: 1,
  },
  worker_type: "Standard", # accepts Standard, G.1X, G.2X, G.025X, G.4X, G.8X, Z.2X
  number_of_workers: 1,
  execution_class: "FLEX", # accepts FLEX, STANDARD
})

Response structure


resp.job_run_id #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :job_name (required, String)

    The name of the job definition to use.

  • :job_run_queuing_enabled (Boolean)

    Specifies whether job run queuing is enabled for the job run.

    A value of true means job run queuing is enabled for the job run. If false or not populated, the job run will not be considered for queuing.

  • :job_run_id (String)

    The ID of a previous JobRun to retry.

  • :arguments (Hash<String,String>)

    The job arguments associated with this run. For this job run, they replace the default arguments set in the job definition itself.

    You can specify arguments here that your own job-execution script consumes, as well as arguments that Glue itself consumes.

    Job arguments may be logged. Do not pass plaintext secrets as arguments. Retrieve secrets from a Glue Connection, Secrets Manager or other secret management mechanism if you intend to keep them within the Job.

    For information about how to specify and consume your own Job arguments, see the Calling Glue APIs in Python topic in the developer guide.

    For information about the arguments you can provide to this field when configuring Spark jobs, see the Special Parameters Used by Glue topic in the developer guide.

    For information about the arguments you can provide to this field when configuring Ray jobs, see Using job parameters in Ray jobs in the developer guide.

  • :allocated_capacity (Integer)

    This field is deprecated. Use MaxCapacity instead.

    The number of Glue data processing units (DPUs) to allocate to this JobRun. You can allocate a minimum of 2 DPUs; the default is 10. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.

  • :timeout (Integer)

    The JobRun timeout in minutes. This is the maximum time that a job run can consume resources before it is terminated and enters TIMEOUT status. This value overrides the timeout value set in the parent job.

    Jobs must have timeout values less than 7 days or 10080 minutes. Otherwise, the jobs will throw an exception.

    When the value is left blank, the timeout is defaulted to 2880 minutes.

    Any existing Glue jobs that had a timeout value greater than 7 days will be defaulted to 7 days. For instance, if you have specified a timeout of 20 days for a batch job, it will be stopped on the 7th day.

    For streaming jobs, if you have set up a maintenance window, it will be restarted during the maintenance window after 7 days.

  • :max_capacity (Float)

    For Glue version 1.0 or earlier jobs, using the standard worker type, the number of Glue data processing units (DPUs) that can be allocated when this job runs. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.

    For Glue version 2.0+ jobs, you cannot specify a Maximum capacity. Instead, you should specify a Worker type and the Number of workers.

    Do not set MaxCapacity if using WorkerType and NumberOfWorkers.

    The value that can be allocated for MaxCapacity depends on whether you are running a Python shell job, an Apache Spark ETL job, or an Apache Spark streaming ETL job:

    • When you specify a Python shell job (JobCommand.Name="pythonshell"), you can allocate either 0.0625 or 1 DPU. The default is 0.0625 DPU.

    • When you specify an Apache Spark ETL job (JobCommand.Name="glueetl") or Apache Spark streaming ETL job (JobCommand.Name="gluestreaming"), you can allocate from 2 to 100 DPUs. The default is 10 DPUs. This job type cannot have a fractional DPU allocation.

  • :security_configuration (String)

    The name of the SecurityConfiguration structure to be used with this job run.

  • :notification_property (Types::NotificationProperty)

    Specifies configuration properties of a job run notification.

  • :worker_type (String)

    The type of predefined worker that is allocated when a job runs. Accepts a value of G.1X, G.2X, G.4X, G.8X or G.025X for Spark jobs. Accepts the value Z.2X for Ray jobs.

    • For the G.1X worker type, each worker maps to 1 DPU (4 vCPUs, 16 GB of memory) with 94GB disk, and provides 1 executor per worker. We recommend this worker type for workloads such as data transforms, joins, and queries, as it offers a scalable and cost-effective way to run most jobs.

    • For the G.2X worker type, each worker maps to 2 DPU (8 vCPUs, 32 GB of memory) with 138GB disk, and provides 1 executor per worker. We recommend this worker type for workloads such as data transforms, joins, and queries, as it offers a scalable and cost-effective way to run most jobs.

    • For the G.4X worker type, each worker maps to 4 DPU (16 vCPUs, 64 GB of memory) with 256GB disk, and provides 1 executor per worker. We recommend this worker type for jobs whose workloads contain your most demanding transforms, aggregations, joins, and queries. This worker type is available only for Glue version 3.0 or later Spark ETL jobs in the following Amazon Web Services Regions: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm).

    • For the G.8X worker type, each worker maps to 8 DPU (32 vCPUs, 128 GB of memory) with 512GB disk, and provides 1 executor per worker. We recommend this worker type for jobs whose workloads contain your most demanding transforms, aggregations, joins, and queries. This worker type is available only for Glue version 3.0 or later Spark ETL jobs, in the same Amazon Web Services Regions as supported for the G.4X worker type.

    • For the G.025X worker type, each worker maps to 0.25 DPU (2 vCPUs, 4 GB of memory) with 84GB disk, and provides 1 executor per worker. We recommend this worker type for low volume streaming jobs. This worker type is only available for Glue version 3.0 or later streaming jobs.

    • For the Z.2X worker type, each worker maps to 2 M-DPU (8 vCPUs, 64 GB of memory) with 128 GB disk, and provides up to 8 Ray workers based on the autoscaler.

  • :number_of_workers (Integer)

    The number of workers of a defined workerType that are allocated when a job runs.

  • :execution_class (String)

    Indicates whether the job is run with a standard or flexible execution class. The standard execution class is ideal for time-sensitive workloads that require fast job startup and dedicated resources.

    The flexible execution class is appropriate for time-insensitive jobs whose start and completion times may vary.

    Only jobs with Glue version 3.0 and above and command type glueetl will be allowed to set ExecutionClass to FLEX. The flexible execution class is available for Spark jobs.

Returns:

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 17557

def start_job_run(params = {}, options = {})
  req = build_request(:start_job_run, params)
  req.send_request(options)
end
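
For Glue 2.0+ jobs, the worker-type form is the one to use (MaxCapacity is the legacy knob); a sketch with hypothetical names, followed by polling GetJobRun:

# Sketch: start a Spark job with workers instead of max_capacity, then poll it.
run_id = client.start_job_run(
  job_name: "nightly-etl-job",                    # hypothetical job
  worker_type: "G.1X",
  number_of_workers: 10,
  arguments: { "--target_date" => "2024-01-01" }, # consumed by the job script
).job_run_id

state = nil
loop do
  resp = client.get_job_run(job_name: "nightly-etl-job", run_id: run_id)
  state = resp.job_run.job_run_state
  break unless %w[STARTING RUNNING STOPPING WAITING].include?(state)
  sleep 30
end
puts "job finished: #{state}"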

#start_ml_evaluation_task_run(params = {}) ⇒ Types::StartMLEvaluationTaskRunResponse

Starts a task to estimate the quality of the transform.

When you provide label sets as examples of truth, Glue machine learning uses some of those examples to learn from them. The rest of the labels are used as a test to estimate quality.

Returns a unique identifier for the run. You can call GetMLTaskRun to get more information about the status of the EvaluationTaskRun.

Examples:

Request syntax with placeholder values


resp = client.start_ml_evaluation_task_run({
  transform_id: "HashString", # required
})

Response structure


resp.task_run_id #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :transform_id (required, String)

    The unique identifier of the machine learning transform.

Returns:

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 17592

def start_ml_evaluation_task_run(params = {}, options = {})
  req = build_request(:start_ml_evaluation_task_run, params)
  req.send_request(options)
end

#start_ml_labeling_set_generation_task_run(params = {}) ⇒ Types::StartMLLabelingSetGenerationTaskRunResponse

Starts the active learning workflow for your machine learning transform to improve the transform's quality by generating label sets and adding labels.

When the StartMLLabelingSetGenerationTaskRun finishes, Glue will have generated a "labeling set" or a set of questions for humans to answer.

In the case of the FindMatches transform, these questions are of the form, “What is the correct way to group these rows together into groups composed entirely of matching records?”

After the labeling process is finished, you can upload your labels with a call to StartImportLabelsTaskRun. After StartImportLabelsTaskRun finishes, all future runs of the machine learning transform will use the new and improved labels and perform a higher-quality transformation.

Examples:

Request syntax with placeholder values


resp = client.start_ml_labeling_set_generation_task_run({
  transform_id: "HashString", # required
  output_s3_path: "UriString", # required
})

Response structure


resp.task_run_id #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :transform_id (required, String)

    The unique identifier of the machine learning transform.

  • :output_s3_path (required, String)

    The Amazon Simple Storage Service (Amazon S3) path where you generate the labeling set.

Returns:

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 17641

def start_ml_labeling_set_generation_task_run(params = {}, options = {})
  req = build_request(:start_ml_labeling_set_generation_task_run, params)
  req.send_request(options)
end

#start_trigger(params = {}) ⇒ Types::StartTriggerResponse

Starts an existing trigger. See Triggering Jobs for information about how different types of triggers are started.

Examples:

Request syntax with placeholder values


resp = client.start_trigger({
  name: "NameString", # required
})

Response structure


resp.name #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :name (required, String)

    The name of the trigger to start.

Returns:

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 17674

def start_trigger(params = {}, options = {})
  req = build_request(:start_trigger, params)
  req.send_request(options)
end

#start_workflow_run(params = {}) ⇒ Types::StartWorkflowRunResponse

Starts a new run of the specified workflow.

Examples:

Request syntax with placeholder values


resp = client.start_workflow_run({
  name: "NameString", # required
  run_properties: {
    "IdString" => "GenericString",
  },
})

Response structure


resp.run_id #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :name (required, String)

    The name of the workflow to start.

  • :run_properties (Hash<String,String>)

    The workflow run properties for the new workflow run.

    Run properties may be logged. Do not pass plaintext secrets as properties. Retrieve secrets from a Glue Connection, Amazon Web Services Secrets Manager or other secret management mechanism if you intend to use them within the workflow run.

Returns:

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 17713

def start_workflow_run(params = {}, options = {})
  req = build_request(:start_workflow_run, params)
  req.send_request(options)
end

#stop_column_statistics_task_run(params = {}) ⇒ Struct

Stops a task run for the specified table.

Examples:

Request syntax with placeholder values


resp = client.stop_column_statistics_task_run({
  database_name: "DatabaseName", # required
  table_name: "NameString", # required
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :database_name (required, String)

    The name of the database where the table resides.

  • :table_name (required, String)

    The name of the table.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 17739

def stop_column_statistics_task_run(params = {}, options = {})
  req = build_request(:stop_column_statistics_task_run, params)
  req.send_request(options)
end

#stop_column_statistics_task_run_schedule(params = {}) ⇒ Struct

Stops a column statistics task run schedule.

Examples:

Request syntax with placeholder values


resp = client.stop_column_statistics_task_run_schedule({
  database_name: "NameString", # required
  table_name: "NameString", # required
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :database_name (required, String)

    The name of the database where the table resides.

  • :table_name (required, String)

    The name of the table for which to stop a column statistics task run schedule.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 17766

def stop_column_statistics_task_run_schedule(params = {}, options = {})
  req = build_request(:stop_column_statistics_task_run_schedule, params)
  req.send_request(options)
end

#stop_crawler(params = {}) ⇒ Struct

If the specified crawler is running, stops the crawl.

Examples:

Request syntax with placeholder values


resp = client.stop_crawler({
  name: "NameString", # required
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :name (required, String)

    Name of the crawler to stop.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 17788

def stop_crawler(params = {}, options = {})
  req = build_request(:stop_crawler, params)
  req.send_request(options)
end

#stop_crawler_schedule(params = {}) ⇒ Struct

Sets the schedule state of the specified crawler to NOT_SCHEDULED, but does not stop the crawler if it is already running.

Examples:

Request syntax with placeholder values


resp = client.stop_crawler_schedule({
  crawler_name: "NameString", # required
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :crawler_name (required, String)

    Name of the crawler whose schedule state to set.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 17811

def stop_crawler_schedule(params = {}, options = {})
  req = build_request(:stop_crawler_schedule, params)
  req.send_request(options)
end

#stop_session(params = {}) ⇒ Types::StopSessionResponse

Stops the session.

Examples:

Request syntax with placeholder values


resp = client.stop_session({
  id: "NameString", # required
  request_origin: "OrchestrationNameString",
})

Response structure


resp.id #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :id (required, String)

    The ID of the session to be stopped.

  • :request_origin (String)

    The origin of the request.

Returns:

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 17843

def stop_session(params = {}, options = {})
  req = build_request(:stop_session, params)
  req.send_request(options)
end

#stop_trigger(params = {}) ⇒ Types::StopTriggerResponse

Stops a specified trigger.

Examples:

Request syntax with placeholder values


resp = client.stop_trigger({
  name: "NameString", # required
})

Response structure


resp.name #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :name (required, String)

    The name of the trigger to stop.

Returns:

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 17871

def stop_trigger(params = {}, options = {})
  req = build_request(:stop_trigger, params)
  req.send_request(options)
end

#stop_workflow_run(params = {}) ⇒ Struct

Stops the execution of the specified workflow run.

Examples:

Request syntax with placeholder values


resp = client.stop_workflow_run({
  name: "NameString", # required
  run_id: "IdString", # required
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :name (required, String)

    The name of the workflow to stop.

  • :run_id (required, String)

    The ID of the workflow run to stop.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 17897

def stop_workflow_run(params = {}, options = {})
  req = build_request(:stop_workflow_run, params)
  req.send_request(options)
end

#tag_resource(params = {}) ⇒ Struct

Adds tags to a resource. A tag is a label you can assign to an Amazon Web Services resource. In Glue, you can tag only certain resources. For information about what resources you can tag, see Amazon Web Services Tags in Glue.

Examples:

Request syntax with placeholder values


resp = client.tag_resource({
  resource_arn: "GlueResourceArn", # required
  tags_to_add: { # required
    "TagKey" => "TagValue",
  },
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :resource_arn (required, String)

    The ARN of the Glue resource to which to add the tags. For more information about Glue resource ARNs, see the Glue ARN string pattern.

  • :tags_to_add (required, Hash<String,String>)

    Tags to add to this resource.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 17938

def tag_resource(params = {}, options = {})
  req = build_request(:tag_resource, params)
  req.send_request(options)
end

#test_connection(params = {}) ⇒ Struct

Tests a connection to a service to validate the service credentials that you provide.

You can either provide the name of an existing connection, or a TestConnectionInput to test a connection that has not been created yet. Providing both at the same time will cause an error.

If the action is successful, the service sends back an HTTP 200 response.

Examples:

Request syntax with placeholder values


resp = client.test_connection({
  connection_name: "NameString",
  catalog_id: "CatalogIdString",
  test_connection_input: {
    connection_type: "JDBC", # required, accepts JDBC, SFTP, MONGODB, KAFKA, NETWORK, MARKETPLACE, CUSTOM, SALESFORCE, VIEW_VALIDATION_REDSHIFT, VIEW_VALIDATION_ATHENA, GOOGLEADS, GOOGLESHEETS, GOOGLEANALYTICS4, SERVICENOW, MARKETO, SAPODATA, ZENDESK, JIRACLOUD, NETSUITEERP, HUBSPOT, FACEBOOKADS, INSTAGRAMADS, ZOHOCRM, SALESFORCEPARDOT, SALESFORCEMARKETINGCLOUD, SLACK, STRIPE, INTERCOM, SNAPCHATADS
    connection_properties: { # required
      "HOST" => "ValueString",
    },
    authentication_configuration: {
      authentication_type: "BASIC", # accepts BASIC, OAUTH2, CUSTOM, IAM
      o_auth_2_properties: {
        o_auth_2_grant_type: "AUTHORIZATION_CODE", # accepts AUTHORIZATION_CODE, CLIENT_CREDENTIALS, JWT_BEARER
        o_auth_2_client_application: {
          user_managed_client_application_client_id: "UserManagedClientApplicationClientId",
          aws_managed_client_application_reference: "AWSManagedClientApplicationReference",
        },
        token_url: "TokenUrl",
        token_url_parameters_map: {
          "TokenUrlParameterKey" => "TokenUrlParameterValue",
        },
        authorization_code_properties: {
          authorization_code: "AuthorizationCode",
          redirect_uri: "RedirectUri",
        },
        o_auth_2_credentials: {
          user_managed_client_application_client_secret: "UserManagedClientApplicationClientSecret",
          access_token: "AccessToken",
          refresh_token: "RefreshToken",
          jwt_token: "JwtToken",
        },
      },
      secret_arn: "SecretArn",
      kms_key_arn: "KmsKeyArn",
      basic_authentication_credentials: {
        username: "Username",
        password: "Password",
      },
      custom_authentication_credentials: {
        "CredentialKey" => "CredentialValue",
      },
    },
  },
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :connection_name (String)

    Optional. The name of the connection to test. If only a name is provided, the operation retrieves the connection and uses it for testing.

  • :catalog_id (String)

    The catalog ID where the connection resides.

  • :test_connection_input (Types::TestConnectionInput)

    A structure that is used to specify testing a connection to a service.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 18016

def test_connection(params = {}, options = {})
  req = build_request(:test_connection, params)
  req.send_request(options)
end
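
Because connection_name and test_connection_input are mutually exclusive, a sketch of the two call shapes (connection name and property values are hypothetical):

# Variant 1: test a connection that already exists, by name.
client.test_connection(connection_name: "my-jdbc-connection")

# Variant 2: test properties for a connection that has not been created yet.
# Do not combine this with :connection_name; that causes an error.
client.test_connection(
  test_connection_input: {
    connection_type: "JDBC",
    connection_properties: {
      "HOST" => "db.example.com",
      "PORT" => "5432",
    },
  }
)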

#untag_resource(params = {}) ⇒ Struct

Removes tags from a resource.

Examples:

Request syntax with placeholder values


resp = client.untag_resource({
  resource_arn: "GlueResourceArn", # required
  tags_to_remove: ["TagKey"], # required
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :resource_arn (required, String)

    The Amazon Resource Name (ARN) of the resource from which to remove the tags.

  • :tags_to_remove (required, Array<String>)

    Tags to remove from this resource.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 18043

def untag_resource(params = {}, options = {})
  req = build_request(:untag_resource, params)
  req.send_request(options)
end

#update_blueprint(params = {}) ⇒ Types::UpdateBlueprintResponse

Updates a registered blueprint.

Examples:

Request syntax with placeholder values


resp = client.update_blueprint({
  name: "OrchestrationNameString", # required
  description: "Generic512CharString",
  blueprint_location: "OrchestrationS3Location", # required
})

Response structure


resp.name #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :name (required, String)

    The name of the blueprint.

  • :description (String)

    A description of the blueprint.

  • :blueprint_location (required, String)

    Specifies a path in Amazon S3 where the blueprint is published.

Returns:

  • (Types::UpdateBlueprintResponse)

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 18079

def update_blueprint(params = {}, options = {})
  req = build_request(:update_blueprint, params)
  req.send_request(options)
end

#update_catalog(params = {}) ⇒ Struct

Updates an existing catalog's properties in the Glue Data Catalog.

Examples:

Request syntax with placeholder values


resp = client.update_catalog({
  catalog_id: "CatalogIdString", # required
  catalog_input: { # required
    description: "DescriptionString",
    federated_catalog: {
      identifier: "FederationIdentifier",
      connection_name: "NameString",
    },
    parameters: {
      "KeyString" => "ParametersMapValue",
    },
    target_redshift_catalog: {
      catalog_arn: "ResourceArnString", # required
    },
    catalog_properties: {
      data_lake_access_properties: {
        data_lake_access: false,
        data_transfer_role: "IAMRoleArn",
        kms_key: "ResourceArnString",
        catalog_type: "NameString",
      },
      custom_properties: {
        "KeyString" => "ParametersMapValue",
      },
    },
    create_table_default_permissions: [
      {
        principal: {
          data_lake_principal_identifier: "DataLakePrincipalString",
        },
        permissions: ["ALL"], # accepts ALL, SELECT, ALTER, DROP, DELETE, INSERT, CREATE_DATABASE, CREATE_TABLE, DATA_LOCATION_ACCESS
      },
    ],
    create_database_default_permissions: [
      {
        principal: {
          data_lake_principal_identifier: "DataLakePrincipalString",
        },
        permissions: ["ALL"], # accepts ALL, SELECT, ALTER, DROP, DELETE, INSERT, CREATE_DATABASE, CREATE_TABLE, DATA_LOCATION_ACCESS
      },
    ],
  },
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :catalog_id (required, String)

    The ID of the catalog.

  • :catalog_input (required, Types::CatalogInput)

    A CatalogInput object specifying the new properties of an existing catalog.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 18145

def update_catalog(params = {}, options = {})
  req = build_request(:update_catalog, params)
  req.send_request(options)
end

#update_classifier(params = {}) ⇒ Struct

Modifies an existing classifier (a GrokClassifier, an XMLClassifier, a JsonClassifier, or a CsvClassifier, depending on which field is present).

Examples:

Request syntax with placeholder values


resp = client.update_classifier({
  grok_classifier: {
    name: "NameString", # required
    classification: "Classification",
    grok_pattern: "GrokPattern",
    custom_patterns: "CustomPatterns",
  },
  xml_classifier: {
    name: "NameString", # required
    classification: "Classification",
    row_tag: "RowTag",
  },
  json_classifier: {
    name: "NameString", # required
    json_path: "JsonPath",
  },
  csv_classifier: {
    name: "NameString", # required
    delimiter: "CsvColumnDelimiter",
    quote_symbol: "CsvQuoteSymbol",
    contains_header: "UNKNOWN", # accepts UNKNOWN, PRESENT, ABSENT
    header: ["NameString"],
    disable_value_trimming: false,
    allow_single_column: false,
    custom_datatype_configured: false,
    custom_datatypes: ["NameString"],
    serde: "OpenCSVSerDe", # accepts OpenCSVSerDe, LazySimpleSerDe, None
  },
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :grok_classifier (Types::UpdateGrokClassifierRequest)

    A GrokClassifier object with updated fields.

  • :xml_classifier (Types::UpdateXMLClassifierRequest)

    An XMLClassifier object with updated fields.

  • :json_classifier (Types::UpdateJsonClassifierRequest)

    A JsonClassifier object with updated fields.

  • :csv_classifier (Types::UpdateCsvClassifierRequest)

    A CsvClassifier object with updated fields.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 18204

def update_classifier(params = {}, options = {})
  req = build_request(:update_classifier, params)
  req.send_request(options)
end
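
Since only the field matching the classifier's type should be present, a sketch that updates just a CSV classifier's delimiter (classifier name hypothetical):

# Only :csv_classifier is passed; the grok/xml/json fields are omitted.
client.update_classifier(
  csv_classifier: {
    name: "my-csv-classifier",
    delimiter: "|",
  }
)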

#update_column_statistics_for_partition(params = {}) ⇒ Types::UpdateColumnStatisticsForPartitionResponse

Creates or updates partition statistics of columns.

The Identity and Access Management (IAM) permission required for this operation is UpdatePartition.

Examples:

Request syntax with placeholder values


resp = client.update_column_statistics_for_partition({
  catalog_id: "CatalogIdString",
  database_name: "NameString", # required
  table_name: "NameString", # required
  partition_values: ["ValueString"], # required
  column_statistics_list: [ # required
    {
      column_name: "NameString", # required
      column_type: "TypeString", # required
      analyzed_time: Time.now, # required
      statistics_data: { # required
        type: "BOOLEAN", # required, accepts BOOLEAN, DATE, DECIMAL, DOUBLE, LONG, STRING, BINARY
        boolean_column_statistics_data: {
          number_of_trues: 1, # required
          number_of_falses: 1, # required
          number_of_nulls: 1, # required
        },
        date_column_statistics_data: {
          minimum_value: Time.now,
          maximum_value: Time.now,
          number_of_nulls: 1, # required
          number_of_distinct_values: 1, # required
        },
        decimal_column_statistics_data: {
          minimum_value: {
            unscaled_value: "data", # required
            scale: 1, # required
          },
          maximum_value: {
            unscaled_value: "data", # required
            scale: 1, # required
          },
          number_of_nulls: 1, # required
          number_of_distinct_values: 1, # required
        },
        double_column_statistics_data: {
          minimum_value: 1.0,
          maximum_value: 1.0,
          number_of_nulls: 1, # required
          number_of_distinct_values: 1, # required
        },
        long_column_statistics_data: {
          minimum_value: 1,
          maximum_value: 1,
          number_of_nulls: 1, # required
          number_of_distinct_values: 1, # required
        },
        string_column_statistics_data: {
          maximum_length: 1, # required
          average_length: 1.0, # required
          number_of_nulls: 1, # required
          number_of_distinct_values: 1, # required
        },
        binary_column_statistics_data: {
          maximum_length: 1, # required
          average_length: 1.0, # required
          number_of_nulls: 1, # required
        },
      },
    },
  ],
})

Response structure


resp.errors #=> Array
resp.errors[0].column_statistics.column_name #=> String
resp.errors[0].column_statistics.column_type #=> String
resp.errors[0].column_statistics.analyzed_time #=> Time
resp.errors[0].column_statistics.statistics_data.type #=> String, one of "BOOLEAN", "DATE", "DECIMAL", "DOUBLE", "LONG", "STRING", "BINARY"
resp.errors[0].column_statistics.statistics_data.boolean_column_statistics_data.number_of_trues #=> Integer
resp.errors[0].column_statistics.statistics_data.boolean_column_statistics_data.number_of_falses #=> Integer
resp.errors[0].column_statistics.statistics_data.boolean_column_statistics_data.number_of_nulls #=> Integer
resp.errors[0].column_statistics.statistics_data.date_column_statistics_data.minimum_value #=> Time
resp.errors[0].column_statistics.statistics_data.date_column_statistics_data.maximum_value #=> Time
resp.errors[0].column_statistics.statistics_data.date_column_statistics_data.number_of_nulls #=> Integer
resp.errors[0].column_statistics.statistics_data.date_column_statistics_data.number_of_distinct_values #=> Integer
resp.errors[0].column_statistics.statistics_data.decimal_column_statistics_data.minimum_value.unscaled_value #=> String
resp.errors[0].column_statistics.statistics_data.decimal_column_statistics_data.minimum_value.scale #=> Integer
resp.errors[0].column_statistics.statistics_data.decimal_column_statistics_data.maximum_value.unscaled_value #=> String
resp.errors[0].column_statistics.statistics_data.decimal_column_statistics_data.maximum_value.scale #=> Integer
resp.errors[0].column_statistics.statistics_data.decimal_column_statistics_data.number_of_nulls #=> Integer
resp.errors[0].column_statistics.statistics_data.decimal_column_statistics_data.number_of_distinct_values #=> Integer
resp.errors[0].column_statistics.statistics_data.double_column_statistics_data.minimum_value #=> Float
resp.errors[0].column_statistics.statistics_data.double_column_statistics_data.maximum_value #=> Float
resp.errors[0].column_statistics.statistics_data.double_column_statistics_data.number_of_nulls #=> Integer
resp.errors[0].column_statistics.statistics_data.double_column_statistics_data.number_of_distinct_values #=> Integer
resp.errors[0].column_statistics.statistics_data.long_column_statistics_data.minimum_value #=> Integer
resp.errors[0].column_statistics.statistics_data.long_column_statistics_data.maximum_value #=> Integer
resp.errors[0].column_statistics.statistics_data.long_column_statistics_data.number_of_nulls #=> Integer
resp.errors[0].column_statistics.statistics_data.long_column_statistics_data.number_of_distinct_values #=> Integer
resp.errors[0].column_statistics.statistics_data.string_column_statistics_data.maximum_length #=> Integer
resp.errors[0].column_statistics.statistics_data.string_column_statistics_data.average_length #=> Float
resp.errors[0].column_statistics.statistics_data.string_column_statistics_data.number_of_nulls #=> Integer
resp.errors[0].column_statistics.statistics_data.string_column_statistics_data.number_of_distinct_values #=> Integer
resp.errors[0].column_statistics.statistics_data.binary_column_statistics_data.maximum_length #=> Integer
resp.errors[0].column_statistics.statistics_data.binary_column_statistics_data.average_length #=> Float
resp.errors[0].column_statistics.statistics_data.binary_column_statistics_data.number_of_nulls #=> Integer
resp.errors[0].error.error_code #=> String
resp.errors[0].error.error_message #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :catalog_id (String)

    The ID of the Data Catalog where the partitions in question reside. If none is supplied, the Amazon Web Services account ID is used by default.

  • :database_name (required, String)

    The name of the catalog database where the partitions reside.

  • :table_name (required, String)

    The name of the partitions' table.

  • :partition_values (required, Array<String>)

    A list of partition values identifying the partition.

  • :column_statistics_list (required, Array<Types::ColumnStatistics>)

    A list of the column statistics.

Returns:

  • (Types::UpdateColumnStatisticsForPartitionResponse)

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 18342

def update_column_statistics_for_partition(params = {}, options = {})
  req = build_request(:update_column_statistics_for_partition, params)
  req.send_request(options)
end
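
The operation reports per-column failures in the response's errors list rather than raising, so it is worth inspecting after the call. A sketch, assuming resp is the response from the request above:

resp.errors.each do |err|
  # Each entry pairs the rejected ColumnStatistics with an ErrorDetail.
  warn "#{err.column_statistics.column_name}: " \
       "#{err.error.error_code} - #{err.error.error_message}"
end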

#update_column_statistics_for_table(params = {}) ⇒ Types::UpdateColumnStatisticsForTableResponse

Creates or updates table statistics of columns.

The Identity and Access Management (IAM) permission required for this operation is UpdateTable.

Examples:

Request syntax with placeholder values


resp = client.update_column_statistics_for_table({
  catalog_id: "CatalogIdString",
  database_name: "NameString", # required
  table_name: "NameString", # required
  column_statistics_list: [ # required
    {
      column_name: "NameString", # required
      column_type: "TypeString", # required
      analyzed_time: Time.now, # required
      statistics_data: { # required
        type: "BOOLEAN", # required, accepts BOOLEAN, DATE, DECIMAL, DOUBLE, LONG, STRING, BINARY
        boolean_column_statistics_data: {
          number_of_trues: 1, # required
          number_of_falses: 1, # required
          number_of_nulls: 1, # required
        },
        date_column_statistics_data: {
          minimum_value: Time.now,
          maximum_value: Time.now,
          number_of_nulls: 1, # required
          number_of_distinct_values: 1, # required
        },
        decimal_column_statistics_data: {
          minimum_value: {
            unscaled_value: "data", # required
            scale: 1, # required
          },
          maximum_value: {
            unscaled_value: "data", # required
            scale: 1, # required
          },
          number_of_nulls: 1, # required
          number_of_distinct_values: 1, # required
        },
        double_column_statistics_data: {
          minimum_value: 1.0,
          maximum_value: 1.0,
          number_of_nulls: 1, # required
          number_of_distinct_values: 1, # required
        },
        long_column_statistics_data: {
          minimum_value: 1,
          maximum_value: 1,
          number_of_nulls: 1, # required
          number_of_distinct_values: 1, # required
        },
        string_column_statistics_data: {
          maximum_length: 1, # required
          average_length: 1.0, # required
          number_of_nulls: 1, # required
          number_of_distinct_values: 1, # required
        },
        binary_column_statistics_data: {
          maximum_length: 1, # required
          average_length: 1.0, # required
          number_of_nulls: 1, # required
        },
      },
    },
  ],
})

Response structure


resp.errors #=> Array
resp.errors[0].column_statistics.column_name #=> String
resp.errors[0].column_statistics.column_type #=> String
resp.errors[0].column_statistics.analyzed_time #=> Time
resp.errors[0].column_statistics.statistics_data.type #=> String, one of "BOOLEAN", "DATE", "DECIMAL", "DOUBLE", "LONG", "STRING", "BINARY"
resp.errors[0].column_statistics.statistics_data.boolean_column_statistics_data.number_of_trues #=> Integer
resp.errors[0].column_statistics.statistics_data.boolean_column_statistics_data.number_of_falses #=> Integer
resp.errors[0].column_statistics.statistics_data.boolean_column_statistics_data.number_of_nulls #=> Integer
resp.errors[0].column_statistics.statistics_data.date_column_statistics_data.minimum_value #=> Time
resp.errors[0].column_statistics.statistics_data.date_column_statistics_data.maximum_value #=> Time
resp.errors[0].column_statistics.statistics_data.date_column_statistics_data.number_of_nulls #=> Integer
resp.errors[0].column_statistics.statistics_data.date_column_statistics_data.number_of_distinct_values #=> Integer
resp.errors[0].column_statistics.statistics_data.decimal_column_statistics_data.minimum_value.unscaled_value #=> String
resp.errors[0].column_statistics.statistics_data.decimal_column_statistics_data.minimum_value.scale #=> Integer
resp.errors[0].column_statistics.statistics_data.decimal_column_statistics_data.maximum_value.unscaled_value #=> String
resp.errors[0].column_statistics.statistics_data.decimal_column_statistics_data.maximum_value.scale #=> Integer
resp.errors[0].column_statistics.statistics_data.decimal_column_statistics_data.number_of_nulls #=> Integer
resp.errors[0].column_statistics.statistics_data.decimal_column_statistics_data.number_of_distinct_values #=> Integer
resp.errors[0].column_statistics.statistics_data.double_column_statistics_data.minimum_value #=> Float
resp.errors[0].column_statistics.statistics_data.double_column_statistics_data.maximum_value #=> Float
resp.errors[0].column_statistics.statistics_data.double_column_statistics_data.number_of_nulls #=> Integer
resp.errors[0].column_statistics.statistics_data.double_column_statistics_data.number_of_distinct_values #=> Integer
resp.errors[0].column_statistics.statistics_data.long_column_statistics_data.minimum_value #=> Integer
resp.errors[0].column_statistics.statistics_data.long_column_statistics_data.maximum_value #=> Integer
resp.errors[0].column_statistics.statistics_data.long_column_statistics_data.number_of_nulls #=> Integer
resp.errors[0].column_statistics.statistics_data.long_column_statistics_data.number_of_distinct_values #=> Integer
resp.errors[0].column_statistics.statistics_data.string_column_statistics_data.maximum_length #=> Integer
resp.errors[0].column_statistics.statistics_data.string_column_statistics_data.average_length #=> Float
resp.errors[0].column_statistics.statistics_data.string_column_statistics_data.number_of_nulls #=> Integer
resp.errors[0].column_statistics.statistics_data.string_column_statistics_data.number_of_distinct_values #=> Integer
resp.errors[0].column_statistics.statistics_data.binary_column_statistics_data.maximum_length #=> Integer
resp.errors[0].column_statistics.statistics_data.binary_column_statistics_data.average_length #=> Float
resp.errors[0].column_statistics.statistics_data.binary_column_statistics_data.number_of_nulls #=> Integer
resp.errors[0].error.error_code #=> String
resp.errors[0].error.error_message #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :catalog_id (String)

    The ID of the Data Catalog where the partitions in question reside. If none is supplied, the Amazon Web Services account ID is used by default.

  • :database_name (required, String)

    The name of the catalog database where the partitions reside.

  • :table_name (required, String)

    The name of the partitions' table.

  • :column_statistics_list (required, Array<Types::ColumnStatistics>)

    A list of the column statistics.

Returns:

  • (Types::UpdateColumnStatisticsForTableResponse)

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 18476

def update_column_statistics_for_table(params = {}, options = {})
  req = build_request(:update_column_statistics_for_table, params)
  req.send_request(options)
end

#update_column_statistics_task_settings(params = {}) ⇒ Struct

Updates settings for a column statistics task.

Examples:

Request syntax with placeholder values


resp = client.update_column_statistics_task_settings({
  database_name: "NameString", # required
  table_name: "NameString", # required
  role: "NameString",
  schedule: "CronExpression",
  column_name_list: ["NameString"],
  sample_size: 1.0,
  catalog_id: "NameString",
  security_configuration: "NameString",
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :database_name (required, String)

    The name of the database where the table resides.

  • :table_name (required, String)

    The name of the table for which to generate column statistics.

  • :role (String)

    The role used for running the column statistics.

  • :schedule (String)

    A schedule for running the column statistics, specified in CRON syntax.

  • :column_name_list (Array<String>)

    A list of column names for which to run statistics.

  • :sample_size (Float)

    The percentage of data to sample.

  • :catalog_id (String)

    The ID of the Data Catalog in which the database resides.

  • :security_configuration (String)

    Name of the security configuration that is used to encrypt CloudWatch logs.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 18528

def update_column_statistics_task_settings(params = {}, options = {})
  req = build_request(:update_column_statistics_task_settings, params)
  req.send_request(options)
end
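
A sketch of a typical settings update; database and table names are hypothetical, and the schedule uses the same six-field cron form shown elsewhere in these docs (here: daily at 02:00 UTC):

client.update_column_statistics_task_settings(
  database_name: "sales_db",
  table_name: "orders",
  schedule: "cron(0 2 * * ? *)",
  sample_size: 10.0, # sample 10% of the data
)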

#update_connection(params = {}) ⇒ Struct

Updates a connection definition in the Data Catalog.

Examples:

Request syntax with placeholder values


resp = client.update_connection({
  catalog_id: "CatalogIdString",
  name: "NameString", # required
  connection_input: { # required
    name: "NameString", # required
    description: "DescriptionString",
    connection_type: "JDBC", # required, accepts JDBC, SFTP, MONGODB, KAFKA, NETWORK, MARKETPLACE, CUSTOM, SALESFORCE, VIEW_VALIDATION_REDSHIFT, VIEW_VALIDATION_ATHENA, GOOGLEADS, GOOGLESHEETS, GOOGLEANALYTICS4, SERVICENOW, MARKETO, SAPODATA, ZENDESK, JIRACLOUD, NETSUITEERP, HUBSPOT, FACEBOOKADS, INSTAGRAMADS, ZOHOCRM, SALESFORCEPARDOT, SALESFORCEMARKETINGCLOUD, SLACK, STRIPE, INTERCOM, SNAPCHATADS
    match_criteria: ["NameString"],
    connection_properties: { # required
      "HOST" => "ValueString",
    },
    spark_properties: {
      "PropertyKey" => "PropertyValue",
    },
    athena_properties: {
      "PropertyKey" => "PropertyValue",
    },
    python_properties: {
      "PropertyKey" => "PropertyValue",
    },
    physical_connection_requirements: {
      subnet_id: "NameString",
      security_group_id_list: ["NameString"],
      availability_zone: "NameString",
    },
    authentication_configuration: {
      authentication_type: "BASIC", # accepts BASIC, OAUTH2, CUSTOM, IAM
      o_auth_2_properties: {
        o_auth_2_grant_type: "AUTHORIZATION_CODE", # accepts AUTHORIZATION_CODE, CLIENT_CREDENTIALS, JWT_BEARER
        o_auth_2_client_application: {
          user_managed_client_application_client_id: "UserManagedClientApplicationClientId",
          aws_managed_client_application_reference: "AWSManagedClientApplicationReference",
        },
        token_url: "TokenUrl",
        token_url_parameters_map: {
          "TokenUrlParameterKey" => "TokenUrlParameterValue",
        },
        authorization_code_properties: {
          authorization_code: "AuthorizationCode",
          redirect_uri: "RedirectUri",
        },
        o_auth_2_credentials: {
          user_managed_client_application_client_secret: "UserManagedClientApplicationClientSecret",
          access_token: "AccessToken",
          refresh_token: "RefreshToken",
          jwt_token: "JwtToken",
        },
      },
      secret_arn: "SecretArn",
      kms_key_arn: "KmsKeyArn",
      basic_authentication_credentials: {
        username: "Username",
        password: "Password",
      },
      custom_authentication_credentials: {
        "CredentialKey" => "CredentialValue",
      },
    },
    validate_credentials: false,
    validate_for_compute_environments: ["SPARK"], # accepts SPARK, ATHENA, PYTHON
  },
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :catalog_id (String)

    The ID of the Data Catalog in which the connection resides. If none is provided, the Amazon Web Services account ID is used by default.

  • :name (required, String)

    The name of the connection definition to update.

  • :connection_input (required, Types::ConnectionInput)

    A ConnectionInput object that redefines the connection in question.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 18616

def update_connection(params = {}, options = {})
  req = build_request(:update_connection, params)
  req.send_request(options)
end

#update_crawler(params = {}) ⇒ Struct

Updates a crawler. If a crawler is running, you must stop it using StopCrawler before updating it.

Examples:

Request syntax with placeholder values


resp = client.update_crawler({
  name: "NameString", # required
  role: "Role",
  database_name: "DatabaseName",
  description: "DescriptionStringRemovable",
  targets: {
    s3_targets: [
      {
        path: "Path",
        exclusions: ["Path"],
        connection_name: "ConnectionName",
        sample_size: 1,
        event_queue_arn: "EventQueueArn",
        dlq_event_queue_arn: "EventQueueArn",
      },
    ],
    jdbc_targets: [
      {
        connection_name: "ConnectionName",
        path: "Path",
        exclusions: ["Path"],
        enable_additional_metadata: ["COMMENTS"], # accepts COMMENTS, RAWTYPES
      },
    ],
    mongo_db_targets: [
      {
        connection_name: "ConnectionName",
        path: "Path",
        scan_all: false,
      },
    ],
    dynamo_db_targets: [
      {
        path: "Path",
        scan_all: false,
        scan_rate: 1.0,
      },
    ],
    catalog_targets: [
      {
        database_name: "NameString", # required
        tables: ["NameString"], # required
        connection_name: "ConnectionName",
        event_queue_arn: "EventQueueArn",
        dlq_event_queue_arn: "EventQueueArn",
      },
    ],
    delta_targets: [
      {
        delta_tables: ["Path"],
        connection_name: "ConnectionName",
        write_manifest: false,
        create_native_delta_table: false,
      },
    ],
    iceberg_targets: [
      {
        paths: ["Path"],
        connection_name: "ConnectionName",
        exclusions: ["Path"],
        maximum_traversal_depth: 1,
      },
    ],
    hudi_targets: [
      {
        paths: ["Path"],
        connection_name: "ConnectionName",
        exclusions: ["Path"],
        maximum_traversal_depth: 1,
      },
    ],
  },
  schedule: "CronExpression",
  classifiers: ["NameString"],
  table_prefix: "TablePrefix",
  schema_change_policy: {
    update_behavior: "LOG", # accepts LOG, UPDATE_IN_DATABASE
    delete_behavior: "LOG", # accepts LOG, DELETE_FROM_DATABASE, DEPRECATE_IN_DATABASE
  },
  recrawl_policy: {
    recrawl_behavior: "CRAWL_EVERYTHING", # accepts CRAWL_EVERYTHING, CRAWL_NEW_FOLDERS_ONLY, CRAWL_EVENT_MODE
  },
  lineage_configuration: {
    crawler_lineage_settings: "ENABLE", # accepts ENABLE, DISABLE
  },
  lake_formation_configuration: {
    use_lake_formation_credentials: false,
    account_id: "AccountId",
  },
  configuration: "CrawlerConfiguration",
  crawler_security_configuration: "CrawlerSecurityConfiguration",
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :name (required, String)

    The name of the crawler to update.

  • :role (String)

    The IAM role, or the Amazon Resource Name (ARN) of an IAM role, that the crawler uses to access customer resources.

  • :database_name (String)

    The Glue database where the crawler's results are stored.

  • :description (String)

    A description of the crawler.

  • :targets (Types::CrawlerTargets)

    A list of targets to crawl.

  • :schedule (String)

    A cron expression used to specify the schedule (see Time-Based Schedules for Jobs and Crawlers). For example, to run something every day at 12:15 UTC, you would specify: cron(15 12 * * ? *).

  • :classifiers (Array<String>)

    A list of custom classifiers that the user has registered. By default, all built-in classifiers are included in a crawl, but these custom classifiers always override the default classifiers for a given classification.

  • :table_prefix (String)

    The table prefix used for catalog tables that are created.

  • :schema_change_policy (Types::SchemaChangePolicy)

    The policy for the crawler's update and deletion behavior.

  • :recrawl_policy (Types::RecrawlPolicy)

    A policy that specifies whether to crawl the entire dataset again, or to crawl only folders that were added since the last crawler run.

  • :lineage_configuration (Types::LineageConfiguration)

    Specifies data lineage configuration settings for the crawler.

  • :lake_formation_configuration (Types::LakeFormationConfiguration)

    Specifies Lake Formation configuration settings for the crawler.

  • :configuration (String)

    Crawler configuration information. This versioned JSON string allows users to specify aspects of a crawler's behavior. For more information, see Setting crawler configuration options.

  • :crawler_security_configuration (String)

    The name of the SecurityConfiguration structure to be used by this crawler.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 18786

def update_crawler(params = {}, options = {})
  req = build_request(:update_crawler, params)
  req.send_request(options)
end
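
Because a running crawler must be stopped with StopCrawler before it can be updated, a minimal sketch of that sequence (crawler name hypothetical; the polling loop is illustrative, not production-grade):

name = "my-crawler"

# Stop the crawler if needed, then wait until it is READY again.
state = client.get_crawler(name: name).crawler.state
if state != "READY"
  client.stop_crawler(name: name) if state == "RUNNING"
  sleep(10) until client.get_crawler(name: name).crawler.state == "READY"
end

client.update_crawler(name: name, schedule: "cron(15 12 * * ? *)")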

#update_crawler_schedule(params = {}) ⇒ Struct

Updates the schedule of a crawler using a cron expression.

Examples:

Request syntax with placeholder values


resp = client.update_crawler_schedule({
  crawler_name: "NameString", # required
  schedule: "CronExpression",
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :crawler_name (required, String)

    The name of the crawler whose schedule to update.

  • :schedule (String)

    The updated cron expression used to specify the schedule (see Time-Based Schedules for Jobs and Crawlers). For example, to run something every day at 12:15 UTC, you would specify: cron(15 12 * * ? *).

Returns:

  • (Struct)

    Returns an empty response.

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 18819

def update_crawler_schedule(params = {}, options = {})
  req = build_request(:update_crawler_schedule, params)
  req.send_request(options)
end

#update_data_quality_ruleset(params = {}) ⇒ Types::UpdateDataQualityRulesetResponse

Updates the specified data quality ruleset.

Examples:

Request syntax with placeholder values


resp = client.update_data_quality_ruleset({
  name: "NameString", # required
  description: "DescriptionString",
  ruleset: "DataQualityRulesetString",
})

Response structure


resp.name #=> String
resp.description #=> String
resp.ruleset #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :name (required, String)

    The name of the data quality ruleset.

  • :description (String)

    A description of the ruleset.

  • :ruleset (String)

    A Data Quality Definition Language (DQDL) ruleset. For more information, see the Glue developer guide.

Returns:

  • (Types::UpdateDataQualityRulesetResponse)

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 18860

def update_data_quality_ruleset(params = {}, options = {})
  req = build_request(:update_data_quality_ruleset, params)
  req.send_request(options)
end
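
A sketch that replaces a ruleset body with a small DQDL string (ruleset name hypothetical; the two rules are illustrative DQDL, see the Glue developer guide for the full grammar):

resp = client.update_data_quality_ruleset(
  name: "orders-ruleset",
  ruleset: 'Rules = [ RowCount > 0, IsComplete "order_id" ]',
)
resp.ruleset #=> String (the updated DQDL)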

#update_database(params = {}) ⇒ Struct

Updates an existing database definition in a Data Catalog.

Examples:

Request syntax with placeholder values


resp = client.update_database({
  catalog_id: "CatalogIdString",
  name: "NameString", # required
  database_input: { # required
    name: "NameString", # required
    description: "DescriptionString",
    location_uri: "URI",
    parameters: {
      "KeyString" => "ParametersMapValue",
    },
    create_table_default_permissions: [
      {
        principal: {
          data_lake_principal_identifier: "DataLakePrincipalString",
        },
        permissions: ["ALL"], # accepts ALL, SELECT, ALTER, DROP, DELETE, INSERT, CREATE_DATABASE, CREATE_TABLE, DATA_LOCATION_ACCESS
      },
    ],
    target_database: {
      catalog_id: "CatalogIdString",
      database_name: "NameString",
      region: "NameString",
    },
    federated_database: {
      identifier: "FederationIdentifier",
      connection_name: "NameString",
    },
  },
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :catalog_id (String)

    The ID of the Data Catalog in which the metadata database resides. If none is provided, the Amazon Web Services account ID is used by default.

  • :name (required, String)

    The name of the database to update in the catalog. For Hive compatibility, this is folded to lowercase.

  • :database_input (required, Types::DatabaseInput)

    A DatabaseInput object specifying the new definition of the metadata database in the catalog.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 18918

def update_database(params = {}, options = {})
  req = build_request(:update_database, params)
  req.send_request(options)
end

#update_dev_endpoint(params = {}) ⇒ Struct

Updates a specified development endpoint.

Examples:

Request syntax with placeholder values


resp = client.update_dev_endpoint({
  endpoint_name: "GenericString", # required
  public_key: "GenericString",
  add_public_keys: ["GenericString"],
  delete_public_keys: ["GenericString"],
  custom_libraries: {
    extra_python_libs_s3_path: "GenericString",
    extra_jars_s3_path: "GenericString",
  },
  update_etl_libraries: false,
  delete_arguments: ["GenericString"],
  add_arguments: {
    "GenericString" => "GenericString",
  },
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :endpoint_name (required, String)

    The name of the DevEndpoint to be updated.

  • :public_key (String)

    The public key for the DevEndpoint to use.

  • :add_public_keys (Array<String>)

    The list of public keys for the DevEndpoint to use.

  • :delete_public_keys (Array<String>)

    The list of public keys to be deleted from the DevEndpoint.

  • :custom_libraries (Types::DevEndpointCustomLibraries)

    Custom Python or Java libraries to be loaded in the DevEndpoint.

  • :update_etl_libraries (Boolean)

    True if the list of custom libraries to be loaded in the development endpoint needs to be updated, or False otherwise.

  • :delete_arguments (Array<String>)

    The list of argument keys to be deleted from the map of arguments used to configure the DevEndpoint.

  • :add_arguments (Hash<String,String>)

    The map of arguments to add to the map of arguments used to configure the DevEndpoint.

    Valid arguments are:

    • "--enable-glue-datacatalog": ""

    You can specify a version of Python support for development endpoints by using the Arguments parameter in the CreateDevEndpoint or UpdateDevEndpoint APIs. If no arguments are provided, the version defaults to Python 2.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 18987

def update_dev_endpoint(params = {}, options = {})
  req = build_request(:update_dev_endpoint, params)
  req.send_request(options)
end
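
A sketch that enables the "--enable-glue-datacatalog" argument listed above (endpoint name hypothetical):

client.update_dev_endpoint(
  endpoint_name: "my-dev-endpoint",
  add_arguments: { "--enable-glue-datacatalog" => "" },
)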

#update_integration_resource_property(params = {}) ⇒ Types::UpdateIntegrationResourcePropertyResponse

This API can be used to update the ResourceProperty of the Glue connection (for the source) or of the Glue database (for the target). These properties can include the role used to access the connection or database. Because the same resource can be used across multiple integrations, updating its resource properties affects all integrations that use it.

Examples:

Request syntax with placeholder values


resp = client.update_integration_resource_property({
  resource_arn: "String128", # required
  source_processing_properties: {
    role_arn: "String128",
  },
  target_processing_properties: {
    role_arn: "String128",
    kms_arn: "String2048",
    connection_name: "String128",
    event_bus_arn: "String2048",
  },
})

Response structure


resp.resource_arn #=> String
resp.source_processing_properties.role_arn #=> String
resp.target_processing_properties.role_arn #=> String
resp.target_processing_properties.kms_arn #=> String
resp.target_processing_properties.connection_name #=> String
resp.target_processing_properties.event_bus_arn #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :resource_arn (required, String)

    The connection ARN of the source, or the database ARN of the target.

  • :source_processing_properties (Types::SourceProcessingProperties)

    The resource properties associated with the integration source.

  • :target_processing_properties (Types::TargetProcessingProperties)

    The resource properties associated with the integration target.

Returns:

  • (Types::UpdateIntegrationResourcePropertyResponse)

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 19042

def update_integration_resource_property(params = {}, options = {})
  req = build_request(:update_integration_resource_property, params)
  req.send_request(options)
end

#update_integration_table_properties(params = {}) ⇒ Struct

This API is used to provide optional override properties for the tables that need to be replicated. These properties can include properties for filtering and partitioning for the source and target tables. To set both source and target properties, the same API needs to be invoked twice: once with the Glue connection ARN as ResourceArn together with SourceTableConfig, and once with the Glue database ARN as ResourceArn together with TargetTableConfig.

The override will be reflected across all the integrations using the same ResourceArn and source table.

Examples:

Request syntax with placeholder values


resp = client.update_integration_table_properties({
  resource_arn: "String128", # required
  table_name: "String128", # required
  source_table_config: {
    fields: ["String128"],
    filter_predicate: "String128",
    primary_key: ["String128"],
    record_update_field: "String128",
  },
  target_table_config: {
    unnest_spec: "TOPLEVEL", # accepts TOPLEVEL, FULL, NOUNNEST
    partition_spec: [
      {
        field_name: "String128",
        function_spec: "String128",
      },
    ],
    target_table_name: "String128",
  },
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :resource_arn (required, String)

    The connection ARN of the source, or the database ARN of the target.

  • :table_name (required, String)

    The name of the table to be replicated.

  • :source_table_config (Types::SourceTableConfig)

    A structure for the source table configuration.

  • :target_table_config (Types::TargetTableConfig)

    A structure for the target table configuration.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 19099

def update_integration_table_properties(params = {}, options = {})
  req = build_request(:update_integration_table_properties, params)
  req.send_request(options)
end
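
As described above, configuring both sides means invoking the API twice with different ARNs. A sketch with hypothetical ARNs, table names, and configs:

# Call 1: source-side config, keyed by the Glue connection ARN.
client.update_integration_table_properties(
  resource_arn: "arn:aws:glue:us-east-1:123456789012:connection/src-conn",
  table_name: "orders",
  source_table_config: { filter_predicate: "region = 'EU'" },
)

# Call 2: target-side config, keyed by the Glue database ARN.
client.update_integration_table_properties(
  resource_arn: "arn:aws:glue:us-east-1:123456789012:database/target_db",
  table_name: "orders",
  target_table_config: { target_table_name: "orders_replicated" },
)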

#update_job(params = {}) ⇒ Types::UpdateJobResponse

Updates an existing job definition. The previous job definition is completely overwritten by this information.

Examples:

Response structure


resp.job_name #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :job_name (required, String)

    The name of the job definition to update.

  • :job_update (required, Types::JobUpdate)

    Specifies the values with which to update the job definition. Unspecified configuration is removed or reset to default values.

Returns:

  • (Types::UpdateJobResponse)

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 19126

def update_job(params = {}, options = {})
  req = build_request(:update_job, params)
  req.send_request(options)
end
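
The generated request-syntax example for this operation is omitted here because JobUpdate is large. In its place, a minimal hedged sketch; since unspecified configuration is removed or reset to defaults, pass the full definition you want to keep, not just the changed fields. All names and paths are hypothetical:

client.update_job(
  job_name: "nightly-etl",
  job_update: {
    role: "arn:aws:iam::123456789012:role/GlueJobRole",
    command: {
      name: "glueetl",
      script_location: "s3://amzn-s3-demo-bucket/scripts/nightly_etl.py",
      python_version: "3",
    },
    glue_version: "4.0",
    worker_type: "G.1X",
    number_of_workers: 10,
  },
)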

#update_job_from_source_control(params = {}) ⇒ Types::UpdateJobFromSourceControlResponse

Synchronizes a job from the source control repository. This operation takes the job artifacts that are located in the remote repository and updates the Glue internal stores with these artifacts.

This API supports optional parameters that provide the repository information.

Examples:

Request syntax with placeholder values


resp = client.update_job_from_source_control({
  job_name: "NameString",
  provider: "GITHUB", # accepts GITHUB, GITLAB, BITBUCKET, AWS_CODE_COMMIT
  repository_name: "NameString",
  repository_owner: "NameString",
  branch_name: "NameString",
  folder: "NameString",
  commit_id: "CommitIdString",
  auth_strategy: "PERSONAL_ACCESS_TOKEN", # accepts PERSONAL_ACCESS_TOKEN, AWS_SECRETS_MANAGER
  auth_token: "AuthTokenString",
})

Response structure


resp.job_name #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :job_name (String)

    The name of the Glue job to be synchronized to or from the remote repository.

  • :provider (String)

    The provider for the remote repository. Possible values: GITHUB, AWS_CODE_COMMIT, GITLAB, BITBUCKET.

  • :repository_name (String)

    The name of the remote repository that contains the job artifacts. For BitBucket providers, RepositoryName should include WorkspaceName. Use the format <WorkspaceName>/<RepositoryName>.

  • :repository_owner (String)

    The owner of the remote repository that contains the job artifacts.

  • :branch_name (String)

    An optional branch in the remote repository.

  • :folder (String)

    An optional folder in the remote repository.

  • :commit_id (String)

    A commit ID for a commit in the remote repository.

  • :auth_strategy (String)

    The type of authentication, which can be an authentication token stored in Amazon Web Services Secrets Manager, or a personal access token.

  • :auth_token (String)

    The value of the authorization token.

Returns:

  • (Types::UpdateJobFromSourceControlResponse)

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 19197

def update_job_from_source_control(params = {}, options = {})
  req = build_request(:update_job_from_source_control, params)
  req.send_request(options)
end

#update_ml_transform(params = {}) ⇒ Types::UpdateMLTransformResponse

Updates an existing machine learning transform. Call this operation to tune the algorithm parameters to achieve better results.

After calling this operation, you can call the StartMLEvaluationTaskRun operation to assess how well your new parameters achieved your goals (such as improving the quality of your machine learning transform, or making it more cost-effective).

Examples:

Request syntax with placeholder values


resp = client.update_ml_transform({
  transform_id: "HashString", # required
  name: "NameString",
  description: "DescriptionString",
  parameters: {
    transform_type: "FIND_MATCHES", # required, accepts FIND_MATCHES
    find_matches_parameters: {
      primary_key_column_name: "ColumnNameString",
      precision_recall_tradeoff: 1.0,
      accuracy_cost_tradeoff: 1.0,
      enforce_provided_labels: false,
    },
  },
  role: "RoleString",
  glue_version: "GlueVersionString",
  max_capacity: 1.0,
  worker_type: "Standard", # accepts Standard, G.1X, G.2X, G.025X, G.4X, G.8X, Z.2X
  number_of_workers: 1,
  timeout: 1,
  max_retries: 1,
})

Response structure


resp.transform_id #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :transform_id (required, String)

    A unique identifier that was generated when the transform was created.

  • :name (String)

    The unique name that you gave the transform when you created it.

  • :description (String)

    A description of the transform. The default is an empty string.

  • :parameters (Types::TransformParameters)

    The configuration parameters that are specific to the transform type (algorithm) used. Conditionally dependent on the transform type.

  • :role (String)

    The name or Amazon Resource Name (ARN) of the IAM role with the required permissions.

  • :glue_version (String)

    This value determines which version of Glue this machine learning transform is compatible with. Glue 1.0 is recommended for most customers. If the value is not set, the Glue compatibility defaults to Glue 0.9. For more information, see Glue Versions in the developer guide.

  • :max_capacity (Float)

    The number of Glue data processing units (DPUs) that are allocated to task runs for this transform. You can allocate from 2 to 100 DPUs; the default is 10. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.

    When the WorkerType field is set to a value other than Standard, the MaxCapacity field is set automatically and becomes read-only.

  • :worker_type (String)

    The type of predefined worker that is allocated when this task runs. Accepts a value of Standard, G.1X, or G.2X.

    • For the Standard worker type, each worker provides 4 vCPU, 16 GB of memory and a 50GB disk, and 2 executors per worker.

    • For the G.1X worker type, each worker provides 4 vCPU, 16 GB of memory and a 64GB disk, and 1 executor per worker.

    • For the G.2X worker type, each worker provides 8 vCPU, 32 GB of memory and a 128GB disk, and 1 executor per worker.

  • :number_of_workers (Integer)

    The number of workers of a defined workerType that are allocated when this task runs.

  • :timeout (Integer)

    The timeout for a task run for this transform in minutes. This is the maximum time that a task run for this transform can consume resources before it is terminated and enters TIMEOUT status. The default is 2,880 minutes (48 hours).

  • :max_retries (Integer)

    The maximum number of times to retry a task for this transform after a task run fails.

Returns:

  • (Types::UpdateMLTransformResponse)

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 19315

def update_ml_transform(params = {}, options = {})
  req = build_request(:update_ml_transform, params)
  req.send_request(options)
end

#update_partition(params = {}) ⇒ Struct

Updates a partition.

Examples:

Request syntax with placeholder values


resp = client.update_partition({
  catalog_id: "CatalogIdString",
  database_name: "NameString", # required
  table_name: "NameString", # required
  partition_value_list: ["ValueString"], # required
  partition_input: { # required
    values: ["ValueString"],
    last_access_time: Time.now,
    storage_descriptor: {
      columns: [
        {
          name: "NameString", # required
          type: "ColumnTypeString",
          comment: "CommentString",
          parameters: {
            "KeyString" => "ParametersMapValue",
          },
        },
      ],
      location: "LocationString",
      additional_locations: ["LocationString"],
      input_format: "FormatString",
      output_format: "FormatString",
      compressed: false,
      number_of_buckets: 1,
      serde_info: {
        name: "NameString",
        serialization_library: "NameString",
        parameters: {
          "KeyString" => "ParametersMapValue",
        },
      },
      bucket_columns: ["NameString"],
      sort_columns: [
        {
          column: "NameString", # required
          sort_order: 1, # required
        },
      ],
      parameters: {
        "KeyString" => "ParametersMapValue",
      },
      skewed_info: {
        skewed_column_names: ["NameString"],
        skewed_column_values: ["ColumnValuesString"],
        skewed_column_value_location_maps: {
          "ColumnValuesString" => "ColumnValuesString",
        },
      },
      stored_as_sub_directories: false,
      schema_reference: {
        schema_id: {
          schema_arn: "GlueResourceArn",
          schema_name: "SchemaRegistryNameString",
          registry_name: "SchemaRegistryNameString",
        },
        schema_version_id: "SchemaVersionIdString",
        schema_version_number: 1,
      },
    },
    parameters: {
      "KeyString" => "ParametersMapValue",
    },
    last_analyzed_time: Time.now,
  },
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :catalog_id (String)

    The ID of the Data Catalog where the partition to be updated resides. If none is provided, the Amazon Web Services account ID is used by default.

  • :database_name (required, String)

    The name of the catalog database in which the table in question resides.

  • :table_name (required, String)

    The name of the table in which the partition to be updated is located.

  • :partition_value_list (required, Array<String>)

    List of partition key values that define the partition to update.

  • :partition_input (required, Types::PartitionInput)

    The new partition object to update the partition to.

    The Values property can't be changed. If you want to change the partition key values for a partition, delete and recreate the partition.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 19419

def update_partition(params = {}, options = {})
  req = build_request(:update_partition, params)
  req.send_request(options)
end

#update_registry(params = {}) ⇒ Types::UpdateRegistryResponse

Updates an existing registry which is used to hold a collection of schemas. The updated properties relate to the registry, and do not modify any of the schemas within the registry.

Examples:

Request syntax with placeholder values


resp = client.update_registry({
  registry_id: { # required
    registry_name: "SchemaRegistryNameString",
    registry_arn: "GlueResourceArn",
  },
  description: "DescriptionString", # required
})

Response structure


resp.registry_name #=> String
resp.registry_arn #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :registry_id (required, Types::RegistryId)

    This is a wrapper structure that may contain the registry name and Amazon Resource Name (ARN).

  • :description (required, String)

    A description of the registry. If description is not provided, this field will not be updated.

Returns:

  • (Types::UpdateRegistryResponse)

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 19460

def update_registry(params = {}, options = {})
  req = build_request(:update_registry, params)
  req.send_request(options)
end

#update_schema(params = {}) ⇒ Types::UpdateSchemaResponse

Updates the description, compatibility setting, or version checkpoint for a schema set.

For updating the compatibility setting, the call will not validate compatibility for the entire set of schema versions with the new compatibility setting. If the value for Compatibility is provided, the VersionNumber (a checkpoint) is also required. The API will validate the checkpoint version number for consistency.

If the value for the VersionNumber (checkpoint) is provided, Compatibility is optional and this can be used to set/reset a checkpoint for the schema.

This update will happen only if the schema is in the AVAILABLE state.

Examples:

Request syntax with placeholder values


resp = client.update_schema({
  schema_id: { # required
    schema_arn: "GlueResourceArn",
    schema_name: "SchemaRegistryNameString",
    registry_name: "SchemaRegistryNameString",
  },
  schema_version_number: {
    latest_version: false,
    version_number: 1,
  },
  compatibility: "NONE", # accepts NONE, DISABLED, BACKWARD, BACKWARD_ALL, FORWARD, FORWARD_ALL, FULL, FULL_ALL
  description: "DescriptionString",
})

Response structure


resp.schema_arn #=> String
resp.schema_name #=> String
resp.registry_name #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :schema_id (required, Types::SchemaId)

    This is a wrapper structure to contain schema identity fields. The structure contains:

    • SchemaId$SchemaArn: The Amazon Resource Name (ARN) of the schema. One of SchemaArn or SchemaName has to be provided.

    • SchemaId$SchemaName: The name of the schema. One of SchemaArn or SchemaName has to be provided.

  • :schema_version_number (Types::SchemaVersionNumber)

    Version number required for check pointing. One of VersionNumber or Compatibility has to be provided.

  • :compatibility (String)

    The new compatibility setting for the schema.

  • :description (String)

    The new description for the schema.

Returns:

  • (Types::UpdateSchemaResponse)

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 19532

def update_schema(params = {}, options = {})
  req = build_request(:update_schema, params)
  req.send_request(options)
end
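
A sketch of changing the compatibility setting, which (per the rules above) must be accompanied by a VersionNumber checkpoint; schema and registry names are hypothetical:

client.update_schema(
  schema_id: {
    schema_name: "orders-schema",
    registry_name: "my-registry",
  },
  schema_version_number: { version_number: 3 },
  compatibility: "BACKWARD",
)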

#update_source_control_from_job(params = {}) ⇒ Types::UpdateSourceControlFromJobResponse

Synchronizes a job to the source control repository. This operation takes the job artifacts from the Glue internal stores and makes a commit to the remote repository that is configured on the job.

This API supports optional parameters that provide the repository information.

Examples:

Request syntax with placeholder values


resp = client.update_source_control_from_job({
  job_name: "NameString",
  provider: "GITHUB", # accepts GITHUB, GITLAB, BITBUCKET, AWS_CODE_COMMIT
  repository_name: "NameString",
  repository_owner: "NameString",
  branch_name: "NameString",
  folder: "NameString",
  commit_id: "CommitIdString",
  auth_strategy: "PERSONAL_ACCESS_TOKEN", # accepts PERSONAL_ACCESS_TOKEN, AWS_SECRETS_MANAGER
  auth_token: "AuthTokenString",
})

Response structure


resp.job_name #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :job_name (String)

    The name of the Glue job to be synchronized to or from the remote repository.

  • :provider (String)

    The provider for the remote repository. Possible values: GITHUB, AWS_CODE_COMMIT, GITLAB, BITBUCKET.

  • :repository_name (String)

    The name of the remote repository that contains the job artifacts. For BitBucket providers, RepositoryName should include WorkspaceName. Use the format <WorkspaceName>/<RepositoryName>.

  • :repository_owner (String)

    The owner of the remote repository that contains the job artifacts.

  • :branch_name (String)

    An optional branch in the remote repository.

  • :folder (String)

    An optional folder in the remote repository.

  • :commit_id (String)

    A commit ID for a commit in the remote repository.

  • :auth_strategy (String)

    The type of authentication, which can be an authentication token stored in Amazon Web Services Secrets Manager, or a personal access token.

  • :auth_token (String)

    The value of the authorization token.

Returns:

  • (Types::UpdateSourceControlFromJobResponse)

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 19603

def update_source_control_from_job(params = {}, options = {})
  req = build_request(:update_source_control_from_job, params)
  req.send_request(options)
end
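
For example, a minimal sketch that commits a hypothetical job to a GitHub repository, reading the personal access token from the environment (the job, repository, and owner names are assumptions):


require "aws-sdk-glue"

client = Aws::Glue::Client.new(region: "us-east-1")

# Push the current job definition to source control.
resp = client.update_source_control_from_job({
  job_name: "nightly-etl",
  provider: "GITHUB",
  repository_name: "etl-jobs",
  repository_owner: "example-org",
  branch_name: "main",
  folder: "glue",
  auth_strategy: "PERSONAL_ACCESS_TOKEN",
  auth_token: ENV.fetch("GITHUB_TOKEN"), # keep the token out of source code
})

puts resp.job_name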

#update_table(params = {}) ⇒ Struct

Updates a metadata table in the Data Catalog.

Examples:

Request syntax with placeholder values


resp = client.update_table({
  catalog_id: "CatalogIdString",
  database_name: "NameString", # required
  table_input: { # required
    name: "NameString", # required
    description: "DescriptionString",
    owner: "NameString",
    last_access_time: Time.now,
    last_analyzed_time: Time.now,
    retention: 1,
    storage_descriptor: {
      columns: [
        {
          name: "NameString", # required
          type: "ColumnTypeString",
          comment: "CommentString",
          parameters: {
            "KeyString" => "ParametersMapValue",
          },
        },
      ],
      location: "LocationString",
      additional_locations: ["LocationString"],
      input_format: "FormatString",
      output_format: "FormatString",
      compressed: false,
      number_of_buckets: 1,
      serde_info: {
        name: "NameString",
        serialization_library: "NameString",
        parameters: {
          "KeyString" => "ParametersMapValue",
        },
      },
      bucket_columns: ["NameString"],
      sort_columns: [
        {
          column: "NameString", # required
          sort_order: 1, # required
        },
      ],
      parameters: {
        "KeyString" => "ParametersMapValue",
      },
      skewed_info: {
        skewed_column_names: ["NameString"],
        skewed_column_values: ["ColumnValuesString"],
        skewed_column_value_location_maps: {
          "ColumnValuesString" => "ColumnValuesString",
        },
      },
      stored_as_sub_directories: false,
      schema_reference: {
        schema_id: {
          schema_arn: "GlueResourceArn",
          schema_name: "SchemaRegistryNameString",
          registry_name: "SchemaRegistryNameString",
        },
        schema_version_id: "SchemaVersionIdString",
        schema_version_number: 1,
      },
    },
    partition_keys: [
      {
        name: "NameString", # required
        type: "ColumnTypeString",
        comment: "CommentString",
        parameters: {
          "KeyString" => "ParametersMapValue",
        },
      },
    ],
    view_original_text: "ViewTextString",
    view_expanded_text: "ViewTextString",
    table_type: "TableTypeString",
    parameters: {
      "KeyString" => "ParametersMapValue",
    },
    target_table: {
      catalog_id: "CatalogIdString",
      database_name: "NameString",
      name: "NameString",
      region: "NameString",
    },
    view_definition: {
      is_protected: false,
      definer: "ArnString",
      representations: [
        {
          dialect: "REDSHIFT", # accepts REDSHIFT, ATHENA, SPARK
          dialect_version: "ViewDialectVersionString",
          view_original_text: "ViewTextString",
          validation_connection: "NameString",
          view_expanded_text: "ViewTextString",
        },
      ],
      sub_objects: ["ArnString"],
    },
  },
  skip_archive: false,
  transaction_id: "TransactionIdString",
  version_id: "VersionString",
  view_update_action: "ADD", # accepts ADD, REPLACE, ADD_OR_REPLACE, DROP
  force: false,
})

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :catalog_id (String)

    The ID of the Data Catalog where the table resides. If none is provided, the Amazon Web Services account ID is used by default.

  • :database_name (required, String)

    The name of the catalog database in which the table resides. For Hive compatibility, this name is entirely lowercase.

  • :table_input (required, Types::TableInput)

    An updated TableInput object to define the metadata table in the catalog.

  • :skip_archive (Boolean)

    By default, UpdateTable always creates an archived version of the table before updating it. However, if skipArchive is set to true, UpdateTable does not create the archived version.

  • :transaction_id (String)

    The transaction ID at which to update the table contents.

  • :version_id (String)

    The version ID at which to update the table contents.

  • :view_update_action (String)

    The operation to be performed when updating the view.

  • :force (Boolean)

    A flag that can be set to true to ignore the storage descriptor and subobject matching requirements.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 19754

def update_table(params = {}, options = {})
  req = build_request(:update_table, params)
  req.send_request(options)
end
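
Because :table_input replaces the entire table definition, a common pattern is to read the current definition with #get_table, copy over the writable fields, and re-submit with the desired change. A minimal sketch, assuming a hypothetical sales_db.orders table (the key list mirrors the TableInput fields shown above; view_definition is omitted because its read and write shapes differ):


require "aws-sdk-glue"

client = Aws::Glue::Client.new(region: "us-east-1")

# Read the current definition; to_h yields a plain hash we can edit.
current = client.get_table(database_name: "sales_db", name: "orders").table.to_h

# TableInput accepts only a subset of the attributes returned by get_table,
# so copy just the writable keys.
writable = [:name, :description, :owner, :last_access_time, :last_analyzed_time,
            :retention, :storage_descriptor, :partition_keys, :view_original_text,
            :view_expanded_text, :table_type, :parameters, :target_table]
table_input = current.slice(*writable)

# Apply the change: add or overwrite a single table parameter.
table_input[:parameters] = (table_input[:parameters] || {}).merge("classification" => "parquet")

client.update_table(database_name: "sales_db", table_input: table_input)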

#update_table_optimizer(params = {}) ⇒ Struct

Updates the configuration for an existing table optimizer.

Examples:

Request syntax with placeholder values


resp = client.update_table_optimizer({
  catalog_id: "CatalogIdString", # required
  database_name: "NameString", # required
  table_name: "NameString", # required
  type: "compaction", # required, accepts compaction, retention, orphan_file_deletion
  table_optimizer_configuration: { # required
    role_arn: "ArnString",
    enabled: false,
    vpc_configuration: {
      glue_connection_name: "glueConnectionNameString",
    },
    retention_configuration: {
      iceberg_configuration: {
        snapshot_retention_period_in_days: 1,
        number_of_snapshots_to_retain: 1,
        clean_expired_files: false,
      },
    },
    orphan_file_deletion_configuration: {
      iceberg_configuration: {
        orphan_file_retention_period_in_days: 1,
        location: "MessageString",
      },
    },
  },
})

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :catalog_id (required, String)

    The Catalog ID of the table.

  • :database_name (required, String)

    The name of the database in the catalog in which the table resides.

  • :table_name (required, String)

    The name of the table.

  • :type (required, String)

    The type of table optimizer.

  • :table_optimizer_configuration (required, Types::TableOptimizerConfiguration)

    A TableOptimizerConfiguration object representing the configuration of a table optimizer.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 19812

def update_table_optimizer(params = {}, options = {})
  req = build_request(:update_table_optimizer, params)
  req.send_request(options)
end
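
For example, a minimal sketch that enables compaction on a hypothetical Iceberg table (the account ID, names, and role ARN are assumptions):


require "aws-sdk-glue"

client = Aws::Glue::Client.new(region: "us-east-1")

# The role must be assumable by Glue and permitted to rewrite the table data.
client.update_table_optimizer({
  catalog_id: "123456789012",
  database_name: "sales_db",
  table_name: "orders",
  type: "compaction",
  table_optimizer_configuration: {
    role_arn: "arn:aws:iam::123456789012:role/GlueTableOptimizerRole",
    enabled: true,
  },
})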

#update_trigger(params = {}) ⇒ Types::UpdateTriggerResponse

Updates a trigger definition.

Job arguments may be logged. Do not pass plaintext secrets as arguments. Retrieve secrets from a Glue Connection, Amazon Web Services Secrets Manager, or another secret management mechanism if you intend to keep them within the job.

Examples:

Request syntax with placeholder values


resp = client.update_trigger({
  name: "NameString", # required
  trigger_update: { # required
    name: "NameString",
    description: "DescriptionString",
    schedule: "GenericString",
    actions: [
      {
        job_name: "NameString",
        arguments: {
          "GenericString" => "GenericString",
        },
        timeout: 1,
        security_configuration: "NameString",
        notification_property: {
          notify_delay_after: 1,
        },
        crawler_name: "NameString",
      },
    ],
    predicate: {
      logical: "AND", # accepts AND, ANY
      conditions: [
        {
          logical_operator: "EQUALS", # accepts EQUALS
          job_name: "NameString",
          state: "STARTING", # accepts STARTING, RUNNING, STOPPING, STOPPED, SUCCEEDED, FAILED, TIMEOUT, ERROR, WAITING, EXPIRED
          crawler_name: "NameString",
          crawl_state: "RUNNING", # accepts RUNNING, CANCELLING, CANCELLED, SUCCEEDED, FAILED, ERROR
        },
      ],
    },
    event_batching_condition: {
      batch_size: 1, # required
      batch_window: 1,
    },
  },
})

Response structure


resp.trigger.name #=> String
resp.trigger.workflow_name #=> String
resp.trigger.id #=> String
resp.trigger.type #=> String, one of "SCHEDULED", "CONDITIONAL", "ON_DEMAND", "EVENT"
resp.trigger.state #=> String, one of "CREATING", "CREATED", "ACTIVATING", "ACTIVATED", "DEACTIVATING", "DEACTIVATED", "DELETING", "UPDATING"
resp.trigger.description #=> String
resp.trigger.schedule #=> String
resp.trigger.actions #=> Array
resp.trigger.actions[0].job_name #=> String
resp.trigger.actions[0].arguments #=> Hash
resp.trigger.actions[0].arguments["GenericString"] #=> String
resp.trigger.actions[0].timeout #=> Integer
resp.trigger.actions[0].security_configuration #=> String
resp.trigger.actions[0].notification_property.notify_delay_after #=> Integer
resp.trigger.actions[0].crawler_name #=> String
resp.trigger.predicate.logical #=> String, one of "AND", "ANY"
resp.trigger.predicate.conditions #=> Array
resp.trigger.predicate.conditions[0].logical_operator #=> String, one of "EQUALS"
resp.trigger.predicate.conditions[0].job_name #=> String
resp.trigger.predicate.conditions[0].state #=> String, one of "STARTING", "RUNNING", "STOPPING", "STOPPED", "SUCCEEDED", "FAILED", "TIMEOUT", "ERROR", "WAITING", "EXPIRED"
resp.trigger.predicate.conditions[0].crawler_name #=> String
resp.trigger.predicate.conditions[0].crawl_state #=> String, one of "RUNNING", "CANCELLING", "CANCELLED", "SUCCEEDED", "FAILED", "ERROR"
resp.trigger.event_batching_condition.batch_size #=> Integer
resp.trigger.event_batching_condition.batch_window #=> Integer

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :name (required, String)

    The name of the trigger to update.

  • :trigger_update (required, Types::TriggerUpdate)

    The new values with which to update the trigger.

Returns:

  • (Types::UpdateTriggerResponse)

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 19906

def update_trigger(params = {}, options = {})
  req = build_request(:update_trigger, params)
  req.send_request(options)
end
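
Since every field of :trigger_update is optional, a minimal sketch can update just the schedule of a hypothetical scheduled trigger:


require "aws-sdk-glue"

client = Aws::Glue::Client.new(region: "us-east-1")

# Move the trigger to 02:00 UTC daily.
resp = client.update_trigger({
  name: "nightly-etl-trigger",
  trigger_update: {
    schedule: "cron(0 2 * * ? *)",
  },
})

puts resp.trigger.state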

#update_usage_profile(params = {}) ⇒ Types::UpdateUsageProfileResponse

Updates a Glue usage profile.

Examples:

Request syntax with placeholder values


resp = client.update_usage_profile({
  name: "NameString", # required
  description: "DescriptionString",
  configuration: { # required
    session_configuration: {
      "NameString" => {
        default_value: "ConfigValueString",
        allowed_values: ["ConfigValueString"],
        min_value: "ConfigValueString",
        max_value: "ConfigValueString",
      },
    },
    job_configuration: {
      "NameString" => {
        default_value: "ConfigValueString",
        allowed_values: ["ConfigValueString"],
        min_value: "ConfigValueString",
        max_value: "ConfigValueString",
      },
    },
  },
})

Response structure


resp.name #=> String

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :name (required, String)

    The name of the usage profile.

  • :description (String)

    A description of the usage profile.

  • :configuration (required, Types::ProfileConfiguration)

    A ProfileConfiguration object specifying the job and session values for the profile.

Returns:

  • (Types::UpdateUsageProfileResponse)

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 19960

def update_usage_profile(params = {}, options = {})
  req = build_request(:update_usage_profile, params)
  req.send_request(options)
end
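
For example, a minimal sketch that bounds worker counts for jobs run under a hypothetical profile (the "NumberOfWorkers" parameter key is an illustrative assumption):


require "aws-sdk-glue"

client = Aws::Glue::Client.new(region: "us-east-1")

# Values are passed as strings, matching the ConfigValueString shape above.
resp = client.update_usage_profile({
  name: "dev-team-profile",
  description: "Bounded resources for development jobs",
  configuration: {
    job_configuration: {
      "NumberOfWorkers" => {
        default_value: "4",
        min_value: "2",
        max_value: "10",
      },
    },
  },
})

puts resp.name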

#update_user_defined_function(params = {}) ⇒ Struct

Updates an existing function definition in the Data Catalog.

Examples:

Request syntax with placeholder values


resp = client.update_user_defined_function({
  catalog_id: "CatalogIdString",
  database_name: "NameString", # required
  function_name: "NameString", # required
  function_input: { # required
    function_name: "NameString",
    class_name: "NameString",
    owner_name: "NameString",
    owner_type: "USER", # accepts USER, ROLE, GROUP
    resource_uris: [
      {
        resource_type: "JAR", # accepts JAR, FILE, ARCHIVE
        uri: "URI",
      },
    ],
  },
})

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :catalog_id (String)

    The ID of the Data Catalog where the function to be updated is located. If none is provided, the Amazon Web Services account ID is used by default.

  • :database_name (required, String)

    The name of the catalog database where the function to be updated is located.

  • :function_name (required, String)

    The name of the function.

  • :function_input (required, Types::UserDefinedFunctionInput)

    A FunctionInput object that redefines the function in the Data Catalog.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 20009

def update_user_defined_function(params = {}, options = {})
  req = build_request(:update_user_defined_function, params)
  req.send_request(options)
end
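
For example, a minimal sketch that repoints an existing Hive UDF at a new implementation class and JAR (all names and the S3 URI are assumptions):


require "aws-sdk-glue"

client = Aws::Glue::Client.new(region: "us-east-1")

# FunctionInput redefines the function, so restate every field you want to keep.
client.update_user_defined_function({
  database_name: "sales_db",
  function_name: "normalize_phone",
  function_input: {
    function_name: "normalize_phone",
    class_name: "com.example.udf.NormalizePhone",
    owner_name: "data-eng",
    owner_type: "GROUP",
    resource_uris: [
      { resource_type: "JAR", uri: "s3://example-bucket/udfs/normalize-phone-2.0.jar" },
    ],
  },
})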

#update_workflow(params = {}) ⇒ Types::UpdateWorkflowResponse

Updates an existing workflow.

Examples:

Request syntax with placeholder values


resp = client.update_workflow({
  name: "NameString", # required
  description: "GenericString",
  default_run_properties: {
    "IdString" => "GenericString",
  },
  max_concurrent_runs: 1,
})

Response structure


resp.name #=> String

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :name (required, String)

    Name of the workflow to be updated.

  • :description (String)

    The description of the workflow.

  • :default_run_properties (Hash<String,String>)

    A collection of properties to be used as part of each execution of the workflow.

    Run properties may be logged. Do not pass plaintext secrets as properties. Retrieve secrets from a Glue Connection, Amazon Web Services Secrets Manager, or another secret management mechanism if you intend to use them within the workflow run.

  • :max_concurrent_runs (Integer)

    You can use this parameter to prevent unwanted multiple updates to data, to control costs, or in some cases, to prevent exceeding the maximum number of concurrent runs of any of the component jobs. If you leave this parameter blank, there is no limit to the number of concurrent workflow runs.

Returns:

  • (Types::UpdateWorkflowResponse)

See Also:

[View source]

# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 20061

def update_workflow(params = {}, options = {})
  req = build_request(:update_workflow, params)
  req.send_request(options)
end
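
For example, a minimal sketch that serializes runs of a hypothetical workflow so they cannot overlap:


require "aws-sdk-glue"

client = Aws::Glue::Client.new(region: "us-east-1")

# With max_concurrent_runs set to 1, only one run may be active at a time.
resp = client.update_workflow({
  name: "nightly-refresh",
  max_concurrent_runs: 1,
})

puts resp.name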